[gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-11-21 13:12 Mike Pagano
From: Mike Pagano @ 2024-11-21 13:12 UTC
To: gentoo-commits
commit: 67d76cc6cc2bdc81a481ca7563853da3307b9331
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 21 13:11:30 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 21 13:11:30 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=67d76cc6
BMQ (BitMap Queue) Scheduler. (USE=experimental)
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch | 11188 +++++++++++++++++++++++++
5021_BMQ-and-PDS-gentoo-defaults.patch | 13 +
3 files changed, 11209 insertions(+)
diff --git a/0000_README b/0000_README
index 79d80432..2f20a332 100644
--- a/0000_README
+++ b/0000_README
@@ -86,3 +86,11 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc >= v11.1 optimizations for additional CPUs.
+
+Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
+From: https://gitlab.com/alfredchen/projectc
+Desc: BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
+From: https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc: Set defaults for BMQ. Defaults to n.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
new file mode 100644
index 00000000..9eb3139f
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
@@ -0,0 +1,11188 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index f8bc1630eba0..1b90768a0916 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1673,3 +1673,12 @@ is 10 seconds.
+
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines the type of yield that calls
++to sched_yield() will perform. For example, writing 0 to
++/proc/sys/kernel/yield_type disables yielding entirely.
++
++ 0 - No yield.
++ 1 - Requeue task. (default)
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++ BitMap queue CPU Scheduler
++ --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++ Overview
++ Task policy
++ Priority management
++ BitMap Queue
++ CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the previous Priority and Deadline based Skiplist multiple
++queue scheduler (PDS), and is inspired by the Zircon scheduler. Its goal
++is to keep the scheduler code simple while staying efficient and
++scalable for interactive workloads such as desktop use, movie playback
++and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks placed into that run
++queue.
++
++The run queue is a set of priority queues. Note that, in terms of data
++structure, these queues are FIFO queues for non-rt tasks and priority
++queues for rt tasks; see BitMap Queue below for details. BMQ is
++optimized for non-rt tasks, since most applications are non-rt tasks.
++Whether a queue is FIFO or priority, each queue is an ordered list of
++runnable tasks awaiting execution, and the data structures are the same.
++When it is time for a new task to run, the scheduler simply looks for
++the lowest numbered queue that contains a task and runs the first task
++from the head of that queue. The per-CPU idle task also lives in the run
++queue, so the scheduler can always find a task to run; a sketch of this
++lookup follows.
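++
++As a rough sketch (using the bitmap and list fields this patch adds, and
++ignoring the PDS index rotation; the helper name is illustrative, not the
++patch's), the lookup amounts to:
++
++	static struct task_struct *sketch_pick_first(struct sched_queue *q)
++	{
++		/* lowest numbered set bit == highest effective priority */
++		int prio = find_first_bit(q->bitmap, SCHED_QUEUE_BITS);
++
++		/* the per-CPU idle task is always queued, so a bit is always set */
++		return list_first_entry(&q->heads[prio], struct task_struct, sq_node);
++	}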
++
++Each task is assigned the same time slice (default 4 ms) when it is
++picked to start running, and is reinserted at the end of the appropriate
++priority queue when it uses up its whole time slice. When the scheduler
++selects a new task from the priority queue, it sets the CPU's preemption
++timer for the remainder of the previous time slice. When that timer
++fires, the scheduler stops execution of that task, selects another task
++and starts over again.
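++
++A minimal sketch of the expiry path, using names this patch introduces
++(sysctl_sched_base_slice, time_slice, requeue_task); the patch's real
++expiry handling differs in detail:
++
++	static void sketch_time_slice_expired(struct task_struct *p, struct rq *rq)
++	{
++		p->time_slice = sysctl_sched_base_slice;	/* refill: 4 ms, in ns */
++		requeue_task(p, rq);	/* back of its priority queue */
++	}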
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task
++policies, like the mainline CFS scheduler, but it is heavily optimized
++for non-rt tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the
++implementation details of each policy.
++
++DEADLINE
++ It is squashed into a priority 0 FIFO task.
++
++FIFO/RR
++ All RT tasks share one single priority queue in the BMQ run queue
++design. The complexity of the insert operation is O(n). BMQ is not
++designed for systems that mostly run rt policy tasks.
++
++NORMAL/BATCH/IDLE
++ BATCH and IDLE tasks are treated as the same policy. They compete for
++CPU with NORMAL policy tasks, but they just don't get boosted. To
++control the priority of NORMAL/BATCH/IDLE tasks, simply use the nice
++level.
++
++ISO
++ ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL
++policy task instead, as in the sketch below.
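++
++For example, a minimal userspace sketch of requesting the highest normal
++priority with the standard POSIX API (the program below is illustrative):
++
++	#include <stdio.h>
++	#include <sys/resource.h>
++
++	int main(void)
++	{
++		/* nice -20: highest priority for NORMAL policy tasks */
++		if (setpriority(PRIO_PROCESS, 0, -20) != 0)
++			perror("setpriority");
++		return 0;
++	}
++
++Note that this patch also raises the default RLIMIT_NICE to 30, which
++lets unprivileged tasks lower their nice level down to -10; nice -20
++still needs CAP_SYS_NICE or a raised rlimit.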
++
++Priority management
++-------------------
++
++RT tasks have priority from 0-99. For non-rt tasks, there are two
++different factors used to determine the effective priority of a task,
++the effective priority being what is used to determine which queue it
++will be in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level: [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority;
++it is modified in the following cases:
++
++*When a thread has used up its entire time slice, always deboost it by
++increasing its boost by one.
++*When a thread gives up CPU control (voluntarily or involuntarily) to
++reschedule, and its switch-in time (the time between its last switch-in
++and run) is below the threshold based on its priority boost, boost it by
++decreasing its boost by one, capped at 0 (it won't go negative).
++
++The intent of this system is to ensure that interactive threads are
++serviced quickly. These are usually the threads that interact directly
++with the user and cause user-perceivable latency. These threads usually
++do little work and spend most of their time blocked awaiting another
++user event, so they get the priority boost from unblocking, while
++background threads that do most of the processing receive the priority
++penalty for using their entire time slice.
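++
++A hedged sketch of the two boost rules above, using the boost_prio field
++this patch adds for BMQ (the patch's actual helpers differ in detail):
++
++	static void sketch_deboost(struct task_struct *p)	/* whole slice used */
++	{
++		if (p->boost_prio < MAX_PRIORITY_ADJ)
++			p->boost_prio++;	/* larger value = lower effective priority */
++	}
++
++	static void sketch_boost(struct task_struct *p)		/* short switch-in time */
++	{
++		if (p->boost_prio > 0)
++			p->boost_prio--;	/* smaller value = higher effective priority */
++	}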
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index b31283d81c52..e27c5c7b05f6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -516,7 +516,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ seq_puts(m, "0 0 0\n");
+ else
+ seq_printf(m, "%llu %llu %lu\n",
+- (unsigned long long)task->se.sum_exec_runtime,
++ (unsigned long long)tsk_seruntime(task),
+ (unsigned long long)task->sched_info.run_delay,
+ task->sched_info.pcount);
+
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ [RLIMIT_LOCKS] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ [RLIMIT_SIGPENDING] = { 0, 0 }, \
+ [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
+- [RLIMIT_NICE] = { 0, 0 }, \
++ [RLIMIT_NICE] = { 30, 30 }, \
+ [RLIMIT_RTPRIO] = { 0, 0 }, \
+ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index bb343136ddd0..212d9204e9aa 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -804,9 +804,13 @@ struct task_struct {
+ struct alloc_tag *alloc_tag;
+ #endif
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ int on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ struct __call_single_node wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ unsigned int wakee_flips;
+ unsigned long wakee_flip_decay_ts;
+ struct task_struct *last_wakee;
+@@ -820,6 +824,7 @@ struct task_struct {
+ */
+ int recent_used_cpu;
+ int wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ int on_rq;
+
+@@ -828,6 +833,19 @@ struct task_struct {
+ int normal_prio;
+ unsigned int rt_priority;
+
++#ifdef CONFIG_SCHED_ALT
++ u64 last_ran;
++ s64 time_slice;
++ struct list_head sq_node;
++#ifdef CONFIG_SCHED_BMQ
++ int boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++ u64 deadline;
++#endif /* CONFIG_SCHED_PDS */
++ /* sched_clock time spent running */
++ u64 sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ struct sched_entity se;
+ struct sched_rt_entity rt;
+ struct sched_dl_entity dl;
+@@ -842,6 +860,7 @@ struct task_struct {
+ unsigned long core_cookie;
+ unsigned int core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_CGROUP_SCHED
+ struct task_group *sched_task_group;
+@@ -1609,6 +1628,15 @@ struct task_struct {
+ */
+ };
+
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t) ((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t) (0UL)
++#else /* CFS */
++#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t) ((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE (TASK_REPORT + 1)
+ #define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
+
+@@ -2135,7 +2163,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+
+ static inline bool task_is_runnable(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return p->on_rq;
++#else
+ return p->on_rq && !p->se.sched_delayed;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ extern bool sched_task_on_rq(struct task_struct *p);
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 3a912ab42bb5..269a1513a153 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++ return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p) (0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p) ((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
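++/*
++ * Note on the PDS encoding above: prio occupies the top 8 bits, so a
++ * single u64 comparison of __tsk_deadline() orders tasks first by prio
++ * and then by deadline; see rt_waiter_node_less() in
++ * kernel/locking/rtmutex.c later in this patch.
++ */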
++
++#else
++
++#define __tsk_deadline(p) ((p)->dl.deadline)
++
+ /*
+ * SCHED_DEADLINE tasks has negative priorities, reflecting
+ * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline bool dl_task(struct task_struct *p)
+ {
+ return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index 6ab43b4f72f9..ef1cff556c5e 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -19,6 +19,28 @@
+ #define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2)
+
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ (12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ (0)
++#endif
++
++#define MIN_NORMAL_PRIO (128)
++#define NORMAL_PRIO_NUM (64)
++#define MAX_PRIO (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO (MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
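++
++/*
++ * With NICE_WIDTH == 40, the definitions above work out to MAX_PRIO == 192
++ * and DEFAULT_PRIO == 160 for BMQ (MAX_PRIORITY_ADJ == 12), or 172 for PDS
++ * (MAX_PRIORITY_ADJ == 0).
++ */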
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 4e3338103654..6dfef878fe3b 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -45,8 +45,10 @@ static inline bool rt_or_dl_task_policy(struct task_struct *tsk)
+
+ if (policy == SCHED_FIFO || policy == SCHED_RR)
+ return true;
++#ifndef CONFIG_SCHED_ALT
+ if (policy == SCHED_DEADLINE)
+ return true;
++#endif
+ return false;
+ }
+
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 4237daa5ac7a..3cebd93c49c8 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+
+ #endif /* !CONFIG_SMP */
+
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++ !defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index c521e1421ad4..131a599fcde2 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -652,6 +652,7 @@ config TASK_IO_ACCOUNTING
+
+ config PSI
+ bool "Pressure stall information tracking"
++ depends on !SCHED_ALT
+ select KERNFS
+ help
+ Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -817,6 +818,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ bool "Enable utilization clamping for RT/FAIR tasks"
+ depends on CPU_FREQ_GOV_SCHEDUTIL
++ depends on !SCHED_ALT
+ help
+ This feature enables the scheduler to track the clamped utilization
+ of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -863,6 +865,35 @@ config UCLAMP_BUCKETS_COUNT
+
+ If in doubt, use the default value.
+
++menuconfig SCHED_ALT
++ bool "Alternative CPU Schedulers"
++ default y
++ help
++ This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++ prompt "Alternative CPU Scheduler"
++ default SCHED_BMQ
++
++config SCHED_BMQ
++ bool "BMQ CPU scheduler"
++ help
++ The BitMap Queue CPU scheduler for excellent interactivity and
++ responsiveness on the desktop and solid scalability on normal
++ hardware and commodity servers.
++
++config SCHED_PDS
++ bool "PDS CPU scheduler"
++ help
++ The Priority and Deadline based Skip list multiple queue CPU
++ Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+
+ #
+@@ -928,6 +959,7 @@ config NUMA_BALANCING
+ depends on ARCH_SUPPORTS_NUMA_BALANCING
+ depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++ depends on !SCHED_ALT
+ help
+ This option adds support for automatic NUMA aware memory/task placement.
+ The mechanism is quite primitive and is based on migrating memory when
+@@ -1036,6 +1068,7 @@ menuconfig CGROUP_SCHED
+ tasks.
+
+ if CGROUP_SCHED
++if !SCHED_ALT
+ config GROUP_SCHED_WEIGHT
+ def_bool n
+
+@@ -1073,6 +1106,7 @@ config EXT_GROUP_SCHED
+ select GROUP_SCHED_WEIGHT
+ default y
+
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+
+ config SCHED_MM_CID
+@@ -1334,6 +1368,7 @@ config CHECKPOINT_RESTORE
+
+ config SCHED_AUTOGROUP
+ bool "Automatic process group scheduling"
++ depends on !SCHED_ALT
+ select CGROUPS
+ select CGROUP_SCHED
+ select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 136a8231355a..03770079619a 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -71,9 +71,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .stack = init_stack,
+ .usage = REFCOUNT_INIT(2),
+ .flags = PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++ .on_cpu = 1,
++ .prio = DEFAULT_PRIO,
++ .static_prio = DEFAULT_PRIO,
++ .normal_prio = DEFAULT_PRIO,
++#else
+ .prio = MAX_PRIO - 20,
+ .static_prio = MAX_PRIO - 20,
+ .normal_prio = MAX_PRIO - 20,
++#endif
+ .policy = SCHED_NORMAL,
+ .cpus_ptr = &init_task.cpus_mask,
+ .user_cpus_ptr = NULL,
+@@ -86,6 +93,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .restart_block = {
+ .fn = do_no_restart_syscall,
+ },
++#ifdef CONFIG_SCHED_ALT
++ .sq_node = LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++ .boost_prio = 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++ .deadline = 0,
++#endif
++ .time_slice = HZ,
++#else
+ .se = {
+ .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+ },
+@@ -93,6 +110,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .run_list = LIST_HEAD_INIT(init_task.rt.run_list),
+ .time_slice = RR_TIMESLICE,
+ },
++#endif
+ .tasks = LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ .pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index fe782cd77388..d27d2154d71a 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+
+ config SCHED_CORE
+ bool "Core Scheduling for SMT"
+- depends on SCHED_SMT
++ depends on SCHED_SMT && !SCHED_ALT
+ help
+ This option permits Core Scheduling, a means of coordinated task
+ selection across SMT siblings. When enabled -- see
+@@ -135,7 +135,7 @@ config SCHED_CORE
+
+ config SCHED_CLASS_EXT
+ bool "Extensible Scheduling Class"
+- depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF
++ depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF && !SCHED_ALT
+ select STACKTRACE if STACKTRACE_SUPPORT
+ help
+ This option enables a new scheduler class sched_ext (SCX), which
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index a4dd285cdf39..5b4ebe58d032 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -620,7 +620,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ return ret;
+ }
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * Helper routine for generate_sched_domains().
+ * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1031,7 +1031,7 @@ void rebuild_sched_domains_locked(void)
+ /* Have scheduler rebuild the domains */
+ partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -2926,12 +2926,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ goto out_unlock;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(task)) {
+ cs->nr_migrate_dl_tasks++;
+ cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ }
++#endif
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (!cs->nr_migrate_dl_tasks)
+ goto out_success;
+
+@@ -2952,6 +2955,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ }
+
+ out_success:
++#endif
+ /*
+ * Mark attach is in progress. This makes validate_change() fail
+ * changes which zero cpus/mems_allowed.
+@@ -2973,12 +2977,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ mutex_lock(&cpuset_mutex);
+ dec_attach_in_progress_locked(cs);
+
++#ifndef CONFIG_SCHED_ALT
+ if (cs->nr_migrate_dl_tasks) {
+ int cpu = cpumask_any(cs->effective_cpus);
+
+ dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ reset_migrate_dl_data(cs);
+ }
++#endif
+
+ mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index dead51de8eb5..8edef9676ab3 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ */
+ t1 = tsk->sched_info.pcount;
+ t2 = tsk->sched_info.run_delay;
+- t3 = tsk->se.sum_exec_runtime;
++ t3 = tsk_seruntime(tsk);
+
+ d->cpu_count += t1;
+
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 619f0014c33b..7dc53ddd45a8 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -175,7 +175,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->curr_target = next_thread(tsk);
+ }
+
+- add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++ add_device_randomness((const void*) &tsk_seruntime(tsk),
+ sizeof(unsigned long long));
+
+ /*
+@@ -196,7 +196,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->inblock += task_io_get_inblock(tsk);
+ sig->oublock += task_io_get_oublock(tsk);
+ task_io_accounting_add(&sig->ioac, &tsk->ioac);
+- sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++ sig->sum_sched_runtime += tsk_seruntime(tsk);
+ sig->nr_threads--;
+ __unhash_process(tsk, group_dead);
+ write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index ebebd0eec7f6..802112207855 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+
+ waiter->tree.prio = __waiter_prio(task);
+- waiter->tree.deadline = task->dl.deadline;
++ waiter->tree.deadline = __tsk_deadline(task);
+ }
+
+ /*
+@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ * Only use with rt_waiter_node_{less,equal}()
+ */
+ #define task_to_waiter_node(p) \
+- &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++ &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p) \
+ &(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline < right->deadline);
++#else
+ if (left->prio < right->prio)
+ return 1;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return dl_time_before(left->deadline, right->deadline);
++#endif
+
+ return 0;
++#endif
+ }
+
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline == right->deadline);
++#else
+ if (left->prio != right->prio)
+ return 0;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return left->deadline == right->deadline;
++#endif
+
+ return 1;
++#endif
+ }
+
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
+index 76d204b7d29c..de1a52f963e5 100644
+--- a/kernel/locking/ww_mutex.h
++++ b/kernel/locking/ww_mutex.h
+@@ -247,6 +247,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+
+ /* equal static prio */
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_prio(a_prio)) {
+ if (dl_time_before(b->task->dl.deadline,
+ a->task->dl.deadline))
+@@ -256,6 +257,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ b->task->dl.deadline))
+ return false;
+ }
++#endif
+
+ /* equal prio */
+ }
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..c59691742340
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7458 @@
++/*
++ * kernel/sched/alt_core.c
++ *
++ * Core alternative kernel scheduler code and related syscalls
++ *
++ * Copyright (C) 1991-2002 Linus Torvalds
++ *
++ * 2009-08-13 Brainfuck deadline scheduling policy by Con Kolivas deletes
++ * a whole lot of those previous things.
++ * 2017-09-06 Priority and Deadline based Skip list multiple queue kernel
++ * scheduler by Alfred Chen.
++ * 2019-02-20 BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x) (1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x) (0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.12-r0"
++
++#define STOP_PRIO (MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly = (4 << 20);
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++/* Reschedule if less than this much time is left (~100 μs, in ns) */
++#define RESCHED_NS (100 << 10)
++
++/**
++ * sched_yield_type - The type of yield that sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++ if (!p->user_cpus_ptr)
++ return cpu_possible_mask; /* &init_task.cpus_mask */
++ return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++ int i;
++
++ bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++ for(i = 0; i < SCHED_LEVELS; i++)
++ INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++ struct task_struct *idle)
++{
++ INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++ list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++ idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++ int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++ int last_prio = rq->prio;
++ int cpu, pr;
++
++ if (prio == last_prio)
++ return;
++
++ rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++ cpu = cpu_of(rq);
++ pr = atomic_read(&sched_prio_record);
++
++ if (prio < last_prio) {
++ if (IDLE_TASK_SCHED_PRIO == last_prio) {
++ rq->clear_idle_mask_func(cpu, sched_idle_mask);
++ last_prio -= 2;
++ }
++ CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++ return;
++ }
++ /* last_prio < prio */
++ if (IDLE_TASK_SCHED_PRIO == prio) {
++ rq->set_idle_mask_func(cpu, sched_idle_mask);
++ prio -= 2;
++ }
++ SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ * p->pi_lock
++ * rq->lock
++ * hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ * rq1->lock
++ * rq2->lock where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ * - sched_setaffinity()/
++ * set_cpus_allowed_ptr(): p->cpus_ptr, p->nr_cpus_allowed
++ * - set_user_nice(): p->se.load, p->*prio
++ * - __sched_setscheduler(): p->sched_class, p->policy, p->*prio,
++ * p->se.load, p->rt_priority,
++ * p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ * - sched_setnuma(): p->numa_preferred_nid
++ * - sched_move_task(): p->sched_task_group
++ * - uclamp_update_active() p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ * is changed locklessly using set_current_state(), __set_current_state() or
++ * set_special_state(), see their respective comments, or by
++ * try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ * concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ * is set by activate_task() and cleared by deactivate_task(), under
++ * rq->lock. Non-zero indicates the task is runnable, the special
++ * ON_RQ_MIGRATING state is used for migration without holding both
++ * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * Additionally it is possible to be ->on_rq but still be considered not
++ * runnable when p->se.sched_delayed is true. These tasks are on the runqueue
++ * but will be dequeued as soon as they get picked again. See the
++ * task_is_runnable() helper.
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ * is set by prepare_task() and cleared by finish_task() such that it will be
++ * set before p is scheduled-in and cleared after p is scheduled-out, both
++ * under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ * [ The astute reader will observe that it is possible for two tasks on one
++ * CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ * - Don't call set_task_cpu() on a blocked task:
++ *
++ * We don't care what CPU we're not running on, this simplifies hotplug,
++ * the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ * - for try_to_wake_up(), called under p->pi_lock:
++ *
++ * This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ * - for migration called under rq->lock:
++ * [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ * o move_queued_task()
++ * o detach_task()
++ *
++ * - for migration called under double_rq_lock():
++ *
++ * o __migrate_swap_task()
++ * o push_rt_task() / pull_rt_task()
++ * o push_dl_task() / pull_dl_task()
++ * o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock_irqsave(&rq->lock, *flags);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, *flags);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ raw_spin_lock_irqsave(&p->pi_lock, *flags);
++ if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++ *plock = &p->pi_lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++ }
++ }
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++ raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ lockdep_assert_held(&p->pi_lock);
++
++ for (;;) {
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++ return rq;
++ raw_spin_unlock(&rq->lock);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ for (;;) {
++ raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ /*
++ * move_queued_task() task_rq_lock()
++ *
++ * ACQUIRE (rq->lock)
++ * [S] ->on_rq = MIGRATING [L] rq = task_rq()
++ * WMB (__set_task_cpu()) ACQUIRE (rq->lock);
++ * [S] ->cpu = new_cpu [L] task_rq()
++ * [L] ->on_rq
++ * RELEASE (rq->lock)
++ *
++ * If we observe the old CPU in task_rq_lock(), the acquire of
++ * the old rq->lock will fully serialize against the stores.
++ *
++ * If we observe the new CPU in task_rq_lock(), the address
++ * dependency headed by '[L] rq = task_rq()' and the acquire
++ * will pair with the WMB to ensure we then also see migrating.
++ */
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++ rq_lock_irqsave(_T->lock, &_T->rf),
++ rq_unlock_irqrestore(_T->lock, &_T->rf),
++ struct rq_flags rf)
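++
++/*
++ * DEFINE_LOCK_GUARD_1() above hooks into <linux/cleanup.h>, so callers can
++ * take the lock scope-based, for example (illustrative):
++ *
++ *	guard(rq_lock_irqsave)(rq);
++ *
++ * which unlocks automatically when the scope is left.
++ */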
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++ raw_spinlock_t *lock;
++
++ /* Matches synchronize_rcu() in __sched_core_enable() */
++ preempt_disable();
++
++ for (;;) {
++ lock = __rq_lockp(rq);
++ raw_spin_lock_nested(lock, subclass);
++ if (likely(lock == __rq_lockp(rq))) {
++ /* preempt_count *MUST* be > 1 */
++ preempt_enable_no_resched();
++ return;
++ }
++ raw_spin_unlock(lock);
++ }
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++ raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++ s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++ /*
++ * Since irq_time is only updated on {soft,}irq_exit, we might run into
++ * this case when a previous update_rq_clock() happened inside a
++ * {soft,}IRQ region.
++ *
++ * When this happens, we stop ->clock_task and only update the
++ * prev_irq_time stamp to account for the part that fit, so that a next
++ * update will consume the rest. This ensures ->clock_task is
++ * monotonic.
++ *
++ * It does however cause some slight miss-attribution of {soft,}IRQ
++ * time, a more accurate solution would be to update the irq_time using
++ * the current rq->clock timestamp, except that would require using
++ * atomic ops.
++ */
++ if (irq_delta > delta)
++ irq_delta = delta;
++
++ rq->prev_irq_time += irq_delta;
++ delta -= irq_delta;
++ delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ if (static_key_false((&paravirt_steal_rq_enabled))) {
++ steal = paravirt_steal_clock(cpu_of(rq));
++ steal -= rq->prev_steal_time_rq;
++
++ if (unlikely(steal > delta))
++ steal = delta;
++
++ rq->prev_steal_time_rq += steal;
++ delta -= steal;
++ }
++#endif
++
++ rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ if ((irq_delta + steal))
++ update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++ s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++ if (unlikely(delta <= 0))
++ return;
++ rq->clock += delta;
++ sched_update_rq_clock(rq);
++ update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS (sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT (8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l) (((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t) ((t) >> 17)
++#define LOAD_HALF_BLOCK(t) ((t) >> 16)
++#define BLOCK_MASK(t) ((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b) (1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT LOAD_BLOCK_BIT(0)
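++
++/*
++ * Rough reading of the macros above, given that rq->clock is in
++ * nanoseconds: LOAD_BLOCK() buckets time into 2^17 ns (~131 us) blocks,
++ * and the 32-bit load_history acts as a sliding bitmap of whether this
++ * run queue was busy in each block. RQ_LOAD_HISTORY_TO_UTIL() then takes
++ * the 8 bits just below the most significant bit as a 0..255 utilization
++ * estimate, which rq_load_util() scales into the capacity range.
++ */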
++
++static inline void rq_load_update(struct rq *rq)
++{
++ u64 time = rq->clock;
++ u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++ u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++ u64 curr = !!rq->nr_running;
++
++ if (delta) {
++ rq->load_history = rq->load_history >> delta;
++
++ if (delta < RQ_UTIL_SHIFT) {
++ rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++ if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++ rq->load_history ^= LOAD_BLOCK_BIT(delta);
++ }
++
++ rq->load_block = BLOCK_MASK(time) * prev;
++ } else {
++ rq->load_block += (time - rq->load_stamp) * prev;
++ }
++ if (prev ^ curr)
++ rq->load_history ^= CURRENT_LOAD_BIT;
++ rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++ return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++ return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid. Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++ struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++ data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++ if (data)
++ data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++ int cpu = cpu_of(rq);
++
++ if (!tick_nohz_full_cpu(cpu))
++ return;
++
++ if (rq->nr_running < 2)
++ tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++ else
++ tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++ return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++ unsigned long ip = 0;
++ unsigned int state;
++
++ if (!p || p == current)
++ return 0;
++
++ /* Only get wchan if task is blocked and we can keep it that way. */
++ raw_spin_lock_irq(&p->pi_lock);
++ state = READ_ONCE(p->__state);
++ smp_rmb(); /* see try_to_wake_up() */
++ if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++ ip = __get_wchan(p);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func) \
++ sched_info_dequeue(rq, p); \
++ \
++ __list_del_entry(&p->sq_node); \
++ if (p->sq_node.prev == p->sq_node.next) { \
++ clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq), \
++ rq->queue.bitmap); \
++ func; \
++ }
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func) \
++ sched_info_enqueue(rq, p); \
++ { \
++ int idx, prio; \
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio); \
++ list_add_tail(&p->sq_node, &rq->queue.heads[idx]); \
++ if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) { \
++ set_bit(prio, rq->queue.bitmap); \
++ func; \
++ } \
++ }
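++
++/*
++ * Note on __SCHED_DEQUEUE_TASK() above: __list_del_entry() unlinks the
++ * node but leaves its own prev/next pointers intact, so prev == next
++ * means the only remaining element is the list head itself, i.e. the
++ * queue became empty. The head's offset from rq->queue.heads[0] then
++ * recovers the index whose bitmap bit must be cleared.
++ */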
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ --rq->nr_running;
++#ifdef CONFIG_SMP
++ if (1 == rq->nr_running)
++ cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ ++rq->nr_running;
++#ifdef CONFIG_SMP
++ if (2 == rq->nr_running)
++ cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++void requeue_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *node = &p->sq_node;
++ int deq_idx, idx, prio;
++
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++ /*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++ cpu_of(rq), task_cpu(p));
++#endif
++ if (list_is_last(node, &rq->queue.heads[idx]))
++ return;
++
++ __list_del_entry(node);
++ if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++ clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++ list_add_tail(node, &rq->queue.heads[idx]);
++ if (list_is_first(node, &rq->queue.heads[idx]))
++ set_bit(prio, rq->queue.bitmap);
++ update_sched_preempt_mask(rq);
++}
++
++/*
++ * try_cmpxchg based fetch_or() macro so it works for different integer types:
++ */
++#define fetch_or(ptr, mask) \
++ ({ \
++ typeof(ptr) _ptr = (ptr); \
++ typeof(mask) _mask = (mask); \
++ typeof(*_ptr) _val = *_ptr; \
++ \
++ do { \
++ } while (!try_cmpxchg(_ptr, &_val, _val | _mask)); \
++ _val; \
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++ do {
++ if (!(val & _TIF_POLLING_NRFLAG))
++ return false;
++ if (val & _TIF_NEED_RESCHED)
++ return true;
++ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++ return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++ set_tsk_need_resched(p);
++ return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++ return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ struct wake_q_node *node = &task->wake_q;
++
++ /*
++ * Atomically grab the task, if ->wake_q is !nil already it means
++ * it's already queued (either by us or someone else) and will get the
++ * wakeup due to that.
++ *
++ * In order to ensure that a pending wakeup will observe our pending
++ * state, even in the failed case, an explicit smp_mb() must be used.
++ */
++ smp_mb__before_atomic();
++ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++ return false;
++
++ /*
++ * The head is context local, there can be no concurrency.
++ */
++ *head->lastp = node;
++ head->lastp = &node->next;
++ return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ if (__wake_q_add(head, task))
++ get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++ if (!__wake_q_add(head, task))
++ put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++ struct wake_q_node *node = head->first;
++
++ while (node != WAKE_Q_TAIL) {
++ struct task_struct *task;
++
++ task = container_of(node, struct task_struct, wake_q);
++ /* task can safely be re-inserted now: */
++ node = node->next;
++ task->wake_q.next = NULL;
++
++ /*
++ * wake_up_process() executes a full barrier, which pairs with
++ * the queueing in wake_q_add() so as not to miss wakeups.
++ */
++ wake_up_process(task);
++ put_task_struct(task);
++ }
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void resched_curr(struct rq *rq)
++{
++ struct task_struct *curr = rq->curr;
++ int cpu;
++
++ lockdep_assert_held(&rq->lock);
++
++ if (test_tsk_need_resched(curr))
++ return;
++
++ cpu = cpu_of(rq);
++ if (cpu == smp_processor_id()) {
++ set_tsk_need_resched(curr);
++ set_preempt_need_resched();
++ return;
++ }
++
++ if (set_nr_and_not_polling(curr))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(cpu_rq(cpu));
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU. This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++ int i, cpu = smp_processor_id(), default_cpu = -1;
++ struct cpumask *mask;
++ const struct cpumask *hk_mask;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++ if (!idle_cpu(cpu))
++ return cpu;
++ default_cpu = cpu;
++ }
++
++ hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++ for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++ mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++ for_each_cpu_and(i, mask, hk_mask)
++ if (!idle_cpu(i))
++ return i;
++
++ if (default_cpu == -1)
++ default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++ cpu = default_cpu;
++
++ return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (cpu == smp_processor_id())
++ return;
++
++ /*
++ * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++ * part of the idle loop. This forces an exit from the idle loop
++ * and a round trip to schedule(). Now this could be optimized
++ * because a simple new idle loop iteration is enough to
++ * re-evaluate the next tick. Provided some re-ordering of tick
++ * nohz functions that would need to follow TIF_NR_POLLING
++ * clearing:
++ *
++ * - On most architectures, a simple fetch_or on ti::flags with a
++ * "0" value would be enough to know if an IPI needs to be sent.
++ *
++ * - x86 needs to perform a last need_resched() check between
++ * monitor and mwait which doesn't take timers into account.
++ * There a dedicated TIF_TIMER flag would be required to
++ * fetch_or here and be checked along with TIF_NEED_RESCHED
++ * before mwait().
++ *
++ * However, remote timer enqueue is not such a frequent event
++ * and testing of the above solutions didn't appear to report
++ * much benefits.
++ */
++ if (set_nr_and_not_polling(rq->idle))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++ /*
++ * We just need the target to call irq_exit() and re-evaluate
++ * the next tick. The nohz full kick at least implies that.
++ * If needed we can still optimize that later with an
++ * empty IRQ.
++ */
++ if (cpu_is_offline(cpu))
++ return true; /* Don't try to wake offline CPUs. */
++ if (tick_nohz_full_cpu(cpu)) {
++ if (cpu != smp_processor_id() ||
++ tick_nohz_tick_stopped())
++ tick_nohz_full_kick_cpu(cpu);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++ if (!wake_up_full_nohz_cpu(cpu))
++ wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++ struct rq *rq = info;
++ int cpu = cpu_of(rq);
++ unsigned int flags;
++
++ /*
++ * Release the rq::nohz_csd.
++ */
++ flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++ WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++ rq->idle_balance = idle_cpu(cpu);
++ if (rq->idle_balance && !need_resched()) {
++ rq->nohz_idle_balance = flags;
++ raise_softirq_irqoff(SCHED_SOFTIRQ);
++ }
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++ if (sched_rq_first_task(rq) != rq->curr)
++ resched_curr(rq);
++}
++
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++ if (READ_ONCE(p->__state) & state)
++ return 1;
++
++ if (READ_ONCE(p->saved_state) & state)
++ return -1;
++
++ return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++ /*
++ * Serialize against current_save_and_set_rtlock_wait_state(),
++ * current_restore_rtlock_saved_state(), and __refrigerator().
++ */
++ guard(raw_spinlock_irq)(&p->pi_lock);
++
++ return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero. When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count). If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++ unsigned long flags;
++ int running, queued, match;
++ unsigned long ncsw;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ for (;;) {
++ rq = task_rq(p);
++
++ /*
++ * If the task is actively running on another CPU
++ * still, just relax and busy-wait without holding
++ * any locks.
++ *
++ * NOTE! Since we don't hold any locks, it's not
++ * even sure that "rq" stays as the right runqueue!
++ * But we don't care, since this will return false
++ * if the runqueue has changed and p is actually now
++ * running somewhere else!
++ */
++ while (task_on_cpu(p)) {
++ if (!task_state_match(p, match_state))
++ return 0;
++ cpu_relax();
++ }
++
++ /*
++ * Ok, time to look more closely! We need the rq
++ * lock now, to be *sure*. If we're wrong, we'll
++ * just go back and repeat.
++ */
++ task_access_lock_irqsave(p, &lock, &flags);
++ trace_sched_wait_task(p);
++ running = task_on_cpu(p);
++ queued = p->on_rq;
++ ncsw = 0;
++ if ((match = __task_state_match(p, match_state))) {
++ /*
++ * When matching on p->saved_state, consider this task
++ * still queued so it will wait.
++ */
++ if (match < 0)
++ queued = 1;
++ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++ }
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ /*
++ * If it changed from the expected state, bail out now.
++ */
++ if (unlikely(!ncsw))
++ break;
++
++ /*
++ * Was it really running after all now that we
++ * checked with the proper locks actually held?
++ *
++ * Oops. Go back and try again..
++ */
++ if (unlikely(running)) {
++ cpu_relax();
++ continue;
++ }
++
++ /*
++ * It's not enough that it's not actively running,
++ * it must be off the runqueue _entirely_, and not
++ * preempted!
++ *
++ * So if it was still runnable (but just not actively
++ * running right now), it's preempted, and we should
++ * yield - it could be a while.
++ */
++ if (unlikely(queued)) {
++ ktime_t to = NSEC_PER_SEC / HZ;
++
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++ continue;
++ }
++
++ /*
++ * Ahh, all good. It wasn't running, and it wasn't
++ * runnable, which means that it will never become
++ * running in the future either. We're all done!
++ */
++ break;
++ }
++
++ return ncsw;
++}
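++
++/*
++ * Illustrative usage sketch: the returned switch count lets a caller
++ * verify that a task stayed off its CPU across an interval:
++ *
++ *	ncsw = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
++ *	... do work while @p is expected to remain blocked ...
++ *	if (ncsw && wait_task_inactive(p, TASK_UNINTERRUPTIBLE) == ncsw)
++ *		;	(then @p never scheduled in between)
++ */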
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++ if (hrtimer_active(&rq->hrtick_timer))
++ hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++ struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++ WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++ raw_spin_lock(&rq->lock);
++ resched_curr(rq);
++ raw_spin_unlock(&rq->lock);
++
++ return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ * - enabled by features
++ * - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ /**
++ * Alt schedule FW doesn't support sched_feat yet
++ if (!sched_feat(HRTICK))
++ return 0;
++ */
++ if (!cpu_active(cpu_of(rq)))
++ return 0;
++ return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ ktime_t time = rq->hrtick_time;
++
++ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++ struct rq *rq = arg;
++
++ raw_spin_lock(&rq->lock);
++ __hrtick_restart(rq);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ s64 delta;
++
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense and can cause timer DoS.
++ */
++ delta = max_t(s64, delay, 10000LL);
++
++ rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++ if (rq == this_rq())
++ __hrtick_restart(rq);
++ else
++ smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
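++
++/*
++ * Example: a requested slice of 3000ns is clamped to the 10000ns
++ * minimum, so hrtick_time becomes now + 10us; a 2ms request is
++ * programmed unchanged.
++ */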
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense. Rely on vruntime for fairness.
++ */
++ delay = max_t(u64, delay, 10000LL);
++ hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++ HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++ rq->hrtick_timer.function = hrtick;
++}
++#else /* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif /* CONFIG_SCHED_HRTICK */
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++ enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++ /*
++ * If in_iowait is set, the code below may not trigger any cpufreq
++ * utilization updates, so do it here explicitly with the IOWAIT flag
++ * passed.
++ */
++ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++static void block_task(struct rq *rq, struct task_struct *p)
++{
++ dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++ WRITE_ONCE(p->on_rq, 0);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible++;
++
++ if (p->in_iowait) {
++ atomic_inc(&rq->nr_iowait);
++ delayacct_blkio_start();
++ }
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++ /*
++ * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++ * successfully executed on another CPU. We must ensure that updates of
++ * per-task data have been completed by this moment.
++ */
++ smp_wmb();
++
++ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * We should never call set_task_cpu() on a blocked task,
++ * ttwu() will sort out the placement.
++ */
++ WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++ /*
++ * The caller should hold either p->pi_lock or rq->lock, when changing
++ * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++ *
++ * sched_move_task() holds both and thus holding either pins the cgroup,
++ * see task_group().
++ */
++ WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++ lockdep_is_held(&task_rq(p)->lock)));
++#endif
++ /*
++ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++ */
++ WARN_ON_ONCE(!cpu_online(new_cpu));
++
++ WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++ trace_sched_migrate_task(p, new_cpu);
++
++ if (task_cpu(p) != new_cpu) {
++ rseq_migrate(p);
++ sched_mm_cid_migrate_from(p);
++ perf_event_task_migrate(p);
++ }
++
++ __set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED 0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ /*
++ * This here violates the locking rules for affinity, since we're only
++ * supposed to change these variables while holding both rq->lock and
++ * p->pi_lock.
++ *
++ * HOWEVER, it magically works, because ttwu() is the only code that
++ * accesses these variables under p->pi_lock and only does so after
++ * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++ * before finish_task().
++ *
++ * XXX do further audits, this smells like something putrid.
++ */
++ SCHED_WARN_ON(!p->on_cpu);
++ p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++ int cpu;
++
++ if (p->migration_disabled) {
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Warn about overflow half-way through the range.
++ */
++ WARN_ON_ONCE((s16)p->migration_disabled < 0);
++#endif
++ p->migration_disabled++;
++ return;
++ }
++
++ guard(preempt)();
++ cpu = smp_processor_id();
++ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++ cpu_rq(cpu)->nr_pinned++;
++ p->migration_disabled = 1;
++ p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++ /*
++ * Violates locking rules! see comment in __do_set_cpus_ptr().
++ */
++ if (p->cpus_ptr == &p->cpus_mask)
++ __do_set_cpus_ptr(p, cpumask_of(cpu));
++ }
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++ struct task_struct *p = current;
++
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Check both overflow from migrate_disable() and superfluous
++ * migrate_enable().
++ */
++ if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
++ return;
++#endif
++
++ if (p->migration_disabled > 1) {
++ p->migration_disabled--;
++ return;
++ }
++
++ /*
++ * Ensure stop_task runs either before or after this, and that
++ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++ */
++ guard(preempt)();
++ /*
++ * Assumption: current should be running on allowed cpu
++ */
++ WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++ if (p->cpus_ptr != &p->cpus_mask)
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ /*
++ * Mustn't clear migration_disabled() until cpus_ptr points back at the
++ * regular cpus_mask, otherwise things that race (eg.
++ * select_fallback_rq) get confused.
++ */
++ barrier();
++ p->migration_disabled = 0;
++ this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
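++
++/*
++ * Typical pairing (illustrative sketch): keep a task on its current CPU
++ * through a section that may be preempted or even sleep, which
++ * preempt_disable() would forbid:
++ *
++ *	migrate_disable();
++ *	... access per-CPU state; preemption and sleeping are allowed ...
++ *	migrate_enable();
++ */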
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++ /* When not in the task's cpumask, no point in looking further. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /* migrate_disabled() must be allowed to finish. */
++ if (is_migration_disabled(p))
++ return cpu_online(cpu);
++
++ /* Non-kernel threads are not allowed during either online or offline. */
++ if (!(p->flags & PF_KTHREAD))
++ return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++ /* KTHREAD_IS_PER_CPU is always allowed. */
++ if (kthread_is_per_cpu(p))
++ return cpu_online(cpu);
++
++ /* Regular kernel threads don't get to stay during offline. */
++ if (cpu_dying(cpu))
++ return false;
++
++ /* But are allowed during online. */
++ return cpu_online(cpu);
++}
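++
++/*
++ * Example: while CPU3 is being unplugged (online but no longer active),
++ * a kthread marked KTHREAD_IS_PER_CPU may keep running there and a
++ * migrate_disabled() task may finish there, but a regular user task
++ * fails the cpu_active() test and must be placed elsewhere.
++ */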
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ * stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ * off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ * it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ * is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++ lockdep_assert_held(&rq->lock);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++ dequeue_task(p, rq, 0);
++ set_task_cpu(p, new_cpu);
++ raw_spin_unlock(&rq->lock);
++
++ rq = cpu_rq(new_cpu);
++
++ raw_spin_lock(&rq->lock);
++ WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++ sched_mm_cid_migrate_to(rq, p);
++
++ sched_task_sanity_check(p, rq);
++ enqueue_task(p, rq, 0);
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ wakeup_preempt(rq);
++
++ return rq;
++}
++
++struct migration_arg {
++ struct task_struct *task;
++ int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++ /* Affinity changed (again). */
++ if (!is_cpu_allowed(p, dest_cpu))
++ return rq;
++
++ return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a high-prio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++ struct migration_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++
++ /*
++ * The original target CPU might have gone down and we might
++ * be on another CPU but it doesn't matter.
++ */
++ local_irq_save(flags);
++ /*
++ * We need to explicitly wake pending tasks before running
++ * __migrate_task() such that we will not miss enforcing cpus_ptr
++ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++ */
++ flush_smp_call_function_queue();
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++ /*
++ * If task_rq(p) != rq, it cannot be migrated here, because we're
++ * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++ * we're holding p->pi_lock.
++ */
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ rq = __migrate_task(rq, p, arg->dest_cpu);
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
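++
++/*
++ * Sketch of the flow described above, as affine_move_task() below
++ * drives it:
++ *
++ *	struct migration_arg arg = { .task = p, .dest_cpu = dest_cpu };
++ *
++ *	stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ */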
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++ cpumask_copy(&p->cpus_mask, ctx->new_mask);
++ p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++ /*
++ * Swap in a new user_cpus_ptr if SCA_USER flag set
++ */
++ if (ctx->flags & SCA_USER)
++ swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++ lockdep_assert_held(&p->pi_lock);
++ set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(); in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .user_mask = NULL,
++ .flags = SCA_USER, /* clear the user requested mask */
++ };
++ union cpumask_rcuhead {
++ cpumask_t cpumask;
++ struct rcu_head rcu;
++ };
++
++ __do_set_cpus_allowed(p, &ac);
++
++ /*
++ * Because this is called with p->pi_lock held, it is not possible
++ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++ * kfree_rcu().
++ */
++ kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++ int node)
++{
++ cpumask_t *user_mask;
++ unsigned long flags;
++
++ /*
++ * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++ * may differ by now due to racing.
++ */
++ dst->user_cpus_ptr = NULL;
++
++ /*
++ * This check is racy and losing the race is a valid situation.
++ * It is not worth the extra overhead of taking the pi_lock on
++ * every fork/clone.
++ */
++ if (data_race(!src->user_cpus_ptr))
++ return 0;
++
++ user_mask = alloc_user_cpus_ptr(node);
++ if (!user_mask)
++ return -ENOMEM;
++
++ /*
++ * Use pi_lock to protect content of user_cpus_ptr
++ *
++ * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++ * do_set_cpus_allowed().
++ */
++ raw_spin_lock_irqsave(&src->pi_lock, flags);
++ if (src->user_cpus_ptr) {
++ swap(dst->user_cpus_ptr, user_mask);
++ cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++ }
++ raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++ if (unlikely(user_mask))
++ kfree(user_mask);
++
++ return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++ struct cpumask *user_mask = NULL;
++
++ swap(p->user_cpus_ptr, user_mask);
++
++ return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++ kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++ return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++ guard(preempt)();
++ int cpu = task_cpu(p);
++
++ if ((cpu != smp_processor_id()) && task_curr(p))
++ smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ * - cpu_active must be a subset of cpu_online
++ *
++ * - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ * see __set_cpus_allowed_ptr(). At this point the newly online
++ * CPU isn't yet part of the sched domains, and balancing will not
++ * see it.
++ *
++ * - on cpu-down we clear cpu_active() to mask the sched domains and
++ * avoid the load balancer to place new tasks on the to be removed
++ * CPU. Existing tasks will remain running there and will be taken
++ * off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++ int nid = cpu_to_node(cpu);
++ const struct cpumask *nodemask = NULL;
++ enum { cpuset, possible, fail } state = cpuset;
++ int dest_cpu;
++
++ /*
++ * If the node that the CPU is on has been offlined, cpu_to_node()
++ * will return -1. There is no CPU on the node, and we should
++ * select a CPU on another node.
++ */
++ if (nid != -1) {
++ nodemask = cpumask_of_node(nid);
++
++ /* Look for allowed, online CPU in same node. */
++ for_each_cpu(dest_cpu, nodemask) {
++ if (is_cpu_allowed(p, dest_cpu))
++ return dest_cpu;
++ }
++ }
++
++ for (;;) {
++ /* Any allowed, online CPU? */
++ for_each_cpu(dest_cpu, p->cpus_ptr) {
++ if (!is_cpu_allowed(p, dest_cpu))
++ continue;
++ goto out;
++ }
++
++ /* No more Mr. Nice Guy. */
++ switch (state) {
++ case cpuset:
++ if (cpuset_cpus_allowed_fallback(p)) {
++ state = possible;
++ break;
++ }
++ fallthrough;
++ case possible:
++ /*
++ * XXX When called from select_task_rq() we only
++ * hold p->pi_lock and again violate locking order.
++ *
++ * More yuck to audit.
++ */
++ do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++ state = fail;
++ break;
++
++ case fail:
++ BUG();
++ break;
++ }
++ }
++
++out:
++ if (state != cpuset) {
++ /*
++ * Don't tell them about moving exiting tasks or
++ * kernel threads (both mm NULL), since they never
++ * leave kernel.
++ */
++ if (p->mm && printk_ratelimit()) {
++ printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++ task_pid_nr(p), p->comm, cpu);
++ }
++ }
++
++ return dest_cpu;
++}
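++
++/*
++ * The fallback escalates in stages: first any allowed CPU after
++ * cpuset_cpus_allowed_fallback(), then the full
++ * task_cpu_possible_mask(), and finally BUG() if even that yields
++ * nothing. E.g. a task affined to a single CPU that goes offline is
++ * first retried within its cpuset before being allowed anywhere
++ * possible.
++ */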
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++ int cpu;
++
++ cpumask_copy(mask, sched_preempt_mask + ref);
++ if (prio < ref) {
++ for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++ if (prio < cpu_rq(cpu)->prio)
++ cpumask_set_cpu(cpu, mask);
++ }
++ } else {
++ for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++ if (prio >= cpu_rq(cpu)->prio)
++ cpumask_clear_cpu(cpu, mask);
++ }
++ }
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++ cpumask_t *mask = sched_preempt_mask + prio;
++ int pr = atomic_read(&sched_prio_record);
++
++ if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++ sched_preempt_mask_flush(mask, prio, pr);
++ atomic_set(&sched_prio_record, prio);
++ }
++
++ return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ cpumask_t allow_mask, mask;
++
++ if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++ return select_fallback_rq(task_cpu(p), p);
++
++ if (idle_select_func(&mask, &allow_mask, sched_idle_mask) ||
++ preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++ return best_mask_cpu(task_cpu(p), &mask);
++
++ return best_mask_cpu(task_cpu(p), &allow_mask);
++}
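++
++/*
++ * Selection order, as implemented above: prefer an allowed idle CPU
++ * (idle_select_func()); failing that, an allowed CPU currently running
++ * at lower priority than @p (preempt_mask_check()); otherwise fall back
++ * to the nearest allowed CPU via best_mask_cpu().
++ */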
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++ static struct lock_class_key stop_pi_lock;
++ struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++ struct sched_param start_param = { .sched_priority = 0 };
++ struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++ if (stop) {
++ /*
++ * Make it appear like a SCHED_FIFO task, it's something
++ * userspace knows about and won't get confused about.
++ *
++ * Also, it will make PI more or less work without too
++ * much confusion -- but then, stop work should not
++ * rely on PI working anyway.
++ */
++ sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++ /*
++ * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++ * adjust the effective priority of a task. As a result,
++ * rt_mutex_setprio() can trigger (RT) balancing operations,
++ * which can then trigger wakeups of the stop thread to push
++ * around the current task.
++ *
++ * The stop task itself will never be part of the PI-chain, it
++ * never blocks, therefore that ->pi_lock recursion is safe.
++ * Tell lockdep about this by placing the stop->pi_lock in its
++ * own class.
++ */
++ lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++ }
++
++ cpu_rq(cpu)->stop = stop;
++
++ if (old_stop) {
++ /*
++ * Reset it back to a normal scheduling policy so that
++ * it can die in pieces.
++ */
++ sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++ }
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++ raw_spinlock_t *lock, unsigned long irq_flags)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ /* Can the task run on the task's current CPU? If so, we're done */
++ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++ if (p->migration_disabled) {
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ p->migration_flags |= MDF_FORCE_ENABLED;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++ }
++
++ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++ struct migration_arg arg = { p, dest_cpu };
++
++ /* Need help from migration thread: drop lock and wait. */
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ return 0;
++ }
++ if (task_on_rq_queued(p)) {
++ /*
++ * OK, since we're going to drop the lock immediately
++ * afterwards anyway.
++ */
++ update_rq_clock(rq);
++ rq = move_queued_task(rq, p, dest_cpu);
++ lock = &rq->lock;
++ }
++ }
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++ struct affinity_context *ctx,
++ struct rq *rq,
++ raw_spinlock_t *lock,
++ unsigned long irq_flags)
++{
++ const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++ const struct cpumask *cpu_valid_mask = cpu_active_mask;
++ bool kthread = p->flags & PF_KTHREAD;
++ int dest_cpu;
++ int ret = 0;
++
++ if (kthread || is_migration_disabled(p)) {
++ /*
++ * Kernel threads are allowed on online && !active CPUs,
++ * however, during cpu-hot-unplug, even these might get pushed
++ * away if not KTHREAD_IS_PER_CPU.
++ *
++ * Specifically, migration_disabled() tasks must not fail the
++ * cpumask_any_and_distribute() pick below, esp. so on
++ * SCA_MIGRATE_ENABLE, otherwise we'll not call
++ * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++ */
++ cpu_valid_mask = cpu_online_mask;
++ }
++
++ if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ /*
++ * Must re-check here, to close a race against __kthread_bind(),
++ * sched_setaffinity() is not guaranteed to observe the flag.
++ */
++ if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++ goto out;
++
++ dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++ if (dest_cpu >= nr_cpu_ids) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __do_set_cpus_allowed(p, ctx);
++
++ return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++ return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ unsigned long irq_flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++ * flags are set.
++ */
++ if (p->user_cpus_ptr &&
++ !(ctx->flags & SCA_USER) &&
++ cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++ ctx->new_mask = rq->scratch_mask;
++
++ return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++
++ return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++ struct cpumask *new_mask,
++ const struct cpumask *subset_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++ unsigned long irq_flags;
++ raw_spinlock_t *lock;
++ struct rq *rq;
++ int err;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++
++ if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++ err = -EINVAL;
++ goto err_unlock;
++ }
++
++ return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ cpumask_var_t new_mask;
++ const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++ alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++ /*
++ * __migrate_task() can fail silently in the face of concurrent
++ * offlining of the chosen destination CPU, so take the hotplug
++ * lock to ensure that the migration succeeds.
++ */
++ cpus_read_lock();
++ if (!cpumask_available(new_mask))
++ goto out_set_mask;
++
++ if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++ goto out_free_mask;
++
++ /*
++ * We failed to find a valid subset of the affinity mask for the
++ * task, so override it based on its cpuset hierarchy.
++ */
++ cpuset_cpus_allowed(p, new_mask);
++ override_mask = new_mask;
++
++out_set_mask:
++ if (printk_ratelimit()) {
++ printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++ task_pid_nr(p), p->comm,
++ cpumask_pr_args(override_mask));
++ }
++
++ WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++ cpus_read_unlock();
++ free_cpumask_var(new_mask);
++}
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ struct affinity_context ac = {
++ .new_mask = task_user_cpus(p),
++ .flags = 0,
++ };
++ int ret;
++
++ /*
++ * Try to restore the old affinity mask with __sched_setaffinity().
++ * Cpuset masking will be done there too.
++ */
++ ret = __sched_setaffinity(p, &ac);
++ WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ return 0;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq;
++
++ if (!schedstat_enabled())
++ return;
++
++ rq = this_rq();
++
++#ifdef CONFIG_SMP
++ if (cpu == rq->cpu) {
++ __schedstat_inc(rq->ttwu_local);
++ __schedstat_inc(p->stats.nr_wakeups_local);
++ } else {
++ /** Alt schedule FW ToDo:
++ * How to do ttwu_wake_remote
++ */
++ }
++#endif /* CONFIG_SMP */
++
++ __schedstat_inc(rq->ttwu_count);
++ __schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible--;
++
++ if (
++#ifdef CONFIG_SMP
++ !(wake_flags & WF_MIGRATED) &&
++#endif
++ p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ activate_task(p, rq);
++ wakeup_preempt(rq);
++
++ ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ * for (;;) {
++ * set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ * if (CONDITION)
++ * break;
++ *
++ * schedule();
++ * }
++ * __set_current_state(TASK_RUNNING);
++ *
++ * A wakeup can race with the above between set_current_state() and
++ * schedule(). In that window @p is still runnable, so all that needs
++ * doing is change p->state back to TASK_RUNNING in an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ * %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ int ret = 0;
++
++ rq = __task_access_lock(p, &lock);
++ if (task_on_rq_queued(p)) {
++ if (!task_on_cpu(p)) {
++ /*
++ * When on_rq && !on_cpu the task is preempted, see if
++ * it should preempt the task that is current now.
++ */
++ update_rq_clock(rq);
++ wakeup_preempt(rq);
++ }
++ ttwu_do_wakeup(p);
++ ret = 1;
++ }
++ __task_access_unlock(p, lock);
++
++ return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++ struct llist_node *llist = arg;
++ struct rq *rq = this_rq();
++ struct task_struct *p, *t;
++ struct rq_flags rf;
++
++ if (!llist)
++ return;
++
++ rq_lock_irqsave(rq, &rf);
++ update_rq_clock(rq);
++
++ llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++ if (WARN_ON_ONCE(p->on_cpu))
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++ set_task_cpu(p, cpu_of(rq));
++
++ ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++ }
++
++ /*
++ * Must be after enqueueing at least one task such that
++ * idle_cpu() does not observe a false-negative -- if it does,
++ * it is possible for select_idle_siblings() to stack a number
++ * of tasks on this CPU during that window.
++ *
++ * It is OK to clear ttwu_pending when another task is pending;
++ * we will receive an IPI after local IRQs are enabled and then
++ * enqueue it. Since nr_running > 0 by then, idle_cpu() will
++ * always get the correct result.
++ */
++ WRITE_ONCE(rq->ttwu_pending, 0);
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++ if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++ trace_sched_wake_idle_without_ipi(cpu);
++ return false;
++ }
++
++ return true;
++}
++
++/*
++ * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++ WRITE_ONCE(rq->ttwu_pending, 1);
++ __smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++ /*
++ * Do not complicate things with the async wake_list while the CPU is
++ * in hotplug state.
++ */
++ if (!cpu_active(cpu))
++ return false;
++
++ /* Ensure the task will still be allowed to run on the CPU. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /*
++ * If the CPU does not share cache, then queue the task on the
++ * remote rqs wakelist to avoid accessing remote data.
++ */
++ if (!cpus_share_cache(smp_processor_id(), cpu))
++ return true;
++
++ if (cpu == smp_processor_id())
++ return false;
++
++ /*
++ * If the wakee cpu is idle, or the task is descheduling and the
++ * only running task on the CPU, then use the wakelist to offload
++ * the task activation to the idle (or soon-to-be-idle) CPU as
++ * the current CPU is likely busy. nr_running is checked to
++ * avoid unnecessary task stacking.
++ *
++ * Note that we can only get here with (wakee) p->on_rq=0,
++ * p->on_cpu can be whatever, we've done the dequeue, so
++ * the wakee has been accounted out of ->nr_running.
++ */
++ if (!cpu_rq(cpu)->nr_running)
++ return true;
++
++ return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++ __ttwu_queue_wakelist(p, cpu, wake_flags);
++ return true;
++ }
++
++ return false;
++}
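++
++/*
++ * Example: with ALT_SCHED_TTWU_QUEUE enabled, waking a task onto a
++ * remote CPU that shares no cache with the waker, or onto an idle
++ * remote CPU (nr_running == 0), goes through the wake_list + IPI path
++ * above; a wakeup targeting the waking CPU itself is always activated
++ * directly.
++ */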
++
++void wake_up_if_idle(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ guard(rcu)();
++ if (is_idle_task(rcu_dereference(rq->curr))) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ if (is_idle_task(rq->curr))
++ resched_curr(rq);
++ }
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++ return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++ if (!sched_asym_cpucap_active())
++ return true;
++
++ if (this_cpu == that_cpu)
++ return true;
++
++ return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++ if (this_cpu == that_cpu)
++ return true;
++
++ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (ttwu_queue_wakelist(p, cpu, wake_flags))
++ return;
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++ ttwu_do_activate(rq, p, wake_flags);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ * The related locking code always holds p::pi_lock when updating
++ * p::saved_state, which means the code is fully serialized in both cases.
++ *
++ * For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ * No other bits set. This allows us to distinguish all wakeup scenarios.
++ *
++ * For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ * allows us to prevent early wakeup of tasks before they can be run on
++ * asymmetric ISA architectures (eg ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++ int match;
++
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++ state != TASK_RTLOCK_WAIT);
++ }
++
++ *success = !!(match = __task_state_match(p, state));
++
++ /*
++ * Saved state preserves the task state across blocking on
++ * an RT lock or TASK_FREEZABLE tasks. If the state matches,
++ * set p::saved_state to TASK_RUNNING, but do not wake the task
++ * because it waits for a lock wakeup or __thaw_task(). Also
++ * indicate success because from the regular waker's point of
++ * view this has succeeded.
++ *
++ * After acquiring the lock the task will restore p::__state
++ * from p::saved_state which ensures that the regular
++ * wakeup is not lost. The restore will also set
++ * p::saved_state to TASK_RUNNING so any further tests will
++ * not result in false positives vs. @success
++ */
++ if (match < 0)
++ p->saved_state = TASK_RUNNING;
++
++ return match > 0;
++}
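++
++/*
++ * Example: a TASK_UNINTERRUPTIBLE sleeper that blocked on an RT lock
++ * carries TASK_RTLOCK_WAIT in p->__state and the original state in
++ * p->saved_state. wake_up_state(p, TASK_UNINTERRUPTIBLE) then matches
++ * via saved_state (match < 0): it reports success and sets saved_state
++ * to TASK_RUNNING, but the task stays blocked until the lock wakeup.
++ */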
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ * MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ * A) UNLOCK of the rq(c0)->lock scheduling out task t
++ * B) migration for t is required to synchronize *both* rq(c0)->lock and
++ * rq(c1)->lock (if not at the same time, then in that order).
++ * C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ * CPU0 CPU1 CPU2
++ *
++ * LOCK rq(0)->lock
++ * sched-out X
++ * sched-in Y
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(0)->lock // orders against CPU0
++ * dequeue X
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(1)->lock
++ * enqueue X
++ * UNLOCK rq(1)->lock
++ *
++ * LOCK rq(1)->lock // orders against CPU2
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(1)->lock
++ *
++ *
++ * BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ * 1) smp_store_release(X->on_cpu, 0) -- finish_task()
++ * 2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ * LOCK rq(0)->lock LOCK X->pi_lock
++ * dequeue X
++ * sched-out X
++ * smp_store_release(X->on_cpu, 0);
++ *
++ * smp_cond_load_acquire(&X->on_cpu, !VAL);
++ * X->state = WAKING
++ * set_task_cpu(X,2)
++ *
++ * LOCK rq(2)->lock
++ * enqueue X
++ * X->state = RUNNING
++ * UNLOCK rq(2)->lock
++ *
++ * LOCK rq(2)->lock // orders against CPU1
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(2)->lock
++ *
++ * UNLOCK X->pi_lock
++ * UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ * If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ * - p->sched_class
++ * - p->cpus_ptr
++ * - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ * - ttwu_runnable() -- old rq, unavoidable, see comment there;
++ * - ttwu_queue() -- new rq, for enqueue of the task;
++ * - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ * %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++ guard(preempt)();
++ int cpu, success = 0;
++
++ if (p == current) {
++ /*
++ * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++ * == smp_processor_id()'. Together this means we can special
++ * case the whole 'p->on_rq && ttwu_runnable()' case below
++ * without taking any locks.
++ *
++ * In particular:
++ * - we rely on Program-Order guarantees for all the ordering,
++ * - we're serialized against set_special_state() by virtue of
++ * it disabling IRQs (this allows not taking ->pi_lock).
++ */
++ if (!ttwu_state_match(p, state, &success))
++ goto out;
++
++ trace_sched_waking(p);
++ ttwu_do_wakeup(p);
++ goto out;
++ }
++
++ /*
++ * If we are going to wake up a thread waiting for CONDITION we
++ * need to ensure that CONDITION=1 done by the caller can not be
++ * reordered with p->state check below. This pairs with smp_store_mb()
++ * in set_current_state() that the waiting thread does.
++ */
++ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++ smp_mb__after_spinlock();
++ if (!ttwu_state_match(p, state, &success))
++ break;
++
++ trace_sched_waking(p);
++
++ /*
++ * Ensure we load p->on_rq _after_ p->state, otherwise it would
++ * be possible to, falsely, observe p->on_rq == 0 and get stuck
++ * in smp_cond_load_acquire() below.
++ *
++ * sched_ttwu_pending() try_to_wake_up()
++ * STORE p->on_rq = 1 LOAD p->state
++ * UNLOCK rq->lock
++ *
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * UNLOCK rq->lock
++ *
++ * [task p]
++ * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * A similar smp_rmb() lives in __task_needs_rq_lock().
++ */
++ smp_rmb();
++ if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++ break;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++ * possible to, falsely, observe p->on_cpu == 0.
++ *
++ * One must be running (->on_cpu == 1) in order to remove oneself
++ * from the runqueue.
++ *
++ * __schedule() (switch to task 'p') try_to_wake_up()
++ * STORE p->on_cpu = 1 LOAD p->on_rq
++ * UNLOCK rq->lock
++ *
++ * __schedule() (put 'p' to sleep)
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * STORE p->on_rq = 0 LOAD p->on_cpu
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++ * schedule()'s deactivate_task() has 'happened' and p will no longer
++ * care about its own p->state. See the comment in __schedule().
++ */
++ smp_acquire__after_ctrl_dep();
++
++ /*
++ * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++ * == 0), which means we need to do an enqueue, change p->state to
++ * TASK_WAKING such that we can unlock p->pi_lock before doing the
++ * enqueue, such as ttwu_queue_wakelist().
++ */
++ WRITE_ONCE(p->__state, TASK_WAKING);
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, consider queueing p on the remote CPU's wake_list
++ * which potentially sends an IPI instead of spinning on p->on_cpu to
++ * let the waker make forward progress. This is safe because IRQs are
++ * disabled and the IPI will deliver after on_cpu is cleared.
++ *
++ * Ensure we load task_cpu(p) after p->on_cpu:
++ *
++ * set_task_cpu(p, cpu);
++ * STORE p->cpu = @cpu
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock
++ * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu)
++ * STORE p->on_cpu = 1 LOAD p->cpu
++ *
++ * to ensure we observe the correct CPU on which the task is currently
++ * scheduling.
++ */
++ if (smp_load_acquire(&p->on_cpu) &&
++ ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++ break;
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, wait until it's done referencing the task.
++ *
++ * Pairs with the smp_store_release() in finish_task().
++ *
++ * This ensures that tasks getting woken will be fully ordered against
++ * their previous state and preserve Program Order.
++ */
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ sched_task_ttwu(p);
++
++ if ((wake_flags & WF_CURRENT_CPU) &&
++ cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++ cpu = smp_processor_id();
++ else
++ cpu = select_task_rq(p);
++
++ if (cpu != task_cpu(p)) {
++ if (p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ wake_flags |= WF_MIGRATED;
++ set_task_cpu(p, cpu);
++ }
++#else
++ sched_task_ttwu(p);
++
++ cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++ ttwu_queue(p, cpu, wake_flags);
++ }
++out:
++ if (success)
++ ttwu_stat(p, task_cpu(p), wake_flags);
++
++ return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++ * the task is blocked. Make sure to check @state since ttwu() can drop
++ * locks at the end, see ttwu_queue_wakelist().
++ */
++ if (state == TASK_RUNNING || state == TASK_WAKING)
++ return true;
++
++ /*
++ * Ensure we load p->on_rq after p->__state, otherwise it would be
++ * possible to, falsely, observe p->on_rq == 0.
++ *
++ * See try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ if (p->on_rq)
++ return true;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure the task has finished __schedule() and will not be referenced
++ * anymore. Again, see try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++ return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it. This function can use task_is_runnable() and
++ * task_curr() to work out what the state is, if required. Given that @func
++ * can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ * Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++ struct rq *rq = NULL;
++ struct rq_flags rf;
++ int ret;
++
++ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++ if (__task_needs_rq_lock(p))
++ rq = __task_rq_lock(p, &rf);
++
++ /*
++ * At this point the task is pinned; either:
++ * - blocked and we're holding off wakeups (pi->lock)
++ * - woken, and we're holding off enqueue (rq->lock)
++ * - queued, and we're holding off schedule (rq->lock)
++ * - running, and we're holding off de-schedule (rq->lock)
++ *
++ * The called function (@func) can use: task_curr(), p->on_rq and
++ * p->__state to differentiate between these states.
++ */
++ ret = func(p, arg);
++
++ if (rq)
++ __task_rq_unlock(rq, &rf);
++
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++ return ret;
++}
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU. If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee. Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++ struct task_struct *t;
++
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ t = rcu_dereference(cpu_curr(cpu));
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++ return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++ return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ p->on_rq = 0;
++ p->on_cpu = 0;
++ p->utime = 0;
++ p->stime = 0;
++ p->sched_time = 0;
++
++#ifdef CONFIG_SCHEDSTATS
++ /* Even if schedstat is disabled, there should not be garbage */
++ memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++ INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++ p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++ p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++ init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ __sched_fork(clone_flags, p);
++ /*
++ * We mark the process as NEW here. This guarantees that
++ * nobody will actually run it, and a signal or other external
++ * event cannot wake it up and insert it on the runqueue either.
++ */
++ p->__state = TASK_NEW;
++
++ /*
++ * Make sure we do not leak PI boosting priority to the child.
++ */
++ p->prio = current->normal_prio;
++
++ /*
++ * Revert to default priority/policy on fork if requested.
++ */
++ if (unlikely(p->sched_reset_on_fork)) {
++ if (task_has_rt_policy(p)) {
++ p->policy = SCHED_NORMAL;
++ p->static_prio = NICE_TO_PRIO(0);
++ p->rt_priority = 0;
++ } else if (PRIO_TO_NICE(p->static_prio) < 0)
++ p->static_prio = NICE_TO_PRIO(0);
++
++ p->prio = p->normal_prio = p->static_prio;
++
++ /*
++ * We don't need the reset flag anymore after the fork. It has
++ * fulfilled its duty:
++ */
++ p->sched_reset_on_fork = 0;
++ }
++
++#ifdef CONFIG_SCHED_INFO
++ if (unlikely(sched_info_on()))
++ memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++ init_task_preempt_count(p);
++
++ return 0;
++}
++
++int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ /*
++ * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++ * required yet, but lockdep gets upset if rules are violated.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ /*
++ * Share the timeslice between parent and child, thus the
++ * total amount of pending timeslices in the system doesn't change,
++ * resulting in more scheduling fairness.
++ */
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ rq->curr->time_slice /= 2;
++ p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++ hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++ if (p->time_slice < RESCHED_NS) {
++ p->time_slice = sysctl_sched_base_slice;
++ resched_curr(rq);
++ }
++ sched_task_fork(p, rq);
++ raw_spin_unlock(&rq->lock);
++
++ rseq_migrate(p);
++ /*
++ * We're setting the CPU for the first time, we don't migrate,
++ * so use __set_task_cpu().
++ */
++ __set_task_cpu(p, smp_processor_id());
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++void sched_cancel_fork(struct task_struct *p)
++{
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++ if (enabled)
++ static_branch_enable(&sched_schedstats);
++ else
++ static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++ if (!schedstat_enabled()) {
++ pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++ static_branch_enable(&sched_schedstats);
++ }
++}
++
++static int __init setup_schedstats(char *str)
++{
++ int ret = 0;
++ if (!str)
++ goto out;
++
++ if (!strcmp(str, "enable")) {
++ set_schedstats(true);
++ ret = 1;
++ } else if (!strcmp(str, "disable")) {
++ set_schedstats(false);
++ ret = 1;
++ }
++out:
++ if (!ret)
++ pr_warn("Unable to parse schedstats=\n");
++
++ return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
++ size_t *lenp, loff_t *ppos)
++{
++ struct ctl_table t;
++ int err;
++ int state = static_branch_likely(&sched_schedstats);
++
++ if (write && !capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ t = *table;
++ t.data = &state;
++ err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++ if (err < 0)
++ return err;
++ if (write)
++ set_schedstats(state);
++ return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++ {
++ .procname = "sched_schedstats",
++ .data = NULL,
++ .maxlen = sizeof(unsigned int),
++ .mode = 0644,
++ .proc_handler = sysctl_schedstats,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_ONE,
++ },
++};
++static int __init sched_core_sysctl_init(void)
++{
++ register_sysctl_init("kernel", sched_core_sysctls);
++ return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++ rseq_migrate(p);
++ /*
++ * Fork balancing, do it here and not earlier because:
++ * - cpus_ptr can change in the fork path
++ * - any previously selected CPU might disappear through hotplug
++ *
++ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++ * as we're not fully set-up yet.
++ */
++ __set_task_cpu(p, cpu_of(rq));
++#endif
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ activate_task(p, rq);
++ trace_sched_wakeup_new(p);
++ wakeup_preempt(rq);
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++ static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++ static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++ if (!static_branch_unlikely(&preempt_notifier_key))
++ WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++ hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++ hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++ /*
++ * Claim the task as running; we do this before switching to it
++ * such that any running task will have this set.
++ *
++ * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++ * its ordering comment.
++ */
++ WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++ /*
++ * This must be the very last reference to @prev from this CPU. After
++ * p->on_cpu is cleared, the task can be moved to a different CPU. We
++ * must ensure this doesn't happen until the switch is completely
++ * finished.
++ *
++ * In particular, the load of prev->state in finish_task_switch() must
++ * happen before this.
++ *
++ * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++ */
++ smp_store_release(&prev->on_cpu, 0);
++#else
++ prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ void (*func)(struct rq *rq);
++ struct balance_callback *next;
++
++ lockdep_assert_held(&rq->lock);
++
++ while (head) {
++ func = (void (*)(struct rq *))head->func;
++ next = head->next;
++ head->next = NULL;
++ head = next;
++
++ func(rq);
++ }
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++ .next = NULL,
++ .func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++ struct balance_callback *head = rq->balance_callback;
++
++ if (likely(!head))
++ return NULL;
++
++ lockdep_assert_rq_held(rq);
++ /*
++ * Must not take balance_push_callback off the list when
++ * splice_balance_callbacks() and balance_callbacks() are not
++ * in the same rq->lock section.
++ *
++ * In that case it would be possible for __schedule() to interleave
++ * and observe the list empty.
++ */
++ if (split && head == &balance_push_callback)
++ head = NULL;
++ else
++ rq->balance_callback = NULL;
++
++ return head;
++}
++
++struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++ do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ unsigned long flags;
++
++ if (unlikely(head)) {
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ do_balance_callbacks(rq, head);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++ }
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++ /*
++ * The runqueue lock will be released by the next
++ * task (which is an invalid locking op but in the case
++ * of the scheduler it's an obvious special-case), so we
++ * do an early lockdep release here:
++ */
++ spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++ /* this is a valid case when another task releases the spinlock */
++ rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++ /*
++ * If we are tracking spinlock dependencies then we have to
++ * fix up the runqueue lock - which gets 'carried over' from
++ * prev into current:
++ */
++ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @prev: the task we are switching away from
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ kcov_prepare_switch(prev);
++ sched_info_switch(rq, prev, next);
++ perf_event_task_sched_out(prev, next);
++ rseq_preempt(prev);
++ fire_sched_out_preempt_notifiers(prev, next);
++ kmap_local_sched_out();
++ prepare_task(next);
++ prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock. (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. 'prev == current' is still correct, but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ struct rq *rq = this_rq();
++ struct mm_struct *mm = rq->prev_mm;
++ unsigned int prev_state;
++
++ /*
++ * The previous task will have left us with a preempt_count of 2
++ * because it left us after:
++ *
++ * schedule()
++ * preempt_disable(); // 1
++ * __schedule()
++ * raw_spin_lock_irq(&rq->lock) // 2
++ *
++ * Also, see FORK_PREEMPT_COUNT.
++ */
++ if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++ "corrupted preempt_count: %s/%d/0x%x\n",
++ current->comm, current->pid, preempt_count()))
++ preempt_count_set(FORK_PREEMPT_COUNT);
++
++ rq->prev_mm = NULL;
++
++ /*
++ * A task struct has one reference for the use as "current".
++ * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++ * schedule one last time. The schedule call will never return, and
++ * the scheduled task must drop that reference.
++ *
++ * We must observe prev->state before clearing prev->on_cpu (in
++ * finish_task), otherwise a concurrent wakeup can get prev
++ * running on another CPU and we could race with its RUNNING -> DEAD
++ * transition, resulting in a double drop.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ vtime_task_switch(prev);
++ perf_event_task_sched_in(prev, current);
++ finish_task(prev);
++ tick_nohz_task_switch();
++ finish_lock_switch(rq);
++ finish_arch_post_lock_switch();
++ kcov_finish_switch(current);
++ /*
++ * kmap_local_sched_out() is invoked with rq::lock held and
++ * interrupts disabled. There is no requirement for that, but the
++ * sched out code does not have an interrupt enabled section.
++ * Restoring the maps on sched in does not require interrupts being
++ * disabled either.
++ */
++ kmap_local_sched_in();
++
++ fire_sched_in_preempt_notifiers(current);
++ /*
++ * When switching through a kernel thread, the loop in
++ * membarrier_{private,global}_expedited() may have observed that
++ * kernel thread and not issued an IPI. It is therefore possible to
++ * schedule between user->kernel->user threads without passing though
++ * switch_mm(). Membarrier requires a barrier after storing to
++ * rq->curr, before returning to userspace, so provide them here:
++ *
++ * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++ * provided by mmdrop(),
++ * - a sync_core for SYNC_CORE.
++ */
++ if (mm) {
++ membarrier_mm_sync_core_before_usermode(mm);
++ mmdrop_sched(mm);
++ }
++ if (unlikely(prev_state == TASK_DEAD)) {
++ /* Task is done with its stack. */
++ put_task_stack(prev);
++
++ put_task_struct_rcu_user(prev);
++ }
++
++ return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ /*
++ * New tasks start with FORK_PREEMPT_COUNT, see there and
++ * finish_task_switch() for details.
++ *
++ * finish_task_switch() will drop rq->lock() and lower preempt_count
++ * and the preempt_enable() will end up enabling preemption (on
++ * PREEMPT_COUNT kernels).
++ */
++
++ finish_task_switch(prev);
++ preempt_enable();
++
++ if (current->set_child_tid)
++ put_user(task_pid_vnr(current), current->set_child_tid);
++
++ calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ prepare_task_switch(rq, prev, next);
++
++ /*
++ * For paravirt, this is coupled with an exit in switch_to to
++ * combine the page table reload and the switch backend into
++ * one hypercall.
++ */
++ arch_start_context_switch(prev);
++
++ /*
++ * kernel -> kernel lazy + transfer active
++ * user -> kernel lazy + mmgrab() active
++ *
++ * kernel -> user switch + mmdrop() active
++ * user -> user switch
++ *
++ * switch_mm_cid() needs to be updated if the barriers provided
++ * by context_switch() are modified.
++ */
++ if (!next->mm) { // to kernel
++ enter_lazy_tlb(prev->active_mm, next);
++
++ next->active_mm = prev->active_mm;
++ if (prev->mm) // from user
++ mmgrab(prev->active_mm);
++ else
++ prev->active_mm = NULL;
++ } else { // to user
++ membarrier_switch_mm(rq, prev->active_mm, next->mm);
++ /*
++ * sys_membarrier() requires an smp_mb() between setting
++ * rq->curr / membarrier_switch_mm() and returning to userspace.
++ *
++ * The below provides this either through switch_mm(), or in
++ * case 'prev->active_mm == next->mm' through
++ * finish_task_switch()'s mmdrop().
++ */
++ switch_mm_irqs_off(prev->active_mm, next->mm, next);
++ lru_gen_use_mm(next->mm);
++
++ if (!prev->mm) { // from kernel
++ /* will mmdrop() in finish_task_switch(). */
++ rq->prev_mm = prev->active_mm;
++ prev->active_mm = NULL;
++ }
++ }
++
++ /* switch_mm_cid() requires the memory barriers above. */
++ switch_mm_cid(rq, prev, next);
++
++ prepare_lock_switch(rq, next);
++
++ /* Here we just switch the register state and the stack. */
++ switch_to(prev, next, prev);
++ barrier();
++
++ return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_online_cpu(i)
++ sum += cpu_rq(i)->nr_running;
++
++ return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race. The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++ return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++ return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++ int i;
++ unsigned long long sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += cpu_rq(i)->nr_switches;
++
++ return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait pending, even though the waiting
++ * task might not even run on that CPU when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++ return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU; it can wake up on a CPU other than
++ * the one it blocked on. This means the per-CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += nr_iowait_cpu(i);
++
++ return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++ s64 ns = rq->clock_task - p->last_ran;
++
++ p->sched_time += ns;
++ cgroup_account_cputime(p, ns);
++ account_group_exec_runtime(p, ns);
++
++ p->time_slice -= ns;
++ p->last_ran = rq->clock_task;
++}
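++
++/*
++ * Illustrative numbers: if rq->clock_task advanced 1ms since p->last_ran,
++ * update_curr() adds that 1ms to p->sched_time and subtracts it from
++ * p->time_slice; once time_slice drops below RESCHED_NS the task is due
++ * to be rescheduled (see scheduler_task_tick() below).
++ */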
++
++/*
++ * Return accounted runtime for the task.
++ * In case the task is currently running, also include its pending
++ * runtime that has not been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++ /*
++ * 64-bit doesn't need locks to atomically read a 64-bit value.
++ * So we have an optimization opportunity when the task's delta_exec is 0.
++ * Reading ->on_cpu is racy, but this is OK.
++ *
++ * If we race with it leaving CPU, we'll take a lock. So we're correct.
++ * If we race with it entering CPU, unaccounted time is 0. This is
++ * indistinguishable from the read occurring a few cycles earlier.
++ * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++ * been accounted, so we're correct here as well.
++ */
++ if (!p->on_cpu || !task_on_rq_queued(p))
++ return tsk_seruntime(p);
++#endif
++
++ rq = task_access_lock_irqsave(p, &lock, &flags);
++ /*
++ * Must be ->curr _and_ ->on_rq. If dequeued, we would
++ * project cycles that may never be accounted to this
++ * thread, breaking clock_gettime().
++ */
++ if (p == rq->curr && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ update_curr(rq, p);
++ }
++ ns = tsk_seruntime(p);
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++ struct task_struct *p = rq->curr;
++
++ if (is_idle_task(p))
++ return;
++
++ update_curr(rq, p);
++ cpufreq_update_util(rq, 0);
++
++ /*
++ * Tasks that have less than RESCHED_NS of time slice left will be
++ * rescheduled.
++ */
++ if (p->time_slice >= RESCHED_NS)
++ return;
++ set_tsk_need_resched(p);
++ set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++ int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++ u64 resched_latency, now = rq_clock(rq);
++ static bool warned_once;
++
++ if (sysctl_resched_latency_warn_once && warned_once)
++ return 0;
++
++ if (!need_resched() || !latency_warn_ms)
++ return 0;
++
++ if (system_state == SYSTEM_BOOTING)
++ return 0;
++
++ if (!rq->last_seen_need_resched_ns) {
++ rq->last_seen_need_resched_ns = now;
++ rq->ticks_without_resched = 0;
++ return 0;
++ }
++
++ rq->ticks_without_resched++;
++ resched_latency = now - rq->last_seen_need_resched_ns;
++ if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++ return 0;
++
++ warned_once = true;
++
++ return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++ long val;
++
++ if ((kstrtol(str, 0, &val))) {
++ pr_warn("Unable to set resched_latency_warn_ms\n");
++ return 1;
++ }
++
++ sysctl_resched_latency_warn_ms = val;
++ return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++ int cpu __maybe_unused = smp_processor_id();
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *curr = rq->curr;
++ u64 resched_latency;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ arch_scale_freq_tick();
++
++ sched_clock_tick();
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ scheduler_task_tick(rq);
++ if (sched_feat(LATENCY_WARN))
++ resched_latency = cpu_resched_latency(rq);
++ calc_global_load_tick(rq);
++
++ task_tick_mm_cid(rq, rq->curr);
++
++ raw_spin_unlock(&rq->lock);
++
++ if (sched_feat(LATENCY_WARN) && resched_latency)
++ resched_latency_warn(cpu, resched_latency);
++
++ perf_event_task_tick();
++
++ if (curr->flags & PF_WQ_WORKER)
++ wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++ int cpu;
++ atomic_t state;
++ struct delayed_work work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE 0
++#define TICK_SCHED_REMOTE_OFFLINING 1
++#define TICK_SCHED_REMOTE_RUNNING 2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ * TICK_SCHED_REMOTE_OFFLINE
++ * | ^
++ * | |
++ * | | sched_tick_remote()
++ * | |
++ * | |
++ * +--TICK_SCHED_REMOTE_OFFLINING
++ * | ^
++ * | |
++ * sched_tick_start() | | sched_tick_stop()
++ * | |
++ * V |
++ * TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++ struct delayed_work *dwork = to_delayed_work(work);
++ struct tick_work *twork = container_of(dwork, struct tick_work, work);
++ int cpu = twork->cpu;
++ struct rq *rq = cpu_rq(cpu);
++ int os;
++
++ /*
++ * Handle the tick only if it appears the remote CPU is running in full
++ * dynticks mode. The check is racy by nature, but missing a tick or
++ * having one too much is no big deal because the scheduler tick updates
++ * statistics and checks timeslices in a time-independent way, regardless
++ * of when exactly it is running.
++ */
++ if (tick_nohz_tick_stopped_cpu(cpu)) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ struct task_struct *curr = rq->curr;
++
++ if (cpu_online(cpu)) {
++ update_rq_clock(rq);
++
++ if (!is_idle_task(curr)) {
++ /*
++ * Make sure the next tick runs within a
++ * reasonable amount of time.
++ */
++ u64 delta = rq_clock_task(rq) - curr->last_ran;
++ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++ }
++ scheduler_task_tick(rq);
++
++ calc_load_nohz_remote(rq);
++ }
++ }
++
++ /*
++ * Run the remote tick once per second (1Hz). This arbitrary
++ * period is long enough to avoid overload but short enough
++ * to keep scheduler internal stats reasonably up to date. But
++ * first update state to reflect hotplug activity if required.
++ */
++ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++ if (os == TICK_SCHED_REMOTE_RUNNING)
++ queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++ int os;
++ struct tick_work *twork;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++ if (os == TICK_SCHED_REMOTE_OFFLINE) {
++ twork->cpu = cpu;
++ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ }
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++ struct tick_work *twork;
++ int os;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ /* There cannot be competing actions, but don't rely on stop-machine. */
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++ WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++ /* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++ tick_work_cpu = alloc_percpu(struct tick_work);
++ BUG_ON(!tick_work_cpu);
++ return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++ defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++ if (preempt_count() == val) {
++ unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++ current->preempt_disable_ip = ip;
++#endif
++ trace_preempt_off(CALLER_ADDR0, ip);
++ }
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++ return;
++#endif
++ __preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Spinlock count overflowing soon?
++ */
++ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++ PREEMPT_MASK - 10);
++#endif
++ preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++ if (preempt_count() == val)
++ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++ return;
++ /*
++ * Is the spinlock portion underflowing?
++ */
++ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++ !(preempt_count() & PREEMPT_MASK)))
++ return;
++#endif
++
++ preempt_latency_stop(val);
++ __preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ return p->preempt_disable_ip;
++#else
++ return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++ /* Save this before calling printk(), since that will clobber it */
++ unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++ if (oops_in_progress)
++ return;
++
++ printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++ prev->comm, prev->pid, preempt_count());
++
++ debug_show_held_locks(prev);
++ print_modules();
++ if (irqs_disabled())
++ print_irqtrace_events(prev);
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, preempt_disable_ip);
++ }
++ check_panic_on_warn("scheduling while atomic");
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++ if (task_stack_end_corrupted(prev))
++ panic("corrupted stack end detected inside scheduler\n");
++
++ if (task_scs_end_corrupted(prev))
++ panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++ if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++ printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++ prev->comm, prev->pid, prev->non_block_count);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++ }
++#endif
++
++ if (unlikely(in_atomic_preempt_off())) {
++ __schedule_bug(prev);
++ preempt_count_set(PREEMPT_DISABLED);
++ }
++ rcu_sleep_check();
++ SCHED_WARN_ON(ct_state() == CT_STATE_USER);
++
++ profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++ schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++ printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++ " ecore_idle: 0x%04lx\n",
++ sched_rq_pending_mask.bits[0],
++ sched_idle_mask->bits[0],
++ sched_pcore_idle_mask->bits[0],
++ sched_ecore_idle_mask->bits[0]);
++}
++#endif
++
++#ifdef CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++ struct task_struct *p, *skip = rq->curr;
++ int nr_migrated = 0;
++ int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++ /* Workaround to check that rq->curr is still on the rq */
++ if (!task_on_rq_queued(skip))
++ return 0;
++
++ while (skip != rq->idle && nr_tries &&
++ (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++ skip = sched_rq_next_task(p, rq);
++ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++ __SCHED_DEQUEUE_TASK(p, rq, 0, );
++ set_task_cpu(p, dest_cpu);
++ sched_task_sanity_check(p, dest_rq);
++ sched_mm_cid_migrate_to(dest_rq, p);
++ __SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++ nr_migrated++;
++ }
++ nr_tries--;
++ }
++
++ return nr_migrated;
++}
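++
++/*
++ * Illustrative bound: with 8 runnable tasks on @rq and the default
++ * sysctl_sched_nr_migrate, at most min(8 / 2, SCHED_NR_MIGRATE_BREAK) = 4
++ * candidate tasks are examined for migration in a single pass.
++ */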
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++ cpumask_t *topo_mask, *end_mask, chk;
++
++ if (unlikely(!rq->online))
++ return 0;
++
++ if (cpumask_empty(&sched_rq_pending_mask))
++ return 0;
++
++ topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++ end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++ do {
++ int i;
++
++ if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++ continue;
++
++ for_each_cpu_wrap(i, &chk, cpu) {
++ int nr_migrated;
++ struct rq *src_rq;
++
++ src_rq = cpu_rq(i);
++ if (!do_raw_spin_trylock(&src_rq->lock))
++ continue;
++ spin_acquire(&src_rq->lock.dep_map,
++ SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++ if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++ src_rq->nr_running -= nr_migrated;
++ if (src_rq->nr_running < 2)
++ cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++
++ rq->nr_running += nr_migrated;
++ if (rq->nr_running > 1)
++ cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++ update_sched_preempt_mask(rq);
++ cpufreq_update_util(rq, 0);
++
++ return 1;
++ }
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++ }
++ } while (++topo_mask < end_mask);
++
++ return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++
++ sched_task_renew(p, rq);
++
++ if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++ requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired, since
++ * there's no point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++ if (unlikely(rq->idle == p))
++ return;
++
++ update_curr(rq, p);
++
++ if (p->time_slice < RESCHED_NS)
++ time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++ struct task_struct *next = sched_rq_first_task(rq);
++
++ if (next == rq->idle) {
++#ifdef CONFIG_SMP
++ if (!take_other_rq_tasks(rq, cpu)) {
++ if (likely(rq->balance_func && rq->online))
++ rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++ schedstat_inc(rq->sched_goidle);
++ /*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++ return next;
++#ifdef CONFIG_SMP
++ }
++ next = sched_rq_first_task(rq);
++#endif
++ }
++#ifdef CONFIG_HIGH_RES_TIMERS
++ hrtick_start(rq, next->time_slice);
++#endif
++ /*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++ return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock.
++ */
++#define SM_IDLE (-1)
++#define SM_NONE 0
++#define SM_PREEMPT 1
++#define SM_RTLOCK_WAIT 2
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ * 1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ * 2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ * paths. For example, see arch/x86/entry_64.S.
++ *
++ * To drive preemption between tasks, the scheduler sets the flag in timer
++ * interrupt handler sched_tick().
++ *
++ * 3. Wakeups don't really cause entry into schedule(). They add a
++ * task to the run-queue and that's it.
++ *
++ * Now, if the new task added to the run-queue preempts the current
++ * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ * called on the nearest possible occasion:
++ *
++ * - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ * - in syscall or exception context, at the next outermost
++ * preempt_enable(). (this might be as soon as the wake_up()'s
++ * spin_unlock()!)
++ *
++ * - in IRQ context, return from interrupt-handler to
++ * preemptible context
++ *
++ * - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ * then at the next:
++ *
++ * - cond_resched() call
++ * - explicit schedule() call
++ * - return from syscall or exception to user-space
++ * - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(int sched_mode)
++{
++ struct task_struct *prev, *next;
++ /*
++ * On PREEMPT_RT kernel, SM_RTLOCK_WAIT is noted
++ * as a preemption by schedule_debug() and RCU.
++ */
++ bool preempt = sched_mode > SM_NONE;
++ unsigned long *switch_count;
++ unsigned long prev_state;
++ struct rq *rq;
++ int cpu;
++
++ cpu = smp_processor_id();
++ rq = cpu_rq(cpu);
++ prev = rq->curr;
++
++ schedule_debug(prev, preempt);
++
++ /* Bypass the sched_feat(HRTICK) check, which the Alt schedule framework doesn't support */
++ hrtick_clear(rq);
++
++ local_irq_disable();
++ rcu_note_context_switch(preempt);
++
++ /*
++ * Make sure that signal_pending_state()->signal_pending() below
++ * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++ * done by the caller to avoid the race with signal_wake_up():
++ *
++ * __set_current_state(@state) signal_wake_up()
++ * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
++ * wake_up_state(p, state)
++ * LOCK rq->lock LOCK p->pi_state
++ * smp_mb__after_spinlock() smp_mb__after_spinlock()
++ * if (signal_pending_state()) if (p->state & @state)
++ *
++ * Also, the membarrier system call requires a full memory barrier
++ * after coming from user-space, before storing to rq->curr; this
++ * barrier matches a full barrier in the proximity of the membarrier
++ * system call exit.
++ */
++ raw_spin_lock(&rq->lock);
++ smp_mb__after_spinlock();
++
++ update_rq_clock(rq);
++
++ switch_count = &prev->nivcsw;
++
++ /* Task state changes only consider SM_PREEMPT as preemption */
++ preempt = sched_mode == SM_PREEMPT;
++
++ /*
++ * We must load prev->state once (task_struct::state is volatile), such
++ * that we form a control dependency vs deactivate_task() below.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ if (sched_mode == SM_IDLE) {
++ if (!rq->nr_running) {
++ next = prev;
++ goto picked;
++ }
++ } else if (!preempt && prev_state) {
++ if (signal_pending_state(prev_state, prev)) {
++ WRITE_ONCE(prev->__state, TASK_RUNNING);
++ } else {
++ prev->sched_contributes_to_load =
++ (prev_state & TASK_UNINTERRUPTIBLE) &&
++ !(prev_state & TASK_NOLOAD) &&
++ !(prev_state & TASK_FROZEN);
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ sched_task_deactivate(prev, rq);
++ block_task(rq, prev);
++ }
++ switch_count = &prev->nvcsw;
++ }
++
++ check_curr(prev, rq);
++
++ next = choose_next_task(rq, cpu);
++picked:
++ clear_tsk_need_resched(prev);
++ clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++ rq->last_seen_need_resched_ns = 0;
++#endif
++
++ if (likely(prev != next)) {
++ next->last_ran = rq->clock_task;
++
++ /*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++ rq->nr_switches++;
++ /*
++ * RCU users of rcu_dereference(rq->curr) may not see
++ * changes to task_struct made by pick_next_task().
++ */
++ RCU_INIT_POINTER(rq->curr, next);
++ /*
++ * The membarrier system call requires each architecture
++ * to have a full memory barrier after updating
++ * rq->curr, before returning to user-space.
++ *
++ * Here are the schemes providing that barrier on the
++ * various architectures:
++ * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++ * RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
++ * on PowerPC and on RISC-V.
++ * - finish_lock_switch() for weakly-ordered
++ * architectures where spin_unlock is a full barrier,
++ * - switch_to() for arm64 (weakly-ordered, spin_unlock
++ * is a RELEASE barrier),
++ *
++ * The barrier matches a full barrier in the proximity of
++ * the membarrier system call entry.
++ *
++ * On RISC-V, this barrier pairing is also needed for the
++ * SYNC_CORE command when switching between processes, cf.
++ * the inline comments in membarrier_arch_switch_mm().
++ */
++ ++*switch_count;
++
++ trace_sched_switch(preempt, prev, next, prev_state);
++
++ /* Also unlocks the rq: */
++ rq = context_switch(rq, prev, next);
++
++ cpu = cpu_of(rq);
++ } else {
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++ }
++}
++
++void __noreturn do_task_dead(void)
++{
++ /* Causes final put_task_struct in finish_task_switch(): */
++ set_special_state(TASK_DEAD);
++
++ /* Tell freezer to ignore us: */
++ current->flags |= PF_NOFREEZE;
++
++ __schedule(SM_NONE);
++ BUG();
++
++ /* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++ for (;;)
++ cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++ static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++ unsigned int task_flags;
++
++ /*
++ * Establish LD_WAIT_CONFIG context to ensure none of the code called
++ * will use a blocking primitive -- which would lead to recursion.
++ */
++ lock_map_acquire_try(&sched_map);
++
++ task_flags = tsk->flags;
++ /*
++ * If a worker goes to sleep, notify and ask workqueue whether it
++ * wants to wake up a task to maintain concurrency.
++ */
++ if (task_flags & PF_WQ_WORKER)
++ wq_worker_sleeping(tsk);
++ else if (task_flags & PF_IO_WORKER)
++ io_wq_worker_sleeping(tsk);
++
++ /*
++ * spinlock and rwlock must not flush block requests. This will
++ * deadlock if the callback attempts to acquire a lock which is
++ * already acquired.
++ */
++ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++ /*
++ * If we are going to sleep and we have plugged IO queued,
++ * make sure to submit it to avoid deadlocks.
++ */
++ blk_flush_plug(tsk->plug, true);
++
++ lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++ if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++ if (tsk->flags & PF_BLOCK_TS)
++ blk_plug_invalidate_ts(tsk);
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_running(tsk);
++ else if (tsk->flags & PF_IO_WORKER)
++ io_wq_worker_running(tsk);
++ }
++}
++
++static __always_inline void __schedule_loop(int sched_mode)
++{
++ do {
++ preempt_disable();
++ __schedule(sched_mode);
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++ struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++ lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++ if (!task_is_running(tsk))
++ sched_submit_work(tsk);
++ __schedule_loop(SM_NONE);
++ sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++ /*
++ * As this skips calling sched_submit_work(), which the idle task does
++ * regardless because that function is a NOP when the task is in a
++ * TASK_RUNNING state, make sure this isn't used someplace that the
++ * current task can be in any other state. Note, idle is always in the
++ * TASK_RUNNING state.
++ */
++ WARN_ON_ONCE(current->__state);
++ do {
++ __schedule(SM_IDLE);
++ } while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++ /*
++ * If we come here after a random call to set_need_resched(),
++ * or we have been woken up remotely but the IPI has not yet arrived,
++ * we haven't yet exited the RCU idle mode. Do it here manually until
++ * we find a better solution.
++ *
++ * NB: There are buggy callers of this function. Ideally we
++ * should warn if prev_state != CT_STATE_USER, but that will trigger
++ * too frequently to make sense yet.
++ */
++ enum ctx_state prev_state = exception_enter();
++ schedule();
++ exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++ sched_preempt_enable_no_resched();
++ schedule();
++ preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++ __schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ __schedule(SM_PREEMPT);
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++
++ /*
++ * Check again in case we missed a preemption opportunity
++ * between schedule and now.
++ */
++ } while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++ /*
++ * If there is a non-zero preempt_count or interrupts are disabled,
++ * we do not want to preempt the current task. Just return..
++ */
++ if (likely(!preemptible()))
++ return;
++
++ preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled preempt_schedule
++#define preempt_schedule_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++ return;
++ preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++ enum ctx_state prev_ctx;
++
++ if (likely(!preemptible()))
++ return;
++
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ /*
++ * Needs preempt disabled in case user_exit() is traced
++ * and the tracer calls preempt_enable_notrace() causing
++ * an infinite recursion.
++ */
++ prev_ctx = exception_enter();
++ __schedule(SM_PREEMPT);
++ exception_exit(prev_ctx);
++
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++ } while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++ return;
++ preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of IRQ context.
++ * Note that this is called and returns with IRQs disabled. This
++ * protects us against recursive calls from IRQ contexts.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++ enum ctx_state prev_state;
++
++ /* Catch callers which need to be fixed */
++ BUG_ON(preempt_count() || !irqs_disabled());
++
++ prev_state = exception_enter();
++
++ do {
++ preempt_disable();
++ local_irq_enable();
++ __schedule(SM_PREEMPT);
++ local_irq_disable();
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++
++ exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++ void *key)
++{
++ WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++ return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++ /* Trigger resched if task sched_prio has been modified. */
++ if (task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ requeue_task(p, rq);
++ wakeup_preempt(rq);
++ }
++}
++
++void __setscheduler_prio(struct task_struct *p, int prio)
++{
++ p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++ lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++ sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++ lockdep_assert(current->sched_rt_mutex);
++ __schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++ sched_update_worker(current);
++ lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++ int prio;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ /* XXX used to be waiter->prio, not waiter->task->prio */
++ prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++ /*
++ * If nothing changed; bail early.
++ */
++ if (p->pi_top_task == pi_task && prio == p->prio)
++ return;
++
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Set under pi_lock && rq->lock, such that the value can be used under
++ * either lock.
++ *
++ * Note that there is a load of trickiness in making this pointer cache
++ * work right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++ * ensure a task is de-boosted (pi_task is set to NULL) before the
++ * task is allowed to run again (and can exit). This ensures the pointer
++ * points to a blocked task -- which guarantees the task is present.
++ */
++ p->pi_top_task = pi_task;
++
++ /*
++ * For FIFO/RR we only need to set prio; if that matches we're done.
++ */
++ if (prio == p->prio)
++ goto out_unlock;
++
++ /*
++ * Idle task boosting is a no-no in general. There is one
++ * exception, when PREEMPT_RT and NOHZ is active:
++ *
++ * The idle task calls get_next_timer_interrupt() and holds
++ * the timer wheel base->lock on the CPU and another CPU wants
++ * to access the timer (probably to cancel it). We can safely
++ * ignore the boosting request, as the idle CPU runs this code
++ * with interrupts disabled and will complete the lock
++ * protected section without being interrupted. So there is no
++ * real need to boost.
++ */
++ if (unlikely(p == rq->idle)) {
++ WARN_ON(p != rq->curr);
++ WARN_ON(p->pi_blocked_on);
++ goto out_unlock;
++ }
++
++ trace_sched_pi_setprio(p, pi_task);
++
++ __setscheduler_prio(p, prio);
++
++ check_task_changed(p, rq);
++out_unlock:
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++
++ if (task_on_rq_queued(p))
++ __balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++
++ preempt_enable();
++}
++#endif
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++ if (should_resched(0)) {
++ preempt_schedule_common();
++ return 1;
++ }
++ /*
++ * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++ * whether the current CPU is in an RCU read-side critical section,
++ * so the tick can report quiescent states even for CPUs looping
++ * in kernel context. In contrast, in non-preemptible kernels,
++ * RCU readers leave no in-memory hints, which means that CPU-bound
++ * processes executing in kernel context might never report an
++ * RCU quiescent state. Therefore, the following code causes
++ * cond_resched() to report a quiescent state, but only when RCU
++ * is in urgent need of one.
++ */
++#ifndef CONFIG_PREEMPT_RCU
++ rcu_all_qs();
++#endif
++ return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
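++
++/*
++ * Typical use (an illustrative sketch; nr_items/process_item() are
++ * placeholder names): long kernel loops call cond_resched() so that
++ * non-preemptible and voluntary-preemption kernels stay responsive:
++ *
++ *	for (i = 0; i < nr_items; i++) {
++ *		process_item(i);
++ *		cond_resched();
++ *	}
++ */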
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled __cond_resched
++#define cond_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled __cond_resched
++#define might_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++ klp_sched_try_switch();
++ if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_might_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION. We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held(lock);
++
++ if (spin_needbreak(lock) || resched) {
++ spin_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ spin_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
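++
++/*
++ * Illustrative pattern (a sketch; more_work(), do_a_chunk() and
++ * revalidate_cached_state() are placeholder names) for long work under a
++ * spinlock:
++ *
++ *	spin_lock(&lock);
++ *	while (more_work()) {
++ *		do_a_chunk();
++ *		if (cond_resched_lock(&lock))
++ *			revalidate_cached_state();
++ *	}
++ *	spin_unlock(&lock);
++ *
++ * A non-zero return means the lock was dropped and re-acquired, so any
++ * state derived under it must be revalidated.
++ */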
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_read(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ read_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ read_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_write(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ write_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ write_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ * cond_resched <- __cond_resched
++ * might_resched <- RET0
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ * cond_resched <- __cond_resched
++ * might_resched <- __cond_resched
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ * cond_resched <- RET0
++ * might_resched <- RET0
++ * preempt_schedule <- preempt_schedule
++ * preempt_schedule_notrace <- preempt_schedule_notrace
++ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++ preempt_dynamic_undefined = -1,
++ preempt_dynamic_none,
++ preempt_dynamic_voluntary,
++ preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++ if (!strcmp(str, "none"))
++ return preempt_dynamic_none;
++
++ if (!strcmp(str, "voluntary"))
++ return preempt_dynamic_voluntary;
++
++ if (!strcmp(str, "full"))
++ return preempt_dynamic_full;
++
++ return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f) static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f) static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
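++
++/*
++ * For example, with the static-call flavour above,
++ * preempt_dynamic_enable(might_resched) expands to
++ *
++ *	static_call_update(might_resched, __cond_resched);
++ *
++ * while preempt_dynamic_disable(might_resched) re-points the trampoline at
++ * __static_call_return0, turning might_resched() into a no-op.
++ */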
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++ /*
++ * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++ * the ZERO state, which is invalid.
++ */
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++ switch (mode) {
++ case preempt_dynamic_none:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: none\n");
++ break;
++
++ case preempt_dynamic_voluntary:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: voluntary\n");
++ break;
++
++ case preempt_dynamic_full:
++ if (!klp_override)
++ preempt_dynamic_disable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: full\n");
++ break;
++ }
++
++ preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++ mutex_lock(&sched_dynamic_mutex);
++ __sched_dynamic_update(mode);
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++ __klp_sched_try_switch();
++ return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = true;
++ static_call_update(cond_resched, klp_cond_resched);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = false;
++ __sched_dynamic_update(preempt_dynamic_mode);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++ int mode = sched_dynamic_mode(str);
++ if (mode < 0) {
++ pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++ return 0;
++ }
++
++ sched_dynamic_update(mode);
++ return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++ if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++ if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++ sched_dynamic_update(preempt_dynamic_none);
++ } else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++ sched_dynamic_update(preempt_dynamic_voluntary);
++ } else {
++ /* Default static call setting, nothing to do */
++ WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++ preempt_dynamic_mode = preempt_dynamic_full;
++ pr_info("Dynamic Preempt: full\n");
++ }
++ }
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++ bool preempt_model_##mode(void) \
++ { \
++ WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++ return preempt_dynamic_mode == preempt_dynamic_##mode; \
++ } \
++ EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC: */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++int io_schedule_prepare(void)
++{
++ int old_iowait = current->in_iowait;
++
++ current->in_iowait = 1;
++ blk_flush_plug(current->plug, true);
++ return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++ current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO. Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++ int token;
++ long ret;
++
++ token = io_schedule_prepare();
++ ret = schedule_timeout(timeout);
++ io_schedule_finish(token);
++
++ return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++ int token;
++
++ token = io_schedule_prepare();
++ schedule();
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
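++
++/*
++ * The prepare/finish pair can also bracket other blocking primitives,
++ * e.g. (sketch):
++ *
++ *	int token = io_schedule_prepare();
++ *	mutex_lock(lock);
++ *	io_schedule_finish(token);
++ *
++ * which is essentially how mutex_lock_io() accounts its sleep as IO wait.
++ */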
++
++void sched_show_task(struct task_struct *p)
++{
++ unsigned long free;
++ int ppid;
++
++ if (!try_get_task_stack(p))
++ return;
++
++ pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++ if (task_is_running(p))
++ pr_cont(" running task ");
++ free = stack_not_used(p);
++ ppid = 0;
++ rcu_read_lock();
++ if (pid_alive(p))
++ ppid = task_pid_nr(rcu_dereference(p->real_parent));
++ rcu_read_unlock();
++ pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++ free, task_pid_nr(p), task_tgid_nr(p),
++ ppid, read_task_thread_flags(p));
++
++ print_worker_info(KERN_INFO, p);
++ print_stop_info(KERN_INFO, p);
++ show_stack(p, NULL, KERN_INFO);
++ put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /* no filter, everything matches */
++ if (!state_filter)
++ return true;
++
++ /* filter, but doesn't match */
++ if (!(state & state_filter))
++ return false;
++
++ /*
++ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++ * TASK_KILLABLE).
++ */
++ if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++ return false;
++
++ return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++ struct task_struct *g, *p;
++
++ rcu_read_lock();
++ for_each_process_thread(g, p) {
++ /*
++ * reset the NMI-timeout, listing all files on a slow
++ * console might take a lot of time:
++ * Also, reset softlockup watchdogs on all CPUs, because
++ * another CPU might be blocked waiting for us to process
++ * an IPI.
++ */
++ touch_nmi_watchdog();
++ touch_all_softlockup_watchdogs();
++ if (state_filter_match(state_filter, p))
++ sched_show_task(p);
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ /* TODO: Alt schedule FW should support this
++ if (!state_filter)
++ sysrq_sched_debug_show();
++ */
++#endif
++ rcu_read_unlock();
++ /*
++ * Only show locks if all tasks are dumped:
++ */
++ if (!state_filter)
++ debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++ if (in_hardirq() && cpu == smp_processor_id()) {
++ struct pt_regs *regs;
++
++ regs = get_irq_regs();
++ if (regs) {
++ show_regs(regs);
++ return;
++ }
++ }
++
++ if (trigger_single_cpu_backtrace(cpu))
++ return;
++
++ pr_info("Task dump for CPU %d:\n", cpu);
++ sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++ struct affinity_context ac = (struct affinity_context) {
++ .new_mask = cpumask_of(cpu),
++ .flags = 0,
++ };
++#endif
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ __sched_fork(0, idle);
++
++ raw_spin_lock_irqsave(&idle->pi_lock, flags);
++ raw_spin_lock(&rq->lock);
++
++ idle->last_ran = rq->clock_task;
++ idle->__state = TASK_RUNNING;
++ /*
++ * PF_KTHREAD should already be set at this point; regardless, make it
++ * look like a proper per-CPU kthread.
++ */
++ idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++ kthread_set_per_cpu(idle, cpu);
++
++ sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++ /*
++ * It's possible that init_idle() gets called multiple times on a task,
++ * in that case do_set_cpus_allowed() will not do the right thing.
++ *
++ * And since this is boot we can forgo the serialisation.
++ */
++ set_cpus_allowed_common(idle, &ac);
++#endif
++
++ /* Silence PROVE_RCU */
++ rcu_read_lock();
++ __set_task_cpu(idle, cpu);
++ rcu_read_unlock();
++
++ rq->idle = idle;
++ rcu_assign_pointer(rq->curr, idle);
++ idle->on_cpu = 1;
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++ /* Set the preempt count _outside_ the spinlocks! */
++ init_idle_preempt_count(idle, cpu);
++
++ ftrace_graph_init_idle_task(idle, cpu);
++ vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++ sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++ const struct cpumask __maybe_unused *trial)
++{
++ return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++ int ret = 0;
++
++ /*
++ * Kthreads which disallow setaffinity shouldn't be moved
++ * to a new cpuset; we don't want to change their CPU
++ * affinity and isolating such threads by their set of
++ * allowed nodes is unnecessary. Thus, cpusets are not
++ * applicable for such threads. This prevents checking for
++ * success of set_cpus_allowed_ptr() on all attached tasks
++ * before cpus_mask may be changed.
++ */
++ if (p->flags & PF_NO_SETAFFINITY)
++ ret = -EINVAL;
++
++ return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++ struct mm_struct *mm = current->active_mm;
++
++ BUG_ON(current != this_rq()->idle);
++
++ if (mm != &init_mm) {
++ switch_mm(mm, &init_mm, current);
++ finish_arch_post_lock_switch();
++ }
++
++ /* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++ struct task_struct *p = arg;
++ struct rq *rq = this_rq();
++ struct rq_flags rf;
++ int cpu;
++
++ raw_spin_lock_irq(&p->pi_lock);
++ rq_lock(rq, &rf);
++
++ update_rq_clock(rq);
++
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ cpu = select_fallback_rq(rq->cpu, p);
++ rq = __migrate_task(rq, p, cpu);
++ }
++
++ rq_unlock(rq, &rf);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ put_task_struct(p);
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
++ * effective when the hotplug motion is down.
++ */
++static void balance_push(struct rq *rq)
++{
++ struct task_struct *push_task = rq->curr;
++
++ lockdep_assert_held(&rq->lock);
++
++ /*
++ * Ensure the thing is persistent until balance_push_set(.on = false);
++ */
++ rq->balance_callback = &balance_push_callback;
++
++ /*
++ * Only active while going offline and when invoked on the outgoing
++ * CPU.
++ */
++ if (!cpu_dying(rq->cpu) || rq != this_rq())
++ return;
++
++ /*
++ * Both the cpu-hotplug and stop task are in this case and are
++ * required to complete the hotplug process.
++ */
++ if (kthread_is_per_cpu(push_task) ||
++ is_migration_disabled(push_task)) {
++
++ /*
++ * If this is the idle task on the outgoing CPU try to wake
++ * up the hotplug control thread which might wait for the
++ * last task to vanish. The rcuwait_active() check is
++ * accurate here because the waiter is pinned on this CPU
++ * and can't obviously be running in parallel.
++ *
++ * On RT kernels this also has to check whether there are
++ * pinned and scheduled out tasks on the runqueue. They
++ * need to leave the migrate disabled section first.
++ */
++ if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++ rcuwait_active(&rq->hotplug_wait)) {
++ raw_spin_unlock(&rq->lock);
++ rcuwait_wake_up(&rq->hotplug_wait);
++ raw_spin_lock(&rq->lock);
++ }
++ return;
++ }
++
++ get_task_struct(push_task);
++ /*
++ * Temporarily drop rq->lock such that we can wake-up the stop task.
++ * Both preemption and IRQs are still disabled.
++ */
++ preempt_disable();
++ raw_spin_unlock(&rq->lock);
++ stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++ this_cpu_ptr(&push_work));
++ preempt_enable();
++ /*
++ * At this point need_resched() is true and we'll take the loop in
++ * schedule(). The next pick is obviously going to be the stop task
++ * which kthread_is_per_cpu() and will push this task away.
++ */
++ raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct rq_flags rf;
++
++ rq_lock_irqsave(rq, &rf);
++ if (on) {
++ WARN_ON_ONCE(rq->balance_callback);
++ rq->balance_callback = &balance_push_callback;
++ } else if (rq->balance_callback == &balance_push_callback) {
++ rq->balance_callback = NULL;
++ }
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPUs hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++ struct rq *rq = this_rq();
++
++ rcuwait_wait_event(&rq->hotplug_wait,
++ rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++ TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++ if (rq->online) {
++ update_rq_clock(rq);
++ rq->online = false;
++ }
++}
++
++static void set_rq_online(struct rq *rq)
++{
++ if (!rq->online)
++ rq->online = true;
++}
++
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_online(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_offline(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask. If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++ if (cpuhp_tasks_frozen) {
++ /*
++ * num_cpus_frozen tracks how many CPUs are involved in the suspend/
++ * resume sequence. As long as this is not the last online
++ * operation in the resume sequence, just build a single sched
++ * domain, ignoring cpusets.
++ */
++ partition_sched_domains(1, NULL, NULL);
++ if (--num_cpus_frozen)
++ return;
++ /*
++ * This is the last CPU online operation. So fall through and
++ * restore the original sched domains by considering the
++ * cpuset configurations.
++ */
++ cpuset_force_rebuild();
++ }
++
++ cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++ if (!cpuhp_tasks_frozen) {
++ cpuset_update_active_cpus();
++ } else {
++ num_cpus_frozen++;
++ partition_sched_domains(1, NULL, NULL);
++ }
++ return 0;
++}
++
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_inc_cpuslocked(&sched_smt_present);
++ cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_dec_cpuslocked(&sched_smt_present);
++ if (!static_branch_likely(&sched_smt_present))
++ cpumask_clear(sched_pcore_idle_mask);
++ cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ /*
++ * Clear the balance_push callback and prepare to schedule
++ * regular tasks.
++ */
++ balance_push_set(cpu, false);
++
++ set_cpu_active(cpu, true);
++
++ if (sched_smp_initialized)
++ cpuset_cpu_active();
++
++ /*
++ * Put the rq online, if not already. This happens:
++ *
++ * 1) In the early boot process, because we build the real domains
++ * after all cpus have been brought up.
++ *
++ * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++ * domains.
++ */
++ sched_set_rq_online(rq, cpu);
++
++ /*
++ * When going up, increment the number of cores with SMT present.
++ */
++ sched_smt_present_inc(cpu);
++
++ return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ int ret;
++
++ set_cpu_active(cpu, false);
++
++ /*
++ * From this point forward, this CPU will refuse to run any task that
++ * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++ * push those tasks away until this gets cleared, see
++ * sched_cpu_dying().
++ */
++ balance_push_set(cpu, true);
++
++ /*
++ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++ * users of this state to go away such that all new such users will
++ * observe it.
++ *
++ * Specifically, we rely on ttwu to no longer target this CPU, see
++ * ttwu_queue_cond() and is_cpu_allowed().
++ *
++ * Do sync before park smpboot threads to take care the RCU boost case.
++ */
++ synchronize_rcu();
++
++ sched_set_rq_offline(rq, cpu);
++
++ /*
++ * When going down, decrement the number of cores with SMT present.
++ */
++ sched_smt_present_dec(cpu);
++
++ if (!sched_smp_initialized)
++ return 0;
++
++ ret = cpuset_cpu_inactive(cpu);
++ if (ret) {
++ sched_smt_present_inc(cpu);
++ sched_set_rq_online(rq, cpu);
++ balance_push_set(cpu, false);
++ set_cpu_active(cpu, true);
++ return ret;
++ }
++
++ return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++ sched_rq_cpu_starting(cpu);
++ sched_tick_start(cpu);
++ return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++ balance_hotplug_wait();
++ return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the tear-down thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++ long delta = calc_load_fold_active(rq, 1);
++
++ if (delta)
++ atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++ struct task_struct *g, *p;
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_held(&rq->lock);
++
++ printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++ for_each_process_thread(g, p) {
++ if (task_cpu(p) != cpu)
++ continue;
++
++ if (!task_on_rq_queued(p))
++ continue;
++
++ printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++ }
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ /* Handle pending wakeups and then migrate everything off */
++ sched_tick_stop(cpu);
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++ WARN(true, "Dying CPU not properly vacated!");
++ dump_rq_tasks(rq, KERN_WARNING);
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ calc_load_migrate(rq);
++ hrtick_clear(rq);
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++ int cpu;
++ cpumask_t *tmp;
++
++ for_each_possible_cpu(cpu) {
++ /* init topo masks */
++ tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++ cpumask_copy(tmp, cpu_possible_mask);
++ per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++ per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++ }
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++ if (cpumask_and(topo, topo, mask)) { \
++ cpumask_copy(topo, mask); \
++ printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name, \
++ cpu, (topo++)->bits[0]); \
++ } \
++ if (!last) \
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(mask), \
++ nr_cpumask_bits);
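++
++/*
++ * Illustrative effect (a sketch): starting from the complement of
++ * cpumask_of(cpu), each TOPOLOGY_CPUMASK() step records and logs one
++ * topology level's mask, e.g. for cpu#0 whose SMT sibling is cpu#1:
++ *
++ *	sched: cpu#00 topo: 0x00000003 - smt
++ *
++ * A level is skipped when it adds no CPUs beyond those already covered
++ * by the narrower levels before it.
++ */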
++
++static void sched_init_topology_cpumask(void)
++{
++ int cpu;
++ cpumask_t *topo;
++
++ for_each_online_cpu(cpu) {
++ topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++ nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++ TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++ TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++ per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++ per_cpu(sched_cpu_llc_mask, cpu) = topo;
++ TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++ TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++ TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++ per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++ printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++ cpu, per_cpu(sd_llc_id, cpu),
++ (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++ per_cpu(sched_cpu_topo_masks, cpu)));
++ }
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++ /* Move init over to a non-isolated CPU */
++ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++ BUG();
++ current->flags &= ~PF_NO_SETAFFINITY;
++
++ sched_init_topology();
++ sched_init_topology_cpumask();
++
++ sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++ sched_cpu_starting(smp_processor_id());
++ return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++ cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++ return in_lock_functions(addr) ||
++ (addr >= (unsigned long)__sched_text_start
++ && addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++ int i;
++ struct rq *rq;
++
++ printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++ " by Alfred Chen.\n");
++
++ wait_bit_init();
++
++#ifdef CONFIG_SMP
++ for (i = 0; i < SCHED_QUEUE_BITS; i++)
++ cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++ task_group_cache = KMEM_CACHE(task_group, 0);
++
++ list_add(&root_task_group.list, &task_groups);
++ INIT_LIST_HEAD(&root_task_group.children);
++ INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++ for_each_possible_cpu(i) {
++ rq = cpu_rq(i);
++
++ sched_queue_init(&rq->queue);
++ rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = rq->prio;
++#endif
++
++ raw_spin_lock_init(&rq->lock);
++ rq->nr_running = rq->nr_uninterruptible = 0;
++ rq->calc_load_active = 0;
++ rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++ rq->online = false;
++ rq->cpu = i;
++
++ rq->clear_idle_mask_func = cpumask_clear_cpu;
++ rq->set_idle_mask_func = cpumask_set_cpu;
++ rq->balance_func = NULL;
++ rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++ INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++ rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++ rq->nr_switches = 0;
++
++ hrtick_rq_init(rq);
++ atomic_set(&rq->nr_iowait, 0);
++
++ zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++ }
++#ifdef CONFIG_SMP
++ /* Set rq->online for cpu 0 */
++ cpu_rq(0)->online = true;
++#endif
++ /*
++ * The boot idle thread does lazy MMU switching as well:
++ */
++ mmgrab(&init_mm);
++ enter_lazy_tlb(&init_mm, current);
++
++ /*
++ * The idle task doesn't need the kthread struct to function, but it
++ * is dressed up as a per-CPU kthread and thus needs to play the part
++ * if we want to avoid special-casing it in code that deals with per-CPU
++ * kthreads.
++ */
++ WARN_ON(!set_kthread_struct(current));
++
++ /*
++ * Make us the idle thread. Technically, schedule() should not be
++ * called from this thread, however somewhere below it might be,
++ * but because we are the idle thread, we just pick up running again
++ * when this runqueue becomes "idle".
++ */
++ init_idle(current, smp_processor_id());
++
++ calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++ idle_thread_set_boot_cpu();
++ balance_push_set(smp_processor_id(), false);
++
++ sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++ preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++ unsigned int state = get_current_state();
++ /*
++ * Blocking primitives will set (and therefore destroy) current->state;
++ * since we will exit with TASK_RUNNING, make sure we enter with it,
++ * otherwise we will destroy state.
++ */
++ WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++ "do not call blocking ops when !TASK_RUNNING; "
++ "state=%x set at [<%p>] %pS\n", state,
++ (void *)current->task_state_change,
++ (void *)current->task_state_change);
++
++ __might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++ if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++ return;
++
++ if (preempt_count() == preempt_offset)
++ return;
++
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++ unsigned int nested = preempt_count();
++
++ nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++ return nested == offsets;
++}
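++
++/*
++ * Example (sketch): a caller allowed to hold exactly one RCU read-side
++ * critical section but no preempt-disabled region would pass
++ *
++ *	offsets = 1 << MIGHT_RESCHED_RCU_SHIFT;
++ *
++ * i.e. preempt_count() must be 0 and rcu_preempt_depth() must be 1 for
++ * the check to pass.
++ */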
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++ /* Ratelimiting timestamp: */
++ static unsigned long prev_jiffy;
++
++ unsigned long preempt_disable_ip;
++
++ /* WARN_ON_ONCE() by default, no rate limit required: */
++ rcu_sleep_check();
++
++ if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++ !is_idle_task(current) && !current->non_block_count) ||
++ system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++ oops_in_progress)
++ return;
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ /* Save this before calling printk(), since that will clobber it: */
++ preempt_disable_ip = get_preempt_disable_ip(current);
++
++ pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++ file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), current->non_block_count,
++ current->pid, current->comm);
++ pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++ offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++ pr_err("RCU nest depth: %d, expected: %u\n",
++ rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++ }
++
++ if (task_stack_end_corrupted(current))
++ pr_emerg("Thread overran stack, or stack corrupted\n");
++
++ debug_show_held_locks(current);
++ if (irqs_disabled())
++ print_irqtrace_events(current);
++
++ print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++ preempt_disable_ip);
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > preempt_offset)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++ printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (is_migration_disabled(current))
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > 0)
++ return;
++
++ if (current->migration_flags & MDF_FORCE_ENABLED)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), is_migration_disabled(current),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++ struct task_struct *g, *p;
++ struct sched_attr attr = {
++ .sched_policy = SCHED_NORMAL,
++ };
++
++ read_lock(&tasklist_lock);
++ for_each_process_thread(g, p) {
++ /*
++ * Only normalize user tasks:
++ */
++ if (p->flags & PF_KTHREAD)
++ continue;
++
++ schedstat_set(p->stats.wait_start, 0);
++ schedstat_set(p->stats.sleep_start, 0);
++ schedstat_set(p->stats.block_start, 0);
++
++ if (!rt_or_dl_task(p)) {
++ /*
++ * Renice negative nice level userspace
++ * tasks back to 0:
++ */
++ if (task_nice(p) < 0)
++ set_user_nice(p, 0);
++ continue;
++ }
++
++ __sched_setscheduler(p, &attr, false, false);
++ }
++ read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for KDB.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++ return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++ kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++ sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++ /*
++ * We have to wait for yet another RCU grace period to expire, as
++ * print_cfs_stats() might run concurrently.
++ */
++ call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++ struct task_group *tg;
++
++ tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++ if (!tg)
++ return ERR_PTR(-ENOMEM);
++
++ return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* RCU callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++ /* Now it should be safe to free those cfs_rqs: */
++ sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++ /* Wait for possible concurrent references to cfs_rqs to complete: */
++ call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++ return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++ struct task_group *parent = css_tg(parent_css);
++ struct task_group *tg;
++
++ if (!parent) {
++ /* This is early initialization for the top cgroup */
++ return &root_task_group.css;
++ }
++
++ tg = sched_create_group(parent);
++ if (IS_ERR(tg))
++ return ERR_PTR(-ENOMEM);
++ return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++ struct task_group *parent = css_tg(css->parent);
++
++ if (parent)
++ sched_online_group(tg, parent);
++ return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ /*
++ * Relies on the RCU grace period between css_released() and this.
++ */
++ sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++ return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, s64 cfs_quota_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_burst_us)
++{
++ return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 val)
++{
++ return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 rt_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++ {
++ .name = "cfs_quota_us",
++ .read_s64 = cpu_cfs_quota_read_s64,
++ .write_s64 = cpu_cfs_quota_write_s64,
++ },
++ {
++ .name = "cfs_period_us",
++ .read_u64 = cpu_cfs_period_read_u64,
++ .write_u64 = cpu_cfs_period_write_u64,
++ },
++ {
++ .name = "cfs_burst_us",
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "stat",
++ .seq_show = cpu_cfs_stat_show,
++ },
++ {
++ .name = "stat.local",
++ .seq_show = cpu_cfs_local_stat_show,
++ },
++ {
++ .name = "rt_runtime_us",
++ .read_s64 = cpu_rt_runtime_read,
++ .write_s64 = cpu_rt_runtime_write,
++ },
++ {
++ .name = "rt_period_us",
++ .read_u64 = cpu_rt_period_read_uint,
++ .write_u64 = cpu_rt_period_write_uint,
++ },
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++ { } /* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft, u64 weight)
++{
++ return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 nice)
++{
++ return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 idle)
++{
++ return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes, loff_t off)
++{
++ return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++ {
++ .name = "weight",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_weight_read_u64,
++ .write_u64 = cpu_weight_write_u64,
++ },
++ {
++ .name = "weight.nice",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_weight_nice_read_s64,
++ .write_s64 = cpu_weight_nice_write_s64,
++ },
++ {
++ .name = "idle",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++ {
++ .name = "max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_max_show,
++ .write = cpu_max_write,
++ },
++ {
++ .name = "max.burst",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++ { } /* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++ .css_alloc = cpu_cgroup_css_alloc,
++ .css_online = cpu_cgroup_css_online,
++ .css_released = cpu_cgroup_css_released,
++ .css_free = cpu_cgroup_css_free,
++ .css_extra_stat_show = cpu_extra_stat_show,
++ .css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++ .can_attach = cpu_cgroup_can_attach,
++#endif
++ .attach = cpu_cgroup_attach,
++ .legacy_cftypes = cpu_legacy_files,
++ .dfl_cftypes = cpu_files,
++ .early_init = true,
++ .threaded = true,
++};
++#endif /* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
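++
++/*
++ * Background (a sketch): a concurrency ID is a compact per-mm index,
++ * bounded by the number of allowed CPUs, exposed to userspace via rseq so
++ * threads can index per-"cpu" style data without one slot per possible
++ * CPU. The machinery below keeps those IDs compact as tasks migrate,
++ * exit, or change affinity.
++ */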
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ * X = Y = 0
++ *
++ * w[X]=1 w[Y]=1
++ * MB MB
++ * r[Y]=y r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0 CPU1
++ *
++ * Context switch CS-1 Remote-clear
++ * - store to rq->curr: (N)->(Y) (TSA) - cmpxchg to *pcpu_id to LAZY (TMA)
++ * (implied barrier after cmpxchg)
++ * - switch_mm_cid()
++ * - memory barrier (see switch_mm_cid()
++ * comment explaining how this barrier
++ * is combined with other scheduler
++ * barriers)
++ * - mm_cid_get (next)
++ * - READ_ONCE(*pcpu_cid) - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++ t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid)
++{
++ struct mm_struct *mm = t->mm;
++ struct task_struct *src_task;
++ int src_cid, last_mm_cid;
++
++ if (!mm)
++ return -1;
++
++ last_mm_cid = t->last_mm_cid;
++ /*
++ * If the migrated task has no last cid, or if the current
++ * task on src rq uses the cid, it means the source cid does not need
++ * to be moved to the destination cpu.
++ */
++ if (last_mm_cid == -1)
++ return -1;
++ src_cid = READ_ONCE(src_pcpu_cid->cid);
++ if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++ return -1;
++
++ /*
++ * If we observe an active task using the mm on this rq, it means we
++ * are not the last task to be migrated from this cpu for this mm, so
++ * there is no need to move src_cid to the destination cpu.
++ */
++ guard(rcu)();
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ t->last_mm_cid = -1;
++ return -1;
++ }
++
++ return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid,
++ int src_cid)
++{
++ struct task_struct *src_task;
++ struct mm_struct *mm = t->mm;
++ int lazy_cid;
++
++ if (src_cid == -1)
++ return -1;
++
++ /*
++ * Attempt to clear the source cpu cid to move it to the destination
++ * cpu.
++ */
++ lazy_cid = mm_cid_set_lazy_put(src_cid);
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++ return -1;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, this task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ /*
++ * We observed an active task for this mm, there is therefore
++ * no point in moving this cid to the destination cpu.
++ */
++ t->last_mm_cid = -1;
++ return -1;
++ }
++ }
++
++ /*
++ * The src_cid is unused, so it can be unset.
++ */
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ return -1;
++ return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
++{
++ struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++ struct mm_struct *mm = t->mm;
++ int src_cid, dst_cid, src_cpu;
++ struct rq *src_rq;
++
++ lockdep_assert_rq_held(dst_rq);
++
++ if (!mm)
++ return;
++ src_cpu = t->migrate_from_cpu;
++ if (src_cpu == -1) {
++ t->last_mm_cid = -1;
++ return;
++ }
++ /*
++ * Move the src cid if the dst cid is unset. This keeps id
++ * allocation closest to 0 in cases where few threads migrate around
++ * many CPUs.
++ *
++ * If destination cid is already set, we may have to just clear
++ * the src cid to ensure compactness in frequent migrations
++ * scenarios.
++ *
++ * It is not useful to clear the src cid when the number of threads is
++ * greater or equal to the number of allowed CPUs, because user-space
++ * can expect that the number of allowed cids can reach the number of
++ * allowed CPUs.
++ */
++ dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++ dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++ if (!mm_cid_is_unset(dst_cid) &&
++ atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++ return;
++ src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++ src_rq = cpu_rq(src_cpu);
++ src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++ if (src_cid == -1)
++ return;
++ src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++ src_cid);
++ if (src_cid == -1)
++ return;
++ if (!mm_cid_is_unset(dst_cid)) {
++ __mm_cid_put(mm, src_cid);
++ return;
++ }
++ /* Move src_cid to dst cpu. */
++ mm_cid_snapshot_time(dst_rq, mm);
++ WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++ int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *t;
++ int cid, lazy_cid;
++
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid))
++ return;
++
++ /*
++ * Clear the cpu cid if it is set to keep cid allocation compact. If
++ * there happens to be other tasks left on the source cpu using this
++ * mm, the next task using this mm will reallocate its cid on context
++ * switch.
++ */
++ lazy_cid = mm_cid_set_lazy_put(cid);
++ if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++ return;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, that task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ t = rcu_dereference(rq->curr);
++ if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++ return;
++ }
++
++ /*
++ * The cid is unused, so it can be unset.
++ * Disable interrupts to keep the window of cid ownership without rq
++ * lock small.
++ */
++ scoped_guard (irqsave) {
++ if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ __mm_cid_put(mm, cid);
++ }
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct mm_cid *pcpu_cid;
++ struct task_struct *curr;
++ u64 rq_clock;
++
++ /*
++ * rq->clock load is racy on 32-bit but one spurious clear once in a
++ * while is irrelevant.
++ */
++ rq_clock = READ_ONCE(rq->clock);
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++ /*
++ * In order to take care of infrequently scheduled tasks, bump the time
++ * snapshot associated with this cid if an active task using the mm is
++ * observed on this rq.
++ */
++ scoped_guard (rcu) {
++ curr = rcu_dereference(rq->curr);
++ if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++ WRITE_ONCE(pcpu_cid->time, rq_clock);
++ return;
++ }
++ }
++
++ if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++ int weight)
++{
++ struct mm_cid *pcpu_cid;
++ int cid;
++
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid) || cid < weight)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++ unsigned long now = jiffies, old_scan, next_scan;
++ struct task_struct *t = current;
++ struct cpumask *cidmask;
++ struct mm_struct *mm;
++ int weight, cpu;
++
++ SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++ work->next = work; /* Prevent double-add */
++ if (t->flags & PF_EXITING)
++ return;
++ mm = t->mm;
++ if (!mm)
++ return;
++ old_scan = READ_ONCE(mm->mm_cid_next_scan);
++ next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ if (!old_scan) {
++ unsigned long res;
++
++ res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++ if (res != old_scan)
++ old_scan = res;
++ else
++ old_scan = next_scan;
++ }
++ if (time_before(now, old_scan))
++ return;
++ if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++ return;
++ cidmask = mm_cidmask(mm);
++ /* Clear cids that were not recently used. */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_old(mm, cpu);
++ weight = cpumask_weight(cidmask);
++ /*
++ * Clear cids that are greater or equal to the cidmask weight to
++ * recompact it.
++ */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
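Why clearing cids greater than or equal to the cidmask weight recompacts the id space: with three cids in use, any cid numbered 3 or higher can be dropped and later reallocated below 3. A quick user-space check with illustrative values:

    #include <stdio.h>

    int main(void)
    {
            unsigned long cidmask = 0x25;  /* cids 0, 2 and 5 in use */
            int weight = __builtin_popcountl(cidmask);

            for (int cid = 0; cid < 8; cid++)
                    if ((cidmask >> cid & 1) && cid >= weight) {
                            cidmask &= ~(1UL << cid);
                            printf("cid %d cleared (weight %d)\n", cid, weight);
                    }
            return 0;      /* the next allocation reuses cid 1, below the weight */
    }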
++void init_sched_mm_cid(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ int mm_users = 0;
++
++ if (mm) {
++ mm_users = atomic_read(&mm->mm_users);
++ if (mm_users == 1)
++ mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ }
++ t->cid_work.next = &t->cid_work; /* Protect against double add */
++ init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++ struct callback_head *work = &curr->cid_work;
++ unsigned long now = jiffies;
++
++ if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++ work->next != work)
++ return;
++ if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++ return;
++
++ /* No page allocation under rq lock */
++ task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ scoped_guard (rq_lock_irqsave, rq) {
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 1);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++ }
++ rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++ WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++ t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..12d76d9d290e
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,213 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++ return p->migration_disabled;
++#else
++ return false;
++#endif
++}
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p) rt_prio((p)->prio)
++#define rt_policy(policy) ((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p) (rt_policy((p)->policy))
++
++struct affinity_context {
++ const struct cpumask *new_mask;
++ struct cpumask *user_mask;
++ unsigned int flags;
++};
++
++/* CONFIG_SCHED_CLASS_EXT is not supported */
++#define scx_switched_all() false
++
++#define SCA_CHECK 0x01
++#define SCA_MIGRATE_DISABLE 0x02
++#define SCA_MIGRATE_ENABLE 0x04
++#define SCA_USER 0x08
++
++#ifdef CONFIG_SMP
++
++extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ /*
++ * See do_set_cpus_allowed() above for the rcu_head usage.
++ */
++ int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++ return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++#else /* !CONFIG_SMP: */
++
++static inline int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++ if (pi_task)
++ prio = min(prio, pi_task->prio);
++
++ return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++ return __rt_effective_prio(pi_task, prio);
++}
++
++#else /* !CONFIG_RT_MUTEXES: */
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ return prio;
++}
++
++#endif /* !CONFIG_RT_MUTEXES */
++
++extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
++extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++extern void __setscheduler_prio(struct task_struct *p, int prio);
++
++/*
++ * Context API
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock(&rq->lock);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ *plock = NULL;
++ return rq;
++ }
++ }
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++ if (NULL != lock)
++ raw_spin_unlock(lock);
++}
++
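__task_access_lock() is a lock-and-revalidate loop: lock the rq the task appears to be on, then confirm it is still there, retrying if it moved. The same pattern in portable user-space form (a sketch; the names are invented):

    #include <pthread.h>
    #include <stdatomic.h>

    struct queue { pthread_mutex_t lock; };
    struct item  { _Atomic(struct queue *) q; };

    /* Lock the queue the item appears to be on, then revalidate; retry on a
     * concurrent move. The caller unlocks the returned queue's lock. */
    static struct queue *item_lock(struct item *it)
    {
            for (;;) {
                    struct queue *q = atomic_load(&it->q);

                    pthread_mutex_lock(&q->lock);
                    if (q == atomic_load(&it->q))
                            return q;
                    pthread_mutex_unlock(&q->lock); /* item moved: retry */
            }
    }

(Compile with -pthread. The kernel version above additionally spins while the task is in the migrating state before retrying.)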
++void check_task_changed(struct task_struct *p, struct rq *rq);
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++ const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *next = p->sq_node.next;
++
++ if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++ struct list_head *head;
++ unsigned long idx = next - &rq->queue.heads[0];
++
++ idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++ sched_idx2prio(idx, rq) + 1);
++ head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++ }
++
++ return list_next_entry(p, sq_node);
++}
++
++extern void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef ALT_SCHED_DEBUG
++extern void alt_sched_debug(void);
++#else
++static inline void alt_sched_debug(void) {}
++#endif
++
++extern int sched_yield_type;
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++/* balance callback */
++#ifdef CONFIG_SMP
++extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
++extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
++#else
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...) \
++ do { \
++ if (m) \
++ seq_printf(m, x); \
++ else \
++ pr_cont(x); \
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++ struct seq_file *m)
++{
++ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++ get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..09c9e9f80bf4
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,971 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++ struct cgroup_subsys_state css;
++
++ struct rcu_head rcu;
++ struct list_head list;
++
++ struct task_group *parent;
++ struct list_head siblings;
++ struct list_head children;
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++ struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO (32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS (64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO (SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x) ({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++ unsigned long __w = (w); \
++ if (__w) \
++ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++ __w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) (w)
++# define scale_load_down(w) (w)
++#endif
++
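A worked example of the two scale macros, assuming mainline's SCHED_FIXEDPOINT_SHIFT of 10:

    #include <stdio.h>

    #define SHIFT 10                       /* SCHED_FIXEDPOINT_SHIFT in mainline */

    int main(void)
    {
            unsigned long w  = 1024;       /* the nice-0 weight */
            unsigned long up = w << SHIFT; /* scale_load():      1048576 */
            unsigned long dn = up >> SHIFT;/* scale_load_down(): 1024 */

            printf("up: %lu, down: %lu\n", up, dn);
            return 0;
    }

Note that scale_load_down() additionally clamps nonzero weights to at least 2 so tiny group weights do not round down to zero.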
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED 1
++#define TASK_ON_RQ_MIGRATING 2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++ return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC 0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK 0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU 0x08 /* Wakeup; maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC 0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED 0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU 0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS (SCHED_LEVELS - 1)
++
++struct sched_queue {
++ DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++ struct list_head heads[SCHED_LEVELS];
++};
++
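The bitmap is the index into heads[]: bit n set means the level-n list is non-empty, so picking the next task is a find-first-bit followed by a list pop. A minimal user-space model (illustrative, not the kernel structures):

    #include <stdio.h>

    #define LEVELS 64

    static unsigned long bitmap;           /* bit n set => level n non-empty */
    static int heads[LEVELS];              /* stand-in for the per-level FIFOs */

    static int first_level(void)
    {
            return bitmap ? __builtin_ctzl(bitmap) : -1;
    }

    int main(void)
    {
            bitmap |= 1UL << 7;            /* enqueue something at level 7 */
            heads[7] = 42;
            printf("next level: %d, task: %d\n", first_level(), heads[first_level()]);
            return 0;
    }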
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++ struct balance_callback *next;
++ void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++ struct task_struct *task;
++ int active;
++ cpumask_t *cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++ /* runqueue lock: */
++ raw_spinlock_t lock;
++
++ struct task_struct __rcu *curr;
++ struct task_struct *idle;
++ struct task_struct *stop;
++ struct mm_struct *prev_mm;
++
++ struct sched_queue queue ____cacheline_aligned;
++
++ int prio;
++#ifdef CONFIG_SCHED_PDS
++ int prio_idx;
++ u64 time_edge;
++#endif
++
++ /* switch count */
++ u64 nr_switches;
++
++ atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++ u64 last_seen_need_resched_ns;
++ int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++ int membarrier_state;
++#endif
++
++ set_idle_mask_func_t set_idle_mask_func;
++ clear_idle_mask_func_t clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++ int cpu; /* cpu of this runqueue */
++ bool online;
++
++ unsigned int ttwu_pending;
++ unsigned char nohz_idle_balance;
++ unsigned char idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ struct sched_avg avg_irq;
++#endif
++
++ balance_func_t balance_func;
++ struct balance_arg active_balance_arg ____cacheline_aligned;
++ struct cpu_stop_work active_balance_work;
++
++ struct balance_callback *balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ struct rcuwait hotplug_wait;
++#endif
++ unsigned int nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++ u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++ s32 load_history;
++ u64 load_block;
++ u64 load_stamp;
++
++ /* calc_load related fields */
++ unsigned long calc_load_update;
++ long calc_load_active;
++
++ /* Ensure that all clocks are in the same cache line */
++ u64 clock ____cacheline_aligned;
++ u64 clock_task;
++
++ unsigned int nr_running;
++ unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++ call_single_data_t hrtick_csd;
++#endif
++ struct hrtimer hrtick_timer;
++ ktime_t hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++ /* latency stats */
++ struct sched_info rq_sched_info;
++ unsigned long long rq_cpu_time;
++ /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++ /* sys_sched_yield() stats */
++ unsigned int yld_count;
++
++ /* schedule() stats */
++ unsigned int sched_switch;
++ unsigned int sched_count;
++ unsigned int sched_goidle;
++
++ /* try_to_wake_up() stats */
++ unsigned int ttwu_count;
++ unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++ /* Must be inspected within a rcu lock section */
++ struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++ call_single_data_t nohz_csd;
++#endif
++ atomic_t nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++ /* Scratch cpumask to be temporarily used under rq_lock */
++ cpumask_var_t scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
++#define this_rq() this_cpu_ptr(&runqueues)
++#define task_rq(p) cpu_rq(task_cpu(p))
++#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
++#define raw_rq() raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++ SMT_LEVEL_SPACE_HOLDER,
++#endif
++ COREGROUP_LEVEL_SPACE_HOLDER,
++ CORE_LEVEL_SPACE_HOLDER,
++ OTHER_LEVEL_SPACE_HOLDER,
++ NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++ int cpu;
++
++ while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++ mask++;
++
++ return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++ return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++ return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++ return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ; calls to
++	 * sched_info_xxxx() may be made without holding rq->lock.
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ; calls to
++	 * sched_info_xxxx() may be made without holding rq->lock.
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP 0x01
++
++#define ENQUEUE_WAKEUP 0x01
++
++
++/*
++ * Below are scheduler APIs used elsewhere in the kernel code.
++ * They use a dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with the mainline
++ * scheduler code.
++ */
++struct rq_flags {
++ unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ local_irq_disable();
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++ return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++ return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++ lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++ raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++ local_irq_disable();
++ raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++ raw_spin_rq_unlock(rq);
++ local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++ return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++ return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++ rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ WARN_ON(!rcu_read_lock_held());
++ return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ return rq->cpu;
++#else
++ return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT 0
++#define NOHZ_STATS_KICK_BIT 1
++
++#define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++ u64 total;
++ u64 tick_delta;
++ u64 irq_start_time;
++ struct u64_stats_sync sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted from it and never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++ unsigned int seq;
++ u64 total;
++
++ do {
++ seq = __u64_stats_fetch_begin(&irqtime->sync);
++ total = irqtime->total;
++ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++ return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant() (true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant() (false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++ unsigned long min,
++ unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV 0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++ int membarrier_state;
++
++ if (prev_mm == next_mm)
++ return;
++
++ membarrier_state = atomic_read(&next_mm->membarrier_state);
++ if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++ return;
++
++ WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++ struct task_struct *p)
++{
++ return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
++#define MM_CID_SCAN_DELAY 100 /* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++ if (cid < 0)
++ return;
++ cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (!mm_cid_is_lazy_put(cid) ||
++ !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid, res;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ for (;;) {
++ if (mm_cid_is_unset(cid))
++ return MM_CID_UNSET;
++ /*
++ * Attempt transition from valid or lazy-put to unset.
++ */
++ res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++ if (res == cid)
++ break;
++ cid = res;
++ }
++ return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = mm_cid_pcpu_unset(mm);
++ if (cid == MM_CID_UNSET)
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++ struct cpumask *cpumask;
++ int cid;
++
++ cpumask = mm_cidmask(mm);
++ /*
++ * Retry finding first zero bit if the mask is temporarily
++ * filled. This only happens during concurrent remote-clear
++ * which owns a cid without holding a rq lock.
++ */
++ for (;;) {
++ cid = cpumask_first_zero(cpumask);
++ if (cid < nr_cpu_ids)
++ break;
++ cpu_relax();
++ }
++ if (cpumask_test_and_set_cpu(cid, cpumask))
++ return -1;
++ return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, allowing to estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++ lockdep_assert_rq_held(rq);
++ WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++ int cid;
++
++ /*
++ * All allocations (even those using the cid_lock) are lock-free. If
++ * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++ * guarantee forward progress.
++ */
++ if (!READ_ONCE(use_cid_lock)) {
++ cid = __mm_cid_try_get(mm);
++ if (cid >= 0)
++ goto end;
++ raw_spin_lock(&cid_lock);
++ } else {
++ raw_spin_lock(&cid_lock);
++ cid = __mm_cid_try_get(mm);
++ if (cid >= 0)
++ goto unlock;
++ }
++
++ /*
++ * cid concurrently allocated. Retry while forcing following
++ * allocations to use the cid_lock to ensure forward progress.
++ */
++ WRITE_ONCE(use_cid_lock, 1);
++ /*
++ * Set use_cid_lock before allocation. Only care about program order
++ * because this is only required for forward progress.
++ */
++ barrier();
++ /*
++ * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all incoming allocations observe the use_cid_lock flag set.
++ */
++ do {
++ cid = __mm_cid_try_get(mm);
++ cpu_relax();
++ } while (cid < 0);
++ /*
++ * Allocate before clearing use_cid_lock. Only care about
++ * program order because this is for forward progress.
++ */
++ barrier();
++ WRITE_ONCE(use_cid_lock, 0);
++unlock:
++ raw_spin_unlock(&cid_lock);
++end:
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++}
++
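__mm_cid_get() keeps the fast path lock-free and only funnels allocators through cid_lock after someone observes a transient failure, which restores forward progress. A user-space analogue of the same scheme (try_alloc() and the bitmap pool are stand-ins, and the flow is simplified):

    #include <pthread.h>
    #include <stdatomic.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_int use_lock;
    static _Atomic unsigned long pool = ~0UL;  /* 64 free ids */

    static int try_alloc(void)                 /* lock-free attempt */
    {
            unsigned long m = atomic_load(&pool);
            int id;

            if (!m)
                    return -1;
            id = __builtin_ctzl(m);
            if (atomic_compare_exchange_strong(&pool, &m, m & ~(1UL << id)))
                    return id;
            return -1;                         /* lost a race */
    }

    static int alloc_id(void)
    {
            int id;

            if (!atomic_load(&use_lock) && (id = try_alloc()) >= 0)
                    return id;
            pthread_mutex_lock(&pool_lock);
            atomic_store(&use_lock, 1);        /* force later allocators here */
            while ((id = try_alloc()) < 0)
                    ;                          /* serialized, so an id frees up eventually */
            atomic_store(&use_lock, 0);
            pthread_mutex_unlock(&pool_lock);
            return id;
    }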
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ struct cpumask *cpumask;
++ int cid;
++
++ lockdep_assert_rq_held(rq);
++ cpumask = mm_cidmask(mm);
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (mm_cid_is_valid(cid)) {
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++ }
++ if (mm_cid_is_lazy_put(cid)) {
++ if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++ }
++ cid = __mm_cid_get(rq, mm);
++ __this_cpu_write(pcpu_cid->cid, cid);
++ return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++ struct task_struct *prev,
++ struct task_struct *next)
++{
++ /*
++ * Provide a memory barrier between rq->curr store and load of
++ * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++ *
++ * Should be adapted if context_switch() is modified.
++ */
++ if (!next->mm) { // to kernel
++ /*
++ * user -> kernel transition does not guarantee a barrier, but
++ * we can use the fact that it performs an atomic operation in
++ * mmgrab().
++ */
++ if (prev->mm) // from user
++ smp_mb__after_mmgrab();
++ /*
++ * kernel -> kernel transition does not change rq->curr->mm
++ * state. It stays NULL.
++ */
++ } else { // to user
++ /*
++ * kernel -> user transition does not provide a barrier
++ * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++ * Provide it here.
++ */
++ if (!prev->mm) // from kernel
++ smp_mb();
++ /*
++ * user -> user transition guarantees a memory barrier through
++ * switch_mm() when current->mm changes. If current->mm is
++ * unchanged, no barrier is needed.
++ */
++ }
++ if (prev->mm_cid_active) {
++ mm_cid_snapshot_time(rq, prev->mm);
++ mm_cid_put_lazy(prev);
++ prev->mm_cid = -1;
++ }
++ if (next->mm_cid_active)
++ next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++ struct balance_callback *head,
++ void (*func)(struct rq *rq))
++{
++ lockdep_assert_rq_held(rq);
++
++ /*
++ * Don't (re)queue an already queued item; nor queue anything when
++ * balance_push() is active, see the comment with
++ * balance_push_callback.
++ */
++ if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++ return;
++
++ head->func = func;
++ head->next = rq->balance_callback;
++ rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++ if (cpulist_parse(str, &sched_pcore_mask))
++ pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++ return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++ cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p + 2) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++
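The `src2p + 1` / `src2p + 2` arithmetic assumes the idle masks are consecutive entries of one array, so offsetting the base pointer selects a more-preferred mask tier first. A sketch of that layout trick (the tier ordering here is an assumption made for illustration):

    #include <stdio.h>

    typedef unsigned long mask_t;

    /* tiers[0] is the base mask; tiers[1] is the preferred tier. */
    static mask_t tiers[2] = { 0x0f, 0xf0 };

    static int pick(mask_t allowed, const mask_t *tier)
    {
            mask_t hit;

            if ((hit = allowed & tier[1])) /* try the preferred tier first */
                    return __builtin_ctzl(hit);
            if ((hit = allowed & tier[0])) /* fall back to the base mask */
                    return __builtin_ctzl(hit);
            return -1;
    }

    int main(void)
    {
            printf("cpu %d\n", pick(0xff, tiers)); /* -> 4, from tiers[1] */
            return 0;
    }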
++/* common balance functions */
++static int active_balance_cpu_stop(void *data)
++{
++ struct balance_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++ cpumask_t tmp;
++
++ local_irq_save(flags);
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++
++ arg->active = 0;
++
++ if (task_on_rq_queued(p) && task_rq(p) == rq &&
++ cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++ !is_migration_disabled(p)) {
++ int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++ rq = move_queued_task(rq, p, dcpu);
++ }
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++ struct balance_arg *arg;
++ unsigned long flags;
++ struct task_struct *p;
++ int res;
++
++ if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++ return 0;
++
++ arg = &rq->active_balance_arg;
++ res = (1 == rq->nr_running) && \
++ !is_migration_disabled((p = sched_rq_first_task(rq))) && \
++ cpumask_intersects(p->cpus_ptr, target_mask) && \
++ !arg->active;
++ if (res) {
++ arg->task = p;
++ arg->cpumask = target_mask;
++
++ arg->active = 1;
++ }
++
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ if (res) {
++ preempt_disable();
++ raw_spin_unlock(&src_rq->lock);
++
++ stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++ &rq->active_balance_work);
++
++ preempt_enable();
++ raw_spin_lock(&src_rq->lock);
++ }
++
++ return res;
++}
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, single_task_mask, cpu)
++ if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ cpumask_t smt_single_mask;
++
++ if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++ if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++ trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++ }
++
++ return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ (/* smt core group balance */
++ (static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++ ) ||
++ /* e core to idle smt core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++ return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++ return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* smt occupied p core to idle e core balance */
++ smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++ return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* idle e core to p core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++ return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...) printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...) do { } while(0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func) \
++{ \
++ idle_select_func = func; \
++ printk(KERN_INFO "sched: "#func); \
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func) \
++{ \
++ rq->balance_func = func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func, cpu); \
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func) \
++{ \
++ rq->set_idle_mask_func = set_func; \
++ rq->clear_idle_mask_func = clear_func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func, cpu); \
++}
++
++void sched_init_topology(void)
++{
++ int cpu;
++ struct rq *rq;
++ cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++ int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++ if (!cpumask_empty(&sched_smt_mask))
++ printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++ printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++ sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++ ecore_present = !cpumask_empty(&sched_ecore_mask);
++ }
++
++#ifdef CONFIG_SCHED_SMT
++ /* idle select function */
++ if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++ SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++ } else
++#endif
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++ }
++
++ for_each_online_cpu(cpu) {
++ rq = cpu_rq(cpu);
++		/* take the chance to reset the time slice for idle tasks */
++ rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++ !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++ } else {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++ }
++
++ continue;
++ }
++#endif
++ /* !SMT or only one cpu in sg */
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++ if (ecore_present)
++ SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++ continue;
++ }
++ if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++ SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++ }
++ }
++}
++#endif /* CONFIG_SMP */
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++ int limit;
++
++ switch (p->policy) {
++ case SCHED_NORMAL:
++ limit = -MAX_PRIORITY_ADJ;
++ break;
++ case SCHED_BATCH:
++ limit = 0;
++ break;
++ default:
++ return;
++ }
++
++ p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++ if (p->boost_prio < MAX_PRIORITY_ADJ)
++ p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); the return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ prio = task_sched_prio(p); \
++ idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++ return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++ return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
++ if (likely(delta > 0))
++ boost_task(p, delta >> 22);
++}
++
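The shift by 22 turns a sleep time in nanoseconds into roughly 4 ms units (2^22 ns ~= 4.19 ms, matching the default time slice noted in pds.h), so a wakeup earns one boost step per time slice slept. A quick arithmetic check:

    #include <stdio.h>

    int main(void)
    {
            long long delta_ns = 10 * 1000 * 1000LL;       /* slept 10 ms */

            printf("boost steps: %lld\n", delta_ns >> 22); /* -> 2 */
            return 0;
    }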
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++ boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index fae1f5c921eb..1e06434b5b9b 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -49,15 +49,21 @@
+
+ #include "idle.c"
+
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+
+ #include "cputime.c"
++#ifndef CONFIG_SCHED_ALT
+ #include "deadline.c"
++#endif
+
+ #ifdef CONFIG_SCHED_CLASS_EXT
+ # include "ext.c"
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..58d04aa73634 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+
+ #include "clock.c"
+
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -84,7 +88,9 @@
+
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index c6ba15388ea7..56590821f074 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -197,6 +197,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned long min, max, util = scx_cpuperf_target(sg_cpu->cpu);
+
+ if (!scx_switched_all())
+@@ -205,6 +206,10 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ util = max(util, boost);
+ sg_cpu->bw_min = min;
+ sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++ sg_cpu->bw_min = 0;
++ sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -364,8 +369,10 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+ */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -684,6 +691,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ }
+
+ ret = sched_setattr_nocheck(thread, &attr);
++
+ if (ret) {
+ kthread_stop(thread);
+ pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 0bed0fa1acd9..031affa09446 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ p->utime += cputime;
+ account_group_user_time(p, cputime);
+
+- index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++ index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+
+ /* Add user time to cpustat. */
+ task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ p->gtime += cputime;
+
+ /* Add guest time to cpustat. */
+- if (task_nice(p) > 0) {
++ if (task_running_nice(p)) {
+ task_group_account_field(p, CPUTIME_NICE, cputime);
+ cpustat[CPUTIME_GUEST_NICE] += cputime;
+ } else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+- return t->se.sum_exec_runtime;
++ return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ struct rq *rq;
+
+ rq = task_rq_lock(t, &rf);
+- ns = t->se.sum_exec_runtime;
++ ns = tsk_seruntime(t);
+ task_rq_unlock(rq, t, &rf);
+
+ return ns;
+@@ -623,7 +623,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ struct task_cputime cputime = {
+- .sum_exec_runtime = p->se.sum_exec_runtime,
++ .sum_exec_runtime = tsk_seruntime(p),
+ };
+
+ if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index f4035c7a0fa1..4df4ad88d6a9 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+ * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * This allows printing both to /sys/kernel/debug/sched/debug and
+ * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+
+ #ifdef CONFIG_SMP
+@@ -468,9 +471,11 @@ static const struct file_operations fair_server_period_fops = {
+ .llseek = seq_lseek,
+ .release = single_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+
+ static struct dentry *debugfs_sched;
+
++#ifndef CONFIG_SCHED_ALT
+ static void debugfs_fair_server_init(void)
+ {
+ struct dentry *d_fair;
+@@ -491,6 +496,7 @@ static void debugfs_fair_server_init(void)
+ debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_period_fops);
+ }
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ static __init int sched_init_debug(void)
+ {
+@@ -498,14 +504,17 @@ static __init int sched_init_debug(void)
+
+ debugfs_sched = debugfs_create_dir("sched", NULL);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+
+ debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+
+@@ -530,13 +539,17 @@ static __init int sched_init_debug(void)
+ #endif
+
+ debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_fair_server_init();
++#endif /* !CONFIG_SCHED_ALT */
+
+ return 0;
+ }
+ late_initcall(sched_init_debug);
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+
+ static cpumask_var_t sd_sysctl_cpus;
+@@ -1288,6 +1301,7 @@ void proc_sched_set_task(struct task_struct *p)
+ memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index d2f096bb274c..36071f4b7b7f 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -424,6 +424,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ do_idle();
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * idle-task scheduling class.
+ */
+@@ -538,3 +539,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ .switched_to = switched_to_idle,
+ .update_curr = update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM (32)
++#define SCHED_EDGE_DELTA (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++ if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++ "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++ return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++ return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ if (p->prio < MIN_NORMAL_PRIO) { \
++ prio = p->prio >> 2; \
++ idx = prio; \
++ } else { \
++ u64 sched_dl = max(p->deadline, rq->time_edge); \
++ prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge; \
++ idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl); \
++ }
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++ return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++ sched_prio :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++ return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++ sched_idx :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++ struct list_head head;
++ u64 old = rq->time_edge;
++ u64 now = rq->clock >> sched_timeslice_shift;
++ u64 prio, delta;
++ DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++ if (now == old)
++ return;
++
++ rq->time_edge = now;
++ delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++ INIT_LIST_HEAD(&head);
++
++ prio = MIN_SCHED_NORMAL_PRIO;
++ for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++ list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++ SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++ bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++ if (!list_empty(&head)) {
++ u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++ __list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++ set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++ }
++ bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++ (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++ if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++ return;
++
++ rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ if (p->prio >= MIN_NORMAL_PRIO)
++ p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++ (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++ u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++ if (unlikely(p->deadline > max_dl))
++ p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++ sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++ sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index a9c65d97b3ca..a66431e6527c 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * sched_entity:
+ *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+
+ return 0;
+ }
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * hardware:
+ *
+@@ -468,6 +470,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Load avg and utilization metrics need to be updated periodically and before
+ * consumption. This function updates the metrics for all subsystems except for
+@@ -487,3 +490,4 @@ bool update_other_load_avgs(struct rq *rq)
+ update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ update_irq_load_avg(rq, 0);
+ }
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index f4f6a0875c66..ee780f2b6c17 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,14 +1,16 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+ bool update_other_load_avgs(struct rq *rq);
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -45,6 +47,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ unsigned int enqueued;
+@@ -181,9 +184,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+
+ #else
+
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -201,6 +206,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ return 0;
+ }
++#endif
+
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c03b3d7b320e..08ee4a9cd6a5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3878,4 +3882,9 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx);
+
+ #include "ext.h"
+
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index eb0cdcd4d921..72224ecb5cbf 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ } else {
+ struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct sched_domain *sd;
+ int dcount = 0;
++#endif
+ #endif
+ cpu = (unsigned long)(v - 2);
+ rq = cpu_rq(cpu);
+@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ /* domain-specific stats */
+ rcu_read_lock();
+ for_each_domain(cpu, sd) {
+@@ -160,6 +163,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ sd->ttwu_move_balance);
+ }
+ rcu_read_unlock();
++#endif
+ #endif
+ }
+ return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 767e098a3bd1..4cbf4d3e611e 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
+
+ #endif /* CONFIG_SCHEDSTATS */
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ struct sched_entity se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 24f9f90b6574..9aa01e45c920 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -16,6 +16,14 @@
+ #include "sched.h"
+ #include "autogroup.h"
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_core.h"
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++ return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++#else /* !CONFIG_SCHED_ALT */
+ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+ int prio;
+@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+
+ return prio;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Calculate the expected normal priority: i.e. priority
+@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ */
+ static inline int normal_prio(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++#else /* !CONFIG_SCHED_ALT */
+ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
+
+ void set_user_nice(struct task_struct *p, long nice)
+ {
++#ifdef CONFIG_SCHED_ALT
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++ return;
++ /*
++ * We have to be careful, if called from sys_setpriority(),
++ * the task might be in the middle of scheduling on another CPU.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ rq = __task_access_lock(p, &lock);
++
++ p->static_prio = NICE_TO_PRIO(nice);
++ /*
++ * The RT priorities are set via sched_setscheduler(), but we still
++ * allow the 'normal' nice value to be set - but as expected
++ * it won't have any effect on scheduling while the task is
++ * not SCHED_NORMAL/SCHED_BATCH:
++ */
++ if (task_has_rt_policy(p))
++ goto out_unlock;
++
++ p->prio = effective_prio(p);
++
++ check_task_changed(p, rq);
++out_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++#else
+ bool queued, running;
+ struct rq *rq;
+ int old_prio;
+@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ * lowered its priority, then reschedule its CPU:
+ */
+ p->sched_class->prio_changed(rq, p, old_prio);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL(set_user_nice);
+
+@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
+ */
+ int task_prio(const struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++/*
++ * sched policy              return value    kernel prio    user prio/nice
++ *
++ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle  [0 ... 39]      100            0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]     [0 ... 99]
++ */
++ return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++ task_sched_prio_normal(p, task_rq(p));
++#else
+ return p->prio - MAX_RT_PRIO;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -300,10 +357,13 @@ static void __setscheduler_params(struct task_struct *p,
+
+ p->policy = policy;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_policy(policy)) {
+ __setparam_dl(p, attr);
+ } else if (fair_policy(policy)) {
++#endif /* !CONFIG_SCHED_ALT */
+ p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++#ifndef CONFIG_SCHED_ALT
+ if (attr->sched_runtime) {
+ p->se.custom_slice = 1;
+ p->se.slice = clamp_t(u64, attr->sched_runtime,
+@@ -322,6 +382,7 @@ static void __setscheduler_params(struct task_struct *p,
+ /* when switching back to non-rt policy, restore timerslack */
+ p->timer_slack_ns = p->default_timer_slack_ns;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * __sched_setscheduler() ensures attr->sched_priority == 0 when
+@@ -330,7 +391,9 @@ static void __setscheduler_params(struct task_struct *p,
+ */
+ p->rt_priority = attr->sched_priority;
+ p->normal_prio = normal_prio(p);
++#ifndef CONFIG_SCHED_ALT
+ set_load_weight(p, true);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -346,6 +409,8 @@ static bool check_same_owner(struct task_struct *p)
+ uid_eq(cred->euid, pcred->uid));
+ }
+
++#ifndef CONFIG_SCHED_ALT
++
+ #ifdef CONFIG_UCLAMP_TASK
+
+ static int uclamp_validate(struct task_struct *p,
+@@ -459,6 +524,7 @@ static inline int uclamp_validate(struct task_struct *p,
+ static void __setscheduler_uclamp(struct task_struct *p,
+ const struct sched_attr *attr) { }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Allow unprivileged RT tasks to decrease priority.
+@@ -469,11 +535,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ int policy, int reset_on_fork)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (fair_policy(policy)) {
+ if (attr->sched_nice < task_nice(p) &&
+ !is_nice_reduction(p, attr->sched_nice))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ if (rt_policy(policy)) {
+ unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
+@@ -488,6 +556,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ goto req_priv;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Can't set/change SCHED_DEADLINE policy at all for now
+ * (safest behavior); in the future we would like to allow
+@@ -505,6 +574,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ if (!is_nice_reduction(p, task_nice(p)))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Can't change other user's priorities: */
+ if (!check_same_owner(p))
+@@ -527,6 +597,158 @@ int __sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ bool user, bool pi)
+ {
++#ifdef CONFIG_SCHED_ALT
++ const struct sched_attr dl_squash_attr = {
++ .size = sizeof(struct sched_attr),
++ .sched_policy = SCHED_FIFO,
++ .sched_nice = 0,
++ .sched_priority = 99,
++ };
++ int oldpolicy = -1, policy = attr->sched_policy;
++ int retval, newprio;
++ struct balance_callback *head;
++ unsigned long flags;
++ struct rq *rq;
++ int reset_on_fork;
++ raw_spinlock_t *lock;
++
++ /* The pi code expects interrupts enabled */
++ BUG_ON(pi && in_interrupt());
++
++ /*
++ * Alt schedule FW supports SCHED_DEADLINE by squashing it to prio 0 SCHED_FIFO
++ */
++ if (unlikely(SCHED_DEADLINE == policy)) {
++ attr = &dl_squash_attr;
++ policy = attr->sched_policy;
++ }
++recheck:
++ /* Double check policy once rq lock held */
++ if (policy < 0) {
++ reset_on_fork = p->sched_reset_on_fork;
++ policy = oldpolicy = p->policy;
++ } else {
++ reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++ if (policy > SCHED_IDLE)
++ return -EINVAL;
++ }
++
++ if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++ return -EINVAL;
++
++ /*
++ * Valid priorities for SCHED_FIFO and SCHED_RR are
++ * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++ * SCHED_BATCH and SCHED_IDLE is 0.
++ */
++ if (attr->sched_priority < 0 ||
++ (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++ (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++ return -EINVAL;
++ if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++ (attr->sched_priority != 0))
++ return -EINVAL;
++
++ if (user) {
++ retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++ if (retval)
++ return retval;
++
++ retval = security_task_setscheduler(p);
++ if (retval)
++ return retval;
++ }
++
++ /*
++ * Make sure no PI-waiters arrive (or leave) while we are
++ * changing the priority of the task:
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++ /*
++ * To be able to change p->policy safely, task_access_lock()
++ * must be called.
++ * If task_access_lock() is used here:
++ * For a task p which is not running, reading rq->stop is
++ * racy but acceptable, as ->stop doesn't change much.
++ * An enhancement could be made to read rq->stop safely.
++ */
++ rq = __task_access_lock(p, &lock);
++
++ /*
++ * Changing the policy of the stop thread is a very bad idea
++ */
++ if (p == rq->stop) {
++ retval = -EINVAL;
++ goto unlock;
++ }
++
++ /*
++ * If not changing anything there's no need to proceed further:
++ */
++ if (unlikely(policy == p->policy)) {
++ if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++ goto change;
++ if (!rt_policy(policy) &&
++ NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++ goto change;
++
++ p->sched_reset_on_fork = reset_on_fork;
++ retval = 0;
++ goto unlock;
++ }
++change:
++
++ /* Re-check policy now with rq lock held */
++ if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++ policy = oldpolicy = -1;
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ goto recheck;
++ }
++
++ p->sched_reset_on_fork = reset_on_fork;
++
++ newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++ if (pi) {
++ /*
++ * Take priority boosted tasks into account. If the new
++ * effective priority is unchanged, we just store the new
++ * normal parameters and do not touch the scheduler class and
++ * the runqueue. This will be done when the task deboost
++ * itself.
++ */
++ newprio = rt_effective_prio(p, newprio);
++ }
++
++ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++ __setscheduler_params(p, attr);
++ __setscheduler_prio(p, newprio);
++ }
++
++ check_task_changed(p, rq);
++
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++ head = splice_balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ if (pi)
++ rt_mutex_adjust_pi(p);
++
++ /* Run balance callbacks after we've adjusted the PI chain: */
++ balance_callbacks(rq, head);
++ preempt_enable();
++
++ return 0;
++
++unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ return retval;
++#else /* !CONFIG_SCHED_ALT */
+ int oldpolicy = -1, policy = attr->sched_policy;
+ int retval, oldprio, newprio, queued, running;
+ const struct sched_class *prev_class, *next_class;
+@@ -764,6 +986,7 @@ int __sched_setscheduler(struct task_struct *p,
+ if (cpuset_locked)
+ cpuset_unlock();
+ return retval;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ static int _sched_setscheduler(struct task_struct *p, int policy,
+@@ -775,8 +998,10 @@ static int _sched_setscheduler(struct task_struct *p, int policy,
+ .sched_nice = PRIO_TO_NICE(p->static_prio),
+ };
+
++#ifndef CONFIG_SCHED_ALT
+ if (p->se.custom_slice)
+ attr.sched_runtime = p->se.slice;
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Fixup the legacy SCHED_RESET_ON_FORK hack. */
+ if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
+@@ -944,13 +1169,18 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
+
+ static void get_params(struct task_struct *p, struct sched_attr *attr)
+ {
+- if (task_has_dl_policy(p)) {
++#ifndef CONFIG_SCHED_ALT
++ if (task_has_dl_policy(p))
+ __getparam_dl(p, attr);
+- } else if (task_has_rt_policy(p)) {
++ else
++#endif
++ if (task_has_rt_policy(p)) {
+ attr->sched_priority = p->rt_priority;
+ } else {
+ attr->sched_nice = task_nice(p);
++#ifndef CONFIG_SCHED_ALT
+ attr->sched_runtime = p->se.slice;
++#endif
+ }
+ }
+
+@@ -1170,6 +1400,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
+ #ifdef CONFIG_SMP
+ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ {
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * If the task isn't a deadline task or admission control is
+ * disabled then we don't care about affinity changes.
+@@ -1186,6 +1417,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ guard(rcu)();
+ if (!cpumask_subset(task_rq(p)->rd->span, mask))
+ return -EBUSY;
++#endif
+
+ return 0;
+ }
+@@ -1210,9 +1442,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ ctx->new_mask = new_mask;
+ ctx->flags |= SCA_CHECK;
+
++#ifndef CONFIG_SCHED_ALT
+ retval = dl_task_check_affinity(p, new_mask);
+ if (retval)
+ goto out_free_new_mask;
++#endif
+
+ retval = __set_cpus_allowed_ptr(p, ctx);
+ if (retval)
+@@ -1392,13 +1626,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+
+ static void do_sched_yield(void)
+ {
+- struct rq_flags rf;
+ struct rq *rq;
++ struct rq_flags rf;
++
++#ifdef CONFIG_SCHED_ALT
++ struct task_struct *p;
++
++ if (!sched_yield_type)
++ return;
+
+ rq = this_rq_lock_irq(&rf);
+
++ schedstat_inc(rq->yld_count);
++
++ p = current;
++ if (rt_task(p)) {
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ } else if (rq->nr_running > 1) {
++ do_sched_yield_type_1(p, rq);
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ }
++#else /* !CONFIG_SCHED_ALT */
++ rq = this_rq_lock_irq(&rf);
++
+ schedstat_inc(rq->yld_count);
+ current->sched_class->yield_task(rq);
++#endif /* !CONFIG_SCHED_ALT */
+
+ preempt_disable();
+ rq_unlock_irq(rq, &rf);
+@@ -1467,6 +1722,9 @@ EXPORT_SYMBOL(yield);
+ */
+ int __sched yield_to(struct task_struct *p, bool preempt)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else /* !CONFIG_SCHED_ALT */
+ struct task_struct *curr = current;
+ struct rq *rq, *p_rq;
+ int yielded = 0;
+@@ -1512,6 +1770,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ schedule();
+
+ return yielded;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL_GPL(yield_to);
+
+@@ -1532,7 +1791,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
+ case SCHED_RR:
+ ret = MAX_RT_PRIO-1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1560,7 +1821,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ case SCHED_RR:
+ ret = 1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1572,7 +1835,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+
+ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned int time_slice = 0;
++#endif
+ int retval;
+
+ if (pid < 0)
+@@ -1587,6 +1852,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ if (retval)
+ return retval;
+
++#ifndef CONFIG_SCHED_ALT
+ scoped_guard (task_rq_lock, p) {
+ struct rq *rq = scope.rq;
+ if (p->sched_class->get_rr_interval)
+@@ -1595,6 +1861,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ }
+
+ jiffies_to_timespec64(time_slice, t);
++#else
++ }
++
++ alt_sched_debug();
++
++ *t = ns_to_timespec64(sysctl_sched_base_slice);
++#endif /* !CONFIG_SCHED_ALT */
+ return 0;
+ }
+
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 9748a4c8d668..1e2bdd70d69a 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+ * Scheduler topology setup/handling methods
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1459,8 +1460,10 @@ static void asym_cpu_capacity_scan(void)
+ */
+
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1695,6 +1698,7 @@ sd_init(struct sched_domain_topology_level *tl,
+
+ return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ /*
+ * Topology list, bottom-up.
+@@ -1731,6 +1735,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ sched_domain_topology_saved = NULL;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2797,3 +2802,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++ struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++ return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 79e6cb1d5c48..61bc0352e233 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+
+ /* Constants used for minimum and maximum */
+
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1907,6 +1911,17 @@ static struct ctl_table kern_table[] = {
+ .proc_handler = proc_dointvec,
+ },
+ #endif
++#ifdef CONFIG_SCHED_ALT
++ {
++ .procname = "yield_type",
++ .data = &sched_yield_type,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_TWO,
++ },
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ {
+ .procname = "spin_retry",
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 6bcee4704059..cf88205fd4a2 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ u64 stime, utime;
+
+ task_cputime(p, &utime, &stime);
+- store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++ store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -830,6 +830,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ }
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ if (tsk->dl.dl_overrun) {
+@@ -837,6 +838,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ }
+ }
++#endif
+
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -864,8 +866,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ u64 samples[CPUCLOCK_MAX];
+ unsigned long soft;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk))
+ check_dl_overrun(tsk);
++#endif
+
+ if (expiry_cache_is_inactive(pct))
+ return;
+@@ -879,7 +883,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ if (soft != RLIM_INFINITY) {
+ /* Task RT timeout is accounted in jiffies. RTTIME is usec */
+- unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++ unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+
+ /* At the hard limit, send SIGKILL. No further action. */
+@@ -1115,8 +1119,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ return true;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk) && tsk->dl.dl_overrun)
+ return true;
++#endif
+
+ return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index 1469dd8075fa..803527a0e48a 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1419,10 +1419,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ /* Make this a -deadline thread */
+ static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++ /* No deadline on BMQ/PDS, use RR */
++ .sched_policy = SCHED_RR,
++#else
+ .sched_policy = SCHED_DEADLINE,
+ .sched_runtime = 100000ULL,
+ .sched_deadline = 10000000ULL,
+ .sched_period = 10000000ULL
++#endif
+ };
+ struct wakeup_test_data *x = data;
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 9949ffad8df0..90eac9d802a8 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1247,6 +1247,7 @@ static bool kick_pool(struct worker_pool *pool)
+
+ p = worker->task;
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ /*
+ * Idle @worker is about to execute @work and waking up provides an
+@@ -1276,6 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
+ }
+ }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ wake_up_process(p);
+ return true;
+ }
+@@ -1404,7 +1407,11 @@ void wq_worker_running(struct task_struct *task)
+ * CPU intensive auto-detection cares about how long a work item hogged
+ * CPU without sleeping. Reset the starting timestamp on wakeup.
+ */
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+
+ WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1489,7 +1496,11 @@ void wq_worker_tick(struct task_struct *task)
+ * We probably want to make this prettier in the future.
+ */
+ if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++ worker->task->sched_time - worker->current_at <
++#else
+ worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ return;
+
+@@ -3157,7 +3168,11 @@ __acquires(&pool->lock)
+ worker->current_func = work->func;
+ worker->current_pwq = pwq;
+ if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ work_data = *work_data_bits(work);
+ worker->current_color = get_work_color(work_data);
+
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..7748d78c
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig 2024-11-13 14:45:36.566335895 -0500
++++ b/init/Kconfig 2024-11-13 14:47:02.670787774 -0500
+@@ -860,8 +860,9 @@ config UCLAMP_BUCKETS_COUNT
+ If in doubt, use the default value.
+
+ menuconfig SCHED_ALT
++ depends on X86_64
+ bool "Alternative CPU Schedulers"
+- default y
++ default n
+ help
+ This feature enables the alternative CPU schedulers.
+
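Aside for readers of the pds.h hunk above: the queue-index helpers rely on SCHED_NORMAL_PRIO_NUM being a power of 2, so rotating the run queue against rq->time_edge reduces to a masked add. The following minimal user-space C model sketches the sched_prio2idx()/sched_idx2prio() round trip; the constant values and helper names here are illustrative assumptions, not taken verbatim from the patch.
#include <stdio.h>
#include <stdint.h>
#define MIN_SCHED_NORMAL_PRIO 32  /* assumed: bits below this are RT levels */
#define SCHED_NORMAL_PRIO_NUM 32  /* must be a power of 2, as the patch notes */
#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))
/* model of sched_prio2idx(): stable priority -> rotated queue index */
static int prio2idx(int prio, uint64_t time_edge)
{
	return (prio < MIN_SCHED_NORMAL_PRIO) ? prio :
		MIN_SCHED_NORMAL_PRIO +
		(int)SCHED_NORMAL_PRIO_MOD(prio + time_edge);
}
/* model of sched_idx2prio(): rotated queue index -> stable priority */
static int idx2prio(int idx, uint64_t time_edge)
{
	return (idx < MIN_SCHED_NORMAL_PRIO) ? idx :
		MIN_SCHED_NORMAL_PRIO +
		(int)SCHED_NORMAL_PRIO_MOD(idx - time_edge);
}
int main(void)
{
	uint64_t time_edge = 1234567;	/* arbitrary rq->time_edge snapshot */
	int prio;
	/* the two mappings must round-trip for every normal priority level */
	for (prio = MIN_SCHED_NORMAL_PRIO;
	     prio < MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_NUM; prio++)
		if (idx2prio(prio2idx(prio, time_edge), time_edge) != prio)
			printf("round-trip failure at prio %d\n", prio);
	printf("prio<->idx round-trips for all %d normal levels\n",
	       SCHED_NORMAL_PRIO_NUM);
	return 0;
}
Because the mask wraps modulo SCHED_NORMAL_PRIO_NUM, sched_update_rq_clock() never has to renumber queued tasks when time advances; only the bitmap and rq->time_edge move, with expired levels spliced onto the new head.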
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-11-22 17:45 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-11-22 17:45 UTC (permalink / raw
To: gentoo-commits
commit: 84e347f66f81e2e80e29676135f13f38d88cd91e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 22 17:45:09 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 22 17:45:09 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=84e347f6
Linux patch 6.12.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 12 ++++++----
1001_linux-6.12.1.patch | 62 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/0000_README b/0000_README
index 2f20a332..4df3304e 100644
--- a/0000_README
+++ b/0000_README
@@ -43,17 +43,21 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-6.12.1.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.1
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
Patch: 1700_sparc-address-warray-bound-warnings.patch
-From: https://github.com/KSPP/linux/issues/109
-Desc: Address -Warray-bounds warnings
+From: https://github.com/KSPP/linux/issues/109
+Desc: Address -Warray-bounds warnings
Patch: 1730_parisc-Disable-prctl.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
-Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
+Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
diff --git a/1001_linux-6.12.1.patch b/1001_linux-6.12.1.patch
new file mode 100644
index 00000000..8eed7b47
--- /dev/null
+++ b/1001_linux-6.12.1.patch
@@ -0,0 +1,62 @@
+diff --git a/Makefile b/Makefile
+index 68a8faff25432a..70070e64d267c1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 0fac689c6350b2..13db0026dc1aad 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -371,7 +371,7 @@ static int uvc_parse_format(struct uvc_device *dev,
+ * Parse the frame descriptors. Only uncompressed, MJPEG and frame
+ * based formats have frame descriptors.
+ */
+- while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
++ while (ftype && buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
+ buffer[2] == ftype) {
+ unsigned int maxIntervalIndex;
+
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 79d541f1502b22..4f6e566d52faa6 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1491,7 +1491,18 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
+ vm_flags = vma->vm_flags;
+ goto file_expanded;
+ }
+- vma_iter_config(&vmi, addr, end);
++
++ /*
++ * In the unlikely event that more memory was needed, but
++ * not available for the vma merge, the vma iterator
++ * will have no memory reserved for the write we told
++ * the driver was happening. To keep up the ruse,
++ * ensure the allocation for the store succeeds.
++ */
++ if (vmg_nomem(&vmg)) {
++ mas_preallocate(&vmi.mas, vma,
++ GFP_KERNEL|__GFP_NOFAIL);
++ }
+ }
+
+ vm_flags = vma->vm_flags;
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index e2157e38721770..56c232cf5b0f4f 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -549,6 +549,7 @@ static void hvs_destruct(struct vsock_sock *vsk)
+ vmbus_hvsock_device_unregister(chan);
+
+ kfree(hvs);
++ vsk->trans = NULL;
+ }
+
+ static int hvs_dgram_bind(struct vsock_sock *vsk, struct sockaddr_vm *addr)
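A note on the hyperv_transport hunk: adding vsk->trans = NULL after kfree(hvs) is the usual hardening against later teardown paths touching a stale transport pointer. A minimal stand-alone C sketch of the idiom (struct and function names are illustrative, not from the kernel):
#include <stdlib.h>
struct sock_state {
	void *trans;		/* transport-private data, freed on destruct */
};
static void sock_destruct(struct sock_state *s)
{
	free(s->trans);
	/*
	 * Clear the dangling pointer: any later path that inspects
	 * s->trans now sees NULL (and free(NULL) is a no-op) instead
	 * of touching freed memory.
	 */
	s->trans = NULL;
}
int main(void)
{
	struct sock_state s = { .trans = malloc(64) };
	sock_destruct(&s);
	sock_destruct(&s);	/* safe: second call sees NULL */
	return 0;
}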
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-11-30 17:33 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-11-30 17:33 UTC (permalink / raw
To: gentoo-commits
commit: cf5a18f21dd174f93ebf5fcc37a3e41ce8e5fdb8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 30 17:29:45 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Nov 30 17:32:03 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cf5a18f2
Fix case for X86_USER_SHADOW_STACK
Bug: https://bugs.gentoo.org/945481
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 87b8fa95..74e75c40 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -254,7 +254,7 @@
+ select RANDOMIZE_BASE
+ select RANDOMIZE_MEMORY
+ select RELOCATABLE
-+ select X86_USER_SHADOW_STACK if AS_WRUSS=Y
++ select X86_USER_SHADOW_STACK if AS_WRUSS=y
+ select VMAP_STACK
+
+
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-02 17:15 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-02 17:15 UTC (permalink / raw
To: gentoo-commits
commit: 353d9f32e0ba5f71b437ff4c970539e584a58e0b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 2 17:13:57 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 2 17:13:57 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=353d9f32
GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2980_GCC15-gnu23-to-gnu11-fix.patch | 105 ++++++++++++++++++++++++++++++++++++
2 files changed, 109 insertions(+)
diff --git a/0000_README b/0000_README
index 4df3304e..bc514d88 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
+Patch: 2980_GCC15-gnu23-to-gnu11-fix.patch
+From: https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+Desc: GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere.
+
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_GCC15-gnu23-to-gnu11-fix.patch b/2980_GCC15-gnu23-to-gnu11-fix.patch
new file mode 100644
index 00000000..c74b6180
--- /dev/null
+++ b/2980_GCC15-gnu23-to-gnu11-fix.patch
@@ -0,0 +1,105 @@
+GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
+some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
+everywhere.
+
+https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+--- a/Makefile
++++ b/Makefile
+@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
+ # SHELL used by kbuild
+ CONFIG_SHELL := sh
+
++CSTD_FLAG := -std=gnu11
++
+ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+ HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+ HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
+ HOSTPKG_CONFIG = pkg-config
+
+ KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
+- -O2 -fomit-frame-pointer -std=gnu11
++ -O2 -fomit-frame-pointer $(CSTD_FLAG)
+ KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
+ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
+
+@@ -545,7 +547,7 @@ LINUXINCLUDE := \
+ KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
+
+ KBUILD_CFLAGS :=
+-KBUILD_CFLAGS += -std=gnu11
++KBUILD_CFLAGS += $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fshort-wchar
+ KBUILD_CFLAGS += -funsigned-char
+ KBUILD_CFLAGS += -fno-common
+@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
+ export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+ export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+-export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
++export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
+ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -fno-common \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+- -std=gnu11
++ $(CSTD_FLAG)
+ VDSO_CFLAGS += -O2
+ # Some useful compiler-dependent flags from top-level Makefile
+ VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ endif
+
+ # How to compile the 16-bit code. Note we always compile for -march=i386;
+ # that way we can complain to the user if the CPU is insufficient.
+-REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
++REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
+ -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -7,7 +7,7 @@
+ #
+
+ # non-x86 reuses KBUILD_CFLAGS, x86 does not
+-cflags-y := $(KBUILD_CFLAGS)
++cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
+ $(call cc-disable-warning, address-of-packed-member) \
+ $(call cc-disable-warning, gnu) \
+ -fno-asynchronous-unwind-tables \
+- $(CLANG_FLAGS)
++ $(CLANG_FLAGS) $(CSTD_FLAG)
+
+ # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
+ # disable the stackleak plugin
+@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
+ -ffreestanding \
+ -fno-stack-protector \
+ $(call cc-option,-fno-addrsig) \
+- -D__DISABLE_EXPORTS
++ -D__DISABLE_EXPORTS $(CSTD_FLAG)
+
+ #
+ # struct randomization only makes sense for Linux internal types, which the EFI
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -24,7 +24,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ # case of cross compiling, as it has the '--target=' flag, which is needed to
+ # avoid errors with '-march=i386', and future flags may depend on the target to
+ # be valid.
+-KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
++KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS) $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
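For context, one class of breakage behind this change: C23 (the gnu23 dialect GCC 15 now defaults to) turns bool, true and false into keywords, so pre-C23 compatibility typedefs stop compiling unless the dialect is pinned back to gnu11. A minimal illustration, not taken from the kernel:
/* Builds with:  gcc -std=gnu11 -c c23-demo.c
 * Fails with:   gcc -std=gnu23 -c c23-demo.c
 * because 'bool' is a keyword in C23 and can no longer be
 * redeclared by a typedef. */
typedef int bool;
static bool is_even(int x)
{
	return (x % 2) == 0;
}
int main(void)
{
	return is_even(2) ? 0 : 1;
}
Passing CSTD_FLAG through KBUILD_USERCFLAGS, the vdso32, realmode, EFI stub and compressed-boot Makefiles, as the hunks above do, keeps every such corner of the tree on the gnu11 dialect it was written for.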
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-05 14:06 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-05 14:06 UTC (permalink / raw
To: gentoo-commits
commit: 667267c9cd00cf85da39630df8c81d77fda4ec4d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 5 14:06:06 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 5 14:06:06 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=667267c9
Linux patch 6.12.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-6.12.2.patch | 47740 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 47744 insertions(+)
diff --git a/0000_README b/0000_README
index bc514d88..ac1104a1 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-6.12.1.patch
From: https://www.kernel.org
Desc: Linux 6.12.1
+Patch: 1001_linux-6.12.2.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.2
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1001_linux-6.12.2.patch b/1001_linux-6.12.2.patch
new file mode 100644
index 00000000..f10548d7
--- /dev/null
+++ b/1001_linux-6.12.2.patch
@@ -0,0 +1,47740 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index fdedf1ea944ba8..513296bb6f297f 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -311,10 +311,13 @@ Description: Do background GC aggressively when set. Set to 0 by default.
+ GC approach and turns SSR mode on.
+ gc urgent low(2): lowers the bar of checking I/O idling in
+ order to process outstanding discard commands and GC a
+- little bit aggressively. uses cost benefit GC approach.
++ little bit aggressively. always uses cost benefit GC approach,
++ and will override age-threshold GC approach if ATGC is enabled
++ at the same time.
+ gc urgent mid(3): does GC forcibly in a period of given
+ gc_urgent_sleep_time and executes a mid level of I/O idling check.
+- uses cost benefit GC approach.
++ always uses cost benefit GC approach, and will override
++ age-threshold GC approach if ATGC is enabled at the same time.
+
+ What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time
+ Date: August 2017
+diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
+index ca7b7cd806a16c..30080ff6f4062d 100644
+--- a/Documentation/RCU/stallwarn.rst
++++ b/Documentation/RCU/stallwarn.rst
+@@ -249,7 +249,7 @@ ticks this GP)" indicates that this CPU has not taken any scheduling-clock
+ interrupts during the current stalled grace period.
+
+ The "idle=" portion of the message prints the dyntick-idle state.
+-The hex number before the first "/" is the low-order 12 bits of the
++The hex number before the first "/" is the low-order 16 bits of the
+ dynticks counter, which will have an even-numbered value if the CPU
+ is in dyntick-idle mode and an odd-numbered value otherwise. The hex
+ number between the two "/"s is the value of the nesting, which will be
+diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
+index 678d70d6e1c3ac..714a5171bfc0b8 100644
+--- a/Documentation/admin-guide/blockdev/zram.rst
++++ b/Documentation/admin-guide/blockdev/zram.rst
+@@ -47,6 +47,8 @@ The list of possible return codes:
+ -ENOMEM zram was not able to allocate enough memory to fulfil your
+ needs.
+ -EINVAL invalid input has been provided.
++-EAGAIN re-try operation later (e.g. when attempting to run recompress
++ and writeback simultaneously).
+ ======== =============================================================
+
+ If you use 'echo', the returned value is set by the 'echo' utility,
+diff --git a/Documentation/admin-guide/media/building.rst b/Documentation/admin-guide/media/building.rst
+index a0647342991637..7a413ba07f93bb 100644
+--- a/Documentation/admin-guide/media/building.rst
++++ b/Documentation/admin-guide/media/building.rst
+@@ -15,7 +15,7 @@ Please notice, however, that, if:
+
+ you should use the main media development tree ``master`` branch:
+
+- https://git.linuxtv.org/media_tree.git/
++ https://git.linuxtv.org/media.git/
+
+ In this case, you may find some useful information at the
+ `LinuxTv wiki pages <https://linuxtv.org/wiki>`_:
+diff --git a/Documentation/admin-guide/media/saa7134.rst b/Documentation/admin-guide/media/saa7134.rst
+index 51eae7eb5ab7f4..18d7cbc897db4b 100644
+--- a/Documentation/admin-guide/media/saa7134.rst
++++ b/Documentation/admin-guide/media/saa7134.rst
+@@ -67,7 +67,7 @@ Changes / Fixes
+ Please mail to linux-media AT vger.kernel.org unified diffs against
+ the linux media git tree:
+
+- https://git.linuxtv.org/media_tree.git/
++ https://git.linuxtv.org/media.git/
+
+ This is done by committing a patch at a clone of the git tree and
+ submitting the patch using ``git send-email``. Don't forget to
+diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
+index 4fd492cb49704f..ad2d8ddad27fe4 100644
+--- a/Documentation/arch/x86/boot.rst
++++ b/Documentation/arch/x86/boot.rst
+@@ -896,10 +896,19 @@ Offset/size: 0x260/4
+
+ The kernel runtime start address is determined by the following algorithm::
+
+- if (relocatable_kernel)
+- runtime_start = align_up(load_address, kernel_alignment)
+- else
+- runtime_start = pref_address
++ if (relocatable_kernel) {
++ if (load_address < pref_address)
++ load_address = pref_address;
++ runtime_start = align_up(load_address, kernel_alignment);
++ } else {
++ runtime_start = pref_address;
++ }
++
++Hence the necessary memory window location and size can be estimated by
++a boot loader as::
++
++ memory_window_start = runtime_start;
++ memory_window_size = init_size;
+
+ ============ ===============
+ Field name: handover_offset
+diff --git a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
+index 68ea5f70b75f03..ee7edc6f60e2b4 100644
+--- a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
++++ b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
+@@ -39,11 +39,11 @@ properties:
+
+ reg:
+ minItems: 2
+- maxItems: 9
++ maxItems: 10
+
+ reg-names:
+ minItems: 2
+- maxItems: 9
++ maxItems: 10
+
+ interrupts:
+ maxItems: 1
+@@ -134,6 +134,36 @@ allOf:
+ - qcom,qdu1000-llcc
+ - qcom,sc8180x-llcc
+ - qcom,sc8280xp-llcc
++ then:
++ properties:
++ reg:
++ items:
++ - description: LLCC0 base register region
++ - description: LLCC1 base register region
++ - description: LLCC2 base register region
++ - description: LLCC3 base register region
++ - description: LLCC4 base register region
++ - description: LLCC5 base register region
++ - description: LLCC6 base register region
++ - description: LLCC7 base register region
++ - description: LLCC broadcast base register region
++ reg-names:
++ items:
++ - const: llcc0_base
++ - const: llcc1_base
++ - const: llcc2_base
++ - const: llcc3_base
++ - const: llcc4_base
++ - const: llcc5_base
++ - const: llcc6_base
++ - const: llcc7_base
++ - const: llcc_broadcast_base
++
++ - if:
++ properties:
++ compatible:
++ contains:
++ enum:
+ - qcom,x1e80100-llcc
+ then:
+ properties:
+@@ -148,6 +178,7 @@ allOf:
+ - description: LLCC6 base register region
+ - description: LLCC7 base register region
+ - description: LLCC broadcast base register region
++ - description: LLCC broadcast AND register region
+ reg-names:
+ items:
+ - const: llcc0_base
+@@ -159,6 +190,7 @@ allOf:
+ - const: llcc6_base
+ - const: llcc7_base
+ - const: llcc_broadcast_base
++ - const: llcc_broadcast_and_base
+
+ - if:
+ properties:
+diff --git a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+index 5e942bccf27787..2b2041818a0a44 100644
+--- a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
++++ b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+@@ -26,9 +26,21 @@ properties:
+ description:
+ Specifies the reference clock(s) from which the output frequency is
+ derived. This must either reference one clock if only the first clock
+- input is connected or two if both clock inputs are connected.
+- minItems: 1
+- maxItems: 2
++ input is connected or two if both clock inputs are connected. The last
++ clock is the AXI bus clock that needs to be enabled so we can access the
++ core registers.
++ minItems: 2
++ maxItems: 3
++
++ clock-names:
++ oneOf:
++ - items:
++ - const: clkin1
++ - const: s_axi_aclk
++ - items:
++ - const: clkin1
++ - const: clkin2
++ - const: s_axi_aclk
+
+ '#clock-cells':
+ const: 0
+@@ -40,6 +52,7 @@ required:
+ - compatible
+ - reg
+ - clocks
++ - clock-names
+ - '#clock-cells'
+
+ additionalProperties: false
+@@ -50,5 +63,6 @@ examples:
+ compatible = "adi,axi-clkgen-2.00.a";
+ #clock-cells = <0>;
+ reg = <0xff000000 0x1000>;
+- clocks = <&osc 1>;
++ clocks = <&osc 1>, <&clkc 15>;
++ clock-names = "clkin1", "s_axi_aclk";
+ };
+diff --git a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+index fc8b97f820775b..41fe0003474285 100644
+--- a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
++++ b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+@@ -30,7 +30,7 @@ properties:
+ maxItems: 1
+
+ spi-max-frequency:
+- maximum: 30000000
++ maximum: 66000000
+
+ reset-gpios:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml b/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
+index 898c1be2d6a435..f05aab2b1addca 100644
+--- a/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
++++ b/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
+@@ -149,7 +149,7 @@ allOf:
+ then:
+ properties:
+ clocks:
+- minItems: 4
++ minItems: 6
+
+ clock-names:
+ items:
+@@ -178,7 +178,7 @@ allOf:
+ then:
+ properties:
+ clocks:
+- minItems: 4
++ minItems: 6
+
+ clock-names:
+ items:
+@@ -207,6 +207,7 @@ allOf:
+ properties:
+ clocks:
+ minItems: 4
++ maxItems: 4
+
+ clock-names:
+ items:
+diff --git a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
+index 4dfb49b0e07f73..f82a3c7e6c29e4 100644
+--- a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
+@@ -91,14 +91,17 @@ allOf:
+ - if:
+ properties:
+ compatible:
+- # Match without "contains", to skip newer variants which are still
+- # compatible with samsung,exynos7-wakeup-eint
+- enum:
+- - samsung,s5pv210-wakeup-eint
+- - samsung,exynos4210-wakeup-eint
+- - samsung,exynos5433-wakeup-eint
+- - samsung,exynos7-wakeup-eint
+- - samsung,exynos7885-wakeup-eint
++ oneOf:
++ # Match without "contains", to skip newer variants which are still
++ # compatible with samsung,exynos7-wakeup-eint
++ - enum:
++ - samsung,exynos4210-wakeup-eint
++ - samsung,exynos7-wakeup-eint
++ - samsung,s5pv210-wakeup-eint
++ - contains:
++ enum:
++ - samsung,exynos5433-wakeup-eint
++ - samsung,exynos7885-wakeup-eint
+ then:
+ properties:
+ interrupts:
+diff --git a/Documentation/devicetree/bindings/serial/rs485.yaml b/Documentation/devicetree/bindings/serial/rs485.yaml
+index 9418fd66a8e95a..b93254ad2a287a 100644
+--- a/Documentation/devicetree/bindings/serial/rs485.yaml
++++ b/Documentation/devicetree/bindings/serial/rs485.yaml
+@@ -18,16 +18,15 @@ properties:
+ description: prop-encoded-array <a b>
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ items:
+- items:
+- - description: Delay between rts signal and beginning of data sent in
+- milliseconds. It corresponds to the delay before sending data.
+- default: 0
+- maximum: 100
+- - description: Delay between end of data sent and rts signal in milliseconds.
+- It corresponds to the delay after sending data and actual release
+- of the line.
+- default: 0
+- maximum: 100
++ - description: Delay between rts signal and beginning of data sent in
++ milliseconds. It corresponds to the delay before sending data.
++ default: 0
++ maximum: 100
++ - description: Delay between end of data sent and rts signal in milliseconds.
++ It corresponds to the delay after sending data and actual release
++ of the line.
++ default: 0
++ maximum: 100
+
+ rs485-rts-active-high:
+ description: drive RTS high when sending (this is the default).
+diff --git a/Documentation/devicetree/bindings/sound/mt6359.yaml b/Documentation/devicetree/bindings/sound/mt6359.yaml
+index 23d411fc4200e6..128698630c865f 100644
+--- a/Documentation/devicetree/bindings/sound/mt6359.yaml
++++ b/Documentation/devicetree/bindings/sound/mt6359.yaml
+@@ -23,8 +23,8 @@ properties:
+ Indicates how many data pins are used to transmit two channels of PDM
+ signal. 0 means two wires, 1 means one wire. Default value is 0.
+ enum:
+- - 0 # one wire
+- - 1 # two wires
++ - 0 # two wires
++ - 1 # one wire
+
+ mediatek,mic-type-0:
+ $ref: /schemas/types.yaml#/definitions/uint32
+@@ -53,9 +53,9 @@ additionalProperties: false
+
+ examples:
+ - |
+- mt6359codec: mt6359codec {
+- mediatek,dmic-mode = <0>;
+- mediatek,mic-type-0 = <2>;
++ mt6359codec: audio-codec {
++ mediatek,dmic-mode = <0>;
++ mediatek,mic-type-0 = <2>;
+ };
+
+ ...
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index b320a39de7fe40..fbfce9b4ae6b8e 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -1013,6 +1013,8 @@ patternProperties:
+ description: Shanghai Neardi Technology Co., Ltd.
+ "^nec,.*":
+ description: NEC LCD Technologies, Ltd.
++ "^neofidelity,.*":
++ description: Neofidelity Inc.
+ "^neonode,.*":
+ description: Neonode Inc.
+ "^netgear,.*":
+diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst
+index 317934c9e8fcac..d92c276f1575af 100644
+--- a/Documentation/filesystems/mount_api.rst
++++ b/Documentation/filesystems/mount_api.rst
+@@ -770,7 +770,8 @@ process the parameters it is given.
+
+ * ::
+
+- bool fs_validate_description(const struct fs_parameter_description *desc);
++ bool fs_validate_description(const char *name,
++ const struct fs_parameter_description *desc);
+
+ This performs some validation checks on a parameter description. It
+ returns true if the description is good and false if it is not. It will
+diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
+index bfda1a5fecadc6..ec6411d02ac8f5 100644
+--- a/Documentation/locking/seqlock.rst
++++ b/Documentation/locking/seqlock.rst
+@@ -153,7 +153,7 @@ Use seqcount_latch_t when the write side sections cannot be protected
+ from interruption by readers. This is typically the case when the read
+ side can be invoked from NMI handlers.
+
+-Check `raw_write_seqcount_latch()` for more information.
++Check `write_seqcount_latch()` for more information.
+
+
+ .. _seqlock_t:
+diff --git a/MAINTAINERS b/MAINTAINERS
+index b878ddc99f94e7..6bb4ec0c162a53 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -701,7 +701,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-aimslab*
+
+ AIO
+@@ -809,7 +809,7 @@ ALLWINNER A10 CSI DRIVER
+ M: Maxime Ripard <mripard@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun4i-a10-csi.yaml
+ F: drivers/media/platform/sunxi/sun4i-csi/
+
+@@ -818,7 +818,7 @@ M: Yong Deng <yong.deng@magewell.com>
+ M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-csi.yaml
+ F: drivers/media/platform/sunxi/sun6i-csi/
+
+@@ -826,7 +826,7 @@ ALLWINNER A31 ISP DRIVER
+ M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-isp.yaml
+ F: drivers/staging/media/sunxi/sun6i-isp/
+ F: drivers/staging/media/sunxi/sun6i-isp/uapi/sun6i-isp-config.h
+@@ -835,7 +835,7 @@ ALLWINNER A31 MIPI CSI-2 BRIDGE DRIVER
+ M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-mipi-csi2.yaml
+ F: drivers/media/platform/sunxi/sun6i-mipi-csi2/
+
+@@ -3348,7 +3348,7 @@ ASAHI KASEI AK7375 LENS VOICE COIL DRIVER
+ M: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/asahi-kasei,ak7375.yaml
+ F: drivers/media/i2c/ak7375.c
+
+@@ -3765,7 +3765,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/dvb-usb-v2/az6007.c
+
+ AZTECH FM RADIO RECEIVER DRIVER
+@@ -3773,7 +3773,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-aztech*
+
+ B43 WIRELESS DRIVER
+@@ -3857,7 +3857,7 @@ M: Fabien Dessenne <fabien.dessenne@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/st/sti/bdisp
+
+ BECKHOFF CX5020 ETHERCAT MASTER DRIVER
+@@ -4865,7 +4865,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/bttv*
+ F: drivers/media/pci/bt8xx/bttv*
+
+@@ -4979,13 +4979,13 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-cadet*
+
+ CAFE CMOS INTEGRATED CAMERA CONTROLLER DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/cafe_ccic*
+ F: drivers/media/platform/marvell/
+
+@@ -5169,7 +5169,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/ABI/testing/debugfs-cec-error-inj
+ F: Documentation/devicetree/bindings/media/cec/cec-common.yaml
+ F: Documentation/driver-api/media/cec-core.rst
+@@ -5186,7 +5186,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/cec/cec-gpio.yaml
+ F: drivers/media/cec/platform/cec-gpio/
+
+@@ -5393,7 +5393,7 @@ CHRONTEL CH7322 CEC DRIVER
+ M: Joe Tessler <jrt@google.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/chrontel,ch7322.yaml
+ F: drivers/media/cec/i2c/ch7322.c
+
+@@ -5582,7 +5582,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/cobalt/
+
+ COCCINELLE/Semantic Patches (SmPL)
+@@ -6026,7 +6026,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/cs3308.c
+
+ CS5535 Audio ALSA driver
+@@ -6057,7 +6057,7 @@ M: Andy Walls <awalls@md.metrocast.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/cx18/
+ F: include/uapi/linux/ivtv*
+
+@@ -6066,7 +6066,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/common/cx2341x*
+ F: include/media/drv-intf/cx2341x.h
+
+@@ -6084,7 +6084,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/cx88*
+ F: drivers/media/pci/cx88/
+
+@@ -6320,7 +6320,7 @@ DEINTERLACE DRIVERS FOR ALLWINNER H3
+ M: Jernej Skrabec <jernej.skrabec@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun8i-h3-deinterlace.yaml
+ F: drivers/media/platform/sunxi/sun8i-di/
+
+@@ -6447,7 +6447,7 @@ M: Hugues Fruchet <hugues.fruchet@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/st/sti/delta
+
+ DENALI NAND DRIVER
+@@ -6855,7 +6855,7 @@ DONGWOON DW9714 LENS VOICE COIL DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
+ F: drivers/media/i2c/dw9714.c
+
+@@ -6863,13 +6863,13 @@ DONGWOON DW9719 LENS VOICE COIL DRIVER
+ M: Daniel Scally <djrscally@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/dw9719.c
+
+ DONGWOON DW9768 LENS VOICE COIL DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9768.yaml
+ F: drivers/media/i2c/dw9768.c
+
+@@ -6877,7 +6877,7 @@ DONGWOON DW9807 LENS VOICE COIL DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9807-vcm.yaml
+ F: drivers/media/i2c/dw9807-vcm.c
+
+@@ -7860,7 +7860,7 @@ DSBR100 USB FM RADIO DRIVER
+ M: Alexey Klimov <klimov.linux@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/dsbr100.c
+
+ DT3155 MEDIA DRIVER
+@@ -7868,7 +7868,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/dt3155/
+
+ DVB_USB_AF9015 MEDIA DRIVER
+@@ -7913,7 +7913,7 @@ S: Maintained
+ W: https://linuxtv.org
+ W: http://github.com/mkrufky
+ Q: http://patchwork.linuxtv.org/project/linux-media/list/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/dvb-usb/cxusb*
+
+ DVB_USB_EC168 MEDIA DRIVER
+@@ -8282,7 +8282,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/em28xx*
+ F: drivers/media/usb/em28xx/
+
+@@ -8578,7 +8578,7 @@ EXTRON DA HD 4K PLUS CEC DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/cec/usb/extron-da-hd-4k-plus/
+
+ EXYNOS DP DRIVER
+@@ -9400,7 +9400,7 @@ GALAXYCORE GC2145 SENSOR DRIVER
+ M: Alain Volmat <alain.volmat@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/galaxycore,gc2145.yaml
+ F: drivers/media/i2c/gc2145.c
+
+@@ -9448,7 +9448,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-gemtek*
+
+ GENERIC ARCHITECTURE TOPOLOGY
+@@ -9830,56 +9830,56 @@ GS1662 VIDEO SERIALIZER
+ M: Charles-Antoine Couret <charles-antoine.couret@nexvision.fr>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/spi/gs1662.c
+
+ GSPCA FINEPIX SUBDRIVER
+ M: Frank Zago <frank@zago.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/finepix.c
+
+ GSPCA GL860 SUBDRIVER
+ M: Olivier Lorin <o.lorin@laposte.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/gl860/
+
+ GSPCA M5602 SUBDRIVER
+ M: Erik Andren <erik.andren@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/m5602/
+
+ GSPCA PAC207 SONIXB SUBDRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/pac207.c
+
+ GSPCA SN9C20X SUBDRIVER
+ M: Brian Johnson <brijohn@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/sn9c20x.c
+
+ GSPCA T613 SUBDRIVER
+ M: Leandro Costantino <lcostantino@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/t613.c
+
+ GSPCA USB WEBCAM DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/
+
+ GTP (GPRS Tunneling Protocol)
+@@ -9996,7 +9996,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/hdpvr/
+
+ HEWLETT PACKARD ENTERPRISE ILO CHIF DRIVER
+@@ -10503,7 +10503,7 @@ M: Jean-Christophe Trotin <jean-christophe.trotin@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/st/sti/hva
+
+ HWPOISON MEMORY FAILURE HANDLING
+@@ -10531,7 +10531,7 @@ HYNIX HI556 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/hi556.c
+
+ HYNIX HI846 SENSOR DRIVER
+@@ -11502,7 +11502,7 @@ M: Dan Scally <djrscally@gmail.com>
+ R: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/userspace-api/media/v4l/pixfmt-srggb10-ipu3.rst
+ F: drivers/media/pci/intel/ipu3/
+
+@@ -11523,7 +11523,7 @@ M: Bingbu Cao <bingbu.cao@intel.com>
+ R: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/ipu6-isys.rst
+ F: drivers/media/pci/intel/ipu6/
+
+@@ -12036,7 +12036,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-isa*
+
+ ISAPNP
+@@ -12138,7 +12138,7 @@ M: Andy Walls <awalls@md.metrocast.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/ivtv*
+ F: drivers/media/pci/ivtv/
+ F: include/uapi/linux/ivtv*
+@@ -12286,7 +12286,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-keene*
+
+ KERNEL AUTOMOUNTER
+@@ -13573,7 +13573,7 @@ MA901 MASTERKIT USB FM RADIO DRIVER
+ M: Alexey Klimov <klimov.linux@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-ma901.c
+
+ MAC80211
+@@ -13868,7 +13868,7 @@ MAX2175 SDR TUNER DRIVER
+ M: Ramesh Shanmugasundaram <rashanmu@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/max2175.txt
+ F: Documentation/userspace-api/media/drivers/max2175.rst
+ F: drivers/media/i2c/max2175*
+@@ -14048,7 +14048,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-maxiradio*
+
+ MAXLINEAR ETHERNET PHY DRIVER
+@@ -14131,7 +14131,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://www.linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/mc/
+ F: include/media/media-*.h
+ F: include/uapi/linux/media.h
+@@ -14140,7 +14140,7 @@ MEDIA DRIVER FOR FREESCALE IMX PXP
+ M: Philipp Zabel <p.zabel@pengutronix.de>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/nxp/imx-pxp.[ch]
+
+ MEDIA DRIVERS FOR ASCOT2E
+@@ -14149,7 +14149,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/ascot2e*
+
+ MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS
+@@ -14157,7 +14157,7 @@ M: Jasmin Jessich <jasmin@anw.at>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/cxd2099*
+
+ MEDIA DRIVERS FOR CXD2841ER
+@@ -14166,7 +14166,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/cxd2841er*
+
+ MEDIA DRIVERS FOR CXD2880
+@@ -14174,7 +14174,7 @@ M: Yasunari Takiguchi <Yasunari.Takiguchi@sony.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: http://linuxtv.org/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/cxd2880/*
+ F: drivers/media/spi/cxd2880*
+
+@@ -14182,7 +14182,7 @@ MEDIA DRIVERS FOR DIGITAL DEVICES PCIE DEVICES
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/ddbridge/*
+
+ MEDIA DRIVERS FOR FREESCALE IMX
+@@ -14190,7 +14190,7 @@ M: Steve Longerbeam <slongerbeam@gmail.com>
+ M: Philipp Zabel <p.zabel@pengutronix.de>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/imx.rst
+ F: Documentation/devicetree/bindings/media/imx.txt
+ F: drivers/staging/media/imx/
+@@ -14204,7 +14204,7 @@ M: Martin Kepplinger <martin.kepplinger@puri.sm>
+ R: Purism Kernel Team <kernel@puri.sm>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/imx7.rst
+ F: Documentation/devicetree/bindings/media/nxp,imx-mipi-csi2.yaml
+ F: Documentation/devicetree/bindings/media/nxp,imx7-csi.yaml
+@@ -14219,7 +14219,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/helene*
+
+ MEDIA DRIVERS FOR HORUS3A
+@@ -14228,7 +14228,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/horus3a*
+
+ MEDIA DRIVERS FOR LNBH25
+@@ -14237,14 +14237,14 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/lnbh25*
+
+ MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/mxl5xx*
+
+ MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
+@@ -14253,7 +14253,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/netup_unidvb/*
+
+ MEDIA DRIVERS FOR NVIDIA TEGRA - VDE
+@@ -14261,7 +14261,7 @@ M: Dmitry Osipenko <digetx@gmail.com>
+ L: linux-media@vger.kernel.org
+ L: linux-tegra@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/nvidia,tegra-vde.yaml
+ F: drivers/media/platform/nvidia/tegra-vde/
+
+@@ -14270,7 +14270,7 @@ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,ceu.yaml
+ F: drivers/media/platform/renesas/renesas-ceu.c
+ F: include/media/drv-intf/renesas-ceu.h
+@@ -14280,7 +14280,7 @@ M: Fabrizio Castro <fabrizio.castro.jz@renesas.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,drif.yaml
+ F: drivers/media/platform/renesas/rcar_drif.c
+
+@@ -14289,7 +14289,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,fcp.yaml
+ F: drivers/media/platform/renesas/rcar-fcp.c
+ F: include/media/rcar-fcp.h
+@@ -14299,7 +14299,7 @@ M: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,fdp1.yaml
+ F: drivers/media/platform/renesas/rcar_fdp1.c
+
+@@ -14308,7 +14308,7 @@ M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,csi2.yaml
+ F: Documentation/devicetree/bindings/media/renesas,isp.yaml
+ F: Documentation/devicetree/bindings/media/renesas,vin.yaml
+@@ -14322,7 +14322,7 @@ M: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,vsp1.yaml
+ F: drivers/media/platform/renesas/vsp1/
+
+@@ -14330,14 +14330,14 @@ MEDIA DRIVERS FOR ST STV0910 DEMODULATOR ICs
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/stv0910*
+
+ MEDIA DRIVERS FOR ST STV6111 TUNER ICs
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/stv6111*
+
+ MEDIA DRIVERS FOR STM32 - DCMI / DCMIPP
+@@ -14345,7 +14345,7 @@ M: Hugues Fruchet <hugues.fruchet@foss.st.com>
+ M: Alain Volmat <alain.volmat@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/st,stm32-dcmi.yaml
+ F: Documentation/devicetree/bindings/media/st,stm32-dcmipp.yaml
+ F: drivers/media/platform/st/stm32/stm32-dcmi.c
+@@ -14357,7 +14357,7 @@ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+ Q: http://patchwork.kernel.org/project/linux-media/list/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/
+ F: Documentation/devicetree/bindings/media/
+ F: Documentation/driver-api/media/
+@@ -14933,7 +14933,7 @@ L: linux-media@vger.kernel.org
+ L: linux-amlogic@lists.infradead.org
+ S: Supported
+ W: http://linux-meson.com/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/cec/amlogic,meson-gx-ao-cec.yaml
+ F: drivers/media/cec/platform/meson/ao-cec-g12a.c
+ F: drivers/media/cec/platform/meson/ao-cec.c
+@@ -14943,7 +14943,7 @@ M: Neil Armstrong <neil.armstrong@linaro.org>
+ L: linux-media@vger.kernel.org
+ L: linux-amlogic@lists.infradead.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/amlogic,axg-ge2d.yaml
+ F: drivers/media/platform/amlogic/meson-ge2d/
+
+@@ -14959,7 +14959,7 @@ M: Neil Armstrong <neil.armstrong@linaro.org>
+ L: linux-media@vger.kernel.org
+ L: linux-amlogic@lists.infradead.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/amlogic,gx-vdec.yaml
+ F: drivers/staging/media/meson/vdec/
+
+@@ -15557,7 +15557,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-miropcm20*
+
+ MITSUMI MM8013 FG DRIVER
+@@ -15709,7 +15709,7 @@ MR800 AVERMEDIA USB FM RADIO DRIVER
+ M: Alexey Klimov <klimov.linux@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-mr800.c
+
+ MRF24J40 IEEE 802.15.4 RADIO DRIVER
+@@ -15776,7 +15776,7 @@ MT9M114 ONSEMI SENSOR DRIVER
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml
+ F: drivers/media/i2c/mt9m114.c
+
+@@ -15784,7 +15784,7 @@ MT9P031 APTINA CAMERA SENSOR
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/aptina,mt9p031.yaml
+ F: drivers/media/i2c/mt9p031.c
+ F: include/media/i2c/mt9p031.h
+@@ -15793,7 +15793,7 @@ MT9T112 APTINA CAMERA SENSOR
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/mt9t112.c
+ F: include/media/i2c/mt9t112.h
+
+@@ -15801,7 +15801,7 @@ MT9V032 APTINA CAMERA SENSOR
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/mt9v032.txt
+ F: drivers/media/i2c/mt9v032.c
+ F: include/media/i2c/mt9v032.h
+@@ -15810,7 +15810,7 @@ MT9V111 APTINA CAMERA SENSOR
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/aptina,mt9v111.yaml
+ F: drivers/media/i2c/mt9v111.c
+
+@@ -17005,13 +17005,13 @@ OMNIVISION OV01A10 SENSOR DRIVER
+ M: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov01a10.c
+
+ OMNIVISION OV02A10 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml
+ F: drivers/media/i2c/ov02a10.c
+
+@@ -17019,28 +17019,28 @@ OMNIVISION OV08D10 SENSOR DRIVER
+ M: Jimmy Su <jimmy.su@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov08d10.c
+
+ OMNIVISION OV08X40 SENSOR DRIVER
+ M: Jason Chen <jason.z.chen@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov08x40.c
+
+ OMNIVISION OV13858 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov13858.c
+
+ OMNIVISION OV13B10 SENSOR DRIVER
+ M: Arec Kao <arec.kao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov13b10.c
+
+ OMNIVISION OV2680 SENSOR DRIVER
+@@ -17048,7 +17048,7 @@ M: Rui Miguel Silva <rmfrfs@gmail.com>
+ M: Hans de Goede <hansg@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov2680.yaml
+ F: drivers/media/i2c/ov2680.c
+
+@@ -17056,7 +17056,7 @@ OMNIVISION OV2685 SENSOR DRIVER
+ M: Shunqian Zheng <zhengsq@rock-chips.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov2685.yaml
+ F: drivers/media/i2c/ov2685.c
+
+@@ -17066,14 +17066,14 @@ R: Sakari Ailus <sakari.ailus@linux.intel.com>
+ R: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov2740.c
+
+ OMNIVISION OV4689 SENSOR DRIVER
+ M: Mikhail Rudenko <mike.rudenko@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml
+ F: drivers/media/i2c/ov4689.c
+
+@@ -17081,7 +17081,7 @@ OMNIVISION OV5640 SENSOR DRIVER
+ M: Steve Longerbeam <slongerbeam@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov5640.c
+
+ OMNIVISION OV5647 SENSOR DRIVER
+@@ -17089,7 +17089,7 @@ M: Dave Stevenson <dave.stevenson@raspberrypi.com>
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5647.yaml
+ F: drivers/media/i2c/ov5647.c
+
+@@ -17097,7 +17097,7 @@ OMNIVISION OV5670 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5670.yaml
+ F: drivers/media/i2c/ov5670.c
+
+@@ -17105,7 +17105,7 @@ OMNIVISION OV5675 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5675.yaml
+ F: drivers/media/i2c/ov5675.c
+
+@@ -17113,7 +17113,7 @@ OMNIVISION OV5693 SENSOR DRIVER
+ M: Daniel Scally <djrscally@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml
+ F: drivers/media/i2c/ov5693.c
+
+@@ -17121,21 +17121,21 @@ OMNIVISION OV5695 SENSOR DRIVER
+ M: Shunqian Zheng <zhengsq@rock-chips.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov5695.c
+
+ OMNIVISION OV64A40 SENSOR DRIVER
+ M: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov64a40.yaml
+ F: drivers/media/i2c/ov64a40.c
+
+ OMNIVISION OV7670 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ov7670.txt
+ F: drivers/media/i2c/ov7670.c
+
+@@ -17143,7 +17143,7 @@ OMNIVISION OV772x SENSOR DRIVER
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov772x.yaml
+ F: drivers/media/i2c/ov772x.c
+ F: include/media/i2c/ov772x.h
+@@ -17151,7 +17151,7 @@ F: include/media/i2c/ov772x.h
+ OMNIVISION OV7740 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ov7740.txt
+ F: drivers/media/i2c/ov7740.c
+
+@@ -17159,7 +17159,7 @@ OMNIVISION OV8856 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov8856.yaml
+ F: drivers/media/i2c/ov8856.c
+
+@@ -17168,7 +17168,7 @@ M: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
+ M: Nicholas Roth <nicholas@rothemail.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov8858.yaml
+ F: drivers/media/i2c/ov8858.c
+
+@@ -17176,7 +17176,7 @@ OMNIVISION OV9282 SENSOR DRIVER
+ M: Dave Stevenson <dave.stevenson@raspberrypi.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov9282.yaml
+ F: drivers/media/i2c/ov9282.c
+
+@@ -17192,7 +17192,7 @@ R: Akinobu Mita <akinobu.mita@gmail.com>
+ R: Sylwester Nawrocki <s.nawrocki@samsung.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ov9650.txt
+ F: drivers/media/i2c/ov9650.c
+
+@@ -17201,7 +17201,7 @@ M: Tianshu Qiu <tian.shu.qiu@intel.com>
+ R: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov9734.c
+
+ ONBOARD USB HUB DRIVER
+@@ -18646,7 +18646,7 @@ PULSE8-CEC DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/cec/usb/pulse8/
+
+ PURELIFI PLFXLC DRIVER
+@@ -18661,7 +18661,7 @@ L: pvrusb2@isely.net (subscribers-only)
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://www.isely.net/pvrusb2/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/pvrusb2*
+ F: drivers/media/usb/pvrusb2/
+
+@@ -18669,7 +18669,7 @@ PWC WEBCAM DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/pwc/*
+ F: include/trace/events/pwc.h
+
+@@ -19173,7 +19173,7 @@ R: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
+ L: linux-media@vger.kernel.org
+ L: linux-arm-msm@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/*venus*
+ F: drivers/media/platform/qcom/venus/
+
+@@ -19218,14 +19218,14 @@ RADIOSHARK RADIO DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-shark.c
+
+ RADIOSHARK2 RADIO DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-shark2.c
+ F: drivers/media/radio/radio-tea5777.c
+
+@@ -19249,7 +19249,7 @@ RAINSHADOW-CEC DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/cec/usb/rainshadow/
+
+ RALINK MIPS ARCHITECTURE
+@@ -19333,7 +19333,7 @@ M: Sean Young <sean@mess.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/rc-core.rst
+ F: Documentation/userspace-api/media/rc/
+ F: drivers/media/rc/
+@@ -20077,7 +20077,7 @@ ROTATION DRIVER FOR ALLWINNER A83T
+ M: Jernej Skrabec <jernej.skrabec@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml
+ F: drivers/media/platform/sunxi/sun8i-rotate/
+
+@@ -20331,7 +20331,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/saa6588*
+
+ SAA7134 VIDEO4LINUX DRIVER
+@@ -20339,7 +20339,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/saa7134*
+ F: drivers/media/pci/saa7134/
+
+@@ -20347,7 +20347,7 @@ SAA7146 VIDEO4LINUX-2 DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/common/saa7146/
+ F: drivers/media/pci/saa7146/
+ F: include/media/drv-intf/saa7146*
+@@ -20965,7 +20965,7 @@ SHARP RJ54N1CB0C SENSOR DRIVER
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/rj54n1cb0c.c
+ F: include/media/i2c/rj54n1cb0c.h
+
+@@ -21015,7 +21015,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/silabs,si470x.yaml
+ F: drivers/media/radio/si470x/radio-si470x-i2c.c
+
+@@ -21024,7 +21024,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si470x/radio-si470x-common.c
+ F: drivers/media/radio/si470x/radio-si470x-usb.c
+ F: drivers/media/radio/si470x/radio-si470x.h
+@@ -21034,7 +21034,7 @@ M: Eduardo Valentin <edubezval@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si4713/si4713.?
+
+ SI4713 FM RADIO TRANSMITTER PLATFORM DRIVER
+@@ -21042,7 +21042,7 @@ M: Eduardo Valentin <edubezval@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si4713/radio-platform-si4713.c
+
+ SI4713 FM RADIO TRANSMITTER USB DRIVER
+@@ -21050,7 +21050,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si4713/radio-usb-si4713.c
+
+ SIANO DVB DRIVER
+@@ -21058,7 +21058,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/common/siano/
+ F: drivers/media/mmc/siano/
+ F: drivers/media/usb/siano/
+@@ -21434,14 +21434,14 @@ SONY IMX208 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/imx208.c
+
+ SONY IMX214 SENSOR DRIVER
+ M: Ricardo Ribalda <ribalda@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml
+ F: drivers/media/i2c/imx214.c
+
+@@ -21449,7 +21449,7 @@ SONY IMX219 SENSOR DRIVER
+ M: Dave Stevenson <dave.stevenson@raspberrypi.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/imx219.yaml
+ F: drivers/media/i2c/imx219.c
+
+@@ -21457,7 +21457,7 @@ SONY IMX258 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx258.yaml
+ F: drivers/media/i2c/imx258.c
+
+@@ -21465,7 +21465,7 @@ SONY IMX274 SENSOR DRIVER
+ M: Leon Luo <leonl@leopardimaging.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml
+ F: drivers/media/i2c/imx274.c
+
+@@ -21474,7 +21474,7 @@ M: Kieran Bingham <kieran.bingham@ideasonboard.com>
+ M: Umang Jain <umang.jain@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx283.yaml
+ F: drivers/media/i2c/imx283.c
+
+@@ -21482,7 +21482,7 @@ SONY IMX290 SENSOR DRIVER
+ M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx290.yaml
+ F: drivers/media/i2c/imx290.c
+
+@@ -21491,7 +21491,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx296.yaml
+ F: drivers/media/i2c/imx296.c
+
+@@ -21499,20 +21499,20 @@ SONY IMX319 SENSOR DRIVER
+ M: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/imx319.c
+
+ SONY IMX334 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx334.yaml
+ F: drivers/media/i2c/imx334.c
+
+ SONY IMX335 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx335.yaml
+ F: drivers/media/i2c/imx335.c
+
+@@ -21520,13 +21520,13 @@ SONY IMX355 SENSOR DRIVER
+ M: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/imx355.c
+
+ SONY IMX412 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx412.yaml
+ F: drivers/media/i2c/imx412.c
+
+@@ -21534,7 +21534,7 @@ SONY IMX415 SENSOR DRIVER
+ M: Michael Riesch <michael.riesch@wolfvision.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml
+ F: drivers/media/i2c/imx415.c
+
+@@ -21823,7 +21823,7 @@ M: Benjamin Mugnier <benjamin.mugnier@foss.st.com>
+ M: Sylvain Petinot <sylvain.petinot@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
+ F: drivers/media/i2c/st-mipid02.c
+
+@@ -21859,7 +21859,7 @@ M: Benjamin Mugnier <benjamin.mugnier@foss.st.com>
+ M: Sylvain Petinot <sylvain.petinot@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/st,st-vgxy61.yaml
+ F: Documentation/userspace-api/media/drivers/vgxy61.rst
+ F: drivers/media/i2c/vgxy61.c
+@@ -22149,7 +22149,7 @@ STK1160 USB VIDEO CAPTURE DRIVER
+ M: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/stk1160/
+
+ STM32 AUDIO (ASoC) DRIVERS
+@@ -22586,7 +22586,7 @@ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+ Q: http://patchwork.linuxtv.org/project/linux-media/list/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/tda18250*
+
+ TDA18271 MEDIA DRIVER
+@@ -22632,7 +22632,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/tda9840*
+
+ TEA5761 TUNER DRIVER
+@@ -22640,7 +22640,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/tea5761.*
+
+ TEA5767 TUNER DRIVER
+@@ -22648,7 +22648,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/tea5767.*
+
+ TEA6415C MEDIA DRIVER
+@@ -22656,7 +22656,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/tea6415c*
+
+ TEA6420 MEDIA DRIVER
+@@ -22664,7 +22664,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/tea6420*
+
+ TEAM DRIVER
+@@ -22952,7 +22952,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-raremono.c
+
+ THERMAL
+@@ -23028,7 +23028,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ M: Paul Elder <paul.elder@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/thine,thp7312.yaml
+ F: Documentation/userspace-api/media/drivers/thp7312.rst
+ F: drivers/media/i2c/thp7312.c
+@@ -23615,7 +23615,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/tw68/
+
+ TW686X VIDEO4LINUX DRIVER
+@@ -23623,7 +23623,7 @@ M: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/tw686x/
+
+ U-BOOT ENVIRONMENT VARIABLES
+@@ -24106,7 +24106,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://www.ideasonboard.org/uvc/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/uvc/
+ F: include/uapi/linux/uvcvideo.h
+
+@@ -24212,7 +24212,7 @@ V4L2 ASYNC AND FWNODE FRAMEWORKS
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/v4l2-core/v4l2-async.c
+ F: drivers/media/v4l2-core/v4l2-fwnode.c
+ F: include/media/v4l2-async.h
+@@ -24378,7 +24378,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vicodec/*
+
+ VIDEO I2C POLLING DRIVER
+@@ -24406,7 +24406,7 @@ M: Daniel W. S. Almeida <dwlsalmeida@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vidtv/*
+
+ VIMC VIRTUAL MEDIA CONTROLLER DRIVER
+@@ -24415,7 +24415,7 @@ R: Kieran Bingham <kieran.bingham@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vimc/*
+
+ VIRT LIB
+@@ -24663,7 +24663,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vivid/*
+
+ VM SOCKETS (AF_VSOCK)
+@@ -25217,7 +25217,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/xc2028.*
+
+ XDP (eXpress Data Path)
+@@ -25358,8 +25358,7 @@ F: include/xen/arm/swiotlb-xen.h
+ F: include/xen/swiotlb-xen.h
+
+ XFS FILESYSTEM
+-M: Carlos Maiolino <cem@kernel.org>
+-R: Darrick J. Wong <djwong@kernel.org>
++M: Darrick J. Wong <djwong@kernel.org>
+ L: linux-xfs@vger.kernel.org
+ S: Supported
+ W: http://xfs.org/
+@@ -25441,7 +25440,7 @@ XILINX VIDEO IP CORES
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/xilinx/
+ F: drivers/media/platform/xilinx/
+ F: include/uapi/linux/xilinx-v4l2-controls.h
+diff --git a/Makefile b/Makefile
+index 70070e64d267c1..da6e99309a4da4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c
+index 4c9e61457b2f69..cc6ac7d128aa1a 100644
+--- a/arch/arc/kernel/devtree.c
++++ b/arch/arc/kernel/devtree.c
+@@ -62,7 +62,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt)
+ const struct machine_desc *mdesc;
+ unsigned long dt_root;
+
+- if (!early_init_dt_scan(dt))
++ if (!early_init_dt_scan(dt, __pa(dt)))
+ return NULL;
+
+ mdesc = of_flat_dt_match_machine(NULL, arch_get_next_mach);
+diff --git a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+index c8ca8cb7f5c94e..52ad95a2063aaf 100644
+--- a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
++++ b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+@@ -280,8 +280,8 @@ reg_dcdc4: dcdc4 {
+
+ reg_dcdc5: dcdc5 {
+ regulator-always-on;
+- regulator-min-microvolt = <1425000>;
+- regulator-max-microvolt = <1575000>;
++ regulator-min-microvolt = <1450000>;
++ regulator-max-microvolt = <1550000>;
+ regulator-name = "vcc-dram";
+ };
+
+diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+index 04a6d716ecaf8a..1e8fcb5d4700d8 100644
+--- a/arch/arm/boot/dts/microchip/sam9x60.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+@@ -186,6 +186,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 13>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -388,6 +389,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 32>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -439,6 +441,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 33>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -598,6 +601,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 9>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -649,6 +653,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 10>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -700,6 +705,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 11>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -751,6 +757,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 5>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -821,6 +828,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 6>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -891,6 +899,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 7>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -961,6 +970,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 8>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -1086,6 +1096,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 15>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -1137,6 +1148,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 16>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+diff --git a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
+index 29ba098f5dd5e8..28e703e0f152b2 100644
+--- a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
++++ b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
+@@ -53,7 +53,7 @@ partition@0 {
+
+ partition@4000000 {
+ label = "user1";
+- reg = <0x04000000 0x40000000>;
++ reg = <0x04000000 0x04000000>;
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+index c3d79ecd56e398..c217094b50abc9 100644
+--- a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+@@ -72,6 +72,7 @@ opp-1000000000 {
+ <1375000 1375000 1375000>;
+ /* only on am/dm37x with speed-binned bit set */
+ opp-supported-hw = <0xffffffff 2>;
++ turbo-mode;
+ };
+ };
+
+diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c
+index fdb74e64206a8a..3b78966e750a2d 100644
+--- a/arch/arm/kernel/devtree.c
++++ b/arch/arm/kernel/devtree.c
+@@ -200,7 +200,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
+
+ mdesc_best = &__mach_desc_GENERIC_DT;
+
+- if (!dt_virt || !early_init_dt_verify(dt_virt))
++ if (!dt_virt || !early_init_dt_verify(dt_virt, __pa(dt_virt)))
+ return NULL;
+
+ mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
+index 96db07fc9becea..1f2a0fe70a0a26 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
++++ b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
+@@ -29,12 +29,37 @@ usb_dr_connector: endpoint {
+ };
+ };
+
++/*
++ * rst_usb_hub_hog and sel_usb_hub_hog have property 'output-high',
++ * dt overlay don't support /delete-property/. Both 'output-low' and
++ * 'output-high' will be exist under hog nodes if overlay file set
++ * 'output-low'. Workaround is disable these hog and create new hog with
++ * 'output-low'.
++ */
++
+ &rst_usb_hub_hog {
+- output-low;
++ status = "disabled";
++};
++
++&expander0 {
++ rst-usb-low-hub-hog {
++ gpio-hog;
++ gpios = <13 0>;
++ output-low;
++ line-name = "RST_USB_HUB#";
++ };
+ };
+
+ &sel_usb_hub_hog {
+- output-low;
++ status = "disabled";
++};
++
++&gpio2 {
++ sel-usb-low-hub-hog {
++ gpio-hog;
++ gpios = <1 GPIO_ACTIVE_HIGH>;
++ output-low;
++ };
+ };
+
+ &usbotg1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt6358.dtsi b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+index 641d452fbc0830..e23672a2eea4af 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6358.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+@@ -15,12 +15,12 @@ pmic_adc: adc {
+ #io-channel-cells = <1>;
+ };
+
+- mt6358codec: mt6358codec {
++ mt6358codec: audio-codec {
+ compatible = "mediatek,mt6358-sound";
+ mediatek,dmic-mode = <0>; /* two-wires */
+ };
+
+- mt6358regulator: mt6358regulator {
++ mt6358regulator: regulators {
+ compatible = "mediatek,mt6358-regulator";
+
+ mt6358_vdram1_reg: buck_vdram1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+index 8d1cbc92bce320..ae0379fd42a91c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+@@ -49,6 +49,14 @@ trackpad2: trackpad@2c {
+ interrupts-extended = <&pio 117 IRQ_TYPE_LEVEL_LOW>;
+ reg = <0x2c>;
+ hid-descr-addr = <0x0020>;
++ /*
++ * The trackpad needs a post-power-on delay of 100ms,
++ * but at time of writing, the power supply for it on
++ * this board is always on. The delay is therefore not
++ * added to avoid impacting the readiness of the
++ * trackpad.
++ */
++ vdd-supply = <&mt6397_vgp6_reg>;
+ wakeup-source;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+index 19c1e2bee494c9..20b71f2e7159ad 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+@@ -30,3 +30,6 @@ touchscreen@2c {
+ };
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <4100>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+index f34964afe39b53..83bbcfe620835a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+@@ -18,6 +18,8 @@ &i2c_tunnel {
+ };
+
+ &i2c2 {
++ i2c-scl-internal-delay-ns = <25000>;
++
+ trackpad@2c {
+ compatible = "hid-over-i2c";
+ reg = <0x2c>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+index 0b45aee2e29953..65860b33c01fe8 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+@@ -30,3 +30,6 @@ &qca_wifi {
+ qcom,ath10k-calibration-variant = "GO_DAMU";
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <20000>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+index bbe6c338f465ee..f9c1ec366b2660 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+@@ -25,3 +25,6 @@ trackpad@2c {
+ };
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <21500>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+index 783c333107bcbf..49e053b932e76c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+@@ -8,28 +8,32 @@
+ #include <arm/cros-ec-keyboard.dtsi>
+
+ / {
+- pp1200_mipibrdg: pp1200-mipibrdg {
++ pp1000_mipibrdg: pp1000-mipibrdg {
+ compatible = "regulator-fixed";
+- regulator-name = "pp1200_mipibrdg";
++ regulator-name = "pp1000_mipibrdg";
++ regulator-min-microvolt = <1000000>;
++ regulator-max-microvolt = <1000000>;
+ pinctrl-names = "default";
+- pinctrl-0 = <&pp1200_mipibrdg_en>;
++ pinctrl-0 = <&pp1000_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 54 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp1800_alw>;
+ };
+
+ pp1800_mipibrdg: pp1800-mipibrdg {
+ compatible = "regulator-fixed";
+ regulator-name = "pp1800_mipibrdg";
+ pinctrl-names = "default";
+- pinctrl-0 = <&pp1800_lcd_en>;
++ pinctrl-0 = <&pp1800_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 36 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp1800_alw>;
+ };
+
+ pp3300_panel: pp3300-panel {
+@@ -44,18 +48,20 @@ pp3300_panel: pp3300-panel {
+ regulator-boot-on;
+
+ gpio = <&pio 35 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp3300_alw>;
+ };
+
+- vddio_mipibrdg: vddio-mipibrdg {
++ pp3300_mipibrdg: pp3300-mipibrdg {
+ compatible = "regulator-fixed";
+- regulator-name = "vddio_mipibrdg";
++ regulator-name = "pp3300_mipibrdg";
+ pinctrl-names = "default";
+- pinctrl-0 = <&vddio_mipibrdg_en>;
++ pinctrl-0 = <&pp3300_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 37 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp3300_alw>;
+ };
+
+ volume_buttons: volume-buttons {
+@@ -146,9 +152,9 @@ anx_bridge: anx7625@58 {
+ pinctrl-0 = <&anx7625_pins>;
+ enable-gpios = <&pio 45 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&pio 73 GPIO_ACTIVE_HIGH>;
+- vdd10-supply = <&pp1200_mipibrdg>;
++ vdd10-supply = <&pp1000_mipibrdg>;
+ vdd18-supply = <&pp1800_mipibrdg>;
+- vdd33-supply = <&vddio_mipibrdg>;
++ vdd33-supply = <&pp3300_mipibrdg>;
+
+ ports {
+ #address-cells = <1>;
+@@ -391,14 +397,14 @@ &pio {
+ "",
+ "";
+
+- pp1200_mipibrdg_en: pp1200-mipibrdg-en {
++ pp1000_mipibrdg_en: pp1000-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO54__FUNC_GPIO54>;
+ output-low;
+ };
+ };
+
+- pp1800_lcd_en: pp1800-lcd-en {
++ pp1800_mipibrdg_en: pp1800-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO36__FUNC_GPIO36>;
+ output-low;
+@@ -460,7 +466,7 @@ trackpad-int {
+ };
+ };
+
+- vddio_mipibrdg_en: vddio-mipibrdg-en {
++ pp3300_mipibrdg_en: pp3300-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO37__FUNC_GPIO37>;
+ output-low;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+index bfb9e42c8acaa7..ff02f63bac29b2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+@@ -92,9 +92,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c32";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+index 5c1bf6a1e47586..da6e767b4ceede 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+@@ -79,9 +79,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c64";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+index 0f5fa893a77426..8b56b8564ed7a2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+@@ -88,9 +88,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c32";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 22924f61ec9ed2..07ae3c8e897b7d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -290,6 +290,11 @@ dsi_out: endpoint {
+ };
+ };
+
++&dpi0 {
++ /* TODO Re-enable after DP to Type-C port muxing can be described */
++ status = "disabled";
++};
++
+ &gic {
+ mediatek,broken-save-restore-fw;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 266441e999f211..0a6578aacf8280 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1845,6 +1845,10 @@ dpi0: dpi@14015000 {
+ <&mmsys CLK_MM_DPI_MM>,
+ <&apmixedsys CLK_APMIXED_TVDPLL>;
+ clock-names = "pixel", "engine", "pll";
++
++ port {
++ dpi_out: endpoint { };
++ };
+ };
+
+ mutex: mutex@14016000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
+index 52ec58128d5615..b495a241b4432b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
+@@ -10,12 +10,6 @@
+
+ / {
+ chassis-type = "laptop";
+-
+- max98360a: max98360a {
+- compatible = "maxim,max98360a";
+- sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
+- #sound-dai-cells = <0>;
+- };
+ };
+
+ &cpu6 {
+@@ -59,19 +53,14 @@ &cluster1_opp_15 {
+ opp-hz = /bits/ 64 <2200000000>;
+ };
+
+-&rt1019p{
+- status = "disabled";
+-};
+-
+ &sound {
+ compatible = "mediatek,mt8186-mt6366-rt5682s-max98360-sound";
+- status = "okay";
++};
+
+- spk-hdmi-playback-dai-link {
+- codec {
+- sound-dai = <&it6505dptx>, <&max98360a>;
+- };
+- };
++&speaker_codec {
++ compatible = "maxim,max98360a";
++ sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
++ /delete-property/ sdb-gpios;
+ };
+
+ &spmi {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index 682c6ad2574d00..0c0b3ac5974525 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -259,15 +259,15 @@ spk-hdmi-playback-dai-link {
+ mediatek,clk-provider = "cpu";
+ /* RT1019P and IT6505 connected to the same I2S line */
+ codec {
+- sound-dai = <&it6505dptx>, <&rt1019p>;
++ sound-dai = <&it6505dptx>, <&speaker_codec>;
+ };
+ };
+ };
+
+- rt1019p: speaker-codec {
++ speaker_codec: speaker-codec {
+ compatible = "realtek,rt1019p";
+ pinctrl-names = "default";
+- pinctrl-0 = <&rt1019p_pins_default>;
++ pinctrl-0 = <&speaker_codec_pins_default>;
+ #sound-dai-cells = <0>;
+ sdb-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
+ };
+@@ -1179,7 +1179,7 @@ pins {
+ };
+ };
+
+- rt1019p_pins_default: rt1019p-default-pins {
++ speaker_codec_pins_default: speaker-codec-default-pins {
+ pins-sdb {
+ pinmux = <PINMUX_GPIO150__FUNC_GPIO150>;
+ output-low;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+index cd27966d2e3c05..91beef22e0a9c6 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+@@ -956,9 +956,9 @@ mfg0: power-domain@MT8188_POWER_DOMAIN_MFG0 {
+ #size-cells = <0>;
+ #power-domain-cells = <1>;
+
+- power-domain@MT8188_POWER_DOMAIN_MFG1 {
++ mfg1: power-domain@MT8188_POWER_DOMAIN_MFG1 {
+ reg = <MT8188_POWER_DOMAIN_MFG1>;
+- clocks = <&topckgen CLK_APMIXED_MFGPLL>,
++ clocks = <&apmixedsys CLK_APMIXED_MFGPLL>,
+ <&topckgen CLK_TOP_MFG_CORE_TMP>;
+ clock-names = "mfg", "alt";
+ mediatek,infracfg = <&infracfg_ao>;
+@@ -1689,7 +1689,6 @@ u3port1: usb-phy@700 {
+ <&clk26m>;
+ clock-names = "ref", "da_ref";
+ #phy-cells = <1>;
+- status = "disabled";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index 75d56b2d5a3d34..2c7b2223ee76b1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -438,7 +438,7 @@ audio_codec: codec@1a {
+ /* Realtek RT5682i or RT5682s, sharing the same configuration */
+ reg = <0x1a>;
+ interrupts-extended = <&pio 89 IRQ_TYPE_EDGE_BOTH>;
+- #sound-dai-cells = <0>;
++ #sound-dai-cells = <1>;
+ realtek,jd-src = <1>;
+
+ AVDD-supply = <&mt6359_vio18_ldo_reg>;
+@@ -1181,7 +1181,7 @@ hs-playback-dai-link {
+ link-name = "ETDM1_OUT_BE";
+ mediatek,clk-provider = "cpu";
+ codec {
+- sound-dai = <&audio_codec>;
++ sound-dai = <&audio_codec 0>;
+ };
+ };
+
+@@ -1189,7 +1189,7 @@ hs-capture-dai-link {
+ link-name = "ETDM2_IN_BE";
+ mediatek,clk-provider = "cpu";
+ codec {
+- sound-dai = <&audio_codec>;
++ sound-dai = <&audio_codec 0>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index e89ba384c4aafc..ade685ed2190b7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -487,7 +487,7 @@ topckgen: syscon@10000000 {
+ };
+
+ infracfg_ao: syscon@10001000 {
+- compatible = "mediatek,mt8195-infracfg_ao", "syscon", "simple-mfd";
++ compatible = "mediatek,mt8195-infracfg_ao", "syscon";
+ reg = <0 0x10001000 0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+@@ -3331,11 +3331,9 @@ &larb19 &larb21 &larb24 &larb25
+ mutex1: mutex@1c101000 {
+ compatible = "mediatek,mt8195-disp-mutex";
+ reg = <0 0x1c101000 0 0x1000>;
+- reg-names = "vdo1_mutex";
+ interrupts = <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH 0>;
+ power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ clocks = <&vdosys1 CLK_VDO1_DISP_MUTEX>;
+- clock-names = "vdo1_mutex";
+ mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x1000 0x1000>;
+ mediatek,gce-events = <CMDQ_EVENT_VDO1_STREAM_DONE_ENG_0>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+index 1ef6262b65c9ac..b4b48eb93f3c54 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+@@ -187,7 +187,7 @@ mdio {
+ compatible = "snps,dwmac-mdio";
+ #address-cells = <1>;
+ #size-cells = <0>;
+- eth_phy0: eth-phy0@1 {
++ eth_phy0: ethernet-phy@1 {
+ compatible = "ethernet-phy-id001c.c916";
+ reg = <0x1>;
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+index c00db75e391057..1c53ccc5e3cbf3 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+@@ -351,7 +351,7 @@ mmc@700b0200 {
+ #size-cells = <0>;
+
+ wifi@1 {
+- compatible = "brcm,bcm4354-fmac";
++ compatible = "brcm,bcm4354-fmac", "brcm,bcm4329-fmac";
+ reg = <1>;
+ interrupt-parent = <&gpio>;
+ interrupts = <TEGRA_GPIO(H, 2) IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+index 0d45662b8028bf..5d0167fbc70982 100644
+--- a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
++++ b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+@@ -707,7 +707,7 @@ &remoteproc_cdsp {
+ };
+
+ &remoteproc_mpss {
+- firmware-name = "qcom/qcs6490/modem.mdt";
++ firmware-name = "qcom/qcs6490/modem.mbn";
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index 0e9429684dd97b..60f71b49026153 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -3889,7 +3889,7 @@ lmh@18358800 {
+ };
+
+ cpufreq_hw: cpufreq@18323000 {
+- compatible = "qcom,cpufreq-hw";
++ compatible = "qcom,sc8180x-cpufreq-hw", "qcom,cpufreq-hw";
+ reg = <0 0x18323000 0 0x1400>, <0 0x18325800 0 0x1400>;
+ reg-names = "freq-domain0", "freq-domain1";
+
+diff --git a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+index 60412281ab27de..962c8aa4004401 100644
+--- a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
++++ b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+@@ -104,7 +104,7 @@ vreg_l10a_1p8: vreg-l10a-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vreg_l10a_1p8";
+ regulator-min-microvolt = <1804000>;
+- regulator-max-microvolt = <1896000>;
++ regulator-max-microvolt = <1804000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 7986ddb30f6e8c..4f8477de7e1b1e 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -1376,43 +1376,43 @@ gpu_opp_table: opp-table {
+ opp-850000000 {
+ opp-hz = /bits/ 64 <850000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
+- opp-supported-hw = <0x02>;
++ opp-supported-hw = <0x03>;
+ };
+
+ opp-800000000 {
+ opp-hz = /bits/ 64 <800000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
+- opp-supported-hw = <0x04>;
++ opp-supported-hw = <0x07>;
+ };
+
+ opp-650000000 {
+ opp-hz = /bits/ 64 <650000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
+- opp-supported-hw = <0x08>;
++ opp-supported-hw = <0x0f>;
+ };
+
+ opp-565000000 {
+ opp-hz = /bits/ 64 <565000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
+- opp-supported-hw = <0x10>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-430000000 {
+ opp-hz = /bits/ 64 <430000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-355000000 {
+ opp-hz = /bits/ 64 <355000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-253000000 {
+ opp-hz = /bits/ 64 <253000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index fb4a48a1e2a8a5..2926a1aba76873 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -594,8 +594,6 @@ &usb_1_ss0_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+@@ -628,8 +626,6 @@ &usb_1_ss1_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index 0cdaff9c8cf0fc..f22e5c840a2e55 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -898,8 +898,6 @@ &usb_1_ss0_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+@@ -932,8 +930,6 @@ &usb_1_ss1_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 0510abc0edf0ff..914f9cb3aca215 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -279,8 +279,8 @@ CLUSTER_C4: cpu-sleep-0 {
+ idle-state-name = "ret";
+ arm,psci-suspend-param = <0x00000004>;
+ entry-latency-us = <180>;
+- exit-latency-us = <320>;
+- min-residency-us = <1000>;
++ exit-latency-us = <500>;
++ min-residency-us = <600>;
+ };
+ };
+
+@@ -299,7 +299,7 @@ CLUSTER_CL5: cluster-sleep-1 {
+ idle-state-name = "ret-pll-off";
+ arm,psci-suspend-param = <0x01000054>;
+ entry-latency-us = <2200>;
+- exit-latency-us = <2500>;
++ exit-latency-us = <4000>;
+ min-residency-us = <7000>;
+ };
+ };
+@@ -5752,7 +5752,7 @@ apps_smmu: iommu@15000000 {
+ intc: interrupt-controller@17000000 {
+ compatible = "arm,gic-v3";
+ reg = <0 0x17000000 0 0x10000>, /* GICD */
+- <0 0x17080000 0 0x480000>; /* GICR * 12 */
++ <0 0x17080000 0 0x300000>; /* GICR * 12 */
+
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+index 8e2db1d6ca81e2..25c55b32aafe5a 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+@@ -69,9 +69,6 @@ &rcar_sound {
+
+ status = "okay";
+
+- /* Single DAI */
+- #sound-dai-cells = <0>;
+-
+ rsnd_port: port {
+ rsnd_endpoint: endpoint {
+ remote-endpoint = <&dw_hdmi0_snd_in>;
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+index 66f3affe046973..deb69c27277566 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+@@ -84,9 +84,6 @@ &rcar_sound {
+ pinctrl-names = "default";
+ status = "okay";
+
+- /* Single DAI */
+- #sound-dai-cells = <0>;
+-
+ /* audio_clkout0/1/2/3 */
+ #clock-cells = <1>;
+ clock-frequency = <12288000 11289600>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
+index ebcaeafc3800d0..fa61633aea1526 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
++++ b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
+@@ -49,7 +49,6 @@ vcc1v8_eth: vcc1v8-eth-regulator {
+
+ vcc3v3_eth: vcc3v3-eth-regulator {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpio = <&gpio0 RK_PC0 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc3v3_eth_enn>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+index 8ba111d9283fef..d9d2bf822443bc 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+@@ -62,7 +62,7 @@ sdio_pwrseq: sdio-pwrseq {
+
+ sound {
+ compatible = "audio-graph-card";
+- label = "rockchip,es8388-codec";
++ label = "rockchip,es8388";
+ widgets = "Microphone", "Mic Jack",
+ "Headphone", "Headphones";
+ routing = "LINPUT2", "Mic Jack",
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
+index feea6b20a6bf54..6b77be64324950 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
+@@ -71,7 +71,6 @@ vcc5v0_sys: vcc5v0-sys-regulator {
+
+ vcc_3v3_sd_s0: vcc-3v3-sd-s0-regulator {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpios = <&gpio4 RK_PB5 GPIO_ACTIVE_LOW>;
+ regulator-name = "vcc_3v3_sd_s0";
+ regulator-boot-on;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
+index e4633af87eb9c5..d6ce53c6d74814 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
+@@ -433,8 +433,6 @@ &mcasp2 {
+ 0 0 0 0
+ 0 0 0 0
+ >;
+- tx-num-evt = <32>;
+- rx-num-evt = <32>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 6593c5da82c064..df39f2b1ff6ba6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -254,7 +254,7 @@ J721E_IOPAD(0x38, PIN_OUTPUT, 0) /* (Y21) MCAN3_TX */
+ };
+ };
+
+-&main_pmx1 {
++&main_pmx2 {
+ main_usbss0_pins_default: main-usbss0-default-pins {
+ pinctrl-single,pins = <
+ J721E_IOPAD(0x04, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 9386bf3ef9f684..1d11da926a8714 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -426,10 +426,28 @@ main_pmx0: pinctrl@11c000 {
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+- main_pmx1: pinctrl@11c11c {
++ main_pmx1: pinctrl@11c110 {
+ compatible = "ti,j7200-padconf", "pinctrl-single";
+ /* Proxy 0 addressing */
+- reg = <0x00 0x11c11c 0x00 0xc>;
++ reg = <0x00 0x11c110 0x00 0x004>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx2: pinctrl@11c11c {
++ compatible = "ti,j7200-padconf", "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c11c 0x00 0x00c>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx3: pinctrl@11c164 {
++ compatible = "ti,j7200-padconf", "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c164 0x00 0x008>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+@@ -1145,7 +1163,7 @@ main_spi0: spi@2100000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 266 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 266 1>;
++ clocks = <&k3_clks 266 4>;
+ status = "disabled";
+ };
+
+@@ -1156,7 +1174,7 @@ main_spi1: spi@2110000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 267 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 267 1>;
++ clocks = <&k3_clks 267 4>;
+ status = "disabled";
+ };
+
+@@ -1167,7 +1185,7 @@ main_spi2: spi@2120000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 268 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 268 1>;
++ clocks = <&k3_clks 268 4>;
+ status = "disabled";
+ };
+
+@@ -1178,7 +1196,7 @@ main_spi3: spi@2130000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 269 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 269 1>;
++ clocks = <&k3_clks 269 4>;
+ status = "disabled";
+ };
+
+@@ -1189,7 +1207,7 @@ main_spi4: spi@2140000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 270 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 270 1>;
++ clocks = <&k3_clks 270 2>;
+ status = "disabled";
+ };
+
+@@ -1200,7 +1218,7 @@ main_spi5: spi@2150000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 271 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 271 1>;
++ clocks = <&k3_clks 271 4>;
+ status = "disabled";
+ };
+
+@@ -1211,7 +1229,7 @@ main_spi6: spi@2160000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 272 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 272 1>;
++ clocks = <&k3_clks 272 4>;
+ status = "disabled";
+ };
+
+@@ -1222,7 +1240,7 @@ main_spi7: spi@2170000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 273 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 273 1>;
++ clocks = <&k3_clks 273 4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+index 5097d192c2b208..b18b2f2deb969f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+@@ -494,7 +494,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 274 0>;
++ clocks = <&k3_clks 274 4>;
+ status = "disabled";
+ };
+
+@@ -505,7 +505,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 275 0>;
++ clocks = <&k3_clks 275 4>;
+ status = "disabled";
+ };
+
+@@ -516,7 +516,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 276 0>;
++ clocks = <&k3_clks 276 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+index 3731ffb4a5c963..6f5c1401ebd6a0 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+@@ -654,7 +654,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 274 0>;
++ clocks = <&k3_clks 274 1>;
+ status = "disabled";
+ };
+
+@@ -665,7 +665,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 275 0>;
++ clocks = <&k3_clks 275 1>;
+ status = "disabled";
+ };
+
+@@ -676,7 +676,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 276 0>;
++ clocks = <&k3_clks 276 1>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+index 9ed6949b40e9df..fae534b5c8a43f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+@@ -1708,7 +1708,7 @@ main_spi0: spi@2100000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 339 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 339 1>;
++ clocks = <&k3_clks 339 2>;
+ status = "disabled";
+ };
+
+@@ -1719,7 +1719,7 @@ main_spi1: spi@2110000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 340 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 340 1>;
++ clocks = <&k3_clks 340 2>;
+ status = "disabled";
+ };
+
+@@ -1730,7 +1730,7 @@ main_spi2: spi@2120000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 341 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 341 1>;
++ clocks = <&k3_clks 341 2>;
+ status = "disabled";
+ };
+
+@@ -1741,7 +1741,7 @@ main_spi3: spi@2130000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 342 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 342 1>;
++ clocks = <&k3_clks 342 2>;
+ status = "disabled";
+ };
+
+@@ -1752,7 +1752,7 @@ main_spi4: spi@2140000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 343 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 343 1>;
++ clocks = <&k3_clks 343 2>;
+ status = "disabled";
+ };
+
+@@ -1763,7 +1763,7 @@ main_spi5: spi@2150000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 344 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 344 1>;
++ clocks = <&k3_clks 344 2>;
+ status = "disabled";
+ };
+
+@@ -1774,7 +1774,7 @@ main_spi6: spi@2160000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 345 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 345 1>;
++ clocks = <&k3_clks 345 2>;
+ status = "disabled";
+ };
+
+@@ -1785,7 +1785,7 @@ main_spi7: spi@2170000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 346 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 346 1>;
++ clocks = <&k3_clks 346 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+index 9d96b19d0e7cf5..8232d308c23cc6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+@@ -425,7 +425,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 347 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 347 0>;
++ clocks = <&k3_clks 347 2>;
+ status = "disabled";
+ };
+
+@@ -436,7 +436,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 348 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 348 0>;
++ clocks = <&k3_clks 348 2>;
+ status = "disabled";
+ };
+
+@@ -447,7 +447,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 349 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 349 0>;
++ clocks = <&k3_clks 349 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 8c0a36f72d6fcd..bc77869dbd43b2 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -353,6 +353,7 @@ __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
+ __AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
+ __AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
+ __AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
++__AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400)
+ __AARCH64_INSN_FUNCS(stp, 0x7FC00000, 0x29000000)
+ __AARCH64_INSN_FUNCS(ldp, 0x7FC00000, 0x29400000)
+ __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index bf64fed9820ea0..c315bc1a4e9adf 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -74,8 +74,6 @@ enum kvm_mode kvm_get_mode(void);
+ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
+ #endif
+
+-DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+-
+ extern unsigned int __ro_after_init kvm_sve_max_vl;
+ extern unsigned int __ro_after_init kvm_host_sve_max_vl;
+ int __init kvm_arm_init_sve(void);
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 718728a85430fa..db994d1fd97e70 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -228,6 +228,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
++ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_XS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_BF16_SHIFT, 4, 0),
+diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
+index 3496d6169e59b2..42b69936cee34b 100644
+--- a/arch/arm64/kernel/probes/decode-insn.c
++++ b/arch/arm64/kernel/probes/decode-insn.c
+@@ -58,10 +58,13 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
+ * Instructions which load PC relative literals are not going to work
+ * when executed from an XOL slot. Instructions doing an exclusive
+ * load/store are not going to complete successfully when single-step
+- * exception handling happens in the middle of the sequence.
++ * exception handling happens in the middle of the sequence. Memory
++ * copy/set instructions require that all three instructions be placed
++ * consecutively in memory.
+ */
+ if (aarch64_insn_uses_literal(insn) ||
+- aarch64_insn_is_exclusive(insn))
++ aarch64_insn_is_exclusive(insn) ||
++ aarch64_insn_is_mops(insn))
+ return false;
+
+ return true;
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 3e7c8c8195c3c9..2bbcbb11d844c9 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -442,7 +442,7 @@ static void tls_thread_switch(struct task_struct *next)
+
+ if (is_compat_thread(task_thread_info(next)))
+ write_sysreg(next->thread.uw.tp_value, tpidrro_el0);
+- else if (!arm64_kernel_unmapped_at_el0())
++ else
+ write_sysreg(0, tpidrro_el0);
+
+ write_sysreg(*task_user_tls(next), tpidr_el0);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index b22d28ec80284b..87f61fd6783c20 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -175,7 +175,11 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)
+ if (dt_virt)
+ memblock_reserve(dt_phys, size);
+
+- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
++ /*
++ * dt_virt is a fixmap address, hence __pa(dt_virt) can't be used.
++ * Pass dt_phys directly.
++ */
++ if (!early_init_dt_scan(dt_virt, dt_phys)) {
+ pr_crit("\n"
+ "Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"
+ "The dtb must be 8-byte aligned and must not exceed 2 MB in size\n"
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 58d89d997d050f..f84c71f04d9ea9 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -287,6 +287,9 @@ SECTIONS
+ __initdata_end = .;
+ __init_end = .;
+
++ .data.rel.ro : { *(.data.rel.ro) }
++ ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
++
+ _data = .;
+ _sdata = .;
+ RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+@@ -343,9 +346,6 @@ SECTIONS
+ *(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt)
+ }
+ ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!")
+-
+- .data.rel.ro : { *(.data.rel.ro) }
+- ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
+ }
+
+ #include "image-vars.h"
+diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
+index 879982b1cc739e..1215df59041856 100644
+--- a/arch/arm64/kvm/arch_timer.c
++++ b/arch/arm64/kvm/arch_timer.c
+@@ -206,8 +206,7 @@ void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
+
+ static inline bool userspace_irqchip(struct kvm *kvm)
+ {
+- return static_branch_unlikely(&userspace_irqchip_in_use) &&
+- unlikely(!irqchip_in_kernel(kvm));
++ return unlikely(!irqchip_in_kernel(kvm));
+ }
+
+ static void soft_timer_start(struct hrtimer *hrt, u64 ns)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 48cafb65d6acff..70ff9a20ef3af3 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -69,7 +69,6 @@ DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+ static bool vgic_present, kvm_arm_initialised;
+
+ static DEFINE_PER_CPU(unsigned char, kvm_hyp_initialized);
+-DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+
+ bool is_kvm_arm_initialised(void)
+ {
+@@ -503,9 +502,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+
+ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+- if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
+- static_branch_dec(&userspace_irqchip_in_use);
+-
+ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+ kvm_timer_vcpu_terminate(vcpu);
+ kvm_pmu_vcpu_destroy(vcpu);
+@@ -848,14 +844,6 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ return ret;
+ }
+
+- if (!irqchip_in_kernel(kvm)) {
+- /*
+- * Tell the rest of the code that there are userspace irqchip
+- * VMs in the wild.
+- */
+- static_branch_inc(&userspace_irqchip_in_use);
+- }
+-
+ /*
+ * Initialize traps for protected VMs.
+ * NOTE: Move to run in EL2 directly, rather than via a hypercall, once
+@@ -1077,7 +1065,7 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
+ * state gets updated in kvm_timer_update_run and
+ * kvm_pmu_update_run below).
+ */
+- if (static_branch_unlikely(&userspace_irqchip_in_use)) {
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
+ if (kvm_timer_should_notify_user(vcpu) ||
+ kvm_pmu_should_notify_user(vcpu)) {
+ *ret = -EINTR;
+@@ -1199,7 +1187,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ vcpu->mode = OUTSIDE_GUEST_MODE;
+ isb(); /* Ensure work in x_flush_hwstate is committed */
+ kvm_pmu_sync_hwstate(vcpu);
+- if (static_branch_unlikely(&userspace_irqchip_in_use))
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_sync_user(vcpu);
+ kvm_vgic_sync_hwstate(vcpu);
+ local_irq_enable();
+@@ -1245,7 +1233,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ * we don't want vtimer interrupts to race with syncing the
+ * timer virtual interrupt state.
+ */
+- if (static_branch_unlikely(&userspace_irqchip_in_use))
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_sync_user(vcpu);
+
+ kvm_arch_vcpu_ctxsync_fp(vcpu);
+diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
+index cd6b7b83e2c370..ab365e839874e5 100644
+--- a/arch/arm64/kvm/mmio.c
++++ b/arch/arm64/kvm/mmio.c
+@@ -72,6 +72,31 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
+ return data;
+ }
+
++static bool kvm_pending_sync_exception(struct kvm_vcpu *vcpu)
++{
++ if (!vcpu_get_flag(vcpu, PENDING_EXCEPTION))
++ return false;
++
++ if (vcpu_el1_is_32bit(vcpu)) {
++ switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
++ case unpack_vcpu_flag(EXCEPT_AA32_UND):
++ case unpack_vcpu_flag(EXCEPT_AA32_IABT):
++ case unpack_vcpu_flag(EXCEPT_AA32_DABT):
++ return true;
++ default:
++ return false;
++ }
++ } else {
++ switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
++ case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC):
++ case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC):
++ return true;
++ default:
++ return false;
++ }
++ }
++}
++
+ /**
+ * kvm_handle_mmio_return -- Handle MMIO loads after user space emulation
+ * or in-kernel IO emulation
+@@ -84,8 +109,11 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
+ unsigned int len;
+ int mask;
+
+- /* Detect an already handled MMIO return */
+- if (unlikely(!vcpu->mmio_needed))
++ /*
++ * Detect if the MMIO return was already handled or if userspace aborted
++ * the MMIO access.
++ */
++ if (unlikely(!vcpu->mmio_needed || kvm_pending_sync_exception(vcpu)))
+ return 1;
+
+ vcpu->mmio_needed = 0;
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index ac36c438b8c18c..3940fe893783c8 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -342,7 +342,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+
+ if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
+ reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+- reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+ reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+ }
+
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index ba945ba78cc7d7..198296933e7ebf 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -782,6 +782,9 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
+
+ ite = find_ite(its, device_id, event_id);
+ if (ite && its_is_collection_mapped(ite->collection)) {
++ struct its_device *device = find_its_device(its, device_id);
++ int ite_esz = vgic_its_get_abi(its)->ite_esz;
++ gpa_t gpa = device->itt_addr + ite->event_id * ite_esz;
+ /*
+ * Though the spec talks about removing the pending state, we
+ * don't bother here since we clear the ITTE anyway and the
+@@ -790,7 +793,8 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
+ vgic_its_invalidate_cache(its);
+
+ its_free_ite(kvm, ite);
+- return 0;
++
++ return vgic_its_write_entry_lock(its, gpa, 0, ite_esz);
+ }
+
+ return E_ITS_DISCARD_UNMAPPED_INTERRUPT;
+@@ -1139,9 +1143,11 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
+ bool valid = its_cmd_get_validbit(its_cmd);
+ u8 num_eventid_bits = its_cmd_get_size(its_cmd);
+ gpa_t itt_addr = its_cmd_get_ittaddr(its_cmd);
++ int dte_esz = vgic_its_get_abi(its)->dte_esz;
+ struct its_device *device;
++ gpa_t gpa;
+
+- if (!vgic_its_check_id(its, its->baser_device_table, device_id, NULL))
++ if (!vgic_its_check_id(its, its->baser_device_table, device_id, &gpa))
+ return E_ITS_MAPD_DEVICE_OOR;
+
+ if (valid && num_eventid_bits > VITS_TYPER_IDBITS)
+@@ -1162,7 +1168,7 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
+ * is an error, so we are done in any case.
+ */
+ if (!valid)
+- return 0;
++ return vgic_its_write_entry_lock(its, gpa, 0, dte_esz);
+
+ device = vgic_its_alloc_device(its, device_id, itt_addr,
+ num_eventid_bits);
+@@ -2086,7 +2092,6 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
+ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ struct its_ite *ite, gpa_t gpa, int ite_esz)
+ {
+- struct kvm *kvm = its->dev->kvm;
+ u32 next_offset;
+ u64 val;
+
+@@ -2095,7 +2100,8 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
+ ite->collection->collection_id;
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(kvm, gpa, &val, ite_esz);
++
++ return vgic_its_write_entry_lock(its, gpa, val, ite_esz);
+ }
+
+ /**
+@@ -2239,7 +2245,6 @@ static int vgic_its_restore_itt(struct vgic_its *its, struct its_device *dev)
+ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ gpa_t ptr, int dte_esz)
+ {
+- struct kvm *kvm = its->dev->kvm;
+ u64 val, itt_addr_field;
+ u32 next_offset;
+
+@@ -2250,7 +2255,8 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
+ (dev->num_eventid_bits - 1));
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(kvm, ptr, &val, dte_esz);
++
++ return vgic_its_write_entry_lock(its, ptr, val, dte_esz);
+ }
+
+ /**
+@@ -2437,7 +2443,8 @@ static int vgic_its_save_cte(struct vgic_its *its,
+ ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
+ collection->collection_id);
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz);
++
++ return vgic_its_write_entry_lock(its, gpa, val, esz);
+ }
+
+ /*
+@@ -2453,8 +2460,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
+ u64 val;
+ int ret;
+
+- BUG_ON(esz > sizeof(val));
+- ret = kvm_read_guest_lock(kvm, gpa, &val, esz);
++ ret = vgic_its_read_entry_lock(its, gpa, &val, esz);
+ if (ret)
+ return ret;
+ val = le64_to_cpu(val);
+@@ -2492,7 +2498,6 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ u64 baser = its->baser_coll_table;
+ gpa_t gpa = GITS_BASER_ADDR_48_to_52(baser);
+ struct its_collection *collection;
+- u64 val;
+ size_t max_size, filled = 0;
+ int ret, cte_esz = abi->cte_esz;
+
+@@ -2516,10 +2521,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ * table is not fully filled, add a last dummy element
+ * with valid bit unset
+ */
+- val = 0;
+- BUG_ON(cte_esz > sizeof(val));
+- ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
+- return ret;
++ return vgic_its_write_entry_lock(its, gpa, 0, cte_esz);
+ }
+
+ /*
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index 9e50928f5d7dfd..70a44852cbafe3 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -530,6 +530,7 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
+ unsigned long val)
+ {
+ struct vgic_irq *irq;
++ u32 intid;
+
+ /*
+ * If the guest wrote only to the upper 32bit part of the
+@@ -541,9 +542,13 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
+ if ((addr & 4) || !vgic_lpis_enabled(vcpu))
+ return;
+
++ intid = lower_32_bits(val);
++ if (intid < VGIC_MIN_LPI)
++ return;
++
+ vgic_set_rdist_busy(vcpu, true);
+
+- irq = vgic_get_irq(vcpu->kvm, NULL, lower_32_bits(val));
++ irq = vgic_get_irq(vcpu->kvm, NULL, intid);
+ if (irq) {
+ vgic_its_inv_lpi(vcpu->kvm, irq);
+ vgic_put_irq(vcpu->kvm, irq);
+diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
+index f2486b4d9f9566..309295f5e1b074 100644
+--- a/arch/arm64/kvm/vgic/vgic.h
++++ b/arch/arm64/kvm/vgic/vgic.h
+@@ -146,6 +146,29 @@ static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa,
+ return ret;
+ }
+
++static inline int vgic_its_read_entry_lock(struct vgic_its *its, gpa_t eaddr,
++ u64 *eval, unsigned long esize)
++{
++ struct kvm *kvm = its->dev->kvm;
++
++ if (KVM_BUG_ON(esize != sizeof(*eval), kvm))
++ return -EINVAL;
++
++ return kvm_read_guest_lock(kvm, eaddr, eval, esize);
++
++}
++
++static inline int vgic_its_write_entry_lock(struct vgic_its *its, gpa_t eaddr,
++ u64 eval, unsigned long esize)
++{
++ struct kvm *kvm = its->dev->kvm;
++
++ if (KVM_BUG_ON(esize != sizeof(eval), kvm))
++ return -EINVAL;
++
++ return vgic_write_guest_lock(kvm, eaddr, &eval, esize);
++}
++
+ /*
+ * This struct provides an intermediate representation of the fields contained
+ * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 5db82bfc9dc115..27ef366363e4e2 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -2094,6 +2094,12 @@ static void restore_args(struct jit_ctx *ctx, int args_off, int nregs)
+ }
+ }
+
++static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
++{
++ return fentry_links->nr_links == 1 &&
++ fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
++}
++
+ /* Based on the x86's implementation of arch_prepare_bpf_trampoline().
+ *
+ * bpf prog and function entry before bpf trampoline hooked:
+@@ -2123,6 +2129,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ bool save_ret;
+ __le32 **branches = NULL;
++ bool is_struct_ops = is_struct_ops_tramp(fentry);
+
+ /* trampoline stack layout:
+ * [ parent ip ]
+@@ -2191,11 +2198,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ */
+ emit_bti(A64_BTI_JC, ctx);
+
+- /* frame for parent function */
+- emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
+- emit(A64_MOV(1, A64_FP, A64_SP), ctx);
++ /* x9 is not set for struct_ops */
++ if (!is_struct_ops) {
++ /* frame for parent function */
++ emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
++ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
++ }
+
+- /* frame for patched function */
++ /* frame for patched function for tracing, or caller for struct_ops */
+ emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
+ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
+
+@@ -2289,19 +2299,24 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ /* reset SP */
+ emit(A64_MOV(1, A64_SP, A64_FP), ctx);
+
+- /* pop frames */
+- emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
+- emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
+-
+- if (flags & BPF_TRAMP_F_SKIP_FRAME) {
+- /* skip patched function, return to parent */
+- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+- emit(A64_RET(A64_R(9)), ctx);
++ if (is_struct_ops) {
++ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
++ emit(A64_RET(A64_LR), ctx);
+ } else {
+- /* return to patched function */
+- emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
+- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+- emit(A64_RET(A64_R(10)), ctx);
++ /* pop frames */
++ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
++ emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
++
++ if (flags & BPF_TRAMP_F_SKIP_FRAME) {
++ /* skip patched function, return to parent */
++ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
++ emit(A64_RET(A64_R(9)), ctx);
++ } else {
++ /* return to patched function */
++ emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
++ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
++ emit(A64_RET(A64_R(10)), ctx);
++ }
+ }
+
+ kfree(branches);
+diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
+index 51012e90780d6b..fe715b707fd0a4 100644
+--- a/arch/csky/kernel/setup.c
++++ b/arch/csky/kernel/setup.c
+@@ -112,9 +112,9 @@ asmlinkage __visible void __init csky_start(unsigned int unused,
+ pre_trap_init();
+
+ if (dtb_start == NULL)
+- early_init_dt_scan(__dtb_start);
++ early_init_dt_scan(__dtb_start, __pa(dtb_start));
+ else
+- early_init_dt_scan(dtb_start);
++ early_init_dt_scan(dtb_start, __pa(dtb_start));
+
+ start_kernel();
+
+diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
+index ae3f80622f4c60..567bd122a9ee47 100644
+--- a/arch/loongarch/Makefile
++++ b/arch/loongarch/Makefile
+@@ -59,7 +59,7 @@ endif
+
+ ifdef CONFIG_64BIT
+ ld-emul = $(64bit-emul)
+-cflags-y += -mabi=lp64s
++cflags-y += -mabi=lp64s -mcmodel=normal
+ endif
+
+ cflags-y += -pipe $(CC_FLAGS_NO_FPU)
+@@ -104,7 +104,7 @@ ifdef CONFIG_OBJTOOL
+ KBUILD_CFLAGS += -fno-jump-tables
+ endif
+
+-KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat
++KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat -Ccode-model=small
+ KBUILD_RUSTFLAGS_KERNEL += -Zdirect-access-external-data=yes
+ KBUILD_RUSTFLAGS_MODULE += -Zdirect-access-external-data=no
+
+diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
+index cbd3c09a93c14c..56934fe58170e0 100644
+--- a/arch/loongarch/kernel/setup.c
++++ b/arch/loongarch/kernel/setup.c
+@@ -291,7 +291,7 @@ static void __init fdt_setup(void)
+ if (!fdt_pointer || fdt_check_header(fdt_pointer))
+ return;
+
+- early_init_dt_scan(fdt_pointer);
++ early_init_dt_scan(fdt_pointer, __pa(fdt_pointer));
+ early_init_fdt_reserve_self();
+
+ max_low_pfn = PFN_PHYS(memblock_end_of_DRAM());
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index 7dbefd4ba21071..dd350cba1252f9 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -179,7 +179,7 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
+
+ if (!is_tail_call) {
+ /* Set return value */
+- move_reg(ctx, LOONGARCH_GPR_A0, regmap[BPF_REG_0]);
++ emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0);
+ /* Return to the caller */
+ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
+ } else {
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index 40c1175823d61d..fdde1bcd4e2663 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -19,7 +19,7 @@ ccflags-vdso := \
+ cflags-vdso := $(ccflags-vdso) \
+ -isystem $(shell $(CC) -print-file-name=include) \
+ $(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
+- -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
++ -std=gnu11 -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ $(call cc-option, -fno-asynchronous-unwind-tables) \
+ $(call cc-option, -fno-stack-protector)
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 7dab46728aedaf..b6958ec2a220cf 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -93,7 +93,7 @@ static struct platform_device mcf_uart = {
+ .dev.platform_data = mcf_uart_platform_data,
+ };
+
+-#if IS_ENABLED(CONFIG_FEC)
++#ifdef MCFFEC_BASE0
+
+ #ifdef CONFIG_M5441x
+ #define FEC_NAME "enet-fec"
+@@ -145,6 +145,7 @@ static struct platform_device mcf_fec0 = {
+ .platform_data = FEC_PDATA,
+ }
+ };
++#endif /* MCFFEC_BASE0 */
+
+ #ifdef MCFFEC_BASE1
+ static struct resource mcf_fec1_resources[] = {
+@@ -182,7 +183,6 @@ static struct platform_device mcf_fec1 = {
+ }
+ };
+ #endif /* MCFFEC_BASE1 */
+-#endif /* CONFIG_FEC */
+
+ #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
+ /*
+@@ -624,12 +624,12 @@ static struct platform_device mcf_flexcan0 = {
+
+ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_uart,
+-#if IS_ENABLED(CONFIG_FEC)
++#ifdef MCFFEC_BASE0
+ &mcf_fec0,
++#endif
+ #ifdef MCFFEC_BASE1
+ &mcf_fec1,
+ #endif
+-#endif
+ #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
+ &mcf_qspi,
+ #endif
+diff --git a/arch/m68k/include/asm/mcfgpio.h b/arch/m68k/include/asm/mcfgpio.h
+index 019f244395464d..9c91ecdafc4539 100644
+--- a/arch/m68k/include/asm/mcfgpio.h
++++ b/arch/m68k/include/asm/mcfgpio.h
+@@ -136,7 +136,7 @@ static inline void gpio_free(unsigned gpio)
+ * read-modify-write as well as those controlled by the EPORT and GPIO modules.
+ */
+ #define MCFGPIO_SCR_START 40
+-#elif defined(CONFIGM5441x)
++#elif defined(CONFIG_M5441x)
+ /* The m5441x EPORT doesn't have its own GPIO port, uses PORT C */
+ #define MCFGPIO_SCR_START 0
+ #else
+diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
+index e28eb1c0e0bfb3..dbf88059e47a4d 100644
+--- a/arch/m68k/include/asm/mvme147hw.h
++++ b/arch/m68k/include/asm/mvme147hw.h
+@@ -93,8 +93,8 @@ struct pcc_regs {
+ #define M147_SCC_B_ADDR 0xfffe3000
+ #define M147_SCC_PCLK 5000000
+
+-#define MVME147_IRQ_SCSI_PORT (IRQ_USER+0x45)
+-#define MVME147_IRQ_SCSI_DMA (IRQ_USER+0x46)
++#define MVME147_IRQ_SCSI_PORT (IRQ_USER + 5)
++#define MVME147_IRQ_SCSI_DMA (IRQ_USER + 6)
+
+ /* SCC interrupts, for MVME147 */
+
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index 3cc944df04f65e..f11ef9f1f56fcf 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -13,6 +13,7 @@
+ #include <asm/setup.h>
+
+
++#include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+
+ asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+@@ -22,7 +23,9 @@ static void __ref debug_cons_write(struct console *c,
+ {
+ #if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+ defined(CONFIG_COLDFIRE))
+- if (MACH_IS_MVME16x)
++ if (MACH_IS_MVME147)
++ mvme147_scc_write(c, s, n);
++ else if (MACH_IS_MVME16x)
+ mvme16x_cons_write(c, s, n);
+ else
+ debug_cons_nputs(s, n);
+diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
+index 8b5dc07f0811f2..cc2fb0a83cf0b4 100644
+--- a/arch/m68k/mvme147/config.c
++++ b/arch/m68k/mvme147/config.c
+@@ -32,6 +32,7 @@
+ #include <asm/mvme147hw.h>
+ #include <asm/config.h>
+
++#include "mvme147.h"
+
+ static void mvme147_get_model(char *model);
+ extern void mvme147_sched_init(void);
+@@ -185,3 +186,32 @@ int mvme147_hwclk(int op, struct rtc_time *t)
+ }
+ return 0;
+ }
++
++static void scc_delay(void)
++{
++ __asm__ __volatile__ ("nop; nop;");
++}
++
++static void scc_write(char ch)
++{
++ do {
++ scc_delay();
++ } while (!(in_8(M147_SCC_A_ADDR) & BIT(2)));
++ scc_delay();
++ out_8(M147_SCC_A_ADDR, 8);
++ scc_delay();
++ out_8(M147_SCC_A_ADDR, ch);
++}
++
++void mvme147_scc_write(struct console *co, const char *str, unsigned int count)
++{
++ unsigned long flags;
++
++ local_irq_save(flags);
++ while (count--) {
++ if (*str == '\n')
++ scc_write('\r');
++ scc_write(*str++);
++ }
++ local_irq_restore(flags);
++}
+diff --git a/arch/m68k/mvme147/mvme147.h b/arch/m68k/mvme147/mvme147.h
+new file mode 100644
+index 00000000000000..140bc98b0102aa
+--- /dev/null
++++ b/arch/m68k/mvme147/mvme147.h
+@@ -0,0 +1,6 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++struct console;
++
++/* config.c */
++void mvme147_scc_write(struct console *co, const char *str, unsigned int count);
+diff --git a/arch/microblaze/kernel/microblaze_ksyms.c b/arch/microblaze/kernel/microblaze_ksyms.c
+index c892e173ec990b..a8553f54152b76 100644
+--- a/arch/microblaze/kernel/microblaze_ksyms.c
++++ b/arch/microblaze/kernel/microblaze_ksyms.c
+@@ -16,6 +16,7 @@
+ #include <asm/page.h>
+ #include <linux/ftrace.h>
+ #include <linux/uaccess.h>
++#include <asm/xilinx_mb_manager.h>
+
+ #ifdef CONFIG_FUNCTION_TRACER
+ extern void _mcount(void);
+@@ -46,3 +47,12 @@ extern void __udivsi3(void);
+ EXPORT_SYMBOL(__udivsi3);
+ extern void __umodsi3(void);
+ EXPORT_SYMBOL(__umodsi3);
++
++#ifdef CONFIG_MB_MANAGER
++extern void xmb_manager_register(uintptr_t phys_baseaddr, u32 cr_val,
++ void (*callback)(void *data),
++ void *priv, void (*reset_callback)(void *data));
++EXPORT_SYMBOL(xmb_manager_register);
++extern asmlinkage void xmb_inject_err(void);
++EXPORT_SYMBOL(xmb_inject_err);
++#endif
+diff --git a/arch/microblaze/kernel/prom.c b/arch/microblaze/kernel/prom.c
+index e424c796e297c5..76ac4cfdfb42ce 100644
+--- a/arch/microblaze/kernel/prom.c
++++ b/arch/microblaze/kernel/prom.c
+@@ -18,7 +18,7 @@ void __init early_init_devtree(void *params)
+ {
+ pr_debug(" -> early_init_devtree(%p)\n", params);
+
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ if (!strlen(boot_command_line))
+ strscpy(boot_command_line, cmd_line, COMMAND_LINE_SIZE);
+
+diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h
+index a4374b4cb88fd8..d6ccd534402133 100644
+--- a/arch/mips/include/asm/switch_to.h
++++ b/arch/mips/include/asm/switch_to.h
+@@ -97,7 +97,7 @@ do { \
+ } \
+ } while (0)
+ #else
+-# define __sanitize_fcr31(next)
++# define __sanitize_fcr31(next) do { (void) (next); } while (0)
+ #endif
+
+ /*
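
The mips stub swaps an empty macro for the standard kernel no-op form. A short note on what each piece buys, assuming a config where the real __sanitize_fcr31() is compiled out:

    /* (void)(next) consumes the argument, silencing unused and
     * set-but-not-used warnings in stub configs; do { } while (0)
     * keeps the expansion a single statement, so the macro behaves
     * like a function call inside if/else chains */
    #define __sanitize_fcr31(next) do { (void) (next); } while (0)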
+diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c
+index 6062e6fa589a87..4fd6da0a06c372 100644
+--- a/arch/mips/kernel/prom.c
++++ b/arch/mips/kernel/prom.c
+@@ -41,7 +41,7 @@ char *mips_get_machine_name(void)
+
+ void __init __dt_setup_arch(void *bph)
+ {
+- if (!early_init_dt_scan(bph))
++ if (!early_init_dt_scan(bph, __pa(bph)))
+ return;
+
+ mips_set_machine_name(of_flat_dt_get_machine_name());
+diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
+index 7eeeaf1ff95d26..cda7983e7c18d4 100644
+--- a/arch/mips/kernel/relocate.c
++++ b/arch/mips/kernel/relocate.c
+@@ -337,7 +337,7 @@ void *__init relocate_kernel(void)
+ #if defined(CONFIG_USE_OF)
+ /* Deal with the device tree */
+ fdt = plat_get_fdt();
+- early_init_dt_scan(fdt);
++ early_init_dt_scan(fdt, __pa(fdt));
+ if (boot_command_line[0]) {
+ /* Boot command line was passed in device tree */
+ strscpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+diff --git a/arch/nios2/kernel/prom.c b/arch/nios2/kernel/prom.c
+index 9a8393e6b4a85e..db049249766fc2 100644
+--- a/arch/nios2/kernel/prom.c
++++ b/arch/nios2/kernel/prom.c
+@@ -27,7 +27,7 @@ void __init early_init_devtree(void *params)
+ if (be32_to_cpup((__be32 *)CONFIG_NIOS2_DTB_PHYS_ADDR) ==
+ OF_DT_HEADER) {
+ params = (void *)CONFIG_NIOS2_DTB_PHYS_ADDR;
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ return;
+ }
+ #endif
+@@ -37,5 +37,5 @@ void __init early_init_devtree(void *params)
+ params = (void *)__dtb_start;
+ #endif
+
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ }
+diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
+index 69c0258700b28a..3279ef457c573a 100644
+--- a/arch/openrisc/Kconfig
++++ b/arch/openrisc/Kconfig
+@@ -65,6 +65,9 @@ config STACKTRACE_SUPPORT
+ config LOCKDEP_SUPPORT
+ def_bool y
+
++config FIX_EARLYCON_MEM
++ def_bool y
++
+ menu "Processor type and features"
+
+ choice
+diff --git a/arch/openrisc/include/asm/fixmap.h b/arch/openrisc/include/asm/fixmap.h
+index ecdb98a5839f7c..aaa6a26a3e9215 100644
+--- a/arch/openrisc/include/asm/fixmap.h
++++ b/arch/openrisc/include/asm/fixmap.h
+@@ -26,29 +26,18 @@
+ #include <linux/bug.h>
+ #include <asm/page.h>
+
+-/*
+- * On OpenRISC we use these special fixed_addresses for doing ioremap
+- * early in the boot process before memory initialization is complete.
+- * This is used, in particular, by the early serial console code.
+- *
+- * It's not really 'fixmap', per se, but fits loosely into the same
+- * paradigm.
+- */
+ enum fixed_addresses {
+- /*
+- * FIX_IOREMAP entries are useful for mapping physical address
+- * space before ioremap() is useable, e.g. really early in boot
+- * before kmalloc() is working.
+- */
+-#define FIX_N_IOREMAPS 32
+- FIX_IOREMAP_BEGIN,
+- FIX_IOREMAP_END = FIX_IOREMAP_BEGIN + FIX_N_IOREMAPS - 1,
++ FIX_EARLYCON_MEM_BASE,
+ __end_of_fixed_addresses
+ };
+
+ #define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+ /* FIXADDR_BOTTOM might be a better name here... */
+ #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
++#define FIXMAP_PAGE_IO PAGE_KERNEL_NOCACHE
++
++extern void __set_fixmap(enum fixed_addresses idx,
++ phys_addr_t phys, pgprot_t flags);
+
+ #include <asm-generic/fixmap.h>
+
+diff --git a/arch/openrisc/kernel/prom.c b/arch/openrisc/kernel/prom.c
+index 19e6008bf114c6..e424e9bd12a793 100644
+--- a/arch/openrisc/kernel/prom.c
++++ b/arch/openrisc/kernel/prom.c
+@@ -22,6 +22,6 @@
+
+ void __init early_init_devtree(void *params)
+ {
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ memblock_allow_resize();
+ }
+diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
+index 1dcd78c8f0e99b..d0cb1a0126f95d 100644
+--- a/arch/openrisc/mm/init.c
++++ b/arch/openrisc/mm/init.c
+@@ -207,6 +207,43 @@ void __init mem_init(void)
+ return;
+ }
+
++static int __init map_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
++{
++ p4d_t *p4d;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
++
++ p4d = p4d_offset(pgd_offset_k(va), va);
++ pud = pud_offset(p4d, va);
++ pmd = pmd_offset(pud, va);
++ pte = pte_alloc_kernel(pmd, va);
++
++ if (pte == NULL)
++ return -ENOMEM;
++
++ if (pgprot_val(prot))
++ set_pte_at(&init_mm, va, pte, pfn_pte(pa >> PAGE_SHIFT, prot));
++ else
++ pte_clear(&init_mm, va, pte);
++
++ local_flush_tlb_page(NULL, va);
++ return 0;
++}
++
++void __init __set_fixmap(enum fixed_addresses idx,
++ phys_addr_t phys, pgprot_t prot)
++{
++ unsigned long address = __fix_to_virt(idx);
++
++ if (idx >= __end_of_fixed_addresses) {
++ BUG();
++ return;
++ }
++
++ map_page(address, phys, prot);
++}
++
+ static const pgprot_t protection_map[16] = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY_X,
+diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
+index c91f9c2e61ed25..f8d08eab7db8b0 100644
+--- a/arch/parisc/kernel/ftrace.c
++++ b/arch/parisc/kernel/ftrace.c
+@@ -87,7 +87,7 @@ int ftrace_enable_ftrace_graph_caller(void)
+
+ int ftrace_disable_ftrace_graph_caller(void)
+ {
+- static_key_enable(&ftrace_graph_enable.key);
++ static_key_disable(&ftrace_graph_enable.key);
+ return 0;
+ }
+ #endif
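
The parisc change is a one-word copy-paste fix: the disable path was enabling the static key. The intended pairing, assuming <linux/jump_label.h>:

    static DEFINE_STATIC_KEY_FALSE(ftrace_graph_enable);

    int ftrace_enable_ftrace_graph_caller(void)
    {
            static_key_enable(&ftrace_graph_enable.key);
            return 0;
    }

    int ftrace_disable_ftrace_graph_caller(void)
    {
            static_key_disable(&ftrace_graph_enable.key);   /* was _enable() */
            return 0;
    }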
+diff --git a/arch/powerpc/include/asm/dtl.h b/arch/powerpc/include/asm/dtl.h
+index d6f43d149f8dcb..a5c21bc623cb00 100644
+--- a/arch/powerpc/include/asm/dtl.h
++++ b/arch/powerpc/include/asm/dtl.h
+@@ -1,8 +1,8 @@
+ #ifndef _ASM_POWERPC_DTL_H
+ #define _ASM_POWERPC_DTL_H
+
++#include <linux/rwsem.h>
+ #include <asm/lppaca.h>
+-#include <linux/spinlock_types.h>
+
+ /*
+ * Layout of entries in the hypervisor's dispatch trace log buffer.
+@@ -35,7 +35,7 @@ struct dtl_entry {
+ #define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
+
+ extern struct kmem_cache *dtl_cache;
+-extern rwlock_t dtl_access_lock;
++extern struct rw_semaphore dtl_access_lock;
+
+ extern void register_dtl_buffer(int cpu);
+ extern void alloc_dtl_buffers(unsigned long *time_limit);
+diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
+index ef40c9b6972a6e..a48f54dde4f656 100644
+--- a/arch/powerpc/include/asm/fadump.h
++++ b/arch/powerpc/include/asm/fadump.h
+@@ -19,6 +19,7 @@ extern int is_fadump_active(void);
+ extern int should_fadump_crash(void);
+ extern void crash_fadump(struct pt_regs *, const char *);
+ extern void fadump_cleanup(void);
++void fadump_setup_param_area(void);
+ extern void fadump_append_bootargs(void);
+
+ #else /* CONFIG_FA_DUMP */
+@@ -26,6 +27,7 @@ static inline int is_fadump_active(void) { return 0; }
+ static inline int should_fadump_crash(void) { return 0; }
+ static inline void crash_fadump(struct pt_regs *regs, const char *str) { }
+ static inline void fadump_cleanup(void) { }
++static inline void fadump_setup_param_area(void) { }
+ static inline void fadump_append_bootargs(void) { }
+ #endif /* !CONFIG_FA_DUMP */
+
+@@ -34,4 +36,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
+ int depth, void *data);
+ extern int fadump_reserve_mem(void);
+ #endif
++
++#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
++void fadump_cma_init(void);
++#else
++static inline void fadump_cma_init(void) { }
++#endif
++
+ #endif /* _ASM_POWERPC_FADUMP_H */
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index 2ef9a5f4e5d14c..11065313d4c123 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -684,8 +684,8 @@ int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1);
+ int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu);
+ int kvmhv_nestedv2_set_vpa(struct kvm_vcpu *vcpu, unsigned long vpa);
+
+-int kmvhv_counters_tracepoint_regfunc(void);
+-void kmvhv_counters_tracepoint_unregfunc(void);
++int kvmhv_counters_tracepoint_regfunc(void);
++void kvmhv_counters_tracepoint_unregfunc(void);
+ int kvmhv_get_l2_counters_status(void);
+ void kvmhv_set_l2_counters_status(int cpu, bool status);
+
+diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
+index 50950deedb8734..e3d0e714ff280e 100644
+--- a/arch/powerpc/include/asm/sstep.h
++++ b/arch/powerpc/include/asm/sstep.h
+@@ -173,9 +173,4 @@ int emulate_step(struct pt_regs *regs, ppc_inst_t instr);
+ */
+ extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
+
+-extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+- const void *mem, bool cross_endian);
+-extern void emulate_vsx_store(struct instruction_op *op,
+- const union vsx_reg *reg, void *mem,
+- bool cross_endian);
+ extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);
+diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
+index 7650b6ce14c85a..8d972bc98b55fe 100644
+--- a/arch/powerpc/include/asm/vdso.h
++++ b/arch/powerpc/include/asm/vdso.h
+@@ -25,6 +25,7 @@ int vdso_getcpu_init(void);
+ #ifdef __VDSO64__
+ #define V_FUNCTION_BEGIN(name) \
+ .globl name; \
++ .type name,@function; \
+ name: \
+
+ #define V_FUNCTION_END(name) \
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index af4263594eb2c9..1bee15c013e75f 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -867,7 +867,7 @@ bool __init dt_cpu_ftrs_init(void *fdt)
+ using_dt_cpu_ftrs = false;
+
+ /* Setup and verify the FDT, if it fails we just bail */
+- if (!early_init_dt_verify(fdt))
++ if (!early_init_dt_verify(fdt, __pa(fdt)))
+ return false;
+
+ if (!of_scan_flat_dt(fdt_find_cpu_features, NULL))
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index a612e7513a4f8a..4641de75f7fc1e 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -78,27 +78,23 @@ static struct cma *fadump_cma;
+ * But for some reason even if it fails we still have the memory reservation
+ * with us and we can still continue doing fadump.
+ */
+-static int __init fadump_cma_init(void)
++void __init fadump_cma_init(void)
+ {
+ unsigned long long base, size;
+ int rc;
+
+- if (!fw_dump.fadump_enabled)
+- return 0;
+-
++ if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
++ fw_dump.dump_active)
++ return;
+ /*
+ * Do not use CMA if user has provided fadump=nocma kernel parameter.
+- * Return 1 to continue with fadump old behaviour.
+ */
+- if (fw_dump.nocma)
+- return 1;
++ if (fw_dump.nocma || !fw_dump.boot_memory_size)
++ return;
+
+ base = fw_dump.reserve_dump_area_start;
+ size = fw_dump.boot_memory_size;
+
+- if (!size)
+- return 0;
+-
+ rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
+ if (rc) {
+ pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
+@@ -108,7 +104,7 @@ static int __init fadump_cma_init(void)
+ * blocked from production system usage. Hence return 1,
+ * so that we can continue with fadump.
+ */
+- return 1;
++ return;
+ }
+
+ /*
+@@ -125,10 +121,7 @@ static int __init fadump_cma_init(void)
+ cma_get_size(fadump_cma),
+ (unsigned long)cma_get_base(fadump_cma) >> 20,
+ fw_dump.reserve_dump_area_size);
+- return 1;
+ }
+-#else
+-static int __init fadump_cma_init(void) { return 1; }
+ #endif /* CONFIG_CMA */
+
+ /*
+@@ -143,7 +136,7 @@ void __init fadump_append_bootargs(void)
+ if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area)
+ return;
+
+- if (fw_dump.param_area >= fw_dump.boot_mem_top) {
++ if (fw_dump.param_area < fw_dump.boot_mem_top) {
+ if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) {
+ pr_warn("WARNING: Can't use additional parameters area!\n");
+ fw_dump.param_area = 0;
+@@ -637,8 +630,6 @@ int __init fadump_reserve_mem(void)
+
+ pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
+ (size >> 20), base, (memblock_phys_mem_size() >> 20));
+-
+- ret = fadump_cma_init();
+ }
+
+ return ret;
+@@ -1586,6 +1577,12 @@ static void __init fadump_init_files(void)
+ return;
+ }
+
++ if (fw_dump.param_area) {
++ rc = sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr);
++ if (rc)
++ pr_err("unable to create bootargs_append sysfs file (%d)\n", rc);
++ }
++
+ debugfs_create_file("fadump_region", 0444, arch_debugfs_dir, NULL,
+ &fadump_region_fops);
+
+@@ -1740,7 +1737,7 @@ static void __init fadump_process(void)
+ * Reserve memory to store additional parameters to be passed
+ * for fadump/capture kernel.
+ */
+-static void __init fadump_setup_param_area(void)
++void __init fadump_setup_param_area(void)
+ {
+ phys_addr_t range_start, range_end;
+
+@@ -1748,7 +1745,7 @@ static void __init fadump_setup_param_area(void)
+ return;
+
+ /* This memory can't be used by PFW or bootloader as it is shared across kernels */
+- if (radix_enabled()) {
++ if (early_radix_enabled()) {
+ /*
+ * Anywhere in the upper half should be good enough as all memory
+ * is accessible in real mode.
+@@ -1776,12 +1773,12 @@ static void __init fadump_setup_param_area(void)
+ COMMAND_LINE_SIZE,
+ range_start,
+ range_end);
+- if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) {
++ if (!fw_dump.param_area) {
+ pr_warn("WARNING: Could not setup area to pass additional parameters!\n");
+ return;
+ }
+
+- memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE);
++ memset((void *)fw_dump.param_area, 0, COMMAND_LINE_SIZE);
+ }
+
+ /*
+@@ -1807,7 +1804,6 @@ int __init setup_fadump(void)
+ }
+ /* Initialize the kernel dump memory structure and register with f/w */
+ else if (fw_dump.reserve_dump_area_size) {
+- fadump_setup_param_area();
+ fw_dump.ops->fadump_init_mem_struct(&fw_dump);
+ register_fadump();
+ }
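
The net effect of the fadump rework is ordering: the parameter area is now reserved from early_init_devtree(), while the CMA carve-out moves to setup_arch() after initmem_init(), where pageblock_order is valid. A hedged sketch of the CMA call at the heart of fadump_cma_init():

    #include <linux/cma.h>

    static struct cma *fadump_cma;

    /* base/size were memblock-reserved earlier; must run after
     * initmem_init() so pageblock_order is initialised */
    if (cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma))
            pr_err("Failed to init cma area for firmware-assisted dump\n");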
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 0be07ed407c703..e0059842a1c64b 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -791,7 +791,7 @@ void __init early_init_devtree(void *params)
+ DBG(" -> early_init_devtree(%px)\n", params);
+
+ /* Too early to BUG_ON(), do it by hand */
+- if (!early_init_dt_verify(params))
++ if (!early_init_dt_verify(params, __pa(params)))
+ panic("BUG: Failed verifying flat device tree, bad version?");
+
+ of_scan_flat_dt(early_init_dt_scan_model, NULL);
+@@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
+
+ mmu_early_init_devtree();
+
++ /* Setup param area for passing additional parameters to fadump capture kernel. */
++ fadump_setup_param_area();
++
+ #ifdef CONFIG_PPC_POWERNV
+ /* Scan and build the list of machine check recoverable ranges */
+ of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index 943430077375a4..b6b01502e50472 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -997,9 +997,11 @@ void __init setup_arch(char **cmdline_p)
+ initmem_init();
+
+ /*
+- * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+- * be called after initmem_init(), so that pageblock_order is initialised.
++ * Reserve large chunks of memory for use by CMA for fadump, KVM and
++ * hugetlb. These must be called after initmem_init(), so that
++ * pageblock_order is initialised.
+ */
++ fadump_cma_init();
+ kvm_cma_reserve();
+ gigantic_hugetlb_cma_reserve();
+
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 22f83fbbc762ac..1edc7cd68c10d0 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -920,6 +920,7 @@ static int __init disable_hardlockup_detector(void)
+ hardlockup_detector_disable();
+ #else
+ if (firmware_has_feature(FW_FEATURE_LPAR)) {
++ check_kvm_guest();
+ if (is_kvm_guest())
+ hardlockup_detector_disable();
+ }
+diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
+index 9738adabeb1fee..dc65c139115772 100644
+--- a/arch/powerpc/kexec/file_load_64.c
++++ b/arch/powerpc/kexec/file_load_64.c
+@@ -736,13 +736,18 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+ if (dn) {
+ u64 val;
+
+- of_property_read_u64(dn, "opal-base-address", &val);
++ ret = of_property_read_u64(dn, "opal-base-address", &val);
++ if (ret)
++ goto out;
++
+ ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
+ sizeof(val), false);
+ if (ret)
+ goto out;
+
+- of_property_read_u64(dn, "opal-entry-address", &val);
++ ret = of_property_read_u64(dn, "opal-entry-address", &val);
++ if (ret)
++ goto out;
+ ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
+ sizeof(val), false);
+ }
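
The kexec hunk closes a use-of-uninitialised hole: of_property_read_u64() leaves its output untouched on failure, so the return value must gate any use of val. The pattern in isolation:

    u64 val;

    ret = of_property_read_u64(dn, "opal-base-address", &val);
    if (ret)
            goto out;       /* property absent or malformed: val is garbage */

    ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
                                         sizeof(val), false);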
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index ad8dc4ccdaab9e..57b6c1ba84d47e 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4154,7 +4154,7 @@ void kvmhv_set_l2_counters_status(int cpu, bool status)
+ lppaca_of(cpu).l2_counters_enable = 0;
+ }
+
+-int kmvhv_counters_tracepoint_regfunc(void)
++int kvmhv_counters_tracepoint_regfunc(void)
+ {
+ int cpu;
+
+@@ -4164,7 +4164,7 @@ int kmvhv_counters_tracepoint_regfunc(void)
+ return 0;
+ }
+
+-void kmvhv_counters_tracepoint_unregfunc(void)
++void kvmhv_counters_tracepoint_unregfunc(void)
+ {
+ int cpu;
+
+@@ -4309,6 +4309,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
+ }
+ hvregs.hdec_expiry = time_limit;
+
++ /*
++	 * hvregs carries the doorbell status, so zero it here; this
++	 * lets us receive doorbells while H_ENTER_NESTED is in
++	 * progress for this vCPU.
++ */
++
++ if (vcpu->arch.doorbell_request)
++ vcpu->arch.doorbell_request = 0;
++
+ /*
+ * When setting DEC, we must always deal with irq_work_raise
+ * via NMI vs setting DEC. The problem occurs right as we
+@@ -4912,7 +4921,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ lpcr &= ~LPCR_MER;
+ }
+ } else if (vcpu->arch.pending_exceptions ||
+- vcpu->arch.doorbell_request ||
+ xive_interrupt_pending(vcpu)) {
+ vcpu->arch.ret = RESUME_HOST;
+ goto out;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 05f5220960c63b..125440a606ee3b 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -32,7 +32,7 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ hr->pcr = vc->pcr | PCR_MASK;
+- hr->dpdes = vc->dpdes;
++ hr->dpdes = vcpu->arch.doorbell_request;
+ hr->hfscr = vcpu->arch.hfscr;
+ hr->tb_offset = vc->tb_offset;
+ hr->dawr0 = vcpu->arch.dawr0;
+@@ -105,7 +105,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+- hr->dpdes = vc->dpdes;
++ hr->dpdes = vcpu->arch.doorbell_request;
+ hr->purr = vcpu->arch.purr;
+ hr->spurr = vcpu->arch.spurr;
+ hr->ic = vcpu->arch.ic;
+@@ -143,7 +143,7 @@ static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ vc->pcr = hr->pcr | PCR_MASK;
+- vc->dpdes = hr->dpdes;
++ vcpu->arch.doorbell_request = hr->dpdes;
+ vcpu->arch.hfscr = hr->hfscr;
+ vcpu->arch.dawr0 = hr->dawr0;
+ vcpu->arch.dawrx0 = hr->dawrx0;
+@@ -170,7 +170,13 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+- vc->dpdes = hr->dpdes;
++ /*
++ * This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled.
++ * Make sure we preserve the doorbell if it was either:
++ * a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1)
++ * b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1)
++ */
++ vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes;
+ vcpu->arch.hfscr = hr->hfscr;
+ vcpu->arch.purr = hr->purr;
+ vcpu->arch.spurr = hr->spurr;
+diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
+index 77ebc724e6cdf4..35fccaa575cc15 100644
+--- a/arch/powerpc/kvm/trace_hv.h
++++ b/arch/powerpc/kvm/trace_hv.h
+@@ -538,7 +538,7 @@ TRACE_EVENT_FN_COND(kvmppc_vcpu_stats,
+ TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns l2_runtime=%llu ns",
+ __entry->vcpu_id, __entry->l1_to_l2_cs,
+ __entry->l2_to_l1_cs, __entry->l2_runtime),
+- kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc
++ kvmhv_counters_tracepoint_regfunc, kvmhv_counters_tracepoint_unregfunc
+ );
+ #endif
+ #endif /* _TRACE_KVM_HV_H */
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index e65f3fb68d06ba..ac3ee19531d8ac 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -780,8 +780,8 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea,
+ #endif /* __powerpc64 */
+
+ #ifdef CONFIG_VSX
+-void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+- const void *mem, bool rev)
++static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
++ const void *mem, bool rev)
+ {
+ int size, read_size;
+ int i, j;
+@@ -863,11 +863,9 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+ break;
+ }
+ }
+-EXPORT_SYMBOL_GPL(emulate_vsx_load);
+-NOKPROBE_SYMBOL(emulate_vsx_load);
+
+-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+- void *mem, bool rev)
++static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
++ void *mem, bool rev)
+ {
+ int size, write_size;
+ int i, j;
+@@ -955,8 +953,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+ break;
+ }
+ }
+-EXPORT_SYMBOL_GPL(emulate_vsx_store);
+-NOKPROBE_SYMBOL(emulate_vsx_store);
+
+ static nokprobe_inline int do_vsx_load(struct instruction_op *op,
+ unsigned long ea, struct pt_regs *regs,
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 81c77ddce2e30a..c156fe0d53c378 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -439,10 +439,16 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
+ /*
+ * The kernel should never take an execute fault nor should it
+ * take a page fault to a kernel address or a page fault to a user
+- * address outside of dedicated places
++ * address outside of dedicated places.
++ *
++ * Rather than kfence directly reporting false negatives, search whether
++ * the NIP belongs to the fixup table for cases where fault could come
++ * from functions like copy_from_kernel_nofault().
+ */
+ if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) {
+- if (kfence_handle_page_fault(address, is_write, regs))
++ if (is_kfence_address((void *)address) &&
++ !search_exception_tables(instruction_pointer(regs)) &&
++ kfence_handle_page_fault(address, is_write, regs))
+ return 0;
+
+ return SIGSEGV;
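
The powerpc fault change stops KFENCE from reporting faults that are allowed to happen. A hedged example of the kind of caller the exception-table check protects, where maybe_bad_ptr is a stand-in for whatever address is being probed:

    /* copy_from_kernel_nofault() may touch a KFENCE-protected page;
     * the access is fixed up via the exception table and must not be
     * reported as a KFENCE error */
    char buf[8];

    if (copy_from_kernel_nofault(buf, maybe_bad_ptr, sizeof(buf)))
            pr_debug("probe faulted and was fixed up\n");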
+diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
+index 8cb9d36ea49159..f293588b8c7b51 100644
+--- a/arch/powerpc/platforms/pseries/dtl.c
++++ b/arch/powerpc/platforms/pseries/dtl.c
+@@ -191,7 +191,7 @@ static int dtl_enable(struct dtl *dtl)
+ return -EBUSY;
+
+ /* ensure there are no other conflicting dtl users */
+- if (!read_trylock(&dtl_access_lock))
++ if (!down_read_trylock(&dtl_access_lock))
+ return -EBUSY;
+
+ n_entries = dtl_buf_entries;
+@@ -199,7 +199,7 @@ static int dtl_enable(struct dtl *dtl)
+ if (!buf) {
+ printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
+ __func__, dtl->cpu);
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ return -ENOMEM;
+ }
+
+@@ -217,7 +217,7 @@ static int dtl_enable(struct dtl *dtl)
+ spin_unlock(&dtl->lock);
+
+ if (rc) {
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ kmem_cache_free(dtl_cache, buf);
+ }
+
+@@ -232,7 +232,7 @@ static void dtl_disable(struct dtl *dtl)
+ dtl->buf = NULL;
+ dtl->buf_entries = 0;
+ spin_unlock(&dtl->lock);
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ }
+
+ /* file interface */
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index c1d8bee8f7018c..bb09990eec309a 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -169,7 +169,7 @@ struct vcpu_dispatch_data {
+ */
+ #define NR_CPUS_H NR_CPUS
+
+-DEFINE_RWLOCK(dtl_access_lock);
++DECLARE_RWSEM(dtl_access_lock);
+ static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data);
+ static DEFINE_PER_CPU(u64, dtl_entry_ridx);
+ static DEFINE_PER_CPU(struct dtl_worker, dtl_workers);
+@@ -463,7 +463,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
+ {
+ int rc = 0, state;
+
+- if (!write_trylock(&dtl_access_lock)) {
++ if (!down_write_trylock(&dtl_access_lock)) {
+ rc = -EBUSY;
+ goto out;
+ }
+@@ -479,7 +479,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
+ pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n");
+ free_dtl_buffers(time_limit);
+ reset_global_dtl_mask();
+- write_unlock(&dtl_access_lock);
++ up_write(&dtl_access_lock);
+ rc = -EINVAL;
+ goto out;
+ }
+@@ -494,7 +494,7 @@ static void dtl_worker_disable(unsigned long *time_limit)
+ cpuhp_remove_state(dtl_worker_state);
+ free_dtl_buffers(time_limit);
+ reset_global_dtl_mask();
+- write_unlock(&dtl_access_lock);
++ up_write(&dtl_access_lock);
+ }
+
+ static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
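
Across dtl.c and lpar.c, dtl_access_lock becomes a rw_semaphore so holders may sleep: readers are the per-CPU dtl debugfs users, the single writer is the vcpudispatch_stats path. The resulting trylock shape:

    static DECLARE_RWSEM(dtl_access_lock);

    /* reader side: one DTL buffer */
    if (!down_read_trylock(&dtl_access_lock))
            return -EBUSY;
    /* ... register and use the buffer ... */
    up_read(&dtl_access_lock);

    /* writer side: global vcpudispatch stats */
    if (!down_write_trylock(&dtl_access_lock))
            return -EBUSY;
    /* ... */
    up_write(&dtl_access_lock);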
+diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
+index 4a595493d28ae3..b1667ed05f9882 100644
+--- a/arch/powerpc/platforms/pseries/plpks.c
++++ b/arch/powerpc/platforms/pseries/plpks.c
+@@ -683,7 +683,7 @@ void __init plpks_early_init_devtree(void)
+ out:
+ fdt_nop_property(fdt, chosen_node, "ibm,plpks-pw");
+ // Since we've cleared the password, we must update the FDT checksum
+- early_init_dt_verify(fdt);
++ early_init_dt_verify(fdt, __pa(fdt));
+ }
+
+ static __init int pseries_plpks_init(void)
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index 45f9c1171a486a..dfa5cdddd3671b 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -8,6 +8,7 @@
+
+ #include <linux/bitmap.h>
+ #include <linux/jump_label.h>
++#include <linux/workqueue.h>
+ #include <asm/hwcap.h>
+ #include <asm/alternative-macros.h>
+ #include <asm/errno.h>
+@@ -60,6 +61,7 @@ void riscv_user_isa_enable(void);
+
+ #if defined(CONFIG_RISCV_MISALIGNED)
+ bool check_unaligned_access_emulated_all_cpus(void);
++void check_unaligned_access_emulated(struct work_struct *work __always_unused);
+ void unaligned_emulation_finish(void);
+ bool unaligned_ctl_available(void);
+ DECLARE_PER_CPU(long, misaligned_access_speed);
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index a2cde65b69e950..26c886db4fb3d1 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -227,7 +227,7 @@ static void __init init_resources(void)
+ static void __init parse_dtb(void)
+ {
+ /* Early scan of device tree from init memory */
+- if (early_init_dt_scan(dtb_early_va)) {
++ if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) {
+ const char *name = of_flat_dt_get_machine_name();
+
+ if (name) {
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 1b9867136b6100..9a80a12f6b48f2 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -524,11 +524,11 @@ int handle_misaligned_store(struct pt_regs *regs)
+ return 0;
+ }
+
+-static bool check_unaligned_access_emulated(int cpu)
++void check_unaligned_access_emulated(struct work_struct *work __always_unused)
+ {
++ int cpu = smp_processor_id();
+ long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu);
+ unsigned long tmp_var, tmp_val;
+- bool misaligned_emu_detected;
+
+ *mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
+
+@@ -536,19 +536,16 @@ static bool check_unaligned_access_emulated(int cpu)
+ " "REG_L" %[tmp], 1(%[ptr])\n"
+ : [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
+
+- misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED);
+ /*
+	 * If unaligned_ctl is already set, this means that we detected that all
+	 * CPUs use emulated misaligned accesses at boot time. If that changed
+	 * when hotplugging a new CPU, this is something we don't handle.
+ */
+- if (unlikely(unaligned_ctl && !misaligned_emu_detected)) {
++ if (unlikely(unaligned_ctl && (*mas_ptr != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED))) {
+ pr_crit("CPU misaligned accesses non homogeneous (expected all emulated)\n");
+ while (true)
+ cpu_relax();
+ }
+-
+- return misaligned_emu_detected;
+ }
+
+ bool check_unaligned_access_emulated_all_cpus(void)
+@@ -560,8 +557,11 @@ bool check_unaligned_access_emulated_all_cpus(void)
+ * accesses emulated since tasks requesting such control can run on any
+ * CPU.
+ */
++ schedule_on_each_cpu(check_unaligned_access_emulated);
++
+ for_each_online_cpu(cpu)
+- if (!check_unaligned_access_emulated(cpu))
++ if (per_cpu(misaligned_access_speed, cpu)
++ != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED)
+ return false;
+
+ unaligned_ctl = true;
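
The RISC-V rework leans on schedule_on_each_cpu(), which runs the callback from a workqueue on every online CPU and waits for completion, so the probe is guaranteed to execute on the CPU it measures. The shape of the API:

    #include <linux/workqueue.h>

    static void check_unaligned_access_emulated(struct work_struct *work)
    {
            int cpu = smp_processor_id();   /* pinned to each CPU in turn */
            /* ... perform the misaligned-access probe for this cpu ... */
    }

    /* synchronous: returns only after the callback ran everywhere */
    schedule_on_each_cpu(check_unaligned_access_emulated);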
+diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
+index 160628a2116de4..f3508cc54f91ae 100644
+--- a/arch/riscv/kernel/unaligned_access_speed.c
++++ b/arch/riscv/kernel/unaligned_access_speed.c
+@@ -191,6 +191,7 @@ static int riscv_online_cpu(unsigned int cpu)
+ if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
+ goto exit;
+
++ check_unaligned_access_emulated(NULL);
+ buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+ if (!buf) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c
+index da6ff1bade0df5..f59d1c0c8c43a7 100644
+--- a/arch/riscv/kvm/aia_aplic.c
++++ b/arch/riscv/kvm/aia_aplic.c
+@@ -143,7 +143,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
+ if (sm == APLIC_SOURCECFG_SM_LEVEL_HIGH ||
+ sm == APLIC_SOURCECFG_SM_LEVEL_LOW) {
+ if (!pending)
+- goto skip_write_pending;
++ goto noskip_write_pending;
+ if ((irqd->state & APLIC_IRQ_STATE_INPUT) &&
+ sm == APLIC_SOURCECFG_SM_LEVEL_LOW)
+ goto skip_write_pending;
+@@ -152,6 +152,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
+ goto skip_write_pending;
+ }
+
++noskip_write_pending:
+ if (pending)
+ irqd->state |= APLIC_IRQ_STATE_PENDING;
+ else
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index 7de128be8db9bc..6e704ed86a83a9 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -486,19 +486,22 @@ void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu)
+ struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
+ const struct kvm_riscv_sbi_extension_entry *entry;
+ const struct kvm_vcpu_sbi_extension *ext;
+- int i;
++ int idx, i;
+
+ for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
+ entry = &sbi_ext[i];
+ ext = entry->ext_ptr;
++ idx = entry->ext_idx;
++
++ if (idx < 0 || idx >= ARRAY_SIZE(scontext->ext_status))
++ continue;
+
+ if (ext->probe && !ext->probe(vcpu)) {
+- scontext->ext_status[entry->ext_idx] =
+- KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE;
++ scontext->ext_status[idx] = KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE;
+ continue;
+ }
+
+- scontext->ext_status[entry->ext_idx] = ext->default_disabled ?
++ scontext->ext_status[idx] = ext->default_disabled ?
+ KVM_RISCV_SBI_EXT_STATUS_DISABLED :
+ KVM_RISCV_SBI_EXT_STATUS_ENABLED;
+ }
+diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h
+index 715bcf8fb69a51..5f5b1aa6c23312 100644
+--- a/arch/s390/include/asm/facility.h
++++ b/arch/s390/include/asm/facility.h
+@@ -88,7 +88,7 @@ static __always_inline bool test_facility(unsigned long nr)
+ return __test_facility(nr, &stfle_fac_list);
+ }
+
+-static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
++static inline unsigned long __stfle_asm(u64 *fac_list, int size)
+ {
+ unsigned long reg0 = size - 1;
+
+@@ -96,7 +96,7 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+ " lgr 0,%[reg0]\n"
+ " .insn s,0xb2b00000,%[list]\n" /* stfle */
+ " lgr %[reg0],0\n"
+- : [reg0] "+&d" (reg0), [list] "+Q" (*stfle_fac_list)
++ : [reg0] "+&d" (reg0), [list] "+Q" (*fac_list)
+ :
+ : "memory", "cc", "0");
+ return reg0;
+@@ -104,10 +104,10 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+
+ /**
+ * stfle - Store facility list extended
+- * @stfle_fac_list: array where facility list can be stored
++ * @fac_list: array where facility list can be stored
+ * @size: size of passed in array in double words
+ */
+-static inline void __stfle(u64 *stfle_fac_list, int size)
++static inline void __stfle(u64 *fac_list, int size)
+ {
+ unsigned long nr;
+ u32 stfl_fac_list;
+@@ -116,20 +116,20 @@ static inline void __stfle(u64 *stfle_fac_list, int size)
+ " stfl 0(0)\n"
+ : "=m" (get_lowcore()->stfl_fac_list));
+ stfl_fac_list = get_lowcore()->stfl_fac_list;
+- memcpy(stfle_fac_list, &stfl_fac_list, 4);
++ memcpy(fac_list, &stfl_fac_list, 4);
+ nr = 4; /* bytes stored by stfl */
+ if (stfl_fac_list & 0x01000000) {
+ /* More facility bits available with stfle */
+- nr = __stfle_asm(stfle_fac_list, size);
++ nr = __stfle_asm(fac_list, size);
+ nr = min_t(unsigned long, (nr + 1) * 8, size * 8);
+ }
+- memset((char *) stfle_fac_list + nr, 0, size * 8 - nr);
++ memset((char *)fac_list + nr, 0, size * 8 - nr);
+ }
+
+-static inline void stfle(u64 *stfle_fac_list, int size)
++static inline void stfle(u64 *fac_list, int size)
+ {
+ preempt_disable();
+- __stfle(stfle_fac_list, size);
++ __stfle(fac_list, size);
+ preempt_enable();
+ }
+
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 9d920ced604754..30b20ce9a70033 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -96,7 +96,6 @@ struct zpci_bar_struct {
+ u8 size; /* order 2 exponent */
+ };
+
+-struct s390_domain;
+ struct kvm_zdev;
+
+ #define ZPCI_FUNCTIONS_PER_BUS 256
+@@ -181,9 +180,10 @@ struct zpci_dev {
+ struct dentry *debugfs_dev;
+
+ /* IOMMU and passthrough */
+- struct s390_domain *s390_domain; /* s390 IOMMU domain data */
++ struct iommu_domain *s390_domain; /* attached IOMMU domain */
+ struct kvm_zdev *kzdev;
+ struct mutex kzdev_lock;
++ spinlock_t dom_lock; /* protect s390_domain change */
+ };
+
+ static inline bool zdev_enabled(struct zpci_dev *zdev)
+diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
+index 06fbabe2f66c98..cb4cc0f59012f7 100644
+--- a/arch/s390/include/asm/set_memory.h
++++ b/arch/s390/include/asm/set_memory.h
+@@ -62,5 +62,6 @@ __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
+
+ int set_direct_map_invalid_noflush(struct page *page);
+ int set_direct_map_default_noflush(struct page *page);
++bool kernel_page_present(struct page *page);
+
+ #endif
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 5b765e3ccf0cad..3317f4878eaa70 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -759,7 +759,6 @@ static int __hw_perf_event_init(struct perf_event *event)
+ reserve_pmc_hardware();
+ refcount_set(&num_events, 1);
+ }
+- mutex_unlock(&pmc_reserve_mutex);
+ event->destroy = hw_perf_event_destroy;
+
+ /* Access per-CPU sampling information (query sampling info) */
+@@ -848,6 +847,7 @@ static int __hw_perf_event_init(struct perf_event *event)
+ if (is_default_overflow_handler(event))
+ event->overflow_handler = cpumsf_output_event_pid;
+ out:
++ mutex_unlock(&pmc_reserve_mutex);
+ return err;
+ }
+
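
Moving mutex_unlock() under the out: label widens pmc_reserve_mutex to cover the whole of __hw_perf_event_init() and leaves a single unlock site that every error branch funnels through. The shape, with reserve_and_probe()/configure_sampling() as stand-ins for the real init steps:

    mutex_lock(&pmc_reserve_mutex);
    err = reserve_and_probe();      /* stand-in */
    if (err)
            goto out;
    err = configure_sampling();     /* stand-in */
    out:
    mutex_unlock(&pmc_reserve_mutex);
    return err;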
+diff --git a/arch/s390/kernel/syscalls/Makefile b/arch/s390/kernel/syscalls/Makefile
+index 1bb78b9468e8a9..e85c14f9058b92 100644
+--- a/arch/s390/kernel/syscalls/Makefile
++++ b/arch/s390/kernel/syscalls/Makefile
+@@ -12,7 +12,7 @@ kapi-hdrs-y := $(kapi)/unistd_nr.h
+ uapi-hdrs-y := $(uapi)/unistd_32.h
+ uapi-hdrs-y += $(uapi)/unistd_64.h
+
+-targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
++targets += $(addprefix ../../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
+
+ PHONY += kapi uapi
+
+diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
+index 5f805ad42d4c3f..aec9eb16b6f7be 100644
+--- a/arch/s390/mm/pageattr.c
++++ b/arch/s390/mm/pageattr.c
+@@ -406,6 +406,21 @@ int set_direct_map_default_noflush(struct page *page)
+ return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF);
+ }
+
++bool kernel_page_present(struct page *page)
++{
++ unsigned long addr;
++ unsigned int cc;
++
++ addr = (unsigned long)page_address(page);
++ asm volatile(
++ " lra %[addr],0(%[addr])\n"
++ " ipm %[cc]\n"
++ : [cc] "=d" (cc), [addr] "+a" (addr)
++ :
++ : "cc");
++ return (cc >> 28) == 0;
++}
++
+ #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+
+ static void ipte_range(pte_t *pte, unsigned long address, int nr)
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index bd9624c20b8020..635fd8f2acbaa2 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -160,6 +160,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ u64 req = ZPCI_CREATE_REQ(zdev->fh, 0, ZPCI_MOD_FC_SET_MEASURE);
+ struct zpci_iommu_ctrs *ctrs;
+ struct zpci_fib fib = {0};
++ unsigned long flags;
+ u8 cc, status;
+
+ if (zdev->fmb || sizeof(*zdev->fmb) < zdev->fmb_length)
+@@ -171,6 +172,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ WARN_ON((u64) zdev->fmb & 0xf);
+
+ /* reset software counters */
++ spin_lock_irqsave(&zdev->dom_lock, flags);
+ ctrs = zpci_get_iommu_ctrs(zdev);
+ if (ctrs) {
+ atomic64_set(&ctrs->mapped_pages, 0);
+@@ -179,6 +181,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ atomic64_set(&ctrs->sync_map_rpcits, 0);
+ atomic64_set(&ctrs->sync_rpcits, 0);
+ }
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
+
+
+ fib.fmb_addr = virt_to_phys(zdev->fmb);
+@@ -914,10 +917,8 @@ void zpci_device_reserved(struct zpci_dev *zdev)
+ void zpci_release_device(struct kref *kref)
+ {
+ struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+- int ret;
+
+- if (zdev->has_hp_slot)
+- zpci_exit_slot(zdev);
++ WARN_ON(zdev->state != ZPCI_FN_STATE_RESERVED);
+
+ if (zdev->zbus->bus)
+ zpci_bus_remove_device(zdev, false);
+@@ -925,28 +926,14 @@ void zpci_release_device(struct kref *kref)
+ if (zdev_enabled(zdev))
+ zpci_disable_device(zdev);
+
+- switch (zdev->state) {
+- case ZPCI_FN_STATE_CONFIGURED:
+- ret = sclp_pci_deconfigure(zdev->fid);
+- zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret);
+- fallthrough;
+- case ZPCI_FN_STATE_STANDBY:
+- if (zdev->has_hp_slot)
+- zpci_exit_slot(zdev);
+- spin_lock(&zpci_list_lock);
+- list_del(&zdev->entry);
+- spin_unlock(&zpci_list_lock);
+- zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
+- fallthrough;
+- case ZPCI_FN_STATE_RESERVED:
+- if (zdev->has_resources)
+- zpci_cleanup_bus_resources(zdev);
+- zpci_bus_device_unregister(zdev);
+- zpci_destroy_iommu(zdev);
+- fallthrough;
+- default:
+- break;
+- }
++ if (zdev->has_hp_slot)
++ zpci_exit_slot(zdev);
++
++ if (zdev->has_resources)
++ zpci_cleanup_bus_resources(zdev);
++
++ zpci_bus_device_unregister(zdev);
++ zpci_destroy_iommu(zdev);
+ zpci_dbg(3, "rem fid:%x\n", zdev->fid);
+ kfree_rcu(zdev, rcu);
+ }
+diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
+index 2cb5043a997d53..38014206c16b96 100644
+--- a/arch/s390/pci/pci_debug.c
++++ b/arch/s390/pci/pci_debug.c
+@@ -71,17 +71,23 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length,
+
+ static void pci_sw_counter_show(struct seq_file *m)
+ {
+- struct zpci_iommu_ctrs *ctrs = zpci_get_iommu_ctrs(m->private);
++ struct zpci_dev *zdev = m->private;
++ struct zpci_iommu_ctrs *ctrs;
+ atomic64_t *counter;
++ unsigned long flags;
+ int i;
+
++ spin_lock_irqsave(&zdev->dom_lock, flags);
++ ctrs = zpci_get_iommu_ctrs(m->private);
+ if (!ctrs)
+- return;
++ goto unlock;
+
+ counter = &ctrs->mapped_pages;
+ for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
+ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+ atomic64_read(counter));
++unlock:
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
+ }
+
+ static int pci_perf_show(struct seq_file *m, void *v)
+diff --git a/arch/sh/kernel/cpu/proc.c b/arch/sh/kernel/cpu/proc.c
+index a306bcd6b34130..5f6d0e827baeb0 100644
+--- a/arch/sh/kernel/cpu/proc.c
++++ b/arch/sh/kernel/cpu/proc.c
+@@ -132,7 +132,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+
+ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+- return *pos < NR_CPUS ? cpu_data + *pos : NULL;
++ return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
+ }
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index 620e5cf8ae1e74..f2b6f16a46b85d 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -255,7 +255,7 @@ void __ref sh_fdt_init(phys_addr_t dt_phys)
+ dt_virt = phys_to_virt(dt_phys);
+ #endif
+
+- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
++ if (!dt_virt || !early_init_dt_scan(dt_virt, __pa(dt_virt))) {
+ pr_crit("Error: invalid device tree blob"
+ " at physical address %p\n", (void *)dt_phys);
+
+diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
+index 77c4afb8ab9071..75d04fb4994a06 100644
+--- a/arch/um/drivers/net_kern.c
++++ b/arch/um/drivers/net_kern.c
+@@ -336,7 +336,7 @@ static struct platform_driver uml_net_driver = {
+
+ static void net_device_release(struct device *dev)
+ {
+- struct uml_net *device = dev_get_drvdata(dev);
++ struct uml_net *device = container_of(dev, struct uml_net, pdev.dev);
+ struct net_device *netdev = device->dev;
+ struct uml_net_private *lp = netdev_priv(netdev);
+
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index 7f28ec1929dc0b..2bfb17373244bb 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -779,7 +779,7 @@ static int ubd_open_dev(struct ubd *ubd_dev)
+
+ static void ubd_device_release(struct device *dev)
+ {
+- struct ubd *ubd_dev = dev_get_drvdata(dev);
++ struct ubd *ubd_dev = container_of(dev, struct ubd, pdev.dev);
+
+ blk_mq_free_tag_set(&ubd_dev->tag_set);
+ *ubd_dev = ((struct ubd) DEFAULT_UBD);
+@@ -898,6 +898,8 @@ static int ubd_add(int n, char **error_out)
+ if (err)
+ goto out_cleanup_disk;
+
++ ubd_dev->disk = disk;
++
+ return 0;
+
+ out_cleanup_disk:
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index c992da83268dd8..64c09db392c16a 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -815,7 +815,8 @@ static struct platform_driver uml_net_driver = {
+
+ static void vector_device_release(struct device *dev)
+ {
+- struct vector_device *device = dev_get_drvdata(dev);
++ struct vector_device *device =
++ container_of(dev, struct vector_device, pdev.dev);
+ struct net_device *netdev = device->dev;
+
+ list_del(&device->list);
+diff --git a/arch/um/kernel/dtb.c b/arch/um/kernel/dtb.c
+index 4954188a6a0908..8d78ced9e08f6d 100644
+--- a/arch/um/kernel/dtb.c
++++ b/arch/um/kernel/dtb.c
+@@ -17,7 +17,7 @@ void uml_dtb_init(void)
+
+ area = uml_load_file(dtb, &size);
+ if (area) {
+- if (!early_init_dt_scan(area)) {
++ if (!early_init_dt_scan(area, __pa(area))) {
+ pr_err("invalid DTB %s\n", dtb);
+ memblock_free(area, size);
+ return;
+diff --git a/arch/um/kernel/physmem.c b/arch/um/kernel/physmem.c
+index fb2adfb499452b..ee693e0b2b58bf 100644
+--- a/arch/um/kernel/physmem.c
++++ b/arch/um/kernel/physmem.c
+@@ -81,10 +81,10 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
+ unsigned long len, unsigned long long highmem)
+ {
+ unsigned long reserve = reserve_end - start;
+- long map_size = len - reserve;
++ unsigned long map_size = len - reserve;
+ int err;
+
+- if(map_size <= 0) {
++ if (len <= reserve) {
+ os_warn("Too few physical memory! Needed=%lu, given=%lu\n",
+ reserve, len);
+ exit(1);
+@@ -95,7 +95,7 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
+ err = os_map_memory((void *) reserve_end, physmem_fd, reserve,
+ map_size, 1, 1, 1);
+ if (err < 0) {
+- os_warn("setup_physmem - mapping %ld bytes of memory at 0x%p "
++ os_warn("setup_physmem - mapping %lu bytes of memory at 0x%p "
+ "failed - errno = %d\n", map_size,
+ (void *) reserve_end, err);
+ exit(1);
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index be2856af6d4c31..9c6cf03ed02b03 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -292,6 +292,6 @@ int elf_core_copy_task_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
+ {
+ int cpu = current_thread_info()->cpu;
+
+- return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu);
++ return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu) == 0;
+ }
+
+diff --git a/arch/um/kernel/sysrq.c b/arch/um/kernel/sysrq.c
+index 4bb8622dc51226..e3b6a2fd75d996 100644
+--- a/arch/um/kernel/sysrq.c
++++ b/arch/um/kernel/sysrq.c
+@@ -52,5 +52,5 @@ void show_stack(struct task_struct *task, unsigned long *stack,
+ }
+
+ printk("%sCall Trace:\n", loglvl);
+- dump_trace(current, &stackops, (void *)loglvl);
++ dump_trace(task ?: current, &stackops, (void *)loglvl);
+ }
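
task ?: current is GNU C's conditional with the middle operand omitted: task is evaluated once and current substituted only when it is NULL, so show_stack() finally honours the task it was asked about instead of always dumping the current one. Spelled out:

    struct task_struct *t = task ? task : current;  /* same as task ?: current */
    dump_trace(t, &stackops, (void *)loglvl);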
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 327c45c5013fea..2f85ed005c42f1 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -78,6 +78,32 @@ static inline void tdcall(u64 fn, struct tdx_module_args *args)
+ panic("TDCALL %lld failed (Buggy TDX module!)\n", fn);
+ }
+
++/* Read TD-scoped metadata */
++static inline u64 tdg_vm_rd(u64 field, u64 *value)
++{
++ struct tdx_module_args args = {
++ .rdx = field,
++ };
++ u64 ret;
++
++ ret = __tdcall_ret(TDG_VM_RD, &args);
++ *value = args.r8;
++
++ return ret;
++}
++
++/* Write TD-scoped metadata */
++static inline u64 tdg_vm_wr(u64 field, u64 value, u64 mask)
++{
++ struct tdx_module_args args = {
++ .rdx = field,
++ .r8 = value,
++ .r9 = mask,
++ };
++
++ return __tdcall(TDG_VM_WR, &args);
++}
++
+ /**
+ * tdx_mcall_get_report0() - Wrapper to get TDREPORT0 (a.k.a. TDREPORT
+ * subtype 0) using TDG.MR.REPORT TDCALL.
+@@ -168,7 +194,61 @@ static void __noreturn tdx_panic(const char *msg)
+ __tdx_hypercall(&args);
+ }
+
+-static void tdx_parse_tdinfo(u64 *cc_mask)
++/*
++ * The kernel cannot handle #VEs when accessing normal kernel memory. Ensure
++ * that no #VE will be delivered for accesses to TD-private memory.
++ *
++ * TDX 1.0 does not allow the guest to disable SEPT #VE on its own. The VMM
++ * controls if the guest will receive such #VE with TD attribute
++ * ATTR_SEPT_VE_DISABLE.
++ *
++ * Newer TDX modules allow the guest to control if it wants to receive SEPT
++ * violation #VEs.
++ *
++ * Check if the feature is available and disable SEPT #VE if possible.
++ *
++ * If the TD is allowed to disable/enable SEPT #VEs, the ATTR_SEPT_VE_DISABLE
++ * attribute is no longer reliable. It reflects the initial state of the
++ * control for the TD, but it will not be updated if someone (e.g. bootloader)
++ * changes it before the kernel starts. Kernel must check TDCS_TD_CTLS bit to
++ * determine if SEPT #VEs are enabled or disabled.
++ */
++static void disable_sept_ve(u64 td_attr)
++{
++ const char *msg = "TD misconfiguration: SEPT #VE has to be disabled";
++ bool debug = td_attr & ATTR_DEBUG;
++ u64 config, controls;
++
++ /* Is this TD allowed to disable SEPT #VE */
++ tdg_vm_rd(TDCS_CONFIG_FLAGS, &config);
++ if (!(config & TDCS_CONFIG_FLEXIBLE_PENDING_VE)) {
++ /* No SEPT #VE controls for the guest: check the attribute */
++ if (td_attr & ATTR_SEPT_VE_DISABLE)
++ return;
++
++		/* Relax the SEPT_VE_DISABLE check for debug TDs to keep backtraces usable */
++ if (debug)
++ pr_warn("%s\n", msg);
++ else
++ tdx_panic(msg);
++ return;
++ }
++
++ /* Check if SEPT #VE has been disabled before us */
++ tdg_vm_rd(TDCS_TD_CTLS, &controls);
++ if (controls & TD_CTLS_PENDING_VE_DISABLE)
++ return;
++
++ /* Keep #VEs enabled for splats in debugging environments */
++ if (debug)
++ return;
++
++ /* Disable SEPT #VEs */
++ tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE,
++ TD_CTLS_PENDING_VE_DISABLE);
++}
++
++static void tdx_setup(u64 *cc_mask)
+ {
+ struct tdx_module_args args = {};
+ unsigned int gpa_width;
+@@ -193,21 +273,12 @@ static void tdx_parse_tdinfo(u64 *cc_mask)
+ gpa_width = args.rcx & GENMASK(5, 0);
+ *cc_mask = BIT_ULL(gpa_width - 1);
+
+- /*
+- * The kernel can not handle #VE's when accessing normal kernel
+- * memory. Ensure that no #VE will be delivered for accesses to
+- * TD-private memory. Only VMM-shared memory (MMIO) will #VE.
+- */
+ td_attr = args.rdx;
+- if (!(td_attr & ATTR_SEPT_VE_DISABLE)) {
+- const char *msg = "TD misconfiguration: SEPT_VE_DISABLE attribute must be set.";
+
+- /* Relax SEPT_VE_DISABLE check for debug TD. */
+- if (td_attr & ATTR_DEBUG)
+- pr_warn("%s\n", msg);
+- else
+- tdx_panic(msg);
+- }
++ /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
++ tdg_vm_wr(TDCS_NOTIFY_ENABLES, 0, -1ULL);
++
++ disable_sept_ve(td_attr);
+ }
+
+ /*
+@@ -929,10 +1000,6 @@ static void tdx_kexec_finish(void)
+
+ void __init tdx_early_init(void)
+ {
+- struct tdx_module_args args = {
+- .rdx = TDCS_NOTIFY_ENABLES,
+- .r9 = -1ULL,
+- };
+ u64 cc_mask;
+ u32 eax, sig[3];
+
+@@ -947,11 +1014,11 @@ void __init tdx_early_init(void)
+ setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+
+ cc_vendor = CC_VENDOR_INTEL;
+- tdx_parse_tdinfo(&cc_mask);
+- cc_set_mask(cc_mask);
+
+- /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
+- tdcall(TDG_VM_WR, &args);
++ /* Configure the TD */
++ tdx_setup(&cc_mask);
++
++ cc_set_mask(cc_mask);
+
+ /*
+ * All bits above GPA width are reserved and kernel treats shared bit
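
tdx_setup() now concentrates TD configuration: read TD_INFO, clear NOTIFY_ENABLES, then disable_sept_ve(). The read-check-write sequence in that helper, condensed (field and bit names as defined elsewhere in this patch):

    u64 config = 0, controls = 0;

    tdg_vm_rd(TDCS_CONFIG_FLAGS, &config);
    if (!(config & TDCS_CONFIG_FLEXIBLE_PENDING_VE))
            return;                         /* fall back to the attribute check */

    tdg_vm_rd(TDCS_TD_CTLS, &controls);
    if (controls & TD_CTLS_PENDING_VE_DISABLE)
            return;                         /* already disabled for us */

    tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE,
              TD_CTLS_PENDING_VE_DISABLE);  /* value, then write mask */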
+diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
+index ad7f4c89162568..2de859173940eb 100644
+--- a/arch/x86/crypto/aegis128-aesni-asm.S
++++ b/arch/x86/crypto/aegis128-aesni-asm.S
+@@ -21,7 +21,7 @@
+ #define T1 %xmm7
+
+ #define STATEP %rdi
+-#define LEN %rsi
++#define LEN %esi
+ #define SRC %rdx
+ #define DST %rcx
+
+@@ -76,32 +76,32 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ xor %r9d, %r9d
+ pxor MSG, MSG
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1, %r8
+ jz .Lld_partial_1
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1E, %r8
+ add SRC, %r8
+ mov (%r8), %r9b
+
+ .Lld_partial_1:
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x2, %r8
+ jz .Lld_partial_2
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1C, %r8
+ add SRC, %r8
+ shl $0x10, %r9
+ mov (%r8), %r9w
+
+ .Lld_partial_2:
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x4, %r8
+ jz .Lld_partial_4
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x18, %r8
+ add SRC, %r8
+ shl $32, %r9
+@@ -111,11 +111,11 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ .Lld_partial_4:
+ movq %r9, MSG
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x8, %r8
+ jz .Lld_partial_8
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x10, %r8
+ add SRC, %r8
+ pslldq $8, MSG
+@@ -139,7 +139,7 @@ SYM_FUNC_END(__load_partial)
+ * %r10
+ */
+ SYM_FUNC_START_LOCAL(__store_partial)
+- mov LEN, %r8
++ mov LEN, %r8d
+ mov DST, %r9
+
+ movq T0, %r10
+@@ -677,7 +677,7 @@ SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail)
+ call __store_partial
+
+ /* mask with byte count: */
+- movq LEN, T0
++ movd LEN, T0
+ punpcklbw T0, T0
+ punpcklbw T0, T0
+ punpcklbw T0, T0
+@@ -702,7 +702,8 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
+
+ /*
+ * void crypto_aegis128_aesni_final(void *state, void *tag_xor,
+- * u64 assoclen, u64 cryptlen);
++ * unsigned int assoclen,
++ * unsigned int cryptlen);
+ */
+ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ FRAME_BEGIN
+@@ -715,8 +716,8 @@ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ movdqu 0x40(STATEP), STATE4
+
+ /* prepare length block: */
+- movq %rdx, MSG
+- movq %rcx, T0
++ movd %edx, MSG
++ movd %ecx, T0
+ pslldq $8, T0
+ pxor T0, MSG
+ psllq $3, MSG /* multiply by 8 (to get bit count) */
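The register-width changes above follow the x86-64 SysV ABI: a 32-bit argument such as LEN only defines the low half of its register, so the asm must read %esi rather than %rsi, and a 32-bit destination write like "mov LEN, %r8d" zero-extends into all 64 bits of %r8. The same zero-extension viewed from C, as a standalone illustration:

#include <stdint.h>

/* Widening an unsigned 32-bit value zero-extends, mirroring what a
 * 32-bit register write does in hardware: bits 63:32 become zero. */
static uint64_t widen(uint32_t len)
{
	return (uint64_t)len;	/* 0xffffffffu -> 0x00000000ffffffff */
}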
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index fd4670a6694e77..a087bc0c549875 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -828,11 +828,13 @@ static void pt_buffer_advance(struct pt_buffer *buf)
+ buf->cur_idx++;
+
+ if (buf->cur_idx == buf->cur->last) {
+- if (buf->cur == buf->last)
++ if (buf->cur == buf->last) {
+ buf->cur = buf->first;
+- else
++ buf->wrapped = true;
++ } else {
+ buf->cur = list_entry(buf->cur->list.next, struct topa,
+ list);
++ }
+ buf->cur_idx = 0;
+ }
+ }
+@@ -846,8 +848,11 @@ static void pt_buffer_advance(struct pt_buffer *buf)
+ static void pt_update_head(struct pt *pt)
+ {
+ struct pt_buffer *buf = perf_get_aux(&pt->handle);
++ bool wrapped = buf->wrapped;
+ u64 topa_idx, base, old;
+
++ buf->wrapped = false;
++
+ if (buf->single) {
+ local_set(&buf->data_size, buf->output_off);
+ return;
+@@ -865,7 +870,7 @@ static void pt_update_head(struct pt *pt)
+ } else {
+ old = (local64_xchg(&buf->head, base) &
+ ((buf->nr_pages << PAGE_SHIFT) - 1));
+- if (base < old)
++ if (base < old || (base == old && wrapped))
+ base += buf->nr_pages << PAGE_SHIFT;
+
+ local_add(base - old, &buf->data_size);
+diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
+index f5e46c04c145d0..a1b6c04b7f6848 100644
+--- a/arch/x86/events/intel/pt.h
++++ b/arch/x86/events/intel/pt.h
+@@ -65,6 +65,7 @@ struct pt_pmu {
+ * @head: logical write offset inside the buffer
+ * @snapshot: if this is for a snapshot/overwrite counter
+ * @single: use Single Range Output instead of ToPA
++ * @wrapped: buffer advance wrapped back to the first topa table
+ * @stop_pos: STOP topa entry index
+ * @intr_pos: INT topa entry index
+ * @stop_te: STOP topa entry pointer
+@@ -82,6 +83,7 @@ struct pt_buffer {
+ local64_t head;
+ bool snapshot;
+ bool single;
++ bool wrapped;
+ long stop_pos, intr_pos;
+ struct topa_entry *stop_te, *intr_te;
+ void **data_pages;
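Head offsets are tracked modulo the buffer size, so base == old is ambiguous: no new data, or exactly one full lap. The @wrapped flag recorded at wrap time resolves it; the arithmetic in isolation (names hypothetical):

/* Bytes produced between two head snapshots of a circular buffer. */
static unsigned long bytes_advanced(unsigned long old, unsigned long cur,
				    unsigned long buf_size, bool wrapped)
{
	/* Unwrap one lap when the offset wrapped past (or onto) old. */
	if (cur < old || (cur == old && wrapped))
		cur += buf_size;
	return cur - old;
}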
+diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
+index 1f650b4dde509b..6c6e9b9f98a456 100644
+--- a/arch/x86/include/asm/atomic64_32.h
++++ b/arch/x86/include/asm/atomic64_32.h
+@@ -51,7 +51,8 @@ static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v)
+ #ifdef CONFIG_X86_CMPXCHG64
+ #define __alternative_atomic64(f, g, out, in...) \
+ asm volatile("call %c[func]" \
+- : out : [func] "i" (atomic64_##g##_cx8), ## in)
++ : ALT_OUTPUT_SP(out) \
++ : [func] "i" (atomic64_##g##_cx8), ## in)
+
+ #define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8)
+ #else
+diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
+index 62cef2113ca749..fd1282a783ddbf 100644
+--- a/arch/x86/include/asm/cmpxchg_32.h
++++ b/arch/x86/include/asm/cmpxchg_32.h
+@@ -94,7 +94,7 @@ static __always_inline bool __try_cmpxchg64_local(volatile u64 *ptr, u64 *oldp,
+ asm volatile(ALTERNATIVE(_lock_loc \
+ "call cmpxchg8b_emu", \
+ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \
+- : "+a" (o.low), "+d" (o.high) \
++ : ALT_OUTPUT_SP("+a" (o.low), "+d" (o.high)) \
+ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \
+ : "memory"); \
+ \
+@@ -123,8 +123,8 @@ static __always_inline u64 arch_cmpxchg64_local(volatile u64 *ptr, u64 old, u64
+ "call cmpxchg8b_emu", \
+ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \
+ CC_SET(e) \
+- : CC_OUT(e) (ret), \
+- "+a" (o.low), "+d" (o.high) \
++ : ALT_OUTPUT_SP(CC_OUT(e) (ret), \
++ "+a" (o.low), "+d" (o.high)) \
+ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \
+ : "memory"); \
+ \
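Both hunks wrap their output lists in ALT_OUTPUT_SP() because the alternative may call a function (the cmpxchg8b emulation). Presumably the macro appends the kernel's ASM_CALL_CONSTRAINT, i.e. a "+r" (current_stack_pointer) operand; a sketch of the idiom under that assumption:

/* Inline asm containing a CALL should name the stack pointer as an
 * in/out operand, or the compiler may emit the asm before the
 * function's frame is fully set up. */
asm volatile("call some_helper"		/* hypothetical callee */
	     : "+r" (current_stack_pointer)
	     :: "memory");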
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 6d9f763a7bb9d5..427d1daf06d06a 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -26,6 +26,7 @@
+ #include <linux/irqbypass.h>
+ #include <linux/hyperv.h>
+ #include <linux/kfifo.h>
++#include <linux/sched/vhost_task.h>
+
+ #include <asm/apic.h>
+ #include <asm/pvclock-abi.h>
+@@ -1443,7 +1444,8 @@ struct kvm_arch {
+ bool sgx_provisioning_allowed;
+
+ struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
+- struct task_struct *nx_huge_page_recovery_thread;
++ struct vhost_task *nx_huge_page_recovery_thread;
++ u64 nx_huge_page_last;
+
+ #ifdef CONFIG_X86_64
+ /* The number of TDP MMU pages across all roots. */
+diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
+index fdfd41511b0211..fecb2a6e864be1 100644
+--- a/arch/x86/include/asm/shared/tdx.h
++++ b/arch/x86/include/asm/shared/tdx.h
+@@ -16,11 +16,20 @@
+ #define TDG_VP_VEINFO_GET 3
+ #define TDG_MR_REPORT 4
+ #define TDG_MEM_PAGE_ACCEPT 6
++#define TDG_VM_RD 7
+ #define TDG_VM_WR 8
+
+-/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */
++/* TDX TD-Scope Metadata. To be used by TDG.VM.WR and TDG.VM.RD */
++#define TDCS_CONFIG_FLAGS 0x1110000300000016
++#define TDCS_TD_CTLS 0x1110000300000017
+ #define TDCS_NOTIFY_ENABLES 0x9100000000000010
+
++/* TDCS_CONFIG_FLAGS bits */
++#define TDCS_CONFIG_FLEXIBLE_PENDING_VE BIT_ULL(1)
++
++/* TDCS_TD_CTLS bits */
++#define TD_CTLS_PENDING_VE_DISABLE BIT_ULL(0)
++
+ /* TDX hypercall Leaf IDs */
+ #define TDVMCALL_MAP_GPA 0x10001
+ #define TDVMCALL_GET_QUOTE 0x10002
+diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
+index 580636cdc257b7..4d3c9d00d6b6b2 100644
+--- a/arch/x86/include/asm/tlb.h
++++ b/arch/x86/include/asm/tlb.h
+@@ -34,4 +34,8 @@ static inline void __tlb_remove_table(void *table)
+ free_page_and_swap_cache(table);
+ }
+
++static inline void invlpg(unsigned long addr)
++{
++ asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
++}
+ #endif /* _ASM_X86_TLB_H */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 823f44f7bc9465..d8408aafeed988 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -798,6 +798,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static const struct x86_cpu_desc erratum_1386_microcode[] = {
+ AMD_CPU_DESC(0x17, 0x1, 0x2, 0x0800126e),
+ AMD_CPU_DESC(0x17, 0x31, 0x0, 0x08301052),
++ {},
+ };
+
+ static void fix_erratum_1386(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index f43bb974fc66d7..b17bcf9b67eed4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2392,12 +2392,12 @@ void __init arch_cpu_finalize_init(void)
+ alternative_instructions();
+
+ if (IS_ENABLED(CONFIG_X86_64)) {
+- unsigned long USER_PTR_MAX = TASK_SIZE_MAX-1;
++ unsigned long USER_PTR_MAX = TASK_SIZE_MAX;
+
+ /*
+ * Enable this when LAM is gated on LASS support
+ if (cpu_feature_enabled(X86_FEATURE_LAM))
+- USER_PTR_MAX = (1ul << 63) - PAGE_SIZE - 1;
++ USER_PTR_MAX = (1ul << 63) - PAGE_SIZE;
+ */
+ runtime_const_init(ptr, USER_PTR_MAX);
+
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 31a73715d75531..fb5d0c67fbab17 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -34,6 +34,7 @@
+ #include <asm/setup.h>
+ #include <asm/cpu.h>
+ #include <asm/msr.h>
++#include <asm/tlb.h>
+
+ #include "internal.h"
+
+@@ -483,11 +484,25 @@ static void scan_containers(u8 *ucode, size_t size, struct cont_desc *desc)
+ }
+ }
+
+-static int __apply_microcode_amd(struct microcode_amd *mc)
++static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
+ {
++ unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
+ u32 rev, dummy;
+
+- native_wrmsrl(MSR_AMD64_PATCH_LOADER, (u64)(long)&mc->hdr.data_code);
++ native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
++
++ if (x86_family(bsp_cpuid_1_eax) == 0x17) {
++ unsigned long p_addr_end = p_addr + psize - 1;
++
++ invlpg(p_addr);
++
++ /*
++ * Flush next page too if patch image is crossing a page
++ * boundary.
++ */
++ if (p_addr >> PAGE_SHIFT != p_addr_end >> PAGE_SHIFT)
++ invlpg(p_addr_end);
++ }
+
+ /* verify patch application was successful */
+ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+@@ -529,7 +544,7 @@ static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size)
+ if (old_rev > mc->hdr.patch_id)
+ return ret;
+
+- return !__apply_microcode_amd(mc);
++ return !__apply_microcode_amd(mc, desc.psize);
+ }
+
+ static bool get_builtin_microcode(struct cpio_data *cp)
+@@ -745,7 +760,7 @@ void reload_ucode_amd(unsigned int cpu)
+ rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+
+ if (rev < mc->hdr.patch_id) {
+- if (!__apply_microcode_amd(mc))
++ if (!__apply_microcode_amd(mc, p->size))
+ pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id);
+ }
+ }
+@@ -798,7 +813,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ goto out;
+ }
+
+- if (__apply_microcode_amd(mc_amd)) {
++ if (__apply_microcode_amd(mc_amd, p->size)) {
+ pr_err("CPU%d: update failed for patch_level=0x%08x\n",
+ cpu, mc_amd->hdr.patch_id);
+ return UCODE_ERROR;
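On family 0x17 the patch image's TLB entries are flushed before verifying the load. The boundary logic restated compactly, using the invlpg() helper added to <asm/tlb.h> earlier in this patch (sufficient here because a patch image spans at most two pages):

static void flush_range_tlb(unsigned long start, size_t size)
{
	unsigned long end = start + size - 1;

	/* First byte's page, plus the last byte's page when the range
	 * crosses a page boundary. */
	invlpg(start);
	if (start >> PAGE_SHIFT != end >> PAGE_SHIFT)
		invlpg(end);
}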
+diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
+index 64280879c68c02..59d23cdf4ed0fa 100644
+--- a/arch/x86/kernel/devicetree.c
++++ b/arch/x86/kernel/devicetree.c
+@@ -305,7 +305,7 @@ void __init x86_flattree_get_config(void)
+ map_len = size;
+ }
+
+- early_init_dt_verify(dt);
++ early_init_dt_verify(dt, __pa(dt));
+ }
+
+ unflatten_and_copy_device_tree();
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index d00c28aaa5be45..d4705a348a8045 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -723,7 +723,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ state->sp = task->thread.sp + sizeof(*frame);
+ state->bp = READ_ONCE_NOCHECK(frame->bp);
+ state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+- state->signal = (void *)state->ip == ret_from_fork;
++ state->signal = (void *)state->ip == ret_from_fork_asm;
+ }
+
+ if (get_stack_info((unsigned long *)state->sp, state->task,
+diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
+index f09f13c01c6bbd..d7f27a3276549b 100644
+--- a/arch/x86/kvm/Kconfig
++++ b/arch/x86/kvm/Kconfig
+@@ -18,8 +18,7 @@ menuconfig VIRTUALIZATION
+ if VIRTUALIZATION
+
+ config KVM_X86
+- def_tristate KVM if KVM_INTEL || KVM_AMD
+- depends on X86_LOCAL_APIC
++ def_tristate KVM if (KVM_INTEL != n || KVM_AMD != n)
+ select KVM_COMMON
+ select KVM_GENERIC_MMU_NOTIFIER
+ select HAVE_KVM_IRQCHIP
+@@ -29,6 +28,7 @@ config KVM_X86
+ select HAVE_KVM_IRQ_BYPASS
+ select HAVE_KVM_IRQ_ROUTING
+ select HAVE_KVM_READONLY_MEM
++ select VHOST_TASK
+ select KVM_ASYNC_PF
+ select USER_RETURN_NOTIFIER
+ select KVM_MMIO
+@@ -49,6 +49,7 @@ config KVM_X86
+
+ config KVM
+ tristate "Kernel-based Virtual Machine (KVM) support"
++ depends on X86_LOCAL_APIC
+ help
+ Support hosting fully virtualized guest machines using hardware
+ virtualization extensions. You will need a fairly recent
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 8e853a5fc867b7..3e353ed1f76736 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7281,7 +7281,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ kvm_mmu_zap_all_fast(kvm);
+ mutex_unlock(&kvm->slots_lock);
+
+- wake_up_process(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+ }
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7427,7 +7427,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
+ mutex_lock(&kvm_lock);
+
+ list_for_each_entry(kvm, &vm_list, vm_list)
+- wake_up_process(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7530,62 +7530,56 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
+ srcu_read_unlock(&kvm->srcu, rcu_idx);
+ }
+
+-static long get_nx_huge_page_recovery_timeout(u64 start_time)
++static void kvm_nx_huge_page_recovery_worker_kill(void *data)
+ {
+- bool enabled;
+- uint period;
+-
+- enabled = calc_nx_huge_pages_recovery_period(&period);
+-
+- return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64()
+- : MAX_SCHEDULE_TIMEOUT;
+ }
+
+-static int kvm_nx_huge_page_recovery_worker(struct kvm *kvm, uintptr_t data)
++static bool kvm_nx_huge_page_recovery_worker(void *data)
+ {
+- u64 start_time;
++ struct kvm *kvm = data;
++ bool enabled;
++ uint period;
+ long remaining_time;
+
+- while (true) {
+- start_time = get_jiffies_64();
+- remaining_time = get_nx_huge_page_recovery_timeout(start_time);
+-
+- set_current_state(TASK_INTERRUPTIBLE);
+- while (!kthread_should_stop() && remaining_time > 0) {
+- schedule_timeout(remaining_time);
+- remaining_time = get_nx_huge_page_recovery_timeout(start_time);
+- set_current_state(TASK_INTERRUPTIBLE);
+- }
+-
+- set_current_state(TASK_RUNNING);
+-
+- if (kthread_should_stop())
+- return 0;
++ enabled = calc_nx_huge_pages_recovery_period(&period);
++ if (!enabled)
++ return false;
+
+- kvm_recover_nx_huge_pages(kvm);
++ remaining_time = kvm->arch.nx_huge_page_last + msecs_to_jiffies(period)
++ - get_jiffies_64();
++ if (remaining_time > 0) {
++ schedule_timeout(remaining_time);
++ /* check for signals and come back */
++ return true;
+ }
++
++ __set_current_state(TASK_RUNNING);
++ kvm_recover_nx_huge_pages(kvm);
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ return true;
+ }
+
+ int kvm_mmu_post_init_vm(struct kvm *kvm)
+ {
+- int err;
+-
+ if (nx_hugepage_mitigation_hard_disabled)
+ return 0;
+
+- err = kvm_vm_create_worker_thread(kvm, kvm_nx_huge_page_recovery_worker, 0,
+- "kvm-nx-lpage-recovery",
+- &kvm->arch.nx_huge_page_recovery_thread);
+- if (!err)
+- kthread_unpark(kvm->arch.nx_huge_page_recovery_thread);
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ kvm->arch.nx_huge_page_recovery_thread = vhost_task_create(
++ kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill,
++ kvm, "kvm-nx-lpage-recovery");
+
+- return err;
++ if (!kvm->arch.nx_huge_page_recovery_thread)
++ return -ENOMEM;
++
++ vhost_task_start(kvm->arch.nx_huge_page_recovery_thread);
++ return 0;
+ }
+
+ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
+ {
+ if (kvm->arch.nx_huge_page_recovery_thread)
+- kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_stop(kvm->arch.nx_huge_page_recovery_thread);
+ }
+
+ #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
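The vhost_task callback contract, as used above: return true to be invoked again, false to park until vhost_task_wake(); the loop is assumed to set TASK_INTERRUPTIBLE before each invocation, which is why the worker resets TASK_RUNNING before doing real work. A minimal periodic worker in that style (context names hypothetical):

static bool periodic_worker(void *data)
{
	struct my_ctx *ctx = data;		/* hypothetical */
	long remaining;

	if (!ctx->enabled)
		return false;		/* park until vhost_task_wake() */

	remaining = ctx->next_run - get_jiffies_64();
	if (remaining > 0) {
		schedule_timeout(remaining);
		return true;		/* notice signals, then rerun */
	}

	__set_current_state(TASK_RUNNING);
	do_work(ctx);				/* hypothetical */
	ctx->next_run = get_jiffies_64() + ctx->period;
	return true;
}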
+diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
+index 8f7eb3ad88fcb9..5521608077ec09 100644
+--- a/arch/x86/kvm/mmu/spte.c
++++ b/arch/x86/kvm/mmu/spte.c
+@@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
+
+ /*
+- * Optimization: for pte sync, if spte was writable the hash
+- * lookup is unnecessary (and expensive). Write protection
+- * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
+- * Same reasoning can be applied to dirty page accounting.
++ * When overwriting an existing leaf SPTE, and the old SPTE was
++ * writable, skip trying to unsync shadow pages as any relevant
++ * shadow pages must already be unsync, i.e. the hash lookup is
++ * unnecessary (and expensive).
++ *
++ * The same reasoning applies to dirty page/folio accounting;
++ * KVM will mark the folio dirty using the old SPTE, thus
++ * there's no need to immediately mark the new SPTE as dirty.
++ *
++ * Note, both cases rely on KVM not changing PFNs without first
++ * zapping the old SPTE, which is guaranteed by both the shadow
++ * MMU and the TDP MMU.
+ */
+- if (is_writable_pte(old_spte))
++ if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
+ goto out;
+
+ /*
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index d28618e9277ede..92fee5e8a3c741 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2551,28 +2551,6 @@ static bool cpu_has_sgx(void)
+ return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0));
+ }
+
+-/*
+- * Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they
+- * can't be used due to errata where VM Exit may incorrectly clear
+- * IA32_PERF_GLOBAL_CTRL[34:32]. Work around the errata by using the
+- * MSR load mechanism to switch IA32_PERF_GLOBAL_CTRL.
+- */
+-static bool cpu_has_perf_global_ctrl_bug(void)
+-{
+- switch (boot_cpu_data.x86_vfm) {
+- case INTEL_NEHALEM_EP: /* AAK155 */
+- case INTEL_NEHALEM: /* AAP115 */
+- case INTEL_WESTMERE: /* AAT100 */
+- case INTEL_WESTMERE_EP: /* BC86,AAY89,BD102 */
+- case INTEL_NEHALEM_EX: /* BA97 */
+- return true;
+- default:
+- break;
+- }
+-
+- return false;
+-}
+-
+ static int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, u32 msr, u32 *result)
+ {
+ u32 vmx_msr_low, vmx_msr_high;
+@@ -2732,6 +2710,27 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
+ _vmexit_control &= ~x_ctrl;
+ }
+
++ /*
++ * Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they
++	 * can't be used due to an erratum where VM Exit may incorrectly clear
++	 * IA32_PERF_GLOBAL_CTRL[34:32]. Work around the erratum by using the
++ * MSR load mechanism to switch IA32_PERF_GLOBAL_CTRL.
++ */
++ switch (boot_cpu_data.x86_vfm) {
++ case INTEL_NEHALEM_EP: /* AAK155 */
++ case INTEL_NEHALEM: /* AAP115 */
++ case INTEL_WESTMERE: /* AAT100 */
++ case INTEL_WESTMERE_EP: /* BC86,AAY89,BD102 */
++ case INTEL_NEHALEM_EX: /* BA97 */
++ _vmentry_control &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
++ _vmexit_control &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
++ pr_warn_once("VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL "
++ "does not work properly. Using workaround\n");
++ break;
++ default:
++ break;
++ }
++
+ rdmsrl(MSR_IA32_VMX_BASIC, basic_msr);
+
+ /* IA-32 SDM Vol 3B: VMCS size is never greater than 4kB. */
+@@ -4422,9 +4421,6 @@ static u32 vmx_vmentry_ctrl(void)
+ VM_ENTRY_LOAD_IA32_EFER |
+ VM_ENTRY_IA32E_MODE);
+
+- if (cpu_has_perf_global_ctrl_bug())
+- vmentry_ctrl &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+-
+ return vmentry_ctrl;
+ }
+
+@@ -4442,10 +4438,6 @@ static u32 vmx_vmexit_ctrl(void)
+ if (vmx_pt_mode_is_system())
+ vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP |
+ VM_EXIT_CLEAR_IA32_RTIT_CTL);
+-
+- if (cpu_has_perf_global_ctrl_bug())
+- vmexit_ctrl &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+-
+ /* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
+ return vmexit_ctrl &
+ ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
+@@ -8400,10 +8392,6 @@ __init int vmx_hardware_setup(void)
+ if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0)
+ return -EIO;
+
+- if (cpu_has_perf_global_ctrl_bug())
+- pr_warn_once("VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL "
+- "does not work properly. Using workaround\n");
+-
+ if (boot_cpu_has(X86_FEATURE_NX))
+ kvm_enable_efer_bits(EFER_NX);
+
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 86593d1b787d8a..b0678d59ebdb4a 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -20,6 +20,7 @@
+ #include <asm/cacheflush.h>
+ #include <asm/apic.h>
+ #include <asm/perf_event.h>
++#include <asm/tlb.h>
+
+ #include "mm_internal.h"
+
+@@ -1140,7 +1141,7 @@ STATIC_NOPV void native_flush_tlb_one_user(unsigned long addr)
+ bool cpu_pcide;
+
+ /* Flush 'addr' from the kernel PCID: */
+- asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
++ invlpg(addr);
+
+ /* If PTI is off there is no user PCID and nothing to flush. */
+ if (!static_cpu_has(X86_FEATURE_PTI))
+diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
+index 64fca49cd88ff9..ce4fd8d33da467 100644
+--- a/arch/x86/platform/pvh/head.S
++++ b/arch/x86/platform/pvh/head.S
+@@ -172,7 +172,14 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
+ movq %rbp, %rbx
+ subq $_pa(pvh_start_xen), %rbx
+ movq %rbx, phys_base(%rip)
+- call xen_prepare_pvh
++
++ /* Call xen_prepare_pvh() via the kernel virtual mapping */
++ leaq xen_prepare_pvh(%rip), %rax
++ subq phys_base(%rip), %rax
++ addq $__START_KERNEL_map, %rax
++ ANNOTATE_RETPOLINE_SAFE
++ call *%rax
++
+ /*
+ * Clear phys_base. __startup_64 will *add* to its value,
+ * so reset to 0.
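The new sequence calls xen_prepare_pvh() through its address in the kernel virtual mapping rather than the identity-mapped physical address it is currently executing from. The address arithmetic, sketched in C:

/* Translate a physical kernel-text address into the kernel mapping. */
static void *kernel_va_of(unsigned long pa)
{
	return (void *)(pa - phys_base + __START_KERNEL_map);
}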
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index bdec4a773af098..e51f2060e83089 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -216,7 +216,7 @@ static int __init xtensa_dt_io_area(unsigned long node, const char *uname,
+
+ void __init early_init_devtree(void *params)
+ {
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ of_scan_flat_dt(xtensa_dt_io_area, NULL);
+
+ if (!command_line[0])
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index e831aedb464329..9fb9f353315025 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -736,6 +736,7 @@ static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
+ */
+ bfq_put_cooperator(sync_bfqq);
+ bic_set_bfqq(bic, NULL, true, act_idx);
++ bfq_release_process_ref(bfqd, sync_bfqq);
+ }
+ }
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 0747d9d0e48c8a..95dd7b79593565 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -582,23 +582,31 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
+ #define BFQ_LIMIT_INLINE_DEPTH 16
+
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
++static bool bfqq_request_over_limit(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic, blk_opf_t opf,
++ unsigned int act_idx, int limit)
+ {
+- struct bfq_data *bfqd = bfqq->bfqd;
+- struct bfq_entity *entity = &bfqq->entity;
+ struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
+ struct bfq_entity **entities = inline_entities;
+- int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+- int class_idx = bfqq->ioprio_class - 1;
++ int alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+ struct bfq_sched_data *sched_data;
++ struct bfq_entity *entity;
++ struct bfq_queue *bfqq;
+ unsigned long wsum;
+ bool ret = false;
+-
+- if (!entity->on_st_or_in_serv)
+- return false;
++ int depth;
++ int level;
+
+ retry:
+ spin_lock_irq(&bfqd->lock);
++ bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
++ if (!bfqq)
++ goto out;
++
++ entity = &bfqq->entity;
++ if (!entity->on_st_or_in_serv)
++ goto out;
++
+ /* +1 for bfqq entity, root cgroup not included */
+ depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
+ if (depth > alloc_depth) {
+@@ -643,7 +651,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ * class.
+ */
+ wsum = 0;
+- for (i = 0; i <= class_idx; i++) {
++ for (i = 0; i <= bfqq->ioprio_class - 1; i++) {
+ wsum = wsum * IOPRIO_BE_NR +
+ sched_data->service_tree[i].wsum;
+ }
+@@ -666,7 +674,9 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ return ret;
+ }
+ #else
+-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
++static bool bfqq_request_over_limit(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic, blk_opf_t opf,
++ unsigned int act_idx, int limit)
+ {
+ return false;
+ }
+@@ -704,8 +714,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ }
+
+ for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
+- struct bfq_queue *bfqq =
+- bic_to_bfqq(bic, op_is_sync(opf), act_idx);
++ /* Fast path to check if bfqq is already allocated. */
++ if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx))
++ continue;
+
+ /*
+ * Does queue (or any parent entity) exceed number of
+@@ -713,7 +724,7 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ * limit depth so that it cannot consume more
+ * available requests and thus starve other entities.
+ */
+- if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
++ if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
+ depth = 1;
+ break;
+ }
+@@ -5434,8 +5445,6 @@ void bfq_put_cooperator(struct bfq_queue *bfqq)
+ bfq_put_queue(__bfqq);
+ __bfqq = next;
+ }
+-
+- bfq_release_process_ref(bfqq->bfqd, bfqq);
+ }
+
+ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+@@ -5448,6 +5457,8 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref);
+
+ bfq_put_cooperator(bfqq);
++
++ bfq_release_process_ref(bfqd, bfqq);
+ }
+
+ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync,
+@@ -6734,6 +6745,8 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ bic_set_bfqq(bic, NULL, true, bfqq->actuator_idx);
+
+ bfq_put_cooperator(bfqq);
++
++ bfq_release_process_ref(bfqq->bfqd, bfqq);
+ return NULL;
+ }
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index bc5e8c5eaac9ff..4f791a3114a12c 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -261,6 +261,8 @@ static void blk_free_queue(struct request_queue *q)
+ blk_mq_release(q);
+
+ ida_free(&blk_queue_ida, q->id);
++ lockdep_unregister_key(&q->io_lock_cls_key);
++ lockdep_unregister_key(&q->q_lock_cls_key);
+ call_rcu(&q->rcu_head, blk_free_queue_rcu);
+ }
+
+@@ -278,18 +280,20 @@ void blk_put_queue(struct request_queue *q)
+ }
+ EXPORT_SYMBOL(blk_put_queue);
+
+-void blk_queue_start_drain(struct request_queue *q)
++bool blk_queue_start_drain(struct request_queue *q)
+ {
+ /*
+ * When queue DYING flag is set, we need to block new req
+ * entering queue, so we call blk_freeze_queue_start() to
+ * prevent I/O from crossing blk_queue_enter().
+ */
+- blk_freeze_queue_start(q);
++ bool freeze = __blk_freeze_queue_start(q, current);
+ if (queue_is_mq(q))
+ blk_mq_wake_waiters(q);
+ /* Make blk_queue_enter() reexamine the DYING flag. */
+ wake_up_all(&q->mq_freeze_wq);
++
++ return freeze;
+ }
+
+ /**
+@@ -321,6 +325,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ return -ENODEV;
+ }
+
++ rwsem_acquire_read(&q->q_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->q_lockdep_map, _RET_IP_);
+ return 0;
+ }
+
+@@ -352,6 +358,8 @@ int __bio_queue_enter(struct request_queue *q, struct bio *bio)
+ goto dead;
+ }
+
++ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
+ return 0;
+ dead:
+ bio_io_error(bio);
+@@ -441,6 +449,12 @@ struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
+ PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
+ if (error)
+ goto fail_stats;
++ lockdep_register_key(&q->io_lock_cls_key);
++ lockdep_register_key(&q->q_lock_cls_key);
++ lockdep_init_map(&q->io_lockdep_map, "&q->q_usage_counter(io)",
++ &q->io_lock_cls_key, 0);
++ lockdep_init_map(&q->q_lockdep_map, "&q->q_usage_counter(queue)",
++ &q->q_lock_cls_key, 0);
+
+ q->nr_requests = BLKDEV_DEFAULT_RQ;
+
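The two lockdep maps registered here model q_usage_counter as fake rwsems: entering the queue is recorded as a momentary read acquisition, and freezing later takes the write side (see blk_freeze_acquire_lock() in block/blk.h), so lockdep can report enter-vs-freeze deadlocks even though no real lock is held. The idiom in isolation:

/* Record "reader entered" for dependency tracking only: acquire and
 * immediately release, holding nothing afterwards. */
rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
rwsem_release(&q->io_lockdep_map, _RET_IP_);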
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index ad763ec313b6ad..5baa950f34fe21 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -166,17 +166,6 @@ struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
+ return bio_submit_split(bio, split_sectors);
+ }
+
+-struct bio *bio_split_write_zeroes(struct bio *bio,
+- const struct queue_limits *lim, unsigned *nsegs)
+-{
+- *nsegs = 0;
+- if (!lim->max_write_zeroes_sectors)
+- return bio;
+- if (bio_sectors(bio) <= lim->max_write_zeroes_sectors)
+- return bio;
+- return bio_submit_split(bio, lim->max_write_zeroes_sectors);
+-}
+-
+ static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
+ bool is_atomic)
+ {
+@@ -211,7 +200,9 @@ static inline unsigned get_max_io_size(struct bio *bio,
+	 * We ignore lim->max_sectors for atomic writes because it may be less
+ * than the actual bio size, which we cannot tolerate.
+ */
+- if (is_atomic)
++ if (bio_op(bio) == REQ_OP_WRITE_ZEROES)
++ max_sectors = lim->max_write_zeroes_sectors;
++ else if (is_atomic)
+ max_sectors = lim->atomic_write_max_sectors;
+ else
+ max_sectors = lim->max_sectors;
+@@ -296,6 +287,14 @@ static bool bvec_split_segs(const struct queue_limits *lim,
+ return len > 0 || bv->bv_len > max_len;
+ }
+
++static unsigned int bio_split_alignment(struct bio *bio,
++ const struct queue_limits *lim)
++{
++ if (op_is_write(bio_op(bio)) && lim->zone_write_granularity)
++ return lim->zone_write_granularity;
++ return lim->logical_block_size;
++}
++
+ /**
+ * bio_split_rw_at - check if and where to split a read/write bio
+ * @bio: [in] bio to be split
+@@ -358,7 +357,7 @@ int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim,
+ * split size so that each bio is properly block size aligned, even if
+ * we do not use the full hardware limits.
+ */
+- bytes = ALIGN_DOWN(bytes, lim->logical_block_size);
++ bytes = ALIGN_DOWN(bytes, bio_split_alignment(bio, lim));
+
+ /*
+ * Bio splitting may cause subtle trouble such as hang when doing sync
+@@ -398,6 +397,26 @@ struct bio *bio_split_zone_append(struct bio *bio,
+ return bio_submit_split(bio, split_sectors);
+ }
+
++struct bio *bio_split_write_zeroes(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nsegs)
++{
++ unsigned int max_sectors = get_max_io_size(bio, lim);
++
++ *nsegs = 0;
++
++ /*
++ * An unset limit should normally not happen, as bio submission is keyed
++ * off having a non-zero limit. But SCSI can clear the limit in the
++ * I/O completion handler, and we can race and see this. Splitting to a
++ * zero limit obviously doesn't make sense, so band-aid it here.
++ */
++ if (!max_sectors)
++ return bio;
++ if (bio_sectors(bio) <= max_sectors)
++ return bio;
++ return bio_submit_split(bio, max_sectors);
++}
++
+ /**
+ * bio_split_to_limits - split a bio to fit the queue limits
+ * @bio: bio to be split
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cf626e061dd774..b4fba7b398e5bc 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -120,9 +120,59 @@ void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
+ inflight[1] = mi.inflight[1];
+ }
+
+-void blk_freeze_queue_start(struct request_queue *q)
++#ifdef CONFIG_LOCKDEP
++static bool blk_freeze_set_owner(struct request_queue *q,
++ struct task_struct *owner)
++{
++ if (!owner)
++ return false;
++
++ if (!q->mq_freeze_depth) {
++ q->mq_freeze_owner = owner;
++ q->mq_freeze_owner_depth = 1;
++ return true;
++ }
++
++ if (owner == q->mq_freeze_owner)
++ q->mq_freeze_owner_depth += 1;
++ return false;
++}
++
++/* verify that the last unfreeze happens in the owner's context */
++static bool blk_unfreeze_check_owner(struct request_queue *q)
++{
++ if (!q->mq_freeze_owner)
++ return false;
++ if (q->mq_freeze_owner != current)
++ return false;
++ if (--q->mq_freeze_owner_depth == 0) {
++ q->mq_freeze_owner = NULL;
++ return true;
++ }
++ return false;
++}
++
++#else
++
++static bool blk_freeze_set_owner(struct request_queue *q,
++ struct task_struct *owner)
++{
++ return false;
++}
++
++static bool blk_unfreeze_check_owner(struct request_queue *q)
+ {
++ return false;
++}
++#endif
++
++bool __blk_freeze_queue_start(struct request_queue *q,
++ struct task_struct *owner)
++{
++ bool freeze;
++
+ mutex_lock(&q->mq_freeze_lock);
++ freeze = blk_freeze_set_owner(q, owner);
+ if (++q->mq_freeze_depth == 1) {
+ percpu_ref_kill(&q->q_usage_counter);
+ mutex_unlock(&q->mq_freeze_lock);
+@@ -131,6 +181,14 @@ void blk_freeze_queue_start(struct request_queue *q)
+ } else {
+ mutex_unlock(&q->mq_freeze_lock);
+ }
++
++ return freeze;
++}
++
++void blk_freeze_queue_start(struct request_queue *q)
++{
++ if (__blk_freeze_queue_start(q, current))
++ blk_freeze_acquire_lock(q, false, false);
+ }
+ EXPORT_SYMBOL_GPL(blk_freeze_queue_start);
+
+@@ -176,8 +234,10 @@ void blk_mq_freeze_queue(struct request_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
+
+-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
++bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
+ {
++ bool unfreeze;
++
+ mutex_lock(&q->mq_freeze_lock);
+ if (force_atomic)
+ q->q_usage_counter.data->force_atomic = true;
+@@ -187,15 +247,39 @@ void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
+ percpu_ref_resurrect(&q->q_usage_counter);
+ wake_up_all(&q->mq_freeze_wq);
+ }
++ unfreeze = blk_unfreeze_check_owner(q);
+ mutex_unlock(&q->mq_freeze_lock);
++
++ return unfreeze;
+ }
+
+ void blk_mq_unfreeze_queue(struct request_queue *q)
+ {
+- __blk_mq_unfreeze_queue(q, false);
++ if (__blk_mq_unfreeze_queue(q, false))
++ blk_unfreeze_release_lock(q, false, false);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
+
++/*
++ * non_owner variant of blk_freeze_queue_start
++ *
++ * Unlike blk_freeze_queue_start, the queue doesn't need to be unfrozen
++ * by the same task. This is fragile and should not be used if at all
++ * possible.
++ */
++void blk_freeze_queue_start_non_owner(struct request_queue *q)
++{
++ __blk_freeze_queue_start(q, NULL);
++}
++EXPORT_SYMBOL_GPL(blk_freeze_queue_start_non_owner);
++
++/* non_owner variant of blk_mq_unfreeze_queue */
++void blk_mq_unfreeze_queue_non_owner(struct request_queue *q)
++{
++ __blk_mq_unfreeze_queue(q, false);
++}
++EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue_non_owner);
++
+ /*
+ * FIXME: replace the scsi_internal_device_*block_nowait() calls in the
+ * mpt3sas driver such that this function can be removed.
+@@ -283,8 +367,9 @@ void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
+ if (!blk_queue_skip_tagset_quiesce(q))
+ blk_mq_quiesce_queue_nowait(q);
+ }
+- blk_mq_wait_quiesce_done(set);
+ mutex_unlock(&set->tag_list_lock);
++
++ blk_mq_wait_quiesce_done(set);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_quiesce_tagset);
+
+@@ -2200,6 +2285,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
+ }
+ EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
+
++static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
++{
++ bool need_run;
++
++ /*
++ * When queue is quiesced, we may be switching io scheduler, or
++ * updating nr_hw_queues, or other things, and we can't run queue
++ * any more, even blk_mq_hctx_has_pending() can't be called safely.
++ *
++ * And queue will be rerun in blk_mq_unquiesce_queue() if it is
++ * quiesced.
++ */
++ __blk_mq_run_dispatch_ops(hctx->queue, false,
++ need_run = !blk_queue_quiesced(hctx->queue) &&
++ blk_mq_hctx_has_pending(hctx));
++ return need_run;
++}
++
+ /**
+ * blk_mq_run_hw_queue - Start to run a hardware queue.
+ * @hctx: Pointer to the hardware queue to run.
+@@ -2220,20 +2323,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+
+ might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
+
+- /*
+- * When queue is quiesced, we may be switching io scheduler, or
+- * updating nr_hw_queues, or other things, and we can't run queue
+- * any more, even __blk_mq_hctx_has_pending() can't be called safely.
+- *
+- * And queue will be rerun in blk_mq_unquiesce_queue() if it is
+- * quiesced.
+- */
+- __blk_mq_run_dispatch_ops(hctx->queue, false,
+- need_run = !blk_queue_quiesced(hctx->queue) &&
+- blk_mq_hctx_has_pending(hctx));
++ need_run = blk_mq_hw_queue_need_run(hctx);
++ if (!need_run) {
++ unsigned long flags;
+
+- if (!need_run)
+- return;
++ /*
++		 * Synchronize with blk_mq_unquiesce_queue(): we check whether
++		 * the hw queue is quiesced locklessly above, so we need to
++		 * take ->queue_lock to see the up-to-date status and avoid
++		 * missing a rerun of the hw queue.
++ */
++ spin_lock_irqsave(&hctx->queue->queue_lock, flags);
++ need_run = blk_mq_hw_queue_need_run(hctx);
++ spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
++
++ if (!need_run)
++ return;
++ }
+
+ if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
+ blk_mq_delay_run_hw_queue(hctx, 0);
+@@ -2390,6 +2496,12 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+ return;
+
+ clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
++ /*
++ * Pairs with the smp_mb() in blk_mq_hctx_stopped() to order the
++	 * clearing of BLK_MQ_S_STOPPED above against the checking of the
++	 * dispatch list in the subsequent routine.
++ */
++ smp_mb__after_atomic();
+ blk_mq_run_hw_queue(hctx, async);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue);
+@@ -2620,6 +2732,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, 0);
++ blk_mq_run_hw_queue(hctx, false);
+ return;
+ }
+
+@@ -2650,6 +2763,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, 0);
++ blk_mq_run_hw_queue(hctx, false);
+ return BLK_STS_OK;
+ }
+
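With the owner bookkeeping above, a freeze and its matching unfreeze normally come from the same task; when they genuinely cannot (e.g. a queue frozen before its disk is added), the _non_owner variants opt out of the accounting. Usage sketch:

/* Freeze in one context, unfreeze possibly in another. */
blk_freeze_queue_start_non_owner(q);
/* ... reconfigure the queue ... */
blk_mq_unfreeze_queue_non_owner(q);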
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 3bd43b10032f83..f4ac1af77a267e 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -230,6 +230,19 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
+
+ static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
+ {
++ /* Fast path: hardware queue is not stopped most of the time. */
++ if (likely(!test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
++ return false;
++
++ /*
++	 * This barrier orders adding requests to the dispatch list against
++	 * the test of BLK_MQ_S_STOPPED below. It pairs with the memory
++	 * barrier in blk_mq_start_stopped_hw_queue() so that the dispatch
++	 * code either sees BLK_MQ_S_STOPPED cleared or sees a non-empty
++	 * dispatch list, and never misses a dispatch.
++ */
++ smp_mb();
++
+ return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
+ }
+
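The pairing in miniature: each side publishes its write, issues a full barrier, then reads the other side's variable, so at least one side must observe the other (names hypothetical):

/*
 *   CPU0: start queue                 CPU1: submit request
 *   ------------------                --------------------
 *   clear_bit(STOPPED, &state);       list_add_tail(&rq->l, &dispatch);
 *   smp_mb__after_atomic();           smp_mb();
 *   if (!list_empty(&dispatch))       if (!test_bit(STOPPED, &state))
 *           run_queue();                      run_queue();
 *
 * If CPU1 misses the cleared bit, CPU0 is guaranteed to see the request
 * on the dispatch list (and vice versa), so no request is stranded.
 */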
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index a446654ddee5ef..7abf034089cd96 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -249,6 +249,13 @@ static int blk_validate_limits(struct queue_limits *lim)
+ if (lim->io_min < lim->physical_block_size)
+ lim->io_min = lim->physical_block_size;
+
++ /*
++ * The optimal I/O size may not be aligned to physical block size
++	 * (because it may be limited by DMA engines which have no clue about
++	 * the block size of the disks attached to them), so we round it down here.
++ */
++ lim->io_opt = round_down(lim->io_opt, lim->physical_block_size);
++
+ /*
+ * max_hw_sectors has a somewhat weird default for historical reason,
+ * but driver really should set their own instead of relying on this
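A worked example of the new rounding: an io_opt reported as 66048 bytes (64 KiB + 512) on a disk with 4096-byte physical blocks is trimmed to an aligned value:

unsigned int io_opt = round_down(66048, 4096);	/* == 65536 (64 KiB) */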
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index e85941bec857b6..207577145c54f4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -794,10 +794,8 @@ int blk_register_queue(struct gendisk *disk)
+ * faster to shut down and is made fully functional here as
+ * request_queues for non-existent devices never get registered.
+ */
+- if (!blk_queue_init_done(q)) {
+- blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
+- percpu_ref_switch_to_percpu(&q->q_usage_counter);
+- }
++ blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
++ percpu_ref_switch_to_percpu(&q->q_usage_counter);
+
+ return ret;
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index af19296fa50df1..95e517723db3e4 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1541,6 +1541,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ unsigned int nr_seq_zones, nr_conv_zones = 0;
+ unsigned int pool_size;
+ struct queue_limits lim;
++ int ret;
+
+ disk->nr_zones = args->nr_zones;
+ disk->zone_capacity = args->zone_capacity;
+@@ -1593,7 +1594,11 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ }
+
+ commit:
+- return queue_limits_commit_update(q, &lim);
++ blk_mq_freeze_queue(q);
++ ret = queue_limits_commit_update(q, &lim);
++ blk_mq_unfreeze_queue(q);
++
++ return ret;
+ }
+
+ static int blk_revalidate_conv_zone(struct blk_zone *zone, unsigned int idx,
+@@ -1814,14 +1819,15 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ * Set the new disk zone parameters only once the queue is frozen and
+ * all I/Os are completed.
+ */
+- blk_mq_freeze_queue(q);
+ if (ret > 0)
+ ret = disk_update_zone_resources(disk, &args);
+ else
+ pr_warn("%s: failed to revalidate zones\n", disk->disk_name);
+- if (ret)
++ if (ret) {
++ blk_mq_freeze_queue(q);
+ disk_free_zone_resources(disk);
+- blk_mq_unfreeze_queue(q);
++ blk_mq_unfreeze_queue(q);
++ }
+
+ kfree(args.conv_zones_bitmap);
+
+diff --git a/block/blk.h b/block/blk.h
+index c718e4291db062..88fab6a81701ed 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/bio-integrity.h>
+ #include <linux/blk-crypto.h>
++#include <linux/lockdep.h>
+ #include <linux/memblock.h> /* for max_pfn/max_low_pfn */
+ #include <linux/sched/sysctl.h>
+ #include <linux/timekeeping.h>
+@@ -35,8 +36,10 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
+ void blk_free_flush_queue(struct blk_flush_queue *q);
+
+ void blk_freeze_queue(struct request_queue *q);
+-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
+-void blk_queue_start_drain(struct request_queue *q);
++bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
++bool blk_queue_start_drain(struct request_queue *q);
++bool __blk_freeze_queue_start(struct request_queue *q,
++ struct task_struct *owner);
+ int __bio_queue_enter(struct request_queue *q, struct bio *bio);
+ void submit_bio_noacct_nocheck(struct bio *bio);
+ void bio_await_chain(struct bio *bio);
+@@ -69,8 +72,11 @@ static inline int bio_queue_enter(struct bio *bio)
+ {
+ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+
+- if (blk_try_enter_queue(q, false))
++ if (blk_try_enter_queue(q, false)) {
++ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
+ return 0;
++ }
+ return __bio_queue_enter(q, bio);
+ }
+
+@@ -734,4 +740,22 @@ void blk_integrity_verify(struct bio *bio);
+ void blk_integrity_prepare(struct request *rq);
+ void blk_integrity_complete(struct request *rq, unsigned int nr_bytes);
+
++static inline void blk_freeze_acquire_lock(struct request_queue *q, bool
++ disk_dead, bool queue_dying)
++{
++ if (!disk_dead)
++ rwsem_acquire(&q->io_lockdep_map, 0, 1, _RET_IP_);
++ if (!queue_dying)
++ rwsem_acquire(&q->q_lockdep_map, 0, 1, _RET_IP_);
++}
++
++static inline void blk_unfreeze_release_lock(struct request_queue *q, bool
++ disk_dead, bool queue_dying)
++{
++ if (!queue_dying)
++ rwsem_release(&q->q_lockdep_map, _RET_IP_);
++ if (!disk_dead)
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
++}
++
+ #endif /* BLK_INTERNAL_H */
+diff --git a/block/elevator.c b/block/elevator.c
+index 9430cde13d1a41..43ba4ab1ada7fd 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -598,13 +598,19 @@ void elevator_init_mq(struct request_queue *q)
+ * drain any dispatch activities originated from passthrough
+ * requests, then no need to quiesce queue which may add long boot
+ * latency, especially when lots of disks are involved.
++ *
++	 * Disk isn't added yet, so verify the queue lock manually.
+ */
+- blk_mq_freeze_queue(q);
++ blk_freeze_queue_start_non_owner(q);
++ blk_freeze_acquire_lock(q, true, false);
++ blk_mq_freeze_queue_wait(q);
++
+ blk_mq_cancel_work_sync(q);
+
+ err = blk_mq_init_sched(q, e);
+
+- blk_mq_unfreeze_queue(q);
++ blk_unfreeze_release_lock(q, true, false);
++ blk_mq_unfreeze_queue_non_owner(q);
+
+ if (err) {
+ pr_warn("\"%s\" elevator initialization failed, "
+diff --git a/block/fops.c b/block/fops.c
+index e696ae53bf1e08..13a67940d0408d 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -35,13 +35,10 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
+ return opf;
+ }
+
+-static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+- struct iov_iter *iter, bool is_atomic)
++static bool blkdev_dio_invalid(struct block_device *bdev, struct kiocb *iocb,
++ struct iov_iter *iter)
+ {
+- if (is_atomic && !generic_atomic_write_valid(iter, pos))
+- return true;
+-
+- return pos & (bdev_logical_block_size(bdev) - 1) ||
++ return iocb->ki_pos & (bdev_logical_block_size(bdev) - 1) ||
+ !bdev_iter_is_aligned(bdev, iter);
+ }
+
+@@ -368,13 +365,12 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
+ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+- bool is_atomic = iocb->ki_flags & IOCB_ATOMIC;
+ unsigned int nr_pages;
+
+ if (!iov_iter_count(iter))
+ return 0;
+
+- if (blkdev_dio_invalid(bdev, iocb->ki_pos, iter, is_atomic))
++ if (blkdev_dio_invalid(bdev, iocb, iter))
+ return -EINVAL;
+
+ nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
+@@ -383,7 +379,7 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ return __blkdev_direct_IO_simple(iocb, iter, bdev,
+ nr_pages);
+ return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages);
+- } else if (is_atomic) {
++ } else if (iocb->ki_flags & IOCB_ATOMIC) {
+ return -EINVAL;
+ }
+ return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages));
+@@ -625,7 +621,7 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+ if (!bdev)
+ return -ENXIO;
+
+- if (bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT)
++ if (bdev_can_atomic_write(bdev))
+ filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
+ ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
+@@ -681,6 +677,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ struct file *file = iocb->ki_filp;
+ struct inode *bd_inode = bdev_file_inode(file);
+ struct block_device *bdev = I_BDEV(bd_inode);
++ bool atomic = iocb->ki_flags & IOCB_ATOMIC;
+ loff_t size = bdev_nr_bytes(bdev);
+ size_t shorted = 0;
+ ssize_t ret;
+@@ -700,8 +697,16 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ if ((iocb->ki_flags & (IOCB_NOWAIT | IOCB_DIRECT)) == IOCB_NOWAIT)
+ return -EOPNOTSUPP;
+
++ if (atomic) {
++ ret = generic_atomic_write_valid(iocb, from);
++ if (ret)
++ return ret;
++ }
++
+ size -= iocb->ki_pos;
+ if (iov_iter_count(from) > size) {
++ if (atomic)
++ return -EINVAL;
+ shorted = iov_iter_count(from) - size;
+ iov_iter_truncate(from, size);
+ }
+diff --git a/block/genhd.c b/block/genhd.c
+index 1c05dd4c6980b5..8645cf3b0816e4 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -581,13 +581,13 @@ static void blk_report_disk_dead(struct gendisk *disk, bool surprise)
+ rcu_read_unlock();
+ }
+
+-static void __blk_mark_disk_dead(struct gendisk *disk)
++static bool __blk_mark_disk_dead(struct gendisk *disk)
+ {
+ /*
+ * Fail any new I/O.
+ */
+ if (test_and_set_bit(GD_DEAD, &disk->state))
+- return;
++ return false;
+
+ if (test_bit(GD_OWNS_QUEUE, &disk->state))
+ blk_queue_flag_set(QUEUE_FLAG_DYING, disk->queue);
+@@ -600,7 +600,7 @@ static void __blk_mark_disk_dead(struct gendisk *disk)
+ /*
+ * Prevent new I/O from crossing bio_queue_enter().
+ */
+- blk_queue_start_drain(disk->queue);
++ return blk_queue_start_drain(disk->queue);
+ }
+
+ /**
+@@ -641,6 +641,7 @@ void del_gendisk(struct gendisk *disk)
+ struct request_queue *q = disk->queue;
+ struct block_device *part;
+ unsigned long idx;
++ bool start_drain, queue_dying;
+
+ might_sleep();
+
+@@ -668,7 +669,10 @@ void del_gendisk(struct gendisk *disk)
+ * Drop all partitions now that the disk is marked dead.
+ */
+ mutex_lock(&disk->open_mutex);
+- __blk_mark_disk_dead(disk);
++ start_drain = __blk_mark_disk_dead(disk);
++ queue_dying = blk_queue_dying(q);
++ if (start_drain)
++ blk_freeze_acquire_lock(q, true, queue_dying);
+ xa_for_each_start(&disk->part_tbl, idx, part, 1)
+ drop_partition(part);
+ mutex_unlock(&disk->open_mutex);
+@@ -718,13 +722,13 @@ void del_gendisk(struct gendisk *disk)
+ * If the disk does not own the queue, allow using passthrough requests
+ * again. Else leave the queue frozen to fail all I/O.
+ */
+- if (!test_bit(GD_OWNS_QUEUE, &disk->state)) {
+- blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
++ if (!test_bit(GD_OWNS_QUEUE, &disk->state))
+ __blk_mq_unfreeze_queue(q, true);
+- } else {
+- if (queue_is_mq(q))
+- blk_mq_exit_queue(q);
+- }
++ else if (queue_is_mq(q))
++ blk_mq_exit_queue(q);
++
++ if (start_drain)
++ blk_unfreeze_release_lock(q, true, queue_dying);
+ }
+ EXPORT_SYMBOL(del_gendisk);
+
+diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
+index d0d954fe9d54f3..7fc79e7dce44a9 100644
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -117,8 +117,10 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
+ err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
+ if (!err)
+ return -EINPROGRESS;
+- if (err == -EBUSY)
+- return -EAGAIN;
++ if (err == -EBUSY) {
++ /* try non-parallel mode */
++ return crypto_aead_encrypt(creq);
++ }
+
+ return err;
+ }
+@@ -166,8 +168,10 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
+ err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
+ if (!err)
+ return -EINPROGRESS;
+- if (err == -EBUSY)
+- return -EAGAIN;
++ if (err == -EBUSY) {
++ /* try non-parallel mode */
++ return crypto_aead_decrypt(creq);
++ }
+
+ return err;
+ }
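The pcrypt change is a general submit-or-degrade pattern: when the parallel path reports -EBUSY, do the work synchronously instead of bouncing -EAGAIN to callers that may not retry. A generic sketch (all names hypothetical):

static int submit_parallel_or_sync(struct job *job)
{
	int err = queue_parallel(job);	/* hypothetical async submit */

	if (!err)
		return -EINPROGRESS;	/* completes asynchronously */
	if (err == -EBUSY)
		return do_sync(job);	/* queue saturated: degrade */
	return err;
}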
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 78b32a8232419e..29b723039a3459 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -291,15 +291,16 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ return ret;
+ }
+
+-static int
++int
+ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp_type,
+- struct vpu_jsm_msg *resp, u32 channel,
+- unsigned long timeout_ms)
++ struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms)
+ {
+ struct ivpu_ipc_consumer cons;
+ int ret;
+
++ drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++
+ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
+
+ ret = ivpu_ipc_send(vdev, &cons, req);
+@@ -325,19 +326,21 @@ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req
+ return ret;
+ }
+
+-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms)
++int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
++ u32 channel, unsigned long timeout_ms)
+ {
+ struct vpu_jsm_msg hb_req = { .type = VPU_JSM_MSG_QUERY_ENGINE_HB };
+ struct vpu_jsm_msg hb_resp;
+ int ret, hb_ret;
+
+- drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
+
+ ret = ivpu_ipc_send_receive_internal(vdev, req, expected_resp, resp, channel, timeout_ms);
+ if (ret != -ETIMEDOUT)
+- return ret;
++ goto rpm_put;
+
+ hb_ret = ivpu_ipc_send_receive_internal(vdev, &hb_req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE,
+ &hb_resp, VPU_IPC_CHAN_ASYNC_CMD,
+@@ -345,21 +348,7 @@ int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *r
+ if (hb_ret == -ETIMEDOUT)
+ ivpu_pm_trigger_recovery(vdev, "IPC timeout");
+
+- return ret;
+-}
+-
+-int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms)
+-{
+- int ret;
+-
+- ret = ivpu_rpm_get(vdev);
+- if (ret < 0)
+- return ret;
+-
+- ret = ivpu_ipc_send_receive_active(vdev, req, expected_resp, resp, channel, timeout_ms);
+-
++rpm_put:
+ ivpu_rpm_put(vdev);
+ return ret;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h
+index 4fe38141045ea3..fb4de7fb8210ea 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.h
++++ b/drivers/accel/ivpu/ivpu_ipc.h
+@@ -101,10 +101,9 @@ int ivpu_ipc_send(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ struct ivpu_ipc_hdr *ipc_buf, struct vpu_jsm_msg *jsm_msg,
+ unsigned long timeout_ms);
+-
+-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms);
++int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ enum vpu_ipc_msg_type expected_resp_type,
++ struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms);
+ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+ u32 channel, unsigned long timeout_ms);
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
+index 46ef16c3c06910..88105963c1b288 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
+@@ -270,9 +270,8 @@ int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev)
+
+ req.payload.pwr_d0i3_enter.send_response = 1;
+
+- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE,
+- &resp, VPU_IPC_CHAN_GEN_CMD,
+- vdev->timeout.d0i3_entry_msg);
++ ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE, &resp,
++ VPU_IPC_CHAN_GEN_CMD, vdev->timeout.d0i3_entry_msg);
+ if (ret)
+ return ret;
+
+@@ -430,8 +429,8 @@ int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
+
+ req.payload.hws_priority_band_setup.normal_band_percentage = 10;
+
+- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
++ ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
++ &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ if (ret)
+ ivpu_warn_ratelimited(vdev, "Failed to set priority bands: %d\n", ret);
+
+@@ -544,9 +543,8 @@ int ivpu_jsm_dct_enable(struct ivpu_device *vdev, u32 active_us, u32 inactive_us
+ req.payload.pwr_dct_control.dct_active_us = active_us;
+ req.payload.pwr_dct_control.dct_inactive_us = inactive_us;
+
+- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD,
+- vdev->timeout.jsm);
++ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE, &resp,
++ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
+
+ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+@@ -554,7 +552,6 @@ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+ struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DCT_DISABLE };
+ struct vpu_jsm_msg resp;
+
+- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD,
+- vdev->timeout.jsm);
++ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, &resp,
++ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
+diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
+index c0e77c1c8e09d6..eb6c2d3603874a 100644
+--- a/drivers/acpi/arm64/gtdt.c
++++ b/drivers/acpi/arm64/gtdt.c
+@@ -283,7 +283,7 @@ static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block,
+ if (frame->virt_irq > 0)
+ acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt);
+ frame->virt_irq = 0;
+- } while (i-- >= 0 && gtdt_frame--);
++ } while (i-- > 0 && gtdt_frame--);
+
+ return -EINVAL;
+ }
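The gtdt fix is a do-while unwind off-by-one: with "i-- >= 0" the body runs once more after i reaches 0, stepping the frame cursor one element before the array. Generic sketch:

/* Undo entries arr[i], arr[i-1], ..., arr[0] after entry i failed. */
do {
	undo(&arr[i]);		/* hypothetical cleanup */
} while (i-- > 0);	/* ">= 0" would run again and touch arr[-1] */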
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 5c0cc7aae8726b..e78e3754d99e1d 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1140,7 +1140,6 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ return -EFAULT;
+ }
+ val = MASK_VAL_WRITE(reg, prev_val, val);
+- val |= prev_val;
+ }
+
+ switch (size) {
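The deleted "val |= prev_val" defeated the masked read-modify-write just computed: OR-ing the old value back in meant field bits could be set but never cleared. Worked example, assuming MASK_VAL_WRITE() merges the new field into prev_val under the register's bit mask:

u64 prev_val = 0xab, mask = 0xff, field = 0x01;
u64 val = (prev_val & ~mask) | (field & mask);	/* correct: 0x01 */
val |= prev_val;				/* old bug: 0xab again */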
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 324a9a3c087aa2..c6664a78796979 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -829,19 +829,18 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name, st
+ shash->tfm = alg;
+
+ if (crypto_shash_digest(shash, fw->data, fw->size, sha256buf) < 0)
+- goto out_shash;
++ goto out_free;
+
+ for (int i = 0; i < SHA256_DIGEST_SIZE; i++)
+ sprintf(&outbuf[i * 2], "%02x", sha256buf[i]);
+ outbuf[SHA256_BLOCK_SIZE] = 0;
+ dev_dbg(device, "Loaded FW: %s, sha256: %s\n", name, outbuf);
+
+-out_shash:
+- crypto_free_shash(alg);
+ out_free:
+ kfree(shash);
+ kfree(outbuf);
+ kfree(sha256buf);
++ crypto_free_shash(alg);
+ }
+ #else
+ static void fw_log_firmware_info(const struct firmware *fw, const char *name,
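/* The firmware_loader reordering above makes every exit path release the
 * shash transform, including the digest-failure path. The usual shape of
 * goto-based cleanup: later-acquired resources get earlier labels, and a
 * common tail frees what every path holds. Standalone sketch with
 * stand-in allocations (not the kernel crypto API):
 */
#include <stdlib.h>

static int do_step(void)   /* stands in for crypto_shash_digest() */
{
        return 0;
}

static int do_work(void)
{
        int ret = -1;
        char *alg = malloc(16);   /* stands in for crypto_alloc_shash() */
        char *buf;

        if (!alg)
                return -1;

        buf = malloc(64);
        if (!buf)
                goto out_free_alg;

        if (do_step() < 0)
                goto out_free_buf;   /* failure path still frees both */
        ret = 0;

out_free_buf:
        free(buf);
out_free_alg:
        free(alg);                   /* runs on success and on failure */
        return ret;
}

int main(void)
{
        return do_work();
}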
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index a750e48a26b87c..6981e5f974e9a4 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -514,12 +514,16 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
+ return IRQ_NONE;
+ }
+
++static struct lock_class_key regmap_irq_lock_class;
++static struct lock_class_key regmap_irq_request_class;
++
+ static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
+ irq_hw_number_t hw)
+ {
+ struct regmap_irq_chip_data *data = h->host_data;
+
+ irq_set_chip_data(virq, data);
++ irq_set_lockdep_class(virq, &regmap_irq_lock_class, &regmap_irq_request_class);
+ irq_set_chip(virq, &data->irq_chip);
+ irq_set_nested_thread(virq, 1);
+ irq_set_parent(virq, data->irq);
+diff --git a/drivers/base/trace.h b/drivers/base/trace.h
+index e52b6eae060dde..3b83b13a57ff1e 100644
+--- a/drivers/base/trace.h
++++ b/drivers/base/trace.h
+@@ -24,18 +24,18 @@ DECLARE_EVENT_CLASS(devres,
+ __field(struct device *, dev)
+ __field(const char *, op)
+ __field(void *, node)
+- __field(const char *, name)
++ __string(name, name)
+ __field(size_t, size)
+ ),
+ TP_fast_assign(
+ __assign_str(devname);
+ __entry->op = op;
+ __entry->node = node;
+- __entry->name = name;
++ __assign_str(name);
+ __entry->size = size;
+ ),
+ TP_printk("%s %3s %p %s (%zu bytes)", __get_str(devname),
+- __entry->op, __entry->node, __entry->name, __entry->size)
++ __entry->op, __entry->node, __get_str(name), __entry->size)
+ );
+
+ DEFINE_EVENT(devres, devres_log,
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index 2fd1ed1017481b..292f127cae0abe 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -231,8 +231,10 @@ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
+ xa_lock(&brd->brd_pages);
+ while (size >= PAGE_SIZE && aligned_sector < rd_size * 2) {
+ page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT);
+- if (page)
++ if (page) {
+ __free_page(page);
++ brd->brd_nr_pages--;
++ }
+ aligned_sector += PAGE_SECTORS;
+ size -= PAGE_SIZE;
+ }
+@@ -316,8 +318,40 @@ __setup("ramdisk_size=", ramdisk_size);
+ * (should share code eventually).
+ */
+ static LIST_HEAD(brd_devices);
++static DEFINE_MUTEX(brd_devices_mutex);
+ static struct dentry *brd_debugfs_dir;
+
++static struct brd_device *brd_find_or_alloc_device(int i)
++{
++ struct brd_device *brd;
++
++ mutex_lock(&brd_devices_mutex);
++ list_for_each_entry(brd, &brd_devices, brd_list) {
++ if (brd->brd_number == i) {
++ mutex_unlock(&brd_devices_mutex);
++ return ERR_PTR(-EEXIST);
++ }
++ }
++
++ brd = kzalloc(sizeof(*brd), GFP_KERNEL);
++ if (!brd) {
++ mutex_unlock(&brd_devices_mutex);
++ return ERR_PTR(-ENOMEM);
++ }
++ brd->brd_number = i;
++ list_add_tail(&brd->brd_list, &brd_devices);
++ mutex_unlock(&brd_devices_mutex);
++ return brd;
++}
++
++static void brd_free_device(struct brd_device *brd)
++{
++ mutex_lock(&brd_devices_mutex);
++ list_del(&brd->brd_list);
++ mutex_unlock(&brd_devices_mutex);
++ kfree(brd);
++}
++
+ static int brd_alloc(int i)
+ {
+ struct brd_device *brd;
+@@ -340,14 +374,9 @@ static int brd_alloc(int i)
+ BLK_FEAT_NOWAIT,
+ };
+
+- list_for_each_entry(brd, &brd_devices, brd_list)
+- if (brd->brd_number == i)
+- return -EEXIST;
+- brd = kzalloc(sizeof(*brd), GFP_KERNEL);
+- if (!brd)
+- return -ENOMEM;
+- brd->brd_number = i;
+- list_add_tail(&brd->brd_list, &brd_devices);
++ brd = brd_find_or_alloc_device(i);
++ if (IS_ERR(brd))
++ return PTR_ERR(brd);
+
+ xa_init(&brd->brd_pages);
+
+@@ -378,8 +407,7 @@ static int brd_alloc(int i)
+ out_cleanup_disk:
+ put_disk(disk);
+ out_free_dev:
+- list_del(&brd->brd_list);
+- kfree(brd);
++ brd_free_device(brd);
+ return err;
+ }
+
+@@ -398,8 +426,7 @@ static void brd_cleanup(void)
+ del_gendisk(brd->brd_disk);
+ put_disk(brd->brd_disk);
+ brd_free_pages(brd);
+- list_del(&brd->brd_list);
+- kfree(brd);
++ brd_free_device(brd);
+ }
+ }
+
+@@ -426,16 +453,6 @@ static int __init brd_init(void)
+ {
+ int err, i;
+
+- brd_check_and_reset_par();
+-
+- brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
+-
+- for (i = 0; i < rd_nr; i++) {
+- err = brd_alloc(i);
+- if (err)
+- goto out_free;
+- }
+-
+ /*
+ * brd module now has a feature to instantiate underlying device
+ * structure on-demand, provided that there is an access dev node.
+@@ -451,11 +468,18 @@ static int __init brd_init(void)
+ * dynamically.
+ */
+
++ brd_check_and_reset_par();
++
++ brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
++
+ if (__register_blkdev(RAMDISK_MAJOR, "ramdisk", brd_probe)) {
+ err = -EIO;
+ goto out_free;
+ }
+
++ for (i = 0; i < rd_nr; i++)
++ brd_alloc(i);
++
+ pr_info("brd: module loaded\n");
+ return 0;
+
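/* brd_find_or_alloc_device() above folds the lookup and the insertion into
 * one critical section, so two concurrent probes cannot both miss the list
 * and both insert the same id. The same pattern in standalone form (POSIX
 * threads and a toy list; all names are illustrative):
 */
#include <pthread.h>
#include <stdlib.h>

struct dev {
        int id;
        struct dev *next;
};

static struct dev *devs;
static pthread_mutex_t devs_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns the new node, or NULL if `id` already exists or on OOM. */
static struct dev *find_or_alloc(int id)
{
        struct dev *d;

        pthread_mutex_lock(&devs_lock);
        for (d = devs; d; d = d->next) {
                if (d->id == id) {              /* lost the race: already there */
                        pthread_mutex_unlock(&devs_lock);
                        return NULL;
                }
        }
        d = calloc(1, sizeof(*d));
        if (d) {                                /* insert while still locked */
                d->id = id;
                d->next = devs;
                devs = d;
        }
        pthread_mutex_unlock(&devs_lock);
        return d;
}

int main(void)
{
        return find_or_alloc(0) ? 0 : 1;
}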
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 78a7bb28defe4c..86cc3b19faae86 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -173,7 +173,7 @@ static loff_t get_loop_size(struct loop_device *lo, struct file *file)
+ static bool lo_bdev_can_use_dio(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+- unsigned short sb_bsize = bdev_logical_block_size(backing_bdev);
++ unsigned int sb_bsize = bdev_logical_block_size(backing_bdev);
+
+ if (queue_logical_block_size(lo->lo_queue) < sb_bsize)
+ return false;
+@@ -977,7 +977,7 @@ loop_set_status_from_info(struct loop_device *lo,
+ return 0;
+ }
+
+-static unsigned short loop_default_blocksize(struct loop_device *lo,
++static unsigned int loop_default_blocksize(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+ /* In case of direct I/O, match underlying block size */
+@@ -986,7 +986,7 @@ static unsigned short loop_default_blocksize(struct loop_device *lo,
+ return SECTOR_SIZE;
+ }
+
+-static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
++static int loop_reconfigure_limits(struct loop_device *lo, unsigned int bsize)
+ {
+ struct file *file = lo->lo_backing_file;
+ struct inode *inode = file->f_mapping->host;
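/* The loop.c type changes above matter because a backing device's logical
 * block size can exceed what an `unsigned short` holds; a 64KiB value
 * silently truncates to 0. Standalone demonstration:
 */
#include <stdio.h>

int main(void)
{
        unsigned int bsize = 65536;               /* 64 KiB logical block */
        unsigned short narrow = (unsigned short)bsize;

        printf("unsigned int:   %u\n", bsize);    /* 65536 */
        printf("unsigned short: %u\n", narrow);   /* 0     */
        return 0;
}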
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 6ba2c1dd1d878a..90bc605ff6c299 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -664,12 +664,21 @@ static inline char *ublk_queue_cmd_buf(struct ublk_device *ub, int q_id)
+ return ublk_get_queue(ub, q_id)->io_cmd_buf;
+ }
+
++static inline int __ublk_queue_cmd_buf_size(int depth)
++{
++ return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE);
++}
++
+ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
+ {
+ struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
+
+- return round_up(ubq->q_depth * sizeof(struct ublksrv_io_desc),
+- PAGE_SIZE);
++ return __ublk_queue_cmd_buf_size(ubq->q_depth);
++}
++
++static int ublk_max_cmd_buf_size(void)
++{
++ return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH);
+ }
+
+ static inline bool ublk_queue_can_use_recovery_reissue(
+@@ -1322,7 +1331,7 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ struct ublk_device *ub = filp->private_data;
+ size_t sz = vma->vm_end - vma->vm_start;
+- unsigned max_sz = UBLK_MAX_QUEUE_DEPTH * sizeof(struct ublksrv_io_desc);
++ unsigned max_sz = ublk_max_cmd_buf_size();
+ unsigned long pfn, end, phys_off = vma->vm_pgoff << PAGE_SHIFT;
+ int q_id, ret = 0;
+
+@@ -2965,7 +2974,7 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
+ ret = ublk_ctrl_end_recovery(ub, cmd);
+ break;
+ default:
+- ret = -ENOTSUPP;
++ ret = -EOPNOTSUPP;
+ break;
+ }
+
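/* The ublk change above derives the mmap bound from the same PAGE_SIZE
 * round-up used to size each queue's command buffer, rather than from the
 * raw depth * descriptor-size product. Arithmetic sketch (4KiB pages and
 * a 24-byte descriptor assumed purely for illustration):
 */
#include <stdio.h>

#define PAGE_SZ 4096u
#define DESC_SZ 24u   /* illustrative stand-in for sizeof(ublksrv_io_desc) */

static unsigned int round_up_page(unsigned int n)
{
        return (n + PAGE_SZ - 1) / PAGE_SZ * PAGE_SZ;
}

int main(void)
{
        unsigned int depth = 128;
        unsigned int raw = depth * DESC_SZ;            /* 3072 */
        unsigned int per_queue = round_up_page(raw);   /* 4096 */

        /* A bound computed from `raw` would reject mapping the rounded
         * buffer the driver actually allocates per queue.
         */
        printf("raw=%u rounded=%u\n", raw, per_queue);
        return 0;
}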
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 194417abc1053c..43c96b73a7118f 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -471,18 +471,18 @@ static bool virtblk_prep_rq_batch(struct request *req)
+ return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK;
+ }
+
+-static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
++static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ struct request **rqlist)
+ {
++ struct request *req;
+ unsigned long flags;
+- int err;
+ bool kick;
+
+ spin_lock_irqsave(&vq->lock, flags);
+
+- while (!rq_list_empty(*rqlist)) {
+- struct request *req = rq_list_pop(rqlist);
++ while ((req = rq_list_pop(rqlist))) {
+ struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
++ int err;
+
+ err = virtblk_add_req(vq->vq, vbr);
+ if (err) {
+@@ -495,37 +495,33 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ kick = virtqueue_kick_prepare(vq->vq);
+ spin_unlock_irqrestore(&vq->lock, flags);
+
+- return kick;
++ if (kick)
++ virtqueue_notify(vq->vq);
+ }
+
+ static void virtio_queue_rqs(struct request **rqlist)
+ {
+- struct request *req, *next, *prev = NULL;
++ struct request *submit_list = NULL;
+ struct request *requeue_list = NULL;
++ struct request **requeue_lastp = &requeue_list;
++ struct virtio_blk_vq *vq = NULL;
++ struct request *req;
+
+- rq_list_for_each_safe(rqlist, req, next) {
+- struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
+- bool kick;
+-
+- if (!virtblk_prep_rq_batch(req)) {
+- rq_list_move(rqlist, &requeue_list, req, prev);
+- req = prev;
+- if (!req)
+- continue;
+- }
++ while ((req = rq_list_pop(rqlist))) {
++ struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
+
+- if (!next || req->mq_hctx != next->mq_hctx) {
+- req->rq_next = NULL;
+- kick = virtblk_add_req_batch(vq, rqlist);
+- if (kick)
+- virtqueue_notify(vq->vq);
++ if (vq && vq != this_vq)
++ virtblk_add_req_batch(vq, &submit_list);
++ vq = this_vq;
+
+- *rqlist = next;
+- prev = NULL;
+- } else
+- prev = req;
++ if (virtblk_prep_rq_batch(req))
++ rq_list_add(&submit_list, req); /* reverse order */
++ else
++ rq_list_add_tail(&requeue_lastp, req);
+ }
+
++ if (vq)
++ virtblk_add_req_batch(vq, &submit_list);
+ *rqlist = requeue_list;
+ }
+
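/* The virtio_queue_rqs() rewrite above pops requests off the plug list and
 * flushes the accumulated batch whenever the target virtqueue changes,
 * instead of splicing the list in place. The control flow reduced to a toy
 * list keyed by queue id (illustrative only, not the driver code):
 */
#include <stdio.h>

struct req {
        int q;
        struct req *next;
};

static void flush(int q)
{
        printf("submit batch to vq %d\n", q);
}

static void queue_rqs(struct req *head)
{
        int cur_q = -1;

        while (head) {
                struct req *r = head;
                head = r->next;               /* pop */

                if (cur_q >= 0 && r->q != cur_q)
                        flush(cur_q);         /* queue changed: submit batch */
                cur_q = r->q;
                /* ... r would be added to the per-queue submit list here ... */
        }
        if (cur_q >= 0)
                flush(cur_q);                 /* final partial batch */
}

int main(void)
{
        struct req c = { 1, NULL }, b = { 0, &c }, a = { 0, &b };

        queue_rqs(&a);                        /* two batches: vq 0, then vq 1 */
        return 0;
}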
+diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
+index 6aea609b795c2f..402b7b17586328 100644
+--- a/drivers/block/zram/Kconfig
++++ b/drivers/block/zram/Kconfig
+@@ -94,6 +94,7 @@ endchoice
+
+ config ZRAM_DEF_COMP
+ string
++ depends on ZRAM
+ default "lzo-rle" if ZRAM_DEF_COMP_LZORLE
+ default "lzo" if ZRAM_DEF_COMP_LZO
+ default "lz4" if ZRAM_DEF_COMP_LZ4
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index ad9c9bc3ccfc5b..e682797cdee783 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -626,6 +626,12 @@ static ssize_t writeback_store(struct device *dev,
+ goto release_init_lock;
+ }
+
++ /* Do not permit concurrent post-processing actions. */
++ if (atomic_xchg(&zram->pp_in_progress, 1)) {
++ up_read(&zram->init_lock);
++ return -EAGAIN;
++ }
++
+ if (!zram->backing_dev) {
+ ret = -ENODEV;
+ goto release_init_lock;
+@@ -752,6 +758,7 @@ static ssize_t writeback_store(struct device *dev,
+ free_block_bdev(zram, blk_idx);
+ __free_page(page);
+ release_init_lock:
++ atomic_set(&zram->pp_in_progress, 0);
+ up_read(&zram->init_lock);
+
+ return ret;
+@@ -1881,6 +1888,12 @@ static ssize_t recompress_store(struct device *dev,
+ goto release_init_lock;
+ }
+
++ /* Do not permit concurrent post-processing actions. */
++ if (atomic_xchg(&zram->pp_in_progress, 1)) {
++ up_read(&zram->init_lock);
++ return -EAGAIN;
++ }
++
+ if (algo) {
+ bool found = false;
+
+@@ -1948,6 +1961,7 @@ static ssize_t recompress_store(struct device *dev,
+ __free_page(page);
+
+ release_init_lock:
++ atomic_set(&zram->pp_in_progress, 0);
+ up_read(&zram->init_lock);
+ return ret;
+ }
+@@ -2144,6 +2158,7 @@ static void zram_reset_device(struct zram *zram)
+ zram->disksize = 0;
+ zram_destroy_comps(zram);
+ memset(&zram->stats, 0, sizeof(zram->stats));
++ atomic_set(&zram->pp_in_progress, 0);
+ reset_bdev(zram);
+
+ comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
+@@ -2381,6 +2396,9 @@ static int zram_add(void)
+ zram->disk->fops = &zram_devops;
+ zram->disk->private_data = zram;
+ snprintf(zram->disk->disk_name, 16, "zram%d", device_id);
++ atomic_set(&zram->pp_in_progress, 0);
++ zram_comp_params_reset(zram);
++ comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
+
+ /* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
+ set_capacity(zram->disk, 0);
+@@ -2388,9 +2406,6 @@ static int zram_add(void)
+ if (ret)
+ goto out_cleanup_disk;
+
+- zram_comp_params_reset(zram);
+- comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
+-
+ zram_debugfs_register(zram);
+ pr_info("Added device: %s\n", zram->disk->disk_name);
+ return device_id;
+diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
+index cfc8c059db6369..8acf9d2ee42b87 100644
+--- a/drivers/block/zram/zram_drv.h
++++ b/drivers/block/zram/zram_drv.h
+@@ -139,5 +139,6 @@ struct zram {
+ #ifdef CONFIG_ZRAM_MEMORY_TRACKING
+ struct dentry *debugfs_dir;
+ #endif
++ atomic_t pp_in_progress;
+ };
+ #endif
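/* The zram pp_in_progress guard above is an atomic-exchange try-lock: the
 * first caller swaps 0 -> 1 and proceeds, later callers see 1 and bail out
 * with -EAGAIN. The same shape in portable C11 (illustrative sketch, not
 * the kernel atomics API):
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pp_in_progress;

static int writeback(void)
{
        if (atomic_exchange(&pp_in_progress, 1))
                return -1;                     /* someone else is post-processing */

        /* ... exclusive post-processing work ... */

        atomic_store(&pp_in_progress, 0);      /* release on every exit path */
        return 0;
}

int main(void)
{
        printf("uncontended: %d\n", writeback());   /* 0  */
        atomic_store(&pp_in_progress, 1);           /* simulate a racer */
        printf("contended:   %d\n", writeback());   /* -1 */
        return 0;
}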
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index eef00467905eb3..a1153ada74d206 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -541,11 +541,10 @@ static const struct bcm_subver_table bcm_usb_subver_table[] = {
+ static const char *btbcm_get_board_name(struct device *dev)
+ {
+ #ifdef CONFIG_OF
+- struct device_node *root;
++ struct device_node *root __free(device_node) = of_find_node_by_path("/");
+ char *board_type;
+ const char *tmp;
+
+- root = of_find_node_by_path("/");
+ if (!root)
+ return NULL;
+
+@@ -555,7 +554,6 @@ static const char *btbcm_get_board_name(struct device *dev)
+ /* get rid of any '/' in the compatible string */
+ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
+ strreplace(board_type, '/', '-');
+- of_node_put(root);
+
+ return board_type;
+ #else
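/* The btbcm change above replaces the manual of_node_put() with a
 * scope-based cleanup: `__free(device_node)` releases the node on every
 * return path automatically. The underlying mechanism is the compiler
 * `cleanup` attribute; a standalone GCC/Clang sketch:
 */
#include <stdio.h>
#include <stdlib.h>

static void free_buf(char **p)
{
        free(*p);                /* free(NULL) is a harmless no-op */
        printf("freed\n");
}

int main(void)
{
        char *buf __attribute__((cleanup(free_buf))) = malloc(32);

        if (!buf)
                return 1;        /* cleanup still runs; free(NULL) is safe */

        /* any return from this scope triggers free_buf() automatically */
        return 0;
}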
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 30a32ebbcc681b..645047fb92fd26 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -1841,6 +1841,37 @@ static int btintel_boot_wait(struct hci_dev *hdev, ktime_t calltime, int msec)
+ return 0;
+ }
+
++static int btintel_boot_wait_d0(struct hci_dev *hdev, ktime_t calltime,
++ int msec)
++{
++ ktime_t delta, rettime;
++ unsigned long long duration;
++ int err;
++
++ bt_dev_info(hdev, "Waiting for device transition to d0");
++
++ err = btintel_wait_on_flag_timeout(hdev, INTEL_WAIT_FOR_D0,
++ TASK_INTERRUPTIBLE,
++ msecs_to_jiffies(msec));
++ if (err == -EINTR) {
++ bt_dev_err(hdev, "Device d0 move interrupted");
++ return -EINTR;
++ }
++
++ if (err) {
++ bt_dev_err(hdev, "Device d0 move timeout");
++ return -ETIMEDOUT;
++ }
++
++ rettime = ktime_get();
++ delta = ktime_sub(rettime, calltime);
++ duration = (unsigned long long)ktime_to_ns(delta) >> 10;
++
++ bt_dev_info(hdev, "Device moved to D0 in %llu usecs", duration);
++
++ return 0;
++}
++
+ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ {
+ ktime_t calltime;
+@@ -1849,6 +1880,7 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ calltime = ktime_get();
+
+ btintel_set_flag(hdev, INTEL_BOOTING);
++ btintel_set_flag(hdev, INTEL_WAIT_FOR_D0);
+
+ err = btintel_send_intel_reset(hdev, boot_addr);
+ if (err) {
+@@ -1861,13 +1893,28 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ * is done by the operational firmware sending bootup notification.
+ *
+ * Booting into operational firmware should not take longer than
+- * 1 second. However if that happens, then just fail the setup
++ * 5 seconds. However if that happens, then just fail the setup
+ * since something went wrong.
+ */
+- err = btintel_boot_wait(hdev, calltime, 1000);
+- if (err == -ETIMEDOUT)
++ err = btintel_boot_wait(hdev, calltime, 5000);
++ if (err == -ETIMEDOUT) {
+ btintel_reset_to_bootloader(hdev);
++ goto exit_error;
++ }
+
++ if (hdev->bus == HCI_PCI) {
++ /* In case of PCIe, after receiving bootup event, driver performs
++ * D0 entry by writing 0 to sleep control register (check
++ * btintel_pcie_recv_event()).
++ * Firmware acks with an alive interrupt indicating the host is fully
++ * ready to perform BT operations. Let's wait here till the
++ * INTEL_WAIT_FOR_D0 bit is cleared.
++ */
++ calltime = ktime_get();
++ err = btintel_boot_wait_d0(hdev, calltime, 2000);
++ }
++
++exit_error:
+ return err;
+ }
+
+@@ -3273,7 +3320,7 @@ int btintel_configure_setup(struct hci_dev *hdev, const char *driver_name)
+ }
+ EXPORT_SYMBOL_GPL(btintel_configure_setup);
+
+-static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
++int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct intel_tlv *tlv = (void *)&skb->data[5];
+
+@@ -3301,6 +3348,7 @@ static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ recv_frame:
+ return hci_recv_frame(hdev, skb);
+ }
++EXPORT_SYMBOL_GPL(btintel_diagnostics);
+
+ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+@@ -3320,7 +3368,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ * indicating that the bootup completed.
+ */
+ btintel_bootup(hdev, ptr, len);
+- break;
++ kfree_skb(skb);
++ return 0;
+ case 0x06:
+ /* When the firmware loading completes the
+ * device sends out a vendor specific event
+@@ -3328,7 +3377,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ * loading.
+ */
+ btintel_secure_send_result(hdev, ptr, len);
+- break;
++ kfree_skb(skb);
++ return 0;
+ }
+ }
+
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index aa70e4c2741653..b448c67e8ed94d 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -178,6 +178,7 @@ enum {
+ INTEL_ROM_LEGACY,
+ INTEL_ROM_LEGACY_NO_WBS_SUPPORT,
+ INTEL_ACPI_RESET_ACTIVE,
++ INTEL_WAIT_FOR_D0,
+
+ __INTEL_NUM_FLAGS,
+ };
+@@ -249,6 +250,7 @@ int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
+ int btintel_shutdown_combined(struct hci_dev *hdev);
+ void btintel_hw_error(struct hci_dev *hdev, u8 code);
+ void btintel_print_fseq_info(struct hci_dev *hdev);
++int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb);
+ #else
+
+ static inline int btintel_check_bdaddr(struct hci_dev *hdev)
+@@ -382,4 +384,9 @@ static inline void btintel_hw_error(struct hci_dev *hdev, u8 code)
+ static inline void btintel_print_fseq_info(struct hci_dev *hdev)
+ {
+ }
++
++static inline int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ return -EOPNOTSUPP;
++}
+ #endif
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 5252125b003f58..8bd663f4bac1b7 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -48,6 +48,17 @@ MODULE_DEVICE_TABLE(pci, btintel_pcie_table);
+ #define BTINTEL_PCIE_HCI_EVT_PKT 0x00000004
+ #define BTINTEL_PCIE_HCI_ISO_PKT 0x00000005
+
++/* Alive interrupt context */
++enum {
++ BTINTEL_PCIE_ROM,
++ BTINTEL_PCIE_FW_DL,
++ BTINTEL_PCIE_HCI_RESET,
++ BTINTEL_PCIE_INTEL_HCI_RESET1,
++ BTINTEL_PCIE_INTEL_HCI_RESET2,
++ BTINTEL_PCIE_D0,
++ BTINTEL_PCIE_D3
++};
++
+ static inline void ipc_print_ia_ring(struct hci_dev *hdev, struct ia *ia,
+ u16 queue_num)
+ {
+@@ -290,8 +301,9 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data)
+ /* wait for interrupt from the device after booting up to primary
+ * bootloader.
+ */
++ data->alive_intr_ctxt = BTINTEL_PCIE_ROM;
+ err = wait_event_timeout(data->gp0_wait_q, data->gp0_received,
+- msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT));
++ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+ if (!err)
+ return -ETIME;
+
+@@ -302,12 +314,78 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data)
+ return 0;
+ }
+
++/* BIT(0) - ROM, BIT(1) - IML and BIT(3) - OP
++ * Sometimes during firmware image switching from ROM to IML or IML to OP image,
++ * the previous image bit is not cleared by firmware when the alive interrupt
++ * is received. The driver needs to take care of these sticky bits when
++ * deciding which image is currently running on the controller.
++ * Ex: 0x10 and 0x11 - both represent that the controller is running IML
++ */
++static inline bool btintel_pcie_in_rom(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_ROM &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW);
++}
++
++static inline bool btintel_pcie_in_op(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW;
++}
++
++static inline bool btintel_pcie_in_iml(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW);
++}
++
++static inline bool btintel_pcie_in_d3(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY;
++}
++
++static inline bool btintel_pcie_in_d0(struct btintel_pcie_data *data)
++{
++ return !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY);
++}
++
++static void btintel_pcie_wr_sleep_cntrl(struct btintel_pcie_data *data,
++ u32 dxstate)
++{
++ bt_dev_dbg(data->hdev, "writing sleep_ctl_reg: 0x%8.8x", dxstate);
++ btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG, dxstate);
++}
++
++static inline char *btintel_pcie_alivectxt_state2str(u32 alive_intr_ctxt)
++{
++ switch (alive_intr_ctxt) {
++ case BTINTEL_PCIE_ROM:
++ return "rom";
++ case BTINTEL_PCIE_FW_DL:
++ return "fw_dl";
++ case BTINTEL_PCIE_D0:
++ return "d0";
++ case BTINTEL_PCIE_D3:
++ return "d3";
++ case BTINTEL_PCIE_HCI_RESET:
++ return "hci_reset";
++ case BTINTEL_PCIE_INTEL_HCI_RESET1:
++ return "intel_reset1";
++ case BTINTEL_PCIE_INTEL_HCI_RESET2:
++ return "intel_reset2";
++ default:
++ return "unknown";
++ }
++ return "null";
++}
++
+ /* This function handles the MSI-X interrupt for gp0 cause (bit 0 in
+ * BTINTEL_PCIE_CSR_MSIX_HW_INT_CAUSES) which is sent for boot stage and image response.
+ */
+ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ {
+- u32 reg;
++ bool submit_rx, signal_waitq;
++ u32 reg, old_ctxt;
+
+ /* This interrupt is for three different causes and it is not easy to
+ * know what causes the interrupt. So, it compares each register value
+@@ -317,20 +395,87 @@ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ if (reg != data->boot_stage_cache)
+ data->boot_stage_cache = reg;
+
++ bt_dev_dbg(data->hdev, "Alive context: %s old_boot_stage: 0x%8.8x new_boot_stage: 0x%8.8x",
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt),
++ data->boot_stage_cache, reg);
+ reg = btintel_pcie_rd_reg32(data, BTINTEL_PCIE_CSR_IMG_RESPONSE_REG);
+ if (reg != data->img_resp_cache)
+ data->img_resp_cache = reg;
+
+ data->gp0_received = true;
+
+- /* If the boot stage is OP or IML, reset IA and start RX again */
+- if (data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW ||
+- data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) {
++ old_ctxt = data->alive_intr_ctxt;
++ submit_rx = false;
++ signal_waitq = false;
++
++ switch (data->alive_intr_ctxt) {
++ case BTINTEL_PCIE_ROM:
++ data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
++ signal_waitq = true;
++ break;
++ case BTINTEL_PCIE_FW_DL:
++ /* Error case is already handled. Ideally, control should not
++ * reach here.
++ */
++ break;
++ case BTINTEL_PCIE_INTEL_HCI_RESET1:
++ if (btintel_pcie_in_op(data)) {
++ submit_rx = true;
++ break;
++ }
++
++ if (btintel_pcie_in_iml(data)) {
++ submit_rx = true;
++ data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_INTEL_HCI_RESET2:
++ if (btintel_test_and_clear_flag(data->hdev, INTEL_WAIT_FOR_D0)) {
++ btintel_wake_up_flag(data->hdev, INTEL_WAIT_FOR_D0);
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ }
++ break;
++ case BTINTEL_PCIE_D0:
++ if (btintel_pcie_in_d3(data)) {
++ data->alive_intr_ctxt = BTINTEL_PCIE_D3;
++ signal_waitq = true;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_D3:
++ if (btintel_pcie_in_d0(data)) {
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ submit_rx = true;
++ signal_waitq = true;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_HCI_RESET:
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ submit_rx = true;
++ signal_waitq = true;
++ break;
++ default:
++ bt_dev_err(data->hdev, "Unknown state: 0x%2.2x",
++ data->alive_intr_ctxt);
++ break;
++ }
++
++ if (submit_rx) {
+ btintel_pcie_reset_ia(data);
+ btintel_pcie_start_rx(data);
+ }
+
+- wake_up(&data->gp0_wait_q);
++ if (signal_waitq) {
++ bt_dev_dbg(data->hdev, "wake up gp0 wait_q");
++ wake_up(&data->gp0_wait_q);
++ }
++
++ if (old_ctxt != data->alive_intr_ctxt)
++ bt_dev_dbg(data->hdev, "alive context changed: %s -> %s",
++ btintel_pcie_alivectxt_state2str(old_ctxt),
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+ }
+
+ /* This function handles the MSX-X interrupt for rx queue 0 which is for TX
+@@ -364,6 +509,83 @@ static void btintel_pcie_msix_tx_handle(struct btintel_pcie_data *data)
+ }
+ }
+
++static int btintel_pcie_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ struct hci_event_hdr *hdr = (void *)skb->data;
++ const char diagnostics_hdr[] = { 0x87, 0x80, 0x03 };
++ struct btintel_pcie_data *data = hci_get_drvdata(hdev);
++
++ if (skb->len > HCI_EVENT_HDR_SIZE && hdr->evt == 0xff &&
++ hdr->plen > 0) {
++ const void *ptr = skb->data + HCI_EVENT_HDR_SIZE + 1;
++ unsigned int len = skb->len - HCI_EVENT_HDR_SIZE - 1;
++
++ if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) {
++ switch (skb->data[2]) {
++ case 0x02:
++ /* When switching to the operational firmware
++ * the device sends a vendor specific event
++ * indicating that the bootup completed.
++ */
++ btintel_bootup(hdev, ptr, len);
++
++ /* If the bootup event is from the operational image,
++ * the driver needs to write the sleep control register
++ * to move into the D0 state.
++ */
++ if (btintel_pcie_in_op(data)) {
++ btintel_pcie_wr_sleep_cntrl(data, BTINTEL_PCIE_STATE_D0);
++ data->alive_intr_ctxt = BTINTEL_PCIE_INTEL_HCI_RESET2;
++ kfree_skb(skb);
++ return 0;
++ }
++
++ if (btintel_pcie_in_iml(data)) {
++ /* In case of IML, there is no concept
++ * of D0 transition. Just mimic as if
++ * IML moved to D0 by clearing INTEL_WAIT_FOR_D0
++ * bit and waking up the task waiting on
++ * INTEL_WAIT_FOR_D0. This is required
++ * as intel_boot() is a common function for
++ * both IML and OP image loading.
++ */
++ if (btintel_test_and_clear_flag(data->hdev,
++ INTEL_WAIT_FOR_D0))
++ btintel_wake_up_flag(data->hdev,
++ INTEL_WAIT_FOR_D0);
++ }
++ kfree_skb(skb);
++ return 0;
++ case 0x06:
++ /* When the firmware loading completes the
++ * device sends out a vendor specific event
++ * indicating the result of the firmware
++ * loading.
++ */
++ btintel_secure_send_result(hdev, ptr, len);
++ kfree_skb(skb);
++ return 0;
++ }
++ }
++
++ /* Handle all diagnostics events separately. May still call
++ * hci_recv_frame.
++ */
++ if (len >= sizeof(diagnostics_hdr) &&
++ memcmp(&skb->data[2], diagnostics_hdr,
++ sizeof(diagnostics_hdr)) == 0) {
++ return btintel_diagnostics(hdev, skb);
++ }
++
++ /* This is a debug event that comes from IML and OP image when it
++ * starts execution. There is no need to pass this event to the stack.
++ */
++ if (skb->data[2] == 0x97)
++ return 0;
++ }
++
++ return hci_recv_frame(hdev, skb);
++}
+ /* Process the received rx data
+ * It check the frame header to identify the data type and create skb
+ * and calling HCI API
+@@ -465,7 +687,7 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
+ hdev->stat.byte_rx += plen;
+
+ if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT)
+- ret = btintel_recv_event(hdev, new_skb);
++ ret = btintel_pcie_recv_event(hdev, new_skb);
+ else
+ ret = hci_recv_frame(hdev, new_skb);
+
+@@ -1053,8 +1275,11 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ struct sk_buff *skb)
+ {
+ struct btintel_pcie_data *data = hci_get_drvdata(hdev);
++ struct hci_command_hdr *cmd;
++ __u16 opcode = ~0;
+ int ret;
+ u32 type;
++ u32 old_ctxt;
+
+ /* Due to the fw limitation, the type header of the packet should be
+ * 4 bytes unlike 1 byte for UART. In UART, the firmware can read
+@@ -1073,6 +1298,8 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ switch (hci_skb_pkt_type(skb)) {
+ case HCI_COMMAND_PKT:
+ type = BTINTEL_PCIE_HCI_CMD_PKT;
++ cmd = (void *)skb->data;
++ opcode = le16_to_cpu(cmd->opcode);
+ if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) {
+ struct hci_command_hdr *cmd = (void *)skb->data;
+ __u16 opcode = le16_to_cpu(cmd->opcode);
+@@ -1111,6 +1338,30 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ bt_dev_err(hdev, "Failed to send frame (%d)", ret);
+ goto exit_error;
+ }
++
++ if (type == BTINTEL_PCIE_HCI_CMD_PKT &&
++ (opcode == HCI_OP_RESET || opcode == 0xfc01)) {
++ old_ctxt = data->alive_intr_ctxt;
++ data->alive_intr_ctxt =
++ (opcode == 0xfc01 ? BTINTEL_PCIE_INTEL_HCI_RESET1 :
++ BTINTEL_PCIE_HCI_RESET);
++ bt_dev_dbg(data->hdev, "sent cmd: 0x%4.4x alive context changed: %s -> %s",
++ opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++ if (opcode == HCI_OP_RESET) {
++ data->gp0_received = false;
++ ret = wait_event_timeout(data->gp0_wait_q,
++ data->gp0_received,
++ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
++ if (!ret) {
++ hdev->stat.err_tx++;
++ bt_dev_err(hdev, "No alive interrupt received for %s",
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++ ret = -ETIME;
++ goto exit_error;
++ }
++ }
++ }
+ hdev->stat.byte_tx += skb->len;
+ kfree_skb(skb);
+
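/* The gp0 handler above is a small state machine keyed on what the driver
 * last initiated (ROM boot, firmware download, HCI reset, D0/D3 moves),
 * deciding whether to restart RX and/or wake the boot waiter. A heavily
 * stripped-down sketch of the dispatch shape (states and transitions
 * abbreviated; not the driver's actual table):
 */
#include <stdio.h>

enum ctx { ROM, FW_DL, HCI_RESET, D0, D3 };

static enum ctx alive_step(enum ctx cur, int in_d3)
{
        switch (cur) {
        case ROM:       return FW_DL;             /* bootloader answered */
        case HCI_RESET: return D0;                /* reset completed     */
        case D0:        return in_d3 ? D3 : D0;
        case D3:        return in_d3 ? D3 : D0;
        default:        return cur;               /* FW_DL: handled elsewhere */
        }
}

int main(void)
{
        enum ctx c = ROM;

        c = alive_step(c, 0);      /* ROM -> FW_DL on the first alive irq */
        printf("state=%d\n", c);
        return 0;
}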
+diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h
+index baaff70420f575..8b7824ad005a2a 100644
+--- a/drivers/bluetooth/btintel_pcie.h
++++ b/drivers/bluetooth/btintel_pcie.h
+@@ -12,6 +12,7 @@
+ #define BTINTEL_PCIE_CSR_HW_REV_REG (BTINTEL_PCIE_CSR_BASE + 0x028)
+ #define BTINTEL_PCIE_CSR_RF_ID_REG (BTINTEL_PCIE_CSR_BASE + 0x09C)
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_REG (BTINTEL_PCIE_CSR_BASE + 0x108)
++#define BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG (BTINTEL_PCIE_CSR_BASE + 0x114)
+ #define BTINTEL_PCIE_CSR_CI_ADDR_LSB_REG (BTINTEL_PCIE_CSR_BASE + 0x118)
+ #define BTINTEL_PCIE_CSR_CI_ADDR_MSB_REG (BTINTEL_PCIE_CSR_BASE + 0x11C)
+ #define BTINTEL_PCIE_CSR_IMG_RESPONSE_REG (BTINTEL_PCIE_CSR_BASE + 0x12C)
+@@ -32,6 +33,7 @@
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_IML_LOCKDOWN (BIT(11))
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_MAC_ACCESS_ON (BIT(16))
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_ALIVE (BIT(23))
++#define BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY (BIT(24))
+
+ /* Registers for MSI-X */
+ #define BTINTEL_PCIE_CSR_MSIX_BASE (0x2000)
+@@ -55,6 +57,16 @@ enum msix_hw_int_causes {
+ BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0 = BIT(0), /* cause 32 */
+ };
+
++/* PCIe device states
++ * Host-Device interface is active
++ * Host-Device interface is inactive (as reflected by IPC_SLEEP_CONTROL_CSR_AD)
++ * Host-Device interface is inactive (as reflected by IPC_SLEEP_CONTROL_CSR_AD)
++ */
++enum {
++ BTINTEL_PCIE_STATE_D0 = 0,
++ BTINTEL_PCIE_STATE_D3_HOT = 2,
++ BTINTEL_PCIE_STATE_D3_COLD = 3,
++};
+ #define BTINTEL_PCIE_MSIX_NON_AUTO_CLEAR_CAUSE BIT(7)
+
+ /* Minimum and Maximum number of MSI-X Vector
+@@ -67,7 +79,7 @@ enum msix_hw_int_causes {
+ #define BTINTEL_DEFAULT_MAC_ACCESS_TIMEOUT_US 200000
+
+ /* Default interrupt timeout in msec */
+-#define BTINTEL_DEFAULT_INTR_TIMEOUT 3000
++#define BTINTEL_DEFAULT_INTR_TIMEOUT_MS 3000
+
+ /* The number of descriptors in TX/RX queues */
+ #define BTINTEL_DESCS_COUNT 16
+@@ -343,6 +355,7 @@ struct rxq {
+ * @ia: Index Array struct
+ * @txq: TX Queue struct
+ * @rxq: RX Queue struct
++ * @alive_intr_ctxt: Alive interrupt context
+ */
+ struct btintel_pcie_data {
+ struct pci_dev *pdev;
+@@ -389,6 +402,7 @@ struct btintel_pcie_data {
+ struct ia ia;
+ struct txq txq;
+ struct rxq rxq;
++ u32 alive_intr_ctxt;
+ };
+
+ static inline u32 btintel_pcie_rd_reg32(struct btintel_pcie_data *data,
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 9bbf205021634f..480e4adba9faa6 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -1215,7 +1215,6 @@ static int btmtk_usb_isointf_init(struct hci_dev *hdev)
+ struct sk_buff *skb;
+ int err;
+
+- init_usb_anchor(&btmtk_data->isopkt_anchor);
+ spin_lock_init(&btmtk_data->isorxlock);
+
+ __set_mtk_intr_interface(hdev);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e9534fbc92e32f..4ccaddb46ddd81 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2616,6 +2616,7 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ }
+
+ set_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags);
++ init_usb_anchor(&btmtk_data->isopkt_anchor);
+ }
+
+ static void btusb_mtk_release_iso_intf(struct btusb_data *data)
+diff --git a/drivers/bus/mhi/host/trace.h b/drivers/bus/mhi/host/trace.h
+index 95613c8ebe0691..3e0c41777429eb 100644
+--- a/drivers/bus/mhi/host/trace.h
++++ b/drivers/bus/mhi/host/trace.h
+@@ -9,6 +9,7 @@
+ #if !defined(_TRACE_EVENT_MHI_HOST_H) || defined(TRACE_HEADER_MULTI_READ)
+ #define _TRACE_EVENT_MHI_HOST_H
+
++#include <linux/byteorder/generic.h>
+ #include <linux/tracepoint.h>
+ #include <linux/trace_seq.h>
+ #include "../common.h"
+@@ -97,18 +98,18 @@ TRACE_EVENT(mhi_gen_tre,
+ __string(name, mhi_cntrl->mhi_dev->name)
+ __field(int, ch_num)
+ __field(void *, wp)
+- __field(__le64, tre_ptr)
+- __field(__le32, dword0)
+- __field(__le32, dword1)
++ __field(uint64_t, tre_ptr)
++ __field(uint32_t, dword0)
++ __field(uint32_t, dword1)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name);
+ __entry->ch_num = mhi_chan->chan;
+ __entry->wp = mhi_tre;
+- __entry->tre_ptr = mhi_tre->ptr;
+- __entry->dword0 = mhi_tre->dword[0];
+- __entry->dword1 = mhi_tre->dword[1];
++ __entry->tre_ptr = le64_to_cpu(mhi_tre->ptr);
++ __entry->dword0 = le32_to_cpu(mhi_tre->dword[0]);
++ __entry->dword1 = le32_to_cpu(mhi_tre->dword[1]);
+ ),
+
+ TP_printk("%s: Chan: %d TRE: 0x%p TRE buf: 0x%llx DWORD0: 0x%08x DWORD1: 0x%08x\n",
+@@ -176,19 +177,19 @@ DECLARE_EVENT_CLASS(mhi_process_event_ring,
+
+ TP_STRUCT__entry(
+ __string(name, mhi_cntrl->mhi_dev->name)
+- __field(__le32, dword0)
+- __field(__le32, dword1)
++ __field(uint32_t, dword0)
++ __field(uint32_t, dword1)
+ __field(int, state)
+- __field(__le64, ptr)
++ __field(uint64_t, ptr)
+ __field(void *, rp)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name);
+ __entry->rp = rp;
+- __entry->ptr = rp->ptr;
+- __entry->dword0 = rp->dword[0];
+- __entry->dword1 = rp->dword[1];
++ __entry->ptr = le64_to_cpu(rp->ptr);
++ __entry->dword0 = le32_to_cpu(rp->dword[0]);
++ __entry->dword1 = le32_to_cpu(rp->dword[1]);
+ __entry->state = MHI_TRE_GET_EV_STATE(rp);
+ ),
+
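/* The mhi trace.h change above converts the little-endian ring fields to
 * CPU byte order when the event is recorded, so the stored values print
 * correctly on any host. Standalone sketch of the conversion (illustrative
 * helper, not the kernel's le64_to_cpu):
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t le64_to_host(const uint8_t b[8])
{
        uint64_t v = 0;

        for (int i = 7; i >= 0; i--)
                v = (v << 8) | b[i];   /* byte 0 is least significant */
        return v;
}

int main(void)
{
        const uint8_t wire[8] = { 0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0 };

        printf("0x%llx\n", (unsigned long long)le64_to_host(wire)); /* 0x12345678 */
        return 0;
}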
+diff --git a/drivers/clk/.kunitconfig b/drivers/clk/.kunitconfig
+index 54ece920705525..08e26137f3d9c9 100644
+--- a/drivers/clk/.kunitconfig
++++ b/drivers/clk/.kunitconfig
+@@ -1,5 +1,6 @@
+ CONFIG_KUNIT=y
+ CONFIG_OF=y
++CONFIG_OF_OVERLAY=y
+ CONFIG_COMMON_CLK=y
+ CONFIG_CLK_KUNIT_TEST=y
+ CONFIG_CLK_FIXED_RATE_KUNIT_TEST=y
+diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
+index 299bc678ed1b9f..0fe07a594b4e1b 100644
+--- a/drivers/clk/Kconfig
++++ b/drivers/clk/Kconfig
+@@ -517,7 +517,6 @@ config CLK_KUNIT_TEST
+ tristate "Basic Clock Framework Kunit Tests" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+- select OF_OVERLAY if OF
+ select DTC
+ help
+ Kunit tests for the common clock framework.
+@@ -526,7 +525,6 @@ config CLK_FIXED_RATE_KUNIT_TEST
+ tristate "Basic fixed rate clk type KUnit test" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+- select OF_OVERLAY if OF
+ select DTC
+ help
+ KUnit tests for the basic fixed rate clk type.
+diff --git a/drivers/clk/clk-apple-nco.c b/drivers/clk/clk-apple-nco.c
+index 39472a51530a34..457a48d4894128 100644
+--- a/drivers/clk/clk-apple-nco.c
++++ b/drivers/clk/clk-apple-nco.c
+@@ -297,6 +297,9 @@ static int applnco_probe(struct platform_device *pdev)
+ memset(&init, 0, sizeof(init));
+ init.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%s-%d", np->name, i);
++ if (!init.name)
++ return -ENOMEM;
++
+ init.ops = &applnco_ops;
+ init.parent_data = &pdata;
+ init.num_parents = 1;
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index bf4d8ddc93aea1..934e53a96dddac 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -7,6 +7,7 @@
+ */
+
+ #include <linux/platform_device.h>
++#include <linux/clk.h>
+ #include <linux/clk-provider.h>
+ #include <linux/slab.h>
+ #include <linux/io.h>
+@@ -512,6 +513,7 @@ static int axi_clkgen_probe(struct platform_device *pdev)
+ struct clk_init_data init;
+ const char *parent_names[2];
+ const char *clk_name;
++ struct clk *axi_clk;
+ unsigned int i;
+ int ret;
+
+@@ -528,8 +530,24 @@ static int axi_clkgen_probe(struct platform_device *pdev)
+ return PTR_ERR(axi_clkgen->base);
+
+ init.num_parents = of_clk_get_parent_count(pdev->dev.of_node);
+- if (init.num_parents < 1 || init.num_parents > 2)
+- return -EINVAL;
++
++ axi_clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
++ if (!IS_ERR(axi_clk)) {
++ if (init.num_parents < 2 || init.num_parents > 3)
++ return -EINVAL;
++
++ init.num_parents -= 1;
++ } else {
++ /*
++ * Legacy... So that old DTs which do not have clock-names still
++ * work. In this case we don't explicitly enable the AXI bus
++ * clock.
++ */
++ if (PTR_ERR(axi_clk) != -ENOENT)
++ return PTR_ERR(axi_clk);
++ if (init.num_parents < 1 || init.num_parents > 2)
++ return -EINVAL;
++ }
+
+ for (i = 0; i < init.num_parents; i++) {
+ parent_names[i] = of_clk_get_parent_name(pdev->dev.of_node, i);
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index 22fbea61c3dcc0..fdd8ea989ed24a 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -3,8 +3,10 @@
+ #include <linux/delay.h>
+ #include <linux/clk-provider.h>
+ #include <linux/io.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/platform_device.h>
+ #include <linux/property.h>
++#include <linux/regmap.h>
+ #include <linux/reset-controller.h>
+ #include <dt-bindings/clock/en7523-clk.h>
+ #include <dt-bindings/reset/airoha,en7581-reset.h>
+@@ -31,16 +33,11 @@
+ #define REG_RESET_CONTROL_PCIE1 BIT(27)
+ #define REG_RESET_CONTROL_PCIE2 BIT(26)
+ /* EN7581 */
+-#define REG_PCIE0_MEM 0x00
+-#define REG_PCIE0_MEM_MASK 0x04
+-#define REG_PCIE1_MEM 0x08
+-#define REG_PCIE1_MEM_MASK 0x0c
+-#define REG_PCIE2_MEM 0x10
+-#define REG_PCIE2_MEM_MASK 0x14
+ #define REG_NP_SCU_PCIC 0x88
+ #define REG_NP_SCU_SSTR 0x9c
+ #define REG_PCIE_XSI0_SEL_MASK GENMASK(14, 13)
+ #define REG_PCIE_XSI1_SEL_MASK GENMASK(12, 11)
++#define REG_CRYPTO_CLKSRC2 0x20c
+
+ #define REG_RST_CTRL2 0x00
+ #define REG_RST_CTRL1 0x04
+@@ -84,7 +81,8 @@ struct en_clk_soc_data {
+ const u16 *idx_map;
+ u16 idx_map_nr;
+ } reset;
+- int (*hw_init)(struct platform_device *pdev, void __iomem *np_base);
++ int (*hw_init)(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data);
+ };
+
+ static const u32 gsw_base[] = { 400000000, 500000000 };
+@@ -92,6 +90,10 @@ static const u32 emi_base[] = { 333000000, 400000000 };
+ static const u32 bus_base[] = { 500000000, 540000000 };
+ static const u32 slic_base[] = { 100000000, 3125000 };
+ static const u32 npu_base[] = { 333000000, 400000000, 500000000 };
++/* EN7581 */
++static const u32 emi7581_base[] = { 540000000, 480000000, 400000000, 300000000 };
++static const u32 npu7581_base[] = { 800000000, 750000000, 720000000, 600000000 };
++static const u32 crypto_base[] = { 540000000, 480000000 };
+
+ static const struct en_clk_desc en7523_base_clks[] = {
+ {
+@@ -189,6 +191,102 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ }
+ };
+
++static const struct en_clk_desc en7581_base_clks[] = {
++ {
++ .id = EN7523_CLK_GSW,
++ .name = "gsw",
++
++ .base_reg = REG_GSW_CLK_DIV_SEL,
++ .base_bits = 1,
++ .base_shift = 8,
++ .base_values = gsw_base,
++ .n_base_values = ARRAY_SIZE(gsw_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_EMI,
++ .name = "emi",
++
++ .base_reg = REG_EMI_CLK_DIV_SEL,
++ .base_bits = 2,
++ .base_shift = 8,
++ .base_values = emi7581_base,
++ .n_base_values = ARRAY_SIZE(emi7581_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_BUS,
++ .name = "bus",
++
++ .base_reg = REG_BUS_CLK_DIV_SEL,
++ .base_bits = 1,
++ .base_shift = 8,
++ .base_values = bus_base,
++ .n_base_values = ARRAY_SIZE(bus_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_SLIC,
++ .name = "slic",
++
++ .base_reg = REG_SPI_CLK_FREQ_SEL,
++ .base_bits = 1,
++ .base_shift = 0,
++ .base_values = slic_base,
++ .n_base_values = ARRAY_SIZE(slic_base),
++
++ .div_reg = REG_SPI_CLK_DIV_SEL,
++ .div_bits = 5,
++ .div_shift = 24,
++ .div_val0 = 20,
++ .div_step = 2,
++ }, {
++ .id = EN7523_CLK_SPI,
++ .name = "spi",
++
++ .base_reg = REG_SPI_CLK_DIV_SEL,
++
++ .base_value = 400000000,
++
++ .div_bits = 5,
++ .div_shift = 8,
++ .div_val0 = 40,
++ .div_step = 2,
++ }, {
++ .id = EN7523_CLK_NPU,
++ .name = "npu",
++
++ .base_reg = REG_NPU_CLK_DIV_SEL,
++ .base_bits = 2,
++ .base_shift = 8,
++ .base_values = npu7581_base,
++ .n_base_values = ARRAY_SIZE(npu7581_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_CRYPTO,
++ .name = "crypto",
++
++ .base_reg = REG_CRYPTO_CLKSRC2,
++ .base_bits = 1,
++ .base_shift = 0,
++ .base_values = crypto_base,
++ .n_base_values = ARRAY_SIZE(crypto_base),
++ }
++};
++
+ static const u16 en7581_rst_ofs[] = {
+ REG_RST_CTRL2,
+ REG_RST_CTRL1,
+@@ -252,15 +350,11 @@ static const u16 en7581_rst_map[] = {
+ [EN7581_XPON_MAC_RST] = RST_NR_PER_BANK + 31,
+ };
+
+-static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i)
++static u32 en7523_get_base_rate(const struct en_clk_desc *desc, u32 val)
+ {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
+- u32 val;
+-
+ if (!desc->base_bits)
+ return desc->base_value;
+
+- val = readl(base + desc->base_reg);
+ val >>= desc->base_shift;
+ val &= (1 << desc->base_bits) - 1;
+
+@@ -270,16 +364,11 @@ static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i)
+ return desc->base_values[val];
+ }
+
+-static u32 en7523_get_div(void __iomem *base, int i)
++static u32 en7523_get_div(const struct en_clk_desc *desc, u32 val)
+ {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
+- u32 reg, val;
+-
+ if (!desc->div_bits)
+ return 1;
+
+- reg = desc->div_reg ? desc->div_reg : desc->base_reg;
+- val = readl(base + reg);
+ val >>= desc->div_shift;
+ val &= (1 << desc->div_bits) - 1;
+
+@@ -412,44 +501,83 @@ static void en7581_pci_disable(struct clk_hw *hw)
+ usleep_range(1000, 2000);
+ }
+
+-static int en7581_clk_hw_init(struct platform_device *pdev,
+- void __iomem *np_base)
++static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
++ void __iomem *base, void __iomem *np_base)
+ {
+- void __iomem *pb_base;
+- u32 val;
++ struct clk_hw *hw;
++ u32 rate;
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
++ const struct en_clk_desc *desc = &en7523_base_clks[i];
++ u32 reg = desc->div_reg ? desc->div_reg : desc->base_reg;
++ u32 val = readl(base + desc->base_reg);
+
+- pb_base = devm_platform_ioremap_resource(pdev, 3);
+- if (IS_ERR(pb_base))
+- return PTR_ERR(pb_base);
++ rate = en7523_get_base_rate(desc, val);
++ val = readl(base + reg);
++ rate /= en7523_get_div(desc, val);
+
+- val = readl(np_base + REG_NP_SCU_SSTR);
+- val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK);
+- writel(val, np_base + REG_NP_SCU_SSTR);
+- val = readl(np_base + REG_NP_SCU_PCIC);
+- writel(val | 3, np_base + REG_NP_SCU_PCIC);
++ hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate);
++ if (IS_ERR(hw)) {
++ pr_err("Failed to register clk %s: %ld\n",
++ desc->name, PTR_ERR(hw));
++ continue;
++ }
++
++ clk_data->hws[desc->id] = hw;
++ }
++
++ hw = en7523_register_pcie_clk(dev, np_base);
++ clk_data->hws[EN7523_CLK_PCIE] = hw;
++
++ clk_data->num = EN7523_NUM_CLOCKS;
++}
++
++static int en7523_clk_hw_init(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data)
++{
++ void __iomem *base, *np_base;
++
++ base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(base))
++ return PTR_ERR(base);
++
++ np_base = devm_platform_ioremap_resource(pdev, 1);
++ if (IS_ERR(np_base))
++ return PTR_ERR(np_base);
+
+- writel(0x20000000, pb_base + REG_PCIE0_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE0_MEM_MASK);
+- writel(0x24000000, pb_base + REG_PCIE1_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE1_MEM_MASK);
+- writel(0x28000000, pb_base + REG_PCIE2_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE2_MEM_MASK);
++ en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
+
+ return 0;
+ }
+
+-static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
+- void __iomem *base, void __iomem *np_base)
++static void en7581_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
++ struct regmap *map, void __iomem *base)
+ {
+ struct clk_hw *hw;
+ u32 rate;
+ int i;
+
+- for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
++ for (i = 0; i < ARRAY_SIZE(en7581_base_clks); i++) {
++ const struct en_clk_desc *desc = &en7581_base_clks[i];
++ u32 val, reg = desc->div_reg ? desc->div_reg : desc->base_reg;
++ int err;
+
+- rate = en7523_get_base_rate(base, i);
+- rate /= en7523_get_div(base, i);
++ err = regmap_read(map, desc->base_reg, &val);
++ if (err) {
++ pr_err("Failed reading fixed clk rate %s: %d\n",
++ desc->name, err);
++ continue;
++ }
++ rate = en7523_get_base_rate(desc, val);
++
++ err = regmap_read(map, reg, &val);
++ if (err) {
++ pr_err("Failed reading fixed clk div %s: %d\n",
++ desc->name, err);
++ continue;
++ }
++ rate /= en7523_get_div(desc, val);
+
+ hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate);
+ if (IS_ERR(hw)) {
+@@ -461,12 +589,38 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+ clk_data->hws[desc->id] = hw;
+ }
+
+- hw = en7523_register_pcie_clk(dev, np_base);
++ hw = en7523_register_pcie_clk(dev, base);
+ clk_data->hws[EN7523_CLK_PCIE] = hw;
+
+ clk_data->num = EN7523_NUM_CLOCKS;
+ }
+
++static int en7581_clk_hw_init(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data)
++{
++ void __iomem *np_base;
++ struct regmap *map;
++ u32 val;
++
++ map = syscon_regmap_lookup_by_compatible("airoha,en7581-chip-scu");
++ if (IS_ERR(map))
++ return PTR_ERR(map);
++
++ np_base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(np_base))
++ return PTR_ERR(np_base);
++
++ en7581_register_clocks(&pdev->dev, clk_data, map, np_base);
++
++ val = readl(np_base + REG_NP_SCU_SSTR);
++ val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK);
++ writel(val, np_base + REG_NP_SCU_SSTR);
++ val = readl(np_base + REG_NP_SCU_PCIC);
++ writel(val | 3, np_base + REG_NP_SCU_PCIC);
++
++ return 0;
++}
++
+ static int en7523_reset_update(struct reset_controller_dev *rcdev,
+ unsigned long id, bool assert)
+ {
+@@ -533,7 +687,7 @@ static int en7523_reset_register(struct platform_device *pdev,
+ if (!soc_data->reset.idx_map_nr)
+ return 0;
+
+- base = devm_platform_ioremap_resource(pdev, 2);
++ base = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+@@ -561,31 +715,18 @@ static int en7523_clk_probe(struct platform_device *pdev)
+ struct device_node *node = pdev->dev.of_node;
+ const struct en_clk_soc_data *soc_data;
+ struct clk_hw_onecell_data *clk_data;
+- void __iomem *base, *np_base;
+ int r;
+
+- base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(base))
+- return PTR_ERR(base);
+-
+- np_base = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(np_base))
+- return PTR_ERR(np_base);
+-
+- soc_data = device_get_match_data(&pdev->dev);
+- if (soc_data->hw_init) {
+- r = soc_data->hw_init(pdev, np_base);
+- if (r)
+- return r;
+- }
+-
+ clk_data = devm_kzalloc(&pdev->dev,
+ struct_size(clk_data, hws, EN7523_NUM_CLOCKS),
+ GFP_KERNEL);
+ if (!clk_data)
+ return -ENOMEM;
+
+- en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
++ soc_data = device_get_match_data(&pdev->dev);
++ r = soc_data->hw_init(pdev, clk_data);
++ if (r)
++ return r;
+
+ r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ if (r)
+@@ -608,6 +749,7 @@ static const struct en_clk_soc_data en7523_data = {
+ .prepare = en7523_pci_prepare,
+ .unprepare = en7523_pci_unprepare,
+ },
++ .hw_init = en7523_clk_hw_init,
+ };
+
+ static const struct en_clk_soc_data en7581_data = {
+diff --git a/drivers/clk/clk-loongson2.c b/drivers/clk/clk-loongson2.c
+index 820bb1e9e3b79a..7082b4309c6f15 100644
+--- a/drivers/clk/clk-loongson2.c
++++ b/drivers/clk/clk-loongson2.c
+@@ -29,8 +29,10 @@ enum loongson2_clk_type {
+ struct loongson2_clk_provider {
+ void __iomem *base;
+ struct device *dev;
+- struct clk_hw_onecell_data clk_data;
+ spinlock_t clk_lock; /* protect access to DIV registers */
++
++ /* Must be last -- ends in a flexible-array member. */
++ struct clk_hw_onecell_data clk_data;
+ };
+
+ struct loongson2_clk_data {
+@@ -304,7 +306,7 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ return PTR_ERR(clp->base);
+
+ spin_lock_init(&clp->clk_lock);
+- clp->clk_data.num = clks_num + 1;
++ clp->clk_data.num = clks_num;
+ clp->dev = dev;
+
+ for (i = 0; i < clks_num; i++) {
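/* The loongson2 struct reorder above exists because a C flexible array
 * member is only valid as the last member; anything declared after it
 * would be overlaid by the array's storage. Minimal standalone
 * illustration of the allocation pattern:
 */
#include <stdio.h>
#include <stdlib.h>

struct table {
        int lock;        /* fixed members first */
        size_t num;
        int hws[];       /* flexible array member: must be last */
};

int main(void)
{
        size_t n = 4;
        struct table *t = malloc(sizeof(*t) + n * sizeof(t->hws[0]));

        if (!t)
                return 1;
        t->num = n;
        for (size_t i = 0; i < n; i++)
                t->hws[i] = (int)i;
        printf("allocated %zu entries\n", t->num);
        free(t);
        return 0;
}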
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index 591e0364ee5c11..85771afd4698ae 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -254,9 +254,11 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ pll_div = FIELD_PREP(PLL_RDIV_MASK, rate->rdiv) | rate->odiv |
+ FIELD_PREP(PLL_MFI_MASK, rate->mfi);
+ writel_relaxed(pll_div, pll->base + PLL_DIV);
++ readl(pll->base + PLL_DIV);
+ if (pll->flags & CLK_FRACN_GPPLL_FRACN) {
+ writel_relaxed(rate->mfd, pll->base + PLL_DENOMINATOR);
+ writel_relaxed(FIELD_PREP(PLL_MFN_MASK, rate->mfn), pll->base + PLL_NUMERATOR);
++ readl(pll->base + PLL_NUMERATOR);
+ }
+
+ /* Wait for 5us according to fracn mode pll doc */
+@@ -265,6 +267,7 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ /* Enable Powerup */
+ tmp |= POWERUP_MASK;
+ writel_relaxed(tmp, pll->base + PLL_CTRL);
++ readl(pll->base + PLL_CTRL);
+
+ /* Wait Lock */
+ ret = clk_fracn_gppll_wait_lock(pll);
+@@ -302,14 +305,15 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
+
+ val |= POWERUP_MASK;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+-
+- val |= CLKMUX_EN;
+- writel_relaxed(val, pll->base + PLL_CTRL);
++ readl(pll->base + PLL_CTRL);
+
+ ret = clk_fracn_gppll_wait_lock(pll);
+ if (ret)
+ return ret;
+
++ val |= CLKMUX_EN;
++ writel_relaxed(val, pll->base + PLL_CTRL);
++
+ val &= ~CLKMUX_BYPASS;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+
+diff --git a/drivers/clk/imx/clk-imx8-acm.c b/drivers/clk/imx/clk-imx8-acm.c
+index 6c351050b82ae0..c169fe53a35f83 100644
+--- a/drivers/clk/imx/clk-imx8-acm.c
++++ b/drivers/clk/imx/clk-imx8-acm.c
+@@ -294,9 +294,9 @@ static int clk_imx_acm_attach_pm_domains(struct device *dev,
+ DL_FLAG_STATELESS |
+ DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+- if (IS_ERR(dev_pm->pd_dev_link[i])) {
++ if (!dev_pm->pd_dev_link[i]) {
+ dev_pm_domain_detach(dev_pm->pd_dev[i], false);
+- ret = PTR_ERR(dev_pm->pd_dev_link[i]);
++ ret = -EINVAL;
+ goto detach_pm;
+ }
+ }
+diff --git a/drivers/clk/imx/clk-lpcg-scu.c b/drivers/clk/imx/clk-lpcg-scu.c
+index dd5abd09f3e206..620afdf8dc03e9 100644
+--- a/drivers/clk/imx/clk-lpcg-scu.c
++++ b/drivers/clk/imx/clk-lpcg-scu.c
+@@ -6,10 +6,12 @@
+
+ #include <linux/bits.h>
+ #include <linux/clk-provider.h>
++#include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/units.h>
+
+ #include "clk-scu.h"
+
+@@ -41,6 +43,29 @@ struct clk_lpcg_scu {
+
+ #define to_clk_lpcg_scu(_hw) container_of(_hw, struct clk_lpcg_scu, hw)
+
++/* e10858 -LPCG clock gating register synchronization errata */
++static void lpcg_e10858_writel(unsigned long rate, void __iomem *reg, u32 val)
++{
++ writel(val, reg);
++
++ if (rate >= 24 * HZ_PER_MHZ || rate == 0) {
++ /*
++ * The time taken to access the LPCG registers from the AP core
++ * through the interconnect is longer than the minimum delay
++ * of 4 clock cycles required by the errata.
++ * Adding a readl will provide sufficient delay to prevent
++ * back-to-back writes.
++ */
++ readl(reg);
++ } else {
++ /*
++ * For clocks running below 24MHz, wait a minimum of
++ * 4 clock cycles.
++ */
++ ndelay(4 * (DIV_ROUND_UP(1000 * HZ_PER_MHZ, rate)));
++ }
++}
++
+ static int clk_lpcg_scu_enable(struct clk_hw *hw)
+ {
+ struct clk_lpcg_scu *clk = to_clk_lpcg_scu(hw);
+@@ -57,7 +82,8 @@ static int clk_lpcg_scu_enable(struct clk_hw *hw)
+ val |= CLK_GATE_SCU_LPCG_HW_SEL;
+
+ reg |= val << clk->bit_idx;
+- writel(reg, clk->reg);
++
++ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
+
+ spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
+
+@@ -74,7 +100,7 @@ static void clk_lpcg_scu_disable(struct clk_hw *hw)
+
+ reg = readl_relaxed(clk->reg);
+ reg &= ~(CLK_GATE_SCU_LPCG_MASK << clk->bit_idx);
+- writel(reg, clk->reg);
++ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
+
+ spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
+ }
+@@ -145,13 +171,8 @@ static int __maybe_unused imx_clk_lpcg_scu_resume(struct device *dev)
+ {
+ struct clk_lpcg_scu *clk = dev_get_drvdata(dev);
+
+- /*
+- * FIXME: Sometimes writes don't work unless the CPU issues
+- * them twice
+- */
+-
+- writel(clk->state, clk->reg);
+ writel(clk->state, clk->reg);
++ lpcg_e10858_writel(0, clk->reg, clk->state);
+ dev_dbg(dev, "restore lpcg state 0x%x\n", clk->state);
+
+ return 0;
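/* The e10858 workaround above needs at least 4 LPCG clock cycles between
 * writes. At 24 MHz a cycle is ~41.7 ns, so a register read back over the
 * interconnect already covers the 4 cycles; slower clocks need an explicit
 * ndelay. Standalone check of the rounding arithmetic the driver uses:
 */
#include <stdio.h>

#define HZ_PER_MHZ 1000000UL

static unsigned long delay_ns(unsigned long rate)
{
        /* 4 cycles, with ns-per-cycle rounded up: 4 * ceil(1e9 / rate) */
        return 4 * ((1000 * HZ_PER_MHZ + rate - 1) / rate);
}

int main(void)
{
        printf("%lu ns at 24 MHz\n", delay_ns(24 * HZ_PER_MHZ)); /* 168    */
        printf("%lu ns at 32 kHz\n", delay_ns(32000));           /* 125000 */
        return 0;
}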
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index b1dd0c08e091b6..b27186aaf2a156 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -596,7 +596,7 @@ static int __maybe_unused imx_clk_scu_suspend(struct device *dev)
+ clk->rate = clk_scu_recalc_rate(&clk->hw, 0);
+ else
+ clk->rate = clk_hw_get_rate(&clk->hw);
+- clk->is_enabled = clk_hw_is_enabled(&clk->hw);
++ clk->is_enabled = clk_hw_is_prepared(&clk->hw);
+
+ if (clk->parent)
+ dev_dbg(dev, "save parent %s idx %u\n", clk_hw_get_name(clk->parent),
+diff --git a/drivers/clk/mediatek/Kconfig b/drivers/clk/mediatek/Kconfig
+index 70a005e7e1b180..486401e1f2f19c 100644
+--- a/drivers/clk/mediatek/Kconfig
++++ b/drivers/clk/mediatek/Kconfig
+@@ -887,13 +887,6 @@ config COMMON_CLK_MT8195_APUSYS
+ help
+ This driver supports MediaTek MT8195 AI Processor Unit System clocks.
+
+-config COMMON_CLK_MT8195_AUDSYS
+- tristate "Clock driver for MediaTek MT8195 audsys"
+- depends on COMMON_CLK_MT8195
+- default COMMON_CLK_MT8195
+- help
+- This driver supports MediaTek MT8195 audsys clocks.
+-
+ config COMMON_CLK_MT8195_IMP_IIC_WRAP
+ tristate "Clock driver for MediaTek MT8195 imp_iic_wrap"
+ depends on COMMON_CLK_MT8195
+@@ -908,14 +901,6 @@ config COMMON_CLK_MT8195_MFGCFG
+ help
+ This driver supports MediaTek MT8195 mfgcfg clocks.
+
+-config COMMON_CLK_MT8195_MSDC
+- tristate "Clock driver for MediaTek MT8195 msdc"
+- depends on COMMON_CLK_MT8195
+- default COMMON_CLK_MT8195
+- help
+- This driver supports MediaTek MT8195 MMC and SD Controller's
+- msdc and msdc_top clocks.
+-
+ config COMMON_CLK_MT8195_SCP_ADSP
+ tristate "Clock driver for MediaTek MT8195 scp_adsp"
+ depends on COMMON_CLK_MT8195
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index a3e2a09e2105b2..4444dafa4e3dfa 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -1230,11 +1230,11 @@ config SM_VIDEOCC_8350
+ config SM_VIDEOCC_8550
+ tristate "SM8550 Video Clock Controller"
+ depends on ARM64 || COMPILE_TEST
+- select SM_GCC_8550
++ depends on SM_GCC_8550 || SM_GCC_8650
+ select QCOM_GDSC
+ help
+ Support for the video clock controller on Qualcomm Technologies, Inc.
+- SM8550 devices.
++ SM8550 or SM8650 devices.
+ Say Y if you want to support video devices and functionality such as
+ video encode/decode.
+
+diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c
+index 50a443bf79ecd3..76285fbbdeaa2d 100644
+--- a/drivers/clk/ralink/clk-mtmips.c
++++ b/drivers/clk/ralink/clk-mtmips.c
+@@ -263,8 +263,9 @@ static int mtmips_register_pherip_clocks(struct device_node *np,
+ .rate = _rate \
+ }
+
+-static struct mtmips_clk_fixed rt305x_fixed_clocks[] = {
+- CLK_FIXED("xtal", NULL, 40000000)
++static struct mtmips_clk_fixed rt3883_fixed_clocks[] = {
++ CLK_FIXED("xtal", NULL, 40000000),
++ CLK_FIXED("periph", "xtal", 40000000)
+ };
+
+ static struct mtmips_clk_fixed rt3352_fixed_clocks[] = {
+@@ -366,6 +367,12 @@ static inline struct mtmips_clk *to_mtmips_clk(struct clk_hw *hw)
+ return container_of(hw, struct mtmips_clk, hw);
+ }
+
++static unsigned long rt2880_xtal_recalc_rate(struct clk_hw *hw,
++ unsigned long parent_rate)
++{
++ return 40000000;
++}
++
+ static unsigned long rt5350_xtal_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+@@ -677,10 +684,12 @@ static unsigned long mt76x8_cpu_recalc_rate(struct clk_hw *hw,
+ }
+
+ static struct mtmips_clk rt2880_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt2880_cpu_recalc_rate) }
+ };
+
+ static struct mtmips_clk rt305x_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt305x_cpu_recalc_rate) }
+ };
+
+@@ -690,6 +699,7 @@ static struct mtmips_clk rt3352_clks_base[] = {
+ };
+
+ static struct mtmips_clk rt3883_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt3883_cpu_recalc_rate) },
+ { CLK_BASE("bus", "cpu", rt3883_bus_recalc_rate) }
+ };
+@@ -746,8 +756,8 @@ static int mtmips_register_clocks(struct device_node *np,
+ static const struct mtmips_clk_data rt2880_clk_data = {
+ .clk_base = rt2880_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt2880_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = NULL,
++ .num_clk_fixed = 0,
+ .clk_factor = rt2880_factor_clocks,
+ .num_clk_factor = ARRAY_SIZE(rt2880_factor_clocks),
+ .clk_periph = rt2880_pherip_clks,
+@@ -757,8 +767,8 @@ static const struct mtmips_clk_data rt2880_clk_data = {
+ static const struct mtmips_clk_data rt305x_clk_data = {
+ .clk_base = rt305x_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt305x_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = NULL,
++ .num_clk_fixed = 0,
+ .clk_factor = rt305x_factor_clocks,
+ .num_clk_factor = ARRAY_SIZE(rt305x_factor_clocks),
+ .clk_periph = rt305x_pherip_clks,
+@@ -779,8 +789,8 @@ static const struct mtmips_clk_data rt3352_clk_data = {
+ static const struct mtmips_clk_data rt3883_clk_data = {
+ .clk_base = rt3883_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt3883_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = rt3883_fixed_clocks,
++ .num_clk_fixed = ARRAY_SIZE(rt3883_fixed_clocks),
+ .clk_factor = NULL,
+ .num_clk_factor = 0,
+ .clk_periph = rt5350_pherip_clks,
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 88bf39e8c79c83..b43b763dfe186a 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -548,7 +548,7 @@ static unsigned long
+ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
+ unsigned long rate)
+ {
+- unsigned long foutpostdiv_rate;
++ unsigned long foutpostdiv_rate, foutvco_rate;
+
+ params->pl5_intin = rate / MEGA;
+ params->pl5_fracin = div_u64(((u64)rate % MEGA) << 24, MEGA);
+@@ -557,10 +557,11 @@ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
+ params->pl5_postdiv2 = 1;
+ params->pl5_spread = 0x16;
+
+- foutpostdiv_rate =
+- EXTAL_FREQ_IN_MEGA_HZ * MEGA / params->pl5_refdiv *
+- ((((params->pl5_intin << 24) + params->pl5_fracin)) >> 24) /
+- (params->pl5_postdiv1 * params->pl5_postdiv2);
++ foutvco_rate = div_u64(mul_u32_u32(EXTAL_FREQ_IN_MEGA_HZ * MEGA,
++ (params->pl5_intin << 24) + params->pl5_fracin),
++ params->pl5_refdiv) >> 24;
++ foutpostdiv_rate = DIV_ROUND_CLOSEST_ULL(foutvco_rate,
++ params->pl5_postdiv1 * params->pl5_postdiv2);
+
+ return foutpostdiv_rate;
+ }
+diff --git a/drivers/clk/sophgo/clk-sg2042-pll.c b/drivers/clk/sophgo/clk-sg2042-pll.c
+index ff9deeef509b8f..1537f4f05860ea 100644
+--- a/drivers/clk/sophgo/clk-sg2042-pll.c
++++ b/drivers/clk/sophgo/clk-sg2042-pll.c
+@@ -153,7 +153,7 @@ static unsigned long sg2042_pll_recalc_rate(unsigned int reg_value,
+
+ sg2042_pll_ctrl_decode(reg_value, &ctrl_table);
+
+- numerator = parent_rate * ctrl_table.fbdiv;
++ numerator = (u64)parent_rate * ctrl_table.fbdiv;
+ denominator = ctrl_table.refdiv * ctrl_table.postdiv1 * ctrl_table.postdiv2;
+ do_div(numerator, denominator);
+ return numerator;
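
The rzg2l and sg2042 PLL fixes above are the same bug: a 32-bit multiplication overflows before the result is widened, which is why the cast (or mul_u32_u32()/div_u64()) must happen before the multiply. A standalone demonstration, assuming a 24 MHz parent and a Q24 feedback divider for illustration:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t rate = 24000000;	/* 24 MHz parent */
		uint32_t fbdiv_q24 = 1u << 25;	/* 2.0 in Q24 fixed point */

		uint32_t narrow = rate * fbdiv_q24;		/* wraps mod 2^32 */
		uint64_t wide = (uint64_t)rate * fbdiv_q24;	/* exact product */

		printf("narrow>>24 = %u\n", narrow >> 24);	/* prints 0 */
		printf("wide>>24   = %llu\n",			/* prints 48000000 */
		       (unsigned long long)(wide >> 24));
		return 0;
	}
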
+diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+index 9b5cfac2ee70cb..3f095515f54f91 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
++++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+@@ -1371,7 +1371,7 @@ static int sun20i_d1_ccu_probe(struct platform_device *pdev)
+
+ /* Enforce m1 = 0, m0 = 0 for PLL_AUDIO0 */
+ val = readl(reg + SUN20I_D1_PLL_AUDIO0_REG);
+- val &= ~BIT(1) | BIT(0);
++ val &= ~(BIT(1) | BIT(0));
+ writel(val, reg + SUN20I_D1_PLL_AUDIO0_REG);
+
+ /* Force fanout-27M factor N to 0. */
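
The sun20i-d1 fix is a classic operator-precedence bug: `~` binds tighter than `|`, so `~BIT(1) | BIT(0)` builds a mask that only clears bit 1. A tiny standalone demonstration:

	#include <stdint.h>
	#include <stdio.h>

	#define BIT(n) (1u << (n))

	int main(void)
	{
		uint32_t val = 0xffffffff;

		/* Buggy: ~ applies to BIT(1) alone, so only bit 1 is cleared. */
		uint32_t buggy = val & (~BIT(1) | BIT(0));
		/* Fixed: parenthesize first, clearing bits 1 and 0 together. */
		uint32_t fixed = val & ~(BIT(1) | BIT(0));

		printf("buggy=0x%08x fixed=0x%08x\n", buggy, fixed);
		return 0;	/* prints buggy=0xfffffffd fixed=0xfffffffc */
	}
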
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 95dd4660b5b659..d546903dba4f3a 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -400,7 +400,8 @@ config ARM_GT_INITIAL_PRESCALER_VAL
+ This affects CPU_FREQ max delta from the initial frequency.
+
+ config ARM_TIMER_SP804
+- bool "Support for Dual Timer SP804 module" if COMPILE_TEST
++ bool "Support for Dual Timer SP804 module"
++ depends on ARM || ARM64 || COMPILE_TEST
+ depends on GENERIC_SCHED_CLOCK && HAVE_CLK
+ select CLKSRC_MMIO
+ select TIMER_OF if OF
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index c2dcd8d68e4587..d1c144d6f328cf 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -686,9 +686,9 @@ subsys_initcall(dmtimer_percpu_timer_startup);
+
+ static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+ {
+- struct device_node *arm_timer;
++ struct device_node *arm_timer __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+
+- arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+ if (of_device_is_available(arm_timer)) {
+ pr_warn_once("ARM architected timer wrap issue i940 detected\n");
+ return 0;
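
The timer-ti-dm fix above switches to scope-based cleanup so the device_node reference cannot leak on the early return. A minimal sketch of the idiom, assuming only <linux/cleanup.h> and the device_node cleanup class from <linux/of.h>:

	#include <linux/cleanup.h>
	#include <linux/errno.h>
	#include <linux/of.h>

	/* of_node_put() runs automatically when np leaves scope, on every
	 * return path, so no explicit put is needed before the early return. */
	static int example_detect_arch_timer(void)
	{
		struct device_node *np __free(device_node) =
			of_find_compatible_node(NULL, NULL, "arm,armv7-timer");

		if (of_device_is_available(np))
			return 0;

		return -ENODEV;
	}
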
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index 1b481731df964e..b9df9b19d4bd97 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -2407,6 +2407,18 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
+
+ start += PAGE_SIZE;
+ }
++
++#ifdef CONFIG_MMU
++ /*
++ * Leaving behind a partial mapping of a buffer we're about to
++ * drop is unsafe, see remap_pfn_range_notrack().
++ * We need to zap the range here ourselves instead of relying
++ * on the automatic zapping in remap_pfn_range() because we call
++ * remap_pfn_range() in a loop.
++ */
++ if (retval)
++ zap_vma_ptes(vma, vma->vm_start, size);
++#endif
+ }
+
+ if (retval == 0) {
+diff --git a/drivers/counter/stm32-timer-cnt.c b/drivers/counter/stm32-timer-cnt.c
+index 186e73d6ccb455..87b6ec567b5447 100644
+--- a/drivers/counter/stm32-timer-cnt.c
++++ b/drivers/counter/stm32-timer-cnt.c
+@@ -214,11 +214,17 @@ static int stm32_count_enable_write(struct counter_device *counter,
+ {
+ struct stm32_timer_cnt *const priv = counter_priv(counter);
+ u32 cr1;
++ int ret;
+
+ if (enable) {
+ regmap_read(priv->regmap, TIM_CR1, &cr1);
+- if (!(cr1 & TIM_CR1_CEN))
+- clk_enable(priv->clk);
++ if (!(cr1 & TIM_CR1_CEN)) {
++ ret = clk_enable(priv->clk);
++ if (ret) {
++ dev_err(counter->parent, "Cannot enable clock %d\n", ret);
++ return ret;
++ }
++ }
+
+ regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN,
+ TIM_CR1_CEN);
+@@ -694,6 +700,7 @@ static int stm32_timer_cnt_probe_encoder(struct device *dev,
+ }
+
+ ret = of_property_read_u32(tnode, "reg", &idx);
++ of_node_put(tnode);
+ if (ret) {
+ dev_err(dev, "Can't get index (%d)\n", ret);
+ return ret;
+@@ -816,7 +823,11 @@ static int __maybe_unused stm32_timer_cnt_resume(struct device *dev)
+ return ret;
+
+ if (priv->enabled) {
+- clk_enable(priv->clk);
++ ret = clk_enable(priv->clk);
++ if (ret) {
++ dev_err(dev, "Cannot enable clock %d\n", ret);
++ return ret;
++ }
+
+ /* Restore registers that may have been lost */
+ regmap_write(priv->regmap, TIM_SMCR, priv->bak.smcr);
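
The stm32 and ti-ecap counter fixes in this area apply the same rule: clk_enable() can fail, and a resume path must not touch the hardware when it does. The pattern, reduced to a sketch:

	#include <linux/clk.h>
	#include <linux/device.h>

	static int example_resume_clock(struct device *dev, struct clk *clk)
	{
		int ret = clk_enable(clk);

		if (ret) {
			dev_err(dev, "Cannot enable clock %d\n", ret);
			return ret;	/* abort resume; registers may be dead */
		}

		/* ... only now is it safe to restore registers ... */
		return 0;
	}
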
+diff --git a/drivers/counter/ti-ecap-capture.c b/drivers/counter/ti-ecap-capture.c
+index 675447315cafb8..b119aeede693ec 100644
+--- a/drivers/counter/ti-ecap-capture.c
++++ b/drivers/counter/ti-ecap-capture.c
+@@ -574,8 +574,13 @@ static int ecap_cnt_resume(struct device *dev)
+ {
+ struct counter_device *counter_dev = dev_get_drvdata(dev);
+ struct ecap_cnt_dev *ecap_dev = counter_priv(counter_dev);
++ int ret;
+
+- clk_enable(ecap_dev->clk);
++ ret = clk_enable(ecap_dev->clk);
++ if (ret) {
++ dev_err(dev, "Cannot enable clock %d\n", ret);
++ return ret;
++ }
+
+ ecap_cnt_capture_set_evmode(counter_dev, ecap_dev->pm_ctx.ev_mode);
+
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index b63863f77c6778..91d3c3b1c2d3bf 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -665,34 +665,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- struct cppc_perf_ctrls perf_ctrls;
+- u32 highest_perf, nominal_perf, nominal_freq, max_freq;
++ u32 nominal_freq, max_freq;
+ int ret = 0;
+
+- highest_perf = READ_ONCE(cpudata->highest_perf);
+- nominal_perf = READ_ONCE(cpudata->nominal_perf);
+ nominal_freq = READ_ONCE(cpudata->nominal_freq);
+ max_freq = READ_ONCE(cpudata->max_freq);
+
+- if (boot_cpu_has(X86_FEATURE_CPPC)) {
+- u64 value = READ_ONCE(cpudata->cppc_req_cached);
+-
+- value &= ~GENMASK_ULL(7, 0);
+- value |= on ? highest_perf : nominal_perf;
+- WRITE_ONCE(cpudata->cppc_req_cached, value);
+-
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.max_perf = on ? highest_perf : nominal_perf;
+- ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- if (ret) {
+- cpufreq_cpu_release(policy);
+- pr_debug("Failed to set max perf on CPU:%d. ret:%d\n",
+- cpudata->cpu, ret);
+- return ret;
+- }
+- }
+-
+ if (on)
+ policy->cpuinfo.max_freq = max_freq;
+ else if (policy->cpuinfo.max_freq > nominal_freq * 1000)
+@@ -1535,7 +1513,7 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
+ value = READ_ONCE(cpudata->cppc_req_cached);
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+- min_perf = max_perf;
++ min_perf = min(cpudata->nominal_perf, max_perf);
+
+ /* Initial min/max values for CPPC Performance Controls Register */
+ value &= ~AMD_CPPC_MIN_PERF(~0L);
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index 2b8708475ac776..c1cdf0f4d0ddda 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -118,6 +118,9 @@ static void cppc_scale_freq_workfn(struct kthread_work *work)
+
+ perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs,
+ &fb_ctrs);
++ if (!perf)
++ return;
++
+ cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
+
+ perf <<= SCHED_CAPACITY_SHIFT;
+@@ -420,6 +423,9 @@ static int cppc_get_cpu_power(struct device *cpu_dev,
+ struct cppc_cpudata *cpu_data;
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
++ if (!policy)
++ return -EINVAL;
++
+ cpu_data = policy->driver_data;
+ perf_caps = &cpu_data->perf_caps;
+ max_cap = arch_scale_cpu_capacity(cpu_dev->id);
+@@ -487,6 +493,9 @@ static int cppc_get_cpu_cost(struct device *cpu_dev, unsigned long KHz,
+ int step;
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
++ if (!policy)
++ return -EINVAL;
++
+ cpu_data = policy->driver_data;
+ perf_caps = &cpu_data->perf_caps;
+ max_cap = arch_scale_cpu_capacity(cpu_dev->id);
+@@ -724,13 +733,31 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+ delta_delivered = get_delta(fb_ctrs_t1->delivered,
+ fb_ctrs_t0->delivered);
+
+- /* Check to avoid divide-by zero and invalid delivered_perf */
++ /*
++	 * Avoid divide-by-zero and unchanged feedback counters.
++ * Leave it for callers to handle.
++ */
+ if (!delta_reference || !delta_delivered)
+- return cpu_data->perf_ctrls.desired_perf;
++ return 0;
+
+ return (reference_perf * delta_delivered) / delta_reference;
+ }
+
++static int cppc_get_perf_ctrs_sample(int cpu,
++ struct cppc_perf_fb_ctrs *fb_ctrs_t0,
++ struct cppc_perf_fb_ctrs *fb_ctrs_t1)
++{
++ int ret;
++
++ ret = cppc_get_perf_ctrs(cpu, fb_ctrs_t0);
++ if (ret)
++ return ret;
++
++ udelay(2); /* 2usec delay between sampling */
++
++ return cppc_get_perf_ctrs(cpu, fb_ctrs_t1);
++}
++
+ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+ {
+ struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
+@@ -746,18 +773,32 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+
+ cpufreq_cpu_put(policy);
+
+- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
+- if (ret)
+- return 0;
+-
+- udelay(2); /* 2usec delay between sampling */
+-
+- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
+- if (ret)
+- return 0;
++ ret = cppc_get_perf_ctrs_sample(cpu, &fb_ctrs_t0, &fb_ctrs_t1);
++ if (ret) {
++ if (ret == -EFAULT)
++ /* Any of the associated CPPC regs is 0. */
++ goto out_invalid_counters;
++ else
++ return 0;
++ }
+
+ delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
+ &fb_ctrs_t1);
++ if (!delivered_perf)
++ goto out_invalid_counters;
++
++ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
++
++out_invalid_counters:
++ /*
++ * Feedback counters could be unchanged or 0 when a cpu enters a
++ * low-power idle state, e.g. clock-gated or power-gated.
++ * Use desired perf for reflecting frequency. Get the latest register
++ * value first as some platforms may update the actual delivered perf
++ * there; if failed, resort to the cached desired perf.
++ */
++ if (cppc_get_desired_perf(cpu, &delivered_perf))
++ delivered_perf = cpu_data->perf_ctrls.desired_perf;
+
+ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
+ }
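
The cppc_cpufreq change turns a zero counter delta into an explicit "invalid" signal instead of silently returning the cached desired perf. The core arithmetic, as a standalone sketch:

	#include <stdint.h>

	/* delivered_perf = reference_perf * d(delivered) / d(reference).
	 * A zero delta (CPU idle, counters gated) yields 0 so the caller
	 * can fall back to the desired-perf register instead. */
	static uint64_t perf_from_deltas(uint64_t reference_perf,
					 uint64_t delta_reference,
					 uint64_t delta_delivered)
	{
		if (!delta_reference || !delta_delivered)
			return 0;

		return reference_perf * delta_delivered / delta_reference;
	}
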
+diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
+index 6a8e97896d38ca..ed1a6dbad63894 100644
+--- a/drivers/cpufreq/loongson2_cpufreq.c
++++ b/drivers/cpufreq/loongson2_cpufreq.c
+@@ -148,7 +148,9 @@ static int __init cpufreq_init(void)
+
+ ret = cpufreq_register_driver(&loongson2_cpufreq_driver);
+
+- if (!ret && !nowait) {
++ if (ret) {
++ platform_driver_unregister(&platform_driver);
++ } else if (!nowait) {
+ saved_cpu_wait = cpu_wait;
+ cpu_wait = loongson2_cpu_wait;
+ }
+diff --git a/drivers/cpufreq/loongson3_cpufreq.c b/drivers/cpufreq/loongson3_cpufreq.c
+index 6b5e6798d9a283..a923e196ec86e7 100644
+--- a/drivers/cpufreq/loongson3_cpufreq.c
++++ b/drivers/cpufreq/loongson3_cpufreq.c
+@@ -346,8 +346,11 @@ static int loongson3_cpufreq_probe(struct platform_device *pdev)
+ {
+ int i, ret;
+
+- for (i = 0; i < MAX_PACKAGES; i++)
+- devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
++ for (i = 0; i < MAX_PACKAGES; i++) {
++ ret = devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
++ if (ret)
++ return ret;
++ }
+
+ ret = do_service_request(0, 0, CMD_GET_VERSION, 0, 0);
+ if (ret <= 0)
+diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
+index 8925e096d5b9a0..aeb5e63045421b 100644
+--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
++++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
+@@ -62,7 +62,7 @@ mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *uW,
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
+ if (!policy)
+- return 0;
++ return -EINVAL;
+
+ data = policy->driver_data;
+
+diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
+index 1a3ecd44cbaf65..20f6453670aa49 100644
+--- a/drivers/crypto/bcm/cipher.c
++++ b/drivers/crypto/bcm/cipher.c
+@@ -2415,6 +2415,7 @@ static int ahash_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
+
+ static int ahash_hmac_init(struct ahash_request *req)
+ {
++ int ret;
+ struct iproc_reqctx_s *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct iproc_ctx_s *ctx = crypto_ahash_ctx(tfm);
+@@ -2424,7 +2425,9 @@ static int ahash_hmac_init(struct ahash_request *req)
+ flow_log("ahash_hmac_init()\n");
+
+ /* init the context as a hash */
+- ahash_init(req);
++ ret = ahash_init(req);
++ if (ret)
++ return ret;
+
+ if (!spu_no_incr_hash(ctx)) {
+ /* SPU-M can do incr hashing but needs sw for outer HMAC */
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 887a5f2fb9279b..cb001aa1de6618 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -984,7 +984,7 @@ static int caam_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+ return -ENOMEM;
+ }
+
+-static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
++static int caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+ struct rsa_key *raw_key)
+ {
+ struct caam_rsa_key *rsa_key = &ctx->key;
+@@ -994,7 +994,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+
+ rsa_key->p = caam_read_raw_data(raw_key->p, &p_sz);
+ if (!rsa_key->p)
+- return;
++ return -ENOMEM;
+ rsa_key->p_sz = p_sz;
+
+ rsa_key->q = caam_read_raw_data(raw_key->q, &q_sz);
+@@ -1029,7 +1029,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+
+ rsa_key->priv_form = FORM3;
+
+- return;
++ return 0;
+
+ free_dq:
+ kfree_sensitive(rsa_key->dq);
+@@ -1043,6 +1043,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+ kfree_sensitive(rsa_key->q);
+ free_p:
+ kfree_sensitive(rsa_key->p);
++ return -ENOMEM;
+ }
+
+ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+@@ -1088,7 +1089,9 @@ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+ rsa_key->e_sz = raw_key.e_sz;
+ rsa_key->n_sz = raw_key.n_sz;
+
+- caam_rsa_set_priv_key_form(ctx, &raw_key);
++ ret = caam_rsa_set_priv_key_form(ctx, &raw_key);
++ if (ret)
++ goto err;
+
+ return 0;
+
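
The caampkc fix converts a silently failing void helper into one that unwinds and reports -ENOMEM. The goto-ladder idiom it relies on, in generic form (a sketch, not the driver code):

	#include <linux/slab.h>

	/* Each allocation gets a label below it; a failure jumps to the
	 * label that frees exactly the allocations that already succeeded. */
	static int alloc_pair(void **a, void **b, size_t sz)
	{
		*a = kzalloc(sz, GFP_KERNEL);
		if (!*a)
			return -ENOMEM;

		*b = kzalloc(sz, GFP_KERNEL);
		if (!*b)
			goto free_a;

		return 0;

	free_a:
		kfree(*a);
		*a = NULL;
		return -ENOMEM;
	}
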
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index f6111ee9ed342d..8ed2bb01a619fd 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -794,7 +794,7 @@ int caam_qi_init(struct platform_device *caam_pdev)
+
+ caam_debugfs_qi_init(ctrlpriv);
+
+- err = devm_add_action_or_reset(qidev, caam_qi_shutdown, ctrlpriv);
++ err = devm_add_action_or_reset(qidev, caam_qi_shutdown, qidev);
+ if (err)
+ goto fail2;
+
+diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
+index 6872ac3440010f..54de869e5374c2 100644
+--- a/drivers/crypto/cavium/cpt/cptpf_main.c
++++ b/drivers/crypto/cavium/cpt/cptpf_main.c
+@@ -44,7 +44,7 @@ static void cpt_disable_cores(struct cpt_device *cpt, u64 coremask,
+ dev_err(dev, "Cores still busy %llx", coremask);
+ grp = cpt_read_csr64(cpt->reg_base,
+ CPTX_PF_EXEC_BUSY(0));
+- if (timeout--)
++ if (!timeout--)
+ break;
+
+ udelay(CSR_DELAY);
+@@ -302,6 +302,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+
+ ret = do_cpt_init(cpt, mcode);
+ if (ret) {
++ dma_free_coherent(&cpt->pdev->dev, mcode->code_size,
++ mcode->code, mcode->phys_base);
+ dev_err(dev, "do_cpt_init failed with ret: %d\n", ret);
+ goto fw_release;
+ }
+@@ -394,7 +396,7 @@ static void cpt_disable_all_cores(struct cpt_device *cpt)
+ dev_err(dev, "Cores still busy");
+ grp = cpt_read_csr64(cpt->reg_base,
+ CPTX_PF_EXEC_BUSY(0));
+- if (timeout--)
++ if (!timeout--)
+ break;
+
+ udelay(CSR_DELAY);
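
Both cavium hunks fix an inverted countdown: `if (timeout--)` is true on the very first poll, so the loop gave up immediately instead of after the budget. A standalone demonstration of the corrected form:

	#include <stdio.h>

	int main(void)
	{
		int timeout = 3, polls = 0;

		do {
			polls++;
			/* Corrected: break only once the budget hits zero.
			 * The buggy `if (timeout--)` would break on poll 1. */
			if (!timeout--)
				break;
			/* ... poll the hardware, udelay() ... */
		} while (1);

		printf("polled %d times\n", polls);	/* prints 4 */
		return 0;
	}
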
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index 6b536ad2ada52a..34d30b78381343 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -1280,11 +1280,15 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
+
+ static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT);
+- nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
++}
++
++static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB);
+ }
+
+ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1298,6 +1302,27 @@ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
+ qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ }
+
++static enum acc_err_result hpre_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = hpre_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ hpre_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++			/* Disable the same error reporting until the device is recovered. */
++ hpre_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ hpre_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void hpre_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1324,12 +1349,12 @@ static const struct hisi_qm_err_ini hpre_err_ini = {
+ .hw_err_disable = hpre_hw_error_disable,
+ .get_dev_hw_err_status = hpre_get_hw_err_status,
+ .clear_dev_hw_err_status = hpre_clear_hw_err_status,
+- .log_dev_hw_err = hpre_log_hw_error,
+ .open_axi_master_ooo = hpre_open_axi_master_ooo,
+ .open_sva_prefetch = hpre_open_sva_prefetch,
+ .close_sva_prefetch = hpre_close_sva_prefetch,
+ .show_last_dfx_regs = hpre_show_last_dfx_regs,
+ .err_info_init = hpre_err_info_init,
++ .get_err_result = hpre_get_err_result,
+ };
+
+ static int hpre_pf_probe_init(struct hpre *hpre)
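
hpre above, and sec and zip below, all gain the same get_err_result() shape: on a reset-worthy error, mask that error's NFE reporting bit until recovery completes rather than clearing the status, so the same error cannot re-fire during reset. The shared skeleton, with the three per-device hooks as hypothetical stand-ins:

	/* example_* are hypothetical stand-ins for the per-device
	 * read/log/disable/clear helpers this patch adds. */
	u32 example_get_hw_err_status(struct hisi_qm *qm);
	void example_log_hw_error(struct hisi_qm *qm, u32 err);
	void example_disable_error_report(struct hisi_qm *qm, u32 err);
	void example_clear_hw_err_status(struct hisi_qm *qm, u32 err);

	static enum acc_err_result example_get_err_result(struct hisi_qm *qm)
	{
		u32 err_status = example_get_hw_err_status(qm);

		if (!err_status)
			return ACC_ERR_RECOVERED;

		if (err_status & qm->err_info.ecc_2bits_mask)
			qm->err_status.is_dev_ecc_mbit = true;
		example_log_hw_error(qm, err_status);

		if (err_status & qm->err_info.dev_reset_mask) {
			example_disable_error_report(qm, err_status);
			return ACC_ERR_NEED_RESET;
		}

		example_clear_hw_err_status(qm, err_status);
		return ACC_ERR_RECOVERED;
	}
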
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 07983af9e3e229..b18692ee7fd563 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -271,12 +271,6 @@ enum vft_type {
+ SHAPER_VFT,
+ };
+
+-enum acc_err_result {
+- ACC_ERR_NONE,
+- ACC_ERR_NEED_RESET,
+- ACC_ERR_RECOVERED,
+-};
+-
+ enum qm_alg_type {
+ ALG_TYPE_0,
+ ALG_TYPE_1,
+@@ -1425,22 +1419,25 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status)
+
+ static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm)
+ {
+- u32 error_status, tmp;
+-
+- /* read err sts */
+- tmp = readl(qm->io_base + QM_ABNORMAL_INT_STATUS);
+- error_status = qm->error_mask & tmp;
++ u32 error_status;
+
+- if (error_status) {
++ error_status = qm_get_hw_error_status(qm);
++ if (error_status & qm->error_mask) {
+ if (error_status & QM_ECC_MBIT)
+ qm->err_status.is_qm_ecc_mbit = true;
+
+ qm_log_hw_error(qm, error_status);
+- if (error_status & qm->err_info.qm_reset_mask)
++ if (error_status & qm->err_info.qm_reset_mask) {
++			/* Disable the same error reporting until the device is recovered. */
++ writel(qm->err_info.nfe & (~error_status),
++ qm->io_base + QM_RAS_NFE_ENABLE);
+ return ACC_ERR_NEED_RESET;
++ }
+
++		/* Clear the error source if no reset is needed. */
+ writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE);
+ writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE);
++ writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE);
+ }
+
+ return ACC_ERR_RECOVERED;
+@@ -3861,30 +3858,12 @@ EXPORT_SYMBOL_GPL(hisi_qm_sriov_configure);
+
+ static enum acc_err_result qm_dev_err_handle(struct hisi_qm *qm)
+ {
+- u32 err_sts;
+-
+- if (!qm->err_ini->get_dev_hw_err_status) {
+- dev_err(&qm->pdev->dev, "Device doesn't support get hw error status!\n");
++ if (!qm->err_ini->get_err_result) {
++ dev_err(&qm->pdev->dev, "Device doesn't support reset!\n");
+ return ACC_ERR_NONE;
+ }
+
+- /* get device hardware error status */
+- err_sts = qm->err_ini->get_dev_hw_err_status(qm);
+- if (err_sts) {
+- if (err_sts & qm->err_info.ecc_2bits_mask)
+- qm->err_status.is_dev_ecc_mbit = true;
+-
+- if (qm->err_ini->log_dev_hw_err)
+- qm->err_ini->log_dev_hw_err(qm, err_sts);
+-
+- if (err_sts & qm->err_info.dev_reset_mask)
+- return ACC_ERR_NEED_RESET;
+-
+- if (qm->err_ini->clear_dev_hw_err_status)
+- qm->err_ini->clear_dev_hw_err_status(qm, err_sts);
+- }
+-
+- return ACC_ERR_RECOVERED;
++ return qm->err_ini->get_err_result(qm);
+ }
+
+ static enum acc_err_result qm_process_dev_error(struct hisi_qm *qm)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index c35533d8930b21..75c25f0d5f2b82 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -1010,11 +1010,15 @@ static u32 sec_get_hw_err_status(struct hisi_qm *qm)
+
+ static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE);
+- nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
++}
++
++static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG);
+ }
+
+ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1026,6 +1030,27 @@ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
+ writel(val | SEC_AXI_SHUTDOWN_ENABLE, qm->io_base + SEC_CONTROL_REG);
+ }
+
++static enum acc_err_result sec_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = sec_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ sec_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++			/* Disable the same error reporting until the device is recovered. */
++ sec_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ sec_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void sec_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1052,12 +1077,12 @@ static const struct hisi_qm_err_ini sec_err_ini = {
+ .hw_err_disable = sec_hw_error_disable,
+ .get_dev_hw_err_status = sec_get_hw_err_status,
+ .clear_dev_hw_err_status = sec_clear_hw_err_status,
+- .log_dev_hw_err = sec_log_hw_error,
+ .open_axi_master_ooo = sec_open_axi_master_ooo,
+ .open_sva_prefetch = sec_open_sva_prefetch,
+ .close_sva_prefetch = sec_close_sva_prefetch,
+ .show_last_dfx_regs = sec_show_last_dfx_regs,
+ .err_info_init = sec_err_info_init,
++ .get_err_result = sec_get_err_result,
+ };
+
+ static int sec_pf_probe_init(struct sec_dev *sec)
+diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
+index d07e47b48be06a..80c2fcb1d26dcf 100644
+--- a/drivers/crypto/hisilicon/zip/zip_main.c
++++ b/drivers/crypto/hisilicon/zip/zip_main.c
+@@ -1059,11 +1059,15 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
+
+ static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
+- nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
++}
++
++static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+ }
+
+ static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1093,6 +1097,27 @@ static void hisi_zip_close_axi_master_ooo(struct hisi_qm *qm)
+ qm->io_base + HZIP_CORE_INT_SET);
+ }
+
++static enum acc_err_result hisi_zip_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = hisi_zip_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ hisi_zip_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++			/* Disable the same error reporting until the device is recovered. */
++ hisi_zip_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ hisi_zip_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void hisi_zip_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1120,13 +1145,13 @@ static const struct hisi_qm_err_ini hisi_zip_err_ini = {
+ .hw_err_disable = hisi_zip_hw_error_disable,
+ .get_dev_hw_err_status = hisi_zip_get_hw_err_status,
+ .clear_dev_hw_err_status = hisi_zip_clear_hw_err_status,
+- .log_dev_hw_err = hisi_zip_log_hw_error,
+ .open_axi_master_ooo = hisi_zip_open_axi_master_ooo,
+ .close_axi_master_ooo = hisi_zip_close_axi_master_ooo,
+ .open_sva_prefetch = hisi_zip_open_sva_prefetch,
+ .close_sva_prefetch = hisi_zip_close_sva_prefetch,
+ .show_last_dfx_regs = hisi_zip_show_last_dfx_regs,
+ .err_info_init = hisi_zip_err_info_init,
++ .get_err_result = hisi_zip_get_err_result,
+ };
+
+ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index e17577b785c33a..f44c08f5f5ec4a 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -2093,7 +2093,7 @@ static int safexcel_xcbcmac_cra_init(struct crypto_tfm *tfm)
+
+ safexcel_ahash_cra_init(tfm);
+ ctx->aes = kmalloc(sizeof(*ctx->aes), GFP_KERNEL);
+- return PTR_ERR_OR_ZERO(ctx->aes);
++ return ctx->aes == NULL ? -ENOMEM : 0;
+ }
+
+ static void safexcel_xcbcmac_cra_exit(struct crypto_tfm *tfm)
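
The safexcel fix is worth generalizing: kmalloc() reports failure as NULL, not as an ERR_PTR, so PTR_ERR_OR_ZERO() evaluates to 0 ("success") for a failed allocation. A minimal sketch with a hypothetical context struct:

	#include <linux/slab.h>

	struct example_ctx {	/* hypothetical, for illustration only */
		void *aes;
	};

	static int example_cra_init(struct example_ctx *ctx)
	{
		ctx->aes = kmalloc(64, GFP_KERNEL);

		/* PTR_ERR_OR_ZERO(ctx->aes) would return 0 here even on
		 * failure, because NULL is not an ERR_PTR value. */
		return ctx->aes == NULL ? -ENOMEM : 0;
	}
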
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 78f0ea49254dbb..9faef33e54bd32 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -375,7 +375,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ else
+ id = -EINVAL;
+
+- if (id < 0 || id > num_objs)
++ if (id < 0 || id >= num_objs)
+ return NULL;
+
+ return fw_objs[id];
+diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+index 9fd7ec53b9f3d8..bbd92c017c28ed 100644
+--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+@@ -334,7 +334,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ else
+ id = -EINVAL;
+
+- if (id < 0 || id > num_objs)
++ if (id < 0 || id >= num_objs)
+ return NULL;
+
+ return fw_objs[id];
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+index ec7913ab00a2c7..4cb8bd83f57071 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+@@ -281,8 +281,11 @@ int adf_init_aer(void)
+ return -EFAULT;
+
+ device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0);
+- if (!device_sriov_wq)
++ if (!device_sriov_wq) {
++ destroy_workqueue(device_reset_wq);
++ device_reset_wq = NULL;
+ return -EFAULT;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+index c42f5c25aabdfa..4c11ad1ebcf0f8 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+@@ -22,18 +22,13 @@
+ void adf_dbgfs_init(struct adf_accel_dev *accel_dev)
+ {
+ char name[ADF_DEVICE_NAME_LENGTH];
+- void *ret;
+
+ /* Create dev top level debugfs entry */
+ snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
+ accel_dev->hw_device->dev_class->name,
+ pci_name(accel_dev->accel_pci_dev.pci_dev));
+
+- ret = debugfs_create_dir(name, NULL);
+- if (IS_ERR_OR_NULL(ret))
+- return;
+-
+- accel_dev->debugfs_dir = ret;
++ accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
+
+ adf_cfg_dev_dbgfs_add(accel_dev);
+ }
+@@ -59,9 +54,6 @@ EXPORT_SYMBOL_GPL(adf_dbgfs_exit);
+ */
+ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
+ {
+- if (!accel_dev->debugfs_dir)
+- return;
+-
+ if (!accel_dev->is_vf) {
+ adf_fw_counters_dbgfs_add(accel_dev);
+ adf_heartbeat_dbgfs_add(accel_dev);
+@@ -77,9 +69,6 @@ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
+ */
+ void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
+ {
+- if (!accel_dev->debugfs_dir)
+- return;
+-
+ if (!accel_dev->is_vf) {
+ adf_tl_dbgfs_rm(accel_dev);
+ adf_cnv_dbgfs_rm(accel_dev);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+index 65bd26b25abce9..f93d9cca70cee4 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+@@ -90,10 +90,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev)
+
+ hw_data->get_arb_info(&info);
+
+- /* Reset arbiter configuration */
+- for (i = 0; i < ADF_ARB_NUM; i++)
+- WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0);
+-
+ /* Unmap worker threads to service arbiters */
+ for (i = 0; i < hw_data->num_engines; i++)
+ WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0);
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index c82775dbb557a7..77a6301f37f0af 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -225,21 +225,22 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ struct skcipher_request *req, int init)
+ {
+- dma_addr_t key_phys = 0;
+- dma_addr_t src_phys, dst_phys;
++ dma_addr_t key_phys, src_phys, dst_phys;
+ struct dcp *sdcp = global_sdcp;
+ struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+ bool key_referenced = actx->key_referenced;
+ int ret;
+
+- if (!key_referenced) {
++ if (key_referenced)
++ key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key + AES_KEYSIZE_128,
++ AES_KEYSIZE_128, DMA_TO_DEVICE);
++ else
+ key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+ 2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
+- ret = dma_mapping_error(sdcp->dev, key_phys);
+- if (ret)
+- return ret;
+- }
++ ret = dma_mapping_error(sdcp->dev, key_phys);
++ if (ret)
++ return ret;
+
+ src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+ DCP_BUF_SZ, DMA_TO_DEVICE);
+@@ -300,7 +301,10 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ err_dst:
+ dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+ err_src:
+- if (!key_referenced)
++ if (key_referenced)
++ dma_unmap_single(sdcp->dev, key_phys, AES_KEYSIZE_128,
++ DMA_TO_DEVICE);
++ else
+ dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
+ DMA_TO_DEVICE);
+ return ret;
+diff --git a/drivers/dax/pmem/Makefile b/drivers/dax/pmem/Makefile
+deleted file mode 100644
+index 191c31f0d4f008..00000000000000
+--- a/drivers/dax/pmem/Makefile
++++ /dev/null
+@@ -1,7 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
+-
+-dax_pmem-y := pmem.o
+-dax_pmem_core-y := core.o
+-dax_pmem_compat-y := compat.o
+diff --git a/drivers/dax/pmem/pmem.c b/drivers/dax/pmem/pmem.c
+deleted file mode 100644
+index dfe91a2990fec4..00000000000000
+--- a/drivers/dax/pmem/pmem.c
++++ /dev/null
+@@ -1,10 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
+-#include <linux/percpu-refcount.h>
+-#include <linux/memremap.h>
+-#include <linux/module.h>
+-#include <linux/pfn_t.h>
+-#include <linux/nd.h>
+-#include "../bus.h"
+-
+-
+diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
+index b46eb8a552d7be..fee04fdb08220c 100644
+--- a/drivers/dma-buf/Kconfig
++++ b/drivers/dma-buf/Kconfig
+@@ -36,6 +36,7 @@ config UDMABUF
+ depends on DMA_SHARED_BUFFER
+ depends on MEMFD_CREATE || COMPILE_TEST
+ depends on MMU
++ select VMAP_PFN
+ help
+ A driver to let userspace turn memfd regions into dma-bufs.
+ Qemu can use this to create host dmabufs for guest framebuffers.
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 047c3cd2cefff6..a3638ccc15f571 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -74,21 +74,29 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
+ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
+ {
+ struct udmabuf *ubuf = buf->priv;
+- struct page **pages;
++ unsigned long *pfns;
+ void *vaddr;
+ pgoff_t pg;
+
+ dma_resv_assert_held(buf->resv);
+
+- pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+- if (!pages)
++	/*
++	 * HugeTLB Vmemmap Optimization (HVO) may free tail pages, so just
++	 * use the pfn to map each folio into the vmalloc area.
++	 */
++ pfns = kvmalloc_array(ubuf->pagecount, sizeof(*pfns), GFP_KERNEL);
++ if (!pfns)
+ return -ENOMEM;
+
+- for (pg = 0; pg < ubuf->pagecount; pg++)
+- pages[pg] = &ubuf->folios[pg]->page;
++ for (pg = 0; pg < ubuf->pagecount; pg++) {
++ unsigned long pfn = folio_pfn(ubuf->folios[pg]);
+
+- vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+- kfree(pages);
++ pfn += ubuf->offsets[pg] >> PAGE_SHIFT;
++ pfns[pg] = pfn;
++ }
++
++ vaddr = vmap_pfn(pfns, ubuf->pagecount, PAGE_KERNEL);
++ kvfree(pfns);
+ if (!vaddr)
+ return -EINVAL;
+
+@@ -196,8 +204,8 @@ static void release_udmabuf(struct dma_buf *buf)
+ put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
+
+ unpin_all_folios(&ubuf->unpin_list);
+- kfree(ubuf->offsets);
+- kfree(ubuf->folios);
++ kvfree(ubuf->offsets);
++ kvfree(ubuf->folios);
+ kfree(ubuf);
+ }
+
+@@ -322,14 +330,14 @@ static long udmabuf_create(struct miscdevice *device,
+ if (!ubuf->pagecount)
+ goto err;
+
+- ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
+- GFP_KERNEL);
++ ubuf->folios = kvmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
++ GFP_KERNEL);
+ if (!ubuf->folios) {
+ ret = -ENOMEM;
+ goto err;
+ }
+- ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+- GFP_KERNEL);
++ ubuf->offsets = kvcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
++ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+@@ -343,7 +351,7 @@ static long udmabuf_create(struct miscdevice *device,
+ goto err;
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+- folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
++ folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
+ goto err;
+@@ -353,7 +361,7 @@ static long udmabuf_create(struct miscdevice *device,
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret <= 0) {
+- kfree(folios);
++ kvfree(folios);
+ if (!ret)
+ ret = -EINVAL;
+ goto err;
+@@ -382,7 +390,7 @@ static long udmabuf_create(struct miscdevice *device,
+ }
+ }
+
+- kfree(folios);
++ kvfree(folios);
+ fput(memfd);
+ memfd = NULL;
+ }
+@@ -398,8 +406,8 @@ static long udmabuf_create(struct miscdevice *device,
+ if (memfd)
+ fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
+- kfree(ubuf->offsets);
+- kfree(ubuf->folios);
++ kvfree(ubuf->offsets);
++ kvfree(ubuf->folios);
+ kfree(ubuf);
+ return ret;
+ }
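
The udmabuf change pairs two ideas: user-sized arrays move to kvmalloc_array()/kvfree() so large allocations can fall back to vmalloc, and mapping goes through pfns with vmap_pfn() because HVO folios may lack usable tail struct pages. A reduced sketch of the mapping path, under those assumptions:

	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	static void *example_vmap_folios(struct folio **folios, pgoff_t count)
	{
		unsigned long *pfns;
		void *vaddr;
		pgoff_t pg;

		pfns = kvmalloc_array(count, sizeof(*pfns), GFP_KERNEL);
		if (!pfns)
			return NULL;

		for (pg = 0; pg < count; pg++)
			pfns[pg] = folio_pfn(folios[pg]);

		vaddr = vmap_pfn(pfns, count, PAGE_KERNEL);
		kvfree(pfns);	/* handles both kmalloc and vmalloc memory */
		return vaddr;
	}
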
+diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c
+index 5b3164560648ee..0e539c1073510a 100644
+--- a/drivers/edac/bluefield_edac.c
++++ b/drivers/edac/bluefield_edac.c
+@@ -180,7 +180,7 @@ static void bluefield_edac_check(struct mem_ctl_info *mci)
+ static void bluefield_edac_init_dimms(struct mem_ctl_info *mci)
+ {
+ struct bluefield_edac_priv *priv = mci->pvt_info;
+- int mem_ctrl_idx = mci->mc_idx;
++ u64 mem_ctrl_idx = mci->mc_idx;
+ struct dimm_info *dimm;
+ u64 smc_info, smc_arg;
+ int is_empty = 1, i;
+diff --git a/drivers/edac/fsl_ddr_edac.c b/drivers/edac/fsl_ddr_edac.c
+index d148d262d0d4de..339d94b3d04c7d 100644
+--- a/drivers/edac/fsl_ddr_edac.c
++++ b/drivers/edac/fsl_ddr_edac.c
+@@ -328,21 +328,25 @@ static void fsl_mc_check(struct mem_ctl_info *mci)
+ * TODO: Add support for 32-bit wide buses
+ */
+ if ((err_detect & DDR_EDE_SBE) && (bus_width == 64)) {
++ u64 cap = (u64)cap_high << 32 | cap_low;
++ u32 s = syndrome;
++
+ sbe_ecc_decode(cap_high, cap_low, syndrome,
+ &bad_data_bit, &bad_ecc_bit);
+
+- if (bad_data_bit != -1)
+- fsl_mc_printk(mci, KERN_ERR,
+- "Faulty Data bit: %d\n", bad_data_bit);
+- if (bad_ecc_bit != -1)
+- fsl_mc_printk(mci, KERN_ERR,
+- "Faulty ECC bit: %d\n", bad_ecc_bit);
++ if (bad_data_bit >= 0) {
++ fsl_mc_printk(mci, KERN_ERR, "Faulty Data bit: %d\n", bad_data_bit);
++ cap ^= 1ULL << bad_data_bit;
++ }
++
++ if (bad_ecc_bit >= 0) {
++ fsl_mc_printk(mci, KERN_ERR, "Faulty ECC bit: %d\n", bad_ecc_bit);
++ s ^= 1 << bad_ecc_bit;
++ }
+
+ fsl_mc_printk(mci, KERN_ERR,
+ "Expected Data / ECC:\t%#8.8x_%08x / %#2.2x\n",
+- cap_high ^ (1 << (bad_data_bit - 32)),
+- cap_low ^ (1 << bad_data_bit),
+- syndrome ^ (1 << bad_ecc_bit));
++ upper_32_bits(cap), lower_32_bits(cap), s);
+ }
+
+ fsl_mc_printk(mci, KERN_ERR,
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index e2a954de913b42..51556c72a96746 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -1036,6 +1036,7 @@ static int __init i10nm_init(void)
+ return -ENODEV;
+
+ cfg = (struct res_config *)id->driver_data;
++ skx_set_res_cfg(cfg);
+ res_cfg = cfg;
+
+ rc = skx_get_hi_lo(0x09a2, off, &tolm, &tohm);
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index 189a2fc29e74f5..07dacf8c10be3d 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -1245,6 +1245,7 @@ static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev)
+ imc->mci = mci;
+ return 0;
+ fail3:
++ mci->pvt_info = NULL;
+ kfree(mci->ctl_name);
+ fail2:
+ edac_mc_free(mci);
+@@ -1269,6 +1270,7 @@ static void igen6_unregister_mcis(void)
+
+ edac_mc_del_mc(mci->pdev);
+ kfree(mci->ctl_name);
++ mci->pvt_info = NULL;
+ edac_mc_free(mci);
+ iounmap(imc->window);
+ }
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 85713646957b3e..6cf17af7d9112b 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -47,6 +47,7 @@ static skx_show_retry_log_f skx_show_retry_rd_err_log;
+ static u64 skx_tolm, skx_tohm;
+ static LIST_HEAD(dev_edac_list);
+ static bool skx_mem_cfg_2lm;
++static struct res_config *skx_res_cfg;
+
+ int skx_adxl_get(void)
+ {
+@@ -119,7 +120,7 @@ void skx_adxl_put(void)
+ }
+ EXPORT_SYMBOL_GPL(skx_adxl_put);
+
+-static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem)
++static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ {
+ struct skx_dev *d;
+ int i, len = 0;
+@@ -135,8 +136,24 @@ static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_me
+ return false;
+ }
+
++ /*
++ * GNR with a Flat2LM memory configuration may mistakenly classify
++	 * a near-memory error (DDR5) as a far-memory error (CXL), resulting
++ * in the incorrect selection of decoded ADXL components.
++ * To address this, prefetch the decoded far-memory controller ID
++ * and adjust the error source to near-memory if the far-memory
++ * controller ID is invalid.
++ */
++ if (skx_res_cfg && skx_res_cfg->type == GNR && err_src == ERR_SRC_2LM_FM) {
++ res->imc = (int)adxl_values[component_indices[INDEX_MEMCTRL]];
++ if (res->imc == -1) {
++ err_src = ERR_SRC_2LM_NM;
++ edac_dbg(0, "Adjust the error source to near-memory.\n");
++ }
++ }
++
+ res->socket = (int)adxl_values[component_indices[INDEX_SOCKET]];
+- if (error_in_1st_level_mem) {
++ if (err_src == ERR_SRC_2LM_NM) {
+ res->imc = (adxl_nm_bitmap & BIT_NM_MEMCTRL) ?
+ (int)adxl_values[component_indices[INDEX_NM_MEMCTRL]] : -1;
+ res->channel = (adxl_nm_bitmap & BIT_NM_CHANNEL) ?
+@@ -191,6 +208,12 @@ void skx_set_mem_cfg(bool mem_cfg_2lm)
+ }
+ EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
+
++void skx_set_res_cfg(struct res_config *cfg)
++{
++ skx_res_cfg = cfg;
++}
++EXPORT_SYMBOL_GPL(skx_set_res_cfg);
++
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log)
+ {
+ driver_decode = decode;
+@@ -620,31 +643,27 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ optype, skx_msg);
+ }
+
+-static bool skx_error_in_1st_level_mem(const struct mce *m)
++static enum error_source skx_error_source(const struct mce *m)
+ {
+- u32 errcode;
++ u32 errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+
+- if (!skx_mem_cfg_2lm)
+- return false;
+-
+- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+-
+- return errcode == MCACOD_EXT_MEM_ERR;
+-}
++ if (errcode != MCACOD_MEM_CTL_ERR && errcode != MCACOD_EXT_MEM_ERR)
++ return ERR_SRC_NOT_MEMORY;
+
+-static bool skx_error_in_mem(const struct mce *m)
+-{
+- u32 errcode;
++ if (!skx_mem_cfg_2lm)
++ return ERR_SRC_1LM;
+
+- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
++ if (errcode == MCACOD_EXT_MEM_ERR)
++ return ERR_SRC_2LM_NM;
+
+- return (errcode == MCACOD_MEM_CTL_ERR || errcode == MCACOD_EXT_MEM_ERR);
++ return ERR_SRC_2LM_FM;
+ }
+
+ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ void *data)
+ {
+ struct mce *mce = (struct mce *)data;
++ enum error_source err_src;
+ struct decoded_addr res;
+ struct mem_ctl_info *mci;
+ char *type;
+@@ -652,8 +671,10 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ if (mce->kflags & MCE_HANDLED_CEC)
+ return NOTIFY_DONE;
+
++ err_src = skx_error_source(mce);
++
+ /* Ignore unless this is memory related with an address */
+- if (!skx_error_in_mem(mce) || !(mce->status & MCI_STATUS_ADDRV))
++ if (err_src == ERR_SRC_NOT_MEMORY || !(mce->status & MCI_STATUS_ADDRV))
+ return NOTIFY_DONE;
+
+ memset(&res, 0, sizeof(res));
+@@ -667,7 +688,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ /* Try driver decoder first */
+ if (!(driver_decode && driver_decode(&res))) {
+ /* Then try firmware decoder (ACPI DSM methods) */
+- if (!(adxl_component_count && skx_adxl_decode(&res, skx_error_in_1st_level_mem(mce))))
++ if (!(adxl_component_count && skx_adxl_decode(&res, err_src)))
+ return NOTIFY_DONE;
+ }
+
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index f945c1bf5ca465..54bba8a62f727c 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -146,6 +146,13 @@ enum {
+ INDEX_MAX
+ };
+
++enum error_source {
++ ERR_SRC_1LM,
++ ERR_SRC_2LM_NM,
++ ERR_SRC_2LM_FM,
++ ERR_SRC_NOT_MEMORY,
++};
++
+ #define BIT_NM_MEMCTRL BIT_ULL(INDEX_NM_MEMCTRL)
+ #define BIT_NM_CHANNEL BIT_ULL(INDEX_NM_CHANNEL)
+ #define BIT_NM_DIMM BIT_ULL(INDEX_NM_DIMM)
+@@ -234,6 +241,7 @@ int skx_adxl_get(void);
+ void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
+ void skx_set_mem_cfg(bool mem_cfg_2lm);
++void skx_set_res_cfg(struct res_config *cfg);
+
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
+ int skx_get_node_id(struct skx_dev *d, u8 *id);
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index 94a6b4e667de14..f4d47577f83ee7 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -630,6 +630,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
+ if (ret)
+ return ERR_PTR(ret);
+
++ if (!buf.opp_count)
++ return ERR_PTR(-ENOENT);
++
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return ERR_PTR(-ENOMEM);
+diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
+index 958a680e0660d4..2a1b43f9e0fa2b 100644
+--- a/drivers/firmware/efi/libstub/efi-stub.c
++++ b/drivers/firmware/efi/libstub/efi-stub.c
+@@ -129,7 +129,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
+
+ if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
+ IS_ENABLED(CONFIG_CMDLINE_FORCE) ||
+- cmdline_size == 0) {
++ cmdline[0] == 0) {
+ status = efi_parse_options(CONFIG_CMDLINE);
+ if (status != EFI_SUCCESS) {
+ efi_err("Failed to parse options\n");
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index e8d69bd548f3fe..9c3613e6af158f 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -40,7 +40,8 @@ int __init efi_tpm_eventlog_init(void)
+ {
+ struct linux_efi_tpm_eventlog *log_tbl;
+ struct efi_tcg2_final_events_table *final_tbl;
+- int tbl_size;
++ unsigned int tbl_size;
++ int final_tbl_size;
+ int ret = 0;
+
+ if (efi.tpm_log == EFI_INVALID_TABLE_ADDR) {
+@@ -80,26 +81,26 @@ int __init efi_tpm_eventlog_init(void)
+ goto out;
+ }
+
+- tbl_size = 0;
++ final_tbl_size = 0;
+ if (final_tbl->nr_events != 0) {
+ void *events = (void *)efi.tpm_final_log
+ + sizeof(final_tbl->version)
+ + sizeof(final_tbl->nr_events);
+
+- tbl_size = tpm2_calc_event_log_size(events,
+- final_tbl->nr_events,
+- log_tbl->log);
++ final_tbl_size = tpm2_calc_event_log_size(events,
++ final_tbl->nr_events,
++ log_tbl->log);
+ }
+
+- if (tbl_size < 0) {
++ if (final_tbl_size < 0) {
+ pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
+ ret = -EINVAL;
+ goto out_calc;
+ }
+
+ memblock_reserve(efi.tpm_final_log,
+- tbl_size + sizeof(*final_tbl));
+- efi_tpm_final_log_size = tbl_size;
++ final_tbl_size + sizeof(*final_tbl));
++ efi_tpm_final_log_size = final_tbl_size;
+
+ out_calc:
+ early_memunmap(final_tbl, sizeof(*final_tbl));
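
The efi/tpm fix splits the signed and unsigned roles of tbl_size because tpm2_calc_event_log_size() can return a negative error code, and a negative value stored in an unsigned variable can never test as `< 0`. A standalone demonstration of the pitfall:

	#include <stdio.h>

	/* Stand-in for tpm2_calc_event_log_size(), which returns a size
	 * or a negative error code. */
	static int calc_size(void) { return -22; }

	int main(void)
	{
		unsigned int usize = calc_size();	/* -22 wraps to a huge value */
		int ssize = calc_size();

		/* An unsigned variable silently defeats the `< 0` error
		 * check; the signed variable keeps the error detectable. */
		printf("usize=%u ssize=%d error=%s\n",
		       usize, ssize, ssize < 0 ? "yes" : "no");
		return 0;
	}
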
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index d304913314e494..24e666d5c3d1a2 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -918,7 +918,8 @@ static __init int gsmi_init(void)
+ gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info);
+ if (IS_ERR(gsmi_dev.pdev)) {
+ printk(KERN_ERR "gsmi: unable to register platform device\n");
+- return PTR_ERR(gsmi_dev.pdev);
++ ret = PTR_ERR(gsmi_dev.pdev);
++ goto out_unregister;
+ }
+
+ /* SMI access needs to be serialized */
+@@ -1056,10 +1057,11 @@ static __init int gsmi_init(void)
+ gsmi_buf_free(gsmi_dev.name_buf);
+ kmem_cache_destroy(gsmi_dev.mem_pool);
+ platform_device_unregister(gsmi_dev.pdev);
+- pr_info("gsmi: failed to load: %d\n", ret);
++out_unregister:
+ #ifdef CONFIG_PM
+ platform_driver_unregister(&gsmi_driver_info);
+ #endif
++ pr_info("gsmi: failed to load: %d\n", ret);
+ return ret;
+ }
+
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index 5170fe7599cdf8..d5909a4f0433c1 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -99,11 +99,13 @@ static void exar_set_value(struct gpio_chip *chip, unsigned int offset,
+ struct exar_gpio_chip *exar_gpio = gpiochip_get_data(chip);
+ unsigned int addr = exar_offset_to_lvl_addr(exar_gpio, offset);
+ unsigned int bit = exar_offset_to_bit(exar_gpio, offset);
++ unsigned int bit_value = value ? BIT(bit) : 0;
+
+- if (value)
+- regmap_set_bits(exar_gpio->regmap, addr, BIT(bit));
+- else
+- regmap_clear_bits(exar_gpio->regmap, addr, BIT(bit));
++ /*
++	 * regmap_write_bits() forces the value to be written even when an
++	 * external pull up/down might otherwise indicate it was already set.
++ */
++ regmap_write_bits(exar_gpio->regmap, addr, BIT(bit), bit_value);
+ }
+
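
The gpio-exar fix hinges on a subtle regmap distinction: regmap_update_bits() reads the register first and skips the write when nothing appears to change, which misfires when an external pull forces the read-back value. regmap_write_bits() always performs the write. Reduced to a sketch:

	#include <linux/regmap.h>
	#include <linux/bits.h>

	/* Always writes the register, even if the read-back already shows
	 * the requested level (e.g. because of an external pull-up). */
	static void example_set_level(struct regmap *map, unsigned int addr,
				      unsigned int bit, int value)
	{
		regmap_write_bits(map, addr, BIT(bit), value ? BIT(bit) : 0);
	}
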
+ static int exar_direction_output(struct gpio_chip *chip, unsigned int offset,
+diff --git a/drivers/gpio/gpio-zevio.c b/drivers/gpio/gpio-zevio.c
+index 2de61337ad3b54..d7230fd83f5d68 100644
+--- a/drivers/gpio/gpio-zevio.c
++++ b/drivers/gpio/gpio-zevio.c
+@@ -11,6 +11,7 @@
+ #include <linux/io.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
++#include <linux/property.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+
+@@ -169,6 +170,7 @@ static const struct gpio_chip zevio_gpio_chip = {
+ /* Initialization */
+ static int zevio_gpio_probe(struct platform_device *pdev)
+ {
++ struct device *dev = &pdev->dev;
+ struct zevio_gpio *controller;
+ int status, i;
+
+@@ -180,6 +182,10 @@ static int zevio_gpio_probe(struct platform_device *pdev)
+ controller->chip = zevio_gpio_chip;
+ controller->chip.parent = &pdev->dev;
+
++ controller->chip.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", dev_fwnode(dev));
++ if (!controller->chip.label)
++ return -ENOMEM;
++
+ controller->regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(controller->regs))
+ return dev_err_probe(&pdev->dev, PTR_ERR(controller->regs),
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 1cb5a4f1929335..cf5bc77e2362c4 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -152,6 +152,7 @@ config DRM_PANIC_SCREEN
+ config DRM_PANIC_SCREEN_QR_CODE
+ bool "Add a panic screen with a QR code"
+ depends on DRM_PANIC && RUST
++ select ZLIB_DEFLATE
+ help
+ This option adds a QR code generator, and a panic screen with a QR
+ code. The QR code will contain the last lines of kmsg and other debug
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+index 2ca12717313573..9d6345146495fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+@@ -158,7 +158,7 @@ static int aca_smu_get_valid_aca_banks(struct amdgpu_device *adev, enum aca_smu_
+ return -EINVAL;
+ }
+
+- if (start + count >= max_count)
++ if (start + count > max_count)
+ return -EINVAL;
+
+ count = min_t(int, count, max_count);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 4f08b153cb66d8..e41318bfbf4575 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -834,6 +834,9 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
++ if (!kiq_ring->sched.ready || adev->job_hang)
++ return 0;
++
+ ring_funcs = kzalloc(sizeof(*ring_funcs), GFP_KERNEL);
+ if (!ring_funcs)
+ return -ENOMEM;
+@@ -858,8 +861,14 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
+
+ kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES, 0, 0);
+
+- if (kiq_ring->sched.ready && !adev->job_hang)
+- r = amdgpu_ring_test_helper(kiq_ring);
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++	 * The ring test does a basic scratch register change check. Run it
++	 * to ensure the unmap queues packet submitted above was processed
++	 * successfully before returning.
++ */
++ r = amdgpu_ring_test_helper(kiq_ring);
+
+ spin_unlock(&kiq->ring_lock);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 4bd61c169ca8d4..ca8091fd3a24f4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1757,11 +1757,13 @@ int amdgpu_discovery_get_nps_info(struct amdgpu_device *adev,
+
+ switch (le16_to_cpu(nps_info->v1.header.version_major)) {
+ case 1:
++ mem_ranges = kvcalloc(nps_info->v1.count,
++ sizeof(*mem_ranges),
++ GFP_KERNEL);
++ if (!mem_ranges)
++ return -ENOMEM;
+ *nps_type = nps_info->v1.nps_type;
+ *range_cnt = nps_info->v1.count;
+- mem_ranges = kvzalloc(
+- *range_cnt * sizeof(struct amdgpu_gmc_memrange),
+- GFP_KERNEL);
+ for (i = 0; i < *range_cnt; i++) {
+ mem_ranges[i].base_address =
+ nps_info->v1.instance_info[i].base_address;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index f1ffab5a1eaed9..156abd2ba5a6c6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -525,6 +525,17 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
++ if (!kiq_ring->sched.ready || adev->job_hang)
++ return 0;
++	/*
++	 * This is a workaround: only skip the kiq_ring test
++	 * during RAS recovery in the suspend stage for gfx9.4.3.
++	 */
++ if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
++ amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) &&
++ amdgpu_ras_in_recovery(adev))
++ return 0;
++
+ spin_lock(&kiq->ring_lock);
+ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+ adev->gfx.num_compute_rings)) {
+@@ -538,20 +549,15 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.compute_ring[j],
+ RESET_QUEUES, 0, 0);
+ }
+-
+- /**
+- * This is workaround: only skip kiq_ring test
+- * during ras recovery in suspend stage for gfx9.4.3
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the unmap queues packets submitted above were
++ * processed successfully before returning.
+ */
+- if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
+- amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) &&
+- amdgpu_ras_in_recovery(adev)) {
+- spin_unlock(&kiq->ring_lock);
+- return 0;
+- }
++ r = amdgpu_ring_test_helper(kiq_ring);
+
+- if (kiq_ring->sched.ready && !adev->job_hang)
+- r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+
+ return r;
+@@ -579,8 +585,11 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
+- spin_lock(&kiq->ring_lock);
++ if (!adev->gfx.kiq[0].ring.sched.ready || adev->job_hang)
++ return 0;
++
+ if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) {
++ spin_lock(&kiq->ring_lock);
+ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+ adev->gfx.num_gfx_rings)) {
+ spin_unlock(&kiq->ring_lock);
+@@ -593,11 +602,17 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.gfx_ring[j],
+ PREEMPT_QUEUES, 0, 0);
+ }
+- }
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
+
+- if (adev->gfx.kiq[0].ring.sched.ready && !adev->job_hang)
++ /*
++ * Ring test will do a basic scratch register change check.
++ * Just run this to ensure that the unmap queues packets submitted
++ * above were processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+- spin_unlock(&kiq->ring_lock);
++ spin_unlock(&kiq->ring_lock);
++ }
+
+ return r;
+ }
+@@ -702,7 +717,13 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev, int xcc_id)
+ kiq->pmf->kiq_map_queues(kiq_ring,
+ &adev->gfx.compute_ring[j]);
+ }
+-
++ /* Submit map queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the map queues packets submitted above were
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+ if (r)
+@@ -753,7 +774,13 @@ int amdgpu_gfx_enable_kgq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.gfx_ring[j]);
+ }
+ }
+-
++ /* Submit map queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the map queues packets submitted above were
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+ if (r)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index bc8295812cc842..9d741695ca07d6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -4823,6 +4823,13 @@ static int gfx_v8_0_kcq_disable(struct amdgpu_device *adev)
+ amdgpu_ring_write(kiq_ring, 0);
+ amdgpu_ring_write(kiq_ring, 0);
+ }
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the unmap queues packets submitted above were
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ if (r)
+ DRM_ERROR("KCQ disable failed\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 23f0573ae47b33..785a343a95f0ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2418,6 +2418,8 @@ static int gfx_v9_0_sw_fini(void *handle)
+ amdgpu_gfx_kiq_free_ring(&adev->gfx.kiq[0].ring);
+ amdgpu_gfx_kiq_fini(adev, 0);
+
++ amdgpu_gfx_cleaner_shader_sw_fini(adev);
++
+ gfx_v9_0_mec_fini(adev);
+ amdgpu_bo_free_kernel(&adev->gfx.rlc.clear_state_obj,
+ &adev->gfx.rlc.clear_state_gpu_addr,
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index 86958cb2c2ab2b..aa5815bd633eba 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -674,11 +674,12 @@ void jpeg_v4_0_3_dec_ring_insert_start(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, 0x62a04); /* PCTL0_MMHUB_DEEPSLEEP_IB */
+- }
+
+- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+- 0, 0, PACKETJ_TYPE0));
+- amdgpu_ring_write(ring, 0x80004000);
++ amdgpu_ring_write(ring,
++ PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0,
++ 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, 0x80004000);
++ }
+ }
+
+ /**
+@@ -694,11 +695,12 @@ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, 0x62a04);
+- }
+
+- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+- 0, 0, PACKETJ_TYPE0));
+- amdgpu_ring_write(ring, 0x00004000);
++ amdgpu_ring_write(ring,
++ PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0,
++ 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, 0x00004000);
++ }
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index d4aa843aacfdd9..ff34bb1ac9db79 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -271,11 +271,9 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+ struct kfd_process *proc = NULL;
+ struct kfd_process_device *pdd = NULL;
+ int i;
+- struct kfd_cu_occupancy cu_occupancy[AMDGPU_MAX_QUEUES];
++ struct kfd_cu_occupancy *cu_occupancy;
+ u32 queue_format;
+
+- memset(cu_occupancy, 0x0, sizeof(cu_occupancy));
+-
+ pdd = container_of(attr, struct kfd_process_device, attr_cu_occupancy);
+ dev = pdd->dev;
+ if (dev->kfd2kgd->get_cu_occupancy == NULL)
+@@ -293,6 +291,10 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+ wave_cnt = 0;
+ max_waves_per_cu = 0;
+
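++ /* An AMDGPU_MAX_QUEUES-sized array is too large for the stack; allocate it dynamically. */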
++ cu_occupancy = kcalloc(AMDGPU_MAX_QUEUES, sizeof(*cu_occupancy), GFP_KERNEL);
++ if (!cu_occupancy)
++ return -ENOMEM;
++
+ /*
+ * For GFX 9.4.3, fetch the CU occupancy from the first XCC in the partition.
+ * For AQL queues, because of cooperative dispatch we multiply the wave count
+@@ -318,6 +320,7 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+
+ /* Translate wave count to number of compute units */
+ cu_cnt = (wave_cnt + (max_waves_per_cu - 1)) / max_waves_per_cu;
++ kfree(cu_occupancy);
+ return snprintf(buffer, PAGE_SIZE, "%d\n", cu_cnt);
+ }
+
+@@ -338,8 +341,8 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ attr_sdma);
+ struct kfd_sdma_activity_handler_workarea sdma_activity_work_handler;
+
+- INIT_WORK(&sdma_activity_work_handler.sdma_activity_work,
+- kfd_sdma_activity_worker);
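++ /* The work item lives on the stack; the _ONSTACK variants keep object debugging accurate. */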
++ INIT_WORK_ONSTACK(&sdma_activity_work_handler.sdma_activity_work,
++ kfd_sdma_activity_worker);
+
+ sdma_activity_work_handler.pdd = pdd;
+ sdma_activity_work_handler.sdma_activity_counter = 0;
+@@ -347,6 +350,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ schedule_work(&sdma_activity_work_handler.sdma_activity_work);
+
+ flush_work(&sdma_activity_work_handler.sdma_activity_work);
++ destroy_work_on_stack(&sdma_activity_work_handler.sdma_activity_work);
+
+ return snprintf(buffer, PAGE_SIZE, "%llu\n",
+ (sdma_activity_work_handler.sdma_activity_counter)/
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 8d97f17ffe662a..24fbde7dd1c425 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1696,6 +1696,26 @@ dm_allocate_gpu_mem(
+ return da->cpu_ptr;
+ }
+
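++/* Counterpart to dm_allocate_gpu_mem(): find and release a tracked dal_allocation. */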
++void
++dm_free_gpu_mem(
++ struct amdgpu_device *adev,
++ enum dc_gpu_mem_alloc_type type,
++ void *pvMem)
++{
++ struct dal_allocation *da;
++
++ /* walk the da list in DM */
++ list_for_each_entry(da, &adev->dm.da_list, list) {
++ if (pvMem == da->cpu_ptr) {
++ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
++ list_del(&da->list);
++ kfree(da);
++ break;
++ }
++ }
++
++}
++
+ static enum dmub_status
+ dm_dmub_send_vbios_gpint_command(struct amdgpu_device *adev,
+ enum dmub_gpint_command command_code,
+@@ -1762,16 +1782,20 @@ static struct dml2_soc_bb *dm_dmub_get_vbios_bounding_box(struct amdgpu_device *
+ /* Send the chunk */
+ ret = dm_dmub_send_vbios_gpint_command(adev, send_addrs[i], chunk, 30000);
+ if (ret != DMUB_STATUS_OK)
+- /* No need to free bb here since it shall be done in dm_sw_fini() */
+- return NULL;
++ goto free_bb;
+ }
+
+ /* Now ask DMUB to copy the bb */
+ ret = dm_dmub_send_vbios_gpint_command(adev, DMUB_GPINT__BB_COPY, 1, 200000);
+ if (ret != DMUB_STATUS_OK)
+- return NULL;
++ goto free_bb;
+
+ return bb;
++
++free_bb:
++ dm_free_gpu_mem(adev, DC_MEM_ALLOC_TYPE_GART, (void *) bb);
++ return NULL;
++
+ }
+
+ static enum dmub_ips_disable_type dm_get_default_ips_mode(
+@@ -2541,11 +2565,11 @@ static int dm_sw_fini(void *handle)
+ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+ list_del(&da->list);
+ kfree(da);
++ adev->dm.bb_from_dmub = NULL;
+ break;
+ }
+ }
+
+- adev->dm.bb_from_dmub = NULL;
+
+ kfree(adev->dm.dmub_fb_info);
+ adev->dm.dmub_fb_info = NULL;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 90dfffec33cf49..a0bc2c0ac04d96 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -1004,6 +1004,9 @@ void *dm_allocate_gpu_mem(struct amdgpu_device *adev,
+ enum dc_gpu_mem_alloc_type type,
+ size_t size,
+ long long *addr);
++void dm_free_gpu_mem(struct amdgpu_device *adev,
++ enum dc_gpu_mem_alloc_type type,
++ void *addr);
+
+ bool amdgpu_dm_is_headless(struct amdgpu_device *adev);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 288be19db7c1b8..9be87b53251739 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -35,8 +35,8 @@
+ #include "amdgpu_dm_trace.h"
+ #include "amdgpu_dm_debugfs.h"
+
+-#define HPD_DETECTION_PERIOD_uS 5000000
+-#define HPD_DETECTION_TIME_uS 1000
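++/* Sense HPD for HPD_DETECTION_TIME_uS out of every HPD_DETECTION_PERIOD_uS. */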
++#define HPD_DETECTION_PERIOD_uS 2000000
++#define HPD_DETECTION_TIME_uS 100000
+
+ void amdgpu_dm_crtc_handle_vblank(struct amdgpu_crtc *acrtc)
+ {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index eea317dcbe8c34..9752548cc5b21d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -1055,17 +1055,8 @@ void dm_helpers_free_gpu_mem(
+ void *pvMem)
+ {
+ struct amdgpu_device *adev = ctx->driver_context;
+- struct dal_allocation *da;
+-
+- /* walk the da list in DM */
+- list_for_each_entry(da, &adev->dm.da_list, list) {
+- if (pvMem == da->cpu_ptr) {
+- amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+- list_del(&da->list);
+- kfree(da);
+- break;
+- }
+- }
++
++ dm_free_gpu_mem(adev, type, pvMem);
+ }
+
+ bool dm_helpers_dmub_outbox_interrupt_control(struct dc_context *ctx, bool enable)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index a08e8a0b696c60..32b025c92c63cf 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1120,6 +1120,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ int i, k, ret;
+ bool debugfs_overwrite = false;
+ uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link);
++ struct drm_connector_state *new_conn_state;
+
+ memset(params, 0, sizeof(params));
+
+@@ -1127,7 +1128,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ return PTR_ERR(mst_state);
+
+ /* Set up params */
+- DRM_DEBUG_DRIVER("%s: MST_DSC Set up params for %d streams\n", __func__, dc_state->stream_count);
++ DRM_DEBUG_DRIVER("%s: MST_DSC Try to set up params from %d streams\n", __func__, dc_state->stream_count);
+ for (i = 0; i < dc_state->stream_count; i++) {
+ struct dc_dsc_policy dsc_policy = {0};
+
+@@ -1143,6 +1144,14 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ if (!aconnector->mst_output_port)
+ continue;
+
++ new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base);
++
++ if (!new_conn_state) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC Skip the stream 0x%p with invalid new_conn_state\n",
++ __func__, __LINE__, stream);
++ continue;
++ }
++
+ stream->timing.flags.DSC = 0;
+
+ params[count].timing = &stream->timing;
+@@ -1175,6 +1184,8 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ count++;
+ }
+
++ DRM_DEBUG_DRIVER("%s: MST_DSC Params set up for %d streams\n", __func__, count);
++
+ if (count == 0) {
+ ASSERT(0);
+ return 0;
+@@ -1302,7 +1313,7 @@ static bool is_dsc_need_re_compute(
+ continue;
+
+ aconnector = (struct amdgpu_dm_connector *) stream->dm_stream_context;
+- if (!aconnector || !aconnector->dsc_aux)
++ if (!aconnector)
+ continue;
+
+ stream_on_link[new_stream_on_link_num] = aconnector;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 7ee2be8f82c467..bb766c2a74176a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -881,6 +881,9 @@ void hwss_setup_dpp(union block_sequence_params *params)
+ struct dpp *dpp = pipe_ctx->plane_res.dpp;
+ struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+
++ if (!plane_state)
++ return;
++
+ if (dpp && dpp->funcs->dpp_setup) {
+ // program the input csc
+ dpp->funcs->dpp_setup(dpp,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index a80c0858293207..36d12db8d02256 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1923,9 +1923,9 @@ static void dcn20_program_pipe(
+ dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size);
+ }
+
+- if (pipe_ctx->update_flags.raw ||
+- (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
+- pipe_ctx->stream->update_flags.raw)
++ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.raw ||
++ pipe_ctx->plane_state->update_flags.raw ||
++ pipe_ctx->stream->update_flags.raw))
+ dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
+
+ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.bits.enable ||
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index a2e9bb485c366e..a2675b121fe44b 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -2551,6 +2551,8 @@ static int __maybe_unused anx7625_runtime_pm_suspend(struct device *dev)
+ mutex_lock(&ctx->lock);
+
+ anx7625_stop_dp_work(ctx);
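++ /* The cached EDID may go stale across a power cycle; drop it unless a fixed panel is attached. */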
++ if (!ctx->pdata.panel_bridge)
++ anx7625_remove_edid(ctx);
+ anx7625_power_standby(ctx);
+
+ mutex_unlock(&ctx->lock);
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 87b8545fccc0af..e3a9832c742cb1 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3107,6 +3107,8 @@ static __maybe_unused int it6505_bridge_suspend(struct device *dev)
+ {
+ struct it6505 *it6505 = dev_get_drvdata(dev);
+
++ it6505_remove_edid(it6505);
++
+ return it6505_poweroff(it6505);
+ }
+
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index f3afdab55c113e..47189587643a15 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1714,6 +1714,13 @@ static const struct drm_edid *tc_edid_read(struct drm_bridge *bridge,
+ struct drm_connector *connector)
+ {
+ struct tc_data *tc = bridge_to_tc(bridge);
++ int ret;
++
++ ret = tc_get_display_props(tc);
++ if (ret < 0) {
++ dev_err(tc->dev, "failed to read display props: %d\n", ret);
++ return NULL;
++ }
+
+ return drm_edid_read_ddc(connector, &tc->aux.ddc);
+ }
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index ad1dc638c83bb1..ce82c9451dfe7d 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -129,7 +129,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
+ */
+ struct drm_file *drm_file_alloc(struct drm_minor *minor)
+ {
+- static atomic64_t ident = ATOMIC_INIT(0);
++ static atomic64_t ident = ATOMIC64_INIT(0);
+ struct drm_device *dev = minor->dev;
+ struct drm_file *file;
+ int ret;
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index 5ace481c190117..1ed68d3cd80bad 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -151,7 +151,7 @@ static void show_leaks(struct drm_mm *mm) { }
+
+ INTERVAL_TREE_DEFINE(struct drm_mm_node, rb,
+ u64, __subtree_last,
+- START, LAST, static inline, drm_mm_interval_tree)
++ START, LAST, static inline __maybe_unused, drm_mm_interval_tree)
+
+ struct drm_mm_node *
+ __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index 6500f3999c5fa5..19ec67a5a918e3 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -538,6 +538,16 @@ static int etnaviv_bind(struct device *dev)
+ priv->num_gpus = 0;
+ priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+
++ /*
++ * If the GPU is part of a system with DMA addressing limitations,
++ * request pages for our SHM backend buffers from the DMA32 zone to
++ * hopefully avoid performance-killing SWIOTLB bounce buffering.
++ */
++ if (dma_addressing_limited(dev)) {
++ priv->shm_gfp_mask |= GFP_DMA32;
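++ /* GFP_DMA32 cannot be combined with __GFP_HIGHMEM, so clear it from the mask. */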
++ priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
++ }
++
+ priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev);
+ if (IS_ERR(priv->cmdbuf_suballoc)) {
+ dev_err(drm->dev, "Failed to create cmdbuf suballocator\n");
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 7c7f97793ddd0c..df0bc828a23483 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -839,14 +839,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ if (ret)
+ goto fail;
+
+- /*
+- * If the GPU is part of a system with DMA addressing limitations,
+- * request pages for our SHM backend buffers from the DMA32 zone to
+- * hopefully avoid performance killing SWIOTLB bounce buffering.
+- */
+- if (dma_addressing_limited(gpu->dev))
+- priv->shm_gfp_mask |= GFP_DMA32;
+-
+ /* Create buffer: */
+ ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer,
+ PAGE_SIZE);
+@@ -1330,6 +1322,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
+ {
+ u32 val;
+
++ mutex_lock(&gpu->lock);
++
+ /* disable clock gating */
+ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
+ val &= ~VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
+@@ -1341,6 +1335,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
+ gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
+
+ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_PRE);
++
++ mutex_unlock(&gpu->lock);
+ }
+
+ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+@@ -1350,13 +1346,9 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+ unsigned int i;
+ u32 val;
+
+- sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
+-
+- for (i = 0; i < submit->nr_pmrs; i++) {
+- const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
++ mutex_lock(&gpu->lock);
+
+- *pmr->bo_vma = pmr->sequence;
+- }
++ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
+
+ /* disable debug register */
+ val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
+@@ -1367,6 +1359,14 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
+ val |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
+ gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val);
++
++ mutex_unlock(&gpu->lock);
++
++ for (i = 0; i < submit->nr_pmrs; i++) {
++ const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
++
++ *pmr->bo_vma = pmr->sequence;
++ }
+ }
+
+
+diff --git a/drivers/gpu/drm/fsl-dcu/Kconfig b/drivers/gpu/drm/fsl-dcu/Kconfig
+index 5ca71ef8732590..c9ee98693b48a4 100644
+--- a/drivers/gpu/drm/fsl-dcu/Kconfig
++++ b/drivers/gpu/drm/fsl-dcu/Kconfig
+@@ -8,6 +8,7 @@ config DRM_FSL_DCU
+ select DRM_PANEL
+ select REGMAP_MMIO
+ select VIDEOMODE_HELPERS
++ select MFD_SYSCON if SOC_LS1021A
+ help
+ Choose this option if you have an Freescale DCU chipset.
+ If M is selected the module will be called fsl-dcu-drm.
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+index ab6c0c6cd0e2e3..c4c3d41ee53097 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+@@ -100,6 +100,7 @@ static void fsl_dcu_irq_uninstall(struct drm_device *dev)
+ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
+ {
+ struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
++ struct regmap *scfg;
+ int ret;
+
+ ret = fsl_dcu_drm_modeset_init(fsl_dev);
+@@ -108,6 +109,20 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
+ return ret;
+ }
+
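++ /* Only LS1021A has an SCFG pixel clock gate; elsewhere this lookup returns -ENODEV. */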
++ scfg = syscon_regmap_lookup_by_compatible("fsl,ls1021a-scfg");
++ if (PTR_ERR(scfg) != -ENODEV) {
++ /*
++ * For simplicity, enable the PIXCLK unconditionally,
++ * resulting in increased power consumption. Disabling
++ * the clock in PM or on unload could be implemented as
++ * a future improvement.
++ */
++ ret = regmap_update_bits(scfg, SCFG_PIXCLKCR, SCFG_PIXCLKCR_PXCEN,
++ SCFG_PIXCLKCR_PXCEN);
++ if (ret < 0)
++ return dev_err_probe(dev->dev, ret, "failed to enable pixclk\n");
++ }
++
+ ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
+ if (ret < 0) {
+ dev_err(dev->dev, "failed to initialize vblank\n");
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+index e2049a0e8a92a5..566396013c04a5 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+@@ -160,6 +160,9 @@
+ #define FSL_DCU_ARGB4444 12
+ #define FSL_DCU_YUV422 14
+
++#define SCFG_PIXCLKCR 0x28
++#define SCFG_PIXCLKCR_PXCEN BIT(31)
++
+ #define VF610_LAYER_REG_NUM 9
+ #define LS1021A_LAYER_REG_NUM 10
+
+diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
+index 4deeac7ed40a4d..2bbdc05a3b9779 100644
+--- a/drivers/gpu/drm/imagination/pvr_ccb.c
++++ b/drivers/gpu/drm/imagination/pvr_ccb.c
+@@ -321,7 +321,7 @@ static int pvr_kccb_reserve_slot_sync(struct pvr_device *pvr_dev)
+ bool reserved = false;
+ u32 retries = 0;
+
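++ /* Deadline check in jiffies; time_before() is safe across counter wrap-around. */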
+- while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT ||
++ while (time_before(jiffies, start_timestamp + RESERVE_SLOT_TIMEOUT) ||
+ retries < RESERVE_SLOT_MIN_RETRIES) {
+ reserved = pvr_kccb_try_reserve_slot(pvr_dev);
+ if (reserved)
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
+index 7bd6ba4c6e8ab6..363f885a709826 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.c
++++ b/drivers/gpu/drm/imagination/pvr_vm.c
+@@ -654,9 +654,7 @@ pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
+
+ xa_lock(&pvr_file->vm_ctx_handles);
+ vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
+- if (vm_ctx)
+- kref_get(&vm_ctx->ref_count);
+-
++ pvr_vm_context_get(vm_ctx);
+ xa_unlock(&pvr_file->vm_ctx_handles);
+
+ return vm_ctx;
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-crtc.c b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+index 31267c00782fc1..af91e45b5d13b7 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-crtc.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+@@ -206,15 +206,13 @@ int dcss_crtc_init(struct dcss_crtc *crtc, struct drm_device *drm)
+ if (crtc->irq < 0)
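++ /* Flush stale MMU/TLB state so the new overflow mapping is visible before use. */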
+ return crtc->irq;
+
+- ret = request_irq(crtc->irq, dcss_crtc_irq_handler,
+- 0, "dcss_drm", crtc);
++ ret = request_irq(crtc->irq, dcss_crtc_irq_handler, IRQF_NO_AUTOEN,
++ "dcss_drm", crtc);
+ if (ret) {
+ dev_err(dcss->dev, "irq request failed with %d.\n", ret);
+ return ret;
+ }
+
+- disable_irq(crtc->irq);
+-
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+index ef29c9a61a4617..99db53e167bd02 100644
+--- a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+@@ -410,14 +410,12 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data)
+ }
+
+ ipu_crtc->irq = ipu_plane_irq(ipu_crtc->plane[0]);
+- ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0,
+- "imx_drm", ipu_crtc);
++ ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler,
++ IRQF_NO_AUTOEN, "imx_drm", ipu_crtc);
+ if (ret < 0) {
+ dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret);
+ return ret;
+ }
+- /* Only enable IRQ when we actually need it to trigger work. */
+- disable_irq(ipu_crtc->irq);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 37927bdd6fbed8..14db7376c712d1 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1522,15 +1522,13 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
+
+ irq = platform_get_irq_byname(pdev, name);
+
+- ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH, name, gmu);
++ ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN, name, gmu);
+ if (ret) {
+ DRM_DEV_ERROR(&pdev->dev, "Unable to get interrupt %s %d\n",
+ name, ret);
+ return ret;
+ }
+
+- disable_irq(irq);
+-
+ return irq;
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+index 1d3e9666c7411e..64c94e919a6980 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+@@ -156,18 +156,6 @@ static const struct dpu_lm_cfg msm8998_lm[] = {
+ .sblk = &msm8998_lm_sblk,
+ .lm_pair = LM_5,
+ .pingpong = PINGPONG_2,
+- }, {
+- .name = "lm_3", .id = LM_3,
+- .base = 0x47000, .len = 0x320,
+- .features = MIXER_MSM8998_MASK,
+- .sblk = &msm8998_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+- }, {
+- .name = "lm_4", .id = LM_4,
+- .base = 0x48000, .len = 0x320,
+- .features = MIXER_MSM8998_MASK,
+- .sblk = &msm8998_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+ }, {
+ .name = "lm_5", .id = LM_5,
+ .base = 0x49000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+index 7a23389a573272..72bd4f7e9e504c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+@@ -155,19 +155,6 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ .lm_pair = LM_5,
+ .pingpong = PINGPONG_2,
+ .dspp = DSPP_2,
+- }, {
+- .name = "lm_3", .id = LM_3,
+- .base = 0x0, .len = 0x320,
+- .features = MIXER_SDM845_MASK,
+- .sblk = &sdm845_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+- .dspp = DSPP_3,
+- }, {
+- .name = "lm_4", .id = LM_4,
+- .base = 0x0, .len = 0x320,
+- .features = MIXER_SDM845_MASK,
+- .sblk = &sdm845_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+ }, {
+ .name = "lm_5", .id = LM_5,
+ .base = 0x49000, .len = 0x320,
+@@ -175,6 +162,7 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+index 68fae048a9a837..260accc151d4b4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+@@ -80,7 +80,7 @@ static u64 _dpu_core_perf_calc_clk(const struct dpu_perf_cfg *perf_cfg,
+
+ mode = &state->adjusted_mode;
+
+- crtc_clk = mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
++ crtc_clk = (u64)mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
+
+ drm_atomic_crtc_for_each_plane(plane, crtc) {
+ pstate = to_dpu_plane_state(plane->state);
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index ea70c1c32d9401..6970b0f7f457c8 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -140,6 +140,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+ struct msm_drm_private *priv = gpu->dev->dev_private;
++ int ret;
+
+ /* We need target support to do devfreq */
+ if (!gpu->funcs->gpu_busy)
+@@ -156,8 +157,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+
+ mutex_init(&df->lock);
+
+- dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
+- DEV_PM_QOS_MIN_FREQUENCY, 0);
++ ret = dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
++ DEV_PM_QOS_MIN_FREQUENCY, 0);
++ if (ret < 0) {
++ DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize QoS\n");
++ return;
++ }
+
+ msm_devfreq_profile.initial_freq = gpu->fast_rate;
+
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+index 060c74a80eb14b..3ea447f6a45b51 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+@@ -443,6 +443,7 @@ gf100_gr_chan_new(struct nvkm_gr *base, struct nvkm_chan *fifoch,
+ ret = gf100_grctx_generate(gr, chan, fifoch->inst);
+ if (ret) {
+ nvkm_error(&base->engine.subdev, "failed to construct context\n");
++ mutex_unlock(&gr->fecs.mutex);
+ return ret;
+ }
+ }
+diff --git a/drivers/gpu/drm/omapdrm/dss/base.c b/drivers/gpu/drm/omapdrm/dss/base.c
+index 5f8002f6bb7a59..a4ac113e16904b 100644
+--- a/drivers/gpu/drm/omapdrm/dss/base.c
++++ b/drivers/gpu/drm/omapdrm/dss/base.c
+@@ -139,21 +139,13 @@ static bool omapdss_device_is_connected(struct omap_dss_device *dssdev)
+ }
+
+ int omapdss_device_connect(struct dss_device *dss,
+- struct omap_dss_device *src,
+ struct omap_dss_device *dst)
+ {
+- dev_dbg(&dss->pdev->dev, "connect(%s, %s)\n",
+- src ? dev_name(src->dev) : "NULL",
++ dev_dbg(&dss->pdev->dev, "connect(%s)\n",
+ dst ? dev_name(dst->dev) : "NULL");
+
+- if (!dst) {
+- /*
+- * The destination is NULL when the source is connected to a
+- * bridge instead of a DSS device. Stop here, we will attach
+- * the bridge later when we will have a DRM encoder.
+- */
+- return src && src->bridge ? 0 : -EINVAL;
+- }
++ if (!dst)
++ return -EINVAL;
+
+ if (omapdss_device_is_connected(dst))
+ return -EBUSY;
+@@ -163,19 +155,14 @@ int omapdss_device_connect(struct dss_device *dss,
+ return 0;
+ }
+
+-void omapdss_device_disconnect(struct omap_dss_device *src,
++void omapdss_device_disconnect(struct dss_device *dss,
+ struct omap_dss_device *dst)
+ {
+- struct dss_device *dss = src ? src->dss : dst->dss;
+-
+- dev_dbg(&dss->pdev->dev, "disconnect(%s, %s)\n",
+- src ? dev_name(src->dev) : "NULL",
++ dev_dbg(&dss->pdev->dev, "disconnect(%s)\n",
+ dst ? dev_name(dst->dev) : "NULL");
+
+- if (!dst) {
+- WARN_ON(!src->bridge);
++ if (WARN_ON(!dst))
+ return;
+- }
+
+ if (!dst->id && !omapdss_device_is_connected(dst)) {
+ WARN_ON(1);
+diff --git a/drivers/gpu/drm/omapdrm/dss/omapdss.h b/drivers/gpu/drm/omapdrm/dss/omapdss.h
+index 040d5a3e33d680..4c22c09c93d523 100644
+--- a/drivers/gpu/drm/omapdrm/dss/omapdss.h
++++ b/drivers/gpu/drm/omapdrm/dss/omapdss.h
+@@ -242,9 +242,8 @@ struct omap_dss_device *omapdss_device_get(struct omap_dss_device *dssdev);
+ void omapdss_device_put(struct omap_dss_device *dssdev);
+ struct omap_dss_device *omapdss_find_device_by_node(struct device_node *node);
+ int omapdss_device_connect(struct dss_device *dss,
+- struct omap_dss_device *src,
+ struct omap_dss_device *dst);
+-void omapdss_device_disconnect(struct omap_dss_device *src,
++void omapdss_device_disconnect(struct dss_device *dss,
+ struct omap_dss_device *dst);
+
+ int omap_dss_get_num_overlay_managers(void);
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index d3eac4817d7687..a982378aa14119 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -307,7 +307,7 @@ static void omap_disconnect_pipelines(struct drm_device *ddev)
+ for (i = 0; i < priv->num_pipes; i++) {
+ struct omap_drm_pipeline *pipe = &priv->pipes[i];
+
+- omapdss_device_disconnect(NULL, pipe->output);
++ omapdss_device_disconnect(priv->dss, pipe->output);
+
+ omapdss_device_put(pipe->output);
+ pipe->output = NULL;
+@@ -325,7 +325,7 @@ static int omap_connect_pipelines(struct drm_device *ddev)
+ int r;
+
+ for_each_dss_output(output) {
+- r = omapdss_device_connect(priv->dss, NULL, output);
++ r = omapdss_device_connect(priv->dss, output);
+ if (r == -EPROBE_DEFER) {
+ omapdss_device_put(output);
+ return r;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index fdae677558f3ef..b9c67e4ca36054 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1402,8 +1402,6 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
+
+ omap_obj = to_omap_bo(obj);
+
+- mutex_lock(&omap_obj->lock);
+-
+ omap_obj->sgt = sgt;
+
+ if (omap_gem_sgt_is_contiguous(sgt, size)) {
+@@ -1418,21 +1416,17 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
+ pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
+ if (!pages) {
+ omap_gem_free_object(obj);
+- obj = ERR_PTR(-ENOMEM);
+- goto done;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ omap_obj->pages = pages;
+ ret = drm_prime_sg_to_page_array(sgt, pages, npages);
+ if (ret) {
+ omap_gem_free_object(obj);
+- obj = ERR_PTR(-ENOMEM);
+- goto done;
++ return ERR_PTR(-ENOMEM);
+ }
+ }
+
+-done:
+- mutex_unlock(&omap_obj->lock);
+ return obj;
+ }
+
+diff --git a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
+index d3baccfe6286b2..06e16a7c14a756 100644
+--- a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
++++ b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
+@@ -917,7 +917,7 @@ static const struct nv3052c_panel_info wl_355608_a8_panel_info = {
+ static const struct spi_device_id nv3052c_ids[] = {
+ { "ltk035c5444t", },
+ { "fs035vg158", },
+- { "wl-355608-a8", },
++ { "rg35xx-plus-panel", },
+ { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(spi, nv3052c_ids);
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index 57686340de49fc..549b86f2cc2887 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -38,6 +38,7 @@
+
+ #define NT35510_CMD_CORRECT_GAMMA BIT(0)
+ #define NT35510_CMD_CONTROL_DISPLAY BIT(1)
++#define NT35510_CMD_SETVCMOFF BIT(2)
+
+ #define MCS_CMD_MAUCCTR 0xF0 /* Manufacturer command enable */
+ #define MCS_CMD_READ_ID1 0xDA
+@@ -721,11 +722,13 @@ static int nt35510_setup_power(struct nt35510 *nt)
+ if (ret)
+ return ret;
+
+- ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
+- NT35510_P1_VCMOFF_LEN,
+- nt->conf->vcmoff);
+- if (ret)
+- return ret;
++ if (nt->conf->cmds & NT35510_CMD_SETVCMOFF) {
++ ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
++ NT35510_P1_VCMOFF_LEN,
++ nt->conf->vcmoff);
++ if (ret)
++ return ret;
++ }
+
+ /* Typically 10 ms */
+ usleep_range(10000, 20000);
+@@ -1319,7 +1322,7 @@ static const struct nt35510_config nt35510_frida_frd400b25025 = {
+ },
+ .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+ MIPI_DSI_MODE_LPM,
+- .cmds = NT35510_CMD_CONTROL_DISPLAY,
++ .cmds = NT35510_CMD_CONTROL_DISPLAY | NT35510_CMD_SETVCMOFF,
+ /* 0x03: AVDD = 6.2V */
+ .avdd = { 0x03, 0x03, 0x03 },
+ /* 0x46: PCK = 2 x Hsync, BTP = 2.5 x VDDB */
+diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+index 2d30da38c2c3e4..3385fd3ef41a47 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
++++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+@@ -38,7 +38,7 @@ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ return PTR_ERR(opp);
+ dev_pm_opp_put(opp);
+
+- err = dev_pm_opp_set_rate(dev, *freq);
++ err = dev_pm_opp_set_rate(dev, *freq);
+ if (!err)
+ ptdev->pfdevfreq.current_frequency = *freq;
+
+@@ -182,6 +182,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
+ * if any and will avoid a switch off by regulator_late_cleanup()
+ */
+ ret = dev_pm_opp_set_opp(dev, opp);
++ dev_pm_opp_put(opp);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+ return ret;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index fd8e44992184fa..b52dd510e0367b 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -177,7 +177,6 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
+ struct panfrost_model {
+ const char *name;
+ u32 id;
+- u32 id_mask;
+ u64 features;
+ u64 issues;
+ struct {
+diff --git a/drivers/gpu/drm/panthor/panthor_devfreq.c b/drivers/gpu/drm/panthor/panthor_devfreq.c
+index c6d3c327cc24c0..ecc7a52bd688ee 100644
+--- a/drivers/gpu/drm/panthor/panthor_devfreq.c
++++ b/drivers/gpu/drm/panthor/panthor_devfreq.c
+@@ -62,14 +62,20 @@ static void panthor_devfreq_update_utilization(struct panthor_devfreq *pdevfreq)
+ static int panthor_devfreq_target(struct device *dev, unsigned long *freq,
+ u32 flags)
+ {
++ struct panthor_device *ptdev = dev_get_drvdata(dev);
+ struct dev_pm_opp *opp;
++ int err;
+
+ opp = devfreq_recommended_opp(dev, freq, flags);
+ if (IS_ERR(opp))
+ return PTR_ERR(opp);
+ dev_pm_opp_put(opp);
+
+- return dev_pm_opp_set_rate(dev, *freq);
++ err = dev_pm_opp_set_rate(dev, *freq);
++ if (!err)
++ ptdev->current_frequency = *freq;
++
++ return err;
+ }
+
+ static void panthor_devfreq_reset(struct panthor_devfreq *pdevfreq)
+@@ -130,6 +136,7 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+ struct panthor_devfreq *pdevfreq;
+ struct dev_pm_opp *opp;
+ unsigned long cur_freq;
++ unsigned long freq = ULONG_MAX;
+ int ret;
+
+ pdevfreq = drmm_kzalloc(&ptdev->base, sizeof(*ptdev->devfreq), GFP_KERNEL);
+@@ -156,12 +163,6 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+
+ cur_freq = clk_get_rate(ptdev->clks.core);
+
+- opp = devfreq_recommended_opp(dev, &cur_freq, 0);
+- if (IS_ERR(opp))
+- return PTR_ERR(opp);
+-
+- panthor_devfreq_profile.initial_freq = cur_freq;
+-
+ /* Regulator coupling only takes care of synchronizing/balancing voltage
+ * updates, but the coupled regulator needs to be enabled manually.
+ *
+@@ -192,16 +193,30 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+ return ret;
+ }
+
++ opp = devfreq_recommended_opp(dev, &cur_freq, 0);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++
++ panthor_devfreq_profile.initial_freq = cur_freq;
++ ptdev->current_frequency = cur_freq;
++
+ /*
+ * Set the recommend OPP this will enable and configure the regulator
+ * if any and will avoid a switch off by regulator_late_cleanup()
+ */
+ ret = dev_pm_opp_set_opp(dev, opp);
++ dev_pm_opp_put(opp);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+ return ret;
+ }
+
++ /* Find the fastest defined rate */
++ opp = dev_pm_opp_find_freq_floor(dev, &freq);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++ ptdev->fast_rate = freq;
++
+ dev_pm_opp_put(opp);
+
+ /*
+diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
+index e388c0472ba783..2109905813e8c4 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.h
++++ b/drivers/gpu/drm/panthor/panthor_device.h
+@@ -66,6 +66,25 @@ struct panthor_irq {
+ atomic_t suspended;
+ };
+
++/**
++ * enum panthor_device_profiling_flags - Device profiling flags
++ */
++enum panthor_device_profiling_flags {
++ /** @PANTHOR_DEVICE_PROFILING_DISABLED: Profiling is disabled. */
++ PANTHOR_DEVICE_PROFILING_DISABLED = 0,
++
++ /** @PANTHOR_DEVICE_PROFILING_CYCLES: Sampling job cycles. */
++ PANTHOR_DEVICE_PROFILING_CYCLES = BIT(0),
++
++ /** @PANTHOR_DEVICE_PROFILING_TIMESTAMP: Sampling job timestamp. */
++ PANTHOR_DEVICE_PROFILING_TIMESTAMP = BIT(1),
++
++ /** @PANTHOR_DEVICE_PROFILING_ALL: Sampling everything. */
++ PANTHOR_DEVICE_PROFILING_ALL =
++ PANTHOR_DEVICE_PROFILING_CYCLES |
++ PANTHOR_DEVICE_PROFILING_TIMESTAMP,
++};
++
+ /**
+ * struct panthor_device - Panthor device
+ */
+@@ -162,6 +181,15 @@ struct panthor_device {
+ */
+ struct page *dummy_latest_flush;
+ } pm;
++
++ /** @profile_mask: User-set profiling flags for job accounting. */
++ u32 profile_mask;
++
++ /** @current_frequency: Device clock frequency at present. Set by DVFS. */
++ unsigned long current_frequency;
++
++ /** @fast_rate: Maximum device clock frequency. Set by DVFS. */
++ unsigned long fast_rate;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 9929e22f4d8d2e..20135a9bc026ed 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -93,6 +93,9 @@
+ #define MIN_CSGS 3
+ #define MAX_CSG_PRIO 0xf
+
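++/* A CS instruction is 8 bytes, so a 64-byte cache line holds 8 of them. */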
++#define NUM_INSTRS_PER_CACHE_LINE (64 / sizeof(u64))
++#define MAX_INSTRS_PER_JOB 24
++
+ struct panthor_group;
+
+ /**
+@@ -476,6 +479,18 @@ struct panthor_queue {
+ */
+ struct list_head in_flight_jobs;
+ } fence_ctx;
++
++ /** @profiling: Job profiling data slots and access information. */
++ struct {
++ /** @slots: Kernel BO holding the slots. */
++ struct panthor_kernel_bo *slots;
++
++ /** @slot_count: Number of jobs the ringbuffer can hold at once. */
++ u32 slot_count;
++
++ /** @seqno: Index of the next available profiling information slot. */
++ u32 seqno;
++ } profiling;
+ };
+
+ /**
+@@ -662,6 +677,18 @@ struct panthor_group {
+ struct list_head wait_node;
+ };
+
++struct panthor_job_profiling_data {
++ struct {
++ u64 before;
++ u64 after;
++ } cycles;
++
++ struct {
++ u64 before;
++ u64 after;
++ } time;
++};
++
+ /**
+ * group_queue_work() - Queue a group work
+ * @group: Group to queue the work for.
+@@ -775,6 +802,15 @@ struct panthor_job {
+
+ /** @done_fence: Fence signaled when the job is finished or cancelled. */
+ struct dma_fence *done_fence;
++
++ /** @profiling: Job profiling information. */
++ struct {
++ /** @mask: Current device job profiling enablement bitmask. */
++ u32 mask;
++
++ /** @slot: Job index in the profiling slots BO. */
++ u32 slot;
++ } profiling;
+ };
+
+ static void
+@@ -839,6 +875,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
+
+ panthor_kernel_bo_destroy(queue->ringbuf);
+ panthor_kernel_bo_destroy(queue->iface.mem);
++ panthor_kernel_bo_destroy(queue->profiling.slots);
+
+ /* Release the last_fence we were holding, if any. */
+ dma_fence_put(queue->fence_ctx.last_fence);
+@@ -1989,8 +2026,6 @@ tick_ctx_init(struct panthor_scheduler *sched,
+ }
+ }
+
+-#define NUM_INSTRS_PER_SLOT 16
+-
+ static void
+ group_term_post_processing(struct panthor_group *group)
+ {
+@@ -2829,65 +2864,198 @@ static void group_sync_upd_work(struct work_struct *work)
+ group_put(group);
+ }
+
+-static struct dma_fence *
+-queue_run_job(struct drm_sched_job *sched_job)
++struct panthor_job_ringbuf_instrs {
++ u64 buffer[MAX_INSTRS_PER_JOB];
++ u32 count;
++};
++
++struct panthor_job_instr {
++ u32 profile_mask;
++ u64 instr;
++};
++
++#define JOB_INSTR(__prof, __instr) \
++ { \
++ .profile_mask = __prof, \
++ .instr = __instr, \
++ }
++
++static void
++copy_instrs_to_ringbuf(struct panthor_queue *queue,
++ struct panthor_job *job,
++ struct panthor_job_ringbuf_instrs *instrs)
++{
++ u64 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
++ u64 start = job->ringbuf.start & (ringbuf_size - 1);
++ u64 size, written;
++
++ /*
++ * We need to write out whole, cache-line-aligned slots. Since
++ * prepare_job_instrs() has already zero-padded instrs->buffer up to
++ * a cache-line boundary, there is no extra padding to do here.
++ */
++ instrs->count = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
++ size = instrs->count * sizeof(u64);
++ WARN_ON(size > ringbuf_size);
++ written = min(ringbuf_size - start, size);
++
++ memcpy(queue->ringbuf->kmap + start, instrs->buffer, written);
++
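++ /* The ring buffer wrapped around; copy the remainder to its start. */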
++ if (written < size)
++ memcpy(queue->ringbuf->kmap,
++ &instrs->buffer[written / sizeof(u64)],
++ size - written);
++}
++
++struct panthor_job_cs_params {
++ u32 profile_mask;
++ u64 addr_reg; u64 val_reg;
++ u64 cycle_reg; u64 time_reg;
++ u64 sync_addr; u64 times_addr;
++ u64 cs_start; u64 cs_size;
++ u32 last_flush; u32 waitall_mask;
++};
++
++static void
++get_job_cs_params(struct panthor_job *job, struct panthor_job_cs_params *params)
+ {
+- struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
+ struct panthor_group *group = job->group;
+ struct panthor_queue *queue = group->queues[job->queue_idx];
+ struct panthor_device *ptdev = group->ptdev;
+ struct panthor_scheduler *sched = ptdev->scheduler;
+- u32 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
+- u32 ringbuf_insert = queue->iface.input->insert & (ringbuf_size - 1);
+- u64 addr_reg = ptdev->csif_info.cs_reg_count -
+- ptdev->csif_info.unpreserved_cs_reg_count;
+- u64 val_reg = addr_reg + 2;
+- u64 sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
+- job->queue_idx * sizeof(struct panthor_syncobj_64b);
+- u32 waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
+- struct dma_fence *done_fence;
+- int ret;
+
+- u64 call_instrs[NUM_INSTRS_PER_SLOT] = {
+- /* MOV32 rX+2, cs.latest_flush */
+- (2ull << 56) | (val_reg << 48) | job->call_info.latest_flush,
++ params->addr_reg = ptdev->csif_info.cs_reg_count -
++ ptdev->csif_info.unpreserved_cs_reg_count;
++ params->val_reg = params->addr_reg + 2;
++ params->cycle_reg = params->addr_reg;
++ params->time_reg = params->val_reg;
+
+- /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
+- (36ull << 56) | (0ull << 48) | (val_reg << 40) | (0 << 16) | 0x233,
++ params->sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
++ job->queue_idx * sizeof(struct panthor_syncobj_64b);
++ params->times_addr = panthor_kernel_bo_gpuva(queue->profiling.slots) +
++ (job->profiling.slot * sizeof(struct panthor_job_profiling_data));
++ params->waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
+
+- /* MOV48 rX:rX+1, cs.start */
+- (1ull << 56) | (addr_reg << 48) | job->call_info.start,
++ params->cs_start = job->call_info.start;
++ params->cs_size = job->call_info.size;
++ params->last_flush = job->call_info.latest_flush;
+
+- /* MOV32 rX+2, cs.size */
+- (2ull << 56) | (val_reg << 48) | job->call_info.size,
++ params->profile_mask = job->profiling.mask;
++}
+
+- /* WAIT(0) => waits for FLUSH_CACHE2 instruction */
+- (3ull << 56) | (1 << 16),
++#define JOB_INSTR_ALWAYS(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_DISABLED, (instr))
++#define JOB_INSTR_TIMESTAMP(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_TIMESTAMP, (instr))
++#define JOB_INSTR_CYCLES(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_CYCLES, (instr))
+
++static void
++prepare_job_instrs(const struct panthor_job_cs_params *params,
++ struct panthor_job_ringbuf_instrs *instrs)
++{
++ const struct panthor_job_instr instr_seq[] = {
++ /* MOV32 rX+2, cs.latest_flush */
++ JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->last_flush),
++ /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
++ JOB_INSTR_ALWAYS((36ull << 56) | (0ull << 48) | (params->val_reg << 40) |
++ (0 << 16) | 0x233),
++ /* MOV48 rX:rX+1, cycles_offset */
++ JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, cycles.before))),
++ /* STORE_STATE cycles */
++ JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
++ /* MOV48 rX:rX+1, time_offset */
++ JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, time.before))),
++ /* STORE_STATE timer */
++ JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
++ /* MOV48 rX:rX+1, cs.start */
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->cs_start),
++ /* MOV32 rX+2, cs.size */
++ JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->cs_size),
++ /* WAIT(0) => waits for FLUSH_CACHE2 instruction */
++ JOB_INSTR_ALWAYS((3ull << 56) | (1 << 16)),
+ /* CALL rX:rX+1, rX+2 */
+- (32ull << 56) | (addr_reg << 40) | (val_reg << 32),
+-
++ JOB_INSTR_ALWAYS((32ull << 56) | (params->addr_reg << 40) |
++ (params->val_reg << 32)),
++ /* MOV48 rX:rX+1, cycles_offset */
++ JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, cycles.after))),
++ /* STORE_STATE cycles */
++ JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
++ /* MOV48 rX:rX+1, time_offset */
++ JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, time.after))),
++ /* STORE_STATE timer */
++ JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
+ /* MOV48 rX:rX+1, sync_addr */
+- (1ull << 56) | (addr_reg << 48) | sync_addr,
+-
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->sync_addr),
+ /* MOV48 rX+2, #1 */
+- (1ull << 56) | (val_reg << 48) | 1,
+-
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->val_reg << 48) | 1),
+ /* WAIT(all) */
+- (3ull << 56) | (waitall_mask << 16),
+-
++ JOB_INSTR_ALWAYS((3ull << 56) | (params->waitall_mask << 16)),
+ /* SYNC_ADD64.system_scope.propage_err.nowait rX:rX+1, rX+2*/
+- (51ull << 56) | (0ull << 48) | (addr_reg << 40) | (val_reg << 32) | (0 << 16) | 1,
++ JOB_INSTR_ALWAYS((51ull << 56) | (0ull << 48) | (params->addr_reg << 40) |
++ (params->val_reg << 32) | (0 << 16) | 1),
++ /* ERROR_BARRIER, so we can recover from faults at job boundaries. */
++ JOB_INSTR_ALWAYS((47ull << 56)),
++ };
++ u32 pad;
+
+- /* ERROR_BARRIER, so we can recover from faults at job
+- * boundaries.
+- */
+- (47ull << 56),
++ instrs->count = 0;
++
++ /* Needs to be cacheline aligned to please the prefetcher. */
++ static_assert(sizeof(instrs->buffer) % 64 == 0,
++ "panthor_job_ringbuf_instrs::buffer is not aligned on a cacheline");
++
++ /* Make sure we have enough storage to store the whole sequence. */
++ static_assert(ALIGN(ARRAY_SIZE(instr_seq), NUM_INSTRS_PER_CACHE_LINE) ==
++ ARRAY_SIZE(instrs->buffer),
++ "instr_seq vs panthor_job_ringbuf_instrs::buffer size mismatch");
++
++ for (u32 i = 0; i < ARRAY_SIZE(instr_seq); i++) {
++ /* If the profile mask of this instruction is not enabled, skip it. */
++ if (instr_seq[i].profile_mask &&
++ !(instr_seq[i].profile_mask & params->profile_mask))
++ continue;
++
++ instrs->buffer[instrs->count++] = instr_seq[i].instr;
++ }
++
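++ /* Zero-fill the tail so only whole cache lines are written to the ring buffer. */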
++ pad = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
++ memset(&instrs->buffer[instrs->count], 0,
++ (pad - instrs->count) * sizeof(instrs->buffer[0]));
++ instrs->count = pad;
++}
++
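++/*
++ * A job's scheduler credits equal the number of ring buffer instructions
++ * it will emit for the given profiling mask.
++ */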
++static u32 calc_job_credits(u32 profile_mask)
++{
++ struct panthor_job_ringbuf_instrs instrs;
++ struct panthor_job_cs_params params = {
++ .profile_mask = profile_mask,
+ };
+
+- /* Need to be cacheline aligned to please the prefetcher. */
+- static_assert(sizeof(call_instrs) % 64 == 0,
+- "call_instrs is not aligned on a cacheline");
++ prepare_job_instrs(¶ms, &instrs);
++ return instrs.count;
++}
++
++static struct dma_fence *
++queue_run_job(struct drm_sched_job *sched_job)
++{
++ struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
++ struct panthor_group *group = job->group;
++ struct panthor_queue *queue = group->queues[job->queue_idx];
++ struct panthor_device *ptdev = group->ptdev;
++ struct panthor_scheduler *sched = ptdev->scheduler;
++ struct panthor_job_ringbuf_instrs instrs;
++ struct panthor_job_cs_params cs_params;
++ struct dma_fence *done_fence;
++ int ret;
+
+ /* Stream size is zero, nothing to do except making sure all previously
+ * submitted jobs are done before we signal the
+@@ -2914,17 +3082,23 @@ queue_run_job(struct drm_sched_job *sched_job)
+ queue->fence_ctx.id,
+ atomic64_inc_return(&queue->fence_ctx.seqno));
+
+- memcpy(queue->ringbuf->kmap + ringbuf_insert,
+- call_instrs, sizeof(call_instrs));
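++ /* Grab the next profiling slot; the index wraps around the slot pool. */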
++ job->profiling.slot = queue->profiling.seqno++;
++ if (queue->profiling.seqno == queue->profiling.slot_count)
++ queue->profiling.seqno = 0;
++
++ job->ringbuf.start = queue->iface.input->insert;
++
++ get_job_cs_params(job, &cs_params);
++ prepare_job_instrs(&cs_params, &instrs);
++ copy_instrs_to_ringbuf(queue, job, &instrs);
++
++ job->ringbuf.end = job->ringbuf.start + (instrs.count * sizeof(u64));
+
+ panthor_job_get(&job->base);
+ spin_lock(&queue->fence_ctx.lock);
+ list_add_tail(&job->node, &queue->fence_ctx.in_flight_jobs);
+ spin_unlock(&queue->fence_ctx.lock);
+
+- job->ringbuf.start = queue->iface.input->insert;
+- job->ringbuf.end = job->ringbuf.start + sizeof(call_instrs);
+-
+ /* Make sure the ring buffer is updated before the INSERT
+ * register.
+ */
+@@ -3017,6 +3191,33 @@ static const struct drm_sched_backend_ops panthor_queue_sched_ops = {
+ .free_job = queue_free_job,
+ };
+
++static u32 calc_profiling_ringbuf_num_slots(struct panthor_device *ptdev,
++ u32 cs_ringbuf_size)
++{
++ u32 min_profiled_job_instrs = U32_MAX;
++ u32 last_flag = fls(PANTHOR_DEVICE_PROFILING_ALL);
++
++ /*
++ * We want to calculate the minimum size of a profiled job's CS,
++ * because, since they need additional instructions for the sampling
++ * of performance metrics, they might take up further slots in
++ * the queue's ringbuffer. This means we might not need as many job
++ * slots for keeping track of their profiling information. What we
++ * need is the maximum number of slots we should allocate to this end,
++ * which matches the maximum number of profiled jobs we can place
++ * simultaneously in the queue's ring buffer.
++ * That has to be calculated separately for every single job profiling
++ * flag, except when job profiling is disabled, since unprofiled
++ * jobs don't need to keep track of this at all.
++ */
++ for (u32 i = 0; i < last_flag; i++) {
++ min_profiled_job_instrs =
++ min(min_profiled_job_instrs, calc_job_credits(BIT(i)));
++ }
++
++ return DIV_ROUND_UP(cs_ringbuf_size, min_profiled_job_instrs * sizeof(u64));
++}
++
+ static struct panthor_queue *
+ group_create_queue(struct panthor_group *group,
+ const struct drm_panthor_queue_create *args)
+@@ -3070,9 +3271,35 @@ group_create_queue(struct panthor_group *group,
+ goto err_free_queue;
+ }
+
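++ /* One profiling slot for every job the ring buffer can hold at once. */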
++ queue->profiling.slot_count =
++ calc_profiling_ringbuf_num_slots(group->ptdev, args->ringbuf_size);
++
++ queue->profiling.slots =
++ panthor_kernel_bo_create(group->ptdev, group->vm,
++ queue->profiling.slot_count *
++ sizeof(struct panthor_job_profiling_data),
++ DRM_PANTHOR_BO_NO_MMAP,
++ DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC |
++ DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED,
++ PANTHOR_VM_KERNEL_AUTO_VA);
++
++ if (IS_ERR(queue->profiling.slots)) {
++ ret = PTR_ERR(queue->profiling.slots);
++ goto err_free_queue;
++ }
++
++ ret = panthor_kernel_bo_vmap(queue->profiling.slots);
++ if (ret)
++ goto err_free_queue;
++
++ /*
++ * The credit limit argument tells us the total number of instructions
++ * across all CS slots in the ringbuffer, with some jobs requiring
++ * twice as many as others, depending on their profiling status.
++ */
+ ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
+ group->ptdev->scheduler->wq, 1,
+- args->ringbuf_size / (NUM_INSTRS_PER_SLOT * sizeof(u64)),
++ args->ringbuf_size / sizeof(u64),
+ 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
+ group->ptdev->reset.wq,
+ NULL, "panthor-queue", group->ptdev->base.dev);
+@@ -3380,6 +3607,7 @@ panthor_job_create(struct panthor_file *pfile,
+ {
+ struct panthor_group_pool *gpool = pfile->groups;
+ struct panthor_job *job;
++ u32 credits;
+ int ret;
+
+ if (qsubmit->pad)
+@@ -3438,9 +3666,16 @@ panthor_job_create(struct panthor_file *pfile,
+ }
+ }
+
++ job->profiling.mask = pfile->ptdev->profile_mask;
++ credits = calc_job_credits(job->profiling.mask);
++ if (credits == 0) {
++ ret = -EINVAL;
++ goto err_put_job;
++ }
++
+ ret = drm_sched_job_init(&job->base,
+ &job->group->queues[job->queue_idx]->entity,
+- 1, job->group);
++ credits, job->group);
+ if (ret)
+ goto err_put_job;
+
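As a quick illustration of the slot-count arithmetic in calc_profiling_ringbuf_num_slots() above, here is a standalone userspace sketch; the ring buffer size and instruction count are invented for the example, since the real values come from the queue arguments and calc_job_credits():

    #include <stdio.h>
    #include <stdint.h>

    #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

    int main(void)
    {
            uint32_t ringbuf_size = 64 * 1024; /* bytes, hypothetical queue size */
            uint32_t min_instrs = 32;          /* smallest profiled job, in u64 instructions */

            /* max profiled jobs that fit at once == profiling slots to allocate */
            printf("%u slots\n",
                   DIV_ROUND_UP(ringbuf_size, min_instrs * (uint32_t)sizeof(uint64_t)));
            return 0; /* prints "256 slots": 65536 / (32 * 8) */
    }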
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index 47aa06a9a94221..5b69cc8011b42b 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -760,16 +760,20 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port,
+ if (!rdev->audio.enabled || !rdev->mode_info.mode_config_initialized)
+ return 0;
+
+- list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) {
++ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ const struct drm_connector_helper_funcs *connector_funcs =
++ connector->helper_private;
++ encoder = connector_funcs->best_encoder(connector);
++
++ if (!encoder)
++ continue;
++
+ if (!radeon_encoder_is_digital(encoder))
+ continue;
+ radeon_encoder = to_radeon_encoder(encoder);
+ dig = radeon_encoder->enc_priv;
+ if (!dig->pin || dig->pin->id != port)
+ continue;
+- connector = radeon_get_connector_for_encoder(encoder);
+- if (!connector)
+- continue;
+ *enabled = true;
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index cf4b23369dc449..75b4725d49c7e1 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -553,6 +553,7 @@ void v3d_irq_disable(struct v3d_dev *v3d);
+ void v3d_irq_reset(struct v3d_dev *v3d);
+
+ /* v3d_mmu.c */
++int v3d_mmu_flush_all(struct v3d_dev *v3d);
+ int v3d_mmu_set_page_table(struct v3d_dev *v3d);
+ void v3d_mmu_insert_ptes(struct v3d_bo *bo);
+ void v3d_mmu_remove_ptes(struct v3d_bo *bo);
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index d469bda52c1a5e..20bf33702c3c4f 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -70,6 +70,8 @@ v3d_overflow_mem_work(struct work_struct *work)
+ list_add_tail(&bo->unref_head, &v3d->bin_job->render->unref_list);
+ spin_unlock_irqrestore(&v3d->job_lock, irqflags);
+
++ v3d_mmu_flush_all(v3d);
++
+ V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << V3D_MMU_PAGE_SHIFT);
+ V3D_CORE_WRITE(0, V3D_PTB_BPOS, obj->size);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c
+index 14f3af40d6f6d1..5bb7821c0243c6 100644
+--- a/drivers/gpu/drm/v3d/v3d_mmu.c
++++ b/drivers/gpu/drm/v3d/v3d_mmu.c
+@@ -28,36 +28,27 @@
+ #define V3D_PTE_WRITEABLE BIT(29)
+ #define V3D_PTE_VALID BIT(28)
+
+-static int v3d_mmu_flush_all(struct v3d_dev *v3d)
++int v3d_mmu_flush_all(struct v3d_dev *v3d)
+ {
+ int ret;
+
+- /* Make sure that another flush isn't already running when we
+- * start this one.
+- */
+- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+- V3D_MMU_CTL_TLB_CLEARING), 100);
+- if (ret)
+- dev_err(v3d->drm.dev, "TLB clear wait idle pre-wait failed\n");
+-
+- V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
+- V3D_MMU_CTL_TLB_CLEAR);
+-
+- V3D_WRITE(V3D_MMUC_CONTROL,
+- V3D_MMUC_CONTROL_FLUSH |
++ V3D_WRITE(V3D_MMUC_CONTROL, V3D_MMUC_CONTROL_FLUSH |
+ V3D_MMUC_CONTROL_ENABLE);
+
+- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+- V3D_MMU_CTL_TLB_CLEARING), 100);
++ ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
++ V3D_MMUC_CONTROL_FLUSHING), 100);
+ if (ret) {
+- dev_err(v3d->drm.dev, "TLB clear wait idle failed\n");
++ dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
+ return ret;
+ }
+
+- ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
+- V3D_MMUC_CONTROL_FLUSHING), 100);
++ V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
++ V3D_MMU_CTL_TLB_CLEAR);
++
++ ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
++ V3D_MMU_CTL_TLB_CLEARING), 100);
+ if (ret)
+- dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
++ dev_err(v3d->drm.dev, "MMU TLB clear wait idle failed\n");
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 08d2a273958287..4f935f1d50a943 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -135,8 +135,31 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+ struct v3d_stats *local_stats = &file->stats[queue];
+ u64 now = local_clock();
+-
+- preempt_disable();
++ unsigned long flags;
++
++ /*
++ * We only need to disable local interrupts to appease lockdep, which
++ * would otherwise think v3d_job_start_stats vs v3d_stats_update has an
++ * unsafe in-irq vs no-irq-off usage problem. This is a false positive
++ * because all the locks are per queue and stats type, and all jobs are
++ * serialised completely one at a time. More specifically:
++ *
++ * 1. Locks for GPU queues are updated from interrupt handlers under a
++ * spin lock and started here with preemption disabled.
++ *
++ * 2. Locks for CPU queues are updated from the worker with preemption
++ * disabled and equally started here with preemption disabled.
++ *
++ * Therefore both are consistent.
++ *
++ * 3. Because next job can only be queued after the previous one has
++ * been signaled, and locks are per queue, there is also no scope for
++ * the start part to race with the update part.
++ */
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_save(flags);
++ else
++ preempt_disable();
+
+ write_seqcount_begin(&local_stats->lock);
+ local_stats->start_ns = now;
+@@ -146,7 +169,10 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ global_stats->start_ns = now;
+ write_seqcount_end(&global_stats->lock);
+
+- preempt_enable();
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_restore(flags);
++ else
++ preempt_enable();
+ }
+
+ static void
+@@ -167,11 +193,21 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
+ struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+ struct v3d_stats *local_stats = &file->stats[queue];
+ u64 now = local_clock();
++ unsigned long flags;
++
++ /* See comment in v3d_job_start_stats() */
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_save(flags);
++ else
++ preempt_disable();
+
+- preempt_disable();
+ v3d_stats_update(local_stats, now);
+ v3d_stats_update(global_stats, now);
+- preempt_enable();
++
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_restore(flags);
++ else
++ preempt_enable();
+ }
+
+ static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
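The start/update pair above swaps preempt_disable() for local_irq_save() only when lockdep is compiled in, so production kernels keep the cheaper primitive while debug builds silence the false positive. A condensed sketch of the idiom, with invented helper names:

    static inline unsigned long v3d_stats_lock(void)
    {
            unsigned long flags = 0;

            if (IS_ENABLED(CONFIG_LOCKDEP))
                    local_irq_save(flags);  /* stricter than needed; appeases lockdep */
            else
                    preempt_disable();      /* all the seqcount writers really need */
            return flags;
    }

    static inline void v3d_stats_unlock(unsigned long flags)
    {
            if (IS_ENABLED(CONFIG_LOCKDEP))
                    local_irq_restore(flags);
            else
                    preempt_enable();
    }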
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_mock.c b/drivers/gpu/drm/vc4/tests/vc4_mock.c
+index 0731a7d85d7abc..922849dd4b4787 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_mock.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_mock.c
+@@ -155,11 +155,11 @@ KUNIT_DEFINE_ACTION_WRAPPER(kunit_action_drm_dev_unregister,
+ drm_dev_unregister,
+ struct drm_device *);
+
+-static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
++static struct vc4_dev *__mock_device(struct kunit *test, enum vc4_gen gen)
+ {
+ struct drm_device *drm;
+- const struct drm_driver *drv = is_vc5 ? &vc5_drm_driver : &vc4_drm_driver;
+- const struct vc4_mock_desc *desc = is_vc5 ? &vc5_mock : &vc4_mock;
++ const struct drm_driver *drv = (gen == VC4_GEN_5) ? &vc5_drm_driver : &vc4_drm_driver;
++ const struct vc4_mock_desc *desc = (gen == VC4_GEN_5) ? &vc5_mock : &vc4_mock;
+ struct vc4_dev *vc4;
+ struct device *dev;
+ int ret;
+@@ -173,7 +173,7 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4);
+
+ vc4->dev = dev;
+- vc4->is_vc5 = is_vc5;
++ vc4->gen = gen;
+
+ vc4->hvs = __vc4_hvs_alloc(vc4, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4->hvs);
+@@ -198,10 +198,10 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
+
+ struct vc4_dev *vc4_mock_device(struct kunit *test)
+ {
+- return __mock_device(test, false);
++ return __mock_device(test, VC4_GEN_4);
+ }
+
+ struct vc4_dev *vc5_mock_device(struct kunit *test)
+ {
+- return __mock_device(test, true);
++ return __mock_device(test, VC4_GEN_5);
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
+index 3f72be7490d5b7..2a85d08b19852a 100644
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -251,7 +251,7 @@ void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->purgeable.lock);
+@@ -265,7 +265,7 @@ static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* list_del_init() is used here because the caller might release
+@@ -396,7 +396,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+@@ -427,7 +427,7 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
+ struct drm_gem_dma_object *dma_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ if (size == 0)
+@@ -496,7 +496,7 @@ int vc4_bo_dumb_create(struct drm_file *file_priv,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ ret = vc4_dumb_fixup_args(args);
+@@ -622,7 +622,7 @@ int vc4_bo_inc_usecnt(struct vc4_bo *bo)
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /* Fast path: if the BO is already retained by someone, no need to
+@@ -661,7 +661,7 @@ void vc4_bo_dec_usecnt(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* Fast path: if the BO is still retained by someone, no need to test
+@@ -783,7 +783,7 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ ret = vc4_grab_bin_bo(vc4, vc4file);
+@@ -813,7 +813,7 @@ int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_vc4_mmap_bo *args = data;
+ struct drm_gem_object *gem_obj;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ gem_obj = drm_gem_object_lookup(file_priv, args->handle);
+@@ -839,7 +839,7 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->size == 0)
+@@ -918,7 +918,7 @@ int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo;
+ bool t_format;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->flags != 0)
+@@ -964,7 +964,7 @@ int vc4_get_tiling_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->flags != 0 || args->modifier != 0)
+@@ -1007,7 +1007,7 @@ int vc4_bo_cache_init(struct drm_device *dev)
+ int ret;
+ int i;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /* Create the initial set of BO labels that the kernel will
+@@ -1071,7 +1071,7 @@ int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ int ret = 0, label;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!args->len)
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 8b5a7e5eb1466c..26a7cf7f646515 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -263,7 +263,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
+ * Removing 1 from the FIFO full level however
+ * seems to completely remove that issue.
+ */
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
+
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
+@@ -428,7 +428,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ if (is_dsi)
+ CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ CRTC_WRITE(PV_MUX_CFG,
+ VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP,
+ PV_MUX_CFG_RGB_PIXEL_MUX_MODE));
+@@ -913,7 +913,7 @@ static int vc4_async_set_fence_cb(struct drm_device *dev,
+ struct dma_fence *fence;
+ int ret;
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
+
+ return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno,
+@@ -1000,7 +1000,7 @@ static int vc4_async_page_flip(struct drm_crtc *crtc,
+ struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /*
+@@ -1043,7 +1043,7 @@ int vc4_page_flip(struct drm_crtc *crtc,
+ struct drm_device *dev = crtc->dev;
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ return vc5_async_page_flip(crtc, fb, event, flags);
+ else
+ return vc4_async_page_flip(crtc, fb, event, flags);
+@@ -1338,9 +1338,8 @@ int __vc4_crtc_init(struct drm_device *drm,
+
+ drm_crtc_helper_add(crtc, crtc_helper_funcs);
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));
+-
+ drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
+
+ /* We support CTM, but only for one CRTC at a time. It's therefore
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
+index c133e96b8aca25..550324819f37fc 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -98,7 +98,7 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
+ if (args->pad != 0)
+ return -EINVAL;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d)
+@@ -147,7 +147,7 @@ static int vc4_open(struct drm_device *dev, struct drm_file *file)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_file *vc4file;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL);
+@@ -165,7 +165,7 @@ static void vc4_close(struct drm_device *dev, struct drm_file *file)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_file *vc4file = file->driver_priv;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (vc4file->bin_bo_used)
+@@ -291,13 +291,17 @@ static int vc4_drm_bind(struct device *dev)
+ struct vc4_dev *vc4;
+ struct device_node *node;
+ struct drm_crtc *crtc;
+- bool is_vc5;
++ enum vc4_gen gen;
+ int ret = 0;
+
+ dev->coherent_dma_mask = DMA_BIT_MASK(32);
+
+- is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
+- if (is_vc5)
++ if (of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5"))
++ gen = VC4_GEN_5;
++ else
++ gen = VC4_GEN_4;
++
++ if (gen == VC4_GEN_5)
+ driver = &vc5_drm_driver;
+ else
+ driver = &vc4_drm_driver;
+@@ -315,13 +319,13 @@ static int vc4_drm_bind(struct device *dev)
+ vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base);
+ if (IS_ERR(vc4))
+ return PTR_ERR(vc4);
+- vc4->is_vc5 = is_vc5;
++ vc4->gen = gen;
+ vc4->dev = dev;
+
+ drm = &vc4->base;
+ platform_set_drvdata(pdev, drm);
+
+- if (!is_vc5) {
++ if (gen == VC4_GEN_4) {
+ ret = drmm_mutex_init(drm, &vc4->bin_bo_lock);
+ if (ret)
+ goto err;
+@@ -335,7 +339,7 @@ static int vc4_drm_bind(struct device *dev)
+ if (ret)
+ goto err;
+
+- if (!is_vc5) {
++ if (gen == VC4_GEN_4) {
+ ret = vc4_gem_init(drm);
+ if (ret)
+ goto err;
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 08e29fa825635d..dd452e6a114304 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -80,11 +80,16 @@ struct vc4_perfmon {
+ u64 counters[] __counted_by(ncounters);
+ };
+
++enum vc4_gen {
++ VC4_GEN_4,
++ VC4_GEN_5,
++};
++
+ struct vc4_dev {
+ struct drm_device base;
+ struct device *dev;
+
+- bool is_vc5;
++ enum vc4_gen gen;
+
+ unsigned int irq;
+
+@@ -315,6 +320,7 @@ struct vc4_hvs {
+ struct platform_device *pdev;
+ void __iomem *regs;
+ u32 __iomem *dlist;
++ unsigned int dlist_mem_size;
+
+ struct clk *core_clk;
+
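Most of the vc4 hunks that follow are a mechanical is_vc5 -> gen conversion. The point of the enum is that a boolean can only ever describe two generations, whereas each enum check names the generation it actually handles and leaves room for later parts. A hypothetical illustration (VC4_GEN_6 does not exist in this patch):

    enum vc4_gen {
            VC4_GEN_4,
            VC4_GEN_5,
            /* a future VC4_GEN_6 would slot in without auditing every bool test */
    };

    /* before: if (!vc4->is_vc5)          -- "not 5" silently matches any future gen */
    /* after:  if (vc4->gen == VC4_GEN_4) -- explicit about what the code supports   */
    /* ordered checks also work:  if (vc4->gen >= VC4_GEN_5) ...                     */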
+diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
+index 24fb1b57e1dd99..be9c0b72ebe869 100644
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -76,7 +76,7 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
+ u32 i;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -389,7 +389,7 @@ vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns,
+ unsigned long timeout_expire;
+ DEFINE_WAIT(wait);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (vc4->finished_seqno >= seqno)
+@@ -474,7 +474,7 @@ vc4_submit_next_bin_job(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_exec_info *exec;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ again:
+@@ -522,7 +522,7 @@ vc4_submit_next_render_job(struct drm_device *dev)
+ if (!exec)
+ return;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* A previous RCL may have written to one of our textures, and
+@@ -543,7 +543,7 @@ vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ bool was_empty = list_empty(&vc4->render_job_list);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ list_move_tail(&exec->head, &vc4->render_job_list);
+@@ -970,7 +970,7 @@ vc4_job_handle_completed(struct vc4_dev *vc4)
+ unsigned long irqflags;
+ struct vc4_seqno_cb *cb, *cb_temp;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ spin_lock_irqsave(&vc4->job_lock, irqflags);
+@@ -1009,7 +1009,7 @@ int vc4_queue_seqno_cb(struct drm_device *dev,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ unsigned long irqflags;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ cb->func = func;
+@@ -1065,7 +1065,7 @@ vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct drm_vc4_wait_seqno *args = data;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
+@@ -1082,7 +1082,7 @@ vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->pad != 0)
+@@ -1131,7 +1131,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
+ args->shader_rec_size,
+ args->bo_handle_count);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -1267,7 +1267,7 @@ int vc4_gem_init(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ vc4->dma_fence_context = dma_fence_context_alloc(1);
+@@ -1326,7 +1326,7 @@ int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ switch (args->madv) {
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 6611ab7c26a63c..2d7d3e90f3be44 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -147,6 +147,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ if (!drm_dev_enter(drm, &idx))
+ return -ENODEV;
+
++ WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++
+ drm_print_regset32(&p, &vc4_hdmi->hdmi_regset);
+ drm_print_regset32(&p, &vc4_hdmi->hd_regset);
+ drm_print_regset32(&p, &vc4_hdmi->cec_regset);
+@@ -156,6 +158,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ drm_print_regset32(&p, &vc4_hdmi->ram_regset);
+ drm_print_regset32(&p, &vc4_hdmi->rm_regset);
+
++ pm_runtime_put(&vc4_hdmi->pdev->dev);
++
+ drm_dev_exit(idx);
+
+ return 0;
+@@ -2047,6 +2051,7 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+ struct drm_device *drm = vc4_hdmi->connector.dev;
+ struct drm_connector *connector = &vc4_hdmi->connector;
++ struct vc4_dev *vc4 = to_vc4_dev(drm);
+ unsigned int sample_rate = params->sample_rate;
+ unsigned int channels = params->channels;
+ unsigned long flags;
+@@ -2104,11 +2109,18 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+ VC4_HDMI_AUDIO_PACKET_CEA_MASK);
+
+ /* Set the MAI threshold */
+- HDMI_WRITE(HDMI_MAI_THR,
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) |
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) |
+- VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) |
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW));
++ if (vc4->gen >= VC4_GEN_5)
++ HDMI_WRITE(HDMI_MAI_THR,
++ VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQLOW));
++ else
++ HDMI_WRITE(HDMI_MAI_THR,
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x6, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_DREQLOW));
+
+ HDMI_WRITE(HDMI_MAI_CONFIG,
+ VC4_HDMI_MAI_CONFIG_BIT_REVERSE |
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 2a835a5cff9dd1..863539e1f7e04b 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -110,7 +110,8 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_hvs *hvs = vc4->hvs;
+ struct drm_printer p = drm_seq_file_printer(m);
+- unsigned int next_entry_start = 0;
++ unsigned int dlist_mem_size = hvs->dlist_mem_size;
++ unsigned int next_entry_start;
+ unsigned int i, j;
+ u32 dlist_word, dispstat;
+
+@@ -124,8 +125,9 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
+ }
+
+ drm_printf(&p, "HVS chan %u:\n", i);
++ next_entry_start = 0;
+
+- for (j = HVS_READ(SCALER_DISPLISTX(i)); j < 256; j++) {
++ for (j = HVS_READ(SCALER_DISPLISTX(i)); j < dlist_mem_size; j++) {
+ dlist_word = readl((u32 __iomem *)vc4->hvs->dlist + j);
+ drm_printf(&p, "dlist: %02d: 0x%08x\n", j,
+ dlist_word);
+@@ -222,6 +224,9 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
++ if (hvs->vc4->gen != VC4_GEN_4)
++ goto exit;
++
+ /* The LUT memory is laid out with each HVS channel in order,
+ * each of which takes 256 writes for R, 256 for G, then 256
+ * for B.
+@@ -237,6 +242,7 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
+ for (i = 0; i < crtc->gamma_size; i++)
+ HVS_WRITE(SCALER_GAMDATA, vc4_crtc->lut_b[i]);
+
++exit:
+ drm_dev_exit(idx);
+ }
+
+@@ -291,7 +297,7 @@ int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
+ u32 reg;
+ int ret;
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ return output;
+
+ /*
+@@ -372,7 +378,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ dispctrl = SCALER_DISPCTRLX_ENABLE;
+ dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan));
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ dispctrl |= VC4_SET_FIELD(mode->hdisplay,
+ SCALER_DISPCTRLX_WIDTH) |
+ VC4_SET_FIELD(mode->vdisplay,
+@@ -394,7 +400,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ dispbkgndx &= ~SCALER_DISPBKGND_INTERLACE;
+
+ HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
+- ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
++ ((vc4->gen == VC4_GEN_4) ? SCALER_DISPBKGND_GAMMA : 0) |
+ (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
+
+ /* Reload the LUT, since the SRAMs would have been disabled if
+@@ -415,13 +421,11 @@ void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int chan)
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
+- if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE)
++ if (!(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE))
+ goto out;
+
+- HVS_WRITE(SCALER_DISPCTRLX(chan),
+- HVS_READ(SCALER_DISPCTRLX(chan)) | SCALER_DISPCTRLX_RESET);
+- HVS_WRITE(SCALER_DISPCTRLX(chan),
+- HVS_READ(SCALER_DISPCTRLX(chan)) & ~SCALER_DISPCTRLX_ENABLE);
++ HVS_WRITE(SCALER_DISPCTRLX(chan), SCALER_DISPCTRLX_RESET);
++ HVS_WRITE(SCALER_DISPCTRLX(chan), 0);
+
+ /* Once we leave, the scaler should be disabled and its fifo empty. */
+ WARN_ON_ONCE(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_RESET);
+@@ -580,7 +584,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
+ }
+
+ if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)
+- return;
++ goto exit;
+
+ if (debug_dump_regs) {
+ DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc));
+@@ -663,12 +667,14 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
+ vc4_hvs_dump_state(hvs);
+ }
+
++exit:
+ drm_dev_exit(idx);
+ }
+
+ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ {
+- struct drm_device *drm = &hvs->vc4->base;
++ struct vc4_dev *vc4 = hvs->vc4;
++ struct drm_device *drm = &vc4->base;
+ u32 dispctrl;
+ int idx;
+
+@@ -676,8 +682,9 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl &= ~(hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel));
++ dispctrl &= ~((vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+
+@@ -686,7 +693,8 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+
+ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ {
+- struct drm_device *drm = &hvs->vc4->base;
++ struct vc4_dev *vc4 = hvs->vc4;
++ struct drm_device *drm = &vc4->base;
+ u32 dispctrl;
+ int idx;
+
+@@ -694,8 +702,9 @@ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl |= (hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel));
++ dispctrl |= ((vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPSTAT,
+ SCALER_DISPSTAT_EUFLOW(channel));
+@@ -738,8 +747,10 @@ static irqreturn_t vc4_hvs_irq_handler(int irq, void *data)
+ control = HVS_READ(SCALER_DISPCTRL);
+
+ for (channel = 0; channel < SCALER_CHANNELS_COUNT; channel++) {
+- dspeislur = vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel);
++ dspeislur = (vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel);
++
+ /* Interrupt masking is not always honored, so check it here. */
+ if (status & SCALER_DISPSTAT_EUFLOW(channel) &&
+ control & dspeislur) {
+@@ -767,7 +778,7 @@ int vc4_hvs_debugfs_init(struct drm_minor *minor)
+ if (!vc4->hvs)
+ return -ENODEV;
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ debugfs_create_bool("hvs_load_tracker", S_IRUGO | S_IWUSR,
+ minor->debugfs_root,
+ &vc4->load_tracker_enabled);
+@@ -800,16 +811,17 @@ struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pde
+ * our 16K), since we don't want to scramble the screen when
+ * transitioning from the firmware's boot setup to runtime.
+ */
++ hvs->dlist_mem_size = (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END;
+ drm_mm_init(&hvs->dlist_mm,
+ HVS_BOOTLOADER_DLIST_END,
+- (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END);
++ hvs->dlist_mem_size);
+
+ /* Set up the HVS LBM memory manager. We could have some more
+ * complicated data structure that allowed reuse of LBM areas
+ * between planes when they don't overlap on the screen, but
+ * for now we just allocate globally.
+ */
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ /* 48k words of 2x12-bit pixels */
+ drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
+ else
+@@ -843,7 +855,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ hvs->regset.regs = hvs_regs;
+ hvs->regset.nregs = ARRAY_SIZE(hvs_regs);
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ struct rpi_firmware *firmware;
+ struct device_node *node;
+ unsigned int max_rate;
+@@ -881,7 +893,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ }
+ }
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ hvs->dlist = hvs->regs + SCALER_DLIST_START;
+ else
+ hvs->dlist = hvs->regs + SCALER5_DLIST_START;
+@@ -922,7 +934,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_DISPEIRQ(1) |
+ SCALER_DISPCTRL_DISPEIRQ(2);
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+ SCALER_DISPCTRL_SLVWREIRQ |
+ SCALER_DISPCTRL_SLVRDEIRQ |
+@@ -966,7 +978,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+
+ /* Recompute Composite Output Buffer (COB) allocations for the displays
+ */
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* The COB is 20736 pixels, or just over 10 lines at 2048 wide.
+ * The bottom 2048 pixels are full 32bpp RGBA (intended for the
+ * TXP composing RGBA to memory), whilst the remainder are only
+diff --git a/drivers/gpu/drm/vc4/vc4_irq.c b/drivers/gpu/drm/vc4/vc4_irq.c
+index ef93d8e22a35a4..968356d1b91dfb 100644
+--- a/drivers/gpu/drm/vc4/vc4_irq.c
++++ b/drivers/gpu/drm/vc4/vc4_irq.c
+@@ -263,7 +263,7 @@ vc4_irq_enable(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (!vc4->v3d)
+@@ -280,7 +280,7 @@ vc4_irq_disable(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (!vc4->v3d)
+@@ -303,7 +303,7 @@ int vc4_irq_install(struct drm_device *dev, int irq)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (irq == IRQ_NOTCONNECTED)
+@@ -324,7 +324,7 @@ void vc4_irq_uninstall(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ vc4_irq_disable(dev);
+@@ -337,7 +337,7 @@ void vc4_irq_reset(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ unsigned long irqflags;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* Acknowledge any stale IRQs. */
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index 5495f2a94fa926..bddfcad1095013 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -369,7 +369,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+ old_hvs_state->fifo_state[channel].pending_commit = NULL;
+ }
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ unsigned long state_rate = max(old_hvs_state->core_clock_rate,
+ new_hvs_state->core_clock_rate);
+ unsigned long core_rate = clamp_t(unsigned long, state_rate,
+@@ -388,7 +388,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+
+ vc4_ctm_commit(vc4, state);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ vc5_hvs_pv_muxing_commit(vc4, state);
+ else
+ vc4_hvs_pv_muxing_commit(vc4, state);
+@@ -406,7 +406,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+
+ drm_atomic_helper_cleanup_planes(dev, state);
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ unsigned long core_rate = min_t(unsigned long,
+ hvs->max_core_rate,
+ new_hvs_state->core_clock_rate);
+@@ -461,7 +461,7 @@ static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct drm_mode_fb_cmd2 mode_cmd_local;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ /* If the user didn't specify a modifier, use the
+@@ -1040,7 +1040,7 @@ int vc4_kms_load(struct drm_device *dev)
+ * the BCM2711, but the load tracker computations are used for
+ * the core clock rate calculation.
+ */
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* Start with the load tracker enabled. Can be
+ * disabled through the debugfs load_tracker file.
+ */
+@@ -1056,7 +1056,7 @@ int vc4_kms_load(struct drm_device *dev)
+ return ret;
+ }
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ dev->mode_config.max_width = 7680;
+ dev->mode_config.max_height = 7680;
+ } else {
+@@ -1064,7 +1064,7 @@ int vc4_kms_load(struct drm_device *dev)
+ dev->mode_config.max_height = 2048;
+ }
+
+- dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs;
++ dev->mode_config.funcs = (vc4->gen > VC4_GEN_4) ? &vc5_mode_funcs : &vc4_mode_funcs;
+ dev->mode_config.helper_private = &vc4_mode_config_helpers;
+ dev->mode_config.preferred_depth = 24;
+ dev->mode_config.async_page_flip = true;
+diff --git a/drivers/gpu/drm/vc4/vc4_perfmon.c b/drivers/gpu/drm/vc4/vc4_perfmon.c
+index c00a5cc2316d20..e4fda72c19f92f 100644
+--- a/drivers/gpu/drm/vc4/vc4_perfmon.c
++++ b/drivers/gpu/drm/vc4/vc4_perfmon.c
+@@ -23,7 +23,7 @@ void vc4_perfmon_get(struct vc4_perfmon *perfmon)
+ return;
+
+ vc4 = perfmon->dev;
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ refcount_inc(&perfmon->refcnt);
+@@ -37,7 +37,7 @@ void vc4_perfmon_put(struct vc4_perfmon *perfmon)
+ return;
+
+ vc4 = perfmon->dev;
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (refcount_dec_and_test(&perfmon->refcnt))
+@@ -49,7 +49,7 @@ void vc4_perfmon_start(struct vc4_dev *vc4, struct vc4_perfmon *perfmon)
+ unsigned int i;
+ u32 mask;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon))
+@@ -69,7 +69,7 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
+ {
+ unsigned int i;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (WARN_ON_ONCE(!vc4->active_perfmon ||
+@@ -90,7 +90,7 @@ struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
+ struct vc4_dev *vc4 = vc4file->dev;
+ struct vc4_perfmon *perfmon;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ mutex_lock(&vc4file->perfmon.lock);
+@@ -105,7 +105,7 @@ void vc4_perfmon_open_file(struct vc4_file *vc4file)
+ {
+ struct vc4_dev *vc4 = vc4file->dev;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_init(&vc4file->perfmon.lock);
+@@ -131,7 +131,7 @@ void vc4_perfmon_close_file(struct vc4_file *vc4file)
+ {
+ struct vc4_dev *vc4 = vc4file->dev;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4file->perfmon.lock);
+@@ -151,7 +151,7 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
+ unsigned int i;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -205,7 +205,7 @@ int vc4_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ struct drm_vc4_perfmon_destroy *req = data;
+ struct vc4_perfmon *perfmon;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -233,7 +233,7 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
+ struct vc4_perfmon *perfmon;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 07caf2a47c6cef..866bc46ee6d53a 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -587,10 +587,10 @@ static u32 vc4_lbm_size(struct drm_plane_state *state)
+ }
+
+ /* Align it to 64 or 128 (hvs5) bytes */
+- lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64);
++ lbm = roundup(lbm, vc4->gen == VC4_GEN_5 ? 128 : 64);
+
+ /* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
+- lbm /= vc4->is_vc5 ? 4 : 2;
++ lbm /= vc4->gen == VC4_GEN_5 ? 4 : 2;
+
+ return lbm;
+ }
+@@ -706,7 +706,7 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
+ ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
+ &vc4_state->lbm,
+ lbm_size,
+- vc4->is_vc5 ? 64 : 32,
++ vc4->gen == VC4_GEN_5 ? 64 : 32,
+ 0, 0);
+ spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
+
+@@ -1057,7 +1057,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
+ fb->format->has_alpha;
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* Control word */
+ vc4_dlist_write(vc4_state,
+ SCALER_CTL0_VALID |
+@@ -1632,7 +1632,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
+- if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
++ if (!hvs_formats[i].hvs5_only || vc4->gen == VC4_GEN_5) {
+ formats[num_formats] = hvs_formats[i].drm;
+ num_formats++;
+ }
+@@ -1647,7 +1647,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
+ return ERR_CAST(vc4_plane);
+ plane = &vc4_plane->base;
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
+ else
+ drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
+diff --git a/drivers/gpu/drm/vc4/vc4_render_cl.c b/drivers/gpu/drm/vc4/vc4_render_cl.c
+index 1bda5010f15a86..ae4ad956f04ff8 100644
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -599,7 +599,7 @@ int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
+ bool has_bin = args->bin_cl_size != 0;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->min_x_tile > args->max_x_tile ||
+diff --git a/drivers/gpu/drm/vc4/vc4_v3d.c b/drivers/gpu/drm/vc4/vc4_v3d.c
+index bf5c4e36c94e4d..43f69d74e8761d 100644
+--- a/drivers/gpu/drm/vc4/vc4_v3d.c
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -127,7 +127,7 @@ static int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ int
+ vc4_v3d_pm_get(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ mutex_lock(&vc4->power_lock);
+@@ -148,7 +148,7 @@ vc4_v3d_pm_get(struct vc4_dev *vc4)
+ void
+ vc4_v3d_pm_put(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->power_lock);
+@@ -178,7 +178,7 @@ int vc4_v3d_get_bin_slot(struct vc4_dev *vc4)
+ uint64_t seqno = 0;
+ struct vc4_exec_info *exec;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ try_again:
+@@ -325,7 +325,7 @@ int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
+ {
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ mutex_lock(&vc4->bin_bo_lock);
+@@ -360,7 +360,7 @@ static void bin_bo_release(struct kref *ref)
+
+ void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->bin_bo_lock);
+diff --git a/drivers/gpu/drm/vc4/vc4_validate.c b/drivers/gpu/drm/vc4/vc4_validate.c
+index 0c17284bf6f5bb..f3d7fdbe9083c5 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -109,7 +109,7 @@ vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex)
+ struct drm_gem_dma_object *obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ if (hindex >= exec->bo_count) {
+@@ -169,7 +169,7 @@ vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_dma_object *fbo,
+ uint32_t utile_w = utile_width(cpp);
+ uint32_t utile_h = utile_height(cpp);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return false;
+
+ /* The shaded vertex format stores signed 12.4 fixed point
+@@ -495,7 +495,7 @@ vc4_validate_bin_cl(struct drm_device *dev,
+ uint32_t dst_offset = 0;
+ uint32_t src_offset = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ while (src_offset < len) {
+@@ -942,7 +942,7 @@ vc4_validate_shader_recs(struct drm_device *dev,
+ uint32_t i;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ for (i = 0; i < exec->shader_state_count; i++) {
+diff --git a/drivers/gpu/drm/vc4/vc4_validate_shaders.c b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+index 9745f8810eca6d..afb1a4d8268465 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate_shaders.c
++++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+@@ -786,7 +786,7 @@ vc4_validate_shader(struct drm_gem_dma_object *shader_obj)
+ struct vc4_validated_shader_info *validated_shader = NULL;
+ struct vc4_shader_validation_state validation_state;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ memset(&validation_state, 0, sizeof(validation_state));
+diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c
+index 5ce70dd946aa63..24589b947dea3d 100644
+--- a/drivers/gpu/drm/vkms/vkms_output.c
++++ b/drivers/gpu/drm/vkms/vkms_output.c
+@@ -84,7 +84,7 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ DRM_MODE_CONNECTOR_VIRTUAL);
+ if (ret) {
+ DRM_ERROR("Failed to init connector\n");
+- goto err_connector;
++ return ret;
+ }
+
+ drm_connector_helper_add(connector, &vkms_conn_helper_funcs);
+@@ -119,8 +119,5 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ err_encoder:
+ drm_connector_cleanup(connector);
+
+-err_connector:
+- drm_crtc_cleanup(crtc);
+-
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+index 6619a40aed1533..f4332f06b6c809 100644
+--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
++++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+@@ -42,7 +42,7 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ struct xe_gsc *gsc = >->uc.gsc;
+ bool ret = true;
+
+- if (!gsc && !xe_uc_fw_is_enabled(&gsc->fw)) {
++ if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
+ drm_dbg_kms(&xe->drm,
+ "GSC Components not ready for HDCP2.x\n");
+ return false;
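This one-character change fixes a NULL dereference hidden in short-circuit evaluation: with &&, a NULL gsc makes the first operand true, so the second operand still runs against the NULL pointer; with ||, the first true operand decides the whole test and the second is skipped. Reduced to its essentials:

    if (!p && !check(p)) { }  /* p == NULL: !p is true, so check(p) still runs -> crash */
    if (!p || !check(p)) { }  /* p == NULL: !p is true, || short-circuits check(p)      */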
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index 2e72c06fd40d07..b0684e6d2047b1 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -85,8 +85,12 @@ static void user_fence_worker(struct work_struct *w)
+ mmput(ufence->mm);
+ }
+
+- wake_up_all(&ufence->xe->ufence_wq);
++ /*
++ * Wake up waiters only after updating the ufence state, allowing the UMD
++ * to safely reuse the same ufence without encountering -EBUSY errors.
++ */
+ WRITE_ONCE(ufence->signalled, 1);
++ wake_up_all(&ufence->xe->ufence_wq);
+ user_fence_put(ufence);
+ }
+
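The ordering matters because a waiter woken from ufence_wq immediately re-checks the signalled flag, and waking before publishing the flag lets it observe stale state. The two sides in outline (the wait side is simplified, not the actual xe wait loop):

    /* signalling side */
    WRITE_ONCE(ufence->signalled, 1);     /* publish the state first... */
    wake_up_all(&ufence->xe->ufence_wq);  /* ...then wake the waiters   */

    /* waiting side, simplified */
    wait_event(ufence->xe->ufence_wq, READ_ONCE(ufence->signalled));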
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 9368acf56eaf79..e4e0e299e8a7d5 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1200,6 +1200,9 @@ static void zynqmp_disp_layer_release_dma(struct zynqmp_disp *disp,
+ {
+ unsigned int i;
+
++ if (!layer->info)
++ return;
++
+ for (i = 0; i < layer->info->num_channels; i++) {
+ struct zynqmp_disp_layer_dma *dma = &layer->dmas[i];
+
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+index bd1368df787034..4556af2faa0f19 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+@@ -536,7 +536,7 @@ void zynqmp_dpsub_drm_cleanup(struct zynqmp_dpsub *dpsub)
+ {
+ struct drm_device *drm = &dpsub->drm->dev;
+
+- drm_dev_unregister(drm);
++ drm_dev_unplug(drm);
+ drm_atomic_helper_shutdown(drm);
+ drm_encoder_cleanup(&dpsub->drm->encoder);
+ drm_kms_helper_poll_fini(drm);
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index f33485d83d24ff..0fb210e40a4127 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -422,6 +422,25 @@ static int mousevsc_hid_raw_request(struct hid_device *hid,
+ return 0;
+ }
+
++static int mousevsc_hid_probe(struct hid_device *hid_dev, const struct hid_device_id *id)
++{
++ int ret;
++
++ ret = hid_parse(hid_dev);
++ if (ret) {
++ hid_err(hid_dev, "parse failed\n");
++ return ret;
++ }
++
++ ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
++ if (ret) {
++ hid_err(hid_dev, "hw start failed\n");
++ return ret;
++ }
++
++ return 0;
++}
++
+ static const struct hid_ll_driver mousevsc_ll_driver = {
+ .parse = mousevsc_hid_parse,
+ .open = mousevsc_hid_open,
+@@ -431,7 +450,16 @@ static const struct hid_ll_driver mousevsc_ll_driver = {
+ .raw_request = mousevsc_hid_raw_request,
+ };
+
+-static struct hid_driver mousevsc_hid_driver;
++static const struct hid_device_id mousevsc_devices[] = {
++ { HID_DEVICE(BUS_VIRTUAL, HID_GROUP_ANY, 0x045E, 0x0621) },
++ { }
++};
++
++static struct hid_driver mousevsc_hid_driver = {
++ .name = "hid-hyperv",
++ .id_table = mousevsc_devices,
++ .probe = mousevsc_hid_probe,
++};
+
+ static int mousevsc_probe(struct hv_device *device,
+ const struct hv_vmbus_device_id *dev_id)
+@@ -473,7 +501,6 @@ static int mousevsc_probe(struct hv_device *device,
+ }
+
+ hid_dev->ll_driver = &mousevsc_ll_driver;
+- hid_dev->driver = &mousevsc_hid_driver;
+ hid_dev->bus = BUS_VIRTUAL;
+ hid_dev->vendor = input_dev->hid_dev_info.vendor;
+ hid_dev->product = input_dev->hid_dev_info.product;
+@@ -488,20 +515,6 @@ static int mousevsc_probe(struct hv_device *device,
+ if (ret)
+ goto probe_err2;
+
+-
+- ret = hid_parse(hid_dev);
+- if (ret) {
+- hid_err(hid_dev, "parse failed\n");
+- goto probe_err2;
+- }
+-
+- ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
+-
+- if (ret) {
+- hid_err(hid_dev, "hw start failed\n");
+- goto probe_err2;
+- }
+-
+ device_init_wakeup(&device->device, true);
+
+ input_dev->connected = true;
+@@ -579,12 +592,23 @@ static struct hv_driver mousevsc_drv = {
+
+ static int __init mousevsc_init(void)
+ {
+- return vmbus_driver_register(&mousevsc_drv);
++ int ret;
++
++ ret = hid_register_driver(&mousevsc_hid_driver);
++ if (ret)
++ return ret;
++
++ ret = vmbus_driver_register(&mousevsc_drv);
++ if (ret)
++ hid_unregister_driver(&mousevsc_hid_driver);
++
++ return ret;
+ }
+
+ static void __exit mousevsc_exit(void)
+ {
+ vmbus_driver_unregister(&mousevsc_drv);
++ hid_unregister_driver(&mousevsc_hid_driver);
+ }
+
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 413606bdf476df..5a599c90e7a2c7 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1353,9 +1353,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ rotation -= 1800;
+
+ input_report_abs(pen_input, ABS_TILT_X,
+- (char)frame[7]);
++ (signed char)frame[7]);
+ input_report_abs(pen_input, ABS_TILT_Y,
+- (char)frame[8]);
++ (signed char)frame[8]);
+ input_report_abs(pen_input, ABS_Z, rotation);
+ input_report_abs(pen_input, ABS_WHEEL,
+ get_unaligned_le16(&frame[11]));
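The cast change leans on a subtlety worth spelling out: plain char has implementation-defined signedness, and on arm64 it is unsigned, so casting a negative tilt byte through char never sign-extends. A self-contained demonstration:

    #include <stdio.h>

    int main(void)
    {
            unsigned char raw = 0xF6; /* -10 in two's complement */

            printf("%d\n", (char)raw);        /* 246 where char is unsigned, -10 where signed */
            printf("%d\n", (signed char)raw); /* -10 on every platform */
            return 0;
    }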
+diff --git a/drivers/hwmon/aquacomputer_d5next.c b/drivers/hwmon/aquacomputer_d5next.c
+index 34cac27e4ddec3..0dcb8a3a691d69 100644
+--- a/drivers/hwmon/aquacomputer_d5next.c
++++ b/drivers/hwmon/aquacomputer_d5next.c
+@@ -597,7 +597,7 @@ struct aqc_data {
+
+ /* Sensor values */
+ s32 temp_input[20]; /* Max 4 physical and 16 virtual or 8 physical and 12 virtual */
+- s32 speed_input[8];
++ s32 speed_input[9];
+ u32 speed_input_min[1];
+ u32 speed_input_target[1];
+ u32 speed_input_max[1];
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index 934fed3dd58661..ee04795b98aabe 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -2878,8 +2878,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
+ if (err < 0)
+ return err;
+
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0,
+- data->target_temp_mask);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->target_temp_mask * 1000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->target_temp[nr] = val;
+@@ -2959,7 +2958,7 @@ store_temp_tolerance(struct device *dev, struct device_attribute *attr,
+ return err;
+
+ /* Limit tolerance as needed */
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, data->tolerance_mask);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->tolerance_mask * 1000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->temp_tolerance[index][nr] = val;
+@@ -3085,7 +3084,7 @@ store_weight_temp(struct device *dev, struct device_attribute *attr,
+ if (err < 0)
+ return err;
+
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->weight_temp[index][nr] = val;
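All three hunks in this file reorder the same two operations for the same reason: DIV_ROUND_CLOSEST() adds half the divisor before dividing, so an unclamped, user-controlled operand can overflow in that addition. Clamping first bounds the operand so the rounding term is always safe. Sketched with a simplified, positive-only form of the macros:

    #include <limits.h>

    #define DIV_ROUND_CLOSEST(n, d)  (((n) + (d) / 2) / (d)) /* the kernel macro also handles negatives */
    #define clamp_val(v, lo, hi)     ((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

    int main(void)
    {
            long val = LONG_MAX; /* hostile sysfs input */

            /* old order: DIV_ROUND_CLOSEST(val, 1000) adds 500 to LONG_MAX -> signed overflow */
            /* new order: clamp to [0, 255000] first, so the +500 can never overflow */
            return (int)DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000); /* 255 */
    }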
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index ce7fd4ca9d89b0..a68b0a98e8d4db 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -3279,7 +3279,17 @@ static int pmbus_regulator_notify(struct pmbus_data *data, int page, int event)
+
+ static int pmbus_write_smbalert_mask(struct i2c_client *client, u8 page, u8 reg, u8 val)
+ {
+- return _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
++ int ret;
++
++ ret = _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
++
++ /*
++ * Clear the fault unconditionally, in case writing
++ * PMBUS_SMBALERT_MASK is not supported by the chip.
++ */
++ pmbus_clear_fault_page(client, page);
++
++ return ret;
+ }
+
+ static irqreturn_t pmbus_fault_handler(int irq, void *pdata)
+diff --git a/drivers/hwmon/tps23861.c b/drivers/hwmon/tps23861.c
+index dfcfb09d9f3cdf..80fb03f30c302d 100644
+--- a/drivers/hwmon/tps23861.c
++++ b/drivers/hwmon/tps23861.c
+@@ -132,7 +132,7 @@ static int tps23861_read_temp(struct tps23861_data *data, long *val)
+ if (err < 0)
+ return err;
+
+- *val = (regval * TEMPERATURE_LSB) - 20000;
++ *val = ((long)regval * TEMPERATURE_LSB) - 20000;
+
+ return 0;
+ }
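The (long) cast fixes a classic unsigned-wrap bug: regval arrives from regmap as an unsigned int, so when the product is smaller than the 20000 offset the subtraction wraps to a huge positive value instead of going negative. A worked example with an invented LSB (the driver's real TEMPERATURE_LSB differs):

    #include <stdio.h>

    #define TEMPERATURE_LSB 1000 /* invented for the example */

    int main(void)
    {
            unsigned int regval = 10;

            printf("%ld\n", (long)((regval * TEMPERATURE_LSB) - 20000)); /* 4294957296: wrapped */
            printf("%ld\n", ((long)regval * TEMPERATURE_LSB) - 20000);   /* -10000: as intended */
            return 0;
    }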
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 61f7c4003d2ff7..e9577f920286d0 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -251,10 +251,8 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ return -EOPNOTSUPP;
+
+ data_ptrs = kmalloc_array(nmsgs, sizeof(u8 __user *), GFP_KERNEL);
+- if (data_ptrs == NULL) {
+- kfree(msgs);
++ if (!data_ptrs)
+ return -ENOMEM;
+- }
+
+ res = 0;
+ for (i = 0; i < nmsgs; i++) {
+@@ -302,7 +300,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ for (j = 0; j < i; ++j)
+ kfree(msgs[j].buf);
+ kfree(data_ptrs);
+- kfree(msgs);
+ return res;
+ }
+
+@@ -316,7 +313,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ kfree(msgs[i].buf);
+ }
+ kfree(data_ptrs);
+- kfree(msgs);
+ return res;
+ }
+
+@@ -446,6 +442,7 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case I2C_RDWR: {
+ struct i2c_rdwr_ioctl_data rdwr_arg;
+ struct i2c_msg *rdwr_pa;
++ int res;
+
+ if (copy_from_user(&rdwr_arg,
+ (struct i2c_rdwr_ioctl_data __user *)arg,
+@@ -467,7 +464,9 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ if (IS_ERR(rdwr_pa))
+ return PTR_ERR(rdwr_pa);
+
+- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ kfree(rdwr_pa);
++ return res;
+ }
+
+ case I2C_SMBUS: {
+@@ -540,7 +539,7 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ struct i2c_rdwr_ioctl_data32 rdwr_arg;
+ struct i2c_msg32 __user *p;
+ struct i2c_msg *rdwr_pa;
+- int i;
++ int i, res;
+
+ if (copy_from_user(&rdwr_arg,
+ (struct i2c_rdwr_ioctl_data32 __user *)arg,
+@@ -573,7 +572,9 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ };
+ }
+
+- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ kfree(rdwr_pa);
++ return res;
+ }
+ case I2C_SMBUS: {
+ struct i2c_smbus_ioctl_data32 data32;
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 6f3eb710a75d60..ffe99f0c6acef5 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2051,11 +2051,16 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
+ ibireq.max_payload_len = olddev->ibi->max_payload_len;
+ ibireq.num_slots = olddev->ibi->num_slots;
+
+- if (olddev->ibi->enabled) {
++ if (olddev->ibi->enabled)
+ enable_ibi = true;
+- i3c_dev_disable_ibi_locked(olddev);
+- }
+-
++ /*
++ * The olddev should not receive any commands on the
++ * i3c bus: its old address no longer exists, since the
++ * device has been reassigned a new one, so any command
++ * sent there would end in a NACK or a timeout. Set
++ * olddev->ibi->enabled to false to avoid issuing a
++ * DISEC to OldAddr.
++ */
++ olddev->ibi->enabled = false;
+ i3c_dev_free_ibi_locked(olddev);
+ }
+ mutex_unlock(&olddev->ibi_lock);
+diff --git a/drivers/iio/accel/adxl380.c b/drivers/iio/accel/adxl380.c
+index f80527d899be4d..b19ee37df7f12e 100644
+--- a/drivers/iio/accel/adxl380.c
++++ b/drivers/iio/accel/adxl380.c
+@@ -1181,7 +1181,7 @@ static int adxl380_read_raw(struct iio_dev *indio_dev,
+
+ ret = adxl380_read_chn(st, chan->address);
+ iio_device_release_direct_mode(indio_dev);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ *val = sign_extend32(ret >> chan->scan_type.shift,
+diff --git a/drivers/iio/adc/ad4000.c b/drivers/iio/adc/ad4000.c
+index 6ea49124508499..b3b82535f5c14d 100644
+--- a/drivers/iio/adc/ad4000.c
++++ b/drivers/iio/adc/ad4000.c
+@@ -344,6 +344,8 @@ static int ad4000_single_conversion(struct iio_dev *indio_dev,
+
+ if (chan->scan_type.sign == 's')
+ *val = sign_extend32(sample, chan->scan_type.realbits - 1);
++ else
++ *val = sample;
+
+ return IIO_VAL_INT;
+ }
+@@ -637,7 +639,9 @@ static int ad4000_probe(struct spi_device *spi)
+ indio_dev->name = chip->dev_name;
+ indio_dev->num_channels = 1;
+
+- devm_mutex_init(dev, &st->lock);
++ ret = devm_mutex_init(dev, &st->lock);
++ if (ret)
++ return ret;
+
+ st->gain_milli = 1000;
+ if (chip->has_hardware_gain) {
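Two fixes ride in the ad4000 hunks above: the unsigned branch previously left *val untouched, and probe ignored devm_mutex_init() failures. For the signed branch, sign_extend32() (from <linux/bitops.h>) replicates the top data bit across the word; for a hypothetical 16-bit channel:

    u32 sample = 0xFFFF;               /* 16-bit two's-complement -1   */
    int s = sign_extend32(sample, 15); /* bit 15 is the sign bit -> -1 */
    int u = (int)sample;               /* unsigned channel: 65535      */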
+diff --git a/drivers/iio/adc/pac1921.c b/drivers/iio/adc/pac1921.c
+index 36e813d9c73f1c..fe1d9e07fce24d 100644
+--- a/drivers/iio/adc/pac1921.c
++++ b/drivers/iio/adc/pac1921.c
+@@ -1171,7 +1171,9 @@ static int pac1921_probe(struct i2c_client *client)
+ return dev_err_probe(dev, (int)PTR_ERR(priv->regmap),
+ "Cannot initialize register map\n");
+
+- devm_mutex_init(dev, &priv->lock);
++ ret = devm_mutex_init(dev, &priv->lock);
++ if (ret)
++ return ret;
+
+ priv->dv_gain = PAC1921_DEFAULT_DV_GAIN;
+ priv->di_gain = PAC1921_DEFAULT_DI_GAIN;
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index 0cb00f3bec0453..b8b4171b80436b 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -46,7 +46,7 @@
+ #define AXI_DAC_REG_CNTRL_1 0x0044
+ #define AXI_DAC_SYNC BIT(0)
+ #define AXI_DAC_REG_CNTRL_2 0x0048
+-#define ADI_DAC_R1_MODE BIT(4)
++#define ADI_DAC_R1_MODE BIT(5)
+ #define AXI_DAC_DRP_STATUS 0x0074
+ #define AXI_DAC_DRP_LOCKED BIT(17)
+ /* DAC Channel controls */
+diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c
+index 20b3b5212da76a..fb34a8e4d04e74 100644
+--- a/drivers/iio/industrialio-backend.c
++++ b/drivers/iio/industrialio-backend.c
+@@ -737,8 +737,8 @@ static struct iio_backend *__devm_iio_backend_fwnode_get(struct device *dev, con
+ }
+
+ fwnode_back = fwnode_find_reference(fwnode, "io-backends", index);
+- if (IS_ERR(fwnode))
+- return dev_err_cast_probe(dev, fwnode,
++ if (IS_ERR(fwnode_back))
++ return dev_err_cast_probe(dev, fwnode_back,
+ "Cannot get Firmware reference\n");
+
+ guard(mutex)(&iio_back_lock);
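This is a classic copy-paste bug: the error check tested fwnode (the argument, always valid here) instead of fwnode_back (the lookup result), so a failed lookup slipped through unnoticed. A userspace sketch of the ERR_PTR/IS_ERR convention involved; the macros are reproduced from include/linux/err.h from memory, so verify against your tree:

#include <stdio.h>

#define MAX_ERRNO 4095
#define ERR_PTR(err) ((void *)(long)(err))
#define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

/* Stand-in for fwnode_find_reference(): fails with -ENOENT here. */
static void *find_reference(int ok)
{
	static int obj;

	return ok ? (void *)&obj : ERR_PTR(-2);
}

int main(void)
{
	int parent_storage = 0;
	void *parent = &parent_storage;     /* valid pointer, never an error */
	void *ref = find_reference(0);

	if (IS_ERR(parent))                 /* the old, wrong check: never fires */
		return 1;
	if (IS_ERR(ref))                    /* the corrected check */
		printf("lookup failed as expected\n");
	return 0;
}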
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 5f131bc1a01e97..4ad949672210ba 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -167,7 +167,7 @@ static int iio_gts_gain_cmp(const void *a, const void *b)
+
+ static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
+ {
+- int ret, i, j, new_idx, time_idx;
++ int i, j, new_idx, time_idx, ret = 0;
+ int *all_gains;
+ size_t gain_bytes;
+
+diff --git a/drivers/iio/light/al3010.c b/drivers/iio/light/al3010.c
+index 53569587ccb7ba..7cbb8b20330090 100644
+--- a/drivers/iio/light/al3010.c
++++ b/drivers/iio/light/al3010.c
+@@ -87,7 +87,12 @@ static int al3010_init(struct al3010_data *data)
+ int ret;
+
+ ret = al3010_set_pwr(data->client, true);
++ if (ret < 0)
++ return ret;
+
++ ret = devm_add_action_or_reset(&data->client->dev,
++ al3010_set_pwr_off,
++ data);
+ if (ret < 0)
+ return ret;
+
+@@ -190,12 +195,6 @@ static int al3010_probe(struct i2c_client *client)
+ return ret;
+ }
+
+- ret = devm_add_action_or_reset(&client->dev,
+- al3010_set_pwr_off,
+- data);
+- if (ret < 0)
+- return ret;
+-
+ return devm_iio_device_register(&client->dev, indio_dev);
+ }
+
+diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c
+index d5131b3ba8ab04..a9f2c6b1b29ed2 100644
+--- a/drivers/infiniband/core/roce_gid_mgmt.c
++++ b/drivers/infiniband/core/roce_gid_mgmt.c
+@@ -515,6 +515,27 @@ void rdma_roce_rescan_device(struct ib_device *ib_dev)
+ }
+ EXPORT_SYMBOL(rdma_roce_rescan_device);
+
++/**
++ * rdma_roce_rescan_port - Rescan all network devices in the system and
++ * add their GIDs, where relevant, to the given port of the RoCE device.
++ *
++ * @ib_dev: IB device
++ * @port: Port number
++ */
++void rdma_roce_rescan_port(struct ib_device *ib_dev, u32 port)
++{
++ struct net_device *ndev = NULL;
++
++ if (rdma_protocol_roce(ib_dev, port)) {
++ ndev = ib_device_get_netdev(ib_dev, port);
++ if (!ndev)
++ return;
++ enum_all_gids_of_dev_cb(ib_dev, port, ndev, ndev);
++ dev_put(ndev);
++ }
++}
++EXPORT_SYMBOL(rdma_roce_rescan_port);
++
+ static void callback_for_addr_gid_device_scan(struct ib_device *device,
+ u32 port,
+ struct net_device *rdma_ndev,
+@@ -575,16 +596,17 @@ static void handle_netdev_upper(struct ib_device *ib_dev, u32 port,
+ }
+ }
+
+-static void _roce_del_all_netdev_gids(struct ib_device *ib_dev, u32 port,
+- struct net_device *event_ndev)
++void roce_del_all_netdev_gids(struct ib_device *ib_dev,
++ u32 port, struct net_device *ndev)
+ {
+- ib_cache_gid_del_all_netdev_gids(ib_dev, port, event_ndev);
++ ib_cache_gid_del_all_netdev_gids(ib_dev, port, ndev);
+ }
++EXPORT_SYMBOL(roce_del_all_netdev_gids);
+
+ static void del_netdev_upper_ips(struct ib_device *ib_dev, u32 port,
+ struct net_device *rdma_ndev, void *cookie)
+ {
+- handle_netdev_upper(ib_dev, port, cookie, _roce_del_all_netdev_gids);
++ handle_netdev_upper(ib_dev, port, cookie, roce_del_all_netdev_gids);
+ }
+
+ static void add_netdev_upper_ips(struct ib_device *ib_dev, u32 port,
+diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
+index 821d93c8f7123c..dfd2e5a86e6fe5 100644
+--- a/drivers/infiniband/core/uverbs.h
++++ b/drivers/infiniband/core/uverbs.h
+@@ -160,6 +160,8 @@ struct ib_uverbs_file {
+ struct page *disassociate_page;
+
+ struct xarray idr;
++
++ struct mutex disassociation_lock;
+ };
+
+ struct ib_uverbs_event {
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 94454186ed81d5..85cfc790a7bb36 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -76,6 +76,7 @@ static dev_t dynamic_uverbs_dev;
+ static DEFINE_IDA(uverbs_ida);
+ static int ib_uverbs_add_one(struct ib_device *device);
+ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data);
++static struct ib_client uverbs_client;
+
+ static char *uverbs_devnode(const struct device *dev, umode_t *mode)
+ {
+@@ -217,6 +218,7 @@ void ib_uverbs_release_file(struct kref *ref)
+
+ if (file->disassociate_page)
+ __free_pages(file->disassociate_page, 0);
++ mutex_destroy(&file->disassociation_lock);
+ mutex_destroy(&file->umap_lock);
+ mutex_destroy(&file->ucontext_lock);
+ kfree(file);
+@@ -698,8 +700,13 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
+ ret = PTR_ERR(ucontext);
+ goto out;
+ }
++
++ mutex_lock(&file->disassociation_lock);
++
+ vma->vm_ops = &rdma_umap_ops;
+ ret = ucontext->device->ops.mmap(ucontext, vma);
++
++ mutex_unlock(&file->disassociation_lock);
+ out:
+ srcu_read_unlock(&file->device->disassociate_srcu, srcu_key);
+ return ret;
+@@ -721,6 +728,8 @@ static void rdma_umap_open(struct vm_area_struct *vma)
+ /* We are racing with disassociation */
+ if (!down_read_trylock(&ufile->hw_destroy_rwsem))
+ goto out_zap;
++ mutex_lock(&ufile->disassociation_lock);
++
+ /*
+ * Disassociation already completed, the VMA should already be zapped.
+ */
+@@ -732,10 +741,12 @@ static void rdma_umap_open(struct vm_area_struct *vma)
+ goto out_unlock;
+ rdma_umap_priv_init(priv, vma, opriv->entry);
+
++ mutex_unlock(&ufile->disassociation_lock);
+ up_read(&ufile->hw_destroy_rwsem);
+ return;
+
+ out_unlock:
++ mutex_unlock(&ufile->disassociation_lock);
+ up_read(&ufile->hw_destroy_rwsem);
+ out_zap:
+ /*
+@@ -819,7 +830,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ {
+ struct rdma_umap_priv *priv, *next_priv;
+
+- lockdep_assert_held(&ufile->hw_destroy_rwsem);
++ mutex_lock(&ufile->disassociation_lock);
+
+ while (1) {
+ struct mm_struct *mm = NULL;
+@@ -845,8 +856,10 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ break;
+ }
+ mutex_unlock(&ufile->umap_lock);
+- if (!mm)
++ if (!mm) {
++ mutex_unlock(&ufile->disassociation_lock);
+ return;
++ }
+
+ /*
+ * The umap_lock is nested under mmap_lock since it used within
+@@ -876,7 +889,31 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ mmap_read_unlock(mm);
+ mmput(mm);
+ }
++
++ mutex_unlock(&ufile->disassociation_lock);
++}
++
++/**
++ * rdma_user_mmap_disassociate() - Revoke mmaps for a device
++ * @device: device whose user mmaps are to be revoked
++ *
++ * This function should be called by drivers that need to disable mmaps for the
++ * device, for instance because it is going to be reset.
++ */
++void rdma_user_mmap_disassociate(struct ib_device *device)
++{
++ struct ib_uverbs_device *uverbs_dev =
++ ib_get_client_data(device, &uverbs_client);
++ struct ib_uverbs_file *ufile;
++
++ mutex_lock(&uverbs_dev->lists_mutex);
++ list_for_each_entry(ufile, &uverbs_dev->uverbs_file_list, list) {
++ if (ufile->ucontext)
++ uverbs_user_mmap_disassociate(ufile);
++ }
++ mutex_unlock(&uverbs_dev->lists_mutex);
+ }
++EXPORT_SYMBOL(rdma_user_mmap_disassociate);
+
+ /*
+ * ib_uverbs_open() does not need the BKL:
+@@ -947,6 +984,8 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
+ mutex_init(&file->umap_lock);
+ INIT_LIST_HEAD(&file->umaps);
+
++ mutex_init(&file->disassociation_lock);
++
+ filp->private_data = file;
+ list_add_tail(&file->list, &dev->uverbs_file_list);
+ mutex_unlock(&dev->lists_mutex);
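The new per-file disassociation_lock is what lets the exported rdma_user_mmap_disassociate() revoke mappings without holding hw_destroy_rwsem: the driver mmap call, VMA duplication in rdma_umap_open(), and the zap loop all serialize on it, so no new mapping can appear while existing ones are being torn down. A compressed pthread sketch of that invariant (names and the mutex stand in for the real locking):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t disassociation_lock = PTHREAD_MUTEX_INITIALIZER;
static int associated = 1;
static int nmaps;

/* Stand-in for ib_uverbs_mmap(): refuses once the device is gone. */
static int do_mmap(void)
{
	int ret = -1;

	pthread_mutex_lock(&disassociation_lock);
	if (associated) {
		nmaps++;        /* ucontext->device->ops.mmap() in the driver */
		ret = 0;
	}
	pthread_mutex_unlock(&disassociation_lock);
	return ret;
}

/* Stand-in for uverbs_user_mmap_disassociate(). */
static void disassociate(void)
{
	pthread_mutex_lock(&disassociation_lock);
	associated = 0;
	nmaps = 0;              /* zap all existing mappings */
	pthread_mutex_unlock(&disassociation_lock);
}

int main(void)
{
	int ret;

	do_mmap();
	disassociate();
	ret = do_mmap();
	printf("maps=%d, new mmap -> %d\n", nmaps, ret);
	return 0;
}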
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index e66ae9f22c710c..160096792224b1 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -3633,7 +3633,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
+ wc->byte_len = orig_cqe->length;
+ wc->qp = &gsi_qp->ib_qp;
+
+- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
++ wc->ex.imm_data = cpu_to_be32(orig_cqe->immdata);
+ wc->src_qp = orig_cqe->src_qp;
+ memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+ if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
+@@ -3778,7 +3778,10 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
+ (unsigned long)(cqe->qp_handle),
+ struct bnxt_re_qp, qplib_qp);
+ wc->qp = &qp->ib_qp;
+- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
++ if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
++ wc->ex.imm_data = cpu_to_be32(cqe->immdata);
++ else
++ wc->ex.invalidate_rkey = cqe->invrkey;
+ wc->src_qp = cqe->src_qp;
+ memcpy(wc->smac, cqe->smac, ETH_ALEN);
+ wc->port_num = 1;
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 9eb290ec71a85d..2ac8ddbed576f5 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -2033,12 +2033,6 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state)
+ rdev = en_info->rdev;
+ en_dev = en_info->en_dev;
+ mutex_lock(&bnxt_re_mutex);
+- /* L2 driver may invoke this callback during device error/crash or device
+- * reset. Current RoCE driver doesn't recover the device in case of
+- * error. Handle the error by dispatching fatal events to all qps
+- * ie. by calling bnxt_re_dev_stop and release the MSIx vectors as
+- * L2 driver want to modify the MSIx table.
+- */
+
+ ibdev_info(&rdev->ibdev, "Handle device suspend call");
+ /* Check the current device state from bnxt_en_dev and move the
+@@ -2046,17 +2040,12 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state)
+ * This prevents more commands to HW during clean-up,
+ * in case the device is already in error.
+ */
+- if (test_bit(BNXT_STATE_FW_FATAL_COND, &rdev->en_dev->en_state))
++ if (test_bit(BNXT_STATE_FW_FATAL_COND, &rdev->en_dev->en_state)) {
+ set_bit(ERR_DEVICE_DETACHED, &rdev->rcfw.cmdq.flags);
+-
+- bnxt_re_dev_stop(rdev);
+- bnxt_re_stop_irq(adev);
+- /* Move the device states to detached and avoid sending any more
+- * commands to HW
+- */
+- set_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags);
+- set_bit(ERR_DEVICE_DETACHED, &rdev->rcfw.cmdq.flags);
+- wake_up_all(&rdev->rcfw.cmdq.waitq);
++ set_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags);
++ wake_up_all(&rdev->rcfw.cmdq.waitq);
++ bnxt_re_dev_stop(rdev);
++ }
+
+ if (rdev->pacing.dbr_pacing)
+ bnxt_re_set_pacing_dev_state(rdev);
+@@ -2075,13 +2064,6 @@ static int bnxt_re_resume(struct auxiliary_device *adev)
+ struct bnxt_re_dev *rdev;
+
+ mutex_lock(&bnxt_re_mutex);
+- /* L2 driver may invoke this callback during device recovery, resume.
+- * reset. Current RoCE driver doesn't recover the device in case of
+- * error. Handle the error by dispatching fatal events to all qps
+- * ie. by calling bnxt_re_dev_stop and release the MSIx vectors as
+- * L2 driver want to modify the MSIx table.
+- */
+-
+ bnxt_re_add_device(adev, BNXT_RE_POST_RECOVERY_INIT);
+ rdev = en_info->rdev;
+ ibdev_info(&rdev->ibdev, "Device resume completed");
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 820611a239433a..f55958e5fddb4a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -391,7 +391,7 @@ struct bnxt_qplib_cqe {
+ u16 cfa_meta;
+ u64 wr_id;
+ union {
+- __le32 immdata;
++ u32 immdata;
+ u32 invrkey;
+ };
+ u64 qp_handle;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 4ec66611a14340..4106423a1b399d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -179,8 +179,8 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+ ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_CQC,
+ hr_cq->cqn);
+ if (ret)
+- dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret,
+- hr_cq->cqn);
++ dev_err_ratelimited(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n",
++ ret, hr_cq->cqn);
+
+ xa_erase_irq(&cq_table->array, hr_cq->cqn);
+
+diff --git a/drivers/infiniband/hw/hns/hns_roce_debugfs.c b/drivers/infiniband/hw/hns/hns_roce_debugfs.c
+index e8febb40f6450c..b869cdc5411893 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_debugfs.c
++++ b/drivers/infiniband/hw/hns/hns_roce_debugfs.c
+@@ -5,6 +5,7 @@
+
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
++#include <linux/pci.h>
+
+ #include "hns_roce_device.h"
+
+@@ -86,7 +87,7 @@ void hns_roce_register_debugfs(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_dev_debugfs *dbgfs = &hr_dev->dbgfs;
+
+- dbgfs->root = debugfs_create_dir(dev_name(&hr_dev->ib_dev.dev),
++ dbgfs->root = debugfs_create_dir(pci_name(hr_dev->pci_dev),
+ hns_roce_dbgfs_root);
+
+ create_sw_stat_debugfs(hr_dev, dbgfs->root);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 0b1e21cb6d2d38..560a1d9de408ff 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -489,12 +489,6 @@ struct hns_roce_bank {
+ u32 next; /* Next ID to allocate. */
+ };
+
+-struct hns_roce_idx_table {
+- u32 *spare_idx;
+- u32 head;
+- u32 tail;
+-};
+-
+ struct hns_roce_qp_table {
+ struct hns_roce_hem_table qp_table;
+ struct hns_roce_hem_table irrl_table;
+@@ -503,7 +497,7 @@ struct hns_roce_qp_table {
+ struct mutex scc_mutex;
+ struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM];
+ struct mutex bank_mutex;
+- struct hns_roce_idx_table idx_table;
++ struct xarray dip_xa;
+ };
+
+ struct hns_roce_cq_table {
+@@ -593,6 +587,7 @@ struct hns_roce_dev;
+
+ enum {
+ HNS_ROCE_FLUSH_FLAG = 0,
++ HNS_ROCE_STOP_FLUSH_FLAG = 1,
+ };
+
+ struct hns_roce_work {
+@@ -656,6 +651,8 @@ struct hns_roce_qp {
+ enum hns_roce_cong_type cong_type;
+ u8 tc_mode;
+ u8 priority;
++ spinlock_t flush_lock;
++ struct hns_roce_dip *dip;
+ };
+
+ struct hns_roce_ib_iboe {
+@@ -982,8 +979,6 @@ struct hns_roce_dev {
+ enum hns_roce_device_state state;
+ struct list_head qp_list; /* list of all qps on this dev */
+ spinlock_t qp_list_lock; /* protect qp_list */
+- struct list_head dip_list; /* list of all dest ips on this dev */
+- spinlock_t dip_list_lock; /* protect dip_list */
+
+ struct list_head pgdir_list;
+ struct mutex pgdir_mutex;
+@@ -1289,6 +1284,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn);
+ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type);
+ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp);
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type);
++void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn);
+ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type);
+ void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev);
+ int hns_roce_init(struct hns_roce_dev *hr_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index c7c167e2a04513..f84521be3bea4a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -300,7 +300,7 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ unsigned long mhop_obj = obj;
+ u32 l0_idx, l1_idx, l2_idx;
+ u32 chunk_ba_num;
+@@ -331,14 +331,14 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ index->buf = l0_idx;
+ break;
+ default:
+- ibdev_err(ibdev, "table %u not support mhop.hop_num = %u!\n",
+- table->type, mhop->hop_num);
++ dev_err(dev, "table %u not support mhop.hop_num = %u!\n",
++ table->type, mhop->hop_num);
+ return -EINVAL;
+ }
+
+ if (unlikely(index->buf >= table->num_hem)) {
+- ibdev_err(ibdev, "table %u exceed hem limt idx %llu, max %lu!\n",
+- table->type, index->buf, table->num_hem);
++ dev_err(dev, "table %u exceed hem limt idx %llu, max %lu!\n",
++ table->type, index->buf, table->num_hem);
+ return -EINVAL;
+ }
+
+@@ -448,14 +448,14 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ u32 step_idx;
+ int ret = 0;
+
+ if (index->inited & HEM_INDEX_L0) {
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, 0);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM step 0 failed!\n");
++ dev_err(dev, "set HEM step 0 failed!\n");
+ goto out;
+ }
+ }
+@@ -463,7 +463,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ if (index->inited & HEM_INDEX_L1) {
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, 1);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM step 1 failed!\n");
++ dev_err(dev, "set HEM step 1 failed!\n");
+ goto out;
+ }
+ }
+@@ -475,7 +475,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ step_idx = mhop->hop_num;
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, step_idx);
+ if (ret)
+- ibdev_err(ibdev, "set HEM step last failed!\n");
++ dev_err(dev, "set HEM step last failed!\n");
+ }
+ out:
+ return ret;
+@@ -485,14 +485,14 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_table *table,
+ unsigned long obj)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_hem_index index = {};
+ struct hns_roce_hem_mhop mhop = {};
++ struct device *dev = hr_dev->dev;
+ int ret;
+
+ ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "calc hem config failed!\n");
++ dev_err(dev, "calc hem config failed!\n");
+ return ret;
+ }
+
+@@ -504,7 +504,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+
+ ret = alloc_mhop_hem(hr_dev, table, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "alloc mhop hem failed!\n");
++ dev_err(dev, "alloc mhop hem failed!\n");
+ goto out;
+ }
+
+@@ -512,7 +512,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ if (table->type < HEM_TYPE_MTT) {
+ ret = set_mhop_hem(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM address to HW failed!\n");
++ dev_err(dev, "set HEM address to HW failed!\n");
+ goto err_alloc;
+ }
+ }
+@@ -575,7 +575,7 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ u32 hop_num = mhop->hop_num;
+ u32 chunk_ba_num;
+ u32 step_idx;
+@@ -605,21 +605,21 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, step_idx);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear hop%u HEM, ret = %d.\n",
+- hop_num, ret);
++ dev_warn(dev, "failed to clear hop%u HEM, ret = %d.\n",
++ hop_num, ret);
+
+ if (index->inited & HEM_INDEX_L1) {
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 1);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear HEM step 1, ret = %d.\n",
+- ret);
++ dev_warn(dev, "failed to clear HEM step 1, ret = %d.\n",
++ ret);
+ }
+
+ if (index->inited & HEM_INDEX_L0) {
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 0);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear HEM step 0, ret = %d.\n",
+- ret);
++ dev_warn(dev, "failed to clear HEM step 0, ret = %d.\n",
++ ret);
+ }
+ }
+ }
+@@ -629,14 +629,14 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev,
+ unsigned long obj,
+ int check_refcount)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_hem_index index = {};
+ struct hns_roce_hem_mhop mhop = {};
++ struct device *dev = hr_dev->dev;
+ int ret;
+
+ ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "calc hem config failed!\n");
++ dev_err(dev, "calc hem config failed!\n");
+ return;
+ }
+
+@@ -672,8 +672,8 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
+
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
+ if (ret)
+- dev_warn(dev, "failed to clear HEM base address, ret = %d.\n",
+- ret);
++ dev_warn_ratelimited(dev, "failed to clear HEM base address, ret = %d.\n",
++ ret);
+
+ hns_roce_free_hem(hr_dev, table->hem[i]);
+ table->hem[i] = NULL;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 24e906b9d3ae13..697b17cca02e71 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -373,19 +373,12 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
+ static int check_send_valid(struct hns_roce_dev *hr_dev,
+ struct hns_roce_qp *hr_qp)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+-
+ if (unlikely(hr_qp->state == IB_QPS_RESET ||
+ hr_qp->state == IB_QPS_INIT ||
+- hr_qp->state == IB_QPS_RTR)) {
+- ibdev_err(ibdev, "failed to post WQE, QP state %u!\n",
+- hr_qp->state);
++ hr_qp->state == IB_QPS_RTR))
+ return -EINVAL;
+- } else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) {
+- ibdev_err(ibdev, "failed to post WQE, dev state %d!\n",
+- hr_dev->state);
++ else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN))
+ return -EIO;
+- }
+
+ return 0;
+ }
+@@ -582,7 +575,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
+ if (WARN_ON(ret))
+ return ret;
+
+- hr_reg_write(rc_sq_wqe, RC_SEND_WQE_FENCE,
++ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
+ (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
+
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SE,
+@@ -2560,20 +2553,19 @@ static void hns_roce_free_link_table(struct hns_roce_dev *hr_dev)
+ free_link_table_buf(hr_dev, &priv->ext_llm);
+ }
+
+-static void free_dip_list(struct hns_roce_dev *hr_dev)
++static void free_dip_entry(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_dip *hr_dip;
+- struct hns_roce_dip *tmp;
+- unsigned long flags;
++ unsigned long idx;
+
+- spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
++ xa_lock(&hr_dev->qp_table.dip_xa);
+
+- list_for_each_entry_safe(hr_dip, tmp, &hr_dev->dip_list, node) {
+- list_del(&hr_dip->node);
++ xa_for_each(&hr_dev->qp_table.dip_xa, idx, hr_dip) {
++ __xa_erase(&hr_dev->qp_table.dip_xa, hr_dip->dip_idx);
+ kfree(hr_dip);
+ }
+
+- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
++ xa_unlock(&hr_dev->qp_table.dip_xa);
+ }
+
+ static struct ib_pd *free_mr_init_pd(struct hns_roce_dev *hr_dev)
+@@ -2775,8 +2767,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
+ ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
+ IB_QPS_INIT, NULL);
+ if (ret) {
+- ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev, "failed to modify qp to init, ret = %d.\n",
++ ret);
+ return ret;
+ }
+
+@@ -2981,7 +2973,7 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
+ hns_roce_free_link_table(hr_dev);
+
+ if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09)
+- free_dip_list(hr_dev);
++ free_dip_entry(hr_dev);
+ }
+
+ static int hns_roce_mbox_post(struct hns_roce_dev *hr_dev,
+@@ -3421,8 +3413,8 @@ static int free_mr_post_send_lp_wqe(struct hns_roce_qp *hr_qp)
+
+ ret = hns_roce_v2_post_send(&hr_qp->ibqp, send_wr, &bad_wr);
+ if (ret) {
+- ibdev_err(ibdev, "failed to post wqe for free mr, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev, "failed to post wqe for free mr, ret = %d.\n",
++ ret);
+ return ret;
+ }
+
+@@ -3461,9 +3453,9 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
+
+ ret = free_mr_post_send_lp_wqe(hr_qp);
+ if (ret) {
+- ibdev_err(ibdev,
+- "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
+- hr_qp->qpn, ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
++ hr_qp->qpn, ret);
+ break;
+ }
+
+@@ -3474,16 +3466,16 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
+ while (cqe_cnt) {
+ npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc);
+ if (npolled < 0) {
+- ibdev_err(ibdev,
+- "failed to poll cqe for free mr, remain %d cqe.\n",
+- cqe_cnt);
++ ibdev_err_ratelimited(ibdev,
++ "failed to poll cqe for free mr, remain %d cqe.\n",
++ cqe_cnt);
+ goto out;
+ }
+
+ if (time_after(jiffies, end)) {
+- ibdev_err(ibdev,
+- "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
+- cqe_cnt);
++ ibdev_err_ratelimited(ibdev,
++ "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
++ cqe_cnt);
+ goto out;
+ }
+ cqe_cnt -= npolled;
+@@ -4701,26 +4693,49 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp, int attr_mask,
+ return 0;
+ }
+
++static int alloc_dip_entry(struct xarray *dip_xa, u32 qpn)
++{
++ struct hns_roce_dip *hr_dip;
++ int ret;
++
++ hr_dip = xa_load(dip_xa, qpn);
++ if (hr_dip)
++ return 0;
++
++ hr_dip = kzalloc(sizeof(*hr_dip), GFP_KERNEL);
++ if (!hr_dip)
++ return -ENOMEM;
++
++ ret = xa_err(xa_store(dip_xa, qpn, hr_dip, GFP_KERNEL));
++ if (ret)
++ kfree(hr_dip);
++
++ return ret;
++}
++
+ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ u32 *dip_idx)
+ {
+ const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+- u32 *spare_idx = hr_dev->qp_table.idx_table.spare_idx;
+- u32 *head = &hr_dev->qp_table.idx_table.head;
+- u32 *tail = &hr_dev->qp_table.idx_table.tail;
++ struct xarray *dip_xa = &hr_dev->qp_table.dip_xa;
++ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+ struct hns_roce_dip *hr_dip;
+- unsigned long flags;
++ unsigned long idx;
+ int ret = 0;
+
+- spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
++ ret = alloc_dip_entry(dip_xa, ibqp->qp_num);
++ if (ret)
++ return ret;
+
+- spare_idx[*tail] = ibqp->qp_num;
+- *tail = (*tail == hr_dev->caps.num_qps - 1) ? 0 : (*tail + 1);
++ xa_lock(dip_xa);
+
+- list_for_each_entry(hr_dip, &hr_dev->dip_list, node) {
+- if (!memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
++ xa_for_each(dip_xa, idx, hr_dip) {
++ if (hr_dip->qp_cnt &&
++ !memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
+ *dip_idx = hr_dip->dip_idx;
++ hr_dip->qp_cnt++;
++ hr_qp->dip = hr_dip;
+ goto out;
+ }
+ }
+@@ -4728,19 +4743,24 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ /* If no dgid is found, a new dip and a mapping between dgid and
+ * dip_idx will be created.
+ */
+- hr_dip = kzalloc(sizeof(*hr_dip), GFP_ATOMIC);
+- if (!hr_dip) {
+- ret = -ENOMEM;
+- goto out;
++ xa_for_each(dip_xa, idx, hr_dip) {
++ if (hr_dip->qp_cnt)
++ continue;
++
++ *dip_idx = idx;
++ memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
++ hr_dip->dip_idx = idx;
++ hr_dip->qp_cnt++;
++ hr_qp->dip = hr_dip;
++ break;
+ }
+
+- memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
+- hr_dip->dip_idx = *dip_idx = spare_idx[*head];
+- *head = (*head == hr_dev->caps.num_qps - 1) ? 0 : (*head + 1);
+- list_add_tail(&hr_dip->node, &hr_dev->dip_list);
++ /* This should never happen. */
++ if (WARN_ON_ONCE(!hr_qp->dip))
++ ret = -ENOSPC;
+
+ out:
+- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
++ xa_unlock(dip_xa);
+ return ret;
+ }
+
+@@ -5061,10 +5081,8 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp,
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ int ret = 0;
+
+- if (!check_qp_state(cur_state, new_state)) {
+- ibdev_err(&hr_dev->ib_dev, "Illegal state for QP!\n");
++ if (!check_qp_state(cur_state, new_state))
+ return -EINVAL;
+- }
+
+ if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
+ memset(qpc_mask, 0, hr_dev->caps.qpc_sz);
+@@ -5325,7 +5343,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ /* SW pass context to HW */
+ ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp);
+ if (ret) {
+- ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret);
++ ibdev_err_ratelimited(ibdev, "failed to modify QP, ret = %d.\n", ret);
+ goto out;
+ }
+
+@@ -5463,7 +5481,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+
+ ret = hns_roce_v2_query_qpc(hr_dev, hr_qp->qpn, &context);
+ if (ret) {
+- ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to query QPC, ret = %d.\n",
++ ret);
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -5471,7 +5491,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ state = hr_reg_read(&context, QPC_QP_ST);
+ tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state);
+ if (tmp_qp_state == -1) {
+- ibdev_err(ibdev, "Illegal ib_qp_state\n");
++ ibdev_err_ratelimited(ibdev, "Illegal ib_qp_state\n");
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -5564,9 +5584,9 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
+ hr_qp->state, IB_QPS_RESET, udata);
+ if (ret)
+- ibdev_err(ibdev,
+- "failed to modify QP to RST, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to modify QP to RST, ret = %d.\n",
++ ret);
+ }
+
+ send_cq = hr_qp->ibqp.send_cq ? to_hr_cq(hr_qp->ibqp.send_cq) : NULL;
+@@ -5594,17 +5614,41 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ return ret;
+ }
+
++static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev,
++ struct hns_roce_qp *hr_qp)
++{
++ struct hns_roce_dip *hr_dip = hr_qp->dip;
++
++ xa_lock(&hr_dev->qp_table.dip_xa);
++
++ hr_dip->qp_cnt--;
++ if (!hr_dip->qp_cnt)
++ memset(hr_dip->dgid, 0, GID_LEN_V2);
++
++ xa_unlock(&hr_dev->qp_table.dip_xa);
++}
++
+ int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ {
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
++ unsigned long flags;
+ int ret;
+
++ /* Make sure flush_cqe() is completed */
++ spin_lock_irqsave(&hr_qp->flush_lock, flags);
++ set_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag);
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
++ flush_work(&hr_qp->flush_work.work);
++
++ if (hr_qp->cong_type == CONG_TYPE_DIP)
++ put_dip_ctx_idx(hr_dev, hr_qp);
++
+ ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
+ if (ret)
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
+- hr_qp->qpn, ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
++ hr_qp->qpn, ret);
+
+ hns_roce_qp_destroy(hr_dev, hr_qp, udata);
+
+@@ -5898,9 +5942,9 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
+ HNS_ROCE_CMD_MODIFY_CQC, hr_cq->cqn);
+ hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ if (ret)
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to process cmd when modifying CQ, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to process cmd when modifying CQ, ret = %d.\n",
++ ret);
+
+ err_out:
+ if (ret)
+@@ -5924,9 +5968,9 @@ static int hns_roce_v2_query_cqc(struct hns_roce_dev *hr_dev, u32 cqn,
+ ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma,
+ HNS_ROCE_CMD_QUERY_CQC, cqn);
+ if (ret) {
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to process cmd when querying CQ, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to process cmd when querying CQ, ret = %d.\n",
++ ret);
+ goto err_mailbox;
+ }
+
+@@ -5967,11 +6011,10 @@ static int hns_roce_v2_query_mpt(struct hns_roce_dev *hr_dev, u32 key,
+ return ret;
+ }
+
+-static void hns_roce_irq_work_handle(struct work_struct *work)
++static void dump_aeqe_log(struct hns_roce_work *irq_work)
+ {
+- struct hns_roce_work *irq_work =
+- container_of(work, struct hns_roce_work, work);
+- struct ib_device *ibdev = &irq_work->hr_dev->ib_dev;
++ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
++ struct ib_device *ibdev = &hr_dev->ib_dev;
+
+ switch (irq_work->event_type) {
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+@@ -6015,6 +6058,8 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
+ case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+ ibdev_warn(ibdev, "DB overflow.\n");
+ break;
++ case HNS_ROCE_EVENT_TYPE_MB:
++ break;
+ case HNS_ROCE_EVENT_TYPE_FLR:
+ ibdev_warn(ibdev, "function level reset.\n");
+ break;
+@@ -6025,8 +6070,46 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
+ ibdev_err(ibdev, "invalid xrceth error.\n");
+ break;
+ default:
++ ibdev_info(ibdev, "Undefined event %d.\n",
++ irq_work->event_type);
+ break;
+ }
++}
++
++static void hns_roce_irq_work_handle(struct work_struct *work)
++{
++ struct hns_roce_work *irq_work =
++ container_of(work, struct hns_roce_work, work);
++ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
++ int event_type = irq_work->event_type;
++ u32 queue_num = irq_work->queue_num;
++
++ switch (event_type) {
++ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
++ case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
++ case HNS_ROCE_EVENT_TYPE_COMM_EST:
++ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
++ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
++ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
++ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
++ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
++ hns_roce_qp_event(hr_dev, queue_num, event_type);
++ break;
++ case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
++ case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
++ hns_roce_srq_event(hr_dev, queue_num, event_type);
++ break;
++ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
++ hns_roce_cq_event(hr_dev, queue_num, event_type);
++ break;
++ default:
++ break;
++ }
++
++ dump_aeqe_log(irq_work);
+
+ kfree(irq_work);
+ }
+@@ -6087,14 +6170,14 @@ static struct hns_roce_aeqe *next_aeqe_sw_v2(struct hns_roce_eq *eq)
+ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+ {
+- struct device *dev = hr_dev->dev;
+ struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq);
+ irqreturn_t aeqe_found = IRQ_NONE;
++ int num_aeqes = 0;
+ int event_type;
+ u32 queue_num;
+ int sub_type;
+
+- while (aeqe) {
++ while (aeqe && num_aeqes < HNS_AEQ_POLLING_BUDGET) {
+ /* Make sure we read AEQ entry after we have checked the
+ * ownership bit
+ */
+@@ -6105,25 +6188,12 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ queue_num = hr_reg_read(aeqe, AEQE_EVENT_QUEUE_NUM);
+
+ switch (event_type) {
+- case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+- case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
+- case HNS_ROCE_EVENT_TYPE_COMM_EST:
+- case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+- case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
+ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
+- hns_roce_qp_event(hr_dev, queue_num, event_type);
+- break;
+- case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
+- case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
+- hns_roce_srq_event(hr_dev, queue_num, event_type);
+- break;
+- case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+- case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+- hns_roce_cq_event(hr_dev, queue_num, event_type);
++ hns_roce_flush_cqe(hr_dev, queue_num);
+ break;
+ case HNS_ROCE_EVENT_TYPE_MB:
+ hns_roce_cmd_event(hr_dev,
+@@ -6131,12 +6201,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ aeqe->event.cmd.status,
+ le64_to_cpu(aeqe->event.cmd.out_param));
+ break;
+- case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+- case HNS_ROCE_EVENT_TYPE_FLR:
+- break;
+ default:
+- dev_err(dev, "unhandled event %d on EQ %d at idx %u.\n",
+- event_type, eq->eqn, eq->cons_index);
+ break;
+ }
+
+@@ -6150,6 +6215,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ hns_roce_v2_init_irq_work(hr_dev, eq, queue_num);
+
+ aeqe = next_aeqe_sw_v2(eq);
++ ++num_aeqes;
+ }
+
+ update_eq_db(eq);
+@@ -6699,6 +6765,9 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
+ int ret;
+ int i;
+
++ if (hr_dev->caps.aeqe_depth < HNS_AEQ_POLLING_BUDGET)
++ return -EINVAL;
++
+ other_num = hr_dev->caps.num_other_vectors;
+ comp_num = hr_dev->caps.num_comp_vectors;
+ aeq_num = hr_dev->caps.num_aeq_vectors;
+@@ -7017,6 +7086,7 @@ static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+
+ handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT;
+ }
++
+ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+ {
+ struct hns_roce_dev *hr_dev;
+@@ -7035,6 +7105,9 @@ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+
+ hr_dev->active = false;
+ hr_dev->dis_db = true;
++
++ rdma_user_mmap_disassociate(&hr_dev->ib_dev);
++
+ hr_dev->state = HNS_ROCE_DEVICE_STATE_RST_DOWN;
+
+ return 0;
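The DIP rework in this file replaces the global dip_list plus spare-index ring with one reference-counted xarray entry per QPN: QPs targeting the same destination GID share a dip_idx, and an entry whose qp_cnt drops to zero becomes reusable. A userspace sketch of that bookkeeping, with a plain array standing in for the xarray and all locking omitted (the driver holds dip_xa's lock around both loops):

#include <stdio.h>
#include <string.h>

#define GID_LEN 16
#define NUM_QPS 8

struct dip {
	unsigned char dgid[GID_LEN];
	unsigned int qp_cnt;       /* 0 means the slot is free for reuse */
};

static struct dip dips[NUM_QPS];   /* stands in for qp_table.dip_xa */

static int get_dip_idx(const unsigned char *dgid)
{
	int i;

	/* Share the index of an entry already serving this destination. */
	for (i = 0; i < NUM_QPS; i++)
		if (dips[i].qp_cnt && !memcmp(dips[i].dgid, dgid, GID_LEN)) {
			dips[i].qp_cnt++;
			return i;
		}

	/* Otherwise claim any unused slot. */
	for (i = 0; i < NUM_QPS; i++)
		if (!dips[i].qp_cnt) {
			memcpy(dips[i].dgid, dgid, GID_LEN);
			dips[i].qp_cnt = 1;
			return i;
		}
	return -1;
}

static void put_dip_idx(int idx)
{
	if (!--dips[idx].qp_cnt)
		memset(dips[idx].dgid, 0, GID_LEN);   /* slot reusable */
}

int main(void)
{
	unsigned char gid[GID_LEN] = { 1, 2, 3 };
	int a = get_dip_idx(gid);
	int b = get_dip_idx(gid);

	printf("same dgid shares idx: %d == %d\n", a, b);
	put_dip_idx(a);
	put_dip_idx(b);
	return 0;
}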
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index c65f68a14a2608..cbdbc9edbce6ec 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -85,6 +85,11 @@
+
+ #define HNS_ROCE_V2_TABLE_CHUNK_SIZE (1 << 18)
+
+/* The budget must be smaller than aeqe_depth to guarantee that the CI
+ * is updated before all the entries in the EQ have been polled.
++ */
++#define HNS_AEQ_POLLING_BUDGET 64
++
+ enum {
+ HNS_ROCE_CMD_FLAG_IN = BIT(0),
+ HNS_ROCE_CMD_FLAG_OUT = BIT(1),
+@@ -919,6 +924,7 @@ struct hns_roce_v2_rc_send_wqe {
+ #define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7)
+ #define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8)
+ #define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9)
++#define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10)
+ #define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11)
+ #define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12)
+ #define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
+@@ -1342,7 +1348,7 @@ struct hns_roce_v2_priv {
+ struct hns_roce_dip {
+ u8 dgid[GID_LEN_V2];
+ u32 dip_idx;
+- struct list_head node; /* all dips are on a list */
++ u32 qp_cnt;
+ };
+
+ struct fmea_ram_ecc {
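HNS_AEQ_POLLING_BUDGET bounds how many AEQEs one interrupt pass consumes, so the consumer index is written back before the hardware producer can lap the ring; the init path in hns_roce_v2_init_eq_table() now rejects EQs shallower than the budget for the same reason. A userspace sketch of the bounded polling loop (ring layout and names are illustrative):

#include <stdio.h>

#define RING_DEPTH 128
#define BUDGET      64   /* must not exceed the ring depth, per the new check */

static int ring[RING_DEPTH];

static int poll_events(unsigned int *ci, unsigned int pi)
{
	int handled = 0;

	while (*ci != pi && handled < BUDGET) {
		(void)ring[*ci % RING_DEPTH];   /* handle one entry */
		(*ci)++;
		handled++;
	}
	/* The driver writes *ci to the EQ doorbell here (update_eq_db()). */
	return handled;
}

int main(void)
{
	unsigned int ci = 0;

	printf("handled %d of 100 pending entries\n", poll_events(&ci, 100));
	return 0;
}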
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 4cb0af73358708..ae24c81c9812d9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -466,6 +466,11 @@ static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
+ pgprot_t prot;
+ int ret;
+
++ if (hr_dev->dis_db) {
++ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]);
++ return -EPERM;
++ }
++
+ rdma_entry = rdma_user_mmap_entry_get_pgoff(uctx, vma->vm_pgoff);
+ if (!rdma_entry) {
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]);
+@@ -1130,8 +1135,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
+
+ INIT_LIST_HEAD(&hr_dev->qp_list);
+ spin_lock_init(&hr_dev->qp_list_lock);
+- INIT_LIST_HEAD(&hr_dev->dip_list);
+- spin_lock_init(&hr_dev->dip_list_lock);
+
+ ret = hns_roce_register_device(hr_dev);
+ if (ret)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 846da8c78b8b72..bf30b3a65a9ba9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -138,8 +138,8 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr
+ key_to_hw_index(mr->key) &
+ (hr_dev->caps.num_mtpts - 1));
+ if (ret)
+- ibdev_warn(ibdev, "failed to destroy mpt, ret = %d.\n",
+- ret);
++ ibdev_warn_ratelimited(ibdev, "failed to destroy mpt, ret = %d.\n",
++ ret);
+ }
+
+ free_mr_pbl(hr_dev, mr);
+@@ -435,15 +435,16 @@ static int hns_roce_set_page(struct ib_mr *ibmr, u64 addr)
+ }
+
+ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+- unsigned int *sg_offset)
++ unsigned int *sg_offset_p)
+ {
++ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device);
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_mr *mr = to_hr_mr(ibmr);
+ struct hns_roce_mtr *mtr = &mr->pbl_mtr;
+ int ret, sg_num = 0;
+
+- if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
++ if (!IS_ALIGNED(sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
+ ibmr->page_size < HNS_HW_PAGE_SIZE ||
+ ibmr->page_size > HNS_HW_MAX_PAGE_SIZE)
+ return sg_num;
+@@ -454,7 +455,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ if (!mr->page_list)
+ return sg_num;
+
+- sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
++ sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset_p, hns_roce_set_page);
+ if (sg_num < 1) {
+ ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
+ mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 6b03ba671ff8f3..9e2e76c5940636 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -39,6 +39,25 @@
+ #include "hns_roce_device.h"
+ #include "hns_roce_hem.h"
+
++static struct hns_roce_qp *hns_roce_qp_lookup(struct hns_roce_dev *hr_dev,
++ u32 qpn)
++{
++ struct device *dev = hr_dev->dev;
++ struct hns_roce_qp *qp;
++ unsigned long flags;
++
++ xa_lock_irqsave(&hr_dev->qp_table_xa, flags);
++ qp = __hns_roce_qp_lookup(hr_dev, qpn);
++ if (qp)
++ refcount_inc(&qp->refcount);
++ xa_unlock_irqrestore(&hr_dev->qp_table_xa, flags);
++
++ if (!qp)
++ dev_warn(dev, "async event for bogus QP %08x\n", qpn);
++
++ return qp;
++}
++
+ static void flush_work_handle(struct work_struct *work)
+ {
+ struct hns_roce_work *flush_work = container_of(work,
+@@ -71,11 +90,18 @@ static void flush_work_handle(struct work_struct *work)
+ void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ {
+ struct hns_roce_work *flush_work = &hr_qp->flush_work;
++ unsigned long flags;
++
++ spin_lock_irqsave(&hr_qp->flush_lock, flags);
++ /* Exit directly after destroy_qp() */
++ if (test_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag)) {
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
++ return;
++ }
+
+- flush_work->hr_dev = hr_dev;
+- INIT_WORK(&flush_work->work, flush_work_handle);
+ refcount_inc(&hr_qp->refcount);
+ queue_work(hr_dev->irq_workq, &flush_work->work);
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
+ }
+
+ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
+@@ -95,31 +121,28 @@ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
+
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
+ {
+- struct device *dev = hr_dev->dev;
+ struct hns_roce_qp *qp;
+
+- xa_lock(&hr_dev->qp_table_xa);
+- qp = __hns_roce_qp_lookup(hr_dev, qpn);
+- if (qp)
+- refcount_inc(&qp->refcount);
+- xa_unlock(&hr_dev->qp_table_xa);
+-
+- if (!qp) {
+- dev_warn(dev, "async event for bogus QP %08x\n", qpn);
++ qp = hns_roce_qp_lookup(hr_dev, qpn);
++ if (!qp)
+ return;
+- }
+
+- if (event_type == HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION ||
+- event_type == HNS_ROCE_EVENT_TYPE_INVALID_XRCETH) {
+- qp->state = IB_QPS_ERR;
++ qp->event(qp, (enum hns_roce_event)event_type);
+
+- flush_cqe(hr_dev, qp);
+- }
++ if (refcount_dec_and_test(&qp->refcount))
++ complete(&qp->free);
++}
+
+- qp->event(qp, (enum hns_roce_event)event_type);
++void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn)
++{
++ struct hns_roce_qp *qp;
++
++ qp = hns_roce_qp_lookup(hr_dev, qpn);
++ if (!qp)
++ return;
++
++ qp->state = IB_QPS_ERR;
++ flush_cqe(hr_dev, qp);
+
+ if (refcount_dec_and_test(&qp->refcount))
+ complete(&qp->free);
+@@ -1124,6 +1147,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ struct ib_udata *udata,
+ struct hns_roce_qp *hr_qp)
+ {
++ struct hns_roce_work *flush_work = &hr_qp->flush_work;
+ struct hns_roce_ib_create_qp_resp resp = {};
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_ib_create_qp ucmd = {};
+@@ -1132,9 +1156,12 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ mutex_init(&hr_qp->mutex);
+ spin_lock_init(&hr_qp->sq.lock);
+ spin_lock_init(&hr_qp->rq.lock);
++ spin_lock_init(&hr_qp->flush_lock);
+
+ hr_qp->state = IB_QPS_RESET;
+ hr_qp->flush_flag = 0;
++ flush_work->hr_dev = hr_dev;
++ INIT_WORK(&flush_work->work, flush_work_handle);
+
+ if (init_attr->create_flags)
+ return -EOPNOTSUPP;
+@@ -1546,14 +1573,10 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ unsigned int reserved_from_bot;
+ unsigned int i;
+
+- qp_table->idx_table.spare_idx = kcalloc(hr_dev->caps.num_qps,
+- sizeof(u32), GFP_KERNEL);
+- if (!qp_table->idx_table.spare_idx)
+- return -ENOMEM;
+-
+ mutex_init(&qp_table->scc_mutex);
+ mutex_init(&qp_table->bank_mutex);
+ xa_init(&hr_dev->qp_table_xa);
++ xa_init(&qp_table->dip_xa);
+
+ reserved_from_bot = hr_dev->caps.reserved_qps;
+
+@@ -1578,7 +1601,7 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+
+ for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++)
+ ida_destroy(&hr_dev->qp_table.bank[i].ida);
++ xa_destroy(&hr_dev->qp_table.dip_xa);
+ mutex_destroy(&hr_dev->qp_table.bank_mutex);
+ mutex_destroy(&hr_dev->qp_table.scc_mutex);
+- kfree(hr_dev->qp_table.idx_table.spare_idx);
+ }
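hns_roce_qp_event() and the new hns_roce_flush_cqe() now share hns_roce_qp_lookup(), which takes a reference under the table lock before the handler runs, so the QP cannot be freed mid-event. A pthread sketch of that lookup-and-pin pattern, with the xarray and IRQ-safe locking reduced to a mutex and a fixed array:

#include <pthread.h>
#include <stdio.h>

#define NUM_QPS 4

struct qp {
	int refcount;
	int valid;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct qp table[NUM_QPS] = { [1] = { .refcount = 1, .valid = 1 } };

/* Look up a QP and pin it before dropping the table lock. */
static struct qp *qp_lookup(unsigned int qpn)
{
	struct qp *qp = NULL;

	pthread_mutex_lock(&table_lock);
	if (qpn < NUM_QPS && table[qpn].valid) {
		qp = &table[qpn];
		qp->refcount++;
	}
	pthread_mutex_unlock(&table_lock);

	if (!qp)
		fprintf(stderr, "async event for bogus QP %08x\n", qpn);
	return qp;
}

int main(void)
{
	struct qp *qp = qp_lookup(1);

	if (qp)
		printf("refcount now %d\n", qp->refcount);
	qp_lookup(2);   /* bogus QPN: warns and returns NULL */
	return 0;
}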
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index c9b8233f4b0577..70c06ef65603d8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -151,8 +151,8 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
+ ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_SRQ,
+ srq->srqn);
+ if (ret)
+- dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
+- ret, srq->srqn);
++ dev_err_ratelimited(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
++ ret, srq->srqn);
+
+ xa_erase_irq(&srq_table->xa, srq->srqn);
+
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 4999239c8f4137..ac20ab3bbabf47 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2997,7 +2997,6 @@ int mlx5_ib_dev_res_srq_init(struct mlx5_ib_dev *dev)
+ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_ib_resources *devr = &dev->devr;
+- int port;
+ int ret;
+
+ if (!MLX5_CAP_GEN(dev->mdev, xrc))
+@@ -3013,10 +3012,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ return ret;
+ }
+
+- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+- INIT_WORK(&devr->ports[port].pkey_change_work,
+- pkey_change_handler);
+-
+ mutex_init(&devr->cq_lock);
+ mutex_init(&devr->srq_lock);
+
+@@ -3026,16 +3021,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ static void mlx5_ib_dev_res_cleanup(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_ib_resources *devr = &dev->devr;
+- int port;
+-
+- /*
+- * Make sure no change P_Key work items are still executing.
+- *
+- * At this stage, the mlx5_ib_event should be unregistered
+- * and it ensures that no new works are added.
+- */
+- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+- cancel_work_sync(&devr->ports[port].pkey_change_work);
+
+ /* After s0/s1 init, they are not unset during the device lifetime. */
+ if (devr->s1) {
+@@ -3211,12 +3196,14 @@ static int lag_event(struct notifier_block *nb, unsigned long event, void *data)
+ struct mlx5_ib_dev *dev = container_of(nb, struct mlx5_ib_dev,
+ lag_events);
+ struct mlx5_core_dev *mdev = dev->mdev;
++ struct ib_device *ibdev = &dev->ib_dev;
++ struct net_device *old_ndev = NULL;
+ struct mlx5_ib_port *port;
+ struct net_device *ndev;
+- int i, err;
+- int portnum;
++ u32 portnum = 0;
++ int ret = 0;
++ int i;
+
+- portnum = 0;
+ switch (event) {
+ case MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE:
+ ndev = data;
+@@ -3232,19 +3219,24 @@ static int lag_event(struct notifier_block *nb, unsigned long event, void *data)
+ }
+ }
+ }
+- err = ib_device_set_netdev(&dev->ib_dev, ndev,
+- portnum + 1);
+- dev_put(ndev);
+- if (err)
+- return err;
+- /* Rescan gids after new netdev assignment */
+- rdma_roce_rescan_device(&dev->ib_dev);
++ old_ndev = ib_device_get_netdev(ibdev, portnum + 1);
++ ret = ib_device_set_netdev(ibdev, ndev, portnum + 1);
++ if (ret)
++ goto out;
++
++ if (old_ndev)
++ roce_del_all_netdev_gids(ibdev, portnum + 1,
++ old_ndev);
++ rdma_roce_rescan_port(ibdev, portnum + 1);
+ }
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+- return NOTIFY_OK;
++
++out:
++ dev_put(old_ndev);
++ return notifier_from_errno(ret);
+ }
+
+ static void mlx5e_lag_event_register(struct mlx5_ib_dev *dev)
+@@ -4464,6 +4456,13 @@ static void mlx5_ib_stage_delay_drop_cleanup(struct mlx5_ib_dev *dev)
+
+ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
+ {
++ struct mlx5_ib_resources *devr = &dev->devr;
++ int port;
++
++ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
++ INIT_WORK(&devr->ports[port].pkey_change_work,
++ pkey_change_handler);
++
+ dev->mdev_events.notifier_call = mlx5_ib_event;
+ mlx5_notifier_register(dev->mdev, &dev->mdev_events);
+
+@@ -4474,8 +4473,14 @@ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
+
+ static void mlx5_ib_stage_dev_notifier_cleanup(struct mlx5_ib_dev *dev)
+ {
++ struct mlx5_ib_resources *devr = &dev->devr;
++ int port;
++
+ mlx5r_macsec_event_unregister(dev);
+ mlx5_notifier_unregister(dev->mdev, &dev->mdev_events);
++
++ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
++ cancel_work_sync(&devr->ports[port].pkey_change_work);
+ }
+
+ void mlx5_ib_data_direct_bind(struct mlx5_ib_dev *ibdev,
+@@ -4565,9 +4570,6 @@ static const struct mlx5_ib_profile pf_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
+ mlx5_ib_dev_res_init,
+ mlx5_ib_dev_res_cleanup),
+- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+- mlx5_ib_stage_dev_notifier_init,
+- mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_ODP,
+ mlx5_ib_odp_init_one,
+ mlx5_ib_odp_cleanup_one),
+@@ -4592,6 +4594,9 @@ static const struct mlx5_ib_profile pf_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
+ mlx5_ib_stage_ib_reg_init,
+ mlx5_ib_stage_ib_reg_cleanup),
++ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
++ mlx5_ib_stage_dev_notifier_init,
++ mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ mlx5_ib_stage_post_ib_reg_umr_init,
+ NULL),
+@@ -4628,9 +4633,6 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
+ mlx5_ib_dev_res_init,
+ mlx5_ib_dev_res_cleanup),
+- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+- mlx5_ib_stage_dev_notifier_init,
+- mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_COUNTERS,
+ mlx5_ib_counters_init,
+ mlx5_ib_counters_cleanup),
+@@ -4652,6 +4654,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
+ mlx5_ib_stage_ib_reg_init,
+ mlx5_ib_stage_ib_reg_cleanup),
++ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
++ mlx5_ib_stage_dev_notifier_init,
++ mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ mlx5_ib_stage_post_ib_reg_umr_init,
+ NULL),
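Two things happen in this file: lag_event() now fetches and releases the old netdev so its stale GIDs can be dropped before rescanning only the affected port, and its errors are encoded with notifier_from_errno() rather than returned raw to the notifier chain. The encoding helper, reproduced from include/linux/notifier.h as I remember it (verify against your tree):

#include <stdio.h>

#define NOTIFY_OK        0x0001
#define NOTIFY_STOP_MASK 0x8000

static int notifier_from_errno(int err)
{
	if (err)
		return NOTIFY_STOP_MASK | (NOTIFY_OK - err);
	return NOTIFY_OK;
}

int main(void)
{
	printf("0   -> 0x%04x\n", notifier_from_errno(0));    /* NOTIFY_OK */
	printf("-12 -> 0x%04x\n", notifier_from_errno(-12));  /* stops the chain */
	return 0;
}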
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 23fd72f7f63df9..29bde64ea1eac9 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -972,7 +972,6 @@ enum mlx5_ib_stages {
+ MLX5_IB_STAGE_QP,
+ MLX5_IB_STAGE_SRQ,
+ MLX5_IB_STAGE_DEVICE_RESOURCES,
+- MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ MLX5_IB_STAGE_ODP,
+ MLX5_IB_STAGE_COUNTERS,
+ MLX5_IB_STAGE_CONG_DEBUGFS,
+@@ -981,6 +980,7 @@ enum mlx5_ib_stages {
+ MLX5_IB_STAGE_PRE_IB_REG_UMR,
+ MLX5_IB_STAGE_WHITELIST_UID,
+ MLX5_IB_STAGE_IB_REG,
++ MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ MLX5_IB_STAGE_POST_IB_REG_UMR,
+ MLX5_IB_STAGE_DELAY_DROP,
+ MLX5_IB_STAGE_RESTRACK,
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index d2f7b5195c19dd..91d329e903083c 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -775,6 +775,7 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask)
+ * Yield the processor
+ */
+ spin_lock_irqsave(&qp->state_lock, flags);
++ attr->cur_qp_state = qp_state(qp);
+ if (qp->attr.sq_draining) {
+ spin_unlock_irqrestore(&qp->state_lock, flags);
+ cond_resched();
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 479c07e6e4ed3e..87a02f0deb0001 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -663,10 +663,12 @@ int rxe_requester(struct rxe_qp *qp)
+ if (unlikely(qp_state(qp) == IB_QPS_ERR)) {
+ wqe = __req_next_wqe(qp);
+ spin_unlock_irqrestore(&qp->state_lock, flags);
+- if (wqe)
++ if (wqe) {
++ wqe->status = IB_WC_WR_FLUSH_ERR;
+ goto err;
+- else
++ } else {
+ goto exit;
++ }
+ }
+
+ if (unlikely(qp_state(qp) == IB_QPS_RESET)) {
+diff --git a/drivers/input/misc/cs40l50-vibra.c b/drivers/input/misc/cs40l50-vibra.c
+index 03bdb7c26ec09f..dce3b0ec8cf368 100644
+--- a/drivers/input/misc/cs40l50-vibra.c
++++ b/drivers/input/misc/cs40l50-vibra.c
+@@ -334,11 +334,12 @@ static int cs40l50_add(struct input_dev *dev, struct ff_effect *effect,
+ work_data.custom_len = effect->u.periodic.custom_len;
+ work_data.vib = vib;
+ work_data.effect = effect;
+- INIT_WORK(&work_data.work, cs40l50_add_worker);
++ INIT_WORK_ONSTACK(&work_data.work, cs40l50_add_worker);
+
+ /* Push to the workqueue to serialize with playbacks */
+ queue_work(vib->vib_wq, &work_data.work);
+ flush_work(&work_data.work);
++ destroy_work_on_stack(&work_data.work);
+
+ kfree(work_data.custom_data);
+
+@@ -467,11 +468,12 @@ static int cs40l50_erase(struct input_dev *dev, int effect_id)
+ work_data.vib = vib;
+ work_data.effect = &dev->ff->effects[effect_id];
+
+- INIT_WORK(&work_data.work, cs40l50_erase_worker);
++ INIT_WORK_ONSTACK(&work_data.work, cs40l50_erase_worker);
+
+ /* Push to workqueue to serialize with playbacks */
+ queue_work(vib->vib_wq, &work_data.work);
+ flush_work(&work_data.work);
++ destroy_work_on_stack(&work_data.work);
+
+ return work_data.error;
+ }
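work_data here lives on the stack, so it must be initialized with INIT_WORK_ONSTACK() and released with destroy_work_on_stack() before the frame dies; with CONFIG_DEBUG_OBJECTS_WORK, plain INIT_WORK() would register a stack address as if it were a static object. A compile-only sketch of the pattern, with stand-in macros that only model the behavior of the real <linux/workqueue.h> ones:

#include <stdio.h>

struct work_struct { void (*func)(struct work_struct *); };

#define INIT_WORK_ONSTACK(w, f)  ((w)->func = (f))  /* kernel: also debug_object_init_on_stack() */
#define queue_and_flush(w)       ((w)->func(w))     /* stands in for queue_work() + flush_work() */
#define destroy_work_on_stack(w) ((void)(w))        /* kernel: debug_object_free() */

static void worker(struct work_struct *w) { (void)w; puts("ran"); }

int main(void)
{
	struct work_struct work;            /* lives on the stack */

	INIT_WORK_ONSTACK(&work, worker);
	queue_and_flush(&work);
	destroy_work_on_stack(&work);       /* must run before the frame dies */
	return 0;
}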
+diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
+index f49a8e0cb03c06..adacd6f7d6a8f7 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.c
++++ b/drivers/interconnect/qcom/icc-rpmh.c
+@@ -311,6 +311,9 @@ int qcom_icc_rpmh_probe(struct platform_device *pdev)
+ }
+
+ qp->num_clks = devm_clk_bulk_get_all(qp->dev, &qp->clks);
++ if (qp->num_clks == -EPROBE_DEFER)
++ return dev_err_probe(dev, qp->num_clks, "Failed to get QoS clocks\n");
++
+ if (qp->num_clks < 0 || (!qp->num_clks && desc->qos_clks_required)) {
+ dev_info(dev, "Skipping QoS, failed to get clk: %d\n", qp->num_clks);
+ goto skip_qos_config;
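The rule behind this fix: devm_clk_bulk_get_all() can return -EPROBE_DEFER, which must be propagated so the driver core retries probe later, instead of being lumped in with "no QoS clocks" and silently skipped (dev_err_probe() in the hunk both logs and returns it). A sketch of the deferral rule under that assumption:

#include <stdio.h>

#define EPROBE_DEFER 517   /* kernel-internal errno, per include/linux/errno.h */

/* Stand-in for devm_clk_bulk_get_all(): a provider is not ready yet. */
static int get_clocks(void) { return -EPROBE_DEFER; }

static int probe(void)
{
	int num_clks = get_clocks();

	if (num_clks == -EPROBE_DEFER)
		return num_clks;        /* driver core will retry probe later */
	if (num_clks < 0)
		return 0;               /* optional clocks: skip QoS config */
	return 0;
}

int main(void)
{
	printf("probe -> %d\n", probe());
	return 0;
}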
+diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
+index 25b9042fa45307..c616de2c5926ec 100644
+--- a/drivers/iommu/amd/io_pgtable_v2.c
++++ b/drivers/iommu/amd/io_pgtable_v2.c
+@@ -268,8 +268,11 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ out:
+ if (updated) {
+ struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
++ unsigned long flags;
+
++ spin_lock_irqsave(&pdom->lock, flags);
+ amd_iommu_domain_flush_pages(pdom, o_iova, size);
++ spin_unlock_irqrestore(&pdom->lock, flags);
+ }
+
+ if (mapped)
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index fcd13d301fff68..6b479592140c47 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -509,7 +509,8 @@ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+
+ snprintf(name, 16, "vcmdq%u", vcmdq->idx);
+
+- q->llq.max_n_shift = VCMDQ_LOG2SIZE_MAX;
++ /* Queue size, capped to ensure natural alignment */
++ q->llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, VCMDQ_LOG2SIZE_MAX);
+
+ /* Use the common helper to init the VCMDQ, and then... */
+ ret = arm_smmu_init_one_queue(smmu, q, vcmdq->page0,
+@@ -800,7 +801,7 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+ return 0;
+ }
+
+-struct dentry *cmdqv_debugfs_dir;
++static struct dentry *cmdqv_debugfs_dir;
+
+ static struct arm_smmu_device *
+ __tegra241_cmdqv_probe(struct arm_smmu_device *smmu, struct resource *res,
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index e860bc9439a283..a167d59101ae2e 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -707,14 +707,15 @@ static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn,
+ while (1) {
+ offset = pfn_level_offset(pfn, level);
+ pte = &parent[offset];
+- if (!pte || (dma_pte_superpage(pte) || !dma_pte_present(pte))) {
+- pr_info("PTE not present at level %d\n", level);
+- break;
+- }
+
+ pr_info("pte level: %d, pte value: 0x%016llx\n", level, pte->val);
+
+- if (level == 1)
++ if (!dma_pte_present(pte)) {
++ pr_info("page table not present at level %d\n", level - 1);
++ break;
++ }
++
++ if (level == 1 || dma_pte_superpage(pte))
+ break;
+
+ parent = phys_to_virt(dma_pte_addr(pte));
+@@ -737,11 +738,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ pr_info("Dump %s table entries for IOVA 0x%llx\n", iommu->name, addr);
+
+ /* root entry dump */
+- rt_entry = &iommu->root_entry[bus];
+- if (!rt_entry) {
+- pr_info("root table entry is not present\n");
++ if (!iommu->root_entry) {
++ pr_info("root table is not present\n");
+ return;
+ }
++ rt_entry = &iommu->root_entry[bus];
+
+ if (sm_supported(iommu))
+ pr_info("scalable mode root entry: hi 0x%016llx, low 0x%016llx\n",
+@@ -752,7 +753,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ /* context entry dump */
+ ctx_entry = iommu_context_addr(iommu, bus, devfn, 0);
+ if (!ctx_entry) {
+- pr_info("context table entry is not present\n");
++ pr_info("context table is not present\n");
+ return;
+ }
+
+@@ -761,17 +762,23 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+
+ /* legacy mode does not require PASID entries */
+ if (!sm_supported(iommu)) {
++ if (!context_present(ctx_entry)) {
++ pr_info("legacy mode page table is not present\n");
++ return;
++ }
+ level = agaw_to_level(ctx_entry->hi & 7);
+ pgtable = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+ goto pgtable_walk;
+ }
+
+- /* get the pointer to pasid directory entry */
+- dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+- if (!dir) {
+- pr_info("pasid directory entry is not present\n");
++ if (!context_present(ctx_entry)) {
++ pr_info("pasid directory table is not present\n");
+ return;
+ }
++
++ /* get the pointer to pasid directory entry */
++ dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
++
+ /* For request-without-pasid, get the pasid from context entry */
+ if (intel_iommu_sm && pasid == IOMMU_PASID_INVALID)
+ pasid = IOMMU_NO_PASID;
+@@ -783,7 +790,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ /* get the pointer to the pasid table entry */
+ entries = get_pasid_table_from_pde(pde);
+ if (!entries) {
+- pr_info("pasid table entry is not present\n");
++ pr_info("pasid table is not present\n");
+ return;
+ }
+ index = pasid & PASID_PTE_MASK;
+@@ -791,6 +798,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ for (i = 0; i < ARRAY_SIZE(pte->val); i++)
+ pr_info("pasid table entry[%d]: 0x%016llx\n", i, pte->val[i]);
+
++ if (!pasid_pte_is_present(pte)) {
++ pr_info("scalable mode page table is not present\n");
++ return;
++ }
++
+ if (pasid_pte_get_pgtt(pte) == PASID_ENTRY_PGTT_FL_ONLY) {
+ level = pte->val[2] & BIT_ULL(2) ? 5 : 4;
+ pgtable = phys_to_virt(pte->val[2] & VTD_PAGE_MASK);
+diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
+index d8eaa7ea380bb0..fbdeded3d48b59 100644
+--- a/drivers/iommu/s390-iommu.c
++++ b/drivers/iommu/s390-iommu.c
+@@ -33,6 +33,8 @@ struct s390_domain {
+ struct rcu_head rcu;
+ };
+
++static struct iommu_domain blocking_domain;
++
+ static inline unsigned int calc_rtx(dma_addr_t ptr)
+ {
+ return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
+@@ -369,20 +371,36 @@ static void s390_domain_free(struct iommu_domain *domain)
+ call_rcu(&s390_domain->rcu, s390_iommu_rcu_free_domain);
+ }
+
+-static void s390_iommu_detach_device(struct iommu_domain *domain,
+- struct device *dev)
++static void zdev_s390_domain_update(struct zpci_dev *zdev,
++ struct iommu_domain *domain)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&zdev->dom_lock, flags);
++ zdev->s390_domain = domain;
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
++}
++
++static int blocking_domain_attach_device(struct iommu_domain *domain,
++ struct device *dev)
+ {
+- struct s390_domain *s390_domain = to_s390_domain(domain);
+ struct zpci_dev *zdev = to_zpci_dev(dev);
++ struct s390_domain *s390_domain;
+ unsigned long flags;
+
++ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
++ return 0;
++
++ s390_domain = to_s390_domain(zdev->s390_domain);
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_del_rcu(&zdev->iommu_list);
+ spin_unlock_irqrestore(&s390_domain->list_lock, flags);
+
+ zpci_unregister_ioat(zdev, 0);
+- zdev->s390_domain = NULL;
+ zdev->dma_table = NULL;
++ zdev_s390_domain_update(zdev, domain);
++
++ return 0;
+ }
+
+ static int s390_iommu_attach_device(struct iommu_domain *domain,
+@@ -401,20 +419,15 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
+ domain->geometry.aperture_end < zdev->start_dma))
+ return -EINVAL;
+
+- if (zdev->s390_domain)
+- s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
++ blocking_domain_attach_device(&blocking_domain, dev);
+
++ /* If we fail now DMA remains blocked via blocking domain */
+ cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
+ virt_to_phys(s390_domain->dma_table), &status);
+- /*
+- * If the device is undergoing error recovery the reset code
+- * will re-establish the new domain.
+- */
+ if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL)
+ return -EIO;
+-
+ zdev->dma_table = s390_domain->dma_table;
+- zdev->s390_domain = s390_domain;
++ zdev_s390_domain_update(zdev, domain);
+
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_add_rcu(&zdev->iommu_list, &s390_domain->devices);
+@@ -466,19 +479,11 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
+ if (zdev->tlb_refresh)
+ dev->iommu->shadow_on_flush = 1;
+
+- return &zdev->iommu_dev;
+-}
++ /* Start with DMA blocked */
++ spin_lock_init(&zdev->dom_lock);
++ zdev_s390_domain_update(zdev, &blocking_domain);
+
+-static void s390_iommu_release_device(struct device *dev)
+-{
+- struct zpci_dev *zdev = to_zpci_dev(dev);
+-
+- /*
+- * release_device is expected to detach any domain currently attached
+- * to the device, but keep it attached to other devices in the group.
+- */
+- if (zdev)
+- s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
++ return &zdev->iommu_dev;
+ }
+
+ static int zpci_refresh_all(struct zpci_dev *zdev)
+@@ -697,9 +702,15 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
+
+ struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
+ {
+- if (!zdev || !zdev->s390_domain)
++ struct s390_domain *s390_domain;
++
++ lockdep_assert_held(&zdev->dom_lock);
++
++ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
+ return NULL;
+- return &zdev->s390_domain->ctrs;
++
++ s390_domain = to_s390_domain(zdev->s390_domain);
++ return &s390_domain->ctrs;
+ }
+
+ int zpci_init_iommu(struct zpci_dev *zdev)
+@@ -776,11 +787,19 @@ static int __init s390_iommu_init(void)
+ }
+ subsys_initcall(s390_iommu_init);
+
++static struct iommu_domain blocking_domain = {
++ .type = IOMMU_DOMAIN_BLOCKED,
++ .ops = &(const struct iommu_domain_ops) {
++ .attach_dev = blocking_domain_attach_device,
++ }
++};
++
+ static const struct iommu_ops s390_iommu_ops = {
++ .blocked_domain = &blocking_domain,
++ .release_domain = &blocking_domain,
+ .capable = s390_iommu_capable,
+ .domain_alloc_paging = s390_domain_alloc_paging,
+ .probe_device = s390_iommu_probe_device,
+- .release_device = s390_iommu_release_device,
+ .device_group = generic_device_group,
+ .pgsize_bitmap = SZ_4K,
+ .get_resv_regions = s390_iommu_get_resv_regions,
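
The s390 rework follows the current IOMMU core contract: instead of a driver-private detach helper plus a release_device callback, the driver exposes one static IOMMU_DOMAIN_BLOCKED domain, and the core attaches it when a device is released (release_domain) or must have translation cut off (blocked_domain). Reduced to its shape, with placeholder names:

    #include <linux/iommu.h>

    static int my_blocking_attach(struct iommu_domain *domain,
                                  struct device *dev)
    {
            /* tear down translation so DMA from @dev is blocked */
            return 0;
    }

    static struct iommu_domain my_blocking_domain = {
            .type = IOMMU_DOMAIN_BLOCKED,
            .ops = &(const struct iommu_domain_ops) {
                    .attach_dev = my_blocking_attach,
            },
    };

    static const struct iommu_ops my_iommu_ops = {
            .blocked_domain = &my_blocking_domain,
            .release_domain = &my_blocking_domain,
            /* ... .probe_device, .domain_alloc_paging, etc. ... */
    };

Attaching the blocking domain is idempotent (note the early return above when the current domain is already blocked), which is what lets one static instance serve both roles.
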
+diff --git a/drivers/irqchip/irq-mvebu-sei.c b/drivers/irqchip/irq-mvebu-sei.c
+index f8c70f2d100a11..065166ab5dbc04 100644
+--- a/drivers/irqchip/irq-mvebu-sei.c
++++ b/drivers/irqchip/irq-mvebu-sei.c
+@@ -192,7 +192,6 @@ static void mvebu_sei_domain_free(struct irq_domain *domain, unsigned int virq,
+ }
+
+ static const struct irq_domain_ops mvebu_sei_domain_ops = {
+- .select = msi_lib_irq_domain_select,
+ .alloc = mvebu_sei_domain_alloc,
+ .free = mvebu_sei_domain_free,
+ };
+@@ -306,6 +305,7 @@ static void mvebu_sei_cp_domain_free(struct irq_domain *domain,
+ }
+
+ static const struct irq_domain_ops mvebu_sei_cp_domain_ops = {
++ .select = msi_lib_irq_domain_select,
+ .alloc = mvebu_sei_cp_domain_alloc,
+ .free = mvebu_sei_cp_domain_free,
+ };
+diff --git a/drivers/irqchip/irq-riscv-aplic-main.c b/drivers/irqchip/irq-riscv-aplic-main.c
+index 900e72541db9e5..93e7c51f944abe 100644
+--- a/drivers/irqchip/irq-riscv-aplic-main.c
++++ b/drivers/irqchip/irq-riscv-aplic-main.c
+@@ -207,7 +207,8 @@ static int aplic_probe(struct platform_device *pdev)
+ else
+ rc = aplic_direct_setup(dev, regs);
+ if (rc)
+- dev_err(dev, "failed to setup APLIC in %s mode\n", msi_mode ? "MSI" : "direct");
++ dev_err_probe(dev, rc, "failed to setup APLIC in %s mode\n",
++ msi_mode ? "MSI" : "direct");
+
+ #ifdef CONFIG_ACPI
+ if (!acpi_disabled)
+diff --git a/drivers/irqchip/irq-riscv-aplic-msi.c b/drivers/irqchip/irq-riscv-aplic-msi.c
+index 945bff28265cdc..fb8d1838609fb5 100644
+--- a/drivers/irqchip/irq-riscv-aplic-msi.c
++++ b/drivers/irqchip/irq-riscv-aplic-msi.c
+@@ -266,6 +266,9 @@ int aplic_msi_setup(struct device *dev, void __iomem *regs)
+ if (msi_domain)
+ dev_set_msi_domain(dev, msi_domain);
+ }
++
++ if (!dev_get_msi_domain(dev))
++ return -EPROBE_DEFER;
+ }
+
+ if (!msi_create_device_irq_domain(dev, MSI_DEFAULT_DOMAIN, &aplic_msi_template,
+diff --git a/drivers/leds/flash/leds-ktd2692.c b/drivers/leds/flash/leds-ktd2692.c
+index 16a01a200c0b75..b92adf908793e5 100644
+--- a/drivers/leds/flash/leds-ktd2692.c
++++ b/drivers/leds/flash/leds-ktd2692.c
+@@ -292,6 +292,7 @@ static int ktd2692_probe(struct platform_device *pdev)
+
+ fled_cdev = &led->fled_cdev;
+ led_cdev = &fled_cdev->led_cdev;
++ led->props.timing = ktd2692_timing;
+
+ ret = ktd2692_parse_dt(led, &pdev->dev, &led_cfg);
+ if (ret)
+diff --git a/drivers/leds/leds-max5970.c b/drivers/leds/leds-max5970.c
+index 56a584311581af..285074c53b2344 100644
+--- a/drivers/leds/leds-max5970.c
++++ b/drivers/leds/leds-max5970.c
+@@ -45,7 +45,7 @@ static int max5970_led_set_brightness(struct led_classdev *cdev,
+
+ static int max5970_led_probe(struct platform_device *pdev)
+ {
+- struct fwnode_handle *led_node, *child;
++ struct fwnode_handle *child;
+ struct device *dev = &pdev->dev;
+ struct regmap *regmap;
+ struct max5970_led *ddata;
+@@ -55,7 +55,8 @@ static int max5970_led_probe(struct platform_device *pdev)
+ if (!regmap)
+ return -ENODEV;
+
+- led_node = device_get_named_child_node(dev->parent, "leds");
++ struct fwnode_handle *led_node __free(fwnode_handle) =
++ device_get_named_child_node(dev->parent, "leds");
+ if (!led_node)
+ return -ENODEV;
+
+diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c
+index 0ec21dcdbde723..cff7c343ee082a 100644
+--- a/drivers/mailbox/arm_mhuv2.c
++++ b/drivers/mailbox/arm_mhuv2.c
+@@ -500,7 +500,7 @@ static const struct mhuv2_protocol_ops mhuv2_data_transfer_ops = {
+ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
+ {
+ struct mbox_chan *chans = mhu->mbox.chans;
+- int channel = 0, i, offset = 0, windows, protocol, ch_wn;
++ int channel = 0, i, j, offset = 0, windows, protocol, ch_wn;
+ u32 stat;
+
+ for (i = 0; i < MHUV2_CMB_INT_ST_REG_CNT; i++) {
+@@ -510,9 +510,9 @@ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
+
+ ch_wn = i * MHUV2_STAT_BITS + __builtin_ctz(stat);
+
+- for (i = 0; i < mhu->length; i += 2) {
+- protocol = mhu->protocols[i];
+- windows = mhu->protocols[i + 1];
++ for (j = 0; j < mhu->length; j += 2) {
++ protocol = mhu->protocols[j];
++ windows = mhu->protocols[j + 1];
+
+ if (ch_wn >= offset + windows) {
+ if (protocol == DOORBELL)
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 4bff73532085bd..9c43ed9bdd37b5 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -584,7 +584,7 @@ static int cmdq_get_clocks(struct device *dev, struct cmdq *cmdq)
+ struct clk_bulk_data *clks;
+
+ cmdq->clocks = devm_kcalloc(dev, cmdq->pdata->gce_num,
+- sizeof(cmdq->clocks), GFP_KERNEL);
++ sizeof(*cmdq->clocks), GFP_KERNEL);
+ if (!cmdq->clocks)
+ return -ENOMEM;
+
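
The mtk-cmdq one-liner is the classic allocator sizing slip: sizeof(cmdq->clocks) measures the pointer (8 bytes on 64-bit), while sizeof(*cmdq->clocks) measures one struct clk_bulk_data element, so the old call under-allocated the array. The sizeof(*p) idiom in isolation:

    #include <linux/slab.h>

    struct elem {
            u64 a;
            u64 b;
    };

    static struct elem *alloc_elems(size_t n)
    {
            struct elem *p;

            /* sizeof(*p) tracks the element type automatically:
             * n * 16 bytes here; sizeof(p) would give n * 8 */
            p = kcalloc(n, sizeof(*p), GFP_KERNEL);

            return p;       /* NULL on allocation failure */
    }
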
+diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c
+index 6797770474a55d..680243751d625f 100644
+--- a/drivers/mailbox/omap-mailbox.c
++++ b/drivers/mailbox/omap-mailbox.c
+@@ -15,6 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/kfifo.h>
+ #include <linux/err.h>
++#include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index 272945a878b3ce..a3f4b4ad35aab9 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -1405,12 +1405,13 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
+ if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, timings))
++ false, adv76xx_get_dv_timings_cap(sd, -1), timings))
+ return 0;
+ if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, state->aspect_ratio, timings))
++ false, state->aspect_ratio,
++ adv76xx_get_dv_timings_cap(sd, -1), timings))
+ return 0;
+
+ v4l2_dbg(2, debug, sd,
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index 014fc913225c4a..61ea7393066d77 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -1431,14 +1431,15 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
+ }
+
+ if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
+- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, timings))
++ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
++ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
++ false, adv7842_get_dv_timings_cap(sd), timings))
+ return 0;
+ if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
+- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, state->aspect_ratio, timings))
++ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
++ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
++ false, state->aspect_ratio,
++ adv7842_get_dv_timings_cap(sd), timings))
+ return 0;
+
+ v4l2_dbg(2, debug, sd,
+diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c
+index ffe5f25f864762..58424d8f72af03 100644
+--- a/drivers/media/i2c/ds90ub960.c
++++ b/drivers/media/i2c/ds90ub960.c
+@@ -1286,7 +1286,7 @@ static int ub960_rxport_get_strobe_pos(struct ub960_data *priv,
+
+ clk_delay += v & UB960_IR_RX_ANA_STROBE_SET_CLK_DELAY_MASK;
+
+- ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
++ ret = ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/media/i2c/max96717.c b/drivers/media/i2c/max96717.c
+index 4e85b8eb1e7767..9259d58ba734ee 100644
+--- a/drivers/media/i2c/max96717.c
++++ b/drivers/media/i2c/max96717.c
+@@ -697,8 +697,10 @@ static int max96717_subdev_init(struct max96717_priv *priv)
+ priv->pads[MAX96717_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
+
+ ret = media_entity_pads_init(&priv->sd.entity, 2, priv->pads);
+- if (ret)
+- return dev_err_probe(dev, ret, "Failed to init pads\n");
++ if (ret) {
++ dev_err_probe(dev, ret, "Failed to init pads\n");
++ goto err_free_ctrl;
++ }
+
+ ret = v4l2_subdev_init_finalize(&priv->sd);
+ if (ret) {
+diff --git a/drivers/media/i2c/vgxy61.c b/drivers/media/i2c/vgxy61.c
+index 409d2d4ffb4bb2..d77468c8587bc4 100644
+--- a/drivers/media/i2c/vgxy61.c
++++ b/drivers/media/i2c/vgxy61.c
+@@ -1617,7 +1617,7 @@ static int vgxy61_detect(struct vgxy61_dev *sensor)
+
+ ret = cci_read(sensor->regmap, VGXY61_REG_NVM, &st, NULL);
+ if (ret < 0)
+- return st;
++ return ret;
+ if (st != VGXY61_NVM_OK)
+ dev_warn(&client->dev, "Bad nvm state got %u\n", (u8)st);
+
+diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig
+index 49e4fb696573f6..a4537818a58c05 100644
+--- a/drivers/media/pci/intel/ipu6/Kconfig
++++ b/drivers/media/pci/intel/ipu6/Kconfig
+@@ -4,12 +4,6 @@ config VIDEO_INTEL_IPU6
+ depends on VIDEO_DEV
+ depends on X86 && X86_64 && HAS_DMA
+ depends on IPU_BRIDGE || !IPU_BRIDGE
+- #
+- # This driver incorrectly tries to override the dma_ops. It should
+- # never have done that, but for now keep it working on architectures
+- # that use dma ops
+- #
+- depends on ARCH_HAS_DMA_OPS
+ select AUXILIARY_BUS
+ select IOMMU_IOVA
+ select VIDEO_V4L2_SUBDEV_API
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-bus.c b/drivers/media/pci/intel/ipu6/ipu6-bus.c
+index 149ec098cdbfe1..37d88ddb6ee7cd 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-bus.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-bus.c
+@@ -94,8 +94,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
+ if (!adev)
+ return ERR_PTR(-ENOMEM);
+
+- adev->dma_mask = DMA_BIT_MASK(isp->secure_mode ? IPU6_MMU_ADDR_BITS :
+- IPU6_MMU_ADDR_BITS_NON_SECURE);
+ adev->isp = isp;
+ adev->ctrl = ctrl;
+ adev->pdata = pdata;
+@@ -106,10 +104,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
+
+ auxdev->dev.parent = parent;
+ auxdev->dev.release = ipu6_bus_release;
+- auxdev->dev.dma_ops = &ipu6_dma_ops;
+- auxdev->dev.dma_mask = &adev->dma_mask;
+- auxdev->dev.dma_parms = pdev->dev.dma_parms;
+- auxdev->dev.coherent_dma_mask = adev->dma_mask;
+
+ ret = auxiliary_device_init(auxdev);
+ if (ret < 0) {
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-buttress.c b/drivers/media/pci/intel/ipu6/ipu6-buttress.c
+index e47f84c30e10d6..1ee63ef4a40b22 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-buttress.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-buttress.c
+@@ -24,6 +24,7 @@
+
+ #include "ipu6.h"
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-buttress.h"
+ #include "ipu6-platform-buttress-regs.h"
+
+@@ -345,12 +346,16 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr)
+ u32 disable_irqs = 0;
+ u32 irq_status;
+ u32 i, count = 0;
++ int active;
+
+- pm_runtime_get_noresume(&isp->pdev->dev);
++ active = pm_runtime_get_if_active(&isp->pdev->dev);
++ if (!active)
++ return IRQ_NONE;
+
+ irq_status = readl(isp->base + reg_irq_sts);
+- if (!irq_status) {
+- pm_runtime_put_noidle(&isp->pdev->dev);
++ if (irq_status == 0 || WARN_ON_ONCE(irq_status == 0xffffffffu)) {
++ if (active > 0)
++ pm_runtime_put_noidle(&isp->pdev->dev);
+ return IRQ_NONE;
+ }
+
+@@ -426,7 +431,8 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr)
+ writel(BUTTRESS_IRQS & ~disable_irqs,
+ isp->base + BUTTRESS_REG_ISR_ENABLE);
+
+- pm_runtime_put(&isp->pdev->dev);
++ if (active > 0)
++ pm_runtime_put(&isp->pdev->dev);
+
+ return ret;
+ }
+@@ -553,6 +559,7 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys,
+ const struct firmware *fw, struct sg_table *sgt)
+ {
+ bool is_vmalloc = is_vmalloc_addr(fw->data);
++ struct pci_dev *pdev = sys->isp->pdev;
+ struct page **pages;
+ const void *addr;
+ unsigned long n_pages;
+@@ -588,14 +595,20 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys,
+ goto out;
+ }
+
+- ret = dma_map_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0);
+- if (ret < 0) {
+- ret = -ENOMEM;
++ ret = dma_map_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
++ if (ret) {
+ sg_free_table(sgt);
+ goto out;
+ }
+
+- dma_sync_sgtable_for_device(&sys->auxdev.dev, sgt, DMA_TO_DEVICE);
++ ret = ipu6_dma_map_sgtable(sys, sgt, DMA_TO_DEVICE, 0);
++ if (ret) {
++ dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
++ sg_free_table(sgt);
++ goto out;
++ }
++
++ ipu6_dma_sync_sgtable(sys, sgt);
+
+ out:
+ kfree(pages);
+@@ -607,7 +620,10 @@ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_map_fw_image, INTEL_IPU6);
+ void ipu6_buttress_unmap_fw_image(struct ipu6_bus_device *sys,
+ struct sg_table *sgt)
+ {
+- dma_unmap_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0);
++ struct pci_dev *pdev = sys->isp->pdev;
++
++ ipu6_dma_unmap_sgtable(sys, sgt, DMA_TO_DEVICE, 0);
++ dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
+ sg_free_table(sgt);
+ }
+ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_unmap_fw_image, INTEL_IPU6);
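
The buttress ISR change swaps an unconditional pm_runtime_get_noresume() for pm_runtime_get_if_active(), which pins the device only if it is currently RPM_ACTIVE: a suspended IPU cannot have raised the interrupt, and reading its registers would be unsafe. The helper returns 1 when it took a usage count, 0 when the device is not active, and a negative errno when runtime PM is disabled, which is why the put is conditional on active > 0. The pattern, with a hypothetical my_dev:

    #include <linux/interrupt.h>
    #include <linux/pm_runtime.h>

    struct my_dev {
            struct device *dev;
            void __iomem *base;
    };

    static irqreturn_t my_isr(int irq, void *data)
    {
            struct my_dev *md = data;
            int active = pm_runtime_get_if_active(md->dev);

            if (!active)                    /* suspended: not ours */
                    return IRQ_NONE;

            /* ... read and ack hardware status via md->base ... */

            if (active > 0)                 /* drop the count we took */
                    pm_runtime_put(md->dev);

            return IRQ_HANDLED;
    }
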
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-cpd.c b/drivers/media/pci/intel/ipu6/ipu6-cpd.c
+index 715b21ab4b8e98..21c1c128a7eaa5 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-cpd.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-cpd.c
+@@ -15,6 +15,7 @@
+ #include "ipu6.h"
+ #include "ipu6-bus.h"
+ #include "ipu6-cpd.h"
++#include "ipu6-dma.h"
+
+ /* 15 entries + header*/
+ #define MAX_PKG_DIR_ENT_CNT 16
+@@ -162,7 +163,6 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ {
+ dma_addr_t dma_addr_src = sg_dma_address(adev->fw_sgt.sgl);
+ const struct ipu6_cpd_ent *ent, *man_ent, *met_ent;
+- struct device *dev = &adev->auxdev.dev;
+ struct ipu6_device *isp = adev->isp;
+ unsigned int man_sz, met_sz;
+ void *pkg_dir_pos;
+@@ -175,8 +175,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ met_sz = met_ent->len;
+
+ adev->pkg_dir_size = PKG_DIR_SIZE + man_sz + met_sz;
+- adev->pkg_dir = dma_alloc_attrs(dev, adev->pkg_dir_size,
+- &adev->pkg_dir_dma_addr, GFP_KERNEL, 0);
++ adev->pkg_dir = ipu6_dma_alloc(adev, adev->pkg_dir_size,
++ &adev->pkg_dir_dma_addr, GFP_KERNEL, 0);
+ if (!adev->pkg_dir)
+ return -ENOMEM;
+
+@@ -198,8 +198,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ met_ent->len);
+ if (ret) {
+ dev_err(&isp->pdev->dev, "Failed to parse module data\n");
+- dma_free_attrs(dev, adev->pkg_dir_size,
+- adev->pkg_dir, adev->pkg_dir_dma_addr, 0);
++ ipu6_dma_free(adev, adev->pkg_dir_size,
++ adev->pkg_dir, adev->pkg_dir_dma_addr, 0);
+ return ret;
+ }
+
+@@ -211,8 +211,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ pkg_dir_pos += man_sz;
+ memcpy(pkg_dir_pos, src + met_ent->offset, met_sz);
+
+- dma_sync_single_range_for_device(dev, adev->pkg_dir_dma_addr,
+- 0, adev->pkg_dir_size, DMA_TO_DEVICE);
++ ipu6_dma_sync_single(adev, adev->pkg_dir_dma_addr,
++ adev->pkg_dir_size);
+
+ return 0;
+ }
+@@ -220,8 +220,8 @@ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_create_pkg_dir, INTEL_IPU6);
+
+ void ipu6_cpd_free_pkg_dir(struct ipu6_bus_device *adev)
+ {
+- dma_free_attrs(&adev->auxdev.dev, adev->pkg_dir_size, adev->pkg_dir,
+- adev->pkg_dir_dma_addr, 0);
++ ipu6_dma_free(adev, adev->pkg_dir_size, adev->pkg_dir,
++ adev->pkg_dir_dma_addr, 0);
+ }
+ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_free_pkg_dir, INTEL_IPU6);
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+index 92530a1cc90f51..b71f66bd8c1fdb 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+@@ -39,8 +39,7 @@ static struct vm_info *get_vm_info(struct ipu6_mmu *mmu, dma_addr_t iova)
+ return NULL;
+ }
+
+-static void __dma_clear_buffer(struct page *page, size_t size,
+- unsigned long attrs)
++static void __clear_buffer(struct page *page, size_t size, unsigned long attrs)
+ {
+ void *ptr;
+
+@@ -56,8 +55,7 @@ static void __dma_clear_buffer(struct page *page, size_t size,
+ clflush_cache_range(ptr, size);
+ }
+
+-static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+- gfp_t gfp, unsigned long attrs)
++static struct page **__alloc_buffer(size_t size, gfp_t gfp, unsigned long attrs)
+ {
+ int count = PHYS_PFN(size);
+ int array_size = count * sizeof(struct page *);
+@@ -86,7 +84,7 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+ pages[i + j] = pages[i] + j;
+ }
+
+- __dma_clear_buffer(pages[i], PAGE_SIZE << order, attrs);
++ __clear_buffer(pages[i], PAGE_SIZE << order, attrs);
+ i += 1 << order;
+ count -= 1 << order;
+ }
+@@ -100,29 +98,26 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+ return NULL;
+ }
+
+-static void __dma_free_buffer(struct device *dev, struct page **pages,
+- size_t size, unsigned long attrs)
++static void __free_buffer(struct page **pages, size_t size, unsigned long attrs)
+ {
+ int count = PHYS_PFN(size);
+ unsigned int i;
+
+ for (i = 0; i < count && pages[i]; i++) {
+- __dma_clear_buffer(pages[i], PAGE_SIZE, attrs);
++ __clear_buffer(pages[i], PAGE_SIZE, attrs);
+ __free_pages(pages[i], 0);
+ }
+
+ kvfree(pages);
+ }
+
+-static void ipu6_dma_sync_single_for_cpu(struct device *dev,
+- dma_addr_t dma_handle,
+- size_t size,
+- enum dma_data_direction dir)
++void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle,
++ size_t size)
+ {
+ void *vaddr;
+ u32 offset;
+ struct vm_info *info;
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct ipu6_mmu *mmu = sys->mmu;
+
+ info = get_vm_info(mmu, dma_handle);
+ if (WARN_ON(!info))
+@@ -135,10 +130,10 @@ static void ipu6_dma_sync_single_for_cpu(struct device *dev,
+ vaddr = info->vaddr + offset;
+ clflush_cache_range(vaddr, size);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_single, INTEL_IPU6);
+
+-static void ipu6_dma_sync_sg_for_cpu(struct device *dev,
+- struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir)
++void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents)
+ {
+ struct scatterlist *sg;
+ int i;
+@@ -146,14 +141,22 @@ static void ipu6_dma_sync_sg_for_cpu(struct device *dev,
+ for_each_sg(sglist, sg, nents, i)
+ clflush_cache_range(page_to_virt(sg_page(sg)), sg->length);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sg, INTEL_IPU6);
+
+-static void *ipu6_dma_alloc(struct device *dev, size_t size,
+- dma_addr_t *dma_handle, gfp_t gfp,
+- unsigned long attrs)
++void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ ipu6_dma_sync_sg(sys, sgt->sgl, sgt->orig_nents);
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sgtable, INTEL_IPU6);
++
++void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
++ dma_addr_t *dma_handle, gfp_t gfp,
++ unsigned long attrs)
++{
++ struct device *dev = &sys->auxdev.dev;
++ struct pci_dev *pdev = sys->isp->pdev;
+ dma_addr_t pci_dma_addr, ipu6_iova;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct vm_info *info;
+ unsigned long count;
+ struct page **pages;
+@@ -173,7 +176,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+ if (!iova)
+ goto out_kfree;
+
+- pages = __dma_alloc_buffer(dev, size, gfp, attrs);
++ pages = __alloc_buffer(size, gfp, attrs);
+ if (!pages)
+ goto out_free_iova;
+
+@@ -227,7 +230,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+ ipu6_mmu_unmap(mmu->dmap->mmu_info, ipu6_iova, PAGE_SIZE);
+ }
+
+- __dma_free_buffer(dev, pages, size, attrs);
++ __free_buffer(pages, size, attrs);
+
+ out_free_iova:
+ __free_iova(&mmu->dmap->iovad, iova);
+@@ -236,13 +239,13 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+
+ return NULL;
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_alloc, INTEL_IPU6);
+
+-static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+- dma_addr_t dma_handle,
+- unsigned long attrs)
++void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
++ dma_addr_t dma_handle, unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ struct ipu6_mmu *mmu = sys->mmu;
++ struct pci_dev *pdev = sys->isp->pdev;
+ struct iova *iova = find_iova(&mmu->dmap->iovad, PHYS_PFN(dma_handle));
+ dma_addr_t pci_dma_addr, ipu6_iova;
+ struct vm_info *info;
+@@ -281,7 +284,7 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+ ipu6_mmu_unmap(mmu->dmap->mmu_info, PFN_PHYS(iova->pfn_lo),
+ PFN_PHYS(iova_size(iova)));
+
+- __dma_free_buffer(dev, pages, size, attrs);
++ __free_buffer(pages, size, attrs);
+
+ mmu->tlb_invalidate(mmu);
+
+@@ -289,13 +292,14 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+
+ kfree(info);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_free, INTEL_IPU6);
+
+-static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+- void *addr, dma_addr_t iova, size_t size,
+- unsigned long attrs)
++int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
++ void *addr, dma_addr_t iova, size_t size,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- size_t count = PHYS_PFN(PAGE_ALIGN(size));
++ struct ipu6_mmu *mmu = sys->mmu;
++ size_t count = PFN_UP(size);
+ struct vm_info *info;
+ size_t i;
+ int ret;
+@@ -323,18 +327,17 @@ static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+ return 0;
+ }
+
+-static void ipu6_dma_unmap_sg(struct device *dev,
+- struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir,
+- unsigned long attrs)
++void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs)
+ {
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct iova *iova = find_iova(&mmu->dmap->iovad,
+ PHYS_PFN(sg_dma_address(sglist)));
+- int i, npages, count;
+ struct scatterlist *sg;
+ dma_addr_t pci_dma_addr;
++ unsigned int i;
+
+ if (!nents)
+ return;
+@@ -342,31 +345,15 @@ static void ipu6_dma_unmap_sg(struct device *dev,
+ if (WARN_ON(!iova))
+ return;
+
+- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+- ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL);
+-
+- /* get the nents as orig_nents given by caller */
+- count = 0;
+- npages = iova_size(iova);
+- for_each_sg(sglist, sg, nents, i) {
+- if (sg_dma_len(sg) == 0 ||
+- sg_dma_address(sg) == DMA_MAPPING_ERROR)
+- break;
+-
+- npages -= PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+- count++;
+- if (npages <= 0)
+- break;
+- }
+-
+ /*
+ * Before IPU6 mmu unmap, return the pci dma address back to sg
+ * assume the nents is less than orig_nents as the least granule
+ * is 1 SZ_4K page
+ */
+- dev_dbg(dev, "trying to unmap concatenated %u ents\n", count);
+- for_each_sg(sglist, sg, count, i) {
+- dev_dbg(dev, "ipu unmap sg[%d] %pad\n", i, &sg_dma_address(sg));
++ dev_dbg(dev, "trying to unmap concatenated %u ents\n", nents);
++ for_each_sg(sglist, sg, nents, i) {
++ dev_dbg(dev, "unmap sg[%d] %pad size %u\n", i,
++ &sg_dma_address(sg), sg_dma_len(sg));
+ pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info,
+ sg_dma_address(sg));
+ dev_dbg(dev, "return pci_dma_addr %pad back to sg[%d]\n",
+@@ -380,23 +367,21 @@ static void ipu6_dma_unmap_sg(struct device *dev,
+ PFN_PHYS(iova_size(iova)));
+
+ mmu->tlb_invalidate(mmu);
+-
+- dma_unmap_sg_attrs(&pdev->dev, sglist, nents, dir, attrs);
+-
+ __free_iova(&mmu->dmap->iovad, iova);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sg, INTEL_IPU6);
+
+-static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir,
+- unsigned long attrs)
++int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct scatterlist *sg;
+ struct iova *iova;
+ size_t npages = 0;
+ unsigned long iova_addr;
+- int i, count;
++ int i;
+
+ for_each_sg(sglist, sg, nents, i) {
+ if (sg->offset) {
+@@ -406,18 +391,12 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ }
+ }
+
+- dev_dbg(dev, "pci_dma_map_sg trying to map %d ents\n", nents);
+- count = dma_map_sg_attrs(&pdev->dev, sglist, nents, dir, attrs);
+- if (count <= 0) {
+- dev_err(dev, "pci_dma_map_sg %d ents failed\n", nents);
+- return 0;
+- }
+-
+- dev_dbg(dev, "pci_dma_map_sg %d ents mapped\n", count);
+-
+- for_each_sg(sglist, sg, count, i)
++ for_each_sg(sglist, sg, nents, i)
+ npages += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+
++ dev_dbg(dev, "dmamap trying to map %d ents %zu pages\n",
++ nents, npages);
++
+ iova = alloc_iova(&mmu->dmap->iovad, npages,
+ PHYS_PFN(dma_get_mask(dev)), 0);
+ if (!iova)
+@@ -427,12 +406,13 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ iova->pfn_hi);
+
+ iova_addr = iova->pfn_lo;
+- for_each_sg(sglist, sg, count, i) {
++ for_each_sg(sglist, sg, nents, i) {
++ phys_addr_t iova_pa;
+ int ret;
+
+- dev_dbg(dev, "mapping entry %d: iova 0x%llx phy %pad size %d\n",
+- i, PFN_PHYS(iova_addr), &sg_dma_address(sg),
+- sg_dma_len(sg));
++ iova_pa = PFN_PHYS(iova_addr);
++ dev_dbg(dev, "mapping entry %d: iova %pap phy %pap size %d\n",
++ i, &iova_pa, &sg_dma_address(sg), sg_dma_len(sg));
+
+ ret = ipu6_mmu_map(mmu->dmap->mmu_info, PFN_PHYS(iova_addr),
+ sg_dma_address(sg),
+@@ -445,25 +425,48 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ iova_addr += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+ }
+
+- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+- ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL);
++ dev_dbg(dev, "dmamap %d ents %zu pages mapped\n", nents, npages);
+
+- return count;
++ return nents;
+
+ out_fail:
+- ipu6_dma_unmap_sg(dev, sglist, i, dir, attrs);
++ ipu6_dma_unmap_sg(sys, sglist, i, dir, attrs);
+
+ return 0;
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sg, INTEL_IPU6);
++
++int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs)
++{
++ int nents;
++
++ nents = ipu6_dma_map_sg(sys, sgt->sgl, sgt->nents, dir, attrs);
++ if (nents < 0)
++ return nents;
++
++ sgt->nents = nents;
++
++ return 0;
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sgtable, INTEL_IPU6);
++
++void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs)
++{
++ ipu6_dma_unmap_sg(sys, sgt->sgl, sgt->nents, dir, attrs);
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sgtable, INTEL_IPU6);
+
+ /*
+ * Create scatter-list for the already allocated DMA buffer
+ */
+-static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+- void *cpu_addr, dma_addr_t handle, size_t size,
+- unsigned long attrs)
++int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ void *cpu_addr, dma_addr_t handle, size_t size,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct vm_info *info;
+ int n_pages;
+ int ret = 0;
+@@ -483,20 +486,7 @@ static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ ret = sg_alloc_table_from_pages(sgt, info->pages, n_pages, 0, size,
+ GFP_KERNEL);
+ if (ret)
+- dev_warn(dev, "IPU6 get sgt table failed\n");
++ dev_warn(dev, "get sgt table failed\n");
+
+ return ret;
+ }
+-
+-const struct dma_map_ops ipu6_dma_ops = {
+- .alloc = ipu6_dma_alloc,
+- .free = ipu6_dma_free,
+- .mmap = ipu6_dma_mmap,
+- .map_sg = ipu6_dma_map_sg,
+- .unmap_sg = ipu6_dma_unmap_sg,
+- .sync_single_for_cpu = ipu6_dma_sync_single_for_cpu,
+- .sync_single_for_device = ipu6_dma_sync_single_for_cpu,
+- .sync_sg_for_cpu = ipu6_dma_sync_sg_for_cpu,
+- .sync_sg_for_device = ipu6_dma_sync_sg_for_cpu,
+- .get_sgtable = ipu6_dma_get_sgtable,
+-};
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h
+index 847ea5b7c925c3..b51244add9e611 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.h
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h
+@@ -5,7 +5,13 @@
+ #define IPU6_DMA_H
+
+ #include <linux/dma-map-ops.h>
++#include <linux/dma-mapping.h>
+ #include <linux/iova.h>
++#include <linux/iova.h>
++#include <linux/scatterlist.h>
++#include <linux/types.h>
++
++#include "ipu6-bus.h"
+
+ struct ipu6_mmu_info;
+
+@@ -14,6 +20,30 @@ struct ipu6_dma_mapping {
+ struct iova_domain iovad;
+ };
+
+-extern const struct dma_map_ops ipu6_dma_ops;
+-
++void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle,
++ size_t size);
++void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents);
++void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt);
++void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
++ dma_addr_t *dma_handle, gfp_t gfp,
++ unsigned long attrs);
++void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
++ dma_addr_t dma_handle, unsigned long attrs);
++int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
++ void *addr, dma_addr_t iova, size_t size,
++ unsigned long attrs);
++int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs);
++void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs);
++int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs);
++void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs);
++int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ void *cpu_addr, dma_addr_t handle, size_t size,
++ unsigned long attrs);
+ #endif /* IPU6_DMA_H */
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
+index 0b33fe9e703dcb..7d3d9314cb306b 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
+@@ -12,6 +12,7 @@
+ #include <linux/types.h>
+
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-fw-com.h"
+
+ /*
+@@ -88,7 +89,6 @@ struct ipu6_fw_com_context {
+ void *dma_buffer;
+ dma_addr_t dma_addr;
+ unsigned int dma_size;
+- unsigned long attrs;
+
+ struct ipu6_fw_sys_queue *input_queue; /* array of host to SP queues */
+ struct ipu6_fw_sys_queue *output_queue; /* array of SP to host */
+@@ -164,7 +164,6 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+ struct ipu6_fw_com_context *ctx;
+ struct device *dev = &adev->auxdev.dev;
+ size_t sizeall, offset;
+- unsigned long attrs = 0;
+ void *specific_host_addr;
+ unsigned int i;
+
+@@ -206,9 +205,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+
+ sizeall += sizeinput + sizeoutput;
+
+- ctx->dma_buffer = dma_alloc_attrs(dev, sizeall, &ctx->dma_addr,
+- GFP_KERNEL, attrs);
+- ctx->attrs = attrs;
++ ctx->dma_buffer = ipu6_dma_alloc(adev, sizeall, &ctx->dma_addr,
++ GFP_KERNEL, 0);
+ if (!ctx->dma_buffer) {
+ dev_err(dev, "failed to allocate dma memory\n");
+ kfree(ctx);
+@@ -239,6 +237,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+ memcpy(specific_host_addr, cfg->specific_addr,
+ cfg->specific_size);
+
++ ipu6_dma_sync_single(adev, ctx->config_vied_addr, sizeall);
++
+ /* initialize input queues */
+ offset += specific_size;
+ res.reg = SYSCOM_QPR_BASE_REG;
+@@ -315,8 +315,8 @@ int ipu6_fw_com_release(struct ipu6_fw_com_context *ctx, unsigned int force)
+ if (!force && !ctx->cell_ready(ctx->adev))
+ return -EBUSY;
+
+- dma_free_attrs(&ctx->adev->auxdev.dev, ctx->dma_size,
+- ctx->dma_buffer, ctx->dma_addr, ctx->attrs);
++ ipu6_dma_free(ctx->adev, ctx->dma_size,
++ ctx->dma_buffer, ctx->dma_addr, 0);
+ kfree(ctx);
+ return 0;
+ }
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+index c3a20507d6dbcc..57298ac73d0722 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+@@ -97,13 +97,15 @@ static void page_table_dump(struct ipu6_mmu_info *mmu_info)
+ for (l1_idx = 0; l1_idx < ISP_L1PT_PTES; l1_idx++) {
+ u32 l2_idx;
+ u32 iova = (phys_addr_t)l1_idx << ISP_L1PT_SHIFT;
++ phys_addr_t l2_phys;
+
+ if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval)
+ continue;
++
++ l2_phys = TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]);
+ dev_dbg(mmu_info->dev,
+- "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pa\n",
+- l1_idx, iova, iova + ISP_PAGE_SIZE,
+- TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]));
++ "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pap\n",
++ l1_idx, iova, iova + ISP_PAGE_SIZE, &l2_phys);
+
+ for (l2_idx = 0; l2_idx < ISP_L2PT_PTES; l2_idx++) {
+ u32 *l2_pt = mmu_info->l2_pts[l1_idx];
+@@ -227,7 +229,7 @@ static u32 *alloc_l1_pt(struct ipu6_mmu_info *mmu_info)
+ }
+
+ mmu_info->l1_pt_dma = dma >> ISP_PADDR_SHIFT;
+- dev_dbg(mmu_info->dev, "l1 pt %p mapped at %llx\n", pt, dma);
++ dev_dbg(mmu_info->dev, "l1 pt %p mapped at %pad\n", pt, &dma);
+
+ return pt;
+
+@@ -330,8 +332,8 @@ static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova,
+ u32 iova_end = ALIGN(iova + size, ISP_PAGE_SIZE);
+
+ dev_dbg(mmu_info->dev,
+- "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr 0x%10.10llx\n",
+- iova_start, iova_end, size, paddr);
++ "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr %pap\n",
++ iova_start, iova_end, size, &paddr);
+
+ return l2_map(mmu_info, iova_start, paddr, size);
+ }
+@@ -361,10 +363,13 @@ static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova,
+ for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT;
+ (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT)
+ < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) {
++ phys_addr_t pteval;
++
+ l2_pt = mmu_info->l2_pts[l1_idx];
++ pteval = TBL_PHYS_ADDR(l2_pt[l2_idx]);
+ dev_dbg(mmu_info->dev,
+- "unmap l2 index %u with pteval 0x%10.10llx\n",
+- l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx]));
++ "unmap l2 index %u with pteval %pap\n",
++ l2_idx, &pteval);
+ l2_pt[l2_idx] = mmu_info->dummy_page_pteval;
+
+ clflush_cache_range((void *)&l2_pt[l2_idx],
+@@ -525,9 +530,10 @@ static struct ipu6_mmu_info *ipu6_mmu_alloc(struct ipu6_device *isp)
+ return NULL;
+
+ mmu_info->aperture_start = 0;
+- mmu_info->aperture_end = DMA_BIT_MASK(isp->secure_mode ?
+- IPU6_MMU_ADDR_BITS :
+- IPU6_MMU_ADDR_BITS_NON_SECURE);
++ mmu_info->aperture_end =
++ (dma_addr_t)DMA_BIT_MASK(isp->secure_mode ?
++ IPU6_MMU_ADDR_BITS :
++ IPU6_MMU_ADDR_BITS_NON_SECURE);
+ mmu_info->pgsize_bitmap = SZ_4K;
+ mmu_info->dev = &isp->pdev->dev;
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c
+index 7fb707d3530967..91718eabd74e57 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6.c
++++ b/drivers/media/pci/intel/ipu6/ipu6.c
+@@ -752,6 +752,9 @@ static void ipu6_pci_reset_done(struct pci_dev *pdev)
+ */
+ static int ipu6_suspend(struct device *dev)
+ {
++ struct pci_dev *pdev = to_pci_dev(dev);
++
++ synchronize_irq(pdev->irq);
+ return 0;
+ }
+
+diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
+index 3d36f323a8f8f7..4d032436691c1b 100644
+--- a/drivers/media/radio/wl128x/fmdrv_common.c
++++ b/drivers/media/radio/wl128x/fmdrv_common.c
+@@ -466,11 +466,12 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
+ jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
+ return -ETIMEDOUT;
+ }
++ spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
+ if (!fmdev->resp_skb) {
++ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
+ fmerr("Response SKB is missing\n");
+ return -EFAULT;
+ }
+- spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
+ skb = fmdev->resp_skb;
+ fmdev->resp_skb = NULL;
+ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
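
The wl128x fix closes a check-then-use window: resp_skb was tested before resp_skb_lock was taken, so the interrupt path could free or replace it between the NULL check and the read. The repaired shape, generalized with illustrative resp_ctx/claim_resp names:

    #include <linux/skbuff.h>
    #include <linux/spinlock.h>

    struct resp_ctx {
            spinlock_t lock;
            struct sk_buff *skb;    /* set by the IRQ path */
    };

    /* Atomically observe and take ownership of the pending skb. */
    static struct sk_buff *claim_resp(struct resp_ctx *c)
    {
            struct sk_buff *skb;
            unsigned long flags;

            spin_lock_irqsave(&c->lock, flags);
            skb = c->skb;           /* check ... */
            c->skb = NULL;          /* ... and claim in one critical section */
            spin_unlock_irqrestore(&c->lock, flags);

            return skb;             /* NULL means nothing was pending */
    }
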
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 6a790ac8cbe689..f25e011153642e 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -1459,12 +1459,19 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+ h_freq = (u32)bt->pixelclock / total_h_pixel;
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_CVT)) {
++ struct v4l2_dv_timings cvt = {};
++
+ if (v4l2_detect_cvt(total_v_lines, h_freq, bt->vsync, bt->width,
+- bt->polarities, bt->interlaced, timings))
++ bt->polarities, bt->interlaced,
++ &vivid_dv_timings_cap, &cvt) &&
++ cvt.bt.width == bt->width && cvt.bt.height == bt->height) {
++ *timings = cvt;
+ return true;
++ }
+ }
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_GTF)) {
++ struct v4l2_dv_timings gtf = {};
+ struct v4l2_fract aspect_ratio;
+
+ find_aspect_ratio(bt->width, bt->height,
+@@ -1472,8 +1479,12 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+ &aspect_ratio.denominator);
+ if (v4l2_detect_gtf(total_v_lines, h_freq, bt->vsync,
+ bt->polarities, bt->interlaced,
+- aspect_ratio, timings))
++ aspect_ratio, &vivid_dv_timings_cap,
++ >f) &&
++ gtf.bt.width == bt->width && gtf.bt.height == bt->height) {
++ *timings = gtf;
+ return true;
++ }
+ }
+ return false;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 942d0005c55e82..2cf5dcee0ce800 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -481,25 +481,28 @@ EXPORT_SYMBOL_GPL(v4l2_calc_timeperframe);
+ * @polarities - the horizontal and vertical polarities (same as struct
+ * v4l2_bt_timings polarities).
+ * @interlaced - if this flag is true, it indicates interlaced format
+- * @fmt - the resulting timings.
++ * @cap - the v4l2_dv_timings_cap capabilities.
++ * @timings - the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid CVT format. If so, then it will return true, and fmt will be filled
+ * in with the found CVT timings.
+ */
+-bool v4l2_detect_cvt(unsigned frame_height,
+- unsigned hfreq,
+- unsigned vsync,
+- unsigned active_width,
++bool v4l2_detect_cvt(unsigned int frame_height,
++ unsigned int hfreq,
++ unsigned int vsync,
++ unsigned int active_width,
+ u32 polarities,
+ bool interlaced,
+- struct v4l2_dv_timings *fmt)
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *timings)
+ {
+- int v_fp, v_bp, h_fp, h_bp, hsync;
+- int frame_width, image_height, image_width;
++ struct v4l2_dv_timings t = {};
++ int v_fp, v_bp, h_fp, h_bp, hsync;
++ int frame_width, image_height, image_width;
+ bool reduced_blanking;
+ bool rb_v2 = false;
+- unsigned pix_clk;
++ unsigned int pix_clk;
+
+ if (vsync < 4 || vsync > 8)
+ return false;
+@@ -625,36 +628,39 @@ bool v4l2_detect_cvt(unsigned frame_height,
+ h_fp = h_blank - hsync - h_bp;
+ }
+
+- fmt->type = V4L2_DV_BT_656_1120;
+- fmt->bt.polarities = polarities;
+- fmt->bt.width = image_width;
+- fmt->bt.height = image_height;
+- fmt->bt.hfrontporch = h_fp;
+- fmt->bt.vfrontporch = v_fp;
+- fmt->bt.hsync = hsync;
+- fmt->bt.vsync = vsync;
+- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
++ t.type = V4L2_DV_BT_656_1120;
++ t.bt.polarities = polarities;
++ t.bt.width = image_width;
++ t.bt.height = image_height;
++ t.bt.hfrontporch = h_fp;
++ t.bt.vfrontporch = v_fp;
++ t.bt.hsync = hsync;
++ t.bt.vsync = vsync;
++ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
+
+ if (!interlaced) {
+- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
+- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
++ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
++ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
+ } else {
+- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
++ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ 2 * vsync) / 2;
+- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+- 2 * vsync - fmt->bt.vbackporch;
+- fmt->bt.il_vfrontporch = v_fp;
+- fmt->bt.il_vsync = vsync;
+- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
+- fmt->bt.interlaced = V4L2_DV_INTERLACED;
++ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
++ 2 * vsync - t.bt.vbackporch;
++ t.bt.il_vfrontporch = v_fp;
++ t.bt.il_vsync = vsync;
++ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
++ t.bt.interlaced = V4L2_DV_INTERLACED;
+ }
+
+- fmt->bt.pixelclock = pix_clk;
+- fmt->bt.standards = V4L2_DV_BT_STD_CVT;
++ t.bt.pixelclock = pix_clk;
++ t.bt.standards = V4L2_DV_BT_STD_CVT;
+
+ if (reduced_blanking)
+- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
++ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+
++ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
++ return false;
++ *timings = t;
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
+@@ -699,22 +705,25 @@ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
+ * image height, so it has to be passed explicitly. Usually
+ * the native screen aspect ratio is used for this. If it
+ * is not filled in correctly, then 16:9 will be assumed.
+- * @fmt - the resulting timings.
++ * @cap - the v4l2_dv_timings_cap capabilities.
++ * @timings - the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid GTF format. If so, then it will return true, and fmt will be filled
+ * in with the found GTF timings.
+ */
+-bool v4l2_detect_gtf(unsigned frame_height,
+- unsigned hfreq,
+- unsigned vsync,
+- u32 polarities,
+- bool interlaced,
+- struct v4l2_fract aspect,
+- struct v4l2_dv_timings *fmt)
++bool v4l2_detect_gtf(unsigned int frame_height,
++ unsigned int hfreq,
++ unsigned int vsync,
++ u32 polarities,
++ bool interlaced,
++ struct v4l2_fract aspect,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *timings)
+ {
++ struct v4l2_dv_timings t = {};
+ int pix_clk;
+- int v_fp, v_bp, h_fp, hsync;
++ int v_fp, v_bp, h_fp, hsync;
+ int frame_width, image_height, image_width;
+ bool default_gtf;
+ int h_blank;
+@@ -783,36 +792,39 @@ bool v4l2_detect_gtf(unsigned frame_height,
+
+ h_fp = h_blank / 2 - hsync;
+
+- fmt->type = V4L2_DV_BT_656_1120;
+- fmt->bt.polarities = polarities;
+- fmt->bt.width = image_width;
+- fmt->bt.height = image_height;
+- fmt->bt.hfrontporch = h_fp;
+- fmt->bt.vfrontporch = v_fp;
+- fmt->bt.hsync = hsync;
+- fmt->bt.vsync = vsync;
+- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
++ t.type = V4L2_DV_BT_656_1120;
++ t.bt.polarities = polarities;
++ t.bt.width = image_width;
++ t.bt.height = image_height;
++ t.bt.hfrontporch = h_fp;
++ t.bt.vfrontporch = v_fp;
++ t.bt.hsync = hsync;
++ t.bt.vsync = vsync;
++ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
+
+ if (!interlaced) {
+- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
+- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
++ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
++ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
+ } else {
+- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
++ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ 2 * vsync) / 2;
+- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+- 2 * vsync - fmt->bt.vbackporch;
+- fmt->bt.il_vfrontporch = v_fp;
+- fmt->bt.il_vsync = vsync;
+- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
+- fmt->bt.interlaced = V4L2_DV_INTERLACED;
++ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
++ 2 * vsync - t.bt.vbackporch;
++ t.bt.il_vfrontporch = v_fp;
++ t.bt.il_vsync = vsync;
++ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
++ t.bt.interlaced = V4L2_DV_INTERLACED;
+ }
+
+- fmt->bt.pixelclock = pix_clk;
+- fmt->bt.standards = V4L2_DV_BT_STD_GTF;
++ t.bt.pixelclock = pix_clk;
++ t.bt.standards = V4L2_DV_BT_STD_GTF;
+
+ if (!default_gtf)
+- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
++ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+
++ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
++ return false;
++ *timings = t;
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_detect_gtf);
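
With the new signatures, both helpers assemble the candidate mode in a local struct and copy it to *timings only after v4l2_valid_dv_timings() accepts it against the caller-supplied cap, so callers can no longer be handed a mode outside their advertised limits. A hedged caller sketch (my_cap is a placeholder capability table; active_width may be 0 when unknown, as in the adv7604 caller above):

    #include <media/v4l2-dv-timings.h>

    static bool example_detect(unsigned int frame_height, unsigned int hfreq,
                               unsigned int vsync,
                               const struct v4l2_dv_timings_cap *my_cap,
                               struct v4l2_dv_timings *t)
    {
            /* *t is written only if a valid, in-cap CVT mode is found */
            return v4l2_detect_cvt(frame_height, hfreq, vsync,
                                   0 /* active_width unknown */,
                                   V4L2_DV_HSYNC_POS_POL,
                                   false /* progressive */,
                                   my_cap, t);
    }
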
+diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c
+index a0bcb0864ecd2c..a798e26c6402d4 100644
+--- a/drivers/message/fusion/mptsas.c
++++ b/drivers/message/fusion/mptsas.c
+@@ -4231,10 +4231,8 @@ mptsas_find_phyinfo_by_phys_disk_num(MPT_ADAPTER *ioc, u8 phys_disk_num,
+ static void
+ mptsas_reprobe_lun(struct scsi_device *sdev, void *data)
+ {
+- int rc;
+-
+ sdev->no_uld_attach = data ? 1 : 0;
+- rc = scsi_device_reprobe(sdev);
++ WARN_ON(scsi_device_reprobe(sdev));
+ }
+
+ static void
+diff --git a/drivers/mfd/da9052-spi.c b/drivers/mfd/da9052-spi.c
+index be5f2b34e18aeb..80fc5c0cac2fb0 100644
+--- a/drivers/mfd/da9052-spi.c
++++ b/drivers/mfd/da9052-spi.c
+@@ -37,7 +37,7 @@ static int da9052_spi_probe(struct spi_device *spi)
+ spi_set_drvdata(spi, da9052);
+
+ config = da9052_regmap_config;
+- config.read_flag_mask = 1;
++ config.write_flag_mask = 1;
+ config.reg_bits = 7;
+ config.pad_bits = 1;
+ config.val_bits = 8;
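
On the da9052-spi line: regmap ORs read_flag_mask into the register byte of read transfers and write_flag_mask into writes. This PMIC expects the direction bit set on writes (hence the fix), so applying the mask to reads inverted the transfer direction on the wire. Illustration of the knob, assuming the same 7-bit-address, 1-pad-bit layout:

    #include <linux/regmap.h>

    /* Bit 0 of the address byte selects the transfer direction
     * (assumed 1 = write for this family of chips). */
    static const struct regmap_config example_spi_config = {
            .reg_bits = 7,
            .pad_bits = 1,
            .val_bits = 8,
            .write_flag_mask = 1,   /* OR'ed into writes only */
    };
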
+diff --git a/drivers/mfd/intel_soc_pmic_bxtwc.c b/drivers/mfd/intel_soc_pmic_bxtwc.c
+index ccd76800d8e49b..b7204072e93ef8 100644
+--- a/drivers/mfd/intel_soc_pmic_bxtwc.c
++++ b/drivers/mfd/intel_soc_pmic_bxtwc.c
+@@ -148,6 +148,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
+ .name = "bxtwc_irq_chip_pwrbtn",
++ .domain_suffix = "PWRBTN",
+ .status_base = BXTWC_PWRBTNIRQ,
+ .mask_base = BXTWC_MPWRBTNIRQ,
+ .irqs = bxtwc_regmap_irqs_pwrbtn,
+@@ -157,6 +158,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
+ .name = "bxtwc_irq_chip_tmu",
++ .domain_suffix = "TMU",
+ .status_base = BXTWC_TMUIRQ,
+ .mask_base = BXTWC_MTMUIRQ,
+ .irqs = bxtwc_regmap_irqs_tmu,
+@@ -166,6 +168,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
+ .name = "bxtwc_irq_chip_bcu",
++ .domain_suffix = "BCU",
+ .status_base = BXTWC_BCUIRQ,
+ .mask_base = BXTWC_MBCUIRQ,
+ .irqs = bxtwc_regmap_irqs_bcu,
+@@ -175,6 +178,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
+ .name = "bxtwc_irq_chip_adc",
++ .domain_suffix = "ADC",
+ .status_base = BXTWC_ADCIRQ,
+ .mask_base = BXTWC_MADCIRQ,
+ .irqs = bxtwc_regmap_irqs_adc,
+@@ -184,6 +188,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
+ .name = "bxtwc_irq_chip_chgr",
++ .domain_suffix = "CHGR",
+ .status_base = BXTWC_CHGR0IRQ,
+ .mask_base = BXTWC_MCHGR0IRQ,
+ .irqs = bxtwc_regmap_irqs_chgr,
+@@ -193,6 +198,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_crit = {
+ .name = "bxtwc_irq_chip_crit",
++ .domain_suffix = "CRIT",
+ .status_base = BXTWC_CRITIRQ,
+ .mask_base = BXTWC_MCRITIRQ,
+ .irqs = bxtwc_regmap_irqs_crit,
+@@ -230,44 +236,55 @@ static const struct resource tmu_resources[] = {
+ };
+
+ static struct mfd_cell bxt_wc_dev[] = {
+- {
+- .name = "bxt_wcove_gpadc",
+- .num_resources = ARRAY_SIZE(adc_resources),
+- .resources = adc_resources,
+- },
+ {
+ .name = "bxt_wcove_thermal",
+ .num_resources = ARRAY_SIZE(thermal_resources),
+ .resources = thermal_resources,
+ },
+ {
+- .name = "bxt_wcove_usbc",
+- .num_resources = ARRAY_SIZE(usbc_resources),
+- .resources = usbc_resources,
++ .name = "bxt_wcove_gpio",
++ .num_resources = ARRAY_SIZE(gpio_resources),
++ .resources = gpio_resources,
+ },
+ {
+- .name = "bxt_wcove_ext_charger",
+- .num_resources = ARRAY_SIZE(charger_resources),
+- .resources = charger_resources,
++ .name = "bxt_wcove_region",
++ },
++};
++
++static const struct mfd_cell bxt_wc_tmu_dev[] = {
++ {
++ .name = "bxt_wcove_tmu",
++ .num_resources = ARRAY_SIZE(tmu_resources),
++ .resources = tmu_resources,
+ },
++};
++
++static const struct mfd_cell bxt_wc_bcu_dev[] = {
+ {
+ .name = "bxt_wcove_bcu",
+ .num_resources = ARRAY_SIZE(bcu_resources),
+ .resources = bcu_resources,
+ },
++};
++
++static const struct mfd_cell bxt_wc_adc_dev[] = {
+ {
+- .name = "bxt_wcove_tmu",
+- .num_resources = ARRAY_SIZE(tmu_resources),
+- .resources = tmu_resources,
++ .name = "bxt_wcove_gpadc",
++ .num_resources = ARRAY_SIZE(adc_resources),
++ .resources = adc_resources,
+ },
++};
+
++static struct mfd_cell bxt_wc_chgr_dev[] = {
+ {
+- .name = "bxt_wcove_gpio",
+- .num_resources = ARRAY_SIZE(gpio_resources),
+- .resources = gpio_resources,
++ .name = "bxt_wcove_usbc",
++ .num_resources = ARRAY_SIZE(usbc_resources),
++ .resources = usbc_resources,
+ },
+ {
+- .name = "bxt_wcove_region",
++ .name = "bxt_wcove_ext_charger",
++ .num_resources = ARRAY_SIZE(charger_resources),
++ .resources = charger_resources,
+ },
+ };
+
+@@ -425,6 +442,26 @@ static int bxtwc_add_chained_irq_chip(struct intel_soc_pmic *pmic,
+ 0, chip, data);
+ }
+
++static int bxtwc_add_chained_devices(struct intel_soc_pmic *pmic,
++ const struct mfd_cell *cells, int n_devs,
++ struct regmap_irq_chip_data *pdata,
++ int pirq, int irq_flags,
++ const struct regmap_irq_chip *chip,
++ struct regmap_irq_chip_data **data)
++{
++ struct device *dev = pmic->dev;
++ struct irq_domain *domain;
++ int ret;
++
++ ret = bxtwc_add_chained_irq_chip(pmic, pdata, pirq, irq_flags, chip, data);
++ if (ret)
++ return dev_err_probe(dev, ret, "Failed to add %s IRQ chip\n", chip->name);
++
++ domain = regmap_irq_get_domain(*data);
++
++ return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, cells, n_devs, NULL, 0, domain);
++}
++
+ static int bxtwc_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -466,6 +503,15 @@ static int bxtwc_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add IRQ chip\n");
+
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_tmu_dev, ARRAY_SIZE(bxt_wc_tmu_dev),
++ pmic->irq_chip_data,
++ BXTWC_TMU_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_tmu,
++ &pmic->irq_chip_data_tmu);
++ if (ret)
++ return ret;
++
+ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+ BXTWC_PWRBTN_LVL1_IRQ,
+ IRQF_ONESHOT,
+@@ -474,40 +520,32 @@ static int bxtwc_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add PWRBTN IRQ chip\n");
+
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_TMU_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_tmu,
+- &pmic->irq_chip_data_tmu);
+- if (ret)
+- return dev_err_probe(dev, ret, "Failed to add TMU IRQ chip\n");
+-
+- /* Add chained IRQ handler for BCU IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_BCU_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_bcu,
+- &pmic->irq_chip_data_bcu);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_bcu_dev, ARRAY_SIZE(bxt_wc_bcu_dev),
++ pmic->irq_chip_data,
++ BXTWC_BCU_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_bcu,
++ &pmic->irq_chip_data_bcu);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add BUC IRQ chip\n");
++ return ret;
+
+- /* Add chained IRQ handler for ADC IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_ADC_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_adc,
+- &pmic->irq_chip_data_adc);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_adc_dev, ARRAY_SIZE(bxt_wc_adc_dev),
++ pmic->irq_chip_data,
++ BXTWC_ADC_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_adc,
++ &pmic->irq_chip_data_adc);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add ADC IRQ chip\n");
++ return ret;
+
+- /* Add chained IRQ handler for CHGR IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_CHGR_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_chgr,
+- &pmic->irq_chip_data_chgr);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_chgr_dev, ARRAY_SIZE(bxt_wc_chgr_dev),
++ pmic->irq_chip_data,
++ BXTWC_CHGR_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_chgr,
++ &pmic->irq_chip_data_chgr);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add CHGR IRQ chip\n");
++ return ret;
+
+ /* Add chained IRQ handler for CRIT IRQs */
+ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
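The bxtwc hunks above replace one flat mfd_cell array with per-IRQ-domain cell groups: bxtwc_add_chained_devices() registers a chained regmap IRQ chip and then adds the MFD cells that consume its interrupts against that chip's own IRQ domain, now disambiguated by .domain_suffix. Outside the patch context, the pattern looks roughly like the sketch below; every name prefixed example_ is hypothetical, not driver code.

#include <linux/interrupt.h>
#include <linux/mfd/core.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

static int example_add_chained_devices(struct device *dev, struct regmap *map,
					int irq, const struct regmap_irq_chip *chip,
					const struct mfd_cell *cells, int n_devs)
{
	struct regmap_irq_chip_data *data;
	struct irq_domain *domain;
	int ret;

	ret = devm_regmap_add_irq_chip(dev, map, irq, IRQF_ONESHOT, 0,
				       chip, &data);
	if (ret)
		return dev_err_probe(dev, ret, "Failed to add %s\n", chip->name);

	/* Sub-devices resolve their IRQs in this chip's domain, not the
	 * parent's, so sibling chips no longer clash over one domain. */
	domain = regmap_irq_get_domain(data);

	return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, cells, n_devs,
				    NULL, 0, domain);
}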
+diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
+index 7e23ab3d5842c8..84ebc96f58e48d 100644
+--- a/drivers/mfd/rt5033.c
++++ b/drivers/mfd/rt5033.c
+@@ -81,8 +81,8 @@ static int rt5033_i2c_probe(struct i2c_client *i2c)
+ chip_rev = dev_id & RT5033_CHIP_REV_MASK;
+ dev_info(&i2c->dev, "Device found (rev. %d)\n", chip_rev);
+
+- ret = regmap_add_irq_chip(rt5033->regmap, rt5033->irq,
+- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++ ret = devm_regmap_add_irq_chip(rt5033->dev, rt5033->regmap,
++ rt5033->irq, IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 0, &rt5033_irq_chip, &rt5033->irq_data);
+ if (ret) {
+ dev_err(&i2c->dev, "Failed to request IRQ %d: %d\n",
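The rt5033 fix is a straight conversion to the managed variant: devm_regmap_add_irq_chip() ties the IRQ chip's lifetime to the struct device, so it is torn down automatically on probe failure or unbind, and the driver no longer needs a hand-written regmap_del_irq_chip() — the leak this closes. A minimal sketch of the managed form, with hypothetical example_ names:

static const struct regmap_irq_chip example_irq_chip = {
	.name = "example",
};

static int example_add_irq_chip(struct i2c_client *i2c, struct regmap *map)
{
	struct regmap_irq_chip_data *irq_data;

	/* Unwound automatically; no matching del call in remove() */
	return devm_regmap_add_irq_chip(&i2c->dev, map, i2c->irq,
					IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					0, &example_irq_chip, &irq_data);
}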
+diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
+index 2b9105295f3012..710364435b6b9e 100644
+--- a/drivers/mfd/tps65010.c
++++ b/drivers/mfd/tps65010.c
+@@ -544,17 +544,13 @@ static int tps65010_probe(struct i2c_client *client)
+ */
+ if (client->irq > 0) {
+ status = request_irq(client->irq, tps65010_irq,
+- IRQF_TRIGGER_FALLING, DRIVER_NAME, tps);
++ IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
++ DRIVER_NAME, tps);
+ if (status < 0) {
+ dev_dbg(&client->dev, "can't get IRQ %d, err %d\n",
+ client->irq, status);
+ return status;
+ }
+- /* annoying race here, ideally we'd have an option
+- * to claim the irq now and enable it later.
+- * FIXME genirq IRQF_NOAUTOEN now solves that ...
+- */
+- disable_irq(client->irq);
+ set_bit(FLAG_IRQ_ENABLE, &tps->flags);
+ } else
+ dev_warn(&client->dev, "IRQ not configured!\n");
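IRQF_NO_AUTOEN tells the IRQ core to leave the line masked when it is claimed, removing exactly the race the deleted FIXME described: between request_irq() returning and the old disable_irq() call, the handler could fire against a half-initialized device. The claim-now/enable-later shape, as an illustrative sketch:

#include <linux/interrupt.h>

static irqreturn_t example_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int example_setup_irq(int irq, void *priv)
{
	int ret;

	/* Claimed but masked: the handler cannot run yet */
	ret = request_irq(irq, example_handler,
			  IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
			  "example", priv);
	if (ret)
		return ret;

	/* ... finish initializing state the handler depends on ... */

	enable_irq(irq);	/* only now can interrupts arrive */
	return 0;
}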
+diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c
+index 6d4edd69db126a..e7d73c972f65dc 100644
+--- a/drivers/misc/apds990x.c
++++ b/drivers/misc/apds990x.c
+@@ -1147,7 +1147,7 @@ static int apds990x_probe(struct i2c_client *client)
+ err = chip->pdata->setup_resources();
+ if (err) {
+ err = -EINVAL;
+- goto fail3;
++ goto fail4;
+ }
+ }
+
+@@ -1155,7 +1155,7 @@ static int apds990x_probe(struct i2c_client *client)
+ apds990x_attribute_group);
+ if (err < 0) {
+ dev_err(&chip->client->dev, "Sysfs registration failed\n");
+- goto fail4;
++ goto fail5;
+ }
+
+ err = request_threaded_irq(client->irq, NULL,
+@@ -1166,15 +1166,17 @@ static int apds990x_probe(struct i2c_client *client)
+ if (err) {
+ dev_err(&client->dev, "could not get IRQ %d\n",
+ client->irq);
+- goto fail5;
++ goto fail6;
+ }
+ return err;
+-fail5:
++fail6:
+ sysfs_remove_group(&chip->client->dev.kobj,
+ &apds990x_attribute_group[0]);
+-fail4:
++fail5:
+ if (chip->pdata && chip->pdata->release_resources)
+ chip->pdata->release_resources();
++fail4:
++ pm_runtime_disable(&client->dev);
+ fail3:
+ regulator_bulk_disable(ARRAY_SIZE(chip->regs), chip->regs);
+ fail2:
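The apds990x relabeling restores the invariant of a goto unwind ladder — each failure point jumps to a label that undoes precisely the steps completed so far — and the new fail4 adds the previously missing pm_runtime_disable() to balance the runtime-PM setup done earlier in probe. Schematically (all example_ helpers hypothetical):

static int example_probe(struct device *dev)
{
	int err;

	err = example_enable_regulators(dev);		/* step 1 */
	if (err)
		return err;

	err = example_setup_pm_runtime(dev);		/* step 2 */
	if (err)
		goto fail_regulators;

	err = example_request_irq(dev);			/* step 3 */
	if (err)
		goto fail_pm;	/* undo step 2, then fall through to step 1 */

	return 0;

fail_pm:
	example_teardown_pm_runtime(dev);
fail_regulators:
	example_disable_regulators(dev);
	return err;
}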
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index 62ba0152547975..376047beea3d64 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -445,7 +445,7 @@ static void lkdtm_FAM_BOUNDS(void)
+
+ pr_err("FAIL: survived access of invalid flexible array member index!\n");
+
+- if (!__has_attribute(__counted_by__))
++ if (!IS_ENABLED(CONFIG_CC_HAS_COUNTED_BY))
+ pr_warn("This is expected since this %s was built with a compiler that does not support __counted_by\n",
+ lkdtm_kernel_info);
+ else if (IS_ENABLED(CONFIG_UBSAN_BOUNDS))
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 8fee7052f2ef4f..47443fb5eb3362 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -222,10 +222,6 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
+ u8 leftover = 0;
+ unsigned short rotator;
+ int i;
+- char tag[32];
+-
+- snprintf(tag, sizeof(tag), " ... CMD%d response SPI_%s",
+- cmd->opcode, maptype(cmd));
+
+ /* Except for data block reads, the whole response will already
+ * be stored in the scratch buffer. It's somewhere after the
+@@ -378,8 +374,9 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
+ }
+
+ if (value < 0)
+- dev_dbg(&host->spi->dev, "%s: resp %04x %08x\n",
+- tag, cmd->resp[0], cmd->resp[1]);
++ dev_dbg(&host->spi->dev,
++ " ... CMD%d response SPI_%s: resp %04x %08x\n",
++ cmd->opcode, maptype(cmd), cmd->resp[0], cmd->resp[1]);
+
+ /* disable chipselect on errors and some success cases */
+ if (value >= 0 && cs_on)
+diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
+index b22aa57119f238..e7a28f3316c3f2 100644
+--- a/drivers/mtd/hyperbus/rpc-if.c
++++ b/drivers/mtd/hyperbus/rpc-if.c
+@@ -163,9 +163,16 @@ static void rpcif_hb_remove(struct platform_device *pdev)
+ pm_runtime_disable(hyperbus->rpc.dev);
+ }
+
++static const struct platform_device_id rpc_if_hyperflash_id_table[] = {
++ { .name = "rpc-if-hyperflash" },
++ { /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(platform, rpc_if_hyperflash_id_table);
++
+ static struct platform_driver rpcif_platform_driver = {
+ .probe = rpcif_hb_probe,
+ .remove_new = rpcif_hb_remove,
++ .id_table = rpc_if_hyperflash_id_table,
+ .driver = {
+ .name = "rpc-if-hyperflash",
+ },
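A platform driver binds by .driver.name, but module autoloading needs a device table exported through MODULE_DEVICE_TABLE() so a MODALIAS uevent can be emitted; without the id_table added above, the rpc-if-hyperflash module only binds when built in or loaded by hand. A minimal autoload-capable skeleton, illustrative and with hypothetical names:

#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	return 0;
}

static const struct platform_device_id example_id_table[] = {
	{ .name = "example-device" },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(platform, example_id_table);	/* emits the modalias */

static struct platform_driver example_driver = {
	.probe = example_probe,
	.id_table = example_id_table,
	.driver = {
		.name = "example-device",
	},
};
module_platform_driver(example_driver);

MODULE_LICENSE("GPL");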
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index 4d7dc8a9c37385..a22aab4ed4e8ab 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -362,7 +362,7 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ size = ALIGN(size, sizeof(s32));
+ size += (req->ecc.strength + 1) * sizeof(s32) * 3;
+
+- user = kzalloc(size, GFP_KERNEL);
++ user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL);
+ if (!user)
+ return ERR_PTR(-ENOMEM);
+
+@@ -408,12 +408,6 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ }
+ EXPORT_SYMBOL_GPL(atmel_pmecc_create_user);
+
+-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user)
+-{
+- kfree(user);
+-}
+-EXPORT_SYMBOL_GPL(atmel_pmecc_destroy_user);
+-
+ static int get_strength(struct atmel_pmecc_user *user)
+ {
+ const int *strengths = user->pmecc->caps->strengths;
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.h b/drivers/mtd/nand/raw/atmel/pmecc.h
+index 7851c05126cf15..cc0c5af1f4f1ab 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.h
++++ b/drivers/mtd/nand/raw/atmel/pmecc.h
+@@ -55,8 +55,6 @@ struct atmel_pmecc *devm_atmel_pmecc_get(struct device *dev);
+ struct atmel_pmecc_user *
+ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ struct atmel_pmecc_user_req *req);
+-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user);
+-
+ void atmel_pmecc_reset(struct atmel_pmecc *pmecc);
+ int atmel_pmecc_enable(struct atmel_pmecc_user *user, int op);
+ void atmel_pmecc_disable(struct atmel_pmecc_user *user);
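Switching the allocation to devm_kzalloc() ties the PMECC user structure to the device's lifetime, which is what allows atmel_pmecc_destroy_user() and its export to be deleted outright. The trade-off is that the memory now lives until the device is unbound rather than being freed by an explicit destroy call. A sketch of the managed shape, with hypothetical names:

#include <linux/device.h>
#include <linux/err.h>

struct example_user {
	int strength;
};

static struct example_user *example_create_user(struct device *dev)
{
	/* Freed automatically on driver unbind; no destroy() counterpart */
	struct example_user *user = devm_kzalloc(dev, sizeof(*user),
						 GFP_KERNEL);

	if (!user)
		return ERR_PTR(-ENOMEM);

	return user;
}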
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 9d6e85bf227b92..8c57df44c40fe8 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor,
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->dummy.nbytes)
+- op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
++ op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
+
+ if (op->data.nbytes)
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index d6c92595f6bc9b..5a88a6096ca8c9 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -106,6 +106,7 @@ static int cypress_nor_sr_ready_and_clear_reg(struct spi_nor *nor, u64 addr)
+ int ret;
+
+ if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) {
++ op.addr.nbytes = nor->addr_nbytes;
+ op.dummy.nbytes = params->rdsr_dummy;
+ op.data.nbytes = 2;
+ }
+diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
+index ae5abe492b52ab..adc47b87b38a5f 100644
+--- a/drivers/mtd/ubi/attach.c
++++ b/drivers/mtd/ubi/attach.c
+@@ -1447,7 +1447,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ return err;
+ }
+
+-static struct ubi_attach_info *alloc_ai(void)
++static struct ubi_attach_info *alloc_ai(const char *slab_name)
+ {
+ struct ubi_attach_info *ai;
+
+@@ -1461,7 +1461,7 @@ static struct ubi_attach_info *alloc_ai(void)
+ INIT_LIST_HEAD(&ai->alien);
+ INIT_LIST_HEAD(&ai->fastmap);
+ ai->volumes = RB_ROOT;
+- ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache",
++ ai->aeb_slab_cache = kmem_cache_create(slab_name,
+ sizeof(struct ubi_ainf_peb),
+ 0, 0, NULL);
+ if (!ai->aeb_slab_cache) {
+@@ -1491,7 +1491,7 @@ static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info **ai)
+
+ err = -ENOMEM;
+
+- scan_ai = alloc_ai();
++ scan_ai = alloc_ai("ubi_aeb_slab_cache_fastmap");
+ if (!scan_ai)
+ goto out;
+
+@@ -1557,7 +1557,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ int err;
+ struct ubi_attach_info *ai;
+
+- ai = alloc_ai();
++ ai = alloc_ai("ubi_aeb_slab_cache");
+ if (!ai)
+ return -ENOMEM;
+
+@@ -1575,7 +1575,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ if (err > 0 || mtd_is_eccerr(err)) {
+ if (err != UBI_NO_FASTMAP) {
+ destroy_ai(ai);
+- ai = alloc_ai();
++ ai = alloc_ai("ubi_aeb_slab_cache");
+ if (!ai)
+ return -ENOMEM;
+
+@@ -1614,7 +1614,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) {
+ struct ubi_attach_info *scan_ai;
+
+- scan_ai = alloc_ai();
++ scan_ai = alloc_ai("ubi_aeb_slab_cache_dbg_chk_fastmap");
+ if (!scan_ai) {
+ err = -ENOMEM;
+ goto out_wl;
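kmem_cache_create() registers the cache under its name, and two live caches sharing a name can collide; since a fastmap-assisted attach or a fastmap self-check can hold a second attach-info structure while the first still exists, alloc_ai() now takes the cache name from the caller. Illustrative sketch:

#include <linux/slab.h>

struct example_obj {
	int x;
};

static int example_init(void)
{
	/* Two caches that may coexist must carry distinct names */
	struct kmem_cache *live = kmem_cache_create("example_live",
					sizeof(struct example_obj), 0, 0, NULL);
	struct kmem_cache *scan = kmem_cache_create("example_scan",
					sizeof(struct example_obj), 0, 0, NULL);

	if (!live || !scan) {
		kmem_cache_destroy(live);	/* NULL-safe */
		kmem_cache_destroy(scan);
		return -ENOMEM;
	}

	return 0;
}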
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 2a9cc9413c427d..9bdb6525f1281f 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -346,14 +346,27 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
+ * WL sub-system.
+ *
+ * @ubi: UBI device description object
++ * @need_fill: whether to fill wear-leveling pool when no PEBs are found
+ */
+-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi)
++static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
++ bool need_fill)
+ {
+ struct ubi_fm_pool *pool = &ubi->fm_wl_pool;
+ int pnum;
+
+- if (pool->used == pool->size)
++ if (pool->used == pool->size) {
++ if (need_fill && !ubi->fm_work_scheduled) {
++ /*
++ * We cannot update the fastmap here because this
++ * function is called in atomic context.
++ * Let's fail here and refill/update it as soon as
++ * possible.
++ */
++ ubi->fm_work_scheduled = 1;
++ schedule_work(&ubi->fm_work);
++ }
+ return NULL;
++ }
+
+ pnum = pool->pebs[pool->used];
+ return ubi->lookuptbl[pnum];
+@@ -375,7 +388,7 @@ static bool need_wear_leveling(struct ubi_device *ubi)
+ if (!ubi->used.rb_node)
+ return false;
+
+- e = next_peb_for_wl(ubi);
++ e = next_peb_for_wl(ubi, false);
+ if (!e) {
+ if (!ubi->free.rb_node)
+ return false;
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 5a3558bbb90356..e5cf3bdca3b012 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -143,8 +143,10 @@ static struct fwnode_handle *find_volume_fwnode(struct ubi_volume *vol)
+ vol->vol_id != volid)
+ continue;
+
++ fwnode_handle_put(fw_vols);
+ return fw_vol;
+ }
++ fwnode_handle_put(fw_vols);
+
+ return NULL;
+ }
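find_volume_fwnode() looks up a parent node and then iterates its children; fwnode lookups return counted references, so the parent handle must be dropped on every exit path — the leak the two fwnode_handle_put() calls close. The balanced shape, schematically:

#include <linux/property.h>

static struct fwnode_handle *
example_find_child(struct fwnode_handle *parent, const char *name)
{
	struct fwnode_handle *child, *found = NULL;

	fwnode_for_each_child_node(parent, child) {
		if (fwnode_name_eq(child, name)) {
			found = child;	/* keep the child's reference */
			break;
		}
	}

	fwnode_handle_put(parent);	/* drop the parent's reference always */
	return found;
}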
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index a357f3d27f2f3d..fbd399cf650337 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -683,7 +683,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ ubi_assert(!ubi->move_to_put);
+
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+- if (!next_peb_for_wl(ubi) ||
++ if (!next_peb_for_wl(ubi, true) ||
+ #else
+ if (!ubi->free.rb_node ||
+ #endif
+@@ -846,7 +846,14 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ goto out_not_moved;
+ }
+ if (err == MOVE_RETRY) {
+- scrubbing = 1;
++ /*
++ * For source PEB:
++ * 1. The scrubbing is set for scrub type PEB, it will
++ * be put back into ubi->scrub list.
++ * 2. Non-scrub type PEB will be put back into ubi->used
++ * list.
++ */
++ keep = 1;
+ dst_leb_clean = 1;
+ goto out_not_moved;
+ }
+diff --git a/drivers/mtd/ubi/wl.h b/drivers/mtd/ubi/wl.h
+index 7b6715ef6d4a35..a69169c35e310f 100644
+--- a/drivers/mtd/ubi/wl.h
++++ b/drivers/mtd/ubi/wl.h
+@@ -5,7 +5,8 @@
+ static void update_fastmap_work_fn(struct work_struct *wrk);
+ static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root);
+ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi);
+-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi);
++static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
++ bool need_fill);
+ static bool need_wear_leveling(struct ubi_device *ubi);
+ static void ubi_fastmap_close(struct ubi_device *ubi);
+ static inline void ubi_fastmap_init(struct ubi_device *ubi, int *count)
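As the new comment in fastmap-wl.c notes, next_peb_for_wl() can be reached in atomic context, where the fastmap update (which sleeps and does I/O) is forbidden; the need_fill path therefore only latches fm_work_scheduled and queues the update to run later in process context. The defer-from-atomic pattern, as a sketch with hypothetical names:

#include <linux/workqueue.h>

struct example_ctx {
	struct work_struct refill_work;	/* INIT_WORK()ed at setup time */
	int refill_scheduled;		/* serialized by the caller's lock */
};

static void example_request_refill(struct example_ctx *ctx)
{
	/* Cannot sleep here, so punt the refill to a workqueue */
	if (!ctx->refill_scheduled) {
		ctx->refill_scheduled = 1;
		schedule_work(&ctx->refill_work);
	}
}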
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 99d025b69079a8..3d9ee91e1f8be0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4558,7 +4558,7 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ struct net_device *dev = bp->dev;
+
+ if (page_mode) {
+- bp->flags &= ~BNXT_FLAG_AGG_RINGS;
++ bp->flags &= ~(BNXT_FLAG_AGG_RINGS | BNXT_FLAG_NO_AGG_RINGS);
+ bp->flags |= BNXT_FLAG_RX_PAGE_MODE;
+
+ if (bp->xdp_prog->aux->xdp_has_frags)
+@@ -9053,7 +9053,6 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
+ struct hwrm_port_mac_ptp_qcfg_output *resp;
+ struct hwrm_port_mac_ptp_qcfg_input *req;
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+- bool phc_cfg;
+ u8 flags;
+ int rc;
+
+@@ -9100,8 +9099,9 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
+ rc = -ENODEV;
+ goto exit;
+ }
+- phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
+- rc = bnxt_ptp_init(bp, phc_cfg);
++ ptp->rtc_configured =
++ (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
++ rc = bnxt_ptp_init(bp);
+ if (rc)
+ netdev_warn(bp->dev, "PTP initialization failed.\n");
+ exit:
+@@ -14494,6 +14494,14 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
+ bnxt_close_nic(bp, true, false);
+
+ WRITE_ONCE(dev->mtu, new_mtu);
++
++ /* MTU change may change the AGG ring settings if an XDP multi-buffer
++ * program is attached. We need to set the AGG rings settings and
++ * rx_skb_func accordingly.
++ */
++ if (READ_ONCE(bp->xdp_prog))
++ bnxt_set_rx_skb_mode(bp, true);
++
+ bnxt_set_ring_params(bp);
+
+ if (netif_running(dev))
+@@ -15231,6 +15239,13 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+
+ for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) {
+ vnic = &bp->vnic_info[i];
++
++ rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
++ if (rc) {
++ netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
++ vnic->vnic_id, rc);
++ return rc;
++ }
+ vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
+ bnxt_hwrm_vnic_update(bp, vnic,
+ VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+@@ -15984,6 +15999,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
+ if (netif_running(dev))
+ dev_close(dev);
+
++ bnxt_ptp_clear(bp);
+ bnxt_clear_int_mode(bp);
+ pci_disable_device(pdev);
+
+@@ -16011,6 +16027,7 @@ static int bnxt_suspend(struct device *device)
+ rc = bnxt_close(dev);
+ }
+ bnxt_hwrm_func_drv_unrgtr(bp);
++ bnxt_ptp_clear(bp);
+ pci_disable_device(bp->pdev);
+ bnxt_free_ctx_mem(bp);
+ rtnl_unlock();
+@@ -16054,6 +16071,10 @@ static int bnxt_resume(struct device *device)
+ if (bp->fw_crash_mem)
+ bnxt_hwrm_crash_dump_mem_cfg(bp);
+
++ if (bnxt_ptp_init(bp)) {
++ kfree(bp->ptp_cfg);
++ bp->ptp_cfg = NULL;
++ }
+ bnxt_get_wol_settings(bp);
+ if (netif_running(dev)) {
+ rc = bnxt_open(dev);
+@@ -16232,8 +16253,12 @@ static void bnxt_io_resume(struct pci_dev *pdev)
+ rtnl_lock();
+
+ err = bnxt_hwrm_func_qcaps(bp);
+- if (!err && netif_running(netdev))
+- err = bnxt_open(netdev);
++ if (!err) {
++ if (netif_running(netdev))
++ err = bnxt_open(netdev);
++ else
++ err = bnxt_reserve_rings(bp, true);
++ }
+
+ if (!err)
+ netif_device_attach(netdev);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index f71cc8188b4e5b..20ba14eb87e00b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2838,19 +2838,24 @@ static int bnxt_get_link_ksettings(struct net_device *dev,
+ }
+
+ base->port = PORT_NONE;
+- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
++ if (media == BNXT_MEDIA_TP) {
+ base->port = PORT_TP;
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.advertising);
++ } else if (media == BNXT_MEDIA_KR) {
++ linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
++ lk_ksettings->link_modes.supported);
++ linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
++ lk_ksettings->link_modes.advertising);
+ } else {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.advertising);
+
+- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC)
++ if (media == BNXT_MEDIA_CR)
+ base->port = PORT_DA;
+ else
+ base->port = PORT_FIBRE;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index fa514be8765028..781225d3ba8ffc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -1024,7 +1024,7 @@ static void bnxt_ptp_free(struct bnxt *bp)
+ }
+ }
+
+-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
++int bnxt_ptp_init(struct bnxt *bp)
+ {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+ int rc;
+@@ -1047,7 +1047,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+
+ if (BNXT_PTP_USE_RTC(bp)) {
+ bnxt_ptp_timecounter_init(bp, false);
+- rc = bnxt_ptp_init_rtc(bp, phc_cfg);
++ rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured);
+ if (rc)
+ goto out;
+ } else {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index f322466ecad350..61e89bb2d2690c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -133,6 +133,7 @@ struct bnxt_ptp_cfg {
+ BNXT_PTP_MSG_PDELAY_REQ | \
+ BNXT_PTP_MSG_PDELAY_RESP)
+ u8 tx_tstamp_en:1;
++ u8 rtc_configured:1;
+ int rx_filter;
+ u32 tstamp_filters;
+
+@@ -180,6 +181,6 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
+ struct tx_ts_cmp *tscmp);
+ void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns);
+ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg);
+-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg);
++int bnxt_ptp_init(struct bnxt *bp);
+ void bnxt_ptp_clear(struct bnxt *bp);
+ #endif
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 37881591774175..d178138981a967 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -17801,6 +17801,9 @@ static int tg3_init_one(struct pci_dev *pdev,
+ } else
+ persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
+
++ if (tg3_asic_rev(tp) == ASIC_REV_57766)
++ persist_dma_mask = DMA_BIT_MASK(31);
++
+ /* Configure DMA attributes. */
+ if (dma_mask > DMA_BIT_MASK(32)) {
+ err = dma_set_mask(&pdev->dev, dma_mask);
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index e44e8b139633fc..060e0e6749380f 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -1248,10 +1248,10 @@ gve_adminq_configure_flow_rule(struct gve_priv *priv,
+ sizeof(struct gve_adminq_configure_flow_rule),
+ flow_rule_cmd);
+
+- if (err) {
++ if (err == -ETIME) {
+ dev_err(&priv->pdev->dev, "Timeout to configure the flow rule, trigger reset");
+ gve_reset(priv, true);
+- } else {
++ } else if (!err) {
+ priv->flow_rules_cache.rules_cache_synced = false;
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index f2506511bbfff4..bce5b76f1e7a58 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -5299,7 +5299,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
+ }
+
+ flags_complete:
+- bitmap_xor(changed_flags, pf->flags, orig_flags, I40E_PF_FLAGS_NBITS);
++ bitmap_xor(changed_flags, new_flags, orig_flags, I40E_PF_FLAGS_NBITS);
+
+ if (test_bit(I40E_FLAG_FW_LLDP_DIS, changed_flags))
+ reset_needed = I40E_PF_RESET_AND_REBUILD_FLAG;
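The i40e bug computed the changed-bits mask from pf->flags, which the function may already have partially rewritten by that point; XOR-ing the requested new_flags against the orig_flags snapshot yields exactly the bits the user toggled. What bitmap_xor() produces, in a small worked sketch:

#include <linux/bitmap.h>

static void example_changed_flags(void)
{
	DECLARE_BITMAP(orig_f, 64);
	DECLARE_BITMAP(new_f, 64);
	DECLARE_BITMAP(changed, 64);

	bitmap_zero(orig_f, 64);
	bitmap_zero(new_f, 64);
	__set_bit(3, orig_f);			/* flag 3 was set ...        */
	__set_bit(5, new_f);			/* ... user clears 3, sets 5 */

	bitmap_xor(changed, new_f, orig_f, 64);	/* changed = {3, 5} */
}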
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 59f62306b9cb02..b6ec01f6fa73e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -1715,8 +1715,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+
+ /* copy Tx queue info from VF into VSI */
+ if (qpi->txq.ring_len > 0) {
+- vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;
+- vsi->tx_rings[i]->count = qpi->txq.ring_len;
++ vsi->tx_rings[q_idx]->dma = qpi->txq.dma_ring_addr;
++ vsi->tx_rings[q_idx]->count = qpi->txq.ring_len;
+
+ /* Disable any existing queue first */
+ if (ice_vf_vsi_dis_single_txq(vf, vsi, q_idx))
+@@ -1725,7 +1725,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ /* Configure a queue with the requested settings */
+ if (ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx)) {
+ dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure TX queue %d\n",
+- vf->vf_id, i);
++ vf->vf_id, q_idx);
+ goto error_param;
+ }
+ }
+@@ -1733,24 +1733,23 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ /* copy Rx queue info from VF into VSI */
+ if (qpi->rxq.ring_len > 0) {
+ u16 max_frame_size = ice_vc_get_max_frame_size(vf);
++ struct ice_rx_ring *ring = vsi->rx_rings[q_idx];
+ u32 rxdid;
+
+- vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
+- vsi->rx_rings[i]->count = qpi->rxq.ring_len;
++ ring->dma = qpi->rxq.dma_ring_addr;
++ ring->count = qpi->rxq.ring_len;
+
+ if (qpi->rxq.crc_disable)
+- vsi->rx_rings[q_idx]->flags |=
+- ICE_RX_FLAGS_CRC_STRIP_DIS;
++ ring->flags |= ICE_RX_FLAGS_CRC_STRIP_DIS;
+ else
+- vsi->rx_rings[q_idx]->flags &=
+- ~ICE_RX_FLAGS_CRC_STRIP_DIS;
++ ring->flags &= ~ICE_RX_FLAGS_CRC_STRIP_DIS;
+
+ if (qpi->rxq.databuffer_size != 0 &&
+ (qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
+ qpi->rxq.databuffer_size < 1024))
+ goto error_param;
+ vsi->rx_buf_len = qpi->rxq.databuffer_size;
+- vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;
++ ring->rx_buf_len = vsi->rx_buf_len;
+ if (qpi->rxq.max_pkt_size > max_frame_size ||
+ qpi->rxq.max_pkt_size < 64)
+ goto error_param;
+@@ -1765,7 +1764,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+
+ if (ice_vsi_cfg_single_rxq(vsi, q_idx)) {
+ dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure RX queue %d\n",
+- vf->vf_id, i);
++ vf->vf_id, q_idx);
+ goto error_param;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 27935c54b91bc7..8216f843a7cd5f 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -112,6 +112,11 @@ struct mac_ops *get_mac_ops(void *cgxd)
+ return ((struct cgx *)cgxd)->mac_ops;
+ }
+
++u32 cgx_get_fifo_len(void *cgxd)
++{
++ return ((struct cgx *)cgxd)->fifo_len;
++}
++
+ void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
+ {
+ writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) +
+@@ -209,6 +214,24 @@ u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id)
+ return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
+ }
+
++static u8 cgx_get_nix_resetbit(struct cgx *cgx)
++{
++ int first_lmac;
++ u8 p2x;
++
++	/* non-98XX silicon supports only the NIX0 block */
++ if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX)
++ return CGX_NIX0_RESET;
++
++ first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac);
++ p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac);
++
++ if (p2x == CMR_P2X_SEL_NIX1)
++ return CGX_NIX1_RESET;
++ else
++ return CGX_NIX0_RESET;
++}
++
+ /* Ensure the required lock for event queue(where asynchronous events are
+ * posted) is acquired before calling this API. Else an asynchronous event(with
+ * latest link status) can reach the destination before this function returns
+@@ -501,7 +524,7 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
+ u8 num_lmacs;
+ u32 fifo_len;
+
+- fifo_len = cgx->mac_ops->fifo_len;
++ fifo_len = cgx->fifo_len;
+ num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);
+
+ switch (num_lmacs) {
+@@ -1719,6 +1742,8 @@ static int cgx_lmac_init(struct cgx *cgx)
+ lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
+ }
+
++ /* Start X2P reset on given MAC block */
++ cgx->mac_ops->mac_x2p_reset(cgx, true);
+ return cgx_lmac_verify_fwi_version(cgx);
+
+ err_bitmap_free:
+@@ -1764,7 +1789,7 @@ static void cgx_populate_features(struct cgx *cgx)
+ u64 cfg;
+
+ cfg = cgx_read(cgx, 0, CGX_CONST);
+- cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
++ cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
+ cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);
+
+ if (is_dev_rpm(cgx))
+@@ -1784,6 +1809,45 @@ static u8 cgx_get_rxid_mapoffset(struct cgx *cgx)
+ return 0x60;
+ }
+
++static void cgx_x2p_reset(void *cgxd, bool enable)
++{
++ struct cgx *cgx = cgxd;
++ int lmac_id;
++ u64 cfg;
++
++ if (enable) {
++ for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac)
++ cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false);
++
++ usleep_range(1000, 2000);
++
++ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
++ cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP;
++ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
++ } else {
++ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
++ cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP);
++ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
++ }
++}
++
++static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable)
++{
++ struct cgx *cgx = cgxd;
++ u64 cfg;
++
++ if (!is_lmac_valid(cgx, lmac_id))
++ return -ENODEV;
++
++ cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
++ if (enable)
++ cfg |= DATA_PKT_RX_EN;
++ else
++ cfg &= ~DATA_PKT_RX_EN;
++ cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
++ return 0;
++}
++
+ static struct mac_ops cgx_mac_ops = {
+ .name = "cgx",
+ .csr_offset = 0,
+@@ -1815,6 +1879,8 @@ static struct mac_ops cgx_mac_ops = {
+ .mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
+ .mac_reset = cgx_lmac_reset,
+ .mac_stats_reset = cgx_stats_reset,
++ .mac_x2p_reset = cgx_x2p_reset,
++ .mac_enadis_rx = cgx_enadis_rx,
+ };
+
+ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
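cgx_lmac_init() now asserts an X2P reset on each MAC block during init, and the AF clears it later once it is ready; asserting first stops receive on every LMAC and waits for in-flight frames to drain before setting the NIX reset and drop bits. The assert/deassert shape around a read-modify-write of the global config register, schematically (example_ names hypothetical):

#include <linux/delay.h>

static void example_x2p_reset(struct example_mac *mac, bool enable)
{
	u64 cfg;
	int i;

	if (enable) {
		/* stop RX on every LMAC before asserting reset */
		for (i = 0; i < mac->nr_lmacs; i++)
			example_rx_enable(mac, i, false);

		usleep_range(1000, 2000);	/* let in-flight frames drain */

		cfg = example_read(mac, EXAMPLE_GLOBAL_CFG);
		example_write(mac, EXAMPLE_GLOBAL_CFG, cfg | EXAMPLE_NIX_RESET);
	} else {
		cfg = example_read(mac, EXAMPLE_GLOBAL_CFG);
		example_write(mac, EXAMPLE_GLOBAL_CFG, cfg & ~EXAMPLE_NIX_RESET);
	}
}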
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+index dc9ace30554af6..1cf12e5c7da873 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+@@ -32,6 +32,10 @@
+ #define CGX_LMAC_TYPE_MASK 0xF
+ #define CGXX_CMRX_INT 0x040
+ #define FW_CGX_INT BIT_ULL(1)
++#define CGXX_CMR_GLOBAL_CONFIG 0x08
++#define CGX_NIX0_RESET BIT_ULL(2)
++#define CGX_NIX1_RESET BIT_ULL(3)
++#define CGX_NSCI_DROP BIT_ULL(9)
+ #define CGXX_CMRX_INT_ENA_W1S 0x058
+ #define CGXX_CMRX_RX_ID_MAP 0x060
+ #define CGXX_CMRX_RX_STAT0 0x070
+@@ -185,4 +189,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
+ int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
+ int pfvf_idx);
+ int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
++u32 cgx_get_fifo_len(void *cgxd);
+ #endif /* CGX_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+index 9ffc6790c51307..6180e68e1765a7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+@@ -72,7 +72,6 @@ struct mac_ops {
+ u8 irq_offset;
+ u8 int_ena_bit;
+ u8 lmac_fwi;
+- u32 fifo_len;
+ bool non_contiguous_serdes_lane;
+ /* RPM & CGX differs in number of Receive/transmit stats */
+ u8 rx_stats_cnt;
+@@ -133,6 +132,8 @@ struct mac_ops {
+ int (*get_fec_stats)(void *cgxd, int lmac_id,
+ struct cgx_fec_stats_rsp *rsp);
+ int (*mac_stats_reset)(void *cgxd, int lmac_id);
++ void (*mac_x2p_reset)(void *cgxd, bool enable);
++ int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable);
+ };
+
+ struct cgx {
+@@ -142,6 +143,10 @@ struct cgx {
+ u8 lmac_count;
+ /* number of LMACs per MAC could be 4 or 8 */
+ u8 max_lmac_per_mac;
++	/* length of the FIFO varies depending on the number
++	 * of LMACs
++	 */
++ u32 fifo_len;
+ #define MAX_LMAC_COUNT 8
+ struct lmac *lmac_idmap[MAX_LMAC_COUNT];
+ struct work_struct cgx_cmd_work;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index 1b34cf9c97035a..2e9945446199ec 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -39,6 +39,8 @@ static struct mac_ops rpm_mac_ops = {
+ .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
+ .mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
++ .mac_x2p_reset = rpm_x2p_reset,
++ .mac_enadis_rx = rpm_enadis_rx,
+ };
+
+ static struct mac_ops rpm2_mac_ops = {
+@@ -72,6 +74,8 @@ static struct mac_ops rpm2_mac_ops = {
+ .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
+ .mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
++ .mac_x2p_reset = rpm_x2p_reset,
++ .mac_enadis_rx = rpm_enadis_rx,
+ };
+
+ bool is_dev_rpm2(void *rpmd)
+@@ -467,7 +471,7 @@ u8 rpm_get_lmac_type(void *rpmd, int lmac_id)
+ int err;
+
+ req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req);
+- err = cgx_fwi_cmd_generic(req, &resp, rpm, 0);
++ err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id);
+ if (!err)
+ return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp);
+ return err;
+@@ -480,7 +484,7 @@ u32 rpm_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ u8 num_lmacs;
+ u32 fifo_len;
+
+- fifo_len = rpm->mac_ops->fifo_len;
++ fifo_len = rpm->fifo_len;
+ num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);
+
+ switch (num_lmacs) {
+@@ -533,9 +537,9 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ */
+ max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
+ if (max_lmac > 4)
+- fifo_len = rpm->mac_ops->fifo_len / 2;
++ fifo_len = rpm->fifo_len / 2;
+ else
+- fifo_len = rpm->mac_ops->fifo_len;
++ fifo_len = rpm->fifo_len;
+
+ if (lmac_id < 4) {
+ num_lmacs = hweight8(lmac_info & 0xF);
+@@ -699,46 +703,51 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
+ if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE)
+ return 0;
+
++ /* latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are common
++ * for all counters. Acquire lock to ensure serialized reads
++ */
++ mutex_lock(&rpm->lock);
+ if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) {
+- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO);
+- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_corr_blks = (val_hi << 16 | val_lo);
+
+- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO);
+- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);
+
+ /* 50G uses 2 Physical serdes lines */
+ if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id ==
+ LMAC_MODE_50G_R) {
+- val_lo = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_VL1_CCW_LO);
+- val_hi = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_corr_blks += (val_hi << 16 | val_lo);
+
+- val_lo = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_VL1_NCCW_LO);
+- val_hi = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_uncorr_blks += (val_hi << 16 | val_lo);
+ }
+ } else {
+ /* enable RS-FEC capture */
+- cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL);
++ cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL);
+ cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id);
+- rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg);
++ rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);
+
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2);
+- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
+ rsp->fec_corr_blks = (val_hi << 32 | val_lo);
+
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3);
+- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
+ rsp->fec_uncorr_blks = (val_hi << 32 | val_lo);
+ }
++ mutex_unlock(&rpm->lock);
+
+ return 0;
+ }
+@@ -763,3 +772,41 @@ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
+
+ return 0;
+ }
++
++void rpm_x2p_reset(void *rpmd, bool enable)
++{
++ rpm_t *rpm = rpmd;
++ int lmac_id;
++ u64 cfg;
++
++ if (enable) {
++ for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac)
++ rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false);
++
++ usleep_range(1000, 2000);
++
++ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
++ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET);
++ } else {
++ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
++ cfg &= ~RPM_NIX0_RESET;
++ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg);
++ }
++}
++
++int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable)
++{
++ rpm_t *rpm = rpmd;
++ u64 cfg;
++
++ if (!is_lmac_valid(rpm, lmac_id))
++ return -ENODEV;
++
++ cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
++ if (enable)
++ cfg |= RPM_RX_EN;
++ else
++ cfg &= ~RPM_RX_EN;
++ rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
++ return 0;
++}
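Per the new comment, the FCFECX_CW_HI and RSFEC fast-data-HI registers are latched and shared by all counters, so an unserialized reader can pair its low half with another reader's latched high half; the mutex makes each lo/hi read pair atomic. Schematically (hypothetical names):

#include <linux/mutex.h>

static u64 example_read_latched(struct example_mac *mac, u32 lo_reg, u32 hi_reg)
{
	u64 lo, hi;

	mutex_lock(&mac->lock);	/* keep the lo read paired with its hi */
	lo = example_read(mac, lo_reg);
	hi = example_read(mac, hi_reg);
	mutex_unlock(&mac->lock);

	return (hi << 16) | lo;
}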
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+index 34b11deb0f3c1d..b8d3972e096aed 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+@@ -17,6 +17,8 @@
+
+ /* Registers */
+ #define RPMX_CMRX_CFG 0x00
++#define RPMX_CMR_GLOBAL_CFG 0x08
++#define RPM_NIX0_RESET BIT_ULL(3)
+ #define RPMX_RX_TS_PREPEND BIT_ULL(22)
+ #define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17)
+ #define RPMX_CMRX_RX_ID_MAP 0x80
+@@ -84,16 +86,18 @@
+ /* FEC stats */
+ #define RPMX_MTI_STAT_STATN_CONTROL 0x10018
+ #define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
+-#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27)
++#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28)
+ #define RPMX_CMD_CLEAR_RX BIT_ULL(30)
+ #define RPMX_CMD_CLEAR_TX BIT_ULL(31)
++#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018
++#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000
+ #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050
+ #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058
+-#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618
+-#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620
+-#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628
+-#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630
+-#define RPMX_MTI_FCFECX_CW_HI 0x38638
++#define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))
+
+ /* CN10KB CSR Declaration */
+ #define RPM2_CMRX_SW_INT 0x1b0
+@@ -137,4 +141,6 @@ bool is_dev_rpm2(void *rpmd);
+ int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
+ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
+ int rpm_stats_reset(void *rpmd, int lmac_id);
++void rpm_x2p_reset(void *rpmd, bool enable);
++int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable);
+ #endif /* RPM_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 1a97fb9032fa44..cd0d7b7774f1af 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1162,6 +1162,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
+ }
+
+ rvu_program_channels(rvu);
++ cgx_start_linkup(rvu);
+
+ err = rvu_mcs_init(rvu);
+ if (err) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 5016ba82e1423a..8555edbb1c8f9a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -997,6 +997,7 @@ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_
+ int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
+ void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
++void cgx_start_linkup(struct rvu *rvu);
+ int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
+ int type);
+ bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 266ecbc1b97a68..992fa0b82e8d2d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -349,6 +349,7 @@ static void rvu_cgx_wq_destroy(struct rvu *rvu)
+
+ int rvu_cgx_init(struct rvu *rvu)
+ {
++ struct mac_ops *mac_ops;
+ int cgx, err;
+ void *cgxd;
+
+@@ -375,6 +376,15 @@ int rvu_cgx_init(struct rvu *rvu)
+ if (err)
+ return err;
+
++ /* Clear X2P reset on all MAC blocks */
++ for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) {
++ cgxd = rvu_cgx_pdata(cgx, rvu);
++ if (!cgxd)
++ continue;
++ mac_ops = get_mac_ops(cgxd);
++ mac_ops->mac_x2p_reset(cgxd, false);
++ }
++
+ /* Register for CGX events */
+ err = cgx_lmac_event_handler_init(rvu);
+ if (err)
+@@ -382,10 +392,26 @@ int rvu_cgx_init(struct rvu *rvu)
+
+ mutex_init(&rvu->cgx_cfg_lock);
+
+- /* Ensure event handler registration is completed, before
+- * we turn on the links
+- */
+- mb();
++ return 0;
++}
++
++void cgx_start_linkup(struct rvu *rvu)
++{
++ unsigned long lmac_bmap;
++ struct mac_ops *mac_ops;
++ int cgx, lmac, err;
++ void *cgxd;
++
++ /* Enable receive on all LMACS */
++ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
++ cgxd = rvu_cgx_pdata(cgx, rvu);
++ if (!cgxd)
++ continue;
++ mac_ops = get_mac_ops(cgxd);
++ lmac_bmap = cgx_get_lmac_bmap(cgxd);
++ for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx)
++ mac_ops->mac_enadis_rx(cgxd, lmac, true);
++ }
+
+ /* Do link up for all CGX ports */
+ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
+@@ -398,8 +424,6 @@ int rvu_cgx_init(struct rvu *rvu)
+ "Link up process failed to start on cgx %d\n",
+ cgx);
+ }
+-
+- return 0;
+ }
+
+ int rvu_cgx_exit(struct rvu *rvu)
+@@ -923,13 +947,12 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
+
+ u32 rvu_cgx_get_fifolen(struct rvu *rvu)
+ {
+- struct mac_ops *mac_ops;
+- u32 fifo_len;
++ void *cgxd = rvu_first_cgx_pdata(rvu);
+
+- mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu));
+- fifo_len = mac_ops ? mac_ops->fifo_len : 0;
++ if (!cgxd)
++ return 0;
+
+- return fifo_len;
++ return cgx_get_fifo_len(cgxd);
+ }
+
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+index c1c99d7054f87f..7417087b6db597 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+@@ -203,6 +203,11 @@ int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf)
+
+ rsp = (struct nix_bandprof_alloc_rsp *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
++
+ if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) {
+ rc = -EIO;
+ goto out;
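otx2_mbox_get_rsp() reports failure as an ERR_PTR rather than NULL, so a caller that dereferences the return value unchecked reads through an encoded negative errno; the hunks around this one add the missing IS_ERR() checks across the cn10k, common, dcbnl, dmac-filter, ethtool, and flows call sites. The required call-site shape, schematically:

#include <linux/err.h>

struct example_rsp {
	int prof_count;
};

static int example_handle(struct example_rsp *rsp)
{
	/* An ERR_PTR is a negative errno encoded in the pointer value;
	 * it must be tested before any dereference. */
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);

	return rsp->prof_count ? 0 : -EIO;
}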
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 87d5776e3b88e9..7510a918d942c0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1837,6 +1837,10 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf)
+ if (!rc) {
+ rsp = (struct nix_hw_info *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
+
+ /* HW counts VLAN insertion bytes (8 for double tag)
+ * irrespective of whether SQE is requesting to insert VLAN
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+index aa01110f04a339..294fba58b67095 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+@@ -315,6 +315,11 @@ int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf)
+ if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ rsp = (struct cgx_pfc_rsp *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ err = PTR_ERR(rsp);
++ goto unlock;
++ }
++
+ if (req->rx_pause != rsp->rx_pause || req->tx_pause != rsp->tx_pause) {
+ dev_warn(pfvf->dev,
+ "Failed to config PFC\n");
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+index 80d853b343f98f..2046dd0da00d85 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+@@ -28,6 +28,11 @@ static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac,
+ if (!err) {
+ rsp = (struct cgx_mac_addr_add_rsp *)
+ otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pf->mbox.lock);
++ return PTR_ERR(rsp);
++ }
++
+ *dmac_index = rsp->index;
+ }
+
+@@ -200,6 +205,10 @@ int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u32 bit_pos)
+
+ rsp = (struct cgx_mac_addr_update_rsp *)
+ otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
+
+ pf->flow_cfg->bmap_to_dmacindex[bit_pos] = rsp->index;
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 32468c663605ef..5197ce816581e3 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -343,6 +343,11 @@ static void otx2_get_pauseparam(struct net_device *netdev,
+ if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ rsp = (struct cgx_pause_frm_cfg *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return;
++ }
++
+ pause->rx_pause = rsp->rx_pause;
+ pause->tx_pause = rsp->tx_pause;
+ }
+@@ -1072,6 +1077,11 @@ static int otx2_set_fecparam(struct net_device *netdev,
+
+ rsp = (struct fec_mode *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+ 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ err = PTR_ERR(rsp);
++ goto end;
++ }
++
+ if (rsp->fec >= 0)
+ pfvf->linfo.fec = rsp->fec;
+ else
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+index 98c31a16c70b4f..58720a161ee24a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+@@ -119,6 +119,8 @@ int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp))
++ goto exit;
+
+ for (ent = 0; ent < rsp->count; ent++)
+ flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent];
+@@ -197,6 +199,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return PTR_ERR(rsp);
++ }
+
+ if (rsp->count != req->count) {
+ netdev_info(pfvf->netdev,
+@@ -232,6 +238,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+
+ frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &freq->hdr);
++ if (IS_ERR(frsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return PTR_ERR(frsp);
++ }
+
+ if (frsp->enable) {
+ pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT;
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index 1a59c952aa01c1..45f115e41857ba 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1394,18 +1394,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+
+ printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
+
+- clk = devm_clk_get(&pdev->dev, NULL);
++ clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(clk)) {
+- dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
++ dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
+ return -ENODEV;
+ }
+- clk_prepare_enable(clk);
+
+ dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
+- if (!dev) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!dev)
++ return -ENOMEM;
+
+ platform_set_drvdata(pdev, dev);
+ pep = netdev_priv(dev);
+@@ -1523,8 +1520,6 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ mdiobus_free(pep->smi_bus);
+ err_netdev:
+ free_netdev(dev);
+-err_clk:
+- clk_disable_unprepare(clk);
+ return err;
+ }
+
+@@ -1542,7 +1537,6 @@ static void pxa168_eth_remove(struct platform_device *pdev)
+ if (dev->phydev)
+ phy_disconnect(dev->phydev);
+
+- clk_disable_unprepare(pep->clk);
+ mdiobus_unregister(pep->smi_bus);
+ mdiobus_free(pep->smi_bus);
+ unregister_netdev(dev);
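devm_clk_get_enabled() folds devm_clk_get() plus clk_prepare_enable() into one call and registers the disable/unprepare as a devres action, which is why both the probe error path (err_clk) and pxa168_eth_remove() lose their manual clk_disable_unprepare() calls. A sketch of the managed form:

#include <linux/clk.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* acquired, prepared and enabled; all undone on unbind */
	clk = devm_clk_get_enabled(&pdev->dev, NULL);
	if (IS_ERR(clk))
		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
				     "failed to get and enable clock\n");

	return 0;
}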
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index 8577db3308cc56..7f68468c2e7598 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -516,6 +516,7 @@ void mlx5_modify_lag(struct mlx5_lag *ldev,
+ blocking_notifier_call_chain(&dev0->priv.lag_nh,
+ MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE,
+ ndev);
++ dev_put(ndev);
+ }
+ }
+
+@@ -918,6 +919,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ {
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
+ struct lag_tracker tracker = { };
++ struct net_device *ndev;
+ bool do_bond, roce_lag;
+ int err;
+ int i;
+@@ -981,6 +983,16 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ return;
+ }
+ }
++ if (tracker.tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) {
++ ndev = mlx5_lag_active_backup_get_netdev(dev0);
++		/* Only SR-IOV and RoCE LAG should have tracker->tx_type
++		 * set, so there is no need to check the mode.
++		 */
++ blocking_notifier_call_chain(&dev0->priv.lag_nh,
++ MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE,
++ ndev);
++ dev_put(ndev);
++ }
+ } else if (mlx5_lag_should_modify_lag(ldev, do_bond)) {
+ mlx5_modify_lag(ldev, &tracker);
+ } else if (mlx5_lag_should_disable_lag(ldev, do_bond)) {
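mlx5_lag_active_backup_get_netdev() returns the netdev with a reference held, so each consumer must drop it with dev_put() once the notifier chain has run; the patch adds the missing put in mlx5_modify_lag() and balances the new call in mlx5_do_bond(). Hold/put discipline, schematically (example_ names hypothetical):

static void example_notify_lowerstate(struct example_dev *dev)
{
	struct net_device *ndev;

	ndev = example_get_active_netdev(dev);	/* returns with ref held */
	if (!ndev)
		return;

	example_call_notifiers(dev, ndev);
	dev_put(ndev);	/* balance the hold taken by the getter */
}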
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+index a4809fe0fc2496..268489b15616fd 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+@@ -319,7 +319,6 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ free_irqs:
+ fbnic_free_irqs(fbd);
+ free_fbd:
+- pci_disable_device(pdev);
+ fbnic_devlink_free(fbd);
+
+ return err;
+@@ -349,7 +348,6 @@ static void fbnic_remove(struct pci_dev *pdev)
+ fbnic_fw_disable_mbx(fbd);
+ fbnic_free_irqs(fbd);
+
+- pci_disable_device(pdev);
+ fbnic_devlink_free(fbd);
+ }
+
+diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+index 7251121ab196e3..16eb3de60eb6df 100644
+--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
++++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+@@ -366,12 +366,13 @@ static void vcap_api_iterator_init_test(struct kunit *test)
+ struct vcap_typegroup typegroups[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 0, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_typegroup typegroups2[] = {
+ { .offset = 0, .width = 3, .value = 4, },
+ { .offset = 49, .width = 2, .value = 0, },
+ { .offset = 98, .width = 2, .value = 0, },
++ { }
+ };
+
+ vcap_iter_init(&iter, 52, typegroups, 86);
+@@ -399,6 +400,7 @@ static void vcap_api_iterator_next_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 0, },
+ { .offset = 196, .width = 2, .value = 0, },
+ { .offset = 245, .width = 1, .value = 0, },
++ { }
+ };
+ int idx;
+
+@@ -433,7 +435,7 @@ static void vcap_api_encode_typegroups_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 5, .value = 27, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_encode_typegroups(stream, 49, typegroups, false);
+@@ -463,6 +465,7 @@ static void vcap_api_encode_bit_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 1, .value = 0, },
++ { }
+ };
+
+ vcap_iter_init(&iter, 49, typegroups, 44);
+@@ -489,7 +492,7 @@ static void vcap_api_encode_field_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 5, .value = 27, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_field rf = {
+ .type = VCAP_FIELD_U32,
+@@ -538,7 +541,7 @@ static void vcap_api_encode_short_field_test(struct kunit *test)
+ { .offset = 0, .width = 3, .value = 7, },
+ { .offset = 21, .width = 2, .value = 3, },
+ { .offset = 42, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_field rf = {
+ .type = VCAP_FIELD_U32,
+@@ -608,7 +611,7 @@ static void vcap_api_encode_keyfield_test(struct kunit *test)
+ struct vcap_typegroup tgt[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_test_api_init(&admin);
+@@ -671,7 +674,7 @@ static void vcap_api_encode_max_keyfield_test(struct kunit *test)
+ struct vcap_typegroup tgt[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ u32 keyres[] = {
+ 0x928e8a84,
+@@ -732,7 +735,7 @@ static void vcap_api_encode_actionfield_test(struct kunit *test)
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 21, .width = 1, .value = 1, },
+ { .offset = 42, .width = 1, .value = 0, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_encode_actionfield(&rule, &caf, &rf, tgt);
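The VCAP iterator and encoder walk a vcap_typegroup array until they hit an all-zero entry, so every table needs a terminator; some of the kunit tables spelled it as an explicit zeroed struct and others omitted it entirely, letting the walk run past the array. The { } entries above give each table a proper sentinel. How such a sentinel-terminated walk ends, in a small sketch:

struct example_tg {
	int offset;
	int width;
	int value;
};

static int example_count(const struct example_tg *tg)
{
	int n = 0;

	/* a zero-width entry terminates the table */
	while (tg[n].width)
		n++;

	return n;
}

static const struct example_tg example_table[] = {
	{ .offset = 0, .width = 2, .value = 2 },
	{ .offset = 156, .width = 1, .value = 0 },
	{ }	/* sentinel */
};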
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase.h b/drivers/net/ethernet/realtek/rtase/rtase.h
+index 583c33930f886f..4a4434869b10a8 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase.h
++++ b/drivers/net/ethernet/realtek/rtase/rtase.h
+@@ -9,7 +9,10 @@
+ #ifndef RTASE_H
+ #define RTASE_H
+
+-#define RTASE_HW_VER_MASK 0x7C800000
++#define RTASE_HW_VER_MASK 0x7C800000
++#define RTASE_HW_VER_906X_7XA 0x00800000
++#define RTASE_HW_VER_906X_7XC 0x04000000
++#define RTASE_HW_VER_907XD_V1 0x04800000
+
+ #define RTASE_RX_DMA_BURST_256 4
+ #define RTASE_TX_DMA_BURST_UNLIMITED 7
+@@ -327,6 +330,8 @@ struct rtase_private {
+ u16 int_nums;
+ u16 tx_int_mit;
+ u16 rx_int_mit;
++
++ u32 hw_ver;
+ };
+
+ #define RTASE_LSO_64K 64000
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+index f8777b7663d35d..1bfe5ef40c522d 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+@@ -1714,10 +1714,21 @@ static int rtase_get_settings(struct net_device *dev,
+ struct ethtool_link_ksettings *cmd)
+ {
+ u32 supported = SUPPORTED_MII | SUPPORTED_Pause | SUPPORTED_Asym_Pause;
++ const struct rtase_private *tp = netdev_priv(dev);
+
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+ supported);
+- cmd->base.speed = SPEED_5000;
++
++ switch (tp->hw_ver) {
++ case RTASE_HW_VER_906X_7XA:
++ case RTASE_HW_VER_906X_7XC:
++ cmd->base.speed = SPEED_5000;
++ break;
++ case RTASE_HW_VER_907XD_V1:
++ cmd->base.speed = SPEED_10000;
++ break;
++ }
++
+ cmd->base.duplex = DUPLEX_FULL;
+ cmd->base.port = PORT_MII;
+ cmd->base.autoneg = AUTONEG_DISABLE;
+@@ -1972,20 +1983,21 @@ static void rtase_init_software_variable(struct pci_dev *pdev,
+ tp->dev->max_mtu = RTASE_MAX_JUMBO_SIZE;
+ }
+
+-static bool rtase_check_mac_version_valid(struct rtase_private *tp)
++static int rtase_check_mac_version_valid(struct rtase_private *tp)
+ {
+- u32 hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
+- bool known_ver = false;
++ int ret = -ENODEV;
++
++ tp->hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
+
+- switch (hw_ver) {
+- case 0x00800000:
+- case 0x04000000:
+- case 0x04800000:
+- known_ver = true;
++ switch (tp->hw_ver) {
++ case RTASE_HW_VER_906X_7XA:
++ case RTASE_HW_VER_906X_7XC:
++ case RTASE_HW_VER_907XD_V1:
++ ret = 0;
+ break;
+ }
+
+- return known_ver;
++ return ret;
+ }
+
+ static int rtase_init_board(struct pci_dev *pdev, struct net_device **dev_out,
+@@ -2105,9 +2117,13 @@ static int rtase_init_one(struct pci_dev *pdev,
+ tp->pdev = pdev;
+
+ /* identify chip attached to board */
+- if (!rtase_check_mac_version_valid(tp))
+- return dev_err_probe(&pdev->dev, -ENODEV,
+- "unknown chip version, contact rtase maintainers (see MAINTAINERS file)\n");
++ ret = rtase_check_mac_version_valid(tp);
++ if (ret != 0) {
++ dev_err(&pdev->dev,
++ "unknown chip version: 0x%08x, contact rtase maintainers (see MAINTAINERS file)\n",
++ tp->hw_ver);
++ goto err_out_release_board;
++ }
+
+ rtase_init_software_variable(pdev, tp);
+ rtase_init_hardware(tp);
+@@ -2181,6 +2197,7 @@ static int rtase_init_one(struct pci_dev *pdev,
+ netif_napi_del(&ivec->napi);
+ }
+
++err_out_release_board:
+ rtase_release_board(pdev, dev, ioaddr);
+
+ return ret;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index fdb4c773ec98ab..e897b49aa9e05e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -486,6 +486,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
+ plat_dat->pcs_exit = socfpga_dwmac_pcs_exit;
+ plat_dat->select_pcs = socfpga_dwmac_select_pcs;
+
++ plat_dat->riwt_off = 1;
++
+ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+index a4cf682dca650e..0ee73a265545c3 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+@@ -72,14 +72,6 @@ int txgbe_request_queue_irqs(struct wx *wx)
+ return err;
+ }
+
+-static int txgbe_request_gpio_irq(struct txgbe *txgbe)
+-{
+- txgbe->gpio_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO);
+- return request_threaded_irq(txgbe->gpio_irq, NULL,
+- txgbe_gpio_irq_handler,
+- IRQF_ONESHOT, "txgbe-gpio-irq", txgbe);
+-}
+-
+ static int txgbe_request_link_irq(struct txgbe *txgbe)
+ {
+ txgbe->link_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK);
+@@ -149,11 +141,6 @@ static irqreturn_t txgbe_misc_irq_thread_fn(int irq, void *data)
+ u32 eicr;
+
+ eicr = wx_misc_isb(wx, WX_ISB_MISC);
+- if (eicr & TXGBE_PX_MISC_GPIO) {
+- sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO);
+- handle_nested_irq(sub_irq);
+- nhandled++;
+- }
+ if (eicr & (TXGBE_PX_MISC_ETH_LK | TXGBE_PX_MISC_ETH_LKDN |
+ TXGBE_PX_MISC_ETH_AN)) {
+ sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK);
+@@ -179,7 +166,6 @@ static void txgbe_del_irq_domain(struct txgbe *txgbe)
+
+ void txgbe_free_misc_irq(struct txgbe *txgbe)
+ {
+- free_irq(txgbe->gpio_irq, txgbe);
+ free_irq(txgbe->link_irq, txgbe);
+ free_irq(txgbe->misc.irq, txgbe);
+ txgbe_del_irq_domain(txgbe);
+@@ -191,7 +177,7 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+ struct wx *wx = txgbe->wx;
+ int hwirq, err;
+
+- txgbe->misc.nirqs = 2;
++ txgbe->misc.nirqs = 1;
+ txgbe->misc.domain = irq_domain_add_simple(NULL, txgbe->misc.nirqs, 0,
+ &txgbe_misc_irq_domain_ops, txgbe);
+ if (!txgbe->misc.domain)
+@@ -216,20 +202,14 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+ if (err)
+ goto del_misc_irq;
+
+- err = txgbe_request_gpio_irq(txgbe);
+- if (err)
+- goto free_msic_irq;
+-
+ err = txgbe_request_link_irq(txgbe);
+ if (err)
+- goto free_gpio_irq;
++ goto free_msic_irq;
+
+ wx->misc_irq_domain = true;
+
+ return 0;
+
+-free_gpio_irq:
+- free_irq(txgbe->gpio_irq, txgbe);
+ free_msic_irq:
+ free_irq(txgbe->misc.irq, txgbe);
+ del_misc_irq:
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+index 93180225a6f14c..f7745026803643 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+@@ -82,7 +82,6 @@ static void txgbe_up_complete(struct wx *wx)
+ {
+ struct net_device *netdev = wx->netdev;
+
+- txgbe_reinit_gpio_intr(wx);
+ wx_control_hw(wx, true);
+ wx_configure_vectors(wx);
+
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+index 67b61afdde96ce..f26946198a2fb9 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+@@ -162,7 +162,7 @@ static struct phylink_pcs *txgbe_phylink_mac_select(struct phylink_config *confi
+ struct wx *wx = phylink_to_wx(config);
+ struct txgbe *txgbe = wx->priv;
+
+- if (interface == PHY_INTERFACE_MODE_10GBASER)
++ if (wx->media_type != sp_media_copper)
+ return &txgbe->xpcs->pcs;
+
+ return NULL;
+@@ -358,169 +358,8 @@ static int txgbe_gpio_direction_out(struct gpio_chip *chip, unsigned int offset,
+ return 0;
+ }
+
+-static void txgbe_gpio_irq_ack(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32(wx, WX_GPIO_EOI, BIT(hwirq));
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_gpio_irq_mask(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- gpiochip_disable_irq(gc, hwirq);
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), BIT(hwirq));
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_gpio_irq_unmask(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- gpiochip_enable_irq(gc, hwirq);
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), 0);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_toggle_trigger(struct gpio_chip *gc, unsigned int offset)
+-{
+- struct wx *wx = gpiochip_get_data(gc);
+- u32 pol, val;
+-
+- pol = rd32(wx, WX_GPIO_POLARITY);
+- val = rd32(wx, WX_GPIO_EXT);
+-
+- if (val & BIT(offset))
+- pol &= ~BIT(offset);
+- else
+- pol |= BIT(offset);
+-
+- wr32(wx, WX_GPIO_POLARITY, pol);
+-}
+-
+-static int txgbe_gpio_set_type(struct irq_data *d, unsigned int type)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- u32 level, polarity, mask;
+- unsigned long flags;
+-
+- mask = BIT(hwirq);
+-
+- if (type & IRQ_TYPE_LEVEL_MASK) {
+- level = 0;
+- irq_set_handler_locked(d, handle_level_irq);
+- } else {
+- level = mask;
+- irq_set_handler_locked(d, handle_edge_irq);
+- }
+-
+- if (type == IRQ_TYPE_EDGE_RISING || type == IRQ_TYPE_LEVEL_HIGH)
+- polarity = mask;
+- else
+- polarity = 0;
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+-
+- wr32m(wx, WX_GPIO_INTEN, mask, mask);
+- wr32m(wx, WX_GPIO_INTTYPE_LEVEL, mask, level);
+- if (type == IRQ_TYPE_EDGE_BOTH)
+- txgbe_toggle_trigger(gc, hwirq);
+- else
+- wr32m(wx, WX_GPIO_POLARITY, mask, polarity);
+-
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-
+- return 0;
+-}
+-
+-static const struct irq_chip txgbe_gpio_irq_chip = {
+- .name = "txgbe-gpio-irq",
+- .irq_ack = txgbe_gpio_irq_ack,
+- .irq_mask = txgbe_gpio_irq_mask,
+- .irq_unmask = txgbe_gpio_irq_unmask,
+- .irq_set_type = txgbe_gpio_set_type,
+- .flags = IRQCHIP_IMMUTABLE,
+- GPIOCHIP_IRQ_RESOURCE_HELPERS,
+-};
+-
+-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data)
+-{
+- struct txgbe *txgbe = data;
+- struct wx *wx = txgbe->wx;
+- irq_hw_number_t hwirq;
+- unsigned long gpioirq;
+- struct gpio_chip *gc;
+- unsigned long flags;
+-
+- gpioirq = rd32(wx, WX_GPIO_INTSTATUS);
+-
+- gc = txgbe->gpio;
+- for_each_set_bit(hwirq, &gpioirq, gc->ngpio) {
+- int gpio = irq_find_mapping(gc->irq.domain, hwirq);
+- struct irq_data *d = irq_get_irq_data(gpio);
+- u32 irq_type = irq_get_trigger_type(gpio);
+-
+- txgbe_gpio_irq_ack(d);
+- handle_nested_irq(gpio);
+-
+- if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) {
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- txgbe_toggle_trigger(gc, hwirq);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+- }
+- }
+-
+- return IRQ_HANDLED;
+-}
+-
+-void txgbe_reinit_gpio_intr(struct wx *wx)
+-{
+- struct txgbe *txgbe = wx->priv;
+- irq_hw_number_t hwirq;
+- unsigned long gpioirq;
+- struct gpio_chip *gc;
+- unsigned long flags;
+-
+- /* for gpio interrupt pending before irq enable */
+- gpioirq = rd32(wx, WX_GPIO_INTSTATUS);
+-
+- gc = txgbe->gpio;
+- for_each_set_bit(hwirq, &gpioirq, gc->ngpio) {
+- int gpio = irq_find_mapping(gc->irq.domain, hwirq);
+- struct irq_data *d = irq_get_irq_data(gpio);
+- u32 irq_type = irq_get_trigger_type(gpio);
+-
+- txgbe_gpio_irq_ack(d);
+-
+- if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) {
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- txgbe_toggle_trigger(gc, hwirq);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+- }
+- }
+-}
+-
+ static int txgbe_gpio_init(struct txgbe *txgbe)
+ {
+- struct gpio_irq_chip *girq;
+ struct gpio_chip *gc;
+ struct device *dev;
+ struct wx *wx;
+@@ -550,11 +389,6 @@ static int txgbe_gpio_init(struct txgbe *txgbe)
+ gc->direction_input = txgbe_gpio_direction_in;
+ gc->direction_output = txgbe_gpio_direction_out;
+
+- girq = &gc->irq;
+- gpio_irq_chip_set_chip(girq, &txgbe_gpio_irq_chip);
+- girq->default_type = IRQ_TYPE_NONE;
+- girq->handler = handle_bad_irq;
+-
+ ret = devm_gpiochip_add_data(dev, gc, wx);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+index 8a026d804fe24c..3938985355ed6c 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+@@ -4,8 +4,6 @@
+ #ifndef _TXGBE_PHY_H_
+ #define _TXGBE_PHY_H_
+
+-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data);
+-void txgbe_reinit_gpio_intr(struct wx *wx);
+ irqreturn_t txgbe_link_irq_handler(int irq, void *data);
+ int txgbe_init_phy(struct txgbe *txgbe);
+ void txgbe_remove_phy(struct txgbe *txgbe);
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+index 959102c4c3797e..8ea413a7abe9d3 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+@@ -75,8 +75,7 @@
+ #define TXGBE_PX_MISC_IEN_MASK \
+ (TXGBE_PX_MISC_ETH_LKDN | TXGBE_PX_MISC_DEV_RST | \
+ TXGBE_PX_MISC_ETH_EVENT | TXGBE_PX_MISC_ETH_LK | \
+- TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR | \
+- TXGBE_PX_MISC_GPIO)
++ TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR)
+
+ /* Port cfg registers */
+ #define TXGBE_CFG_PORT_ST 0x14404
+@@ -313,8 +312,7 @@ struct txgbe_nodes {
+ };
+
+ enum txgbe_misc_irqs {
+- TXGBE_IRQ_GPIO = 0,
+- TXGBE_IRQ_LINK,
++ TXGBE_IRQ_LINK = 0,
+ TXGBE_IRQ_MAX
+ };
+
+@@ -335,7 +333,6 @@ struct txgbe {
+ struct clk_lookup *clock;
+ struct clk *clk;
+ struct gpio_chip *gpio;
+- unsigned int gpio_irq;
+ unsigned int link_irq;
+
+ /* flow director */
+diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
+index 9d8f43b28aac5b..ea1f64596a85cf 100644
+--- a/drivers/net/mdio/mdio-ipq4019.c
++++ b/drivers/net/mdio/mdio-ipq4019.c
+@@ -352,8 +352,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
+ /* The platform resource is provided on the chipset IPQ5018 */
+ /* This resource is optional */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+- if (res)
++ if (res) {
+ priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(priv->eth_ldo_rdy))
++ return PTR_ERR(priv->eth_ldo_rdy);
++ }
+
+ bus->name = "ipq4019_mdio";
+ bus->read = ipq4019_mdio_read_c22;
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index f0d58092e7e961..3612b0633bd177 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -176,14 +176,13 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ return ret;
+ }
+
+- if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
++ if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ sa.rx = true;
+
+- if (xs->props.family == AF_INET6)
+- memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
+- else
+- memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
+- }
++ if (xs->props.family == AF_INET6)
++ memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
++ else
++ memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
+
+ /* the preparations worked, so save the info */
+ memcpy(&ipsec->sa[sa_idx], &sa, sizeof(sa));
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 8adf77e3557e7a..531b1b6a37d190 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1652,13 +1652,13 @@ static int lan78xx_set_wol(struct net_device *netdev,
+ struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+ int ret;
+
++ if (wol->wolopts & ~WAKE_ALL)
++ return -EINVAL;
++
+ ret = usb_autopm_get_interface(dev->intf);
+ if (ret < 0)
+ return ret;
+
+- if (wol->wolopts & ~WAKE_ALL)
+- return -EINVAL;
+-
+ pdata->wol = wol->wolopts;
+
+ device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
+@@ -2380,6 +2380,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
+ if (dev->chipid == ID_REV_CHIP_ID_7801_) {
+ if (phy_is_pseudo_fixed_link(phydev)) {
+ fixed_phy_unregister(phydev);
++ phy_device_free(phydev);
+ } else {
+ phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
+ 0xfffffff0);
+@@ -4246,8 +4247,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
+
+ phy_disconnect(net->phydev);
+
+- if (phy_is_pseudo_fixed_link(phydev))
++ if (phy_is_pseudo_fixed_link(phydev)) {
+ fixed_phy_unregister(phydev);
++ phy_device_free(phydev);
++ }
+
+ usb_scuttle_anchored_urbs(&dev->deferred);
+
+@@ -4414,29 +4417,30 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ period = ep_intr->desc.bInterval;
+ maxp = usb_maxpacket(dev->udev, dev->pipe_intr);
+- buf = kmalloc(maxp, GFP_KERNEL);
+- if (!buf) {
++
++ dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
++ if (!dev->urb_intr) {
+ ret = -ENOMEM;
+ goto out5;
+ }
+
+- dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
+- if (!dev->urb_intr) {
++ buf = kmalloc(maxp, GFP_KERNEL);
++ if (!buf) {
+ ret = -ENOMEM;
+- goto out6;
+- } else {
+- usb_fill_int_urb(dev->urb_intr, dev->udev,
+- dev->pipe_intr, buf, maxp,
+- intr_complete, dev, period);
+- dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
++ goto free_urbs;
+ }
+
++ usb_fill_int_urb(dev->urb_intr, dev->udev,
++ dev->pipe_intr, buf, maxp,
++ intr_complete, dev, period);
++ dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
++
+ dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);
+
+ /* Reject broken descriptors. */
+ if (dev->maxpacket == 0) {
+ ret = -ENODEV;
+- goto out6;
++ goto free_urbs;
+ }
+
+ /* driver requires remote-wakeup capability during autosuspend. */
+@@ -4444,7 +4448,7 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ ret = lan78xx_phy_init(dev);
+ if (ret < 0)
+- goto out7;
++ goto free_urbs;
+
+ ret = register_netdev(netdev);
+ if (ret != 0) {
+@@ -4466,10 +4470,8 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ out8:
+ phy_disconnect(netdev->phydev);
+-out7:
++free_urbs:
+ usb_free_urb(dev->urb_intr);
+-out6:
+- kfree(buf);
+ out5:
+ lan78xx_unbind(dev, intf);
+ out4:
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 646e1737d4c47c..6b467696bc982c 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -9121,7 +9121,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss1[
+ {6, {2633, 2925}, {1215, 1350}, {585, 650} },
+ {7, {2925, 3250}, {1350, 1500}, {650, 722} },
+ {8, {3510, 3900}, {1620, 1800}, {780, 867} },
+- {9, {3900, 4333}, {1800, 2000}, {780, 867} }
++ {9, {3900, 4333}, {1800, 2000}, {865, 960} }
+ };
+
+ /*MCS parameters with Nss = 2 */
+@@ -9136,7 +9136,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss2[
+ {6, {5265, 5850}, {2430, 2700}, {1170, 1300} },
+ {7, {5850, 6500}, {2700, 3000}, {1300, 1444} },
+ {8, {7020, 7800}, {3240, 3600}, {1560, 1733} },
+- {9, {7800, 8667}, {3600, 4000}, {1560, 1733} }
++ {9, {7800, 8667}, {3600, 4000}, {1730, 1920} }
+ };
+
+ static void ath10k_mac_get_rate_flags_ht(struct ath10k *ar, u32 rate, u8 nss, u8 mcs,
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index f477afd325deaf..7a22483b35cd98 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2180,6 +2180,9 @@ static int ath11k_qmi_request_device_info(struct ath11k_base *ab)
+ ab->mem = bar_addr_va;
+ ab->mem_len = resp.bar_size;
+
++ if (!ab->hw_params.ce_remap)
++ ab->mem_ce = ab->mem;
++
+ return 0;
+ out:
+ return ret;
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 61aa78d8bd8c8f..217eb57663f058 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -1202,10 +1202,16 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+ if (!skb)
+ continue;
+
+- skb_cb = ATH12K_SKB_CB(skb);
+- ar = skb_cb->ar;
+- if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+- wake_up(&ar->dp.tx_empty_waitq);
++ /* if we are unregistering, hw would've been destroyed and
++ * ar is no longer valid.
++ */
++ if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))) {
++ skb_cb = ATH12K_SKB_CB(skb);
++ ar = skb_cb->ar;
++
++ if (atomic_dec_and_test(&ar->dp.num_tx_pending))
++ wake_up(&ar->dp.tx_empty_waitq);
++ }
+
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr,
+ skb->len, DMA_TO_DEVICE);
+@@ -1241,6 +1247,7 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+ }
+
+ kfree(dp->spt_info);
++ dp->spt_info = NULL;
+ }
+
+ static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab)
+@@ -1276,8 +1283,10 @@ void ath12k_dp_free(struct ath12k_base *ab)
+
+ ath12k_dp_rx_reo_cmd_list_cleanup(ab);
+
+- for (i = 0; i < ab->hw_params->max_tx_ring; i++)
++ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
+ kfree(dp->tx_ring[i].tx_status);
++ dp->tx_ring[i].tx_status = NULL;
++ }
+
+ ath12k_dp_rx_free(ab);
+ /* Deinit any SOC level resource */
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 137394c364603b..6d0784a21558ea 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -917,7 +917,10 @@ void ath12k_mac_peer_cleanup_all(struct ath12k *ar)
+
+ spin_lock_bh(&ab->base_lock);
+ list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
+- ath12k_dp_rx_peer_tid_cleanup(ar, peer);
++ /* Skip Rx TID cleanup for self peer */
++ if (peer->sta)
++ ath12k_dp_rx_peer_tid_cleanup(ar, peer);
++
+ list_del(&peer->list);
+ kfree(peer);
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/wow.c b/drivers/net/wireless/ath/ath12k/wow.c
+index 9b8684abbe40ae..3624180b25b970 100644
+--- a/drivers/net/wireless/ath/ath12k/wow.c
++++ b/drivers/net/wireless/ath/ath12k/wow.c
+@@ -191,7 +191,7 @@ ath12k_wow_convert_8023_to_80211(struct ath12k *ar,
+ memcpy(bytemask, eth_bytemask, eth_pat_len);
+
+ pat_len = eth_pat_len;
+- } else if (eth_pkt_ofs + eth_pat_len < prot_ofs) {
++ } else if (size_add(eth_pkt_ofs, eth_pat_len) < prot_ofs) {
+ memcpy(pat, eth_pat, ETH_ALEN - eth_pkt_ofs);
+ memcpy(bytemask, eth_bytemask, ETH_ALEN - eth_pkt_ofs);
+
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index eb631fd3336d8d..b5257b2b4aa527 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -294,6 +294,9 @@ int htc_connect_service(struct htc_target *target,
+ return -ETIMEDOUT;
+ }
+
++ if (target->conn_rsp_epid < 0 || target->conn_rsp_epid >= ENDPOINT_MAX)
++ return -EINVAL;
++
+ *conn_rsp_epid = target->conn_rsp_epid;
+ return 0;
+ err:
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+index fe4f657561056c..af930e34c21f8a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+@@ -110,9 +110,8 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ }
+ strreplace(board_type, '/', '-');
+ settings->board_type = board_type;
+-
+- of_node_put(root);
+ }
++ of_node_put(root);
+
+ if (!np || !of_device_is_compatible(np, "brcm,bcm4329-fmac"))
+ return;
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945.c b/drivers/net/wireless/intel/iwlegacy/3945.c
+index 14d2331ee6cb97..b0656b143f77a2 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945.c
+@@ -566,7 +566,7 @@ il3945_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb)
+ if (!(rx_end->status & RX_RES_STATUS_NO_CRC32_ERROR) ||
+ !(rx_end->status & RX_RES_STATUS_NO_RXE_OVERFLOW)) {
+ D_RX("Bad CRC or FIFO: 0x%08X.\n", rx_end->status);
+- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++ return;
+ }
+
+ /* Convert 3945's rssi indicator to dBm */
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index fcccde7bb65922..05c4af41bdb960 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -664,7 +664,7 @@ il4965_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb)
+ if (!(rx_pkt_status & RX_RES_STATUS_NO_CRC32_ERROR) ||
+ !(rx_pkt_status & RX_RES_STATUS_NO_RXE_OVERFLOW)) {
+ D_RX("Bad CRC or FIFO: 0x%08X.\n", le32_to_cpu(rx_pkt_status));
+- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++ return;
+ }
+
+ /* This will be used in several places later */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 80b9a115245fe8..d37d83d246354e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1237,6 +1237,7 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm)
+ fast_resume = mvm->fast_resume;
+
+ if (fast_resume) {
++ iwl_mvm_mei_device_state(mvm, true);
+ ret = iwl_mvm_fast_resume(mvm);
+ if (ret) {
+ iwl_mvm_stop_device(mvm);
+@@ -1377,10 +1378,13 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm, bool suspend)
+ iwl_mvm_rm_aux_sta(mvm);
+
+ if (suspend &&
+- mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
++ mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_22000) {
+ iwl_mvm_fast_suspend(mvm);
+- else
++ /* From this point on, we won't touch the device */
++ iwl_mvm_mei_device_state(mvm, false);
++ } else {
+ iwl_mvm_stop_device(mvm);
++ }
+
+ iwl_mvm_async_handlers_purge(mvm);
+ /* async_handlers_list is empty and will stay empty: HW is stopped */
+diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c
+index d33a994906a7bb..27f44a9f0bc1f9 100644
+--- a/drivers/net/wireless/intersil/p54/p54spi.c
++++ b/drivers/net/wireless/intersil/p54/p54spi.c
+@@ -624,7 +624,7 @@ static int p54spi_probe(struct spi_device *spi)
+ gpio_direction_input(p54spi_gpio_irq);
+
+ ret = request_irq(gpio_to_irq(p54spi_gpio_irq),
+- p54spi_interrupt, 0, "p54spi",
++ p54spi_interrupt, IRQF_NO_AUTOEN, "p54spi",
+ priv->spi);
+ if (ret < 0) {
+ dev_err(&priv->spi->dev, "request_irq() failed");
+@@ -633,8 +633,6 @@ static int p54spi_probe(struct spi_device *spi)
+
+ irq_set_irq_type(gpio_to_irq(p54spi_gpio_irq), IRQ_TYPE_EDGE_RISING);
+
+- disable_irq(gpio_to_irq(p54spi_gpio_irq));
+-
+ INIT_WORK(&priv->work, p54spi_work);
+ init_completion(&priv->fw_comp);
+ INIT_LIST_HEAD(&priv->tx_pending);
+diff --git a/drivers/net/wireless/marvell/mwifiex/cmdevt.c b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
+index 1cff001bdc5145..b30ed321c6251a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cmdevt.c
++++ b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
+@@ -938,8 +938,10 @@ void mwifiex_process_assoc_resp(struct mwifiex_adapter *adapter)
+ assoc_resp.links[0].bss = priv->req_bss;
+ assoc_resp.buf = priv->assoc_rsp_buf;
+ assoc_resp.len = priv->assoc_rsp_size;
++ wiphy_lock(priv->wdev.wiphy);
+ cfg80211_rx_assoc_resp(priv->netdev,
+ &assoc_resp);
++ wiphy_unlock(priv->wdev.wiphy);
+ priv->assoc_rsp_size = 0;
+ }
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index d03129d5d24e3d..4a96281792cc1a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -875,7 +875,7 @@ struct mwifiex_ietypes_chanstats {
+ struct mwifiex_ie_types_wildcard_ssid_params {
+ struct mwifiex_ie_types_header header;
+ u8 max_ssid_length;
+- u8 ssid[1];
++ u8 ssid[];
+ } __packed;
+
+ #define TSF_DATA_SIZE 8
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 96d1f6039fbca3..855019fe548582 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -1679,7 +1679,8 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ }
+
+ ret = devm_request_irq(dev, adapter->irq_wakeup,
+- mwifiex_irq_wakeup_handler, IRQF_TRIGGER_LOW,
++ mwifiex_irq_wakeup_handler,
++ IRQF_TRIGGER_LOW | IRQF_NO_AUTOEN,
+ "wifi_wake", adapter);
+ if (ret) {
+ dev_err(dev, "Failed to request irq_wakeup %d (%d)\n",
+@@ -1687,7 +1688,6 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ goto err_exit;
+ }
+
+- disable_irq(adapter->irq_wakeup);
+ if (device_init_wakeup(dev, true)) {
+ dev_err(dev, "fail to init wakeup for mwifiex\n");
+ goto err_exit;
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index 42c04bf858da37..1f1f6280a0f251 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -494,7 +494,9 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ }
+ }
+
++ wiphy_lock(priv->wdev.wiphy);
+ cfg80211_rx_mlme_mgmt(priv->netdev, skb->data, pkt_len);
++ wiphy_unlock(priv->wdev.wiphy);
+ }
+
+ if (priv->adapter->host_mlme_enabled &&
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 9ecf3fb29b558f..8bc127c5a538cb 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -608,6 +608,9 @@ static int wilc_mac_open(struct net_device *ndev)
+ return ret;
+ }
+
++ wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
++ vif->idx);
++
+ netdev_dbg(ndev, "Mac address: %pM\n", ndev->dev_addr);
+ ret = wilc_set_mac_address(vif, ndev->dev_addr);
+ if (ret) {
+@@ -618,9 +621,6 @@ static int wilc_mac_open(struct net_device *ndev)
+ return ret;
+ }
+
+- wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
+- vif->idx);
+-
+ mgmt_regs.interface_stypes = vif->mgmt_reg_stypes;
+ /* so we detect a change */
+ vif->mgmt_reg_stypes = 0;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index 7891c988dd5f03..f95898f68d68a5 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -5058,10 +5058,12 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ }
+
+ if (changed & BSS_CHANGED_BEACON_ENABLED) {
+- if (bss_conf->enable_beacon)
++ if (bss_conf->enable_beacon) {
+ rtl8xxxu_start_tx_beacon(priv);
+- else
++ schedule_delayed_work(&priv->update_beacon_work, 0);
++ } else {
+ rtl8xxxu_stop_tx_beacon(priv);
++ }
+ }
+
+ if (changed & BSS_CHANGED_BEACON)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/efuse.c b/drivers/net/wireless/realtek/rtlwifi/efuse.c
+index 82cf5fb5175fef..6518e77b89f578 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/efuse.c
++++ b/drivers/net/wireless/realtek/rtlwifi/efuse.c
+@@ -162,10 +162,19 @@ void efuse_write_1byte(struct ieee80211_hw *hw, u16 address, u8 value)
+ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
+ {
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
++ u16 max_attempts = 10000;
+ u32 value32;
+ u8 readbyte;
+ u16 retry;
+
++ /*
++ * In case of USB devices, transfer speeds are limited, hence
++ * efuse I/O reads could be (way) slower. So, decrease (a lot)
++ * the read attempts in case of failures.
++ */
++ if (rtlpriv->rtlhal.interface == INTF_USB)
++ max_attempts = 10;
++
+ rtl_write_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 1,
+ (_offset & 0xff));
+ readbyte = rtl_read_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 2);
+@@ -178,7 +187,7 @@ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
+
+ retry = 0;
+ value32 = rtl_read_dword(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]);
+- while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) {
++ while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < max_attempts)) {
+ value32 = rtl_read_dword(rtlpriv,
+ rtlpriv->cfg->maps[EFUSE_CTRL]);
+ retry++;
+diff --git a/drivers/net/wireless/realtek/rtw89/cam.c b/drivers/net/wireless/realtek/rtw89/cam.c
+index 4476fc7e53db74..8d140b94cb4403 100644
+--- a/drivers/net/wireless/realtek/rtw89/cam.c
++++ b/drivers/net/wireless/realtek/rtw89/cam.c
+@@ -211,25 +211,17 @@ static int rtw89_cam_get_addr_cam_key_idx(struct rtw89_addr_cam_entry *addr_cam,
+ return 0;
+ }
+
+-static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta,
+- const struct rtw89_sec_cam_entry *sec_cam,
+- bool inform_fw)
++static int __rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ const struct rtw89_sec_cam_entry *sec_cam,
++ bool inform_fw)
+ {
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- struct rtw89_vif *rtwvif;
+ struct rtw89_addr_cam_entry *addr_cam;
+ unsigned int i;
+ int ret = 0;
+
+- if (!vif) {
+- rtw89_err(rtwdev, "No iface for deleting sec cam\n");
+- return -EINVAL;
+- }
+-
+- rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ addr_cam = rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+
+ for_each_set_bit(i, addr_cam->sec_cam_map, RTW89_SEC_CAM_IN_ADDR_CAM) {
+ if (addr_cam->sec_ent[i] != sec_cam->sec_cam_idx)
+@@ -239,11 +231,11 @@ static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
+ }
+
+ if (inform_fw) {
+- ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret)
+ rtw89_err(rtwdev,
+ "failed to update dctl cam del key: %d\n", ret);
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret)
+ rtw89_err(rtwdev, "failed to update cam del key: %d\n", ret);
+ }
+@@ -251,25 +243,17 @@ static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta,
+- struct ieee80211_key_conf *key,
+- struct rtw89_sec_cam_entry *sec_cam)
++static int __rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_key_conf *key,
++ struct rtw89_sec_cam_entry *sec_cam)
+ {
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- struct rtw89_vif *rtwvif;
+ struct rtw89_addr_cam_entry *addr_cam;
+ u8 key_idx = 0;
+ int ret;
+
+- if (!vif) {
+- rtw89_err(rtwdev, "No iface for adding sec cam\n");
+- return -EINVAL;
+- }
+-
+- rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ addr_cam = rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+
+ if (key->cipher == WLAN_CIPHER_SUITE_WEP40 ||
+ key->cipher == WLAN_CIPHER_SUITE_WEP104)
+@@ -285,13 +269,13 @@ static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
+ addr_cam->sec_ent_keyid[key_idx] = key->keyidx;
+ addr_cam->sec_ent[key_idx] = sec_cam->sec_cam_idx;
+ set_bit(key_idx, addr_cam->sec_cam_map);
+- ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n",
+ ret);
+ return ret;
+ }
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to update addr cam sec entry: %d\n",
+ ret);
+@@ -302,6 +286,92 @@ static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
+ return 0;
+ }
+
++static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ const struct rtw89_sec_cam_entry *sec_cam,
++ bool inform_fw)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
++ int ret;
++
++ if (!vif) {
++ rtw89_err(rtwdev, "No iface for deleting sec cam\n");
++ return -EINVAL;
++ }
++
++ rtwvif = vif_to_rtwvif(vif);
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ rtwsta_link = rtwsta ? rtwsta->links[link_id] : NULL;
++ if (rtwsta && !rtwsta_link)
++ continue;
++
++ ret = __rtw89_cam_detach_sec_cam(rtwdev, rtwvif_link, rtwsta_link,
++ sec_cam, inform_fw);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ struct ieee80211_key_conf *key,
++ struct rtw89_sec_cam_entry *sec_cam)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
++ int key_link_id;
++ int ret;
++
++ if (!vif) {
++ rtw89_err(rtwdev, "No iface for adding sec cam\n");
++ return -EINVAL;
++ }
++
++ rtwvif = vif_to_rtwvif(vif);
++
++ key_link_id = ieee80211_vif_is_mld(vif) ? key->link_id : 0;
++ if (key_link_id >= 0) {
++ rtwvif_link = rtwvif->links[key_link_id];
++ rtwsta_link = rtwsta ? rtwsta->links[key_link_id] : NULL;
++
++ if (!rtwvif_link || (rtwsta && !rtwsta_link)) {
++ rtw89_err(rtwdev, "No drv link for adding sec cam\n");
++ return -ENOLINK;
++ }
++
++ return __rtw89_cam_attach_sec_cam(rtwdev, rtwvif_link,
++ rtwsta_link, key, sec_cam);
++ }
++
++ /* key_link_id < 0: MLD pairwise key */
++ if (!rtwsta) {
++ rtw89_err(rtwdev, "No sta for adding MLD pairwise sec cam\n");
++ return -EINVAL;
++ }
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = __rtw89_cam_attach_sec_cam(rtwdev, rtwvif_link,
++ rtwsta_link, key, sec_cam);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
+ static int rtw89_cam_sec_key_install(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+@@ -485,10 +555,10 @@ void rtw89_cam_deinit_bssid_cam(struct rtw89_dev *rtwdev,
+ clear_bit(bssid_cam->bssid_cam_idx, cam_info->bssid_cam_map);
+ }
+
+-void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = &rtwvif->addr_cam;
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_addr_cam_entry *addr_cam = &rtwvif_link->addr_cam;
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
+
+ rtw89_cam_deinit_addr_cam(rtwdev, addr_cam);
+ rtw89_cam_deinit_bssid_cam(rtwdev, bssid_cam);
+@@ -593,7 +663,7 @@ static int rtw89_cam_get_avail_bssid_cam(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_bssid_cam_entry *bssid_cam,
+ const u8 *bssid)
+ {
+@@ -613,7 +683,7 @@ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+ }
+
+ bssid_cam->bssid_cam_idx = bssid_cam_idx;
+- bssid_cam->phy_idx = rtwvif->phy_idx;
++ bssid_cam->phy_idx = rtwvif_link->phy_idx;
+ bssid_cam->len = BSSID_CAM_ENT_SIZE;
+ bssid_cam->offset = 0;
+ bssid_cam->valid = true;
+@@ -622,20 +692,21 @@ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+ return 0;
+ }
+
+-void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
+
+- ether_addr_copy(bssid_cam->bssid, rtwvif->bssid);
++ ether_addr_copy(bssid_cam->bssid, rtwvif_link->bssid);
+ }
+
+-int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = &rtwvif->addr_cam;
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_addr_cam_entry *addr_cam = &rtwvif_link->addr_cam;
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
+ int ret;
+
+- ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, rtwvif->bssid);
++ ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif_link, bssid_cam,
++ rtwvif_link->bssid);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to init bssid cam\n");
+ return ret;
+@@ -651,19 +722,27 @@ int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ }
+
+ int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, u8 *cmd)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, u8 *cmd)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif, rtwsta);
+- u8 bss_color = vif->bss_conf.he_bss_color.color;
++ struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif_link,
++ rtwsta_link);
++ struct ieee80211_bss_conf *bss_conf;
++ u8 bss_color;
+ u8 bss_mask;
+
+- if (vif->bss_conf.nontransmitted)
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ bss_color = bss_conf->he_bss_color.color;
++
++ if (bss_conf->nontransmitted)
+ bss_mask = RTW89_BSSID_MATCH_5_BYTES;
+ else
+ bss_mask = RTW89_BSSID_MATCH_ALL;
+
++ rcu_read_unlock();
++
+ FWCMD_SET_ADDR_BSSID_IDX(cmd, bssid_cam->bssid_cam_idx);
+ FWCMD_SET_ADDR_BSSID_OFFSET(cmd, bssid_cam->offset);
+ FWCMD_SET_ADDR_BSSID_LEN(cmd, bssid_cam->len);
+@@ -694,19 +773,30 @@ static u8 rtw89_cam_addr_hash(u8 start, const u8 *addr)
+ }
+
+ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ const u8 *scan_mac_addr,
+ u8 *cmd)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
+- struct ieee80211_sta *sta = rtwsta_to_sta_safe(rtwsta);
+- const u8 *sma = scan_mac_addr ? scan_mac_addr : rtwvif->mac_addr;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_addr_cam_entry *addr_cam =
++ rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
++ struct ieee80211_sta *sta = rtwsta_link_to_sta_safe(rtwsta_link);
++ struct ieee80211_link_sta *link_sta;
++ const u8 *sma = scan_mac_addr ? scan_mac_addr : rtwvif_link->mac_addr;
+ u8 sma_hash, tma_hash, addr_msk_start;
+ u8 sma_start = 0;
+ u8 tma_start = 0;
+- u8 *tma = sta ? sta->addr : rtwvif->bssid;
++ const u8 *tma;
++
++ rcu_read_lock();
++
++ if (sta) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ tma = link_sta->addr;
++ } else {
++ tma = rtwvif_link->bssid;
++ }
+
+ if (addr_cam->addr_mask != 0) {
+ addr_msk_start = __ffs(addr_cam->addr_mask);
+@@ -723,10 +813,10 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+ FWCMD_SET_ADDR_LEN(cmd, addr_cam->len);
+
+ FWCMD_SET_ADDR_VALID(cmd, addr_cam->valid);
+- FWCMD_SET_ADDR_NET_TYPE(cmd, rtwvif->net_type);
+- FWCMD_SET_ADDR_BCN_HIT_COND(cmd, rtwvif->bcn_hit_cond);
+- FWCMD_SET_ADDR_HIT_RULE(cmd, rtwvif->hit_rule);
+- FWCMD_SET_ADDR_BB_SEL(cmd, rtwvif->phy_idx);
++ FWCMD_SET_ADDR_NET_TYPE(cmd, rtwvif_link->net_type);
++ FWCMD_SET_ADDR_BCN_HIT_COND(cmd, rtwvif_link->bcn_hit_cond);
++ FWCMD_SET_ADDR_HIT_RULE(cmd, rtwvif_link->hit_rule);
++ FWCMD_SET_ADDR_BB_SEL(cmd, rtwvif_link->phy_idx);
+ FWCMD_SET_ADDR_ADDR_MASK(cmd, addr_cam->addr_mask);
+ FWCMD_SET_ADDR_MASK_SEL(cmd, addr_cam->mask_sel);
+ FWCMD_SET_ADDR_SMA_HASH(cmd, sma_hash);
+@@ -748,20 +838,21 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+ FWCMD_SET_ADDR_TMA4(cmd, tma[4]);
+ FWCMD_SET_ADDR_TMA5(cmd, tma[5]);
+
+- FWCMD_SET_ADDR_PORT_INT(cmd, rtwvif->port);
+- FWCMD_SET_ADDR_TSF_SYNC(cmd, rtwvif->port);
+- FWCMD_SET_ADDR_TF_TRS(cmd, rtwvif->trigger);
+- FWCMD_SET_ADDR_LSIG_TXOP(cmd, rtwvif->lsig_txop);
+- FWCMD_SET_ADDR_TGT_IND(cmd, rtwvif->tgt_ind);
+- FWCMD_SET_ADDR_FRM_TGT_IND(cmd, rtwvif->frm_tgt_ind);
+- FWCMD_SET_ADDR_MACID(cmd, rtwsta ? rtwsta->mac_id : rtwvif->mac_id);
+- if (rtwvif->net_type == RTW89_NET_TYPE_INFRA)
++ FWCMD_SET_ADDR_PORT_INT(cmd, rtwvif_link->port);
++ FWCMD_SET_ADDR_TSF_SYNC(cmd, rtwvif_link->port);
++ FWCMD_SET_ADDR_TF_TRS(cmd, rtwvif_link->trigger);
++ FWCMD_SET_ADDR_LSIG_TXOP(cmd, rtwvif_link->lsig_txop);
++ FWCMD_SET_ADDR_TGT_IND(cmd, rtwvif_link->tgt_ind);
++ FWCMD_SET_ADDR_FRM_TGT_IND(cmd, rtwvif_link->frm_tgt_ind);
++ FWCMD_SET_ADDR_MACID(cmd, rtwsta_link ? rtwsta_link->mac_id :
++ rtwvif_link->mac_id);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_INFRA)
+ FWCMD_SET_ADDR_AID12(cmd, vif->cfg.aid & 0xfff);
+- else if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ else if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ FWCMD_SET_ADDR_AID12(cmd, sta ? sta->aid & 0xfff : 0);
+- FWCMD_SET_ADDR_WOL_PATTERN(cmd, rtwvif->wowlan_pattern);
+- FWCMD_SET_ADDR_WOL_UC(cmd, rtwvif->wowlan_uc);
+- FWCMD_SET_ADDR_WOL_MAGIC(cmd, rtwvif->wowlan_magic);
++ FWCMD_SET_ADDR_WOL_PATTERN(cmd, rtwvif_link->wowlan_pattern);
++ FWCMD_SET_ADDR_WOL_UC(cmd, rtwvif_link->wowlan_uc);
++ FWCMD_SET_ADDR_WOL_MAGIC(cmd, rtwvif_link->wowlan_magic);
+ FWCMD_SET_ADDR_WAPI(cmd, addr_cam->wapi);
+ FWCMD_SET_ADDR_SEC_ENT_MODE(cmd, addr_cam->sec_ent_mode);
+ FWCMD_SET_ADDR_SEC_ENT0_KEYID(cmd, addr_cam->sec_ent_keyid[0]);
+@@ -780,18 +871,22 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+ FWCMD_SET_ADDR_SEC_ENT4(cmd, addr_cam->sec_ent[4]);
+ FWCMD_SET_ADDR_SEC_ENT5(cmd, addr_cam->sec_ent[5]);
+ FWCMD_SET_ADDR_SEC_ENT6(cmd, addr_cam->sec_ent[6]);
++
++ rcu_read_unlock();
+ }
+
+ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v1 *h2c)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ struct rtw89_addr_cam_entry *addr_cam =
++ rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ u8 *ptk_tx_iv = rtw_wow->key_info.ptk_tx_iv;
+
+- h2c->c0 = le32_encode_bits(rtwsta ? rtwsta->mac_id : rtwvif->mac_id,
++ h2c->c0 = le32_encode_bits(rtwsta_link ? rtwsta_link->mac_id :
++ rtwvif_link->mac_id,
+ DCTLINFO_V1_C0_MACID) |
+ le32_encode_bits(1, DCTLINFO_V1_C0_OP);
+
+@@ -862,15 +957,17 @@ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev,
+ }
+
+ void rtw89_cam_fill_dctl_sec_cam_info_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ struct rtw89_addr_cam_entry *addr_cam =
++ rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ u8 *ptk_tx_iv = rtw_wow->key_info.ptk_tx_iv;
+
+- h2c->c0 = le32_encode_bits(rtwsta ? rtwsta->mac_id : rtwvif->mac_id,
++ h2c->c0 = le32_encode_bits(rtwsta_link ? rtwsta_link->mac_id :
++ rtwvif_link->mac_id,
+ DCTLINFO_V2_C0_MACID) |
+ le32_encode_bits(1, DCTLINFO_V2_C0_OP);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/cam.h b/drivers/net/wireless/realtek/rtw89/cam.h
+index 5d7b624c2dd428..a6f72edd30fe3a 100644
+--- a/drivers/net/wireless/realtek/rtw89/cam.h
++++ b/drivers/net/wireless/realtek/rtw89/cam.h
+@@ -526,34 +526,34 @@ struct rtw89_h2c_dctlinfo_ud_v2 {
+ #define DCTLINFO_V2_W12_MLD_TA_BSSID_H_V1 GENMASK(15, 0)
+ #define DCTLINFO_V2_W12_ALL GENMASK(15, 0)
+
+-int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
+-void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
++int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
++void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
+ int rtw89_cam_init_addr_cam(struct rtw89_dev *rtwdev,
+ struct rtw89_addr_cam_entry *addr_cam,
+ const struct rtw89_bssid_cam_entry *bssid_cam);
+ void rtw89_cam_deinit_addr_cam(struct rtw89_dev *rtwdev,
+ struct rtw89_addr_cam_entry *addr_cam);
+ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_bssid_cam_entry *bssid_cam,
+ const u8 *bssid);
+ void rtw89_cam_deinit_bssid_cam(struct rtw89_dev *rtwdev,
+ struct rtw89_bssid_cam_entry *bssid_cam);
+ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *vif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *vif,
++ struct rtw89_sta_link *rtwsta_link,
+ const u8 *scan_mac_addr, u8 *cmd);
+ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v1 *h2c);
+ void rtw89_cam_fill_dctl_sec_cam_info_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c);
+ int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, u8 *cmd);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, u8 *cmd);
+ int rtw89_cam_sec_key_add(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+@@ -564,6 +564,6 @@ int rtw89_cam_sec_key_del(struct rtw89_dev *rtwdev,
+ struct ieee80211_key_conf *key,
+ bool inform_fw);
+ void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ void rtw89_cam_reset_keys(struct rtw89_dev *rtwdev);
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
+index 7070c85e2c2883..ba6332da8019c1 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.c
++++ b/drivers/net/wireless/realtek/rtw89/chan.c
+@@ -234,6 +234,18 @@ void rtw89_entity_init(struct rtw89_dev *rtwdev)
+ rtw89_config_default_chandef(rtwdev);
+ }
+
++static bool rtw89_vif_is_active_role(struct rtw89_vif *rtwvif)
++{
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ if (rtwvif_link->chanctx_assigned)
++ return true;
++
++ return false;
++}
++
+ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev,
+ struct rtw89_entity_weight *w)
+ {
+@@ -255,7 +267,7 @@ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev,
+ }
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (rtwvif->chanctx_assigned)
++ if (rtw89_vif_is_active_role(rtwvif))
+ w->active_roles++;
+ }
+ }
+@@ -387,9 +399,9 @@ int rtw89_iterate_mcc_roles(struct rtw89_dev *rtwdev,
+ static u32 rtw89_mcc_get_tbtt_ofst(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *role, u64 tsf)
+ {
+- struct rtw89_vif *rtwvif = role->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = role->rtwvif_link;
+ u32 bcn_intvl_us = ieee80211_tu_to_usec(role->beacon_interval);
+- u64 sync_tsf = READ_ONCE(rtwvif->sync_bcn_tsf);
++ u64 sync_tsf = READ_ONCE(rtwvif_link->sync_bcn_tsf);
+ u32 remainder;
+
+ if (tsf < sync_tsf) {
+@@ -413,8 +425,8 @@ static int __mcc_fw_req_tsf(struct rtw89_dev *rtwdev, u64 *tsf_ref, u64 *tsf_aux
+ int ret;
+
+ req.group = mcc->group;
+- req.macid_x = ref->rtwvif->mac_id;
+- req.macid_y = aux->rtwvif->mac_id;
++ req.macid_x = ref->rtwvif_link->mac_id;
++ req.macid_y = aux->rtwvif_link->mac_id;
+ ret = rtw89_fw_h2c_mcc_req_tsf(rtwdev, &req, &rpt);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -440,10 +452,10 @@ static int __mrc_fw_req_tsf(struct rtw89_dev *rtwdev, u64 *tsf_ref, u64 *tsf_aux
+ BUILD_BUG_ON(RTW89_MAC_MRC_MAX_REQ_TSF_NUM < NUM_OF_RTW89_MCC_ROLES);
+
+ arg.num = 2;
+- arg.infos[0].band = ref->rtwvif->mac_idx;
+- arg.infos[0].port = ref->rtwvif->port;
+- arg.infos[1].band = aux->rtwvif->mac_idx;
+- arg.infos[1].port = aux->rtwvif->port;
++ arg.infos[0].band = ref->rtwvif_link->mac_idx;
++ arg.infos[0].port = ref->rtwvif_link->port;
++ arg.infos[1].band = aux->rtwvif_link->mac_idx;
++ arg.infos[1].port = aux->rtwvif_link->port;
+
+ ret = rtw89_fw_h2c_mrc_req_tsf(rtwdev, &arg, &rpt);
+ if (ret) {
+@@ -522,23 +534,31 @@ u32 rtw89_mcc_role_fw_macid_bitmap_to_u32(struct rtw89_mcc_role *mcc_role)
+
+ static void rtw89_mcc_role_macid_sta_iter(void *data, struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_mcc_role *mcc_role = data;
+- struct rtw89_vif *target = mcc_role->rtwvif;
++ struct rtw89_vif *target = mcc_role->rtwvif_link->rtwvif;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_sta_link *rtwsta_link;
+
+ if (rtwvif != target)
+ return;
+
+- rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta->mac_id);
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link)) {
++ rtw89_err(rtwdev, "mcc sta macid: find no link on HW-0\n");
++ return;
++ }
++
++ rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta_link->mac_id);
+ }
+
+ static void rtw89_mcc_fill_role_macid_bitmap(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+ {
+- struct rtw89_vif *rtwvif = mcc_role->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = mcc_role->rtwvif_link;
+
+- rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif->mac_id);
++ rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif_link->mac_id);
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_mcc_role_macid_sta_iter,
+ mcc_role);
+@@ -564,8 +584,9 @@ static void rtw89_mcc_fill_role_policy(struct rtw89_dev *rtwdev,
+ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(mcc_role->rtwvif);
++ struct rtw89_vif_link *rtwvif_link = mcc_role->rtwvif_link;
+ struct ieee80211_p2p_noa_desc *noa_desc;
++ struct ieee80211_bss_conf *bss_conf;
+ u32 bcn_intvl_us = ieee80211_tu_to_usec(mcc_role->beacon_interval);
+ u32 max_toa_us, max_tob_us, max_dur_us;
+ u32 start_time, interval, duration;
+@@ -576,13 +597,18 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ if (!mcc_role->is_go && !mcc_role->is_gc)
+ return;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
+ /* find the first periodic NoA */
+ for (i = 0; i < RTW89_P2P_MAX_NOA_NUM; i++) {
+- noa_desc = &vif->bss_conf.p2p_noa_attr.desc[i];
++ noa_desc = &bss_conf->p2p_noa_attr.desc[i];
+ if (noa_desc->count == 255)
+ goto fill;
+ }
+
++ rcu_read_unlock();
+ return;
+
+ fill:
+@@ -590,6 +616,8 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ interval = le32_to_cpu(noa_desc->interval);
+ duration = le32_to_cpu(noa_desc->duration);
+
++ rcu_read_unlock();
++
+ if (interval != bcn_intvl_us) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role limit: mismatch interval: %d vs. %d\n",
+@@ -597,7 +625,7 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ return;
+ }
+
+- ret = rtw89_mac_port_get_tsf(rtwdev, mcc_role->rtwvif, &tsf);
++ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return;
+@@ -632,15 +660,21 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_mcc_role *role)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
+ const struct rtw89_chan *chan;
+
+ memset(role, 0, sizeof(*role));
+- role->rtwvif = rtwvif;
+- role->beacon_interval = vif->bss_conf.beacon_int;
++ role->rtwvif_link = rtwvif_link;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ role->beacon_interval = bss_conf->beacon_int;
++
++ rcu_read_unlock();
+
+ if (!role->beacon_interval) {
+ rtw89_warn(rtwdev,
+@@ -650,10 +684,10 @@ static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev,
+
+ role->duration = role->beacon_interval / 2;
+
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
+ role->is_2ghz = chan->band_type == RTW89_BAND_2G;
+- role->is_go = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_GO;
+- role->is_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
++ role->is_go = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_GO;
++ role->is_gc = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
+
+ rtw89_mcc_fill_role_macid_bitmap(rtwdev, role);
+ rtw89_mcc_fill_role_policy(rtwdev, role);
+@@ -678,7 +712,7 @@ static void rtw89_mcc_fill_bt_role(struct rtw89_dev *rtwdev)
+ }
+
+ struct rtw89_mcc_fill_role_selector {
+- struct rtw89_vif *bind_vif[NUM_OF_RTW89_CHANCTX];
++ struct rtw89_vif_link *bind_vif[NUM_OF_RTW89_CHANCTX];
+ };
+
+ static_assert((u8)NUM_OF_RTW89_CHANCTX >= NUM_OF_RTW89_MCC_ROLES);
+@@ -689,7 +723,7 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ void *data)
+ {
+ struct rtw89_mcc_fill_role_selector *sel = data;
+- struct rtw89_vif *role_vif = sel->bind_vif[ordered_idx];
++ struct rtw89_vif_link *role_vif = sel->bind_vif[ordered_idx];
+ int ret;
+
+ if (!role_vif) {
+@@ -712,21 +746,28 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_mcc_fill_role_selector sel = {};
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
+ int ret;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (!rtwvif->chanctx_assigned)
++ if (!rtw89_vif_is_active_role(rtwvif))
++ continue;
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "mcc fill roles: find no link on HW-0\n");
+ continue;
++ }
+
+- if (sel.bind_vif[rtwvif->chanctx_idx]) {
++ if (sel.bind_vif[rtwvif_link->chanctx_idx]) {
+ rtw89_warn(rtwdev,
+ "MCC skip extra vif <macid %d> on chanctx[%d]\n",
+- rtwvif->mac_id, rtwvif->chanctx_idx);
++ rtwvif_link->mac_id, rtwvif_link->chanctx_idx);
+ continue;
+ }
+
+- sel.bind_vif[rtwvif->chanctx_idx] = rtwvif;
++ sel.bind_vif[rtwvif_link->chanctx_idx] = rtwvif_link;
+ }
+
+ ret = rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_fill_role_iterator, &sel);
+@@ -754,13 +795,13 @@ static void rtw89_mcc_assign_pattern(struct rtw89_dev *rtwdev,
+ memset(&pattern->courtesy, 0, sizeof(pattern->courtesy));
+
+ if (pattern->tob_aux <= 0 || pattern->toa_aux <= 0) {
+- pattern->courtesy.macid_tgt = aux->rtwvif->mac_id;
+- pattern->courtesy.macid_src = ref->rtwvif->mac_id;
++ pattern->courtesy.macid_tgt = aux->rtwvif_link->mac_id;
++ pattern->courtesy.macid_src = ref->rtwvif_link->mac_id;
+ pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT;
+ pattern->courtesy.enable = true;
+ } else if (pattern->tob_ref <= 0 || pattern->toa_ref <= 0) {
+- pattern->courtesy.macid_tgt = ref->rtwvif->mac_id;
+- pattern->courtesy.macid_src = aux->rtwvif->mac_id;
++ pattern->courtesy.macid_tgt = ref->rtwvif_link->mac_id;
++ pattern->courtesy.macid_src = aux->rtwvif_link->mac_id;
+ pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT;
+ pattern->courtesy.enable = true;
+ }
+@@ -1263,7 +1304,7 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ u64 tsf_src;
+ int ret;
+
+- ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif, &tsf_src);
++ ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif_link, &tsf_src);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return;
+@@ -1280,12 +1321,12 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ div_u64_rem(tbtt_tgt, bcn_intvl_src_us, &remainder);
+ tsf_ofst_tgt = bcn_intvl_src_us - remainder;
+
+- config->sync.macid_tgt = tgt->rtwvif->mac_id;
+- config->sync.band_tgt = tgt->rtwvif->mac_idx;
+- config->sync.port_tgt = tgt->rtwvif->port;
+- config->sync.macid_src = src->rtwvif->mac_id;
+- config->sync.band_src = src->rtwvif->mac_idx;
+- config->sync.port_src = src->rtwvif->port;
++ config->sync.macid_tgt = tgt->rtwvif_link->mac_id;
++ config->sync.band_tgt = tgt->rtwvif_link->mac_idx;
++ config->sync.port_tgt = tgt->rtwvif_link->port;
++ config->sync.macid_src = src->rtwvif_link->mac_id;
++ config->sync.band_src = src->rtwvif_link->mac_idx;
++ config->sync.port_src = src->rtwvif_link->port;
+ config->sync.offset = tsf_ofst_tgt / 1024;
+ config->sync.enable = true;
+
+@@ -1294,7 +1335,7 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ config->sync.macid_tgt, config->sync.macid_src,
+ config->sync.offset);
+
+- rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif, src->rtwvif,
++ rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif_link, src->rtwvif_link,
+ config->sync.offset);
+ }
+
+@@ -1305,13 +1346,13 @@ static int rtw89_mcc_fill_start_tsf(struct rtw89_dev *rtwdev)
+ struct rtw89_mcc_config *config = &mcc->config;
+ u32 bcn_intvl_ref_us = ieee80211_tu_to_usec(ref->beacon_interval);
+ u32 tob_ref_us = ieee80211_tu_to_usec(config->pattern.tob_ref);
+- struct rtw89_vif *rtwvif = ref->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = ref->rtwvif_link;
+ u64 tsf, start_tsf;
+ u32 cur_tbtt_ofst;
+ u64 min_time;
+ int ret;
+
+- ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf);
++ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return ret;
+@@ -1390,13 +1431,13 @@ static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *ro
+ const struct rtw89_chan *chan;
+ int ret;
+
+- chan = rtw89_chan_get(rtwdev, role->rtwvif->chanctx_idx);
++ chan = rtw89_chan_get(rtwdev, role->rtwvif_link->chanctx_idx);
+ req.central_ch_seg0 = chan->channel;
+ req.primary_ch = chan->primary_channel;
+ req.bandwidth = chan->band_width;
+ req.ch_band_type = chan->band_type;
+
+- req.macid = role->rtwvif->mac_id;
++ req.macid = role->rtwvif_link->mac_id;
+ req.group = mcc->group;
+ req.c2h_rpt = policy->c2h_rpt;
+ req.tx_null_early = policy->tx_null_early;
+@@ -1421,7 +1462,7 @@ static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *ro
+ }
+
+ ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group,
+- role->rtwvif->mac_id,
++ role->rtwvif_link->mac_id,
+ role->macid_bitmap);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -1448,7 +1489,7 @@ void __mrc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role,
+ slot_arg->duration = role->duration;
+ slot_arg->role_num = 1;
+
+- chan = rtw89_chan_get(rtwdev, role->rtwvif->chanctx_idx);
++ chan = rtw89_chan_get(rtwdev, role->rtwvif_link->chanctx_idx);
+
+ slot_arg->roles[0].role_type = RTW89_H2C_MRC_ROLE_WIFI;
+ slot_arg->roles[0].is_master = role == ref;
+@@ -1458,7 +1499,7 @@ void __mrc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role,
+ slot_arg->roles[0].primary_ch = chan->primary_channel;
+ slot_arg->roles[0].en_tx_null = !policy->dis_tx_null;
+ slot_arg->roles[0].null_early = policy->tx_null_early;
+- slot_arg->roles[0].macid = role->rtwvif->mac_id;
++ slot_arg->roles[0].macid = role->rtwvif_link->mac_id;
+ slot_arg->roles[0].macid_main_bitmap =
+ rtw89_mcc_role_fw_macid_bitmap_to_u32(role);
+ }
+@@ -1569,7 +1610,7 @@ static int __mcc_fw_start(struct rtw89_dev *rtwdev, bool replace)
+ }
+ }
+
+- req.macid = ref->rtwvif->mac_id;
++ req.macid = ref->rtwvif_link->mac_id;
+ req.tsf_high = config->start_tsf >> 32;
+ req.tsf_low = config->start_tsf;
+
+@@ -1598,7 +1639,7 @@ static void __mrc_fw_add_courtesy(struct rtw89_dev *rtwdev,
+ if (!courtesy->enable)
+ return;
+
+- if (courtesy->macid_src == ref->rtwvif->mac_id) {
++ if (courtesy->macid_src == ref->rtwvif_link->mac_id) {
+ slot_arg_src = &arg->slots[ref->slot_idx];
+ slot_idx_tgt = aux->slot_idx;
+ } else {
+@@ -1717,9 +1758,9 @@ static int __mcc_fw_set_duration_no_bt(struct rtw89_dev *rtwdev, bool sync_chang
+ struct rtw89_fw_mcc_duration req = {
+ .group = mcc->group,
+ .btc_in_group = false,
+- .start_macid = ref->rtwvif->mac_id,
+- .macid_x = ref->rtwvif->mac_id,
+- .macid_y = aux->rtwvif->mac_id,
++ .start_macid = ref->rtwvif_link->mac_id,
++ .macid_x = ref->rtwvif_link->mac_id,
++ .macid_y = aux->rtwvif_link->mac_id,
+ .duration_x = ref->duration,
+ .duration_y = aux->duration,
+ .start_tsf_high = config->start_tsf >> 32,
+@@ -1813,18 +1854,18 @@ static void rtw89_mcc_handle_beacon_noa(struct rtw89_dev *rtwdev, bool enable)
+ struct ieee80211_p2p_noa_desc noa_desc = {};
+ u64 start_time = config->start_tsf;
+ u32 interval = config->mcc_interval;
+- struct rtw89_vif *rtwvif_go;
++ struct rtw89_vif_link *rtwvif_go;
+ u32 duration;
+
+ if (mcc->mode != RTW89_MCC_MODE_GO_STA)
+ return;
+
+ if (ref->is_go) {
+- rtwvif_go = ref->rtwvif;
++ rtwvif_go = ref->rtwvif_link;
+ start_time += ieee80211_tu_to_usec(ref->duration);
+ duration = config->mcc_interval - ref->duration;
+ } else if (aux->is_go) {
+- rtwvif_go = aux->rtwvif;
++ rtwvif_go = aux->rtwvif_link;
+ start_time += ieee80211_tu_to_usec(pattern->tob_ref) +
+ ieee80211_tu_to_usec(config->beacon_offset) +
+ ieee80211_tu_to_usec(pattern->toa_aux);
+@@ -1865,9 +1906,9 @@ static void rtw89_mcc_start_beacon_noa(struct rtw89_dev *rtwdev)
+ return;
+
+ if (ref->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, true);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif_link, true);
+ else if (aux->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, true);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif_link, true);
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, true);
+ }
+@@ -1882,9 +1923,9 @@ static void rtw89_mcc_stop_beacon_noa(struct rtw89_dev *rtwdev)
+ return;
+
+ if (ref->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, false);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif_link, false);
+ else if (aux->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, false);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif_link, false);
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, false);
+ }
+@@ -1942,7 +1983,7 @@ struct rtw89_mcc_stop_sel {
+ static void rtw89_mcc_stop_sel_fill(struct rtw89_mcc_stop_sel *sel,
+ const struct rtw89_mcc_role *mcc_role)
+ {
+- sel->mac_id = mcc_role->rtwvif->mac_id;
++ sel->mac_id = mcc_role->rtwvif_link->mac_id;
+ sel->slot_idx = mcc_role->slot_idx;
+ }
+
+@@ -1953,7 +1994,7 @@ static int rtw89_mcc_stop_sel_iterator(struct rtw89_dev *rtwdev,
+ {
+ struct rtw89_mcc_stop_sel *sel = data;
+
+- if (!mcc_role->rtwvif->chanctx_assigned)
++ if (!mcc_role->rtwvif_link->chanctx_assigned)
+ return 0;
+
+ rtw89_mcc_stop_sel_fill(sel, mcc_role);
+@@ -2081,7 +2122,7 @@ static int __mcc_fw_upd_macid_bitmap(struct rtw89_dev *rtwdev,
+ int ret;
+
+ ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group,
+- upd->rtwvif->mac_id,
++ upd->rtwvif_link->mac_id,
+ upd->macid_bitmap);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -2106,7 +2147,7 @@ static int __mrc_fw_upd_macid_bitmap(struct rtw89_dev *rtwdev,
+ int i;
+
+ arg.sch_idx = mcc->group;
+- arg.macid = upd->rtwvif->mac_id;
++ arg.macid = upd->rtwvif_link->mac_id;
+
+ for (i = 0; i < 32; i++) {
+ if (add & BIT(i)) {
+@@ -2144,7 +2185,7 @@ static int rtw89_mcc_upd_map_iterator(struct rtw89_dev *rtwdev,
+ void *data)
+ {
+ struct rtw89_mcc_role upd = {
+- .rtwvif = mcc_role->rtwvif,
++ .rtwvif_link = mcc_role->rtwvif_link,
+ };
+ int ret;
+
+@@ -2370,6 +2411,24 @@ void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
+ rtw89_queue_chanctx_work(rtwdev);
+ }
+
++static void __rtw89_swap_chanctx(struct rtw89_vif *rtwvif,
++ enum rtw89_chanctx_idx idx1,
++ enum rtw89_chanctx_idx idx2)
++{
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ if (!rtwvif_link->chanctx_assigned)
++ continue;
++
++ if (rtwvif_link->chanctx_idx == idx1)
++ rtwvif_link->chanctx_idx = idx2;
++ else if (rtwvif_link->chanctx_idx == idx2)
++ rtwvif_link->chanctx_idx = idx1;
++ }
++}
++
+ static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_idx idx1,
+ enum rtw89_chanctx_idx idx2)
+@@ -2386,14 +2445,8 @@ static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
+
+ swap(hal->chanctx[idx1], hal->chanctx[idx2]);
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (!rtwvif->chanctx_assigned)
+- continue;
+- if (rtwvif->chanctx_idx == idx1)
+- rtwvif->chanctx_idx = idx2;
+- else if (rtwvif->chanctx_idx == idx2)
+- rtwvif->chanctx_idx = idx1;
+- }
++ rtw89_for_each_rtwvif(rtwdev, rtwvif)
++ __rtw89_swap_chanctx(rtwvif, idx1, idx2);
+
+ cur = atomic_read(&hal->roc_chanctx_idx);
+ if (cur == idx1)
+@@ -2444,14 +2497,14 @@ void rtw89_chanctx_ops_change(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
+ struct rtw89_entity_weight w = {};
+
+- rtwvif->chanctx_idx = cfg->idx;
+- rtwvif->chanctx_assigned = true;
++ rtwvif_link->chanctx_idx = cfg->idx;
++ rtwvif_link->chanctx_assigned = true;
+ cfg->ref_count++;
+
+ if (cfg->idx == RTW89_CHANCTX_0)
+@@ -2469,7 +2522,7 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+ }
+
+ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
+@@ -2479,8 +2532,8 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ enum rtw89_entity_mode new;
+ int ret;
+
+- rtwvif->chanctx_idx = RTW89_CHANCTX_0;
+- rtwvif->chanctx_assigned = false;
++ rtwvif_link->chanctx_idx = RTW89_CHANCTX_0;
++ rtwvif_link->chanctx_assigned = false;
+ cfg->ref_count--;
+
+ if (cfg->ref_count != 0)
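
The chan.c hunks above move the chanctx bookkeeping (chanctx_idx, chanctx_assigned) from struct rtw89_vif down to its per-link instances, so the swap path now has to visit every link of every vif. Below is a minimal stand-alone model of that per-link swap; every type and name in it is a simplified stand-in for illustration, not the driver's real definitions:

    /* Stand-alone model (simplified types, not rtw89 code). */
    #include <stdbool.h>
    #include <stdio.h>

    enum chanctx_idx { CHANCTX_0, CHANCTX_1, NUM_CHANCTX };

    struct vif_link {
        bool chanctx_assigned;
        enum chanctx_idx chanctx_idx;
    };

    struct vif {
        struct vif_link links[2];
        int num_links;
    };

    /* Mirrors __rtw89_swap_chanctx(): every assigned link pointing at
     * idx1 is retargeted to idx2, and vice versa. */
    static void swap_chanctx(struct vif *vif, enum chanctx_idx idx1,
                             enum chanctx_idx idx2)
    {
        for (int i = 0; i < vif->num_links; i++) {
            struct vif_link *vlink = &vif->links[i];

            if (!vlink->chanctx_assigned)
                continue;
            if (vlink->chanctx_idx == idx1)
                vlink->chanctx_idx = idx2;
            else if (vlink->chanctx_idx == idx2)
                vlink->chanctx_idx = idx1;
        }
    }

    int main(void)
    {
        struct vif vif = {
            .links = {
                { .chanctx_assigned = true,  .chanctx_idx = CHANCTX_0 },
                { .chanctx_assigned = false, .chanctx_idx = CHANCTX_1 },
            },
            .num_links = 2,
        };

        swap_chanctx(&vif, CHANCTX_0, CHANCTX_1);
        /* link0 is retargeted to ctx 1; link1 is skipped (unassigned). */
        printf("link0 -> ctx %d, link1 -> ctx %d\n",
               vif.links[0].chanctx_idx, vif.links[1].chanctx_idx);
        return 0;
    }

Only links actually assigned to a context are retargeted, matching the early continue in __rtw89_swap_chanctx() above.
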
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.h b/drivers/net/wireless/realtek/rtw89/chan.h
+index c6d31984e57536..4ed777ea506485 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.h
++++ b/drivers/net/wireless/realtek/rtw89/chan.h
+@@ -106,10 +106,10 @@ void rtw89_chanctx_ops_change(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx,
+ u32 changed);
+ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx);
+ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx);
+
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index 8d27374db83ca0..8d54d71fcf539e 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -2492,6 +2492,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
+ if (ver->fcxmreg == 7) {
+ sz = struct_size(v7, regs, n);
+ v7 = kmalloc(sz, GFP_KERNEL);
++ if (!v7)
++ return;
+ v7->type = RPT_EN_MREG;
+ v7->fver = ver->fcxmreg;
+ v7->len = n;
+@@ -2506,6 +2508,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
+ } else {
+ sz = struct_size(v1, regs, n);
+ v1 = kmalloc(sz, GFP_KERNEL);
++ if (!v1)
++ return;
+ v1->fver = ver->fcxmreg;
+ v1->reg_num = n;
+ memcpy(v1->regs, chip->mon_reg, flex_array_size(v1, regs, n));
+@@ -4989,18 +4993,16 @@ struct rtw89_txtime_data {
+ bool reenable;
+ };
+
+-static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
++static void __rtw89_tx_time_iter(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct rtw89_txtime_data *iter_data)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_txtime_data *iter_data =
+- (struct rtw89_txtime_data *)data;
+ struct rtw89_dev *rtwdev = iter_data->rtwdev;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_wl_info *wl = &cx->wl;
+ struct rtw89_btc_wl_link_info *plink = NULL;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 tx_time = iter_data->tx_time;
+ u8 tx_retry = iter_data->tx_retry;
+ u16 enable = iter_data->enable;
+@@ -5023,8 +5025,8 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
+
+ /* backup the original tx time before tx-limit on */
+ if (reenable) {
+- rtw89_mac_get_tx_time(rtwdev, rtwsta, &plink->tx_time);
+- rtw89_mac_get_tx_retry_limit(rtwdev, rtwsta, &plink->tx_retry);
++ rtw89_mac_get_tx_time(rtwdev, rtwsta_link, &plink->tx_time);
++ rtw89_mac_get_tx_retry_limit(rtwdev, rtwsta_link, &plink->tx_retry);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): reenable, tx_time=%d tx_retry= %d\n",
+ __func__, plink->tx_time, plink->tx_retry);
+@@ -5032,22 +5034,37 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
+
+ /* restore the original tx time if no tx-limit */
+ if (!enable) {
+- rtw89_mac_set_tx_time(rtwdev, rtwsta, true, plink->tx_time);
+- rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta, true,
++ rtw89_mac_set_tx_time(rtwdev, rtwsta_link, true, plink->tx_time);
++ rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta_link, true,
+ plink->tx_retry);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): restore, tx_time=%d tx_retry= %d\n",
+ __func__, plink->tx_time, plink->tx_retry);
+
+ } else {
+- rtw89_mac_set_tx_time(rtwdev, rtwsta, false, tx_time);
+- rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta, false, tx_retry);
++ rtw89_mac_set_tx_time(rtwdev, rtwsta_link, false, tx_time);
++ rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta_link, false, tx_retry);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): set, tx_time=%d tx_retry= %d\n",
+ __func__, tx_time, tx_retry);
+ }
+ }
+
++static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_txtime_data *iter_data =
++ (struct rtw89_txtime_data *)data;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ __rtw89_tx_time_iter(rtwvif_link, rtwsta_link, iter_data);
++ }
++}
++
+ static void _set_wl_tx_limit(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_btc *btc = &rtwdev->btc;
+@@ -7481,13 +7498,16 @@ static void _update_bt_info(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
+ _run_coex(rtwdev, BTC_RSN_UPDATE_BT_INFO);
+ }
+
+-void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, enum btc_role_state state)
++void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ enum btc_role_state state)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct ieee80211_bss_conf *bss_conf;
++ struct ieee80211_link_sta *link_sta;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+@@ -7495,51 +7515,59 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ struct rtw89_btc_wl_link_info *wlinfo = NULL;
+ u8 mode = 0, rlink_id, link_mode_ori, pta_req_mac_ori, wa_type;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], state=%d\n", state);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], role is STA=%d\n",
+ vif->type == NL80211_IFTYPE_STATION);
+- rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], port=%d\n", rtwvif->port);
++ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], port=%d\n", rtwvif_link->port);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], band=%d ch=%d bw=%d\n",
+ chan->band_type, chan->channel, chan->band_width);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], associated=%d\n",
+ state == BTC_ROLE_MSTS_STA_CONN_END);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], bcn_period=%d dtim_period=%d\n",
+- vif->bss_conf.beacon_int, vif->bss_conf.dtim_period);
++ bss_conf->beacon_int, bss_conf->dtim_period);
++
++ if (rtwsta_link) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
+
+- if (rtwsta) {
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], STA mac_id=%d\n",
+- rtwsta->mac_id);
++ rtwsta_link->mac_id);
+
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], STA support HE=%d VHT=%d HT=%d\n",
+- sta->deflink.he_cap.has_he,
+- sta->deflink.vht_cap.vht_supported,
+- sta->deflink.ht_cap.ht_supported);
+- if (sta->deflink.he_cap.has_he)
++ link_sta->he_cap.has_he,
++ link_sta->vht_cap.vht_supported,
++ link_sta->ht_cap.ht_supported);
++ if (link_sta->he_cap.has_he)
+ mode |= BIT(BTC_WL_MODE_HE);
+- if (sta->deflink.vht_cap.vht_supported)
++ if (link_sta->vht_cap.vht_supported)
+ mode |= BIT(BTC_WL_MODE_VHT);
+- if (sta->deflink.ht_cap.ht_supported)
++ if (link_sta->ht_cap.ht_supported)
+ mode |= BIT(BTC_WL_MODE_HT);
+
+ r.mode = mode;
+ }
+
+- if (rtwvif->wifi_role >= RTW89_WIFI_ROLE_MLME_MAX)
++ if (rtwvif_link->wifi_role >= RTW89_WIFI_ROLE_MLME_MAX) {
++ rcu_read_unlock();
+ return;
++ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+- "[BTC], wifi_role=%d\n", rtwvif->wifi_role);
++ "[BTC], wifi_role=%d\n", rtwvif_link->wifi_role);
+
+- r.role = rtwvif->wifi_role;
+- r.phy = rtwvif->phy_idx;
+- r.pid = rtwvif->port;
++ r.role = rtwvif_link->wifi_role;
++ r.phy = rtwvif_link->phy_idx;
++ r.pid = rtwvif_link->port;
+ r.active = true;
+ r.connected = MLME_LINKED;
+- r.bcn_period = vif->bss_conf.beacon_int;
+- r.dtim_period = vif->bss_conf.dtim_period;
++ r.bcn_period = bss_conf->beacon_int;
++ r.dtim_period = bss_conf->dtim_period;
+ r.band = chan->band_type;
+ r.ch = chan->channel;
+ r.bw = chan->band_width;
+@@ -7547,10 +7575,12 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ r.chdef.center_ch = chan->channel;
+ r.chdef.bw = chan->band_width;
+ r.chdef.chan = chan->primary_channel;
+- ether_addr_copy(r.mac_addr, rtwvif->mac_addr);
++ ether_addr_copy(r.mac_addr, rtwvif_link->mac_addr);
+
+- if (rtwsta && vif->type == NL80211_IFTYPE_STATION)
+- r.mac_id = rtwsta->mac_id;
++ rcu_read_unlock();
++
++ if (rtwsta_link && vif->type == NL80211_IFTYPE_STATION)
++ r.mac_id = rtwsta_link->mac_id;
+
+ btc->dm.cnt_notify[BTC_NCNT_ROLE_INFO]++;
+
+@@ -7781,26 +7811,26 @@ struct rtw89_btc_wl_sta_iter_data {
+ bool is_traffic_change;
+ };
+
+-static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
++static
++void __rtw89_btc_ntfy_wl_sta_iter(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct rtw89_btc_wl_sta_iter_data *iter_data)
+ {
+- struct rtw89_btc_wl_sta_iter_data *iter_data =
+- (struct rtw89_btc_wl_sta_iter_data *)data;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_dev *rtwdev = iter_data->rtwdev;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_wl_link_info *link_info = NULL;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_traffic_stats *link_info_t = NULL;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_traffic_stats *stats = &rtwvif->stats;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc_wl_role_info *r;
+ struct rtw89_btc_wl_role_info_v1 *r1;
+ u32 last_tx_rate, last_rx_rate;
+ u16 last_tx_lvl, last_rx_lvl;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u8 rssi;
+ u8 busy = 0;
+ u8 dir = 0;
+@@ -7808,11 +7838,11 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ u8 i = 0;
+ bool is_sta_change = false, is_traffic_change = false;
+
+- rssi = ewma_rssi_read(&rtwsta->avg_rssi) >> RSSI_FACTOR;
++ rssi = ewma_rssi_read(&rtwsta_link->avg_rssi) >> RSSI_FACTOR;
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], rssi=%d\n", rssi);
+
+ link_info = &wl->link_info[port];
+- link_info->stat.traffic = rtwvif->stats;
++ link_info->stat.traffic = *stats;
+ link_info_t = &link_info->stat.traffic;
+
+ if (link_info->connected == MLME_NO_LINK) {
+@@ -7860,19 +7890,19 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ iter_data->busy_all |= busy;
+ iter_data->dir_all |= BIT(dir);
+
+- if (rtwsta->rx_hw_rate <= RTW89_HW_RATE_CCK2 &&
++ if (rtwsta_link->rx_hw_rate <= RTW89_HW_RATE_CCK2 &&
+ last_rx_rate > RTW89_HW_RATE_CCK2 &&
+ link_info_t->rx_tfc_lv > RTW89_TFC_IDLE)
+ link_info->rx_rate_drop_cnt++;
+
+- if (last_tx_rate != rtwsta->ra_report.hw_rate ||
+- last_rx_rate != rtwsta->rx_hw_rate ||
++ if (last_tx_rate != rtwsta_link->ra_report.hw_rate ||
++ last_rx_rate != rtwsta_link->rx_hw_rate ||
+ last_tx_lvl != link_info_t->tx_tfc_lv ||
+ last_rx_lvl != link_info_t->rx_tfc_lv)
+ is_traffic_change = true;
+
+- link_info_t->tx_rate = rtwsta->ra_report.hw_rate;
+- link_info_t->rx_rate = rtwsta->rx_hw_rate;
++ link_info_t->tx_rate = rtwsta_link->ra_report.hw_rate;
++ link_info_t->rx_rate = rtwsta_link->rx_hw_rate;
+
+ if (link_info->role == RTW89_WIFI_ROLE_STATION ||
+ link_info->role == RTW89_WIFI_ROLE_P2P_CLIENT) {
+@@ -7884,19 +7914,19 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ r = &wl->role_info;
+ r->active_role[port].tx_lvl = stats->tx_tfc_lv;
+ r->active_role[port].rx_lvl = stats->rx_tfc_lv;
+- r->active_role[port].tx_rate = rtwsta->ra_report.hw_rate;
+- r->active_role[port].rx_rate = rtwsta->rx_hw_rate;
++ r->active_role[port].tx_rate = rtwsta_link->ra_report.hw_rate;
++ r->active_role[port].rx_rate = rtwsta_link->rx_hw_rate;
+ } else if (ver->fwlrole == 1) {
+ r1 = &wl->role_info_v1;
+ r1->active_role_v1[port].tx_lvl = stats->tx_tfc_lv;
+ r1->active_role_v1[port].rx_lvl = stats->rx_tfc_lv;
+- r1->active_role_v1[port].tx_rate = rtwsta->ra_report.hw_rate;
+- r1->active_role_v1[port].rx_rate = rtwsta->rx_hw_rate;
++ r1->active_role_v1[port].tx_rate = rtwsta_link->ra_report.hw_rate;
++ r1->active_role_v1[port].rx_rate = rtwsta_link->rx_hw_rate;
+ } else if (ver->fwlrole == 2) {
+ dm->trx_info.tx_lvl = stats->tx_tfc_lv;
+ dm->trx_info.rx_lvl = stats->rx_tfc_lv;
+- dm->trx_info.tx_rate = rtwsta->ra_report.hw_rate;
+- dm->trx_info.rx_rate = rtwsta->rx_hw_rate;
++ dm->trx_info.tx_rate = rtwsta_link->ra_report.hw_rate;
++ dm->trx_info.rx_rate = rtwsta_link->rx_hw_rate;
+ }
+
+ dm->trx_info.tx_tp = link_info_t->tx_throughput;
+@@ -7916,6 +7946,21 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ iter_data->is_traffic_change = true;
+ }
+
++static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_btc_wl_sta_iter_data *iter_data =
++ (struct rtw89_btc_wl_sta_iter_data *)data;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ __rtw89_btc_ntfy_wl_sta_iter(rtwvif_link, rtwsta_link, iter_data);
++ }
++}
++
+ #define BTC_NHM_CHK_INTVL 20
+
+ void rtw89_btc_ntfy_wl_sta(struct rtw89_dev *rtwdev)
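
Both coex iterators converted above (rtw89_tx_time_iter() and rtw89_btc_ntfy_wl_sta_iter()) follow the same recipe: the ieee80211 per-station callback shrinks to a thin wrapper that walks the station's links and hands each one to a new per-link worker carrying the old body. A stand-alone sketch of that split, with illustrative types in place of the driver's:

    /* Stand-alone model (simplified types, not rtw89 code). */
    #include <stdio.h>

    struct sta_link { int mac_id; };
    struct sta { struct sta_link links[2]; int num_links; };

    /* Per-link worker: this is where the old per-station body lives. */
    static void per_link_worker(struct sta_link *slink, void *data)
    {
        int *visited = data;

        (*visited)++;
        printf("handled link mac_id=%d\n", slink->mac_id);
    }

    /* Wrapper keeps the one-station shape the iteration API expects,
     * while the MLO-aware work happens per link. */
    static void sta_iter(void *data, struct sta *sta)
    {
        for (int i = 0; i < sta->num_links; i++)
            per_link_worker(&sta->links[i], data);
    }

    int main(void)
    {
        struct sta sta = {
            .links = { { .mac_id = 1 }, { .mac_id = 2 } },
            .num_links = 2,
        };
        int visited = 0;

        sta_iter(&visited, &sta);
        printf("links visited: %d\n", visited);
        return 0;
    }

Because the wrapper preserves the callback signature, callers of the station iterator need no change.
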
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.h b/drivers/net/wireless/realtek/rtw89/coex.h
+index de53b56632f7c6..dbdb56e063ef03 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.h
++++ b/drivers/net/wireless/realtek/rtw89/coex.h
+@@ -271,8 +271,10 @@ void rtw89_btc_ntfy_eapol_packet_work(struct work_struct *work);
+ void rtw89_btc_ntfy_arp_packet_work(struct work_struct *work);
+ void rtw89_btc_ntfy_dhcp_packet_work(struct work_struct *work);
+ void rtw89_btc_ntfy_icmp_packet_work(struct work_struct *work);
+-void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, enum btc_role_state state);
++void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ enum btc_role_state state);
+ void rtw89_btc_ntfy_radio_state(struct rtw89_dev *rtwdev, enum btc_rfctrl rf_state);
+ void rtw89_btc_ntfy_wl_rfk(struct rtw89_dev *rtwdev, u8 phy_map,
+ enum btc_wl_rfk_type type,
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 4553810634c66b..5b8e65f6de6a4e 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -436,15 +436,6 @@ int rtw89_set_channel(struct rtw89_dev *rtwdev)
+ return 0;
+ }
+
+-void rtw89_get_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_chan *chan)
+-{
+- const struct cfg80211_chan_def *chandef;
+-
+- chandef = rtw89_chandef_get(rtwdev, rtwvif->chanctx_idx);
+- rtw89_get_channel_params(chandef, chan);
+-}
+-
+ static enum rtw89_core_tx_type
+ rtw89_core_get_tx_type(struct rtw89_dev *rtwdev,
+ struct sk_buff *skb)
+@@ -463,8 +454,9 @@ rtw89_core_tx_update_ampdu_info(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req,
+ enum btc_pkt_type pkt_type)
+ {
+- struct ieee80211_sta *sta = tx_req->sta;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
++ struct ieee80211_link_sta *link_sta;
+ struct sk_buff *skb = tx_req->skb;
+ struct rtw89_sta *rtwsta;
+ u8 ampdu_num;
+@@ -478,21 +470,26 @@ rtw89_core_tx_update_ampdu_info(struct rtw89_dev *rtwdev,
+ if (!(IEEE80211_SKB_CB(skb)->flags & IEEE80211_TX_CTL_AMPDU))
+ return;
+
+- if (!sta) {
++ if (!rtwsta_link) {
+ rtw89_warn(rtwdev, "cannot set ampdu info without sta\n");
+ return;
+ }
+
+ tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+- rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ rtwsta = rtwsta_link->rtwsta;
++
++ rcu_read_lock();
+
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
+ ampdu_num = (u8)((rtwsta->ampdu_params[tid].agg_num ?
+ rtwsta->ampdu_params[tid].agg_num :
+- 4 << sta->deflink.ht_cap.ampdu_factor) - 1);
++ 4 << link_sta->ht_cap.ampdu_factor) - 1);
+
+ desc_info->agg_en = true;
+- desc_info->ampdu_density = sta->deflink.ht_cap.ampdu_density;
++ desc_info->ampdu_density = link_sta->ht_cap.ampdu_density;
+ desc_info->ampdu_num = ampdu_num;
++
++ rcu_read_unlock();
+ }
+
+ static void
+@@ -569,9 +566,13 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan)
+ {
+ struct sk_buff *skb = tx_req->skb;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = tx_info->control.vif;
++ struct ieee80211_bss_conf *bss_conf;
+ u16 lowest_rate;
++ u16 rate;
+
+ if (tx_info->flags & IEEE80211_TX_CTL_NO_CCK_RATE ||
+ (vif && vif->p2p))
+@@ -581,25 +582,35 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- if (!vif || !vif->bss_conf.basic_rates || !tx_req->sta)
++ if (!rtwvif_link)
+ return lowest_rate;
+
+- return __ffs(vif->bss_conf.basic_rates) + lowest_rate;
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ if (!bss_conf->basic_rates || !rtwsta_link) {
++ rate = lowest_rate;
++ goto out;
++ }
++
++ rate = __ffs(bss_conf->basic_rates) + lowest_rate;
++
++out:
++ rcu_read_unlock();
++
++ return rate;
+ }
+
+ static u8 rtw89_core_tx_get_mac_id(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_sta *rtwsta;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+
+- if (!sta)
+- return rtwvif->mac_id;
++ if (!rtwsta_link)
++ return rtwvif_link->mac_id;
+
+- rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- return rtwsta->mac_id;
++ return rtwsta_link->mac_id;
+ }
+
+ static void rtw89_core_tx_update_llc_hdr(struct rtw89_dev *rtwdev,
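
The rate-lookup rework above replaces direct deflink/bss_conf reads with an RCU read-side critical section that has exactly one unlock point, reached through goto out. Below is a stand-alone sketch of that control flow; a pthread rwlock stands in for RCU, and all names and values are illustrative only:

    /* Stand-alone model (pthread rwlock standing in for RCU). */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <strings.h>   /* ffs() */

    static pthread_rwlock_t conf_lock = PTHREAD_RWLOCK_INITIALIZER;
    static uint32_t basic_rates = 0x0f0;  /* bit i set => rate (i + base) usable */

    static unsigned int get_mgmt_rate(unsigned int lowest_rate, int have_sta)
    {
        unsigned int rate;

        pthread_rwlock_rdlock(&conf_lock);

        if (!basic_rates || !have_sta) {
            rate = lowest_rate;
            goto out;
        }

        /* ffs() is 1-based, unlike the kernel's __ffs(), hence the -1. */
        rate = (unsigned int)(ffs((int)basic_rates) - 1) + lowest_rate;

    out:
        pthread_rwlock_unlock(&conf_lock);
        return rate;
    }

    int main(void)
    {
        printf("rate=%u\n", get_mgmt_rate(4, 1));  /* lowest set bit 4 + base 4 = 8 */
        printf("rate=%u\n", get_mgmt_rate(4, 0));  /* falls back to 4 */
        return 0;
    }

Funneling every exit through one label keeps the lock/unlock pairing obvious even as more early-out conditions accumulate.
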
+@@ -618,11 +629,10 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ struct sk_buff *skb = tx_req->skb;
+ u8 qsel, ch_dma;
+
+@@ -631,7 +641,7 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
+
+ desc_info->qsel = qsel;
+ desc_info->ch_dma = ch_dma;
+- desc_info->port = desc_info->hiq ? rtwvif->port : 0;
++ desc_info->port = desc_info->hiq ? rtwvif_link->port : 0;
+ desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req);
+ desc_info->hw_ssn_sel = RTW89_MGMT_HW_SSN_SEL;
+ desc_info->hw_seq_mode = RTW89_MGMT_HW_SEQ_MODE;
+@@ -701,26 +711,36 @@ __rtw89_core_tx_check_he_qos_htc(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req,
+ enum btc_pkt_type pkt_type)
+ {
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct sk_buff *skb = tx_req->skb;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
++ struct ieee80211_link_sta *link_sta;
+ __le16 fc = hdr->frame_control;
+
+ /* AP IOT issue with EAPoL, ARP and DHCP */
+ if (pkt_type < PACKET_MAX)
+ return false;
+
+- if (!sta || !sta->deflink.he_cap.has_he)
++ if (!rtwsta_link)
+ return false;
+
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ if (!link_sta->he_cap.has_he) {
++ rcu_read_unlock();
++ return false;
++ }
++
++ rcu_read_unlock();
++
+ if (!ieee80211_is_data_qos(fc))
+ return false;
+
+ if (skb_headroom(skb) < IEEE80211_HT_CTL_LEN)
+ return false;
+
+- if (rtwsta && rtwsta->ra_report.might_fallback_legacy)
++ if (rtwsta_link && rtwsta_link->ra_report.might_fallback_legacy)
+ return false;
+
+ return true;
+@@ -730,8 +750,7 @@ static void
+ __rtw89_core_tx_adjust_he_qos_htc(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct sk_buff *skb = tx_req->skb;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ __le16 fc = hdr->frame_control;
+@@ -747,7 +766,7 @@ __rtw89_core_tx_adjust_he_qos_htc(struct rtw89_dev *rtwdev,
+ hdr = data;
+ htc = data + hdr_len;
+ hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_ORDER);
+- *htc = rtwsta->htc_template ? rtwsta->htc_template :
++ *htc = rtwsta_link->htc_template ? rtwsta_link->htc_template :
+ le32_encode_bits(RTW89_HTC_VARIANT_HE, RTW89_HTC_MASK_VARIANT) |
+ le32_encode_bits(RTW89_HTC_VARIANT_HE_CID_CAS, RTW89_HTC_MASK_CTL_ID);
+
+@@ -761,8 +780,7 @@ rtw89_core_tx_update_he_qos_htc(struct rtw89_dev *rtwdev,
+ enum btc_pkt_type pkt_type)
+ {
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
+
+ if (!__rtw89_core_tx_check_he_qos_htc(rtwdev, tx_req, pkt_type))
+ goto desc_bk;
+@@ -773,23 +791,25 @@ rtw89_core_tx_update_he_qos_htc(struct rtw89_dev *rtwdev,
+ desc_info->a_ctrl_bsr = true;
+
+ desc_bk:
+- if (!rtwvif || rtwvif->last_a_ctrl == desc_info->a_ctrl_bsr)
++ if (!rtwvif_link || rtwvif_link->last_a_ctrl == desc_info->a_ctrl_bsr)
+ return;
+
+- rtwvif->last_a_ctrl = desc_info->a_ctrl_bsr;
++ rtwvif_link->last_a_ctrl = desc_info->a_ctrl_bsr;
+ desc_info->bk = true;
+ }
+
+ static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
+- enum rtw89_chanctx_idx idx = rtwvif->chanctx_idx;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif_link->rate_pattern;
++ enum rtw89_chanctx_idx idx = rtwvif_link->chanctx_idx;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, idx);
++ struct ieee80211_link_sta *link_sta;
+ u16 lowest_rate;
++ u16 rate;
+
+ if (rate_pattern->enable)
+ return rate_pattern->rate;
+@@ -801,20 +821,31 @@ static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- if (!sta || !sta->deflink.supp_rates[chan->band_type])
++ if (!rtwsta_link)
+ return lowest_rate;
+
+- return __ffs(sta->deflink.supp_rates[chan->band_type]) + lowest_rate;
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ if (!link_sta->supp_rates[chan->band_type]) {
++ rate = lowest_rate;
++ goto out;
++ }
++
++ rate = __ffs(link_sta->supp_rates[chan->band_type]) + lowest_rate;
++
++out:
++ rcu_read_unlock();
++
++ return rate;
+ }
+
+ static void
+ rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
+ struct sk_buff *skb = tx_req->skb;
+ u8 tid, tid_indicate;
+@@ -829,10 +860,10 @@ rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
+ desc_info->tid_indicate = tid_indicate;
+ desc_info->qsel = qsel;
+ desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req);
+- desc_info->port = desc_info->hiq ? rtwvif->port : 0;
+- desc_info->er_cap = rtwsta ? rtwsta->er_cap : false;
+- desc_info->stbc = rtwsta ? rtwsta->ra.stbc_cap : false;
+- desc_info->ldpc = rtwsta ? rtwsta->ra.ldpc_cap : false;
++ desc_info->port = desc_info->hiq ? rtwvif_link->port : 0;
++ desc_info->er_cap = rtwsta_link ? rtwsta_link->er_cap : false;
++ desc_info->stbc = rtwsta_link ? rtwsta_link->ra.stbc_cap : false;
++ desc_info->ldpc = rtwsta_link ? rtwsta_link->ra.ldpc_cap : false;
+
+ /* enable wd_info for AMPDU */
+ desc_info->en_wd_info = true;
+@@ -1027,13 +1058,34 @@ int rtw89_h2c_tx(struct rtw89_dev *rtwdev,
+ int rtw89_core_tx_write(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, struct sk_buff *skb, int *qsel)
+ {
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ struct rtw89_core_tx_request tx_req = {0};
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_sta_link *rtwsta_link = NULL;
++ struct rtw89_vif_link *rtwvif_link;
+ int ret;
+
++ /* By default, driver writes tx via the link on HW-0. And then,
++ * according to links' status, HW can change tx to another link.
++ */
++
++ if (rtwsta) {
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link)) {
++ rtw89_err(rtwdev, "tx: find no sta link on HW-0\n");
++ return -ENOLINK;
++ }
++ }
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "tx: find no vif link on HW-0\n");
++ return -ENOLINK;
++ }
++
+ tx_req.skb = skb;
+- tx_req.sta = sta;
+- tx_req.vif = vif;
++ tx_req.rtwvif_link = rtwvif_link;
++ tx_req.rtwsta_link = rtwsta_link;
+
+ rtw89_traffic_stats_accu(rtwdev, &rtwdev->stats, skb, true);
+ rtw89_traffic_stats_accu(rtwdev, &rtwvif->stats, skb, true);
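
rtw89_core_tx_write() above now resolves the sta/vif link instance on HW-0 up front and fails fast with -ENOLINK when none exists, rather than threading raw ieee80211 pointers through the tx request. A stand-alone sketch of that fail-fast resolution, again with simplified stand-in types:

    /* Stand-alone model (simplified types, not rtw89 code). */
    #include <errno.h>
    #include <stdio.h>

    struct vif_link { int hw_idx; };
    struct vif { struct vif_link *links[2]; };

    static struct vif_link *get_link_inst(struct vif *vif, int idx)
    {
        return vif->links[idx];
    }

    static int tx_write(struct vif *vif)
    {
        struct vif_link *vlink = get_link_inst(vif, 0);

        /* Default tx path rides the link on HW-0; bail out if absent. */
        if (!vlink) {
            fprintf(stderr, "tx: find no vif link on HW-0\n");
            return -ENOLINK;
        }
        return 0;
    }

    int main(void)
    {
        struct vif_link l0 = { .hw_idx = 0 };
        struct vif with = { .links = { &l0, NULL } };
        struct vif without = { .links = { NULL, NULL } };

        printf("with link: %d\n", tx_write(&with));        /* 0 */
        printf("without link: %d\n", tx_write(&without));  /* -ENOLINK */
        return 0;
    }
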
+@@ -1514,16 +1566,24 @@ static u8 rtw89_get_data_rate_nss(struct rtw89_dev *rtwdev, u16 data_rate)
+ static void rtw89_core_rx_process_phy_ppdu_iter(void *data,
+ struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_rx_phy_ppdu *phy_ppdu = (struct rtw89_rx_phy_ppdu *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_sta_link *rtwsta_link;
+ u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num;
+ u8 ant_pos = U8_MAX;
+ u8 evm_pos = 0;
+ int i;
+
+- if (rtwsta->mac_id != phy_ppdu->mac_id || !phy_ppdu->to_self)
++ /* FIXME: For single link, taking link on HW-0 here is okay. But, when
++ * enabling multiple active links, we should determine the right link.
++ */
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link))
++ return;
++
++ if (rtwsta_link->mac_id != phy_ppdu->mac_id || !phy_ppdu->to_self)
+ return;
+
+ if (hal->ant_diversity && hal->antenna_rx) {
+@@ -1531,22 +1591,24 @@ static void rtw89_core_rx_process_phy_ppdu_iter(void *data,
+ evm_pos = ant_pos;
+ }
+
+- ewma_rssi_add(&rtwsta->avg_rssi, phy_ppdu->rssi_avg);
++ ewma_rssi_add(&rtwsta_link->avg_rssi, phy_ppdu->rssi_avg);
+
+ if (ant_pos < ant_num) {
+- ewma_rssi_add(&rtwsta->rssi[ant_pos], phy_ppdu->rssi[0]);
++ ewma_rssi_add(&rtwsta_link->rssi[ant_pos], phy_ppdu->rssi[0]);
+ } else {
+ for (i = 0; i < rtwdev->chip->rf_path_num; i++)
+- ewma_rssi_add(&rtwsta->rssi[i], phy_ppdu->rssi[i]);
++ ewma_rssi_add(&rtwsta_link->rssi[i], phy_ppdu->rssi[i]);
+ }
+
+ if (phy_ppdu->ofdm.has && (phy_ppdu->has_data || phy_ppdu->has_bcn)) {
+- ewma_snr_add(&rtwsta->avg_snr, phy_ppdu->ofdm.avg_snr);
++ ewma_snr_add(&rtwsta_link->avg_snr, phy_ppdu->ofdm.avg_snr);
+ if (rtw89_get_data_rate_nss(rtwdev, phy_ppdu->rate) == 1) {
+- ewma_evm_add(&rtwsta->evm_1ss, phy_ppdu->ofdm.evm_min);
++ ewma_evm_add(&rtwsta_link->evm_1ss, phy_ppdu->ofdm.evm_min);
+ } else {
+- ewma_evm_add(&rtwsta->evm_min[evm_pos], phy_ppdu->ofdm.evm_min);
+- ewma_evm_add(&rtwsta->evm_max[evm_pos], phy_ppdu->ofdm.evm_max);
++ ewma_evm_add(&rtwsta_link->evm_min[evm_pos],
++ phy_ppdu->ofdm.evm_min);
++ ewma_evm_add(&rtwsta_link->evm_max[evm_pos],
++ phy_ppdu->ofdm.evm_max);
+ }
+ }
+ }
+@@ -1876,17 +1938,19 @@ struct rtw89_vif_rx_stats_iter_data {
+ };
+
+ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf,
+ struct sk_buff *skb)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct ieee80211_trigger *tf = (struct ieee80211_trigger *)skb->data;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ u8 *pos, *end, type, tf_bw;
+ u16 aid, tf_rua;
+
+- if (!ether_addr_equal(vif->bss_conf.bssid, tf->ta) ||
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION ||
+- rtwvif->net_type == RTW89_NET_TYPE_NO_LINK)
++ if (!ether_addr_equal(bss_conf->bssid, tf->ta) ||
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_NO_LINK)
+ return;
+
+ type = le64_get_bits(tf->common_info, IEEE80211_TRIGGER_TYPE_MASK);
+@@ -1915,7 +1979,7 @@ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
+ rtwdev->stats.rx_tf_acc++;
+ if (tf_bw == IEEE80211_TRIGGER_ULBW_160_80P80MHZ &&
+ rua <= NL80211_RATE_INFO_HE_RU_ALLOC_106)
+- rtwvif->pwr_diff_en = true;
++ rtwvif_link->pwr_diff_en = true;
+ break;
+ }
+
+@@ -1986,7 +2050,7 @@ static void rtw89_core_cancel_6ghz_probe_tx(struct rtw89_dev *rtwdev,
+ ieee80211_queue_work(rtwdev->hw, &rtwdev->cancel_6ghz_probe_work);
+ }
+
+-static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif *rtwvif,
++static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_hdr *hdr, size_t len)
+ {
+ struct ieee80211_mgmt *mgmt = (typeof(mgmt))hdr;
+@@ -1994,20 +2058,22 @@ static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif *rtwvif,
+ if (len < offsetof(typeof(*mgmt), u.beacon.variable))
+ return;
+
+- WRITE_ONCE(rtwvif->sync_bcn_tsf, le64_to_cpu(mgmt->u.beacon.timestamp));
++ WRITE_ONCE(rtwvif_link->sync_bcn_tsf, le64_to_cpu(mgmt->u.beacon.timestamp));
+ }
+
+ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
+ struct ieee80211_vif *vif)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct rtw89_vif_rx_stats_iter_data *iter_data = data;
+ struct rtw89_dev *rtwdev = iter_data->rtwdev;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ struct rtw89_pkt_stat *pkt_stat = &rtwdev->phystat.cur_pkt_stat;
+ struct rtw89_rx_desc_info *desc_info = iter_data->desc_info;
+ struct sk_buff *skb = iter_data->skb;
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct rtw89_rx_phy_ppdu *phy_ppdu = iter_data->phy_ppdu;
++ struct ieee80211_bss_conf *bss_conf;
++ struct rtw89_vif_link *rtwvif_link;
+ const u8 *bssid = iter_data->bssid;
+
+ if (rtwdev->scanning &&
+@@ -2015,33 +2081,46 @@ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
+ ieee80211_is_probe_resp(hdr->frame_control)))
+ rtw89_core_cancel_6ghz_probe_tx(rtwdev, skb);
+
+- if (!vif->bss_conf.bssid)
+- return;
++ rcu_read_lock();
++
++ /* FIXME: For single link, taking link on HW-0 here is okay. But, when
++ * enabling multiple active links, we should determine the right link.
++ */
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link))
++ goto out;
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ if (!bss_conf->bssid)
++ goto out;
+
+ if (ieee80211_is_trigger(hdr->frame_control)) {
+- rtw89_stats_trigger_frame(rtwdev, vif, skb);
+- return;
++ rtw89_stats_trigger_frame(rtwdev, rtwvif_link, bss_conf, skb);
++ goto out;
+ }
+
+- if (!ether_addr_equal(vif->bss_conf.bssid, bssid))
+- return;
++ if (!ether_addr_equal(bss_conf->bssid, bssid))
++ goto out;
+
+ if (ieee80211_is_beacon(hdr->frame_control)) {
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ !test_bit(RTW89_FLAG_WOWLAN, rtwdev->flags)) {
+- rtw89_vif_sync_bcn_tsf(rtwvif, hdr, skb->len);
++ rtw89_vif_sync_bcn_tsf(rtwvif_link, hdr, skb->len);
+ rtw89_fw_h2c_rssi_offload(rtwdev, phy_ppdu);
+ }
+ pkt_stat->beacon_nr++;
+ }
+
+- if (!ether_addr_equal(vif->addr, hdr->addr1))
+- return;
++ if (!ether_addr_equal(bss_conf->addr, hdr->addr1))
++ goto out;
+
+ if (desc_info->data_rate < RTW89_HW_RATE_NR)
+ pkt_stat->rx_rate_cnt[desc_info->data_rate]++;
+
+ rtw89_traffic_stats_accu(rtwdev, &rtwvif->stats, skb, false);
++
++out:
++ rcu_read_unlock();
+ }
+
+ static void rtw89_core_rx_stats(struct rtw89_dev *rtwdev,
+@@ -2432,15 +2511,23 @@ void rtw89_core_stats_sta_rx_status_iter(void *data, struct ieee80211_sta *sta)
+ struct rtw89_core_iter_rx_status *iter_data =
+ (struct rtw89_core_iter_rx_status *)data;
+ struct ieee80211_rx_status *rx_status = iter_data->rx_status;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_rx_desc_info *desc_info = iter_data->desc_info;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
+ u8 mac_id = iter_data->mac_id;
+
+- if (mac_id != rtwsta->mac_id)
++ /* FIXME: For single link, taking link on HW-0 here is okay. But, when
++ * enabling multiple active links, we should determine the right link.
++ */
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link))
+ return;
+
+- rtwsta->rx_status = *rx_status;
+- rtwsta->rx_hw_rate = desc_info->data_rate;
++ if (mac_id != rtwsta_link->mac_id)
++ return;
++
++ rtwsta_link->rx_status = *rx_status;
++ rtwsta_link->rx_hw_rate = desc_info->data_rate;
+ }
+
+ static void rtw89_core_stats_sta_rx_status(struct rtw89_dev *rtwdev,
+@@ -2546,6 +2633,10 @@ static enum rtw89_ps_mode rtw89_update_ps_mode(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
++ /* FIXME: Fix __rtw89_enter_ps_mode() to consider MLO cases. */
++ if (rtwdev->support_mlo)
++ return RTW89_PS_MODE_NONE;
++
+ if (rtw89_disable_ps_mode || !chip->ps_mode_supported ||
+ RTW89_CHK_FW_FEATURE(NO_DEEP_PS, &rtwdev->fw))
+ return RTW89_PS_MODE_NONE;
+@@ -2658,7 +2749,7 @@ static void rtw89_core_ba_work(struct work_struct *work)
+ list_for_each_entry_safe(rtwtxq, tmp, &rtwdev->ba_list, list) {
+ struct ieee80211_txq *txq = rtw89_txq_to_txq(rtwtxq);
+ struct ieee80211_sta *sta = txq->sta;
+- struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+ u8 tid = txq->tid;
+
+ if (!sta) {
+@@ -2686,8 +2777,8 @@ static void rtw89_core_ba_work(struct work_struct *work)
+ spin_unlock_bh(&rtwdev->ba_lock);
+ }
+
+-static void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta)
++void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta)
+ {
+ struct rtw89_txq *rtwtxq, *tmp;
+
+@@ -2701,8 +2792,8 @@ static void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
+ spin_unlock_bh(&rtwdev->ba_lock);
+ }
+
+-static void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta)
++void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta)
+ {
+ struct rtw89_txq *rtwtxq, *tmp;
+
+@@ -2718,10 +2809,10 @@ static void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
+ spin_unlock_bh(&rtwdev->ba_lock);
+ }
+
+-static void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta)
++void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct sk_buff *skb, *tmp;
+
+ skb_queue_walk_safe(&rtwsta->roc_queue, skb, tmp) {
+@@ -2762,7 +2853,7 @@ static void rtw89_core_txq_check_agg(struct rtw89_dev *rtwdev,
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct ieee80211_txq *txq = rtw89_txq_to_txq(rtwtxq);
+ struct ieee80211_sta *sta = txq->sta;
+- struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+
+ if (test_bit(RTW89_TXQ_F_FORBID_BA, &rtwtxq->flags))
+ return;
+@@ -2838,10 +2929,19 @@ static bool rtw89_core_txq_agg_wait(struct rtw89_dev *rtwdev,
+ bool *sched_txq, bool *reinvoke)
+ {
+ struct rtw89_txq *rtwtxq = (struct rtw89_txq *)txq->drv_priv;
+- struct ieee80211_sta *sta = txq->sta;
+- struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(txq->sta);
++ struct rtw89_sta_link *rtwsta_link;
+
+- if (!sta || rtwsta->max_agg_wait <= 0)
++ if (!rtwsta)
++ return false;
++
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link)) {
++ rtw89_err(rtwdev, "agg wait: find no link on HW-0\n");
++ return false;
++ }
++
++ if (rtwsta_link->max_agg_wait <= 0)
+ return false;
+
+ if (rtwdev->stats.tx_tfc_lv <= RTW89_TFC_MID)
+@@ -2855,7 +2955,7 @@ static bool rtw89_core_txq_agg_wait(struct rtw89_dev *rtwdev,
+ return false;
+ }
+
+- if (*frame_cnt == 1 && rtwtxq->wait_cnt < rtwsta->max_agg_wait) {
++ if (*frame_cnt == 1 && rtwtxq->wait_cnt < rtwsta_link->max_agg_wait) {
+ *reinvoke = true;
+ rtwtxq->wait_cnt++;
+ return true;
+@@ -2879,7 +2979,7 @@ static void rtw89_core_txq_schedule(struct rtw89_dev *rtwdev, u8 ac, bool *reinv
+ ieee80211_txq_schedule_start(hw, ac);
+ while ((txq = ieee80211_next_txq(hw, ac))) {
+ rtwtxq = (struct rtw89_txq *)txq->drv_priv;
+- rtwvif = (struct rtw89_vif *)txq->vif->drv_priv;
++ rtwvif = vif_to_rtwvif(txq->vif);
+
+ if (rtwvif->offchan) {
+ ieee80211_return_txq(hw, txq, true);
+@@ -2955,16 +3055,23 @@ static void rtw89_forbid_ba_work(struct work_struct *w)
+ static void rtw89_core_sta_pending_tx_iter(void *data,
+ struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_vif *rtwvif_target = data, *rtwvif = rtwsta->rtwvif;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct rtw89_vif_link *target = data;
++ struct rtw89_vif_link *rtwvif_link;
+ struct sk_buff *skb, *tmp;
++ unsigned int link_id;
+ int qsel, ret;
+
+- if (rtwvif->chanctx_idx != rtwvif_target->chanctx_idx)
+- return;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ if (rtwvif_link->chanctx_idx == target->chanctx_idx)
++ goto bottom;
++
++ return;
+
++bottom:
+ if (skb_queue_len(&rtwsta->roc_queue) == 0)
+ return;
+
+@@ -2982,17 +3089,17 @@ static void rtw89_core_sta_pending_tx_iter(void *data,
+ }
+
+ static void rtw89_core_handle_sta_pending_tx(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_core_sta_pending_tx_iter,
+- rtwvif);
++ rtwvif_link);
+ }
+
+ static int rtw89_core_send_nullfunc(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool qos, bool ps)
++ struct rtw89_vif_link *rtwvif_link, bool qos, bool ps)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct ieee80211_sta *sta;
+ struct ieee80211_hdr *hdr;
+ struct sk_buff *skb;
+@@ -3002,7 +3109,7 @@ static int rtw89_core_send_nullfunc(struct rtw89_dev *rtwdev,
+ return 0;
+
+ rcu_read_lock();
+- sta = ieee80211_find_sta(vif, vif->bss_conf.bssid);
++ sta = ieee80211_find_sta(vif, vif->cfg.ap_addr);
+ if (!sta) {
+ ret = -EINVAL;
+ goto out;
+@@ -3040,27 +3147,43 @@ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct rtw89_roc *roc = &rtwvif->roc;
++ struct rtw89_vif_link *rtwvif_link;
+ struct cfg80211_chan_def roc_chan;
+- struct rtw89_vif *tmp;
++ struct rtw89_vif *tmp_vif;
+ int ret;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "roc start: find no link on HW-0\n");
++ return;
++ }
++
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_ROC);
+
+- ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, true);
++ ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, true);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+ "roc send null-1 failed: %d\n", ret);
+
+- rtw89_for_each_rtwvif(rtwdev, tmp)
+- if (tmp->chanctx_idx == rtwvif->chanctx_idx)
+- tmp->offchan = true;
++ rtw89_for_each_rtwvif(rtwdev, tmp_vif) {
++ struct rtw89_vif_link *tmp_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(tmp_vif, tmp_link, link_id) {
++ if (tmp_link->chanctx_idx == rtwvif_link->chanctx_idx) {
++ tmp_vif->offchan = true;
++ break;
++ }
++ }
++ }
+
+ cfg80211_chandef_create(&roc_chan, &roc->chan, NL80211_CHAN_NO_HT);
+- rtw89_config_roc_chandef(rtwdev, rtwvif->chanctx_idx, &roc_chan);
++ rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, &roc_chan);
+ rtw89_set_channel(rtwdev);
+ rtw89_write32_clr(rtwdev,
+ rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
+@@ -3077,7 +3200,8 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct rtw89_roc *roc = &rtwvif->roc;
+- struct rtw89_vif *tmp;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_vif *tmp_vif;
+ int ret;
+
+ lockdep_assert_held(&rtwdev->mutex);
+@@ -3087,24 +3211,29 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
+
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "roc end: find no link on HW-0\n");
++ return;
++ }
++
+ rtw89_write32_mask(rtwdev,
+ rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
+ B_AX_RX_FLTR_CFG_MASK,
+ rtwdev->hal.rx_fltr);
+
+ roc->state = RTW89_ROC_IDLE;
+- rtw89_config_roc_chandef(rtwdev, rtwvif->chanctx_idx, NULL);
++ rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, NULL);
+ rtw89_chanctx_proceed(rtwdev);
+- ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, false);
++ ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, false);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+ "roc send null-0 failed: %d\n", ret);
+
+- rtw89_for_each_rtwvif(rtwdev, tmp)
+- if (tmp->chanctx_idx == rtwvif->chanctx_idx)
+- tmp->offchan = false;
++ rtw89_for_each_rtwvif(rtwdev, tmp_vif)
++ tmp_vif->offchan = false;
+
+- rtw89_core_handle_sta_pending_tx(rtwdev, rtwvif);
++ rtw89_core_handle_sta_pending_tx(rtwdev, rtwvif_link);
+ queue_work(rtwdev->txq_wq, &rtwdev->txq_work);
+
+ if (hw->conf.flags & IEEE80211_CONF_IDLE)
+@@ -3188,39 +3317,52 @@ static bool rtw89_traffic_stats_calc(struct rtw89_dev *rtwdev,
+
+ static bool rtw89_traffic_stats_track(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ bool tfc_changed;
+
+ tfc_changed = rtw89_traffic_stats_calc(rtwdev, &rtwdev->stats);
++
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+ rtw89_traffic_stats_calc(rtwdev, &rtwvif->stats);
+- rtw89_fw_h2c_tp_offload(rtwdev, rtwvif);
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_fw_h2c_tp_offload(rtwdev, rtwvif_link);
+ }
+
+ return tfc_changed;
+ }
+
+-static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- if ((rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION &&
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) ||
+- rtwvif->tdls_peer)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION &&
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
+ return;
+
+- if (rtwvif->offchan)
+- return;
+-
+- if (rtwvif->stats.tx_tfc_lv == RTW89_TFC_IDLE &&
+- rtwvif->stats.rx_tfc_lv == RTW89_TFC_IDLE)
+- rtw89_enter_lps(rtwdev, rtwvif, true);
++ rtw89_enter_lps(rtwdev, rtwvif_link, true);
+ }
+
+ static void rtw89_enter_lps_track(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_vif_enter_lps(rtwdev, rtwvif);
++ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++ if (rtwvif->tdls_peer)
++ continue;
++ if (rtwvif->offchan)
++ continue;
++
++ if (rtwvif->stats.tx_tfc_lv != RTW89_TFC_IDLE ||
++ rtwvif->stats.rx_tfc_lv != RTW89_TFC_IDLE)
++ continue;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_vif_enter_lps(rtwdev, rtwvif_link);
++ }
+ }
+
+ static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev)
+@@ -3234,14 +3376,16 @@ static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev)
+ rtw89_chip_rfk_track(rtwdev);
+ }
+
+-void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+ enum rtw89_entity_mode mode = rtw89_get_entity_mode(rtwdev);
+
+ if (mode == RTW89_ENTITY_MODE_MCC)
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_P2P_PS_CHANGE);
+ else
+- rtw89_process_p2p_ps(rtwdev, vif);
++ rtw89_process_p2p_ps(rtwdev, rtwvif_link, bss_conf);
+ }
+
+ void rtw89_traffic_stats_init(struct rtw89_dev *rtwdev,
+@@ -3326,7 +3470,8 @@ void rtw89_core_release_all_bits_map(unsigned long *addr, unsigned int nbits)
+ }
+
+ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx)
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_cam_info *cam_info = &rtwdev->cam_info;
+@@ -3363,7 +3508,7 @@ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+ }
+
+ entry->tid = tid;
+- list_add_tail(&entry->list, &rtwsta->ba_cam_list);
++ list_add_tail(&entry->list, &rtwsta_link->ba_cam_list);
+
+ *cam_idx = idx;
+
+@@ -3371,7 +3516,8 @@ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx)
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx)
+ {
+ struct rtw89_cam_info *cam_info = &rtwdev->cam_info;
+ struct rtw89_ba_cam_entry *entry = NULL, *tmp;
+@@ -3379,7 +3525,7 @@ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+- list_for_each_entry_safe(entry, tmp, &rtwsta->ba_cam_list, list) {
++ list_for_each_entry_safe(entry, tmp, &rtwsta_link->ba_cam_list, list) {
+ if (entry->tid != tid)
+ continue;
+
+@@ -3396,24 +3542,25 @@ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+
+ #define RTW89_TYPE_MAPPING(_type) \
+ case NL80211_IFTYPE_ ## _type: \
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_ ## _type; \
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_ ## _type; \
+ break
+-void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc)
++void rtw89_vif_type_mapping(struct rtw89_vif_link *rtwvif_link, bool assoc)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_bss_conf *bss_conf;
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_STATION:
+ if (vif->p2p)
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_P2P_CLIENT;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_P2P_CLIENT;
+ else
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_STATION;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_STATION;
+ break;
+ case NL80211_IFTYPE_AP:
+ if (vif->p2p)
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_P2P_GO;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_P2P_GO;
+ else
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_AP;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_AP;
+ break;
+ RTW89_TYPE_MAPPING(ADHOC);
+ RTW89_TYPE_MAPPING(MONITOR);
+@@ -3426,23 +3573,27 @@ void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc)
+ switch (vif->type) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_MESH_POINT:
+- rtwvif->net_type = RTW89_NET_TYPE_AP_MODE;
+- rtwvif->self_role = RTW89_SELF_ROLE_AP;
++ rtwvif_link->net_type = RTW89_NET_TYPE_AP_MODE;
++ rtwvif_link->self_role = RTW89_SELF_ROLE_AP;
+ break;
+ case NL80211_IFTYPE_ADHOC:
+- rtwvif->net_type = RTW89_NET_TYPE_AD_HOC;
+- rtwvif->self_role = RTW89_SELF_ROLE_CLIENT;
++ rtwvif_link->net_type = RTW89_NET_TYPE_AD_HOC;
++ rtwvif_link->self_role = RTW89_SELF_ROLE_CLIENT;
+ break;
+ case NL80211_IFTYPE_STATION:
+ if (assoc) {
+- rtwvif->net_type = RTW89_NET_TYPE_INFRA;
+- rtwvif->trigger = vif->bss_conf.he_support;
++ rtwvif_link->net_type = RTW89_NET_TYPE_INFRA;
++
++ rcu_read_lock();
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ rtwvif_link->trigger = bss_conf->he_support;
++ rcu_read_unlock();
+ } else {
+- rtwvif->net_type = RTW89_NET_TYPE_NO_LINK;
+- rtwvif->trigger = false;
++ rtwvif_link->net_type = RTW89_NET_TYPE_NO_LINK;
++ rtwvif_link->trigger = false;
+ }
+- rtwvif->self_role = RTW89_SELF_ROLE_CLIENT;
+- rtwvif->addr_cam.sec_ent_mode = RTW89_ADDR_CAM_SEC_NORMAL;
++ rtwvif_link->self_role = RTW89_SELF_ROLE_CLIENT;
++ rtwvif_link->addr_cam.sec_ent_mode = RTW89_ADDR_CAM_SEC_NORMAL;
+ break;
+ case NL80211_IFTYPE_MONITOR:
+ break;
+@@ -3452,137 +3603,110 @@ void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc)
+ }
+ }
+
+-int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_add(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+ struct rtw89_hal *hal = &rtwdev->hal;
+ u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num;
+ int i;
+ int ret;
+
+- rtwsta->rtwdev = rtwdev;
+- rtwsta->rtwvif = rtwvif;
+- rtwsta->prev_rssi = 0;
+- INIT_LIST_HEAD(&rtwsta->ba_cam_list);
+- skb_queue_head_init(&rtwsta->roc_queue);
+-
+- for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
+- rtw89_core_txq_init(rtwdev, sta->txq[i]);
+-
+- ewma_rssi_init(&rtwsta->avg_rssi);
+- ewma_snr_init(&rtwsta->avg_snr);
+- ewma_evm_init(&rtwsta->evm_1ss);
++ rtwsta_link->prev_rssi = 0;
++ INIT_LIST_HEAD(&rtwsta_link->ba_cam_list);
++ ewma_rssi_init(&rtwsta_link->avg_rssi);
++ ewma_snr_init(&rtwsta_link->avg_snr);
++ ewma_evm_init(&rtwsta_link->evm_1ss);
+ for (i = 0; i < ant_num; i++) {
+- ewma_rssi_init(&rtwsta->rssi[i]);
+- ewma_evm_init(&rtwsta->evm_min[i]);
+- ewma_evm_init(&rtwsta->evm_max[i]);
++ ewma_rssi_init(&rtwsta_link->rssi[i]);
++ ewma_evm_init(&rtwsta_link->evm_min[i]);
++ ewma_evm_init(&rtwsta_link->evm_max[i]);
+ }
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- /* for station mode, assign the mac_id from itself */
+- rtwsta->mac_id = rtwvif->mac_id;
+-
+ /* must do rtw89_reg_6ghz_recalc() before rfk channel */
+- ret = rtw89_reg_6ghz_recalc(rtwdev, rtwvif, true);
++ ret = rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, true);
+ if (ret)
+ return ret;
+
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link,
+ BTC_ROLE_MSTS_STA_CONN_START);
+- rtw89_chip_rfk_channel(rtwdev, rtwvif);
++ rtw89_chip_rfk_channel(rtwdev, rtwvif_link);
+ } else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
+- rtwsta->mac_id = rtw89_acquire_mac_id(rtwdev);
+- if (rtwsta->mac_id == RTW89_MAX_MAC_ID_NUM)
+- return -ENOSPC;
+-
+- ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta->mac_id, false);
++ ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta_link->mac_id, false);
+ if (ret) {
+- rtw89_release_mac_id(rtwdev, rtwsta->mac_id);
+ rtw89_warn(rtwdev, "failed to send h2c macid pause\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link,
+ RTW89_ROLE_CREATE);
+ if (ret) {
+- rtw89_release_mac_id(rtwdev, rtwsta->mac_id);
+ rtw89_warn(rtwdev, "failed to send h2c role info\n");
+ return ret;
+ }
+
+- ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret)
+ return ret;
+
+- ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret)
+ return ret;
+-
+- rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
+ }
+
+ return 0;
+ }
+
+-int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, false);
+-
+- rtwdev->total_sta_assoc--;
+- if (sta->tdls)
+- rtwvif->tdls_peer--;
+- rtwsta->disassoc = true;
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, false);
+
+ return 0;
+ }
+
+-int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_disconnect(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+ int ret;
+
+- rtw89_mac_bf_monitor_calc(rtwdev, sta, true);
+- rtw89_mac_bf_disassoc(rtwdev, vif, sta);
+- rtw89_core_free_sta_pending_ba(rtwdev, sta);
+- rtw89_core_free_sta_pending_forbid_ba(rtwdev, sta);
+- rtw89_core_free_sta_pending_roc_tx(rtwdev, sta);
++ rtw89_mac_bf_monitor_calc(rtwdev, rtwsta_link, true);
++ rtw89_mac_bf_disassoc(rtwdev, rtwvif_link, rtwsta_link);
+
+ if (vif->type == NL80211_IFTYPE_AP || sta->tdls)
+- rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam);
++ rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta_link->addr_cam);
+ if (sta->tdls)
+- rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam);
++ rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta_link->bssid_cam);
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- rtw89_vif_type_mapping(vif, false);
+- rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, true);
++ rtw89_vif_type_mapping(rtwvif_link, false);
++ rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif_link, true);
+ }
+
+- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, sta);
++ ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cmac table\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, true);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, true);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c join info\n");
+ return ret;
+ }
+
+ /* update cam aid mac_id net_type */
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+@@ -3591,106 +3715,114 @@ int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif, rtwsta);
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
++ struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif_link,
++ rtwsta_link);
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ int ret;
+
+ if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
+ if (sta->tdls) {
+- ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, sta->addr);
++ struct ieee80211_link_sta *link_sta;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif_link, bssid_cam,
++ link_sta->addr);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c init bssid cam for TDLS\n");
++ rcu_read_unlock();
+ return ret;
+ }
++
++ rcu_read_unlock();
+ }
+
+- ret = rtw89_cam_init_addr_cam(rtwdev, &rtwsta->addr_cam, bssid_cam);
++ ret = rtw89_cam_init_addr_cam(rtwdev, &rtwsta_link->addr_cam, bssid_cam);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c init addr cam\n");
+ return ret;
+ }
+ }
+
+- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, sta);
++ ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cmac table\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, false);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, false);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c join info\n");
+ return ret;
+ }
+
+ /* update cam aid mac_id net_type */
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+ }
+
+- rtwdev->total_sta_assoc++;
+- if (sta->tdls)
+- rtwvif->tdls_peer++;
+- rtw89_phy_ra_assoc(rtwdev, sta);
+- rtw89_mac_bf_assoc(rtwdev, vif, sta);
+- rtw89_mac_bf_monitor_calc(rtwdev, sta, false);
++ rtw89_phy_ra_assoc(rtwdev, rtwsta_link);
++ rtw89_mac_bf_assoc(rtwdev, rtwvif_link, rtwsta_link);
++ rtw89_mac_bf_monitor_calc(rtwdev, rtwsta_link, false);
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
++ struct ieee80211_bss_conf *bss_conf;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
+ if (bss_conf->he_support &&
+ !(bss_conf->he_oper.params & IEEE80211_HE_OPERATION_ER_SU_DISABLE))
+- rtwsta->er_cap = true;
++ rtwsta_link->er_cap = true;
++
++ rcu_read_unlock();
+
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link,
+ BTC_ROLE_MSTS_STA_CONN_END);
+- rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta->htc_template, chan);
+- rtw89_phy_ul_tb_assoc(rtwdev, rtwvif);
++ rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta_link->htc_template, chan);
++ rtw89_phy_ul_tb_assoc(rtwdev, rtwvif_link);
+
+- ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
++ ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif_link, rtwsta_link->mac_id);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+ return ret;
+ }
+
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+ }
+
+ return ret;
+ }
+
+-int rtw89_core_sta_remove(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_remove(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+ int ret;
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- rtw89_reg_6ghz_recalc(rtwdev, rtwvif, false);
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
++ rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, false);
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link,
+ BTC_ROLE_MSTS_STA_DIS_CONN);
+ } else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
+- rtw89_release_mac_id(rtwdev, rtwsta->mac_id);
+-
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link,
+ RTW89_ROLE_REMOVE);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c role info\n");
+ return ret;
+ }
+-
+- rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
+ }
+
+ return 0;
+@@ -4152,15 +4284,16 @@ static void rtw89_core_ppdu_sts_init(struct rtw89_dev *rtwdev)
+ void rtw89_core_update_beacon_work(struct work_struct *work)
+ {
+ struct rtw89_dev *rtwdev;
+- struct rtw89_vif *rtwvif = container_of(work, struct rtw89_vif,
+- update_beacon_work);
++ struct rtw89_vif_link *rtwvif_link = container_of(work, struct rtw89_vif_link,
++ update_beacon_work);
+
+- if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE)
++ if (rtwvif_link->net_type != RTW89_NET_TYPE_AP_MODE)
+ return;
+
+- rtwdev = rtwvif->rtwdev;
++ rtwdev = rtwvif_link->rtwvif->rtwdev;
++
+ mutex_lock(&rtwdev->mutex);
+- rtw89_chip_h2c_update_beacon(rtwdev, rtwvif);
++ rtw89_chip_h2c_update_beacon(rtwdev, rtwvif_link);
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -4355,6 +4488,168 @@ void rtw89_release_mac_id(struct rtw89_dev *rtwdev, u8 mac_id)
+ clear_bit(mac_id, rtwdev->mac_id_map);
+ }
+
++void rtw89_init_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ u8 mac_id, u8 port)
++{
++ const struct rtw89_chip_info *chip = rtwdev->chip;
++ u8 support_link_num = chip->support_link_num;
++ u8 support_mld_num = 0;
++ unsigned int link_id;
++ u8 index;
++
++ bitmap_zero(rtwvif->links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++)
++ rtwvif->links[link_id] = NULL;
++
++ rtwvif->rtwdev = rtwdev;
++
++ if (rtwdev->support_mlo) {
++ rtwvif->links_inst_valid_num = support_link_num;
++ support_mld_num = chip->support_macid_num / support_link_num;
++ } else {
++ rtwvif->links_inst_valid_num = 1;
++ }
++
++ for (index = 0; index < rtwvif->links_inst_valid_num; index++) {
++ struct rtw89_vif_link *inst = &rtwvif->links_inst[index];
++
++ inst->rtwvif = rtwvif;
++ inst->mac_id = mac_id + index * support_mld_num;
++ inst->mac_idx = RTW89_MAC_0 + index;
++ inst->phy_idx = RTW89_PHY_0 + index;
++
++ /* multi-link use the same port id on different HW bands */
++ inst->port = port;
++ }
++}
++
++void rtw89_init_sta(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_sta *rtwsta, u8 mac_id)
++{
++ const struct rtw89_chip_info *chip = rtwdev->chip;
++ u8 support_link_num = chip->support_link_num;
++ u8 support_mld_num = 0;
++ unsigned int link_id;
++ u8 index;
++
++ bitmap_zero(rtwsta->links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++)
++ rtwsta->links[link_id] = NULL;
++
++ rtwsta->rtwdev = rtwdev;
++ rtwsta->rtwvif = rtwvif;
++
++ if (rtwdev->support_mlo) {
++ rtwsta->links_inst_valid_num = support_link_num;
++ support_mld_num = chip->support_macid_num / support_link_num;
++ } else {
++ rtwsta->links_inst_valid_num = 1;
++ }
++
++ for (index = 0; index < rtwsta->links_inst_valid_num; index++) {
++ struct rtw89_sta_link *inst = &rtwsta->links_inst[index];
++
++ inst->rtwvif_link = &rtwvif->links_inst[index];
++
++ inst->rtwsta = rtwsta;
++ inst->mac_id = mac_id + index * support_mld_num;
++ }
++}
++
++struct rtw89_vif_link *rtw89_vif_set_link(struct rtw89_vif *rtwvif,
++ unsigned int link_id)
++{
++ struct rtw89_vif_link *rtwvif_link = rtwvif->links[link_id];
++ u8 index;
++ int ret;
++
++ if (rtwvif_link)
++ return rtwvif_link;
++
++ index = find_first_zero_bit(rtwvif->links_inst_map,
++ rtwvif->links_inst_valid_num);
++ if (index == rtwvif->links_inst_valid_num) {
++ ret = -EBUSY;
++ goto err;
++ }
++
++ rtwvif_link = &rtwvif->links_inst[index];
++ rtwvif_link->link_id = link_id;
++
++ set_bit(index, rtwvif->links_inst_map);
++ rtwvif->links[link_id] = rtwvif_link;
++ return rtwvif_link;
++
++err:
++ rtw89_err(rtwvif->rtwdev, "vif (link_id %u) failed to set link: %d\n",
++ link_id, ret);
++ return NULL;
++}
++
++void rtw89_vif_unset_link(struct rtw89_vif *rtwvif, unsigned int link_id)
++{
++ struct rtw89_vif_link **container = &rtwvif->links[link_id];
++ struct rtw89_vif_link *link = *container;
++ u8 index;
++
++ if (!link)
++ return;
++
++ index = rtw89_vif_link_inst_get_index(link);
++ clear_bit(index, rtwvif->links_inst_map);
++ *container = NULL;
++}
++
++struct rtw89_sta_link *rtw89_sta_set_link(struct rtw89_sta *rtwsta,
++ unsigned int link_id)
++{
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = rtwvif->links[link_id];
++ struct rtw89_sta_link *rtwsta_link = rtwsta->links[link_id];
++ u8 index;
++ int ret;
++
++ if (rtwsta_link)
++ return rtwsta_link;
++
++ if (!rtwvif_link) {
++ ret = -ENOLINK;
++ goto err;
++ }
++
++ index = rtw89_vif_link_inst_get_index(rtwvif_link);
++ if (test_bit(index, rtwsta->links_inst_map)) {
++ ret = -EBUSY;
++ goto err;
++ }
++
++ rtwsta_link = &rtwsta->links_inst[index];
++ rtwsta_link->link_id = link_id;
++
++ set_bit(index, rtwsta->links_inst_map);
++ rtwsta->links[link_id] = rtwsta_link;
++ return rtwsta_link;
++
++err:
++ rtw89_err(rtwsta->rtwdev, "sta (link_id %u) failed to set link: %d\n",
++ link_id, ret);
++ return NULL;
++}
++
++void rtw89_sta_unset_link(struct rtw89_sta *rtwsta, unsigned int link_id)
++{
++ struct rtw89_sta_link **container = &rtwsta->links[link_id];
++ struct rtw89_sta_link *link = *container;
++ u8 index;
++
++ if (!link)
++ return;
++
++ index = rtw89_sta_link_inst_get_index(link);
++ clear_bit(index, rtwsta->links_inst_map);
++ *container = NULL;
++}
++
+ int rtw89_core_init(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_btc *btc = &rtwdev->btc;
+@@ -4444,38 +4739,44 @@ void rtw89_core_deinit(struct rtw89_dev *rtwdev)
+ }
+ EXPORT_SYMBOL(rtw89_core_deinit);
+
+-void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ const u8 *mac_addr, bool hw_scan)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+
+ rtwdev->scanning = true;
+ rtw89_leave_lps(rtwdev);
+ if (hw_scan)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+
+- ether_addr_copy(rtwvif->mac_addr, mac_addr);
++ ether_addr_copy(rtwvif_link->mac_addr, mac_addr);
+ rtw89_btc_ntfy_scan_start(rtwdev, RTW89_PHY_0, chan->band_type);
+- rtw89_chip_rfk_scan(rtwdev, rtwvif, true);
++ rtw89_chip_rfk_scan(rtwdev, rtwvif_link, true);
+ rtw89_hci_recalc_int_mit(rtwdev);
+ rtw89_phy_config_edcca(rtwdev, true);
+
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, mac_addr);
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, mac_addr);
+ }
+
+ void rtw89_core_scan_complete(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif, bool hw_scan)
++ struct rtw89_vif_link *rtwvif_link, bool hw_scan)
+ {
+- struct rtw89_vif *rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
++ struct ieee80211_bss_conf *bss_conf;
+
+- if (!rtwvif)
++ if (!rtwvif_link)
+ return;
+
+- ether_addr_copy(rtwvif->mac_addr, vif->addr);
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ ether_addr_copy(rtwvif_link->mac_addr, bss_conf->addr);
++
++ rcu_read_unlock();
++
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
+
+- rtw89_chip_rfk_scan(rtwdev, rtwvif, false);
++ rtw89_chip_rfk_scan(rtwdev, rtwvif_link, false);
+ rtw89_btc_ntfy_scan_finish(rtwdev, RTW89_PHY_0);
+ rtw89_phy_config_edcca(rtwdev, false);
+
+@@ -4688,17 +4989,39 @@ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev)
+ }
+ EXPORT_SYMBOL(rtw89_chip_info_setup);
+
++void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct rtw89_chip_info *chip = rtwdev->chip;
++ struct ieee80211_bss_conf *bss_conf;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ if (!bss_conf->he_support || !vif->cfg.assoc) {
++ rcu_read_unlock();
++ return;
++ }
++
++ rcu_read_unlock();
++
++ if (chip->ops->set_txpwr_ul_tb_offset)
++ chip->ops->set_txpwr_ul_tb_offset(rtwdev, 0, rtwvif_link->mac_idx);
++}
++
+ static int rtw89_core_register_hw(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
++ u8 n = rtwdev->support_mlo ? chip->support_link_num : 1;
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct rtw89_efuse *efuse = &rtwdev->efuse;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ int ret;
+ int tx_headroom = IEEE80211_HT_CTL_LEN;
+
+- hw->vif_data_size = sizeof(struct rtw89_vif);
+- hw->sta_data_size = sizeof(struct rtw89_sta);
++ hw->vif_data_size = struct_size_t(struct rtw89_vif, links_inst, n);
++ hw->sta_data_size = struct_size_t(struct rtw89_sta, links_inst, n);
+ hw->txq_data_size = sizeof(struct rtw89_txq);
+ hw->chanctx_data_size = sizeof(struct rtw89_chanctx_cfg);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 4ed9034fdb4641..de33320b1354cd 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -829,6 +829,8 @@ enum rtw89_phy_idx {
+ RTW89_PHY_MAX
+ };
+
++#define __RTW89_MLD_MAX_LINK_NUM 2
++
+ enum rtw89_chanctx_idx {
+ RTW89_CHANCTX_0 = 0,
+ RTW89_CHANCTX_1 = 1,
+@@ -1166,8 +1168,8 @@ struct rtw89_core_tx_request {
+ enum rtw89_core_tx_type tx_type;
+
+ struct sk_buff *skb;
+- struct ieee80211_vif *vif;
+- struct ieee80211_sta *sta;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
+ struct rtw89_tx_desc_info desc_info;
+ };
+
+@@ -3354,12 +3356,13 @@ struct rtw89_sec_cam_entry {
+ u8 key[32];
+ };
+
+-struct rtw89_sta {
++struct rtw89_sta_link {
++ struct rtw89_sta *rtwsta;
++ unsigned int link_id;
++
+ u8 mac_id;
+- bool disassoc;
+ bool er_cap;
+- struct rtw89_dev *rtwdev;
+- struct rtw89_vif *rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_ra_info ra;
+ struct rtw89_ra_report ra_report;
+ int max_agg_wait;
+@@ -3370,15 +3373,12 @@ struct rtw89_sta {
+ struct ewma_evm evm_1ss;
+ struct ewma_evm evm_min[RF_PATH_MAX];
+ struct ewma_evm evm_max[RF_PATH_MAX];
+- struct rtw89_ampdu_params ampdu_params[IEEE80211_NUM_TIDS];
+- DECLARE_BITMAP(ampdu_map, IEEE80211_NUM_TIDS);
+ struct ieee80211_rx_status rx_status;
+ u16 rx_hw_rate;
+ __le32 htc_template;
+ struct rtw89_addr_cam_entry addr_cam; /* AP mode or TDLS peer only */
+ struct rtw89_bssid_cam_entry bssid_cam; /* TDLS peer only */
+ struct list_head ba_cam_list;
+- struct sk_buff_head roc_queue;
+
+ bool use_cfg_mask;
+ struct cfg80211_bitrate_mask mask;
+@@ -3460,10 +3460,10 @@ struct rtw89_p2p_noa_setter {
+ u8 noa_index;
+ };
+
+-struct rtw89_vif {
+- struct list_head list;
+- struct rtw89_dev *rtwdev;
+- struct rtw89_roc roc;
++struct rtw89_vif_link {
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
++
+ bool chanctx_assigned; /* only valid when running with chanctx_ops */
+ enum rtw89_chanctx_idx chanctx_idx;
+ enum rtw89_reg_6ghz_power reg_6ghz_power;
+@@ -3473,7 +3473,6 @@ struct rtw89_vif {
+ u8 port;
+ u8 mac_addr[ETH_ALEN];
+ u8 bssid[ETH_ALEN];
+- __be32 ip_addr;
+ u8 phy_idx;
+ u8 mac_idx;
+ u8 net_type;
+@@ -3484,7 +3483,6 @@ struct rtw89_vif {
+ u8 hit_rule;
+ u8 last_noa_nr;
+ u64 sync_bcn_tsf;
+- bool offchan;
+ bool trigger;
+ bool lsig_txop;
+ u8 tgt_ind;
+@@ -3498,15 +3496,11 @@ struct rtw89_vif {
+ bool pre_pwr_diff_en;
+ bool pwr_diff_en;
+ u8 def_tri_idx;
+- u32 tdls_peer;
+ struct work_struct update_beacon_work;
+ struct rtw89_addr_cam_entry addr_cam;
+ struct rtw89_bssid_cam_entry bssid_cam;
+ struct ieee80211_tx_queue_params tx_params[IEEE80211_NUM_ACS];
+- struct rtw89_traffic_stats stats;
+ struct rtw89_phy_rate_pattern rate_pattern;
+- struct cfg80211_scan_request *scan_req;
+- struct ieee80211_scan_ies *scan_ies;
+ struct list_head general_pkt_list;
+ struct rtw89_p2p_noa_setter p2p_noa;
+ };
+@@ -3599,11 +3593,11 @@ struct rtw89_chip_ops {
+ void (*rfk_hw_init)(struct rtw89_dev *rtwdev);
+ void (*rfk_init)(struct rtw89_dev *rtwdev);
+ void (*rfk_init_late)(struct rtw89_dev *rtwdev);
+- void (*rfk_channel)(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++ void (*rfk_channel)(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void (*rfk_band_changed)(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx,
+ const struct rtw89_chan *chan);
+- void (*rfk_scan)(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ void (*rfk_scan)(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool start);
+ void (*rfk_track)(struct rtw89_dev *rtwdev);
+ void (*power_trim)(struct rtw89_dev *rtwdev);
+@@ -3646,23 +3640,25 @@ struct rtw89_chip_ops {
+ u32 *tx_en, enum rtw89_sch_tx_sel sel);
+ int (*resume_sch_tx)(struct rtw89_dev *rtwdev, u8 mac_idx, u32 tx_en);
+ int (*h2c_dctl_sec_cam)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_default_cmac_tbl)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_assoc_cmac_tbl)(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_ampdu_cmac_tbl)(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_default_dmac_tbl)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_update_beacon)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
+- int (*h2c_ba_cam)(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link);
++ int (*h2c_ba_cam)(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params);
+
+ void (*btc_set_rfe)(struct rtw89_dev *rtwdev);
+@@ -5196,7 +5192,7 @@ struct rtw89_early_h2c {
+ };
+
+ struct rtw89_hw_scan_info {
+- struct ieee80211_vif *scanning_vif;
++ struct rtw89_vif_link *scanning_vif;
+ struct list_head pkt_list[NUM_NL80211_BANDS];
+ struct rtw89_chan op_chan;
+ bool abort;
+@@ -5371,7 +5367,7 @@ struct rtw89_wow_aoac_report {
+ };
+
+ struct rtw89_wow_param {
+- struct ieee80211_vif *wow_vif;
++ struct rtw89_vif_link *rtwvif_link;
+ DECLARE_BITMAP(flags, RTW89_WOW_FLAG_NUM);
+ struct rtw89_wow_cam_info patterns[RTW89_MAX_PATTERN_NUM];
+ struct rtw89_wow_key_info key_info;
+@@ -5408,7 +5404,7 @@ struct rtw89_mcc_policy {
+ };
+
+ struct rtw89_mcc_role {
+- struct rtw89_vif *rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_mcc_policy policy;
+ struct rtw89_mcc_limit limit;
+
+@@ -5608,6 +5604,121 @@ struct rtw89_dev {
+ u8 priv[] __aligned(sizeof(void *));
+ };
+
++struct rtw89_vif {
++ struct rtw89_dev *rtwdev;
++ struct list_head list;
++
++ u8 mac_addr[ETH_ALEN];
++ __be32 ip_addr;
++
++ struct rtw89_traffic_stats stats;
++ u32 tdls_peer;
++
++ struct ieee80211_scan_ies *scan_ies;
++ struct cfg80211_scan_request *scan_req;
++
++ struct rtw89_roc roc;
++ bool offchan;
++
++ u8 links_inst_valid_num;
++ DECLARE_BITMAP(links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ struct rtw89_vif_link *links[IEEE80211_MLD_MAX_NUM_LINKS];
++ struct rtw89_vif_link links_inst[] __counted_by(links_inst_valid_num);
++};
++
++static inline bool rtw89_vif_assign_link_is_valid(struct rtw89_vif_link **rtwvif_link,
++ const struct rtw89_vif *rtwvif,
++ unsigned int link_id)
++{
++ *rtwvif_link = rtwvif->links[link_id];
++ return !!*rtwvif_link;
++}
++
++#define rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) \
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) \
++ if (rtw89_vif_assign_link_is_valid(&(rtwvif_link), rtwvif, link_id))
++
++struct rtw89_sta {
++ struct rtw89_dev *rtwdev;
++ struct rtw89_vif *rtwvif;
++
++ bool disassoc;
++
++ struct sk_buff_head roc_queue;
++
++ struct rtw89_ampdu_params ampdu_params[IEEE80211_NUM_TIDS];
++ DECLARE_BITMAP(ampdu_map, IEEE80211_NUM_TIDS);
++
++ u8 links_inst_valid_num;
++ DECLARE_BITMAP(links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ struct rtw89_sta_link *links[IEEE80211_MLD_MAX_NUM_LINKS];
++ struct rtw89_sta_link links_inst[] __counted_by(links_inst_valid_num);
++};
++
++static inline bool rtw89_sta_assign_link_is_valid(struct rtw89_sta_link **rtwsta_link,
++ const struct rtw89_sta *rtwsta,
++ unsigned int link_id)
++{
++ *rtwsta_link = rtwsta->links[link_id];
++ return !!*rtwsta_link;
++}
++
++#define rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) \
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) \
++ if (rtw89_sta_assign_link_is_valid(&(rtwsta_link), rtwsta, link_id))
++
++static inline u8 rtw89_vif_get_main_macid(struct rtw89_vif *rtwvif)
++{
++ /* const after init, so no need to check if active first */
++ return rtwvif->links_inst[0].mac_id;
++}
++
++static inline u8 rtw89_vif_get_main_port(struct rtw89_vif *rtwvif)
++{
++ /* const after init, so no need to check if active first */
++ return rtwvif->links_inst[0].port;
++}
++
++static inline struct rtw89_vif_link *
++rtw89_vif_get_link_inst(struct rtw89_vif *rtwvif, u8 index)
++{
++ if (index >= rtwvif->links_inst_valid_num ||
++ !test_bit(index, rtwvif->links_inst_map))
++ return NULL;
++ return &rtwvif->links_inst[index];
++}
++
++static inline
++u8 rtw89_vif_link_inst_get_index(struct rtw89_vif_link *rtwvif_link)
++{
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++
++ return rtwvif_link - rtwvif->links_inst;
++}
++
++static inline u8 rtw89_sta_get_main_macid(struct rtw89_sta *rtwsta)
++{
++ /* const after init, so no need to check if active first */
++ return rtwsta->links_inst[0].mac_id;
++}
++
++static inline struct rtw89_sta_link *
++rtw89_sta_get_link_inst(struct rtw89_sta *rtwsta, u8 index)
++{
++ if (index >= rtwsta->links_inst_valid_num ||
++ !test_bit(index, rtwsta->links_inst_map))
++ return NULL;
++ return &rtwsta->links_inst[index];
++}
++
++static inline
++u8 rtw89_sta_link_inst_get_index(struct rtw89_sta_link *rtwsta_link)
++{
++ struct rtw89_sta *rtwsta = rtwsta_link->rtwsta;
++
++ return rtwsta_link - rtwsta->links_inst;
++}
++
+ static inline int rtw89_hci_tx_write(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+@@ -5972,9 +6083,26 @@ static inline struct ieee80211_vif *rtwvif_to_vif_safe(struct rtw89_vif *rtwvif)
+ return rtwvif ? rtwvif_to_vif(rtwvif) : NULL;
+ }
+
++static inline
++struct ieee80211_vif *rtwvif_link_to_vif(struct rtw89_vif_link *rtwvif_link)
++{
++ return rtwvif_to_vif(rtwvif_link->rtwvif);
++}
++
++static inline
++struct ieee80211_vif *rtwvif_link_to_vif_safe(struct rtw89_vif_link *rtwvif_link)
++{
++ return rtwvif_link ? rtwvif_link_to_vif(rtwvif_link) : NULL;
++}
++
++static inline struct rtw89_vif *vif_to_rtwvif(struct ieee80211_vif *vif)
++{
++ return (struct rtw89_vif *)vif->drv_priv;
++}
++
+ static inline struct rtw89_vif *vif_to_rtwvif_safe(struct ieee80211_vif *vif)
+ {
+- return vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
++ return vif ? vif_to_rtwvif(vif) : NULL;
+ }
+
+ static inline struct ieee80211_sta *rtwsta_to_sta(struct rtw89_sta *rtwsta)
+@@ -5989,11 +6117,88 @@ static inline struct ieee80211_sta *rtwsta_to_sta_safe(struct rtw89_sta *rtwsta)
+ return rtwsta ? rtwsta_to_sta(rtwsta) : NULL;
+ }
+
++static inline
++struct ieee80211_sta *rtwsta_link_to_sta(struct rtw89_sta_link *rtwsta_link)
++{
++ return rtwsta_to_sta(rtwsta_link->rtwsta);
++}
++
++static inline
++struct ieee80211_sta *rtwsta_link_to_sta_safe(struct rtw89_sta_link *rtwsta_link)
++{
++ return rtwsta_link ? rtwsta_link_to_sta(rtwsta_link) : NULL;
++}
++
++static inline struct rtw89_sta *sta_to_rtwsta(struct ieee80211_sta *sta)
++{
++ return (struct rtw89_sta *)sta->drv_priv;
++}
++
+ static inline struct rtw89_sta *sta_to_rtwsta_safe(struct ieee80211_sta *sta)
+ {
+- return sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ return sta ? sta_to_rtwsta(sta) : NULL;
+ }
+
++static inline struct ieee80211_bss_conf *
++__rtw89_vif_rcu_dereference_link(struct rtw89_vif_link *rtwvif_link, bool *nolink)
++{
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct ieee80211_bss_conf *bss_conf;
++
++ bss_conf = rcu_dereference(vif->link_conf[rtwvif_link->link_id]);
++ if (unlikely(!bss_conf)) {
++ *nolink = true;
++ return &vif->bss_conf;
++ }
++
++ *nolink = false;
++ return bss_conf;
++}
++
++#define rtw89_vif_rcu_dereference_link(rtwvif_link, assert) \
++({ \
++ typeof(rtwvif_link) p = rtwvif_link; \
++ struct ieee80211_bss_conf *bss_conf; \
++ bool nolink; \
++ \
++ bss_conf = __rtw89_vif_rcu_dereference_link(p, &nolink); \
++ if (unlikely(nolink) && (assert)) \
++ rtw89_err(p->rtwvif->rtwdev, \
++ "%s: cannot find exact bss_conf for link_id %u\n",\
++ __func__, p->link_id); \
++ bss_conf; \
++})
++
++static inline struct ieee80211_link_sta *
++__rtw89_sta_rcu_dereference_link(struct rtw89_sta_link *rtwsta_link, bool *nolink)
++{
++ struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
++ struct ieee80211_link_sta *link_sta;
++
++ link_sta = rcu_dereference(sta->link[rtwsta_link->link_id]);
++ if (unlikely(!link_sta)) {
++ *nolink = true;
++ return &sta->deflink;
++ }
++
++ *nolink = false;
++ return link_sta;
++}
++
++#define rtw89_sta_rcu_dereference_link(rtwsta_link, assert) \
++({ \
++ typeof(rtwsta_link) p = rtwsta_link; \
++ struct ieee80211_link_sta *link_sta; \
++ bool nolink; \
++ \
++ link_sta = __rtw89_sta_rcu_dereference_link(p, &nolink); \
++ if (unlikely(nolink) && (assert)) \
++ rtw89_err(p->rtwsta->rtwdev, \
++ "%s: cannot find exact link_sta for link_id %u\n",\
++ __func__, p->link_id); \
++ link_sta; \
++})
++
+ static inline u8 rtw89_hw_to_rate_info_bw(enum rtw89_bandwidth hw_bw)
+ {
+ if (hw_bw == RTW89_CHANNEL_WIDTH_160)
+@@ -6078,29 +6283,29 @@ enum nl80211_he_ru_alloc rtw89_he_rua_to_ru_alloc(u16 rua)
+ }
+
+ static inline
+-struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- if (rtwsta) {
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
++ if (rtwsta_link) {
++ struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
+- return &rtwsta->addr_cam;
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
++ return &rtwsta_link->addr_cam;
+ }
+- return &rtwvif->addr_cam;
++ return &rtwvif_link->addr_cam;
+ }
+
+ static inline
+-struct rtw89_bssid_cam_entry *rtw89_get_bssid_cam_of(struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++struct rtw89_bssid_cam_entry *rtw89_get_bssid_cam_of(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- if (rtwsta) {
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
++ if (rtwsta_link) {
++ struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+
+ if (sta->tdls)
+- return &rtwsta->bssid_cam;
++ return &rtwsta_link->bssid_cam;
+ }
+- return &rtwvif->bssid_cam;
++ return &rtwvif_link->bssid_cam;
+ }
+
+ static inline
+@@ -6159,11 +6364,10 @@ const struct rtw89_chan_rcd *rtw89_chan_rcd_get(struct rtw89_dev *rtwdev,
+ static inline
+ const struct rtw89_chan *rtw89_scan_chan_get(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
+
+- if (rtwvif)
+- return rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
++ if (rtwvif_link)
++ return rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
+ else
+ return rtw89_chan_get(rtwdev, RTW89_CHANCTX_0);
+ }
+@@ -6240,12 +6444,12 @@ static inline void rtw89_chip_rfk_init_late(struct rtw89_dev *rtwdev)
+ }
+
+ static inline void rtw89_chip_rfk_channel(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->rfk_channel)
+- chip->ops->rfk_channel(rtwdev, rtwvif);
++ chip->ops->rfk_channel(rtwdev, rtwvif_link);
+ }
+
+ static inline void rtw89_chip_rfk_band_changed(struct rtw89_dev *rtwdev,
+@@ -6259,12 +6463,12 @@ static inline void rtw89_chip_rfk_band_changed(struct rtw89_dev *rtwdev,
+ }
+
+ static inline void rtw89_chip_rfk_scan(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool start)
++ struct rtw89_vif_link *rtwvif_link, bool start)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->rfk_scan)
+- chip->ops->rfk_scan(rtwdev, rtwvif, start);
++ chip->ops->rfk_scan(rtwdev, rtwvif_link, start);
+ }
+
+ static inline void rtw89_chip_rfk_track(struct rtw89_dev *rtwdev)
+@@ -6347,20 +6551,6 @@ static inline void rtw89_chip_cfg_txrx_path(struct rtw89_dev *rtwdev)
+ chip->ops->cfg_txrx_path(rtwdev);
+ }
+
+-static inline
+-void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
+-{
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- const struct rtw89_chip_info *chip = rtwdev->chip;
+-
+- if (!vif->bss_conf.he_support || !vif->cfg.assoc)
+- return;
+-
+- if (chip->ops->set_txpwr_ul_tb_offset)
+- chip->ops->set_txpwr_ul_tb_offset(rtwdev, 0, rtwvif->mac_idx);
+-}
+-
+ static inline void rtw89_chip_digital_pwr_comp(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx)
+ {
+@@ -6457,14 +6647,14 @@ int rtw89_chip_resume_sch_tx(struct rtw89_dev *rtwdev, u8 mac_idx, u32 tx_en)
+
+ static inline
+ int rtw89_chip_h2c_dctl_sec_cam(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (!chip->ops->h2c_dctl_sec_cam)
+ return 0;
+- return chip->ops->h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ return chip->ops->h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+ static inline u8 *get_hdr_bssid(struct ieee80211_hdr *hdr)
+@@ -6479,13 +6669,14 @@ static inline u8 *get_hdr_bssid(struct ieee80211_hdr *hdr)
+ return hdr->addr3;
+ }
+
+-static inline bool rtw89_sta_has_beamformer_cap(struct ieee80211_sta *sta)
++static inline
++bool rtw89_sta_has_beamformer_cap(struct ieee80211_link_sta *link_sta)
+ {
+- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.he_cap.he_cap_elem.phy_cap_info[3] &
++ if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) ||
++ (link_sta->he_cap.he_cap_elem.phy_cap_info[3] &
+ IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+- (sta->deflink.he_cap.he_cap_elem.phy_cap_info[4] &
++ (link_sta->he_cap.he_cap_elem.phy_cap_info[4] &
+ IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER))
+ return true;
+ return false;
+@@ -6605,21 +6796,21 @@ void rtw89_core_napi_start(struct rtw89_dev *rtwdev);
+ void rtw89_core_napi_stop(struct rtw89_dev *rtwdev);
+ int rtw89_core_napi_init(struct rtw89_dev *rtwdev);
+ void rtw89_core_napi_deinit(struct rtw89_dev *rtwdev);
+-int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_remove(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++int rtw89_core_sta_link_add(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_disconnect(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_remove(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ void rtw89_core_set_tid_config(struct rtw89_dev *rtwdev,
+ struct ieee80211_sta *sta,
+ struct cfg80211_tid_config *tid_config);
+@@ -6635,22 +6826,40 @@ struct rtw89_dev *rtw89_alloc_ieee80211_hw(struct device *device,
+ void rtw89_free_ieee80211_hw(struct rtw89_dev *rtwdev);
+ u8 rtw89_acquire_mac_id(struct rtw89_dev *rtwdev);
+ void rtw89_release_mac_id(struct rtw89_dev *rtwdev, u8 mac_id);
++void rtw89_init_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ u8 mac_id, u8 port);
++void rtw89_init_sta(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_sta *rtwsta, u8 mac_id);
++struct rtw89_vif_link *rtw89_vif_set_link(struct rtw89_vif *rtwvif,
++ unsigned int link_id);
++void rtw89_vif_unset_link(struct rtw89_vif *rtwvif, unsigned int link_id);
++struct rtw89_sta_link *rtw89_sta_set_link(struct rtw89_sta *rtwsta,
++ unsigned int link_id);
++void rtw89_sta_unset_link(struct rtw89_sta *rtwsta, unsigned int link_id);
+ void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev);
+ void rtw89_get_default_chandef(struct cfg80211_chan_def *chandef);
+ void rtw89_get_channel_params(const struct cfg80211_chan_def *chandef,
+ struct rtw89_chan *chan);
+ int rtw89_set_channel(struct rtw89_dev *rtwdev);
+-void rtw89_get_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_chan *chan);
+ u8 rtw89_core_acquire_bit_map(unsigned long *addr, unsigned long size);
+ void rtw89_core_release_bit_map(unsigned long *addr, u8 bit);
+ void rtw89_core_release_all_bits_map(unsigned long *addr, unsigned int nbits);
+ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx);
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx);
+ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx);
+-void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc);
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx);
++void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta);
++void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta);
++void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta);
++void rtw89_vif_type_mapping(struct rtw89_vif_link *rtwvif_link, bool assoc);
+ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev);
++void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ bool rtw89_ra_report_to_bitrate(struct rtw89_dev *rtwdev, u8 rpt_rate, u16 *bitrate);
+ int rtw89_regd_setup(struct rtw89_dev *rtwdev);
+ int rtw89_regd_init(struct rtw89_dev *rtwdev,
+@@ -6667,13 +6876,15 @@ void rtw89_core_update_beacon_work(struct work_struct *work);
+ void rtw89_roc_work(struct work_struct *work);
+ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+-void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ const u8 *mac_addr, bool hw_scan);
+ void rtw89_core_scan_complete(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif, bool hw_scan);
+-int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link, bool hw_scan);
++int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool active);
+-void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf);
+ void rtw89_core_ntfy_btc_event(struct rtw89_dev *rtwdev, enum rtw89_btc_hmsg event);
+
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
+index 29f85210f91964..7391f131229a58 100644
+--- a/drivers/net/wireless/realtek/rtw89/debug.c
++++ b/drivers/net/wireless/realtek/rtw89/debug.c
+@@ -3506,7 +3506,9 @@ static ssize_t rtw89_debug_priv_fw_log_manual_set(struct file *filp,
+ return count;
+ }
+
+-static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
++static void rtw89_sta_link_info_get_iter(struct seq_file *m,
++ struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ static const char * const he_gi_str[] = {
+ [NL80211_RATE_INFO_HE_GI_0_8] = "0.8",
+@@ -3518,20 +3520,26 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ [NL80211_RATE_INFO_EHT_GI_1_6] = "1.6",
+ [NL80211_RATE_INFO_EHT_GI_3_2] = "3.2",
+ };
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rate_info *rate = &rtwsta->ra_report.txrate;
+- struct ieee80211_rx_status *status = &rtwsta->rx_status;
+- struct seq_file *m = (struct seq_file *)data;
+- struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rate_info *rate = &rtwsta_link->ra_report.txrate;
++ struct ieee80211_rx_status *status = &rtwsta_link->rx_status;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num;
+ bool ant_asterisk = hal->tx_path_diversity || hal->ant_diversity;
++ struct ieee80211_link_sta *link_sta;
+ u8 evm_min, evm_max, evm_1ss;
++ u16 max_rc_amsdu_len;
+ u8 rssi;
+ u8 snr;
+ int i;
+
+- seq_printf(m, "TX rate [%d]: ", rtwsta->mac_id);
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ max_rc_amsdu_len = link_sta->agg.max_rc_amsdu_len;
++
++ rcu_read_unlock();
++
++ seq_printf(m, "TX rate [%u, %u]: ", rtwsta_link->mac_id, rtwsta_link->link_id);
+
+ if (rate->flags & RATE_INFO_FLAGS_MCS)
+ seq_printf(m, "HT MCS-%d%s", rate->mcs,
+@@ -3549,13 +3557,13 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ eht_gi_str[rate->eht_gi] : "N/A");
+ else
+ seq_printf(m, "Legacy %d", rate->legacy);
+- seq_printf(m, "%s", rtwsta->ra_report.might_fallback_legacy ? " FB_G" : "");
++ seq_printf(m, "%s", rtwsta_link->ra_report.might_fallback_legacy ? " FB_G" : "");
+ seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(rate->bw));
+- seq_printf(m, "\t(hw_rate=0x%x)", rtwsta->ra_report.hw_rate);
+- seq_printf(m, "\t==> agg_wait=%d (%d)\n", rtwsta->max_agg_wait,
+- sta->deflink.agg.max_rc_amsdu_len);
++ seq_printf(m, " (hw_rate=0x%x)", rtwsta_link->ra_report.hw_rate);
++ seq_printf(m, " ==> agg_wait=%d (%d)\n", rtwsta_link->max_agg_wait,
++ max_rc_amsdu_len);
+
+- seq_printf(m, "RX rate [%d]: ", rtwsta->mac_id);
++ seq_printf(m, "RX rate [%u, %u]: ", rtwsta_link->mac_id, rtwsta_link->link_id);
+
+ switch (status->encoding) {
+ case RX_ENC_LEGACY:
+@@ -3582,24 +3590,24 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ break;
+ }
+ seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(status->bw));
+- seq_printf(m, "\t(hw_rate=0x%x)\n", rtwsta->rx_hw_rate);
++ seq_printf(m, " (hw_rate=0x%x)\n", rtwsta_link->rx_hw_rate);
+
+- rssi = ewma_rssi_read(&rtwsta->avg_rssi);
++ rssi = ewma_rssi_read(&rtwsta_link->avg_rssi);
+ seq_printf(m, "RSSI: %d dBm (raw=%d, prev=%d) [",
+- RTW89_RSSI_RAW_TO_DBM(rssi), rssi, rtwsta->prev_rssi);
++ RTW89_RSSI_RAW_TO_DBM(rssi), rssi, rtwsta_link->prev_rssi);
+ for (i = 0; i < ant_num; i++) {
+- rssi = ewma_rssi_read(&rtwsta->rssi[i]);
++ rssi = ewma_rssi_read(&rtwsta_link->rssi[i]);
+ seq_printf(m, "%d%s%s", RTW89_RSSI_RAW_TO_DBM(rssi),
+ ant_asterisk && (hal->antenna_tx & BIT(i)) ? "*" : "",
+ i + 1 == ant_num ? "" : ", ");
+ }
+ seq_puts(m, "]\n");
+
+- evm_1ss = ewma_evm_read(&rtwsta->evm_1ss);
++ evm_1ss = ewma_evm_read(&rtwsta_link->evm_1ss);
+ seq_printf(m, "EVM: [%2u.%02u, ", evm_1ss >> 2, (evm_1ss & 0x3) * 25);
+ for (i = 0; i < (hal->ant_diversity ? 2 : 1); i++) {
+- evm_min = ewma_evm_read(&rtwsta->evm_min[i]);
+- evm_max = ewma_evm_read(&rtwsta->evm_max[i]);
++ evm_min = ewma_evm_read(&rtwsta_link->evm_min[i]);
++ evm_max = ewma_evm_read(&rtwsta_link->evm_max[i]);
+
+ seq_printf(m, "%s(%2u.%02u, %2u.%02u)", i == 0 ? "" : " ",
+ evm_min >> 2, (evm_min & 0x3) * 25,
+@@ -3607,10 +3615,22 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ }
+ seq_puts(m, "]\t");
+
+- snr = ewma_snr_read(&rtwsta->avg_snr);
++ snr = ewma_snr_read(&rtwsta_link->avg_snr);
+ seq_printf(m, "SNR: %u\n", snr);
+ }
+
++static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct seq_file *m = (struct seq_file *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
++ rtw89_sta_link_info_get_iter(m, rtwdev, rtwsta_link);
++}
++
+ static void
+ rtw89_debug_append_rx_rate(struct seq_file *m, struct rtw89_pkt_stat *pkt_stat,
+ enum rtw89_hw_rate first_rate, int len)
+@@ -3737,28 +3757,41 @@ static void rtw89_dump_pkt_offload(struct seq_file *m, struct list_head *pkt_lis
+ seq_puts(m, "\n");
+ }
+
++static void rtw89_vif_link_ids_get(struct seq_file *m, u8 *mac,
++ struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
++
++ seq_printf(m, " [%u] %pM\n", rtwvif_link->mac_id, rtwvif_link->mac_addr);
++ seq_printf(m, "\tlink_id=%u\n", rtwvif_link->link_id);
++ seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx);
++ rtw89_dump_addr_cam(m, rtwdev, &rtwvif_link->addr_cam);
++ rtw89_dump_pkt_offload(m, &rtwvif_link->general_pkt_list,
++ "\tpkt_ofld[GENERAL]: ");
++}
++
+ static
+ void rtw89_vif_ids_get_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
+ struct seq_file *m = (struct seq_file *)data;
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
+
+- seq_printf(m, "VIF [%d] %pM\n", rtwvif->mac_id, rtwvif->mac_addr);
+- seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx);
+- rtw89_dump_addr_cam(m, rtwdev, &rtwvif->addr_cam);
+- rtw89_dump_pkt_offload(m, &rtwvif->general_pkt_list, "\tpkt_ofld[GENERAL]: ");
++ seq_printf(m, "VIF %pM\n", rtwvif->mac_addr);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_vif_link_ids_get(m, mac, rtwdev, rtwvif_link);
+ }
+
+-static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta)
++static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
+ struct rtw89_ba_cam_entry *entry;
+ bool first = true;
+
+- list_for_each_entry(entry, &rtwsta->ba_cam_list, list) {
++ list_for_each_entry(entry, &rtwsta_link->ba_cam_list, list) {
+ if (first) {
+ seq_puts(m, "\tba_cam ");
+ first = false;
+@@ -3771,16 +3804,36 @@ static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta)
+ seq_puts(m, "\n");
+ }
+
++static void rtw89_sta_link_ids_get(struct seq_file *m,
++ struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
++{
++ struct ieee80211_link_sta *link_sta;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ seq_printf(m, " [%u] %pM\n", rtwsta_link->mac_id, link_sta->addr);
++
++ rcu_read_unlock();
++
++ seq_printf(m, "\tlink_id=%u\n", rtwsta_link->link_id);
++ rtw89_dump_addr_cam(m, rtwdev, &rtwsta_link->addr_cam);
++ rtw89_dump_ba_cam(m, rtwdev, rtwsta_link);
++}
++
+ static void rtw89_sta_ids_get_iter(void *data, struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+ struct seq_file *m = (struct seq_file *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
+
+- seq_printf(m, "STA [%d] %pM %s\n", rtwsta->mac_id, sta->addr,
+- sta->tdls ? "(TDLS)" : "");
+- rtw89_dump_addr_cam(m, rtwdev, &rtwsta->addr_cam);
+- rtw89_dump_ba_cam(m, rtwsta);
++ seq_printf(m, "STA %pM %s\n", sta->addr, sta->tdls ? "(TDLS)" : "");
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
++ rtw89_sta_link_ids_get(m, rtwdev, rtwsta_link);
+ }
+
+ static int rtw89_debug_priv_stations_get(struct seq_file *m, void *v)
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index d9b0e7ebe619a3..13a7c39ceb6f55 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -1741,8 +1741,8 @@ void rtw89_fw_log_dump(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
+ }
+
+ #define H2C_CAM_LEN 60
+-int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, const u8 *scan_mac_addr)
++int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr)
+ {
+ struct sk_buff *skb;
+ int ret;
+@@ -1753,8 +1753,9 @@ int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_CAM_LEN);
+- rtw89_cam_fill_addr_cam_info(rtwdev, rtwvif, rtwsta, scan_mac_addr, skb->data);
+- rtw89_cam_fill_bssid_cam_info(rtwdev, rtwvif, rtwsta, skb->data);
++ rtw89_cam_fill_addr_cam_info(rtwdev, rtwvif_link, rtwsta_link, scan_mac_addr,
++ skb->data);
++ rtw89_cam_fill_bssid_cam_info(rtwdev, rtwvif_link, rtwsta_link, skb->data);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -1776,8 +1777,8 @@ int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ }
+
+ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ struct rtw89_h2c_dctlinfo_ud_v1 *h2c;
+ u32 len = sizeof(*h2c);
+@@ -1792,7 +1793,7 @@ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_dctlinfo_ud_v1 *)skb->data;
+
+- rtw89_cam_fill_dctl_sec_cam_info_v1(rtwdev, rtwvif, rtwsta, h2c);
++ rtw89_cam_fill_dctl_sec_cam_info_v1(rtwdev, rtwvif_link, rtwsta_link, h2c);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -1815,8 +1816,8 @@ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_dctl_sec_cam_v1);
+
+ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c;
+ u32 len = sizeof(*h2c);
+@@ -1831,7 +1832,7 @@ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_dctlinfo_ud_v2 *)skb->data;
+
+- rtw89_cam_fill_dctl_sec_cam_info_v2(rtwdev, rtwvif, rtwsta, h2c);
++ rtw89_cam_fill_dctl_sec_cam_info_v2(rtwdev, rtwvif_link, rtwsta_link, h2c);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -1854,10 +1855,10 @@ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_dctl_sec_cam_v2);
+
+ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -1908,21 +1909,24 @@ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev,
+ }
+ EXPORT_SYMBOL(rtw89_fw_h2c_default_dmac_tbl_v2);
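
The h2c helpers in this hunk all share one skeleton: allocate a buffer sized for the command, reserve the payload, fill the fields, attach the H2C header, transmit, and free the buffer only on failure. A condensed sketch of that flow, with an invented cmd_buf API standing in for the skb calls (in the driver, the transport consumes the skb on success):

#include <stdio.h>
#include <stdlib.h>

struct cmd_buf { unsigned char *data; size_t len; };

static struct cmd_buf *cmd_alloc(size_t len)
{
        struct cmd_buf *b = malloc(sizeof(*b));
        if (!b)
                return NULL;
        b->data = calloc(1, len);
        if (!b->data) {
                free(b);
                return NULL;
        }
        b->len = len;
        return b;
}

static void cmd_free(struct cmd_buf *b)
{
        if (b) {
                free(b->data);
                free(b);
        }
}

static int cmd_send(struct cmd_buf *b)
{
        printf("sending %zu bytes\n", b->len);
        return 0; /* pretend the transport accepted it */
}

static int send_cam_cmd(unsigned char mac_id)
{
        struct cmd_buf *b = cmd_alloc(60); /* H2C_CAM_LEN-style fixed size */
        int ret;

        if (!b)
                return -1;

        b->data[0] = mac_id;    /* fill payload fields */

        ret = cmd_send(b);
        if (ret)
                goto fail;      /* transmit error: drop the buffer */

        cmd_free(b);            /* here the caller owns it either way */
        return 0;

fail:
        cmd_free(b);
        return ret;
}

int main(void)
{
        return send_cam_cmd(5);
}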
+
+-int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_h2c_ba_cam *h2c;
+- u8 macid = rtwsta->mac_id;
++ u8 macid = rtwsta_link->mac_id;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ u8 entry_idx;
+ int ret;
+
+ ret = valid ?
+- rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx) :
+- rtw89_core_release_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx);
++ rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx) :
++ rtw89_core_release_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx);
+ if (ret) {
+ /* it still works even if we don't have static BA CAM, because
+ * hardware can create dynamic BA CAM automatically.
+@@ -1960,7 +1964,8 @@ int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+
+ if (chip->bacam_ver == RTW89_BACAM_V0_EXT) {
+ h2c->w1 |= le32_encode_bits(1, RTW89_H2C_BA_CAM_W1_STD_EN) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BA_CAM_W1_BAND);
++ le32_encode_bits(rtwvif_link->mac_idx,
++ RTW89_H2C_BA_CAM_W1_BAND);
+ }
+
+ end:
+@@ -2039,13 +2044,14 @@ void rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(struct rtw89_dev *rtwdev)
+ }
+ }
+
+-int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_h2c_ba_cam_v1 *h2c;
+- u8 macid = rtwsta->mac_id;
++ u8 macid = rtwsta_link->mac_id;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ u8 entry_idx;
+@@ -2053,8 +2059,10 @@ int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ int ret;
+
+ ret = valid ?
+- rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx) :
+- rtw89_core_release_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx);
++ rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx) :
++ rtw89_core_release_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx);
+ if (ret) {
+ /* it still works even if we don't have static BA CAM, because
+ * hardware can create dynamic BA CAM automatically.
+@@ -2092,7 +2100,8 @@ int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ entry_idx += chip->bacam_dynamic_num; /* std entry right after dynamic ones */
+ h2c->w1 = le32_encode_bits(entry_idx, RTW89_H2C_BA_CAM_V1_W1_ENTRY_IDX_MASK) |
+ le32_encode_bits(1, RTW89_H2C_BA_CAM_V1_W1_STD_ENTRY_EN) |
+- le32_encode_bits(!!rtwvif->mac_idx, RTW89_H2C_BA_CAM_V1_W1_BAND_SEL);
++ le32_encode_bits(!!rtwvif_link->mac_idx,
++ RTW89_H2C_BA_CAM_V1_W1_BAND_SEL);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -2197,15 +2206,14 @@ int rtw89_fw_h2c_fw_log(struct rtw89_dev *rtwdev, bool enable)
+ }
+
+ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ static const u8 gtkbody[] = {0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x88,
+ 0x8E, 0x01, 0x03, 0x00, 0x5F, 0x02, 0x03};
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+ u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_eapol_2_of_2 *eapol_pkt;
++ struct ieee80211_bss_conf *bss_conf;
+ struct ieee80211_hdr_3addr *hdr;
+ struct sk_buff *skb;
+ u8 key_des_ver;
+@@ -2227,10 +2235,17 @@ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev,
+ hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_DATA |
+ IEEE80211_FCTL_TODS |
+ IEEE80211_FCTL_PROTECTED);
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
+ ether_addr_copy(hdr->addr1, bss_conf->bssid);
+- ether_addr_copy(hdr->addr2, vif->addr);
++ ether_addr_copy(hdr->addr2, bss_conf->addr);
+ ether_addr_copy(hdr->addr3, bss_conf->bssid);
+
++ rcu_read_unlock();
++
+ skb_put_zero(skb, sec_hdr_len);
+
+ eapol_pkt = skb_put_zero(skb, sizeof(*eapol_pkt));
+@@ -2241,11 +2256,10 @@ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev,
+ }
+
+ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+ u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev);
++ struct ieee80211_bss_conf *bss_conf;
+ struct ieee80211_hdr_3addr *hdr;
+ struct rtw89_sa_query *sa_query;
+ struct sk_buff *skb;
+@@ -2258,10 +2272,17 @@ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev,
+ hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
+ IEEE80211_STYPE_ACTION |
+ IEEE80211_FCTL_PROTECTED);
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
+ ether_addr_copy(hdr->addr1, bss_conf->bssid);
+- ether_addr_copy(hdr->addr2, vif->addr);
++ ether_addr_copy(hdr->addr2, bss_conf->addr);
+ ether_addr_copy(hdr->addr3, bss_conf->bssid);
+
++ rcu_read_unlock();
++
+ skb_put_zero(skb, sec_hdr_len);
+
+ sa_query = skb_put_zero(skb, sizeof(*sa_query));
+@@ -2272,8 +2293,9 @@ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev,
+ }
+
+ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct ieee80211_hdr_3addr *hdr;
+@@ -2295,9 +2317,9 @@ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev,
+ fc = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_FCTL_TODS);
+
+ hdr->frame_control = fc;
+- ether_addr_copy(hdr->addr1, rtwvif->bssid);
+- ether_addr_copy(hdr->addr2, rtwvif->mac_addr);
+- ether_addr_copy(hdr->addr3, rtwvif->bssid);
++ ether_addr_copy(hdr->addr1, rtwvif_link->bssid);
++ ether_addr_copy(hdr->addr2, rtwvif_link->mac_addr);
++ ether_addr_copy(hdr->addr3, rtwvif_link->bssid);
+
+ skb_put_zero(skb, sec_hdr_len);
+
+@@ -2312,18 +2334,18 @@ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev,
+ arp_hdr->ar_pln = 4;
+ arp_hdr->ar_op = htons(ARPOP_REPLY);
+
+- ether_addr_copy(arp_skb->sender_hw, rtwvif->mac_addr);
++ ether_addr_copy(arp_skb->sender_hw, rtwvif_link->mac_addr);
+ arp_skb->sender_ip = rtwvif->ip_addr;
+
+ return skb;
+ }
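
rtw89_arp_response_get() now sources the 802.11 addresses from the per-link bssid/mac_addr while the IP address stays on the aggregate vif, since an IP belongs to the interface rather than to any one link. For reference, a standalone sketch of the ARP-reply payload such an offload template carries (field layout per RFC 826; the struct name is invented):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct arp_reply {
        uint16_t ar_hrd, ar_pro;
        uint8_t  ar_hln, ar_pln;
        uint16_t ar_op;
        uint8_t  sender_hw[6];
        uint32_t sender_ip;
        uint8_t  target_hw[6];
        uint32_t target_ip;
} __attribute__((packed));

int main(void)
{
        const uint8_t own_mac[6] = { 0x02, 0x11, 0x22, 0x33, 0x44, 0x55 };
        struct arp_reply arp = {
                .ar_hrd = htons(1),        /* Ethernet */
                .ar_pro = htons(0x0800),   /* IPv4 */
                .ar_hln = 6,
                .ar_pln = 4,
                .ar_op  = htons(2),        /* ARPOP_REPLY */
        };

        memcpy(arp.sender_hw, own_mac, 6);
        arp.sender_ip = htonl(0xc0a80102); /* 192.168.1.2 */

        printf("ARP reply template: %zu bytes, op=%u\n",
               sizeof(arp), ntohs(arp.ar_op));
        return 0;
}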
+
+ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ enum rtw89_fw_pkt_ofld_type type,
+ u8 *id)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_pktofld_info *info;
+ struct sk_buff *skb;
+ int ret;
+@@ -2346,13 +2368,13 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+ skb = ieee80211_nullfunc_get(rtwdev->hw, vif, -1, true);
+ break;
+ case RTW89_PKT_OFLD_TYPE_EAPOL_KEY:
+- skb = rtw89_eapol_get(rtwdev, rtwvif);
++ skb = rtw89_eapol_get(rtwdev, rtwvif_link);
+ break;
+ case RTW89_PKT_OFLD_TYPE_SA_QUERY:
+- skb = rtw89_sa_query_get(rtwdev, rtwvif);
++ skb = rtw89_sa_query_get(rtwdev, rtwvif_link);
+ break;
+ case RTW89_PKT_OFLD_TYPE_ARP_RSP:
+- skb = rtw89_arp_response_get(rtwdev, rtwvif);
++ skb = rtw89_arp_response_get(rtwdev, rtwvif_link);
+ break;
+ default:
+ goto err;
+@@ -2367,7 +2389,7 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+ if (ret)
+ goto err;
+
+- list_add_tail(&info->list, &rtwvif->general_pkt_list);
++ list_add_tail(&info->list, &rtwvif_link->general_pkt_list);
+ *id = info->id;
+ return 0;
+
+@@ -2377,9 +2399,10 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+ }
+
+ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool notify_fw)
++ struct rtw89_vif_link *rtwvif_link,
++ bool notify_fw)
+ {
+- struct list_head *pkt_list = &rtwvif->general_pkt_list;
++ struct list_head *pkt_list = &rtwvif_link->general_pkt_list;
+ struct rtw89_pktofld_info *info, *tmp;
+
+ list_for_each_entry_safe(info, tmp, pkt_list, list) {
+@@ -2394,16 +2417,20 @@ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+
+ void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, notify_fw);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif_link,
++ notify_fw);
+ }
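
The release path walks the packet list with list_for_each_entry_safe() because each iteration deletes the node it stands on. A plain-C equivalent over a singly linked list, showing why the extra "tmp" cursor is needed:

#include <stdio.h>
#include <stdlib.h>

struct pkt_info {
        int id;
        struct pkt_info *next;
};

/* Cache ->next before freeing the current node - the job that
 * list_for_each_entry_safe()'s second cursor performs. */
static void release_pkt_list(struct pkt_info **head)
{
        struct pkt_info *info = *head, *tmp;

        while (info) {
                tmp = info->next;       /* save successor first */
                printf("releasing pkt id %d\n", info->id);
                free(info);             /* now freeing info is safe */
                info = tmp;
        }
        *head = NULL;
}

int main(void)
{
        struct pkt_info *head = NULL;

        for (int i = 0; i < 3; i++) {
                struct pkt_info *p = malloc(sizeof(*p));
                if (!p)
                        break;
                p->id = i;
                p->next = head;
                head = p;
        }

        release_pkt_list(&head);
        return 0;
}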
+
+ #define H2C_GENERAL_PKT_LEN 6
+ #define H2C_GENERAL_PKT_ID_UND 0xff
+ int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u8 macid)
++ struct rtw89_vif_link *rtwvif_link, u8 macid)
+ {
+ u8 pkt_id_ps_poll = H2C_GENERAL_PKT_ID_UND;
+ u8 pkt_id_null = H2C_GENERAL_PKT_ID_UND;
+@@ -2411,11 +2438,11 @@ int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev,
+ struct sk_buff *skb;
+ int ret;
+
+- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_PS_POLL, &pkt_id_ps_poll);
+- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_NULL_DATA, &pkt_id_null);
+- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_QOS_NULL, &pkt_id_qos_null);
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_GENERAL_PKT_LEN);
+@@ -2494,10 +2521,10 @@ int rtw89_fw_h2c_lps_parm(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_h2c_lps_ch_info *h2c;
+ u32 len = sizeof(*h2c);
+@@ -2546,13 +2573,14 @@ int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ }
+
+ #define H2C_P2P_ACT_LEN 20
+-int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf,
+ struct ieee80211_p2p_noa_desc *desc,
+ u8 act, u8 noa_id)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- bool p2p_type_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
+- u8 ctwindow_oppps = vif->bss_conf.p2p_noa_attr.oppps_ctwindow;
++ bool p2p_type_gc = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
++ u8 ctwindow_oppps = bss_conf->p2p_noa_attr.oppps_ctwindow;
+ struct sk_buff *skb;
+ u8 *cmd;
+ int ret;
+@@ -2565,7 +2593,7 @@ int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ skb_put(skb, H2C_P2P_ACT_LEN);
+ cmd = skb->data;
+
+- RTW89_SET_FWCMD_P2P_MACID(cmd, rtwvif->mac_id);
++ RTW89_SET_FWCMD_P2P_MACID(cmd, rtwvif_link->mac_id);
+ RTW89_SET_FWCMD_P2P_P2PID(cmd, 0);
+ RTW89_SET_FWCMD_P2P_NOAID(cmd, noa_id);
+ RTW89_SET_FWCMD_P2P_ACT(cmd, act);
+@@ -2622,11 +2650,11 @@ static void __rtw89_fw_h2c_set_tx_path(struct rtw89_dev *rtwdev,
+
+ #define H2C_CMC_TBL_LEN 68
+ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- u8 macid = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 macid = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct sk_buff *skb;
+ int ret;
+
+@@ -2648,7 +2676,7 @@ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+ }
+ SET_CMC_TBL_DOPPLER_CTRL(skb->data, 0);
+ SET_CMC_TBL_TXPWR_TOLERENCE(skb->data, 0);
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ SET_CMC_TBL_DATA_DCM(skb->data, 0);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -2671,10 +2699,10 @@ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_default_cmac_tbl);
+
+ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct rtw89_h2c_cctlinfo_ud_g7 *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -2755,24 +2783,25 @@ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_default_cmac_tbl_g7);
+
+ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, u8 *pads)
++ struct ieee80211_link_sta *link_sta,
++ u8 *pads)
+ {
+ bool ppe_th;
+ u8 ppe16, ppe8;
+- u8 nss = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1;
+- u8 ppe_thres_hdr = sta->deflink.he_cap.ppe_thres[0];
++ u8 nss = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1;
++ u8 ppe_thres_hdr = link_sta->he_cap.ppe_thres[0];
+ u8 ru_bitmap;
+ u8 n, idx, sh;
+ u16 ppe;
+ int i;
+
+ ppe_th = FIELD_GET(IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT,
+- sta->deflink.he_cap.he_cap_elem.phy_cap_info[6]);
++ link_sta->he_cap.he_cap_elem.phy_cap_info[6]);
+ if (!ppe_th) {
+ u8 pad;
+
+ pad = FIELD_GET(IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK,
+- sta->deflink.he_cap.he_cap_elem.phy_cap_info[9]);
++ link_sta->he_cap.he_cap_elem.phy_cap_info[9]);
+
+ for (i = 0; i < RTW89_PPE_BW_NUM; i++)
+ pads[i] = pad;
+@@ -2794,7 +2823,7 @@ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev,
+ sh = n & 7;
+ n += IEEE80211_PPE_THRES_INFO_PPET_SIZE * 2;
+
+- ppe = le16_to_cpu(*((__le16 *)&sta->deflink.he_cap.ppe_thres[idx]));
++ ppe = le16_to_cpu(*((__le16 *)&link_sta->he_cap.ppe_thres[idx]));
+ ppe16 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+ sh += IEEE80211_PPE_THRES_INFO_PPET_SIZE;
+ ppe8 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+@@ -2809,23 +2838,35 @@ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev,
+ }
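
The PPE-threshold walk above turns a running bit offset n into a byte index (n >> 3) and an in-byte shift (n & 7), loads 16 bits so the field may straddle a byte boundary, and masks the result. The same arithmetic in isolation, with the mask width chosen to match the 3-bit NSS fields:

#include <stdint.h>
#include <stdio.h>

/* Read a 3-bit field starting at bit offset n of a little-endian
 * byte stream, using the parser's idx/sh decomposition. */
static uint8_t get_3bit_field(const uint8_t *buf, unsigned int n)
{
        unsigned int idx = n >> 3;   /* byte containing the first bit */
        unsigned int sh = n & 7;     /* bit position inside that byte */
        uint16_t word = buf[idx] | ((uint16_t)buf[idx + 1] << 8);

        return (word >> sh) & 0x7;
}

int main(void)
{
        /* Value 0b101 planted at bit offset 7, crossing a byte edge. */
        const uint8_t buf[] = { 0x80, 0x02, 0x00 };

        printf("field = %u\n", get_3bit_field(buf, 7)); /* prints 5 */
        return 0;
}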
+
+ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_link_sta *link_sta;
+ struct sk_buff *skb;
+ u8 pads[RTW89_PPE_BW_NUM];
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ u16 lowest_rate;
+ int ret;
+
+ memset(pads, 0, sizeof(pads));
+- if (sta && sta->deflink.he_cap.has_he)
+- __get_sta_he_pkt_padding(rtwdev, sta, pads);
++
++ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN);
++ if (!skb) {
++ rtw89_err(rtwdev, "failed to alloc skb for fw dl\n");
++ return -ENOMEM;
++ }
++
++ rcu_read_lock();
++
++ if (rtwsta_link)
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (rtwsta_link && link_sta->he_cap.has_he)
++ __get_sta_he_pkt_padding(rtwdev, link_sta, pads);
+
+ if (vif->p2p)
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+@@ -2834,11 +2875,6 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN);
+- if (!skb) {
+- rtw89_err(rtwdev, "failed to alloc skb for fw dl\n");
+- return -ENOMEM;
+- }
+ skb_put(skb, H2C_CMC_TBL_LEN);
+ SET_CTRL_INFO_MACID(skb->data, mac_id);
+ SET_CTRL_INFO_OPERATION(skb->data, 1);
+@@ -2851,7 +2887,7 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ SET_CMC_TBL_ULDL(skb->data, 1);
+ else
+ SET_CMC_TBL_ULDL(skb->data, 0);
+- SET_CMC_TBL_MULTI_PORT_ID(skb->data, rtwvif->port);
++ SET_CMC_TBL_MULTI_PORT_ID(skb->data, rtwvif_link->port);
+ if (chip->h2c_cctl_func_id == H2C_FUNC_MAC_CCTLINFO_UD_V1) {
+ SET_CMC_TBL_NOMINAL_PKT_PADDING_V1(skb->data, pads[RTW89_CHANNEL_WIDTH_20]);
+ SET_CMC_TBL_NOMINAL_PKT_PADDING40_V1(skb->data, pads[RTW89_CHANNEL_WIDTH_40]);
+@@ -2863,12 +2899,14 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ SET_CMC_TBL_NOMINAL_PKT_PADDING80(skb->data, pads[RTW89_CHANNEL_WIDTH_80]);
+ SET_CMC_TBL_NOMINAL_PKT_PADDING160(skb->data, pads[RTW89_CHANNEL_WIDTH_160]);
+ }
+- if (sta)
++ if (rtwsta_link)
+ SET_CMC_TBL_BSR_QUEUE_SIZE_FORMAT(skb->data,
+- sta->deflink.he_cap.has_he);
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ link_sta->he_cap.has_he);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ SET_CMC_TBL_DATA_DCM(skb->data, 0);
+
++ rcu_read_unlock();
++
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_FR_EXCHG,
+ chip->h2c_cctl_func_id, 0, 1,
+@@ -2889,9 +2927,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_assoc_cmac_tbl);
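
Note the reordering inside rtw89_fw_h2c_assoc_cmac_tbl(): the skb allocation moves above rcu_read_lock(), since a GFP_KERNEL allocation may sleep and sleeping is not allowed inside an RCU read-side section. The general shape, sketched with a mutex standing in for the read lock:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t conf_lock = PTHREAD_MUTEX_INITIALIZER;
static char shared_conf[16] = "link0";

/* Do the fallible, potentially blocking work before the critical
 * section; hold the lock only for the copy itself. */
static char *snapshot_conf(void)
{
        char *buf = malloc(sizeof(shared_conf)); /* outside the lock */

        if (!buf)
                return NULL;

        pthread_mutex_lock(&conf_lock);
        memcpy(buf, shared_conf, sizeof(shared_conf));
        pthread_mutex_unlock(&conf_lock);

        return buf;
}

int main(void)
{
        char *snap = snapshot_conf();

        if (snap) {
                printf("conf snapshot: %s\n", snap);
                free(snap);
        }
        return 0;
}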
+
+ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, u8 *pads)
++ struct ieee80211_link_sta *link_sta,
++ u8 *pads)
+ {
+- u8 nss = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1;
++ u8 nss = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1;
+ u16 ppe_thres_hdr;
+ u8 ppe16, ppe8;
+ u8 n, idx, sh;
+@@ -2900,12 +2939,12 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ u16 ppe;
+ int i;
+
+- ppe_th = !!u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5],
++ ppe_th = !!u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5],
+ IEEE80211_EHT_PHY_CAP5_PPE_THRESHOLD_PRESENT);
+ if (!ppe_th) {
+ u8 pad;
+
+- pad = u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5],
++ pad = u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5],
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK);
+
+ for (i = 0; i < RTW89_PPE_BW_NUM; i++)
+@@ -2914,7 +2953,7 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ return;
+ }
+
+- ppe_thres_hdr = get_unaligned_le16(sta->deflink.eht_cap.eht_ppe_thres);
++ ppe_thres_hdr = get_unaligned_le16(link_sta->eht_cap.eht_ppe_thres);
+ ru_bitmap = u16_get_bits(ppe_thres_hdr,
+ IEEE80211_EHT_PPE_THRES_RU_INDEX_BITMASK_MASK);
+ n = hweight8(ru_bitmap);
+@@ -2931,7 +2970,7 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ sh = n & 7;
+ n += IEEE80211_EHT_PPE_THRES_INFO_PPET_SIZE * 2;
+
+- ppe = get_unaligned_le16(sta->deflink.eht_cap.eht_ppe_thres + idx);
++ ppe = get_unaligned_le16(link_sta->eht_cap.eht_ppe_thres + idx);
+ ppe16 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+ sh += IEEE80211_EHT_PPE_THRES_INFO_PPET_SIZE;
+ ppe8 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+@@ -2946,14 +2985,15 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct rtw89_h2c_cctlinfo_ud_g7 *h2c;
++ struct ieee80211_bss_conf *bss_conf;
++ struct ieee80211_link_sta *link_sta;
+ u8 pads[RTW89_PPE_BW_NUM];
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -2961,11 +3001,24 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ int ret;
+
+ memset(pads, 0, sizeof(pads));
+- if (sta) {
+- if (sta->deflink.eht_cap.has_eht)
+- __get_sta_eht_pkt_padding(rtwdev, sta, pads);
+- else if (sta->deflink.he_cap.has_he)
+- __get_sta_he_pkt_padding(rtwdev, sta, pads);
++
++ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
++ if (!skb) {
++ rtw89_err(rtwdev, "failed to alloc skb for cmac g7\n");
++ return -ENOMEM;
++ }
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
++ if (rtwsta_link) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->eht_cap.has_eht)
++ __get_sta_eht_pkt_padding(rtwdev, link_sta, pads);
++ else if (link_sta->he_cap.has_he)
++ __get_sta_he_pkt_padding(rtwdev, link_sta, pads);
+ }
+
+ if (vif->p2p)
+@@ -2975,11 +3028,6 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+- if (!skb) {
+- rtw89_err(rtwdev, "failed to alloc skb for cmac g7\n");
+- return -ENOMEM;
+- }
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_cctlinfo_ud_g7 *)skb->data;
+
+@@ -3000,16 +3048,16 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ h2c->w3 = le32_encode_bits(0, CCTLINFO_G7_W3_RTS_TXCNT_LMT_SEL);
+ h2c->m3 = cpu_to_le32(CCTLINFO_G7_W3_RTS_TXCNT_LMT_SEL);
+
+- h2c->w4 = le32_encode_bits(rtwvif->port, CCTLINFO_G7_W4_MULTI_PORT_ID);
++ h2c->w4 = le32_encode_bits(rtwvif_link->port, CCTLINFO_G7_W4_MULTI_PORT_ID);
+ h2c->m4 = cpu_to_le32(CCTLINFO_G7_W4_MULTI_PORT_ID);
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) {
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) {
+ h2c->w4 |= le32_encode_bits(0, CCTLINFO_G7_W4_DATA_DCM);
+ h2c->m4 |= cpu_to_le32(CCTLINFO_G7_W4_DATA_DCM);
+ }
+
+- if (vif->bss_conf.eht_support) {
+- u16 punct = vif->bss_conf.chanreq.oper.punctured;
++ if (bss_conf->eht_support) {
++ u16 punct = bss_conf->chanreq.oper.punctured;
+
+ h2c->w4 |= le32_encode_bits(~punct,
+ CCTLINFO_G7_W4_ACT_SUBCH_CBW);
+@@ -3036,12 +3084,14 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ CCTLINFO_G7_W6_ULDL);
+ h2c->m6 = cpu_to_le32(CCTLINFO_G7_W6_ULDL);
+
+- if (sta) {
+- h2c->w8 = le32_encode_bits(sta->deflink.he_cap.has_he,
++ if (rtwsta_link) {
++ h2c->w8 = le32_encode_bits(link_sta->he_cap.has_he,
+ CCTLINFO_G7_W8_BSR_QUEUE_SIZE_FORMAT);
+ h2c->m8 = cpu_to_le32(CCTLINFO_G7_W8_BSR_QUEUE_SIZE_FORMAT);
+ }
+
++ rcu_read_unlock();
++
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_FR_EXCHG,
+ H2C_FUNC_MAC_CCTLINFO_UD_G7, 0, 1,
+@@ -3062,10 +3112,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_assoc_cmac_tbl_g7);
+
+ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = rtwsta_link->rtwsta;
+ struct rtw89_h2c_cctlinfo_ud_g7 *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -3102,7 +3152,7 @@ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ else if (agg_num > 0x200 && agg_num <= 0x400)
+ ba_bmap = 5;
+
+- h2c->c0 = le32_encode_bits(rtwsta->mac_id, CCTLINFO_G7_C0_MACID) |
++ h2c->c0 = le32_encode_bits(rtwsta_link->mac_id, CCTLINFO_G7_C0_MACID) |
+ le32_encode_bits(1, CCTLINFO_G7_C0_OP);
+
+ h2c->w3 = le32_encode_bits(ba_bmap, CCTLINFO_G7_W3_BA_BMAP);
+@@ -3128,7 +3178,7 @@ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_ampdu_cmac_tbl_g7);
+
+ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct sk_buff *skb;
+@@ -3140,15 +3190,15 @@ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_CMC_TBL_LEN);
+- SET_CTRL_INFO_MACID(skb->data, rtwsta->mac_id);
++ SET_CTRL_INFO_MACID(skb->data, rtwsta_link->mac_id);
+ SET_CTRL_INFO_OPERATION(skb->data, 1);
+- if (rtwsta->cctl_tx_time) {
++ if (rtwsta_link->cctl_tx_time) {
+ SET_CMC_TBL_AMPDU_TIME_SEL(skb->data, 1);
+- SET_CMC_TBL_AMPDU_MAX_TIME(skb->data, rtwsta->ampdu_max_time);
++ SET_CMC_TBL_AMPDU_MAX_TIME(skb->data, rtwsta_link->ampdu_max_time);
+ }
+- if (rtwsta->cctl_tx_retry_limit) {
++ if (rtwsta_link->cctl_tx_retry_limit) {
+ SET_CMC_TBL_DATA_TXCNT_LMT_SEL(skb->data, 1);
+- SET_CMC_TBL_DATA_TX_CNT_LMT(skb->data, rtwsta->data_tx_cnt_lmt);
++ SET_CMC_TBL_DATA_TX_CNT_LMT(skb->data, rtwsta_link->data_tx_cnt_lmt);
+ }
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -3170,7 +3220,7 @@ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct sk_buff *skb;
+@@ -3185,7 +3235,7 @@ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_CMC_TBL_LEN);
+- SET_CTRL_INFO_MACID(skb->data, rtwsta->mac_id);
++ SET_CTRL_INFO_MACID(skb->data, rtwsta_link->mac_id);
+ SET_CTRL_INFO_OPERATION(skb->data, 1);
+
+ __rtw89_fw_h2c_set_tx_path(rtwdev, skb);
+@@ -3209,11 +3259,11 @@ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_h2c_bcn_upd *h2c;
+ struct sk_buff *skb_beacon;
+ struct ieee80211_hdr *hdr;
+@@ -3240,7 +3290,7 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+
+- noa_len = rtw89_p2p_noa_fetch(rtwvif, &noa_data);
++ noa_len = rtw89_p2p_noa_fetch(rtwvif_link, &noa_data);
+ if (noa_len &&
+ (noa_len <= skb_tailroom(skb_beacon) ||
+ pskb_expand_head(skb_beacon, 0, noa_len, GFP_KERNEL) == 0)) {
+@@ -3260,11 +3310,11 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_bcn_upd *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->port, RTW89_H2C_BCN_UPD_W0_PORT) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->port, RTW89_H2C_BCN_UPD_W0_PORT) |
+ le32_encode_bits(0, RTW89_H2C_BCN_UPD_W0_MBSSID) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BCN_UPD_W0_BAND) |
++ le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_BCN_UPD_W0_BAND) |
+ le32_encode_bits(tim_offset | BIT(7), RTW89_H2C_BCN_UPD_W0_GRP_IE_OFST);
+- h2c->w1 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCN_UPD_W1_MACID) |
++ h2c->w1 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCN_UPD_W1_MACID) |
+ le32_encode_bits(RTW89_MGMT_HW_SSN_SEL, RTW89_H2C_BCN_UPD_W1_SSN_SEL) |
+ le32_encode_bits(RTW89_MGMT_HW_SEQ_MODE, RTW89_H2C_BCN_UPD_W1_SSN_MODE) |
+ le32_encode_bits(beacon_rate, RTW89_H2C_BCN_UPD_W1_RATE);
+@@ -3289,10 +3339,10 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_update_beacon);
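
The beacon-update words are assembled with le32_encode_bits(), which shifts each value into the position described by its contiguous field mask and ORs the results together. A userspace sketch of that primitive (the real helper also converts to little endian, omitted here; the mask names are illustrative):

#include <stdint.h>
#include <stdio.h>

#define W0_PORT_MASK 0x000000ffU
#define W0_BAND_MASK 0x00000300U

/* Shift val into the position given by its mask - the core of
 * le32_encode_bits() / FIELD_PREP(). */
static uint32_t encode_bits(uint32_t val, uint32_t mask)
{
        unsigned int shift = 0;

        while (!((mask >> shift) & 1))  /* find lowest set bit of mask */
                shift++;

        return (val << shift) & mask;
}

int main(void)
{
        uint32_t w0 = encode_bits(5, W0_PORT_MASK) |
                      encode_bits(1, W0_BAND_MASK);

        printf("w0 = 0x%08x\n", w0);    /* 0x00000105 */
        return 0;
}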
+
+ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_h2c_bcn_upd_be *h2c;
+ struct sk_buff *skb_beacon;
+ struct ieee80211_hdr *hdr;
+@@ -3319,7 +3369,7 @@ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+
+- noa_len = rtw89_p2p_noa_fetch(rtwvif, &noa_data);
++ noa_len = rtw89_p2p_noa_fetch(rtwvif_link, &noa_data);
+ if (noa_len &&
+ (noa_len <= skb_tailroom(skb_beacon) ||
+ pskb_expand_head(skb_beacon, 0, noa_len, GFP_KERNEL) == 0)) {
+@@ -3339,11 +3389,11 @@ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_bcn_upd_be *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->port, RTW89_H2C_BCN_UPD_BE_W0_PORT) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->port, RTW89_H2C_BCN_UPD_BE_W0_PORT) |
+ le32_encode_bits(0, RTW89_H2C_BCN_UPD_BE_W0_MBSSID) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BCN_UPD_BE_W0_BAND) |
++ le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_BCN_UPD_BE_W0_BAND) |
+ le32_encode_bits(tim_offset | BIT(7), RTW89_H2C_BCN_UPD_BE_W0_GRP_IE_OFST);
+- h2c->w1 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCN_UPD_BE_W1_MACID) |
++ h2c->w1 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCN_UPD_BE_W1_MACID) |
+ le32_encode_bits(RTW89_MGMT_HW_SSN_SEL, RTW89_H2C_BCN_UPD_BE_W1_SSN_SEL) |
+ le32_encode_bits(RTW89_MGMT_HW_SEQ_MODE, RTW89_H2C_BCN_UPD_BE_W1_SSN_MODE) |
+ le32_encode_bits(beacon_rate, RTW89_H2C_BCN_UPD_BE_W1_RATE);
+@@ -3373,22 +3423,22 @@ EXPORT_SYMBOL(rtw89_fw_h2c_update_beacon_be);
+
+ #define H2C_ROLE_MAINTAIN_LEN 4
+ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ enum rtw89_upd_mode upd_mode)
+ {
+ struct sk_buff *skb;
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ u8 self_role;
+ int ret;
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) {
+- if (rtwsta)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) {
++ if (rtwsta_link)
+ self_role = RTW89_SELF_ROLE_AP_CLIENT;
+ else
+- self_role = rtwvif->self_role;
++ self_role = rtwvif_link->self_role;
+ } else {
+- self_role = rtwvif->self_role;
++ self_role = rtwvif_link->self_role;
+ }
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_ROLE_MAINTAIN_LEN);
+@@ -3400,7 +3450,7 @@ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+ SET_FWROLE_MAINTAIN_MACID(skb->data, mac_id);
+ SET_FWROLE_MAINTAIN_SELF_ROLE(skb->data, self_role);
+ SET_FWROLE_MAINTAIN_UPD_MODE(skb->data, upd_mode);
+- SET_FWROLE_MAINTAIN_WIFI_ROLE(skb->data, rtwvif->wifi_role);
++ SET_FWROLE_MAINTAIN_WIFI_ROLE(skb->data, rtwvif_link->wifi_role);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_MEDIA_RPT,
+@@ -3421,39 +3471,53 @@ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+ }
+
+ static enum rtw89_fw_sta_type
+-rtw89_fw_get_sta_type(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++rtw89_fw_get_sta_type(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct ieee80211_sta *sta = rtwsta_to_sta_safe(rtwsta);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
++ struct ieee80211_link_sta *link_sta;
++ enum rtw89_fw_sta_type type;
++
++ rcu_read_lock();
+
+- if (!sta)
++ if (!rtwsta_link)
+ goto by_vif;
+
+- if (sta->deflink.eht_cap.has_eht)
+- return RTW89_FW_BE_STA;
+- else if (sta->deflink.he_cap.has_he)
+- return RTW89_FW_AX_STA;
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->eht_cap.has_eht)
++ type = RTW89_FW_BE_STA;
++ else if (link_sta->he_cap.has_he)
++ type = RTW89_FW_AX_STA;
+ else
+- return RTW89_FW_N_AC_STA;
++ type = RTW89_FW_N_AC_STA;
++
++ goto out;
+
+ by_vif:
+- if (vif->bss_conf.eht_support)
+- return RTW89_FW_BE_STA;
+- else if (vif->bss_conf.he_support)
+- return RTW89_FW_AX_STA;
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
++ if (bss_conf->eht_support)
++ type = RTW89_FW_BE_STA;
++ else if (bss_conf->he_support)
++ type = RTW89_FW_AX_STA;
+ else
+- return RTW89_FW_N_AC_STA;
++ type = RTW89_FW_N_AC_STA;
++
++out:
++ rcu_read_unlock();
++
++ return type;
+ }
+
+-int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, bool dis_conn)
++int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, bool dis_conn)
+ {
+ struct sk_buff *skb;
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
+- u8 self_role = rtwvif->self_role;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
++ u8 self_role = rtwvif_link->self_role;
+ enum rtw89_fw_sta_type sta_type;
+- u8 net_type = rtwvif->net_type;
++ u8 net_type = rtwvif_link->net_type;
+ struct rtw89_h2c_join_v1 *h2c_v1;
+ struct rtw89_h2c_join *h2c;
+ u32 len = sizeof(*h2c);
+@@ -3465,7 +3529,7 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ format_v1 = true;
+ }
+
+- if (net_type == RTW89_NET_TYPE_AP_MODE && rtwsta) {
++ if (net_type == RTW89_NET_TYPE_AP_MODE && rtwsta_link) {
+ self_role = RTW89_SELF_ROLE_AP_CLIENT;
+ net_type = dis_conn ? RTW89_NET_TYPE_NO_LINK : net_type;
+ }
+@@ -3480,16 +3544,17 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c->w0 = le32_encode_bits(mac_id, RTW89_H2C_JOININFO_W0_MACID) |
+ le32_encode_bits(dis_conn, RTW89_H2C_JOININFO_W0_OP) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_JOININFO_W0_BAND) |
+- le32_encode_bits(rtwvif->wmm, RTW89_H2C_JOININFO_W0_WMM) |
+- le32_encode_bits(rtwvif->trigger, RTW89_H2C_JOININFO_W0_TGR) |
++ le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_JOININFO_W0_BAND) |
++ le32_encode_bits(rtwvif_link->wmm, RTW89_H2C_JOININFO_W0_WMM) |
++ le32_encode_bits(rtwvif_link->trigger, RTW89_H2C_JOININFO_W0_TGR) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_ISHESTA) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_DLBW) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_TF_MAC_PAD) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_DL_T_PE) |
+- le32_encode_bits(rtwvif->port, RTW89_H2C_JOININFO_W0_PORT_ID) |
++ le32_encode_bits(rtwvif_link->port, RTW89_H2C_JOININFO_W0_PORT_ID) |
+ le32_encode_bits(net_type, RTW89_H2C_JOININFO_W0_NET_TYPE) |
+- le32_encode_bits(rtwvif->wifi_role, RTW89_H2C_JOININFO_W0_WIFI_ROLE) |
++ le32_encode_bits(rtwvif_link->wifi_role,
++ RTW89_H2C_JOININFO_W0_WIFI_ROLE) |
+ le32_encode_bits(self_role, RTW89_H2C_JOININFO_W0_SELF_ROLE);
+
+ if (!format_v1)
+@@ -3497,7 +3562,7 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c_v1 = (struct rtw89_h2c_join_v1 *)skb->data;
+
+- sta_type = rtw89_fw_get_sta_type(rtwdev, rtwvif, rtwsta);
++ sta_type = rtw89_fw_get_sta_type(rtwdev, rtwvif_link, rtwsta_link);
+
+ h2c_v1->w1 = le32_encode_bits(sta_type, RTW89_H2C_JOININFO_W1_STA_TYPE);
+ h2c_v1->w2 = 0;
+@@ -3618,7 +3683,7 @@ int rtw89_fw_h2c_macid_pause(struct rtw89_dev *rtwdev, u8 sh, u8 grp,
+ }
+
+ #define H2C_EDCA_LEN 12
+-int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u8 ac, u32 val)
+ {
+ struct sk_buff *skb;
+@@ -3631,7 +3696,7 @@ int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ }
+ skb_put(skb, H2C_EDCA_LEN);
+ RTW89_SET_EDCA_SEL(skb->data, 0);
+- RTW89_SET_EDCA_BAND(skb->data, rtwvif->mac_idx);
++ RTW89_SET_EDCA_BAND(skb->data, rtwvif_link->mac_idx);
+ RTW89_SET_EDCA_WMM(skb->data, 0);
+ RTW89_SET_EDCA_AC(skb->data, ac);
+ RTW89_SET_EDCA_PARAM(skb->data, val);
+@@ -3655,7 +3720,8 @@ int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ }
+
+ #define H2C_TSF32_TOGL_LEN 4
+-int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool en)
+ {
+ struct sk_buff *skb;
+@@ -3671,9 +3737,9 @@ int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ skb_put(skb, H2C_TSF32_TOGL_LEN);
+ cmd = skb->data;
+
+- RTW89_SET_FWCMD_TSF32_TOGL_BAND(cmd, rtwvif->mac_idx);
++ RTW89_SET_FWCMD_TSF32_TOGL_BAND(cmd, rtwvif_link->mac_idx);
+ RTW89_SET_FWCMD_TSF32_TOGL_EN(cmd, en);
+- RTW89_SET_FWCMD_TSF32_TOGL_PORT(cmd, rtwvif->port);
++ RTW89_SET_FWCMD_TSF32_TOGL_PORT(cmd, rtwvif_link->port);
+ RTW89_SET_FWCMD_TSF32_TOGL_EARLY(cmd, early_us);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -3727,11 +3793,10 @@ int rtw89_fw_h2c_set_ofld_cfg(struct rtw89_dev *rtwdev)
+ }
+
+ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool connect)
+ {
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+- struct ieee80211_bss_conf *bss_conf = vif ? &vif->bss_conf : NULL;
++ struct ieee80211_bss_conf *bss_conf;
+ s32 thold = RTW89_DEFAULT_CQM_THOLD;
+ u32 hyst = RTW89_DEFAULT_CQM_HYST;
+ struct rtw89_h2c_bcnfltr *h2c;
+@@ -3742,9 +3807,20 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+ if (!RTW89_CHK_FW_FEATURE(BEACON_FILTER, &rtwdev->fw))
+ return -EINVAL;
+
+- if (!rtwvif || !bss_conf || rtwvif->net_type != RTW89_NET_TYPE_INFRA)
++ if (!rtwvif_link || rtwvif_link->net_type != RTW89_NET_TYPE_INFRA)
+ return -EINVAL;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++
++ if (bss_conf->cqm_rssi_hyst)
++ hyst = bss_conf->cqm_rssi_hyst;
++ if (bss_conf->cqm_rssi_thold)
++ thold = bss_conf->cqm_rssi_thold;
++
++ rcu_read_unlock();
++
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+ if (!skb) {
+ rtw89_err(rtwdev, "failed to alloc skb for h2c bcn filter\n");
+@@ -3754,11 +3830,6 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_bcnfltr *)skb->data;
+
+- if (bss_conf->cqm_rssi_hyst)
+- hyst = bss_conf->cqm_rssi_hyst;
+- if (bss_conf->cqm_rssi_thold)
+- thold = bss_conf->cqm_rssi_thold;
+-
+ h2c->w0 = le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_RSSI) |
+ le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_BCN) |
+ le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_EN) |
+@@ -3768,7 +3839,7 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+ le32_encode_bits(hyst, RTW89_H2C_BCNFLTR_W0_RSSI_HYST) |
+ le32_encode_bits(thold + MAX_RSSI,
+ RTW89_H2C_BCNFLTR_W0_RSSI_THRESHOLD) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCNFLTR_W0_MAC_ID);
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCNFLTR_W0_MAC_ID);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_FW_OFLD,
+@@ -3833,15 +3904,16 @@ int rtw89_fw_h2c_rssi_offload(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_traffic_stats *stats = &rtwvif->stats;
+ struct rtw89_h2c_ofld *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ int ret;
+
+- if (rtwvif->net_type != RTW89_NET_TYPE_INFRA)
++ if (rtwvif_link->net_type != RTW89_NET_TYPE_INFRA)
+ return -EINVAL;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+@@ -3853,7 +3925,7 @@ int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_ofld *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_OFLD_W0_MAC_ID) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_OFLD_W0_MAC_ID) |
+ le32_encode_bits(stats->tx_throughput, RTW89_H2C_OFLD_W0_TX_TP) |
+ le32_encode_bits(stats->rx_throughput, RTW89_H2C_OFLD_W0_RX_TP);
+
+@@ -4858,7 +4930,7 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num,
+ #define RTW89_SCAN_DELAY_TSF_UNIT 104800
+ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *option,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool wowlan)
+ {
+ struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait;
+@@ -4880,7 +4952,7 @@ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ h2c = (struct rtw89_h2c_scanofld *)skb->data;
+
+ if (option->delay) {
+- ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf);
++ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "NLO failed to get port tsf: %d\n", ret);
+ scan_mode = RTW89_SCAN_IMMEDIATE;
+@@ -4890,8 +4962,8 @@ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ }
+ }
+
+- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_SCANOFLD_W0_MACID) |
+- le32_encode_bits(rtwvif->port, RTW89_H2C_SCANOFLD_W0_PORT_ID) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_W0_MACID) |
++ le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_W0_PORT_ID) |
+ le32_encode_bits(RTW89_PHY_0, RTW89_H2C_SCANOFLD_W0_BAND) |
+ le32_encode_bits(option->enable, RTW89_H2C_SCANOFLD_W0_OPERATION);
+
+@@ -4963,9 +5035,10 @@ static void rtw89_scan_get_6g_disabled_chan(struct rtw89_dev *rtwdev,
+
+ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *option,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool wowlan)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+ struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+@@ -5016,8 +5089,8 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ le32_encode_bits(option->repeat, RTW89_H2C_SCANOFLD_BE_W0_REPEAT) |
+ le32_encode_bits(true, RTW89_H2C_SCANOFLD_BE_W0_NOTIFY_END) |
+ le32_encode_bits(true, RTW89_H2C_SCANOFLD_BE_W0_LEARN_CH) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_SCANOFLD_BE_W0_MACID) |
+- le32_encode_bits(rtwvif->port, RTW89_H2C_SCANOFLD_BE_W0_PORT) |
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_BE_W0_MACID) |
++ le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_BE_W0_PORT) |
+ le32_encode_bits(option->band, RTW89_H2C_SCANOFLD_BE_W0_BAND);
+
+ h2c->w1 = le32_encode_bits(option->num_macc_role, RTW89_H2C_SCANOFLD_BE_W1_NUM_MACC_ROLE) |
+@@ -5082,11 +5155,11 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+
+ for (i = 0; i < option->num_opch; i++) {
+ opch = ptr;
+- opch->w0 = le32_encode_bits(rtwvif->mac_id,
++ opch->w0 = le32_encode_bits(rtwvif_link->mac_id,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_MACID) |
+ le32_encode_bits(option->band,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_BAND) |
+- le32_encode_bits(rtwvif->port,
++ le32_encode_bits(rtwvif_link->port,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_PORT) |
+ le32_encode_bits(RTW89_SCAN_OPMODE_INTV,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_POLICY) |
+@@ -5871,12 +5944,10 @@ static void rtw89_release_pkt_list(struct rtw89_dev *rtwdev)
+ }
+
+ static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct cfg80211_scan_request *req,
+ struct rtw89_pktofld_info *info,
+ enum nl80211_band band, u8 ssid_idx)
+ {
+- struct cfg80211_scan_request *req = rtwvif->scan_req;
+-
+ if (band != NL80211_BAND_6GHZ)
+ return false;
+
+@@ -5892,11 +5963,13 @@ static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct sk_buff *skb, u8 ssid_idx)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
++ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_pktofld_info *info;
+ struct sk_buff *new;
+ int ret = 0;
+@@ -5921,8 +5994,7 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+ goto out;
+ }
+
+- rtw89_is_6ghz_wildcard_probe_req(rtwdev, rtwvif, info, band,
+- ssid_idx);
++ rtw89_is_6ghz_wildcard_probe_req(rtwdev, req, info, band, ssid_idx);
+
+ ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, new);
+ if (ret) {
+@@ -5939,22 +6011,23 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_hw_scan_update_probe_req(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct sk_buff *skb;
+ u8 num = req->n_ssids, i;
+ int ret;
+
+ for (i = 0; i < num; i++) {
+- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
++ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ req->ssids[i].ssid,
+ req->ssids[i].ssid_len,
+ req->ie_len);
+ if (!skb)
+ return -ENOMEM;
+
+- ret = rtw89_append_probe_req_ie(rtwdev, rtwvif, skb, i);
++ ret = rtw89_append_probe_req_ie(rtwdev, rtwvif_link, skb, i);
+ kfree_skb(skb);
+
+ if (ret)
+@@ -5965,13 +6038,12 @@ static int rtw89_hw_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
++ struct ieee80211_scan_ies *ies,
+ struct cfg80211_scan_request *req,
+ struct rtw89_mac_chinfo *ch_info)
+ {
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
+ struct list_head *pkt_list = rtwdev->scan_info.pkt_list;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+- struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
+ struct cfg80211_scan_6ghz_params *params;
+ struct rtw89_pktofld_info *info, *tmp;
+ struct ieee80211_hdr *hdr;
+@@ -6000,7 +6072,7 @@ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
+ if (found)
+ continue;
+
+- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
++ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ NULL, 0, req->ie_len);
+ skb_put_data(skb, ies->ies[NL80211_BAND_6GHZ], ies->len[NL80211_BAND_6GHZ]);
+ skb_put_data(skb, ies->common_ies, ies->common_ie_len);
+@@ -6090,8 +6162,9 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
+ struct rtw89_mac_chinfo *ch_info)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++ struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_chan *op = &rtwdev->scan_info.op_chan;
+ struct rtw89_pktofld_info *info;
+@@ -6117,7 +6190,7 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
+ }
+ }
+
+- ret = rtw89_update_6ghz_rnr_chan(rtwdev, req, ch_info);
++ ret = rtw89_update_6ghz_rnr_chan(rtwdev, ies, req, ch_info);
+ if (ret)
+ rtw89_warn(rtwdev, "RNR fails: %d\n", ret);
+
+@@ -6207,8 +6280,8 @@ static void rtw89_hw_scan_add_chan_be(struct rtw89_dev *rtwdev, int chan_type,
+ struct rtw89_mac_chinfo_be *ch_info)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_pktofld_info *info;
+ u8 band, probe_count = 0, i;
+@@ -6265,7 +6338,7 @@ static void rtw89_hw_scan_add_chan_be(struct rtw89_dev *rtwdev, int chan_type,
+ }
+
+ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+@@ -6315,8 +6388,9 @@ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected)
++ struct rtw89_vif_link *rtwvif_link, bool connected)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_mac_chinfo *ch_info, *tmp;
+ struct ieee80211_channel *channel;
+@@ -6392,7 +6466,7 @@ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+@@ -6444,8 +6518,9 @@ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected)
++ struct rtw89_vif_link *rtwvif_link, bool connected)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_mac_chinfo_be *ch_info, *tmp;
+ struct ieee80211_channel *channel;
+@@ -6503,45 +6578,50 @@ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_hw_scan_prehandle(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected)
++ struct rtw89_vif_link *rtwvif_link, bool connected)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ int ret;
+
+- ret = rtw89_hw_scan_update_probe_req(rtwdev, rtwvif);
++ ret = rtw89_hw_scan_update_probe_req(rtwdev, rtwvif_link);
+ if (ret) {
+ rtw89_err(rtwdev, "Update probe request failed\n");
+ goto out;
+ }
+- ret = mac->add_chan_list(rtwdev, rtwvif, connected);
++ ret = mac->add_chan_list(rtwdev, rtwvif_link, connected);
+ out:
+ return ret;
+ }
+
+-void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_scan_request *scan_req)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct cfg80211_scan_request *req = &scan_req->req;
++ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
++ rtwvif_link->chanctx_idx);
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ u32 rx_fltr = rtwdev->hal.rx_fltr;
+ u8 mac_addr[ETH_ALEN];
+
+- rtw89_get_channel(rtwdev, rtwvif, &rtwdev->scan_info.op_chan);
+- rtwdev->scan_info.scanning_vif = vif;
++ /* clone op and keep it during scan */
++ rtwdev->scan_info.op_chan = *chan;
++
++ rtwdev->scan_info.scanning_vif = rtwvif_link;
+ rtwdev->scan_info.last_chan_idx = 0;
+ rtwdev->scan_info.abort = false;
+ rtwvif->scan_ies = &scan_req->ies;
+ rtwvif->scan_req = req;
+ ieee80211_stop_queues(rtwdev->hw);
+- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, false);
++ rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, false);
+
+ if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
+ get_random_mask_addr(mac_addr, req->mac_addr,
+ req->mac_addr_mask);
+ else
+- ether_addr_copy(mac_addr, vif->addr);
+- rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, true);
++ ether_addr_copy(mac_addr, rtwvif_link->mac_addr);
++ rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, true);
+
+ rx_fltr &= ~B_AX_A_BCN_CHK_EN;
+ rx_fltr &= ~B_AX_A_BC;
+@@ -6554,28 +6634,33 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_HW_SCAN);
+ }
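
rtw89_hw_scan_start() now snapshots the operating channel with a plain struct assignment ("clone op and keep it during scan"), so later channel-context changes cannot retroactively alter what the scan logic consults. Struct copy semantics in miniature:

#include <stdio.h>

struct chan_def { int center_freq; int width; };

static struct chan_def current_chan = { .center_freq = 5180, .width = 80 };

int main(void)
{
        /* Assignment copies by value: the snapshot is immune to later
         * updates of current_chan, as the scan's op_chan must be. */
        struct chan_def op_chan = current_chan;

        current_chan.center_freq = 5745; /* channel switches mid-scan */

        printf("op_chan still %d MHz\n", op_chan.center_freq);
        return 0;
}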
+
+-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool aborted)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+ struct cfg80211_scan_info info = {
+ .aborted = aborted,
+ };
++ struct rtw89_vif *rtwvif;
+
+- if (!vif)
++ if (!rtwvif_link)
+ return;
+
++ rtw89_chanctx_proceed(rtwdev);
++
++ rtwvif = rtwvif_link->rtwvif;
++
+ rtw89_write32_mask(rtwdev,
+ rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
+ B_AX_RX_FLTR_CFG_MASK,
+ rtwdev->hal.rx_fltr);
+
+- rtw89_core_scan_complete(rtwdev, vif, true);
++ rtw89_core_scan_complete(rtwdev, rtwvif_link, true);
+ ieee80211_scan_completed(rtwdev->hw, &info);
+ ieee80211_wake_queues(rtwdev->hw);
+- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, true);
++ rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, true);
+ rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
+
+ rtw89_release_pkt_list(rtwdev);
+@@ -6584,18 +6669,17 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ scan_info->last_chan_idx = 0;
+ scan_info->scanning_vif = NULL;
+ scan_info->abort = false;
+-
+- rtw89_chanctx_proceed(rtwdev);
+ }
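
A detail worth noting across the two functions above: scan start masks the beacon-check and broadcast bits out of a local copy of hal.rx_fltr before writing it to hardware, and scan complete writes the untouched cached value back, so the pre-scan filter is restored without the cache ever being modified. (The hunk also moves rtw89_chanctx_proceed() from the tail of the function up to just after the NULL check, so the paused channel context resumes before the filter, queues, and port sync are restored.) A self-contained sketch of the save-mask-restore idiom, with the register write stubbed out:

#include <stdint.h>

static uint32_t hw_reg;                 /* stand-in for the RX filter register */

static void write_reg(uint32_t v) { hw_reg = v; }

struct hal { uint32_t rx_fltr; };       /* cached value, never modified by scan */

static void scan_start(const struct hal *hal, uint32_t scan_mask)
{
	uint32_t tmp = hal->rx_fltr;    /* work on a copy, keep the cache intact */

	tmp &= ~scan_mask;              /* e.g. B_AX_A_BCN_CHK_EN | B_AX_A_BC */
	write_reg(tmp);
}

static void scan_complete(const struct hal *hal)
{
	write_reg(hal->rx_fltr);        /* cached value == pre-scan filter */
}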
+
+-void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+ int ret;
+
+ scan_info->abort = true;
+
+- ret = rtw89_hw_scan_offload(rtwdev, vif, false);
++ ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, false);
+ if (ret)
+ rtw89_warn(rtwdev, "rtw89_hw_scan_offload failed ret %d\n", ret);
+
+@@ -6604,40 +6688,43 @@ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ * RTW89_SCAN_END_SCAN_NOTIFY, so that ieee80211_stop() can flush scan
+ * work properly.
+ */
+- rtw89_hw_scan_complete(rtwdev, vif, true);
++ rtw89_hw_scan_complete(rtwdev, rtwvif_link, true);
+ }
+
+ static bool rtw89_is_any_vif_connected_or_connecting(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- /* This variable implies connected or during attempt to connect */
+- if (!is_zero_ether_addr(rtwvif->bssid))
+- return true;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ /* A non-zero bssid implies the link is connected or attempting to connect */
++ if (!is_zero_ether_addr(rtwvif_link->bssid))
++ return true;
++ }
+ }
+
+ return false;
+ }
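
The nested loop above is the iteration idiom this series introduces everywhere: walk each vif, then each of that vif's links. A minimal self-contained sketch of the same shape in plain C (the real rtw89_for_each_rtwvif and rtw89_vif_for_each_link macros are defined elsewhere in the driver; the list/array layout below is an assumption for illustration only):

#include <stdbool.h>
#include <string.h>

#define MAX_LINKS 4

struct vif_link { unsigned char bssid[6]; bool active; };
struct vif { struct vif_link links[MAX_LINKS]; struct vif *next; };

/* true if any active link on any vif has a non-zero BSSID programmed */
static bool any_link_connected(const struct vif *head)
{
	static const unsigned char zero[6];

	for (const struct vif *v = head; v; v = v->next)
		for (int i = 0; i < MAX_LINKS; i++)
			if (v->links[i].active &&
			    memcmp(v->links[i].bssid, zero, sizeof(zero)) != 0)
				return true;
	return false;
}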
+
+-int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_scan_option opt = {0};
+- struct rtw89_vif *rtwvif;
+ bool connected;
+ int ret = 0;
+
+- rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
+- if (!rtwvif)
++ if (!rtwvif_link)
+ return -EINVAL;
+
+ connected = rtw89_is_any_vif_connected_or_connecting(rtwdev);
+ opt.enable = enable;
+ opt.target_ch_mode = connected;
+ if (enable) {
+- ret = rtw89_hw_scan_prehandle(rtwdev, rtwvif, connected);
++ ret = rtw89_hw_scan_prehandle(rtwdev, rtwvif_link, connected);
+ if (ret)
+ goto out;
+ }
+@@ -6652,7 +6739,7 @@ int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ opt.opch_end = connected ? 0 : RTW89_CHAN_INVALID;
+ }
+
+- ret = mac->scan_offload(rtwdev, &opt, rtwvif, false);
++ ret = mac->scan_offload(rtwdev, &opt, rtwvif_link, false);
+ out:
+ return ret;
+ }
+@@ -6758,7 +6845,7 @@ int rtw89_fw_h2c_pkt_drop(struct rtw89_dev *rtwdev,
+ }
+
+ #define H2C_KEEP_ALIVE_LEN 4
+-int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct sk_buff *skb;
+@@ -6766,7 +6853,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ int ret;
+
+ if (enable) {
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_NULL_DATA,
+ &pkt_id);
+ if (ret)
+@@ -6784,7 +6871,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ RTW89_SET_KEEP_ALIVE_ENABLE(skb->data, enable);
+ RTW89_SET_KEEP_ALIVE_PKT_NULL_ID(skb->data, pkt_id);
+ RTW89_SET_KEEP_ALIVE_PERIOD(skb->data, 5);
+- RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif->mac_id);
++ RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif_link->mac_id);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -6806,7 +6893,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ return ret;
+ }
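
rtw89_fw_h2c_keep_alive() above follows the stock H2C command recipe used throughout fw.c: optionally register an offload template packet to obtain a pkt_id, allocate an skb with header room, fill the fixed-size payload, stamp the H2C header, and queue the skb to the firmware. A condensed sketch of that recipe using only identifiers visible in this hunk; the header-stamping arguments and the final tx/error handling are elided as comments because they fall outside this excerpt:

static int h2c_keep_alive_sketch(struct rtw89_dev *rtwdev,
				 struct rtw89_vif_link *rtwvif_link,
				 bool enable)
{
	struct sk_buff *skb;
	u8 pkt_id = 0;
	int ret;

	if (enable) {
		/* firmware transmits this template NULL-data frame for us */
		ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
						   RTW89_PKT_OFLD_TYPE_NULL_DATA,
						   &pkt_id);
		if (ret)
			return ret;
	}

	skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_KEEP_ALIVE_LEN);
	if (!skb)
		return -ENOMEM;
	skb_put(skb, H2C_KEEP_ALIVE_LEN);

	/* fixed-size payload: enable flag, template id, period, per-link macid */
	RTW89_SET_KEEP_ALIVE_ENABLE(skb->data, enable);
	RTW89_SET_KEEP_ALIVE_PKT_NULL_ID(skb->data, pkt_id);
	RTW89_SET_KEEP_ALIVE_PERIOD(skb->data, 5);
	RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif_link->mac_id);

	/* stamp the header; class/func/rack/dack arguments are outside this
	 * excerpt, so the call is shown schematically:
	 * rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C, H2C_CAT_MAC, ...);
	 */

	return 0;	/* tx call and its failure path elided here */
}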
+
+-int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_h2c_arp_offload *h2c;
+@@ -6816,7 +6903,7 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ int ret;
+
+ if (enable) {
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_ARP_RSP,
+ &pkt_id);
+ if (ret)
+@@ -6834,7 +6921,7 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c->w0 = le32_encode_bits(enable, RTW89_H2C_ARP_OFFLOAD_W0_ENABLE) |
+ le32_encode_bits(0, RTW89_H2C_ARP_OFFLOAD_W0_ACTION) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_ARP_OFFLOAD_W0_MACID) |
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_ARP_OFFLOAD_W0_MACID) |
+ le32_encode_bits(pkt_id, RTW89_H2C_ARP_OFFLOAD_W0_PKT_ID);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -6859,11 +6946,11 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ #define H2C_DISCONNECT_DETECT_LEN 8
+ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable)
++ struct rtw89_vif_link *rtwvif_link, bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct sk_buff *skb;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ int ret;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_DISCONNECT_DETECT_LEN);
+@@ -6902,7 +6989,7 @@ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+@@ -6923,7 +7010,7 @@ int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c->w0 = le32_encode_bits(enable, RTW89_H2C_NLO_W0_ENABLE) |
+ le32_encode_bits(enable, RTW89_H2C_NLO_W0_IGNORE_CIPHER) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_NLO_W0_MACID);
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_NLO_W0_MACID);
+
+ if (enable) {
+ h2c->nlo_cnt = nd_config->n_match_sets;
+@@ -6953,12 +7040,12 @@ int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_h2c_wow_global *h2c;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ int ret;
+@@ -7002,12 +7089,12 @@ int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ #define H2C_WAKEUP_CTRL_LEN 4
+ int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct sk_buff *skb;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ int ret;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_WAKEUP_CTRL_LEN);
+@@ -7100,13 +7187,13 @@ int rtw89_fw_wow_cam_update(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_gtk_info *gtk_info = &rtw_wow->gtk_info;
+ struct rtw89_h2c_wow_gtk_ofld *h2c;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ u32 len = sizeof(*h2c);
+ u8 pkt_id_sa_query = 0;
+ struct sk_buff *skb;
+@@ -7128,14 +7215,14 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+ if (!enable)
+ goto hdr;
+
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_EAPOL_KEY,
+ &pkt_id_eapol);
+ if (ret)
+ goto fail;
+
+ if (gtk_info->igtk_keyid) {
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_SA_QUERY,
+ &pkt_id_sa_query);
+ if (ret)
+@@ -7173,7 +7260,7 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wait_info *wait = &rtwdev->mac.ps_wait;
+@@ -7189,7 +7276,7 @@ int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_fwips *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_FW_IPS_W0_MACID) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_FW_IPS_W0_MACID) |
+ le32_encode_bits(enable, RTW89_H2C_FW_IPS_W0_ENABLE);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index ad47e77d740b25..ccbbc43f33feed 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -4404,59 +4404,59 @@ void rtw89_h2c_pkt_set_hdr(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ u8 type, u8 cat, u8 class, u8 func,
+ bool rack, bool dack, u32 len);
+ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
+-int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *vif,
+- struct rtw89_sta *rtwsta, const u8 *scan_mac_addr);
++ struct rtw89_vif_link *rtwvif_link);
++int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif,
++ struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr);
+ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ void rtw89_fw_c2h_irqsafe(struct rtw89_dev *rtwdev, struct sk_buff *c2h);
+ void rtw89_fw_c2h_work(struct work_struct *work);
+ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ enum rtw89_upd_mode upd_mode);
+-int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, bool dis_conn);
++int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, bool dis_conn);
+ int rtw89_fw_h2c_notify_dbcc(struct rtw89_dev *rtwdev, bool en);
+ int rtw89_fw_h2c_macid_pause(struct rtw89_dev *rtwdev, u8 sh, u8 grp,
+ bool pause);
+-int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u8 ac, u32 val);
+ int rtw89_fw_h2c_set_ofld_cfg(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool connect);
+ int rtw89_fw_h2c_rssi_offload(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_phy_ppdu *phy_ppdu);
+-int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ int rtw89_fw_h2c_ra(struct rtw89_dev *rtwdev, struct rtw89_ra_info *ra, bool csi);
+ int rtw89_fw_h2c_cxdrv_init(struct rtw89_dev *rtwdev, u8 type);
+ int rtw89_fw_h2c_cxdrv_init_v7(struct rtw89_dev *rtwdev, u8 type);
+@@ -4478,11 +4478,11 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num,
+ struct list_head *chan_list);
+ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *opt,
+- struct rtw89_vif *vif,
++ struct rtw89_vif_link *vif,
+ bool wowlan);
+ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *opt,
+- struct rtw89_vif *vif,
++ struct rtw89_vif_link *vif,
+ bool wowlan);
+ int rtw89_fw_h2c_rf_reg(struct rtw89_dev *rtwdev,
+ struct rtw89_fw_h2c_rf_reg_info *info,
+@@ -4508,14 +4508,19 @@ int rtw89_fw_h2c_raw_with_hdr(struct rtw89_dev *rtwdev,
+ int rtw89_fw_h2c_raw(struct rtw89_dev *rtwdev, const u8 *buf, u16 len);
+ void rtw89_fw_send_all_early_h2c(struct rtw89_dev *rtwdev);
+ void rtw89_fw_free_all_early_h2c(struct rtw89_dev *rtwdev);
+-int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u8 macid);
+ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool notify_fw);
++ struct rtw89_vif_link *rtwvif_link,
++ bool notify_fw);
+ void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw);
+-int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params);
+-int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params);
+ void rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_init_ba_cam_users(struct rtw89_dev *rtwdev, u8 users,
+@@ -4524,8 +4529,8 @@ int rtw89_fw_h2c_init_ba_cam_users(struct rtw89_dev *rtwdev, u8 users,
+ int rtw89_fw_h2c_lps_parm(struct rtw89_dev *rtwdev,
+ struct rtw89_lps_parm *lps_param);
+ int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
+-int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link);
++int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ struct sk_buff *rtw89_fw_h2c_alloc_skb_with_hdr(struct rtw89_dev *rtwdev, u32 len);
+ struct sk_buff *rtw89_fw_h2c_alloc_skb_no_hdr(struct rtw89_dev *rtwdev, u32 len);
+@@ -4534,49 +4539,56 @@ int rtw89_fw_msg_reg(struct rtw89_dev *rtwdev,
+ struct rtw89_mac_c2h_info *c2h_info);
+ int rtw89_fw_h2c_fw_log(struct rtw89_dev *rtwdev, bool enable);
+ void rtw89_fw_st_dbg_dump(struct rtw89_dev *rtwdev);
+-void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_scan_request *req);
+-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_scan_request *scan_req);
++void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool aborted);
+-int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+-void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected);
++ struct rtw89_vif_link *rtwvif_link, bool connected);
+ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected);
++ struct rtw89_vif_link *rtwvif_link, bool connected);
+ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_fw_h2c_trigger_cpu_exception(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_pkt_drop(struct rtw89_dev *rtwdev,
+ const struct rtw89_pkt_drop_params *params);
+-int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf,
+ struct ieee80211_p2p_noa_desc *desc,
+ u8 act, u8 noa_id);
+-int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool en);
+-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
+-int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link, bool enable);
++int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+-int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
++ struct rtw89_vif_link *rtwvif_link, bool enable);
+ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
+-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link, bool enable);
++int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
++ struct rtw89_vif_link *rtwvif_link, bool enable);
+ int rtw89_fw_wow_cam_update(struct rtw89_dev *rtwdev,
+ struct rtw89_wow_cam_info *cam_info);
+ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_wow_request_aoac(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_add_mcc(struct rtw89_dev *rtwdev,
+@@ -4621,51 +4633,73 @@ static inline void rtw89_fw_h2c_init_ba_cam(struct rtw89_dev *rtwdev)
+ }
+
+ static inline int rtw89_chip_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- return chip->ops->h2c_default_cmac_tbl(rtwdev, rtwvif, rtwsta);
++ return chip->ops->h2c_default_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+ static inline int rtw89_chip_h2c_default_dmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->h2c_default_dmac_tbl)
+- return chip->ops->h2c_default_dmac_tbl(rtwdev, rtwvif, rtwsta);
++ return chip->ops->h2c_default_dmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+
+ return 0;
+ }
+
+ static inline int rtw89_chip_h2c_update_beacon(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- return chip->ops->h2c_update_beacon(rtwdev, rtwvif);
++ return chip->ops->h2c_update_beacon(rtwdev, rtwvif_link);
+ }
+
+ static inline int rtw89_chip_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- return chip->ops->h2c_assoc_cmac_tbl(rtwdev, vif, sta);
++ return chip->ops->h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+-static inline int rtw89_chip_h2c_ampdu_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++static inline
++int rtw89_chip_h2c_ampdu_link_cmac_tbl(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->h2c_ampdu_cmac_tbl)
+- return chip->ops->h2c_ampdu_cmac_tbl(rtwdev, vif, sta);
++ return chip->ops->h2c_ampdu_cmac_tbl(rtwdev, rtwvif_link,
++ rtwsta_link);
++
++ return 0;
++}
++
++static inline int rtw89_chip_h2c_ampdu_cmac_tbl(struct rtw89_dev *rtwdev,
++ struct rtw89_vif *rtwvif,
++ struct rtw89_sta *rtwsta)
++{
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_chip_h2c_ampdu_link_cmac_tbl(rtwdev, rtwvif_link,
++ rtwsta_link);
++ if (ret)
++ return ret;
++ }
+
+ return 0;
+ }
+@@ -4675,8 +4709,20 @@ int rtw89_chip_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ bool valid, struct ieee80211_ampdu_params *params)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = chip->ops->h2c_ba_cam(rtwdev, rtwvif_link, rtwsta_link,
++ valid, params);
++ if (ret)
++ return ret;
++ }
+
+- return chip->ops->h2c_ba_cam(rtwdev, rtwsta, valid, params);
++ return 0;
+ }
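
Both rtw89_chip_h2c_ampdu_cmac_tbl() and rtw89_chip_h2c_ba_cam() above keep their old per-station entry points but now fan the work out across every station link, returning on the first per-link failure. A self-contained sketch of that fan-out-with-early-exit idiom (array-based for illustration only; the real rtw89_sta_for_each_link macro walks the driver's own link bookkeeping):

#include <stdbool.h>

#define MAX_LINKS 4

struct sta_link { int id; bool used; };
struct sta { struct sta_link links[MAX_LINKS]; };

/* stand-in for a per-link chip op such as ops->h2c_ba_cam() */
static int program_link(struct sta_link *link)
{
	return link->id >= 0 ? 0 : -1;
}

/* apply the per-link op to every populated link; first failure wins */
static int program_all_links(struct sta *sta)
{
	for (int i = 0; i < MAX_LINKS; i++) {
		int ret;

		if (!sta->links[i].used)
			continue;
		ret = program_link(&sta->links[i]);
		if (ret)
			return ret;
	}
	return 0;
}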
+
+ /* must consider compatibility; don't insert new entries in the middle */
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index c70a23a763b0ee..4e15d539e3d1c4 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -4076,17 +4076,17 @@ static const struct rtw89_port_reg rtw89_port_base_ax = {
+ };
+
+ static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u8 type)
++ struct rtw89_vif_link *rtwvif_link, u8 type)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif->port);
++ u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif_link->port);
+ u32 reg_info, reg_ctrl;
+ u32 val;
+ int ret;
+
+- reg_info = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg_info, rtwvif->mac_idx);
+- reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg, rtwvif->mac_idx);
++ reg_info = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg_info, rtwvif_link->mac_idx);
++ reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg, rtwvif_link->mac_idx);
+
+ rtw89_write32_mask(rtwdev, reg_ctrl, B_AX_PTCL_DBG_SEL_MASK, type);
+ rtw89_write32_set(rtwdev, reg_ctrl, B_AX_PTCL_DBG_EN);
+@@ -4098,26 +4098,32 @@ static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev,
+ rtw89_warn(rtwdev, "Polling beacon packet empty fail\n");
+ }
+
+-static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_set(rtwdev, p->bcn_drop_all, BIT(rtwvif->port));
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK, 1);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area, B_AX_BCN_MSK_AREA_MASK, 0);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 0);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK, 2);
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK, 1);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK, 1);
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
+-
+- rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM0);
+- if (rtwvif->port == RTW89_PORT_0)
+- rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM1);
+-
+- rtw89_write32_clr(rtwdev, p->bcn_drop_all, BIT(rtwvif->port));
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TBTT_PROHIB_EN);
++ rtw89_write32_set(rtwdev, p->bcn_drop_all, BIT(rtwvif_link->port));
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK,
++ 1);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_area, B_AX_BCN_MSK_AREA_MASK,
++ 0);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK,
++ 0);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_early, B_AX_BCNERLY_MASK, 2);
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_early,
++ B_AX_TBTTERLY_MASK, 1);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_space,
++ B_AX_BCN_SPACE_MASK, 1);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN);
++
++ rtw89_mac_check_packet_ctrl(rtwdev, rtwvif_link, AX_PTCL_DBG_BCNQ_NUM0);
++ if (rtwvif_link->port == RTW89_PORT_0)
++ rtw89_mac_check_packet_ctrl(rtwdev, rtwvif_link, AX_PTCL_DBG_BCNQ_NUM1);
++
++ rtw89_write32_clr(rtwdev, p->bcn_drop_all, BIT(rtwvif_link->port));
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_TBTT_PROHIB_EN);
+ fsleep(2000);
+ }
+
+@@ -4131,286 +4137,329 @@ static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvi
+ #define BCN_ERLY_SET_DLY (10 * 2)
+
+ static void rtw89_mac_port_cfg_func_sw(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
++ struct ieee80211_bss_conf *bss_conf;
+ bool need_backup = false;
+ u32 backup_val;
++ u16 beacon_int;
+
+- if (!rtw89_read32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN))
++ if (!rtw89_read32_port_mask(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN))
+ return;
+
+- if (chip->chip_id == RTL8852A && rtwvif->port != RTW89_PORT_0) {
++ if (chip->chip_id == RTL8852A && rtwvif_link->port != RTW89_PORT_0) {
+ need_backup = true;
+- backup_val = rtw89_read32_port(rtwdev, rtwvif, p->tbtt_prohib);
++ backup_val = rtw89_read32_port(rtwdev, rtwvif_link, p->tbtt_prohib);
+ }
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
+- rtw89_mac_bcn_drop(rtwdev, rtwvif);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
++ rtw89_mac_bcn_drop(rtwdev, rtwvif_link);
+
+ if (chip->chip_id == RTL8852A) {
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 1);
+- rtw89_write16_port_clr(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK);
+- rtw89_write16_port_clr(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->tbtt_prohib,
++ B_AX_TBTT_SETUP_MASK);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib,
++ B_AX_TBTT_HOLD_MASK, 1);
++ rtw89_write16_port_clr(rtwdev, rtwvif_link, p->tbtt_early,
++ B_AX_TBTTERLY_MASK);
++ rtw89_write16_port_clr(rtwdev, rtwvif_link, p->bcn_early,
++ B_AX_BCNERLY_MASK);
+ }
+
+- msleep(vif->bss_conf.beacon_int + 1);
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN |
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ beacon_int = bss_conf->beacon_int;
++
++ rcu_read_unlock();
++
++ msleep(beacon_int + 1);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN |
+ B_AX_BRK_SETUP);
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSFTR_RST);
+- rtw89_write32_port(rtwdev, rtwvif, p->bcn_cnt_tmr, 0);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSFTR_RST);
++ rtw89_write32_port(rtwdev, rtwvif_link, p->bcn_cnt_tmr, 0);
+
+ if (need_backup)
+- rtw89_write32_port(rtwdev, rtwvif, p->tbtt_prohib, backup_val);
++ rtw89_write32_port(rtwdev, rtwvif_link, p->tbtt_prohib, backup_val);
+ }
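
The rcu_read_lock() / rtw89_vif_rcu_dereference_link() pattern introduced above recurs in every helper that used to read vif->bss_conf directly: the link's bss_conf is RCU-protected, so the helper takes the read lock, copies out the scalar it needs, and drops the lock before doing anything that may sleep (the msleep() above) or touch registers. A condensed sketch of the pattern, reusing identifiers from this hunk (kernel context assumed; the second argument to rtw89_vif_rcu_dereference_link is copied verbatim from the patch without interpreting it):

static u16 link_beacon_int_sketch(struct rtw89_vif_link *rtwvif_link)
{
	struct ieee80211_bss_conf *bss_conf;
	u16 beacon_int;

	rcu_read_lock();
	/* only valid under the read lock; mac80211 may swap bss_conf */
	bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
	beacon_int = bss_conf->beacon_int;	/* copy the scalar out */
	rcu_read_unlock();

	return beacon_int;	/* caller may now sleep or do MMIO freely */
}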
+
+ static void rtw89_mac_port_cfg_tx_rpt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_TXBCN_RPT_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_TXBCN_RPT_EN);
+ }
+
+ static void rtw89_mac_port_cfg_rx_rpt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_RXBCN_RPT_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_RXBCN_RPT_EN);
+ }
+
+ static void rtw89_mac_port_cfg_net_type(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_NET_TYPE_MASK,
+- rtwvif->net_type);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->port_cfg, B_AX_NET_TYPE_MASK,
++ rtwvif_link->net_type);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_prct(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- bool en = rtwvif->net_type != RTW89_NET_TYPE_NO_LINK;
++ bool en = rtwvif_link->net_type != RTW89_NET_TYPE_NO_LINK;
+ u32 bits = B_AX_TBTT_PROHIB_EN | B_AX_BRK_SETUP;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, bits);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, bits);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, bits);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, bits);
+ }
+
+ static void rtw89_mac_port_cfg_rx_sw(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA ||
+- rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++ bool en = rtwvif_link->net_type == RTW89_NET_TYPE_INFRA ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
+ u32 bit = B_AX_RX_BSSID_FIT_EN;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, bit);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, bit);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, bit);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, bit);
+ }
+
+ void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSF_UDT_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSF_UDT_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TSF_UDT_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSF_UDT_EN);
+ }
+
+ static void rtw89_mac_port_cfg_rx_sync_by_nettype(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA ||
+- rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++ bool en = rtwvif_link->net_type == RTW89_NET_TYPE_INFRA ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
+
+- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, en);
++ rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, en);
+ }
+
+ static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN);
+ }
+
+ static void rtw89_mac_port_cfg_tx_sw_by_nettype(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- bool en = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ||
+- rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++ bool en = rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
+
+- rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en);
++ rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif_link, en);
+ }
+
+ void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
+- rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
++ rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif_link, en);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_intv(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- u16 bcn_int = vif->bss_conf.beacon_int ? vif->bss_conf.beacon_int : BCN_INTERVAL;
++ struct ieee80211_bss_conf *bss_conf;
++ u16 bcn_int;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ if (bss_conf->beacon_int)
++ bcn_int = bss_conf->beacon_int;
++ else
++ bcn_int = BCN_INTERVAL;
++
++ rcu_read_unlock();
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_space, B_AX_BCN_SPACE_MASK,
+ bcn_int);
+ }
+
+ static void rtw89_mac_port_cfg_hiq_win(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- u8 win = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0;
++ u8 win = rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif_link->mac_idx);
+ rtw89_write8(rtwdev, reg, win);
+ }
+
+ static void rtw89_mac_port_cfg_hiq_dtim(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
++ u8 dtim_period;
+ u32 addr;
+
+- addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif->mac_idx);
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ dtim_period = bss_conf->dtim_period;
++
++ rcu_read_unlock();
++
++ addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif_link->mac_idx);
+ rtw89_write8_set(rtwdev, addr, B_AX_UPD_HGQMD | B_AX_UPD_TIMIE);
+
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->dtim_ctrl, B_AX_DTIM_NUM_MASK,
+- vif->bss_conf.dtim_period);
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->dtim_ctrl, B_AX_DTIM_NUM_MASK,
++ dtim_period);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_setup_time(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib,
+ B_AX_TBTT_SETUP_MASK, BCN_SETUP_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_hold_time(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib,
+ B_AX_TBTT_HOLD_MASK, BCN_HOLD_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_mask_area(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_area,
+ B_AX_BCN_MSK_AREA_MASK, BCN_MASK_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_tbtt_early(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early,
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_early,
+ B_AX_TBTTERLY_MASK, TBTT_ERLY_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_bss_color(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ static const u32 masks[RTW89_PORT_NUM] = {
+ B_AX_BSS_COLOB_AX_PORT_0_MASK, B_AX_BSS_COLOB_AX_PORT_1_MASK,
+ B_AX_BSS_COLOB_AX_PORT_2_MASK, B_AX_BSS_COLOB_AX_PORT_3_MASK,
+ B_AX_BSS_COLOB_AX_PORT_4_MASK,
+ };
+- u8 port = rtwvif->port;
++ struct ieee80211_bss_conf *bss_conf;
++ u8 port = rtwvif_link->port;
+ u32 reg_base;
+ u32 reg;
+ u8 bss_color;
+
+- bss_color = vif->bss_conf.he_bss_color.color;
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ bss_color = bss_conf->he_bss_color.color;
++
++ rcu_read_unlock();
++
+ reg_base = port >= 4 ? p->bss_color + 4 : p->bss_color;
+- reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, masks[port], bss_color);
+ }
+
+ static void rtw89_mac_port_cfg_mbssid(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 reg;
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ return;
+
+ if (port == 0) {
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif_link->mac_idx);
+ rtw89_write32_clr(rtwdev, reg, B_AX_P0MB_ALL_MASK);
+ }
+ }
+
+ static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 reg;
+ u32 val;
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif_link->mac_idx);
+ val = rtw89_read32(rtwdev, reg);
+ val &= ~FIELD_PREP(B_AX_PORT_DROP_4_0_MASK, BIT(port));
+ if (port == 0)
+@@ -4419,31 +4468,31 @@ static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_port_cfg_func_en(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable)
++ struct rtw89_vif_link *rtwvif_link, bool enable)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (enable)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg,
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg,
+ B_AX_PORT_FUNC_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg,
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg,
+ B_AX_PORT_FUNC_EN);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_early(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_early, B_AX_BCNERLY_MASK,
+ BCN_ERLY_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+@@ -4452,20 +4501,20 @@ static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev,
+ if (rtwdev->chip->chip_id != RTL8852C)
+ return;
+
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT &&
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT &&
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION)
+ return;
+
+ val = FIELD_PREP(B_AX_TBTT_SHIFT_OFST_MAG, 1) |
+ B_AX_TBTT_SHIFT_OFST_SIGN;
+
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_shift,
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_shift,
+ B_AX_TBTT_SHIFT_OFST_MASK, val);
+ }
+
+ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_vif *rtwvif_src,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_vif_link *rtwvif_src,
+ u16 offset_tu)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+@@ -4473,8 +4522,8 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+ u32 val, reg;
+
+ val = RTW89_PORT_OFFSET_TU_TO_32US(offset_tu);
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif->port * 4,
+- rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif_link->port * 4,
++ rtwvif_link->mac_idx);
+
+ rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_SRC, rtwvif_src->port);
+ rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_OFFSET_VAL, val);
+@@ -4482,16 +4531,16 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_vif *rtwvif_src,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_vif_link *rtwvif_src,
+ u8 offset, int *n_offset)
+ {
+- if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif == rtwvif_src)
++ if (rtwvif_link->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif_link == rtwvif_src)
+ return;
+
+ /* adjust offset randomly to avoid beacon conflict */
+ offset = offset - offset / 4 + get_random_u32() % (offset / 2);
+- rtw89_mac_port_tsf_sync(rtwdev, rtwvif, rtwvif_src,
++ rtw89_mac_port_tsf_sync(rtwdev, rtwvif_link, rtwvif_src,
+ (*n_offset) * offset);
+
+ (*n_offset)++;
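
The jitter expression above maps offset to offset - offset/4 + rnd % (offset/2), that is, a value drawn from [3*offset/4, 3*offset/4 + offset/2 - 1]. For illustration, with offset == 100 (the base value used in rtw89_mac_port_tsf_resync_all() below, before it is divided by the AP count) the result lies in the inclusive range [75, 124]; each AP port is then synced at the n_offset-th multiple of its own jittered spacing. A self-contained check of the bounds:

#include <assert.h>

/* jitter(offset) as computed in rtw89_mac_port_tsf_sync_rand() */
static unsigned int jitter(unsigned int offset, unsigned int rnd)
{
	return offset - offset / 4 + rnd % (offset / 2);
}

int main(void)
{
	/* with offset == 100: rnd % 50 spans 0..49, so result spans 75..124 */
	assert(jitter(100, 0) == 75);
	assert(jitter(100, 49) == 124);
	return 0;
}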
+@@ -4499,15 +4548,19 @@ static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev,
+
+ static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
+ {
+- struct rtw89_vif *src = NULL, *tmp;
++ struct rtw89_vif_link *src = NULL, *tmp;
+ u8 offset = 100, vif_aps = 0;
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ int n_offset = 1;
+
+- rtw89_for_each_rtwvif(rtwdev, tmp) {
+- if (!src || tmp->net_type == RTW89_NET_TYPE_INFRA)
+- src = tmp;
+- if (tmp->net_type == RTW89_NET_TYPE_AP_MODE)
+- vif_aps++;
++ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++ rtw89_vif_for_each_link(rtwvif, tmp, link_id) {
++ if (!src || tmp->net_type == RTW89_NET_TYPE_INFRA)
++ src = tmp;
++ if (tmp->net_type == RTW89_NET_TYPE_AP_MODE)
++ vif_aps++;
++ }
+ }
+
+ if (vif_aps == 0)
+@@ -4515,104 +4568,106 @@ static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
+
+ offset /= (vif_aps + 1);
+
+- rtw89_for_each_rtwvif(rtwdev, tmp)
+- rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset, &n_offset);
++ rtw89_for_each_rtwvif(rtwdev, rtwvif)
++ rtw89_vif_for_each_link(rtwvif, tmp, link_id)
++ rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset,
++ &n_offset);
+ }
+
+-int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ int ret;
+
+- ret = rtw89_mac_port_update(rtwdev, rtwvif);
++ ret = rtw89_mac_port_update(rtwdev, rtwvif_link);
+ if (ret)
+ return ret;
+
+- rtw89_mac_dmac_tbl_init(rtwdev, rtwvif->mac_id);
+- rtw89_mac_cmac_tbl_init(rtwdev, rtwvif->mac_id);
++ rtw89_mac_dmac_tbl_init(rtwdev, rtwvif_link->mac_id);
++ rtw89_mac_cmac_tbl_init(rtwdev, rtwvif_link->mac_id);
+
+- ret = rtw89_mac_set_macid_pause(rtwdev, rtwvif->mac_id, false);
++ ret = rtw89_mac_set_macid_pause(rtwdev, rtwvif_link->mac_id, false);
+ if (ret)
+ return ret;
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_CREATE);
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_CREATE);
+ if (ret)
+ return ret;
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true);
+ if (ret)
+ return ret;
+
+- ret = rtw89_cam_init(rtwdev, rtwvif);
++ ret = rtw89_cam_init(rtwdev, rtwvif_link);
+ if (ret)
+ return ret;
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
+ if (ret)
+ return ret;
+
+- ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif, NULL);
++ ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif_link, NULL);
+ if (ret)
+ return ret;
+
+- ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif, NULL);
++ ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif_link, NULL);
+ if (ret)
+ return ret;
+
+ return 0;
+ }
+
+-int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ int ret;
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_REMOVE);
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_REMOVE);
+ if (ret)
+ return ret;
+
+- rtw89_cam_deinit(rtwdev, rtwvif);
++ rtw89_cam_deinit(rtwdev, rtwvif_link);
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
+ if (ret)
+ return ret;
+
+ return 0;
+ }
+
+-int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+
+ if (port >= RTW89_PORT_NUM)
+ return -EINVAL;
+
+- rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tx_rpt(rtwdev, rtwvif, false);
+- rtw89_mac_port_cfg_rx_rpt(rtwdev, rtwvif, false);
+- rtw89_mac_port_cfg_net_type(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_rx_sync_by_nettype(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_hiq_drop(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_setup_time(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_hold_time(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_mask_area(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tbtt_early(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tbtt_shift(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bss_color(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_mbssid(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_func_en(rtwdev, rtwvif, true);
++ rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tx_rpt(rtwdev, rtwvif_link, false);
++ rtw89_mac_port_cfg_rx_rpt(rtwdev, rtwvif_link, false);
++ rtw89_mac_port_cfg_net_type(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_rx_sync_by_nettype(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_hiq_drop(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_setup_time(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_hold_time(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_mask_area(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tbtt_early(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tbtt_shift(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bss_color(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_mbssid(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_func_en(rtwdev, rtwvif_link, true);
+ rtw89_mac_port_tsf_resync_all(rtwdev);
+ fsleep(BCN_ERLY_SET_DLY);
+- rtw89_mac_port_cfg_bcn_early(rtwdev, rtwvif);
++ rtw89_mac_port_cfg_bcn_early(rtwdev, rtwvif_link);
+
+ return 0;
+ }
+
+-int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u64 *tsf)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+@@ -4620,12 +4675,12 @@ int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ u32 tsf_low, tsf_high;
+ int ret;
+
+- ret = rtw89_mac_check_mac_en(rtwdev, rtwvif->mac_idx, RTW89_CMAC_SEL);
++ ret = rtw89_mac_check_mac_en(rtwdev, rtwvif_link->mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+- tsf_low = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_l);
+- tsf_high = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_h);
++ tsf_low = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_l);
++ tsf_high = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_h);
+ *tsf = (u64)tsf_high << 32 | tsf_low;
+
+ return 0;
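
rtw89_mac_port_get_tsf() above widens the TSF by reading the low and high 32-bit halves and splicing them; the cast before the shift is what makes this work, since shifting a 32-bit value left by 32 would discard everything (and is undefined in C). A self-contained illustration of the splice:

#include <assert.h>
#include <stdint.h>

static uint64_t splice_tsf(uint32_t lo, uint32_t hi)
{
	return (uint64_t)hi << 32 | lo;	/* cast first, then shift */
}

int main(void)
{
	assert(splice_tsf(0xdeadbeefu, 0x1u) == 0x1deadbeefULL);
	return 0;
}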
+@@ -4651,65 +4706,57 @@ static void rtw89_mac_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ }
+
+ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct ieee80211_hw *hw = rtwdev->hw;
++ struct ieee80211_bss_conf *bss_conf;
++ struct cfg80211_chan_def oper;
+ bool tolerated = true;
+ u32 reg;
+
+- if (!vif->bss_conf.he_support || vif->type != NL80211_IFTYPE_STATION)
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ if (!bss_conf->he_support || vif->type != NL80211_IFTYPE_STATION) {
++ rcu_read_unlock();
+ return;
++ }
+
+- if (!(vif->bss_conf.chanreq.oper.chan->flags & IEEE80211_CHAN_RADAR))
++ oper = bss_conf->chanreq.oper;
++ if (!(oper.chan->flags & IEEE80211_CHAN_RADAR)) {
++ rcu_read_unlock();
+ return;
++ }
++
++ rcu_read_unlock();
+
+- cfg80211_bss_iter(hw->wiphy, &vif->bss_conf.chanreq.oper,
++ cfg80211_bss_iter(hw->wiphy, &oper,
+ rtw89_mac_check_he_obss_narrow_bw_ru_iter,
+ &tolerated);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, mac->narrow_bw_ru_dis.addr,
+- rtwvif->mac_idx);
++ rtwvif_link->mac_idx);
+ if (tolerated)
+ rtw89_write32_clr(rtwdev, reg, mac->narrow_bw_ru_dis.mask);
+ else
+ rtw89_write32_set(rtwdev, reg, mac->narrow_bw_ru_dis.mask);
+ }
+
+-void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif);
++ rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link);
+ }
+
+-int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- int ret;
+-
+- rtwvif->mac_id = rtw89_acquire_mac_id(rtwdev);
+- if (rtwvif->mac_id == RTW89_MAX_MAC_ID_NUM)
+- return -ENOSPC;
+-
+- ret = rtw89_mac_vif_init(rtwdev, rtwvif);
+- if (ret)
+- goto release_mac_id;
+-
+- return 0;
+-
+-release_mac_id:
+- rtw89_release_mac_id(rtwdev, rtwvif->mac_id);
+-
+- return ret;
++ return rtw89_mac_vif_init(rtwdev, rtwvif_link);
+ }
+
+-int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- int ret;
+-
+- ret = rtw89_mac_vif_deinit(rtwdev, rtwvif);
+- rtw89_release_mac_id(rtwdev, rtwvif->mac_id);
+-
+- return ret;
++ return rtw89_mac_vif_deinit(rtwdev, rtwvif_link);
+ }
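
After this hunk, rtw89_mac_add_vif() and rtw89_mac_remove_vif() no longer acquire or release the MAC ID themselves; they collapse to the per-link init/deinit calls. Presumably the ID lifetime now belongs to whatever code creates and destroys the rtw89_vif_link, which is outside this excerpt. A hedged sketch of what that caller-side ownership could look like, reusing only identifiers that appear in the removed lines above (the function name and its placement are hypothetical):

/* hypothetical caller; the real allocation site is not in this excerpt */
static int add_link_sketch(struct rtw89_dev *rtwdev,
			   struct rtw89_vif_link *rtwvif_link)
{
	int ret;

	rtwvif_link->mac_id = rtw89_acquire_mac_id(rtwdev);
	if (rtwvif_link->mac_id == RTW89_MAX_MAC_ID_NUM)
		return -ENOSPC;

	ret = rtw89_mac_add_vif(rtwdev, rtwvif_link);	/* now just vif_init */
	if (ret)
		rtw89_release_mac_id(rtwdev, rtwvif_link->mac_id);

	return ret;
}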
+
+ static void
+@@ -4730,8 +4777,8 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ {
+ const struct rtw89_c2h_scanofld *c2h =
+ (const struct rtw89_c2h_scanofld *)skb->data;
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif *rtwvif;
+ struct rtw89_chan new;
+ u8 reason, status, tx_fail, band, actual_period, expect_period;
+ u32 last_chan = rtwdev->scan_info.last_chan_idx, report_tsf;
+@@ -4739,9 +4786,11 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ u16 chan;
+ int ret;
+
+- if (!rtwvif)
++ if (!rtwvif_link)
+ return;
+
++ rtwvif = rtwvif_link->rtwvif;
++
+ tx_fail = le32_get_bits(c2h->w5, RTW89_C2H_SCANOFLD_W5_TX_FAIL);
+ status = le32_get_bits(c2h->w2, RTW89_C2H_SCANOFLD_W2_STATUS);
+ chan = le32_get_bits(c2h->w2, RTW89_C2H_SCANOFLD_W2_PRI_CH);
+@@ -4781,28 +4830,28 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ if (rtwdev->scan_info.abort)
+ return;
+
+- if (rtwvif && rtwvif->scan_req &&
++ if (rtwvif_link && rtwvif->scan_req &&
+ last_chan < rtwvif->scan_req->n_channels) {
+- ret = rtw89_hw_scan_offload(rtwdev, vif, true);
++ ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, true);
+ if (ret) {
+- rtw89_hw_scan_abort(rtwdev, vif);
++ rtw89_hw_scan_abort(rtwdev, rtwvif_link);
+ rtw89_warn(rtwdev, "HW scan failed: %d\n", ret);
+ }
+ } else {
+- rtw89_hw_scan_complete(rtwdev, vif, false);
++ rtw89_hw_scan_complete(rtwdev, rtwvif_link, false);
+ }
+ break;
+ case RTW89_SCAN_ENTER_OP_NOTIFY:
+ case RTW89_SCAN_ENTER_CH_NOTIFY:
+ if (rtw89_is_op_chan(rtwdev, band, chan)) {
+- rtw89_assign_entity_chan(rtwdev, rtwvif->chanctx_idx,
++ rtw89_assign_entity_chan(rtwdev, rtwvif_link->chanctx_idx,
+ &rtwdev->scan_info.op_chan);
+ rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
+ ieee80211_wake_queues(rtwdev->hw);
+ } else {
+ rtw89_chan_create(&new, chan, chan, band,
+ RTW89_CHANNEL_WIDTH_20);
+- rtw89_assign_entity_chan(rtwdev, rtwvif->chanctx_idx,
++ rtw89_assign_entity_chan(rtwdev, rtwvif_link->chanctx_idx,
+ &new);
+ }
+ break;
+@@ -4812,10 +4861,11 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ }
+
+ static void
+-rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ struct sk_buff *skb)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif_safe(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ enum nl80211_cqm_rssi_threshold_event nl_event;
+ const struct rtw89_c2h_mac_bcnfltr_rpt *c2h =
+ (const struct rtw89_c2h_mac_bcnfltr_rpt *)skb->data;
+@@ -4827,7 +4877,7 @@ rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ event = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_EVENT);
+ mac_id = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_MACID);
+
+- if (mac_id != rtwvif->mac_id)
++ if (mac_id != rtwvif_link->mac_id)
+ return;
+
+ rtw89_debug(rtwdev, RTW89_DBG_FW,
+@@ -4839,7 +4889,7 @@ rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ if (!rtwdev->scanning && !rtwvif->offchan)
+ ieee80211_connection_loss(vif);
+ else
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+ return;
+ case RTW89_BCN_FLTR_NOTIFY:
+ nl_event = NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH;
+@@ -4863,10 +4913,13 @@ static void
+ rtw89_mac_c2h_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct sk_buff *c2h,
+ u32 len)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_bcn_fltr_rpt(rtwdev, rtwvif, c2h);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_bcn_fltr_rpt(rtwdev, rtwvif_link, c2h);
+ }
+
+ static void
+@@ -5931,15 +5984,15 @@ static int rtw89_mac_init_bfee_ax(struct rtw89_dev *rtwdev, u8 mac_idx)
+ }
+
+ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- u8 mac_idx = rtwvif->mac_idx;
+ u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1;
+- u8 port_sel = rtwvif->port;
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
++ u8 port_sel = rtwvif_link->port;
+ u8 sound_dim = 3, t;
+- u8 *phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
++ u8 *phy_cap;
+ u32 reg;
+ u16 val;
+ int ret;
+@@ -5948,6 +6001,11 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ if (ret)
+ return ret;
+
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
++
+ if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+ (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) {
+ ldpc_en &= !!(phy_cap[1] & IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD);
+@@ -5956,17 +6014,19 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ phy_cap[5]);
+ sound_dim = min(sound_dim, t);
+ }
+- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
+- ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
+- stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
++ if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
++ ldpc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
++ stbc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
+ t = FIELD_GET(IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK,
+- sta->deflink.vht_cap.cap);
++ link_sta->vht_cap.cap);
+ sound_dim = min(sound_dim, t);
+ }
+ nc = min(nc, sound_dim);
+ nr = min(nr, sound_dim);
+
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_AX_BFMEE_BFPARAM_SEL);
+
+@@ -5989,34 +6049,41 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M);
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
+ u32 reg;
+- u8 mac_idx = rtwvif->mac_idx;
+ int ret;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+- if (sta->deflink.he_cap.has_he) {
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->he_cap.has_he) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC5));
+ }
+- if (sta->deflink.vht_cap.vht_supported) {
++ if (link_sta->vht_cap.vht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC5));
+ }
+- if (sta->deflink.ht_cap.ht_supported) {
++ if (link_sta->ht_cap.ht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC5));
+ }
++
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_AX_BFMEE_BFPARAM_SEL);
+ rtw89_write32_clr(rtwdev, reg, B_AX_BFMEE_CSI_FORCE_RETE_EN);
+@@ -6028,35 +6095,53 @@ static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_bf_assoc_ax(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct ieee80211_link_sta *link_sta;
++ bool has_beamformer_cap;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ has_beamformer_cap = rtw89_sta_has_beamformer_cap(link_sta);
++
++ rcu_read_unlock();
+
+- if (rtw89_sta_has_beamformer_cap(sta)) {
++ if (has_beamformer_cap) {
+ rtw89_debug(rtwdev, RTW89_DBG_BF,
+ "initialize bfee for new association\n");
+- rtw89_mac_init_bfee_ax(rtwdev, rtwvif->mac_idx);
+- rtw89_mac_set_csi_para_reg_ax(rtwdev, vif, sta);
+- rtw89_mac_csi_rrsc_ax(rtwdev, vif, sta);
++ rtw89_mac_init_bfee_ax(rtwdev, rtwvif_link->mac_idx);
++ rtw89_mac_set_csi_para_reg_ax(rtwdev, rtwvif_link, rtwsta_link);
++ rtw89_mac_csi_rrsc_ax(rtwdev, rtwvif_link, rtwsta_link);
+ }
+ }
+
+-void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+-
+- rtw89_mac_bfee_ctrl(rtwdev, rtwvif->mac_idx, false);
++ rtw89_mac_bfee_ctrl(rtwdev, rtwvif_link->mac_idx, false);
+ }
+
+ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *conf)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- u8 mac_idx = rtwvif->mac_idx;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ u8 mac_idx;
+ __le32 *p;
+
++ rtwvif_link = rtwvif->links[conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, conf->link_id);
++ return;
++ }
++
++ mac_idx = rtwvif_link->mac_idx;
++
+ rtw89_debug(rtwdev, RTW89_DBG_BF, "update bf GID table\n");
+
+ p = (__le32 *)conf->mu_group.membership;
+@@ -6080,7 +6165,7 @@ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *
+
+ struct rtw89_mac_bf_monitor_iter_data {
+ struct rtw89_dev *rtwdev;
+- struct ieee80211_sta *down_sta;
++ struct rtw89_sta_link *down_rtwsta_link;
+ int count;
+ };
+
+@@ -6089,23 +6174,41 @@ void rtw89_mac_bf_monitor_calc_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_mac_bf_monitor_iter_data *iter_data =
+ (struct rtw89_mac_bf_monitor_iter_data *)data;
+- struct ieee80211_sta *down_sta = iter_data->down_sta;
++ struct rtw89_sta_link *down_rtwsta_link = iter_data->down_rtwsta_link;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct ieee80211_link_sta *link_sta;
++ struct rtw89_sta_link *rtwsta_link;
++ bool has_beamformer_cap = false;
+ int *count = &iter_data->count;
++ unsigned int link_id;
+
+- if (down_sta == sta)
+- return;
++ rcu_read_lock();
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ if (rtwsta_link == down_rtwsta_link)
++ continue;
+
+- if (rtw89_sta_has_beamformer_cap(sta))
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ if (rtw89_sta_has_beamformer_cap(link_sta)) {
++ has_beamformer_cap = true;
++ break;
++ }
++ }
++
++ if (has_beamformer_cap)
+ (*count)++;
++
++ rcu_read_unlock();
+ }
+
+ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, bool disconnect)
++ struct rtw89_sta_link *rtwsta_link,
++ bool disconnect)
+ {
+ struct rtw89_mac_bf_monitor_iter_data data;
+
+ data.rtwdev = rtwdev;
+- data.down_sta = disconnect ? sta : NULL;
++ data.down_rtwsta_link = disconnect ? rtwsta_link : NULL;
+ data.count = 0;
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_mac_bf_monitor_calc_iter,
+@@ -6121,10 +6224,12 @@ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
+ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_traffic_stats *stats = &rtwdev->stats;
+- struct rtw89_vif *rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
+ bool en = stats->tx_tfc_lv <= stats->rx_tfc_lv;
+ bool old = test_bit(RTW89_FLAG_BFEE_EN, rtwdev->flags);
++ struct rtw89_vif *rtwvif;
+ bool keep_timer = true;
++ unsigned int link_id;
+ bool old_keep_timer;
+
+ old_keep_timer = test_bit(RTW89_FLAG_BFEE_TIMER_KEEP, rtwdev->flags);
+@@ -6134,30 +6239,32 @@ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
+
+ if (keep_timer != old_keep_timer) {
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_bfee_standby_timer(rtwdev, rtwvif->mac_idx,
+- keep_timer);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_bfee_standby_timer(rtwdev, rtwvif_link->mac_idx,
++ keep_timer);
+ }
+
+ if (en == old)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_bfee_ctrl(rtwdev, rtwvif->mac_idx, en);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_bfee_ctrl(rtwdev, rtwvif_link->mac_idx, en);
+ }
+
+ static int
+-__rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++__rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ u32 tx_time)
+ {
+ #define MAC_AX_DFLT_TX_TIME 5280
+- u8 mac_idx = rtwsta->rtwvif->mac_idx;
++ u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx;
+ u32 max_tx_time = tx_time == 0 ? MAC_AX_DFLT_TX_TIME : tx_time;
+ u32 reg;
+ int ret = 0;
+
+- if (rtwsta->cctl_tx_time) {
+- rtwsta->ampdu_max_time = (max_tx_time - 512) >> 9;
+- ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta);
++ if (rtwsta_link->cctl_tx_time) {
++ rtwsta_link->ampdu_max_time = (max_tx_time - 512) >> 9;
++ ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link);
+ } else {
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret) {
+@@ -6173,31 +6280,31 @@ __rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ return ret;
+ }
+
+-int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ bool resume, u32 tx_time)
+ {
+ int ret = 0;
+
+ if (!resume) {
+- rtwsta->cctl_tx_time = true;
+- ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta, tx_time);
++ rtwsta_link->cctl_tx_time = true;
++ ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta_link, tx_time);
+ } else {
+- ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta, tx_time);
+- rtwsta->cctl_tx_time = false;
++ ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta_link, tx_time);
++ rtwsta_link->cctl_tx_time = false;
+ }
+
+ return ret;
+ }
+
+-int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ u32 *tx_time)
+ {
+- u8 mac_idx = rtwsta->rtwvif->mac_idx;
++ u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx;
+ u32 reg;
+ int ret = 0;
+
+- if (rtwsta->cctl_tx_time) {
+- *tx_time = (rtwsta->ampdu_max_time + 1) << 9;
++ if (rtwsta_link->cctl_tx_time) {
++ *tx_time = (rtwsta_link->ampdu_max_time + 1) << 9;
+ } else {
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret) {
+@@ -6213,33 +6320,33 @@ int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ }
+
+ int rtw89_mac_set_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_sta_link *rtwsta_link,
+ bool resume, u8 tx_retry)
+ {
+ int ret = 0;
+
+- rtwsta->data_tx_cnt_lmt = tx_retry;
++ rtwsta_link->data_tx_cnt_lmt = tx_retry;
+
+ if (!resume) {
+- rtwsta->cctl_tx_retry_limit = true;
+- ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta);
++ rtwsta_link->cctl_tx_retry_limit = true;
++ ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link);
+ } else {
+- ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta);
+- rtwsta->cctl_tx_retry_limit = false;
++ ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link);
++ rtwsta_link->cctl_tx_retry_limit = false;
+ }
+
+ return ret;
+ }
+
+ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 *tx_retry)
++ struct rtw89_sta_link *rtwsta_link, u8 *tx_retry)
+ {
+- u8 mac_idx = rtwsta->rtwvif->mac_idx;
++ u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx;
+ u32 reg;
+ int ret = 0;
+
+- if (rtwsta->cctl_tx_retry_limit) {
+- *tx_retry = rtwsta->data_tx_cnt_lmt;
++ if (rtwsta_link->cctl_tx_retry_limit) {
++ *tx_retry = rtwsta_link->data_tx_cnt_lmt;
+ } else {
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret) {
+@@ -6255,10 +6362,10 @@ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+- u8 mac_idx = rtwvif->mac_idx;
++ u8 mac_idx = rtwvif_link->mac_idx;
+ u16 set = mac->muedca_ctrl.mask;
+ u32 reg;
+ u32 ret;
+@@ -6326,7 +6433,9 @@ int rtw89_mac_read_xtal_si_ax(struct rtw89_dev *rtwdev, u8 offset, u8 *val)
+ }
+
+ static
+-void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
++void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ static const enum rtw89_pkt_drop_sel sels[] = {
+ RTW89_PKT_DROP_SEL_MACID_BE_ONCE,
+@@ -6334,15 +6443,14 @@ void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
+ RTW89_PKT_DROP_SEL_MACID_VI_ONCE,
+ RTW89_PKT_DROP_SEL_MACID_VO_ONCE,
+ };
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_pkt_drop_params params = {0};
+ int i;
+
+ params.mac_band = RTW89_MAC_0;
+- params.macid = rtwsta->mac_id;
+- params.port = rtwvif->port;
++ params.macid = rtwsta_link->mac_id;
++ params.port = rtwvif_link->port;
+ params.mbssid = 0;
+- params.tf_trs = rtwvif->trigger;
++ params.tf_trs = rtwvif_link->trigger;
+
+ for (i = 0; i < ARRAY_SIZE(sels); i++) {
+ params.sel = sels[i];
+@@ -6352,15 +6460,21 @@ void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
+
+ static void rtw89_mac_pkt_drop_vif_iter(void *data, struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
+ struct rtw89_vif *target = data;
++ unsigned int link_id;
+
+ if (rtwvif != target)
+ return;
+
+- rtw89_mac_pkt_drop_sta(rtwdev, rtwsta);
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ rtw89_mac_pkt_drop_sta(rtwdev, rtwvif_link, rtwsta_link);
++ }
+ }
+
+ void rtw89_mac_pkt_drop_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index 67c2a45071244d..0c269961a57311 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -951,8 +951,9 @@ struct rtw89_mac_gen_def {
+ void (*dmac_func_pre_en)(struct rtw89_dev *rtwdev);
+ void (*dle_func_en)(struct rtw89_dev *rtwdev, bool enable);
+ void (*dle_clk_en)(struct rtw89_dev *rtwdev, bool enable);
+- void (*bf_assoc)(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ void (*bf_assoc)(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+
+ int (*typ_fltr_opt)(struct rtw89_dev *rtwdev,
+ enum rtw89_machdr_frame_type type,
+@@ -1004,12 +1005,12 @@ struct rtw89_mac_gen_def {
+ bool (*is_txq_empty)(struct rtw89_dev *rtwdev);
+
+ int (*add_chan_list)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected);
++ struct rtw89_vif_link *rtwvif_link, bool connected);
+ int (*add_chan_list_pno)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int (*scan_offload)(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *option,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool wowlan);
+
+ int (*wow_config_mac)(struct rtw89_dev *rtwdev, bool enable_wow);
+@@ -1033,81 +1034,89 @@ u32 rtw89_mac_reg_by_port(struct rtw89_dev *rtwdev, u32 base, u8 port, u8 mac_id
+ }
+
+ static inline u32
+-rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base)
++rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ return rtw89_read32(rtwdev, reg);
+ }
+
+ static inline u32
+-rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 mask)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ return rtw89_read32_mask(rtwdev, reg, mask);
+ }
+
+ static inline void
+-rtw89_write32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base,
++rtw89_write32_port(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base,
+ u32 data)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32(rtwdev, reg, data);
+ }
+
+ static inline void
+-rtw89_write32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 mask, u32 data)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, mask, data);
+ }
+
+ static inline void
+-rtw89_write16_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write16_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 mask, u16 data)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write16_mask(rtwdev, reg, mask, data);
+ }
+
+ static inline void
+-rtw89_write32_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write32_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 bit)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32_clr(rtwdev, reg, bit);
+ }
+
+ static inline void
+-rtw89_write16_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write16_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u16 bit)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write16_clr(rtwdev, reg, bit);
+ }
+
+ static inline void
+-rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 bit)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32_set(rtwdev, reg, bit);
+ }
+
+@@ -1139,21 +1148,21 @@ int rtw89_mac_dle_dfi_qempty_cfg(struct rtw89_dev *rtwdev,
+ struct rtw89_mac_dle_dfi_qempty *qempty);
+ void rtw89_mac_dump_l0_to_l1(struct rtw89_dev *rtwdev,
+ enum mac_ax_err_info err);
+-int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
+-int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
++int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_vif *rtwvif_src,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_vif_link *rtwvif_src,
+ u16 offset_tu);
+-int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u64 *tsf);
+ void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en);
++ struct rtw89_vif_link *rtwvif_link, bool en);
+ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif);
+-void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
++void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en);
+-int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
++int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
+ int rtw89_mac_enable_bb_rf(struct rtw89_dev *rtwdev);
+ int rtw89_mac_disable_bb_rf(struct rtw89_dev *rtwdev);
+
+@@ -1251,27 +1260,30 @@ void rtw89_mac_power_mode_change(struct rtw89_dev *rtwdev, bool enter);
+ void rtw89_mac_notify_wake(struct rtw89_dev *rtwdev);
+
+ static inline
+-void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+
+ if (mac->bf_assoc)
+- mac->bf_assoc(rtwdev, vif, sta);
++ mac->bf_assoc(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+-void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *conf);
+ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, bool disconnect);
++ struct rtw89_sta_link *rtwsta_link,
++ bool disconnect);
+ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev);
+ void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en);
+-int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+-int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
++int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en);
++ struct rtw89_vif_link *rtwvif_link, bool en);
+ int rtw89_mac_set_macid_pause(struct rtw89_dev *rtwdev, u8 macid, bool pause);
+
+ static inline void rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
+@@ -1376,15 +1388,15 @@ static inline bool rtw89_mac_get_power_state(struct rtw89_dev *rtwdev)
+ return !!val;
+ }
+
+-int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ bool resume, u32 tx_time);
+-int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ u32 *tx_time);
+ int rtw89_mac_set_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_sta_link *rtwsta_link,
+ bool resume, u8 tx_retry);
+ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 *tx_retry);
++ struct rtw89_sta_link *rtwsta_link, u8 *tx_retry);
+
+ enum rtw89_mac_xtal_si_offset {
+ XTAL0 = 0x0,
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 48ad0d0f76bff4..13fb3cac27016b 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -23,13 +23,13 @@ static void rtw89_ops_tx(struct ieee80211_hw *hw,
+ struct rtw89_dev *rtwdev = hw->priv;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = info->control.vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ struct ieee80211_sta *sta = control->sta;
+ u32 flags = IEEE80211_SKB_CB(skb)->flags;
+ int ret, qsel;
+
+ if (rtwvif->offchan && !(flags & IEEE80211_TX_CTL_TX_OFFCHAN) && sta) {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX, "ops_tx during offchan\n");
+ skb_queue_tail(&rtwsta->roc_queue, skb);
+@@ -105,11 +105,61 @@ static int rtw89_ops_config(struct ieee80211_hw *hw, u32 changed)
+ return 0;
+ }
+
++static int __rtw89_ops_add_iface_link(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct ieee80211_bss_conf *bss_conf;
++ int ret;
++
++ rtw89_leave_ps_mode(rtwdev);
++
++ rtw89_vif_type_mapping(rtwvif_link, false);
++
++ INIT_WORK(&rtwvif_link->update_beacon_work, rtw89_core_update_beacon_work);
++ INIT_LIST_HEAD(&rtwvif_link->general_pkt_list);
++
++ rtwvif_link->hit_rule = 0;
++ rtwvif_link->bcn_hit_cond = 0;
++ rtwvif_link->chanctx_assigned = false;
++ rtwvif_link->chanctx_idx = RTW89_CHANCTX_0;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ ether_addr_copy(rtwvif_link->mac_addr, bss_conf->addr);
++
++ rcu_read_unlock();
++
++ ret = rtw89_mac_add_vif(rtwdev, rtwvif_link);
++ if (ret)
++ return ret;
++
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, NULL, BTC_ROLE_START);
++ return 0;
++}
++
++static void __rtw89_ops_remove_iface_link(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ mutex_unlock(&rtwdev->mutex);
++ cancel_work_sync(&rtwvif_link->update_beacon_work);
++ mutex_lock(&rtwdev->mutex);
++
++ rtw89_leave_ps_mode(rtwdev);
++
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, NULL, BTC_ROLE_STOP);
++
++ rtw89_mac_remove_vif(rtwdev, rtwvif_link);
++}
++
+ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ u8 mac_id, port;
+ int ret = 0;
+
+ rtw89_debug(rtwdev, RTW89_DBG_STATE, "add vif %pM type %d, p2p %d\n",
+@@ -123,49 +173,56 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER |
+ IEEE80211_VIF_SUPPORTS_CQM_RSSI;
+
+- rtwvif->rtwdev = rtwdev;
+- rtwvif->roc.state = RTW89_ROC_IDLE;
+- rtwvif->offchan = false;
++ mac_id = rtw89_acquire_mac_id(rtwdev);
++ if (mac_id == RTW89_MAX_MAC_ID_NUM) {
++ ret = -ENOSPC;
++ goto err;
++ }
++
++ port = rtw89_core_acquire_bit_map(rtwdev->hw_port, RTW89_PORT_NUM);
++ if (port == RTW89_PORT_NUM) {
++ ret = -ENOSPC;
++ goto release_macid;
++ }
++
++ rtw89_init_vif(rtwdev, rtwvif, mac_id, port);
++
++ rtw89_core_txq_init(rtwdev, vif->txq);
++
+ if (!rtw89_rtwvif_in_list(rtwdev, rtwvif))
+ list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
+
+- INIT_WORK(&rtwvif->update_beacon_work, rtw89_core_update_beacon_work);
++ ether_addr_copy(rtwvif->mac_addr, vif->addr);
++
++ rtwvif->offchan = false;
++ rtwvif->roc.state = RTW89_ROC_IDLE;
+ INIT_DELAYED_WORK(&rtwvif->roc.roc_work, rtw89_roc_work);
+- rtw89_leave_ps_mode(rtwdev);
+
+ rtw89_traffic_stats_init(rtwdev, &rtwvif->stats);
+- rtw89_vif_type_mapping(vif, false);
+- rtwvif->port = rtw89_core_acquire_bit_map(rtwdev->hw_port,
+- RTW89_PORT_NUM);
+- if (rtwvif->port == RTW89_PORT_NUM) {
+- ret = -ENOSPC;
+- list_del_init(&rtwvif->list);
+- goto out;
+- }
+-
+- rtwvif->bcn_hit_cond = 0;
+- rtwvif->mac_idx = RTW89_MAC_0;
+- rtwvif->phy_idx = RTW89_PHY_0;
+- rtwvif->chanctx_idx = RTW89_CHANCTX_0;
+- rtwvif->chanctx_assigned = false;
+- rtwvif->hit_rule = 0;
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
+- ether_addr_copy(rtwvif->mac_addr, vif->addr);
+- INIT_LIST_HEAD(&rtwvif->general_pkt_list);
+
+- ret = rtw89_mac_add_vif(rtwdev, rtwvif);
+- if (ret) {
+- rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
+- list_del_init(&rtwvif->list);
+- goto out;
++ rtwvif_link = rtw89_vif_set_link(rtwvif, 0);
++ if (!rtwvif_link) {
++ ret = -EINVAL;
++ goto release_port;
+ }
+
+- rtw89_core_txq_init(rtwdev, vif->txq);
+-
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_START);
++ ret = __rtw89_ops_add_iface_link(rtwdev, rtwvif_link);
++ if (ret)
++ goto unset_link;
+
+ rtw89_recalc_lps(rtwdev);
+-out:
++
++ mutex_unlock(&rtwdev->mutex);
++ return 0;
++
++unset_link:
++ rtw89_vif_unset_link(rtwvif, 0);
++release_port:
++ list_del_init(&rtwvif->list);
++ rtw89_core_release_bit_map(rtwdev->hw_port, port);
++release_macid:
++ rtw89_release_mac_id(rtwdev, mac_id);
++err:
+ mutex_unlock(&rtwdev->mutex);
+
+ return ret;
+@@ -175,20 +232,35 @@ static void rtw89_ops_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ u8 macid = rtw89_vif_get_main_macid(rtwvif);
++ u8 port = rtw89_vif_get_main_port(rtwvif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ rtw89_debug(rtwdev, RTW89_DBG_STATE, "remove vif %pM type %d p2p %d\n",
+ vif->addr, vif->type, vif->p2p);
+
+- cancel_work_sync(&rtwvif->update_beacon_work);
+ cancel_delayed_work_sync(&rtwvif->roc.roc_work);
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_leave_ps_mode(rtwdev);
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_STOP);
+- rtw89_mac_remove_vif(rtwdev, rtwvif);
+- rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
++
++ rtwvif_link = rtwvif->links[0];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, 0);
++ goto bottom;
++ }
++
++ __rtw89_ops_remove_iface_link(rtwdev, rtwvif_link);
++
++ rtw89_vif_unset_link(rtwvif, 0);
++
++bottom:
+ list_del_init(&rtwvif->list);
++ rtw89_core_release_bit_map(rtwdev->hw_port, port);
++ rtw89_release_mac_id(rtwdev, macid);
++
+ rtw89_recalc_lps(rtwdev);
+ rtw89_enter_ips_by_hwflags(rtwdev);
+
+@@ -311,24 +383,30 @@ static const u8 ac_to_fw_idx[IEEE80211_NUM_ACS] = {
+ };
+
+ static u8 rtw89_aifsn_to_aifs(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u8 aifsn)
++ struct rtw89_vif_link *rtwvif_link, u8 aifsn)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_bss_conf *bss_conf;
+ u8 slot_time;
+ u8 sifs;
+
+- slot_time = vif->bss_conf.use_short_slot ? 9 : 20;
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ slot_time = bss_conf->use_short_slot ? 9 : 20;
++
++ rcu_read_unlock();
++
+ sifs = chan->band_type == RTW89_BAND_2G ? 10 : 16;
+
+ return aifsn * slot_time + sifs;
+ }
+
+ static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u16 ac)
++ struct rtw89_vif_link *rtwvif_link, u16 ac)
+ {
+- struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac];
++ struct ieee80211_tx_queue_params *params = &rtwvif_link->tx_params[ac];
+ u32 val;
+ u8 ecw_max, ecw_min;
+ u8 aifs;
+@@ -336,12 +414,12 @@ static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev,
+ /* 2^ecw - 1 = cw; ecw = log2(cw + 1) */
+ ecw_max = ilog2(params->cw_max + 1);
+ ecw_min = ilog2(params->cw_min + 1);
+- aifs = rtw89_aifsn_to_aifs(rtwdev, rtwvif, params->aifs);
++ aifs = rtw89_aifsn_to_aifs(rtwdev, rtwvif_link, params->aifs);
+ val = FIELD_PREP(FW_EDCA_PARAM_TXOPLMT_MSK, params->txop) |
+ FIELD_PREP(FW_EDCA_PARAM_CWMAX_MSK, ecw_max) |
+ FIELD_PREP(FW_EDCA_PARAM_CWMIN_MSK, ecw_min) |
+ FIELD_PREP(FW_EDCA_PARAM_AIFS_MSK, aifs);
+- rtw89_fw_h2c_set_edca(rtwdev, rtwvif, ac_to_fw_idx[ac], val);
++ rtw89_fw_h2c_set_edca(rtwdev, rtwvif_link, ac_to_fw_idx[ac], val);
+ }
+
+ #define R_MUEDCA_ACS_PARAM(acs) {R_AX_MUEDCA_ ## acs ## _PARAM_0, \
+@@ -355,9 +433,9 @@ static const u32 ac_to_mu_edca_param[IEEE80211_NUM_ACS][RTW89_CHIP_GEN_NUM] = {
+ };
+
+ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u16 ac)
++ struct rtw89_vif_link *rtwvif_link, u16 ac)
+ {
+- struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac];
++ struct ieee80211_tx_queue_params *params = &rtwvif_link->tx_params[ac];
+ struct ieee80211_he_mu_edca_param_ac_rec *mu_edca;
+ int gen = rtwdev->chip->chip_gen;
+ u8 aifs, aifsn;
+@@ -370,32 +448,199 @@ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
+
+	mu_edca = &params->mu_edca_param_rec;
+ aifsn = FIELD_GET(GENMASK(3, 0), mu_edca->aifsn);
+- aifs = aifsn ? rtw89_aifsn_to_aifs(rtwdev, rtwvif, aifsn) : 0;
++ aifs = aifsn ? rtw89_aifsn_to_aifs(rtwdev, rtwvif_link, aifsn) : 0;
+ timer_32us = mu_edca->mu_edca_timer << 8;
+
+ val = FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_TIMER_MASK, timer_32us) |
+ FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_CW_MASK, mu_edca->ecw_min_max) |
+ FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_AIFS_MASK, aifs);
+- reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen], rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen],
++ rtwvif_link->mac_idx);
+ rtw89_write32(rtwdev, reg, val);
+
+- rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif, true);
++ rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif_link, true);
+ }
+
+ static void __rtw89_conf_tx(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u16 ac)
++ struct rtw89_vif_link *rtwvif_link, u16 ac)
+ {
+- ____rtw89_conf_tx_edca(rtwdev, rtwvif, ac);
+- ____rtw89_conf_tx_mu_edca(rtwdev, rtwvif, ac);
++ ____rtw89_conf_tx_edca(rtwdev, rtwvif_link, ac);
++ ____rtw89_conf_tx_mu_edca(rtwdev, rtwvif_link, ac);
+ }
+
+ static void rtw89_conf_tx(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ u16 ac;
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++)
+- __rtw89_conf_tx(rtwdev, rtwvif, ac);
++ __rtw89_conf_tx(rtwdev, rtwvif_link, ac);
++}
++
++static int __rtw89_ops_sta_add(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ bool acquire_macid = false;
++ u8 macid;
++ int ret;
++ int i;
++
++ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
++ /* for station mode, assign the mac_id from itself */
++ macid = rtw89_vif_get_main_macid(rtwvif);
++ } else {
++ macid = rtw89_acquire_mac_id(rtwdev);
++ if (macid == RTW89_MAX_MAC_ID_NUM)
++ return -ENOSPC;
++
++ acquire_macid = true;
++ }
++
++ rtw89_init_sta(rtwdev, rtwvif, rtwsta, macid);
++
++ for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
++ rtw89_core_txq_init(rtwdev, sta->txq[i]);
++
++ skb_queue_head_init(&rtwsta->roc_queue);
++
++ rtwsta_link = rtw89_sta_set_link(rtwsta, sta->deflink.link_id);
++ if (!rtwsta_link) {
++ ret = -EINVAL;
++ goto err;
++ }
++
++ rtwvif_link = rtwsta_link->rtwvif_link;
++
++ ret = rtw89_core_sta_link_add(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ goto unset_link;
++
++ if (vif->type == NL80211_IFTYPE_AP || sta->tdls)
++ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
++
++ return 0;
++
++unset_link:
++ rtw89_sta_unset_link(rtwsta, sta->deflink.link_id);
++err:
++ if (acquire_macid)
++ rtw89_release_mac_id(rtwdev, macid);
++
++ return ret;
++}
++
++static int __rtw89_ops_sta_assoc(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ bool station_mode)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++
++ if (station_mode)
++ rtw89_vif_type_mapping(rtwvif_link, true);
++
++ ret = rtw89_core_sta_link_assoc(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++ }
++
++ rtwdev->total_sta_assoc++;
++ if (sta->tdls)
++ rtwvif->tdls_peer++;
++
++ return 0;
++}
++
++static int __rtw89_ops_sta_disassoc(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_core_sta_link_disassoc(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++ }
++
++ rtwsta->disassoc = true;
++
++ rtwdev->total_sta_assoc--;
++ if (sta->tdls)
++ rtwvif->tdls_peer--;
++
++ return 0;
++}
++
++static int __rtw89_ops_sta_disconnect(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_core_free_sta_pending_ba(rtwdev, sta);
++ rtw89_core_free_sta_pending_forbid_ba(rtwdev, sta);
++ rtw89_core_free_sta_pending_roc_tx(rtwdev, sta);
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_core_sta_link_disconnect(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static int __rtw89_ops_sta_remove(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ u8 macid = rtw89_sta_get_main_macid(rtwsta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_core_sta_link_remove(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++
++ rtw89_sta_unset_link(rtwsta, link_id);
++ }
++
++ if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
++ rtw89_release_mac_id(rtwdev, macid);
++ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
++ }
++
++ return 0;
+ }
+
+ static void rtw89_station_mode_sta_assoc(struct rtw89_dev *rtwdev,
+@@ -412,16 +657,34 @@ static void rtw89_station_mode_sta_assoc(struct rtw89_dev *rtwdev,
+ return;
+ }
+
+- rtw89_vif_type_mapping(vif, true);
++ __rtw89_ops_sta_assoc(rtwdev, vif, sta, true);
++}
++
++static void __rtw89_ops_bss_link_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
++ rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link);
++ rtw89_mac_port_update(rtwdev, rtwvif_link);
++ rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, rtwvif_link);
++}
++
++static void __rtw89_ops_bss_assoc(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
+
+- rtw89_core_sta_assoc(rtwdev, vif, sta);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ __rtw89_ops_bss_link_assoc(rtwdev, rtwvif_link);
+ }
+
+ static void rtw89_ops_vif_cfg_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif, u64 changed)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+@@ -429,10 +692,7 @@ static void rtw89_ops_vif_cfg_changed(struct ieee80211_hw *hw,
+ if (changed & BSS_CHANGED_ASSOC) {
+ if (vif->cfg.assoc) {
+ rtw89_station_mode_sta_assoc(rtwdev, vif);
+- rtw89_phy_set_bss_color(rtwdev, vif);
+- rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, vif);
+- rtw89_mac_port_update(rtwdev, rtwvif);
+- rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, vif);
++ __rtw89_ops_bss_assoc(rtwdev, vif);
+
+ rtw89_queue_chanctx_work(rtwdev);
+ } else {
+@@ -459,39 +719,49 @@ static void rtw89_ops_link_info_changed(struct ieee80211_hw *hw,
+ u64 changed)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+
++ rtwvif_link = rtwvif->links[conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, conf->link_id);
++ goto out;
++ }
++
+ if (changed & BSS_CHANGED_BSSID) {
+- ether_addr_copy(rtwvif->bssid, conf->bssid);
+- rtw89_cam_bssid_changed(rtwdev, rtwvif);
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
+- WRITE_ONCE(rtwvif->sync_bcn_tsf, 0);
++ ether_addr_copy(rtwvif_link->bssid, conf->bssid);
++ rtw89_cam_bssid_changed(rtwdev, rtwvif_link);
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
++ WRITE_ONCE(rtwvif_link->sync_bcn_tsf, 0);
+ }
+
+ if (changed & BSS_CHANGED_BEACON)
+- rtw89_chip_h2c_update_beacon(rtwdev, rtwvif);
++ rtw89_chip_h2c_update_beacon(rtwdev, rtwvif_link);
+
+ if (changed & BSS_CHANGED_ERP_SLOT)
+- rtw89_conf_tx(rtwdev, rtwvif);
++ rtw89_conf_tx(rtwdev, rtwvif_link);
+
+ if (changed & BSS_CHANGED_HE_BSS_COLOR)
+- rtw89_phy_set_bss_color(rtwdev, vif);
++ rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
+
+ if (changed & BSS_CHANGED_MU_GROUPS)
+ rtw89_mac_bf_set_gid_table(rtwdev, vif, conf);
+
+ if (changed & BSS_CHANGED_P2P_PS)
+- rtw89_core_update_p2p_ps(rtwdev, vif);
++ rtw89_core_update_p2p_ps(rtwdev, rtwvif_link, conf);
+
+ if (changed & BSS_CHANGED_CQM)
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+
+ if (changed & BSS_CHANGED_TPE)
+- rtw89_reg_6ghz_recalc(rtwdev, rtwvif, true);
++ rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, true);
+
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -500,12 +770,21 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw,
+ struct ieee80211_bss_conf *link_conf)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+ const struct rtw89_chan *chan;
+
+ mutex_lock(&rtwdev->mutex);
+
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ goto out;
++ }
++
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
+ if (chan->band_type == RTW89_BAND_6G) {
+ mutex_unlock(&rtwdev->mutex);
+ return -EOPNOTSUPP;
+@@ -514,16 +793,18 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw,
+ if (rtwdev->scanning)
+ rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
+
+- ether_addr_copy(rtwvif->bssid, vif->bss_conf.bssid);
+- rtw89_cam_bssid_changed(rtwdev, rtwvif);
+- rtw89_mac_port_update(rtwdev, rtwvif);
+- rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, NULL);
+- rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_TYPE_CHANGE);
+- rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true);
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
+- rtw89_chip_rfk_channel(rtwdev, rtwvif);
++ ether_addr_copy(rtwvif_link->bssid, link_conf->bssid);
++ rtw89_cam_bssid_changed(rtwdev, rtwvif_link);
++ rtw89_mac_port_update(rtwdev, rtwvif_link);
++ rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, NULL);
++ rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_TYPE_CHANGE);
++ rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true);
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
++ rtw89_chip_rfk_channel(rtwdev, rtwvif_link);
+
+ rtw89_queue_chanctx_work(rtwdev);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+ return 0;
+@@ -534,12 +815,24 @@ void rtw89_ops_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_mac_stop_ap(rtwdev, rtwvif);
+- rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, NULL);
+- rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true);
++
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ goto out;
++ }
++
++ rtw89_mac_stop_ap(rtwdev, rtwvif_link);
++ rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, NULL);
++ rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -547,10 +840,13 @@ static int rtw89_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ bool set)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
+
+- ieee80211_queue_work(rtwdev->hw, &rtwvif->update_beacon_work);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ ieee80211_queue_work(rtwdev->hw, &rtwvif_link->update_beacon_work);
+
+ return 0;
+ }
+@@ -561,15 +857,29 @@ static int rtw89_ops_conf_tx(struct ieee80211_hw *hw,
+ const struct ieee80211_tx_queue_params *params)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ int ret = 0;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+- rtwvif->tx_params[ac] = *params;
+- __rtw89_conf_tx(rtwdev, rtwvif, ac);
++
++ rtwvif_link = rtwvif->links[link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_id);
++ ret = -ENOLINK;
++ goto out;
++ }
++
++ rtwvif_link->tx_params[ac] = *params;
++ __rtw89_conf_tx(rtwdev, rtwvif_link, ac);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+- return 0;
++ return ret;
+ }
+
+ static int __rtw89_ops_sta_state(struct ieee80211_hw *hw,
+@@ -582,26 +892,26 @@ static int __rtw89_ops_sta_state(struct ieee80211_hw *hw,
+
+ if (old_state == IEEE80211_STA_NOTEXIST &&
+ new_state == IEEE80211_STA_NONE)
+- return rtw89_core_sta_add(rtwdev, vif, sta);
++ return __rtw89_ops_sta_add(rtwdev, vif, sta);
+
+ if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_ASSOC) {
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+ return 0; /* defer to bss_info_changed to have vif info */
+- return rtw89_core_sta_assoc(rtwdev, vif, sta);
++ return __rtw89_ops_sta_assoc(rtwdev, vif, sta, false);
+ }
+
+ if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTH)
+- return rtw89_core_sta_disassoc(rtwdev, vif, sta);
++ return __rtw89_ops_sta_disassoc(rtwdev, vif, sta);
+
+ if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_NONE)
+- return rtw89_core_sta_disconnect(rtwdev, vif, sta);
++ return __rtw89_ops_sta_disconnect(rtwdev, vif, sta);
+
+ if (old_state == IEEE80211_STA_NONE &&
+ new_state == IEEE80211_STA_NOTEXIST)
+- return rtw89_core_sta_remove(rtwdev, vif, sta);
++ return __rtw89_ops_sta_remove(rtwdev, vif, sta);
+
+ return 0;
+ }
+@@ -667,7 +977,8 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw,
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+ struct ieee80211_sta *sta = params->sta;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ u16 tid = params->tid;
+ struct ieee80211_txq *txq = sta->txq[tid];
+ struct rtw89_txq *rtwtxq = (struct rtw89_txq *)txq->drv_priv;
+@@ -681,7 +992,7 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw,
+ mutex_lock(&rtwdev->mutex);
+ clear_bit(RTW89_TXQ_F_AMPDU, &rtwtxq->flags);
+ clear_bit(tid, rtwsta->ampdu_map);
+- rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, vif, sta);
++ rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, rtwvif, rtwsta);
+ mutex_unlock(&rtwdev->mutex);
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ break;
+@@ -692,7 +1003,7 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw,
+ rtwsta->ampdu_params[tid].amsdu = params->amsdu;
+ set_bit(tid, rtwsta->ampdu_map);
+ rtw89_leave_ps_mode(rtwdev);
+- rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, vif, sta);
++ rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, rtwvif, rtwsta);
+ mutex_unlock(&rtwdev->mutex);
+ break;
+ case IEEE80211_AMPDU_RX_START:
+@@ -731,9 +1042,14 @@ static void rtw89_ops_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
+
+- sinfo->txrate = rtwsta->ra_report.txrate;
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link))
++ return;
++
++ sinfo->txrate = rtwsta_link->ra_report.txrate;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+ }
+
+@@ -743,7 +1059,7 @@ void __rtw89_drop_packets(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ struct rtw89_vif *rtwvif;
+
+ if (vif) {
+- rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ rtwvif = vif_to_rtwvif(vif);
+ rtw89_mac_pkt_drop_vif(rtwdev, rtwvif);
+ } else {
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+@@ -777,14 +1093,20 @@ struct rtw89_iter_bitrate_mask_data {
+ static void rtw89_ra_mask_info_update_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_iter_bitrate_mask_data *br_data = data;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwsta->rtwvif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
+
+ if (vif != br_data->vif || vif->p2p)
+ return;
+
+- rtwsta->use_cfg_mask = true;
+- rtwsta->mask = *br_data->mask;
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwsta_link->use_cfg_mask = true;
++ rtwsta_link->mask = *br_data->mask;
++ }
++
+ rtw89_phy_ra_update_sta(br_data->rtwdev, sta, IEEE80211_RC_SUPP_RATES_CHANGED);
+ }
+
+@@ -854,10 +1176,20 @@ static void rtw89_ops_sw_scan_start(struct ieee80211_hw *hw,
+ const u8 *mac_addr)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, false);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "sw scan start: find no link on HW-0\n");
++ goto out;
++ }
++
++ rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, false);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -865,9 +1197,20 @@ static void rtw89_ops_sw_scan_complete(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_core_scan_complete(rtwdev, vif, false);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "sw scan complete: find no link on HW-0\n");
++ goto out;
++ }
++
++ rtw89_core_scan_complete(rtwdev, rtwvif_link, false);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -884,22 +1227,35 @@ static int rtw89_ops_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *req)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+- int ret = 0;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ int ret;
+
+ if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw))
+ return 1;
+
+- if (rtwdev->scanning || rtwvif->offchan)
+- return -EBUSY;
+-
+ mutex_lock(&rtwdev->mutex);
+- rtw89_hw_scan_start(rtwdev, vif, req);
+- ret = rtw89_hw_scan_offload(rtwdev, vif, true);
++
++ if (rtwdev->scanning || rtwvif->offchan) {
++ ret = -EBUSY;
++ goto out;
++ }
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "hw scan: find no link on HW-0\n");
++ ret = -ENOLINK;
++ goto out;
++ }
++
++ rtw89_hw_scan_start(rtwdev, rtwvif_link, req);
++ ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, true);
+ if (ret) {
+- rtw89_hw_scan_abort(rtwdev, vif);
++ rtw89_hw_scan_abort(rtwdev, rtwvif_link);
+ rtw89_err(rtwdev, "HW scan failed with status: %d\n", ret);
+ }
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+ return ret;
+@@ -909,6 +1265,8 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw))
+ return;
+@@ -917,7 +1275,16 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw,
+ return;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_hw_scan_abort(rtwdev, vif);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "cancel hw scan: find no link on HW-0\n");
++ goto out;
++ }
++
++ rtw89_hw_scan_abort(rtwdev, rtwvif_link);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -970,11 +1337,24 @@ static int rtw89_ops_assign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+ int ret;
+
+ mutex_lock(&rtwdev->mutex);
+- ret = rtw89_chanctx_ops_assign_vif(rtwdev, rtwvif, ctx);
++
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ ret = -ENOLINK;
++ goto out;
++ }
++
++ ret = rtw89_chanctx_ops_assign_vif(rtwdev, rtwvif_link, ctx);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+ return ret;
+@@ -986,10 +1366,21 @@ static void rtw89_ops_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_chanctx_ops_unassign_vif(rtwdev, rtwvif, ctx);
++
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ mutex_unlock(&rtwdev->mutex);
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ return;
++ }
++
++ rtw89_chanctx_ops_unassign_vif(rtwdev, rtwvif_link, ctx);
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -1003,7 +1394,7 @@ static int rtw89_ops_remain_on_channel(struct ieee80211_hw *hw,
+ struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+ struct rtw89_roc *roc = &rtwvif->roc;
+
+- if (!vif)
++ if (!rtwvif)
+ return -EINVAL;
+
+ mutex_lock(&rtwdev->mutex);
+@@ -1053,8 +1444,8 @@ static int rtw89_ops_cancel_remain_on_channel(struct ieee80211_hw *hw,
+ static void rtw89_set_tid_config_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct cfg80211_tid_config *tid_config = data;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_dev *rtwdev = rtwsta->rtwvif->rtwdev;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+
+ rtw89_core_set_tid_config(rtwdev, sta, tid_config);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/mac_be.c b/drivers/net/wireless/realtek/rtw89/mac_be.c
+index 31f0a5225b115e..f22eaa83297fb4 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac_be.c
++++ b/drivers/net/wireless/realtek/rtw89/mac_be.c
+@@ -2091,13 +2091,13 @@ static int rtw89_mac_init_bfee_be(struct rtw89_dev *rtwdev, u8 mac_idx)
+ }
+
+ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1;
+- u8 mac_idx = rtwvif->mac_idx;
+- u8 port_sel = rtwvif->port;
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
++ u8 port_sel = rtwvif_link->port;
+ u8 sound_dim = 3, t;
+ u8 *phy_cap;
+ u32 reg;
+@@ -2108,7 +2108,10 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ if (ret)
+ return ret;
+
+- phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
+
+ if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+ (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) {
+@@ -2119,11 +2122,11 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ sound_dim = min(sound_dim, t);
+ }
+
+- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
+- ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
+- stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
+- t = u32_get_bits(sta->deflink.vht_cap.cap,
++ if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
++ ldpc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
++ stbc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
++ t = u32_get_bits(link_sta->vht_cap.cap,
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK);
+ sound_dim = min(sound_dim, t);
+ }
+@@ -2131,6 +2134,8 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ nc = min(nc, sound_dim);
+ nr = min(nr, sound_dim);
+
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
+
+@@ -2155,12 +2160,12 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M);
+- u8 mac_idx = rtwvif->mac_idx;
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
+ int ret;
+ u32 reg;
+
+@@ -2168,22 +2173,28 @@ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+ if (ret)
+ return ret;
+
+- if (sta->deflink.he_cap.has_he) {
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->he_cap.has_he) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC5));
+ }
+- if (sta->deflink.vht_cap.vht_supported) {
++ if (link_sta->vht_cap.vht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC5));
+ }
+- if (sta->deflink.ht_cap.ht_supported) {
++ if (link_sta->ht_cap.ht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC5));
+ }
+
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
+ rtw89_write32_clr(rtwdev, reg, B_BE_BFMEE_CSI_FORCE_RETE_EN);
+@@ -2195,17 +2206,25 @@ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_bf_assoc_be(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct ieee80211_link_sta *link_sta;
++ bool has_beamformer_cap;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ has_beamformer_cap = rtw89_sta_has_beamformer_cap(link_sta);
++
++ rcu_read_unlock();
+
+- if (rtw89_sta_has_beamformer_cap(sta)) {
++ if (has_beamformer_cap) {
+ rtw89_debug(rtwdev, RTW89_DBG_BF,
+ "initialize bfee for new association\n");
+- rtw89_mac_init_bfee_be(rtwdev, rtwvif->mac_idx);
+- rtw89_mac_set_csi_para_reg_be(rtwdev, vif, sta);
+- rtw89_mac_csi_rrsc_be(rtwdev, vif, sta);
++ rtw89_mac_init_bfee_be(rtwdev, rtwvif_link->mac_idx);
++ rtw89_mac_set_csi_para_reg_be(rtwdev, rtwvif_link, rtwsta_link);
++ rtw89_mac_csi_rrsc_be(rtwdev, rtwvif_link, rtwsta_link);
+ }
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index c7165e757842be..4b47b45f897cbc 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -75,12 +75,12 @@ static u64 get_mcs_ra_mask(u16 mcs_map, u8 highest_mcs, u8 gap)
+ return ra_mask;
+ }
+
+-static u64 get_he_ra_mask(struct ieee80211_sta *sta)
++static u64 get_he_ra_mask(struct ieee80211_link_sta *link_sta)
+ {
+- struct ieee80211_sta_he_cap cap = sta->deflink.he_cap;
++ struct ieee80211_sta_he_cap cap = link_sta->he_cap;
+ u16 mcs_map;
+
+- switch (sta->deflink.bandwidth) {
++ switch (link_sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ if (cap.he_cap_elem.phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)
+@@ -118,14 +118,14 @@ static u64 get_eht_mcs_ra_mask(u8 *max_nss, u8 start_mcs, u8 n_nss)
+ return mask;
+ }
+
+-static u64 get_eht_ra_mask(struct ieee80211_sta *sta)
++static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta)
+ {
+- struct ieee80211_sta_eht_cap *eht_cap = &sta->deflink.eht_cap;
++ struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap;
+ struct ieee80211_eht_mcs_nss_supp_20mhz_only *mcs_nss_20mhz;
+ struct ieee80211_eht_mcs_nss_supp_bw *mcs_nss;
+- u8 *he_phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
++ u8 *he_phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
+
+- switch (sta->deflink.bandwidth) {
++ switch (link_sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_320:
+ mcs_nss = &eht_cap->eht_mcs_nss_supp.bw._320;
+ /* MCS 9, 11, 13 */
+@@ -195,15 +195,16 @@ static u64 rtw89_phy_ra_mask_recover(u64 ra_mask, u64 ra_mask_bak)
+ return ra_mask;
+ }
+
+-static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_link_sta *link_sta,
+ const struct rtw89_chan *chan)
+ {
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
+- struct cfg80211_bitrate_mask *mask = &rtwsta->mask;
++ struct cfg80211_bitrate_mask *mask = &rtwsta_link->mask;
+ enum nl80211_band band;
+ u64 cfg_mask;
+
+- if (!rtwsta->use_cfg_mask)
++ if (!rtwsta_link->use_cfg_mask)
+ return -1;
+
+ switch (chan->band_type) {
+@@ -227,17 +228,17 @@ static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, struct rtw89_sta *rtw
+ return -1;
+ }
+
+- if (sta->deflink.he_cap.has_he) {
++ if (link_sta->he_cap.has_he) {
+ cfg_mask |= u64_encode_bits(mask->control[band].he_mcs[0],
+ RA_MASK_HE_1SS_RATES);
+ cfg_mask |= u64_encode_bits(mask->control[band].he_mcs[1],
+ RA_MASK_HE_2SS_RATES);
+- } else if (sta->deflink.vht_cap.vht_supported) {
++ } else if (link_sta->vht_cap.vht_supported) {
+ cfg_mask |= u64_encode_bits(mask->control[band].vht_mcs[0],
+ RA_MASK_VHT_1SS_RATES);
+ cfg_mask |= u64_encode_bits(mask->control[band].vht_mcs[1],
+ RA_MASK_VHT_2SS_RATES);
+- } else if (sta->deflink.ht_cap.ht_supported) {
++ } else if (link_sta->ht_cap.ht_supported) {
+ cfg_mask |= u64_encode_bits(mask->control[band].ht_mcs[0],
+ RA_MASK_HT_1SS_RATES);
+ cfg_mask |= u64_encode_bits(mask->control[band].ht_mcs[1],
+@@ -261,17 +262,17 @@ rtw89_ra_mask_eht_rates[4] = {RA_MASK_EHT_1SS_RATES, RA_MASK_EHT_2SS_RATES,
+ RA_MASK_EHT_3SS_RATES, RA_MASK_EHT_4SS_RATES};
+
+ static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_sta_link *rtwsta_link,
+ const struct rtw89_chan *chan,
+ bool *fix_giltf_en, u8 *fix_giltf)
+ {
+- struct cfg80211_bitrate_mask *mask = &rtwsta->mask;
++ struct cfg80211_bitrate_mask *mask = &rtwsta_link->mask;
+ u8 band = chan->band_type;
+ enum nl80211_band nl_band = rtw89_hw_to_nl80211_band(band);
+ u8 he_gi = mask->control[nl_band].he_gi;
+ u8 he_ltf = mask->control[nl_band].he_ltf;
+
+- if (!rtwsta->use_cfg_mask)
++ if (!rtwsta_link->use_cfg_mask)
+ return;
+
+ if (he_ltf == 2 && he_gi == 2) {
+@@ -295,17 +296,17 @@ static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, bool csi)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_link_sta *link_sta,
++ bool p2p, bool csi)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+- struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
+- struct rtw89_ra_info *ra = &rtwsta->ra;
++ struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif_link->rate_pattern;
++ struct rtw89_ra_info *ra = &rtwsta_link->ra;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwsta->rtwvif);
++ rtwvif_link->chanctx_idx);
+ const u64 *high_rate_masks = rtw89_ra_mask_ht_rates;
+- u8 rssi = ewma_rssi_read(&rtwsta->avg_rssi);
++ u8 rssi = ewma_rssi_read(&rtwsta_link->avg_rssi);
+ u64 ra_mask = 0;
+ u64 ra_mask_bak;
+ u8 mode = 0;
+@@ -320,65 +321,65 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+
+ memset(ra, 0, sizeof(*ra));
+ /* Set the ra mask from sta's capability */
+- if (sta->deflink.eht_cap.has_eht) {
++ if (link_sta->eht_cap.has_eht) {
+ mode |= RTW89_RA_MODE_EHT;
+- ra_mask |= get_eht_ra_mask(sta);
++ ra_mask |= get_eht_ra_mask(link_sta);
+ high_rate_masks = rtw89_ra_mask_eht_rates;
+- } else if (sta->deflink.he_cap.has_he) {
++ } else if (link_sta->he_cap.has_he) {
+ mode |= RTW89_RA_MODE_HE;
+ csi_mode = RTW89_RA_RPT_MODE_HE;
+- ra_mask |= get_he_ra_mask(sta);
++ ra_mask |= get_he_ra_mask(link_sta);
+ high_rate_masks = rtw89_ra_mask_he_rates;
+- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[2] &
++ if (link_sta->he_cap.he_cap_elem.phy_cap_info[2] &
+ IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
+ stbc_en = 1;
+- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[1] &
++ if (link_sta->he_cap.he_cap_elem.phy_cap_info[1] &
+ IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD)
+ ldpc_en = 1;
+- rtw89_phy_ra_gi_ltf(rtwdev, rtwsta, chan, &fix_giltf_en, &fix_giltf);
+- } else if (sta->deflink.vht_cap.vht_supported) {
+- u16 mcs_map = le16_to_cpu(sta->deflink.vht_cap.vht_mcs.rx_mcs_map);
++ rtw89_phy_ra_gi_ltf(rtwdev, rtwsta_link, chan, &fix_giltf_en, &fix_giltf);
++ } else if (link_sta->vht_cap.vht_supported) {
++ u16 mcs_map = le16_to_cpu(link_sta->vht_cap.vht_mcs.rx_mcs_map);
+
+ mode |= RTW89_RA_MODE_VHT;
+ csi_mode = RTW89_RA_RPT_MODE_VHT;
+ /* MCS9 (non-20MHz), MCS8, MCS7 */
+- if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
++ if (link_sta->bandwidth == IEEE80211_STA_RX_BW_20)
+ ra_mask |= get_mcs_ra_mask(mcs_map, 8, 1);
+ else
+ ra_mask |= get_mcs_ra_mask(mcs_map, 9, 1);
+ high_rate_masks = rtw89_ra_mask_vht_rates;
+- if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK)
++ if (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK)
+ stbc_en = 1;
+- if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
++ if (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
+ ldpc_en = 1;
+- } else if (sta->deflink.ht_cap.ht_supported) {
++ } else if (link_sta->ht_cap.ht_supported) {
+ mode |= RTW89_RA_MODE_HT;
+ csi_mode = RTW89_RA_RPT_MODE_HT;
+- ra_mask |= ((u64)sta->deflink.ht_cap.mcs.rx_mask[3] << 48) |
+- ((u64)sta->deflink.ht_cap.mcs.rx_mask[2] << 36) |
+- ((u64)sta->deflink.ht_cap.mcs.rx_mask[1] << 24) |
+- ((u64)sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
++ ra_mask |= ((u64)link_sta->ht_cap.mcs.rx_mask[3] << 48) |
++ ((u64)link_sta->ht_cap.mcs.rx_mask[2] << 36) |
++ ((u64)link_sta->ht_cap.mcs.rx_mask[1] << 24) |
++ ((u64)link_sta->ht_cap.mcs.rx_mask[0] << 12);
+ high_rate_masks = rtw89_ra_mask_ht_rates;
+- if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
++ if (link_sta->ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
+ stbc_en = 1;
+- if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING)
++ if (link_sta->ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING)
+ ldpc_en = 1;
+ }
+
+ switch (chan->band_type) {
+ case RTW89_BAND_2G:
+- ra_mask |= sta->deflink.supp_rates[NL80211_BAND_2GHZ];
+- if (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0xf)
++ ra_mask |= link_sta->supp_rates[NL80211_BAND_2GHZ];
++ if (link_sta->supp_rates[NL80211_BAND_2GHZ] & 0xf)
+ mode |= RTW89_RA_MODE_CCK;
+- if (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0xff0)
++ if (link_sta->supp_rates[NL80211_BAND_2GHZ] & 0xff0)
+ mode |= RTW89_RA_MODE_OFDM;
+ break;
+ case RTW89_BAND_5G:
+- ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_5GHZ] << 4;
++ ra_mask |= (u64)link_sta->supp_rates[NL80211_BAND_5GHZ] << 4;
+ mode |= RTW89_RA_MODE_OFDM;
+ break;
+ case RTW89_BAND_6G:
+- ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_6GHZ] << 4;
++ ra_mask |= (u64)link_sta->supp_rates[NL80211_BAND_6GHZ] << 4;
+ mode |= RTW89_RA_MODE_OFDM;
+ break;
+ default:
+@@ -405,48 +406,48 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ ra_mask &= rtw89_phy_ra_mask_rssi(rtwdev, rssi, 0);
+
+ ra_mask = rtw89_phy_ra_mask_recover(ra_mask, ra_mask_bak);
+- ra_mask &= rtw89_phy_ra_mask_cfg(rtwdev, rtwsta, chan);
++ ra_mask &= rtw89_phy_ra_mask_cfg(rtwdev, rtwsta_link, link_sta, chan);
+
+- switch (sta->deflink.bandwidth) {
++ switch (link_sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ bw_mode = RTW89_CHANNEL_WIDTH_160;
+- sgi = sta->deflink.vht_cap.vht_supported &&
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160);
++ sgi = link_sta->vht_cap.vht_supported &&
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160);
+ break;
+ case IEEE80211_STA_RX_BW_80:
+ bw_mode = RTW89_CHANNEL_WIDTH_80;
+- sgi = sta->deflink.vht_cap.vht_supported &&
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
++ sgi = link_sta->vht_cap.vht_supported &&
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ bw_mode = RTW89_CHANNEL_WIDTH_40;
+- sgi = sta->deflink.ht_cap.ht_supported &&
+- (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_40);
++ sgi = link_sta->ht_cap.ht_supported &&
++ (link_sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40);
+ break;
+ default:
+ bw_mode = RTW89_CHANNEL_WIDTH_20;
+- sgi = sta->deflink.ht_cap.ht_supported &&
+- (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_20);
++ sgi = link_sta->ht_cap.ht_supported &&
++ (link_sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20);
+ break;
+ }
+
+- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[3] &
++ if (link_sta->he_cap.he_cap_elem.phy_cap_info[3] &
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_16_QAM)
+ ra->dcm_cap = 1;
+
+- if (rate_pattern->enable && !vif->p2p) {
+- ra_mask = rtw89_phy_ra_mask_cfg(rtwdev, rtwsta, chan);
++ if (rate_pattern->enable && !p2p) {
++ ra_mask = rtw89_phy_ra_mask_cfg(rtwdev, rtwsta_link, link_sta, chan);
+ ra_mask &= rate_pattern->ra_mask;
+ mode = rate_pattern->ra_mode;
+ }
+
+ ra->bw_cap = bw_mode;
+- ra->er_cap = rtwsta->er_cap;
++ ra->er_cap = rtwsta_link->er_cap;
+ ra->mode_ctrl = mode;
+- ra->macid = rtwsta->mac_id;
++ ra->macid = rtwsta_link->mac_id;
+ ra->stbc_cap = stbc_en;
+ ra->ldpc_cap = ldpc_en;
+- ra->ss_num = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1;
++ ra->ss_num = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1;
+ ra->en_sgi = sgi;
+ ra->ra_mask = ra_mask;
+ ra->fix_giltf_en = fix_giltf_en;
+@@ -458,20 +459,29 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ ra->fixed_csi_rate_en = false;
+ ra->ra_csi_rate_en = true;
+ ra->cr_tbl_sel = false;
+- ra->band_num = rtwvif->phy_idx;
++ ra->band_num = rtwvif_link->phy_idx;
+ ra->csi_bw = bw_mode;
+ ra->csi_gi_ltf = RTW89_GILTF_LGI_4XHE32;
+ ra->csi_mcs_ss_idx = 5;
+ ra->csi_mode = csi_mode;
+ }
+
+-void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
+- u32 changed)
++static void __rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ u32 changed)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_ra_info *ra = &rtwsta->ra;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_ra_info *ra = &rtwsta_link->ra;
++ struct ieee80211_link_sta *link_sta;
+
+- rtw89_phy_ra_sta_update(rtwdev, sta, false);
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ rtw89_phy_ra_sta_update(rtwdev, rtwvif_link, rtwsta_link,
++ link_sta, vif->p2p, false);
++
++ rcu_read_unlock();
+
+ if (changed & IEEE80211_RC_SUPP_RATES_CHANGED)
+ ra->upd_mask = 1;
+@@ -489,6 +499,20 @@ void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta
+ rtw89_fw_h2c_ra(rtwdev, ra, false);
+ }
+
++void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
++ u32 changed)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ __rtw89_phy_ra_update_sta(rtwdev, rtwvif_link, rtwsta_link, changed);
++ }
++}
++
+ static bool __check_rate_pattern(struct rtw89_phy_rate_pattern *next,
+ u16 rate_base, u64 ra_mask, u8 ra_mode,
+ u32 rate_ctrl, u32 ctrl_skip, bool force)
+@@ -523,15 +547,15 @@ static bool __check_rate_pattern(struct rtw89_phy_rate_pattern *next,
+ [RTW89_CHIP_BE] = RTW89_HW_RATE_V1_ ## rate, \
+ }
+
+-void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- const struct cfg80211_bitrate_mask *mask)
++static
++void __rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ const struct cfg80211_bitrate_mask *mask)
+ {
+ struct ieee80211_supported_band *sband;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct rtw89_phy_rate_pattern next_pattern = {0};
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ static const u16 hw_rate_he[][RTW89_CHIP_GEN_NUM] = {
+ RTW89_HW_RATE_BY_CHIP_GEN(HE_NSS1_MCS0),
+ RTW89_HW_RATE_BY_CHIP_GEN(HE_NSS2_MCS0),
+@@ -600,7 +624,7 @@ void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
+ if (!next_pattern.enable)
+ goto out;
+
+- rtwvif->rate_pattern = next_pattern;
++ rtwvif_link->rate_pattern = next_pattern;
+ rtw89_debug(rtwdev, RTW89_DBG_RA,
+ "configure pattern: rate 0x%x, mask 0x%llx, mode 0x%x\n",
+ next_pattern.rate,
+@@ -609,10 +633,22 @@ void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
+ return;
+
+ out:
+- rtwvif->rate_pattern.enable = false;
++ rtwvif_link->rate_pattern.enable = false;
+ rtw89_debug(rtwdev, RTW89_DBG_RA, "unset rate pattern\n");
+ }
+
++void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ const struct cfg80211_bitrate_mask *mask)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ __rtw89_phy_rate_pattern_vif(rtwdev, rtwvif_link, mask);
++}
++
+ static void rtw89_phy_ra_update_sta_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_dev *rtwdev = (struct rtw89_dev *)data;
+@@ -627,14 +663,24 @@ void rtw89_phy_ra_update(struct rtw89_dev *rtwdev)
+ rtwdev);
+ }
+
+-void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta)
++void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_ra_info *ra = &rtwsta->ra;
+- u8 rssi = ewma_rssi_read(&rtwsta->avg_rssi) >> RSSI_FACTOR;
+- bool csi = rtw89_sta_has_beamformer_cap(sta);
++ struct rtw89_vif_link *rtwvif_link = rtwsta_link->rtwvif_link;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_ra_info *ra = &rtwsta_link->ra;
++ u8 rssi = ewma_rssi_read(&rtwsta_link->avg_rssi) >> RSSI_FACTOR;
++ struct ieee80211_link_sta *link_sta;
++ bool csi;
++
++ rcu_read_lock();
+
+- rtw89_phy_ra_sta_update(rtwdev, sta, csi);
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ csi = rtw89_sta_has_beamformer_cap(link_sta);
++
++ rtw89_phy_ra_sta_update(rtwdev, rtwvif_link, rtwsta_link,
++ link_sta, vif->p2p, csi);
++
++ rcu_read_unlock();
+
+ if (rssi > 40)
+ ra->init_rate_lv = 1;
+@@ -2553,14 +2599,14 @@ struct rtw89_phy_iter_ra_data {
+ struct sk_buff *c2h;
+ };
+
+-static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
++static void __rtw89_phy_c2h_ra_rpt_iter(struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_link_sta *link_sta,
++ struct rtw89_phy_iter_ra_data *ra_data)
+ {
+- struct rtw89_phy_iter_ra_data *ra_data = (struct rtw89_phy_iter_ra_data *)data;
+ struct rtw89_dev *rtwdev = ra_data->rtwdev;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ const struct rtw89_c2h_ra_rpt *c2h =
+ (const struct rtw89_c2h_ra_rpt *)ra_data->c2h->data;
+- struct rtw89_ra_report *ra_report = &rtwsta->ra_report;
++ struct rtw89_ra_report *ra_report = &rtwsta_link->ra_report;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ bool format_v1 = chip->chip_gen == RTW89_CHIP_BE;
+ u8 mode, rate, bw, giltf, mac_id;
+@@ -2570,7 +2616,7 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
+ u8 t;
+
+ mac_id = le32_get_bits(c2h->w2, RTW89_C2H_RA_RPT_W2_MACID);
+- if (mac_id != rtwsta->mac_id)
++ if (mac_id != rtwsta_link->mac_id)
+ return;
+
+ rate = le32_get_bits(c2h->w3, RTW89_C2H_RA_RPT_W3_MCSNSS);
+@@ -2661,8 +2707,26 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
+ u16_encode_bits(mode, RTW89_HW_RATE_MASK_MOD) |
+ u16_encode_bits(rate, RTW89_HW_RATE_MASK_VAL);
+ ra_report->might_fallback_legacy = mcs <= 2;
+- sta->deflink.agg.max_rc_amsdu_len = get_max_amsdu_len(rtwdev, ra_report);
+- rtwsta->max_agg_wait = sta->deflink.agg.max_rc_amsdu_len / 1500 - 1;
++ link_sta->agg.max_rc_amsdu_len = get_max_amsdu_len(rtwdev, ra_report);
++ rtwsta_link->max_agg_wait = link_sta->agg.max_rc_amsdu_len / 1500 - 1;
++}
++
++static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_phy_iter_ra_data *ra_data = (struct rtw89_phy_iter_ra_data *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ struct ieee80211_link_sta *link_sta;
++ unsigned int link_id;
++
++ rcu_read_lock();
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ __rtw89_phy_c2h_ra_rpt_iter(rtwsta_link, link_sta, ra_data);
++ }
++
++ rcu_read_unlock();
+ }
+
+ static void
+@@ -4290,33 +4354,33 @@ void rtw89_phy_cfo_parse(struct rtw89_dev *rtwdev, s16 cfo_val,
+ cfo->packet_count++;
+ }
+
+-void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ struct rtw89_phy_ul_tb_info *ul_tb_info = &rtwdev->ul_tb_info;
+
+ if (!chip->ul_tb_waveform_ctrl)
+ return;
+
+- rtwvif->def_tri_idx =
++ rtwvif_link->def_tri_idx =
+ rtw89_phy_read32_mask(rtwdev, R_DCFO_OPT, B_TXSHAPE_TRIANGULAR_CFG);
+
+ if (chip->chip_id == RTL8852B && rtwdev->hal.cv > CHIP_CBV)
+- rtwvif->dyn_tb_bedge_en = false;
++ rtwvif_link->dyn_tb_bedge_en = false;
+ else if (chan->band_type >= RTW89_BAND_5G &&
+ chan->band_width >= RTW89_CHANNEL_WIDTH_40)
+- rtwvif->dyn_tb_bedge_en = true;
++ rtwvif_link->dyn_tb_bedge_en = true;
+ else
+- rtwvif->dyn_tb_bedge_en = false;
++ rtwvif_link->dyn_tb_bedge_en = false;
+
+ rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
+ "[ULTB] def_if_bandedge=%d, def_tri_idx=%d\n",
+- ul_tb_info->def_if_bandedge, rtwvif->def_tri_idx);
++ ul_tb_info->def_if_bandedge, rtwvif_link->def_tri_idx);
+ rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
+ "[ULTB] dyn_tb_begde_en=%d, dyn_tb_tri_en=%d\n",
+- rtwvif->dyn_tb_bedge_en, ul_tb_info->dyn_tb_tri_en);
++ rtwvif_link->dyn_tb_bedge_en, ul_tb_info->dyn_tb_tri_en);
+ }
+
+ struct rtw89_phy_ul_tb_check_data {
+@@ -4338,7 +4402,7 @@ struct rtw89_phy_power_diff {
+ };
+
+ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ static const struct rtw89_phy_power_diff table[2] = {
+ {0x0, 0x0, 0x0, 0x0, 0xf4, 0x3, 0x3},
+@@ -4350,13 +4414,13 @@ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+ if (!rtwdev->chip->ul_tb_pwr_diff)
+ return;
+
+- if (rtwvif->pwr_diff_en == rtwvif->pre_pwr_diff_en) {
+- rtwvif->pwr_diff_en = false;
++ if (rtwvif_link->pwr_diff_en == rtwvif_link->pre_pwr_diff_en) {
++ rtwvif_link->pwr_diff_en = false;
+ return;
+ }
+
+- rtwvif->pre_pwr_diff_en = rtwvif->pwr_diff_en;
+- param = &table[rtwvif->pwr_diff_en];
++ rtwvif_link->pre_pwr_diff_en = rtwvif_link->pwr_diff_en;
++ param = &table[rtwvif_link->pwr_diff_en];
+
+ rtw89_phy_write32_mask(rtwdev, R_Q_MATRIX_00, B_Q_MATRIX_00_REAL,
+ param->q_00);
+@@ -4365,32 +4429,32 @@ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+ rtw89_phy_write32_mask(rtwdev, R_CUSTOMIZE_Q_MATRIX,
+ B_CUSTOMIZE_Q_MATRIX_EN, param->q_matrix_en);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_1T_NORM_BW160,
+ param->ultb_1t_norm_160);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_2T_NORM_BW160,
+ param->ultb_2t_norm_160);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM1_NORM_1STS,
+ param->com1_norm_1sts);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM2_RESP_1STS_PATH,
+ param->com2_resp_1sts_path);
+ }
+
+ static
+ void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_phy_ul_tb_check_data *ul_tb_data)
+ {
+ struct rtw89_traffic_stats *stats = &rtwdev->stats;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION)
+ return;
+
+ if (!vif->cfg.assoc)
+@@ -4403,11 +4467,11 @@ void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev,
+ ul_tb_data->low_tf_client = true;
+
+ ul_tb_data->valid = true;
+- ul_tb_data->def_tri_idx = rtwvif->def_tri_idx;
+- ul_tb_data->dyn_tb_bedge_en = rtwvif->dyn_tb_bedge_en;
++ ul_tb_data->def_tri_idx = rtwvif_link->def_tri_idx;
++ ul_tb_data->dyn_tb_bedge_en = rtwvif_link->dyn_tb_bedge_en;
+ }
+
+- rtw89_phy_ofdma_power_diff(rtwdev, rtwvif);
++ rtw89_phy_ofdma_power_diff(rtwdev, rtwvif_link);
+ }
+
+ static void rtw89_phy_ul_tb_waveform_ctrl(struct rtw89_dev *rtwdev,
+@@ -4453,7 +4517,9 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_phy_ul_tb_check_data ul_tb_data = {};
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ if (!chip->ul_tb_waveform_ctrl && !chip->ul_tb_pwr_diff)
+ return;
+@@ -4462,7 +4528,8 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif, &ul_tb_data);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif_link, &ul_tb_data);
+
+ if (!ul_tb_data.valid)
+ return;
+@@ -4626,30 +4693,42 @@ struct rtw89_phy_iter_rssi_data {
+ bool rssi_changed;
+ };
+
+-static void rtw89_phy_stat_rssi_update_iter(void *data,
+- struct ieee80211_sta *sta)
++static
++void __rtw89_phy_stat_rssi_update_iter(struct rtw89_sta_link *rtwsta_link,
++ struct rtw89_phy_iter_rssi_data *rssi_data)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_phy_iter_rssi_data *rssi_data =
+- (struct rtw89_phy_iter_rssi_data *)data;
+ struct rtw89_phy_ch_info *ch_info = rssi_data->ch_info;
+ unsigned long rssi_curr;
+
+- rssi_curr = ewma_rssi_read(&rtwsta->avg_rssi);
++ rssi_curr = ewma_rssi_read(&rtwsta_link->avg_rssi);
+
+ if (rssi_curr < ch_info->rssi_min) {
+ ch_info->rssi_min = rssi_curr;
+- ch_info->rssi_min_macid = rtwsta->mac_id;
++ ch_info->rssi_min_macid = rtwsta_link->mac_id;
+ }
+
+- if (rtwsta->prev_rssi == 0) {
+- rtwsta->prev_rssi = rssi_curr;
+- } else if (abs((int)rtwsta->prev_rssi - (int)rssi_curr) > (3 << RSSI_FACTOR)) {
+- rtwsta->prev_rssi = rssi_curr;
++ if (rtwsta_link->prev_rssi == 0) {
++ rtwsta_link->prev_rssi = rssi_curr;
++ } else if (abs((int)rtwsta_link->prev_rssi - (int)rssi_curr) >
++ (3 << RSSI_FACTOR)) {
++ rtwsta_link->prev_rssi = rssi_curr;
+ rssi_data->rssi_changed = true;
+ }
+ }
+
++static void rtw89_phy_stat_rssi_update_iter(void *data,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_phy_iter_rssi_data *rssi_data =
++ (struct rtw89_phy_iter_rssi_data *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
++ __rtw89_phy_stat_rssi_update_iter(rtwsta_link, rssi_data);
++}
++
+ static void rtw89_phy_stat_rssi_update(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_phy_iter_rssi_data rssi_data = {0};
+@@ -5753,26 +5832,15 @@ void rtw89_phy_dig(struct rtw89_dev *rtwdev)
+ rtw89_phy_dig_sdagc_follow_pagc_config(rtwdev, false);
+ }
+
+-static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta)
++static void __rtw89_phy_tx_path_div_sta_iter(struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_hal *hal = &rtwdev->hal;
+- bool *done = data;
+ u8 rssi_a, rssi_b;
+ u32 candidate;
+
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION || sta->tdls)
+- return;
+-
+- if (*done)
+- return;
+-
+- *done = true;
+-
+- rssi_a = ewma_rssi_read(&rtwsta->rssi[RF_PATH_A]);
+- rssi_b = ewma_rssi_read(&rtwsta->rssi[RF_PATH_B]);
++ rssi_a = ewma_rssi_read(&rtwsta_link->rssi[RF_PATH_A]);
++ rssi_b = ewma_rssi_read(&rtwsta_link->rssi[RF_PATH_B]);
+
+ if (rssi_a > rssi_b + RTW89_TX_DIV_RSSI_RAW_TH)
+ candidate = RF_A;
+@@ -5785,7 +5853,7 @@ static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta
+ return;
+
+ hal->antenna_tx = candidate;
+- rtw89_fw_h2c_txpath_cmac_tbl(rtwdev, rtwsta);
++ rtw89_fw_h2c_txpath_cmac_tbl(rtwdev, rtwsta_link);
+
+ if (hal->antenna_tx == RF_A) {
+ rtw89_phy_write32_mask(rtwdev, R_P0_RFMODE, B_P0_RFMODE_MUX, 0x12);
+@@ -5796,6 +5864,37 @@ static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta
+ }
+ }
+
++static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ bool *done = data;
++
++ if (WARN(ieee80211_vif_is_mld(vif), "MLD mix path_div\n"))
++ return;
++
++ if (sta->tdls)
++ return;
++
++ if (*done)
++ return;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION)
++ continue;
++
++ *done = true;
++ __rtw89_phy_tx_path_div_sta_iter(rtwdev, rtwsta_link);
++ return;
++ }
++}
++
+ void rtw89_phy_tx_path_div_track(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+@@ -6002,17 +6101,27 @@ void rtw89_phy_dm_init(struct rtw89_dev *rtwdev)
+ rtw89_chip_cfg_txrx_path(rtwdev);
+ }
+
+-void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_reg_def *bss_clr_vld = &chip->bss_clr_vld;
+ enum rtw89_phy_idx phy_idx = RTW89_PHY_0;
++ struct ieee80211_bss_conf *bss_conf;
+ u8 bss_color;
+
+- if (!vif->bss_conf.he_support || !vif->cfg.assoc)
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ if (!bss_conf->he_support || !vif->cfg.assoc) {
++ rcu_read_unlock();
+ return;
++ }
++
++ bss_color = bss_conf->he_bss_color.color;
+
+- bss_color = vif->bss_conf.he_bss_color.color;
++ rcu_read_unlock();
+
+ rtw89_phy_write32_idx(rtwdev, bss_clr_vld->addr, bss_clr_vld->mask, 0x1,
+ phy_idx);
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.h b/drivers/net/wireless/realtek/rtw89/phy.h
+index 6dd8ec46939acd..7e335c02ee6fbf 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.h
++++ b/drivers/net/wireless/realtek/rtw89/phy.h
+@@ -892,7 +892,7 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
+ phy->set_txpwr_limit_ru(rtwdev, chan, phy_idx);
+ }
+
+-void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta);
++void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link);
+ void rtw89_phy_ra_update(struct rtw89_dev *rtwdev);
+ void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
+ u32 changed);
+@@ -953,11 +953,12 @@ void rtw89_phy_antdiv_parse(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_phy_ppdu *phy_ppdu);
+ void rtw89_phy_antdiv_track(struct rtw89_dev *rtwdev);
+ void rtw89_phy_antdiv_work(struct work_struct *work);
+-void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ void rtw89_phy_tssi_ctrl_set_bandedge_cfg(struct rtw89_dev *rtwdev,
+ enum rtw89_mac_idx mac_idx,
+ enum rtw89_tssi_bandedge_cfg bandedge_cfg);
+-void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev);
+ u8 rtw89_encode_chan_idx(struct rtw89_dev *rtwdev, u8 central_ch, u8 band);
+ void rtw89_decode_chan_idx(struct rtw89_dev *rtwdev, u8 chan_idx,
+diff --git a/drivers/net/wireless/realtek/rtw89/ps.c b/drivers/net/wireless/realtek/rtw89/ps.c
+index aebd6404f80250..c1c12abc2ea93a 100644
+--- a/drivers/net/wireless/realtek/rtw89/ps.c
++++ b/drivers/net/wireless/realtek/rtw89/ps.c
+@@ -62,9 +62,9 @@ static void rtw89_ps_power_mode_change(struct rtw89_dev *rtwdev, bool enter)
+ rtw89_mac_power_mode_change(rtwdev, enter);
+ }
+
+-void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- if (rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT)
++ if (rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT)
+ return;
+
+ if (!rtwdev->ps_mode)
+@@ -85,23 +85,25 @@ void __rtw89_leave_ps_mode(struct rtw89_dev *rtwdev)
+ rtw89_ps_power_mode_change(rtwdev, false);
+ }
+
+-static void __rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void __rtw89_enter_lps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_lps_parm lps_param = {
+- .macid = rtwvif->mac_id,
++ .macid = rtwvif_link->mac_id,
+ .psmode = RTW89_MAC_AX_PS_MODE_LEGACY,
+ .lastrpwm = RTW89_LAST_RPWM_PS,
+ };
+
+ rtw89_btc_ntfy_radio_state(rtwdev, BTC_RFCTRL_FW_CTRL);
+ rtw89_fw_h2c_lps_parm(rtwdev, &lps_param);
+- rtw89_fw_h2c_lps_ch_info(rtwdev, rtwvif);
++ rtw89_fw_h2c_lps_ch_info(rtwdev, rtwvif_link);
+ }
+
+-static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void __rtw89_leave_lps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_lps_parm lps_param = {
+- .macid = rtwvif->mac_id,
++ .macid = rtwvif_link->mac_id,
+ .psmode = RTW89_MAC_AX_PS_MODE_ACTIVE,
+ .lastrpwm = RTW89_LAST_RPWM_ACTIVE,
+ };
+@@ -109,7 +111,7 @@ static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ rtw89_fw_h2c_lps_parm(rtwdev, &lps_param);
+ rtw89_fw_leave_lps_check(rtwdev, 0);
+ rtw89_btc_ntfy_radio_state(rtwdev, BTC_RFCTRL_WL_ON);
+- rtw89_chip_digital_pwr_comp(rtwdev, rtwvif->phy_idx);
++ rtw89_chip_digital_pwr_comp(rtwdev, rtwvif_link->phy_idx);
+ }
+
+ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev)
+@@ -119,7 +121,7 @@ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev)
+ __rtw89_leave_ps_mode(rtwdev);
+ }
+
+-void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool ps_mode)
+ {
+ lockdep_assert_held(&rtwdev->mutex);
+@@ -127,23 +129,26 @@ void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ if (test_and_set_bit(RTW89_FLAG_LEISURE_PS, rtwdev->flags))
+ return;
+
+- __rtw89_enter_lps(rtwdev, rtwvif);
++ __rtw89_enter_lps(rtwdev, rtwvif_link);
+ if (ps_mode)
+- __rtw89_enter_ps_mode(rtwdev, rtwvif);
++ __rtw89_enter_ps_mode(rtwdev, rtwvif_link);
+ }
+
+-static void rtw89_leave_lps_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_leave_lps_vif(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION &&
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION &&
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
+ return;
+
+- __rtw89_leave_lps(rtwdev, rtwvif);
++ __rtw89_leave_lps(rtwdev, rtwvif_link);
+ }
+
+ void rtw89_leave_lps(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+@@ -153,12 +158,15 @@ void rtw89_leave_lps(struct rtw89_dev *rtwdev)
+ __rtw89_leave_ps_mode(rtwdev);
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_leave_lps_vif(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_leave_lps_vif(rtwdev, rtwvif_link);
+ }
+
+ void rtw89_enter_ips(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ set_bit(RTW89_FLAG_INACTIVE_PS, rtwdev->flags);
+
+@@ -166,14 +174,17 @@ void rtw89_enter_ips(struct rtw89_dev *rtwdev)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_vif_deinit(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_vif_deinit(rtwdev, rtwvif_link);
+
+ rtw89_core_stop(rtwdev);
+ }
+
+ void rtw89_leave_ips(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ int ret;
+
+ if (test_bit(RTW89_FLAG_POWERON, rtwdev->flags))
+@@ -186,7 +197,8 @@ void rtw89_leave_ips(struct rtw89_dev *rtwdev)
+ rtw89_set_channel(rtwdev);
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_vif_init(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_vif_init(rtwdev, rtwvif_link);
+
+ clear_bit(RTW89_FLAG_INACTIVE_PS, rtwdev->flags);
+ }
+@@ -197,48 +209,50 @@ void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl)
+ rtw89_leave_lps(rtwdev);
+ }
+
+-static void rtw89_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw89_tsf32_toggle(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ enum rtw89_p2pps_action act)
+ {
+ if (act == RTW89_P2P_ACT_UPDATE || act == RTW89_P2P_ACT_REMOVE)
+ return;
+
+ if (act == RTW89_P2P_ACT_INIT)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif, true);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif_link, true);
+ else if (act == RTW89_P2P_ACT_TERMINATE)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif, false);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif_link, false);
+ }
+
+ static void rtw89_p2p_disable_all_noa(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ enum rtw89_p2pps_action act;
+ u8 noa_id;
+
+- if (rtwvif->last_noa_nr == 0)
++ if (rtwvif_link->last_noa_nr == 0)
+ return;
+
+- for (noa_id = 0; noa_id < rtwvif->last_noa_nr; noa_id++) {
+- if (noa_id == rtwvif->last_noa_nr - 1)
++ for (noa_id = 0; noa_id < rtwvif_link->last_noa_nr; noa_id++) {
++ if (noa_id == rtwvif_link->last_noa_nr - 1)
+ act = RTW89_P2P_ACT_TERMINATE;
+ else
+ act = RTW89_P2P_ACT_REMOVE;
+- rtw89_tsf32_toggle(rtwdev, rtwvif, act);
+- rtw89_fw_h2c_p2p_act(rtwdev, vif, NULL, act, noa_id);
++ rtw89_tsf32_toggle(rtwdev, rtwvif_link, act);
++ rtw89_fw_h2c_p2p_act(rtwdev, rtwvif_link, bss_conf,
++ NULL, act, noa_id);
+ }
+ }
+
+ static void rtw89_p2p_update_noa(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct ieee80211_p2p_noa_desc *desc;
+ enum rtw89_p2pps_action act;
+ u8 noa_id;
+
+ for (noa_id = 0; noa_id < RTW89_P2P_MAX_NOA_NUM; noa_id++) {
+- desc = &vif->bss_conf.p2p_noa_attr.desc[noa_id];
++ desc = &bss_conf->p2p_noa_attr.desc[noa_id];
+ if (!desc->count || !desc->duration)
+ break;
+
+@@ -246,16 +260,19 @@ static void rtw89_p2p_update_noa(struct rtw89_dev *rtwdev,
+ act = RTW89_P2P_ACT_INIT;
+ else
+ act = RTW89_P2P_ACT_UPDATE;
+- rtw89_tsf32_toggle(rtwdev, rtwvif, act);
+- rtw89_fw_h2c_p2p_act(rtwdev, vif, desc, act, noa_id);
++ rtw89_tsf32_toggle(rtwdev, rtwvif_link, act);
++ rtw89_fw_h2c_p2p_act(rtwdev, rtwvif_link, bss_conf,
++ desc, act, noa_id);
+ }
+- rtwvif->last_noa_nr = noa_id;
++ rtwvif_link->last_noa_nr = noa_id;
+ }
+
+-void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+- rtw89_p2p_disable_all_noa(rtwdev, vif);
+- rtw89_p2p_update_noa(rtwdev, vif);
++ rtw89_p2p_disable_all_noa(rtwdev, rtwvif_link, bss_conf);
++ rtw89_p2p_update_noa(rtwdev, rtwvif_link, bss_conf);
+ }
+
+ void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
+@@ -265,6 +282,12 @@ void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
+ enum rtw89_entity_mode mode;
+ int count = 0;
+
++ /* FIXME: Fix rtw89_enter_lps() and __rtw89_enter_ps_mode()
++ * to take MLO cases into account before doing the following.
++ */
++ if (rtwdev->support_mlo)
++ goto disable_lps;
++
+ mode = rtw89_get_entity_mode(rtwdev);
+ if (mode == RTW89_ENTITY_MODE_MCC)
+ goto disable_lps;
+@@ -291,9 +314,9 @@ void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
+ rtwdev->lps_enabled = false;
+ }
+
+-void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif)
++void rtw89_p2p_noa_renew(struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa;
++ struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa;
+ struct rtw89_p2p_noa_ie *ie = &setter->ie;
+ struct rtw89_p2p_ie_head *p2p_head = &ie->p2p_head;
+ struct rtw89_noa_attr_head *noa_head = &ie->noa_head;
+@@ -318,10 +341,10 @@ void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif)
+ noa_head->oppps_ctwindow = 0;
+ }
+
+-void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif,
++void rtw89_p2p_noa_append(struct rtw89_vif_link *rtwvif_link,
+ const struct ieee80211_p2p_noa_desc *desc)
+ {
+- struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa;
++ struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa;
+ struct rtw89_p2p_noa_ie *ie = &setter->ie;
+ struct rtw89_p2p_ie_head *p2p_head = &ie->p2p_head;
+ struct rtw89_noa_attr_head *noa_head = &ie->noa_head;
+@@ -338,9 +361,9 @@ void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif,
+ ie->noa_desc[setter->noa_count++] = *desc;
+ }
+
+-u8 rtw89_p2p_noa_fetch(struct rtw89_vif *rtwvif, void **data)
++u8 rtw89_p2p_noa_fetch(struct rtw89_vif_link *rtwvif_link, void **data)
+ {
+- struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa;
++ struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa;
+ struct rtw89_p2p_noa_ie *ie = &setter->ie;
+ void *tail;
+
+diff --git a/drivers/net/wireless/realtek/rtw89/ps.h b/drivers/net/wireless/realtek/rtw89/ps.h
+index 54486e4550b61e..cdd712966b09d9 100644
+--- a/drivers/net/wireless/realtek/rtw89/ps.h
++++ b/drivers/net/wireless/realtek/rtw89/ps.h
+@@ -5,21 +5,23 @@
+ #ifndef __RTW89_PS_H_
+ #define __RTW89_PS_H_
+
+-void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool ps_mode);
+ void rtw89_leave_lps(struct rtw89_dev *rtwdev);
+ void __rtw89_leave_ps_mode(struct rtw89_dev *rtwdev);
+-void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev);
+ void rtw89_enter_ips(struct rtw89_dev *rtwdev);
+ void rtw89_leave_ips(struct rtw89_dev *rtwdev);
+ void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl);
+-void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf);
+ void rtw89_recalc_lps(struct rtw89_dev *rtwdev);
+-void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif);
+-void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif,
++void rtw89_p2p_noa_renew(struct rtw89_vif_link *rtwvif_link);
++void rtw89_p2p_noa_append(struct rtw89_vif_link *rtwvif_link,
+ const struct ieee80211_p2p_noa_desc *desc);
+-u8 rtw89_p2p_noa_fetch(struct rtw89_vif *rtwvif, void **data);
++u8 rtw89_p2p_noa_fetch(struct rtw89_vif_link *rtwvif_link, void **data);
+
+ static inline void rtw89_leave_ips_by_hwflags(struct rtw89_dev *rtwdev)
+ {
+diff --git a/drivers/net/wireless/realtek/rtw89/regd.c b/drivers/net/wireless/realtek/rtw89/regd.c
+index a7720a1f17a743..bb064a086970bb 100644
+--- a/drivers/net/wireless/realtek/rtw89/regd.c
++++ b/drivers/net/wireless/realtek/rtw89/regd.c
+@@ -793,22 +793,26 @@ static bool __rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_regulatory_info *regulatory = &rtwdev->regulatory;
+ struct rtw89_reg_6ghz_tpe new = {};
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ bool changed = false;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+ const struct rtw89_reg_6ghz_tpe *tmp;
+ const struct rtw89_chan *chan;
+
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- if (chan->band_type != RTW89_BAND_6G)
+- continue;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ if (chan->band_type != RTW89_BAND_6G)
++ continue;
+
+- tmp = &rtwvif->reg_6ghz_tpe;
+- if (!tmp->valid)
+- continue;
++ tmp = &rtwvif_link->reg_6ghz_tpe;
++ if (!tmp->valid)
++ continue;
+
+- tpe_intersect_constraint(&new, tmp->constraint);
++ tpe_intersect_constraint(&new, tmp->constraint);
++ }
+ }
+
+ if (memcmp(&regulatory->reg_6ghz_tpe, &new,
+@@ -831,19 +835,24 @@ static bool __rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev)
+ }
+
+ static int rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool active,
++ struct rtw89_vif_link *rtwvif_link, bool active,
+ unsigned int *changed)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+- struct rtw89_reg_6ghz_tpe *tpe = &rtwvif->reg_6ghz_tpe;
++ struct rtw89_reg_6ghz_tpe *tpe = &rtwvif_link->reg_6ghz_tpe;
++ struct ieee80211_bss_conf *bss_conf;
+
+ memset(tpe, 0, sizeof(*tpe));
+
+- if (!active || rtwvif->reg_6ghz_power != RTW89_REG_6GHZ_POWER_STD)
++ if (!active || rtwvif_link->reg_6ghz_power != RTW89_REG_6GHZ_POWER_STD)
+ goto bottom;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
+ rtw89_calculate_tpe(rtwdev, tpe, &bss_conf->tpe);
++
++ rcu_read_unlock();
++
+ if (!tpe->valid)
+ goto bottom;
+
+@@ -867,20 +876,24 @@ static bool __rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev)
+ const struct rtw89_regd *regd = regulatory->regd;
+ enum rtw89_reg_6ghz_power sel;
+ const struct rtw89_chan *chan;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ int count = 0;
+ u8 index;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- if (chan->band_type != RTW89_BAND_6G)
+- continue;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ if (chan->band_type != RTW89_BAND_6G)
++ continue;
+
+- if (count != 0 && rtwvif->reg_6ghz_power == sel)
+- continue;
++ if (count != 0 && rtwvif_link->reg_6ghz_power == sel)
++ continue;
+
+- sel = rtwvif->reg_6ghz_power;
+- count++;
++ sel = rtwvif_link->reg_6ghz_power;
++ count++;
++ }
+ }
+
+ if (count != 1)
+@@ -908,35 +921,41 @@ static bool __rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev)
+ }
+
+ static int rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool active,
++ struct rtw89_vif_link *rtwvif_link, bool active,
+ unsigned int *changed)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
+
+ if (active) {
+- switch (vif->bss_conf.power_type) {
++ switch (bss_conf->power_type) {
+ case IEEE80211_REG_VLP_AP:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_VLP;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_VLP;
+ break;
+ case IEEE80211_REG_LPI_AP:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_LPI;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_LPI;
+ break;
+ case IEEE80211_REG_SP_AP:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_STD;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_STD;
+ break;
+ default:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
+ break;
+ }
+ } else {
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
+ }
+
++ rcu_read_unlock();
++
+ *changed += __rtw89_reg_6ghz_power_recalc(rtwdev);
+ return 0;
+ }
+
+-int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool active)
+ {
+ unsigned int changed = 0;
+@@ -948,11 +967,11 @@ int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ * so must do reg_6ghz_tpe_recalc() after reg_6ghz_power_recalc().
+ */
+
+- ret = rtw89_reg_6ghz_power_recalc(rtwdev, rtwvif, active, &changed);
++ ret = rtw89_reg_6ghz_power_recalc(rtwdev, rtwvif_link, active, &changed);
+ if (ret)
+ return ret;
+
+- ret = rtw89_reg_6ghz_tpe_recalc(rtwdev, rtwvif, active, &changed);
++ ret = rtw89_reg_6ghz_tpe_recalc(rtwdev, rtwvif_link, active, &changed);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b.c b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+index 1679bd408ef3f3..f9766bf30e71df 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8851b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+@@ -1590,10 +1590,11 @@ static void rtw8851b_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8851b_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8851b_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8851b_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -1608,10 +1609,12 @@ static void rtw8851b_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8851b_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8851b_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8851b_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8851b_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx);
++ rtw8851b_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx,
++ rtwvif_link->chanctx_idx);
+ }
+
+ static void rtw8851b_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a.c b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+index dde96bd63021ff..42d369d2e916a6 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852a.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+@@ -1350,10 +1350,11 @@ static void rtw8852a_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852a_rx_dck(rtwdev, RTW89_PHY_0, true, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852a_rx_dck(rtwdev, phy_idx, true, chanctx_idx);
+ rtw8852a_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -1368,10 +1369,11 @@ static void rtw8852a_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852a_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852a_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852a_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852a_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx);
++ rtw8852a_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx);
+ }
+
+ static void rtw8852a_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+index 12be52f76427a1..364aa21cbd446f 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+@@ -562,10 +562,11 @@ static void rtw8852b_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852b_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852b_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8852b_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -580,10 +581,12 @@ static void rtw8852b_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852b_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852b_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852b_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852b_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx);
++ rtw8852b_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx,
++ rtwvif_link->chanctx_idx);
+ }
+
+ static void rtw8852b_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
+index 7dfdcb5964e117..dab7e71ec6a140 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
+@@ -535,10 +535,11 @@ static void rtw8852bt_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852bt_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852bt_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8852bt_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -553,10 +554,12 @@ static void rtw8852bt_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852bt_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852bt_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852bt_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852bt_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx);
++ rtw8852bt_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx,
++ rtwvif_link->chanctx_idx);
+ }
+
+ static void rtw8852bt_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+index 1c6e89ab0f4bcb..dbe77abb2c488f 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+@@ -1846,10 +1846,11 @@ static void rtw8852c_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852c_rx_dck(rtwdev, RTW89_PHY_0, false);
+ }
+
+-static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852c_mcc_get_ch_info(rtwdev, phy_idx);
+ rtw8852c_rx_dck(rtwdev, phy_idx, false);
+@@ -1866,10 +1867,11 @@ static void rtw8852c_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852c_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852c_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852c_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852c_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx);
++ rtw8852c_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx);
+ }
+
+ static void rtw8852c_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8922a.c b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+index 63b1ff2f98ed31..ef7747adbcc2b8 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8922a.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+@@ -2020,11 +2020,12 @@ static void _wait_rx_mode(struct rtw89_dev *rtwdev, u8 kpath)
+ }
+ }
+
+-static void rtw8922a_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8922a_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, chanctx_idx);
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+ u8 phy_map = rtw89_btc_phymap(rtwdev, phy_idx, RF_AB, chanctx_idx);
+ u32 tx_en;
+
+@@ -2050,7 +2051,8 @@ static void rtw8922a_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw89_phy_rfk_tssi_and_wait(rtwdev, phy_idx, chan, RTW89_TSSI_SCAN, 6);
+ }
+
+-static void rtw8922a_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8922a_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
+index 5fc2faa9ba5a7e..7b203bb7f151a7 100644
+--- a/drivers/net/wireless/realtek/rtw89/ser.c
++++ b/drivers/net/wireless/realtek/rtw89/ser.c
+@@ -300,37 +300,54 @@ static void drv_resume_rx(struct rtw89_ser *ser)
+
+ static void ser_reset_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ {
+- rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
+- rtwvif->net_type = RTW89_NET_TYPE_NO_LINK;
+- rtwvif->trigger = false;
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
+ rtwvif->tdls_peer = 0;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif_link->port);
++ rtwvif_link->net_type = RTW89_NET_TYPE_NO_LINK;
++ rtwvif_link->trigger = false;
++ }
+ }
+
+ static void ser_sta_deinit_cam_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_vif *target_rtwvif = (struct rtw89_vif *)data;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
+
+ if (rtwvif != target_rtwvif)
+ return;
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
+- rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam);
+- if (sta->tdls)
+- rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam);
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
+
+- INIT_LIST_HEAD(&rtwsta->ba_cam_list);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
++ rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta_link->addr_cam);
++ if (sta->tdls)
++ rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta_link->bssid_cam);
++
++ INIT_LIST_HEAD(&rtwsta_link->ba_cam_list);
++ }
+ }
+
+ static void ser_deinit_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ {
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ ser_sta_deinit_cam_iter,
+ rtwvif);
+
+- rtw89_cam_deinit(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_cam_deinit(rtwdev, rtwvif_link);
+
+ bitmap_zero(rtwdev->cam_info.ba_cam_map, RTW89_MAX_BA_CAM_NUM);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
+index 86e24e07780d9b..3e81fd974ec180 100644
+--- a/drivers/net/wireless/realtek/rtw89/wow.c
++++ b/drivers/net/wireless/realtek/rtw89/wow.c
+@@ -421,7 +421,8 @@ static void rtw89_wow_construct_key_info(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_key_info *key_info = &rtw_wow->key_info;
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ bool err = false;
+
+ rcu_read_lock();
+@@ -596,7 +597,8 @@ static int rtw89_wow_get_aoac_rpt(struct rtw89_dev *rtwdev, bool rx_ready)
+ static struct ieee80211_key_conf *rtw89_wow_gtk_rekey(struct rtw89_dev *rtwdev,
+ u32 cipher, u8 keyidx, u8 *gtk)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_cipher_info *cipher_info;
+ struct ieee80211_key_conf *rekey_conf;
+ struct ieee80211_key_conf *key;
+@@ -632,11 +634,13 @@ static struct ieee80211_key_conf *rtw89_wow_gtk_rekey(struct rtw89_dev *rtwdev,
+
+ static void rtw89_wow_update_key_info(struct rtw89_dev *rtwdev, bool rx_ready)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt;
+ struct rtw89_set_key_info_iter_data data = {.error = false,
+ .rx_ready = rx_ready};
++ struct ieee80211_bss_conf *bss_conf;
+ struct ieee80211_key_conf *key;
+
+ rcu_read_lock();
+@@ -669,9 +673,15 @@ static void rtw89_wow_update_key_info(struct rtw89_dev *rtwdev, bool rx_ready)
+ return;
+
+ rtw89_rx_pn_set_pmf(rtwdev, key, aoac_rpt->igtk_ipn);
+- ieee80211_gtk_rekey_notify(wow_vif, wow_vif->bss_conf.bssid,
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ ieee80211_gtk_rekey_notify(wow_vif, bss_conf->bssid,
+ aoac_rpt->eapol_key_replay_count,
+- GFP_KERNEL);
++ GFP_ATOMIC);
++
++ rcu_read_unlock();
+ }
+
+ static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev)
+@@ -681,27 +691,24 @@ static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev)
+
+ static void rtw89_wow_enter_deep_ps(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+- __rtw89_enter_ps_mode(rtwdev, rtwvif);
++ __rtw89_enter_ps_mode(rtwdev, rtwvif_link);
+ }
+
+ static void rtw89_wow_enter_ps(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+ if (rtw89_wow_mgd_linked(rtwdev))
+- rtw89_enter_lps(rtwdev, rtwvif, false);
++ rtw89_enter_lps(rtwdev, rtwvif_link, false);
+ else if (rtw89_wow_no_link(rtwdev))
+- rtw89_fw_h2c_fwips(rtwdev, rtwvif, true);
++ rtw89_fw_h2c_fwips(rtwdev, rtwvif_link, true);
+ }
+
+ static void rtw89_wow_leave_ps(struct rtw89_dev *rtwdev, bool enable_wow)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+ if (rtw89_wow_mgd_linked(rtwdev)) {
+ rtw89_leave_lps(rtwdev);
+@@ -709,7 +716,7 @@ static void rtw89_wow_leave_ps(struct rtw89_dev *rtwdev, bool enable_wow)
+ if (enable_wow)
+ rtw89_leave_ips(rtwdev);
+ else
+- rtw89_fw_h2c_fwips(rtwdev, rtwvif, false);
++ rtw89_fw_h2c_fwips(rtwdev, rtwvif_link, false);
+ }
+ }
+
+@@ -734,6 +741,8 @@ static void rtw89_wow_set_rx_filter(struct rtw89_dev *rtwdev, bool enable)
+
+ static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt;
+ struct cfg80211_wowlan_nd_info nd_info;
+@@ -780,35 +789,34 @@ static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev)
+ break;
+ default:
+ rtw89_warn(rtwdev, "Unknown wakeup reason %x\n", reason);
+- ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, NULL,
+- GFP_KERNEL);
++ ieee80211_report_wowlan_wakeup(wow_vif, NULL, GFP_KERNEL);
+ return;
+ }
+
+- ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, &wakeup,
+- GFP_KERNEL);
++ ieee80211_report_wowlan_wakeup(wow_vif, &wakeup, GFP_KERNEL);
+ }
+
+-static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+
+ /* The current WoWLAN implementation supports only a vif in
+ * infra mode or no-link mode. Once one suitable vif is found,
+ * stop the iteration.
+ */
+- if (rtw_wow->wow_vif || vif->type != NL80211_IFTYPE_STATION)
++ if (rtw_wow->rtwvif_link || vif->type != NL80211_IFTYPE_STATION)
+ return;
+
+- switch (rtwvif->net_type) {
++ switch (rtwvif_link->net_type) {
+ case RTW89_NET_TYPE_INFRA:
+ if (rtw_wow_has_mgd_features(rtwdev))
+- rtw_wow->wow_vif = vif;
++ rtw_wow->rtwvif_link = rtwvif_link;
+ break;
+ case RTW89_NET_TYPE_NO_LINK:
+ if (rtw_wow->pno_inited)
+- rtw_wow->wow_vif = vif;
++ rtw_wow->rtwvif_link = rtwvif_link;
+ break;
+ default:
+ break;
+@@ -865,7 +873,7 @@ static u16 rtw89_calc_crc(u8 *pdata, int length)
+ return ~crc;
+ }
+
+-static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
++static int rtw89_wow_pattern_get_type(struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_wow_cam_info *rtw_pattern,
+ const u8 *pattern, u8 da_mask)
+ {
+@@ -885,7 +893,7 @@ static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
+ rtw_pattern->bc = true;
+ else if (is_multicast_ether_addr(da))
+ rtw_pattern->mc = true;
+- else if (ether_addr_equal(da, rtwvif->mac_addr) &&
++ else if (ether_addr_equal(da, rtwvif_link->mac_addr) &&
+ da_mask == GENMASK(5, 0))
+ rtw_pattern->uc = true;
+ else if (!da_mask) /* da_mask == 0 means wildcard */
+@@ -897,7 +905,7 @@ static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
+ }
+
+ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ const struct cfg80211_pkt_pattern *pkt_pattern,
+ struct rtw89_wow_cam_info *rtw_pattern)
+ {
+@@ -916,7 +924,7 @@ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+ mask_len = DIV_ROUND_UP(len, 8);
+ memset(rtw_pattern, 0, sizeof(*rtw_pattern));
+
+- ret = rtw89_wow_pattern_get_type(rtwvif, rtw_pattern, pattern,
++ ret = rtw89_wow_pattern_get_type(rtwvif_link, rtw_pattern, pattern,
+ mask[0] & GENMASK(5, 0));
+ if (ret)
+ return ret;
+@@ -970,7 +978,7 @@ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct cfg80211_wowlan *wowlan)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+@@ -983,7 +991,7 @@ static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev,
+
+ for (i = 0; i < wowlan->n_patterns; i++) {
+ rtw_pattern = &rtw_wow->patterns[i];
+- ret = rtw89_wow_pattern_generate(rtwdev, rtwvif,
++ ret = rtw89_wow_pattern_generate(rtwdev, rtwvif_link,
+ &wowlan->patterns[i],
+ rtw_pattern);
+ if (ret) {
+@@ -1040,7 +1048,7 @@ static void rtw89_wow_clear_wakeups(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+
+- rtw_wow->wow_vif = NULL;
++ rtw_wow->rtwvif_link = NULL;
+ rtw89_core_release_all_bits_map(rtw_wow->flags, RTW89_WOW_FLAG_NUM);
+ rtw_wow->pattern_cnt = 0;
+ rtw_wow->pno_inited = false;
+@@ -1066,6 +1074,7 @@ static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev,
+ struct cfg80211_wowlan *wowlan)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
+
+ if (wowlan->disconnect)
+@@ -1078,36 +1087,40 @@ static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev,
+ if (wowlan->nd_config)
+ rtw89_wow_init_pno(rtwdev, wowlan->nd_config);
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_wow_vif_iter(rtwdev, rtwvif);
++ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++ /* use the link on HW-0 for the WoW flow */
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (!rtwvif_link)
++ continue;
++
++ rtw89_wow_vif_iter(rtwdev, rtwvif_link);
++ }
+
+- if (!rtw_wow->wow_vif)
++ rtwvif_link = rtw_wow->rtwvif_link;
++ if (!rtwvif_link)
+ return -EPERM;
+
+- rtwvif = (struct rtw89_vif *)rtw_wow->wow_vif->drv_priv;
+- return rtw89_wow_parse_patterns(rtwdev, rtwvif, wowlan);
++ return rtw89_wow_parse_patterns(rtwdev, rtwvif_link, wowlan);
+ }
+
+ static int rtw89_wow_cfg_wake_pno(struct rtw89_dev *rtwdev, bool wow)
+ {
+- struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+ int ret;
+
+- ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to config pno\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow global\n");
+ return ret;
+@@ -1119,34 +1132,39 @@ static int rtw89_wow_cfg_wake_pno(struct rtw89_dev *rtwdev, bool wow)
+ static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ struct ieee80211_sta *wow_sta;
+- struct rtw89_sta *rtwsta = NULL;
++ struct rtw89_sta_link *rtwsta_link = NULL;
++ struct rtw89_sta *rtwsta;
+ int ret;
+
+- wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
+- if (wow_sta)
+- rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
++ wow_sta = ieee80211_find_sta(wow_vif, wow_vif->cfg.ap_addr);
++ if (wow_sta) {
++ rtwsta = sta_to_rtwsta(wow_sta);
++ rtwsta_link = rtwsta->links[rtwvif_link->link_id];
++ if (!rtwsta_link)
++ return -ENOLINK;
++ }
+
+ if (wow) {
+ if (rtw_wow->pattern_cnt)
+- rtwvif->wowlan_pattern = true;
++ rtwvif_link->wowlan_pattern = true;
+ if (test_bit(RTW89_WOW_FLAG_EN_MAGIC_PKT, rtw_wow->flags))
+- rtwvif->wowlan_magic = true;
++ rtwvif_link->wowlan_magic = true;
+ } else {
+- rtwvif->wowlan_pattern = false;
+- rtwvif->wowlan_magic = false;
++ rtwvif_link->wowlan_pattern = false;
++ rtwvif_link->wowlan_magic = false;
+ }
+
+- ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n");
+ return ret;
+ }
+
+ if (wow) {
+- ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n",
+ ret);
+@@ -1154,13 +1172,13 @@ static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow)
+ }
+ }
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow global\n");
+ return ret;
+@@ -1190,25 +1208,30 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+ enum rtw89_fw_type fw_type = wow ? RTW89_FW_WOWLAN : RTW89_FW_NORMAL;
+ enum rtw89_chip_gen chip_gen = rtwdev->chip->chip_gen;
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ enum rtw89_core_chip_id chip_id = rtwdev->chip->chip_id;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ bool include_bb = !!chip->bbmcu_nr;
+ bool disable_intr_for_dlfw = false;
+ struct ieee80211_sta *wow_sta;
+- struct rtw89_sta *rtwsta = NULL;
++ struct rtw89_sta_link *rtwsta_link = NULL;
++ struct rtw89_sta *rtwsta;
+ bool is_conn = true;
+ int ret;
+
+ if (chip_id == RTL8852C || chip_id == RTL8922A)
+ disable_intr_for_dlfw = true;
+
+- wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
+- if (wow_sta)
+- rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
+- else
++ wow_sta = ieee80211_find_sta(wow_vif, wow_vif->cfg.ap_addr);
++ if (wow_sta) {
++ rtwsta = sta_to_rtwsta(wow_sta);
++ rtwsta_link = rtwsta->links[rtwvif_link->link_id];
++ if (!rtwsta_link)
++ return -ENOLINK;
++ } else {
+ is_conn = false;
++ }
+
+ if (disable_intr_for_dlfw)
+ rtw89_hci_disable_intr(rtwdev);
+@@ -1224,14 +1247,14 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+
+ rtw89_phy_init_rf_reg(rtwdev, true);
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link,
+ RTW89_ROLE_FW_RESTORE);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c role maintain\n");
+ return ret;
+ }
+
+- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, wow_vif, wow_sta);
++ ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c assoc cmac tbl\n");
+ return ret;
+@@ -1240,27 +1263,27 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+ if (!is_conn)
+ rtw89_cam_reset_keys(rtwdev);
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, !is_conn);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, !is_conn);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c join info\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+ }
+
+ if (is_conn) {
+- ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
++ ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif_link, rtwsta_link->mac_id);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+ return ret;
+ }
+- rtw89_phy_ra_assoc(rtwdev, wow_sta);
+- rtw89_phy_set_bss_color(rtwdev, wow_vif);
+- rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, wow_vif);
++ rtw89_phy_ra_assoc(rtwdev, rtwsta_link);
++ rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
++ rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link);
+ }
+
+ if (chip_gen == RTW89_CHIP_BE)
+@@ -1363,21 +1386,20 @@ static int rtw89_wow_disable_trx_pre(struct rtw89_dev *rtwdev)
+
+ static int rtw89_wow_disable_trx_post(struct rtw89_dev *rtwdev)
+ {
+- struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *vif = rtw_wow->wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+ int ret;
+
+ ret = rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, true);
+ if (ret)
+ rtw89_err(rtwdev, "cfg ppdu status\n");
+
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+
+ return ret;
+ }
+
+ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct list_head *pkt_list = &rtw_wow->pno_pkt_list;
+@@ -1391,7 +1413,7 @@ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+@@ -1401,7 +1423,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ int ret;
+
+ for (i = 0; i < num; i++) {
+- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
++ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ nd_config->match_sets[i].ssid.ssid,
+ nd_config->match_sets[i].ssid.ssid_len,
+ nd_config->ie_len);
+@@ -1413,7 +1435,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
+ if (!info) {
+ kfree_skb(skb);
+- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif);
++ rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link);
+ return -ENOMEM;
+ }
+
+@@ -1421,7 +1443,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ if (ret) {
+ kfree_skb(skb);
+ kfree(info);
+- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif);
++ rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link);
+ return ret;
+ }
+
+@@ -1436,20 +1458,19 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+ int interval = rtw_wow->nd_config->scan_plans[0].interval;
+ struct rtw89_scan_option opt = {};
+ int ret;
+
+ if (enable) {
+- ret = rtw89_pno_scan_update_probe_req(rtwdev, rtwvif);
++ ret = rtw89_pno_scan_update_probe_req(rtwdev, rtwvif_link);
+ if (ret) {
+ rtw89_err(rtwdev, "Update probe request failed\n");
+ return ret;
+ }
+
+- ret = mac->add_chan_list_pno(rtwdev, rtwvif);
++ ret = mac->add_chan_list_pno(rtwdev, rtwvif_link);
+ if (ret) {
+ rtw89_err(rtwdev, "Update channel list failed\n");
+ return ret;
+@@ -1471,7 +1492,7 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable)
+ opt.opch_end = RTW89_CHAN_INVALID;
+ }
+
+- mac->scan_offload(rtwdev, &opt, rtwvif, true);
++ mac->scan_offload(rtwdev, &opt, rtwvif_link, true);
+
+ return 0;
+ }
+@@ -1479,8 +1500,7 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable)
+ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
+ int ret;
+
+ if (rtw89_wow_no_link(rtwdev)) {
+@@ -1499,25 +1519,25 @@ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+ rtw89_wow_pattern_write(rtwdev);
+ rtw89_wow_construct_key_info(rtwdev);
+
+- ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to enable keep alive\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to enable disconnect detect\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to enable GTK offload\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif_link, true);
+ if (ret)
+ rtw89_warn(rtwdev, "wow: failed to enable arp offload\n");
+ }
+@@ -1548,8 +1568,7 @@ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
+ int ret;
+
+ if (rtw89_wow_no_link(rtwdev)) {
+@@ -1559,35 +1578,35 @@ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable pno\n");
+ return ret;
+ }
+
+- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif);
++ rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link);
+ } else {
+ rtw89_wow_pattern_clear(rtwdev);
+
+- ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable keep alive\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable disconnect detect\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable GTK offload\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif_link, false);
+ if (ret)
+ rtw89_warn(rtwdev, "wow: failed to disable arp offload\n");
+
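
Two hunks earlier in this file resolve the per-link station context as rtwsta->links[rtwvif_link->link_id] and bail out with -ENOLINK when that link has no station. A minimal, runnable userspace analogue of the lookup pattern (the struct names and MAX_LINKS bound are hypothetical stand-ins, not the driver's real layout):

#include <errno.h>
#include <stdio.h>

#define MAX_LINKS 4 /* hypothetical bound, cf. the driver's link array */

struct sta_link { int mac_id; };
struct sta { struct sta_link *links[MAX_LINKS]; };

/* Look up one link's station context; fail like -ENOLINK if absent. */
static int sta_link_mac_id(const struct sta *sta, unsigned int link_id,
			   int *mac_id)
{
	const struct sta_link *link;

	if (link_id >= MAX_LINKS)
		return -EINVAL;
	link = sta->links[link_id];
	if (!link)
		return -ENOLINK; /* link never brought up on this connection */
	*mac_id = link->mac_id;
	return 0;
}

int main(void)
{
	struct sta sta = { .links = { [0] = &(struct sta_link){ .mac_id = 7 } } };
	int mac_id;

	if (!sta_link_mac_id(&sta, 0, &mac_id))
		printf("link 0 -> mac_id %d\n", mac_id);
	if (sta_link_mac_id(&sta, 1, &mac_id) == -ENOLINK)
		printf("link 1 absent\n");
	return 0;
}
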
+diff --git a/drivers/net/wireless/realtek/rtw89/wow.h b/drivers/net/wireless/realtek/rtw89/wow.h
+index 3fbc2b87c058ac..f91991e8f2e30e 100644
+--- a/drivers/net/wireless/realtek/rtw89/wow.h
++++ b/drivers/net/wireless/realtek/rtw89/wow.h
+@@ -97,18 +97,16 @@ static inline int rtw89_wow_get_sec_hdr_len(struct rtw89_dev *rtwdev)
+ #ifdef CONFIG_PM
+ static inline bool rtw89_wow_mgd_linked(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+- return rtwvif->net_type == RTW89_NET_TYPE_INFRA;
++ return rtwvif_link->net_type == RTW89_NET_TYPE_INFRA;
+ }
+
+ static inline bool rtw89_wow_no_link(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+- return rtwvif->net_type == RTW89_NET_TYPE_NO_LINK;
++ return rtwvif_link->net_type == RTW89_NET_TYPE_NO_LINK;
+ }
+
+ static inline bool rtw_wow_has_mgd_features(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
+index e7198520bdffc7..64441c8bc4606c 100644
+--- a/drivers/net/wireless/silabs/wfx/main.c
++++ b/drivers/net/wireless/silabs/wfx/main.c
+@@ -480,10 +480,23 @@ static int __init wfx_core_init(void)
+ {
+ int ret = 0;
+
+- if (IS_ENABLED(CONFIG_SPI))
++ if (IS_ENABLED(CONFIG_SPI)) {
+ ret = spi_register_driver(&wfx_spi_driver);
+- if (IS_ENABLED(CONFIG_MMC) && !ret)
++ if (ret)
++ goto out;
++ }
++ if (IS_ENABLED(CONFIG_MMC)) {
+ ret = sdio_register_driver(&wfx_sdio_driver);
++ if (ret)
++ goto unregister_spi;
++ }
++
++ return 0;
++
++unregister_spi:
++ if (IS_ENABLED(CONFIG_SPI))
++ spi_unregister_driver(&wfx_spi_driver);
++out:
+ return ret;
+ }
+ module_init(wfx_core_init);
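
The reworked wfx_core_init() above replaces the chained !ret checks with the usual register-then-unwind shape, so a failure in sdio_register_driver() no longer leaves the SPI driver registered. A runnable userspace sketch of that shape, with stand-in register/unregister functions (all names hypothetical):

#include <stdio.h>

/* Stand-ins for spi_register_driver()/sdio_register_driver() and friends. */
static int register_a(void) { puts("A registered"); return 0; }
static void unregister_a(void) { puts("A unregistered"); }
static int register_b(void) { puts("B failed"); return -1; }

static int core_init(void)
{
	int ret;

	ret = register_a();
	if (ret)
		goto out;

	ret = register_b();
	if (ret)
		goto err_unregister_a; /* unwind everything done so far */

	return 0;

err_unregister_a:
	unregister_a();
out:
	return ret;
}

int main(void)
{
	return core_init() ? 1 : 0;
}
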
+diff --git a/drivers/net/wireless/st/cw1200/cw1200_spi.c b/drivers/net/wireless/st/cw1200/cw1200_spi.c
+index 4f346fb977a989..862964a8cc8761 100644
+--- a/drivers/net/wireless/st/cw1200/cw1200_spi.c
++++ b/drivers/net/wireless/st/cw1200/cw1200_spi.c
+@@ -450,7 +450,7 @@ static int __maybe_unused cw1200_spi_suspend(struct device *dev)
+ {
+ struct hwbus_priv *self = spi_get_drvdata(to_spi_device(dev));
+
+- if (!cw1200_can_suspend(self->core))
++ if (self && !cw1200_can_suspend(self->core))
+ return -EAGAIN;
+
+ /* XXX notify host that we have to keep CW1200 powered on? */
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 855b42c92284df..f0d4c6f3cb0555 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4591,6 +4591,11 @@ EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set);
+
+ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
+ {
++ /*
++ * As we're about to destroy the queue and free the tagset,
++ * we cannot have keep-alive work running.
++ */
++ nvme_stop_keep_alive(ctrl);
+ blk_mq_destroy_queue(ctrl->admin_q);
+ blk_put_queue(ctrl->admin_q);
+ if (ctrl->ops->flags & NVME_F_FABRICS) {
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 6a15873055b951..f25582e4d88bb0 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -165,7 +165,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ if (!ns->head->disk)
+ continue;
+ kblockd_schedule_work(&ns->head->requeue_work);
+@@ -209,7 +210,8 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ nvme_mpath_clear_current_path(ns);
+ kblockd_schedule_work(&ns->head->requeue_work);
+ }
+@@ -224,7 +226,8 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&head->srcu);
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (capacity != get_capacity(ns->disk))
+ clear_bit(NVME_NS_READY, &ns->flags);
+ }
+@@ -257,7 +260,8 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+ int found_distance = INT_MAX, fallback_distance = INT_MAX, distance;
+ struct nvme_ns *found = NULL, *fallback = NULL, *ns;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+@@ -356,7 +360,8 @@ static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
+ unsigned int min_depth_opt = UINT_MAX, min_depth_nonopt = UINT_MAX;
+ unsigned int depth;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+@@ -424,7 +429,8 @@ static bool nvme_available_path(struct nvme_ns_head *head)
+ if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
+ return NULL;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
+ continue;
+ switch (nvme_ctrl_state(ns->ctrl)) {
+@@ -785,7 +791,8 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ return 0;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ unsigned nsid;
+ again:
+ nsid = le32_to_cpu(desc->nsids[n]);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4b9fda0b1d9a33..55af3dfbc2607b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -153,6 +153,7 @@ struct nvme_dev {
+ /* host memory buffer support: */
+ u64 host_mem_size;
+ u32 nr_host_mem_descs;
++ u32 host_mem_descs_size;
+ dma_addr_t host_mem_descs_dma;
+ struct nvme_host_mem_buf_desc *host_mem_descs;
+ void **host_mem_desc_bufs;
+@@ -904,9 +905,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+
+ static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+ {
++ struct request *req;
++
+ spin_lock(&nvmeq->sq_lock);
+- while (!rq_list_empty(*rqlist)) {
+- struct request *req = rq_list_pop(rqlist);
++ while ((req = rq_list_pop(rqlist))) {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+ nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+@@ -931,31 +933,25 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+
+ static void nvme_queue_rqs(struct request **rqlist)
+ {
+- struct request *req, *next, *prev = NULL;
++ struct request *submit_list = NULL;
+ struct request *requeue_list = NULL;
++ struct request **requeue_lastp = &requeue_list;
++ struct nvme_queue *nvmeq = NULL;
++ struct request *req;
+
+- rq_list_for_each_safe(rqlist, req, next) {
+- struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+-
+- if (!nvme_prep_rq_batch(nvmeq, req)) {
+- /* detach 'req' and add to remainder list */
+- rq_list_move(rqlist, &requeue_list, req, prev);
+-
+- req = prev;
+- if (!req)
+- continue;
+- }
++ while ((req = rq_list_pop(rqlist))) {
++ if (nvmeq && nvmeq != req->mq_hctx->driver_data)
++ nvme_submit_cmds(nvmeq, &submit_list);
++ nvmeq = req->mq_hctx->driver_data;
+
+- if (!next || req->mq_hctx != next->mq_hctx) {
+- /* detach rest of list, and submit */
+- req->rq_next = NULL;
+- nvme_submit_cmds(nvmeq, rqlist);
+- *rqlist = next;
+- prev = NULL;
+- } else
+- prev = req;
++ if (nvme_prep_rq_batch(nvmeq, req))
++ rq_list_add(&submit_list, req); /* reverse order */
++ else
++ rq_list_add_tail(&requeue_lastp, req);
+ }
+
++ if (nvmeq)
++ nvme_submit_cmds(nvmeq, &submit_list);
+ *rqlist = requeue_list;
+ }
+
+@@ -1966,10 +1962,10 @@ static void nvme_free_host_mem(struct nvme_dev *dev)
+
+ kfree(dev->host_mem_desc_bufs);
+ dev->host_mem_desc_bufs = NULL;
+- dma_free_coherent(dev->dev,
+- dev->nr_host_mem_descs * sizeof(*dev->host_mem_descs),
++ dma_free_coherent(dev->dev, dev->host_mem_descs_size,
+ dev->host_mem_descs, dev->host_mem_descs_dma);
+ dev->host_mem_descs = NULL;
++ dev->host_mem_descs_size = 0;
+ dev->nr_host_mem_descs = 0;
+ }
+
+@@ -1977,7 +1973,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ u32 chunk_size)
+ {
+ struct nvme_host_mem_buf_desc *descs;
+- u32 max_entries, len;
++ u32 max_entries, len, descs_size;
+ dma_addr_t descs_dma;
+ int i = 0;
+ void **bufs;
+@@ -1990,8 +1986,9 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries)
+ max_entries = dev->ctrl.hmmaxd;
+
+- descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs),
+- &descs_dma, GFP_KERNEL);
++ descs_size = max_entries * sizeof(*descs);
++ descs = dma_alloc_coherent(dev->dev, descs_size, &descs_dma,
++ GFP_KERNEL);
+ if (!descs)
+ goto out;
+
+@@ -2020,6 +2017,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ dev->host_mem_size = size;
+ dev->host_mem_descs = descs;
+ dev->host_mem_descs_dma = descs_dma;
++ dev->host_mem_descs_size = descs_size;
+ dev->host_mem_desc_bufs = bufs;
+ return 0;
+
+@@ -2034,8 +2032,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+
+ kfree(bufs);
+ out_free_descs:
+- dma_free_coherent(dev->dev, max_entries * sizeof(*descs), descs,
+- descs_dma);
++ dma_free_coherent(dev->dev, descs_size, descs, descs_dma);
+ out:
+ dev->host_mem_descs = NULL;
+ return -ENOMEM;
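
The host_mem_descs_size bookkeeping above closes a size mismatch: the descriptor table is allocated with max_entries slots, but it was freed using nr_host_mem_descs, which can be smaller when the allocation loop stops early, so dma_free_coherent() could be passed the wrong size. Recording the exact size used at allocation time and reusing it at free time is the fix. A runnable userspace analogue of the bookkeeping only (malloc/free stand in for the DMA coherent API, where the size argument actually matters):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct desc { unsigned long addr, len; };

struct dev {
	struct desc *descs;
	size_t descs_size;     /* bytes actually allocated, kept for free */
	unsigned int nr_descs; /* entries actually used (may be smaller) */
};

static int alloc_descs(struct dev *dev, unsigned int max_entries,
		       unsigned int used)
{
	size_t size = max_entries * sizeof(*dev->descs);

	dev->descs = malloc(size); /* stand-in for dma_alloc_coherent() */
	if (!dev->descs)
		return -1;
	memset(dev->descs, 0, size);

	dev->descs_size = size; /* remember the allocation size ... */
	dev->nr_descs = used;   /* ... not just how many slots were used */
	return 0;
}

static void free_descs(struct dev *dev)
{
	/* Free with the recorded size, never nr_descs * sizeof(...). */
	free(dev->descs); /* cf. dma_free_coherent(dev, dev->descs_size, ...) */
	dev->descs = NULL;
	dev->descs_size = 0;
	dev->nr_descs = 0;
}

int main(void)
{
	struct dev dev = { 0 };

	if (alloc_descs(&dev, 8, 3))
		return 1;
	printf("allocated %zu bytes, %u used\n", dev.descs_size, dev.nr_descs);
	free_descs(&dev);
	return 0;
}
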
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 4d528c10df3a9a..546e76ac407cfd 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -457,6 +457,7 @@ int __initdata dt_root_addr_cells;
+ int __initdata dt_root_size_cells;
+
+ void *initial_boot_params __ro_after_init;
++phys_addr_t initial_boot_params_pa __ro_after_init;
+
+ #ifdef CONFIG_OF_EARLY_FLATTREE
+
+@@ -1136,17 +1137,18 @@ static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
+ return ptr;
+ }
+
+-bool __init early_init_dt_verify(void *params)
++bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
+ {
+- if (!params)
++ if (!dt_virt)
+ return false;
+
+ /* check device tree validity */
+- if (fdt_check_header(params))
++ if (fdt_check_header(dt_virt))
+ return false;
+
+ /* Setup flat device-tree pointer */
+- initial_boot_params = params;
++ initial_boot_params = dt_virt;
++ initial_boot_params_pa = dt_phys;
+ of_fdt_crc32 = crc32_be(~0, initial_boot_params,
+ fdt_totalsize(initial_boot_params));
+
+@@ -1173,11 +1175,11 @@ void __init early_init_dt_scan_nodes(void)
+ early_init_dt_check_for_usable_mem_range();
+ }
+
+-bool __init early_init_dt_scan(void *params)
++bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys)
+ {
+ bool status;
+
+- status = early_init_dt_verify(params);
++ status = early_init_dt_verify(dt_virt, dt_phys);
+ if (!status)
+ return false;
+
+diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
+index 9ccde2fd77cbf5..5b924597a4debe 100644
+--- a/drivers/of/kexec.c
++++ b/drivers/of/kexec.c
+@@ -301,7 +301,7 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
+ }
+
+ /* Remove memory reservation for the current device tree. */
+- ret = fdt_find_and_del_mem_rsv(fdt, __pa(initial_boot_params),
++ ret = fdt_find_and_del_mem_rsv(fdt, initial_boot_params_pa,
+ fdt_totalsize(initial_boot_params));
+ if (ret == -EINVAL) {
+ pr_err("Error removing memory reservation.\n");
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 284f2e0e4d2615..e091c3e55b5c6f 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -572,15 +572,14 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ pcie->refclk = clk;
+
+ /*
+- * The "Power Sequencing and Reset Signal Timings" table of the
+- * PCI Express Card Electromechanical Specification, Revision
+- * 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST#
+- * should be deasserted after minimum of 100us once REFCLK is
+- * stable. The REFCLK to the connector in RC mode is selected
+- * while enabling the PHY. So deassert PERST# after 100 us.
++ * Section 2.2 of the PCI Express Card Electromechanical
++ * Specification (Revision 5.1) mandates that the deassertion
++ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
++ * This ensures that the power and the reference clock
++ * are stable.
+ */
+ if (gpiod) {
+- fsleep(PCIE_T_PERST_CLK_US);
++ msleep(PCIE_T_PVPERL_MS);
+ gpiod_set_value_cansleep(gpiod, 1);
+ }
+
+@@ -671,15 +670,14 @@ static int j721e_pcie_resume_noirq(struct device *dev)
+ return ret;
+
+ /*
+- * The "Power Sequencing and Reset Signal Timings" table of the
+- * PCI Express Card Electromechanical Specification, Revision
+- * 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST#
+- * should be deasserted after minimum of 100us once REFCLK is
+- * stable. The REFCLK to the connector in RC mode is selected
+- * while enabling the PHY. So deassert PERST# after 100 us.
++ * Section 2.2 of the PCI Express Card Electromechanical
++ * Specification (Revision 5.1) mandates that the deassertion
++ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
++ * This ensures that the power and the reference clock
++ * are stable.
+ */
+ if (pcie->reset_gpio) {
+- fsleep(PCIE_T_PERST_CLK_US);
++ msleep(PCIE_T_PVPERL_MS);
+ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
+ }
+
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index e588fcc5458936..b5ca5260f9049f 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -396,6 +396,10 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
+ return ret;
+ }
+
++ /* Perform cleanup that requires refclk */
++ pci_epc_deinit_notify(pci->ep.epc);
++ dw_pcie_ep_cleanup(&pci->ep);
++
+ /* Assert WAKE# to RC to indicate device is ready */
+ gpiod_set_value_cansleep(pcie_ep->wake, 1);
+ usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500);
+@@ -540,8 +544,6 @@ static void qcom_pcie_perst_assert(struct dw_pcie *pci)
+ {
+ struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
+
+- pci_epc_deinit_notify(pci->ep.epc);
+- dw_pcie_ep_cleanup(&pci->ep);
+ qcom_pcie_disable_resources(pcie_ep);
+ pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index ef44a82be058b2..2b33d03ed05416 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -133,6 +133,7 @@
+
+ /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
+ #define PARF_INT_ALL_LINK_UP BIT(13)
++#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
+
+ /* PARF_NO_SNOOP_OVERIDE register fields */
+ #define WR_NO_SNOOP_OVERIDE_EN BIT(1)
+@@ -1716,7 +1717,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ goto err_host_deinit;
+ }
+
+- writel_relaxed(PARF_INT_ALL_LINK_UP, pcie->parf + PARF_INT_ALL_MASK);
++ writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7,
++ pcie->parf + PARF_INT_ALL_MASK);
+ }
+
+ qcom_pcie_icc_opp_update(pcie);
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index c1394f2ab63ff1..ced3b7e7bdaded 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -1704,9 +1704,6 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
+ if (ret)
+ dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
+
+- pci_epc_deinit_notify(pcie->pci.ep.epc);
+- dw_pcie_ep_cleanup(&pcie->pci.ep);
+-
+ reset_control_assert(pcie->core_rst);
+
+ tegra_pcie_disable_phy(pcie);
+@@ -1785,6 +1782,10 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ goto fail_phy;
+ }
+
++ /* Perform cleanup that requires refclk */
++ pci_epc_deinit_notify(pcie->pci.ep.epc);
++ dw_pcie_ep_cleanup(&pcie->pci.ep);
++
+ /* Clear any stale interrupt statuses */
+ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
+ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
+diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+index 7d070b1def1166..54286a40bdfbf7 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
++++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+@@ -867,12 +867,18 @@ static int pci_epf_mhi_bind(struct pci_epf *epf)
+ {
+ struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
++ struct device *dev = &epf->dev;
+ struct platform_device *pdev = to_platform_device(epc->dev.parent);
+ struct resource *res;
+ int ret;
+
+ /* Get MMIO base address from Endpoint controller */
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
++ if (!res) {
++ dev_err(dev, "Failed to get \"mmio\" resource\n");
++ return -ENODEV;
++ }
++
+ epf_mhi->mmio_phys = res->start;
+ epf_mhi->mmio_size = resource_size(res);
+
+diff --git a/drivers/pci/hotplug/cpqphp_pci.c b/drivers/pci/hotplug/cpqphp_pci.c
+index 718bc6cf12cb3c..974c7db3265b5a 100644
+--- a/drivers/pci/hotplug/cpqphp_pci.c
++++ b/drivers/pci/hotplug/cpqphp_pci.c
+@@ -135,11 +135,13 @@ int cpqhp_unconfigure_device(struct pci_func *func)
+ static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
+ {
+ u32 vendID = 0;
++ int ret;
+
+- if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
+- return -1;
++ ret = pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID);
++ if (ret != PCIBIOS_SUCCESSFUL)
++ return PCIBIOS_DEVICE_NOT_FOUND;
+ if (PCI_POSSIBLE_ERROR(vendID))
+- return -1;
++ return PCIBIOS_DEVICE_NOT_FOUND;
+ return pci_bus_read_config_dword(bus, devfn, offset, value);
+ }
+
+@@ -202,13 +204,15 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
+ {
+ u16 tdevice;
+ u32 work;
++ int ret;
+ u8 tbus;
+
+ ctrl->pci_bus->number = bus_num;
+
+ for (tdevice = 0; tdevice < 0xFF; tdevice++) {
+ /* Scan for access first */
+- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
++ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
++ if (ret)
+ continue;
+ dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice);
+ /* Yep we got one. Not a bridge ? */
+@@ -220,7 +224,8 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
+ }
+ for (tdevice = 0; tdevice < 0xFF; tdevice++) {
+ /* Scan for access first */
+- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
++ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
++ if (ret)
+ continue;
+ dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
+ /* Yep we got one. bridge ? */
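
The cpqphp changes above stop comparing a config-space accessor's result against a literal -1: pci_bus_read_config_dword() returns PCIBIOS_* status codes (0 on success), so the old == -1 test never matched and failures fell through as success. A runnable sketch of checking status codes instead of a sentinel (the enum values are stand-ins modelled on the PCIBIOS_* codes):

#include <stdio.h>

enum { BUS_SUCCESS = 0, BUS_DEVICE_NOT_FOUND = 0x86 }; /* cf. PCIBIOS_* */

/* Stand-in accessor: fails for odd device numbers. */
static int read_config(unsigned int devfn, unsigned int *value)
{
	if (devfn & 1)
		return BUS_DEVICE_NOT_FOUND;
	*value = 0x12345678;
	return BUS_SUCCESS;
}

int main(void)
{
	unsigned int value;
	int ret;

	for (unsigned int devfn = 0; devfn < 4; devfn++) {
		ret = read_config(devfn, &value);
		if (ret != BUS_SUCCESS) { /* not "== -1": these are status codes */
			printf("devfn %u: error %#x\n", devfn, ret);
			continue;
		}
		printf("devfn %u: value %#x\n", devfn, value);
	}
	return 0;
}
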
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 225a6cd2e9ca3b..08f170fd3efb3e 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5248,7 +5248,7 @@ static ssize_t reset_method_store(struct device *dev,
+ const char *buf, size_t count)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+- char *options, *name;
++ char *options, *tmp_options, *name;
+ int m, n;
+ u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
+
+@@ -5268,7 +5268,8 @@ static ssize_t reset_method_store(struct device *dev,
+ return -ENOMEM;
+
+ n = 0;
+- while ((name = strsep(&options, " ")) != NULL) {
++ tmp_options = options;
++ while ((name = strsep(&tmp_options, " ")) != NULL) {
+ if (sysfs_streq(name, ""))
+ continue;
+
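
The tmp_options cursor above works around a property of strsep(): it advances the pointer it is handed, so after the loop the options pointer would no longer reference the start of the buffer duplicated from the sysfs write, and freeing or reusing it through that pointer would go wrong. A runnable demonstration of tokenizing through a separate cursor:

#define _DEFAULT_SOURCE /* for strsep()/strdup() on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *options = strdup("flr bus pm"); /* cf. the duplicated sysfs buffer */
	char *cursor = options;               /* tokenize via a copy ... */
	char *name;

	if (!options)
		return 1;
	while ((name = strsep(&cursor, " ")) != NULL)
		printf("token: %s\n", name);

	free(options); /* ... so the original pointer is still the allocation */
	return 0;
}
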
+diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
+index 0f87cade10f74b..ed645c7a4e4b41 100644
+--- a/drivers/pci/slot.c
++++ b/drivers/pci/slot.c
+@@ -79,6 +79,7 @@ static void pci_slot_release(struct kobject *kobj)
+ up_read(&pci_bus_sem);
+
+ list_del(&slot->list);
++ pci_bus_put(slot->bus);
+
+ kfree(slot);
+ }
+@@ -261,7 +262,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
+ goto err;
+ }
+
+- slot->bus = parent;
++ slot->bus = pci_bus_get(parent);
+ slot->number = slot_nr;
+
+ slot->kobj.kset = pci_slots_kset;
+@@ -269,6 +270,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
+ slot_name = make_slot_name(name);
+ if (!slot_name) {
+ err = -ENOMEM;
++ pci_bus_put(slot->bus);
+ kfree(slot);
+ goto err;
+ }
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 397a46410f7cb7..30506c43776f15 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -2178,8 +2178,6 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ continue;
+
+ xp = arm_cmn_node_to_xp(cmn, dn);
+- dn->portid_bits = xp->portid_bits;
+- dn->deviceid_bits = xp->deviceid_bits;
+ dn->dtc = xp->dtc;
+ dn->dtm = xp->dtm;
+ if (cmn->multi_dtm)
+@@ -2420,6 +2418,8 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ }
+
+ arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn);
++ dn->portid_bits = xp->portid_bits;
++ dn->deviceid_bits = xp->deviceid_bits;
+
+ switch (dn->type) {
+ case CMN_TYPE_DTC:
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index d5fa92ba837397..dabdb9f7bb82c4 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -431,6 +431,17 @@ static int smmu_pmu_event_init(struct perf_event *event)
+ return -EINVAL;
+ }
+
++ /*
++ * Ensure all events are on the same cpu so all events are in the
++ * same cpu context, to avoid races on pmu_enable etc.
++ */
++ event->cpu = smmu_pmu->on_cpu;
++
++ hwc->idx = -1;
++
++ if (event->group_leader == event)
++ return 0;
++
+ for_each_sibling_event(sibling, event->group_leader) {
+ if (is_software_event(sibling))
+ continue;
+@@ -442,14 +453,6 @@ static int smmu_pmu_event_init(struct perf_event *event)
+ return -EINVAL;
+ }
+
+- hwc->idx = -1;
+-
+- /*
+- * Ensure all events are on the same cpu so all events are in the
+- * same cpu context, to avoid races on pmu_enable etc.
+- */
+- event->cpu = smmu_pmu->on_cpu;
+-
+ return 0;
+ }
+
+diff --git a/drivers/phy/phy-airoha-pcie-regs.h b/drivers/phy/phy-airoha-pcie-regs.h
+index bb1f679ca1dfa0..b938a7b468fee3 100644
+--- a/drivers/phy/phy-airoha-pcie-regs.h
++++ b/drivers/phy/phy-airoha-pcie-regs.h
+@@ -197,9 +197,9 @@
+ #define CSR_2L_PXP_TX1_MULTLANE_EN BIT(0)
+
+ #define REG_CSR_2L_RX0_REV0 0x00fc
+-#define CSR_2L_PXP_VOS_PNINV GENMASK(3, 2)
+-#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(6, 4)
+-#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(10, 8)
++#define CSR_2L_PXP_VOS_PNINV GENMASK(19, 18)
++#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(22, 20)
++#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(26, 24)
+
+ #define REG_CSR_2L_RX0_PHYCK_DIV 0x0100
+ #define CSR_2L_PXP_RX0_PHYCK_SEL GENMASK(9, 8)
+diff --git a/drivers/phy/phy-airoha-pcie.c b/drivers/phy/phy-airoha-pcie.c
+index 1e410eb410580c..56e9ade8a9fd3d 100644
+--- a/drivers/phy/phy-airoha-pcie.c
++++ b/drivers/phy/phy-airoha-pcie.c
+@@ -459,7 +459,7 @@ static void airoha_pcie_phy_init_clk_out(struct airoha_pcie_phy *pcie_phy)
+ airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_CLKTX1_OFFSET,
+ CSR_2L_PXP_CLKTX1_SR);
+ airoha_phy_csr_2l_update_field(pcie_phy, REG_CSR_2L_PLL_CMN_RESERVE0,
+- CSR_2L_PXP_PLL_RESERVE_MASK, 0xdd);
++ CSR_2L_PXP_PLL_RESERVE_MASK, 0xd0d);
+ }
+
+ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy)
+@@ -471,9 +471,9 @@ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy)
+ PCIE_SW_XFI_RXPCS_RST | PCIE_SW_REF_RST |
+ PCIE_SW_RX_RST);
+ airoha_phy_pma0_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET,
+- PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET);
++ PCIE_TX_TOP_RST | PCIE_TX_CAL_RST);
+ airoha_phy_pma1_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET,
+- PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET);
++ PCIE_TX_TOP_RST | PCIE_TX_CAL_RST);
+ }
+
+ static void airoha_pcie_phy_init_rx(struct airoha_pcie_phy *pcie_phy)
+@@ -802,7 +802,7 @@ static void airoha_pcie_phy_init_ssc_jcpll(struct airoha_pcie_phy *pcie_phy)
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_IFM,
+ CSR_2L_PXP_JCPLL_SDM_IFM);
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_HREN,
+- REG_CSR_2L_JCPLL_SDM_HREN);
++ CSR_2L_PXP_JCPLL_SDM_HREN);
+ airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_JCPLL_RST_DLY,
+ CSR_2L_PXP_JCPLL_SDM_DI_EN);
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SSC,
+diff --git a/drivers/phy/realtek/phy-rtk-usb2.c b/drivers/phy/realtek/phy-rtk-usb2.c
+index e3ad7cea510998..e8ca2ec5998fe6 100644
+--- a/drivers/phy/realtek/phy-rtk-usb2.c
++++ b/drivers/phy/realtek/phy-rtk-usb2.c
+@@ -1023,6 +1023,8 @@ static int rtk_usb2phy_probe(struct platform_device *pdev)
+
+ rtk_phy->dev = &pdev->dev;
+ rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
++ if (!rtk_phy->phy_cfg)
++ return -ENOMEM;
+
+ memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
+
+diff --git a/drivers/phy/realtek/phy-rtk-usb3.c b/drivers/phy/realtek/phy-rtk-usb3.c
+index dfcf4b921bba63..96af483e5444b9 100644
+--- a/drivers/phy/realtek/phy-rtk-usb3.c
++++ b/drivers/phy/realtek/phy-rtk-usb3.c
+@@ -577,6 +577,8 @@ static int rtk_usb3phy_probe(struct platform_device *pdev)
+
+ rtk_phy->dev = &pdev->dev;
+ rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
++ if (!rtk_phy->phy_cfg)
++ return -ENOMEM;
+
+ memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
+
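+
+Both Realtek PHY fixes above add the NULL check that devm_kzalloc() requires;
+the memcpy() that follows would otherwise dereference a NULL pointer when the
+allocation fails. The canonical pattern, with priv/default_cfg as placeholders:
+
+	priv->cfg = devm_kzalloc(dev, sizeof(*priv->cfg), GFP_KERNEL);
+	if (!priv->cfg)			/* devm_kzalloc() returns NULL on failure */
+		return -ENOMEM;		/* bail out before touching the buffer */
+
+	memcpy(priv->cfg, default_cfg, sizeof(*priv->cfg));
+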
+diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
+index 0f6b55fec31de7..a71805997b028a 100644
+--- a/drivers/pinctrl/pinctrl-k210.c
++++ b/drivers/pinctrl/pinctrl-k210.c
+@@ -183,7 +183,7 @@ static const u32 k210_pinconf_mode_id_to_mode[] = {
+ [K210_PC_DEFAULT_INT13] = K210_PC_MODE_IN | K210_PC_PU,
+ };
+
+-#undef DEFAULT
++#undef K210_PC_DEFAULT
+
+ /*
+ * Pin functions configuration information.
+diff --git a/drivers/pinctrl/pinctrl-zynqmp.c b/drivers/pinctrl/pinctrl-zynqmp.c
+index 3c6d56fdb8c964..93454d2a26bcc6 100644
+--- a/drivers/pinctrl/pinctrl-zynqmp.c
++++ b/drivers/pinctrl/pinctrl-zynqmp.c
+@@ -49,7 +49,6 @@
+ * @name: Name of the pin mux function
+ * @groups: List of pin groups for this function
+ * @ngroups: Number of entries in @groups
+- * @node: Firmware node matching with the function
+ *
+ * This structure holds information about pin control function
+ * and function group names supporting that function.
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index d2dd66769aa891..a0eb4e01b3a755 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -667,7 +667,7 @@ static void pmic_gpio_config_dbg_show(struct pinctrl_dev *pctldev,
+ "push-pull", "open-drain", "open-source"
+ };
+ static const char *const strengths[] = {
+- "no", "high", "medium", "low"
++ "no", "low", "medium", "high"
+ };
+
+ pad = pctldev->desc->pins[pin].drv_data;
+diff --git a/drivers/pinctrl/renesas/Kconfig b/drivers/pinctrl/renesas/Kconfig
+index 14bd55d647319b..7f3f41c7fe54c8 100644
+--- a/drivers/pinctrl/renesas/Kconfig
++++ b/drivers/pinctrl/renesas/Kconfig
+@@ -41,6 +41,7 @@ config PINCTRL_RENESAS
+ select PINCTRL_PFC_R8A779H0 if ARCH_R8A779H0
+ select PINCTRL_RZG2L if ARCH_RZG2L
+ select PINCTRL_RZV2M if ARCH_R9A09G011
++ select PINCTRL_RZG2L if ARCH_R9A09G057
+ select PINCTRL_PFC_SH7203 if CPU_SUBTYPE_SH7203
+ select PINCTRL_PFC_SH7264 if CPU_SUBTYPE_SH7264
+ select PINCTRL_PFC_SH7269 if CPU_SUBTYPE_SH7269
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 5a403915fed2c6..3a81837b5e623b 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -2710,7 +2710,7 @@ static int rzg2l_pinctrl_register(struct rzg2l_pinctrl *pctrl)
+
+ ret = pinctrl_enable(pctrl->pctl);
+ if (ret)
+- dev_err_probe(pctrl->dev, ret, "pinctrl enable failed\n");
++ return dev_err_probe(pctrl->dev, ret, "pinctrl enable failed\n");
+
+ ret = rzg2l_gpio_register(pctrl);
+ if (ret)
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index c7781aea0b88b2..f1324466efac65 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -409,6 +409,7 @@ static int cros_typec_init_ports(struct cros_typec_data *typec)
+ return 0;
+
+ unregister_ports:
++ fwnode_handle_put(fwnode);
+ cros_unregister_ports(typec);
+ return ret;
+ }
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index abdca3f05c5c15..89f5f44857d555 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -3696,10 +3696,28 @@ static int asus_wmi_custom_fan_curve_init(struct asus_wmi *asus)
+ /* Throttle thermal policy ****************************************************/
+ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ {
+- u8 value = asus->throttle_thermal_policy_mode;
+ u32 retval;
++ u8 value;
+ int err;
+
++ if (asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO) {
++ switch (asus->throttle_thermal_policy_mode) {
++ case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT:
++ value = ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO;
++ break;
++ case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST:
++ value = ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO;
++ break;
++ case ASUS_THROTTLE_THERMAL_POLICY_SILENT:
++ value = ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO;
++ break;
++ default:
++ return -EINVAL;
++ }
++ } else {
++ value = asus->throttle_thermal_policy_mode;
++ }
++
+ err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev,
+ value, &retval);
+
+@@ -3804,46 +3822,6 @@ static ssize_t throttle_thermal_policy_store(struct device *dev,
+ static DEVICE_ATTR_RW(throttle_thermal_policy);
+
+ /* Platform profile ***********************************************************/
+-static int asus_wmi_platform_profile_to_vivo(struct asus_wmi *asus, int mode)
+-{
+- bool vivo;
+-
+- vivo = asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO;
+-
+- if (vivo) {
+- switch (mode) {
+- case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT:
+- return ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO;
+- case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST:
+- return ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO;
+- case ASUS_THROTTLE_THERMAL_POLICY_SILENT:
+- return ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO;
+- }
+- }
+-
+- return mode;
+-}
+-
+-static int asus_wmi_platform_profile_mode_from_vivo(struct asus_wmi *asus, int mode)
+-{
+- bool vivo;
+-
+- vivo = asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO;
+-
+- if (vivo) {
+- switch (mode) {
+- case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO:
+- return ASUS_THROTTLE_THERMAL_POLICY_DEFAULT;
+- case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO:
+- return ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST;
+- case ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO:
+- return ASUS_THROTTLE_THERMAL_POLICY_SILENT;
+- }
+- }
+-
+- return mode;
+-}
+-
+ static int asus_wmi_platform_profile_get(struct platform_profile_handler *pprof,
+ enum platform_profile_option *profile)
+ {
+@@ -3853,7 +3831,7 @@ static int asus_wmi_platform_profile_get(struct platform_profile_handler *pprof,
+ asus = container_of(pprof, struct asus_wmi, platform_profile_handler);
+ tp = asus->throttle_thermal_policy_mode;
+
+- switch (asus_wmi_platform_profile_mode_from_vivo(asus, tp)) {
++ switch (tp) {
+ case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT:
+ *profile = PLATFORM_PROFILE_BALANCED;
+ break;
+@@ -3892,7 +3870,7 @@ static int asus_wmi_platform_profile_set(struct platform_profile_handler *pprof,
+ return -EOPNOTSUPP;
+ }
+
+- asus->throttle_thermal_policy_mode = asus_wmi_platform_profile_to_vivo(asus, tp);
++ asus->throttle_thermal_policy_mode = tp;
+ return throttle_thermal_policy_write(asus);
+ }
+
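+
+The asus-wmi rework above keeps the generic policy mode in driver state and
+converts it to the Vivobook-specific encoding only at the single point where
+it is written to firmware, instead of translating back and forth in the
+platform-profile hooks. A sketch of that boundary, with hypothetical names:
+
+	/* State holds generic modes; translate only when talking to firmware. */
+	static int foo_mode_to_fw(struct foo *foo, u8 mode, u8 *fw_value)
+	{
+		if (!foo->is_vivobook) {
+			*fw_value = mode;
+			return 0;
+		}
+
+		switch (mode) {
+		case FOO_MODE_DEFAULT:
+			*fw_value = FOO_MODE_DEFAULT_VIVO;
+			return 0;
+		case FOO_MODE_SILENT:
+			*fw_value = FOO_MODE_SILENT_VIVO;
+			return 0;
+		default:
+			return -EINVAL;	/* unknown modes are rejected, not passed on */
+		}
+	}
+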
+diff --git a/drivers/platform/x86/intel/bxtwc_tmu.c b/drivers/platform/x86/intel/bxtwc_tmu.c
+index d0e2a3c293b0b0..9ac801b929b93c 100644
+--- a/drivers/platform/x86/intel/bxtwc_tmu.c
++++ b/drivers/platform/x86/intel/bxtwc_tmu.c
+@@ -48,9 +48,8 @@ static irqreturn_t bxt_wcove_tmu_irq_handler(int irq, void *data)
+ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
+ {
+ struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
+- struct regmap_irq_chip_data *regmap_irq_chip;
+ struct wcove_tmu *wctmu;
+- int ret, virq, irq;
++ int ret;
+
+ wctmu = devm_kzalloc(&pdev->dev, sizeof(*wctmu), GFP_KERNEL);
+ if (!wctmu)
+@@ -59,27 +58,18 @@ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
+ wctmu->dev = &pdev->dev;
+ wctmu->regmap = pmic->regmap;
+
+- irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
+- return irq;
++ wctmu->irq = platform_get_irq(pdev, 0);
++ if (wctmu->irq < 0)
++ return wctmu->irq;
+
+- regmap_irq_chip = pmic->irq_chip_data_tmu;
+- virq = regmap_irq_get_virq(regmap_irq_chip, irq);
+- if (virq < 0) {
+- dev_err(&pdev->dev,
+- "failed to get virtual interrupt=%d\n", irq);
+- return virq;
+- }
+-
+- ret = devm_request_threaded_irq(&pdev->dev, virq,
++ ret = devm_request_threaded_irq(&pdev->dev, wctmu->irq,
+ NULL, bxt_wcove_tmu_irq_handler,
+ IRQF_ONESHOT, "bxt_wcove_tmu", wctmu);
+ if (ret) {
+ dev_err(&pdev->dev, "request irq failed: %d,virq: %d\n",
+- ret, virq);
++ ret, wctmu->irq);
+ return ret;
+ }
+- wctmu->irq = virq;
+
+ /* Unmask TMU second level Wake & System alarm */
+ regmap_update_bits(wctmu->regmap, BXTWC_MTMUIRQ_REG,
+diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
+index c04bb7f97a4db1..c3ca2ac91b0569 100644
+--- a/drivers/platform/x86/intel/pmt/class.c
++++ b/drivers/platform/x86/intel/pmt/class.c
+@@ -59,10 +59,12 @@ pmt_memcpy64_fromio(void *to, const u64 __iomem *from, size_t count)
+ }
+
+ int pmt_telem_read_mmio(struct pci_dev *pdev, struct pmt_callbacks *cb, u32 guid, void *buf,
+- void __iomem *addr, u32 count)
++ void __iomem *addr, loff_t off, u32 count)
+ {
+ if (cb && cb->read_telem)
+- return cb->read_telem(pdev, guid, buf, count);
++ return cb->read_telem(pdev, guid, buf, off, count);
++
++ addr += off;
+
+ if (guid == GUID_SPR_PUNIT)
+ /* PUNIT on SPR only supports aligned 64-bit read */
+@@ -96,7 +98,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
+ count = entry->size - off;
+
+ count = pmt_telem_read_mmio(entry->ep->pcidev, entry->cb, entry->header.guid, buf,
+- entry->base + off, count);
++ entry->base, off, count);
+
+ return count;
+ }
+diff --git a/drivers/platform/x86/intel/pmt/class.h b/drivers/platform/x86/intel/pmt/class.h
+index a267ac96442301..b2006d57779d66 100644
+--- a/drivers/platform/x86/intel/pmt/class.h
++++ b/drivers/platform/x86/intel/pmt/class.h
+@@ -62,7 +62,7 @@ struct intel_pmt_namespace {
+ };
+
+ int pmt_telem_read_mmio(struct pci_dev *pdev, struct pmt_callbacks *cb, u32 guid, void *buf,
+- void __iomem *addr, u32 count);
++ void __iomem *addr, loff_t off, u32 count);
+ bool intel_pmt_is_early_client_hw(struct device *dev);
+ int intel_pmt_dev_create(struct intel_pmt_entry *entry,
+ struct intel_pmt_namespace *ns,
+diff --git a/drivers/platform/x86/intel/pmt/telemetry.c b/drivers/platform/x86/intel/pmt/telemetry.c
+index c9feac859e574c..0cea617c6c2e25 100644
+--- a/drivers/platform/x86/intel/pmt/telemetry.c
++++ b/drivers/platform/x86/intel/pmt/telemetry.c
+@@ -219,7 +219,7 @@ int pmt_telem_read(struct telem_endpoint *ep, u32 id, u64 *data, u32 count)
+ if (offset + NUM_BYTES_QWORD(count) > size)
+ return -EINVAL;
+
+- pmt_telem_read_mmio(ep->pcidev, ep->cb, ep->header.guid, data, ep->base + offset,
++ pmt_telem_read_mmio(ep->pcidev, ep->cb, ep->header.guid, data, ep->base, offset,
+ NUM_BYTES_QWORD(count));
+
+ return ep->present ? 0 : -EPIPE;
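+
+The three PMT hunks above change pmt_telem_read_mmio() to take the base and
+offset separately: vendor callbacks want the raw offset, and only the direct
+MMIO path should add the two. Reduced to its essence, with hypothetical names:
+
+	static int foo_read(void __iomem *base, loff_t off, void *buf, u32 count,
+			    int (*read_cb)(void *buf, loff_t off, u32 count))
+	{
+		if (read_cb)				/* callback expects the offset... */
+			return read_cb(buf, off, count);
+
+		memcpy_fromio(buf, base + off, count);	/* ...MMIO expects base + off */
+		return 0;
+	}
+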
+diff --git a/drivers/platform/x86/panasonic-laptop.c b/drivers/platform/x86/panasonic-laptop.c
+index 2bf94d0ab32432..22ca70eb822718 100644
+--- a/drivers/platform/x86/panasonic-laptop.c
++++ b/drivers/platform/x86/panasonic-laptop.c
+@@ -614,8 +614,7 @@ static ssize_t eco_mode_show(struct device *dev, struct device_attribute *attr,
+ result = 1;
+ break;
+ default:
+- result = -EIO;
+- break;
++ return -EIO;
+ }
+ return sysfs_emit(buf, "%u\n", result);
+ }
+@@ -761,7 +760,12 @@ static ssize_t current_brightness_store(struct device *dev, struct device_attrib
+ static ssize_t cdpower_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- return sysfs_emit(buf, "%d\n", get_optd_power_state());
++ int state = get_optd_power_state();
++
++ if (state < 0)
++ return state;
++
++ return sysfs_emit(buf, "%d\n", state);
+ }
+
+ static ssize_t cdpower_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/pmdomain/ti/ti_sci_pm_domains.c b/drivers/pmdomain/ti/ti_sci_pm_domains.c
+index 1510d5ddae3dec..0df3eb7ff09a3d 100644
+--- a/drivers/pmdomain/ti/ti_sci_pm_domains.c
++++ b/drivers/pmdomain/ti/ti_sci_pm_domains.c
+@@ -161,6 +161,7 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ break;
+
+ if (args.args_count >= 1 && args.np == dev->of_node) {
++ of_node_put(args.np);
+ if (args.args[0] > max_id) {
+ max_id = args.args[0];
+ } else {
+@@ -192,7 +193,10 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ pm_genpd_init(&pd->pd, NULL, true);
+
+ list_add(&pd->node, &pd_provider->pd_list);
++ } else {
++ of_node_put(args.np);
+ }
++
+ index++;
+ }
+ }
+diff --git a/drivers/power/reset/Kconfig b/drivers/power/reset/Kconfig
+index 389d5a193e5dce..f5fc33a8bf4431 100644
+--- a/drivers/power/reset/Kconfig
++++ b/drivers/power/reset/Kconfig
+@@ -79,6 +79,7 @@ config POWER_RESET_EP93XX
+ bool "Cirrus EP93XX reset driver" if COMPILE_TEST
+ depends on MFD_SYSCON
+ default ARCH_EP93XX
++ select AUXILIARY_BUS
+ help
+ This driver provides restart support for Cirrus EP93XX SoC.
+
+diff --git a/drivers/power/sequencing/Kconfig b/drivers/power/sequencing/Kconfig
+index c9f1cdb6652488..ddcc42a984921c 100644
+--- a/drivers/power/sequencing/Kconfig
++++ b/drivers/power/sequencing/Kconfig
+@@ -16,6 +16,7 @@ if POWER_SEQUENCING
+ config POWER_SEQUENCING_QCOM_WCN
+ tristate "Qualcomm WCN family PMU driver"
+ default m if ARCH_QCOM
++ depends on OF
+ help
+ Say Y here to enable the power sequencing driver for Qualcomm
+ WCN Bluetooth/WLAN chipsets.
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 750fda543308c8..51fb88aca0f9fd 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -449,9 +449,29 @@ static u8
+ [BQ27XXX_REG_AP] = 0x18,
+ BQ27XXX_DM_REG_ROWS,
+ },
++ bq27426_regs[BQ27XXX_REG_MAX] = {
++ [BQ27XXX_REG_CTRL] = 0x00,
++ [BQ27XXX_REG_TEMP] = 0x02,
++ [BQ27XXX_REG_INT_TEMP] = 0x1e,
++ [BQ27XXX_REG_VOLT] = 0x04,
++ [BQ27XXX_REG_AI] = 0x10,
++ [BQ27XXX_REG_FLAGS] = 0x06,
++ [BQ27XXX_REG_TTE] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTF] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTES] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTECP] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_NAC] = 0x08,
++ [BQ27XXX_REG_RC] = 0x0c,
++ [BQ27XXX_REG_FCC] = 0x0e,
++ [BQ27XXX_REG_CYCT] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_AE] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_SOC] = 0x1c,
++ [BQ27XXX_REG_DCAP] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_AP] = 0x18,
++ BQ27XXX_DM_REG_ROWS,
++ },
+ #define bq27411_regs bq27421_regs
+ #define bq27425_regs bq27421_regs
+-#define bq27426_regs bq27421_regs
+ #define bq27441_regs bq27421_regs
+ #define bq27621_regs bq27421_regs
+ bq27z561_regs[BQ27XXX_REG_MAX] = {
+@@ -769,10 +789,23 @@ static enum power_supply_property bq27421_props[] = {
+ };
+ #define bq27411_props bq27421_props
+ #define bq27425_props bq27421_props
+-#define bq27426_props bq27421_props
+ #define bq27441_props bq27421_props
+ #define bq27621_props bq27421_props
+
++static enum power_supply_property bq27426_props[] = {
++ POWER_SUPPLY_PROP_STATUS,
++ POWER_SUPPLY_PROP_PRESENT,
++ POWER_SUPPLY_PROP_VOLTAGE_NOW,
++ POWER_SUPPLY_PROP_CURRENT_NOW,
++ POWER_SUPPLY_PROP_CAPACITY,
++ POWER_SUPPLY_PROP_CAPACITY_LEVEL,
++ POWER_SUPPLY_PROP_TEMP,
++ POWER_SUPPLY_PROP_TECHNOLOGY,
++ POWER_SUPPLY_PROP_CHARGE_FULL,
++ POWER_SUPPLY_PROP_CHARGE_NOW,
++ POWER_SUPPLY_PROP_MANUFACTURER,
++};
++
+ static enum power_supply_property bq27z561_props[] = {
+ POWER_SUPPLY_PROP_STATUS,
+ POWER_SUPPLY_PROP_PRESENT,
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 49534458a9f7d3..73cc9c236e8333 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -484,8 +484,6 @@ EXPORT_SYMBOL_GPL(power_supply_get_by_name);
+ */
+ void power_supply_put(struct power_supply *psy)
+ {
+- might_sleep();
+-
+ atomic_dec(&psy->use_cnt);
+ put_device(&psy->dev);
+ }
+diff --git a/drivers/power/supply/rt9471.c b/drivers/power/supply/rt9471.c
+index c04af1ee89c675..67b86ac91a21dd 100644
+--- a/drivers/power/supply/rt9471.c
++++ b/drivers/power/supply/rt9471.c
+@@ -139,6 +139,19 @@ enum {
+ RT9471_PORTSTAT_DCP,
+ };
+
++enum {
++ RT9471_ICSTAT_SLEEP = 0,
++ RT9471_ICSTAT_VBUSRDY,
++ RT9471_ICSTAT_TRICKLECHG,
++ RT9471_ICSTAT_PRECHG,
++ RT9471_ICSTAT_FASTCHG,
++ RT9471_ICSTAT_IEOC,
++ RT9471_ICSTAT_BGCHG,
++ RT9471_ICSTAT_CHGDONE,
++ RT9471_ICSTAT_CHGFAULT,
++ RT9471_ICSTAT_OTG = 15,
++};
++
+ struct rt9471_chip {
+ struct device *dev;
+ struct regmap *regmap;
+@@ -153,8 +166,8 @@ struct rt9471_chip {
+ };
+
+ static const struct reg_field rt9471_reg_fields[F_MAX_FIELDS] = {
+- [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 0),
+- [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 1, 1),
++ [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 1),
++ [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 2, 2),
+ [F_CHG_EN] = REG_FIELD(RT9471_REG_FUNC, 0, 0),
+ [F_HZ] = REG_FIELD(RT9471_REG_FUNC, 5, 5),
+ [F_BATFET_DIS] = REG_FIELD(RT9471_REG_FUNC, 7, 7),
+@@ -255,31 +268,32 @@ static int rt9471_get_ieoc(struct rt9471_chip *chip, int *microamp)
+
+ static int rt9471_get_status(struct rt9471_chip *chip, int *status)
+ {
+- unsigned int chg_ready, chg_done, fault_stat;
++ unsigned int ic_stat;
+ int ret;
+
+- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_RDY], &chg_ready);
+- if (ret)
+- return ret;
+-
+- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_DONE], &chg_done);
++ ret = regmap_field_read(chip->rm_fields[F_IC_STAT], &ic_stat);
+ if (ret)
+ return ret;
+
+- ret = regmap_read(chip->regmap, RT9471_REG_STAT1, &fault_stat);
+- if (ret)
+- return ret;
+-
+- fault_stat &= RT9471_CHGFAULT_MASK;
+-
+- if (chg_ready && chg_done)
+- *status = POWER_SUPPLY_STATUS_FULL;
+- else if (chg_ready && fault_stat)
++ switch (ic_stat) {
++ case RT9471_ICSTAT_VBUSRDY:
++ case RT9471_ICSTAT_CHGFAULT:
+ *status = POWER_SUPPLY_STATUS_NOT_CHARGING;
+- else if (chg_ready && !fault_stat)
++ break;
++ case RT9471_ICSTAT_TRICKLECHG ... RT9471_ICSTAT_BGCHG:
+ *status = POWER_SUPPLY_STATUS_CHARGING;
+- else
++ break;
++ case RT9471_ICSTAT_CHGDONE:
++ *status = POWER_SUPPLY_STATUS_FULL;
++ break;
++ case RT9471_ICSTAT_SLEEP:
++ case RT9471_ICSTAT_OTG:
+ *status = POWER_SUPPLY_STATUS_DISCHARGING;
++ break;
++ default:
++ *status = POWER_SUPPLY_STATUS_UNKNOWN;
++ break;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 6e752e148b98cc..210368099a0642 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -75,7 +75,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ state->duty_cycle < state->period)
+ dev_warn(pwmchip_parent(chip), ".apply ignored .polarity\n");
+
+- if (state->enabled &&
++ if (state->enabled && s2.enabled &&
+ last->polarity == state->polarity &&
+ last->period > s2.period &&
+ last->period <= state->period)
+@@ -83,7 +83,11 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ ".apply didn't pick the best available period (requested: %llu, applied: %llu, possible: %llu)\n",
+ state->period, s2.period, last->period);
+
+- if (state->enabled && state->period < s2.period)
++ /*
++	 * Rounding the period up is only fine if duty_cycle is then 0, because
++	 * a flat line doesn't have a characteristic period.
++ */
++ if (state->enabled && s2.enabled && state->period < s2.period && s2.duty_cycle)
+ dev_warn(pwmchip_parent(chip),
+ ".apply is supposed to round down period (requested: %llu, applied: %llu)\n",
+ state->period, s2.period);
+@@ -99,7 +103,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ s2.duty_cycle, s2.period,
+ last->duty_cycle, last->period);
+
+- if (state->enabled && state->duty_cycle < s2.duty_cycle)
++ if (state->enabled && s2.enabled && state->duty_cycle < s2.duty_cycle)
+ dev_warn(pwmchip_parent(chip),
+ ".apply is supposed to round down duty_cycle (requested: %llu/%llu, applied: %llu/%llu)\n",
+ state->duty_cycle, state->period,
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index 9e2bbf5b4a8ce7..0375987194318f 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -26,6 +26,7 @@
+ #define MX3_PWMSR 0x04 /* PWM Status Register */
+ #define MX3_PWMSAR 0x0C /* PWM Sample Register */
+ #define MX3_PWMPR 0x10 /* PWM Period Register */
++#define MX3_PWMCNR 0x14 /* PWM Counter Register */
+
+ #define MX3_PWMCR_FWM GENMASK(27, 26)
+ #define MX3_PWMCR_STOPEN BIT(25)
+@@ -219,10 +220,12 @@ static void pwm_imx27_wait_fifo_slot(struct pwm_chip *chip,
+ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
+ {
+- unsigned long period_cycles, duty_cycles, prescale;
++ unsigned long period_cycles, duty_cycles, prescale, period_us, tmp;
+ struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
+ unsigned long long c;
+ unsigned long long clkrate;
++ unsigned long flags;
++ int val;
+ int ret;
+ u32 cr;
+
+@@ -263,7 +266,98 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ pwm_imx27_sw_reset(chip);
+ }
+
+- writel(duty_cycles, imx->mmio_base + MX3_PWMSAR);
++ val = readl(imx->mmio_base + MX3_PWMPR);
++ val = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
++ cr = readl(imx->mmio_base + MX3_PWMCR);
++ tmp = NSEC_PER_SEC * (u64)(val + 2) * MX3_PWMCR_PRESCALER_GET(cr);
++ tmp = DIV_ROUND_UP_ULL(tmp, clkrate);
++ period_us = DIV_ROUND_UP_ULL(tmp, 1000);
++
++ /*
++ * ERR051198:
++ * PWM: PWM output may not function correctly if the FIFO is empty when
++ * a new SAR value is programmed
++ *
++ * Description:
++ * When the PWM FIFO is empty, a new value programmed to the PWM Sample
++ * register (PWM_PWMSAR) will be directly applied even if the current
++ * timer period has not expired.
++ *
++ * If the new SAMPLE value programmed in the PWM_PWMSAR register is
++ * less than the previous value, and the PWM counter register
++ * (PWM_PWMCNR) that contains the current COUNT value is greater than
++ * the new programmed SAMPLE value, the current period will not flip
++ * the level. This may result in an output pulse with a duty cycle of
++ * 100%.
++ *
++ * Consider a change from
++ * ________
++ * / \______/
++ * ^ * ^
++ * to
++ * ____
++ * / \__________/
++ * ^ ^
++	 * At the time marked by *, the newly written value is applied to SAR
++	 * immediately if the FIFO is empty, even though the period is not over.
++ *
++ * ________ ____________________
++ * / \______/ \__________/
++ * ^ ^ * ^ ^
++ * |<-- old SAR -->| |<-- new SAR -->|
++ *
++	 * That is, the output stays active for a whole period.
++ *
++ * Workaround:
++	 * If the new SAR is less than the old SAR and the current counter is
++	 * inside the errata window, write an extra old SAR into the FIFO so
++	 * that the new SAR only takes effect at the next period.
++ *
++	 * Sometimes the period is quite long, e.g. over 1 second. If the old
++	 * SAR were written into the FIFO unconditionally, the new SAR would
++	 * always have to wait for the next period, which may be too long.
++ *
++	 * Turn off interrupts to ensure that no IRQ or reschedule happens
++	 * during the above operations; if one did, the PWM counter value
++	 * read here would be stale and the wrong action could be taken.
++ *
++	 * Add a safety margin of 1.5us because the IO write takes some time
++	 * to complete.
++ *
++ * Use writel_relaxed() to minimize the interval between two writes to
++ * the SAR register to increase the fastest PWM frequency supported.
++ *
++	 * When the PWM period is longer than 2us (i.e. below 500kHz), this
++	 * workaround solves the problem. No software workaround is available
++	 * if the PWM period is shorter than an IO write; in that case, just
++	 * make a best effort to fill the old data into the FIFO.
++ */
++ c = clkrate * 1500;
++ do_div(c, NSEC_PER_SEC);
++
++ local_irq_save(flags);
++ val = FIELD_GET(MX3_PWMSR_FIFOAV, readl_relaxed(imx->mmio_base + MX3_PWMSR));
++
++ if (duty_cycles < imx->duty_cycle && (cr & MX3_PWMCR_EN)) {
++ if (period_us < 2) { /* 2us = 500 kHz */
++ /* Best effort attempt to fix up >500 kHz case */
++ udelay(3 * period_us);
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ } else if (val < MX3_PWMSR_FIFOAV_2WORDS) {
++ val = readl_relaxed(imx->mmio_base + MX3_PWMCNR);
++ /*
++ * If counter is close to period, controller may roll over when
++ * next IO write.
++ */
++ if ((val + c >= duty_cycles && val < imx->duty_cycle) ||
++ val + c >= period_cycles)
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ }
++ }
++ writel_relaxed(duty_cycles, imx->mmio_base + MX3_PWMSAR);
++ local_irq_restore(flags);
++
+ writel(period_cycles, imx->mmio_base + MX3_PWMPR);
+
+ /*
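+
+Condensing the ERR051198 guard above into a predicate may make the race
+window easier to see; margin is the IO-write time converted to counter
+cycles, and all names here are illustrative:
+
+	static bool foo_need_old_sample(u32 cnt, u32 old_duty, u32 new_duty,
+					u32 period, u32 margin)
+	{
+		if (new_duty >= old_duty)
+			return false;	/* a rising duty cycle is not affected */
+
+		/* The counter could pass the new sample before the write
+		 * lands, or roll over into the next period mid-write. */
+		return (cnt + margin >= new_duty && cnt < old_duty) ||
+		       cnt + margin >= period;
+	}
+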
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 28e7ce60cb617c..25ed9f713974ba 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -11,7 +11,7 @@
+ #include <linux/regulator/of_regulator.h>
+ #include <linux/soc/qcom/smd-rpm.h>
+
+-struct qcom_smd_rpm *smd_vreg_rpm;
++static struct qcom_smd_rpm *smd_vreg_rpm;
+
+ struct qcom_rpm_reg {
+ struct device *dev;
+diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
+index 01a8d04879184c..37476d2558fda7 100644
+--- a/drivers/regulator/rk808-regulator.c
++++ b/drivers/regulator/rk808-regulator.c
+@@ -1853,7 +1853,7 @@ static int rk808_regulator_dt_parse_pdata(struct device *dev,
+ }
+
+ if (!pdata->dvs_gpio[i]) {
+- dev_info(dev, "there is no dvs%d gpio\n", i);
++ dev_dbg(dev, "there is no dvs%d gpio\n", i);
+ continue;
+ }
+
+@@ -1889,12 +1889,6 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ if (!pdata)
+ return -ENOMEM;
+
+- ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
+- if (ret < 0)
+- return ret;
+-
+- platform_set_drvdata(pdev, pdata);
+-
+ switch (rk808->variant) {
+ case RK805_ID:
+ regulators = rk805_reg;
+@@ -1905,6 +1899,11 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ nregulators = ARRAY_SIZE(rk806_reg);
+ break;
+ case RK808_ID:
++ /* DVS0/1 GPIOs are supported on the RK808 only */
++ ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
++ if (ret < 0)
++ return ret;
++
+ regulators = rk808_reg;
+ nregulators = RK808_NUM_REGULATORS;
+ break;
+@@ -1930,6 +1929,8 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
++ platform_set_drvdata(pdev, pdata);
++
+ config.dev = &pdev->dev;
+ config.driver_data = pdata;
+ config.regmap = regmap;
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index 955e4e38477e6f..62f8548fb46a5d 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -341,9 +341,9 @@ config TI_K3_DSP_REMOTEPROC
+
+ config TI_K3_M4_REMOTEPROC
+ tristate "TI K3 M4 remoteproc support"
+- depends on ARCH_OMAP2PLUS || ARCH_K3
+- select MAILBOX
+- select OMAP2PLUS_MBOX
++ depends on ARCH_K3 || COMPILE_TEST
++ depends on TI_SCI_PROTOCOL || (COMPILE_TEST && TI_SCI_PROTOCOL=n)
++ depends on OMAP2PLUS_MBOX
+ help
+ Say m here to support TI's M4 remote processor subsystems
+ on various TI K3 family of SoCs through the remote processor
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index 572dcb0f055b76..223f6ca0745d3d 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -734,15 +734,22 @@ static int adsp_probe(struct platform_device *pdev)
+ desc->ssctl_id);
+ if (IS_ERR(adsp->sysmon)) {
+ ret = PTR_ERR(adsp->sysmon);
+- goto disable_pm;
++ goto deinit_remove_glink_pdm_ssr;
+ }
+
+ ret = rproc_add(rproc);
+ if (ret)
+- goto disable_pm;
++ goto remove_sysmon;
+
+ return 0;
+
++remove_sysmon:
++ qcom_remove_sysmon_subdev(adsp->sysmon);
++deinit_remove_glink_pdm_ssr:
++ qcom_q6v5_deinit(&adsp->q6v5);
++ qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
++ qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
++ qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
+ disable_pm:
+ qcom_rproc_pds_detach(adsp);
+
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 2a42215ce8e07b..32c3531b20c70a 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1162,6 +1162,9 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ goto disable_active_clks;
+ }
+
++ if (qproc->has_mba_logs)
++ qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
++
+ writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
+ if (qproc->dp_size) {
+ writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + RMB_PMI_CODE_START_REG);
+@@ -1172,9 +1175,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ if (ret)
+ goto reclaim_mba;
+
+- if (qproc->has_mba_logs)
+- qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
+-
+ ret = q6v5_rmb_mba_wait(qproc, 0, 5000);
+ if (ret == -ETIMEDOUT) {
+ dev_err(qproc->dev, "MBA boot timed out\n");
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index ef82835e98a4ef..f4f4b3df3884ef 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -759,16 +759,16 @@ static int adsp_probe(struct platform_device *pdev)
+
+ ret = adsp_init_clock(adsp);
+ if (ret)
+- goto free_rproc;
++ goto unassign_mem;
+
+ ret = adsp_init_regulator(adsp);
+ if (ret)
+- goto free_rproc;
++ goto unassign_mem;
+
+ ret = adsp_pds_attach(&pdev->dev, adsp->proxy_pds,
+ desc->proxy_pd_names);
+ if (ret < 0)
+- goto free_rproc;
++ goto unassign_mem;
+ adsp->proxy_pd_count = ret;
+
+ ret = qcom_q6v5_init(&adsp->q6v5, pdev, rproc, desc->crash_reason_smem, desc->load_state,
+@@ -784,18 +784,28 @@ static int adsp_probe(struct platform_device *pdev)
+ desc->ssctl_id);
+ if (IS_ERR(adsp->sysmon)) {
+ ret = PTR_ERR(adsp->sysmon);
+- goto detach_proxy_pds;
++ goto deinit_remove_pdm_smd_glink;
+ }
+
+ qcom_add_ssr_subdev(rproc, &adsp->ssr_subdev, desc->ssr_name);
+ ret = rproc_add(rproc);
+ if (ret)
+- goto detach_proxy_pds;
++ goto remove_ssr_sysmon;
+
+ return 0;
+
++remove_ssr_sysmon:
++ qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
++ qcom_remove_sysmon_subdev(adsp->sysmon);
++deinit_remove_pdm_smd_glink:
++ qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
++ qcom_remove_smd_subdev(rproc, &adsp->smd_subdev);
++ qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
++ qcom_q6v5_deinit(&adsp->q6v5);
+ detach_proxy_pds:
+ adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++unassign_mem:
++ adsp_unassign_memory_region(adsp);
+ free_rproc:
+ device_init_wakeup(adsp->dev, false);
+
+@@ -907,6 +917,7 @@ static const struct adsp_data sm8250_adsp_resource = {
+ .crash_reason_smem = 423,
+ .firmware_name = "adsp.mdt",
+ .pas_id = 1,
++ .minidump_id = 5,
+ .auto_boot = true,
+ .proxy_pd_names = (char*[]){
+ "lcx",
+@@ -1124,6 +1135,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
+ .crash_reason_smem = 601,
+ .firmware_name = "cdsp.mdt",
+ .pas_id = 18,
++ .minidump_id = 7,
+ .auto_boot = true,
+ .proxy_pd_names = (char*[]){
+ "cx",
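+
+Both qcom remoteproc probe fixes above complete the error unwinding: every
+subdev that was set up before the failing step must be removed again, in
+reverse order of setup. The canonical goto-chain shape, with hypothetical
+foo_* steps:
+
+	ret = foo_init_a(dev);
+	if (ret)
+		return ret;
+
+	ret = foo_init_b(dev);
+	if (ret)
+		goto undo_a;
+
+	ret = foo_init_c(dev);
+	if (ret)
+		goto undo_b;
+
+	return 0;
+
+	undo_b:
+		foo_exit_b(dev);	/* each label undoes exactly one more step */
+	undo_a:
+		foo_exit_a(dev);
+		return ret;
+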
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index d3af1dfa3c7d71..a2f9d85c7156dc 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1204,7 +1204,8 @@ void qcom_glink_native_rx(struct qcom_glink *glink)
+ ret = qcom_glink_rx_open_ack(glink, param1);
+ break;
+ case GLINK_CMD_OPEN:
+- ret = qcom_glink_rx_defer(glink, param2);
++ /* upper 16 bits of param2 are the "prio" field */
++ ret = qcom_glink_rx_defer(glink, param2 & 0xffff);
+ break;
+ case GLINK_CMD_TX_DATA:
+ case GLINK_CMD_TX_DATA_CONT:
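+
+The GLINK fix above masks the channel id out of param2 because newer firmware
+packs a priority value into the upper 16 bits. One way to make the split
+explicit with the bitfield helpers, assuming the 16/16 layout the comment
+describes (FOO_* names are illustrative):
+
+	#include <linux/bitfield.h>
+
+	#define FOO_OPEN_PARAM2_PRIO	GENMASK(31, 16)	/* currently ignored */
+	#define FOO_OPEN_PARAM2_CID	GENMASK(15, 0)
+
+	static u16 foo_open_cid(u32 param2)
+	{
+		return FIELD_GET(FOO_OPEN_PARAM2_CID, param2);
+	}
+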
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index cca650b2e0b94d..aaf76406cd7d7d 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -904,13 +904,18 @@ void rtc_timer_do_work(struct work_struct *work)
+ struct timerqueue_node *next;
+ ktime_t now;
+ struct rtc_time tm;
++ int err;
+
+ struct rtc_device *rtc =
+ container_of(work, struct rtc_device, irqwork);
+
+ mutex_lock(&rtc->ops_lock);
+ again:
+- __rtc_read_time(rtc, &tm);
++ err = __rtc_read_time(rtc, &tm);
++ if (err) {
++ mutex_unlock(&rtc->ops_lock);
++ return;
++ }
+ now = rtc_tm_to_ktime(tm);
+ while ((next = timerqueue_getnext(&rtc->timerqueue))) {
+ if (next->expires > now)
+diff --git a/drivers/rtc/rtc-ab-eoz9.c b/drivers/rtc/rtc-ab-eoz9.c
+index 02f7d071128772..e17bce9a27468b 100644
+--- a/drivers/rtc/rtc-ab-eoz9.c
++++ b/drivers/rtc/rtc-ab-eoz9.c
+@@ -396,13 +396,6 @@ static int abeoz9z3_temp_read(struct device *dev,
+ if (ret < 0)
+ return ret;
+
+- if ((val & ABEOZ9_REG_CTRL_STATUS_V1F) ||
+- (val & ABEOZ9_REG_CTRL_STATUS_V2F)) {
+- dev_err(dev,
+- "thermometer might be disabled due to low voltage\n");
+- return -EINVAL;
+- }
+-
+ switch (attr) {
+ case hwmon_temp_input:
+ ret = regmap_read(regmap, ABEOZ9_REG_REG_TEMP, &val);
+diff --git a/drivers/rtc/rtc-abx80x.c b/drivers/rtc/rtc-abx80x.c
+index 1298962402ff47..3fee27914ba805 100644
+--- a/drivers/rtc/rtc-abx80x.c
++++ b/drivers/rtc/rtc-abx80x.c
+@@ -39,7 +39,7 @@
+ #define ABX8XX_REG_STATUS 0x0f
+ #define ABX8XX_STATUS_AF BIT(2)
+ #define ABX8XX_STATUS_BLF BIT(4)
+-#define ABX8XX_STATUS_WDT BIT(6)
++#define ABX8XX_STATUS_WDT BIT(5)
+
+ #define ABX8XX_REG_CTRL1 0x10
+ #define ABX8XX_CTRL_WRITE BIT(0)
+diff --git a/drivers/rtc/rtc-rzn1.c b/drivers/rtc/rtc-rzn1.c
+index 56ebbd4d048147..8570c8e63d70c3 100644
+--- a/drivers/rtc/rtc-rzn1.c
++++ b/drivers/rtc/rtc-rzn1.c
+@@ -111,8 +111,8 @@ static int rzn1_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ tm->tm_hour = bcd2bin(tm->tm_hour);
+ tm->tm_wday = bcd2bin(tm->tm_wday);
+ tm->tm_mday = bcd2bin(tm->tm_mday);
+- tm->tm_mon = bcd2bin(tm->tm_mon);
+- tm->tm_year = bcd2bin(tm->tm_year);
++ tm->tm_mon = bcd2bin(tm->tm_mon) - 1;
++ tm->tm_year = bcd2bin(tm->tm_year) + 100;
+
+ return 0;
+ }
+@@ -128,8 +128,8 @@ static int rzn1_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ tm->tm_hour = bin2bcd(tm->tm_hour);
+ tm->tm_wday = bin2bcd(rzn1_rtc_tm_to_wday(tm));
+ tm->tm_mday = bin2bcd(tm->tm_mday);
+- tm->tm_mon = bin2bcd(tm->tm_mon);
+- tm->tm_year = bin2bcd(tm->tm_year);
++ tm->tm_mon = bin2bcd(tm->tm_mon + 1);
++ tm->tm_year = bin2bcd(tm->tm_year - 100);
+
+ val = readl(rtc->base + RZN1_RTC_CTL2);
+ if (!(val & RZN1_RTC_CTL2_STOPPED)) {
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index d492a2d26600c1..c6d4522411b312 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -218,15 +218,14 @@ static int st_rtc_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler, 0,
+- pdev->name, rtc);
++ ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler,
++ IRQF_NO_AUTOEN, pdev->name, rtc);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request irq %i\n", rtc->irq);
+ return ret;
+ }
+
+ enable_irq_wake(rtc->irq);
+- disable_irq(rtc->irq);
+
+ rtc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(rtc->clk))
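+
+Several fixes in this series (rtc-st-lpc here, smartreflex and spi-fsl-lpspi
+below) switch to IRQF_NO_AUTOEN, which requests the line already disabled.
+The old request-then-disable_irq() sequence leaves a window in which the
+handler can fire before the driver is ready. Sketch of the pattern:
+
+	ret = devm_request_irq(dev, irq, foo_handler, IRQF_NO_AUTOEN,
+			       "foo", priv);	/* line stays masked */
+	if (ret)
+		return ret;
+
+	/* ... finish initialisation, then unmask when actually needed: */
+	enable_irq(irq);
+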
+diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
+index c32e818f06dbad..ad17ab0a931494 100644
+--- a/drivers/s390/cio/cio.c
++++ b/drivers/s390/cio/cio.c
+@@ -459,10 +459,14 @@ int cio_update_schib(struct subchannel *sch)
+ {
+ struct schib schib;
+
+- if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
++ if (stsch(sch->schid, &schib))
+ return -ENODEV;
+
+ memcpy(&sch->schib, &schib, sizeof(schib));
++
++ if (!css_sch_is_valid(&schib))
++ return -EACCES;
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(cio_update_schib);
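+
+The cio_update_schib() change above distinguishes "subchannel gone"
+(-ENODEV) from "schib no longer valid" (-EACCES) so that sch_get_action()
+can keep the device around in the latter case. The general triage pattern,
+with hypothetical enum values:
+
+	static enum foo_action foo_triage(struct foo_dev *d)
+	{
+		int rc = foo_refresh_state(d);
+
+		if (!rc)
+			return FOO_OK;		/* fully operational */
+		if (rc == -EACCES)
+			return FOO_KEEP;	/* degraded: keep device, notify users */
+		return FOO_UNREGISTER;		/* -ENODEV and the rest: tear down */
+	}
+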
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index b0f23242e17145..9498825d9c7a5c 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1387,14 +1387,18 @@ enum io_sch_action {
+ IO_SCH_VERIFY,
+ IO_SCH_DISC,
+ IO_SCH_NOP,
++ IO_SCH_ORPH_CDEV,
+ };
+
+ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
++ int rc;
+
+ cdev = sch_get_cdev(sch);
+- if (cio_update_schib(sch)) {
++ rc = cio_update_schib(sch);
++
++ if (rc == -ENODEV) {
+ /* Not operational. */
+ if (!cdev)
+ return IO_SCH_UNREG;
+@@ -1402,6 +1406,16 @@ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ return IO_SCH_UNREG;
+ return IO_SCH_ORPH_UNREG;
+ }
++
++	/* Avoid unregistering subchannels without a working device. */
++ if (rc == -EACCES) {
++ if (!cdev)
++ return IO_SCH_NOP;
++ if (ccw_device_notify(cdev, CIO_GONE) != NOTIFY_OK)
++ return IO_SCH_UNREG_CDEV;
++ return IO_SCH_ORPH_CDEV;
++ }
++
+ /* Operational. */
+ if (!cdev)
+ return IO_SCH_ATTACH;
+@@ -1471,6 +1485,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ rc = 0;
+ goto out_unlock;
+ case IO_SCH_ORPH_UNREG:
++ case IO_SCH_ORPH_CDEV:
+ case IO_SCH_ORPH_ATTACH:
+ ccw_device_set_disconnected(cdev);
+ break;
+@@ -1502,6 +1517,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ /* Handle attached ccw device. */
+ switch (action) {
+ case IO_SCH_ORPH_UNREG:
++ case IO_SCH_ORPH_CDEV:
+ case IO_SCH_ORPH_ATTACH:
+ /* Move ccw device to orphanage. */
+ rc = ccw_device_move_to_orph(cdev);
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index 62eca9419ad76e..21fa7ac849e5c3 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -58,6 +58,8 @@ struct virtio_ccw_device {
+ struct virtio_device vdev;
+ __u8 config[VIRTIO_CCW_CONFIG_SIZE];
+ struct ccw_device *cdev;
++ /* we make cdev->dev.dma_parms point to this */
++ struct device_dma_parameters dma_parms;
+ __u32 curr_io;
+ int err;
+ unsigned int revision; /* Transport revision */
+@@ -1303,6 +1305,7 @@ static int virtio_ccw_offline(struct ccw_device *cdev)
+ unregister_virtio_device(&vcdev->vdev);
+ spin_lock_irqsave(get_ccwdev_lock(cdev), flags);
+ dev_set_drvdata(&cdev->dev, NULL);
++ cdev->dev.dma_parms = NULL;
+ spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags);
+ return 0;
+ }
+@@ -1366,6 +1369,7 @@ static int virtio_ccw_online(struct ccw_device *cdev)
+ }
+ vcdev->vdev.dev.parent = &cdev->dev;
+ vcdev->cdev = cdev;
++ cdev->dev.dma_parms = &vcdev->dma_parms;
+ vcdev->dma_area = ccw_device_dma_zalloc(vcdev->cdev,
+ sizeof(*vcdev->dma_area),
+ &vcdev->dma_area_addr);
+diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c
+index 62cb7a864fd53d..70c7515a822f52 100644
+--- a/drivers/scsi/bfa/bfad.c
++++ b/drivers/scsi/bfa/bfad.c
+@@ -1693,9 +1693,8 @@ bfad_init(void)
+
+ error = bfad_im_module_init();
+ if (error) {
+- error = -ENOMEM;
+ printk(KERN_WARNING "bfad_im_module_init failure\n");
+- goto ext;
++ return -ENOMEM;
+ }
+
+ if (strcmp(FCPI_NAME, " fcpim") == 0)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 6219807ce3b9e1..ffd15fa4f9e596 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1545,10 +1545,16 @@ void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba)
+ /* Init and wait for PHYs to come up and all libsas event finished. */
+ for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
+ struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
++ struct asd_sas_phy *sas_phy = &phy->sas_phy;
+
+- if (!(hisi_hba->phy_state & BIT(phy_no)))
++ if (!sas_phy->phy->enabled)
+ continue;
+
++ if (!(hisi_hba->phy_state & BIT(phy_no))) {
++ hisi_sas_phy_enable(hisi_hba, phy_no, 1);
++ continue;
++ }
++
+ async_schedule_domain(hisi_sas_async_init_wait_phyup,
+ phy, &async);
+ }
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index cf13148ba281c1..e979ec1478c184 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -2738,6 +2738,7 @@ static int qedf_alloc_and_init_sb(struct qedf_ctx *qedf,
+ sb_id, QED_SB_TYPE_STORAGE);
+
+ if (ret) {
++ dma_free_coherent(&qedf->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+ QEDF_ERR(&qedf->dbg_ctx,
+ "Status block initialization failed (0x%x) for id = %d.\n",
+ ret, sb_id);
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index c5aec26019d6ab..628d59dda20cc4 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -369,6 +369,7 @@ static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
+ ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
+ sb_id, QED_SB_TYPE_STORAGE);
+ if (ret) {
++ dma_free_coherent(&qedi->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+ QEDI_ERR(&qedi->dbg_ctx,
+ "Status block initialization failed for id = %d.\n",
+ sb_id);
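+
+qedf and qedi gained the same leak fix: the DMA-coherent status block has to
+be freed when sb_init() fails, since no later teardown path knows about it
+yet. In outline (foo_sb_init() is a stand-in for the vendor op):
+
+	sb_virt = dma_alloc_coherent(&pdev->dev, sizeof(*sb_virt),
+				     &sb_phys, GFP_KERNEL);
+	if (!sb_virt)
+		return -ENOMEM;
+
+	ret = foo_sb_init(sb_virt, sb_phys);
+	if (ret) {
+		/* undo the allocation before bailing out */
+		dma_free_coherent(&pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+		return ret;
+	}
+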
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index f86be197fedd04..84334ab39c8107 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -307,10 +307,6 @@ sg_open(struct inode *inode, struct file *filp)
+ if (retval)
+ goto sg_put;
+
+- retval = scsi_autopm_get_device(device);
+- if (retval)
+- goto sdp_put;
+-
+ /* scsi_block_when_processing_errors() may block so bypass
+ * check if O_NONBLOCK. Permits SCSI commands to be issued
+ * during error recovery. Tread carefully. */
+@@ -318,7 +314,7 @@ sg_open(struct inode *inode, struct file *filp)
+ scsi_block_when_processing_errors(device))) {
+ retval = -ENXIO;
+ /* we are in error recovery for this device */
+- goto error_out;
++ goto sdp_put;
+ }
+
+ mutex_lock(&sdp->open_rel_lock);
+@@ -371,8 +367,6 @@ sg_open(struct inode *inode, struct file *filp)
+ }
+ error_mutex_locked:
+ mutex_unlock(&sdp->open_rel_lock);
+-error_out:
+- scsi_autopm_put_device(device);
+ sdp_put:
+ kref_put(&sdp->d_ref, sg_device_destroy);
+ scsi_device_put(device);
+@@ -392,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp)
+ SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
+
+ mutex_lock(&sdp->open_rel_lock);
+- scsi_autopm_put_device(sdp->device);
+ kref_put(&sfp->f_ref, sg_remove_sfp);
+ sdp->open_cnt--;
+
+diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c
+index 74350b5871dc8e..ea571eeb307878 100644
+--- a/drivers/sh/intc/core.c
++++ b/drivers/sh/intc/core.c
+@@ -209,7 +209,6 @@ int __init register_intc_controller(struct intc_desc *desc)
+ goto err0;
+
+ INIT_LIST_HEAD(&d->list);
+- list_add_tail(&d->list, &intc_list);
+
+ raw_spin_lock_init(&d->lock);
+ INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
+@@ -369,6 +368,7 @@ int __init register_intc_controller(struct intc_desc *desc)
+
+ d->skip_suspend = desc->skip_syscore_suspend;
+
++ list_add_tail(&d->list, &intc_list);
+ nr_intc_controllers++;
+
+ return 0;
+diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
+index 19cc581b06d0c8..b3f773e135fd49 100644
+--- a/drivers/soc/fsl/qe/qmc.c
++++ b/drivers/soc/fsl/qe/qmc.c
+@@ -2004,8 +2004,10 @@ static int qmc_probe(struct platform_device *pdev)
+
+ /* Set the irq handler */
+ irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
++ if (irq < 0) {
++ ret = irq;
+ goto err_exit_xcc;
++ }
+ ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc", qmc);
+ if (ret < 0)
+ goto err_exit_xcc;
+diff --git a/drivers/soc/fsl/rcpm.c b/drivers/soc/fsl/rcpm.c
+index 3d0cae30c769ea..06bd94b29fb321 100644
+--- a/drivers/soc/fsl/rcpm.c
++++ b/drivers/soc/fsl/rcpm.c
+@@ -36,6 +36,7 @@ static void copy_ippdexpcr1_setting(u32 val)
+ return;
+
+ regs = of_iomap(np, 0);
++ of_node_put(np);
+ if (!regs)
+ return;
+
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index 2e8f24d5da80b6..4cb959106efa9e 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -585,7 +585,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
+
+ for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
+ freq = clk_round_rate(se->clk, freq + 1);
+- if (freq <= 0 || freq == se->clk_perf_tbl[i - 1])
++ if (freq <= 0 ||
++ (i > 0 && freq == se->clk_perf_tbl[i - 1]))
+ break;
+ se->clk_perf_tbl[i] = freq;
+ }
+diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c
+index d6219060b616d6..38add2ab561372 100644
+--- a/drivers/soc/ti/smartreflex.c
++++ b/drivers/soc/ti/smartreflex.c
+@@ -202,10 +202,10 @@ static int sr_late_init(struct omap_sr *sr_info)
+
+ if (sr_class->notify && sr_class->notify_flags && sr_info->irq) {
+ ret = devm_request_irq(&sr_info->pdev->dev, sr_info->irq,
+- sr_interrupt, 0, sr_info->name, sr_info);
++ sr_interrupt, IRQF_NO_AUTOEN,
++ sr_info->name, sr_info);
+ if (ret)
+ goto error;
+- disable_irq(sr_info->irq);
+ }
+
+ return ret;
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index f529e1346247cc..85df6b9c04ee69 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -188,8 +188,10 @@ static int xlnx_add_cb_for_suspend(event_cb_func_t cb_fun, void *data)
+ INIT_LIST_HEAD(&eve_data->cb_list_head);
+
+ cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL);
+- if (!cb_data)
++ if (!cb_data) {
++ kfree(eve_data);
+ return -ENOMEM;
++ }
+ cb_data->eve_cb = cb_fun;
+ cb_data->agent_data = data;
+
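+
+The xlnx_event_manager fix above frees the outer allocation when the inner
+one fails; otherwise eve_data leaks on every -ENOMEM. The pattern in
+isolation:
+
+	outer = kmalloc(sizeof(*outer), GFP_KERNEL);
+	if (!outer)
+		return -ENOMEM;
+
+	inner = kmalloc(sizeof(*inner), GFP_KERNEL);
+	if (!inner) {
+		kfree(outer);		/* release what was already allocated */
+		return -ENOMEM;
+	}
+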
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 95cdfc28361ef7..caecb2ad2a150d 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -183,7 +183,7 @@ static const char *atmel_qspi_reg_name(u32 offset, char *tmp, size_t sz)
+ case QSPI_MR:
+ return "MR";
+ case QSPI_RD:
+- return "MR";
++ return "RD";
+ case QSPI_TD:
+ return "TD";
+ case QSPI_SR:
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 977e8b55c82b7d..9573b8fa4fbfc6 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -891,7 +891,7 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, 0,
++ ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, IRQF_NO_AUTOEN,
+ dev_name(&pdev->dev), fsl_lpspi);
+ if (ret) {
+ dev_err(&pdev->dev, "can't get irq%d: %d\n", irq, ret);
+@@ -948,14 +948,10 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller);
+ if (ret == -EPROBE_DEFER)
+ goto out_pm_get;
+- if (ret < 0)
++ if (ret < 0) {
+ dev_warn(&pdev->dev, "dma setup error %d, use pio\n", ret);
+- else
+- /*
+- * disable LPSPI module IRQ when enable DMA mode successfully,
+- * to prevent the unexpected LPSPI module IRQ events.
+- */
+- disable_irq(irq);
++ enable_irq(irq);
++ }
+
+ ret = devm_spi_register_controller(&pdev->dev, controller);
+ if (ret < 0) {
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index afbd64a217eb06..43f11b0e9e765c 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -341,7 +341,7 @@ tegra_qspi_fill_tx_fifo_from_client_txbuf(struct tegra_qspi *tqspi, struct spi_t
+ for (count = 0; count < max_n_32bit; count++) {
+ u32 x = 0;
+
+- for (i = 0; len && (i < bytes_per_word); i++, len--)
++ for (i = 0; len && (i < min(4, bytes_per_word)); i++, len--)
+ x |= (u32)(*tx_buf++) << (i * 8);
+ tegra_qspi_writel(tqspi, x, QSPI_TX_FIFO);
+ }
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index fcd0ca99668419..b9df39e06e7cd4 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1351,6 +1351,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+
+ clk_dis_all:
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ clk_disable_unprepare(xqspi->refclk);
+@@ -1379,6 +1380,7 @@ static void zynqmp_qspi_remove(struct platform_device *pdev)
+ zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
+
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ clk_disable_unprepare(xqspi->refclk);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index c1dad30a4528b7..0f3e6e2c24743c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -424,6 +424,16 @@ static int spi_probe(struct device *dev)
+ spi->irq = 0;
+ }
+
++ if (has_acpi_companion(dev) && spi->irq < 0) {
++ struct acpi_device *adev = to_acpi_device_node(dev->fwnode);
++
++ spi->irq = acpi_dev_gpio_irq_get(adev, 0);
++ if (spi->irq == -EPROBE_DEFER)
++ return -EPROBE_DEFER;
++ if (spi->irq < 0)
++ spi->irq = 0;
++ }
++
+ ret = dev_pm_domain_attach(dev, true);
+ if (ret)
+ return ret;
+@@ -2869,9 +2879,6 @@ static acpi_status acpi_register_spi_device(struct spi_controller *ctlr,
+ acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias,
+ sizeof(spi->modalias));
+
+- if (spi->irq < 0)
+- spi->irq = acpi_dev_gpio_irq_get(adev, 0);
+-
+ acpi_device_set_enumerated(adev);
+
+ adev->power.flags.ignore_parent = true;
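+
+The spi core change above moves the ACPI GPIO IRQ lookup from enumeration
+into spi_probe(): only probe can return -EPROBE_DEFER and be retried once
+the GPIO controller appears. Condensed, with an adev check added for the
+sketch:
+
+	struct acpi_device *adev = to_acpi_device_node(dev->fwnode);
+
+	if (adev && spi->irq < 0) {
+		spi->irq = acpi_dev_gpio_irq_get(adev, 0);
+		if (spi->irq == -EPROBE_DEFER)
+			return -EPROBE_DEFER;	/* retried when the provider shows up */
+		if (spi->irq < 0)
+			spi->irq = 0;		/* a missing IRQ is not fatal */
+	}
+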
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
+index 232744973ab887..b1feb6f6ebe895 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
+@@ -4181,6 +4181,8 @@ ia_css_3a_statistics_allocate(const struct ia_css_3a_grid_info *grid)
+ goto err;
+ /* No weighted histogram, no structure, treat the histogram data as a byte dump in a byte array */
+ me->rgby_data = kvmalloc(sizeof_hmem(HMEM0_ID), GFP_KERNEL);
++ if (!me->rgby_data)
++ goto err;
+
+ IA_CSS_LEAVE("return=%p", me);
+ return me;
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 6c488b1e262485..5fab33adf58ed0 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1715,7 +1715,6 @@ MODULE_DEVICE_TABLE(of, vchiq_of_match);
+
+ static int vchiq_probe(struct platform_device *pdev)
+ {
+- struct device_node *fw_node;
+ const struct vchiq_platform_info *info;
+ struct vchiq_drv_mgmt *mgmt;
+ int ret;
+@@ -1724,8 +1723,8 @@ static int vchiq_probe(struct platform_device *pdev)
+ if (!info)
+ return -EINVAL;
+
+- fw_node = of_find_compatible_node(NULL, NULL,
+- "raspberrypi,bcm2835-firmware");
++ struct device_node *fw_node __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
+ if (!fw_node) {
+ dev_err(&pdev->dev, "Missing firmware node\n");
+ return -ENOENT;
+@@ -1736,7 +1735,6 @@ static int vchiq_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ mgmt->fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
+- of_node_put(fw_node);
+ if (!mgmt->fw)
+ return -EPROBE_DEFER;
+
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 440e07b1d5cdb1..287ac5b0495f9a 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -369,7 +369,7 @@ static int pscsi_create_type_disk(struct se_device *dev, struct scsi_device *sd)
+ bdev_file = bdev_file_open_by_path(dev->udev_path,
+ BLK_OPEN_WRITE | BLK_OPEN_READ, pdv, NULL);
+ if (IS_ERR(bdev_file)) {
+- pr_err("pSCSI: bdev_open_by_path() failed\n");
++ pr_err("pSCSI: bdev_file_open_by_path() failed\n");
+ scsi_device_put(sd);
+ return PTR_ERR(bdev_file);
+ }
+diff --git a/drivers/thermal/testing/zone.c b/drivers/thermal/testing/zone.c
+index c6d8c66f40f980..1f01f495270313 100644
+--- a/drivers/thermal/testing/zone.c
++++ b/drivers/thermal/testing/zone.c
+@@ -185,7 +185,7 @@ static void tt_add_tz_work_fn(struct work_struct *work)
+ int tt_add_tz(void)
+ {
+ struct tt_thermal_zone *tt_zone __free(kfree);
+- struct tt_work *tt_work __free(kfree);
++ struct tt_work *tt_work __free(kfree) = NULL;
+ int ret;
+
+ tt_zone = kzalloc(sizeof(*tt_zone), GFP_KERNEL);
+@@ -237,7 +237,7 @@ static void tt_zone_unregister_tz(struct tt_thermal_zone *tt_zone)
+
+ int tt_del_tz(const char *arg)
+ {
+- struct tt_work *tt_work __free(kfree);
++ struct tt_work *tt_work __free(kfree) = NULL;
+ struct tt_thermal_zone *tt_zone, *aux;
+ int ret;
+ int id;
+@@ -310,6 +310,9 @@ static void tt_put_tt_zone(struct tt_thermal_zone *tt_zone)
+ tt_zone->refcount--;
+ }
+
++DEFINE_FREE(put_tt_zone, struct tt_thermal_zone *,
++ if (!IS_ERR_OR_NULL(_T)) tt_put_tt_zone(_T))
++
+ static void tt_zone_add_trip_work_fn(struct work_struct *work)
+ {
+ struct tt_work *tt_work = tt_work_of_work(work);
+@@ -332,9 +335,9 @@ static void tt_zone_add_trip_work_fn(struct work_struct *work)
+
+ int tt_zone_add_trip(const char *arg)
+ {
++ struct tt_thermal_zone *tt_zone __free(put_tt_zone) = NULL;
++ struct tt_trip *tt_trip __free(kfree) = NULL;
+ struct tt_work *tt_work __free(kfree);
+- struct tt_trip *tt_trip __free(kfree);
+- struct tt_thermal_zone *tt_zone;
+ int id;
+
+ tt_work = kzalloc(sizeof(*tt_work), GFP_KERNEL);
+@@ -350,10 +353,8 @@ int tt_zone_add_trip(const char *arg)
+ return PTR_ERR(tt_zone);
+
+ id = ida_alloc(&tt_zone->ida, GFP_KERNEL);
+- if (id < 0) {
+- tt_put_tt_zone(tt_zone);
++ if (id < 0)
+ return id;
+- }
+
+ tt_trip->trip.type = THERMAL_TRIP_ACTIVE;
+ tt_trip->trip.temperature = THERMAL_TEMP_INVALID;
+@@ -366,7 +367,7 @@ int tt_zone_add_trip(const char *arg)
+ tt_zone->num_trips++;
+
+ INIT_WORK(&tt_work->work, tt_zone_add_trip_work_fn);
+- tt_work->tt_zone = tt_zone;
++ tt_work->tt_zone = no_free_ptr(tt_zone);
+ tt_work->tt_trip = no_free_ptr(tt_trip);
+ schedule_work(&(no_free_ptr(tt_work)->work));
+
+@@ -391,7 +392,7 @@ static struct thermal_zone_device_ops tt_zone_ops = {
+
+ static int tt_zone_register_tz(struct tt_thermal_zone *tt_zone)
+ {
+- struct thermal_trip *trips __free(kfree);
++ struct thermal_trip *trips __free(kfree) = NULL;
+ struct thermal_zone_device *tz;
+ struct tt_trip *tt_trip;
+ int i;
+@@ -425,23 +426,18 @@ static int tt_zone_register_tz(struct tt_thermal_zone *tt_zone)
+
+ int tt_zone_reg(const char *arg)
+ {
+- struct tt_thermal_zone *tt_zone;
+- int ret;
++ struct tt_thermal_zone *tt_zone __free(put_tt_zone);
+
+ tt_zone = tt_get_tt_zone(arg);
+ if (IS_ERR(tt_zone))
+ return PTR_ERR(tt_zone);
+
+- ret = tt_zone_register_tz(tt_zone);
+-
+- tt_put_tt_zone(tt_zone);
+-
+- return ret;
++ return tt_zone_register_tz(tt_zone);
+ }
+
+ int tt_zone_unreg(const char *arg)
+ {
+- struct tt_thermal_zone *tt_zone;
++ struct tt_thermal_zone *tt_zone __free(put_tt_zone);
+
+ tt_zone = tt_get_tt_zone(arg);
+ if (IS_ERR(tt_zone))
+@@ -449,8 +445,6 @@ int tt_zone_unreg(const char *arg)
+
+ tt_zone_unregister_tz(tt_zone);
+
+- tt_put_tt_zone(tt_zone);
+-
+ return 0;
+ }
+
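+
+The testing/zone.c fixes above are all about the scoped-cleanup helpers: a
+variable declared with __free() is released on every exit path, so it must
+be initialised (typically to NULL) before the first early return, and
+no_free_ptr() must disarm it when ownership is handed off. Both rules in one
+sketch, with hypothetical foo/bar types and helpers:
+
+	DEFINE_FREE(foo_put, struct foo *, if (!IS_ERR_OR_NULL(_T)) foo_put(_T))
+
+	static int foo_create(void)
+	{
+		struct foo *f __free(foo_put) = NULL;	/* init before any return */
+		struct bar *b __free(kfree) = NULL;
+
+		f = foo_get();
+		if (IS_ERR(f))
+			return PTR_ERR(f);	/* cleanup sees NULL for b, skips it */
+
+		b = kzalloc(sizeof(*b), GFP_KERNEL);
+		if (!b)
+			return -ENOMEM;		/* f is put automatically here */
+
+		foo_register(no_free_ptr(f), no_free_ptr(b));	/* disarm on success */
+		return 0;
+	}
+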
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 8f03985f971c30..1d2f2b307bac50 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -40,6 +40,8 @@ static DEFINE_MUTEX(thermal_governor_lock);
+
+ static struct thermal_governor *def_governor;
+
++static bool thermal_pm_suspended;
++
+ /*
+ * Governor section: set of functions to handle thermal governors
+ *
+@@ -547,7 +549,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ int low = -INT_MAX, high = INT_MAX;
+ int temp, ret;
+
+- if (tz->suspended || tz->mode != THERMAL_DEVICE_ENABLED)
++ if (tz->state != TZ_STATE_READY || tz->mode != THERMAL_DEVICE_ENABLED)
+ return;
+
+ ret = __thermal_zone_get_temp(tz, &temp);
+@@ -1332,6 +1334,24 @@ int thermal_zone_get_crit_temp(struct thermal_zone_device *tz, int *temp)
+ }
+ EXPORT_SYMBOL_GPL(thermal_zone_get_crit_temp);
+
++static void thermal_zone_init_complete(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ tz->state &= ~TZ_STATE_FLAG_INIT;
++ /*
++ * If system suspend or resume is in progress at this point, the
++ * new thermal zone needs to be marked as suspended because
++ * thermal_pm_notify() has run already.
++ */
++ if (thermal_pm_suspended)
++ tz->state |= TZ_STATE_FLAG_SUSPENDED;
++
++ __thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++
++ mutex_unlock(&tz->lock);
++}
++
+ /**
+ * thermal_zone_device_register_with_trips() - register a new thermal zone device
+ * @type: the thermal zone device type
+@@ -1451,6 +1471,8 @@ thermal_zone_device_register_with_trips(const char *type,
+ tz->passive_delay_jiffies = msecs_to_jiffies(passive_delay);
+ tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+
++ tz->state = TZ_STATE_FLAG_INIT;
++
+ /* sys I/F */
+ /* Add nodes that are always present via .groups */
+ result = thermal_zone_create_device_groups(tz);
+@@ -1465,6 +1487,7 @@ thermal_zone_device_register_with_trips(const char *type,
+ thermal_zone_destroy_device_groups(tz);
+ goto remove_id;
+ }
++ thermal_zone_device_init(tz);
+ result = device_register(&tz->device);
+ if (result)
+ goto release_device;
+@@ -1501,12 +1524,9 @@ thermal_zone_device_register_with_trips(const char *type,
+ list_for_each_entry(cdev, &thermal_cdev_list, node)
+ thermal_zone_cdev_bind(tz, cdev);
+
+- mutex_unlock(&thermal_list_lock);
++ thermal_zone_init_complete(tz);
+
+- thermal_zone_device_init(tz);
+- /* Update the new thermal zone and mark it as already updated. */
+- if (atomic_cmpxchg(&tz->need_update, 1, 0))
+- thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++ mutex_unlock(&thermal_list_lock);
+
+ thermal_notify_tz_create(tz);
+
+@@ -1662,7 +1682,7 @@ static void thermal_zone_device_resume(struct work_struct *work)
+
+ mutex_lock(&tz->lock);
+
+- tz->suspended = false;
++ tz->state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING);
+
+ thermal_debug_tz_resume(tz);
+ thermal_zone_device_init(tz);
+@@ -1670,7 +1690,48 @@ static void thermal_zone_device_resume(struct work_struct *work)
+ __thermal_zone_device_update(tz, THERMAL_TZ_RESUME);
+
+ complete(&tz->resume);
+- tz->resuming = false;
++
++ mutex_unlock(&tz->lock);
++}
++
++static void thermal_zone_pm_prepare(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ if (tz->state & TZ_STATE_FLAG_RESUMING) {
++ /*
++ * thermal_zone_device_resume() queued up for this zone has not
++ * acquired the lock yet, so release it to let the function run
++ * and wait until it has done the work.
++ */
++ mutex_unlock(&tz->lock);
++
++ wait_for_completion(&tz->resume);
++
++ mutex_lock(&tz->lock);
++ }
++
++ tz->state |= TZ_STATE_FLAG_SUSPENDED;
++
++ mutex_unlock(&tz->lock);
++}
++
++static void thermal_zone_pm_complete(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ cancel_delayed_work(&tz->poll_queue);
++
++ reinit_completion(&tz->resume);
++ tz->state |= TZ_STATE_FLAG_RESUMING;
++
++ /*
++ * Replace the work function with the resume one, which will restore the
++ * original work function and schedule the polling work if needed.
++ */
++ INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_resume);
++ /* Queue up the work without a delay. */
++ mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, 0);
+
+ mutex_unlock(&tz->lock);
+ }
+@@ -1686,27 +1747,10 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ case PM_SUSPEND_PREPARE:
+ mutex_lock(&thermal_list_lock);
+
+- list_for_each_entry(tz, &thermal_tz_list, node) {
+- mutex_lock(&tz->lock);
+-
+- if (tz->resuming) {
+- /*
+- * thermal_zone_device_resume() queued up for
+- * this zone has not acquired the lock yet, so
+- * release it to let the function run and wait
+- * util it has done the work.
+- */
+- mutex_unlock(&tz->lock);
+-
+- wait_for_completion(&tz->resume);
+-
+- mutex_lock(&tz->lock);
+- }
++ thermal_pm_suspended = true;
+
+- tz->suspended = true;
+-
+- mutex_unlock(&tz->lock);
+- }
++ list_for_each_entry(tz, &thermal_tz_list, node)
++ thermal_zone_pm_prepare(tz);
+
+ mutex_unlock(&thermal_list_lock);
+ break;
+@@ -1715,27 +1759,10 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ case PM_POST_SUSPEND:
+ mutex_lock(&thermal_list_lock);
+
+- list_for_each_entry(tz, &thermal_tz_list, node) {
+- mutex_lock(&tz->lock);
+-
+- cancel_delayed_work(&tz->poll_queue);
++ thermal_pm_suspended = false;
+
+- reinit_completion(&tz->resume);
+- tz->resuming = true;
+-
+- /*
+- * Replace the work function with the resume one, which
+- * will restore the original work function and schedule
+- * the polling work if needed.
+- */
+- INIT_DELAYED_WORK(&tz->poll_queue,
+- thermal_zone_device_resume);
+- /* Queue up the work without a delay. */
+- mod_delayed_work(system_freezable_power_efficient_wq,
+- &tz->poll_queue, 0);
+-
+- mutex_unlock(&tz->lock);
+- }
++ list_for_each_entry(tz, &thermal_tz_list, node)
++ thermal_zone_pm_complete(tz);
+
+ mutex_unlock(&thermal_list_lock);
+ break;
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index a64d39b1c86b23..421522a2bb9d4c 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -61,6 +61,12 @@ struct thermal_governor {
+ struct list_head governor_list;
+ };
+
++#define TZ_STATE_FLAG_SUSPENDED BIT(0)
++#define TZ_STATE_FLAG_RESUMING BIT(1)
++#define TZ_STATE_FLAG_INIT BIT(2)
++
++#define TZ_STATE_READY 0
++
+ /**
+ * struct thermal_zone_device - structure for a thermal zone
+ * @id: unique id number for each thermal zone
+@@ -100,8 +106,7 @@ struct thermal_governor {
+ * @node: node in thermal_tz_list (in thermal_core.c)
+ * @poll_queue: delayed work for polling
+ * @notify_event: Last notification event
+- * @suspended: thermal zone suspend indicator
+- * @resuming: indicates whether or not thermal zone resume is in progress
++ * @state: current state of the thermal zone
+ * @trips: array of struct thermal_trip objects
+ */
+ struct thermal_zone_device {
+@@ -134,8 +139,7 @@ struct thermal_zone_device {
+ struct list_head node;
+ struct delayed_work poll_queue;
+ enum thermal_notify_event notify_event;
+- bool suspended;
+- bool resuming;
++ u8 state;
+ #ifdef CONFIG_THERMAL_DEBUGFS
+ struct thermal_debugfs *debugfs;
+ #endif
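For reference, the bool-to-bitmask conversion makes "ready" the absence of
any flag, so a single equality test replaces the old pair of booleans. A
standalone sketch of the arithmetic, reusing the flag values from the hunk
above (the sequence of events is illustrative only):

	#include <stdint.h>
	#include <stdio.h>

	#define BIT(n)			(1u << (n))
	#define TZ_STATE_FLAG_SUSPENDED	BIT(0)
	#define TZ_STATE_FLAG_RESUMING	BIT(1)
	#define TZ_STATE_FLAG_INIT	BIT(2)
	#define TZ_STATE_READY		0

	int main(void)
	{
		uint8_t state = TZ_STATE_FLAG_INIT;	/* set at registration */

		state &= ~TZ_STATE_FLAG_INIT;		/* init complete */
		state |= TZ_STATE_FLAG_SUSPENDED;	/* system suspend */
		state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING);

		/* updates are skipped unless no flag at all is set */
		printf("ready: %s\n", state == TZ_STATE_READY ? "yes" : "no");
		return 0;
	}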
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index e2aa2a1a02ddf5..ecbce226b8747e 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -21,6 +21,7 @@
+ #define CHIP_ID_F81866 0x1010
+ #define CHIP_ID_F81966 0x0215
+ #define CHIP_ID_F81216AD 0x1602
++#define CHIP_ID_F81216E 0x1617
+ #define CHIP_ID_F81216H 0x0501
+ #define CHIP_ID_F81216 0x0802
+ #define VENDOR_ID1 0x23
+@@ -158,6 +159,7 @@ static int fintek_8250_check_id(struct fintek_8250 *pdata)
+ case CHIP_ID_F81866:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ break;
+@@ -181,6 +183,7 @@ static int fintek_8250_get_ldn_range(struct fintek_8250 *pdata, int *min,
+ return 0;
+
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ *min = F81216_LDN_LOW;
+@@ -250,6 +253,7 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
+ break;
+
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ sio_write_mask_reg(pdata, FINTEK_IRQ_MODE, IRQ_SHARE,
+@@ -263,7 +267,8 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
+ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
+ {
+ switch (pdata->pid) {
+- case CHIP_ID_F81216H: /* 128Bytes FIFO */
++ case CHIP_ID_F81216E: /* 128Bytes FIFO */
++ case CHIP_ID_F81216H:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81866:
+ sio_write_mask_reg(pdata, FIFO_CTRL,
+@@ -297,6 +302,7 @@ static void fintek_8250_set_termios(struct uart_port *port,
+ goto exit;
+
+ switch (pdata->pid) {
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ reg = RS485;
+ break;
+@@ -346,6 +352,7 @@ static void fintek_8250_set_termios_handler(struct uart_8250_port *uart)
+ struct fintek_8250 *pdata = uart->port.private_data;
+
+ switch (pdata->pid) {
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81866:
+@@ -438,6 +445,11 @@ static void fintek_8250_set_rs485_handler(struct uart_8250_port *uart)
+ uart->port.rs485_supported = fintek_8250_rs485_supported;
+ break;
+
++ case CHIP_ID_F81216E: /* F81216E does not support RS485 delays */
++ uart->port.rs485_config = fintek_8250_rs485_config;
++ uart->port.rs485_supported = fintek_8250_rs485_supported;
++ break;
++
+ default: /* No RS485 Auto direction functional */
+ break;
+ }
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 88b58f44e4e976..0dd68bdbfbcf7c 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -776,12 +776,12 @@ static void omap_8250_shutdown(struct uart_port *port)
+ struct uart_8250_port *up = up_to_u8250p(port);
+ struct omap8250_priv *priv = port->private_data;
+
++ pm_runtime_get_sync(port->dev);
++
+ flush_work(&priv->qos_work);
+ if (up->dma)
+ omap_8250_rx_dma_flush(up);
+
+- pm_runtime_get_sync(port->dev);
+-
+ serial_out(up, UART_OMAP_WER, 0);
+ if (priv->habit & UART_HAS_EFR2)
+ serial_out(up, UART_OMAP_EFR2, 0x0);
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 7d0134ecd82fa5..9529a512cbd40f 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1819,6 +1819,13 @@ static void pl011_unthrottle_rx(struct uart_port *port)
+
+ pl011_write(uap->im, uap, REG_IMSC);
+
++#ifdef CONFIG_DMA_ENGINE
++ if (uap->using_rx_dma) {
++ uap->dmacr |= UART011_RXDMAE;
++ pl011_write(uap->dmacr, uap, REG_DMACR);
++ }
++#endif
++
+ uart_port_unlock_irqrestore(&uap->port, flags);
+ }
+
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 9771072da177cb..dcb1769c3625cd 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -3631,7 +3631,7 @@ static struct ctl_table tty_table[] = {
+ .data = &tty_ldisc_autoload,
+ .maxlen = sizeof(tty_ldisc_autoload),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
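The point of this one-line fix is that proc_dointvec ignores .extra1/.extra2,
so out-of-range writes were silently accepted; proc_dointvec_minmax enforces
the bounds. A hedged sketch of a clamped 0/1 sysctl — the demo names are
hypothetical:

	#include <linux/sysctl.h>

	static int demo_flag;

	static struct ctl_table demo_table[] = {
		{
			.procname	= "demo_flag",
			.data		= &demo_flag,
			.maxlen		= sizeof(demo_flag),
			.mode		= 0644,
			/* rejects writes outside [*extra1, *extra2] */
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= SYSCTL_ZERO,
			.extra2		= SYSCTL_ONE,
		},
	};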
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index eab81dfdcc3502..0b9ba338b2654c 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -915,6 +915,7 @@ struct dwc3_hwparams {
+ #define DWC3_MODE(n) ((n) & 0x7)
+
+ /* HWPARAMS1 */
++#define DWC3_SPRAM_TYPE(n) (((n) >> 23) & 1)
+ #define DWC3_NUM_INT(n) (((n) & (0x3f << 15)) >> 15)
+
+ /* HWPARAMS3 */
+@@ -925,6 +926,9 @@ struct dwc3_hwparams {
+ #define DWC3_NUM_IN_EPS(p) (((p)->hwparams3 & \
+ (DWC3_NUM_IN_EPS_MASK)) >> 18)
+
++/* HWPARAMS6 */
++#define DWC3_RAM0_DEPTH(n) (((n) & (0xffff0000)) >> 16)
++
+ /* HWPARAMS7 */
+ #define DWC3_RAM1_DEPTH(n) ((n) & 0xffff)
+
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index c9533a99e47c89..874497f86499b3 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -232,7 +232,7 @@ void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+ /* stall is always issued on EP0 */
+ dep = dwc->eps[0];
+ __dwc3_gadget_ep_set_halt(dep, 1, false);
+- dep->flags &= DWC3_EP_RESOURCE_ALLOCATED;
++ dep->flags &= DWC3_EP_RESOURCE_ALLOCATED | DWC3_EP_TRANSFER_STARTED;
+ dep->flags |= DWC3_EP_ENABLED;
+ dwc->delayed_status = false;
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 4959c26d3b71b8..56744b11e67cb9 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -687,6 +687,44 @@ static int dwc3_gadget_calc_tx_fifo_size(struct dwc3 *dwc, int mult)
+ return fifo_size;
+ }
+
++/**
++ * dwc3_gadget_calc_ram_depth - calculates the ram depth for txfifo
++ * @dwc: pointer to the DWC3 context
++ */
++static int dwc3_gadget_calc_ram_depth(struct dwc3 *dwc)
++{
++ int ram_depth;
++ int fifo_0_start;
++ bool is_single_port_ram;
++
++ /* Check supporting RAM type by HW */
++ is_single_port_ram = DWC3_SPRAM_TYPE(dwc->hwparams.hwparams1);
++
++ /*
++ * If a single port RAM is utilized, then allocate TxFIFOs from
++ * RAM0. Otherwise, allocate them from RAM1.
++ */
++ ram_depth = is_single_port_ram ? DWC3_RAM0_DEPTH(dwc->hwparams.hwparams6) :
++ DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
++
++ /*
++ * In a single port RAM configuration, the available RAM is shared
++ * between the RX and TX FIFOs. This means that the txfifo can begin
++ * at a non-zero address.
++ */
++ if (is_single_port_ram) {
++ u32 reg;
++
++ /* Check if TXFIFOs start at non-zero addr */
++ reg = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(0));
++ fifo_0_start = DWC3_GTXFIFOSIZ_TXFSTADDR(reg);
++
++ ram_depth -= (fifo_0_start >> 16);
++ }
++
++ return ram_depth;
++}
++
+ /**
+ * dwc3_gadget_clear_tx_fifos - Clears txfifo allocation
+ * @dwc: pointer to the DWC3 context
+@@ -753,7 +791,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+ {
+ struct dwc3 *dwc = dep->dwc;
+ int fifo_0_start;
+- int ram1_depth;
++ int ram_depth;
+ int fifo_size;
+ int min_depth;
+ int num_in_ep;
+@@ -773,7 +811,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+ if (dep->flags & DWC3_EP_TXFIFO_RESIZED)
+ return 0;
+
+- ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
++ ram_depth = dwc3_gadget_calc_ram_depth(dwc);
+
+ if ((dep->endpoint.maxburst > 1 &&
+ usb_endpoint_xfer_bulk(dep->endpoint.desc)) ||
+@@ -794,7 +832,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+
+ /* Reserve at least one FIFO for the number of IN EPs */
+ min_depth = num_in_ep * (fifo + 1);
+- remaining = ram1_depth - min_depth - dwc->last_fifo_depth;
++ remaining = ram_depth - min_depth - dwc->last_fifo_depth;
+ remaining = max_t(int, 0, remaining);
+ /*
+ * We've already reserved 1 FIFO per EP, so check what we can fit in
+@@ -820,9 +858,9 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+ dwc->last_fifo_depth += DWC31_GTXFIFOSIZ_TXFDEP(fifo_size);
+
+ /* Check fifo size allocation doesn't exceed available RAM size. */
+- if (dwc->last_fifo_depth >= ram1_depth) {
++ if (dwc->last_fifo_depth >= ram_depth) {
+ dev_err(dwc->dev, "Fifosize(%d) > RAM size(%d) %s depth:%d\n",
+- dwc->last_fifo_depth, ram1_depth,
++ dwc->last_fifo_depth, ram_depth,
+ dep->endpoint.name, fifo_size);
+ if (DWC3_IP_IS(DWC3))
+ fifo_size = DWC3_GTXFIFOSIZ_TXFDEP(fifo_size);
+@@ -1177,11 +1215,14 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ * pending to be processed by the driver.
+ */
+ if (dep->trb_enqueue == dep->trb_dequeue) {
++ struct dwc3_request *req;
++
+ /*
+- * If there is any request remained in the started_list at
+- * this point, that means there is no TRB available.
++ * If there is any request remaining in the started_list with
++ * active TRBs at this point, then there is no TRB available.
+ */
+- if (!list_empty(&dep->started_list))
++ req = next_request(&dep->started_list);
++ if (req && req->num_trbs)
+ return 0;
+
+ return DWC3_TRB_NUM - 1;
+@@ -1414,8 +1455,8 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ struct scatterlist *s;
+ int i;
+ unsigned int length = req->request.length;
+- unsigned int remaining = req->request.num_mapped_sgs
+- - req->num_queued_sgs;
++ unsigned int remaining = req->num_pending_sgs;
++ unsigned int num_queued_sgs = req->request.num_mapped_sgs - remaining;
+ unsigned int num_trbs = req->num_trbs;
+ bool needs_extra_trb = dwc3_needs_extra_trb(dep, req);
+
+@@ -1423,7 +1464,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ * If we resume preparing the request, then get the remaining length of
+ * the request and resume where we left off.
+ */
+- for_each_sg(req->request.sg, s, req->num_queued_sgs, i)
++ for_each_sg(req->request.sg, s, num_queued_sgs, i)
+ length -= sg_dma_len(s);
+
+ for_each_sg(sg, s, remaining, i) {
+@@ -3075,7 +3116,7 @@ static int dwc3_gadget_check_config(struct usb_gadget *g)
+ struct dwc3 *dwc = gadget_to_dwc(g);
+ struct usb_ep *ep;
+ int fifo_size = 0;
+- int ram1_depth;
++ int ram_depth;
+ int ep_num = 0;
+
+ if (!dwc->do_fifo_resize)
+@@ -3098,8 +3139,8 @@ static int dwc3_gadget_check_config(struct usb_gadget *g)
+ fifo_size += dwc->max_cfg_eps;
+
+ /* Check if we can fit a single fifo per endpoint */
+- ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
+- if (fifo_size > ram1_depth)
++ ram_depth = dwc3_gadget_calc_ram_depth(dwc);
++ if (fifo_size > ram_depth)
+ return -ENOMEM;
+
+ return 0;
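As a standalone illustration of the HWPARAMS bitfield extraction that
dwc3_gadget_calc_ram_depth() relies on — the register values here are made
up for the example:

	#include <stdint.h>
	#include <stdio.h>

	#define DWC3_SPRAM_TYPE(n)	(((n) >> 23) & 1)
	#define DWC3_RAM0_DEPTH(n)	(((n) & 0xffff0000) >> 16)
	#define DWC3_RAM1_DEPTH(n)	((n) & 0xffff)

	int main(void)
	{
		uint32_t hwparams1 = 1u << 23;		/* single port RAM (example) */
		uint32_t hwparams6 = 0x2000u << 16;	/* RAM0 depth 0x2000 (example) */
		uint32_t hwparams7 = 0x1000;		/* RAM1 depth 0x1000 (example) */
		int depth;

		/* single port RAM: TxFIFOs come out of RAM0, otherwise RAM1 */
		depth = DWC3_SPRAM_TYPE(hwparams1) ? DWC3_RAM0_DEPTH(hwparams6)
						   : DWC3_RAM1_DEPTH(hwparams7);
		printf("tx fifo ram depth: %d\n", depth);
		return 0;
	}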
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index f25dd2cb5d03b1..cec86c0c6369ca 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2111,8 +2111,20 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ memset(buf, 0, w_length);
+ buf[5] = 0x01;
+ switch (ctrl->bRequestType & USB_RECIP_MASK) {
++ /*
++ * The Microsoft CompatID OS Descriptor Spec (w_index = 0x4) and
++ * Extended Prop OS Desc Spec (w_index = 0x5) state that the
++ * HighByte of wValue is the InterfaceNumber and the LowByte is
++ * the PageNumber. This high/low byte ordering is incorrectly
++ * documented in the Spec. USB analyzer output on the below
++ * request packets shows the high/low bytes inverted, i.e. the
++ * LowByte is the InterfaceNumber and the HighByte is the
++ * PageNumber. Since we don't support >64KB CompatID/ExtendedProp
++ * descriptors, PageNumber is set to 0. Hence verify that the
++ * HighByte is 0 for the two cases below.
++ */
+ case USB_RECIP_DEVICE:
+- if (w_index != 0x4 || (w_value & 0xff))
++ if (w_index != 0x4 || (w_value >> 8))
+ break;
+ buf[6] = w_index;
+ /* Number of ext compat interfaces */
+@@ -2128,9 +2140,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ }
+ break;
+ case USB_RECIP_INTERFACE:
+- if (w_index != 0x5 || (w_value & 0xff))
++ if (w_index != 0x5 || (w_value >> 8))
+ break;
+- interface = w_value >> 8;
++ interface = w_value & 0xFF;
+ if (interface >= MAX_CONFIG_INTERFACES ||
+ !os_desc_cfg->interface[interface])
+ break;
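A minimal sketch of the corrected wValue decoding: on the wire the LowByte
carries the interface number and the HighByte the page number, and only
page 0 is supported. The sample value is invented for the example:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint16_t w_value = 0x0003;	/* interface 3, page 0 (example) */
		uint8_t page = w_value >> 8;	/* must be 0: no >64KB descriptors */
		uint8_t intf = w_value & 0xff;	/* interface number */

		if (page != 0)
			return 1;		/* reject, as the fix above does */
		printf("interface %u\n", intf);
		return 0;
	}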
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index 57a851151225de..002bf724d8025d 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -480,6 +480,10 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ * up later.
+ */
+ list_add_tail(&to_queue->list, &video->req_free);
++ /*
++ * There is a new free request - wake up the pump.
++ */
++ queue_work(video->async_wq, &video->pump);
+ }
+
+ spin_unlock_irqrestore(&video->req_lock, flags);
+diff --git a/drivers/usb/host/ehci-spear.c b/drivers/usb/host/ehci-spear.c
+index d0e94e4c9fe274..11294f196ee335 100644
+--- a/drivers/usb/host/ehci-spear.c
++++ b/drivers/usb/host/ehci-spear.c
+@@ -105,7 +105,9 @@ static int spear_ehci_hcd_drv_probe(struct platform_device *pdev)
+ /* registers start at offset 0x0 */
+ hcd_to_ehci(hcd)->caps = hcd->regs;
+
+- clk_prepare_enable(sehci->clk);
++ retval = clk_prepare_enable(sehci->clk);
++ if (retval)
++ goto err_put_hcd;
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (retval)
+ goto err_stop_ehci;
+@@ -130,8 +132,7 @@ static void spear_ehci_hcd_drv_remove(struct platform_device *pdev)
+
+ usb_remove_hcd(hcd);
+
+- if (sehci->clk)
+- clk_disable_unprepare(sehci->clk);
++ clk_disable_unprepare(sehci->clk);
+ usb_put_hcd(hcd);
+ }
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index cb07cee9ed0c75..3ba9902dd2093c 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -395,14 +395,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+- pdev->device == PCI_DEVICE_ID_EJ168) {
+- xhci->quirks |= XHCI_RESET_ON_RESUME;
+- xhci->quirks |= XHCI_BROKEN_STREAMS;
+- }
+- if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+- pdev->device == PCI_DEVICE_ID_EJ188) {
++ (pdev->device == PCI_DEVICE_ID_EJ168 ||
++ pdev->device == PCI_DEVICE_ID_EJ188)) {
++ xhci->quirks |= XHCI_ETRON_HOST;
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
++ xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ }
+
+ if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 928b93ad1ee866..f318864732f2db 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -52,6 +52,7 @@
+ * endpoint rings; it generates events on the event ring for these.
+ */
+
++#include <linux/jiffies.h>
+ #include <linux/scatterlist.h>
+ #include <linux/slab.h>
+ #include <linux/dma-mapping.h>
+@@ -972,6 +973,13 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ unsigned int slot_id = ep->vdev->slot_id;
+ int err;
+
++ /*
++ * This is not going to work if the hardware is changing its dequeue
++ * pointers as we look at them. Completion handler will call us later.
++ */
++ if (ep->ep_state & SET_DEQ_PENDING)
++ return 0;
++
+ xhci = ep->xhci;
+
+ list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
+@@ -1061,6 +1069,19 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ return 0;
+ }
+
++/*
++ * Erase queued TDs from transfer ring(s) and give back those the xHC didn't
++ * stop on. If necessary, queue commands to move the xHC off cancelled TDs it
++ * stopped on. Those will be given back later when the commands complete.
++ *
++ * Call under xhci->lock on a stopped endpoint.
++ */
++void xhci_process_cancelled_tds(struct xhci_virt_ep *ep)
++{
++ xhci_invalidate_cancelled_tds(ep);
++ xhci_giveback_invalidated_tds(ep);
++}
++
+ /*
+ * Returns the TD the endpoint ring halted on.
+ * Only call for non-running rings without streams.
+@@ -1151,16 +1172,35 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ return;
+ case EP_STATE_STOPPED:
+ /*
+- * NEC uPD720200 sometimes sets this state and fails with
+- * Context Error while continuing to process TRBs.
+- * Be conservative and trust EP_CTX_STATE on other chips.
++ * Per xHCI 4.6.9, Stop Endpoint command on a Stopped
++ * EP is a Context State Error, and EP stays Stopped.
++ *
++ * But maybe it failed on Halted, and somebody ran Reset
++ * Endpoint later. EP state is now Stopped and EP_HALTED
++ * still set because Reset EP handler will run after us.
++ */
++ if (ep->ep_state & EP_HALTED)
++ break;
++ /*
++ * On some HCs EP state remains Stopped for some tens of
++ * us to a few ms or more after a doorbell ring, and any
++ * new Stop Endpoint fails without aborting the restart.
++ * This handler may run quickly enough to still see this
++ * Stopped state, but it will soon change to Running.
++ *
++ * Assume this bug on unexpected Stop Endpoint failures.
++ * Keep retrying until the EP starts and stops again, on
++ * chips where this is known to help. Wait for 100ms.
+ */
+ if (!(xhci->quirks & XHCI_NEC_HOST))
+ break;
++ if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
++ break;
+ fallthrough;
+ case EP_STATE_RUNNING:
+ /* Race, HW handled stop ep cmd before ep was running */
+- xhci_dbg(xhci, "Stop ep completion ctx error, ep is running\n");
++ xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n",
++ GET_EP_CTX_STATE(ep_ctx));
+
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+@@ -1339,7 +1379,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ struct xhci_ep_ctx *ep_ctx;
+ struct xhci_slot_ctx *slot_ctx;
+ struct xhci_td *td, *tmp_td;
+- bool deferred = false;
+
+ ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+ stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
+@@ -1440,8 +1479,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n",
+ __func__, td->urb);
+ xhci_td_cleanup(ep->xhci, td, ep_ring, td->status);
+- } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) {
+- deferred = true;
+ } else {
+ xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n",
+ __func__, td->urb, td->cancel_status);
+@@ -1452,11 +1489,15 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ ep->queued_deq_seg = NULL;
+ ep->queued_deq_ptr = NULL;
+
+- if (deferred) {
+- /* We have more streams to clear */
++ /* Check for deferred or newly cancelled TDs */
++ if (!list_empty(&ep->cancelled_td_list)) {
+ xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n",
+ __func__);
+ xhci_invalidate_cancelled_tds(ep);
++ /* Try to restart the endpoint if all is done */
++ ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
++ /* Start giving back any TDs invalidated above */
++ xhci_giveback_invalidated_tds(ep);
+ } else {
+ /* Restart any rings with pending URBs */
+ xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__);
+@@ -3727,6 +3768,20 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ if (!urb->setup_packet)
+ return -EINVAL;
+
++ if ((xhci->quirks & XHCI_ETRON_HOST) &&
++ urb->dev->speed >= USB_SPEED_SUPER) {
++ /*
++ * If the next available TRB is the Link TRB in the ring segment, then
++ * enqueue a No-Op TRB first; this prevents the Setup and Data Stage
++ * TRBs from being split by the Link TRB.
++ */
++ if (trb_is_link(ep_ring->enqueue + 1)) {
++ field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;
++ queue_trb(xhci, ep_ring, false, 0, 0,
++ TRB_INTR_TARGET(0), field);
++ }
++ }
++
+ /* 1 TRB for setup, 1 for status */
+ num_trbs = 2;
+ /*
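A sketch of the Etron workaround's geometry, under the assumption of a
hypothetical fixed-size segment whose last slot is the Link TRB (the real
driver checks trb_is_link() on the enqueue pointer instead): if the slot
right after the Setup TRB would be the Link TRB, a No-Op TRB is queued first
so Setup and Data land in the same segment.

	#define TRBS_PER_SEGMENT 256	/* hypothetical segment size */

	/* True when the slot after the next enqueue is the segment's Link TRB. */
	static int next_slot_is_link(unsigned int enqueue)
	{
		return (enqueue + 1) % TRBS_PER_SEGMENT == TRBS_PER_SEGMENT - 1;
	}

	/*
	 * Before queueing Setup + Data on an affected SuperSpeed host
	 * (queue_noop_trb() etc. are hypothetical helpers):
	 *
	 *	if (next_slot_is_link(ring->enqueue))
	 *		queue_noop_trb(ring);
	 *	queue_setup_trb(ring, ...);
	 *	queue_data_trb(ring, ...);
	 */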
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 899c0effb5d3c1..358ed674f782fb 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -8,6 +8,7 @@
+ * Some code borrowed from the Linux EHCI driver.
+ */
+
++#include <linux/jiffies.h>
+ #include <linux/pci.h>
+ #include <linux/iommu.h>
+ #include <linux/iopoll.h>
+@@ -1768,15 +1769,27 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ }
+ }
+
+- /* Queue a stop endpoint command, but only if this is
+- * the first cancellation to be handled.
+- */
+- if (!(ep->ep_state & EP_STOP_CMD_PENDING)) {
++ /* These completion handlers will sort out cancelled TDs for us */
++ if (ep->ep_state & (EP_STOP_CMD_PENDING | EP_HALTED | SET_DEQ_PENDING)) {
++ xhci_dbg(xhci, "Not queuing Stop Endpoint on slot %d ep %d in state 0x%x\n",
++ urb->dev->slot_id, ep_index, ep->ep_state);
++ goto done;
++ }
++
++ /* In this case no commands are pending but the endpoint is stopped */
++ if (ep->ep_state & EP_CLEARING_TT) {
++ /* and cancelled TDs can be given back right away */
++ xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n",
++ urb->dev->slot_id, ep_index, ep->ep_state);
++ xhci_process_cancelled_tds(ep);
++ } else {
++ /* Otherwise, queue a new Stop Endpoint command */
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+ ret = -ENOMEM;
+ goto done;
+ }
++ ep->stop_time = jiffies;
+ ep->ep_state |= EP_STOP_CMD_PENDING;
+ xhci_queue_stop_endpoint(xhci, command, urb->dev->slot_id,
+ ep_index, 0);
+@@ -3692,6 +3705,8 @@ void xhci_free_device_endpoint_resources(struct xhci_hcd *xhci,
+ xhci->num_active_eps);
+ }
+
++static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev);
++
+ /*
+ * This submits a Reset Device Command, which will set the device state to 0,
+ * set the device address to 0, and disable all the endpoints except the default
+@@ -3762,6 +3777,23 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd,
+ SLOT_STATE_DISABLED)
+ return 0;
+
++ if (xhci->quirks & XHCI_ETRON_HOST) {
++ /*
++ * Obtaining a new device slot to inform the xHCI host that
++ * the USB device has been reset.
++ */
++ ret = xhci_disable_slot(xhci, udev->slot_id);
++ xhci_free_virt_device(xhci, udev->slot_id);
++ if (!ret) {
++ ret = xhci_alloc_dev(hcd, udev);
++ if (ret == 1)
++ ret = 0;
++ else
++ ret = -EINVAL;
++ }
++ return ret;
++ }
++
+ trace_xhci_discover_or_reset_device(slot_ctx);
+
+ xhci_dbg(xhci, "Resetting device with slot ID %u\n", slot_id);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index f0fb696d561986..673179047eb82e 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -690,6 +690,7 @@ struct xhci_virt_ep {
+ /* Bandwidth checking storage */
+ struct xhci_bw_info bw_info;
+ struct list_head bw_endpoint_list;
++ unsigned long stop_time;
+ /* Isoch Frame ID checking storage */
+ int next_frame_id;
+ /* Use new Isoch TRB layout needed for extended TBC support */
+@@ -1624,6 +1625,7 @@ struct xhci_hcd {
+ #define XHCI_ZHAOXIN_HOST BIT_ULL(46)
+ #define XHCI_WRITE_64_HI_LO BIT_ULL(47)
+ #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
++#define XHCI_ETRON_HOST BIT_ULL(49)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+@@ -1913,6 +1915,7 @@ void xhci_ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
+ void xhci_cleanup_command_queue(struct xhci_hcd *xhci);
+ void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring);
+ unsigned int count_trbs(u64 addr, u64 len);
++void xhci_process_cancelled_tds(struct xhci_virt_ep *ep);
+
+ /* xHCI roothub code */
+ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,
+diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
+index 6fb5140e29b9dd..225863321dc479 100644
+--- a/drivers/usb/misc/chaoskey.c
++++ b/drivers/usb/misc/chaoskey.c
+@@ -27,6 +27,8 @@ static struct usb_class_driver chaoskey_class;
+ static int chaoskey_rng_read(struct hwrng *rng, void *data,
+ size_t max, bool wait);
+
++static DEFINE_MUTEX(chaoskey_list_lock);
++
+ #define usb_dbg(usb_if, format, arg...) \
+ dev_dbg(&(usb_if)->dev, format, ## arg)
+
+@@ -233,6 +235,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
+ usb_deregister_dev(interface, &chaoskey_class);
+
+ usb_set_intfdata(interface, NULL);
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+
+ dev->present = false;
+@@ -244,6 +247,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
+ } else
+ mutex_unlock(&dev->lock);
+
++ mutex_unlock(&chaoskey_list_lock);
+ usb_dbg(interface, "disconnect done");
+ }
+
+@@ -251,6 +255,7 @@ static int chaoskey_open(struct inode *inode, struct file *file)
+ {
+ struct chaoskey *dev;
+ struct usb_interface *interface;
++ int rv = 0;
+
+ /* get the interface from minor number and driver information */
+ interface = usb_find_interface(&chaoskey_driver, iminor(inode));
+@@ -266,18 +271,23 @@ static int chaoskey_open(struct inode *inode, struct file *file)
+ }
+
+ file->private_data = dev;
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+- ++dev->open;
++ if (dev->present)
++ ++dev->open;
++ else
++ rv = -ENODEV;
+ mutex_unlock(&dev->lock);
++ mutex_unlock(&chaoskey_list_lock);
+
+- usb_dbg(interface, "open success");
+- return 0;
++ return rv;
+ }
+
+ static int chaoskey_release(struct inode *inode, struct file *file)
+ {
+ struct chaoskey *dev = file->private_data;
+ struct usb_interface *interface;
++ int rv = 0;
+
+ if (dev == NULL)
+ return -ENODEV;
+@@ -286,14 +296,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
+
+ usb_dbg(interface, "release");
+
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+
+ usb_dbg(interface, "open count at release is %d", dev->open);
+
+ if (dev->open <= 0) {
+ usb_dbg(interface, "invalid open count (%d)", dev->open);
+- mutex_unlock(&dev->lock);
+- return -ENODEV;
++ rv = -ENODEV;
++ goto bail;
+ }
+
+ --dev->open;
+@@ -302,13 +313,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
+ if (dev->open == 0) {
+ mutex_unlock(&dev->lock);
+ chaoskey_free(dev);
+- } else
+- mutex_unlock(&dev->lock);
+- } else
+- mutex_unlock(&dev->lock);
+-
++ goto destruction;
++ }
++ }
++bail:
++ mutex_unlock(&dev->lock);
++destruction:
++ mutex_unlock(&chaoskey_list_lock);
+ usb_dbg(interface, "release success");
+- return 0;
++ return rv;
+ }
+
+ static void chaos_read_callback(struct urb *urb)
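The chaoskey changes introduce a strict lock order — the global
chaoskey_list_lock is always taken before the per-device lock — so open()
and release() can no longer race disconnect() over teardown. A schematic
sketch with demo types (locks assumed initialized elsewhere), not the
driver's own:

	#include <linux/mutex.h>

	struct demo_dev {
		struct mutex lock;
		bool present;
		int open;
	};

	static DEFINE_MUTEX(demo_list_lock);	/* outer lock, taken first */

	static int demo_open(struct demo_dev *dev)
	{
		int rv = 0;

		mutex_lock(&demo_list_lock);	/* always first */
		mutex_lock(&dev->lock);		/* always second */
		if (dev->present)
			dev->open++;
		else
			rv = -ENODEV;		/* disconnect() won the race */
		mutex_unlock(&dev->lock);
		mutex_unlock(&demo_list_lock);
		return rv;
	}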
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 6d28467ce35227..365c1006934583 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -277,28 +277,45 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
+ struct iowarrior *dev;
+ int read_idx;
+ int offset;
++ int retval;
+
+ dev = file->private_data;
+
++ if (file->f_flags & O_NONBLOCK) {
++ retval = mutex_trylock(&dev->mutex);
++ if (!retval)
++ return -EAGAIN;
++ } else {
++ retval = mutex_lock_interruptible(&dev->mutex);
++ if (retval)
++ return -ERESTARTSYS;
++ }
++
+ /* verify that the device wasn't unplugged */
+- if (!dev || !dev->present)
+- return -ENODEV;
++ if (!dev->present) {
++ retval = -ENODEV;
++ goto exit;
++ }
+
+ dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n",
+ dev->minor, count);
+
+ /* read count must be packet size (+ time stamp) */
+ if ((count != dev->report_size)
+- && (count != (dev->report_size + 1)))
+- return -EINVAL;
++ && (count != (dev->report_size + 1))) {
++ retval = -EINVAL;
++ goto exit;
++ }
+
+ /* repeat until no buffer overrun in callback handler occur */
+ do {
+ atomic_set(&dev->overflow_flag, 0);
+ if ((read_idx = read_index(dev)) == -1) {
+ /* queue empty */
+- if (file->f_flags & O_NONBLOCK)
+- return -EAGAIN;
++ if (file->f_flags & O_NONBLOCK) {
++ retval = -EAGAIN;
++ goto exit;
++ }
+ else {
+ //next line will return when there is either new data, or the device is unplugged
+ int r = wait_event_interruptible(dev->read_wait,
+@@ -309,28 +326,37 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
+ -1));
+ if (r) {
+ //we were interrupted by a signal
+- return -ERESTART;
++ retval = -ERESTART;
++ goto exit;
+ }
+ if (!dev->present) {
+ //The device was unplugged
+- return -ENODEV;
++ retval = -ENODEV;
++ goto exit;
+ }
+ if (read_idx == -1) {
+ // Can this happen ???
+- return 0;
++ retval = 0;
++ goto exit;
+ }
+ }
+ }
+
+ offset = read_idx * (dev->report_size + 1);
+ if (copy_to_user(buffer, dev->read_queue + offset, count)) {
+- return -EFAULT;
++ retval = -EFAULT;
++ goto exit;
+ }
+ } while (atomic_read(&dev->overflow_flag));
+
+ read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx;
+ atomic_set(&dev->read_idx, read_idx);
++ mutex_unlock(&dev->mutex);
+ return count;
++
++exit:
++ mutex_unlock(&dev->mutex);
++ return retval;
+ }
+
+ /*
+@@ -885,7 +911,6 @@ static int iowarrior_probe(struct usb_interface *interface,
+ static void iowarrior_disconnect(struct usb_interface *interface)
+ {
+ struct iowarrior *dev = usb_get_intfdata(interface);
+- int minor = dev->minor;
+
+ usb_deregister_dev(interface, &iowarrior_class);
+
+@@ -909,9 +934,6 @@ static void iowarrior_disconnect(struct usb_interface *interface)
+ mutex_unlock(&dev->mutex);
+ iowarrior_delete(dev);
+ }
+-
+- dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n",
+- minor - IOWARRIOR_MINOR_BASE);
+ }
+
+ /* usb specific object needed to register this driver with the usb subsystem */
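The read path above follows the usual non-blocking-I/O locking idiom:
trylock under O_NONBLOCK, interruptible lock otherwise, and every exit
funneled through a single unlock. A condensed sketch, with demo_do_read()
as a hypothetical placeholder for the actual read logic:

	static ssize_t demo_do_read(struct demo_dev *dev);

	static ssize_t demo_read(struct file *file, struct demo_dev *dev)
	{
		ssize_t retval;

		if (file->f_flags & O_NONBLOCK) {
			if (!mutex_trylock(&dev->mutex))
				return -EAGAIN;		/* would block */
		} else if (mutex_lock_interruptible(&dev->mutex)) {
			return -ERESTARTSYS;		/* interrupted by a signal */
		}

		retval = demo_do_read(dev);

		mutex_unlock(&dev->mutex);	/* single unlock for all paths */
		return retval;
	}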
+diff --git a/drivers/usb/misc/usb-ljca.c b/drivers/usb/misc/usb-ljca.c
+index 01ceafc4ab78ce..d9c21f7830557b 100644
+--- a/drivers/usb/misc/usb-ljca.c
++++ b/drivers/usb/misc/usb-ljca.c
+@@ -332,14 +332,11 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
+
+ ret = usb_bulk_msg(adap->usb_dev, adap->tx_pipe, header,
+ msg_len, &transferred, LJCA_WRITE_TIMEOUT_MS);
+-
+- usb_autopm_put_interface(adap->intf);
+-
+ if (ret < 0)
+- goto out;
++ goto out_put;
+ if (transferred != msg_len) {
+ ret = -EIO;
+- goto out;
++ goto out_put;
+ }
+
+ if (ack) {
+@@ -347,11 +344,14 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
+ timeout);
+ if (!ret) {
+ ret = -ETIMEDOUT;
+- goto out;
++ goto out_put;
+ }
+ }
+ ret = adap->actual_length;
+
++out_put:
++ usb_autopm_put_interface(adap->intf);
++
+ out:
+ spin_lock_irqsave(&adap->lock, flags);
+ adap->ex_buf = NULL;
+@@ -811,6 +811,14 @@ static int ljca_probe(struct usb_interface *interface,
+ if (ret)
+ goto err_free;
+
++ /*
++ * This works around problems with ov2740 initialization on some
++ * Lenovo platforms. The autosuspend delay has to be smaller than
++ * the delay after setting the reset_gpio line in ov2740_resume().
++ * Otherwise the sensor randomly fails to initialize.
++ */
++ pm_runtime_set_autosuspend_delay(&usb_dev->dev, 10);
++
+ usb_enable_autosuspend(usb_dev);
+
+ return 0;
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 6aebc736a80c66..70dff0db5354ff 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -441,7 +441,10 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ if (count == 0)
+ goto error;
+
+- mutex_lock(&dev->io_mutex);
++ retval = mutex_lock_interruptible(&dev->io_mutex);
++ if (retval < 0)
++ return -EINTR;
++
+ if (dev->disconnected) { /* already disconnected */
+ mutex_unlock(&dev->io_mutex);
+ retval = -ENODEV;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index bdf13911a1e590..c6076df0d50cc7 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1161,12 +1161,19 @@ void musb_free_request(struct usb_ep *ep, struct usb_request *req)
+ */
+ void musb_ep_restart(struct musb *musb, struct musb_request *req)
+ {
++ u16 csr;
++ void __iomem *epio = req->ep->hw_ep->regs;
++
+ trace_musb_req_start(req);
+ musb_ep_select(musb->mregs, req->epnum);
+- if (req->tx)
++ if (req->tx) {
+ txstate(musb, req);
+- else
+- rxstate(musb, req);
++ } else {
++ csr = musb_readw(epio, MUSB_RXCSR);
++ csr |= MUSB_RXCSR_FLUSHFIFO | MUSB_RXCSR_P_WZC_BITS;
++ musb_writew(epio, MUSB_RXCSR, csr);
++ musb_writew(epio, MUSB_RXCSR, csr);
++ }
+ }
+
+ static int musb_ep_restart_resume_work(struct musb *musb, void *data)
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index cf719307b3f6b9..60b2766a69bf8a 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -621,10 +621,6 @@ static int wcove_typec_probe(struct platform_device *pdev)
+ if (irq < 0)
+ return irq;
+
+- irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
+- if (irq < 0)
+- return irq;
+-
+ ret = guid_parse(WCOVE_DSM_UUID, &wcove->guid);
+ if (ret)
+ return ret;
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index bccfc03b5986d7..fcb8e61136cfd7 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -644,6 +644,10 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ uc->has_multiple_dp) {
+ con_index = (uc->last_cmd_sent >> 16) &
+ UCSI_CMD_CONNECTOR_MASK;
++ if (con_index == 0) {
++ ret = -EINVAL;
++ goto unlock;
++ }
+ con = &uc->ucsi->connector[con_index - 1];
+ ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
+ }
+@@ -651,6 +655,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ ret = ucsi_sync_control_common(ucsi, command);
+
+ pm_runtime_put_sync(uc->dev);
++unlock:
+ mutex_unlock(&uc->lock);
+
+ return ret;
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index 03c0fa8edc8db5..f7000d383a4e62 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -185,7 +185,7 @@ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
+ struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
+ int orientation;
+
+- if (con->num >= PMIC_GLINK_MAX_PORTS ||
++ if (con->num > PMIC_GLINK_MAX_PORTS ||
+ !ucsi->port_orientation[con->num - 1])
+ return;
+
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 7d0c83b5b07158..8455f08f5d4060 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -368,7 +368,6 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ unsigned long lgcd = 0;
+ int log_entity_size;
+ unsigned long size;
+- u64 start = 0;
+ int err;
+ struct page *pg;
+ unsigned int nsg;
+@@ -379,10 +378,9 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ struct device *dma = mvdev->vdev.dma_dev;
+
+ for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
+- map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
++ map; map = vhost_iotlb_itree_next(map, mr->start, mr->end - 1)) {
+ size = maplen(map, mr);
+ lgcd = gcd(lgcd, size);
+- start += size;
+ }
+ log_entity_size = ilog2(lgcd);
+
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 41a4b0cf429756..7527e277c89897 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -423,6 +423,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ unsigned long filled;
+ unsigned int to_fill;
+ int ret;
++ int i;
+
+ to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list));
+ page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT);
+@@ -443,7 +444,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ GFP_KERNEL_ACCOUNT);
+
+ if (ret)
+- goto err;
++ goto err_append;
+ buf->allocated_length += filled * PAGE_SIZE;
+ /* clean input for another bulk allocation */
+ memset(page_list, 0, filled * sizeof(*page_list));
+@@ -454,6 +455,9 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ kvfree(page_list);
+ return 0;
+
++err_append:
++ for (i = filled - 1; i >= 0; i--)
++ __free_page(page_list[i]);
+ err:
+ kvfree(page_list);
+ return ret;
+diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
+index 242c23eef452e8..8833e60d42f566 100644
+--- a/drivers/vfio/pci/mlx5/main.c
++++ b/drivers/vfio/pci/mlx5/main.c
+@@ -640,14 +640,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ O_RDONLY);
+ if (IS_ERR(migf->filp)) {
+ ret = PTR_ERR(migf->filp);
+- goto end;
++ kfree(migf);
++ return ERR_PTR(ret);
+ }
+
+ migf->mvdev = mvdev;
+- ret = mlx5vf_cmd_alloc_pd(migf);
+- if (ret)
+- goto out_free;
+-
+ stream_open(migf->filp->f_inode, migf->filp);
+ mutex_init(&migf->lock);
+ init_waitqueue_head(&migf->poll_wait);
+@@ -663,6 +660,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ INIT_LIST_HEAD(&migf->buf_list);
+ INIT_LIST_HEAD(&migf->avail_list);
+ spin_lock_init(&migf->list_lock);
++
++ ret = mlx5vf_cmd_alloc_pd(migf);
++ if (ret)
++ goto out;
++
+ ret = mlx5vf_cmd_query_vhca_migration_state(mvdev, &length, &full_size, 0);
+ if (ret)
+ goto out_pd;
+@@ -692,10 +694,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ mlx5vf_free_data_buffer(buf);
+ out_pd:
+ mlx5fv_cmd_clean_migf_resources(migf);
+-out_free:
++out:
+ fput(migf->filp);
+-end:
+- kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+@@ -1016,13 +1016,19 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev)
+ O_WRONLY);
+ if (IS_ERR(migf->filp)) {
+ ret = PTR_ERR(migf->filp);
+- goto end;
++ kfree(migf);
++ return ERR_PTR(ret);
+ }
+
++ stream_open(migf->filp->f_inode, migf->filp);
++ mutex_init(&migf->lock);
++ INIT_LIST_HEAD(&migf->buf_list);
++ INIT_LIST_HEAD(&migf->avail_list);
++ spin_lock_init(&migf->list_lock);
+ migf->mvdev = mvdev;
+ ret = mlx5vf_cmd_alloc_pd(migf);
+ if (ret)
+- goto out_free;
++ goto out;
+
+ buf = mlx5vf_alloc_data_buffer(migf, 0, DMA_TO_DEVICE);
+ if (IS_ERR(buf)) {
+@@ -1041,20 +1047,13 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev)
+ migf->buf_header[0] = buf;
+ migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER;
+
+- stream_open(migf->filp->f_inode, migf->filp);
+- mutex_init(&migf->lock);
+- INIT_LIST_HEAD(&migf->buf_list);
+- INIT_LIST_HEAD(&migf->avail_list);
+- spin_lock_init(&migf->list_lock);
+ return migf;
+ out_buf:
+ mlx5vf_free_data_buffer(migf->buf[0]);
+ out_pd:
+ mlx5vf_cmd_dealloc_pd(migf);
+-out_free:
++out:
+ fput(migf->filp);
+-end:
+- kfree(migf);
+ return ERR_PTR(ret);
+ }
+
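The mlx5 reordering above follows a general rule for constructors that
unwind with goto: do the infallible initialization (mutexes, list heads,
spinlocks) immediately after the first allocation succeeds, so every later
failure can share one unwind ladder instead of tracking which fields are
live. A schematic sketch with hypothetical helpers, assuming fput() on the
file ultimately frees migf via its release handler:

	static struct demo_migf *demo_save_setup(void)
	{
		struct demo_migf *migf;
		int ret;

		migf = kzalloc(sizeof(*migf), GFP_KERNEL);
		if (!migf)
			return ERR_PTR(-ENOMEM);

		migf->filp = demo_get_file(migf);	/* hypothetical */
		if (IS_ERR(migf->filp)) {
			ret = PTR_ERR(migf->filp);
			kfree(migf);		/* nothing else to undo yet */
			return ERR_PTR(ret);
		}

		/* infallible init first, before any step that can fail */
		mutex_init(&migf->lock);
		INIT_LIST_HEAD(&migf->buf_list);
		spin_lock_init(&migf->list_lock);

		ret = demo_alloc_pd(migf);	/* hypothetical */
		if (ret)
			goto out_fput;
		return migf;

	out_fput:
		fput(migf->filp);	/* release handler frees migf */
		return ERR_PTR(ret);
	}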
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index 97422aafaa7b5d..ea2745c1ac5e68 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -313,6 +313,10 @@ static int vfio_virt_config_read(struct vfio_pci_core_device *vdev, int pos,
+ return count;
+ }
+
++static struct perm_bits direct_ro_perms = {
++ .readfn = vfio_direct_config_read,
++};
++
+ /* Default capability regions to read-only, no-virtualization */
+ static struct perm_bits cap_perms[PCI_CAP_ID_MAX + 1] = {
+ [0 ... PCI_CAP_ID_MAX] = { .readfn = vfio_direct_config_read }
+@@ -1897,9 +1901,17 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_core_device *vdev, char __user
+ cap_start = *ppos;
+ } else {
+ if (*ppos >= PCI_CFG_SPACE_SIZE) {
+- WARN_ON(cap_id > PCI_EXT_CAP_ID_MAX);
++ /*
++ * We can get a cap_id that exceeds PCI_EXT_CAP_ID_MAX
++ * if we're hiding an unknown capability at the start
++ * of the extended capability list. Use default, ro
++ * access, which will virtualize the id and next values.
++ */
++ if (cap_id > PCI_EXT_CAP_ID_MAX)
++ perm = &direct_ro_perms;
++ else
++ perm = &ecap_perms[cap_id];
+
+- perm = &ecap_perms[cap_id];
+ cap_start = vfio_find_cap_start(vdev, *ppos);
+ } else {
+ WARN_ON(cap_id > PCI_CAP_ID_MAX);
+diff --git a/drivers/video/fbdev/sh7760fb.c b/drivers/video/fbdev/sh7760fb.c
+index 3d2a27fefc874a..130adef2e46869 100644
+--- a/drivers/video/fbdev/sh7760fb.c
++++ b/drivers/video/fbdev/sh7760fb.c
+@@ -409,12 +409,11 @@ static int sh7760fb_alloc_mem(struct fb_info *info)
+ vram = PAGE_SIZE;
+
+ fbmem = dma_alloc_coherent(info->device, vram, &par->fbdma, GFP_KERNEL);
+-
+ if (!fbmem)
+ return -ENOMEM;
+
+ if ((par->fbdma & SH7760FB_DMA_MASK) != SH7760FB_DMA_MASK) {
+- sh7760fb_free_mem(info);
++ dma_free_coherent(info->device, vram, fbmem, par->fbdma);
+ dev_err(info->device, "kernel gave me memory at 0x%08lx, which is"
+ "unusable for the LCDC\n", (unsigned long)par->fbdma);
+ return -ENOMEM;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 684b9fe84fff5b..94c96bcfefe347 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1509,7 +1509,7 @@ config 60XX_WDT
+
+ config SBC8360_WDT
+ tristate "SBC8360 Watchdog Timer"
+- depends on X86_32
++ depends on X86_32 && HAS_IOPORT
+ help
+
+ This is the driver for the hardware watchdog on the SBC8360 Single
+@@ -1522,7 +1522,7 @@ config SBC8360_WDT
+
+ config SBC7240_WDT
+ tristate "SBC Nano 7240 Watchdog Timer"
+- depends on X86_32 && !UML
++ depends on X86_32 && HAS_IOPORT
+ help
+ This is the driver for the hardware watchdog found on the IEI
+ single board computers EPIC Nano 7240 (and likely others). This
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 9f097f1f4a4cf3..6d32ffb0113650 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -313,7 +313,7 @@ int xenbus_dev_probe(struct device *_dev)
+ if (err) {
+ dev_warn(&dev->dev, "watch_otherend on %s failed.\n",
+ dev->nodename);
+- return err;
++ goto fail_remove;
+ }
+
+ dev->spurious_threshold = 1;
+@@ -322,6 +322,12 @@ int xenbus_dev_probe(struct device *_dev)
+ dev->nodename);
+
+ return 0;
++fail_remove:
++ if (drv->remove) {
++ down(&dev->reclaim_sem);
++ drv->remove(dev);
++ up(&dev->reclaim_sem);
++ }
+ fail_put:
+ module_put(drv->driver.owner);
+ fail:
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 06dc4a57ba78a7..0a216a078c3155 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1251,6 +1251,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ }
+ reloc_func_desc = interp_load_addr;
+
++ allow_write_access(interpreter);
+ fput(interpreter);
+
+ kfree(interp_elf_ex);
+@@ -1347,6 +1348,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ kfree(interp_elf_ex);
+ kfree(interp_elf_phdata);
+ out_free_file:
++ allow_write_access(interpreter);
+ if (interpreter)
+ fput(interpreter);
+ out_free_ph:
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 4fe5bb9f1b1f5e..7d35f0e1bc7641 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -394,6 +394,7 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ goto error;
+ }
+
++ allow_write_access(interpreter);
+ fput(interpreter);
+ interpreter = NULL;
+ }
+@@ -465,8 +466,10 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ retval = 0;
+
+ error:
+- if (interpreter)
++ if (interpreter) {
++ allow_write_access(interpreter);
+ fput(interpreter);
++ }
+ kfree(interpreter_name);
+ kfree(exec_params.phdrs);
+ kfree(exec_params.loadmap);
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 31660d8cc2c610..6a3a16f910516c 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -247,10 +247,13 @@ static int load_misc_binary(struct linux_binprm *bprm)
+ if (retval < 0)
+ goto ret;
+
+- if (fmt->flags & MISC_FMT_OPEN_FILE)
++ if (fmt->flags & MISC_FMT_OPEN_FILE) {
+ interp_file = file_clone_open(fmt->interp_file);
+- else
++ if (!IS_ERR(interp_file))
++ deny_write_access(interp_file);
++ } else {
+ interp_file = open_exec(fmt->interpreter);
++ }
+ retval = PTR_ERR(interp_file);
+ if (IS_ERR(interp_file))
+ goto ret;
+diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c
+index 35ba2117a6f652..3e63cfe1587472 100644
+--- a/fs/cachefiles/interface.c
++++ b/fs/cachefiles/interface.c
+@@ -327,6 +327,8 @@ static void cachefiles_commit_object(struct cachefiles_object *object,
+ static void cachefiles_clean_up_object(struct cachefiles_object *object,
+ struct cachefiles_cache *cache)
+ {
++ struct file *file;
++
+ if (test_bit(FSCACHE_COOKIE_RETIRED, &object->cookie->flags)) {
+ if (!test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags)) {
+ cachefiles_see_object(object, cachefiles_obj_see_clean_delete);
+@@ -342,10 +344,14 @@ static void cachefiles_clean_up_object(struct cachefiles_object *object,
+ }
+
+ cachefiles_unmark_inode_in_use(object, object->file);
+- if (object->file) {
+- fput(object->file);
+- object->file = NULL;
+- }
++
++ spin_lock(&object->lock);
++ file = object->file;
++ object->file = NULL;
++ spin_unlock(&object->lock);
++
++ if (file)
++ fput(file);
+ }
+
+ /*
+diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
+index 470c9665838505..fe3de9ad57bf6d 100644
+--- a/fs/cachefiles/ondemand.c
++++ b/fs/cachefiles/ondemand.c
+@@ -60,26 +60,36 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
+ {
+ struct cachefiles_object *object = kiocb->ki_filp->private_data;
+ struct cachefiles_cache *cache = object->volume->cache;
+- struct file *file = object->file;
+- size_t len = iter->count;
++ struct file *file;
++ size_t len = iter->count, aligned_len = len;
+ loff_t pos = kiocb->ki_pos;
+ const struct cred *saved_cred;
+ int ret;
+
+- if (!file)
++ spin_lock(&object->lock);
++ file = object->file;
++ if (!file) {
++ spin_unlock(&object->lock);
+ return -ENOBUFS;
++ }
++ get_file(file);
++ spin_unlock(&object->lock);
+
+ cachefiles_begin_secure(cache, &saved_cred);
+- ret = __cachefiles_prepare_write(object, file, &pos, &len, len, true);
++ ret = __cachefiles_prepare_write(object, file, &pos, &aligned_len, len, true);
+ cachefiles_end_secure(cache, saved_cred);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
+ ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
+- if (!ret)
++ if (!ret) {
+ ret = len;
++ kiocb->ki_pos += ret;
++ }
+
++out:
++ fput(file);
+ return ret;
+ }
+
+@@ -87,12 +97,22 @@ static loff_t cachefiles_ondemand_fd_llseek(struct file *filp, loff_t pos,
+ int whence)
+ {
+ struct cachefiles_object *object = filp->private_data;
+- struct file *file = object->file;
++ struct file *file;
++ loff_t ret;
+
+- if (!file)
++ spin_lock(&object->lock);
++ file = object->file;
++ if (!file) {
++ spin_unlock(&object->lock);
+ return -ENOBUFS;
++ }
++ get_file(file);
++ spin_unlock(&object->lock);
+
+- return vfs_llseek(file, pos, whence);
++ ret = vfs_llseek(file, pos, whence);
++ fput(file);
++
++ return ret;
+ }
+
+ static long cachefiles_ondemand_fd_ioctl(struct file *filp, unsigned int ioctl,
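Both ondemand handlers above use the same snapshot idiom: take the object
spinlock, copy and pin the file with get_file(), drop the lock, and balance
with fput() when done, so a concurrent clean-up that clears object->file
cannot yank the file mid-use. The core of it, as a sketch with a demo type:

	struct demo_object {
		spinlock_t lock;
		struct file *file;
	};

	static int demo_use_backing_file(struct demo_object *object)
	{
		struct file *file;

		spin_lock(&object->lock);
		file = object->file;
		if (!file) {
			spin_unlock(&object->lock);
			return -ENOBUFS;	/* already torn down */
		}
		get_file(file);			/* pin before dropping the lock */
		spin_unlock(&object->lock);

		/* ... use file; object->file may be cleared concurrently ... */

		fput(file);			/* balance get_file() */
		return 0;
	}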
+diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
+index 742b30b61c196f..0fe8d80ce5e8d3 100644
+--- a/fs/dlm/ast.c
++++ b/fs/dlm/ast.c
+@@ -30,7 +30,7 @@ static void dlm_run_callback(uint32_t ls_id, uint32_t lkb_id, int8_t mode,
+ trace_dlm_bast(ls_id, lkb_id, mode, res_name, res_length);
+ bastfn(astparam, mode);
+ } else if (flags & DLM_CB_CAST) {
+- trace_dlm_ast(ls_id, lkb_id, sb_status, sb_flags, res_name,
++ trace_dlm_ast(ls_id, lkb_id, sb_flags, sb_status, res_name,
+ res_length);
+ lksb->sb_status = sb_status;
+ lksb->sb_flags = sb_flags;
+diff --git a/fs/dlm/recoverd.c b/fs/dlm/recoverd.c
+index 34f4f9f49a6ce5..12272a8f6d75f3 100644
+--- a/fs/dlm/recoverd.c
++++ b/fs/dlm/recoverd.c
+@@ -151,7 +151,7 @@ static int ls_recover(struct dlm_ls *ls, struct dlm_recover *rv)
+ error = dlm_recover_members(ls, rv, &neg);
+ if (error) {
+ log_rinfo(ls, "dlm_recover_members error %d", error);
+- goto fail;
++ goto fail_root_list;
+ }
+
+ dlm_recover_dir_nodeid(ls, &root_list);
+diff --git a/fs/efs/super.c b/fs/efs/super.c
+index e4421c10caebe5..c59086b7eabfe9 100644
+--- a/fs/efs/super.c
++++ b/fs/efs/super.c
+@@ -15,7 +15,6 @@
+ #include <linux/vfs.h>
+ #include <linux/blkdev.h>
+ #include <linux/fs_context.h>
+-#include <linux/fs_parser.h>
+ #include "efs.h"
+ #include <linux/efs_vh.h>
+ #include <linux/efs_fs_sb.h>
+@@ -49,15 +48,6 @@ static struct pt_types sgi_pt_types[] = {
+ {0, NULL}
+ };
+
+-enum {
+- Opt_explicit_open,
+-};
+-
+-static const struct fs_parameter_spec efs_param_spec[] = {
+- fsparam_flag ("explicit-open", Opt_explicit_open),
+- {}
+-};
+-
+ /*
+ * File system definition and registration.
+ */
+@@ -67,7 +57,6 @@ static struct file_system_type efs_fs_type = {
+ .kill_sb = efs_kill_sb,
+ .fs_flags = FS_REQUIRES_DEV,
+ .init_fs_context = efs_init_fs_context,
+- .parameters = efs_param_spec,
+ };
+ MODULE_ALIAS_FS("efs");
+
+@@ -265,7 +254,8 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc)
+ if (!sb_set_blocksize(s, EFS_BLOCKSIZE)) {
+ pr_err("device does not support %d byte blocks\n",
+ EFS_BLOCKSIZE);
+- return -EINVAL;
++ return invalf(fc, "device does not support %d byte blocks\n",
++ EFS_BLOCKSIZE);
+ }
+
+ /* read the vh (volume header) block */
+@@ -327,43 +317,22 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc)
+ return 0;
+ }
+
+-static void efs_free_fc(struct fs_context *fc)
+-{
+- kfree(fc->fs_private);
+-}
+-
+ static int efs_get_tree(struct fs_context *fc)
+ {
+ return get_tree_bdev(fc, efs_fill_super);
+ }
+
+-static int efs_parse_param(struct fs_context *fc, struct fs_parameter *param)
+-{
+- int token;
+- struct fs_parse_result result;
+-
+- token = fs_parse(fc, efs_param_spec, param, &result);
+- if (token < 0)
+- return token;
+- return 0;
+-}
+-
+ static int efs_reconfigure(struct fs_context *fc)
+ {
+ sync_filesystem(fc->root->d_sb);
++ fc->sb_flags |= SB_RDONLY;
+
+ return 0;
+ }
+
+-struct efs_context {
+- unsigned long s_mount_opts;
+-};
+-
+ static const struct fs_context_operations efs_context_opts = {
+- .parse_param = efs_parse_param,
+ .get_tree = efs_get_tree,
+ .reconfigure = efs_reconfigure,
+- .free = efs_free_fc,
+ };
+
+ /*
+@@ -371,12 +340,6 @@ static const struct fs_context_operations efs_context_opts = {
+ */
+ static int efs_init_fs_context(struct fs_context *fc)
+ {
+- struct efs_context *ctx;
+-
+- ctx = kzalloc(sizeof(struct efs_context), GFP_KERNEL);
+- if (!ctx)
+- return -ENOMEM;
+- fc->fs_private = ctx;
+ fc->ops = &efs_context_opts;
+
+ return 0;
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 61debd799cf904..fa51437e1d99d9 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -38,7 +38,7 @@ void *erofs_bread(struct erofs_buf *buf, erofs_off_t offset,
+ }
+ if (!folio || !folio_contains(folio, index)) {
+ erofs_put_metabuf(buf);
+- folio = read_mapping_folio(buf->mapping, index, NULL);
++ folio = read_mapping_folio(buf->mapping, index, buf->file);
+ if (IS_ERR(folio))
+ return folio;
+ }
+@@ -61,9 +61,11 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- if (erofs_is_fileio_mode(sbi))
+- buf->mapping = file_inode(sbi->fdev)->i_mapping;
+- else if (erofs_is_fscache_mode(sb))
++ buf->file = NULL;
++ if (erofs_is_fileio_mode(sbi)) {
++ buf->file = sbi->fdev; /* some fs like FUSE needs it */
++ buf->mapping = buf->file->f_mapping;
++ } else if (erofs_is_fscache_mode(sb))
+ buf->mapping = sbi->s_fscache->inode->i_mapping;
+ else
+ buf->mapping = sb->s_bdev->bd_mapping;
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 4efd578d7c627b..9b03c8f323a762 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -221,6 +221,7 @@ enum erofs_kmap_type {
+
+ struct erofs_buf {
+ struct address_space *mapping;
++ struct file *file;
+ struct page *page;
+ void *base;
+ enum erofs_kmap_type kmap_type;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index bed3dbe5b7cb8b..2dd7d819572f40 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -631,7 +631,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ errorfc(fc, "unsupported blksize for fscache mode");
+ return -EINVAL;
+ }
+- if (!sb_set_blocksize(sb, 1 << sbi->blkszbits)) {
++
++ if (erofs_is_fileio_mode(sbi)) {
++ sb->s_blocksize = 1 << sbi->blkszbits;
++ sb->s_blocksize_bits = sbi->blkszbits;
++ } else if (!sb_set_blocksize(sb, 1 << sbi->blkszbits)) {
+ errorfc(fc, "failed to set erofs blksize");
+ return -EINVAL;
+ }
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index a076cca1f54734..4535f2f0a0147e 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -219,7 +219,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
+ unsigned int amortizedshift;
+ erofs_off_t pos;
+
+- if (lcn >= totalidx)
++ if (lcn >= totalidx || vi->z_logical_clusterbits > 14)
+ return -EINVAL;
+
+ m->lcn = lcn;
+@@ -390,7 +390,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ u64 lcn = m->lcn, headlcn = map->m_la >> lclusterbits;
+ int err;
+
+- do {
++ while (1) {
+ /* handle the last EOF pcluster (no next HEAD lcluster) */
+ if ((lcn << lclusterbits) >= inode->i_size) {
+ map->m_llen = inode->i_size - map->m_la;
+@@ -402,14 +402,16 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ return err;
+
+ if (m->type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {
+- DBG_BUGON(!m->delta[1] &&
+- m->clusterofs != 1 << lclusterbits);
++ /* work around invalid d1 generated by pre-1.0 mkfs */
++ if (unlikely(!m->delta[1])) {
++ m->delta[1] = 1;
++ DBG_BUGON(1);
++ }
+ } else if (m->type == Z_EROFS_LCLUSTER_TYPE_PLAIN ||
+ m->type == Z_EROFS_LCLUSTER_TYPE_HEAD1 ||
+ m->type == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
+- /* go on until the next HEAD lcluster */
+ if (lcn != headlcn)
+- break;
++ break; /* ends at the next HEAD lcluster */
+ m->delta[1] = 1;
+ } else {
+ erofs_err(inode->i_sb, "unknown type %u @ lcn %llu of nid %llu",
+@@ -418,8 +420,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ return -EOPNOTSUPP;
+ }
+ lcn += m->delta[1];
+- } while (m->delta[1]);
+-
++ }
+ map->m_llen = (lcn << lclusterbits) + m->clusterofs - map->m_la;
+ return 0;
+ }
+diff --git a/fs/exec.c b/fs/exec.c
+index 6c53920795c2e7..9c349a74f38589 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -883,7 +883,8 @@ EXPORT_SYMBOL(transfer_args_to_stack);
+ */
+ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ {
+- struct file *file;
++ int err;
++ struct file *file __free(fput) = NULL;
+ struct open_flags open_exec_flags = {
+ .open_flag = O_LARGEFILE | O_RDONLY | __FMODE_EXEC,
+ .acc_mode = MAY_EXEC,
+@@ -908,12 +909,14 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ * an invariant that all non-regular files error out before we get here.
+ */
+ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
+- path_noexec(&file->f_path)) {
+- fput(file);
++ path_noexec(&file->f_path))
+ return ERR_PTR(-EACCES);
+- }
+
+- return file;
++ err = deny_write_access(file);
++ if (err)
++ return ERR_PTR(err);
++
++ return no_free_ptr(file);
+ }
+
+ /**
+@@ -923,7 +926,8 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ *
+ * Returns ERR_PTR on failure or allocated struct file on success.
+ *
+- * As this is a wrapper for the internal do_open_execat(). Also see
++ * As this is a wrapper for the internal do_open_execat(), callers
++ * must call allow_write_access() before fput() on release. Also see
+ * do_close_execat().
+ */
+ struct file *open_exec(const char *name)
+@@ -1475,8 +1479,10 @@ static int prepare_bprm_creds(struct linux_binprm *bprm)
+ /* Matches do_open_execat() */
+ static void do_close_execat(struct file *file)
+ {
+- if (file)
+- fput(file);
++ if (!file)
++ return;
++ allow_write_access(file);
++ fput(file);
+ }
+
+ static void free_bprm(struct linux_binprm *bprm)
+@@ -1801,6 +1807,7 @@ static int exec_binprm(struct linux_binprm *bprm)
+ bprm->file = bprm->interpreter;
+ bprm->interpreter = NULL;
+
++ allow_write_access(exec);
+ if (unlikely(bprm->have_execfd)) {
+ if (bprm->executable) {
+ fput(exec);
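
The do_open_execat() rework above leans on the scope-based cleanup helpers
from <linux/cleanup.h>: __free(fput) arranges for fput() to run
automatically when the variable goes out of scope, and no_free_ptr()
disarms that guard on the success path so ownership passes to the caller.
A minimal userspace sketch of the same idiom, built directly on the
compiler cleanup attribute the kernel macros wrap (the FILE-based names
here are illustrative, not the kernel API):

#include <stdio.h>

/* Simplified stand-ins for the kernel's DEFINE_FREE()/__free() machinery. */
static void cleanup_fclose(FILE **fpp)
{
	if (*fpp)
		fclose(*fpp);
}
#define __free_fclose __attribute__((cleanup(cleanup_fclose)))

/* Steal the pointer so the cleanup handler sees NULL and does nothing. */
#define no_free_ptr(p) ({ typeof(p) __t = (p); (p) = NULL; __t; })

static FILE *open_checked(const char *path)
{
	FILE *fp __free_fclose = fopen(path, "r");

	if (!fp)
		return NULL;
	if (fgetc(fp) == EOF)	/* any early return still runs fclose() */
		return NULL;
	rewind(fp);
	return no_free_ptr(fp);	/* success: hand ownership to the caller */
}

int main(void)
{
	FILE *fp = open_checked("/etc/hostname");

	if (fp) {
		puts("opened OK");
		fclose(fp);
	}
	return 0;
}
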
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index a25d7eb789f4cb..fb38769c3e39d1 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -584,6 +584,16 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ if (ret < 0)
+ goto unlock;
+
++ if (iocb->ki_flags & IOCB_DIRECT) {
++ unsigned long align = pos | iov_iter_alignment(iter);
++
++ if (!IS_ALIGNED(align, i_blocksize(inode)) &&
++ !IS_ALIGNED(align, bdev_logical_block_size(inode->i_sb->s_bdev))) {
++ ret = -EINVAL;
++ goto unlock;
++ }
++ }
++
+ if (pos > valid_size) {
+ ret = exfat_extend_valid_size(file, pos);
+ if (ret < 0 && ret != -ENOSPC) {
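
The exfat direct-I/O check above folds the file position and the
iterator's alignment together with a bitwise OR before testing alignment
once: a bit set in either operand survives the OR, so the combined value
is aligned exactly when both operands are. A small standalone
demonstration (the IS_ALIGNED() below mirrors the kernel's definition for
power-of-two alignments):

#include <assert.h>
#include <stdio.h>

#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned long pos = 4096, len = 1536, blocksize = 512;
	/* One test on the ORed value checks both operands at once. */
	unsigned long align = pos | len;

	assert(IS_ALIGNED(align, blocksize) ==
	       (IS_ALIGNED(pos, blocksize) && IS_ALIGNED(len, blocksize)));
	printf("aligned: %s\n", IS_ALIGNED(align, blocksize) ? "yes" : "no");
	return 0;
}
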
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 2c4c442293529b..337197ece59955 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -345,6 +345,7 @@ static int exfat_find_empty_entry(struct inode *inode,
+ if (ei->start_clu == EXFAT_EOF_CLUSTER) {
+ ei->start_clu = clu.dir;
+ p_dir->dir = clu.dir;
++ hint_femp.eidx = 0;
+ }
+
+ /* append to the FAT chain */
+@@ -637,14 +638,26 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ info->size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->size = le64_to_cpu(ep2->dentry.stream.size);
++
++ info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu);
++ if (!is_valid_cluster(sbi, info->start_clu) && info->size) {
++ exfat_warn(sb, "start_clu is invalid cluster(0x%x)",
++ info->start_clu);
++ info->size = 0;
++ info->valid_size = 0;
++ }
++
++ if (info->valid_size > info->size) {
++ exfat_warn(sb, "valid_size(%lld) is greater than size(%lld)",
++ info->valid_size, info->size);
++ info->valid_size = info->size;
++ }
++
+ if (info->size == 0) {
+ info->flags = ALLOC_NO_FAT_CHAIN;
+ info->start_clu = EXFAT_EOF_CLUSTER;
+- } else {
++ } else
+ info->flags = ep2->dentry.stream.flags;
+- info->start_clu =
+- le32_to_cpu(ep2->dentry.stream.start_clu);
+- }
+
+ exfat_get_entry_time(sbi, &info->crtime,
+ ep->dentry.file.create_tz,
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 591fb3f710be72..8042ad87380897 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -550,7 +550,8 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group,
+ trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked);
+ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO |
+ (ignore_locked ? REQ_RAHEAD : 0),
+- ext4_end_bitmap_read);
++ ext4_end_bitmap_read,
++ ext4_simulate_fail(sb, EXT4_SIM_BBITMAP_EIO));
+ return bh;
+ verify:
+ err = ext4_validate_block_bitmap(sb, desc, block_group, bh);
+@@ -577,7 +578,6 @@ int ext4_wait_block_bitmap(struct super_block *sb, ext4_group_t block_group,
+ if (!desc)
+ return -EFSCORRUPTED;
+ wait_on_buffer(bh);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_BBITMAP_EIO);
+ if (!buffer_uptodate(bh)) {
+ ext4_error_err(sb, EIO, "Cannot read block bitmap - "
+ "block_group = %u, block_bitmap = %llu",
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 44b0d418143c2e..bbffb76d9a9049 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1865,14 +1865,6 @@ static inline bool ext4_simulate_fail(struct super_block *sb,
+ return false;
+ }
+
+-static inline void ext4_simulate_fail_bh(struct super_block *sb,
+- struct buffer_head *bh,
+- unsigned long code)
+-{
+- if (!IS_ERR(bh) && ext4_simulate_fail(sb, code))
+- clear_buffer_uptodate(bh);
+-}
+-
+ /*
+ * Error number codes for s_{first,last}_error_errno
+ *
+@@ -3100,9 +3092,9 @@ extern struct buffer_head *ext4_sb_bread(struct super_block *sb,
+ extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb,
+ sector_t block);
+ extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io);
++ bh_end_io_t *end_io, bool simu_fail);
+ extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io);
++ bh_end_io_t *end_io, bool simu_fail);
+ extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
+ extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block);
+ extern int ext4_seq_options_show(struct seq_file *seq, void *offset);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 34e25eee65219c..88f98dc4402753 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -568,7 +568,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
+
+ if (!bh_uptodate_or_lock(bh)) {
+ trace_ext4_ext_load_extent(inode, pblk, _RET_IP_);
+- err = ext4_read_bh(bh, 0, NULL);
++ err = ext4_read_bh(bh, 0, NULL, false);
+ if (err < 0)
+ goto errout;
+ }
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index df853c4d3a8c91..383c6edea6dd31 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -185,6 +185,56 @@ static inline ext4_fsblk_t ext4_fsmap_next_pblk(struct ext4_fsmap *fmr)
+ return fmr->fmr_physical + fmr->fmr_length;
+ }
+
++static int ext4_getfsmap_meta_helper(struct super_block *sb,
++ ext4_group_t agno, ext4_grpblk_t start,
++ ext4_grpblk_t len, void *priv)
++{
++ struct ext4_getfsmap_info *info = priv;
++ struct ext4_fsmap *p;
++ struct ext4_fsmap *tmp;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ ext4_fsblk_t fsb, fs_start, fs_end;
++ int error;
++
++ fs_start = fsb = (EXT4_C2B(sbi, start) +
++ ext4_group_first_block_no(sb, agno));
++ fs_end = fs_start + EXT4_C2B(sbi, len);
++
++ /* Return relevant extents from the meta_list */
++ list_for_each_entry_safe(p, tmp, &info->gfi_meta_list, fmr_list) {
++ if (p->fmr_physical < info->gfi_next_fsblk) {
++ list_del(&p->fmr_list);
++ kfree(p);
++ continue;
++ }
++ if (p->fmr_physical <= fs_start ||
++ p->fmr_physical + p->fmr_length <= fs_end) {
++ /* Emit the retained free extent record if present */
++ if (info->gfi_lastfree.fmr_owner) {
++ error = ext4_getfsmap_helper(sb, info,
++ &info->gfi_lastfree);
++ if (error)
++ return error;
++ info->gfi_lastfree.fmr_owner = 0;
++ }
++ error = ext4_getfsmap_helper(sb, info, p);
++ if (error)
++ return error;
++ fsb = p->fmr_physical + p->fmr_length;
++ if (info->gfi_next_fsblk < fsb)
++ info->gfi_next_fsblk = fsb;
++ list_del(&p->fmr_list);
++ kfree(p);
++ continue;
++ }
++ }
++ if (info->gfi_next_fsblk < fsb)
++ info->gfi_next_fsblk = fsb;
++
++ return 0;
++}
++
++
+ /* Transform a blockgroup's free record into a fsmap */
+ static int ext4_getfsmap_datadev_helper(struct super_block *sb,
+ ext4_group_t agno, ext4_grpblk_t start,
+@@ -539,6 +589,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ error = ext4_mballoc_query_range(sb, info->gfi_agno,
+ EXT4_B2C(sbi, info->gfi_low.fmr_physical),
+ EXT4_B2C(sbi, info->gfi_high.fmr_physical),
++ ext4_getfsmap_meta_helper,
+ ext4_getfsmap_datadev_helper, info);
+ if (error)
+ goto err;
+@@ -560,7 +611,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+
+ /* Report any gaps at the end of the bg */
+ info->gfi_last = true;
+- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster, 0, info);
++ error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
++ 0, info);
+ if (error)
+ goto err;
+
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 7f1a5f90dbbdff..21d228073d7954 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -193,8 +193,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ * submit the buffer_head for reading
+ */
+ trace_ext4_load_inode_bitmap(sb, block_group);
+- ext4_read_bh(bh, REQ_META | REQ_PRIO, ext4_end_bitmap_read);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_IBITMAP_EIO);
++ ext4_read_bh(bh, REQ_META | REQ_PRIO,
++ ext4_end_bitmap_read,
++ ext4_simulate_fail(sb, EXT4_SIM_IBITMAP_EIO));
+ if (!buffer_uptodate(bh)) {
+ put_bh(bh);
+ ext4_error_err(sb, EIO, "Cannot read inode bitmap - "
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 7404f0935c9032..7de327fa7b1c51 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -170,7 +170,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ }
+
+ if (!bh_uptodate_or_lock(bh)) {
+- if (ext4_read_bh(bh, 0, NULL) < 0) {
++ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
+ put_bh(bh);
+ goto failure;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 54bdd4884fe67d..99d09cd9c6a37e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4497,10 +4497,10 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino,
+ * Read the block from disk.
+ */
+ trace_ext4_load_inode(sb, ino);
+- ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL);
++ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL,
++ ext4_simulate_fail(sb, EXT4_SIM_INODE_EIO));
+ blk_finish_plug(&plug);
+ wait_on_buffer(bh);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_INODE_EIO);
+ if (!buffer_uptodate(bh)) {
+ if (ret_block)
+ *ret_block = block;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index d73e38323879ce..92f49d7eb3c001 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6999,13 +6999,14 @@ int
+ ext4_mballoc_query_range(
+ struct super_block *sb,
+ ext4_group_t group,
+- ext4_grpblk_t start,
++ ext4_grpblk_t first,
+ ext4_grpblk_t end,
++ ext4_mballoc_query_range_fn meta_formatter,
+ ext4_mballoc_query_range_fn formatter,
+ void *priv)
+ {
+ void *bitmap;
+- ext4_grpblk_t next;
++ ext4_grpblk_t start, next;
+ struct ext4_buddy e4b;
+ int error;
+
+@@ -7016,10 +7017,19 @@ ext4_mballoc_query_range(
+
+ ext4_lock_group(sb, group);
+
+- start = max(e4b.bd_info->bb_first_free, start);
++ start = max(e4b.bd_info->bb_first_free, first);
+ if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
+ end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-
++ if (meta_formatter && start != first) {
++ if (start > end)
++ start = end;
++ ext4_unlock_group(sb, group);
++ error = meta_formatter(sb, group, first, start - first,
++ priv);
++ if (error)
++ goto out_unload;
++ ext4_lock_group(sb, group);
++ }
+ while (start <= end) {
+ start = mb_find_next_zero_bit(bitmap, end + 1, start);
+ if (start > end)
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index d8553f1498d3cb..f8280de3e8820a 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -259,6 +259,7 @@ ext4_mballoc_query_range(
+ ext4_group_t agno,
+ ext4_grpblk_t start,
+ ext4_grpblk_t end,
++ ext4_mballoc_query_range_fn meta_formatter,
+ ext4_mballoc_query_range_fn formatter,
+ void *priv);
+
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index bd946d0c71b700..d64c04ed061ae9 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -94,7 +94,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh,
+ }
+
+ lock_buffer(*bh);
+- ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL);
++ ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL, false);
+ if (ret)
+ goto warn_exit;
+
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index b64661ea6e0ed7..898443e98efc9e 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -213,7 +213,7 @@ static int mext_page_mkuptodate(struct folio *folio, size_t from, size_t to)
+ unlock_buffer(bh);
+ continue;
+ }
+- ext4_read_bh_nowait(bh, 0, NULL);
++ ext4_read_bh_nowait(bh, 0, NULL, false);
+ nr++;
+ } while (block++, (bh = bh->b_this_page) != head);
+
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index a2704f06436106..72f77f78ae8df3 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1300,7 +1300,7 @@ static struct buffer_head *ext4_get_bitmap(struct super_block *sb, __u64 block)
+ if (unlikely(!bh))
+ return NULL;
+ if (!bh_uptodate_or_lock(bh)) {
+- if (ext4_read_bh(bh, 0, NULL) < 0) {
++ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
+ brelse(bh);
+ return NULL;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 16a4ce704460e1..940ac1a49b729e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -161,8 +161,14 @@ MODULE_ALIAS("ext3");
+
+
+ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io)
++ bh_end_io_t *end_io, bool simu_fail)
+ {
++ if (simu_fail) {
++ clear_buffer_uptodate(bh);
++ unlock_buffer(bh);
++ return;
++ }
++
+ /*
+ * buffer's verified bit is no longer valid after reading from
+ * disk again due to write out error, clear it to make sure we
+@@ -176,7 +182,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+ }
+
+ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io)
++ bh_end_io_t *end_io, bool simu_fail)
+ {
+ BUG_ON(!buffer_locked(bh));
+
+@@ -184,10 +190,11 @@ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+ unlock_buffer(bh);
+ return;
+ }
+- __ext4_read_bh(bh, op_flags, end_io);
++ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
+ }
+
+-int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io)
++int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
++ bh_end_io_t *end_io, bool simu_fail)
+ {
+ BUG_ON(!buffer_locked(bh));
+
+@@ -196,7 +203,7 @@ int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io
+ return 0;
+ }
+
+- __ext4_read_bh(bh, op_flags, end_io);
++ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
+
+ wait_on_buffer(bh);
+ if (buffer_uptodate(bh))
+@@ -208,10 +215,10 @@ int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
+ {
+ lock_buffer(bh);
+ if (!wait) {
+- ext4_read_bh_nowait(bh, op_flags, NULL);
++ ext4_read_bh_nowait(bh, op_flags, NULL, false);
+ return 0;
+ }
+- return ext4_read_bh(bh, op_flags, NULL);
++ return ext4_read_bh(bh, op_flags, NULL, false);
+ }
+
+ /*
+@@ -266,7 +273,7 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
+
+ if (likely(bh)) {
+ if (trylock_buffer(bh))
+- ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL);
++ ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL, false);
+ brelse(bh);
+ }
+ }
+@@ -346,9 +353,9 @@ __u32 ext4_free_group_clusters(struct super_block *sb,
+ __u32 ext4_free_inodes_count(struct super_block *sb,
+ struct ext4_group_desc *bg)
+ {
+- return le16_to_cpu(bg->bg_free_inodes_count_lo) |
++ return le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_lo)) |
+ (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT ?
+- (__u32)le16_to_cpu(bg->bg_free_inodes_count_hi) << 16 : 0);
++ (__u32)le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_hi)) << 16 : 0);
+ }
+
+ __u32 ext4_used_dirs_count(struct super_block *sb,
+@@ -402,9 +409,9 @@ void ext4_free_group_clusters_set(struct super_block *sb,
+ void ext4_free_inodes_set(struct super_block *sb,
+ struct ext4_group_desc *bg, __u32 count)
+ {
+- bg->bg_free_inodes_count_lo = cpu_to_le16((__u16)count);
++ WRITE_ONCE(bg->bg_free_inodes_count_lo, cpu_to_le16((__u16)count));
+ if (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT)
+- bg->bg_free_inodes_count_hi = cpu_to_le16(count >> 16);
++ WRITE_ONCE(bg->bg_free_inodes_count_hi, cpu_to_le16(count >> 16));
+ }
+
+ void ext4_used_dirs_set(struct super_block *sb,
+@@ -6518,9 +6525,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ goto restore_opts;
+ }
+
+- if (test_opt2(sb, ABORT))
+- ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
+-
+ sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
+ (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
+
+@@ -6689,6 +6693,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ ext4_stop_mmpd(sbi);
+
++ /*
++ * Handle aborting the filesystem as the last thing during remount to
++ * avoid obscure errors during remount when some option changes fail to
++ * apply because the filesystem has been shut down.
++ */
++ if (test_opt2(sb, ABORT))
++ ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
++
+ return 0;
+
+ restore_opts:
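
The READ_ONCE()/WRITE_ONCE() annotations added to the ext4 free-inode
accessors above keep the compiler from tearing, fusing, or refetching the
two 16-bit halves while another CPU updates them. A simplified,
scalar-only sketch of how such annotations behave (the kernel's real
definitions in <asm-generic/rwonce.h> handle more cases):

#include <stdio.h>

#define READ_ONCE(x)	 (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

static unsigned short free_lo, free_hi;

static unsigned int read_free_inodes(void)
{
	/* Each half is loaded exactly once per call. */
	return READ_ONCE(free_lo) |
	       ((unsigned int)READ_ONCE(free_hi) << 16);
}

static void set_free_inodes(unsigned int count)
{
	WRITE_ONCE(free_lo, (unsigned short)count);
	WRITE_ONCE(free_hi, (unsigned short)(count >> 16));
}

int main(void)
{
	set_free_inodes(0x12345);
	printf("0x%x\n", read_free_inodes());	/* prints 0x12345 */
	return 0;
}
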
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 7f76460b721f2c..efda9a0229816b 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -32,7 +32,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io,
+ f2fs_build_fault_attr(sbi, 0, 0);
+ if (!end_io)
+ f2fs_flush_merged_writes(sbi);
+- f2fs_handle_critical_error(sbi, reason, end_io);
++ f2fs_handle_critical_error(sbi, reason);
+ }
+
+ /*
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 94f7b084f60164..9efe4c00d75bb3 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1676,7 +1676,8 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
+ /* reserved delalloc block should be mapped for fiemap. */
+ if (blkaddr == NEW_ADDR)
+ map->m_flags |= F2FS_MAP_DELALLOC;
+- if (flag != F2FS_GET_BLOCK_DIO || !is_hole)
++ /* In the DIO read-and-hole case, the blocks should not be mapped. */
++ if (!(flag == F2FS_GET_BLOCK_DIO && is_hole && !map->m_may_create))
+ map->m_flags |= F2FS_MAP_MAPPED;
+
+ map->m_pblk = blkaddr;
+@@ -1901,25 +1902,6 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return (err < 0 ? err : 0);
+ }
+
+-static loff_t max_inode_blocks(struct inode *inode)
+-{
+- loff_t result = ADDRS_PER_INODE(inode);
+- loff_t leaf_count = ADDRS_PER_BLOCK(inode);
+-
+- /* two direct node blocks */
+- result += (leaf_count * 2);
+-
+- /* two indirect node blocks */
+- leaf_count *= NIDS_PER_BLOCK;
+- result += (leaf_count * 2);
+-
+- /* one double indirect node block */
+- leaf_count *= NIDS_PER_BLOCK;
+- result += leaf_count;
+-
+- return result;
+-}
+-
+ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+ {
+@@ -1992,8 +1974,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) {
+ start_blk = next_pgofs;
+
+- if (blks_to_bytes(inode, start_blk) < blks_to_bytes(inode,
+- max_inode_blocks(inode)))
++ if (blks_to_bytes(inode, start_blk) < maxbytes)
+ goto prep_next;
+
+ flags |= FIEMAP_EXTENT_LAST;
+@@ -2385,10 +2366,10 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ .nr_cpages = 0,
+ };
+ pgoff_t nc_cluster_idx = NULL_CLUSTER;
++ pgoff_t index;
+ #endif
+ unsigned nr_pages = rac ? readahead_count(rac) : 1;
+ unsigned max_nr_pages = nr_pages;
+- pgoff_t index;
+ int ret = 0;
+
+ map.m_pblk = 0;
+@@ -2406,9 +2387,9 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ prefetchw(&folio->flags);
+ }
+
++#ifdef CONFIG_F2FS_FS_COMPRESSION
+ index = folio_index(folio);
+
+-#ifdef CONFIG_F2FS_FS_COMPRESSION
+ if (!f2fs_compressed_file(inode))
+ goto read_single_page;
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 33f5449dc22d50..93a5e1c24e566e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3632,8 +3632,7 @@ int f2fs_quota_sync(struct super_block *sb, int type);
+ loff_t max_file_blocks(struct inode *inode);
+ void f2fs_quota_off_umount(struct super_block *sb);
+ void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
+-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+- bool irq_context);
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason);
+ void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
+ void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error);
+ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 321d8ffbab6e4b..71ddecaf771f81 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -863,7 +863,11 @@ static bool f2fs_force_buffered_io(struct inode *inode, int rw)
+ return true;
+ if (f2fs_compressed_file(inode))
+ return true;
+- if (f2fs_has_inline_data(inode))
++ /*
++ * Only force direct reads to use buffered IO; direct writes expect
++ * inline data conversion before committing IO.
++ */
++ if (f2fs_has_inline_data(inode) && rw == READ)
+ return true;
+
+ /* disallow direct IO if any of devices has unaligned blksize */
+@@ -2343,9 +2347,12 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ if (readonly)
+ goto out;
+
+- /* grab sb->s_umount to avoid racing w/ remount() */
++ /*
++ * grab sb->s_umount to avoid racing w/ remount() and other shutdown
++ * paths.
++ */
+ if (need_lock)
+- down_read(&sbi->sb->s_umount);
++ down_write(&sbi->sb->s_umount);
+
+ f2fs_stop_gc_thread(sbi);
+ f2fs_stop_discard_thread(sbi);
+@@ -2354,7 +2361,7 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ clear_opt(sbi, DISCARD);
+
+ if (need_lock)
+- up_read(&sbi->sb->s_umount);
++ up_write(&sbi->sb->s_umount);
+
+ f2fs_update_time(sbi, REQ_TIME);
+ out:
+@@ -3792,7 +3799,7 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count,
+ to_reserved = cluster_size - compr_blocks - reserved;
+
+ /* for the case all blocks in cluster were reserved */
+- if (to_reserved == 1) {
++ if (reserved && to_reserved == 1) {
+ dn->ofs_in_node += cluster_size;
+ goto next;
+ }
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9322a7200e310d..e0469316c7cd4e 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -257,6 +257,8 @@ static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
+
+ switch (sbi->gc_mode) {
+ case GC_IDLE_CB:
++ case GC_URGENT_LOW:
++ case GC_URGENT_MID:
+ gc_mode = GC_CB;
+ break;
+ case GC_IDLE_GREEDY:
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 59b13ff243fa80..af36c6d6542b8c 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -905,6 +905,16 @@ static int truncate_node(struct dnode_of_data *dn)
+ if (err)
+ return err;
+
++ if (ni.blk_addr != NEW_ADDR &&
++ !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC_ENHANCE)) {
++ f2fs_err_ratelimited(sbi,
++ "nat entry is corrupted, run fsck to fix it, ino:%u, "
++ "nid:%u, blkaddr:%u", ni.ino, ni.nid, ni.blk_addr);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
++ return -EFSCORRUPTED;
++ }
++
+ /* Deallocate node address */
+ f2fs_invalidate_blocks(sbi, ni.blk_addr);
+ dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 1766254279d24c..edf205093f4358 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2926,7 +2926,8 @@ static int change_curseg(struct f2fs_sb_info *sbi, int type)
+ struct f2fs_summary_block *sum_node;
+ struct page *sum_page;
+
+- write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
++ if (curseg->inited)
++ write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
+
+ __set_test_and_inuse(sbi, new_segno);
+
+@@ -3977,8 +3978,8 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ }
+ }
+
+- f2fs_bug_on(sbi, !IS_DATASEG(type));
+ curseg = CURSEG_I(sbi, type);
++ f2fs_bug_on(sbi, !IS_DATASEG(curseg->seg_type));
+
+ mutex_lock(&curseg->curseg_mutex);
+ down_write(&sit_i->sentry_lock);
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 71adb4a43bec53..51b2b8c5c749c5 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -559,18 +559,21 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
+ }
+
+ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+- unsigned int node_blocks, unsigned int dent_blocks)
++ unsigned int node_blocks, unsigned int data_blocks,
++ unsigned int dent_blocks)
+ {
+
+- unsigned segno, left_blocks;
++ unsigned int segno, left_blocks, blocks;
+ int i;
+
+- /* check current node sections in the worst case. */
+- for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) {
++ /* check current data/node sections in the worst case. */
++ for (i = CURSEG_HOT_DATA; i < NR_PERSISTENT_LOG; i++) {
+ segno = CURSEG_I(sbi, i)->segno;
+ left_blocks = CAP_BLKS_PER_SEC(sbi) -
+ get_ckpt_valid_blocks(sbi, segno, true);
+- if (node_blocks > left_blocks)
++
++ blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks;
++ if (blocks > left_blocks)
+ return false;
+ }
+
+@@ -584,8 +587,9 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+ }
+
+ /*
+- * calculate needed sections for dirty node/dentry
+- * and call has_curseg_enough_space
++ * calculate needed sections for dirty node/dentry and call
++ * has_curseg_enough_space, please note that, it needs to account
++ * dirty data as well in lfs mode when checkpoint is disabled.
+ */
+ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ unsigned int *lower_p, unsigned int *upper_p, bool *curseg_p)
+@@ -594,19 +598,30 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ get_pages(sbi, F2FS_DIRTY_DENTS) +
+ get_pages(sbi, F2FS_DIRTY_IMETA);
+ unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++ unsigned int total_data_blocks = 0;
+ unsigned int node_secs = total_node_blocks / CAP_BLKS_PER_SEC(sbi);
+ unsigned int dent_secs = total_dent_blocks / CAP_BLKS_PER_SEC(sbi);
++ unsigned int data_secs = 0;
+ unsigned int node_blocks = total_node_blocks % CAP_BLKS_PER_SEC(sbi);
+ unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
++ unsigned int data_blocks = 0;
++
++ if (f2fs_lfs_mode(sbi) &&
++ unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
++ data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
++ data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
++ }
+
+ if (lower_p)
+- *lower_p = node_secs + dent_secs;
++ *lower_p = node_secs + dent_secs + data_secs;
+ if (upper_p)
+ *upper_p = node_secs + dent_secs +
+- (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++ (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
++ (data_blocks ? 1 : 0);
+ if (curseg_p)
+ *curseg_p = has_curseg_enough_space(sbi,
+- node_blocks, dent_blocks);
++ node_blocks, data_blocks, dent_blocks);
+ }
+
+ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
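
__get_secs_required() above splits each dirty-block total into whole
sections plus a possible partial one: the lower bound counts only full
sections, and the upper bound adds one section for each nonzero
remainder. A toy calculation with made-up numbers, assuming
CAP_BLKS_PER_SEC() is 512:

#include <stdio.h>

int main(void)
{
	const unsigned int cap = 512;		/* blocks per section */
	unsigned int node = 1500, dent = 300, data = 700;
	unsigned int lower = node / cap + dent / cap + data / cap;
	unsigned int upper = lower + !!(node % cap) + !!(dent % cap) +
			     !!(data % cap);

	/* node: 2 full sections + 476 spare, dent: 300 spare,
	 * data: 1 full section + 188 spare */
	printf("lower=%u upper=%u\n", lower, upper);	/* lower=3 upper=6 */
	return 0;
}
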
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 87ab5696bd482c..983fdd98fc3755 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -150,6 +150,8 @@ enum {
+ Opt_mode,
+ Opt_fault_injection,
+ Opt_fault_type,
++ Opt_lazytime,
++ Opt_nolazytime,
+ Opt_quota,
+ Opt_noquota,
+ Opt_usrquota,
+@@ -226,6 +228,8 @@ static match_table_t f2fs_tokens = {
+ {Opt_mode, "mode=%s"},
+ {Opt_fault_injection, "fault_injection=%u"},
+ {Opt_fault_type, "fault_type=%u"},
++ {Opt_lazytime, "lazytime"},
++ {Opt_nolazytime, "nolazytime"},
+ {Opt_quota, "quota"},
+ {Opt_noquota, "noquota"},
+ {Opt_usrquota, "usrquota"},
+@@ -918,6 +922,12 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ f2fs_info(sbi, "fault_type options not supported");
+ break;
+ #endif
++ case Opt_lazytime:
++ sb->s_flags |= SB_LAZYTIME;
++ break;
++ case Opt_nolazytime:
++ sb->s_flags &= ~SB_LAZYTIME;
++ break;
+ #ifdef CONFIG_QUOTA
+ case Opt_quota:
+ case Opt_usrquota:
+@@ -3322,7 +3332,7 @@ loff_t max_file_blocks(struct inode *inode)
+ * fit within U32_MAX + 1 data units.
+ */
+
+- result = min(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096));
++ result = umin(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096));
+
+ return result;
+ }
+@@ -4155,8 +4165,7 @@ static bool system_going_down(void)
+ || system_state == SYSTEM_RESTART;
+ }
+
+-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+- bool irq_context)
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason)
+ {
+ struct super_block *sb = sbi->sb;
+ bool shutdown = reason == STOP_CP_REASON_SHUTDOWN;
+@@ -4168,10 +4177,12 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+ if (!f2fs_hw_is_readonly(sbi)) {
+ save_stop_reason(sbi, reason);
+
+- if (irq_context && !shutdown)
+- schedule_work(&sbi->s_error_work);
+- else
+- f2fs_record_stop_reason(sbi);
++ /*
++ * Always create an asynchronous task to record the stop reason,
++ * in order to avoid a potential deadlock when calling
++ * f2fs_record_stop_reason() synchronously.
++ */
++ schedule_work(&sbi->s_error_work);
+ }
+
+ /*
+@@ -4991,9 +5002,6 @@ static int __init init_f2fs_fs(void)
+ err = f2fs_init_shrinker();
+ if (err)
+ goto free_sysfs;
+- err = register_filesystem(&f2fs_fs_type);
+- if (err)
+- goto free_shrinker;
+ f2fs_create_root_stats();
+ err = f2fs_init_post_read_processing();
+ if (err)
+@@ -5016,7 +5024,12 @@ static int __init init_f2fs_fs(void)
+ err = f2fs_create_casefold_cache();
+ if (err)
+ goto free_compress_cache;
++ err = register_filesystem(&f2fs_fs_type);
++ if (err)
++ goto free_casefold_cache;
+ return 0;
++free_casefold_cache:
++ f2fs_destroy_casefold_cache();
+ free_compress_cache:
+ f2fs_destroy_compress_cache();
+ free_compress_mempool:
+@@ -5031,8 +5044,6 @@ static int __init init_f2fs_fs(void)
+ f2fs_destroy_post_read_processing();
+ free_root_stats:
+ f2fs_destroy_root_stats();
+- unregister_filesystem(&f2fs_fs_type);
+-free_shrinker:
+ f2fs_exit_shrinker();
+ free_sysfs:
+ f2fs_exit_sysfs();
+@@ -5056,6 +5067,7 @@ static int __init init_f2fs_fs(void)
+
+ static void __exit exit_f2fs_fs(void)
+ {
++ unregister_filesystem(&f2fs_fs_type);
+ f2fs_destroy_casefold_cache();
+ f2fs_destroy_compress_cache();
+ f2fs_destroy_compress_mempool();
+@@ -5064,7 +5076,6 @@ static void __exit exit_f2fs_fs(void)
+ f2fs_destroy_iostat_processing();
+ f2fs_destroy_post_read_processing();
+ f2fs_destroy_root_stats();
+- unregister_filesystem(&f2fs_fs_type);
+ f2fs_exit_shrinker();
+ f2fs_exit_sysfs();
+ f2fs_destroy_garbage_collection_cache();
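
Moving register_filesystem() to the very end of init_f2fs_fs() (and
unregister_filesystem() to the front of exit_f2fs_fs()) follows the usual
module rule: become externally visible only after every internal resource
exists, and vanish before any of it is torn down. A generic sketch of the
pattern, with hypothetical demo_* helpers standing in for the real
initialisation steps:

#include <linux/fs.h>
#include <linux/module.h>

static struct file_system_type demo_fs_type = {
	.owner = THIS_MODULE,
	.name  = "demofs",
};

static int demo_create_caches(void) { return 0; }	/* stand-in */
static void demo_destroy_caches(void) { }		/* stand-in */

static int __init demo_init(void)
{
	int err = demo_create_caches();

	if (err)
		return err;
	/* Register last: nothing can mount before all state exists. */
	err = register_filesystem(&demo_fs_type);
	if (err)
		demo_destroy_caches();
	return err;
}

static void __exit demo_exit(void)
{
	/* Unregister first: no new mounts while state is torn down. */
	unregister_filesystem(&demo_fs_type);
	demo_destroy_caches();
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
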
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 22dd9dcce7ecc8..3d89de31066ae0 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -397,6 +397,9 @@ static long f_dupfd_query(int fd, struct file *filp)
+ {
+ CLASS(fd_raw, f)(fd);
+
++ if (fd_empty(f))
++ return -EBADF;
++
+ /*
+ * We can do the 'fdput()' immediately, as the only thing that
+ * matters is the pointer value which isn't changed by the fdput.
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index dafdf766b1d535..e20d91d0ae558c 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -645,7 +645,7 @@ void fuse_read_args_fill(struct fuse_io_args *ia, struct file *file, loff_t pos,
+ args->out_args[0].size = count;
+ }
+
+-static void fuse_release_user_pages(struct fuse_args_pages *ap,
++static void fuse_release_user_pages(struct fuse_args_pages *ap, ssize_t nres,
+ bool should_dirty)
+ {
+ unsigned int i;
+@@ -656,6 +656,9 @@ static void fuse_release_user_pages(struct fuse_args_pages *ap,
+ if (ap->args.is_pinned)
+ unpin_user_page(ap->pages[i]);
+ }
++
++ if (nres > 0 && ap->args.invalidate_vmap)
++ invalidate_kernel_vmap_range(ap->args.vmap_base, nres);
+ }
+
+ static void fuse_io_release(struct kref *kref)
+@@ -754,25 +757,29 @@ static void fuse_aio_complete_req(struct fuse_mount *fm, struct fuse_args *args,
+ struct fuse_io_args *ia = container_of(args, typeof(*ia), ap.args);
+ struct fuse_io_priv *io = ia->io;
+ ssize_t pos = -1;
+-
+- fuse_release_user_pages(&ia->ap, io->should_dirty);
++ size_t nres;
+
+ if (err) {
+ /* Nothing */
+ } else if (io->write) {
+ if (ia->write.out.size > ia->write.in.size) {
+ err = -EIO;
+- } else if (ia->write.in.size != ia->write.out.size) {
+- pos = ia->write.in.offset - io->offset +
+- ia->write.out.size;
++ } else {
++ nres = ia->write.out.size;
++ if (ia->write.in.size != ia->write.out.size)
++ pos = ia->write.in.offset - io->offset +
++ ia->write.out.size;
+ }
+ } else {
+ u32 outsize = args->out_args[0].size;
+
++ nres = outsize;
+ if (ia->read.in.size != outsize)
+ pos = ia->read.in.offset - io->offset + outsize;
+ }
+
++ fuse_release_user_pages(&ia->ap, err ?: nres, io->should_dirty);
++
+ fuse_aio_complete(io, err, pos);
+ fuse_io_free(ia);
+ }
+@@ -1468,24 +1475,37 @@ static inline size_t fuse_get_frag_size(const struct iov_iter *ii,
+
+ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ size_t *nbytesp, int write,
+- unsigned int max_pages)
++ unsigned int max_pages,
++ bool use_pages_for_kvec_io)
+ {
++ bool flush_or_invalidate = false;
+ size_t nbytes = 0; /* # bytes already packed in req */
+ ssize_t ret = 0;
+
+- /* Special case for kernel I/O: can copy directly into the buffer */
++ /* Special case for kernel I/O: can copy directly into the buffer.
++ * However if the implementation of fuse_conn requires pages instead of
++ * pointer (e.g., virtio-fs), use iov_iter_extract_pages() instead.
++ */
+ if (iov_iter_is_kvec(ii)) {
+- unsigned long user_addr = fuse_get_user_addr(ii);
+- size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
++ void *user_addr = (void *)fuse_get_user_addr(ii);
+
+- if (write)
+- ap->args.in_args[1].value = (void *) user_addr;
+- else
+- ap->args.out_args[0].value = (void *) user_addr;
++ if (!use_pages_for_kvec_io) {
++ size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
+
+- iov_iter_advance(ii, frag_size);
+- *nbytesp = frag_size;
+- return 0;
++ if (write)
++ ap->args.in_args[1].value = user_addr;
++ else
++ ap->args.out_args[0].value = user_addr;
++
++ iov_iter_advance(ii, frag_size);
++ *nbytesp = frag_size;
++ return 0;
++ }
++
++ if (is_vmalloc_addr(user_addr)) {
++ ap->args.vmap_base = user_addr;
++ flush_or_invalidate = true;
++ }
+ }
+
+ while (nbytes < *nbytesp && ap->num_pages < max_pages) {
+@@ -1514,6 +1534,10 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ (PAGE_SIZE - ret) & (PAGE_SIZE - 1);
+ }
+
++ if (write && flush_or_invalidate)
++ flush_kernel_vmap_range(ap->args.vmap_base, nbytes);
++
++ ap->args.invalidate_vmap = !write && flush_or_invalidate;
+ ap->args.is_pinned = iov_iter_extract_will_pin(ii);
+ ap->args.user_pages = true;
+ if (write)
+@@ -1582,7 +1606,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
+ size_t nbytes = min(count, nmax);
+
+ err = fuse_get_user_pages(&ia->ap, iter, &nbytes, write,
+- max_pages);
++ max_pages, fc->use_pages_for_kvec_io);
+ if (err && !nbytes)
+ break;
+
+@@ -1596,7 +1620,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
+ }
+
+ if (!io->async || nres < 0) {
+- fuse_release_user_pages(&ia->ap, io->should_dirty);
++ fuse_release_user_pages(&ia->ap, nres, io->should_dirty);
+ fuse_io_free(ia);
+ }
+ ia = NULL;
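
The flush_kernel_vmap_range()/invalidate_kernel_vmap_range() calls added
above matter on CPUs with aliasing data caches: data written through a
vmalloc alias must be flushed before the device reads it, and stale CPU
cache lines must be invalidated before reading what the device wrote
through that alias. A condensed kernel-context sketch of the ordering
(not a standalone program; the actual I/O submission is elided):

#include <linux/highmem.h>
#include <linux/mm.h>

static void demo_kvec_io(void *buf, int len, bool write)
{
	bool vmapped = is_vmalloc_addr(buf);

	if (write && vmapped)
		flush_kernel_vmap_range(buf, len);	/* before submit */

	/* ... submit the I/O and wait for completion here ... */

	if (!write && vmapped)
		invalidate_kernel_vmap_range(buf, len);	/* after read */
}
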
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index e6cc3d552b1382..28cf319c1c25cf 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -309,9 +309,12 @@ struct fuse_args {
+ bool may_block:1;
+ bool is_ext:1;
+ bool is_pinned:1;
++ bool invalidate_vmap:1;
+ struct fuse_in_arg in_args[3];
+ struct fuse_arg out_args[2];
+ void (*end)(struct fuse_mount *fm, struct fuse_args *args, int error);
++ /* Used for kvec iter backed by vmalloc address */
++ void *vmap_base;
+ };
+
+ struct fuse_args_pages {
+@@ -857,6 +860,9 @@ struct fuse_conn {
+ /** Passthrough support for read/write IO */
+ unsigned int passthrough:1;
+
++ /* Use pages instead of pointer for kernel I/O */
++ unsigned int use_pages_for_kvec_io:1;
++
+ /** Maximum stack depth for passthrough backing files */
+ int max_stack_depth;
+
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 6404a189e98900..d220e28e755fef 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1691,6 +1691,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ fc->delete_stale = true;
+ fc->auto_submounts = true;
+ fc->sync_fs = true;
++ fc->use_pages_for_kvec_io = true;
+
+ /* Tell FUSE to split requests that exceed the virtqueue's size */
+ fc->max_pages_limit = min_t(unsigned int, fc->max_pages_limit,
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 269c3bc7fced71..a51fe42732c4c2 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1013,14 +1013,15 @@ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl)
+ &gl->gl_delete, 0);
+ }
+
+-static bool gfs2_queue_verify_evict(struct gfs2_glock *gl)
++bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later)
+ {
+ struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
++ unsigned long delay;
+
+- if (test_and_set_bit(GLF_VERIFY_EVICT, &gl->gl_flags))
++ if (test_and_set_bit(GLF_VERIFY_DELETE, &gl->gl_flags))
+ return false;
+- return queue_delayed_work(sdp->sd_delete_wq,
+- &gl->gl_delete, 5 * HZ);
++ delay = later ? 5 * HZ : 0;
++ return queue_delayed_work(sdp->sd_delete_wq, &gl->gl_delete, delay);
+ }
+
+ static void delete_work_func(struct work_struct *work)
+@@ -1052,19 +1053,19 @@ static void delete_work_func(struct work_struct *work)
+ if (gfs2_try_evict(gl)) {
+ if (test_bit(SDF_KILL, &sdp->sd_flags))
+ goto out;
+- if (gfs2_queue_verify_evict(gl))
++ if (gfs2_queue_verify_delete(gl, true))
+ return;
+ }
+ goto out;
+ }
+
+- if (test_and_clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags)) {
++ if (test_and_clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags)) {
+ inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino,
+ GFS2_BLKST_UNLINKED);
+ if (IS_ERR(inode)) {
+ if (PTR_ERR(inode) == -EAGAIN &&
+ !test_bit(SDF_KILL, &sdp->sd_flags) &&
+- gfs2_queue_verify_evict(gl))
++ gfs2_queue_verify_delete(gl, true))
+ return;
+ } else {
+ d_prune_aliases(inode);
+@@ -2118,7 +2119,7 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl)
+ {
+ clear_bit(GLF_TRY_TO_EVICT, &gl->gl_flags);
+- clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags);
++ clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags);
+ if (cancel_delayed_work(&gl->gl_delete))
+ gfs2_glock_put(gl);
+ }
+@@ -2371,7 +2372,7 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
+ *p++ = 'N';
+ if (test_bit(GLF_TRY_TO_EVICT, gflags))
+ *p++ = 'e';
+- if (test_bit(GLF_VERIFY_EVICT, gflags))
++ if (test_bit(GLF_VERIFY_DELETE, gflags))
+ *p++ = 'E';
+ *p = 0;
+ return buf;
+diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
+index adf0091cc98f95..63e101d448e961 100644
+--- a/fs/gfs2/glock.h
++++ b/fs/gfs2/glock.h
+@@ -245,6 +245,7 @@ static inline int gfs2_glock_nq_init(struct gfs2_glock *gl,
+ void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state);
+ void gfs2_glock_complete(struct gfs2_glock *gl, int ret);
+ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl);
++bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later);
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl);
+ void gfs2_flush_delete_work(struct gfs2_sbd *sdp);
+ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp);
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index aa4ef67a34e037..bd1348bff90ebe 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -329,7 +329,7 @@ enum {
+ GLF_BLOCKING = 15,
+ GLF_UNLOCKED = 16, /* Wait for glock to be unlocked */
+ GLF_TRY_TO_EVICT = 17, /* iopen glocks only */
+- GLF_VERIFY_EVICT = 18, /* iopen glocks only */
++ GLF_VERIFY_DELETE = 18, /* iopen glocks only */
+ };
+
+ struct gfs2_glock {
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 29c77281676526..53930312971530 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1879,7 +1879,7 @@ static void try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked, u64 skip
+ */
+ ip = gl->gl_object;
+
+- if (ip || !gfs2_queue_try_to_evict(gl))
++ if (ip || !gfs2_queue_verify_delete(gl, false))
+ gfs2_glock_put(gl);
+ else
+ found++;
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 6678060ed4d2bb..e22c1edc32b39e 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1045,7 +1045,7 @@ static int gfs2_drop_inode(struct inode *inode)
+ struct gfs2_glock *gl = ip->i_iopen_gh.gh_gl;
+
+ gfs2_glock_hold(gl);
+- if (!gfs2_queue_try_to_evict(gl))
++ if (!gfs2_queue_verify_delete(gl, true))
+ gfs2_glock_put_async(gl);
+ return 0;
+ }
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index 59ce81dca73fce..5389918bbf29db 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -156,6 +156,7 @@ struct hfsplus_sb_info {
+
+ /* Runtime variables */
+ u32 blockoffset;
++ u32 min_io_size;
+ sector_t part_start;
+ sector_t sect_count;
+ int fs_shift;
+@@ -307,7 +308,7 @@ struct hfsplus_readdir_data {
+ */
+ static inline unsigned short hfsplus_min_io_size(struct super_block *sb)
+ {
+- return max_t(unsigned short, bdev_logical_block_size(sb->s_bdev),
++ return max_t(unsigned short, HFSPLUS_SB(sb)->min_io_size,
+ HFSPLUS_SECTOR_SIZE);
+ }
+
+diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
+index 9592ffcb44e5ea..74801911bc1cc4 100644
+--- a/fs/hfsplus/wrapper.c
++++ b/fs/hfsplus/wrapper.c
+@@ -172,6 +172,8 @@ int hfsplus_read_wrapper(struct super_block *sb)
+ if (!blocksize)
+ goto out;
+
++ sbi->min_io_size = blocksize;
++
+ if (hfsplus_get_last_session(sb, &part_start, &part_size))
+ goto out;
+
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index 6d1cf2436ead68..084f6ed2dd7a69 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -471,8 +471,8 @@ static int hostfs_write_begin(struct file *file, struct address_space *mapping,
+
+ *foliop = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+ mapping_gfp_mask(mapping));
+- if (!*foliop)
+- return -ENOMEM;
++ if (IS_ERR(*foliop))
++ return PTR_ERR(*foliop);
+ return 0;
+ }
+
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index f50311a6b4299d..47038e6608123c 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -948,8 +948,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc)
+ goto out_no_inode;
+ }
+
+- kfree(opt->iocharset);
+-
+ return 0;
+
+ /*
+@@ -987,7 +985,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc)
+ brelse(bh);
+ brelse(pri_bh);
+ out_freesbi:
+- kfree(opt->iocharset);
+ kfree(sbi);
+ s->s_fs_info = NULL;
+ return error;
+@@ -1528,7 +1525,10 @@ static int isofs_get_tree(struct fs_context *fc)
+
+ static void isofs_free_fc(struct fs_context *fc)
+ {
+- kfree(fc->fs_private);
++ struct isofs_options *opt = fc->fs_private;
++
++ kfree(opt->iocharset);
++ kfree(opt);
+ }
+
+ static const struct fs_context_operations isofs_context_ops = {
+diff --git a/fs/jffs2/erase.c b/fs/jffs2/erase.c
+index acd32f05b51988..ef3a1e1b6cb065 100644
+--- a/fs/jffs2/erase.c
++++ b/fs/jffs2/erase.c
+@@ -338,10 +338,9 @@ static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_erasebl
+ } while(--retlen);
+ mtd_unpoint(c->mtd, jeb->offset, c->sector_size);
+ if (retlen) {
+- pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n",
+- *wordebuf,
+- jeb->offset +
+- c->sector_size-retlen * sizeof(*wordebuf));
++ *bad_offset = jeb->offset + c->sector_size - retlen * sizeof(*wordebuf);
++ pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n",
++ *wordebuf, *bad_offset);
+ return -EIO;
+ }
+ return 0;
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 0fb05e314edf60..24afbae87225a7 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -559,7 +559,7 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+
+ size_check:
+ if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
+- int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size);
++ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
+
+ printk(KERN_ERR "ea_get: invalid extended attribute\n");
+ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
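
The min_t() to clamp_t() switch in ea_get() above guards against a
corrupted (for instance, negative) ea_size: min_t() would propagate the
negative value into the hex-dump length, whereas clamp_t() pins it into
[0, EALIST_SIZE]. A standalone demonstration with local macros that
behave like the kernel's for these inputs:

#include <stdio.h>

#define min_t(t, a, b)	((t)(a) < (t)(b) ? (t)(a) : (t)(b))
#define clamp_t(t, v, lo, hi) \
	((t)(v) < (t)(lo) ? (t)(lo) : \
	 ((t)(v) > (t)(hi) ? (t)(hi) : (t)(v)))

int main(void)
{
	int list_size = 64;
	int ea_size = -200;	/* corrupted on-disk value */

	printf("min_t:   %d\n", min_t(int, list_size, ea_size));	/* -200 */
	printf("clamp_t: %d\n", clamp_t(int, ea_size, 0, list_size));	/* 0 */
	return 0;
}
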
+diff --git a/fs/netfs/fscache_volume.c b/fs/netfs/fscache_volume.c
+index cb75c07b5281a5..ced14ac78cc1c2 100644
+--- a/fs/netfs/fscache_volume.c
++++ b/fs/netfs/fscache_volume.c
+@@ -322,8 +322,7 @@ void fscache_create_volume(struct fscache_volume *volume, bool wait)
+ }
+ return;
+ no_wait:
+- clear_bit_unlock(FSCACHE_VOLUME_CREATING, &volume->flags);
+- wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING);
++ clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags);
+ }
+
+ /*
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 0becdec129704f..47189476b5538b 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -571,19 +571,32 @@ bl_find_get_deviceid(struct nfs_server *server,
+ if (!node)
+ return ERR_PTR(-ENODEV);
+
++ /*
++ * Devices that are marked unavailable are left in the cache with a
++ * timeout to avoid sending GETDEVINFO after every LAYOUTGET, or
++ * constantly attempting to register the device. Once marked as
++ * unavailable they must be deleted and never reused.
++ */
+ if (test_bit(NFS_DEVICEID_UNAVAILABLE, &node->flags)) {
+ unsigned long end = jiffies;
+ unsigned long start = end - PNFS_DEVICE_RETRY_TIMEOUT;
+
+ if (!time_in_range(node->timestamp_unavailable, start, end)) {
++ /* Uncork subsequent GETDEVINFO operations for this device */
+ nfs4_delete_deviceid(node->ld, node->nfs_client, id);
+ goto retry;
+ }
+ goto out_put;
+ }
+
+- if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node)))
++ if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node))) {
++ /*
++ * If we cannot register, treat this device as transient:
++ * make a negative cache entry for the device.
++ */
++ nfs4_mark_deviceid_unavailable(node);
+ goto out_put;
++ }
+
+ return node;
+
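
The comment added to bl_find_get_deviceid() above describes a negative
cache: a device marked unavailable stays cached for
PNFS_DEVICE_RETRY_TIMEOUT so that every LAYOUTGET does not trigger another
GETDEVINFO, and only once the window lapses is the entry deleted and the
lookup retried. A userspace sketch of the timestamp-window test, using a
wrap-safe comparison in the role of the kernel's time_in_range():

#include <stdbool.h>
#include <stdio.h>

/* True iff a <= t <= b, tolerant of counter wraparound. */
static bool time_in_range(unsigned long t, unsigned long a, unsigned long b)
{
	return t - a <= b - a;
}

int main(void)
{
	unsigned long now = 100000, retry_timeout = 5000;
	unsigned long marked_at = 96000;	/* timestamp_unavailable */

	if (time_in_range(marked_at, now - retry_timeout, now))
		puts("still unavailable, fail fast without GETDEVINFO");
	else
		puts("window expired, delete entry and retry GETDEVINFO");
	return 0;
}
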
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index 6252f44479457b..cab8809f0e0f48 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -20,9 +20,6 @@ static void bl_unregister_scsi(struct pnfs_block_dev *dev)
+ const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
+ int status;
+
+- if (!test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags))
+- return;
+-
+ status = ops->pr_register(bdev, dev->pr_key, 0, false);
+ if (status)
+ trace_bl_pr_key_unreg_err(bdev, dev->pr_key, status);
+@@ -58,7 +55,8 @@ static void bl_unregister_dev(struct pnfs_block_dev *dev)
+ return;
+ }
+
+- if (dev->type == PNFS_BLOCK_VOLUME_SCSI)
++ if (dev->type == PNFS_BLOCK_VOLUME_SCSI &&
++ test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags))
+ bl_unregister_scsi(dev);
+ }
+
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 430733e3eff260..6bcc4b0e00ab72 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -12,7 +12,7 @@
+ #include <linux/nfslocalio.h>
+ #include <linux/wait_bit.h>
+
+-#define NFS_SB_MASK (SB_RDONLY|SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
++#define NFS_SB_MASK (SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
+
+ extern const struct export_operations nfs_export_ops;
+
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 8f0ce82a677e15..637528e6368ef7 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -354,6 +354,12 @@ nfs_local_read_done(struct nfs_local_kiocb *iocb, long status)
+
+ nfs_local_pgio_done(hdr, status);
+
++ /*
++ * Must clear replen, otherwise NFSv3 data corruption will occur
++ * if/when switching from LOCALIO back to using normal RPC.
++ */
++ hdr->res.replen = 0;
++
+ if (hdr->res.count != hdr->args.count ||
+ hdr->args.offset + hdr->res.count >= i_size_read(file_inode(filp)))
+ hdr->res.eof = true;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9d40319e063dea..405f17e6e0b45b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2603,12 +2603,14 @@ static void nfs4_open_release(void *calldata)
+ struct nfs4_opendata *data = calldata;
+ struct nfs4_state *state = NULL;
+
++ /* In case of error, no cleanup! */
++ if (data->rpc_status != 0 || !data->rpc_done) {
++ nfs_release_seqid(data->o_arg.seqid);
++ goto out_free;
++ }
+ /* If this request hasn't been cancelled, do nothing */
+ if (!data->cancelled)
+ goto out_free;
+- /* In case of error, no cleanup! */
+- if (data->rpc_status != 0 || !data->rpc_done)
+- goto out_free;
+ /* In case we need an open_confirm, no cleanup! */
+ if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM)
+ goto out_free;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index ead2dc55952dba..82ae2b85d393cb 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -144,6 +144,31 @@ static void nfs_io_completion_put(struct nfs_io_completion *ioc)
+ kref_put(&ioc->refcount, nfs_io_completion_release);
+ }
+
++static void
++nfs_page_set_inode_ref(struct nfs_page *req, struct inode *inode)
++{
++ if (!test_and_set_bit(PG_INODE_REF, &req->wb_flags)) {
++ kref_get(&req->wb_kref);
++ atomic_long_inc(&NFS_I(inode)->nrequests);
++ }
++}
++
++static int
++nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)
++{
++ int ret;
++
++ if (!test_bit(PG_REMOVE, &req->wb_flags))
++ return 0;
++ ret = nfs_page_group_lock(req);
++ if (ret)
++ return ret;
++ if (test_and_clear_bit(PG_REMOVE, &req->wb_flags))
++ nfs_page_set_inode_ref(req, inode);
++ nfs_page_group_unlock(req);
++ return 0;
++}
++
+ /**
+ * nfs_folio_find_head_request - find head request associated with a folio
+ * @folio: pointer to folio
+@@ -540,7 +565,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ struct inode *inode = folio->mapping->host;
+ struct nfs_page *head, *subreq;
+ struct nfs_commit_info cinfo;
+- bool removed;
+ int ret;
+
+ /*
+@@ -565,18 +589,18 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ goto retry;
+ }
+
+- ret = nfs_page_group_lock(head);
++ ret = nfs_cancel_remove_inode(head, inode);
+ if (ret < 0)
+ goto out_unlock;
+
+- removed = test_bit(PG_REMOVE, &head->wb_flags);
++ ret = nfs_page_group_lock(head);
++ if (ret < 0)
++ goto out_unlock;
+
+ /* lock each request in the page group */
+ for (subreq = head->wb_this_page;
+ subreq != head;
+ subreq = subreq->wb_this_page) {
+- if (test_bit(PG_REMOVE, &subreq->wb_flags))
+- removed = true;
+ ret = nfs_page_group_lock_subreq(head, subreq);
+ if (ret < 0)
+ goto out_unlock;
+@@ -584,21 +608,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+
+ nfs_page_group_unlock(head);
+
+- /*
+- * If PG_REMOVE is set on any request, I/O on that request has
+- * completed, but some requests were still under I/O at the time
+- * we locked the head request.
+- *
+- * In that case the above wait for all requests means that all I/O
+- * has now finished, and we can restart from a clean slate. Let the
+- * old requests go away and start from scratch instead.
+- */
+- if (removed) {
+- nfs_unroll_locks(head, head);
+- nfs_unlock_and_release_request(head);
+- goto retry;
+- }
+-
+ nfs_init_cinfo_from_inode(&cinfo, inode);
+ nfs_join_page_group(head, &cinfo, inode);
+ return head;
+diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c
+index 09404d142d1ae6..a74ec08f6c96d0 100644
+--- a/fs/nfs_common/nfslocalio.c
++++ b/fs/nfs_common/nfslocalio.c
+@@ -155,11 +155,9 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ /* We have an implied reference to net thanks to nfsd_serv_try_get */
+ localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt,
+ cred, nfs_fh, fmode);
+- if (IS_ERR(localio)) {
+- rcu_read_lock();
+- nfs_to->nfsd_serv_put(net);
+- rcu_read_unlock();
+- }
++ if (IS_ERR(localio))
++ nfs_to_nfsd_net_put(net);
++
+ return localio;
+ }
+ EXPORT_SYMBOL_GPL(nfs_open_local_fh);
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index c82d8e3e0d4f28..984f8e6379dd47 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -40,15 +40,24 @@
+ #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)
+ #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
+
+-static void expkey_put(struct kref *ref)
++static void expkey_put_work(struct work_struct *work)
+ {
+- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
++ struct svc_expkey *key =
++ container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
+
+ if (test_bit(CACHE_VALID, &key->h.flags) &&
+ !test_bit(CACHE_NEGATIVE, &key->h.flags))
+ path_put(&key->ek_path);
+ auth_domain_put(key->ek_client);
+- kfree_rcu(key, ek_rcu);
++ kfree(key);
++}
++
++static void expkey_put(struct kref *ref)
++{
++ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
++
++ INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work);
++ queue_rcu_work(system_wq, &key->ek_rcu_work);
+ }
+
+ static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)
+@@ -355,16 +364,26 @@ static void export_stats_destroy(struct export_stats *stats)
+ EXP_STATS_COUNTERS_NUM);
+ }
+
+-static void svc_export_put(struct kref *ref)
++static void svc_export_put_work(struct work_struct *work)
+ {
+- struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
++ struct svc_export *exp =
++ container_of(to_rcu_work(work), struct svc_export, ex_rcu_work);
++
+ path_put(&exp->ex_path);
+ auth_domain_put(exp->ex_client);
+ nfsd4_fslocs_free(&exp->ex_fslocs);
+ export_stats_destroy(exp->ex_stats);
+ kfree(exp->ex_stats);
+ kfree(exp->ex_uuid);
+- kfree_rcu(exp, ex_rcu);
++ kfree(exp);
++}
++
++static void svc_export_put(struct kref *ref)
++{
++ struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
++
++ INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work);
++ queue_rcu_work(system_wq, &exp->ex_rcu_work);
+ }
+
+ static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index 3794ae253a7016..081afb68681e14 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -75,7 +75,7 @@ struct svc_export {
+ u32 ex_layout_types;
+ struct nfsd4_deviceid_map *ex_devid_map;
+ struct cache_detail *cd;
+- struct rcu_head ex_rcu;
++ struct rcu_work ex_rcu_work;
+ unsigned long ex_xprtsec_modes;
+ struct export_stats *ex_stats;
+ };
+@@ -92,7 +92,7 @@ struct svc_expkey {
+ u32 ek_fsid[6];
+
+ struct path ek_path;
+- struct rcu_head ek_rcu;
++ struct rcu_work ek_rcu_work;
+ };
+
+ #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
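+
The export.c/export.h hunks above swap kfree_rcu() for the rcu_work pattern: the final kref release now only schedules teardown, and the actual destructor runs in workqueue context after an RCU grace period, where sleeping calls such as path_put() are legal. A minimal kernel-style sketch of the pattern, with hypothetical "foo" names standing in for svc_expkey/svc_export:

    #include <linux/kref.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct foo {
    	struct kref ref;
    	struct rcu_work rwork;
    };

    static void foo_free_work(struct work_struct *work)
    {
    	struct foo *foo = container_of(to_rcu_work(work), struct foo, rwork);

    	/* Process context, after a grace period: safe to sleep here
    	 * (path_put(), auth_domain_put(), ...) before freeing. */
    	kfree(foo);
    }

    static void foo_release(struct kref *ref)
    {
    	struct foo *foo = container_of(ref, struct foo, ref);

    	INIT_RCU_WORK(&foo->rwork, foo_free_work);
    	queue_rcu_work(system_wq, &foo->rwork);
    }

The old kfree_rcu() only deferred the kfree() itself; the path and auth_domain puts still ran in whatever context dropped the last reference, which is what this change moves out of line.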
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 2e6783f6371245..146a9463c3c230 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -391,19 +391,19 @@ nfsd_file_put(struct nfsd_file *nf)
+ }
+
+ /**
+- * nfsd_file_put_local - put the reference to nfsd_file and local nfsd_serv
+- * @nf: nfsd_file of which to put the references
++ * nfsd_file_put_local - put nfsd_file reference and arm nfsd_serv_put in caller
++ * @nf: nfsd_file of which to put the reference
+ *
+- * First put the reference of the nfsd_file and then put the
+- * reference to the associated nn->nfsd_serv.
++ * First save the associated net to return to caller, then put
++ * the reference of the nfsd_file.
+ */
+-void
+-nfsd_file_put_local(struct nfsd_file *nf) __must_hold(rcu)
++struct net *
++nfsd_file_put_local(struct nfsd_file *nf)
+ {
+ struct net *net = nf->nf_net;
+
+ nfsd_file_put(nf);
+- nfsd_serv_put(net);
++ return net;
+ }
+
+ /**
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index cadf3c2689c44c..d5db6b34ba302c 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -55,7 +55,7 @@ void nfsd_file_cache_shutdown(void);
+ int nfsd_file_cache_start_net(struct net *net);
+ void nfsd_file_cache_shutdown_net(struct net *net);
+ void nfsd_file_put(struct nfsd_file *nf);
+-void nfsd_file_put_local(struct nfsd_file *nf);
++struct net *nfsd_file_put_local(struct nfsd_file *nf);
+ struct nfsd_file *nfsd_file_get(struct nfsd_file *nf);
+ struct file *nfsd_file_file(struct nfsd_file *nf);
+ void nfsd_file_close_inode_sync(struct inode *inode);
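+
The filecache hunks invert ownership of the nfsd_serv reference: nfsd_file_put_local() no longer drops it but hands the struct net back, so the NFS-client side can drop it itself, which is what the nfs_to_nfsd_net_put() call in the nfslocalio.c hunk is for. Assuming that wrapper behaves like the lines it replaces, the caller-side release looks roughly like:

    /* Sketch only: drop the file, then release the implied
     * nfsd_serv reference under RCU, as the removed lines did. */
    struct net *net = nfsd_file_put_local(nf);

    rcu_read_lock();
    nfs_to->nfsd_serv_put(net);
    rcu_read_unlock();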
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index b5b3ab9d719a74..b8cbb15560040f 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -287,17 +287,17 @@ static int decode_cb_compound4res(struct xdr_stream *xdr,
+ u32 length;
+ __be32 *p;
+
+- p = xdr_inline_decode(xdr, 4 + 4);
++ p = xdr_inline_decode(xdr, XDR_UNIT);
+ if (unlikely(p == NULL))
+ goto out_overflow;
+- hdr->status = be32_to_cpup(p++);
++ hdr->status = be32_to_cpup(p);
+ /* Ignore the tag */
+- length = be32_to_cpup(p++);
+- p = xdr_inline_decode(xdr, length + 4);
+- if (unlikely(p == NULL))
++ if (xdr_stream_decode_u32(xdr, &length) < 0)
++ goto out_overflow;
++ if (xdr_inline_decode(xdr, length) == NULL)
++ goto out_overflow;
++ if (xdr_stream_decode_u32(xdr, &hdr->nops) < 0)
+ goto out_overflow;
+- p += XDR_QUADLEN(length);
+- hdr->nops = be32_to_cpup(p);
+ return 0;
+ out_overflow:
+ return -EIO;
+@@ -1461,6 +1461,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ ses = c->cn_session;
+ }
+ spin_unlock(&clp->cl_lock);
++ if (!c)
++ return;
+
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index d32f2dfd148fe3..7a1fdafa42ea17 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1292,7 +1292,7 @@ static void nfsd4_stop_copy(struct nfsd4_copy *copy)
+ nfs4_put_copy(copy);
+ }
+
+-static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
++static struct nfsd4_copy *nfsd4_unhash_copy(struct nfs4_client *clp)
+ {
+ struct nfsd4_copy *copy = NULL;
+
+@@ -1301,6 +1301,9 @@ static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
+ copy = list_first_entry(&clp->async_copies, struct nfsd4_copy,
+ copies);
+ refcount_inc(&copy->refcount);
++ copy->cp_clp = NULL;
++ if (!list_empty(&copy->copies))
++ list_del_init(&copy->copies);
+ }
+ spin_unlock(&clp->async_lock);
+ return copy;
+@@ -1310,7 +1313,7 @@ void nfsd4_shutdown_copy(struct nfs4_client *clp)
+ {
+ struct nfsd4_copy *copy;
+
+- while ((copy = nfsd4_get_copy(clp)) != NULL)
++ while ((copy = nfsd4_unhash_copy(clp)) != NULL)
+ nfsd4_stop_copy(copy);
+ }
+ #ifdef CONFIG_NFSD_V4_2_INTER_SSC
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index b7d61eb8afe9e1..4a765555bf8459 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -659,7 +659,8 @@ nfs4_reset_recoverydir(char *recdir)
+ return status;
+ status = -ENOTDIR;
+ if (d_is_dir(path.dentry)) {
+- strcpy(user_recovery_dirname, recdir);
++ strscpy(user_recovery_dirname, recdir,
++ sizeof(user_recovery_dirname));
+ status = 0;
+ }
+ path_put(&path);
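+
The nfs4recover.c hunk is the classic strcpy()-to-strscpy() conversion: strscpy() takes the destination size, never overflows, always NUL-terminates, and reports truncation. For a fixed-size buffer the idiom is:

    char dirname[PATH_MAX];

    /* Copies at most sizeof(dirname) - 1 bytes; returns the number of
     * characters copied, or -E2BIG if `recdir` had to be truncated. */
    ssize_t n = strscpy(dirname, recdir, sizeof(dirname));

Taking the size from the destination keeps the bound correct if the buffer is ever resized.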
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 551d2958ec2905..d3cfc647153993 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5957,7 +5957,7 @@ nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
+ path.dentry = file_dentry(nf->nf_file);
+
+ rc = vfs_getattr(&path, stat,
+- (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
++ (STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
+ AT_STATX_SYNC_AS_STAT);
+
+ nfsd_file_put(nf);
+@@ -6041,8 +6041,7 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ }
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+ dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
+- dp->dl_cb_fattr.ncf_initial_cinfo =
+- nfsd4_change_attribute(&stat, d_inode(currentfh->fh_dentry));
++ dp->dl_cb_fattr.ncf_initial_cinfo = nfsd4_change_attribute(&stat);
+ trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+ } else {
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
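+
Adding STATX_MODE to the request mask matters because nfsd4_change_attribute() now reads stat->mode instead of dereferencing an inode: a kstat field is only meaningful if it was requested and the filesystem reported it back in result_mask. A kernel-style fragment of the defensive pattern:

    u32 request = STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE;
    struct kstat stat;
    int rc;

    rc = vfs_getattr(&path, &stat, request, AT_STATX_SYNC_AS_STAT);
    if (!rc && (stat.result_mask & STATX_MODE) && S_ISREG(stat.mode)) {
    	/* stat.mode is valid to consult here */
    }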
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index f118921250c316..8d25aef51ad150 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3040,7 +3040,7 @@ static __be32 nfsd4_encode_fattr4_change(struct xdr_stream *xdr,
+ return nfs_ok;
+ }
+
+- c = nfsd4_change_attribute(&args->stat, d_inode(args->dentry));
++ c = nfsd4_change_attribute(&args->stat);
+ return nfsd4_encode_changeid4(xdr, c);
+ }
+
+diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
+index 40ad58a6a0361e..96e19c50a5d7ee 100644
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -667,20 +667,18 @@ fh_update(struct svc_fh *fhp)
+ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp)
+ {
+ bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+- struct inode *inode;
+ struct kstat stat;
+ __be32 err;
+
+ if (fhp->fh_no_wcc || fhp->fh_pre_saved)
+ return nfs_ok;
+
+- inode = d_inode(fhp->fh_dentry);
+ err = fh_getattr(fhp, &stat);
+ if (err)
+ return err;
+
+ if (v4)
+- fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode);
++ fhp->fh_pre_change = nfsd4_change_attribute(&stat);
+
+ fhp->fh_pre_mtime = stat.mtime;
+ fhp->fh_pre_ctime = stat.ctime;
+@@ -697,7 +695,6 @@ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp)
+ __be32 fh_fill_post_attrs(struct svc_fh *fhp)
+ {
+ bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+- struct inode *inode = d_inode(fhp->fh_dentry);
+ __be32 err;
+
+ if (fhp->fh_no_wcc)
+@@ -713,7 +710,7 @@ __be32 fh_fill_post_attrs(struct svc_fh *fhp)
+ fhp->fh_post_saved = true;
+ if (v4)
+ fhp->fh_post_change =
+- nfsd4_change_attribute(&fhp->fh_post_attr, inode);
++ nfsd4_change_attribute(&fhp->fh_post_attr);
+ return nfs_ok;
+ }
+
+@@ -804,7 +801,14 @@ enum fsid_source fsid_source(const struct svc_fh *fhp)
+ return FSIDSOURCE_DEV;
+ }
+
+-/*
++/**
++ * nfsd4_change_attribute - Generate an NFSv4 change_attribute value
++ * @stat: inode attributes
++ *
++ * Caller must fill in @stat before calling, typically by invoking
++ * vfs_getattr() with STATX_MODE, STATX_CTIME, and STATX_CHANGE_COOKIE.
++ * Returns an unsigned 64-bit changeid4 value (RFC 8881 Section 3.2).
++ *
+ * We could use i_version alone as the change attribute. However, i_version
+ * can go backwards on a regular file after an unclean shutdown. On its own
+ * that doesn't necessarily cause a problem, but if i_version goes backwards
+@@ -821,13 +825,13 @@ enum fsid_source fsid_source(const struct svc_fh *fhp)
+ * assume that the new change attr is always logged to stable storage in some
+ * fashion before the results can be seen.
+ */
+-u64 nfsd4_change_attribute(const struct kstat *stat, const struct inode *inode)
++u64 nfsd4_change_attribute(const struct kstat *stat)
+ {
+ u64 chattr;
+
+ if (stat->result_mask & STATX_CHANGE_COOKIE) {
+ chattr = stat->change_cookie;
+- if (S_ISREG(inode->i_mode) &&
++ if (S_ISREG(stat->mode) &&
+ !(stat->attributes & STATX_ATTR_CHANGE_MONOTONIC)) {
+ chattr += (u64)stat->ctime.tv_sec << 30;
+ chattr += stat->ctime.tv_nsec;
+diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
+index 5b7394801dc427..876152a91f122f 100644
+--- a/fs/nfsd/nfsfh.h
++++ b/fs/nfsd/nfsfh.h
+@@ -297,8 +297,7 @@ static inline void fh_clear_pre_post_attrs(struct svc_fh *fhp)
+ fhp->fh_pre_saved = false;
+ }
+
+-u64 nfsd4_change_attribute(const struct kstat *stat,
+- const struct inode *inode);
++u64 nfsd4_change_attribute(const struct kstat *stat);
+ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp);
+ __be32 fh_fill_post_attrs(struct svc_fh *fhp);
+ __be32 __must_check fh_fill_both_attrs(struct svc_fh *fhp);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 82ae8254c068be..f976949d2634a1 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -333,16 +333,19 @@ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask,
+ if (!inode_mark)
+ return 0;
+
+- if (mask & FS_EVENT_ON_CHILD) {
+- /*
+- * Some events can be sent on both parent dir and child marks
+- * (e.g. FS_ATTRIB). If both parent dir and child are
+- * watching, report the event once to parent dir with name (if
+- * interested) and once to child without name (if interested).
+- * The child watcher is expecting an event without a file name
+- * and without the FS_EVENT_ON_CHILD flag.
+- */
+- mask &= ~FS_EVENT_ON_CHILD;
++ /*
++ * Some events can be sent on both parent dir and child marks (e.g.
++ * FS_ATTRIB). If both parent dir and child are watching, report the
++ * event once to parent dir with name (if interested) and once to child
++ * without name (if interested).
++ *
++ * In any case, regardless of whether the parent is watching or not, the
++ * child watcher is expecting an event without the FS_EVENT_ON_CHILD
++ * flag. The file name is expected if and only if this is a directory
++ * event.
++ */
++ mask &= ~FS_EVENT_ON_CHILD;
++ if (!(mask & ALL_FSNOTIFY_DIRENT_EVENTS)) {
+ dir = NULL;
+ name = NULL;
+ }
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index c45b222cf9c11c..4981439e62092a 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -138,8 +138,11 @@ static void fsnotify_get_sb_watched_objects(struct super_block *sb)
+
+ static void fsnotify_put_sb_watched_objects(struct super_block *sb)
+ {
+- if (atomic_long_dec_and_test(fsnotify_sb_watched_objects(sb)))
+- wake_up_var(fsnotify_sb_watched_objects(sb));
++ atomic_long_t *watched_objects = fsnotify_sb_watched_objects(sb);
++
++ /* the superblock can go away after this decrement */
++ if (atomic_long_dec_and_test(watched_objects))
++ wake_up_var(watched_objects);
+ }
+
+ static void fsnotify_get_inode_ref(struct inode *inode)
+@@ -150,8 +153,11 @@ static void fsnotify_get_inode_ref(struct inode *inode)
+
+ static void fsnotify_put_inode_ref(struct inode *inode)
+ {
+- fsnotify_put_sb_watched_objects(inode->i_sb);
++ /* read ->i_sb before the inode can go away */
++ struct super_block *sb = inode->i_sb;
++
+ iput(inode);
++ fsnotify_put_sb_watched_objects(sb);
+ }
+
+ /*
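+
Both mark.c fixes apply the same rule: read everything you need out of an object before dropping the reference that keeps it alive, because the put may free it. In generic form (names hypothetical):

    /* Anti-pattern: obj may be freed by the put, so the later field
     * access is a potential use-after-free. */
    put_object(obj);
    release_parent(obj->parent);		/* BAD */

    /* Fix: load the field first, then drop the reference. */
    struct parent *parent = obj->parent;

    put_object(obj);
    release_parent(parent);			/* OK */

Here the object is the inode (freed by iput()) and, one level up, the superblock, which may go away as soon as the watched-objects count hits zero.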
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index e370eaf9bfe2ed..f704ceef953948 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -222,7 +222,7 @@ static int ntfs_extend_initialized_size(struct file *file,
+ if (err)
+ goto out;
+
+- folio_zero_range(folio, zerofrom, folio_size(folio));
++ folio_zero_range(folio, zerofrom, folio_size(folio) - zerofrom);
+
+ err = ntfs_write_end(file, mapping, pos, len, len, folio, NULL);
+ if (err < 0)
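+
The ntfs3 fix is a length-versus-end-offset mix-up: folio_zero_range() takes a start offset and a length, so zeroing from zerofrom to the end of the folio needs folio_size(folio) - zerofrom bytes, not folio_size(folio). With a 4096-byte folio and zerofrom = 512, the old call zeroed 4096 bytes starting at offset 512, running 512 bytes past the folio; the fix zeroes the remaining 3584. The userspace analogue:

    #include <string.h>

    static void zero_tail(char *buf, size_t bufsize, size_t zerofrom)
    {
    	/* length = end - start, not the whole buffer size */
    	memset(buf + zerofrom, 0, bufsize - zerofrom);
    }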
+diff --git a/fs/ocfs2/aops.h b/fs/ocfs2/aops.h
+index 45db1781ea735a..1d1b4b7edba02e 100644
+--- a/fs/ocfs2/aops.h
++++ b/fs/ocfs2/aops.h
+@@ -70,6 +70,8 @@ enum ocfs2_iocb_lock_bits {
+ OCFS2_IOCB_NUM_LOCKS
+ };
+
++#define ocfs2_iocb_init_rw_locked(iocb) \
++ (iocb->private = NULL)
+ #define ocfs2_iocb_clear_rw_locked(iocb) \
+ clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private)
+ #define ocfs2_iocb_rw_locked_level(iocb) \
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 06af21982c16ab..cb09330a086119 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -2398,6 +2398,8 @@ static ssize_t ocfs2_file_write_iter(struct kiocb *iocb,
+ } else
+ inode_lock(inode);
+
++ ocfs2_iocb_init_rw_locked(iocb);
++
+ /*
+ * Concurrent O_DIRECT writes are allowed with
+ * mount_option "coherency=buffered".
+@@ -2544,6 +2546,8 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb,
+ if (!direct_io && nowait)
+ return -EOPNOTSUPP;
+
++ ocfs2_iocb_init_rw_locked(iocb);
++
+ /*
+ * buffered reads protect themselves in ->read_folio(). O_DIRECT reads
+ * need locks to protect pending reads from racing with truncate.
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 51446c59388f10..7a85735d584f35 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -493,13 +493,13 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ * the previous entry, search for a matching entry.
+ */
+ if (!m || start < m->addr || start >= m->addr + m->size) {
+- struct kcore_list *iter;
++ struct kcore_list *pos;
+
+ m = NULL;
+- list_for_each_entry(iter, &kclist_head, list) {
+- if (start >= iter->addr &&
+- start < iter->addr + iter->size) {
+- m = iter;
++ list_for_each_entry(pos, &kclist_head, list) {
++ if (start >= pos->addr &&
++ start < pos->addr + pos->size) {
++ m = pos;
+ break;
+ }
+ }
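+
The kcore change is purely a rename: the list cursor was called iter, shadowing the function's struct iov_iter *iter parameter, so any later reference to iter inside that block silently meant the cursor. Compilers flag the hazard if asked; a minimal userspace reproduction (hypothetical names):

    /* cc -Wshadow -c shadow.c */
    struct cursor { int pos; };

    static int scan(int *iter)
    {
    	struct cursor iter = { 0 };	/* warning: declaration of 'iter'
    					 * shadows a parameter [-Wshadow] */
    	return iter.pos;		/* the cursor, not the int * */
    }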
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 64dc24afdb3a7f..befec0b5c537a7 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1830,18 +1830,21 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
+ return 0;
+ }
+
+-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos)
++int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ size_t len = iov_iter_count(iter);
+
+ if (!iter_is_ubuf(iter))
+- return false;
++ return -EINVAL;
+
+ if (!is_power_of_2(len))
+- return false;
++ return -EINVAL;
++
++ if (!IS_ALIGNED(iocb->ki_pos, len))
++ return -EINVAL;
+
+- if (!IS_ALIGNED(pos, len))
+- return false;
++ if (!(iocb->ki_flags & IOCB_DIRECT))
++ return -EOPNOTSUPP;
+
+- return true;
++ return 0;
+ }
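+
generic_atomic_write_valid() now explains its rejections (-EINVAL for a malformed request, -EOPNOTSUPP for buffered I/O) instead of returning a bare bool, and it still enforces natural alignment: the length must be a power of two and the position a multiple of that length. Numerically, len = 8192 at pos = 16384 passes, while len = 8192 at pos = 12288 fails the alignment test. The two bit tricks behind is_power_of_2() and IS_ALIGNED(), in plain C:

    #include <stdbool.h>
    #include <stdint.h>

    static bool atomic_write_shape_ok(uint64_t pos, uint64_t len)
    {
    	bool pow2 = len != 0 && (len & (len - 1)) == 0;

    	/* pos & (len - 1) equals pos % len when len is a power of two */
    	return pow2 && (pos & (len - 1)) == 0;
    }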
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index 0ff2491c311d8a..9c0ef4195b5829 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -17,6 +17,11 @@ static void free_cached_dir(struct cached_fid *cfid);
+ static void smb2_close_cached_fid(struct kref *ref);
+ static void cfids_laundromat_worker(struct work_struct *work);
+
++struct cached_dir_dentry {
++ struct list_head entry;
++ struct dentry *dentry;
++};
++
+ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ const char *path,
+ bool lookup_only,
+@@ -59,6 +64,16 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ list_add(&cfid->entry, &cfids->entries);
+ cfid->on_list = true;
+ kref_get(&cfid->refcount);
++ /*
++ * Set @cfid->has_lease to true during construction so that the lease
++ * reference can be put in cached_dir_lease_break() due to a potential
++ * lease break right after the request is sent or while @cfid is still
++ * being cached, or if a reconnection is triggered during construction.
++ * Concurrent processes won't be able to use it yet due to @cfid->time
++ * being zero.
++ */
++ cfid->has_lease = true;
++
+ spin_unlock(&cfids->cfid_list_lock);
+ return cfid;
+ }
+@@ -176,12 +191,12 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ return -ENOENT;
+ }
+ /*
+- * Return cached fid if it has a lease. Otherwise, it is either a new
+- * entry or laundromat worker removed it from @cfids->entries. Caller
+- * will put last reference if the latter.
++ * Return cached fid if it is valid (has a lease and has a time).
++ * Otherwise, it is either a new entry or the laundromat worker removed
++ * it from @cfids->entries. Caller will put the last reference if the latter.
+ */
+ spin_lock(&cfids->cfid_list_lock);
+- if (cfid->has_lease) {
++ if (cfid->has_lease && cfid->time) {
+ spin_unlock(&cfids->cfid_list_lock);
+ *ret_cfid = cfid;
+ kfree(utf16_path);
+@@ -212,6 +227,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ cfid->dentry = dentry;
++ cfid->tcon = tcon;
+
+ /*
+ * We do not hold the lock for the open because in case
+@@ -267,15 +283,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+
+ smb2_set_related(&rqst[1]);
+
+- /*
+- * Set @cfid->has_lease to true before sending out compounded request so
+- * its lease reference can be put in cached_dir_lease_break() due to a
+- * potential lease break right after the request is sent or while @cfid
+- * is still being cached. Concurrent processes won't be to use it yet
+- * due to @cfid->time being zero.
+- */
+- cfid->has_lease = true;
+-
+ if (retries) {
+ smb2_set_replay(server, &rqst[0]);
+ smb2_set_replay(server, &rqst[1]);
+@@ -292,7 +299,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ goto oshr_free;
+ }
+- cfid->tcon = tcon;
+ cfid->is_open = true;
+
+ spin_lock(&cfids->cfid_list_lock);
+@@ -347,6 +353,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ SMB2_query_info_free(&rqst[1]);
+ free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
++out:
+ if (rc) {
+ spin_lock(&cfids->cfid_list_lock);
+ if (cfid->on_list) {
+@@ -358,23 +365,14 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ /*
+ * We are guaranteed to have two references at this
+ * point. One for the caller and one for a potential
+- * lease. Release the Lease-ref so that the directory
+- * will be closed when the caller closes the cached
+- * handle.
++ * lease. Release one here, and the second below.
+ */
+ cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+- goto out;
+ }
+ spin_unlock(&cfids->cfid_list_lock);
+- }
+-out:
+- if (rc) {
+- if (cfid->is_open)
+- SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
+- cfid->fid.volatile_fid);
+- free_cached_dir(cfid);
++
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
+ } else {
+ *ret_cfid = cfid;
+ atomic_inc(&tcon->num_remote_opens);
+@@ -401,7 +399,7 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+ if (dentry && cfid->dentry == dentry) {
+- cifs_dbg(FYI, "found a cached root file handle by dentry\n");
++ cifs_dbg(FYI, "found a cached file handle by dentry\n");
+ kref_get(&cfid->refcount);
+ *ret_cfid = cfid;
+ spin_unlock(&cfids->cfid_list_lock);
+@@ -477,7 +475,10 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ struct cifs_tcon *tcon;
+ struct tcon_link *tlink;
+ struct cached_fids *cfids;
++ struct cached_dir_dentry *tmp_list, *q;
++ LIST_HEAD(entry);
+
++ spin_lock(&cifs_sb->tlink_tree_lock);
+ for (node = rb_first(root); node; node = rb_next(node)) {
+ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
+ tcon = tlink_tcon(tlink);
+@@ -486,11 +487,30 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ cfids = tcon->cfids;
+ if (cfids == NULL)
+ continue;
++ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+- dput(cfid->dentry);
++ tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC);
++ if (tmp_list == NULL)
++ break;
++ spin_lock(&cfid->fid_lock);
++ tmp_list->dentry = cfid->dentry;
+ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ list_add_tail(&tmp_list->entry, &entry);
+ }
++ spin_unlock(&cfids->cfid_list_lock);
++ }
++ spin_unlock(&cifs_sb->tlink_tree_lock);
++
++ list_for_each_entry_safe(tmp_list, q, &entry, entry) {
++ list_del(&tmp_list->entry);
++ dput(tmp_list->dentry);
++ kfree(tmp_list);
+ }
++
++ /* Flush any pending work that will drop dentries */
++ flush_workqueue(cfid_put_wq);
+ }
+
+ /*
+@@ -501,50 +521,71 @@ void invalidate_all_cached_dirs(struct cifs_tcon *tcon)
+ {
+ struct cached_fids *cfids = tcon->cfids;
+ struct cached_fid *cfid, *q;
+- LIST_HEAD(entry);
+
+ if (cfids == NULL)
+ return;
+
++ /*
++ * Mark all the cfids as closed, and move them to the cfids->dying list.
++ * They'll be cleaned up later by cfids_invalidation_worker. Take
++ * a reference to each cfid during this process.
++ */
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+- list_move(&cfid->entry, &entry);
++ list_move(&cfid->entry, &cfids->dying);
+ cfids->num_entries--;
+ cfid->is_open = false;
+ cfid->on_list = false;
+- /* To prevent race with smb2_cached_lease_break() */
+- kref_get(&cfid->refcount);
+- }
+- spin_unlock(&cfids->cfid_list_lock);
+-
+- list_for_each_entry_safe(cfid, q, &entry, entry) {
+- list_del(&cfid->entry);
+- cancel_work_sync(&cfid->lease_break);
+ if (cfid->has_lease) {
+ /*
+- * We lease was never cancelled from the server so we
+- * need to drop the reference.
++ * The lease was never cancelled from the server,
++ * so steal that reference.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+ cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
+- }
+- /* Drop the extra reference opened above*/
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
++ } else
++ kref_get(&cfid->refcount);
+ }
++ /*
++ * Queue dropping of the dentries once locks have been dropped
++ */
++ if (!list_empty(&cfids->dying))
++ queue_work(cfid_put_wq, &cfids->invalidation_work);
++ spin_unlock(&cfids->cfid_list_lock);
+ }
+
+ static void
+-smb2_cached_lease_break(struct work_struct *work)
++cached_dir_offload_close(struct work_struct *work)
+ {
+ struct cached_fid *cfid = container_of(work,
+- struct cached_fid, lease_break);
++ struct cached_fid, close_work);
++ struct cifs_tcon *tcon = cfid->tcon;
++
++ WARN_ON(cfid->on_list);
+
+- spin_lock(&cfid->cfids->cfid_list_lock);
+- cfid->has_lease = false;
+- spin_unlock(&cfid->cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cached_close);
++}
++
++/*
++ * Release the cached directory's dentry, and then queue work to drop the
++ * cached directory itself (closing it on the server if needed).
++ *
++ * Must be called with a reference to the cached_fid and a reference to the
++ * tcon.
++ */
++static void cached_dir_put_work(struct work_struct *work)
++{
++ struct cached_fid *cfid = container_of(work, struct cached_fid,
++ put_work);
++ struct dentry *dentry;
++
++ spin_lock(&cfid->fid_lock);
++ dentry = cfid->dentry;
++ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ dput(dentry);
++ queue_work(serverclose_wq, &cfid->close_work);
+ }
+
+ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+@@ -561,6 +602,7 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+ !memcmp(lease_key,
+ cfid->fid.lease_key,
+ SMB2_LEASE_KEY_SIZE)) {
++ cfid->has_lease = false;
+ cfid->time = 0;
+ /*
+ * We found a lease remove it from the list
+@@ -570,8 +612,10 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+ cfid->on_list = false;
+ cfids->num_entries--;
+
+- queue_work(cifsiod_wq,
+- &cfid->lease_break);
++ ++tcon->tc_count;
++ trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
++ netfs_trace_tcon_ref_get_cached_lease_break);
++ queue_work(cfid_put_wq, &cfid->put_work);
+ spin_unlock(&cfids->cfid_list_lock);
+ return true;
+ }
+@@ -593,7 +637,8 @@ static struct cached_fid *init_cached_dir(const char *path)
+ return NULL;
+ }
+
+- INIT_WORK(&cfid->lease_break, smb2_cached_lease_break);
++ INIT_WORK(&cfid->close_work, cached_dir_offload_close);
++ INIT_WORK(&cfid->put_work, cached_dir_put_work);
+ INIT_LIST_HEAD(&cfid->entry);
+ INIT_LIST_HEAD(&cfid->dirents.entries);
+ mutex_init(&cfid->dirents.de_mutex);
+@@ -606,6 +651,9 @@ static void free_cached_dir(struct cached_fid *cfid)
+ {
+ struct cached_dirent *dirent, *q;
+
++ WARN_ON(work_pending(&cfid->close_work));
++ WARN_ON(work_pending(&cfid->put_work));
++
+ dput(cfid->dentry);
+ cfid->dentry = NULL;
+
+@@ -623,10 +671,30 @@ static void free_cached_dir(struct cached_fid *cfid)
+ kfree(cfid);
+ }
+
++static void cfids_invalidation_worker(struct work_struct *work)
++{
++ struct cached_fids *cfids = container_of(work, struct cached_fids,
++ invalidation_work);
++ struct cached_fid *cfid, *q;
++ LIST_HEAD(entry);
++
++ spin_lock(&cfids->cfid_list_lock);
++ /* move cfids->dying to the local list */
++ list_cut_before(&entry, &cfids->dying, &cfids->dying);
++ spin_unlock(&cfids->cfid_list_lock);
++
++ list_for_each_entry_safe(cfid, q, &entry, entry) {
++ list_del(&cfid->entry);
++ /* Drop the ref-count acquired in invalidate_all_cached_dirs */
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ }
++}
++
+ static void cfids_laundromat_worker(struct work_struct *work)
+ {
+ struct cached_fids *cfids;
+ struct cached_fid *cfid, *q;
++ struct dentry *dentry;
+ LIST_HEAD(entry);
+
+ cfids = container_of(work, struct cached_fids, laundromat_work.work);
+@@ -638,33 +706,42 @@ static void cfids_laundromat_worker(struct work_struct *work)
+ cfid->on_list = false;
+ list_move(&cfid->entry, &entry);
+ cfids->num_entries--;
+- /* To prevent race with smb2_cached_lease_break() */
+- kref_get(&cfid->refcount);
++ if (cfid->has_lease) {
++ /*
++ * Our lease has not yet been cancelled from the
++ * server. Steal that reference.
++ */
++ cfid->has_lease = false;
++ } else
++ kref_get(&cfid->refcount);
+ }
+ }
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
+ list_del(&cfid->entry);
+- /*
+- * Cancel and wait for the work to finish in case we are racing
+- * with it.
+- */
+- cancel_work_sync(&cfid->lease_break);
+- if (cfid->has_lease) {
++
++ spin_lock(&cfid->fid_lock);
++ dentry = cfid->dentry;
++ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ dput(dentry);
++ if (cfid->is_open) {
++ spin_lock(&cifs_tcp_ses_lock);
++ ++cfid->tcon->tc_count;
++ trace_smb3_tcon_ref(cfid->tcon->debug_id, cfid->tcon->tc_count,
++ netfs_trace_tcon_ref_get_cached_laundromat);
++ spin_unlock(&cifs_tcp_ses_lock);
++ queue_work(serverclose_wq, &cfid->close_work);
++ } else
+ /*
+- * Our lease has not yet been cancelled from the server
+- * so we need to drop the reference.
++ * Drop the ref-count from above, either the lease-ref (if there
++ * was one) or the extra one acquired.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+- cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+- }
+- /* Drop the extra reference opened above */
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
+ }
+- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
++ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
+ dir_cache_timeout * HZ);
+ }
+
+@@ -677,9 +754,11 @@ struct cached_fids *init_cached_dirs(void)
+ return NULL;
+ spin_lock_init(&cfids->cfid_list_lock);
+ INIT_LIST_HEAD(&cfids->entries);
++ INIT_LIST_HEAD(&cfids->dying);
+
++ INIT_WORK(&cfids->invalidation_work, cfids_invalidation_worker);
+ INIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker);
+- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
++ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
+ dir_cache_timeout * HZ);
+
+ return cfids;
+@@ -698,6 +777,7 @@ void free_cached_dirs(struct cached_fids *cfids)
+ return;
+
+ cancel_delayed_work_sync(&cfids->laundromat_work);
++ cancel_work_sync(&cfids->invalidation_work);
+
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+@@ -705,6 +785,11 @@ void free_cached_dirs(struct cached_fids *cfids)
+ cfid->is_open = false;
+ list_move(&cfid->entry, &entry);
+ }
++ list_for_each_entry_safe(cfid, q, &cfids->dying, entry) {
++ cfid->on_list = false;
++ cfid->is_open = false;
++ list_move(&cfid->entry, &entry);
++ }
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
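+
Several cached_dir.c hunks share one idiom: never call anything that can sleep or re-enter (dput(), or a kref_put() that may issue SMB2_close() on the wire) while holding cfid_list_lock. Entries are detached onto a local list under the spinlock and released only after it is dropped. A kernel-style sketch of the shape:

    LIST_HEAD(tmp);
    struct cached_fid *cfid, *q;

    spin_lock(&cfids->cfid_list_lock);
    list_splice_init(&cfids->dying, &tmp);	/* detach under the lock */
    spin_unlock(&cfids->cfid_list_lock);

    list_for_each_entry_safe(cfid, q, &tmp, entry) {
    	list_del(&cfid->entry);
    	kref_put(&cfid->refcount, smb2_close_cached_fid);	/* may block */
    }

The worker in the hunk uses list_cut_before() with the list head itself as the cut point, which moves every entry and is equivalent to the list_splice_init() shown here.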
+diff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h
+index 81ba0fd5cc16d6..1dfe79d947a62f 100644
+--- a/fs/smb/client/cached_dir.h
++++ b/fs/smb/client/cached_dir.h
+@@ -44,7 +44,8 @@ struct cached_fid {
+ spinlock_t fid_lock;
+ struct cifs_tcon *tcon;
+ struct dentry *dentry;
+- struct work_struct lease_break;
++ struct work_struct put_work;
++ struct work_struct close_work;
+ struct smb2_file_all_info file_all_info;
+ struct cached_dirents dirents;
+ };
+@@ -53,10 +54,13 @@ struct cached_fid {
+ struct cached_fids {
+ /* Must be held when:
+ * - accessing the cfids->entries list
++ * - accessing the cfids->dying list
+ */
+ spinlock_t cfid_list_lock;
+ int num_entries;
+ struct list_head entries;
++ struct list_head dying;
++ struct work_struct invalidation_work;
+ struct delayed_work laundromat_work;
+ };
+
+diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c
+index 1d294d53f66247..c68ad526a4de1b 100644
+--- a/fs/smb/client/cifsacl.c
++++ b/fs/smb/client/cifsacl.c
+@@ -885,12 +885,17 @@ unsigned int setup_authusers_ACE(struct smb_ace *pntace)
+ * Fill in the special SID based on the mode. See
+ * https://technet.microsoft.com/en-us/library/hh509017(v=ws.10).aspx
+ */
+-unsigned int setup_special_mode_ACE(struct smb_ace *pntace, __u64 nmode)
++unsigned int setup_special_mode_ACE(struct smb_ace *pntace,
++ bool posix,
++ __u64 nmode)
+ {
+ int i;
+ unsigned int ace_size = 28;
+
+- pntace->type = ACCESS_DENIED_ACE_TYPE;
++ if (posix)
++ pntace->type = ACCESS_ALLOWED_ACE_TYPE;
++ else
++ pntace->type = ACCESS_DENIED_ACE_TYPE;
+ pntace->flags = 0x0;
+ pntace->access_req = 0;
+ pntace->sid.num_subauth = 3;
+@@ -933,7 +938,8 @@ static void populate_new_aces(char *nacl_base,
+ struct smb_sid *pownersid,
+ struct smb_sid *pgrpsid,
+ __u64 *pnmode, u32 *pnum_aces, u16 *pnsize,
+- bool modefromsid)
++ bool modefromsid,
++ bool posix)
+ {
+ __u64 nmode;
+ u32 num_aces = 0;
+@@ -950,13 +956,15 @@ static void populate_new_aces(char *nacl_base,
+ num_aces = *pnum_aces;
+ nsize = *pnsize;
+
+- if (modefromsid) {
+- pnntace = (struct smb_ace *) (nacl_base + nsize);
+- nsize += setup_special_mode_ACE(pnntace, nmode);
+- num_aces++;
++ if (modefromsid || posix) {
+ pnntace = (struct smb_ace *) (nacl_base + nsize);
+- nsize += setup_authusers_ACE(pnntace);
++ nsize += setup_special_mode_ACE(pnntace, posix, nmode);
+ num_aces++;
++ if (modefromsid) {
++ pnntace = (struct smb_ace *) (nacl_base + nsize);
++ nsize += setup_authusers_ACE(pnntace);
++ num_aces++;
++ }
+ goto set_size;
+ }
+
+@@ -1076,7 +1084,7 @@ static __u16 replace_sids_and_copy_aces(struct smb_acl *pdacl, struct smb_acl *p
+
+ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ struct smb_sid *pownersid, struct smb_sid *pgrpsid,
+- __u64 *pnmode, bool mode_from_sid)
++ __u64 *pnmode, bool mode_from_sid, bool posix)
+ {
+ int i;
+ u16 size = 0;
+@@ -1094,11 +1102,11 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ nsize = sizeof(struct smb_acl);
+
+ /* If pdacl is NULL, we don't have a src. Simply populate new ACL. */
+- if (!pdacl) {
++ if (!pdacl || posix) {
+ populate_new_aces(nacl_base,
+ pownersid, pgrpsid,
+ pnmode, &num_aces, &nsize,
+- mode_from_sid);
++ mode_from_sid, posix);
+ goto finalize_dacl;
+ }
+
+@@ -1115,7 +1123,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ populate_new_aces(nacl_base,
+ pownersid, pgrpsid,
+ pnmode, &num_aces, &nsize,
+- mode_from_sid);
++ mode_from_sid, posix);
+
+ new_aces_set = true;
+ }
+@@ -1144,7 +1152,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ populate_new_aces(nacl_base,
+ pownersid, pgrpsid,
+ pnmode, &num_aces, &nsize,
+- mode_from_sid);
++ mode_from_sid, posix);
+
+ new_aces_set = true;
+ }
+@@ -1251,7 +1259,7 @@ static int parse_sec_desc(struct cifs_sb_info *cifs_sb,
+ /* Convert permission bits from mode to equivalent CIFS ACL */
+ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ __u32 secdesclen, __u32 *pnsecdesclen, __u64 *pnmode, kuid_t uid, kgid_t gid,
+- bool mode_from_sid, bool id_from_sid, int *aclflag)
++ bool mode_from_sid, bool id_from_sid, bool posix, int *aclflag)
+ {
+ int rc = 0;
+ __u32 dacloffset;
+@@ -1288,7 +1296,7 @@ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ ndacl_ptr->num_aces = cpu_to_le32(0);
+
+ rc = set_chmod_dacl(dacl_ptr, ndacl_ptr, owner_sid_ptr, group_sid_ptr,
+- pnmode, mode_from_sid);
++ pnmode, mode_from_sid, posix);
+
+ sidsoffset = ndacloffset + le16_to_cpu(ndacl_ptr->size);
+ /* copy the non-dacl portion of secdesc */
+@@ -1587,6 +1595,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ struct tcon_link *tlink = cifs_sb_tlink(cifs_sb);
+ struct smb_version_operations *ops;
+ bool mode_from_sid, id_from_sid;
++ bool posix = tlink_tcon(tlink)->posix_extensions;
+ const u32 info = 0;
+
+ if (IS_ERR(tlink))
+@@ -1622,12 +1631,13 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ id_from_sid = false;
+
+ /* Potentially, five new ACEs can be added to the ACL for U,G,O mapping */
+- nsecdesclen = secdesclen;
+ if (pnmode && *pnmode != NO_CHANGE_64) { /* chmod */
+- if (mode_from_sid)
+- nsecdesclen += 2 * sizeof(struct smb_ace);
++ if (posix)
++ nsecdesclen = 1 * sizeof(struct smb_ace);
++ else if (mode_from_sid)
++ nsecdesclen = secdesclen + (2 * sizeof(struct smb_ace));
+ else /* cifsacl */
+- nsecdesclen += 5 * sizeof(struct smb_ace);
++ nsecdesclen = secdesclen + (5 * sizeof(struct smb_ace));
+ } else { /* chown */
+ /* When ownership changes, changes new owner sid length could be different */
+ nsecdesclen = sizeof(struct smb_ntsd) + (sizeof(struct smb_sid) * 2);
+@@ -1657,7 +1667,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ }
+
+ rc = build_sec_desc(pntsd, pnntsd, secdesclen, &nsecdesclen, pnmode, uid, gid,
+- mode_from_sid, id_from_sid, &aclflag);
++ mode_from_sid, id_from_sid, posix, &aclflag);
+
+ cifs_dbg(NOISY, "build_sec_desc rc: %d\n", rc);
+
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 20cafdff508106..bf909c2f6b963b 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -157,6 +157,7 @@ struct workqueue_struct *fileinfo_put_wq;
+ struct workqueue_struct *cifsoplockd_wq;
+ struct workqueue_struct *deferredclose_wq;
+ struct workqueue_struct *serverclose_wq;
++struct workqueue_struct *cfid_put_wq;
+ __u32 cifs_lock_secret;
+
+ /*
+@@ -1895,9 +1896,16 @@ init_cifs(void)
+ goto out_destroy_deferredclose_wq;
+ }
+
++ cfid_put_wq = alloc_workqueue("cfid_put_wq",
++ WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
++ if (!cfid_put_wq) {
++ rc = -ENOMEM;
++ goto out_destroy_serverclose_wq;
++ }
++
+ rc = cifs_init_inodecache();
+ if (rc)
+- goto out_destroy_serverclose_wq;
++ goto out_destroy_cfid_put_wq;
+
+ rc = cifs_init_netfs();
+ if (rc)
+@@ -1965,6 +1973,8 @@ init_cifs(void)
+ cifs_destroy_netfs();
+ out_destroy_inodecache:
+ cifs_destroy_inodecache();
++out_destroy_cfid_put_wq:
++ destroy_workqueue(cfid_put_wq);
+ out_destroy_serverclose_wq:
+ destroy_workqueue(serverclose_wq);
+ out_destroy_deferredclose_wq:
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 5041b1ffc244b0..9a4b3608b7d6f3 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -588,6 +588,7 @@ struct smb_version_operations {
+ /* Check for STATUS_NETWORK_NAME_DELETED */
+ bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv);
+ int (*parse_reparse_point)(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data);
+ int (*create_reparse_symlink)(const unsigned int xid,
+@@ -1983,7 +1984,7 @@ require use of the stronger protocol */
+ * cifsInodeInfo->lock_sem cifsInodeInfo->llist cifs_init_once
+ * ->can_cache_brlcks
+ * cifsInodeInfo->deferred_lock cifsInodeInfo->deferred_closes cifsInodeInfo_alloc
+- * cached_fid->fid_mutex cifs_tcon->crfid tcon_info_alloc
++ * cached_fids->cfid_list_lock cifs_tcon->cfids->entries init_cached_dirs
+ * cifsFileInfo->fh_mutex cifsFileInfo cifs_new_fileinfo
+ * cifsFileInfo->file_info_lock cifsFileInfo->count cifs_new_fileinfo
+ * ->invalidHandle initiate_cifs_search
+@@ -2071,6 +2072,7 @@ extern struct workqueue_struct *fileinfo_put_wq;
+ extern struct workqueue_struct *cifsoplockd_wq;
+ extern struct workqueue_struct *deferredclose_wq;
+ extern struct workqueue_struct *serverclose_wq;
++extern struct workqueue_struct *cfid_put_wq;
+ extern __u32 cifs_lock_secret;
+
+ extern mempool_t *cifs_sm_req_poolp;
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 1d3470bca45edd..0c6468844c4b54 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -244,7 +244,9 @@ extern int cifs_set_acl(struct mnt_idmap *idmap,
+ extern int set_cifs_acl(struct smb_ntsd *pntsd, __u32 len, struct inode *ino,
+ const char *path, int flag);
+ extern unsigned int setup_authusers_ACE(struct smb_ace *pace);
+-extern unsigned int setup_special_mode_ACE(struct smb_ace *pace, __u64 nmode);
++extern unsigned int setup_special_mode_ACE(struct smb_ace *pace,
++ bool posix,
++ __u64 nmode);
+ extern unsigned int setup_special_user_owner_ACE(struct smb_ace *pace);
+
+ extern void dequeue_mid(struct mid_q_entry *mid, bool malformed);
+@@ -666,6 +668,7 @@ char *extract_hostname(const char *unc);
+ char *extract_sharename(const char *unc);
+ int parse_reparse_point(struct reparse_data_buffer *buf,
+ u32 plen, struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data);
+ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 0ce2d704b1f3f8..a94c538ff86368 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1897,11 +1897,35 @@ static int match_session(struct cifs_ses *ses,
+ CIFS_MAX_USERNAME_LEN))
+ return 0;
+ if ((ctx->username && strlen(ctx->username) != 0) &&
+- ses->password != NULL &&
+- strncmp(ses->password,
+- ctx->password ? ctx->password : "",
+- CIFS_MAX_PASSWORD_LEN))
+- return 0;
++ ses->password != NULL) {
++
++ /* New mount can only share sessions with an existing mount if:
++ * 1. Both password and password2 match, or
++ * 2. password2 of the old mount matches password of the new mount
++ * and password of the old mount matches password2 of the new
++ * mount
++ */
++ if (ses->password2 != NULL && ctx->password2 != NULL) {
++ if (!((strncmp(ses->password, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0 &&
++ strncmp(ses->password2, ctx->password2,
++ CIFS_MAX_PASSWORD_LEN) == 0) ||
++ (strncmp(ses->password, ctx->password2,
++ CIFS_MAX_PASSWORD_LEN) == 0 &&
++ strncmp(ses->password2, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0)))
++ return 0;
++
++ } else if ((ses->password2 == NULL && ctx->password2 != NULL) ||
++ (ses->password2 != NULL && ctx->password2 == NULL)) {
++ return 0;
++
++ } else {
++ if (strncmp(ses->password, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN))
++ return 0;
++ }
++ }
+ }
+
+ if (strcmp(ctx->local_nls->charset, ses->local_nls->charset))
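+
This match_session() change pairs with the retry logic added later in the file: a reconnect may swap ses->password and ses->password2, so an existing session must also match a new mount whose two passwords are the swapped pair. Stripped of the NULL handling and the CIFS_MAX_PASSWORD_LEN capping in the real code, the predicate reduces to:

    #include <stdbool.h>
    #include <string.h>

    /* Simplified: share the session if the password pair matches
     * directly or crosswise (password <-> password2). */
    static bool session_passwords_match(const char *ses_pw, const char *ses_pw2,
    				    const char *ctx_pw, const char *ctx_pw2)
    {
    	return (!strcmp(ses_pw, ctx_pw) && !strcmp(ses_pw2, ctx_pw2)) ||
    	       (!strcmp(ses_pw, ctx_pw2) && !strcmp(ses_pw2, ctx_pw));
    }

The real code additionally refuses to match when exactly one of the two sides has a password2 set.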
+@@ -2244,6 +2268,7 @@ struct cifs_ses *
+ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ {
+ int rc = 0;
++ int retries = 0;
+ unsigned int xid;
+ struct cifs_ses *ses;
+ struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
+@@ -2262,6 +2287,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ cifs_dbg(FYI, "Session needs reconnect\n");
+
+ mutex_lock(&ses->session_mutex);
++
++retry_old_session:
+ rc = cifs_negotiate_protocol(xid, ses, server);
+ if (rc) {
+ mutex_unlock(&ses->session_mutex);
+@@ -2274,6 +2301,13 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ rc = cifs_setup_session(xid, ses, server,
+ ctx->local_nls);
+ if (rc) {
++ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
++ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
++ retries++;
++ cifs_dbg(FYI, "Session reconnect failed, retrying with alternate password\n");
++ swap(ses->password, ses->password2);
++ goto retry_old_session;
++ }
+ mutex_unlock(&ses->session_mutex);
+ /* problem -- put our reference */
+ cifs_put_smb_ses(ses);
+@@ -2349,6 +2383,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ ses->chans_need_reconnect = 1;
+ spin_unlock(&ses->chan_lock);
+
++retry_new_session:
+ mutex_lock(&ses->session_mutex);
+ rc = cifs_negotiate_protocol(xid, ses, server);
+ if (!rc)
+@@ -2361,8 +2396,16 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ sizeof(ses->smb3signingkey));
+ spin_unlock(&ses->chan_lock);
+
+- if (rc)
+- goto get_ses_fail;
++ if (rc) {
++ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
++ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
++ retries++;
++ cifs_dbg(FYI, "Session setup failed, retrying with alternate password\n");
++ swap(ses->password, ses->password2);
++ goto retry_new_session;
++ } else
++ goto get_ses_fail;
++ }
+
+ /*
+ * success, put it on the list and add it as first channel
+@@ -2551,7 +2594,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+
+ if (ses->server->dialect >= SMB20_PROT_ID &&
+ (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING))
+- nohandlecache = ctx->nohandlecache;
++ nohandlecache = ctx->nohandlecache || !dir_cache_timeout;
+ else
+ nohandlecache = true;
+ tcon = tcon_info_alloc(!nohandlecache, netfs_trace_tcon_ref_new);
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 5c5a52019efada..48606e2ddffdcd 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -890,12 +890,37 @@ do { \
+ cifs_sb->ctx->field = NULL; \
+ } while (0)
+
++int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
++{
++ if (ses->password &&
++ cifs_sb->ctx->password &&
++ strcmp(ses->password, cifs_sb->ctx->password)) {
++ kfree_sensitive(cifs_sb->ctx->password);
++ cifs_sb->ctx->password = kstrdup(ses->password, GFP_KERNEL);
++ if (!cifs_sb->ctx->password)
++ return -ENOMEM;
++ }
++ if (ses->password2 &&
++ cifs_sb->ctx->password2 &&
++ strcmp(ses->password2, cifs_sb->ctx->password2)) {
++ kfree_sensitive(cifs_sb->ctx->password2);
++ cifs_sb->ctx->password2 = kstrdup(ses->password2, GFP_KERNEL);
++ if (!cifs_sb->ctx->password2) {
++ kfree_sensitive(cifs_sb->ctx->password);
++ cifs_sb->ctx->password = NULL;
++ return -ENOMEM;
++ }
++ }
++ return 0;
++}
++
+ static int smb3_reconfigure(struct fs_context *fc)
+ {
+ struct smb3_fs_context *ctx = smb3_fc2context(fc);
+ struct dentry *root = fc->root;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(root->d_sb);
+ struct cifs_ses *ses = cifs_sb_master_tcon(cifs_sb)->ses;
++ char *new_password = NULL, *new_password2 = NULL;
+ bool need_recon = false;
+ int rc;
+
+@@ -915,21 +940,63 @@ static int smb3_reconfigure(struct fs_context *fc)
+ STEAL_STRING(cifs_sb, ctx, UNC);
+ STEAL_STRING(cifs_sb, ctx, source);
+ STEAL_STRING(cifs_sb, ctx, username);
++
+ if (need_recon == false)
+ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
+ else {
+- kfree_sensitive(ses->password);
+- ses->password = kstrdup(ctx->password, GFP_KERNEL);
+- if (!ses->password)
+- return -ENOMEM;
+- kfree_sensitive(ses->password2);
+- ses->password2 = kstrdup(ctx->password2, GFP_KERNEL);
+- if (!ses->password2) {
+- kfree_sensitive(ses->password);
+- ses->password = NULL;
++ if (ctx->password) {
++ new_password = kstrdup(ctx->password, GFP_KERNEL);
++ if (!new_password)
++ return -ENOMEM;
++ } else
++ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
++ }
++
++ /*
++ * if a new password2 has been specified, then reset its value
++ * inside the ses struct
++ */
++ if (ctx->password2) {
++ new_password2 = kstrdup(ctx->password2, GFP_KERNEL);
++ if (!new_password2) {
++ kfree_sensitive(new_password);
+ return -ENOMEM;
+ }
++ } else
++ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password2);
++
++ /*
++ * we may update the passwords in the ses struct below. Make sure we do
++ * not race with smb2_reconnect
++ */
++ mutex_lock(&ses->session_mutex);
++
++ /*
++ * smb2_reconnect may swap password and password2 in case session setup
++ * failed. First get ctx passwords in sync with ses passwords. It should
++ * be okay to do this even if this function were to return an error at a
++ * later stage
++ */
++ rc = smb3_sync_session_ctx_passwords(cifs_sb, ses);
++ if (rc) {
++ mutex_unlock(&ses->session_mutex);
++ return rc;
+ }
++
++ /*
++ * now that allocations for passwords are done, commit them
++ */
++ if (new_password) {
++ kfree_sensitive(ses->password);
++ ses->password = new_password;
++ }
++ if (new_password2) {
++ kfree_sensitive(ses->password2);
++ ses->password2 = new_password2;
++ }
++
++ mutex_unlock(&ses->session_mutex);
++
+ STEAL_STRING(cifs_sb, ctx, domainname);
+ STEAL_STRING(cifs_sb, ctx, nodename);
+ STEAL_STRING(cifs_sb, ctx, iocharset);
+diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
+index 890d6d9d4a592f..c8c8b4451b3bc7 100644
+--- a/fs/smb/client/fs_context.h
++++ b/fs/smb/client/fs_context.h
+@@ -299,6 +299,7 @@ static inline struct smb3_fs_context *smb3_fc2context(const struct fs_context *f
+ }
+
+ extern int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx);
++extern int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses);
+ extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
+
+ /*
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index eff3f57235eef3..6d567b16998119 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1115,6 +1115,7 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ rc = 0;
+ } else if (iov && server->ops->parse_reparse_point) {
+ rc = server->ops->parse_reparse_point(cifs_sb,
++ full_path,
+ iov, data);
+ }
+ break;
+@@ -2473,13 +2474,10 @@ cifs_dentry_needs_reval(struct dentry *dentry)
+ return true;
+
+ if (!open_cached_dir_by_dentry(tcon, dentry->d_parent, &cfid)) {
+- spin_lock(&cfid->fid_lock);
+ if (cfid->time && cifs_i->time > cfid->time) {
+- spin_unlock(&cfid->fid_lock);
+ close_cached_dir(cfid);
+ return false;
+ }
+- spin_unlock(&cfid->fid_lock);
+ close_cached_dir(cfid);
+ }
+ /*
+@@ -3062,6 +3060,7 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ int rc = -EACCES;
+ __u32 dosattr = 0;
+ __u64 mode = NO_CHANGE_64;
++ bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
+
+ xid = get_xid();
+
+@@ -3152,7 +3151,8 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ mode = attrs->ia_mode;
+ rc = 0;
+ if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) ||
+- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID)) {
++ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID) ||
++ posix) {
+ rc = id_mode_to_cifs_acl(inode, full_path, &mode,
+ INVALID_UID, INVALID_GID);
+ if (rc) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 74abbdf5026c73..f74d0a86f44a4e 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -35,6 +35,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ u16 len, plen;
+ int rc = 0;
+
++ if (strlen(symname) > REPARSE_SYM_PATH_MAX)
++ return -ENAMETOOLONG;
++
+ sym = kstrdup(symname, GFP_KERNEL);
+ if (!sym)
+ return -ENOMEM;
+@@ -64,7 +67,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ if (rc < 0)
+ goto out;
+
+- plen = 2 * UniStrnlen((wchar_t *)path, PATH_MAX);
++ plen = 2 * UniStrnlen((wchar_t *)path, REPARSE_SYM_PATH_MAX);
+ len = sizeof(*buf) + plen * 2;
+ buf = kzalloc(len, GFP_KERNEL);
+ if (!buf) {
+@@ -532,9 +535,76 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+ return 0;
+ }
+
++int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
++ bool unicode, bool relative,
++ const char *full_path,
++ struct cifs_sb_info *cifs_sb)
++{
++ char sep = CIFS_DIR_SEP(cifs_sb);
++ char *linux_target = NULL;
++ char *smb_target = NULL;
++ int levels;
++ int rc;
++ int i;
++
++ smb_target = cifs_strndup_from_utf16(buf, len, unicode, cifs_sb->local_nls);
++ if (!smb_target) {
++ rc = -ENOMEM;
++ goto out;
++ }
++
++ if (smb_target[0] == sep && relative) {
++ /*
++ * This is a relative SMB symlink from the top of the share,
++ * which is the top level directory of the Linux mount point.
++ * Linux does not support such relative symlinks, so convert
++ * it to the relative symlink from the current directory.
++ * full_path is the SMB path to the symlink (from which is
++ * extracted current directory) and smb_target is the SMB path
++ * where symlink points, therefore full_path must always be on
++ * the SMB share.
++ */
++ int smb_target_len = strlen(smb_target)+1;
++ levels = 0;
++ for (i = 1; full_path[i]; i++) { /* i=1 to skip leading sep */
++ if (full_path[i] == sep)
++ levels++;
++ }
++ linux_target = kmalloc(levels*3 + smb_target_len, GFP_KERNEL);
++ if (!linux_target) {
++ rc = -ENOMEM;
++ goto out;
++ }
++ for (i = 0; i < levels; i++) {
++ linux_target[i*3 + 0] = '.';
++ linux_target[i*3 + 1] = '.';
++ linux_target[i*3 + 2] = sep;
++ }
++ memcpy(linux_target + levels*3, smb_target+1, smb_target_len); /* +1 to skip leading sep */
++ } else {
++ linux_target = smb_target;
++ smb_target = NULL;
++ }
++
++ if (sep == '\\')
++ convert_delimiter(linux_target, '/');
++
++ rc = 0;
++ *target = linux_target;
++
++ cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, *target);
++
++out:
++ if (rc != 0)
++ kfree(linux_target);
++ kfree(smb_target);
++ return rc;
++}
++
+ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
+ u32 plen, bool unicode,
+ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
+ unsigned int len;
+@@ -549,20 +619,18 @@ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
+ return -EIO;
+ }
+
+- data->symlink_target = cifs_strndup_from_utf16(sym->PathBuffer + offs,
+- len, unicode,
+- cifs_sb->local_nls);
+- if (!data->symlink_target)
+- return -ENOMEM;
+-
+- convert_delimiter(data->symlink_target, '/');
+- cifs_dbg(FYI, "%s: target path: %s\n", __func__, data->symlink_target);
+-
+- return 0;
++ return smb2_parse_native_symlink(&data->symlink_target,
++ sym->PathBuffer + offs,
++ len,
++ unicode,
++ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
++ full_path,
++ cifs_sb);
+ }
+
+ int parse_reparse_point(struct reparse_data_buffer *buf,
+ u32 plen, struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data)
+ {
+ struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+@@ -577,7 +645,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ case IO_REPARSE_TAG_SYMLINK:
+ return parse_reparse_symlink(
+ (struct reparse_symlink_data_buffer *)buf,
+- plen, unicode, cifs_sb, data);
++ plen, unicode, cifs_sb, full_path, data);
+ case IO_REPARSE_TAG_LX_SYMLINK:
+ case IO_REPARSE_TAG_AF_UNIX:
+ case IO_REPARSE_TAG_LX_FIFO:
+@@ -593,6 +661,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ }
+
+ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data)
+ {
+@@ -602,7 +671,7 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+
+ buf = (struct reparse_data_buffer *)((u8 *)io +
+ le32_to_cpu(io->OutputOffset));
+- return parse_reparse_point(buf, plen, cifs_sb, true, data);
++ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+ static void wsl_to_fattr(struct cifs_open_info_data *data,
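+
The heart of the new smb2_parse_native_symlink() is the conversion of a share-root-relative SMB target into one relative to the symlink's own directory, since Linux has no notion of a mount-root-relative symlink: count the separators in full_path after the leading one and prepend that many "../" components. For full_path "\a\b\link" and target "\target", two further separators give levels = 2, so the result is "../../target", which from directory \a\b resolves back to the share-root \target. A userspace sketch of the same arithmetic:

    #include <stdlib.h>
    #include <string.h>

    static char *share_rel_to_dir_rel(const char *full_path,
    				  const char *smb_target, char sep)
    {
    	size_t tlen = strlen(smb_target);	/* includes the leading sep */
    	int levels = 0, i;
    	char *out;

    	for (i = 1; full_path[i]; i++)		/* skip the leading sep */
    		if (full_path[i] == sep)
    			levels++;

    	out = malloc(levels * 3 + tlen);	/* "../" per level + target + NUL */
    	if (!out)
    		return NULL;
    	for (i = 0; i < levels; i++)
    		memcpy(out + i * 3, "../", 3);
    	strcpy(out + levels * 3, smb_target + 1);	/* drop the leading sep */
    	return out;
    }

    /* share_rel_to_dir_rel("/a/b/link", "/target", '/') => "../../target" */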
+diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
+index 158e7b7aae646c..ff05b0e75c9284 100644
+--- a/fs/smb/client/reparse.h
++++ b/fs/smb/client/reparse.h
+@@ -12,6 +12,8 @@
+ #include "fs_context.h"
+ #include "cifsglob.h"
+
++#define REPARSE_SYM_PATH_MAX 4060
++
+ /*
+ * Used only by cifs.ko to ignore reparse points from files when client or
+ * server doesn't support FSCTL_GET_REPARSE_POINT.
+@@ -115,7 +117,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ int smb2_mknod_reparse(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, umode_t mode, dev_t dev);
+-int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, struct kvec *rsp_iov,
++int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
++ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data);
+
+ #endif /* _CIFS_REPARSE_H */
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index 9a6ece66c4d34e..db3695eddcf9d5 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -994,17 +994,17 @@ static int cifs_query_symlink(const unsigned int xid,
+ }
+
+ static int cifs_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data)
+ {
+ struct reparse_data_buffer *buf;
+ TRANSACT_IOCTL_RSP *io = rsp_iov->iov_base;
+- bool unicode = !!(io->hdr.Flags2 & SMBFLG2_UNICODE);
+ u32 plen = le16_to_cpu(io->ByteCount);
+
+ buf = (struct reparse_data_buffer *)((__u8 *)&io->hdr.Protocol +
+ le32_to_cpu(io->DataOffset));
+- return parse_reparse_point(buf, plen, cifs_sb, unicode, data);
++ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+ static bool
+diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
+index e301349b0078d1..e836bc2193ddd3 100644
+--- a/fs/smb/client/smb2file.c
++++ b/fs/smb/client/smb2file.c
+@@ -63,12 +63,12 @@ static struct smb2_symlink_err_rsp *symlink_data(const struct kvec *iov)
+ return sym;
+ }
+
+-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path)
++int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov,
++ const char *full_path, char **path)
+ {
+ struct smb2_symlink_err_rsp *sym;
+ unsigned int sub_offs, sub_len;
+ unsigned int print_offs, print_len;
+- char *s;
+
+ if (!cifs_sb || !iov || !iov->iov_base || !iov->iov_len || !path)
+ return -EINVAL;
+@@ -86,15 +86,13 @@ int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec
+ iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + print_offs + print_len)
+ return -EINVAL;
+
+- s = cifs_strndup_from_utf16((char *)sym->PathBuffer + sub_offs, sub_len, true,
+- cifs_sb->local_nls);
+- if (!s)
+- return -ENOMEM;
+- convert_delimiter(s, '/');
+- cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, s);
+-
+- *path = s;
+- return 0;
++ return smb2_parse_native_symlink(path,
++ (char *)sym->PathBuffer + sub_offs,
++ sub_len,
++ true,
++ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
++ full_path,
++ cifs_sb);
+ }
+
+ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf)
+@@ -126,6 +124,7 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
+ goto out;
+ if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
+ rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov,
++ oparms->path,
+ &data->symlink_target);
+ if (!rc) {
+ memset(smb2_data, 0, sizeof(*smb2_data));
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index e49d0c25eb0384..a188908914fe8f 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -828,6 +828,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+
+ static int parse_create_response(struct cifs_open_info_data *data,
+ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ const struct kvec *iov)
+ {
+ struct smb2_create_rsp *rsp = iov->iov_base;
+@@ -841,6 +842,7 @@ static int parse_create_response(struct cifs_open_info_data *data,
+ break;
+ case STATUS_STOPPED_ON_SYMLINK:
+ rc = smb2_parse_symlink_response(cifs_sb, iov,
++ full_path,
+ &data->symlink_target);
+ if (rc)
+ return rc;
+@@ -930,14 +932,14 @@ int smb2_query_path_info(const unsigned int xid,
+
+ switch (rc) {
+ case 0:
+- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
+ break;
+ case -EOPNOTSUPP:
+ /*
+ * BB TODO: When support for special files added to Samba
+ * re-verify this path.
+ */
+- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
+ if (rc || !data->reparse_point)
+ goto out;
+
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 24a2aa04a1086c..7571fefeb83aa1 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4080,7 +4080,7 @@ map_oplock_to_lease(u8 oplock)
+ if (oplock == SMB2_OPLOCK_LEVEL_EXCLUSIVE)
+ return SMB2_LEASE_WRITE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE;
+ else if (oplock == SMB2_OPLOCK_LEVEL_II)
+- return SMB2_LEASE_READ_CACHING_LE;
++ return SMB2_LEASE_READ_CACHING_LE | SMB2_LEASE_HANDLE_CACHING_LE;
+ else if (oplock == SMB2_OPLOCK_LEVEL_BATCH)
+ return SMB2_LEASE_HANDLE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE |
+ SMB2_LEASE_WRITE_CACHING_LE;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 6584b5cddc280a..d1bd69cbfe09a5 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1231,7 +1231,9 @@ SMB2_negotiate(const unsigned int xid,
+ * SMB3.0 supports only 1 cipher and doesn't have a encryption neg context
+ * Set the cipher type manually.
+ */
+- if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
++ if ((server->dialect == SMB30_PROT_ID ||
++ server->dialect == SMB302_PROT_ID) &&
++ (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+ server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
+
+ security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
+@@ -2683,7 +2685,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ ptr += sizeof(struct smb3_acl);
+
+ /* create one ACE to hold the mode embedded in reserved special SID */
+- acelen = setup_special_mode_ACE((struct smb_ace *)ptr, (__u64)mode);
++ acelen = setup_special_mode_ACE((struct smb_ace *)ptr, false, (__u64)mode);
+ ptr += acelen;
+ acl_size = acelen + sizeof(struct smb3_acl);
+ ace_count = 1;
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 6f9885e4f66ca5..09349fa8da039a 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -37,8 +37,6 @@ extern struct mid_q_entry *smb2_setup_request(struct cifs_ses *ses,
+ struct smb_rqst *rqst);
+ extern struct mid_q_entry *smb2_setup_async_request(
+ struct TCP_Server_Info *server, struct smb_rqst *rqst);
+-extern struct cifs_ses *smb2_find_smb_ses(struct TCP_Server_Info *server,
+- __u64 ses_id);
+ extern struct cifs_tcon *smb2_find_smb_tcon(struct TCP_Server_Info *server,
+ __u64 ses_id, __u32 tid);
+ extern int smb2_calc_signature(struct smb_rqst *rqst,
+@@ -113,7 +111,14 @@ extern int smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ const unsigned char *path, char *pbuf,
+ unsigned int *pbytes_read);
+-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path);
++int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
++ bool unicode, bool relative,
++ const char *full_path,
++ struct cifs_sb_info *cifs_sb);
++int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb,
++ const struct kvec *iov,
++ const char *full_path,
++ char **path);
+ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
+ void *buf);
+ extern int smb2_unlock_range(struct cifsFileInfo *cfile,
+diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c
+index b486b14bb3306f..475b36c27f6543 100644
+--- a/fs/smb/client/smb2transport.c
++++ b/fs/smb/client/smb2transport.c
+@@ -74,7 +74,7 @@ smb311_crypto_shash_allocate(struct TCP_Server_Info *server)
+
+
+ static
+-int smb2_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key)
++int smb3_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key)
+ {
+ struct cifs_chan *chan;
+ struct TCP_Server_Info *pserver;
+@@ -168,16 +168,41 @@ smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id)
+ return NULL;
+ }
+
+-struct cifs_ses *
+-smb2_find_smb_ses(struct TCP_Server_Info *server, __u64 ses_id)
++static int smb2_get_sign_key(struct TCP_Server_Info *server,
++ __u64 ses_id, u8 *key)
+ {
+ struct cifs_ses *ses;
++ int rc = -ENOENT;
++
++ if (SERVER_IS_CHAN(server))
++ server = server->primary_server;
+
+ spin_lock(&cifs_tcp_ses_lock);
+- ses = smb2_find_smb_ses_unlocked(server, ses_id);
+- spin_unlock(&cifs_tcp_ses_lock);
++ list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
++ if (ses->Suid != ses_id)
++ continue;
+
+- return ses;
++ rc = 0;
++ spin_lock(&ses->ses_lock);
++ switch (ses->ses_status) {
++ case SES_EXITING: /* SMB2_LOGOFF */
++ case SES_GOOD:
++ if (likely(ses->auth_key.response)) {
++ memcpy(key, ses->auth_key.response,
++ SMB2_NTLMV2_SESSKEY_SIZE);
++ } else {
++ rc = -EIO;
++ }
++ break;
++ default:
++ rc = -EAGAIN;
++ break;
++ }
++ spin_unlock(&ses->ses_lock);
++ break;
++ }
++ spin_unlock(&cifs_tcp_ses_lock);
++ return rc;
+ }
+
+ static struct cifs_tcon *
+@@ -236,14 +261,16 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ unsigned char *sigptr = smb2_signature;
+ struct kvec *iov = rqst->rq_iov;
+ struct smb2_hdr *shdr = (struct smb2_hdr *)iov[0].iov_base;
+- struct cifs_ses *ses;
+ struct shash_desc *shash = NULL;
+ struct smb_rqst drqst;
++ __u64 sid = le64_to_cpu(shdr->SessionId);
++ u8 key[SMB2_NTLMV2_SESSKEY_SIZE];
+
+- ses = smb2_find_smb_ses(server, le64_to_cpu(shdr->SessionId));
+- if (unlikely(!ses)) {
+- cifs_server_dbg(FYI, "%s: Could not find session\n", __func__);
+- return -ENOENT;
++ rc = smb2_get_sign_key(server, sid, key);
++ if (unlikely(rc)) {
++ cifs_server_dbg(FYI, "%s: [sesid=0x%llx] couldn't find signing key: %d\n",
++ __func__, sid, rc);
++ return rc;
+ }
+
+ memset(smb2_signature, 0x0, SMB2_HMACSHA256_SIZE);
+@@ -260,8 +287,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ shash = server->secmech.hmacsha256;
+ }
+
+- rc = crypto_shash_setkey(shash->tfm, ses->auth_key.response,
+- SMB2_NTLMV2_SESSKEY_SIZE);
++ rc = crypto_shash_setkey(shash->tfm, key, sizeof(key));
+ if (rc) {
+ cifs_server_dbg(VFS,
+ "%s: Could not update with response\n",
+@@ -303,8 +329,6 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ out:
+ if (allocate_crypto)
+ cifs_free_hash(&shash);
+- if (ses)
+- cifs_put_smb_ses(ses);
+ return rc;
+ }
+
+@@ -570,7 +594,7 @@ smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ struct smb_rqst drqst;
+ u8 key[SMB3_SIGN_KEY_SIZE];
+
+- rc = smb2_get_sign_key(le64_to_cpu(shdr->SessionId), server, key);
++ rc = smb3_get_sign_key(le64_to_cpu(shdr->SessionId), server, key);
+ if (unlikely(rc)) {
+ cifs_server_dbg(FYI, "%s: Could not get signing key\n", __func__);
+ return rc;
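[Editor's note] The smb2transport.c rework above stops pinning a cifs_ses for the whole HMAC computation: smb2_get_sign_key() now copies the 16-byte session key out while cifs_tcp_ses_lock is held, so no session reference (and no cifs_put_smb_ses()) is needed afterwards. A minimal userspace sketch of this copy-under-lock pattern, with hypothetical names and a pthread mutex standing in for the kernel spinlock:

    #include <pthread.h>
    #include <string.h>

    #define KEY_SIZE 16

    struct session {
        unsigned long long id;
        unsigned char key[KEY_SIZE];
        int status;                     /* 0 == SES_GOOD */
    };

    static pthread_mutex_t ses_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct session sessions[8];

    /* Copy the key out under the lock; the caller never holds a
     * session reference, so session teardown cannot race with the
     * signing that follows. */
    static int get_sign_key(unsigned long long id, unsigned char *key)
    {
        int rc = -1;

        pthread_mutex_lock(&ses_lock);
        for (int i = 0; i < 8; i++) {
            if (sessions[i].id == id && sessions[i].status == 0) {
                memcpy(key, sessions[i].key, KEY_SIZE);
                rc = 0;
                break;
            }
        }
        pthread_mutex_unlock(&ses_lock);
        return rc;
    }

    int main(void)
    {
        unsigned char key[KEY_SIZE];

        sessions[0].id = 0x1234;
        memset(sessions[0].key, 0xab, KEY_SIZE);
        return get_sign_key(0x1234, key);   /* 0 on success */
    }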
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 0b52d22a91a0cb..12cbd3428a6da5 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -44,6 +44,8 @@
+ EM(netfs_trace_tcon_ref_free_ipc, "FRE Ipc ") \
+ EM(netfs_trace_tcon_ref_free_ipc_fail, "FRE Ipc-F ") \
+ EM(netfs_trace_tcon_ref_free_reconnect_server, "FRE Reconn") \
++ EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \
++ EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \
+ EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \
+ EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \
+ EM(netfs_trace_tcon_ref_get_find, "GET Find ") \
+@@ -52,6 +54,7 @@
+ EM(netfs_trace_tcon_ref_new, "NEW ") \
+ EM(netfs_trace_tcon_ref_new_ipc, "NEW Ipc ") \
+ EM(netfs_trace_tcon_ref_new_reconnect_server, "NEW Reconn") \
++ EM(netfs_trace_tcon_ref_put_cached_close, "PUT Ch-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
+ EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index e6cfedba999232..c8cc6fa6fc3ebb 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -276,8 +276,12 @@ static void handle_ksmbd_work(struct work_struct *wk)
+ * disconnection. waitqueue_active is safe because it
+ * uses atomic operation for condition.
+ */
++ atomic_inc(&conn->refcnt);
+ if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+ wake_up(&conn->r_count_q);
++
++ if (atomic_dec_and_test(&conn->refcnt))
++ kfree(conn);
+ }
+
+ /**
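[Editor's note] The ksmbd hunk above closes a use-after-free window: the worker must pin conn with its own reference before dropping the last r_count, because the teardown path may free conn the moment r_count hits zero and the waiter is woken. A userspace analogue of the pattern with C11 atomics (field names mirror ksmbd's, but this is a sketch, not the ksmbd code):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct conn {
        atomic_int refcnt;      /* object lifetime */
        atomic_int r_count;     /* in-flight requests */
    };

    /* Pin the object before dropping the last r_count so the wakeup
     * (elided here) never runs against freed memory. */
    static void finish_work(struct conn *c)
    {
        atomic_fetch_add(&c->refcnt, 1);

        if (atomic_fetch_sub(&c->r_count, 1) == 1)
            ;   /* wake_up(&c->r_count_q) would go here */

        if (atomic_fetch_sub(&c->refcnt, 1) == 1)
            free(c);
    }

    int main(void)
    {
        struct conn *c = calloc(1, sizeof(*c));

        if (!c)
            return 1;
        atomic_store(&c->refcnt, 1);    /* owner's reference */
        atomic_store(&c->r_count, 1);   /* one request in flight */
        finish_work(c);

        if (atomic_fetch_sub(&c->refcnt, 1) == 1)   /* owner's put */
            free(c);
        return 0;
    }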
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index 291583005dd123..245a10cc1eeb4d 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -773,10 +773,10 @@ static void init_constants_master(struct ubifs_info *c)
+ * necessary to report something for the 'statfs()' call.
+ *
+ * Subtract the LEB reserved for GC, the LEB which is reserved for
+- * deletions, minimum LEBs for the index, and assume only one journal
+- * head is available.
++ * deletions, minimum LEBs for the index, and the LEBs which are
++ * reserved for each journal head.
+ */
+- tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt + 1;
++ tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt;
+ tmp64 *= (long long)c->leb_size - c->leb_overhead;
+ tmp64 = ubifs_reported_space(c, tmp64);
+ c->block_cnt = tmp64 >> UBIFS_BLOCK_SHIFT;
+diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c
+index a55e04822d16e9..7c43e0ccf6d47d 100644
+--- a/fs/ubifs/tnc_commit.c
++++ b/fs/ubifs/tnc_commit.c
+@@ -657,6 +657,8 @@ static int get_znodes_to_commit(struct ubifs_info *c)
+ znode->alt = 0;
+ cnext = find_next_dirty(znode);
+ if (!cnext) {
++ ubifs_assert(c, !znode->parent);
++ znode->cparent = NULL;
+ znode->cnext = c->cnext;
+ break;
+ }
+diff --git a/fs/unicode/utf8-core.c b/fs/unicode/utf8-core.c
+index 8395066341a437..0400824ef4936e 100644
+--- a/fs/unicode/utf8-core.c
++++ b/fs/unicode/utf8-core.c
+@@ -198,7 +198,7 @@ struct unicode_map *utf8_load(unsigned int version)
+ return um;
+
+ out_symbol_put:
+- symbol_put(um->tables);
++ symbol_put(utf8_data_table);
+ out_free_um:
+ kfree(um);
+ return ERR_PTR(-EINVAL);
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index 4719ec90029cb7..edaf193dbd5ccc 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -546,10 +546,14 @@ xfs_can_free_eofblocks(
+ return false;
+
+ /*
+- * Check if there is an post-EOF extent to free.
++ * Check if there is a post-EOF extent to free. If there are any
++ * delalloc blocks attached to the inode (data fork delalloc
++ * reservations or CoW extents of any kind), we need to free them so
++ * that inactivation doesn't fail to erase them.
+ */
+ xfs_ilock(ip, XFS_ILOCK_SHARED);
+- if (xfs_iext_lookup_extent(ip, &ip->i_df, end_fsb, &icur, &imap))
++ if (ip->i_delayed_blks ||
++ xfs_iext_lookup_extent(ip, &ip->i_df, end_fsb, &icur, &imap))
+ found_blocks = true;
+ xfs_iunlock(ip, XFS_ILOCK_SHARED);
+ return found_blocks;
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index eeadbaeccf88b7..fa284b64b2de20 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -350,9 +350,9 @@
+ *(.data..decrypted) \
+ *(.ref.data) \
+ *(.data..shared_aligned) /* percpu related */ \
+- *(.data.unlikely) \
++ *(.data..unlikely) \
+ __start_once = .; \
+- *(.data.once) \
++ *(.data..once) \
+ __end_once = .; \
+ STRUCT_ALIGN(); \
+ *(__tracepoints) \
+diff --git a/include/kunit/skbuff.h b/include/kunit/skbuff.h
+index 44d12370939a90..345e1e8f031235 100644
+--- a/include/kunit/skbuff.h
++++ b/include/kunit/skbuff.h
+@@ -29,7 +29,7 @@ static void kunit_action_kfree_skb(void *p)
+ static inline struct sk_buff *kunit_zalloc_skb(struct kunit *test, int len,
+ gfp_t gfp)
+ {
+- struct sk_buff *res = alloc_skb(len, GFP_KERNEL);
++ struct sk_buff *res = alloc_skb(len, gfp);
+
+ if (!res || skb_pad(res, len))
+ return NULL;
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 4fecf46ef681b3..c5063e0a38a058 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -925,6 +925,8 @@ void blk_freeze_queue_start(struct request_queue *q);
+ void blk_mq_freeze_queue_wait(struct request_queue *q);
+ int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
+ unsigned long timeout);
++void blk_mq_unfreeze_queue_non_owner(struct request_queue *q);
++void blk_freeze_queue_start_non_owner(struct request_queue *q);
+
+ void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
+ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 50c3b959da2816..e84a93c4013207 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -25,6 +25,7 @@
+ #include <linux/uuid.h>
+ #include <linux/xarray.h>
+ #include <linux/file.h>
++#include <linux/lockdep.h>
+
+ struct module;
+ struct request_queue;
+@@ -471,6 +472,11 @@ struct request_queue {
+ struct xarray hctx_table;
+
+ struct percpu_ref q_usage_counter;
++ struct lock_class_key io_lock_cls_key;
++ struct lockdep_map io_lockdep_map;
++
++ struct lock_class_key q_lock_cls_key;
++ struct lockdep_map q_lockdep_map;
+
+ struct request *last_merge;
+
+@@ -566,6 +572,10 @@ struct request_queue {
+ struct throtl_data *td;
+ #endif
+ struct rcu_head rcu_head;
++#ifdef CONFIG_LOCKDEP
++ struct task_struct *mq_freeze_owner;
++ int mq_freeze_owner_depth;
++#endif
+ wait_queue_head_t mq_freeze_wq;
+ /*
+ * Protect concurrent access to q_usage_counter by
+@@ -1247,7 +1257,7 @@ static inline unsigned int queue_io_min(const struct request_queue *q)
+ return q->limits.io_min;
+ }
+
+-static inline int bdev_io_min(struct block_device *bdev)
++static inline unsigned int bdev_io_min(struct block_device *bdev)
+ {
+ return queue_io_min(bdev_get_queue(bdev));
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bdadb0bb6cecd1..bc2e3dab0487ea 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1373,7 +1373,8 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
+ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+ struct bpf_prog *to);
+ /* Called only from JIT-enabled code, so there's no need for stubs. */
+-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym);
++void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym);
++void bpf_image_ksym_add(struct bpf_ksym *ksym);
+ void bpf_image_ksym_del(struct bpf_ksym *ksym);
+ void bpf_ksym_add(struct bpf_ksym *ksym);
+ void bpf_ksym_del(struct bpf_ksym *ksym);
+@@ -3461,4 +3462,10 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
+ return prog->aux->func_idx != 0;
+ }
+
++static inline bool bpf_prog_is_raw_tp(const struct bpf_prog *prog)
++{
++ return prog->type == BPF_PROG_TYPE_TRACING &&
++ prog->expected_attach_type == BPF_TRACE_RAW_TP;
++}
++
+ #endif /* _LINUX_BPF_H */
+diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
+index 038b2d523bf884..518bd1fd86fbe0 100644
+--- a/include/linux/cleanup.h
++++ b/include/linux/cleanup.h
+@@ -290,7 +290,7 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ #define DEFINE_GUARD(_name, _type, _lock, _unlock) \
+ DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \
+ static inline void * class_##_name##_lock_ptr(class_##_name##_t *_T) \
+- { return *_T; }
++ { return (void *)(__force unsigned long)*_T; }
+
+ #define DEFINE_GUARD_COND(_name, _ext, _condlock) \
+ EXTEND_CLASS(_name, _ext, \
+@@ -347,7 +347,7 @@ static inline void class_##_name##_destructor(class_##_name##_t *_T) \
+ \
+ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T) \
+ { \
+- return _T->lock; \
++ return (void *)(__force unsigned long)_T->lock; \
+ }
+
+
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 32284cd26d52a7..c16d4199bf9231 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -94,19 +94,6 @@
+ # define __copy(symbol)
+ #endif
+
+-/*
+- * Optional: only supported since gcc >= 15
+- * Optional: only supported since clang >= 18
+- *
+- * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
+- * clang: https://github.com/llvm/llvm-project/pull/76348
+- */
+-#if __has_attribute(__counted_by__)
+-# define __counted_by(member) __attribute__((__counted_by__(member)))
+-#else
+-# define __counted_by(member)
+-#endif
+-
+ /*
+ * Optional: not supported by gcc
+ * Optional: only supported since clang >= 14.0
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index 1a957ea2f4fe78..639be0f30b455d 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -323,6 +323,25 @@ struct ftrace_likely_data {
+ #define __no_sanitize_or_inline __always_inline
+ #endif
+
++/*
++ * Optional: only supported since gcc >= 15
++ * Optional: only supported since clang >= 18
++ *
++ * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
++ * clang: https://github.com/llvm/llvm-project/pull/76348
++ *
++ * __bdos on clang < 19.1.2 can erroneously return 0:
++ * https://github.com/llvm/llvm-project/pull/110497
++ *
++ * __bdos on clang < 19.1.3 can be off by 4:
++ * https://github.com/llvm/llvm-project/pull/112636
++ */
++#ifdef CONFIG_CC_HAS_COUNTED_BY
++# define __counted_by(member) __attribute__((__counted_by__(member)))
++#else
++# define __counted_by(member)
++#endif
++
+ /*
+ * Apply __counted_by() when the Endianness matches to increase test coverage.
+ */
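[Editor's note] The two compiler-header hunks move __counted_by() from an unconditional __has_attribute() check to the new CONFIG_CC_HAS_COUNTED_BY gate (see the init/Kconfig hunk later in this patch), because clang before 19.1.3 miscomputes __builtin_dynamic_object_size() for such arrays. For readers new to the attribute, a sketch of what it annotates, mirroring the Kconfig probe (struct and field names are illustrative):

    struct flex {
        int count;
        int array[] __counted_by(count);  /* expands to nothing when unsupported */
    };

    /*
     * With a supporting compiler, __builtin_dynamic_object_size() on
     * f->array -- and therefore fortified memcpy()/memset() -- can
     * bound accesses by f->count instead of answering "unknown".
     */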
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index b0b821edfd97d1..3b2ad444c002ee 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -24,10 +24,10 @@
+ #define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+ #define COMPRESS_ADDR ((block_t)-2) /* used as compressed data flag */
+
+-#define F2FS_BYTES_TO_BLK(bytes) ((bytes) >> F2FS_BLKSIZE_BITS)
+-#define F2FS_BLK_TO_BYTES(blk) ((blk) << F2FS_BLKSIZE_BITS)
++#define F2FS_BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> F2FS_BLKSIZE_BITS)
++#define F2FS_BLK_TO_BYTES(blk) ((unsigned long long)(blk) << F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_END_BYTES(blk) (F2FS_BLK_TO_BYTES(blk + 1) - 1)
+-#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1))
++#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1))
+
+ /* 0, 1(node nid), 2(meta nid) are reserved node id */
+ #define F2FS_RESERVED_NODE_NUM 3
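[Editor's note] The f2fs hunk widens the operand before shifting; without the cast, F2FS_BLK_TO_BYTES() on a 32-bit block number silently truncates once the byte offset passes 4 GiB. A standalone demonstration of the difference:

    #include <stdio.h>

    #define BLKSIZE_BITS 12   /* 4 KiB blocks, as in f2fs */

    #define BLK_TO_BYTES_BAD(blk) ((blk) << BLKSIZE_BITS)
    #define BLK_TO_BYTES_OK(blk)  ((unsigned long long)(blk) << BLKSIZE_BITS)

    int main(void)
    {
        unsigned int blk = 0x00200000;  /* 2M blocks * 4 KiB = 8 GiB */

        /* the unwidened shift wraps modulo 2^32 and prints 0 */
        printf("bad: %llu\n", (unsigned long long)BLK_TO_BYTES_BAD(blk));
        printf("ok:  %llu\n", BLK_TO_BYTES_OK(blk));
        return 0;
    }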
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 3559446279c152..4b5cad44a12683 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3726,6 +3726,6 @@ static inline bool vfs_empty_path(int dfd, const char __user *path)
+ return !c;
+ }
+
+-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos);
++int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter);
+
+ #endif /* _LINUX_FS_H */
+diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h
+index 9d7754ad5e9b08..43ad280935e360 100644
+--- a/include/linux/hisi_acc_qm.h
++++ b/include/linux/hisi_acc_qm.h
+@@ -229,6 +229,12 @@ struct hisi_qm_status {
+
+ struct hisi_qm;
+
++enum acc_err_result {
++ ACC_ERR_NONE,
++ ACC_ERR_NEED_RESET,
++ ACC_ERR_RECOVERED,
++};
++
+ struct hisi_qm_err_info {
+ char *acpi_rst;
+ u32 msi_wr_port;
+@@ -257,9 +263,9 @@ struct hisi_qm_err_ini {
+ void (*close_axi_master_ooo)(struct hisi_qm *qm);
+ void (*open_sva_prefetch)(struct hisi_qm *qm);
+ void (*close_sva_prefetch)(struct hisi_qm *qm);
+- void (*log_dev_hw_err)(struct hisi_qm *qm, u32 err_sts);
+ void (*show_last_dfx_regs)(struct hisi_qm *qm);
+ void (*err_info_init)(struct hisi_qm *qm);
++ enum acc_err_result (*get_err_result)(struct hisi_qm *qm);
+ };
+
+ struct hisi_qm_cap_info {
+diff --git a/include/linux/intel_vsec.h b/include/linux/intel_vsec.h
+index 11ee185566c31c..b94beab64610b9 100644
+--- a/include/linux/intel_vsec.h
++++ b/include/linux/intel_vsec.h
+@@ -74,10 +74,11 @@ enum intel_vsec_quirks {
+ * @pdev: PCI device reference for the callback's use
+ * @guid: ID of data to access
+ * @data: buffer for the data to be copied
++ * @off: offset into the requested buffer
+ * @count: size of buffer
+ */
+ struct pmt_callbacks {
+- int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, u32 count);
++ int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, loff_t off, u32 count);
+ };
+
+ /**
+diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h
+index 1220f0fbe5bf9f..5d21dacd62bc7d 100644
+--- a/include/linux/jiffies.h
++++ b/include/linux/jiffies.h
+@@ -502,7 +502,7 @@ static inline unsigned long _msecs_to_jiffies(const unsigned int m)
+ * - all other values are converted to jiffies by either multiplying
+ * the input value by a factor or dividing it with a factor and
+ * handling any 32-bit overflows.
+- * for the details see __msecs_to_jiffies()
++ * for the details see _msecs_to_jiffies()
+ *
+ * msecs_to_jiffies() checks for the passed in value being a constant
+ * via __builtin_constant_p() allowing gcc to eliminate most of the
+diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h
+index 564868bdce898b..fd743d4c4b4bdc 100644
+--- a/include/linux/kfifo.h
++++ b/include/linux/kfifo.h
+@@ -37,7 +37,6 @@
+ */
+
+ #include <linux/array_size.h>
+-#include <linux/dma-mapping.h>
+ #include <linux/spinlock.h>
+ #include <linux/stddef.h>
+ #include <linux/types.h>
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 45be36e5285ffb..85fe9d0ebb9152 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -2382,12 +2382,6 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ }
+ #endif /* CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE */
+
+-typedef int (*kvm_vm_thread_fn_t)(struct kvm *kvm, uintptr_t data);
+-
+-int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
+- uintptr_t data, const char *name,
+- struct task_struct **thread_ptr);
+-
+ #ifdef CONFIG_KVM_XFER_TO_GUEST_WORK
+ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
+ {
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index 217f7abf2cbfab..67964dc4db952e 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -173,7 +173,7 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
+ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_subclass(lock, sub) \
+- lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
++ lockdep_init_map_type(&(lock)->dep_map, (lock)->dep_map.name, (lock)->dep_map.key, sub,\
+ (lock)->dep_map.wait_type_inner, \
+ (lock)->dep_map.wait_type_outer, \
+ (lock)->dep_map.lock_type)
+diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
+index 39a7714605a796..d7cb1e5ecbda9d 100644
+--- a/include/linux/mmdebug.h
++++ b/include/linux/mmdebug.h
+@@ -46,7 +46,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ } \
+ } while (0)
+ #define VM_WARN_ON_ONCE_PAGE(cond, page) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+@@ -66,7 +66,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ unlikely(__ret_warn); \
+ })
+ #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+@@ -77,7 +77,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ unlikely(__ret_warn_once); \
+ })
+ #define VM_WARN_ON_ONCE_MM(cond, mm) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
+index cd4e28db0cbd77..959a4daacea1f2 100644
+--- a/include/linux/netpoll.h
++++ b/include/linux/netpoll.h
+@@ -72,7 +72,7 @@ static inline void *netpoll_poll_lock(struct napi_struct *napi)
+ {
+ struct net_device *dev = napi->dev;
+
+- if (dev && dev->npinfo) {
++ if (dev && rcu_access_pointer(dev->npinfo)) {
+ int owner = smp_processor_id();
+
+ while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
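[Editor's note] The netpoll hunk replaces a naked read of the RCU-managed dev->npinfo with rcu_access_pointer(), the accessor for testing an RCU pointer without dereferencing it (and therefore without needing rcu_read_lock()). A kernel-context fragment showing the distinction (netpoll_armed() is an illustrative name, not a kernel API):

    /* Legal outside any RCU read-side critical section, because the
     * pointer value is only compared against NULL, never followed. */
    static bool netpoll_armed(struct net_device *dev)
    {
        return dev && rcu_access_pointer(dev->npinfo);
    }

    /* By contrast, actually following the pointer requires
     * rcu_read_lock() plus rcu_dereference(dev->npinfo). */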
+diff --git a/include/linux/nfslocalio.h b/include/linux/nfslocalio.h
+index 3982fea799195e..9202f4b24343d7 100644
+--- a/include/linux/nfslocalio.h
++++ b/include/linux/nfslocalio.h
+@@ -55,7 +55,7 @@ struct nfsd_localio_operations {
+ const struct cred *,
+ const struct nfs_fh *,
+ const fmode_t);
+- void (*nfsd_file_put_local)(struct nfsd_file *);
++ struct net *(*nfsd_file_put_local)(struct nfsd_file *);
+ struct file *(*nfsd_file_file)(struct nfsd_file *);
+ } ____cacheline_aligned;
+
+@@ -66,7 +66,7 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *,
+ struct rpc_clnt *, const struct cred *,
+ const struct nfs_fh *, const fmode_t);
+
+-static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
++static inline void nfs_to_nfsd_net_put(struct net *net)
+ {
+ /*
+ * Once reference to nfsd_serv is dropped, NFSD could be
+@@ -74,10 +74,22 @@ static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
+ * by always taking RCU.
+ */
+ rcu_read_lock();
+- nfs_to->nfsd_file_put_local(localio);
++ nfs_to->nfsd_serv_put(net);
+ rcu_read_unlock();
+ }
+
++static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
++{
++ /*
++ * Must not hold RCU otherwise nfsd_file_put() can easily trigger:
++ * "Voluntary context switch within RCU read-side critical section!"
++ * by scheduling deep in underlying filesystem (e.g. XFS).
++ */
++ struct net *net = nfs_to->nfsd_file_put_local(localio);
++
++ nfs_to_nfsd_net_put(net);
++}
++
+ #else /* CONFIG_NFS_LOCALIO */
+ static inline void nfsd_localio_ops_init(void)
+ {
+diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
+index d69ad5bb1eb1e6..b8d6c0c208760a 100644
+--- a/include/linux/of_fdt.h
++++ b/include/linux/of_fdt.h
+@@ -31,6 +31,7 @@ extern void *of_fdt_unflatten_tree(const unsigned long *blob,
+ extern int __initdata dt_root_addr_cells;
+ extern int __initdata dt_root_size_cells;
+ extern void *initial_boot_params;
++extern phys_addr_t initial_boot_params_pa;
+
+ extern char __dtb_start[];
+ extern char __dtb_end[];
+@@ -70,8 +71,8 @@ extern u64 dt_mem_next_cell(int s, const __be32 **cellp);
+ /* Early flat tree scan hooks */
+ extern int early_init_dt_scan_root(void);
+
+-extern bool early_init_dt_scan(void *params);
+-extern bool early_init_dt_verify(void *params);
++extern bool early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys);
++extern bool early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys);
+ extern void early_init_dt_scan_nodes(void);
+
+ extern const char *of_flat_dt_get_machine_name(void);
+diff --git a/include/linux/once.h b/include/linux/once.h
+index bc714d414448a7..30346fcdc7995d 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -46,7 +46,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
+ #define DO_ONCE(func, ...) \
+ ({ \
+ bool ___ret = false; \
+- static bool __section(".data.once") ___done = false; \
++ static bool __section(".data..once") ___done = false; \
+ static DEFINE_STATIC_KEY_TRUE(___once_key); \
+ if (static_branch_unlikely(&___once_key)) { \
+ unsigned long ___flags; \
+@@ -64,7 +64,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
+ #define DO_ONCE_SLEEPABLE(func, ...) \
+ ({ \
+ bool ___ret = false; \
+- static bool __section(".data.once") ___done = false; \
++ static bool __section(".data..once") ___done = false; \
+ static DEFINE_STATIC_KEY_TRUE(___once_key); \
+ if (static_branch_unlikely(&___once_key)) { \
+ ___ret = __do_once_sleepable_start(&___done); \
+diff --git a/include/linux/once_lite.h b/include/linux/once_lite.h
+index b7bce4983638f8..27de7bc32a0610 100644
+--- a/include/linux/once_lite.h
++++ b/include/linux/once_lite.h
+@@ -12,7 +12,7 @@
+
+ #define __ONCE_LITE_IF(condition) \
+ ({ \
+- static bool __section(".data.once") __already_done; \
++ static bool __section(".data..once") __already_done; \
+ bool __ret_cond = !!(condition); \
+ bool __ret_once = false; \
+ \
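[Editor's note] The once.h/once_lite.h hunks only rename the guard's section from .data.once to .data..once, matching the .data..* pattern consumed by the linker-script hunk earlier in this patch. For readers new to the mechanism itself, a userspace analogue of the once-guard these macros implement (GNU C statement expression, as in the kernel):

    #include <stdbool.h>
    #include <stdio.h>

    /* Analogue of __ONCE_LITE_IF(): true the first time `cond` holds,
     * false forever after. The kernel additionally tags the static
     * flag with __section(".data..once"). */
    #define ONCE_IF(cond)                               \
        ({                                              \
            static bool __already_done;                 \
            bool __ret = (cond) && !__already_done;     \
            if (__ret)                                  \
                __already_done = true;                  \
            __ret;                                      \
        })

    int main(void)
    {
        for (int i = 0; i < 3; i++)
            if (ONCE_IF(i >= 0))
                printf("printed once, i=%d\n", i);
        return 0;
    }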
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 58d84c59f3ddae..48e5c03df1dd83 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -401,7 +401,7 @@ static inline int debug_lockdep_rcu_enabled(void)
+ */
+ #define RCU_LOCKDEP_WARN(c, s) \
+ do { \
+- static bool __section(".data.unlikely") __warned; \
++ static bool __section(".data..unlikely") __warned; \
+ if (debug_lockdep_rcu_enabled() && (c) && \
+ debug_lockdep_rcu_enabled() && !__warned) { \
+ __warned = true; \
+diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
+index 8544ff05e594d7..7d81fc6918ee86 100644
+--- a/include/linux/rwlock_rt.h
++++ b/include/linux/rwlock_rt.h
+@@ -24,13 +24,13 @@ do { \
+ __rt_rwlock_init(rwl, #rwl, &__key); \
+ } while (0)
+
+-extern void rt_read_lock(rwlock_t *rwlock);
++extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock);
+ extern int rt_read_trylock(rwlock_t *rwlock);
+-extern void rt_read_unlock(rwlock_t *rwlock);
+-extern void rt_write_lock(rwlock_t *rwlock);
+-extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass);
++extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock);
++extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock);
++extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock);
+ extern int rt_write_trylock(rwlock_t *rwlock);
+-extern void rt_write_unlock(rwlock_t *rwlock);
++extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock);
+
+ static __always_inline void read_lock(rwlock_t *rwlock)
+ {
+diff --git a/include/linux/sched/ext.h b/include/linux/sched/ext.h
+index 1ddbde64a31b4a..2799e7284fff72 100644
+--- a/include/linux/sched/ext.h
++++ b/include/linux/sched/ext.h
+@@ -199,7 +199,6 @@ struct sched_ext_entity {
+ #ifdef CONFIG_EXT_GROUP_SCHED
+ struct cgroup *cgrp_moving_from;
+ #endif
+- /* must be the last field, see init_scx_entity() */
+ struct list_head tasks_node;
+ };
+
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index fffeb754880fca..5298765d6ca482 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -621,6 +621,23 @@ static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *
+ return READ_ONCE(s->seqcount.sequence);
+ }
+
++/**
++ * read_seqcount_latch() - pick even/odd latch data copy
++ * @s: Pointer to seqcount_latch_t
++ *
++ * See write_seqcount_latch() for details and a full reader/writer usage
++ * example.
++ *
++ * Return: sequence counter raw value. Use the lowest bit as an index for
++ * picking which data copy to read. The full counter must then be checked
++ * with read_seqcount_latch_retry().
++ */
++static __always_inline unsigned read_seqcount_latch(const seqcount_latch_t *s)
++{
++ kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
++ return raw_read_seqcount_latch(s);
++}
++
+ /**
+ * raw_read_seqcount_latch_retry() - end a seqcount_latch_t read section
+ * @s: Pointer to seqcount_latch_t
+@@ -635,9 +652,34 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ return unlikely(READ_ONCE(s->seqcount.sequence) != start);
+ }
+
++/**
++ * read_seqcount_latch_retry() - end a seqcount_latch_t read section
++ * @s: Pointer to seqcount_latch_t
++ * @start: count, from read_seqcount_latch()
++ *
++ * Return: true if a read section retry is required, else false
++ */
++static __always_inline int
++read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
++{
++ kcsan_atomic_next(0);
++ return raw_read_seqcount_latch_retry(s, start);
++}
++
+ /**
+ * raw_write_seqcount_latch() - redirect latch readers to even/odd copy
+ * @s: Pointer to seqcount_latch_t
++ */
++static __always_inline void raw_write_seqcount_latch(seqcount_latch_t *s)
++{
++ smp_wmb(); /* prior stores before incrementing "sequence" */
++ s->seqcount.sequence++;
++ smp_wmb(); /* increment "sequence" before following stores */
++}
++
++/**
++ * write_seqcount_latch_begin() - redirect latch readers to odd copy
++ * @s: Pointer to seqcount_latch_t
+ *
+ * The latch technique is a multiversion concurrency control method that allows
+ * queries during non-atomic modifications. If you can guarantee queries never
+@@ -665,17 +707,11 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ *
+ * void latch_modify(struct latch_struct *latch, ...)
+ * {
+- * smp_wmb(); // Ensure that the last data[1] update is visible
+- * latch->seq.sequence++;
+- * smp_wmb(); // Ensure that the seqcount update is visible
+- *
++ * write_seqcount_latch_begin(&latch->seq);
+ * modify(latch->data[0], ...);
+- *
+- * smp_wmb(); // Ensure that the data[0] update is visible
+- * latch->seq.sequence++;
+- * smp_wmb(); // Ensure that the seqcount update is visible
+- *
++ * write_seqcount_latch(&latch->seq);
+ * modify(latch->data[1], ...);
++ * write_seqcount_latch_end(&latch->seq);
+ * }
+ *
+ * The query will have a form like::
+@@ -686,13 +722,13 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ * unsigned seq, idx;
+ *
+ * do {
+- * seq = raw_read_seqcount_latch(&latch->seq);
++ * seq = read_seqcount_latch(&latch->seq);
+ *
+ * idx = seq & 0x01;
+ * entry = data_query(latch->data[idx], ...);
+ *
+ * // This includes needed smp_rmb()
+- * } while (raw_read_seqcount_latch_retry(&latch->seq, seq));
++ * } while (read_seqcount_latch_retry(&latch->seq, seq));
+ *
+ * return entry;
+ * }
+@@ -716,11 +752,31 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ * When data is a dynamic data structure; one should use regular RCU
+ * patterns to manage the lifetimes of the objects within.
+ */
+-static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
++static __always_inline void write_seqcount_latch_begin(seqcount_latch_t *s)
+ {
+- smp_wmb(); /* prior stores before incrementing "sequence" */
+- s->seqcount.sequence++;
+- smp_wmb(); /* increment "sequence" before following stores */
++ kcsan_nestable_atomic_begin();
++ raw_write_seqcount_latch(s);
++}
++
++/**
++ * write_seqcount_latch() - redirect latch readers to even copy
++ * @s: Pointer to seqcount_latch_t
++ */
++static __always_inline void write_seqcount_latch(seqcount_latch_t *s)
++{
++ raw_write_seqcount_latch(s);
++}
++
++/**
++ * write_seqcount_latch_end() - end a seqcount_latch_t write section
++ * @s: Pointer to seqcount_latch_t
++ *
++ * Marks the end of a seqcount_latch_t writer section, after all copies of the
++ * latch-protected data have been updated.
++ */
++static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
++{
++ kcsan_nestable_atomic_end();
+ }
+
+ #define __SEQLOCK_UNLOCKED(lockname) \
+@@ -754,11 +810,7 @@ static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
+ */
+ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ {
+- unsigned ret = read_seqcount_begin(&sl->seqcount);
+-
+- kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
+- kcsan_flat_atomic_begin();
+- return ret;
++ return read_seqcount_begin(&sl->seqcount);
+ }
+
+ /**
+@@ -774,12 +826,6 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ */
+ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ {
+- /*
+- * Assume not nested: read_seqretry() may be called multiple times when
+- * completing read critical section.
+- */
+- kcsan_flat_atomic_end();
+-
+ return read_seqcount_retry(&sl->seqcount, start);
+ }
+
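[Editor's note] The seqlock.h hunks promote the raw latch helpers into documented read/write_seqcount_latch*() APIs with KCSAN annotations; the algorithm itself is unchanged. The even/odd latch from the updated kerneldoc as a compilable single-writer sketch (memory barriers elided; the kernel helpers supply them via smp_wmb()):

    #include <stdio.h>

    struct latch {
        unsigned int seq;
        long data[2];
    };

    static void latch_modify(struct latch *l, long v)
    {
        l->seq++;       /* write_seqcount_latch_begin(): readers -> data[1] */
        l->data[0] = v;
        l->seq++;       /* write_seqcount_latch(): readers -> data[0] */
        l->data[1] = v;
                        /* write_seqcount_latch_end() */
    }

    static long latch_query(struct latch *l)
    {
        unsigned int seq;
        long v;

        do {
            seq = l->seq;           /* read_seqcount_latch() */
            v = l->data[seq & 1];   /* pick the stable copy */
        } while (l->seq != seq);    /* read_seqcount_latch_retry() */

        return v;
    }

    int main(void)
    {
        struct latch l = { 0, { 41, 41 } };

        latch_modify(&l, 42);
        printf("%ld\n", latch_query(&l));
        return 0;
    }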
+diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
+index 61c49b16f69ab0..6175cd682ca0d8 100644
+--- a/include/linux/spinlock_rt.h
++++ b/include/linux/spinlock_rt.h
+@@ -16,26 +16,25 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
+ }
+ #endif
+
+-#define spin_lock_init(slock) \
++#define __spin_lock_init(slock, name, key, percpu) \
+ do { \
+- static struct lock_class_key __key; \
+- \
+ rt_mutex_base_init(&(slock)->lock); \
+- __rt_spin_lock_init(slock, #slock, &__key, false); \
++ __rt_spin_lock_init(slock, name, key, percpu); \
+ } while (0)
+
+-#define local_spin_lock_init(slock) \
++#define _spin_lock_init(slock, percpu) \
+ do { \
+ static struct lock_class_key __key; \
+- \
+- rt_mutex_base_init(&(slock)->lock); \
+- __rt_spin_lock_init(slock, #slock, &__key, true); \
++ __spin_lock_init(slock, #slock, &__key, percpu); \
+ } while (0)
+
+-extern void rt_spin_lock(spinlock_t *lock);
+-extern void rt_spin_lock_nested(spinlock_t *lock, int subclass);
+-extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock);
+-extern void rt_spin_unlock(spinlock_t *lock);
++#define spin_lock_init(slock) _spin_lock_init(slock, false)
++#define local_spin_lock_init(slock) _spin_lock_init(slock, true)
++
++extern void rt_spin_lock(spinlock_t *lock) __acquires(lock);
++extern void rt_spin_lock_nested(spinlock_t *lock, int subclass) __acquires(lock);
++extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock);
++extern void rt_spin_unlock(spinlock_t *lock) __releases(lock);
+ extern void rt_spin_lock_unlock(spinlock_t *lock);
+ extern int rt_spin_trylock_bh(spinlock_t *lock);
+ extern int rt_spin_trylock(spinlock_t *lock);
+diff --git a/include/media/v4l2-dv-timings.h b/include/media/v4l2-dv-timings.h
+index 8fa963326bf6a2..c64096b5c78215 100644
+--- a/include/media/v4l2-dv-timings.h
++++ b/include/media/v4l2-dv-timings.h
+@@ -146,15 +146,18 @@ void v4l2_print_dv_timings(const char *dev_prefix, const char *prefix,
+ * @polarities: the horizontal and vertical polarities (same as struct
+ * v4l2_bt_timings polarities).
+ * @interlaced: if this flag is true, it indicates interlaced format
++ * @cap: the v4l2_dv_timings_cap capabilities.
+ * @fmt: the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid CVT format. If so, then it will return true, and fmt will be filled
+ * in with the found CVT timings.
+ */
+-bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
+- unsigned active_width, u32 polarities, bool interlaced,
+- struct v4l2_dv_timings *fmt);
++bool v4l2_detect_cvt(unsigned int frame_height, unsigned int hfreq,
++ unsigned int vsync, unsigned int active_width,
++ u32 polarities, bool interlaced,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *fmt);
+
+ /**
+ * v4l2_detect_gtf - detect if the given timings follow the GTF standard
+@@ -170,15 +173,18 @@ bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
+ * image height, so it has to be passed explicitly. Usually
+ * the native screen aspect ratio is used for this. If it
+ * is not filled in correctly, then 16:9 will be assumed.
++ * @cap: the v4l2_dv_timings_cap capabilities.
+ * @fmt: the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid GTF format. If so, then it will return true, and fmt will be filled
+ * in with the found GTF timings.
+ */
+-bool v4l2_detect_gtf(unsigned frame_height, unsigned hfreq, unsigned vsync,
+- u32 polarities, bool interlaced, struct v4l2_fract aspect,
+- struct v4l2_dv_timings *fmt);
++bool v4l2_detect_gtf(unsigned int frame_height, unsigned int hfreq,
++ unsigned int vsync, u32 polarities, bool interlaced,
++ struct v4l2_fract aspect,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *fmt);
+
+ /**
+ * v4l2_calc_aspect_ratio - calculate the aspect ratio based on bytes
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index bab1e3d7452a2c..a1864cff616aee 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1,7 +1,7 @@
+ /*
+ BlueZ - Bluetooth protocol stack for Linux
+ Copyright (C) 2000-2001 Qualcomm Incorporated
+- Copyright 2023 NXP
++ Copyright 2023-2024 NXP
+
+ Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+
+@@ -29,6 +29,7 @@
+ #define HCI_MAX_ACL_SIZE 1024
+ #define HCI_MAX_SCO_SIZE 255
+ #define HCI_MAX_ISO_SIZE 251
++#define HCI_MAX_ISO_BIS 31
+ #define HCI_MAX_EVENT_SIZE 260
+ #define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4)
+
+@@ -683,6 +684,7 @@ enum {
+ #define HCI_RSSI_INVALID 127
+
+ #define HCI_SYNC_HANDLE_INVALID 0xffff
++#define HCI_SID_INVALID 0xff
+
+ #define HCI_ROLE_MASTER 0x00
+ #define HCI_ROLE_SLAVE 0x01
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 88265d37aa72e3..4c185a08c3a3af 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -668,6 +668,7 @@ struct hci_conn {
+ __u8 adv_instance;
+ __u16 handle;
+ __u16 sync_handle;
++ __u8 sid;
+ __u16 state;
+ __u16 mtu;
+ __u8 mode;
+@@ -710,6 +711,9 @@ struct hci_conn {
+ __s8 tx_power;
+ __s8 max_tx_power;
+ struct bt_iso_qos iso_qos;
++ __u8 num_bis;
++ __u8 bis[HCI_MAX_ISO_BIS];
++
+ unsigned long flags;
+
+ enum conn_reasons conn_reason;
+@@ -945,8 +949,10 @@ enum {
+ HCI_CONN_PER_ADV,
+ HCI_CONN_BIG_CREATED,
+ HCI_CONN_CREATE_CIS,
++ HCI_CONN_CREATE_BIG_SYNC,
+ HCI_CONN_BIG_SYNC,
+ HCI_CONN_BIG_SYNC_FAILED,
++ HCI_CONN_CREATE_PA_SYNC,
+ HCI_CONN_PA_SYNC,
+ HCI_CONN_PA_SYNC_FAILED,
+ };
+@@ -1099,6 +1105,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ return NULL;
+ }
+
++static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
++ __u8 sid,
++ bdaddr_t *dst,
++ __u8 dst_type)
++{
++ struct hci_conn_hash *h = &hdev->conn_hash;
++ struct hci_conn *c;
++
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(c, &h->list, list) {
++ if (c->type != ISO_LINK || bacmp(&c->dst, dst) ||
++ c->dst_type != dst_type || c->sid != sid)
++ continue;
++
++ rcu_read_unlock();
++ return c;
++ }
++
++ rcu_read_unlock();
++
++ return NULL;
++}
++
+ static inline struct hci_conn *
+ hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev,
+ bdaddr_t *ba,
+@@ -1269,6 +1299,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ return NULL;
+ }
+
++static inline struct hci_conn *
++hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
++ __u8 handle, __u8 num_bis)
++{
++ struct hci_conn_hash *h = &hdev->conn_hash;
++ struct hci_conn *c;
++
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(c, &h->list, list) {
++ if (c->type != ISO_LINK)
++ continue;
++
++ if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) {
++ rcu_read_unlock();
++ return c;
++ }
++ }
++
++ rcu_read_unlock();
++
++ return NULL;
++}
++
+ static inline struct hci_conn *
+ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle, __u16 state)
+ {
+@@ -1328,6 +1382,13 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
+ if (c->type != ISO_LINK)
+ continue;
+
++ /* Ignore the listen hcon; we are looking
++ * for the child hcon that was created as
++ * a result of the PA sync established event.
++ */
++ if (c->state == BT_LISTEN)
++ continue;
++
+ if (c->sync_handle == sync_handle) {
+ rcu_read_unlock();
+ return c;
+@@ -1445,6 +1506,8 @@ bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
+ void hci_sco_setup(struct hci_conn *conn, __u8 status);
+ bool hci_iso_setup_path(struct hci_conn *conn);
+ int hci_le_create_cis_pending(struct hci_dev *hdev);
++int hci_pa_create_sync_pending(struct hci_dev *hdev);
++int hci_le_big_create_sync_pending(struct hci_dev *hdev);
+ int hci_conn_check_create_cis(struct hci_conn *conn);
+
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+diff --git a/include/net/net_debug.h b/include/net/net_debug.h
+index 1e74684cbbdbcd..4a79204c8d306e 100644
+--- a/include/net/net_debug.h
++++ b/include/net/net_debug.h
+@@ -27,7 +27,7 @@ void netdev_info(const struct net_device *dev, const char *format, ...);
+
+ #define netdev_level_once(level, dev, fmt, ...) \
+ do { \
+- static bool __section(".data.once") __print_once; \
++ static bool __section(".data..once") __print_once; \
+ \
+ if (!__print_once) { \
+ __print_once = true; \
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index aa8ede439905cb..67551133b5228e 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -2948,6 +2948,14 @@ int rdma_user_mmap_entry_insert_range(struct ib_ucontext *ucontext,
+ size_t length, u32 min_pgoff,
+ u32 max_pgoff);
+
++#if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
++void rdma_user_mmap_disassociate(struct ib_device *device);
++#else
++static inline void rdma_user_mmap_disassociate(struct ib_device *device)
++{
++}
++#endif
++
+ static inline int
+ rdma_user_mmap_entry_insert_exact(struct ib_ucontext *ucontext,
+ struct rdma_user_mmap_entry *entry,
+@@ -4726,6 +4734,9 @@ ib_get_vector_affinity(struct ib_device *device, int comp_vector)
+ * @device: the rdma device
+ */
+ void rdma_roce_rescan_device(struct ib_device *ibdev);
++void rdma_roce_rescan_port(struct ib_device *ib_dev, u32 port);
++void roce_del_all_netdev_gids(struct ib_device *ib_dev,
++ u32 port, struct net_device *ndev);
+
+ struct ib_ucontext *ib_uverbs_get_ucontext_file(struct ib_uverbs_file *ufile);
+
+diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
+index 3b687d20c9ed34..db7254d52d9355 100644
+--- a/include/uapi/linux/rtnetlink.h
++++ b/include/uapi/linux/rtnetlink.h
+@@ -174,7 +174,7 @@ enum {
+ #define RTM_GETLINKPROP RTM_GETLINKPROP
+
+ RTM_NEWVLAN = 112,
+-#define RTM_NEWNVLAN RTM_NEWVLAN
++#define RTM_NEWVLAN RTM_NEWVLAN
+ RTM_DELVLAN,
+ #define RTM_DELVLAN RTM_DELVLAN
+ RTM_GETVLAN,
+diff --git a/init/Kconfig b/init/Kconfig
+index c521e1421ad4ab..7256fa127530ff 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -120,6 +120,15 @@ config CC_HAS_ASM_INLINE
+ config CC_HAS_NO_PROFILE_FN_ATTR
+ def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror)
+
++config CC_HAS_COUNTED_BY
++ # TODO: when gcc 15 is released remove the build test and add
++ # a gcc version check
++ def_bool $(success,echo 'struct flex { int count; int array[] __attribute__((__counted_by__(count))); };' | $(CC) $(CLANG_FLAGS) -x c - -c -o /dev/null -Werror)
++ # clang needs to be at least 19.1.3 to avoid __bdos miscalculations
++ # https://github.com/llvm/llvm-project/pull/110497
++ # https://github.com/llvm/llvm-project/pull/112636
++ depends on !(CC_IS_CLANG && CLANG_VERSION < 190103)
++
+ config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+diff --git a/init/initramfs.c b/init/initramfs.c
+index bc911e466d5bbb..b2f7583bb1f5c2 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -360,6 +360,15 @@ static int __init do_name(void)
+ {
+ state = SkipIt;
+ next_state = Reset;
++
++ /* name_len > 0 && name_len <= PATH_MAX checked in do_header */
++ if (collected[name_len - 1] != '\0') {
++ pr_err("initramfs name without nulterm: %.*s\n",
++ (int)name_len, collected);
++ error("malformed archive");
++ return 1;
++ }
++
+ if (strcmp(collected, "TRAILER!!!") == 0) {
+ free_hash();
+ return 0;
+@@ -424,6 +433,12 @@ static int __init do_copy(void)
+
+ static int __init do_symlink(void)
+ {
++ if (collected[name_len - 1] != '\0') {
++ pr_err("initramfs symlink without nulterm: %.*s\n",
++ (int)name_len, collected);
++ error("malformed archive");
++ return 1;
++ }
+ collected[N_ALIGN(name_len) + body_len] = '\0';
+ clean_path(collected, 0);
+ init_symlink(collected + N_ALIGN(name_len), collected);
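[Editor's note] The initramfs hunks reject cpio entries whose name field does not carry its own NUL terminator before the buffer is handed to strcmp() and clean_path(). The check in isolation, runnable against well-formed and malformed inputs:

    #include <stdio.h>

    /* `name` is name_len bytes from an untrusted archive; treat it as
     * a C string only after proving it is NUL-terminated. */
    static int name_ok(const char *name, unsigned long name_len)
    {
        return name_len > 0 && name[name_len - 1] == '\0';
    }

    int main(void)
    {
        const char good[] = "TRAILER!!!";               /* sizeof includes '\0' */
        const char bad[4] = { 'e', 't', 'c', '/' };     /* no '\0' */

        printf("good: %d\n", name_ok(good, sizeof(good)));
        printf("bad:  %d\n", name_ok(bad, sizeof(bad)));
        return 0;
    }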
+diff --git a/io_uring/memmap.c b/io_uring/memmap.c
+index a0f32a255fd1e1..6d151e46f3d69e 100644
+--- a/io_uring/memmap.c
++++ b/io_uring/memmap.c
+@@ -72,6 +72,8 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
+ ret = io_mem_alloc_compound(pages, nr_pages, size, gfp);
+ if (!IS_ERR(ret))
+ goto done;
++ if (nr_pages == 1)
++ goto fail;
+
+ ret = io_mem_alloc_single(pages, nr_pages, size, gfp);
+ if (!IS_ERR(ret)) {
+@@ -80,7 +82,7 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
+ *npages = nr_pages;
+ return ret;
+ }
+-
++fail:
+ kvfree(pages);
+ *out_pages = NULL;
+ *npages = 0;
+@@ -135,7 +137,12 @@ struct page **io_pin_pages(unsigned long uaddr, unsigned long len, int *npages)
+ struct page **pages;
+ int ret;
+
+- end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
++ if (check_add_overflow(uaddr, len, &end))
++ return ERR_PTR(-EOVERFLOW);
++ if (check_add_overflow(end, PAGE_SIZE - 1, &end))
++ return ERR_PTR(-EOVERFLOW);
++
++ end = end >> PAGE_SHIFT;
+ start = uaddr >> PAGE_SHIFT;
+ nr_pages = end - start;
+ if (WARN_ON_ONCE(!nr_pages))
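[Editor's note] io_pin_pages() now refuses address ranges whose page-rounded end wraps around the address space. The same overflow-checked page count as a standalone program, using the compiler builtin that backs the kernel's check_add_overflow():

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Number of pages spanned by [uaddr, uaddr + len), rejecting
     * ranges whose rounded-up end overflows. */
    static int range_pages(unsigned long uaddr, unsigned long len,
                           unsigned long *nr_pages)
    {
        unsigned long end;

        if (__builtin_add_overflow(uaddr, len, &end))
            return -1;
        if (__builtin_add_overflow(end, PAGE_SIZE - 1, &end))
            return -1;

        *nr_pages = (end >> PAGE_SHIFT) - (uaddr >> PAGE_SHIFT);
        return 0;
    }

    int main(void)
    {
        unsigned long n;

        if (range_pages(-4096UL, 8192, &n))
            printf("rejected wrapping range\n");
        if (!range_pages(0x1000, 8192, &n))
            printf("ok: %lu pages\n", n);   /* prints 2 */
        return 0;
    }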
+diff --git a/ipc/namespace.c b/ipc/namespace.c
+index 6ecc30effd3ec6..4df91ceeeafe9f 100644
+--- a/ipc/namespace.c
++++ b/ipc/namespace.c
+@@ -83,13 +83,15 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
+
+ err = msg_init_ns(ns);
+ if (err)
+- goto fail_put;
++ goto fail_ipc;
+
+ sem_init_ns(ns);
+ shm_init_ns(ns);
+
+ return ns;
+
++fail_ipc:
++ retire_ipc_sysctls(ns);
+ fail_mq:
+ retire_mq_sysctls(ns);
+
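[Editor's note] The ipc/namespace.c fix routes msg_init_ns() failure through a new fail_ipc label so that both sysctl registrations are retired: error labels release resources in reverse order of setup, each label undoing one step and falling through to the earlier ones. A shape sketch mirroring the fixed ordering (step()/undo() are stand-ins):

    #include <stdio.h>

    static int  step(const char *what, int fail) { printf("setup %s\n", what); return fail; }
    static void undo(const char *what)           { printf("undo  %s\n", what); }

    static int create_ns(void)
    {
        int err;

        err = step("mq sysctls", 0);
        if (err)
            goto fail;
        err = step("ipc sysctls", 0);
        if (err)
            goto fail_mq;
        err = step("msg ns", 1);    /* simulate msg_init_ns() failing */
        if (err)
            goto fail_ipc;
        return 0;

    fail_ipc:
        undo("ipc sysctls");        /* the unwind step added by the fix */
    fail_mq:
        undo("mq sysctls");
    fail:
        return err;
    }

    int main(void) { return create_ns() ? 1 : 0; }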
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index fda3dd2ee9844f..b3a2ce1e5e22ec 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -32,7 +32,9 @@ struct bpf_struct_ops_map {
+ * (in kvalue.data).
+ */
+ struct bpf_link **links;
+- u32 links_cnt;
++ /* ksyms for bpf trampolines */
++ struct bpf_ksym **ksyms;
++ u32 funcs_cnt;
+ u32 image_pages_cnt;
+ /* image_pages is an array of pages that has all the trampolines
+ * that stores the func args before calling the bpf_prog.
+@@ -481,11 +483,11 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
+ {
+ u32 i;
+
+- for (i = 0; i < st_map->links_cnt; i++) {
+- if (st_map->links[i]) {
+- bpf_link_put(st_map->links[i]);
+- st_map->links[i] = NULL;
+- }
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->links[i])
++ break;
++ bpf_link_put(st_map->links[i]);
++ st_map->links[i] = NULL;
+ }
+ }
+
+@@ -586,6 +588,49 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+ return 0;
+ }
+
++static void bpf_struct_ops_ksym_init(const char *tname, const char *mname,
++ void *image, unsigned int size,
++ struct bpf_ksym *ksym)
++{
++ snprintf(ksym->name, KSYM_NAME_LEN, "bpf__%s_%s", tname, mname);
++ INIT_LIST_HEAD_RCU(&ksym->lnode);
++ bpf_image_ksym_init(image, size, ksym);
++}
++
++static void bpf_struct_ops_map_add_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ bpf_image_ksym_add(st_map->ksyms[i]);
++ }
++}
++
++static void bpf_struct_ops_map_del_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ bpf_image_ksym_del(st_map->ksyms[i]);
++ }
++}
++
++static void bpf_struct_ops_map_free_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ kfree(st_map->ksyms[i]);
++ st_map->ksyms[i] = NULL;
++ }
++}
++
+ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ void *value, u64 flags)
+ {
+@@ -601,6 +646,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ int prog_fd, err;
+ u32 i, trampoline_start, image_off = 0;
+ void *cur_image = NULL, *image = NULL;
++ struct bpf_link **plink;
++ struct bpf_ksym **pksym;
++ const char *tname, *mname;
+
+ if (flags)
+ return -EINVAL;
+@@ -639,14 +687,19 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ udata = &uvalue->data;
+ kdata = &kvalue->data;
+
++ plink = st_map->links;
++ pksym = st_map->ksyms;
++ tname = btf_name_by_offset(st_map->btf, t->name_off);
+ module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[IDX_MODULE_ID]);
+ for_each_member(i, t, member) {
+ const struct btf_type *mtype, *ptype;
+ struct bpf_prog *prog;
+ struct bpf_tramp_link *link;
++ struct bpf_ksym *ksym;
+ u32 moff;
+
+ moff = __btf_member_bit_offset(t, member) / 8;
++ mname = btf_name_by_offset(st_map->btf, member->name_off);
+ ptype = btf_type_resolve_ptr(st_map->btf, member->type, NULL);
+ if (ptype == module_type) {
+ if (*(void **)(udata + moff))
+@@ -714,7 +767,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ }
+ bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
+ &bpf_struct_ops_link_lops, prog);
+- st_map->links[i] = &link->link;
++ *plink++ = &link->link;
++
++ ksym = kzalloc(sizeof(*ksym), GFP_USER);
++ if (!ksym) {
++ err = -ENOMEM;
++ goto reset_unlock;
++ }
++ *pksym++ = ksym;
+
+ trampoline_start = image_off;
+ err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+@@ -735,6 +795,12 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+
+ /* put prog_id to udata */
+ *(unsigned long *)(udata + moff) = prog->aux->id;
++
++ /* init ksym for this trampoline */
++ bpf_struct_ops_ksym_init(tname, mname,
++ image + trampoline_start,
++ image_off - trampoline_start,
++ ksym);
+ }
+
+ if (st_ops->validate) {
+@@ -783,6 +849,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ */
+
+ reset_unlock:
++ bpf_struct_ops_map_free_ksyms(st_map);
+ bpf_struct_ops_map_free_image(st_map);
+ bpf_struct_ops_map_put_progs(st_map);
+ memset(uvalue, 0, map->value_size);
+@@ -790,6 +857,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ unlock:
+ kfree(tlinks);
+ mutex_unlock(&st_map->lock);
++ if (!err)
++ bpf_struct_ops_map_add_ksyms(st_map);
+ return err;
+ }
+
+@@ -849,7 +918,10 @@ static void __bpf_struct_ops_map_free(struct bpf_map *map)
+
+ if (st_map->links)
+ bpf_struct_ops_map_put_progs(st_map);
++ if (st_map->ksyms)
++ bpf_struct_ops_map_free_ksyms(st_map);
+ bpf_map_area_free(st_map->links);
++ bpf_map_area_free(st_map->ksyms);
+ bpf_struct_ops_map_free_image(st_map);
+ bpf_map_area_free(st_map->uvalue);
+ bpf_map_area_free(st_map);
+@@ -866,6 +938,8 @@ static void bpf_struct_ops_map_free(struct bpf_map *map)
+ if (btf_is_module(st_map->btf))
+ module_put(st_map->st_ops_desc->st_ops->owner);
+
++ bpf_struct_ops_map_del_ksyms(st_map);
++
+ /* The struct_ops's function may switch to another struct_ops.
+ *
+ * For example, bpf_tcp_cc_x->init() may switch to
+@@ -895,6 +969,19 @@ static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr)
+ return 0;
+ }
+
++static u32 count_func_ptrs(const struct btf *btf, const struct btf_type *t)
++{
++ int i;
++ u32 count;
++ const struct btf_member *member;
++
++ count = 0;
++ for_each_member(i, t, member)
++ if (btf_type_resolve_func_ptr(btf, member->type, NULL))
++ count++;
++ return count;
++}
++
+ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
+ {
+ const struct bpf_struct_ops_desc *st_ops_desc;
+@@ -961,11 +1048,15 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
+ map = &st_map->map;
+
+ st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
+- st_map->links_cnt = btf_type_vlen(t);
++ st_map->funcs_cnt = count_func_ptrs(btf, t);
+ st_map->links =
+- bpf_map_area_alloc(st_map->links_cnt * sizeof(struct bpf_links *),
++ bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_link *),
++ NUMA_NO_NODE);
++
++ st_map->ksyms =
++ bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_ksym *),
+ NUMA_NO_NODE);
+- if (!st_map->uvalue || !st_map->links) {
++ if (!st_map->uvalue || !st_map->links || !st_map->ksyms) {
+ ret = -ENOMEM;
+ goto errout_free;
+ }
+@@ -994,7 +1085,8 @@ static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
+ usage = sizeof(*st_map) +
+ vt->size - sizeof(struct bpf_struct_ops_value);
+ usage += vt->size;
+- usage += btf_type_vlen(vt) * sizeof(struct bpf_links *);
++ usage += st_map->funcs_cnt * sizeof(struct bpf_link *);
++ usage += st_map->funcs_cnt * sizeof(struct bpf_ksym *);
+ usage += PAGE_SIZE;
+ return usage;
+ }
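[Editor's note] The bpf_struct_ops.c hunks give every trampoline its own kallsyms entry named bpf__<struct_ops>_<member>, tracked in a ksyms array allocated alongside the links array and filled front to back, so iteration stops at the first NULL. A userspace sketch of those two conventions (names illustrative):

    #include <stdio.h>

    #define KSYM_NAME_LEN 128

    struct ksym { char name[KSYM_NAME_LEN]; };

    static void ksym_set_name(struct ksym *ks, const char *tname, const char *mname)
    {
        /* same shape as the kernel's trampoline symbol names */
        snprintf(ks->name, sizeof(ks->name), "bpf__%s_%s", tname, mname);
    }

    static void add_ksyms(struct ksym **ksyms, unsigned int funcs_cnt)
    {
        /* entries are filled front to back; NULL marks the end */
        for (unsigned int i = 0; i < funcs_cnt; i++) {
            if (!ksyms[i])
                break;
            printf("register %s\n", ksyms[i]->name);
        }
    }

    int main(void)
    {
        struct ksym a, b;
        struct ksym *ksyms[4] = { &a, &b, NULL, NULL };

        ksym_set_name(&a, "tcp_congestion_ops", "ssthresh");
        ksym_set_name(&b, "tcp_congestion_ops", "cong_avoid");
        add_ksyms(ksyms, 4);
        return 0;
    }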
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 5cd1c7a23848cc..346826e3c933da 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6564,7 +6564,10 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ if (prog_args_trusted(prog))
+ info->reg_type |= PTR_TRUSTED;
+
+- if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
++ /* Raw tracepoint arguments always get marked as maybe NULL */
++ if (bpf_prog_is_raw_tp(prog))
++ info->reg_type |= PTR_MAYBE_NULL;
++ else if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
+ info->reg_type |= PTR_MAYBE_NULL;
+
+ if (tgt_prog) {
+diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
+index 70fb82bf16370e..b77db7413f8c70 100644
+--- a/kernel/bpf/dispatcher.c
++++ b/kernel/bpf/dispatcher.c
+@@ -154,7 +154,8 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+ d->image = NULL;
+ goto out;
+ }
+- bpf_image_ksym_add(d->image, PAGE_SIZE, &d->ksym);
++ bpf_image_ksym_init(d->image, PAGE_SIZE, &d->ksym);
++ bpf_image_ksym_add(&d->ksym);
+ }
+
+ prev_num_progs = d->num_progs;
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index f8302a5ca400da..1166d9dd3e8b5d 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -115,10 +115,14 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
+ (ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC);
+ }
+
+-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym)
++void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym)
+ {
+ ksym->start = (unsigned long) data;
+ ksym->end = ksym->start + size;
++}
++
++void bpf_image_ksym_add(struct bpf_ksym *ksym)
++{
+ bpf_ksym_add(ksym);
+ perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start,
+ PAGE_SIZE, false, ksym->name);
+@@ -377,7 +381,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size)
+ ksym = &im->ksym;
+ INIT_LIST_HEAD_RCU(&ksym->lnode);
+ snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key);
+- bpf_image_ksym_add(image, size, ksym);
++ bpf_image_ksym_init(image, size, ksym);
++ bpf_image_ksym_add(ksym);
+ return im;
+
+ out_free_image:
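The trampoline and dispatcher hunks above split bpf_image_ksym_add() into an init step that records the symbol's address range and a separate publish step, so callers such as the struct_ops code earlier in this patch can fill a ksym in completely before it becomes visible. A minimal userspace sketch of the same init-then-publish pattern; sym_init(), sym_publish() and symtab are illustrative names, not kernel API:

/* Sketch only: initialize an entry fully, then link it in. */
#include <stdio.h>
#include <stddef.h>

struct sym {
	unsigned long start;
	unsigned long end;
	const char *name;
	struct sym *next;
};

static struct sym *symtab;	/* head of a simple symbol list */

/* Step 1: fill in every field while the entry is still private. */
static void sym_init(struct sym *s, void *data, size_t size, const char *name)
{
	s->start = (unsigned long)data;
	s->end = s->start + size;
	s->name = name;
	s->next = NULL;
}

/* Step 2: publish the fully initialized entry to readers. */
static void sym_publish(struct sym *s)
{
	s->next = symtab;
	symtab = s;
}

int main(void)
{
	static char image[64];
	struct sym s;

	sym_init(&s, image, sizeof(image), "example_trampoline");
	sym_publish(&s);	/* readers never see a half-filled entry */

	for (struct sym *p = symtab; p; p = p->next)
		printf("%s: %#lx-%#lx\n", p->name, p->start, p->end);
	return 0;
}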
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index bb99bada7e2ed2..91317857ea3ee5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -418,6 +418,25 @@ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+ return rec;
+ }
+
++static bool mask_raw_tp_reg_cond(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) {
++ return reg->type == (PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL) &&
++ bpf_prog_is_raw_tp(env->prog) && !reg->ref_obj_id;
++}
++
++static bool mask_raw_tp_reg(const struct bpf_verifier_env *env, struct bpf_reg_state *reg)
++{
++ if (!mask_raw_tp_reg_cond(env, reg))
++ return false;
++ reg->type &= ~PTR_MAYBE_NULL;
++ return true;
++}
++
++static void unmask_raw_tp_reg(struct bpf_reg_state *reg, bool result)
++{
++ if (result)
++ reg->type |= PTR_MAYBE_NULL;
++}
++
+ static bool subprog_is_global(const struct bpf_verifier_env *env, int subprog)
+ {
+ struct bpf_func_info_aux *aux = env->prog->aux->func_info_aux;
+@@ -6595,6 +6614,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ const char *field_name = NULL;
+ enum bpf_type_flag flag = 0;
+ u32 btf_id = 0;
++ bool mask;
+ int ret;
+
+ if (!env->allow_ptr_leaks) {
+@@ -6666,7 +6686,21 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+
+ if (ret < 0)
+ return ret;
+-
++ /* For raw_tp progs, we allow dereference of PTR_MAYBE_NULL
++ * trusted PTR_TO_BTF_ID; these are the ones that are possibly
++ * arguments to the raw_tp. Since internal checks for trusted
++ * reg in check_ptr_to_btf_access would consider PTR_MAYBE_NULL
++ * modifier as problematic, mask it out temporarily for the
++ * check. Don't apply this to pointers with ref_obj_id > 0, as
++ * those won't be raw_tp args.
++ *
++ * We may end up applying this relaxation to other trusted
++ * PTR_TO_BTF_ID with maybe null flag, since we cannot
++ * distinguish PTR_MAYBE_NULL tagged for arguments vs normal
++ * tagging, but that should expand allowed behavior, and not
++ * cause regression for existing behavior.
++ */
++ mask = mask_raw_tp_reg(env, reg);
+ if (ret != PTR_TO_BTF_ID) {
+ /* just mark; */
+
+@@ -6727,8 +6761,13 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ clear_trusted_flags(&flag);
+ }
+
+- if (atype == BPF_READ && value_regno >= 0)
++ if (atype == BPF_READ && value_regno >= 0) {
+ mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
++ /* We've assigned a new type to regno, so don't undo masking. */
++ if (regno == value_regno)
++ mask = false;
++ }
++ unmask_raw_tp_reg(reg, mask);
+
+ return 0;
+ }
+@@ -7103,7 +7142,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown(env, regs, value_regno);
+ } else if (base_type(reg->type) == PTR_TO_BTF_ID &&
+- !type_may_be_null(reg->type)) {
++ (mask_raw_tp_reg_cond(env, reg) || !type_may_be_null(reg->type))) {
+ err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ value_regno);
+ } else if (reg->type == CONST_PTR_TO_MAP) {
+@@ -8796,6 +8835,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ enum bpf_reg_type type = reg->type;
+ u32 *arg_btf_id = NULL;
+ int err = 0;
++ bool mask;
+
+ if (arg_type == ARG_DONTCARE)
+ return 0;
+@@ -8836,11 +8876,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
+ arg_btf_id = fn->arg_btf_id[arg];
+
++ mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
+- if (err)
+- return err;
+
+- err = check_func_arg_reg_off(env, reg, regno, arg_type);
++ err = err ?: check_func_arg_reg_off(env, reg, regno, arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+
+@@ -9635,14 +9675,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
+ return ret;
+ } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
+ struct bpf_call_arg_meta meta;
++ bool mask;
+ int err;
+
+ if (register_is_null(reg) && type_may_be_null(arg->arg_type))
+ continue;
+
+ memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
++ mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+ } else {
+@@ -10583,11 +10626,26 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+
+ switch (func_id) {
+ case BPF_FUNC_tail_call:
++ if (env->cur_state->active_lock.ptr) {
++ verbose(env, "tail_call cannot be used inside bpf_spin_lock-ed region\n");
++ return -EINVAL;
++ }
++
+ err = check_reference_leak(env, false);
+ if (err) {
+ verbose(env, "tail_call would lead to reference leak\n");
+ return err;
+ }
++
++ if (env->cur_state->active_rcu_lock) {
++ verbose(env, "tail_call cannot be used inside bpf_rcu_read_lock-ed region\n");
++ return -EINVAL;
++ }
++
++ if (env->cur_state->active_preempt_lock) {
++ verbose(env, "tail_call cannot be used inside bpf_preempt_disable-ed region\n");
++ return -EINVAL;
++ }
+ break;
+ case BPF_FUNC_get_local_storage:
+ /* check that flags argument in get_local_storage(map, flags) is 0,
+@@ -11942,6 +12000,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 regno = i + 1, ref_id, type_size;
+ bool is_ret_buf_sz = false;
++ bool mask = false;
+ int kf_arg_type;
+
+ t = btf_type_skip_modifiers(btf, args[i].type, NULL);
+@@ -12000,12 +12059,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ return -EINVAL;
+ }
+
++ mask = mask_raw_tp_reg(env, reg);
+ if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
+ (register_is_null(reg) || type_may_be_null(reg->type)) &&
+ !is_kfunc_arg_nullable(meta->btf, &args[i])) {
+ verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
++ unmask_raw_tp_reg(reg, mask);
+ return -EACCES;
+ }
++ unmask_raw_tp_reg(reg, mask);
+
+ if (reg->ref_obj_id) {
+ if (is_kfunc_release(meta) && meta->ref_obj_id) {
+@@ -12063,16 +12125,24 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
+ break;
+
++ /* Allow passing maybe NULL raw_tp arguments to
++ * kfuncs for compatibility. Don't apply this to
++ * arguments with ref_obj_id > 0.
++ */
++ mask = mask_raw_tp_reg(env, reg);
+ if (!is_trusted_reg(reg)) {
+ if (!is_kfunc_rcu(meta)) {
+ verbose(env, "R%d must be referenced or trusted\n", regno);
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ if (!is_rcu_reg(reg)) {
+ verbose(env, "R%d must be a rcu pointer\n", regno);
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ }
++ unmask_raw_tp_reg(reg, mask);
+ fallthrough;
+ case KF_ARG_PTR_TO_CTX:
+ case KF_ARG_PTR_TO_DYNPTR:
+@@ -12095,7 +12165,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+
+ if (is_kfunc_release(meta) && reg->ref_obj_id)
+ arg_type |= OBJ_RELEASE;
++ mask = mask_raw_tp_reg(env, reg);
+ ret = check_func_arg_reg_off(env, reg, regno, arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+
+@@ -12272,6 +12344,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ ref_tname = btf_name_by_offset(btf, ref_t->name_off);
+ fallthrough;
+ case KF_ARG_PTR_TO_BTF_ID:
++ mask = mask_raw_tp_reg(env, reg);
+ /* Only base_type is checked, further checks are done here */
+ if ((base_type(reg->type) != PTR_TO_BTF_ID ||
+ (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
+@@ -12280,9 +12353,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ verbose(env, "expected %s or socket\n",
+ reg_type_str(env, base_type(reg->type) |
+ (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i);
++ unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+ break;
+@@ -13252,7 +13327,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
+ */
+ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_insn *insn,
+- const struct bpf_reg_state *ptr_reg,
++ struct bpf_reg_state *ptr_reg,
+ const struct bpf_reg_state *off_reg)
+ {
+ struct bpf_verifier_state *vstate = env->cur_state;
+@@ -13266,6 +13341,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_sanitize_info info = {};
+ u8 opcode = BPF_OP(insn->code);
+ u32 dst = insn->dst_reg;
++ bool mask;
+ int ret;
+
+ dst_reg = &regs[dst];
+@@ -13292,11 +13368,14 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ return -EACCES;
+ }
+
++ mask = mask_raw_tp_reg(env, ptr_reg);
+ if (ptr_reg->type & PTR_MAYBE_NULL) {
+ verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
+ dst, reg_type_str(env, ptr_reg->type));
++ unmask_raw_tp_reg(ptr_reg, mask);
+ return -EACCES;
+ }
++ unmask_raw_tp_reg(ptr_reg, mask);
+
+ switch (base_type(ptr_reg->type)) {
+ case PTR_TO_CTX:
+@@ -15909,6 +15988,15 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ return -ENOTSUPP;
+ }
+ break;
++ case BPF_PROG_TYPE_KPROBE:
++ switch (env->prog->expected_attach_type) {
++ case BPF_TRACE_KPROBE_SESSION:
++ range = retval_range(0, 1);
++ break;
++ default:
++ return 0;
++ }
++ break;
+ case BPF_PROG_TYPE_SK_LOOKUP:
+ range = retval_range(SK_DROP, SK_PASS);
+ break;
+@@ -19837,6 +19925,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ * for this case.
+ */
+ case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
++ case PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL:
+ if (type == BPF_READ) {
+ if (BPF_MODE(insn->code) == BPF_MEM)
+ insn->code = BPF_LDX | BPF_PROBE_MEM |
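The verifier hunks above all apply one idiom: mask_raw_tp_reg() temporarily clears PTR_MAYBE_NULL so checks written for trusted pointers will accept a possibly-NULL raw tracepoint argument, and unmask_raw_tp_reg() restores the bit afterwards, except where the register was assigned a fresh type in between. A hedged sketch of that mask-check-restore idiom, with invented flag and function names:

/* Sketch only: hide a flag bit around a check, then restore it. */
#include <assert.h>
#include <stdbool.h>

#define FLAG_MAYBE_NULL	(1u << 0)
#define FLAG_TRUSTED	(1u << 1)

struct reg { unsigned int type; };

static bool mask_maybe_null(struct reg *r)
{
	if (!(r->type & FLAG_MAYBE_NULL))
		return false;
	r->type &= ~FLAG_MAYBE_NULL;	/* hide the bit for the check */
	return true;
}

static void unmask_maybe_null(struct reg *r, bool masked)
{
	if (masked)
		r->type |= FLAG_MAYBE_NULL;	/* restore original state */
}

/* Stands in for check_ptr_to_btf_access(): rejects maybe-NULL regs. */
static int check_trusted(const struct reg *r)
{
	return (r->type & FLAG_MAYBE_NULL) ? -1 : 0;
}

int main(void)
{
	struct reg r = { .type = FLAG_TRUSTED | FLAG_MAYBE_NULL };
	bool masked = mask_maybe_null(&r);
	int err = check_trusted(&r);	/* passes with the bit hidden */

	unmask_maybe_null(&r, masked);	/* bit is back afterwards */
	assert(err == 0 && (r.type & FLAG_MAYBE_NULL));
	return 0;
}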
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 044c7ba1cc482b..e275eaf2de7f8f 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2140,8 +2140,10 @@ int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask)
+ if (ret)
+ goto exit_stats;
+
+- ret = cgroup_bpf_inherit(root_cgrp);
+- WARN_ON_ONCE(ret);
++ if (root == &cgrp_dfl_root) {
++ ret = cgroup_bpf_inherit(root_cgrp);
++ WARN_ON_ONCE(ret);
++ }
+
+ trace_cgroup_setup_root(root);
+
+@@ -2314,10 +2316,8 @@ static void cgroup_kill_sb(struct super_block *sb)
+ * And don't kill the default root.
+ */
+ if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
+- !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
+- cgroup_bpf_offline(&root->cgrp);
++ !percpu_ref_is_dying(&root->cgrp.self.refcnt))
+ percpu_ref_kill(&root->cgrp.self.refcnt);
+- }
+ cgroup_put(&root->cgrp);
+ kernfs_kill_sb(sb);
+ }
+@@ -5710,9 +5710,11 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name,
+ if (ret)
+ goto out_kernfs_remove;
+
+- ret = cgroup_bpf_inherit(cgrp);
+- if (ret)
+- goto out_psi_free;
++ if (cgrp->root == &cgrp_dfl_root) {
++ ret = cgroup_bpf_inherit(cgrp);
++ if (ret)
++ goto out_psi_free;
++ }
+
+ /*
+ * New cgroup inherits effective freeze counter, and
+@@ -6026,7 +6028,8 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+
+ cgroup1_check_for_release(parent);
+
+- cgroup_bpf_offline(cgrp);
++ if (cgrp->root == &cgrp_dfl_root)
++ cgroup_bpf_offline(cgrp);
+
+ /* put the base reference */
+ percpu_ref_kill(&cgrp->self.refcnt);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 22f43721d031d4..ce8be55e5e04b3 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -622,6 +622,12 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
+
+ exe_file = get_mm_exe_file(oldmm);
+ RCU_INIT_POINTER(mm->exe_file, exe_file);
++ /*
++ * We depend on the oldmm having properly denied write access to the
++ * exe_file already.
++ */
++ if (exe_file && deny_write_access(exe_file))
++ pr_warn_once("deny_write_access() failed in %s\n", __func__);
+ }
+
+ #ifdef CONFIG_MMU
+@@ -1414,11 +1420,20 @@ int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ */
+ old_exe_file = rcu_dereference_raw(mm->exe_file);
+
+- if (new_exe_file)
++ if (new_exe_file) {
++ /*
++ * We expect the caller (i.e., sys_execve) to have already denied
++ * write access, so this is unlikely to fail.
++ */
++ if (unlikely(deny_write_access(new_exe_file)))
++ return -EACCES;
+ get_file(new_exe_file);
++ }
+ rcu_assign_pointer(mm->exe_file, new_exe_file);
+- if (old_exe_file)
++ if (old_exe_file) {
++ allow_write_access(old_exe_file);
+ fput(old_exe_file);
++ }
+ return 0;
+ }
+
+@@ -1457,6 +1472,9 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ return ret;
+ }
+
++ ret = deny_write_access(new_exe_file);
++ if (ret)
++ return -EACCES;
+ get_file(new_exe_file);
+
+ /* set the new file */
+@@ -1465,8 +1483,10 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ rcu_assign_pointer(mm->exe_file, new_exe_file);
+ mmap_write_unlock(mm);
+
+- if (old_exe_file)
++ if (old_exe_file) {
++ allow_write_access(old_exe_file);
+ fput(old_exe_file);
++ }
+ return 0;
+ }
+
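The fork.c hunks above maintain one invariant: whichever file is currently published as mm->exe_file holds a write-access denial. The new file's denial is taken before it is published and the old file's denial is dropped only after it has been replaced, so there is no window where the published binary can be opened for writing. A rough userspace model of that handoff; deny_write_access() here is a stub counter, not the kernel helper's real semantics:

/* Sketch only: take the new deny before publish, drop the old after. */
#include <stdio.h>

struct file { int deny_write; const char *name; };

static int deny_write_access(struct file *f)
{
	f->deny_write++;	/* real kernel: fails if open for write */
	return 0;
}

static void allow_write_access(struct file *f)
{
	f->deny_write--;
}

static struct file *exe_file;	/* the published pointer */

static int set_exe_file(struct file *new)
{
	struct file *old = exe_file;

	if (new && deny_write_access(new))
		return -1;	/* -EACCES in the kernel */
	exe_file = new;		/* publish */
	if (old)
		allow_write_access(old);	/* only after unpublish */
	return 0;
}

int main(void)
{
	struct file a = { .name = "a.out" }, b = { .name = "b.out" };

	set_exe_file(&a);
	set_exe_file(&b);
	printf("%s deny=%d, %s deny=%d\n",
	       a.name, a.deny_write, b.name, b.deny_write);
	return 0;
}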
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index 6d37596deb1f12..d360fa44b234db 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -890,13 +890,15 @@ kfree_scale_init(void)
+ if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) {
+ pr_alert("ERROR: call_rcu() CBs are not being lazy as expected!\n");
+ WARN_ON_ONCE(1);
+- return -1;
++ firsterr = -1;
++ goto unwind;
+ }
+
+ if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) {
+ pr_alert("ERROR: call_rcu() CBs are being too lazy!\n");
+ WARN_ON_ONCE(1);
+- return -1;
++ firsterr = -1;
++ goto unwind;
+ }
+ }
+
+diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
+index 549c03336ee971..4dcbf8aa80ff73 100644
+--- a/kernel/rcu/srcutiny.c
++++ b/kernel/rcu/srcutiny.c
+@@ -122,8 +122,8 @@ void srcu_drive_gp(struct work_struct *wp)
+ ssp = container_of(wp, struct srcu_struct, srcu_work);
+ preempt_disable(); // Needed for PREEMPT_AUTO
+ if (ssp->srcu_gp_running || ULONG_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max))) {
+- return; /* Already running or nothing to do. */
+ preempt_enable();
++ return; /* Already running or nothing to do. */
+ }
+
+ /* Remove recently arrived callbacks and wait for readers. */
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index b1f883fcd9185a..3e486ccaa4ca34 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3511,7 +3511,7 @@ static int krc_count(struct kfree_rcu_cpu *krcp)
+ }
+
+ static void
+-schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
++__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ {
+ long delay, delay_left;
+
+@@ -3525,6 +3525,16 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay);
+ }
+
++static void
++schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&krcp->lock, flags);
++ __schedule_delayed_monitor_work(krcp);
++ raw_spin_unlock_irqrestore(&krcp->lock, flags);
++}
++
+ static void
+ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
+ {
+@@ -3836,7 +3846,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
+
+ // Set timer to drain after KFREE_DRAIN_JIFFIES.
+ if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
+- schedule_delayed_monitor_work(krcp);
++ __schedule_delayed_monitor_work(krcp);
+
+ unlock_return:
+ krc_this_cpu_unlock(krcp, flags);
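The kvfree_rcu() hunk above uses the usual kernel convention for lock-split helpers: the double-underscore variant __schedule_delayed_monitor_work() assumes the caller already holds krcp->lock (as kvfree_call_rcu() does), while the plain wrapper takes and releases the lock itself. A small pthread sketch of that convention, names illustrative:

/* Sketch only: __helper() requires the lock, helper() takes it. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int pending_work;

/* Caller must hold 'lock'. */
static void __queue_monitor_work(void)
{
	pending_work++;		/* state guarded by 'lock' */
}

/* Wrapper for callers that do not hold 'lock'. */
static void queue_monitor_work(void)
{
	pthread_mutex_lock(&lock);
	__queue_monitor_work();
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	queue_monitor_work();		/* unlocked context */

	pthread_mutex_lock(&lock);	/* already-locked context */
	__queue_monitor_work();
	pthread_mutex_unlock(&lock);

	printf("queued %d\n", pending_work);
	return 0;
}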
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 16865475120ba3..2605dd234a13c8 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -891,7 +891,18 @@ static void nocb_cb_wait(struct rcu_data *rdp)
+ swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
+ nocb_cb_wait_cond(rdp));
+ if (kthread_should_park()) {
+- kthread_parkme();
++ /*
++ * kthread_park() must be preceded by an rcu_barrier().
++ * But yet another rcu_barrier() might have sneaked in between
++ * the barrier callback execution and the callbacks counter
++ * decrement.
++ */
++ if (rdp->nocb_cb_sleep) {
++ rcu_nocb_lock_irqsave(rdp, flags);
++ WARN_ON_ONCE(rcu_segcblist_n_cbs(&rdp->cblist));
++ rcu_nocb_unlock_irqrestore(rdp, flags);
++ kthread_parkme();
++ }
+ } else if (READ_ONCE(rdp->nocb_cb_sleep)) {
+ WARN_ON(signal_pending(current));
+ trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty"));
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index c6ba15388ea706..28c77904ea749f 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -783,9 +783,8 @@ static int sugov_init(struct cpufreq_policy *policy)
+ if (ret)
+ goto fail;
+
+- sugov_eas_rebuild_sd();
+-
+ out:
++ sugov_eas_rebuild_sd();
+ mutex_unlock(&global_tunables_lock);
+ return 0;
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 751d73d500e51d..16613631543f18 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3567,12 +3567,7 @@ static void scx_ops_exit_task(struct task_struct *p)
+
+ void init_scx_entity(struct sched_ext_entity *scx)
+ {
+- /*
+- * init_idle() calls this function again after fork sequence is
+- * complete. Don't touch ->tasks_node as it's already linked.
+- */
+- memset(scx, 0, offsetof(struct sched_ext_entity, tasks_node));
+-
++ memset(scx, 0, sizeof(*scx));
+ INIT_LIST_HEAD(&scx->dsq_list.node);
+ RB_CLEAR_NODE(&scx->dsq_priq);
+ scx->sticky_cpu = -1;
+@@ -6478,6 +6473,8 @@ __bpf_kfunc_end_defs();
+
+ BTF_KFUNCS_START(scx_kfunc_ids_unlocked)
+ BTF_ID_FLAGS(func, scx_bpf_create_dsq, KF_SLEEPABLE)
++BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_slice)
++BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_vtime)
+ BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq, KF_RCU)
+ BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime_from_dsq, KF_RCU)
+ BTF_KFUNCS_END(scx_kfunc_ids_unlocked)
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index 642647f5046be0..1ad88e97b4ebcf 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -556,9 +556,9 @@ EXPORT_SYMBOL(ns_to_timespec64);
+ * - all other values are converted to jiffies by either multiplying
+ * the input value by a factor or dividing it with a factor and
+ * handling any 32-bit overflows.
+- * for the details see __msecs_to_jiffies()
++ * for the details see _msecs_to_jiffies()
+ *
+- * __msecs_to_jiffies() checks for the passed in value being a constant
++ * msecs_to_jiffies() checks for the passed in value being a constant
+ * via __builtin_constant_p() allowing gcc to eliminate most of the
+ * code, __msecs_to_jiffies() is called if the value passed does not
+ * allow constant folding and the actual conversion must be done at
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 0fc9d066a7be46..7835f9b376e76a 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -2422,7 +2422,8 @@ static inline void __run_timers(struct timer_base *base)
+
+ static void __run_timer_base(struct timer_base *base)
+ {
+- if (time_before(jiffies, base->next_expiry))
++ /* Can race against a remote CPU updating next_expiry under the lock */
++ if (time_before(jiffies, READ_ONCE(base->next_expiry)))
+ return;
+
+ timer_base_lock_expiry(base);
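The timer fix above is a lockless-read repair: base->next_expiry is written by a remote CPU under the base lock, so the unlocked early-out must fetch it with READ_ONCE() to get a single, untorn load that the compiler cannot re-read or split. The closest userspace analogue is a relaxed atomic load, as in this hedged sketch with invented names:

/* Sketch only: one untorn load for a field updated under a lock. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned long next_expiry;

/* Writer side: runs with the base lock held in the kernel code. */
static void update_next_expiry(unsigned long when)
{
	atomic_store_explicit(&next_expiry, when, memory_order_relaxed);
}

/* Reader side: cheap early-out before taking the expensive lock. */
static bool timers_due(unsigned long now)
{
	unsigned long exp = atomic_load_explicit(&next_expiry,
						 memory_order_relaxed);
	return now >= exp;
}

int main(void)
{
	update_next_expiry(100);
	printf("due at 99: %d, due at 100: %d\n",
	       timers_due(99), timers_due(100));
	return 0;
}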
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 630b763e52402f..792dc35414a3c3 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -3205,7 +3205,6 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ struct bpf_prog *prog = link->link.prog;
+ bool sleepable = prog->sleepable;
+ struct bpf_run_ctx *old_run_ctx;
+- int err = 0;
+
+ if (link->task && !same_thread_group(current, link->task))
+ return 0;
+@@ -3218,7 +3217,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ migrate_disable();
+
+ old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+- err = bpf_prog_run(link->link.prog, regs);
++ bpf_prog_run(link->link.prog, regs);
+ bpf_reset_run_ctx(old_run_ctx);
+
+ migrate_enable();
+@@ -3227,7 +3226,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ rcu_read_unlock_trace();
+ else
+ rcu_read_unlock();
+- return err;
++ return 0;
+ }
+
+ static bool
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 05e7912418126b..3ff9caa4a71bbd 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -352,10 +352,16 @@ void perf_uprobe_destroy(struct perf_event *p_event)
+ int perf_trace_add(struct perf_event *p_event, int flags)
+ {
+ struct trace_event_call *tp_event = p_event->tp_event;
++ struct hw_perf_event *hwc = &p_event->hw;
+
+ if (!(flags & PERF_EF_START))
+ p_event->hw.state = PERF_HES_STOPPED;
+
++ if (is_sampling_event(p_event)) {
++ hwc->last_period = hwc->sample_period;
++ perf_swevent_set_period(p_event);
++ }
++
+ /*
+ * If TRACE_REG_PERF_ADD returns false; no custom action was performed
+ * and we need to take the default action of enqueueing our event on
+diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
+index 2abc78367dd110..5222c6393f1168 100644
+--- a/lib/overflow_kunit.c
++++ b/lib/overflow_kunit.c
+@@ -1187,7 +1187,7 @@ static void DEFINE_FLEX_test(struct kunit *test)
+ {
+ /* Using _RAW_ on a __counted_by struct will initialize "counter" to zero */
+ DEFINE_RAW_FLEX(struct foo, two_but_zero, array, 2);
+-#if __has_attribute(__counted_by__)
++#ifdef CONFIG_CC_HAS_COUNTED_BY
+ int expected_raw_size = sizeof(struct foo);
+ #else
+ int expected_raw_size = sizeof(struct foo) + 2 * sizeof(s16);
+diff --git a/lib/string_helpers.c b/lib/string_helpers.c
+index 4f887aa62fa0cd..91fa37b5c510a7 100644
+--- a/lib/string_helpers.c
++++ b/lib/string_helpers.c
+@@ -57,7 +57,7 @@ int string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
+ static const unsigned int rounding[] = { 500, 50, 5 };
+ int i = 0, j;
+ u32 remainder = 0, sf_cap;
+- char tmp[8];
++ char tmp[12];
+ const char *unit;
+
+ tmp[0] = '\0';
+diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
+index 989a12a6787214..6dc234913dd58e 100644
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -120,6 +120,9 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (unlikely(count <= 0))
+ return 0;
+
++ kasan_check_write(dst, count);
++ check_object_size(dst, count, false);
++
+ if (can_do_masked_user_access()) {
+ long retval;
+
+@@ -142,8 +145,6 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (max > count)
+ max = count;
+
+- kasan_check_write(dst, count);
+- check_object_size(dst, count, false);
+ if (user_read_access_begin(src, max)) {
+ retval = do_strncpy_from_user(dst, src, count, max);
+ user_read_access_end();
+diff --git a/mm/internal.h b/mm/internal.h
+index 64c2eb0b160e16..9bb098e78f1556 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -48,7 +48,7 @@ struct folio_batch;
+ * when we specify __GFP_NOWARN.
+ */
+ #define WARN_ON_ONCE_GFP(cond, gfp) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(!(gfp & __GFP_NOWARN) && __ret_warn_once && !__warned)) { \
+diff --git a/net/9p/trans_usbg.c b/net/9p/trans_usbg.c
+index 975b76839dca1a..6b694f117aef29 100644
+--- a/net/9p/trans_usbg.c
++++ b/net/9p/trans_usbg.c
+@@ -909,9 +909,9 @@ static struct usb_function_instance *usb9pfs_alloc_instance(void)
+ usb9pfs_opts->buflen = DEFAULT_BUFLEN;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+- if (IS_ERR(dev)) {
++ if (!dev) {
+ kfree(usb9pfs_opts);
+- return ERR_CAST(dev);
++ return ERR_PTR(-ENOMEM);
+ }
+
+ usb9pfs_opts->dev = dev;
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index dfdbe1ca533872..b9ff69c7522a19 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -286,7 +286,7 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+ if (!priv->rings[i].intf)
+ break;
+ if (priv->rings[i].irq > 0)
+- unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
++ unbind_from_irqhandler(priv->rings[i].irq, ring);
+ if (priv->rings[i].data.in) {
+ for (j = 0;
+ j < (1 << priv->rings[i].intf->ring_order);
+@@ -465,6 +465,7 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
+ goto error;
+ }
+
++ xenbus_switch_state(dev, XenbusStateInitialised);
+ return 0;
+
+ error_xenbus:
+@@ -512,8 +513,10 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev,
+ break;
+
+ case XenbusStateInitWait:
+- if (!xen_9pfs_front_init(dev))
+- xenbus_switch_state(dev, XenbusStateInitialised);
++ if (dev->state != XenbusStateInitialising)
++ break;
++
++ xen_9pfs_front_init(dev);
+ break;
+
+ case XenbusStateConnected:
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index c4c74b82ed211c..6354cdf9c2b372 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -952,6 +952,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ conn->tx_power = HCI_TX_POWER_INVALID;
+ conn->max_tx_power = HCI_TX_POWER_INVALID;
+ conn->sync_handle = HCI_SYNC_HANDLE_INVALID;
++ conn->sid = HCI_SID_INVALID;
+
+ set_bit(HCI_CONN_POWER_SAVE, &conn->flags);
+ conn->disc_timeout = HCI_DISCONN_TIMEOUT;
+@@ -2062,105 +2063,225 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
+
+ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ {
+- struct hci_cp_le_pa_create_sync *cp = data;
+-
+ bt_dev_dbg(hdev, "");
+
+ if (err)
+ bt_dev_err(hdev, "Unable to create PA: %d", err);
++}
+
+- kfree(cp);
++static bool hci_conn_check_create_pa_sync(struct hci_conn *conn)
++{
++ if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID)
++ return false;
++
++ return true;
+ }
+
+ static int create_pa_sync(struct hci_dev *hdev, void *data)
+ {
+- struct hci_cp_le_pa_create_sync *cp = data;
+- int err;
++ struct hci_cp_le_pa_create_sync *cp = NULL;
++ struct hci_conn *conn;
++ int err = 0;
+
+- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
+- sizeof(*cp), cp, HCI_CMD_TIMEOUT);
+- if (err) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- return err;
++ hci_dev_lock(hdev);
++
++ rcu_read_lock();
++
++ /* The spec allows only one pending LE Periodic Advertising Create
++ * Sync command at a time. If the command is pending now, don't do
++ * anything. We check for pending connections after each PA Sync
++ * Established event.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2493:
++ *
++ * If the Host issues this command when another HCI_LE_Periodic_
++ * Advertising_Create_Sync command is pending, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags))
++ goto unlock;
++ }
++
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (hci_conn_check_create_pa_sync(conn)) {
++ struct bt_iso_qos *qos = &conn->iso_qos;
++
++ cp = kzalloc(sizeof(*cp), GFP_KERNEL);
++ if (!cp) {
++ err = -ENOMEM;
++ goto unlock;
++ }
++
++ cp->options = qos->bcast.options;
++ cp->sid = conn->sid;
++ cp->addr_type = conn->dst_type;
++ bacpy(&cp->addr, &conn->dst);
++ cp->skip = cpu_to_le16(qos->bcast.skip);
++ cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
++ cp->sync_cte_type = qos->bcast.sync_cte_type;
++
++ break;
++ }
+ }
+
+- return hci_update_passive_scan_sync(hdev);
++unlock:
++ rcu_read_unlock();
++
++ hci_dev_unlock(hdev);
++
++ if (cp) {
++ hci_dev_set_flag(hdev, HCI_PA_SYNC);
++ set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
++ sizeof(*cp), cp, HCI_CMD_TIMEOUT);
++ if (!err)
++ err = hci_update_passive_scan_sync(hdev);
++
++ kfree(cp);
++
++ if (err) {
++ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
++ clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++ }
++ }
++
++ return err;
++}
++
++int hci_pa_create_sync_pending(struct hci_dev *hdev)
++{
++ /* Queue start pa_create_sync and scan */
++ return hci_cmd_sync_queue(hdev, create_pa_sync,
++ NULL, create_pa_complete);
+ }
+
+ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ __u8 dst_type, __u8 sid,
+ struct bt_iso_qos *qos)
+ {
+- struct hci_cp_le_pa_create_sync *cp;
+ struct hci_conn *conn;
+- int err;
+-
+- if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
+- return ERR_PTR(-EBUSY);
+
+ conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE);
+ if (IS_ERR(conn))
+ return conn;
+
+ conn->iso_qos = *qos;
++ conn->dst_type = dst_type;
++ conn->sid = sid;
+ conn->state = BT_LISTEN;
+
+ hci_conn_hold(conn);
+
+- cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+- if (!cp) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- hci_conn_drop(conn);
+- return ERR_PTR(-ENOMEM);
++ hci_pa_create_sync_pending(hdev);
++
++ return conn;
++}
++
++static bool hci_conn_check_create_big_sync(struct hci_conn *conn)
++{
++ if (!conn->num_bis)
++ return false;
++
++ return true;
++}
++
++static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err)
++{
++ bt_dev_dbg(hdev, "");
++
++ if (err)
++ bt_dev_err(hdev, "Unable to create BIG sync: %d", err);
++}
++
++static int big_create_sync(struct hci_dev *hdev, void *data)
++{
++ DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
++ struct hci_conn *conn;
++
++ rcu_read_lock();
++
++ pdu->num_bis = 0;
++
++ /* The spec allows only one pending LE BIG Create Sync command at
++ * a time. If the command is pending now, don't do anything. We
++ * check for pending connections after each BIG Sync Established
++ * event.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2586:
++ *
++ * If the Host sends this command when the Controller is in the
++ * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
++ * Established event has not been generated, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags))
++ goto unlock;
+ }
+
+- cp->options = qos->bcast.options;
+- cp->sid = sid;
+- cp->addr_type = dst_type;
+- bacpy(&cp->addr, dst);
+- cp->skip = cpu_to_le16(qos->bcast.skip);
+- cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
+- cp->sync_cte_type = qos->bcast.sync_cte_type;
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (hci_conn_check_create_big_sync(conn)) {
++ struct bt_iso_qos *qos = &conn->iso_qos;
+
+- /* Queue start pa_create_sync and scan */
+- err = hci_cmd_sync_queue(hdev, create_pa_sync, cp, create_pa_complete);
+- if (err < 0) {
+- hci_conn_drop(conn);
+- kfree(cp);
+- return ERR_PTR(err);
++ set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ pdu->handle = qos->bcast.big;
++ pdu->sync_handle = cpu_to_le16(conn->sync_handle);
++ pdu->encryption = qos->bcast.encryption;
++ memcpy(pdu->bcode, qos->bcast.bcode,
++ sizeof(pdu->bcode));
++ pdu->mse = qos->bcast.mse;
++ pdu->timeout = cpu_to_le16(qos->bcast.timeout);
++ pdu->num_bis = conn->num_bis;
++ memcpy(pdu->bis, conn->bis, conn->num_bis);
++
++ break;
++ }
+ }
+
+- return conn;
++unlock:
++ rcu_read_unlock();
++
++ if (!pdu->num_bis)
++ return 0;
++
++ return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
++ struct_size(pdu, bis, pdu->num_bis), pdu);
++}
++
++int hci_le_big_create_sync_pending(struct hci_dev *hdev)
++{
++ /* Queue big_create_sync */
++ return hci_cmd_sync_queue_once(hdev, big_create_sync,
++ NULL, big_create_sync_complete);
+ }
+
+ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+ struct bt_iso_qos *qos,
+ __u16 sync_handle, __u8 num_bis, __u8 bis[])
+ {
+- DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
+ int err;
+
+- if (num_bis < 0x01 || num_bis > pdu->num_bis)
++ if (num_bis < 0x01 || num_bis > ISO_MAX_NUM_BIS)
+ return -EINVAL;
+
+ err = qos_set_big(hdev, qos);
+ if (err)
+ return err;
+
+- if (hcon)
+- hcon->iso_qos.bcast.big = qos->bcast.big;
++ if (hcon) {
++ /* Update hcon QoS */
++ hcon->iso_qos = *qos;
+
+- pdu->handle = qos->bcast.big;
+- pdu->sync_handle = cpu_to_le16(sync_handle);
+- pdu->encryption = qos->bcast.encryption;
+- memcpy(pdu->bcode, qos->bcast.bcode, sizeof(pdu->bcode));
+- pdu->mse = qos->bcast.mse;
+- pdu->timeout = cpu_to_le16(qos->bcast.timeout);
+- pdu->num_bis = num_bis;
+- memcpy(pdu->bis, bis, num_bis);
++ hcon->num_bis = num_bis;
++ memcpy(hcon->bis, bis, num_bis);
++ }
+
+- return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
+- struct_size(pdu, bis, num_bis), pdu);
++ return hci_le_big_create_sync_pending(hdev);
+ }
+
+ static void create_big_complete(struct hci_dev *hdev, void *data, int err)
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 0bbad90ddd6f87..2e4bd3e961ce09 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6345,7 +6345,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ struct hci_ev_le_pa_sync_established *ev = data;
+ int mask = hdev->link_mode;
+ __u8 flags = 0;
+- struct hci_conn *pa_sync;
++ struct hci_conn *pa_sync, *conn;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+
+@@ -6353,6 +6353,20 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+
++ conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr,
++ ev->bdaddr_type);
++ if (!conn) {
++ bt_dev_err(hdev,
++ "Unable to find connection for dst %pMR sid 0x%2.2x",
++ &ev->bdaddr, ev->sid);
++ goto unlock;
++ }
++
++ clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ conn->sync_handle = le16_to_cpu(ev->handle);
++ conn->sid = HCI_SID_INVALID;
++
+ mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ISO_LINK, &flags);
+ if (!(mask & HCI_LM_ACCEPT)) {
+ hci_le_pa_term_sync(hdev, ev->handle);
+@@ -6379,6 +6393,9 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ }
+
+ unlock:
++ /* Handle any other pending PA sync command */
++ hci_pa_create_sync_pending(hdev);
++
+ hci_dev_unlock(hdev);
+ }
+
+@@ -6896,7 +6913,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ struct sk_buff *skb)
+ {
+ struct hci_evt_le_big_sync_estabilished *ev = data;
+- struct hci_conn *bis;
++ struct hci_conn *bis, *conn;
+ int i;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+@@ -6907,6 +6924,20 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_lock(hdev);
+
++ conn = hci_conn_hash_lookup_big_sync_pend(hdev, ev->handle,
++ ev->num_bis);
++ if (!conn) {
++ bt_dev_err(hdev,
++ "Unable to find connection for big 0x%2.2x",
++ ev->handle);
++ goto unlock;
++ }
++
++ clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ conn->num_bis = 0;
++ memset(conn->bis, 0, sizeof(conn->num_bis));
++
+ for (i = 0; i < ev->num_bis; i++) {
+ u16 handle = le16_to_cpu(ev->bis[i]);
+ __le32 interval;
+@@ -6956,6 +6987,10 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ hci_connect_cfm(bis, ev->status);
+ }
+
++unlock:
++ /* Handle any other pending BIG sync command */
++ hci_le_big_create_sync_pending(hdev);
++
+ hci_dev_unlock(hdev);
+ }
+
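The hci_conn.c and hci_event.c changes above serialize PA and BIG sync creation because, per the quoted spec text, the controller accepts only one outstanding create-sync command: pending connections are flagged, one command is issued per pass, and the sync-established event handler re-kicks the scan for the next pending entry. A toy model of that one-in-flight pattern, all names invented:

/* Sketch only: flag pending requests, keep one command in flight. */
#include <stdbool.h>
#include <stdio.h>

#define NCONN 3

struct conn { bool wants_sync; bool cmd_in_flight; };
static struct conn conns[NCONN] = { { true }, { false }, { true } };

static void issue_next_sync(void)
{
	/* Only one command may be outstanding at a time. */
	for (int i = 0; i < NCONN; i++)
		if (conns[i].cmd_in_flight)
			return;

	for (int i = 0; i < NCONN; i++) {
		if (conns[i].wants_sync) {
			conns[i].cmd_in_flight = true;
			printf("issue sync for conn %d\n", i);
			return;	/* wait for the completion event */
		}
	}
}

/* Completion handler: clear the flags, then look for more work. */
static void sync_established(int i)
{
	conns[i].cmd_in_flight = false;
	conns[i].wants_sync = false;
	issue_next_sync();
}

int main(void)
{
	issue_next_sync();	/* picks conn 0 */
	sync_established(0);	/* completion re-kicks: picks conn 2 */
	sync_established(2);
	return 0;
}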
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 367e32fe30eb84..4b54dbbf0729a3 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -21,16 +21,6 @@ static const struct device_type bt_link = {
+ .release = bt_link_release,
+ };
+
+-/*
+- * The rfcomm tty device will possibly retain even when conn
+- * is down, and sysfs doesn't support move zombie device,
+- * so we should move the device before conn device is destroyed.
+- */
+-static int __match_tty(struct device *dev, void *data)
+-{
+- return !strncmp(dev_name(dev), "rfcomm", 6);
+-}
+-
+ void hci_conn_init_sysfs(struct hci_conn *conn)
+ {
+ struct hci_dev *hdev = conn->hdev;
+@@ -73,10 +63,13 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
+ return;
+ }
+
++ /* If there are devices using the connection as parent, reset it to NULL
++ * before unregistering the device.
++ */
+ while (1) {
+ struct device *dev;
+
+- dev = device_find_child(&conn->dev, NULL, __match_tty);
++ dev = device_find_any_child(&conn->dev);
+ if (!dev)
+ break;
+ device_move(dev, NULL, DPM_ORDER_DEV_LAST);
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 7a83e400ac77a0..5e2d9758bd3c1c 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -35,6 +35,7 @@ struct iso_conn {
+ struct sk_buff *rx_skb;
+ __u32 rx_len;
+ __u16 tx_sn;
++ struct kref ref;
+ };
+
+ #define iso_conn_lock(c) spin_lock(&(c)->lock)
+@@ -93,6 +94,49 @@ static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
+ #define ISO_CONN_TIMEOUT (HZ * 40)
+ #define ISO_DISCONN_TIMEOUT (HZ * 2)
+
++static void iso_conn_free(struct kref *ref)
++{
++ struct iso_conn *conn = container_of(ref, struct iso_conn, ref);
++
++ BT_DBG("conn %p", conn);
++
++ if (conn->sk)
++ iso_pi(conn->sk)->conn = NULL;
++
++ if (conn->hcon) {
++ conn->hcon->iso_data = NULL;
++ hci_conn_drop(conn->hcon);
++ }
++
++ /* Ensure no more work items will run since hci_conn has been dropped */
++ disable_delayed_work_sync(&conn->timeout_work);
++
++ kfree(conn);
++}
++
++static void iso_conn_put(struct iso_conn *conn)
++{
++ if (!conn)
++ return;
++
++ BT_DBG("conn %p refcnt %d", conn, kref_read(&conn->ref));
++
++ kref_put(&conn->ref, iso_conn_free);
++}
++
++static struct iso_conn *iso_conn_hold_unless_zero(struct iso_conn *conn)
++{
++ if (!conn)
++ return NULL;
++
++ BT_DBG("conn %p refcnt %u", conn, kref_read(&conn->ref));
++
++ if (!kref_get_unless_zero(&conn->ref))
++ return NULL;
++
++ return conn;
++}
++
+ static struct sock *iso_sock_hold(struct iso_conn *conn)
+ {
+ if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk))
+@@ -109,9 +153,14 @@ static void iso_sock_timeout(struct work_struct *work)
+ timeout_work.work);
+ struct sock *sk;
+
++ conn = iso_conn_hold_unless_zero(conn);
++ if (!conn)
++ return;
++
+ iso_conn_lock(conn);
+ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
++ iso_conn_put(conn);
+
+ if (!sk)
+ return;
+@@ -149,9 +198,14 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ {
+ struct iso_conn *conn = hcon->iso_data;
+
++ conn = iso_conn_hold_unless_zero(conn);
+ if (conn) {
+- if (!conn->hcon)
++ if (!conn->hcon) {
++ iso_conn_lock(conn);
+ conn->hcon = hcon;
++ iso_conn_unlock(conn);
++ }
++ iso_conn_put(conn);
+ return conn;
+ }
+
+@@ -159,6 +213,7 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ if (!conn)
+ return NULL;
+
++ kref_init(&conn->ref);
+ spin_lock_init(&conn->lock);
+ INIT_DELAYED_WORK(&conn->timeout_work, iso_sock_timeout);
+
+@@ -178,17 +233,15 @@ static void iso_chan_del(struct sock *sk, int err)
+ struct sock *parent;
+
+ conn = iso_pi(sk)->conn;
++ iso_pi(sk)->conn = NULL;
+
+ BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
+
+ if (conn) {
+ iso_conn_lock(conn);
+ conn->sk = NULL;
+- iso_pi(sk)->conn = NULL;
+ iso_conn_unlock(conn);
+-
+- if (conn->hcon)
+- hci_conn_drop(conn->hcon);
++ iso_conn_put(conn);
+ }
+
+ sk->sk_state = BT_CLOSED;
+@@ -210,6 +263,7 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ struct iso_conn *conn = hcon->iso_data;
+ struct sock *sk;
+
++ conn = iso_conn_hold_unless_zero(conn);
+ if (!conn)
+ return;
+
+@@ -219,20 +273,18 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ iso_conn_lock(conn);
+ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
++ iso_conn_put(conn);
+
+- if (sk) {
+- lock_sock(sk);
+- iso_sock_clear_timer(sk);
+- iso_chan_del(sk, err);
+- release_sock(sk);
+- sock_put(sk);
++ if (!sk) {
++ iso_conn_put(conn);
++ return;
+ }
+
+- /* Ensure no more work items will run before freeing conn. */
+- cancel_delayed_work_sync(&conn->timeout_work);
+-
+- hcon->iso_data = NULL;
+- kfree(conn);
++ lock_sock(sk);
++ iso_sock_clear_timer(sk);
++ iso_chan_del(sk, err);
++ release_sock(sk);
++ sock_put(sk);
+ }
+
+ static int __iso_chan_add(struct iso_conn *conn, struct sock *sk,
+@@ -652,6 +704,8 @@ static void iso_sock_destruct(struct sock *sk)
+ {
+ BT_DBG("sk %p", sk);
+
++ iso_conn_put(iso_pi(sk)->conn);
++
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
+ }
+@@ -711,6 +765,7 @@ static void iso_sock_disconn(struct sock *sk)
+ */
+ if (bis_sk) {
+ hcon->state = BT_OPEN;
++ hcon->iso_data = NULL;
+ iso_pi(sk)->conn->hcon = NULL;
+ iso_sock_clear_timer(sk);
+ iso_chan_del(sk, bt_to_errno(hcon->abort_reason));
+@@ -720,7 +775,6 @@ static void iso_sock_disconn(struct sock *sk)
+ }
+
+ sk->sk_state = BT_DISCONN;
+- iso_sock_set_timer(sk, ISO_DISCONN_TIMEOUT);
+ iso_conn_lock(iso_pi(sk)->conn);
+ hci_conn_drop(iso_pi(sk)->conn->hcon);
+ iso_pi(sk)->conn->hcon = NULL;
+@@ -1338,6 +1392,13 @@ static void iso_conn_big_sync(struct sock *sk)
+ if (!hdev)
+ return;
+
++ /* hci_le_big_create_sync requires hdev lock to be held, since
++ * it enqueues the HCI LE BIG Create Sync command via
++ * hci_cmd_sync_queue_once, which checks hdev flags that might
++ * change.
++ */
++ hci_dev_lock(hdev);
++
+ if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+ err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+ &iso_pi(sk)->qos,
+@@ -1348,6 +1409,8 @@ static void iso_conn_big_sync(struct sock *sk)
+ bt_dev_err(hdev, "hci_le_big_create_sync: %d",
+ err);
+ }
++
++ hci_dev_unlock(hdev);
+ }
+
+ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+@@ -1942,6 +2005,7 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (sk) {
+ int err;
++ struct hci_conn *hcon = iso_pi(sk)->conn->hcon;
+
+ iso_pi(sk)->qos.bcast.encryption = ev2->encryption;
+
+@@ -1950,7 +2014,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) &&
+ !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+- err = hci_le_big_create_sync(hdev, NULL,
++ err = hci_le_big_create_sync(hdev,
++ hcon,
+ &iso_pi(sk)->qos,
+ iso_pi(sk)->sync_handle,
+ iso_pi(sk)->bc_num_bis,
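The iso.c changes above put struct iso_conn under a kref so that the timeout worker, socket teardown and hci teardown can race safely: each path takes a reference only if the count has not already hit zero (kref_get_unless_zero() in the kernel), and the final put frees the object. A self-contained C11 sketch of that rule, with illustrative names:

/* Sketch only: get-unless-zero refcounting with free on last put. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct conn {
	_Atomic int ref;
	/* ...connection state... */
};

static struct conn *conn_hold_unless_zero(struct conn *c)
{
	int old;

	if (!c)
		return NULL;
	old = atomic_load(&c->ref);
	while (old > 0)		/* never resurrect a dying object */
		if (atomic_compare_exchange_weak(&c->ref, &old, old + 1))
			return c;
	return NULL;
}

static void conn_put(struct conn *c)
{
	if (c && atomic_fetch_sub(&c->ref, 1) == 1) {
		printf("last put: freeing\n");
		free(c);
	}
}

int main(void)
{
	struct conn *c = calloc(1, sizeof(*c));

	atomic_store(&c->ref, 1);		/* like kref_init() */
	if (conn_hold_unless_zero(c)) {		/* 1 -> 2 */
		/* ...use the connection... */
		conn_put(c);			/* 2 -> 1 */
	}
	conn_put(c);				/* 1 -> 0, frees */
	return 0;
}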
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index a429661b676a83..2343e15f8938ec 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1317,7 +1317,8 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ struct mgmt_mode *cp;
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+ return;
+
+ cp = cmd->param;
+@@ -1350,7 +1351,13 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_powered_sync(struct hci_dev *hdev, void *data)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+- struct mgmt_mode *cp = cmd->param;
++ struct mgmt_mode *cp;
++
++ /* Make sure cmd still outstanding. */
++ if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++ return -ECANCELED;
++
++ cp = cmd->param;
+
+ BT_DBG("%s", hdev->name);
+
+@@ -1510,7 +1517,8 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ bt_dev_dbg(hdev, "err %d", err);
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
+ return;
+
+ hci_dev_lock(hdev);
+@@ -1684,7 +1692,8 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ bt_dev_dbg(hdev, "err %d", err);
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
+ return;
+
+ hci_dev_lock(hdev);
+@@ -1916,7 +1925,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ bool changed;
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
++ if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
+ return;
+
+ if (err) {
+@@ -3782,7 +3791,8 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+
+ bt_dev_dbg(hdev, "err %d", err);
+
+- if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
+ return;
+
+ if (status) {
+@@ -3957,7 +3967,8 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ struct sk_buff *skb = cmd->skb;
+ u8 status = mgmt_status(err);
+
+- if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+ return;
+
+ if (!status) {
+@@ -5848,13 +5859,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+
++ bt_dev_dbg(hdev, "err %d", err);
++
++ if (err == -ECANCELED)
++ return;
++
+ if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
+ cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
+ cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
+ return;
+
+- bt_dev_dbg(hdev, "err %d", err);
+-
+ mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+ cmd->param, 1);
+ mgmt_pending_remove(cmd);
+@@ -6087,7 +6101,8 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+
+- if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
+ return;
+
+ bt_dev_dbg(hdev, "err %d", err);
+@@ -8078,7 +8093,8 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ u8 status = mgmt_status(err);
+ u16 eir_len;
+
+- if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+ return;
+
+ if (!status) {
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index f48250e3f2e103..8af1bf518321fd 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -729,7 +729,8 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
+ struct sock *l2cap_sk;
+ struct l2cap_conn *conn;
+ struct rfcomm_conninfo cinfo;
+- int len, err = 0;
++ int err = 0;
++ size_t len;
+ u32 opt;
+
+ BT_DBG("sk %p", sk);
+@@ -783,7 +784,7 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
+ cinfo.hci_handle = conn->hcon->handle;
+ memcpy(cinfo.dev_class, conn->hcon->dev_class, 3);
+
+- len = min_t(unsigned int, len, sizeof(cinfo));
++ len = min(len, sizeof(cinfo));
+ if (copy_to_user(optval, (char *) &cinfo, len))
+ err = -EFAULT;
+
+@@ -802,7 +803,8 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
+ {
+ struct sock *sk = sock->sk;
+ struct bt_security sec;
+- int len, err = 0;
++ int err = 0;
++ size_t len;
+
+ BT_DBG("sk %p", sk);
+
+@@ -827,7 +829,7 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
+ sec.level = rfcomm_pi(sk)->sec_level;
+ sec.key_size = 0;
+
+- len = min_t(unsigned int, len, sizeof(sec));
++ len = min(len, sizeof(sec));
+ if (copy_to_user(optval, (char *) &sec, len))
+ err = -EFAULT;
+
+diff --git a/net/core/filter.c b/net/core/filter.c
+index fb56567c551ed6..9a459213d283f1 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2621,18 +2621,16 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
+
+ static void sk_msg_reset_curr(struct sk_msg *msg)
+ {
+- u32 i = msg->sg.start;
+- u32 len = 0;
+-
+- do {
+- len += sk_msg_elem(msg, i)->length;
+- sk_msg_iter_var_next(i);
+- if (len >= msg->sg.size)
+- break;
+- } while (i != msg->sg.end);
++ if (!msg->sg.size) {
++ msg->sg.curr = msg->sg.start;
++ msg->sg.copybreak = 0;
++ } else {
++ u32 i = msg->sg.end;
+
+- msg->sg.curr = i;
+- msg->sg.copybreak = 0;
++ sk_msg_iter_var_prev(i);
++ msg->sg.curr = i;
++ msg->sg.copybreak = msg->sg.data[i].length;
++ }
+ }
+
+ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+@@ -2795,7 +2793,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ sk_msg_iter_var_next(i);
+ } while (i != msg->sg.end);
+
+- if (start >= offset + l)
++ if (start > offset + l)
+ return -EINVAL;
+
+ space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
+@@ -2820,6 +2818,8 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+
+ raw = page_address(page);
+
++ if (i == msg->sg.end)
++ sk_msg_iter_var_prev(i);
+ psge = sk_msg_elem(msg, i);
+ front = start - offset;
+ back = psge->length - front;
+@@ -2836,7 +2836,13 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ }
+
+ put_page(sg_page(psge));
+- } else if (start - offset) {
++ new = i;
++ goto place_new;
++ }
++
++ if (start - offset) {
++ if (i == msg->sg.end)
++ sk_msg_iter_var_prev(i);
+ psge = sk_msg_elem(msg, i);
+ rsge = sk_msg_elem_cpy(msg, i);
+
+@@ -2847,39 +2853,44 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ sk_msg_iter_var_next(i);
+ sg_unmark_end(psge);
+ sg_unmark_end(&rsge);
+- sk_msg_iter_next(msg, end);
+ }
+
+ /* Slot(s) to place newly allocated data */
++ sk_msg_iter_next(msg, end);
+ new = i;
++ sk_msg_iter_var_next(i);
++
++ if (i == msg->sg.end) {
++ if (!rsge.length)
++ goto place_new;
++ sk_msg_iter_next(msg, end);
++ goto place_new;
++ }
+
+ /* Shift one or two slots as needed */
+- if (!copy) {
+- sge = sk_msg_elem_cpy(msg, i);
++ sge = sk_msg_elem_cpy(msg, new);
++ sg_unmark_end(&sge);
+
++ nsge = sk_msg_elem_cpy(msg, i);
++ if (rsge.length) {
+ sk_msg_iter_var_next(i);
+- sg_unmark_end(&sge);
++ nnsge = sk_msg_elem_cpy(msg, i);
+ sk_msg_iter_next(msg, end);
++ }
+
+- nsge = sk_msg_elem_cpy(msg, i);
++ while (i != msg->sg.end) {
++ msg->sg.data[i] = sge;
++ sge = nsge;
++ sk_msg_iter_var_next(i);
+ if (rsge.length) {
+- sk_msg_iter_var_next(i);
++ nsge = nnsge;
+ nnsge = sk_msg_elem_cpy(msg, i);
+- }
+-
+- while (i != msg->sg.end) {
+- msg->sg.data[i] = sge;
+- sge = nsge;
+- sk_msg_iter_var_next(i);
+- if (rsge.length) {
+- nsge = nnsge;
+- nnsge = sk_msg_elem_cpy(msg, i);
+- } else {
+- nsge = sk_msg_elem_cpy(msg, i);
+- }
++ } else {
++ nsge = sk_msg_elem_cpy(msg, i);
+ }
+ }
+
++place_new:
+ /* Place newly allocated data buffer */
+ sk_mem_charge(msg->sk, len);
+ msg->sg.size += len;
+@@ -2908,8 +2919,10 @@ static const struct bpf_func_proto bpf_msg_push_data_proto = {
+
+ static void sk_msg_shift_left(struct sk_msg *msg, int i)
+ {
++ struct scatterlist *sge = sk_msg_elem(msg, i);
+ int prev;
+
++ put_page(sg_page(sge));
+ do {
+ prev = i;
+ sk_msg_iter_var_next(i);
+@@ -2946,6 +2959,9 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ if (unlikely(flags))
+ return -EINVAL;
+
++ if (unlikely(len == 0))
++ return 0;
++
+ /* First find the starting scatterlist element */
+ i = msg->sg.start;
+ do {
+@@ -2958,7 +2974,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ } while (i != msg->sg.end);
+
+ /* Bounds checks: start and pop must be inside message */
+- if (start >= offset + l || last >= msg->sg.size)
++ if (start >= offset + l || last > msg->sg.size)
+ return -EINVAL;
+
+ space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
+@@ -2987,12 +3003,12 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ */
+ if (start != offset) {
+ struct scatterlist *nsge, *sge = sk_msg_elem(msg, i);
+- int a = start;
++ int a = start - offset;
+ int b = sge->length - pop - a;
+
+ sk_msg_iter_var_next(i);
+
+- if (pop < sge->length - a) {
++ if (b > 0) {
+ if (space) {
+ sge->length = a;
+ sk_msg_shift_right(msg, i);
+@@ -3011,7 +3027,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ if (unlikely(!page))
+ return -ENOMEM;
+
+- sge->length = a;
+ orig = sg_page(sge);
+ from = sg_virt(sge);
+ to = page_address(page);
+@@ -3021,7 +3036,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ put_page(orig);
+ }
+ pop = 0;
+- } else if (pop >= sge->length - a) {
++ } else {
+ pop -= (sge->length - a);
+ sge->length = a;
+ }
+@@ -3055,7 +3070,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ pop -= sge->length;
+ sk_msg_shift_left(msg, i);
+ }
+- sk_msg_iter_var_next(i);
+ }
+
+ sk_mem_uncharge(msg->sk, len - pop);
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 1cb954f2d39e82..d2baa1af9df09e 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -215,6 +215,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ return -ENOMEM;
+
+ rtnl_lock();
++ rcu_read_lock();
+
+ napi = napi_by_id(napi_id);
+ if (napi) {
+@@ -224,6 +225,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ err = -ENOENT;
+ }
+
++ rcu_read_unlock();
+ rtnl_unlock();
+
+ if (err)
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index b1dcbd3be89e10..e90fbab703b2db 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -1117,9 +1117,9 @@ static void sk_psock_strp_data_ready(struct sock *sk)
+ if (tls_sw_has_ctx_rx(sk)) {
+ psock->saved_data_ready(sk);
+ } else {
+- write_lock_bh(&sk->sk_callback_lock);
++ read_lock_bh(&sk->sk_callback_lock);
+ strp_data_ready(&psock->strp);
+- write_unlock_bh(&sk->sk_callback_lock);
++ read_unlock_bh(&sk->sk_callback_lock);
+ }
+ }
+ rcu_read_unlock();
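The skmsg.c one-liner above downgrades the strparser data-ready path from write_lock_bh() to read_lock_bh(): that path only consults state protected by sk_callback_lock, so a read lock suffices and concurrent readers no longer serialize one another. A hedged pthread illustration of the read/write split, names invented:

/* Sketch only: readers share the lock, the writer excludes them. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t cb_lock = PTHREAD_RWLOCK_INITIALIZER;
static void (*saved_ready)(void);

static void default_ready(void) { puts("data ready"); }

/* Read side: only consults the saved callback, may run concurrently. */
static void data_ready(void)
{
	pthread_rwlock_rdlock(&cb_lock);
	if (saved_ready)
		saved_ready();
	pthread_rwlock_unlock(&cb_lock);
}

/* Write side: actually changes the callback, needs exclusivity. */
static void set_ready(void (*fn)(void))
{
	pthread_rwlock_wrlock(&cb_lock);
	saved_ready = fn;
	pthread_rwlock_unlock(&cb_lock);
}

int main(void)
{
	set_ready(default_ready);
	data_ready();
	return 0;
}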
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index ebdfd5b64e17a2..f630d6645636dd 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -268,6 +268,8 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+ skb->dev = master->dev;
+ skb->priority = TC_PRIO_CONTROL;
+
++ skb_reset_network_header(skb);
++ skb_reset_transport_header(skb);
+ if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
+ hsr->sup_multicast_addr,
+ skb->dev->dev_addr, skb->len) <= 0)
+@@ -275,8 +277,6 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+
+ skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
+- skb_reset_network_header(skb);
+- skb_reset_transport_header(skb);
+
+ return skb;
+ out:
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 2b698f8419fe2b..fe7947f7740623 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1189,7 +1189,7 @@ static void reqsk_timer_handler(struct timer_list *t)
+
+ drop:
+ __inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
+- reqsk_put(req);
++ reqsk_put(oreq);
+ }
+
+ static bool reqsk_queue_hash_req(struct request_sock *req,
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 089864c6a35eec..449a2ac40bdc00 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -137,7 +137,7 @@ static struct mr_table *ipmr_mr_table_iter(struct net *net,
+ return ret;
+ }
+
+-static struct mr_table *ipmr_get_table(struct net *net, u32 id)
++static struct mr_table *__ipmr_get_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+@@ -148,6 +148,16 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+ return NULL;
+ }
+
++static struct mr_table *ipmr_get_table(struct net *net, u32 id)
++{
++ struct mr_table *mrt;
++
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, id);
++ rcu_read_unlock();
++ return mrt;
++}
++
+ static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
+ struct mr_table **mrt)
+ {
+@@ -189,7 +199,7 @@ static int ipmr_rule_action(struct fib_rule *rule, struct flowi *flp,
+
+ arg->table = fib_rule_get_table(rule, arg);
+
+- mrt = ipmr_get_table(rule->fr_net, arg->table);
++ mrt = __ipmr_get_table(rule->fr_net, arg->table);
+ if (!mrt)
+ return -EAGAIN;
+ res->mrt = mrt;
+@@ -315,6 +325,8 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+ return net->ipv4.mrt;
+ }
+
++#define __ipmr_get_table ipmr_get_table
++
+ static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
+ struct mr_table **mrt)
+ {
+@@ -403,7 +415,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
+ if (id != RT_TABLE_DEFAULT && id >= 1000000000)
+ return ERR_PTR(-EINVAL);
+
+- mrt = ipmr_get_table(net, id);
++ mrt = __ipmr_get_table(net, id);
+ if (mrt)
+ return mrt;
+
+@@ -1374,7 +1386,7 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval,
+ goto out_unlock;
+ }
+
+- mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
++ mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+ if (!mrt) {
+ ret = -ENOENT;
+ goto out_unlock;
+@@ -2262,11 +2274,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
+ struct mr_table *mrt;
+ int err;
+
+- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return -ENOENT;
++ }
+
+- rcu_read_lock();
+ cache = ipmr_cache_find(mrt, saddr, daddr);
+ if (!cache && skb->dev) {
+ int vif = ipmr_find_vif(mrt, skb->dev);
+@@ -2550,7 +2564,7 @@ static int ipmr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ grp = tb[RTA_DST] ? nla_get_in_addr(tb[RTA_DST]) : 0;
+ tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
+
+- mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
++ mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
+ if (!mrt) {
+ err = -ENOENT;
+ goto errout_free;
+@@ -2604,7 +2618,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ if (filter.table_id) {
+ struct mr_table *mrt;
+
+- mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
++ mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id);
+ if (!mrt) {
+ if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
+ return skb->len;
+@@ -2712,7 +2726,7 @@ static int rtm_to_ipmr_mfcc(struct net *net, struct nlmsghdr *nlh,
+ break;
+ }
+ }
+- mrt = ipmr_get_table(net, tblid);
++ mrt = __ipmr_get_table(net, tblid);
+ if (!mrt) {
+ ret = -ENOENT;
+ goto out;
+@@ -2920,13 +2934,15 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
+ struct net *net = seq_file_net(seq);
+ struct mr_table *mrt;
+
+- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return ERR_PTR(-ENOENT);
++ }
+
+ iter->mrt = mrt;
+
+- rcu_read_lock();
+ return mr_vif_seq_start(seq, pos);
+ }
+
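This lookup split, mirrored in ip6mr.c further down, separates a lockless __ipmr_get_table(), whose callers must already hold the RCU read lock, from a self-contained ipmr_get_table() wrapper; the seq-file and ipmr_get_route() paths now enter the RCU section before the lookup, so the table cannot vanish between lookup and use. A schematic of the split with the RCU calls stubbed out so it compiles standalone (illustrative only):

    #include <stdio.h>

    /* rcu_read_lock()/unlock() stubbed out for standalone compilation */
    static void rcu_read_lock(void)   { }
    static void rcu_read_unlock(void) { }

    struct mr_table { int id; };
    static struct mr_table default_table = { .id = 0 };

    /* caller must hold rcu_read_lock() */
    static struct mr_table *__get_table(int id)
    {
        return id == 0 ? &default_table : NULL;
    }

    /* self-contained wrapper for callers outside any RCU section */
    static struct mr_table *get_table(int id)
    {
        struct mr_table *mrt;

        rcu_read_lock();
        mrt = __get_table(id);
        rcu_read_unlock();
        return mrt;
    }

    int main(void)
    {
        printf("default table: %p\n", (void *)get_table(0));
        return 0;
    }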
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 94dceac528842c..01115e1a34cb66 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2570,6 +2570,24 @@ static struct inet6_dev *addrconf_add_dev(struct net_device *dev)
+ return idev;
+ }
+
++static void delete_tempaddrs(struct inet6_dev *idev,
++ struct inet6_ifaddr *ifp)
++{
++ struct inet6_ifaddr *ift, *tmp;
++
++ write_lock_bh(&idev->lock);
++ list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) {
++ if (ift->ifpub != ifp)
++ continue;
++
++ in6_ifa_hold(ift);
++ write_unlock_bh(&idev->lock);
++ ipv6_del_addr(ift);
++ write_lock_bh(&idev->lock);
++ }
++ write_unlock_bh(&idev->lock);
++}
++
+ static void manage_tempaddrs(struct inet6_dev *idev,
+ struct inet6_ifaddr *ifp,
+ __u32 valid_lft, __u32 prefered_lft,
+@@ -3124,11 +3142,12 @@ static int inet6_addr_del(struct net *net, int ifindex, u32 ifa_flags,
+ in6_ifa_hold(ifp);
+ read_unlock_bh(&idev->lock);
+
+- if (!(ifp->flags & IFA_F_TEMPORARY) &&
+- (ifa_flags & IFA_F_MANAGETEMPADDR))
+- manage_tempaddrs(idev, ifp, 0, 0, false,
+- jiffies);
+ ipv6_del_addr(ifp);
++
++ if (!(ifp->flags & IFA_F_TEMPORARY) &&
++ (ifp->flags & IFA_F_MANAGETEMPADDR))
++ delete_tempaddrs(idev, ifp);
++
+ addrconf_verify_rtnl(net);
+ if (ipv6_addr_is_multicast(pfx)) {
+ ipv6_mc_config(net->ipv6.mc_autojoin_sk,
+@@ -4952,14 +4971,12 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ }
+
+ if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) {
+- if (was_managetempaddr &&
+- !(ifp->flags & IFA_F_MANAGETEMPADDR)) {
+- cfg->valid_lft = 0;
+- cfg->preferred_lft = 0;
+- }
+- manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
+- cfg->preferred_lft, !was_managetempaddr,
+- jiffies);
++ if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR))
++ delete_tempaddrs(ifp->idev, ifp);
++ else
++ manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
++ cfg->preferred_lft, !was_managetempaddr,
++ jiffies);
+ }
+
+ addrconf_verify_rtnl(net);
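delete_tempaddrs() walks tempaddr_list under idev->lock, but ipv6_del_addr() must not be called with that lock held, so each matching entry is pinned with in6_ifa_hold(), the lock is dropped for the deletion, and then retaken. A compact userspace analogue of that drop-and-retake loop (a pthread mutex standing in for the rwlock; assumed simplification):

    #include <pthread.h>
    #include <stdlib.h>

    struct entry {
        struct entry *next;
        int owner;
    };

    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct entry *head;

    /* stands in for ipv6_del_addr(): must not run under list_lock */
    static void slow_delete(struct entry *e)
    {
        free(e);
    }

    static void delete_owned(int owner)
    {
        pthread_mutex_lock(&list_lock);
    restart:
        for (struct entry **pp = &head; *pp; pp = &(*pp)->next) {
            struct entry *e = *pp;

            if (e->owner != owner)
                continue;

            *pp = e->next;                  /* unlink while locked      */
            pthread_mutex_unlock(&list_lock);
            slow_delete(e);                 /* heavy work, lock dropped */
            pthread_mutex_lock(&list_lock);
            goto restart;                   /* list may have changed    */
        }
        pthread_mutex_unlock(&list_lock);
    }

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            struct entry *e = malloc(sizeof(*e));

            if (!e)
                break;
            e->owner = i & 1;
            e->next = head;
            head = e;
        }
        delete_owned(1);
        return 0;
    }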
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index eb111d20615c62..9a1c59275a1099 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1190,8 +1190,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ while (sibling) {
+ if (sibling->fib6_metric == rt->fib6_metric &&
+ rt6_qualify_for_ecmp(sibling)) {
+- list_add_tail(&rt->fib6_siblings,
+- &sibling->fib6_siblings);
++ list_add_tail_rcu(&rt->fib6_siblings,
++ &sibling->fib6_siblings);
+ break;
+ }
+ sibling = rcu_dereference_protected(sibling->fib6_next,
+@@ -1252,7 +1252,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ fib6_siblings)
+ sibling->fib6_nsiblings--;
+ rt->fib6_nsiblings = 0;
+- list_del_init(&rt->fib6_siblings);
++ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ return err;
+ }
+@@ -1970,7 +1970,7 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ &rt->fib6_siblings, fib6_siblings)
+ sibling->fib6_nsiblings--;
+ rt->fib6_nsiblings = 0;
+- list_del_init(&rt->fib6_siblings);
++ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ }
+
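The switch from list_del_init() to list_del_rcu() matters because the fib6_siblings list is now traversed under RCU elsewhere in this patch: a reader standing on a just-removed sibling must still be able to follow ->next out of it, and list_del_init() would point the node at itself. A tiny sequential model of the difference (not kernel code):

    #include <stdio.h>

    struct node { struct node *next; int val; };

    int main(void)
    {
        struct node c = { .next = NULL, .val = 3 };
        struct node b = { .next = &c,   .val = 2 };
        struct node a = { .next = &b,   .val = 1 };
        struct node *reader = &b;   /* a reader currently standing on b */

        /* list_del_rcu(&b): unlink b but leave b.next intact */
        a.next = b.next;
        printf("after del_rcu, reader moves on to %d\n", reader->next->val);

        /* list_del_init(&b) would instead set b.next = &b, so the
         * reader's next hop would be b itself: a livelock. */
        return 0;
    }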
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 2ce4ae0d8dc3b4..d5057401701c1a 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -125,7 +125,7 @@ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
+ return ret;
+ }
+
+-static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
++static struct mr_table *__ip6mr_get_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+@@ -136,6 +136,16 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+ return NULL;
+ }
+
++static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
++{
++ struct mr_table *mrt;
++
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, id);
++ rcu_read_unlock();
++ return mrt;
++}
++
+ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+ struct mr_table **mrt)
+ {
+@@ -177,7 +187,7 @@ static int ip6mr_rule_action(struct fib_rule *rule, struct flowi *flp,
+
+ arg->table = fib_rule_get_table(rule, arg);
+
+- mrt = ip6mr_get_table(rule->fr_net, arg->table);
++ mrt = __ip6mr_get_table(rule->fr_net, arg->table);
+ if (!mrt)
+ return -EAGAIN;
+ res->mrt = mrt;
+@@ -304,6 +314,8 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+ return net->ipv6.mrt6;
+ }
+
++#define __ip6mr_get_table ip6mr_get_table
++
+ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+ struct mr_table **mrt)
+ {
+@@ -382,7 +394,7 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(net, id);
++ mrt = __ip6mr_get_table(net, id);
+ if (mrt)
+ return mrt;
+
+@@ -411,13 +423,15 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
+ struct net *net = seq_file_net(seq);
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return ERR_PTR(-ENOENT);
++ }
+
+ iter->mrt = mrt;
+
+- rcu_read_lock();
+ return mr_vif_seq_start(seq, pos);
+ }
+
+@@ -2275,11 +2289,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
+ struct mfc6_cache *cache;
+ struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
+
+- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return -ENOENT;
++ }
+
+- rcu_read_lock();
+ cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr);
+ if (!cache && skb->dev) {
+ int vif = ip6mr_find_vif(mrt, skb->dev);
+@@ -2559,7 +2575,7 @@ static int ip6mr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ grp = nla_get_in6_addr(tb[RTA_DST]);
+ tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
+
+- mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
++ mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
+ if (!mrt) {
+ NL_SET_ERR_MSG_MOD(extack, "MR table does not exist");
+ return -ENOENT;
+@@ -2606,7 +2622,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ if (filter.table_id) {
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
++ mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id);
+ if (!mrt) {
+ if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
+ return skb->len;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b4251915585f75..cff4fbbc66efb2 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -374,6 +374,7 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
+ {
+ struct rt6_info *rt = dst_rt6_info(dst);
+ struct inet6_dev *idev = rt->rt6i_idev;
++ struct fib6_info *from;
+
+ if (idev && idev->dev != blackhole_netdev) {
+ struct inet6_dev *blackhole_idev = in6_dev_get(blackhole_netdev);
+@@ -383,6 +384,8 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
+ in6_dev_put(idev);
+ }
+ }
++ from = unrcu_pointer(xchg(&rt->from, NULL));
++ fib6_info_release(from);
+ }
+
+ static bool __rt6_check_expired(const struct rt6_info *rt)
+@@ -413,8 +416,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ struct flowi6 *fl6, int oif, bool have_oif_match,
+ const struct sk_buff *skb, int strict)
+ {
+- struct fib6_info *sibling, *next_sibling;
+ struct fib6_info *match = res->f6i;
++ struct fib6_info *sibling;
+
+ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))
+ goto out;
+@@ -440,8 +443,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound))
+ goto out;
+
+- list_for_each_entry_safe(sibling, next_sibling, &match->fib6_siblings,
+- fib6_siblings) {
++ list_for_each_entry_rcu(sibling, &match->fib6_siblings,
++ fib6_siblings) {
+ const struct fib6_nh *nh = sibling->fib6_nh;
+ int nh_upper_bound;
+
+@@ -1455,7 +1458,6 @@ static DEFINE_SPINLOCK(rt6_exception_lock);
+ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ struct rt6_exception *rt6_ex)
+ {
+- struct fib6_info *from;
+ struct net *net;
+
+ if (!bucket || !rt6_ex)
+@@ -1467,8 +1469,6 @@ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ /* purge completely the exception to allow releasing the held resources:
+ * some [sk] cache may keep the dst around for unlimited time
+ */
+- from = unrcu_pointer(xchg(&rt6_ex->rt6i->from, NULL));
+- fib6_info_release(from);
+ dst_dev_put(&rt6_ex->rt6i->dst);
+
+ hlist_del_rcu(&rt6_ex->hlist);
+@@ -5195,14 +5195,18 @@ static void ip6_route_mpath_notify(struct fib6_info *rt,
+ * nexthop. Since sibling routes are always added at the end of
+ * the list, find the first sibling of the last route appended
+ */
++ rcu_read_lock();
++
+ if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) {
+- rt = list_first_entry(&rt_last->fib6_siblings,
+- struct fib6_info,
+- fib6_siblings);
++ rt = list_first_or_null_rcu(&rt_last->fib6_siblings,
++ struct fib6_info,
++ fib6_siblings);
+ }
+
+ if (rt)
+ inet6_rt_notify(RTM_NEWROUTE, rt, info, nlflags);
++
++ rcu_read_unlock();
+ }
+
+ static bool ip6_route_mpath_should_notify(const struct fib6_info *rt)
+@@ -5547,17 +5551,21 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ &nexthop_len);
+ } else {
+- struct fib6_info *sibling, *next_sibling;
+ struct fib6_nh *nh = f6i->fib6_nh;
++ struct fib6_info *sibling;
+
+ nexthop_len = 0;
+ if (f6i->fib6_nsiblings) {
+ rt6_nh_nlmsg_size(nh, &nexthop_len);
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &f6i->fib6_siblings, fib6_siblings) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++ fib6_siblings) {
+ rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+ }
++
++ rcu_read_unlock();
+ }
+ nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ }
+@@ -5721,7 +5729,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
+ goto nla_put_failure;
+ } else if (rt->fib6_nsiblings) {
+- struct fib6_info *sibling, *next_sibling;
++ struct fib6_info *sibling;
+ struct nlattr *mp;
+
+ mp = nla_nest_start_noflag(skb, RTA_MULTIPATH);
+@@ -5733,14 +5741,21 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 0) < 0)
+ goto nla_put_failure;
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &rt->fib6_siblings, fib6_siblings) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(sibling, &rt->fib6_siblings,
++ fib6_siblings) {
+ if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common,
+ sibling->fib6_nh->fib_nh_weight,
+- AF_INET6, 0) < 0)
++ AF_INET6, 0) < 0) {
++ rcu_read_unlock();
++
+ goto nla_put_failure;
++ }
+ }
+
++ rcu_read_unlock();
++
+ nla_nest_end(skb, mp);
+ } else if (rt->nh) {
+ if (nla_put_u32(skb, RTA_NH_ID, rt->nh->id))
+@@ -6177,7 +6192,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ err = -ENOBUFS;
+ seq = info->nlh ? info->nlh->nlmsg_seq : 0;
+
+- skb = nlmsg_new(rt6_nlmsg_size(rt), gfp_any());
++ skb = nlmsg_new(rt6_nlmsg_size(rt), GFP_ATOMIC);
+ if (!skb)
+ goto errout;
+
+@@ -6190,7 +6205,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ goto errout;
+ }
+ rtnl_notify(skb, net, info->portid, RTNLGRP_IPV6_ROUTE,
+- info->nlh, gfp_any());
++ info->nlh, GFP_ATOMIC);
+ return;
+ errout:
+ rtnl_set_sk_err(net, RTNLGRP_IPV6_ROUTE, err);
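Two related themes run through the route.c hunks: sibling traversal moves to list_for_each_entry_rcu() inside explicit rcu_read_lock()/unlock() pairs, and inet6_rt_notify() switches to GFP_ATOMIC because an RCU read-side section must not sleep. A toy allocator that diagnoses the forbidden combination (stubs for compilation; illustrative only):

    #include <stdio.h>
    #include <stdlib.h>

    #define GFP_KERNEL 0   /* may sleep    */
    #define GFP_ATOMIC 1   /* never sleeps */

    static int in_rcu_section;
    static void rcu_read_lock(void)   { in_rcu_section = 1; }
    static void rcu_read_unlock(void) { in_rcu_section = 0; }

    static void *kmalloc_model(size_t n, int gfp)
    {
        if (gfp == GFP_KERNEL && in_rcu_section) {
            fprintf(stderr, "BUG: sleeping allocation under RCU\n");
            return NULL;
        }
        return malloc(n);
    }

    int main(void)
    {
        rcu_read_lock();
        void *p = kmalloc_model(64, GFP_ATOMIC);  /* fine      */
        void *q = kmalloc_model(64, GFP_KERNEL);  /* diagnosed */
        rcu_read_unlock();

        free(p);
        free(q);
        return 0;
    }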
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index c00323fa9eb66e..7929df08d4e023 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -1236,7 +1236,9 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ return -EOPNOTSUPP;
+
+ /* receive/dequeue next skb:
+- * the function understands MSG_PEEK and, thus, does not dequeue skb */
++ * the function understands MSG_PEEK and, thus, does not dequeue the
++ * skb; only its refcount is increased.
++ */
+ skb = skb_recv_datagram(sk, flags, &err);
+ if (!skb) {
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+@@ -1252,9 +1254,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+
+ cskb = skb;
+ if (skb_copy_datagram_msg(cskb, offset, msg, copied)) {
+- if (!(flags & MSG_PEEK))
+- skb_queue_head(&sk->sk_receive_queue, skb);
+- return -EFAULT;
++ err = -EFAULT;
++ goto err_out;
+ }
+
+ /* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */
+@@ -1271,11 +1272,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
+ sizeof(IUCV_SKB_CB(skb)->class),
+ (void *)&IUCV_SKB_CB(skb)->class);
+- if (err) {
+- if (!(flags & MSG_PEEK))
+- skb_queue_head(&sk->sk_receive_queue, skb);
+- return err;
+- }
++ if (err)
++ goto err_out;
+
+ /* Mark read part of skb as used */
+ if (!(flags & MSG_PEEK)) {
+@@ -1331,8 +1329,18 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ /* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */
+ if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC))
+ copied = rlen;
++ if (flags & MSG_PEEK)
++ skb_unref(skb);
+
+ return copied;
++
++err_out:
++ if (!(flags & MSG_PEEK))
++ skb_queue_head(&sk->sk_receive_queue, skb);
++ else
++ skb_unref(skb);
++
++ return err;
+ }
+
+ static inline __poll_t iucv_accept_poll(struct sock *parent)
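The recvmsg rework funnels all failure exits through one err_out label with mode-specific cleanup: without MSG_PEEK the dequeued skb is requeued, while with MSG_PEEK the skb stayed queued but carries an extra reference that every exit, including success, must drop. Schematic with stub helpers (assumed simplification):

    #include <stdio.h>

    static void skb_unref_model(void) { /* drop the peek reference     */ }
    static void requeue_model(void)   { /* put the skb back in queue   */ }

    static int recv_model(int peek, int copy_fails)
    {
        int err;

        /* ... skb obtained; with peek it stays queued, refcount +1 ... */

        if (copy_fails) {
            err = -14;               /* -EFAULT */
            goto err_out;
        }

        if (peek)
            skb_unref_model();       /* success still drops the peek ref */
        return 0;

    err_out:
        if (!peek)
            requeue_model();         /* give the skb back to the queue   */
        else
            skb_unref_model();       /* peek ref must be dropped anyway  */
        return err;
    }

    int main(void)
    {
        printf("%d %d\n", recv_model(1, 0), recv_model(0, 1));
        return 0;
    }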
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 3eec23ac5ab10e..369a2f2e459cdb 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1870,15 +1870,31 @@ static __net_exit void l2tp_pre_exit_net(struct net *net)
+ }
+ }
+
++static int l2tp_idr_item_unexpected(int id, void *p, void *data)
++{
++ const char *idr_name = data;
++
++ pr_err("l2tp: %s IDR not empty at net %d exit\n", idr_name, id);
++ WARN_ON_ONCE(1);
++ return 1;
++}
++
+ static __net_exit void l2tp_exit_net(struct net *net)
+ {
+ struct l2tp_net *pn = l2tp_pernet(net);
+
+- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v2_session_idr));
++ /* Our per-net IDRs should be empty. Check that this is so, to
++ * help catch cleanup races or refcnt leaks.
++ */
++ idr_for_each(&pn->l2tp_v2_session_idr, l2tp_idr_item_unexpected,
++ "v2_session");
++ idr_for_each(&pn->l2tp_v3_session_idr, l2tp_idr_item_unexpected,
++ "v3_session");
++ idr_for_each(&pn->l2tp_tunnel_idr, l2tp_idr_item_unexpected,
++ "tunnel");
++
+ idr_destroy(&pn->l2tp_v2_session_idr);
+- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v3_session_idr));
+ idr_destroy(&pn->l2tp_v3_session_idr);
+- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_tunnel_idr));
+ idr_destroy(&pn->l2tp_tunnel_idr);
+ }
+
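Rather than one opaque WARN on a non-empty IDR, l2tp_exit_net() now visits each leftover entry and reports its id, which makes refcount leaks traceable. A userspace model with an array standing in for the IDR (illustrative):

    #include <stdio.h>

    static int item_unexpected(int id, void *p, void *data)
    {
        (void)p;
        fprintf(stderr, "%s IDR not empty: id %d still present\n",
                (const char *)data, id);
        return 1;
    }

    static void idr_for_each_model(void **slots, int n,
                                   int (*fn)(int, void *, void *),
                                   void *data)
    {
        for (int i = 0; i < n; i++)
            if (slots[i])
                fn(i, slots[i], data);
    }

    int main(void)
    {
        void *sessions[4] = { 0 };
        int leaked = 1;

        sessions[2] = &leaked;   /* simulate a leaked session */
        idr_for_each_model(sessions, 4, item_unexpected,
                           (void *)"v2_session");
        return 0;
    }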
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 4eb52add7103b0..0259cde394ba09 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -1098,7 +1098,7 @@ static int llc_ui_setsockopt(struct socket *sock, int level, int optname,
+ lock_sock(sk);
+ if (unlikely(level != SOL_LLC || optlen != sizeof(int)))
+ goto out;
+- rc = copy_from_sockptr(&opt, optval, sizeof(opt));
++ rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (rc)
+ goto out;
+ rc = -EINVAL;
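copy_safe_from_sockptr() differs from the open-coded copy_from_sockptr() call in that it checks the user-supplied optlen against the kernel object size and rejects short buffers instead of over-reading them; rxrpc below gets the same treatment. A userspace model of the added check (assumed simplification of the real helper):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static int copy_safe_model(void *dst, size_t ksize,
                               const void *optval, size_t optlen)
    {
        if (optlen < ksize)
            return -EINVAL;  /* too short: refuse instead of over-read */
        memcpy(dst, optval, ksize);
        return 0;
    }

    int main(void)
    {
        int opt = 0;
        char short_buf[2] = { 1, 2 };
        int full = 7;

        printf("%d\n", copy_safe_model(&opt, sizeof(opt),
                                       short_buf, sizeof(short_buf)));
        printf("%d\n", copy_safe_model(&opt, sizeof(opt),
                                       &full, sizeof(full)));
        return 0;
    }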
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
+index e4fa00abde6a2a..5988b9bb9029dc 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
+@@ -163,11 +163,8 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
+ ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ if (ret)
+ return ret;
+- if (ip > ip_to) {
++ if (ip > ip_to)
+ swap(ip, ip_to);
+- if (ip < map->first_ip)
+- return -IPSET_ERR_BITMAP_RANGE;
+- }
+ } else if (tb[IPSET_ATTR_CIDR]) {
+ u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+
+@@ -178,7 +175,7 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
+ ip_to = ip;
+ }
+
+- if (ip_to > map->last_ip)
++ if (ip < map->first_ip || ip_to > map->last_ip)
+ return -IPSET_ERR_BITMAP_RANGE;
+
+ for (; !before(ip_to, ip); ip += map->hosts) {
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 588a2757986c1d..4a137afaf0b87e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3295,25 +3295,37 @@ int nft_expr_inner_parse(const struct nft_ctx *ctx, const struct nlattr *nla,
+ if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME])
+ return -EINVAL;
+
++ rcu_read_lock();
++
+ type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]);
+- if (!type)
+- return -ENOENT;
++ if (!type) {
++ err = -ENOENT;
++ goto out_unlock;
++ }
+
+- if (!type->inner_ops)
+- return -EOPNOTSUPP;
++ if (!type->inner_ops) {
++ err = -EOPNOTSUPP;
++ goto out_unlock;
++ }
+
+ err = nla_parse_nested_deprecated(info->tb, type->maxattr,
+ tb[NFTA_EXPR_DATA],
+ type->policy, NULL);
+ if (err < 0)
+- goto err_nla_parse;
++ goto out_unlock;
+
+ info->attr = nla;
+ info->ops = type->inner_ops;
+
++ /* No module reference will be taken on type->owner.
++ * Presence of type->inner_ops implies that the expression
++ * is builtin, so it cannot go away.
++ */
++ rcu_read_unlock();
+ return 0;
+
+-err_nla_parse:
++out_unlock:
++ rcu_read_unlock();
+ return err;
+ }
+
+@@ -3412,13 +3424,15 @@ void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr)
+ * Rules
+ */
+
+-static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
++static struct nft_rule *__nft_rule_lookup(const struct net *net,
++ const struct nft_chain *chain,
+ u64 handle)
+ {
+ struct nft_rule *rule;
+
+ // FIXME: this sucks
+- list_for_each_entry_rcu(rule, &chain->rules, list) {
++ list_for_each_entry_rcu(rule, &chain->rules, list,
++ lockdep_commit_lock_is_held(net)) {
+ if (handle == rule->handle)
+ return rule;
+ }
+@@ -3426,13 +3440,14 @@ static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
+ return ERR_PTR(-ENOENT);
+ }
+
+-static struct nft_rule *nft_rule_lookup(const struct nft_chain *chain,
++static struct nft_rule *nft_rule_lookup(const struct net *net,
++ const struct nft_chain *chain,
+ const struct nlattr *nla)
+ {
+ if (nla == NULL)
+ return ERR_PTR(-EINVAL);
+
+- return __nft_rule_lookup(chain, be64_to_cpu(nla_get_be64(nla)));
++ return __nft_rule_lookup(net, chain, be64_to_cpu(nla_get_be64(nla)));
+ }
+
+ static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = {
+@@ -3733,7 +3748,7 @@ static int nf_tables_dump_rules_done(struct netlink_callback *cb)
+ return 0;
+ }
+
+-/* called with rcu_read_lock held */
++/* Caller must hold rcu read lock or transaction mutex */
+ static struct sk_buff *
+ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
+@@ -3760,7 +3775,7 @@ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ return ERR_CAST(chain);
+ }
+
+- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
++ rule = nft_rule_lookup(net, chain, nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
+ return ERR_CAST(rule);
+@@ -4058,7 +4073,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (nla[NFTA_RULE_HANDLE]) {
+ handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE]));
+- rule = __nft_rule_lookup(chain, handle);
++ rule = __nft_rule_lookup(net, chain, handle);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
+ return PTR_ERR(rule);
+@@ -4080,7 +4095,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (nla[NFTA_RULE_POSITION]) {
+ pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
+- old_rule = __nft_rule_lookup(chain, pos_handle);
++ old_rule = __nft_rule_lookup(net, chain, pos_handle);
+ if (IS_ERR(old_rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]);
+ return PTR_ERR(old_rule);
+@@ -4297,7 +4312,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (chain) {
+ if (nla[NFTA_RULE_HANDLE]) {
+- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
++ rule = nft_rule_lookup(info->net, chain, nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule)) {
+ if (PTR_ERR(rule) == -ENOENT &&
+ NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYRULE)
+@@ -7790,9 +7805,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ struct nft_trans *trans;
+ int err = -ENOMEM;
+
+- if (!try_module_get(type->owner))
+- return -ENOENT;
+-
++ /* caller must have obtained type->owner reference. */
+ trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
+ sizeof(struct nft_trans_obj));
+ if (!trans)
+@@ -7860,15 +7873,16 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
+ if (info->nlh->nlmsg_flags & NLM_F_REPLACE)
+ return -EOPNOTSUPP;
+
+- type = __nft_obj_type_get(objtype, family);
+- if (WARN_ON_ONCE(!type))
+- return -ENOENT;
+-
+ if (!obj->ops->update)
+ return 0;
+
++ type = nft_obj_type_get(net, objtype, family);
++ if (WARN_ON_ONCE(IS_ERR(type)))
++ return PTR_ERR(type);
++
+ nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+
++ /* type->owner reference is put when transaction object is released. */
+ return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
+ }
+
+@@ -8104,7 +8118,7 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb)
+ return 0;
+ }
+
+-/* called with rcu_read_lock held */
++/* Caller must hold rcu read lock or transaction mutex */
+ static struct sk_buff *
+ nf_tables_getobj_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
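The extra lockdep_commit_lock_is_held(net) argument widens the RCU-list lockdep assertion in __nft_rule_lookup(): the rule list may be walked either under rcu_read_lock() (dump paths) or with the transaction mutex held (commit paths), matching the relaxed "Caller must hold rcu read lock or transaction mutex" comments. A toy either-lock assertion (illustrative):

    #include <assert.h>
    #include <stdbool.h>

    static bool rcu_held, commit_mutex_held;

    /* mirrors list_for_each_entry_rcu(..., lockdep_commit_lock_is_held()) */
    static void walk_rules(void)
    {
        assert(rcu_held || commit_mutex_held);
        /* ... iterate the rule list ... */
    }

    int main(void)
    {
        commit_mutex_held = true;    /* commit path: mutex, no RCU lock */
        walk_rules();

        commit_mutex_held = false;
        rcu_held = true;             /* dump path: RCU read-side only   */
        walk_rules();
        return 0;
    }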
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index f84aad420d4464..775d707ec708a7 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2176,9 +2176,14 @@ netlink_ack_tlv_len(struct netlink_sock *nlk, int err,
+ return tlvlen;
+ }
+
++static bool nlmsg_check_in_payload(const struct nlmsghdr *nlh, const void *addr)
++{
++ return !WARN_ON(addr < nlmsg_data(nlh) ||
++ addr - (const void *) nlh >= nlh->nlmsg_len);
++}
++
+ static void
+-netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+- const struct nlmsghdr *nlh, int err,
++netlink_ack_tlv_fill(struct sk_buff *skb, const struct nlmsghdr *nlh, int err,
+ const struct netlink_ext_ack *extack)
+ {
+ if (extack->_msg)
+@@ -2190,9 +2195,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+ if (!err)
+ return;
+
+- if (extack->bad_attr &&
+- !WARN_ON((u8 *)extack->bad_attr < in_skb->data ||
+- (u8 *)extack->bad_attr >= in_skb->data + in_skb->len))
++ if (extack->bad_attr && nlmsg_check_in_payload(nlh, extack->bad_attr))
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS,
+ (u8 *)extack->bad_attr - (const u8 *)nlh));
+ if (extack->policy)
+@@ -2201,9 +2204,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+ if (extack->miss_type)
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_TYPE,
+ extack->miss_type));
+- if (extack->miss_nest &&
+- !WARN_ON((u8 *)extack->miss_nest < in_skb->data ||
+- (u8 *)extack->miss_nest > in_skb->data + in_skb->len))
++ if (extack->miss_nest && nlmsg_check_in_payload(nlh, extack->miss_nest))
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_NEST,
+ (u8 *)extack->miss_nest - (const u8 *)nlh));
+ }
+@@ -2232,7 +2233,7 @@ static int netlink_dump_done(struct netlink_sock *nlk, struct sk_buff *skb,
+ if (extack_len) {
+ nlh->nlmsg_flags |= NLM_F_ACK_TLVS;
+ if (skb_tailroom(skb) >= extack_len) {
+- netlink_ack_tlv_fill(cb->skb, skb, cb->nlh,
++ netlink_ack_tlv_fill(skb, cb->nlh,
+ nlk->dump_done_errno, extack);
+ nlmsg_end(skb, nlh);
+ }
+@@ -2491,7 +2492,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
+ }
+
+ if (tlvlen)
+- netlink_ack_tlv_fill(in_skb, skb, nlh, err, extack);
++ netlink_ack_tlv_fill(skb, nlh, err, extack);
+
+ nlmsg_end(skb, rep);
+
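netlink_ack_tlv_fill() used to bound extack->bad_attr and miss_nest against the whole input skb, but one skb can batch several messages, so a pointer into a different message would pass. nlmsg_check_in_payload() bounds against the payload of the specific nlh being acked, which also lets the in_skb parameter go away. A userspace model of the payload-relative test (simplified: real netlink payloads start at NLMSG_HDRLEN):

    #include <stdbool.h>
    #include <stdio.h>

    struct nlmsghdr_model {
        unsigned int nlmsg_len;   /* header + payload, like nlmsg_len */
    };

    static bool check_in_payload(const struct nlmsghdr_model *nlh,
                                 const void *addr)
    {
        const char *base = (const char *)nlh;

        return (const char *)addr >= base + sizeof(*nlh) &&
               (size_t)((const char *)addr - base) < nlh->nlmsg_len;
    }

    int main(void)
    {
        char buf[64] = { 0 };
        struct nlmsghdr_model *nlh = (struct nlmsghdr_model *)buf;

        nlh->nlmsg_len = 32;
        printf("%d\n", check_in_payload(nlh, buf + 16)); /* 1: inside  */
        printf("%d\n", check_in_payload(nlh, buf + 48)); /* 0: past it */
        return 0;
    }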
+diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
+index c268c2b011f428..a8e21060112ffd 100644
+--- a/net/rfkill/rfkill-gpio.c
++++ b/net/rfkill/rfkill-gpio.c
+@@ -32,8 +32,12 @@ static int rfkill_gpio_set_power(void *data, bool blocked)
+ {
+ struct rfkill_gpio_data *rfkill = data;
+
+- if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled)
+- clk_enable(rfkill->clk);
++ if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled) {
++ int ret = clk_enable(rfkill->clk);
++
++ if (ret)
++ return ret;
++ }
+
+ gpiod_set_value_cansleep(rfkill->shutdown_gpio, !blocked);
+ gpiod_set_value_cansleep(rfkill->reset_gpio, !blocked);
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index f4844683e12039..9d8bd0b37e41da 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -707,9 +707,10 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
+ ret = -EISCONN;
+ if (rx->sk.sk_state != RXRPC_UNBOUND)
+ goto error;
+- ret = copy_from_sockptr(&min_sec_level, optval,
+- sizeof(unsigned int));
+- if (ret < 0)
++ ret = copy_safe_from_sockptr(&min_sec_level,
++ sizeof(min_sec_level),
++ optval, optlen);
++ if (ret)
+ goto error;
+ ret = -EINVAL;
+ if (min_sec_level > RXRPC_SECURITY_MAX)
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 19a49af5a9e527..afefe124d9039e 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -331,6 +331,12 @@ static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb,
+ */
+ if (q->internal.qlen >= 8)
+ return false;
++
++ /* Ordering invariants fall apart if some delayed flows
++ * are ready but we haven't serviced them, yet.
++ */
++ if (q->time_next_delayed_flow <= now)
++ return false;
+ }
+
+ sk = skb->sk;
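The added guard keeps packets out of the fq fastpath whenever a delayed flow has already come due: servicing the new packet first would let it overtake flows that should have been released. Sketch of the decision (illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    static bool fastpath_ok(unsigned long long now,
                            unsigned long long time_next_delayed_flow,
                            unsigned int internal_qlen)
    {
        if (internal_qlen >= 8)
            return false;      /* fastpath queue already backed up   */
        if (time_next_delayed_flow <= now)
            return false;      /* a delayed flow is due for service  */
        return true;
    }

    int main(void)
    {
        printf("%d\n", fastpath_ok(100, 90, 0));   /* 0: flow already due */
        printf("%d\n", fastpath_ok(100, 150, 0));  /* 1: nothing pending  */
        return 0;
    }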
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 1bd3e531b0e090..059f6ef1ad1898 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -1427,7 +1427,9 @@ static int c_show(struct seq_file *m, void *p)
+ seq_printf(m, "# expiry=%lld refcnt=%d flags=%lx\n",
+ convert_to_wallclock(cp->expiry_time),
+ kref_read(&cp->ref), cp->flags);
+- cache_get(cp);
++ if (!cache_get_rcu(cp))
++ return 0;
++
+ if (cache_check(cd, cp, NULL))
+ /* cache_check does a cache_put on failure */
+ seq_puts(m, "# ");
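c_show() runs under RCU, where an entry can still be visible while its refcount has already hit zero; cache_get_rcu() refuses to resurrect such an entry, whereas the old unconditional cache_get() would. Sequential model of the decision (the real helper does this atomically, kref_get_unless_zero() style):

    #include <stdbool.h>
    #include <stdio.h>

    struct entry { int refcount; };

    static bool get_unless_zero(struct entry *e)
    {
        if (e->refcount == 0)
            return false;    /* already dying: caller must skip it */
        e->refcount++;
        return true;
    }

    int main(void)
    {
        struct entry live  = { .refcount = 1 };
        struct entry dying = { .refcount = 0 };

        printf("live:  %d\n", get_unless_zero(&live));   /* 1 */
        printf("dying: %d\n", get_unless_zero(&dying));  /* 0 */
        return 0;
    }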
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 825ec53576912a..59e2c46240f5c1 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1551,6 +1551,10 @@ static struct svc_xprt *svc_create_socket(struct svc_serv *serv,
+ newlen = error;
+
+ if (protocol == IPPROTO_TCP) {
++ __netns_tracker_free(net, &sock->sk->ns_tracker, false);
++ sock->sk->sk_net_refcnt = 1;
++ get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
+ if ((error = kernel_listen(sock, 64)) < 0)
+ goto bummer;
+ }
+diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
+index 58ae6ec4f25b4f..415c0310101f0d 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma.c
++++ b/net/sunrpc/xprtrdma/svc_rdma.c
+@@ -233,25 +233,34 @@ static int svc_rdma_proc_init(void)
+
+ rc = percpu_counter_init(&svcrdma_stat_read, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err;
+ rc = percpu_counter_init(&svcrdma_stat_recv, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_read;
+ rc = percpu_counter_init(&svcrdma_stat_sq_starve, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_recv;
+ rc = percpu_counter_init(&svcrdma_stat_write, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_sq;
+
+ svcrdma_table_header = register_sysctl("sunrpc/svc_rdma",
+ svcrdma_parm_table);
++ if (!svcrdma_table_header)
++ goto err_write;
++
+ return 0;
+
+-out_err:
++err_write:
++ rc = -ENOMEM;
++ percpu_counter_destroy(&svcrdma_stat_write);
++err_sq:
+ percpu_counter_destroy(&svcrdma_stat_sq_starve);
++err_recv:
+ percpu_counter_destroy(&svcrdma_stat_recv);
++err_read:
+ percpu_counter_destroy(&svcrdma_stat_read);
++err:
+ return rc;
+ }
+
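The rewritten error handling is the classic goto unwind ladder: each label releases exactly the counters that were successfully created before the failure, in reverse order, and the new register_sysctl() failure check feeds the same ladder with rc = -ENOMEM. Generic form (illustrative):

    #include <stdlib.h>

    /* Each label undoes only what succeeded, in reverse order. */
    static int init_four(void **a, void **b, void **c, void **d)
    {
        if (!(*a = malloc(16))) goto err;
        if (!(*b = malloc(16))) goto err_a;
        if (!(*c = malloc(16))) goto err_b;
        if (!(*d = malloc(16))) goto err_c;
        return 0;

    err_c:
        free(*c);
    err_b:
        free(*b);
    err_a:
        free(*a);
    err:
        return -12;              /* -ENOMEM */
    }

    int main(void)
    {
        void *a, *b, *c, *d;

        if (init_four(&a, &b, &c, &d))
            return 1;
        free(a); free(b); free(c); free(d);
        return 0;
    }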
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index ae3fb9bc8a2168..292022f0976e17 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -493,7 +493,13 @@ static bool xdr_check_write_chunk(struct svc_rdma_recv_ctxt *rctxt)
+ if (xdr_stream_decode_u32(&rctxt->rc_stream, &segcount))
+ return false;
+
+- /* A bogus segcount causes this buffer overflow check to fail. */
++ /* Before trusting the segcount value enough to use it in
++ * a computation, perform a simple range check. This is an
++ * arbitrary but sensible limit (ie, not architectural).
++ */
++ if (unlikely(segcount > RPCSVC_MAXPAGES))
++ return false;
++
+ p = xdr_inline_decode(&rctxt->rc_stream,
+ segcount * rpcrdma_segment_maxsz * sizeof(*p));
+ return p != NULL;
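segcount arrives straight off the wire, and multiplying it by rpcrdma_segment_maxsz * sizeof(*p) before any sanity check lets a large value wrap the length computation; capping it at RPCSVC_MAXPAGES first makes the product safe. A toy 32-bit version of the guard (the constants here are stand-ins, not the real values):

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_SEGS  256u   /* stand-in for RPCSVC_MAXPAGES       */
    #define SEG_WORDS 4u     /* stand-in for rpcrdma_segment_maxsz */
    #define WORD_SIZE 4u

    static int decode_len(uint32_t segcount, uint32_t *out)
    {
        if (segcount > MAX_SEGS)
            return -1;       /* reject before any arithmetic */
        *out = segcount * SEG_WORDS * WORD_SIZE;
        return 0;
    }

    int main(void)
    {
        uint32_t len = 0;

        printf("%d\n", decode_len(16, &len));         /*  0: len = 256   */
        printf("%d\n", decode_len(0x10000000, &len)); /* -1: would wrap  */
        return 0;
    }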
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 1326fbf45a3479..b69e6290acfabe 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1198,6 +1198,7 @@ static void xs_sock_reset_state_flags(struct rpc_xprt *xprt)
+ clear_bit(XPRT_SOCK_WAKE_WRITE, &transport->sock_state);
+ clear_bit(XPRT_SOCK_WAKE_DISCONNECT, &transport->sock_state);
+ clear_bit(XPRT_SOCK_NOSPACE, &transport->sock_state);
++ clear_bit(XPRT_SOCK_UPD_TIMEOUT, &transport->sock_state);
+ }
+
+ static void xs_run_error_worker(struct sock_xprt *transport, unsigned int nr)
+@@ -1939,6 +1940,13 @@ static struct socket *xs_create_sock(struct rpc_xprt *xprt,
+ goto out;
+ }
+
++ if (protocol == IPPROTO_TCP) {
++ __netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false);
++ sock->sk->sk_net_refcnt = 1;
++ get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(xprt->xprt_net, 1);
++ }
++
+ filp = sock_alloc_file(sock, O_NONBLOCK, NULL);
+ if (IS_ERR(filp))
+ return ERR_CAST(filp);
+@@ -2614,11 +2622,10 @@ static int xs_tls_handshake_sync(struct rpc_xprt *lower_xprt, struct xprtsec_par
+ rc = wait_for_completion_interruptible_timeout(&lower_transport->handshake_done,
+ XS_TLS_HANDSHAKE_TO);
+ if (rc <= 0) {
+- if (!tls_handshake_cancel(sk)) {
+- if (rc == 0)
+- rc = -ETIMEDOUT;
+- goto out_put_xprt;
+- }
++ tls_handshake_cancel(sk);
++ if (rc == 0)
++ rc = -ETIMEDOUT;
++ goto out_put_xprt;
+ }
+
+ rc = lower_transport->xprt_err;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 74ca18833df172..7d313fb66d76ba 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -603,16 +603,20 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
+ }
+ EXPORT_SYMBOL(wiphy_new_nm);
+
+-static int wiphy_verify_combinations(struct wiphy *wiphy)
++static
++int wiphy_verify_iface_combinations(struct wiphy *wiphy,
++ const struct ieee80211_iface_combination *iface_comb,
++ int n_iface_comb,
++ bool combined_radio)
+ {
+ const struct ieee80211_iface_combination *c;
+ int i, j;
+
+- for (i = 0; i < wiphy->n_iface_combinations; i++) {
++ for (i = 0; i < n_iface_comb; i++) {
+ u32 cnt = 0;
+ u16 all_iftypes = 0;
+
+- c = &wiphy->iface_combinations[i];
++ c = &iface_comb[i];
+
+ /*
+ * Combinations with just one interface aren't real,
+@@ -625,9 +629,13 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ if (WARN_ON(!c->num_different_channels))
+ return -EINVAL;
+
+- /* DFS only works on one channel. */
+- if (WARN_ON(c->radar_detect_widths &&
+- (c->num_different_channels > 1)))
++ /* DFS only works on one channel. Avoid this check
++ * for the multi-radio global combination, since it holds
++ * the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(c->radar_detect_widths &&
++ c->num_different_channels > 1))
+ return -EINVAL;
+
+ if (WARN_ON(!c->n_limits))
+@@ -648,13 +656,21 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ if (WARN_ON(wiphy->software_iftypes & types))
+ return -EINVAL;
+
+- /* Only a single P2P_DEVICE can be allowed */
+- if (WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) &&
++ /* Only a single P2P_DEVICE can be allowed; avoid this
++ * check for the multi-radio global combination, since it
++ * holds the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) &&
+ c->limits[j].max > 1))
+ return -EINVAL;
+
+- /* Only a single NAN can be allowed */
+- if (WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
++ /* Only a single NAN can be allowed; avoid this
++ * check for the multi-radio global combination, since it
++ * holds the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
+ c->limits[j].max > 1))
+ return -EINVAL;
+
+@@ -693,6 +709,34 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ return 0;
+ }
+
++static int wiphy_verify_combinations(struct wiphy *wiphy)
++{
++ int i, ret;
++ bool combined_radio = false;
++
++ if (wiphy->n_radio) {
++ for (i = 0; i < wiphy->n_radio; i++) {
++ const struct wiphy_radio *radio = &wiphy->radio[i];
++
++ ret = wiphy_verify_iface_combinations(wiphy,
++ radio->iface_combinations,
++ radio->n_iface_combinations,
++ false);
++ if (ret)
++ return ret;
++ }
++
++ combined_radio = true;
++ }
++
++ ret = wiphy_verify_iface_combinations(wiphy,
++ wiphy->iface_combinations,
++ wiphy->n_iface_combinations,
++ combined_radio);
++
++ return ret;
++}
++
+ int wiphy_register(struct wiphy *wiphy)
+ {
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 4dac8185472100..a5eb92d93074e6 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -340,12 +340,6 @@ cfg80211_mlme_check_mlo_compat(const struct ieee80211_multi_link_elem *mle_a,
+ return -EINVAL;
+ }
+
+- if (ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_a) !=
+- ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_b)) {
+- NL_SET_ERR_MSG(extack, "link EML medium sync delay mismatch");
+- return -EINVAL;
+- }
+-
+ if (ieee80211_mle_get_eml_cap((const u8 *)mle_a) !=
+ ieee80211_mle_get_eml_cap((const u8 *)mle_b)) {
+ NL_SET_ERR_MSG(extack, "link EML capabilities mismatch");
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index d7d099f7118ab5..9b1b9dc5a7eb2a 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -9776,6 +9776,7 @@ nl80211_parse_sched_scan(struct wiphy *wiphy, struct wireless_dev *wdev,
+ request = kzalloc(size, GFP_KERNEL);
+ if (!request)
+ return ERR_PTR(-ENOMEM);
++ request->n_channels = n_channels;
+
+ if (n_ssids)
+ request->ssids = (void *)request +
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 1140b2a120caec..b57d5d2904eb46 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -675,6 +675,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ len = desc->len;
+
+ if (!skb) {
++ first_frag = true;
++
+ hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
+ tr = dev->needed_tailroom;
+ skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
+@@ -685,12 +687,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ skb_put(skb, len);
+
+ err = skb_store_bits(skb, 0, buffer, len);
+- if (unlikely(err)) {
+- kfree_skb(skb);
++ if (unlikely(err))
+ goto free_err;
+- }
+-
+- first_frag = true;
+ } else {
+ int nr_frags = skb_shinfo(skb)->nr_frags;
+ struct page *page;
+@@ -758,6 +756,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ return skb;
+
+ free_err:
++ if (first_frag && skb)
++ kfree_skb(skb);
++
+ if (err == -EOVERFLOW) {
+ /* Drop the packet */
+ xsk_set_destructor_arg(xs->skb);
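Setting first_frag at allocation time and freeing in one place fixes two problems at once: the skb_store_bits() failure path no longer open-codes kfree_skb() separately from the shared free_err handling, and free_err itself frees only an skb this call actually built. Stand-in sketch of the ownership rule (not the kernel code):

    #include <stdlib.h>

    static void *build_or_extend(void *existing, int fail, int *err)
    {
        int first_frag = 0;
        void *buf = existing;

        if (!buf) {
            first_frag = 1;
            buf = malloc(64);
            if (!buf) {
                *err = -12;      /* -ENOMEM */
                return NULL;
            }
        }

        if (fail) {              /* e.g. the copy step failing */
            *err = -22;          /* -EINVAL */
            goto free_err;
        }

        *err = 0;
        return buf;

    free_err:
        if (first_frag && buf)
            free(buf);           /* we created it: safe to free  */
        return NULL;             /* otherwise caller keeps it    */
    }

    int main(void)
    {
        char caller_buf[64];
        int err;

        build_or_extend(NULL, 1, &err);       /* first frag: freed here */
        build_or_extend(caller_buf, 1, &err); /* appended: caller keeps */
        return 0;
    }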
+diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c
+index acc1376b833c78..92f7fc41842531 100644
+--- a/rust/helpers/spinlock.c
++++ b/rust/helpers/spinlock.c
+@@ -7,10 +7,14 @@ void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
+ struct lock_class_key *key)
+ {
+ #ifdef CONFIG_DEBUG_SPINLOCK
++# if defined(CONFIG_PREEMPT_RT)
++ __spin_lock_init(lock, name, key, false);
++# else /*!CONFIG_PREEMPT_RT */
+ __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
+-#else
++# endif /* CONFIG_PREEMPT_RT */
++#else /* !CONFIG_DEBUG_SPINLOCK */
+ spin_lock_init(lock);
+-#endif
++#endif /* CONFIG_DEBUG_SPINLOCK */
+ }
+
+ void rust_helper_spin_lock(spinlock_t *lock)
+diff --git a/rust/kernel/block/mq/request.rs b/rust/kernel/block/mq/request.rs
+index a0e22827f3f4ec..7943f43b957532 100644
+--- a/rust/kernel/block/mq/request.rs
++++ b/rust/kernel/block/mq/request.rs
+@@ -16,50 +16,55 @@
+ sync::atomic::{AtomicU64, Ordering},
+ };
+
+-/// A wrapper around a blk-mq `struct request`. This represents an IO request.
++/// A wrapper around a blk-mq [`struct request`]. This represents an IO request.
+ ///
+ /// # Implementation details
+ ///
+ /// There are four states for a request that the Rust bindings care about:
+ ///
+-/// A) Request is owned by block layer (refcount 0)
+-/// B) Request is owned by driver but with zero `ARef`s in existence
+-/// (refcount 1)
+-/// C) Request is owned by driver with exactly one `ARef` in existence
+-/// (refcount 2)
+-/// D) Request is owned by driver with more than one `ARef` in existence
+-/// (refcount > 2)
++/// 1. Request is owned by block layer (refcount 0).
++/// 2. Request is owned by driver but with zero [`ARef`]s in existence
++/// (refcount 1).
++/// 3. Request is owned by driver with exactly one [`ARef`] in existence
++/// (refcount 2).
++/// 4. Request is owned by driver with more than one [`ARef`] in existence
++/// (refcount > 2).
+ ///
+ ///
+-/// We need to track A and B to ensure we fail tag to request conversions for
++/// We need to track 1 and 2 to ensure we fail tag to request conversions for
+ /// requests that are not owned by the driver.
+ ///
+-/// We need to track C and D to ensure that it is safe to end the request and hand
++/// We need to track 3 and 4 to ensure that it is safe to end the request and hand
+ /// back ownership to the block layer.
+ ///
+ /// The states are tracked through the private `refcount` field of
+ /// `RequestDataWrapper`. This structure lives in the private data area of the C
+-/// `struct request`.
++/// [`struct request`].
+ ///
+ /// # Invariants
+ ///
+-/// * `self.0` is a valid `struct request` created by the C portion of the kernel.
++/// * `self.0` is a valid [`struct request`] created by the C portion of the
++/// kernel.
+ /// * The private data area associated with this request must be an initialized
+ /// and valid `RequestDataWrapper<T>`.
+ /// * `self` is reference counted by atomic modification of
+-/// self.wrapper_ref().refcount().
++/// `self.wrapper_ref().refcount()`.
++///
++/// [`struct request`]: srctree/include/linux/blk-mq.h
+ ///
+ #[repr(transparent)]
+ pub struct Request<T: Operations>(Opaque<bindings::request>, PhantomData<T>);
+
+ impl<T: Operations> Request<T> {
+- /// Create an `ARef<Request>` from a `struct request` pointer.
++ /// Create an [`ARef<Request>`] from a [`struct request`] pointer.
+ ///
+ /// # Safety
+ ///
+ /// * The caller must own a refcount on `ptr` that is transferred to the
+- /// returned `ARef`.
+- /// * The type invariants for `Request` must hold for the pointee of `ptr`.
++ /// returned [`ARef`].
++ /// * The type invariants for [`Request`] must hold for the pointee of `ptr`.
++ ///
++ /// [`struct request`]: srctree/include/linux/blk-mq.h
+ pub(crate) unsafe fn aref_from_raw(ptr: *mut bindings::request) -> ARef<Self> {
+ // INVARIANT: By the safety requirements of this function, invariants are upheld.
+ // SAFETY: By the safety requirement of this function, we own a
+@@ -84,12 +89,14 @@ pub(crate) unsafe fn start_unchecked(this: &ARef<Self>) {
+ }
+
+ /// Try to take exclusive ownership of `this` by dropping the refcount to 0.
+- /// This fails if `this` is not the only `ARef` pointing to the underlying
+- /// `Request`.
++ /// This fails if `this` is not the only [`ARef`] pointing to the underlying
++ /// [`Request`].
+ ///
+- /// If the operation is successful, `Ok` is returned with a pointer to the
+- /// C `struct request`. If the operation fails, `this` is returned in the
+- /// `Err` variant.
++ /// If the operation is successful, [`Ok`] is returned with a pointer to the
++ /// C [`struct request`]. If the operation fails, `this` is returned in the
++ /// [`Err`] variant.
++ ///
++ /// [`struct request`]: srctree/include/linux/blk-mq.h
+ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
+ // We can race with `TagSet::tag_to_rq`
+ if let Err(_old) = this.wrapper_ref().refcount().compare_exchange(
+@@ -109,7 +116,7 @@ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
+
+ /// Notify the block layer that the request has been completed without errors.
+ ///
+- /// This function will return `Err` if `this` is not the only `ARef`
++ /// This function will return [`Err`] if `this` is not the only [`ARef`]
+ /// referencing the request.
+ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
+ let request_ptr = Self::try_set_end(this)?;
+@@ -123,13 +130,13 @@ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
+ Ok(())
+ }
+
+- /// Return a pointer to the `RequestDataWrapper` stored in the private area
++ /// Return a pointer to the [`RequestDataWrapper`] stored in the private area
+ /// of the request structure.
+ ///
+ /// # Safety
+ ///
+ /// - `this` must point to a valid allocation of size at least size of
+- /// `Self` plus size of `RequestDataWrapper`.
++ /// [`Self`] plus size of [`RequestDataWrapper`].
+ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper> {
+ let request_ptr = this.cast::<bindings::request>();
+ // SAFETY: By safety requirements for this function, `this` is a
+@@ -141,7 +148,7 @@ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper>
+ unsafe { NonNull::new_unchecked(wrapper_ptr) }
+ }
+
+- /// Return a reference to the `RequestDataWrapper` stored in the private
++ /// Return a reference to the [`RequestDataWrapper`] stored in the private
+ /// area of the request structure.
+ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
+ // SAFETY: By type invariant, `self.0` is a valid allocation. Further,
+@@ -152,13 +159,15 @@ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
+ }
+ }
+
+-/// A wrapper around data stored in the private area of the C `struct request`.
++/// A wrapper around data stored in the private area of the C [`struct request`].
++///
++/// [`struct request`]: srctree/include/linux/blk-mq.h
+ pub(crate) struct RequestDataWrapper {
+ /// The Rust request refcount has the following states:
+ ///
+ /// - 0: The request is owned by C block layer.
+- /// - 1: The request is owned by Rust abstractions but there are no ARef references to it.
+- /// - 2+: There are `ARef` references to the request.
++ /// - 1: The request is owned by Rust abstractions but there are no [`ARef`] references to it.
++ /// - 2+: There are [`ARef`] references to the request.
+ refcount: AtomicU64,
+ }
+
+@@ -204,7 +213,7 @@ fn atomic_relaxed_op_return(target: &AtomicU64, op: impl Fn(u64) -> u64) -> u64
+ }
+
+ /// Store the result of `op(target.load)` in `target` if `target.load() !=
+-/// pred`, returning true if the target was updated.
++/// pred`, returning [`true`] if the target was updated.
+ fn atomic_relaxed_op_unless(target: &AtomicU64, op: impl Fn(u64) -> u64, pred: u64) -> bool {
+ target
+ .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |x| {
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index b5f4b3ce6b4820..032c9089e6862d 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -83,7 +83,7 @@ pub trait Module: Sized + Sync + Send {
+
+ /// Equivalent to `THIS_MODULE` in the C API.
+ ///
+-/// C header: [`include/linux/export.h`](srctree/include/linux/export.h)
++/// C header: [`include/linux/init.h`](srctree/include/linux/init.h)
+ pub struct ThisModule(*mut bindings::module);
+
+ // SAFETY: `THIS_MODULE` may be used from all threads within a module.
+diff --git a/rust/kernel/rbtree.rs b/rust/kernel/rbtree.rs
+index 25eb36fd1cdceb..d03e4aa1f4812b 100644
+--- a/rust/kernel/rbtree.rs
++++ b/rust/kernel/rbtree.rs
+@@ -884,7 +884,8 @@ fn get_neighbor_raw(&self, direction: Direction) -> Option<NonNull<bindings::rb_
+ NonNull::new(neighbor)
+ }
+
+- /// SAFETY:
++ /// # Safety
++ ///
+ /// - `node` must be a valid pointer to a node in an [`RBTree`].
+ /// - The caller has immutable access to `node` for the duration of 'b.
+ unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) {
+@@ -894,7 +895,8 @@ unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) {
+ (k, unsafe { &*v })
+ }
+
+- /// SAFETY:
++ /// # Safety
++ ///
+ /// - `node` must be a valid pointer to a node in an [`RBTree`].
+ /// - The caller has mutable access to `node` for the duration of 'b.
+ unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b mut V) {
+@@ -904,7 +906,8 @@ unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b
+ (k, unsafe { &mut *v })
+ }
+
+- /// SAFETY:
++ /// # Safety
++ ///
+ /// - `node` must be a valid pointer to a node in an [`RBTree`].
+ /// - The caller has immutable access to the key for the duration of 'b.
+ unsafe fn to_key_value_raw<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, *mut V) {
+diff --git a/rust/macros/lib.rs b/rust/macros/lib.rs
+index a626b1145e5c4f..90e2202ba4d5a0 100644
+--- a/rust/macros/lib.rs
++++ b/rust/macros/lib.rs
+@@ -359,7 +359,7 @@ pub fn pinned_drop(args: TokenStream, input: TokenStream) -> TokenStream {
+ /// macro_rules! pub_no_prefix {
+ /// ($prefix:ident, $($newname:ident),+) => {
+ /// kernel::macros::paste! {
+-/// $(pub(crate) const fn [<$newname:lower:span>]: u32 = [<$prefix $newname:span>];)+
++/// $(pub(crate) const fn [<$newname:lower:span>]() -> u32 { [<$prefix $newname:span>] })+
+ /// }
+ /// };
+ /// }
+diff --git a/samples/bpf/xdp_adjust_tail_kern.c b/samples/bpf/xdp_adjust_tail_kern.c
+index ffdd548627f0a4..da67bcad1c6381 100644
+--- a/samples/bpf/xdp_adjust_tail_kern.c
++++ b/samples/bpf/xdp_adjust_tail_kern.c
+@@ -57,6 +57,7 @@ static __always_inline void swap_mac(void *data, struct ethhdr *orig_eth)
+
+ static __always_inline __u16 csum_fold_helper(__u32 csum)
+ {
++ csum = (csum & 0xffff) + (csum >> 16);
+ return ~((csum & 0xffff) + (csum >> 16));
+ }
+
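csum_fold_helper() folded the 32-bit accumulator into 16 bits only once; when the low and high halves themselves sum past 0xffff, the leftover carry corrupts the result, hence the added pre-fold line. Standalone demonstration:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t fold_once(uint32_t csum)
    {
        return (uint16_t)~((csum & 0xffff) + (csum >> 16));
    }

    static uint16_t fold_twice(uint32_t csum)
    {
        csum = (csum & 0xffff) + (csum >> 16);              /* first fold */
        return (uint16_t)~((csum & 0xffff) + (csum >> 16)); /* fold carry */
    }

    int main(void)
    {
        uint32_t csum = 0xffff0001;  /* low + high overflows 16 bits */

        printf("one fold:  0x%04x\n", fold_once(csum));   /* 0xffff, wrong */
        printf("two folds: 0x%04x\n", fold_twice(csum));  /* 0xfffe, right */
        return 0;
    }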
+diff --git a/samples/kfifo/dma-example.c b/samples/kfifo/dma-example.c
+index 48df719dac8c6d..8076ac410161a3 100644
+--- a/samples/kfifo/dma-example.c
++++ b/samples/kfifo/dma-example.c
+@@ -9,6 +9,7 @@
+ #include <linux/kfifo.h>
+ #include <linux/module.h>
+ #include <linux/scatterlist.h>
++#include <linux/dma-mapping.h>
+
+ /*
+ * This module shows how to handle fifo dma operations.
+diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
+index 4427572b24771d..b03d526e4c454a 100755
+--- a/scripts/checkpatch.pl
++++ b/scripts/checkpatch.pl
+@@ -3209,36 +3209,31 @@ sub process {
+
+ # Check Fixes: styles is correct
+ if (!$in_header_lines &&
+- $line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) {
+- my $orig_commit = "";
+- my $id = "0123456789ab";
+- my $title = "commit title";
+- my $tag_case = 1;
+- my $tag_space = 1;
+- my $id_length = 1;
+- my $id_case = 1;
++ $line =~ /^\s*(fixes:?)\s*(?:commit\s*)?([0-9a-f]{5,40})(?:\s*($balanced_parens))?/i) {
++ my $tag = $1;
++ my $orig_commit = $2;
++ my $title;
+ my $title_has_quotes = 0;
+ $fixes_tag = 1;
+-
+- if ($line =~ /(\s*fixes:?)\s+([0-9a-f]{5,})\s+($balanced_parens)/i) {
+- my $tag = $1;
+- $orig_commit = $2;
+- $title = $3;
+-
+- $tag_case = 0 if $tag eq "Fixes:";
+- $tag_space = 0 if ($line =~ /^fixes:? [0-9a-f]{5,} ($balanced_parens)/i);
+-
+- $id_length = 0 if ($orig_commit =~ /^[0-9a-f]{12}$/i);
+- $id_case = 0 if ($orig_commit !~ /[A-F]/);
+-
++ if (defined $3) {
+ # Always strip leading/trailing parens then double quotes if existing
+- $title = substr($title, 1, -1);
++ $title = substr($3, 1, -1);
+ if ($title =~ /^".*"$/) {
+ $title = substr($title, 1, -1);
+ $title_has_quotes = 1;
+ }
++ } else {
++ $title = "commit title"
+ }
+
++
++ my $tag_case = not ($tag eq "Fixes:");
++ my $tag_space = not ($line =~ /^fixes:? [0-9a-f]{5,40} ($balanced_parens)/i);
++
++ my $id_length = not ($orig_commit =~ /^[0-9a-f]{12}$/i);
++ my $id_case = not ($orig_commit !~ /[A-F]/);
++
++ my $id = "0123456789ab";
+ my ($cid, $ctitle) = git_commit_info($orig_commit, $id,
+ $title);
+
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index fe0cc45f03be11..1fa6beef9f978e 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -252,7 +252,7 @@ __faddr2line() {
+ found=2
+ break
+ fi
+- done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2 | ${GREP} -A1 --no-group-separator " ${sym_name}$")
++ done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
+
+ if [[ $found = 0 ]]; then
+ warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 2791f819520387..320544321ecba5 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -569,6 +569,8 @@ sub output_function_man(%) {
+ my %args = %{$_[0]};
+ my ($parameter, $section);
+ my $count;
++ my $func_macro = $args{'func_macro'};
++ my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
+
+ print ".TH \"$args{'function'}\" 9 \"$args{'function'}\" \"$man_date\" \"Kernel Hacker's Manual\" LINUX\n";
+
+@@ -600,7 +602,10 @@ sub output_function_man(%) {
+ $parenth = "";
+ }
+
+- print ".SH ARGUMENTS\n";
++ $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
++ if ($paramcount >= 0) {
++ print ".SH ARGUMENTS\n";
++ }
+ foreach $parameter (@{$args{'parameterlist'}}) {
+ my $parameter_name = $parameter;
+ $parameter_name =~ s/\[.*//;
+@@ -822,10 +827,16 @@ sub output_function_rst(%) {
+ my $oldprefix = $lineprefix;
+
+ my $signature = "";
+- if ($args{'functiontype'} ne "") {
+- $signature = $args{'functiontype'} . " " . $args{'function'} . " (";
+- } else {
+- $signature = $args{'function'} . " (";
++ my $func_macro = $args{'func_macro'};
++ my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
++
++ if ($func_macro) {
++ $signature = $args{'function'};
++ } else {
++ if ($args{'functiontype'}) {
++ $signature = $args{'functiontype'} . " ";
++ }
++ $signature .= $args{'function'} . " (";
+ }
+
+ my $count = 0;
+@@ -844,7 +855,9 @@ sub output_function_rst(%) {
+ }
+ }
+
+- $signature .= ")";
++ if (!$func_macro) {
++ $signature .= ")";
++ }
+
+ if ($sphinx_major < 3) {
+ if ($args{'typedef'}) {
+@@ -888,9 +901,11 @@ sub output_function_rst(%) {
+ # Put our descriptive text into a container (thus an HTML <div>) to help
+ # set the function prototypes apart.
+ #
+- print ".. container:: kernelindent\n\n";
+ $lineprefix = " ";
+- print $lineprefix . "**Parameters**\n\n";
++ if ($paramcount >= 0) {
++ print ".. container:: kernelindent\n\n";
++ print $lineprefix . "**Parameters**\n\n";
++ }
+ foreach $parameter (@{$args{'parameterlist'}}) {
+ my $parameter_name = $parameter;
+ $parameter_name =~ s/\[.*//;
+@@ -1704,7 +1719,7 @@ sub check_return_section {
+ sub dump_function($$) {
+ my $prototype = shift;
+ my $file = shift;
+- my $noret = 0;
++ my $func_macro = 0;
+
+ print_lineno($new_start_line);
+
+@@ -1769,7 +1784,7 @@ sub dump_function($$) {
+ # declaration_name and opening parenthesis (notice the \s+).
+ $return_type = $1;
+ $declaration_name = $2;
+- $noret = 1;
++ $func_macro = 1;
+ } elsif ($prototype =~ m/^()($name)\s*$prototype_end/ ||
+ $prototype =~ m/^($type1)\s+($name)\s*$prototype_end/ ||
+ $prototype =~ m/^($type2+)\s*($name)\s*$prototype_end/) {
+@@ -1796,7 +1811,7 @@ sub dump_function($$) {
+ # of warnings goes sufficiently down, the check is only performed in
+ # -Wreturn mode.
+ # TODO: always perform the check.
+- if ($Wreturn && !$noret) {
++ if ($Wreturn && !$func_macro) {
+ check_return_section($file, $declaration_name, $return_type);
+ }
+
+@@ -1814,7 +1829,8 @@ sub dump_function($$) {
+ 'parametertypes' => \%parametertypes,
+ 'sectionlist' => \@sectionlist,
+ 'sections' => \%sections,
+- 'purpose' => $declaration_purpose
++ 'purpose' => $declaration_purpose,
++ 'func_macro' => $func_macro
+ });
+ } else {
+ output_declaration($declaration_name,
+@@ -1827,7 +1843,8 @@ sub dump_function($$) {
+ 'parametertypes' => \%parametertypes,
+ 'sectionlist' => \@sectionlist,
+ 'sections' => \%sections,
+- 'purpose' => $declaration_purpose
++ 'purpose' => $declaration_purpose,
++ 'func_macro' => $func_macro
+ });
+ }
+ }
+@@ -2322,7 +2339,6 @@ sub process_inline($$) {
+
+ sub process_file($) {
+ my $file;
+- my $initial_section_counter = $section_counter;
+ my ($orig_file) = @_;
+
+ $file = map_filename($orig_file);
+@@ -2360,8 +2376,7 @@ sub process_file($) {
+ }
+
+ # Make sure we got something interesting.
+- if ($initial_section_counter == $section_counter && $
+- output_mode ne "none") {
++ if (!$section_counter && $output_mode ne "none") {
+ if ($output_selection == OUTPUT_INCLUDE) {
+ emit_warning("${file}:1", "'$_' not found\n")
+ for keys %function_table;
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index c4cc11aa558f5f..634e40748287c0 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -809,10 +809,7 @@ static int do_eisa_entry(const char *filename, void *symval,
+ char *alias)
+ {
+ DEF_FIELD_ADDR(symval, eisa_device_id, sig);
+- if (sig[0])
+- sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
+- else
+- strcat(alias, "*");
++ sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
+ return 1;
+ }
+
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index 441b0bb66e0d0c..fb686fd3266f01 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -96,16 +96,18 @@ install_linux_image_dbg () {
+
+ # Parse modules.order directly because 'make modules_install' may sign,
+ # compress modules, and then run unneeded depmod.
+- while read -r mod; do
+- mod="${mod%.o}.ko"
+- dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}"
+- buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p')
+- link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug"
+-
+- mkdir -p "${dbg%/*}" "${link%/*}"
+- "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}"
+- ln -sf --relative "${dbg}" "${link}"
+- done < modules.order
++ if is_enabled CONFIG_MODULES; then
++ while read -r mod; do
++ mod="${mod%.o}.ko"
++ dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}"
++ buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p')
++ link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug"
++
++ mkdir -p "${dbg%/*}" "${link%/*}"
++ "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}"
++ ln -sf --relative "${dbg}" "${link}"
++ done < modules.order
++ fi
+
+ # Build debug package
+ # Different tools want the image in different locations
+diff --git a/security/apparmor/capability.c b/security/apparmor/capability.c
+index 9934df16c8431d..bf7df60868308d 100644
+--- a/security/apparmor/capability.c
++++ b/security/apparmor/capability.c
+@@ -96,6 +96,8 @@ static int audit_caps(struct apparmor_audit_data *ad, struct aa_profile *profile
+ return error;
+ } else {
+ aa_put_profile(ent->profile);
++ if (profile != ent->profile)
++ cap_clear(ent->caps);
+ ent->profile = aa_get_profile(profile);
+ cap_raise(ent->caps, cap);
+ }
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index c64733d6c98fbb..f070902da8fcce 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -281,6 +281,8 @@ static void policy_unpack_test_unpack_strdup_with_null_name(struct kunit *test)
+ ((uintptr_t)puf->e->start <= (uintptr_t)string)
+ && ((uintptr_t)string <= (uintptr_t)puf->e->end));
+ KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
+@@ -296,6 +298,8 @@ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
+ ((uintptr_t)puf->e->start <= (uintptr_t)string)
+ && ((uintptr_t)string <= (uintptr_t)puf->e->end));
+ KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
+@@ -313,6 +317,8 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
+ KUNIT_EXPECT_EQ(test, size, 0);
+ KUNIT_EXPECT_NULL(test, string);
+ KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_nameX_with_null_name(struct kunit *test)
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index b465fb6e1f5f0d..0790b5fd917e12 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -3793,9 +3793,11 @@ static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf)
+ return VM_FAULT_SIGBUS;
+ if (substream->ops->page)
+ page = substream->ops->page(substream, offset);
+- else if (!snd_pcm_get_dma_buf(substream))
++ else if (!snd_pcm_get_dma_buf(substream)) {
++ if (WARN_ON_ONCE(!runtime->dma_area))
++ return VM_FAULT_SIGBUS;
+ page = virt_to_page(runtime->dma_area + offset);
+- else
++ } else
+ page = snd_sgbuf_get_page(snd_pcm_get_dma_buf(substream), offset);
+ if (!page)
+ return VM_FAULT_SIGBUS;
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 03306be5fa0245..348ce1b7725ea2 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -724,8 +724,9 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream,
+ newbuf = kvzalloc(params->buffer_size, GFP_KERNEL);
+ if (!newbuf)
+ return -ENOMEM;
+- guard(spinlock_irq)(&substream->lock);
++ spin_lock_irq(&substream->lock);
+ if (runtime->buffer_ref) {
++ spin_unlock_irq(&substream->lock);
+ kvfree(newbuf);
+ return -EBUSY;
+ }
+@@ -733,6 +734,7 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream,
+ runtime->buffer = newbuf;
+ runtime->buffer_size = params->buffer_size;
+ __reset_runtime_ptrs(runtime, is_input);
++ spin_unlock_irq(&substream->lock);
+ kvfree(oldbuf);
+ }
+ runtime->avail_min = params->avail_min;
+diff --git a/sound/core/sound_kunit.c b/sound/core/sound_kunit.c
+index bfed1a25fc8f74..84e337ecbddd0a 100644
+--- a/sound/core/sound_kunit.c
++++ b/sound/core/sound_kunit.c
+@@ -172,6 +172,7 @@ static void test_format_fill_silence(struct kunit *test)
+ u32 i, j;
+
+ buffer = kunit_kzalloc(test, SILENCE_BUFFER_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+
+ for (i = 0; i < ARRAY_SIZE(buf_samples); i++) {
+ for (j = 0; j < ARRAY_SIZE(valid_fmt); j++)
+@@ -208,8 +209,12 @@ static void test_playback_avail(struct kunit *test)
+ struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL);
+ u32 i;
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r);
++
+ r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL);
+ r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
+
+ for (i = 0; i < ARRAY_SIZE(p_avail_data); i++) {
+ r->buffer_size = p_avail_data[i].buffer_size;
+@@ -232,8 +237,12 @@ static void test_capture_avail(struct kunit *test)
+ struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL);
+ u32 i;
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r);
++
+ r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL);
+ r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
+
+ for (i = 0; i < ARRAY_SIZE(c_avail_data); i++) {
+ r->buffer_size = c_avail_data[i].buffer_size;
+@@ -247,6 +256,7 @@ static void test_capture_avail(struct kunit *test)
+ static void test_card_set_id(struct kunit *test)
+ {
+ struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
+
+ snd_card_set_id(card, VALID_NAME);
+ KUNIT_EXPECT_STREQ(test, card->id, VALID_NAME);
+@@ -280,6 +290,7 @@ static void test_pcm_format_name(struct kunit *test)
+ static void test_card_add_component(struct kunit *test)
+ {
+ struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
+
+ snd_component_add(card, TEST_FIRST_COMPONENT);
+ KUNIT_ASSERT_STREQ(test, card->components, TEST_FIRST_COMPONENT);
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 7d59a0a9b037ad..8d37f237f83b2e 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -788,7 +788,10 @@ static void fill_fb_info(struct snd_ump_endpoint *ump,
+ info->ui_hint = buf->fb_info.ui_hint;
+ info->first_group = buf->fb_info.first_group;
+ info->num_groups = buf->fb_info.num_groups;
+- info->flags = buf->fb_info.midi_10;
++ if (buf->fb_info.midi_10 < 2)
++ info->flags = buf->fb_info.midi_10;
++ else
++ info->flags = SNDRV_UMP_BLOCK_IS_MIDI1 | SNDRV_UMP_BLOCK_IS_LOWSPEED;
+ info->active = buf->fb_info.active;
+ info->midi_ci_version = buf->fb_info.midi_ci_version;
+ info->sysex8_streams = buf->fb_info.sysex8_streams;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 24b4fe99304a40..18e6779a83be2f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -473,6 +473,8 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ break;
+ case 0x10ec0234:
+ case 0x10ec0274:
++ alc_write_coef_idx(codec, 0x6e, 0x0c25);
++ fallthrough;
+ case 0x10ec0294:
+ case 0x10ec0700:
+ case 0x10ec0701:
+@@ -3613,25 +3615,22 @@ static void alc256_init(struct hda_codec *codec)
+
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(85);
+-
+- snd_hda_codec_write(codec, hp_pin, 0,
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ msleep(75);
++
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
++ msleep(75);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ }
+ alc_update_coef_idx(codec, 0x46, 3 << 12, 0);
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
+ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+ /*
+@@ -3655,29 +3654,28 @@ static void alc256_shutup(struct hda_codec *codec)
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(85);
++ msleep(75);
+
+ /* 3k pull low control for Headset jack. */
+ /* NOTE: call this before clearing the pin, otherwise codec stalls */
+ /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ * when booting with headset plugged. So skip setting it for the codec alc257
+ */
+- if (spec->en_3kpull_low)
+- alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
++ if (spec->en_3kpull_low)
++ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+- if (!spec->no_shutup_pins)
+- snd_hda_codec_write(codec, hp_pin, 0,
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ msleep(75);
++ }
+
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+@@ -3772,33 +3770,28 @@ static void alc225_init(struct hda_codec *codec)
+ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
+
+- if (hp1_pin_sense || hp2_pin_sense)
++ if (hp1_pin_sense || hp2_pin_sense) {
+ msleep(2);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(85);
+-
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ msleep(75);
+
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
+- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ msleep(75);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ }
+ }
+
+ static void alc225_shutup(struct hda_codec *codec)
+@@ -3810,36 +3803,35 @@ static void alc225_shutup(struct hda_codec *codec)
+ if (!hp_pin)
+ hp_pin = 0x21;
+
+- alc_disable_headset_jack_key(codec);
+- /* 3k pull low control for Headset jack. */
+- alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+-
+ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
+
+- if (hp1_pin_sense || hp2_pin_sense)
++ if (hp1_pin_sense || hp2_pin_sense) {
++ alc_disable_headset_jack_key(codec);
++ /* 3k pull low control for Headset jack. */
++ alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+ msleep(2);
+
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(85);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ msleep(75);
+
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
++ msleep(75);
++ alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
++ alc_enable_headset_jack_key(codec);
++ }
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+ if (spec->ultra_low_power) {
+@@ -3850,9 +3842,6 @@ static void alc225_shutup(struct hda_codec *codec)
+ alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4);
+ msleep(30);
+ }
+-
+- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+- alc_enable_headset_jack_key(codec);
+ }
+
+ static void alc_default_init(struct hda_codec *codec)
+@@ -7559,6 +7548,7 @@ enum {
+ ALC269_FIXUP_THINKPAD_ACPI,
+ ALC269_FIXUP_DMIC_THINKPAD_ACPI,
+ ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13,
++ ALC269VC_FIXUP_INFINIX_Y4_MAX,
+ ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO,
+ ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ ALC255_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -7786,6 +7776,7 @@ enum {
+ ALC287_FIXUP_LENOVO_SSID_17AA3820,
+ ALC245_FIXUP_CLEVO_NOISY_MIC,
+ ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE,
++ ALC233_FIXUP_MEDION_MTL_SPK,
+ };
+
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -8015,6 +8006,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
++ [ALC269VC_FIXUP_INFINIX_Y4_MAX] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1b, 0x90170150 }, /* use as internal speaker */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
++ },
+ [ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -10160,6 +10160,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
++ [ALC233_FIXUP_MEDION_MTL_SPK] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1b, 0x90170110 },
++ { }
++ },
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -10585,6 +10592,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -11025,7 +11033,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
+ SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
++ SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX),
++ SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX),
+ SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
++ SND_PCI_QUIRK(0x2782, 0x4900, "MEDION E15443", ALC233_FIXUP_MEDION_MTL_SPK),
+ SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+diff --git a/sound/soc/amd/acp/acp-sdw-sof-mach.c b/sound/soc/amd/acp/acp-sdw-sof-mach.c
+index 306854fb08e3d7..3be401c7227040 100644
+--- a/sound/soc/amd/acp/acp-sdw-sof-mach.c
++++ b/sound/soc/amd/acp/acp-sdw-sof-mach.c
+@@ -154,7 +154,7 @@ static int create_sdw_dailink(struct snd_soc_card *card,
+ int num_cpus = hweight32(sof_dai->link_mask[stream]);
+ int num_codecs = sof_dai->num_devs[stream];
+ int playback, capture;
+- int i = 0, j = 0;
++ int j = 0;
+ char *name;
+
+ if (!sof_dai->num_devs[stream])
+@@ -213,14 +213,14 @@ static int create_sdw_dailink(struct snd_soc_card *card,
+
+ int link_num = ffs(sof_end->link_mask) - 1;
+
+- cpus[i].dai_name = devm_kasprintf(dev, GFP_KERNEL,
+- "SDW%d Pin%d",
+- link_num, cpu_pin_id);
+- dev_dbg(dev, "cpu[%d].dai_name:%s\n", i, cpus[i].dai_name);
+- if (!cpus[i].dai_name)
++ cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL,
++ "SDW%d Pin%d",
++ link_num, cpu_pin_id);
++ dev_dbg(dev, "cpu->dai_name:%s\n", cpus->dai_name);
++ if (!cpus->dai_name)
+ return -ENOMEM;
+
+- codec_maps[j].cpu = i;
++ codec_maps[j].cpu = 0;
+ codec_maps[j].codec = j;
+
+ codecs[j].name = sof_end->codec_name;
+@@ -362,7 +362,7 @@ static int sof_card_dai_links_create(struct snd_soc_card *card)
+ dai_links = devm_kcalloc(dev, num_links, sizeof(*dai_links), GFP_KERNEL);
+ if (!dai_links) {
+ ret = -ENOMEM;
+- goto err_end;
++ goto err_end;
+ }
+
+ card->codec_conf = codec_conf;
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 2436e8deb2be48..5153a68d8c0795 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -241,6 +241,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21M5"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21ME"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -537,8 +544,14 @@ static int acp6x_probe(struct platform_device *pdev)
+ struct acp6x_pdm *machine = NULL;
+ struct snd_soc_card *card;
+ struct acpi_device *adev;
++ acpi_handle handle;
++ acpi_integer dmic_status;
+ int ret;
++ bool is_dmic_enable, wov_en;
+
++ /* IF WOV entry not found, enable dmic based on AcpDmicConnected entry*/
++ is_dmic_enable = false;
++ wov_en = true;
+ /* check the parent device's firmware node has _DSD or not */
+ adev = ACPI_COMPANION(pdev->dev.parent);
+ if (adev) {
+@@ -546,9 +559,19 @@ static int acp6x_probe(struct platform_device *pdev)
+
+ if (!acpi_dev_get_property(adev, "AcpDmicConnected", ACPI_TYPE_INTEGER, &obj) &&
+ obj->integer.value == 1)
+- platform_set_drvdata(pdev, &acp6x_card);
++ is_dmic_enable = true;
+ }
+
++ handle = ACPI_HANDLE(pdev->dev.parent);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ wov_en = dmic_status;
++
++ if (is_dmic_enable && wov_en)
++ platform_set_drvdata(pdev, &acp6x_card);
++ else
++ return 0;
++
+ /* check for any DMI overrides */
+ dmi_id = dmi_first_match(yc_acp_quirk_table);
+ if (dmi_id)
+diff --git a/sound/soc/codecs/da7213.c b/sound/soc/codecs/da7213.c
+index f3ef6fb5530471..486db60bf2dd14 100644
+--- a/sound/soc/codecs/da7213.c
++++ b/sound/soc/codecs/da7213.c
+@@ -2136,6 +2136,7 @@ static const struct regmap_config da7213_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
++ .max_register = DA7213_TONE_GEN_OFF_PER,
+ .reg_defaults = da7213_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(da7213_reg_defaults),
+ .volatile_reg = da7213_volatile_register,
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 311ea7918b3124..e2da3e317b5a3e 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -1167,17 +1167,20 @@ static int da7219_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component);
+ int ret = 0;
+
+- if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq))
++ mutex_lock(&da7219->pll_lock);
++
++ if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq)) {
++ mutex_unlock(&da7219->pll_lock);
+ return 0;
++ }
+
+ if ((freq < 2000000) || (freq > 54000000)) {
++ mutex_unlock(&da7219->pll_lock);
+ dev_err(codec_dai->dev, "Unsupported MCLK value %d\n",
+ freq);
+ return -EINVAL;
+ }
+
+- mutex_lock(&da7219->pll_lock);
+-
+ switch (clk_id) {
+ case DA7219_CLKSRC_MCLK_SQR:
+ snd_soc_component_update_bits(component, DA7219_PLL_CTRL,
+diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c
+index e5bd9ef812de13..f9f7512ca36087 100644
+--- a/sound/soc/codecs/rt722-sdca.c
++++ b/sound/soc/codecs/rt722-sdca.c
+@@ -607,12 +607,8 @@ static int rt722_sdca_dmic_set_gain_get(struct snd_kcontrol *kcontrol,
+
+ if (!adc_vol_flag) /* boost gain */
+ ctl = regvalue / boost_step;
+- else { /* ADC gain */
+- if (adc_vol_flag)
+- ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
+- else
+- ctl = p->max - (((0 - regvalue) & 0xffff) / interval_offset);
+- }
++ else /* ADC gain */
++ ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
+
+ ucontrol->value.integer.value[i] = ctl;
+ }
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index f6c3aeff0d8eaf..a0c2ce84c32b1d 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -1033,14 +1033,15 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ }
+
+ /*
+- * Properties "hp-det-gpio" and "mic-det-gpio" are optional, and
++ * Properties "hp-det-gpios" and "mic-det-gpios" are optional, and
+ * simple_util_init_jack() uses these properties for creating
+ * Headphone Jack and Microphone Jack.
+ *
+ * The notifier is initialized in snd_soc_card_jack_new(), then
+ * snd_soc_jack_notifier_register can be called.
+ */
+- if (of_property_read_bool(np, "hp-det-gpio")) {
++ if (of_property_read_bool(np, "hp-det-gpios") ||
++ of_property_read_bool(np, "hp-det-gpio") /* deprecated */) {
+ ret = simple_util_init_jack(&priv->card, &priv->hp_jack,
+ 1, NULL, "Headphone Jack");
+ if (ret)
+@@ -1049,7 +1050,8 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ snd_soc_jack_notifier_register(&priv->hp_jack.jack, &hp_jack_nb);
+ }
+
+- if (of_property_read_bool(np, "mic-det-gpio")) {
++ if (of_property_read_bool(np, "mic-det-gpios") ||
++ of_property_read_bool(np, "mic-det-gpio") /* deprecated */) {
+ ret = simple_util_init_jack(&priv->card, &priv->mic_jack,
+ 0, NULL, "Mic Jack");
+ if (ret)
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 0c71a73476dfa6..67c2d4cb0dea21 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -1061,7 +1061,7 @@ static irqreturn_t micfil_isr(int irq, void *devid)
+ regmap_write_bits(micfil->regmap,
+ REG_MICFIL_STAT,
+ MICFIL_STAT_CHXF(i),
+- 1);
++ MICFIL_STAT_CHXF(i));
+ }
+
+ for (i = 0; i < MICFIL_FIFO_NUM; i++) {
+@@ -1096,7 +1096,7 @@ static irqreturn_t micfil_err_isr(int irq, void *devid)
+ if (stat_reg & MICFIL_STAT_LOWFREQF) {
+ dev_dbg(&pdev->dev, "isr: ipg_clk_app is too low\n");
+ regmap_write_bits(micfil->regmap, REG_MICFIL_STAT,
+- MICFIL_STAT_LOWFREQF, 1);
++ MICFIL_STAT_LOWFREQF, MICFIL_STAT_LOWFREQF);
+ }
+
+ return IRQ_HANDLED;
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 6fbcf33fd0dea6..8e7b75cf64db42 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -275,6 +275,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ /* Add AUDMIX Backend */
+ be_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "audmix-%d", i);
++ if (!be_name)
++ return -ENOMEM;
++
+ priv->dai[num_dai + i].cpus = &dlc[1];
+ priv->dai[num_dai + i].codecs = &snd_soc_dummy_dlc;
+
+diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+index 08ae962afeb929..4eed90d13a5326 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c
++++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+@@ -1279,10 +1279,12 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+
+ for_each_card_prelinks(card, i, dai_link) {
+ if (strcmp(dai_link->name, "DPTX_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8188_dptx_codec_init;
+ } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8188_hdmi_codec_init;
+ } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 ||
+ strcmp(dai_link->name, "UL_SRC_BE") == 0) {
+@@ -1294,6 +1296,9 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM1_IN_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM2_IN_BE") == 0) {
++ if (!dai_link->num_codecs)
++ continue;
++
+ if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) {
+ /*
+ * The TDM protocol settings with fixed 4 slots are defined in
+diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+index db00704e206d6d..943f8116840373 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
++++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+@@ -1099,7 +1099,7 @@ static int mt8192_mt6359_legacy_probe(struct mtk_soc_card_data *soc_card_data)
+ dai_link->ignore = 0;
+ }
+
+- if (dai_link->num_codecs && dai_link->codecs[0].dai_name &&
++ if (dai_link->num_codecs &&
+ strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0)
+ dai_link->ops = &mt8192_rt1015_i2s_ops;
+ }
+@@ -1127,7 +1127,7 @@ static int mt8192_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ int i;
+
+ for_each_card_prelinks(card, i, dai_link)
+- if (dai_link->num_codecs && dai_link->codecs[0].dai_name &&
++ if (dai_link->num_codecs &&
+ strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0)
+ dai_link->ops = &mt8192_rt1015_i2s_ops;
+ }
+diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359.c b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+index 2832ef78eaed72..8ebf6c7502aa3d 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-mt6359.c
++++ b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+@@ -1380,10 +1380,12 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+
+ for_each_card_prelinks(card, i, dai_link) {
+ if (strcmp(dai_link->name, "DPTX_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8195_dptx_codec_init;
+ } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8195_hdmi_codec_init;
+ } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 ||
+ strcmp(dai_link->name, "UL_SRC1_BE") == 0 ||
+@@ -1396,6 +1398,9 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM1_IN_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM2_IN_BE") == 0) {
++ if (!dai_link->num_codecs)
++ continue;
++
+ if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) {
+ if (!(codec_init & MAX98390_CODEC_INIT)) {
+ dai_link->init = mt8195_max98390_init;
+diff --git a/sound/usb/6fire/chip.c b/sound/usb/6fire/chip.c
+index 33e962178c9363..d562a30b087f01 100644
+--- a/sound/usb/6fire/chip.c
++++ b/sound/usb/6fire/chip.c
+@@ -61,8 +61,10 @@ static void usb6fire_chip_abort(struct sfire_chip *chip)
+ }
+ }
+
+-static void usb6fire_chip_destroy(struct sfire_chip *chip)
++static void usb6fire_card_free(struct snd_card *card)
+ {
++ struct sfire_chip *chip = card->private_data;
++
+ if (chip) {
+ if (chip->pcm)
+ usb6fire_pcm_destroy(chip);
+@@ -72,8 +74,6 @@ static void usb6fire_chip_destroy(struct sfire_chip *chip)
+ usb6fire_comm_destroy(chip);
+ if (chip->control)
+ usb6fire_control_destroy(chip);
+- if (chip->card)
+- snd_card_free(chip->card);
+ }
+ }
+
+@@ -136,6 +136,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
+ chip->regidx = regidx;
+ chip->intf_count = 1;
+ chip->card = card;
++ card->private_free = usb6fire_card_free;
+
+ ret = usb6fire_comm_init(chip);
+ if (ret < 0)
+@@ -162,7 +163,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
+ return 0;
+
+ destroy_chip:
+- usb6fire_chip_destroy(chip);
++ snd_card_free(card);
+ return ret;
+ }
+
+@@ -181,7 +182,6 @@ static void usb6fire_chip_disconnect(struct usb_interface *intf)
+
+ chip->shutdown = true;
+ usb6fire_chip_abort(chip);
+- usb6fire_chip_destroy(chip);
+ }
+ }
+ }
+diff --git a/sound/usb/caiaq/audio.c b/sound/usb/caiaq/audio.c
+index 772c0ecb707738..05f964347ed6c2 100644
+--- a/sound/usb/caiaq/audio.c
++++ b/sound/usb/caiaq/audio.c
+@@ -858,14 +858,20 @@ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev)
+ return 0;
+ }
+
+-void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
++void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev)
+ {
+ struct device *dev = caiaqdev_to_dev(cdev);
+
+ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
+ stream_stop(cdev);
++}
++
++void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
++{
++ struct device *dev = caiaqdev_to_dev(cdev);
++
++ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
+ free_urbs(cdev->data_urbs_in);
+ free_urbs(cdev->data_urbs_out);
+ kfree(cdev->data_cb_info);
+ }
+-
+diff --git a/sound/usb/caiaq/audio.h b/sound/usb/caiaq/audio.h
+index 869bf6264d6a09..07f5d064456cf7 100644
+--- a/sound/usb/caiaq/audio.h
++++ b/sound/usb/caiaq/audio.h
+@@ -3,6 +3,7 @@
+ #define CAIAQ_AUDIO_H
+
+ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev);
++void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev);
+ void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev);
+
+ #endif /* CAIAQ_AUDIO_H */
+diff --git a/sound/usb/caiaq/device.c b/sound/usb/caiaq/device.c
+index b5cbf1f195c48c..dfd820483849eb 100644
+--- a/sound/usb/caiaq/device.c
++++ b/sound/usb/caiaq/device.c
+@@ -376,6 +376,17 @@ static void setup_card(struct snd_usb_caiaqdev *cdev)
+ dev_err(dev, "Unable to set up control system (ret=%d)\n", ret);
+ }
+
++static void card_free(struct snd_card *card)
++{
++ struct snd_usb_caiaqdev *cdev = caiaqdev(card);
++
++#ifdef CONFIG_SND_USB_CAIAQ_INPUT
++ snd_usb_caiaq_input_free(cdev);
++#endif
++ snd_usb_caiaq_audio_free(cdev);
++ usb_reset_device(cdev->chip.dev);
++}
++
+ static int create_card(struct usb_device *usb_dev,
+ struct usb_interface *intf,
+ struct snd_card **cardp)
+@@ -489,6 +500,7 @@ static int init_card(struct snd_usb_caiaqdev *cdev)
+ cdev->vendor_name, cdev->product_name, usbpath);
+
+ setup_card(cdev);
++ card->private_free = card_free;
+ return 0;
+
+ err_kill_urb:
+@@ -534,15 +546,14 @@ static void snd_disconnect(struct usb_interface *intf)
+ snd_card_disconnect(card);
+
+ #ifdef CONFIG_SND_USB_CAIAQ_INPUT
+- snd_usb_caiaq_input_free(cdev);
++ snd_usb_caiaq_input_disconnect(cdev);
+ #endif
+- snd_usb_caiaq_audio_free(cdev);
++ snd_usb_caiaq_audio_disconnect(cdev);
+
+ usb_kill_urb(&cdev->ep1_in_urb);
+ usb_kill_urb(&cdev->midi_out_urb);
+
+- snd_card_free(card);
+- usb_reset_device(interface_to_usbdev(intf));
++ snd_card_free_when_closed(card);
+ }
+
+
+diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c
+index 84f26dce7f5d03..a9130891bb696d 100644
+--- a/sound/usb/caiaq/input.c
++++ b/sound/usb/caiaq/input.c
+@@ -829,15 +829,21 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
+ return ret;
+ }
+
+-void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
++void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev)
+ {
+ if (!cdev || !cdev->input_dev)
+ return;
+
+ usb_kill_urb(cdev->ep4_in_urb);
++ input_unregister_device(cdev->input_dev);
++}
++
++void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
++{
++ if (!cdev || !cdev->input_dev)
++ return;
++
+ usb_free_urb(cdev->ep4_in_urb);
+ cdev->ep4_in_urb = NULL;
+-
+- input_unregister_device(cdev->input_dev);
+ cdev->input_dev = NULL;
+ }
+diff --git a/sound/usb/caiaq/input.h b/sound/usb/caiaq/input.h
+index c42891e7be884d..fbe267f85d025f 100644
+--- a/sound/usb/caiaq/input.h
++++ b/sound/usb/caiaq/input.h
+@@ -4,6 +4,7 @@
+
+ void snd_usb_caiaq_input_dispatch(struct snd_usb_caiaqdev *cdev, char *buf, unsigned int len);
+ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev);
++void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev);
+ void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev);
+
+ #endif
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 8f85200292f3ff..842ba5b801eae8 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -36,6 +36,12 @@ union uac23_clock_multiplier_desc {
+ struct uac_clock_multiplier_descriptor v3;
+ };
+
++/* check whether the descriptor bLength has the minimal length */
++#define DESC_LENGTH_CHECK(p, proto) \
++ ((proto) == UAC_VERSION_3 ? \
++ ((p)->v3.bLength >= sizeof((p)->v3)) : \
++ ((p)->v2.bLength >= sizeof((p)->v2)))
++
+ #define GET_VAL(p, proto, field) \
+ ((proto) == UAC_VERSION_3 ? (p)->v3.field : (p)->v2.field)
+
+@@ -58,6 +64,8 @@ static bool validate_clock_source(void *p, int id, int proto)
+ {
+ union uac23_clock_source_desc *cs = p;
+
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
+ return GET_VAL(cs, proto, bClockID) == id;
+ }
+
+@@ -65,13 +73,27 @@ static bool validate_clock_selector(void *p, int id, int proto)
+ {
+ union uac23_clock_selector_desc *cs = p;
+
+- return GET_VAL(cs, proto, bClockID) == id;
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
++ if (GET_VAL(cs, proto, bClockID) != id)
++ return false;
++ /* additional length check for baCSourceID array (in bNrInPins size)
++ * and two more fields (which sizes depend on the protocol)
++ */
++ if (proto == UAC_VERSION_3)
++ return cs->v3.bLength >= sizeof(cs->v3) + cs->v3.bNrInPins +
++ 4 /* bmControls */ + 2 /* wCSelectorDescrStr */;
++ else
++ return cs->v2.bLength >= sizeof(cs->v2) + cs->v2.bNrInPins +
++ 1 /* bmControls */ + 1 /* iClockSelector */;
+ }
+
+ static bool validate_clock_multiplier(void *p, int id, int proto)
+ {
+ union uac23_clock_multiplier_desc *cs = p;
+
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
+ return GET_VAL(cs, proto, bClockID) == id;
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index c5fd180357d1e8..8538fdfce3535b 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -555,6 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
+ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+
+ if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD ||
+@@ -566,10 +567,14 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac
+ if (err < 0)
+ dev_dbg(&dev->dev, "error sending boot message: %d\n", err);
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err);
+@@ -901,6 +906,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev)
+ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+ u8 bootresponse[0x12];
+ int fwsize;
+@@ -936,10 +942,14 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ dev_dbg(&dev->dev, "device initialised!\n");
+
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1249,6 +1259,7 @@ static void mbox3_setup_defaults(struct usb_device *dev)
+ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+ int descriptor_size;
+
+@@ -1262,10 +1273,14 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ dev_dbg(&dev->dev, "MBOX3: device initialised!\n");
+
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "MBOX3: error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "MBOX3: error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
+index 1be0e980feb958..ca5fac03ec798d 100644
+--- a/sound/usb/usx2y/us122l.c
++++ b/sound/usb/usx2y/us122l.c
+@@ -606,10 +606,7 @@ static void snd_us122l_disconnect(struct usb_interface *intf)
+ usb_put_intf(usb_ifnum_to_if(us122l->dev, 1));
+ usb_put_dev(us122l->dev);
+
+- while (atomic_read(&us122l->mmap_count))
+- msleep(500);
+-
+- snd_card_free(card);
++ snd_card_free_when_closed(card);
+ }
+
+ static int snd_us122l_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
+index 2f9cede242b3a9..5f81c68fd42b68 100644
+--- a/sound/usb/usx2y/usbusx2y.c
++++ b/sound/usb/usx2y/usbusx2y.c
+@@ -422,7 +422,7 @@ static void snd_usx2y_disconnect(struct usb_interface *intf)
+ }
+ if (usx2y->us428ctls_sharedmem)
+ wake_up(&usx2y->us428ctls_wait_queue_head);
+- snd_card_free(card);
++ snd_card_free_when_closed(card);
+ }
+
+ static int snd_usx2y_probe(struct usb_interface *intf,
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index 7b8d9ec89ebd35..c032d2c6ab6d55 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -80,7 +80,8 @@ symbol_lookup_callback(__maybe_unused void *disasm_info,
+ static int
+ init_context(disasm_ctx_t *ctx, const char *arch,
+ __maybe_unused const char *disassembler_options,
+- __maybe_unused unsigned char *image, __maybe_unused ssize_t len)
++ __maybe_unused unsigned char *image, __maybe_unused ssize_t len,
++ __maybe_unused __u64 func_ksym)
+ {
+ char *triple;
+
+@@ -109,12 +110,13 @@ static void destroy_context(disasm_ctx_t *ctx)
+ }
+
+ static int
+-disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc)
++disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc,
++ __u64 func_ksym)
+ {
+ char buf[256];
+ int count;
+
+- count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, pc,
++ count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, func_ksym + pc,
+ buf, sizeof(buf));
+ if (json_output)
+ printf_json(buf);
+@@ -136,8 +138,21 @@ int disasm_init(void)
+ #ifdef HAVE_LIBBFD_SUPPORT
+ #define DISASM_SPACER "\t"
+
++struct disasm_info {
++ struct disassemble_info info;
++ __u64 func_ksym;
++};
++
++static void disasm_print_addr(bfd_vma addr, struct disassemble_info *info)
++{
++ struct disasm_info *dinfo = container_of(info, struct disasm_info, info);
++
++ addr += dinfo->func_ksym;
++ generic_print_address(addr, info);
++}
++
+ typedef struct {
+- struct disassemble_info *info;
++ struct disasm_info *info;
+ disassembler_ftype disassemble;
+ bfd *bfdf;
+ } disasm_ctx_t;
+@@ -215,7 +230,7 @@ static int fprintf_json_styled(void *out,
+
+ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ const char *disassembler_options,
+- unsigned char *image, ssize_t len)
++ unsigned char *image, ssize_t len, __u64 func_ksym)
+ {
+ struct disassemble_info *info;
+ char tpath[PATH_MAX];
+@@ -238,12 +253,13 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ }
+ bfdf = ctx->bfdf;
+
+- ctx->info = malloc(sizeof(struct disassemble_info));
++ ctx->info = malloc(sizeof(struct disasm_info));
+ if (!ctx->info) {
+ p_err("mem alloc failed");
+ goto err_close;
+ }
+- info = ctx->info;
++ ctx->info->func_ksym = func_ksym;
++ info = &ctx->info->info;
+
+ if (json_output)
+ init_disassemble_info_compat(info, stdout,
+@@ -272,6 +288,7 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ info->disassembler_options = disassembler_options;
+ info->buffer = image;
+ info->buffer_length = len;
++ info->print_address_func = disasm_print_addr;
+
+ disassemble_init_for_target(info);
+
+@@ -304,9 +321,10 @@ static void destroy_context(disasm_ctx_t *ctx)
+
+ static int
+ disassemble_insn(disasm_ctx_t *ctx, __maybe_unused unsigned char *image,
+- __maybe_unused ssize_t len, int pc)
++ __maybe_unused ssize_t len, int pc,
++ __maybe_unused __u64 func_ksym)
+ {
+- return ctx->disassemble(pc, ctx->info);
++ return ctx->disassemble(pc, &ctx->info->info);
+ }
+
+ int disasm_init(void)
+@@ -331,7 +349,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ if (!len)
+ return -1;
+
+- if (init_context(&ctx, arch, disassembler_options, image, len))
++ if (init_context(&ctx, arch, disassembler_options, image, len, func_ksym))
+ return -1;
+
+ if (json_output)
+@@ -360,7 +378,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ printf("%4x:" DISASM_SPACER, pc);
+ }
+
+- count = disassemble_insn(&ctx, image, len, pc);
++ count = disassemble_insn(&ctx, image, len, pc, func_ksym);
+
+ if (json_output) {
+ /* Operand array, was started in fprintf_json. Before
+diff --git a/tools/gpio/gpio-sloppy-logic-analyzer.sh b/tools/gpio/gpio-sloppy-logic-analyzer.sh
+index ed21a110df5e5d..3ef2278e49f916 100755
+--- a/tools/gpio/gpio-sloppy-logic-analyzer.sh
++++ b/tools/gpio/gpio-sloppy-logic-analyzer.sh
+@@ -113,7 +113,7 @@ init_cpu()
+ taskset -p "$newmask" "$p" || continue
+ done 2>/dev/null >/dev/null
+
+- # Big hammer! Working with 'rcu_momentary_dyntick_idle()' for a more fine-grained solution
++ # Big hammer! Working with 'rcu_momentary_eqs()' for a more fine-grained solution
+ # still printed warnings. Same for re-enabling the stall detector after sampling.
+ echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
+
+diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h
+index 2ec13d8b9a2db8..f9ab83a219b8a2 100644
+--- a/tools/include/nolibc/arch-s390.h
++++ b/tools/include/nolibc/arch-s390.h
+@@ -10,6 +10,7 @@
+
+ #include "compiler.h"
+ #include "crt.h"
++#include "std.h"
+
+ /* Syscalls for s390:
+ * - registers are 64-bit
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 1b22f0f372880e..857a5f7b413d6d 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -61,7 +61,8 @@ ifndef VERBOSE
+ endif
+
+ INCLUDES = -I$(or $(OUTPUT),.) \
+- -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
++ -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi \
++ -I$(srctree)/tools/arch/$(SRCARCH)/include
+
+ export prefix libdir src obj
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 219facd0e66e8b..5ff643e60d09ca 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3985,7 +3985,7 @@ static bool sym_is_subprog(const Elf64_Sym *sym, int text_shndx)
+ return true;
+
+ /* global function */
+- return bind == STB_GLOBAL && type == STT_FUNC;
++ return (bind == STB_GLOBAL || bind == STB_WEAK) && type == STT_FUNC;
+ }
+
+ static int find_extern_btf_id(const struct btf *btf, const char *ext_name)
+@@ -4389,7 +4389,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
+
+ static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog)
+ {
+- return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1;
++ return prog->sec_idx == obj->efile.text_shndx;
+ }
+
+ struct bpf_program *
+@@ -5094,6 +5094,7 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ enum libbpf_map_type map_type = map->libbpf_type;
+ char *cp, errmsg[STRERR_BUFSIZE];
+ int err, zero = 0;
++ size_t mmap_sz;
+
+ if (obj->gen_loader) {
+ bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps,
+@@ -5107,8 +5108,8 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ if (err) {
+ err = -errno;
+ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error setting initial map(%s) contents: %s\n",
+- map->name, cp);
++ pr_warn("map '%s': failed to set initial contents: %s\n",
++ bpf_map__name(map), cp);
+ return err;
+ }
+
+@@ -5118,11 +5119,43 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ if (err) {
+ err = -errno;
+ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error freezing map(%s) as read-only: %s\n",
+- map->name, cp);
++ pr_warn("map '%s': failed to freeze as read-only: %s\n",
++ bpf_map__name(map), cp);
+ return err;
+ }
+ }
++
++ /* Remap anonymous mmap()-ed "map initialization image" as
++ * a BPF map-backed mmap()-ed memory, but preserving the same
++ * memory address. This will cause kernel to change process'
++ * page table to point to a different piece of kernel memory,
++ * but from userspace point of view memory address (and its
++ * contents, being identical at this point) will stay the
++ * same. This mapping will be released by bpf_object__close()
++ * as per normal clean up procedure.
++ */
++ mmap_sz = bpf_map_mmap_sz(map);
++ if (map->def.map_flags & BPF_F_MMAPABLE) {
++ void *mmaped;
++ int prot;
++
++ if (map->def.map_flags & BPF_F_RDONLY_PROG)
++ prot = PROT_READ;
++ else
++ prot = PROT_READ | PROT_WRITE;
++ mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0);
++ if (mmaped == MAP_FAILED) {
++ err = -errno;
++ pr_warn("map '%s': failed to re-mmap() contents: %d\n",
++ bpf_map__name(map), err);
++ return err;
++ }
++ map->mmaped = mmaped;
++ } else if (map->mmaped) {
++ munmap(map->mmaped, mmap_sz);
++ map->mmaped = NULL;
++ }
++
+ return 0;
+ }
+
+@@ -5439,8 +5472,7 @@ bpf_object__create_maps(struct bpf_object *obj)
+ err = bpf_object__populate_internal_map(obj, map);
+ if (err < 0)
+ goto err_out;
+- }
+- if (map->def.type == BPF_MAP_TYPE_ARENA) {
++ } else if (map->def.type == BPF_MAP_TYPE_ARENA) {
+ map->mmaped = mmap((void *)(long)map->map_extra,
+ bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE,
+ map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED,
+@@ -7352,8 +7384,14 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
+ opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
+
+ /* special check for usdt to use uprobe_multi link */
+- if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK))
++ if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) {
++ /* for BPF_TRACE_UPROBE_MULTI, user might want to query expected_attach_type
++ * in prog, and expected_attach_type we set in kernel is from opts, so we
++ * update both.
++ */
+ prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
++ opts->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
++ }
+
+ if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) {
+ int btf_obj_fd = 0, btf_type_id = 0, err;
+@@ -7443,6 +7481,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
+ load_attr.attach_btf_id = prog->attach_btf_id;
+ load_attr.kern_version = kern_version;
+ load_attr.prog_ifindex = prog->prog_ifindex;
++ load_attr.expected_attach_type = prog->expected_attach_type;
+
+ /* specify func_info/line_info only if kernel supports them */
+ if (obj->btf && btf__fd(obj->btf) >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
+@@ -7474,9 +7513,6 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
+ insns_cnt = prog->insns_cnt;
+ }
+
+- /* allow prog_prepare_load_fn to change expected_attach_type */
+- load_attr.expected_attach_type = prog->expected_attach_type;
+-
+ if (obj->gen_loader) {
+ bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name,
+ license, insns, insns_cnt, &load_attr,
+@@ -13877,46 +13913,11 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
+ for (i = 0; i < s->map_cnt; i++) {
+ struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
+ struct bpf_map *map = *map_skel->map;
+- size_t mmap_sz = bpf_map_mmap_sz(map);
+- int prot, map_fd = map->fd;
+- void **mmaped = map_skel->mmaped;
+-
+- if (!mmaped)
+- continue;
+
+- if (!(map->def.map_flags & BPF_F_MMAPABLE)) {
+- *mmaped = NULL;
++ if (!map_skel->mmaped)
+ continue;
+- }
+-
+- if (map->def.type == BPF_MAP_TYPE_ARENA) {
+- *mmaped = map->mmaped;
+- continue;
+- }
+
+- if (map->def.map_flags & BPF_F_RDONLY_PROG)
+- prot = PROT_READ;
+- else
+- prot = PROT_READ | PROT_WRITE;
+-
+- /* Remap anonymous mmap()-ed "map initialization image" as
+- * a BPF map-backed mmap()-ed memory, but preserving the same
+- * memory address. This will cause kernel to change process'
+- * page table to point to a different piece of kernel memory,
+- * but from userspace point of view memory address (and its
+- * contents, being identical at this point) will stay the
+- * same. This mapping will be released by bpf_object__close()
+- * as per normal clean up procedure, so we don't need to worry
+- * about it from skeleton's clean up perspective.
+- */
+- *mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0);
+- if (*mmaped == MAP_FAILED) {
+- err = -errno;
+- *mmaped = NULL;
+- pr_warn("failed to re-mmap() map '%s': %d\n",
+- bpf_map__name(map), err);
+- return libbpf_err(err);
+- }
++ *map_skel->mmaped = map->mmaped;
+ }
+
+ return 0;
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index e0005c6ade88a2..6985ab0f1ca9e8 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -396,6 +396,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
+ pr_warn_elf("failed to create SYMTAB data");
+ return -EINVAL;
+ }
++ /* Ensure libelf translates byte-order of symbol records */
++ sec->data->d_type = ELF_T_SYM;
+
+ str_off = strset__add_str(linker->strtab_strs, sec->sec_name);
+ if (str_off < 0)
+diff --git a/tools/lib/thermal/commands.c b/tools/lib/thermal/commands.c
+index 73d4d4e8d6ec0b..27b4442f0e347a 100644
+--- a/tools/lib/thermal/commands.c
++++ b/tools/lib/thermal/commands.c
+@@ -261,9 +261,25 @@ static struct genl_ops thermal_cmd_ops = {
+ .o_ncmds = ARRAY_SIZE(thermal_cmds),
+ };
+
+-static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int cmd,
+- int flags, void *arg)
++struct cmd_param {
++ int tz_id;
++};
++
++typedef int (*cmd_cb_t)(struct nl_msg *, struct cmd_param *);
++
++static int thermal_genl_tz_id_encode(struct nl_msg *msg, struct cmd_param *p)
++{
++ if (p->tz_id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, p->tz_id))
++ return -1;
++
++ return 0;
++}
++
++static thermal_error_t thermal_genl_auto(struct thermal_handler *th, cmd_cb_t cmd_cb,
++ struct cmd_param *param,
++ int cmd, int flags, void *arg)
+ {
++ thermal_error_t ret = THERMAL_ERROR;
+ struct nl_msg *msg;
+ void *hdr;
+
+@@ -274,45 +290,55 @@ static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int
+ hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, thermal_cmd_ops.o_id,
+ 0, flags, cmd, THERMAL_GENL_VERSION);
+ if (!hdr)
+- return THERMAL_ERROR;
++ goto out;
+
+- if (id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, id))
+- return THERMAL_ERROR;
++ if (cmd_cb && cmd_cb(msg, param))
++ goto out;
+
+ if (nl_send_msg(th->sk_cmd, th->cb_cmd, msg, genl_handle_msg, arg))
+- return THERMAL_ERROR;
++ goto out;
+
++ ret = THERMAL_SUCCESS;
++out:
+ nlmsg_free(msg);
+
+- return THERMAL_SUCCESS;
++ return ret;
+ }
+
+ thermal_error_t thermal_cmd_get_tz(struct thermal_handler *th, struct thermal_zone **tz)
+ {
+- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_TZ_GET_ID,
++ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_TZ_GET_ID,
+ NLM_F_DUMP | NLM_F_ACK, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_cdev(struct thermal_handler *th, struct thermal_cdev **tc)
+ {
+- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_CDEV_GET,
++ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_CDEV_GET,
+ NLM_F_DUMP | NLM_F_ACK, tc);
+ }
+
+ thermal_error_t thermal_cmd_get_trip(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TRIP,
+- 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_TRIP, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_governor(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_exit(struct thermal_handler *th)
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index d4332675babb74..2ce71d2e5fae05 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -1194,7 +1194,7 @@ endif
+ ifneq ($(NO_LIBTRACEEVENT),1)
+ $(call feature_check,libtraceevent)
+ ifeq ($(feature-libtraceevent), 1)
+- CFLAGS += -DHAVE_LIBTRACEEVENT
++ CFLAGS += -DHAVE_LIBTRACEEVENT $(shell $(PKG_CONFIG) --cflags libtraceevent)
+ LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtraceevent)
+ EXTLIBS += $(shell $(PKG_CONFIG) --libs-only-l libtraceevent)
+ LIBTRACEEVENT_VERSION := $(shell $(PKG_CONFIG) --modversion libtraceevent).0.0
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index abcdc49b7a987f..272d3c70810e7d 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -815,7 +815,7 @@ static void display_histogram(int buckets[], bool use_nsec)
+
+ bar_len = buckets[0] * bar_total / total;
+ printf(" %4d - %-4d %s | %10d | %.*s%*s |\n",
+- 0, 1, "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
++ 0, 1, use_nsec ? "ns" : "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
+
+ for (i = 1; i < NUM_BUCKET - 1; i++) {
+ int start = (1 << (i - 1));
+diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
+index 65b8cba324be4b..c5331721dfee98 100644
+--- a/tools/perf/builtin-list.c
++++ b/tools/perf/builtin-list.c
+@@ -112,7 +112,7 @@ static void wordwrap(FILE *fp, const char *s, int start, int max, int corr)
+ }
+ }
+
+-static void default_print_event(void *ps, const char *pmu_name, const char *topic,
++static void default_print_event(void *ps, const char *topic, const char *pmu_name,
+ const char *event_name, const char *event_alias,
+ const char *scale_unit __maybe_unused,
+ bool deprecated, const char *event_type_desc,
+@@ -353,7 +353,7 @@ static void fix_escape_fprintf(FILE *fp, struct strbuf *buf, const char *fmt, ..
+ fputs(buf->buf, fp);
+ }
+
+-static void json_print_event(void *ps, const char *pmu_name, const char *topic,
++static void json_print_event(void *ps, const char *topic, const char *pmu_name,
+ const char *event_name, const char *event_alias,
+ const char *scale_unit,
+ bool deprecated, const char *event_type_desc,
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 689a3d43c2584f..4933efdfee76fb 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -716,15 +716,19 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ if (!cpu_map__is_dummy(evsel_list->core.user_requested_cpus)) {
+- if (affinity__setup(&saved_affinity) < 0)
+- return -1;
++ if (affinity__setup(&saved_affinity) < 0) {
++ err = -1;
++ goto err_out;
++ }
+ affinity = &saved_affinity;
+ }
+
+ evlist__for_each_entry(evsel_list, counter) {
+ counter->reset_group = false;
+- if (bpf_counter__load(counter, &target))
+- return -1;
++ if (bpf_counter__load(counter, &target)) {
++ err = -1;
++ goto err_out;
++ }
+ if (!(evsel__is_bperf(counter)))
+ all_counters_use_bpf = false;
+ }
+@@ -767,7 +771,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+
+ switch (stat_handle_error(counter)) {
+ case COUNTER_FATAL:
+- return -1;
++ err = -1;
++ goto err_out;
+ case COUNTER_RETRY:
+ goto try_again;
+ case COUNTER_SKIP:
+@@ -808,7 +813,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+
+ switch (stat_handle_error(counter)) {
+ case COUNTER_FATAL:
+- return -1;
++ err = -1;
++ goto err_out;
+ case COUNTER_RETRY:
+ goto try_again_reset;
+ case COUNTER_SKIP:
+@@ -821,6 +827,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+ }
+ affinity__cleanup(affinity);
++ affinity = NULL;
+
+ evlist__for_each_entry(evsel_list, counter) {
+ if (!counter->supported) {
+@@ -833,8 +840,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ stat_config.unit_width = l;
+
+ if (evsel__should_store_id(counter) &&
+- evsel__store_ids(counter, evsel_list))
+- return -1;
++ evsel__store_ids(counter, evsel_list)) {
++ err = -1;
++ goto err_out;
++ }
+ }
+
+ if (evlist__apply_filters(evsel_list, &counter, &target)) {
+@@ -855,20 +864,23 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ if (err < 0)
+- return err;
++ goto err_out;
+
+ err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list,
+ process_synthesized_event, is_pipe);
+ if (err < 0)
+- return err;
++ goto err_out;
++
+ }
+
+ if (target.initial_delay) {
+ pr_info(EVLIST_DISABLED_MSG);
+ } else {
+ err = enable_counters();
+- if (err)
+- return -1;
++ if (err) {
++ err = -1;
++ goto err_out;
++ }
+ }
+
+ /* Exec the command, if any */
+@@ -878,8 +890,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ if (target.initial_delay > 0) {
+ usleep(target.initial_delay * USEC_PER_MSEC);
+ err = enable_counters();
+- if (err)
+- return -1;
++ if (err) {
++ err = -1;
++ goto err_out;
++ }
+
+ pr_info(EVLIST_ENABLED_MSG);
+ }
+@@ -899,7 +913,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ if (workload_exec_errno) {
+ const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
+ pr_err("Workload failed: %s\n", emsg);
+- return -1;
++ err = -1;
++ goto err_out;
+ }
+
+ if (WIFSIGNALED(status))
+@@ -946,6 +961,13 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ evlist__close(evsel_list);
+
+ return WEXITSTATUS(status);
++
++err_out:
++ if (forks)
++ evlist__cancel_workload(evsel_list);
++
++ affinity__cleanup(affinity);
++ return err;
+ }
+
+ static int run_perf_stat(int argc, const char **argv, int run_idx)
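
Every hunk in the builtin-stat.c change above is the same transformation: a
mid-function "return -1" becomes "goto err_out", so the forked workload is
cancelled and the affinity mask released on all failure paths. A compilable
sketch of the single-exit idiom, with malloc()/free() as hypothetical
stand-ins for affinity__setup()/affinity__cleanup():

#include <stdio.h>
#include <stdlib.h>

static int step_fails(void) { return 1; }  /* simulated mid-function failure */

static int run(void)
{
        int err = -1;
        char *affinity = malloc(64);  /* stands in for affinity__setup() */

        if (!affinity)
                goto err_out;
        if (step_fails())
                goto err_out;  /* was "return -1", which leaked affinity */
        err = 0;
err_out:
        free(affinity);  /* stands in for affinity__cleanup(); NULL is safe */
        return err;
}

int main(void)
{
        return run() ? EXIT_FAILURE : EXIT_SUCCESS;
}
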
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index d3f11b90d0255c..ffa1295273099e 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -2702,6 +2702,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+ char msg[1024];
+ void *args, *augmented_args = NULL;
+ int augmented_args_size;
++ size_t printed = 0;
+
+ if (sc == NULL)
+ return -1;
+@@ -2717,8 +2718,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+
+ args = perf_evsel__sc_tp_ptr(evsel, args, sample);
+ augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size);
+- syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
+- fprintf(trace->output, "%s", msg);
++ printed += syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
++ fprintf(trace->output, "%.*s", (int)printed, msg);
+ err = 0;
+ out_put:
+ thread__put(thread);
+@@ -3087,7 +3088,7 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
+ printed += syscall_arg_fmt__scnprintf_val(arg, bf + printed, size - printed, &syscall_arg, val);
+ }
+
+- return printed + fprintf(trace->output, "%s", bf);
++ return printed + fprintf(trace->output, "%.*s", (int)printed, bf);
+ }
+
+ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
+@@ -3096,13 +3097,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
+ {
+ struct thread *thread;
+ int callchain_ret = 0;
+- /*
+- * Check if we called perf_evsel__disable(evsel) due to, for instance,
+- * this event's max_events having been hit and this is an entry coming
+- * from the ring buffer that we should discard, since the max events
+- * have already been considered/printed.
+- */
+- if (evsel->disabled)
++
++ if (evsel->nr_events_printed >= evsel->max_events)
+ return 0;
+
+ thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
+@@ -4326,6 +4322,9 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
+ sizeof(__u32), BPF_ANY);
+ }
+ }
++
++ if (trace->skel)
++ trace->filter_pids.map = trace->skel->maps.pids_filtered;
+ #endif
+ err = trace__set_filter_pids(trace);
+ if (err < 0)
+@@ -5449,6 +5448,10 @@ int cmd_trace(int argc, const char **argv)
+ if (trace.summary_only)
+ trace.summary = trace.summary_only;
+
++ /* Keep exited threads, otherwise information might be lost for summary */
++ if (trace.summary)
++ symbol_conf.keep_exited_threads = true;
++
+ if (output_name != NULL) {
+ err = trace__open_output(&trace, output_name);
+ if (err < 0) {
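
The two "%.*s" hunks above print only the byte count that
syscall__scnprintf_args() and friends actually reported, rather than trusting
whatever sits in the buffer beyond it. The precision argument bounds how many
bytes printf examines, so the idiom is safe even without a NUL terminator:

#include <stdio.h>
#include <string.h>

int main(void)
{
        char buf[8];
        int printed = 2;

        memset(buf, 'X', sizeof(buf));  /* stale bytes, no NUL terminator */
        buf[0] = 'o';
        buf[1] = 'k';

        printf("%.*s\n", printed, buf);  /* prints "ok" and stops there */
        return 0;
}
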
+diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
+index c592079982fbd8..873e9fb2041f02 100644
+--- a/tools/perf/pmu-events/empty-pmu-events.c
++++ b/tools/perf/pmu-events/empty-pmu-events.c
+@@ -380,7 +380,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table,
+ continue;
+
+ ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data);
+- if (pmu || ret)
++ if (ret)
+ return ret;
+ }
+ return 0;
+diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
+index bb0a5d92df4a15..d46a22fb5573de 100755
+--- a/tools/perf/pmu-events/jevents.py
++++ b/tools/perf/pmu-events/jevents.py
+@@ -930,7 +930,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table,
+ continue;
+
+ ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data);
+- if (pmu || ret)
++ if (ret)
+ return ret;
+ }
+ return 0;
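
The identical one-liner in empty-pmu-events.c and in the jevents.py template
that generates it drops the "pmu ||" test: with a non-NULL pmu filter, the old
condition returned after the first matching table even on success, so any
further matching tables were skipped. Reduced to its essence:

#include <stdio.h>

static int visit(int table)
{
        printf("visited table %d\n", table);
        return 0;
}

int main(void)
{
        void *pmu = (void *)1;  /* a non-NULL filter in the real code */

        for (int t = 0; t < 3; t++) {
                int ret = visit(t);

                if (ret)  /* was "if (pmu || ret)": stopped after table 0 */
                        return ret;
        }
        (void)pmu;
        return 0;
}
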
+diff --git a/tools/perf/tests/attr/test-stat-default b/tools/perf/tests/attr/test-stat-default
+index a1e2da0a9a6ddb..e47fb49446799b 100644
+--- a/tools/perf/tests/attr/test-stat-default
++++ b/tools/perf/tests/attr/test-stat-default
+@@ -88,98 +88,142 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-1 b/tools/perf/tests/attr/test-stat-detailed-1
+index 1c52cb05c900d7..3d500d3e0c5c8a 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-1
++++ b/tools/perf/tests/attr/test-stat-detailed-1
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-2 b/tools/perf/tests/attr/test-stat-detailed-2
+index 7e961d24a885a7..01777a63752fe6 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-2
++++ b/tools/perf/tests/attr/test-stat-detailed-2
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+@@ -230,8 +274,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event29:base-stat]
+-fd=29
++[event33:base-stat]
++fd=33
+ type=3
+ config=1
+ optional=1
+@@ -240,8 +284,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event30:base-stat]
+-fd=30
++[event34:base-stat]
++fd=34
+ type=3
+ config=65537
+ optional=1
+@@ -250,8 +294,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event31:base-stat]
+-fd=31
++[event35:base-stat]
++fd=35
+ type=3
+ config=3
+ optional=1
+@@ -260,8 +304,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event32:base-stat]
+-fd=32
++[event36:base-stat]
++fd=36
+ type=3
+ config=65539
+ optional=1
+@@ -270,8 +314,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event33:base-stat]
+-fd=33
++[event37:base-stat]
++fd=37
+ type=3
+ config=4
+ optional=1
+@@ -280,8 +324,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event34:base-stat]
+-fd=34
++[event38:base-stat]
++fd=38
+ type=3
+ config=65540
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-3 b/tools/perf/tests/attr/test-stat-detailed-3
+index e50535f45977c6..8400abd7e1e488 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-3
++++ b/tools/perf/tests/attr/test-stat-detailed-3
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+@@ -230,8 +274,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event29:base-stat]
+-fd=29
++[event33:base-stat]
++fd=33
+ type=3
+ config=1
+ optional=1
+@@ -240,8 +284,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event30:base-stat]
+-fd=30
++[event34:base-stat]
++fd=34
+ type=3
+ config=65537
+ optional=1
+@@ -250,8 +294,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event31:base-stat]
+-fd=31
++[event35:base-stat]
++fd=35
+ type=3
+ config=3
+ optional=1
+@@ -260,8 +304,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event32:base-stat]
+-fd=32
++[event36:base-stat]
++fd=36
+ type=3
+ config=65539
+ optional=1
+@@ -270,8 +314,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event33:base-stat]
+-fd=33
++[event37:base-stat]
++fd=37
+ type=3
+ config=4
+ optional=1
+@@ -280,8 +324,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event34:base-stat]
+-fd=34
++[event38:base-stat]
++fd=38
+ type=3
+ config=65540
+ optional=1
+@@ -290,8 +334,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event35:base-stat]
+-fd=35
++[event39:base-stat]
++fd=39
+ type=3
+ config=512
+ optional=1
+@@ -300,8 +344,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event36:base-stat]
+-fd=36
++[event40:base-stat]
++fd=40
+ type=3
+ config=66048
+ optional=1
+diff --git a/tools/perf/util/bpf-filter.c b/tools/perf/util/bpf-filter.c
+index e87b6789eb9ef3..a4fdf6911ec1c3 100644
+--- a/tools/perf/util/bpf-filter.c
++++ b/tools/perf/util/bpf-filter.c
+@@ -375,7 +375,7 @@ static int create_idx_hash(struct evsel *evsel, struct perf_bpf_filter_entry *en
+ pfi = zalloc(sizeof(*pfi));
+ if (pfi == NULL) {
+ pr_err("Cannot save pinned filter index\n");
+- goto err;
++ return -ENOMEM;
+ }
+
+ pfi->evsel = evsel;
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index 40f047baef8100..0bf9e5c27b599b 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -2490,12 +2490,6 @@ static void cs_etm__clear_all_traceid_queues(struct cs_etm_queue *etmq)
+
+ /* Ignore return value */
+ cs_etm__process_traceid_queue(etmq, tidq);
+-
+- /*
+- * Generate an instruction sample with the remaining
+- * branchstack entries.
+- */
+- cs_etm__flush(etmq, tidq);
+ }
+ }
+
+@@ -2638,7 +2632,7 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
+
+ while (1) {
+ if (!etm->heap.heap_cnt)
+- goto out;
++ break;
+
+ /* Take the entry at the top of the min heap */
+ cs_queue_nr = etm->heap.heap_array[0].queue_nr;
+@@ -2721,6 +2715,23 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
+ ret = auxtrace_heap__add(&etm->heap, cs_queue_nr, cs_timestamp);
+ }
+
++ for (i = 0; i < etm->queues.nr_queues; i++) {
++ struct int_node *inode;
++
++ etmq = etm->queues.queue_array[i].priv;
++ if (!etmq)
++ continue;
++
++ intlist__for_each_entry(inode, etmq->traceid_queues_list) {
++ int idx = (int)(intptr_t)inode->priv;
++
++ /* Flush any remaining branch stack entries */
++ tidq = etmq->traceid_queues[idx];
++ ret = cs_etm__end_block(etmq, tidq);
++ if (ret)
++ return ret;
++ }
++ }
+ out:
+ return ret;
+ }
+diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c
+index f05ba7739c1e91..648e8d87ef1945 100644
+--- a/tools/perf/util/disasm.c
++++ b/tools/perf/util/disasm.c
+@@ -1627,12 +1627,12 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ u64 start = map__rip_2objdump(map, sym->start);
+ u64 len;
+ u64 offset;
+- int i, count;
++ int i, count, free_count;
+ bool is_64bit = false;
+ bool needs_cs_close = false;
+ u8 *buf = NULL;
+ csh handle;
+- cs_insn *insn;
++ cs_insn *insn = NULL;
+ char disasm_buf[512];
+ struct disasm_line *dl;
+
+@@ -1664,7 +1664,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+
+ needs_cs_close = true;
+
+- count = cs_disasm(handle, buf, len, start, len, &insn);
++ free_count = count = cs_disasm(handle, buf, len, start, len, &insn);
+ for (i = 0, offset = 0; i < count; i++) {
+ int printed;
+
+@@ -1702,8 +1702,11 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ }
+
+ out:
+- if (needs_cs_close)
++ if (needs_cs_close) {
+ cs_close(&handle);
++ if (free_count > 0)
++ cs_free(insn, free_count);
++ }
+ free(buf);
+ return count < 0 ? count : 0;
+
+@@ -1717,7 +1720,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ */
+ list_for_each_entry_safe(dl, tmp, &notes->src->source, al.node) {
+ list_del(&dl->al.node);
+- free(dl);
++ disasm_line__free(dl);
+ }
+ }
+ count = -1;
+@@ -1782,7 +1785,7 @@ static int symbol__disassemble_raw(char *filename, struct symbol *sym,
+ sprintf(args->line, "%x", line[i]);
+ dl = disasm_line__new(args);
+ if (dl == NULL)
+- goto err;
++ break;
+
+ annotation_line__add(&dl->al, &notes->src->source);
+ offset += 4;
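
The capstone hunks above fix a leak: cs_disasm() heap-allocates the insn
array, and the function could take an early exit (or overwrite count with an
error code) without ever releasing it, hence the free_count snapshot and the
cs_free() next to cs_close(). Minimal correct usage of that API, assuming
capstone is installed (link with -lcapstone):

#include <inttypes.h>
#include <stdio.h>
#include <capstone/capstone.h>

int main(void)
{
        csh handle;
        cs_insn *insn = NULL;
        const uint8_t code[] = { 0x55, 0xc3 };  /* push %rbp; ret */
        size_t count, i;

        if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
                return 1;
        count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
        for (i = 0; i < count; i++)
                printf("0x%" PRIx64 ":\t%s\t%s\n", insn[i].address,
                       insn[i].mnemonic, insn[i].op_str);
        if (count > 0)
                cs_free(insn, count);  /* the release the patch adds */
        cs_close(&handle);
        return 0;
}
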
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index f14b7e6ff1dcc2..a9df84692d4a88 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -48,6 +48,7 @@
+ #include <sys/mman.h>
+ #include <sys/prctl.h>
+ #include <sys/timerfd.h>
++#include <sys/wait.h>
+
+ #include <linux/bitops.h>
+ #include <linux/hash.h>
+@@ -1484,6 +1485,8 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+ int child_ready_pipe[2], go_pipe[2];
+ char bf;
+
++ evlist->workload.cork_fd = -1;
++
+ if (pipe(child_ready_pipe) < 0) {
+ perror("failed to create 'ready' pipe");
+ return -1;
+@@ -1536,7 +1539,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+ * For cancelling the workload without actually running it,
+ * the parent will just close workload.cork_fd, without writing
+ * anything, i.e. read will return zero and we just exit()
+- * here.
++ * here (See evlist__cancel_workload()).
+ */
+ if (ret != 1) {
+ if (ret == -1)
+@@ -1600,7 +1603,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+
+ int evlist__start_workload(struct evlist *evlist)
+ {
+- if (evlist->workload.cork_fd > 0) {
++ if (evlist->workload.cork_fd >= 0) {
+ char bf = 0;
+ int ret;
+ /*
+@@ -1611,12 +1614,24 @@ int evlist__start_workload(struct evlist *evlist)
+ perror("unable to write to pipe");
+
+ close(evlist->workload.cork_fd);
++ evlist->workload.cork_fd = -1;
+ return ret;
+ }
+
+ return 0;
+ }
+
++void evlist__cancel_workload(struct evlist *evlist)
++{
++ int status;
++
++ if (evlist->workload.cork_fd >= 0) {
++ close(evlist->workload.cork_fd);
++ evlist->workload.cork_fd = -1;
++ waitpid(evlist->workload.pid, &status, WNOHANG);
++ }
++}
++
+ int evlist__parse_sample(struct evlist *evlist, union perf_event *event, struct perf_sample *sample)
+ {
+ struct evsel *evsel = evlist__event2evsel(evlist, event);
+diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
+index bcc1c6984bb58a..888fda751e1a6e 100644
+--- a/tools/perf/util/evlist.h
++++ b/tools/perf/util/evlist.h
+@@ -186,6 +186,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target,
+ const char *argv[], bool pipe_output,
+ void (*exec_error)(int signo, siginfo_t *info, void *ucontext));
+ int evlist__start_workload(struct evlist *evlist);
++void evlist__cancel_workload(struct evlist *evlist);
+
+ struct option;
+
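
The evlist changes hang together around one protocol: the forked workload
parks itself on a read() of the "go" pipe, a one-byte write starts it, and
merely closing the fd makes read() return 0 so the child exits, which is what
the new evlist__cancel_workload() exploits. That is also why cork_fd now uses
-1 rather than 0 as its "unset" value: 0 is a perfectly valid file descriptor.
A self-contained sketch of the protocol:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        int go_pipe[2];
        pid_t pid;
        char bf;

        if (pipe(go_pipe) < 0)
                return 1;

        pid = fork();
        if (pid == 0) {                 /* child: the corked workload */
                close(go_pipe[1]);
                if (read(go_pipe[0], &bf, 1) != 1)
                        _exit(1);       /* parent closed the pipe: cancelled */
                printf("workload started\n");
                _exit(0);
        }

        close(go_pipe[0]);
        close(go_pipe[1]);              /* cancel: close instead of write */
        waitpid(pid, NULL, 0);          /* reap, like evlist__cancel_workload() */
        return 0;
}
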
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index fad227b625d155..4f0ac998b0ccfd 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1343,7 +1343,7 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
+ * we need to update the symtab_type if needed.
+ */
+ if (m->comp && is_kmod_dso(dso)) {
+- dso__set_symtab_type(dso, dso__symtab_type(dso));
++ dso__set_symtab_type(dso, dso__symtab_type(dso)+1);
+ dso__set_comp(dso, m->comp);
+ }
+ map__put(map);
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index 051feb93ed8d40..bf5090f5220bbd 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -366,6 +366,12 @@ static const char * const mem_lvl[] = {
+ };
+
+ static const char * const mem_lvlnum[] = {
++ [PERF_MEM_LVLNUM_L1] = "L1",
++ [PERF_MEM_LVLNUM_L2] = "L2",
++ [PERF_MEM_LVLNUM_L3] = "L3",
++ [PERF_MEM_LVLNUM_L4] = "L4",
++ [PERF_MEM_LVLNUM_L2_MHB] = "L2 MHB",
++ [PERF_MEM_LVLNUM_MSC] = "Memory-side Cache",
+ [PERF_MEM_LVLNUM_UNC] = "Uncached",
+ [PERF_MEM_LVLNUM_CXL] = "CXL",
+ [PERF_MEM_LVLNUM_IO] = "I/O",
+@@ -448,7 +454,7 @@ int perf_mem__lvl_scnprintf(char *out, size_t sz, const struct mem_info *mem_inf
+ if (mem_lvlnum[lvl])
+ l += scnprintf(out + l, sz - l, mem_lvlnum[lvl]);
+ else
+- l += scnprintf(out + l, sz - l, "L%d", lvl);
++ l += scnprintf(out + l, sz - l, "Unknown level %d", lvl);
+
+ l += scnprintf(out + l, sz - l, " %s", hit_miss);
+ return l;
+diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c
+index 5ccfe4b64cdfe4..0dacc133ed3960 100644
+--- a/tools/perf/util/pfm.c
++++ b/tools/perf/util/pfm.c
+@@ -233,7 +233,7 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
+ }
+
+ if (is_libpfm_event_supported(name, cpus, threads)) {
+- print_cb->print_event(print_state, pinfo->name, topic,
++ print_cb->print_event(print_state, topic, pinfo->name,
+ name, info->equiv,
+ /*scale_unit=*/NULL,
+ /*deprecated=*/NULL, "PFM event",
+@@ -267,8 +267,8 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
+ continue;
+
+ print_cb->print_event(print_state,
+- pinfo->name,
+ topic,
++ pinfo->name,
+ name, /*alias=*/NULL,
+ /*scale_unit=*/NULL,
+ /*deprecated=*/NULL, "PFM event",
+diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
+index 52109af5f2f129..d7d67e09d759bb 100644
+--- a/tools/perf/util/pmus.c
++++ b/tools/perf/util/pmus.c
+@@ -494,8 +494,8 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p
+ goto free;
+
+ print_cb->print_event(print_state,
+- aliases[j].pmu_name,
+ aliases[j].topic,
++ aliases[j].pmu_name,
+ aliases[j].name,
+ aliases[j].alias,
+ aliases[j].scale_unit,
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 630e16c54ed5cb..a30f88ed030044 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -1379,6 +1379,10 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
+ if (ret >= 0 && tf.pf.skip_empty_arg)
+ ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+
++#if _ELFUTILS_PREREQ(0, 142)
++ dwarf_cfi_end(tf.pf.cfi_eh);
++#endif
++
+ if (ret < 0 || tf.ntevs == 0) {
+ for (i = 0; i < tf.ntevs; i++)
+ clear_probe_trace_event(&tf.tevs[i]);
+@@ -1583,8 +1587,21 @@ int debuginfo__find_probe_point(struct debuginfo *dbg, u64 addr,
+
+ /* Find a corresponding function (name, baseline and baseaddr) */
+ if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
+- /* Get function entry information */
+- func = basefunc = dwarf_diename(&spdie);
++ /*
++ * Get function entry information.
++ *
++ * As described in the document DWARF Debugging Information
++ * Format Version 5, section 2.22 Linkage Names, "mangled names,
++ * are used in various ways, ... to distinguish multiple
++ * entities that have the same name".
++ *
++ * Firstly try to get distinct linkage name, if fail then
++ * rollback to get associated name in DIE.
++ */
++ func = basefunc = die_get_linkage_name(&spdie);
++ if (!func)
++ func = basefunc = dwarf_diename(&spdie);
++
+ if (!func ||
+ die_entrypc(&spdie, &baseaddr) != 0 ||
+ dwarf_decl_line(&spdie, &baseline) != 0) {
+diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
+index 3add5ff516e12d..724db829b49e02 100644
+--- a/tools/perf/util/probe-finder.h
++++ b/tools/perf/util/probe-finder.h
+@@ -64,9 +64,9 @@ struct probe_finder {
+
+ /* For variable searching */
+ #if _ELFUTILS_PREREQ(0, 142)
+- /* Call Frame Information from .eh_frame */
++ /* Call Frame Information from .eh_frame. Owned by this struct. */
+ Dwarf_CFI *cfi_eh;
+- /* Call Frame Information from .debug_frame */
++ /* Call Frame Information from .debug_frame. Not owned. */
+ Dwarf_CFI *cfi_dbg;
+ #endif
+ Dwarf_Op *fb_ops; /* Frame base attribute */
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 089220aaa5c929..a5ebee8b23bbe3 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -5385,6 +5385,9 @@ static int parse_cpu_str(char *cpu_str, cpu_set_t *cpu_set, int cpu_set_size)
+ if (*next == '-') /* no negative cpu numbers */
+ return 1;
+
++ if (*next == '\0' || *next == '\n')
++ break;
++
+ start = strtoul(next, &next, 10);
+
+ if (start >= CPU_SUBSET_MAXCPUS)
+@@ -9781,7 +9784,7 @@ void cmdline(int argc, char **argv)
+ * Parse some options early, because they may make other options invalid,
+ * like adding the MSR counter with --add and at the same time using --no-msr.
+ */
+- while ((opt = getopt_long_only(argc, argv, "MPn:", long_options, &option_index)) != -1) {
++ while ((opt = getopt_long_only(argc, argv, "+MPn:", long_options, &option_index)) != -1) {
+ switch (opt) {
+ case 'M':
+ no_msr = 1;
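
The one-character turbostat change prefixes the optstring with '+', which
tells glibc getopt to stop at the first non-option argument instead of
permuting argv, presumably so options belonging to the command turbostat forks
are left for that command rather than swallowed here. A small demonstration:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int opt;

        /* Leading '+' disables argv permutation: for
         * "./a.out -M sleep -n 5", parsing stops at "sleep" and
         * "-n 5" is kept for the child command. */
        while ((opt = getopt(argc, argv, "+Mn:")) != -1)
                printf("parsed option -%c\n", opt);
        printf("command starts at argv[%d]\n", optind);
        return 0;
}
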
+diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c
+index f2d6007a2b983e..265654ec48b9fc 100644
+--- a/tools/testing/selftests/arm64/abi/hwcap.c
++++ b/tools/testing/selftests/arm64/abi/hwcap.c
+@@ -361,8 +361,8 @@ static void sveaes_sigill(void)
+
+ static void sveb16b16_sigill(void)
+ {
+- /* BFADD ZA.H[W0, 0], {Z0.H-Z1.H} */
+- asm volatile(".inst 0xC1E41C00" : : : );
++ /* BFADD Z0.H, Z0.H, Z0.H */
++ asm volatile(".inst 0x65000000" : : : );
+ }
+
+ static void svepmull_sigill(void)
+@@ -490,7 +490,7 @@ static const struct hwcap_data {
+ .name = "F8DP2",
+ .at_hwcap = AT_HWCAP2,
+ .hwcap_bit = HWCAP2_F8DP2,
+- .cpuinfo = "f8dp4",
++ .cpuinfo = "f8dp2",
+ .sigill_fn = f8dp2_sigill,
+ },
+ {
+diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+index 2b1425b92b6991..a3d1e23fe02aff 100644
+--- a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
++++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+@@ -65,7 +65,7 @@ static int check_single_included_tags(int mem_type, int mode)
+ ptr = mte_insert_tags(ptr, BUFFER_SIZE);
+ /* Check tag value */
+ if (MT_FETCH_TAG((uintptr_t)ptr) == tag) {
+- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
++ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%x\n",
+ MT_FETCH_TAG((uintptr_t)ptr),
+ MT_INCLUDE_VALID_TAG(tag));
+ result = KSFT_FAIL;
+@@ -97,7 +97,7 @@ static int check_multiple_included_tags(int mem_type, int mode)
+ ptr = mte_insert_tags(ptr, BUFFER_SIZE);
+ /* Check tag value */
+ if (MT_FETCH_TAG((uintptr_t)ptr) < tag) {
+- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
++ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%lx\n",
+ MT_FETCH_TAG((uintptr_t)ptr),
+ MT_INCLUDE_VALID_TAGS(excl_mask));
+ result = KSFT_FAIL;
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index 00ffd34c66d301..1120f5aa76550f 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -38,7 +38,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
+ if (cur_mte_cxt.trig_si_code == si->si_code)
+ cur_mte_cxt.fault_valid = true;
+ else
+- ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=$lx, fault addr=%lx\n",
++ ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=%llx, fault addr=%lx\n",
+ ((ucontext_t *)uc)->uc_mcontext.pc,
+ addr);
+ return;
+@@ -64,7 +64,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
+ exit(1);
+ }
+ } else if (signum == SIGBUS) {
+- ksft_print_msg("INFO: SIGBUS signal at pc=%lx, fault addr=%lx, si_code=%lx\n",
++ ksft_print_msg("INFO: SIGBUS signal at pc=%llx, fault addr=%lx, si_code=%x\n",
+ ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code);
+ if ((cur_mte_cxt.trig_range >= 0 &&
+ addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 75016962f79563..43a02931847854 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -10,6 +10,7 @@ TOOLSDIR := $(abspath ../../..)
+ LIBDIR := $(TOOLSDIR)/lib
+ BPFDIR := $(LIBDIR)/bpf
+ TOOLSINCDIR := $(TOOLSDIR)/include
++TOOLSARCHINCDIR := $(TOOLSDIR)/arch/$(SRCARCH)/include
+ BPFTOOLDIR := $(TOOLSDIR)/bpf/bpftool
+ APIDIR := $(TOOLSINCDIR)/uapi
+ ifneq ($(O),)
+@@ -44,7 +45,7 @@ CFLAGS += -g $(OPT_FLAGS) -rdynamic \
+ -Wall -Werror -fno-omit-frame-pointer \
+ $(GENFLAGS) $(SAN_CFLAGS) $(LIBELF_CFLAGS) \
+ -I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \
+- -I$(TOOLSINCDIR) -I$(APIDIR) -I$(OUTPUT)
++ -I$(TOOLSINCDIR) -I$(TOOLSARCHINCDIR) -I$(APIDIR) -I$(OUTPUT)
+ LDFLAGS += $(SAN_LDFLAGS)
+ LDLIBS += $(LIBELF_LIBS) -lz -lrt -lpthread
+
+diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
+index c72c16e1aff825..5764155b6d2518 100644
+--- a/tools/testing/selftests/bpf/network_helpers.h
++++ b/tools/testing/selftests/bpf/network_helpers.h
+@@ -1,6 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef __NETWORK_HELPERS_H
+ #define __NETWORK_HELPERS_H
++#include <arpa/inet.h>
+ #include <sys/socket.h>
+ #include <sys/types.h>
+ #include <linux/types.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+index 871d16cb95cfde..1a2f99596916fb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+@@ -5,6 +5,7 @@
+ #include <test_progs.h>
+ #include <pthread.h>
+ #include <network_helpers.h>
++#include <sys/sysinfo.h>
+
+ #include "timer_lockup.skel.h"
+
+@@ -52,6 +53,11 @@ void test_timer_lockup(void)
+ pthread_t thrds[2];
+ void *ret;
+
++ if (get_nprocs() < 2) {
++ test__skip();
++ return;
++ }
++
+ skel = timer_lockup__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load"))
+ return;
+diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+index 43f40c4fe241ac..1c8b678e2e9a39 100644
+--- a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
++++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+@@ -28,8 +28,8 @@ struct {
+ },
+ };
+
+-SEC(".data.A") struct bpf_spin_lock lockA;
+-SEC(".data.B") struct bpf_spin_lock lockB;
++static struct bpf_spin_lock lockA SEC(".data.A");
++static struct bpf_spin_lock lockB SEC(".data.B");
+
+ SEC("?tc")
+ int lock_id_kptr_preserve(void *ctx)
+diff --git a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+index bba3e37f749b86..5aaf2b065f86c2 100644
+--- a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
++++ b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+@@ -7,7 +7,11 @@
+ #include "bpf_misc.h"
+
+ SEC("tp_btf/bpf_testmod_test_nullable_bare")
+-__failure __msg("R1 invalid mem access 'trusted_ptr_or_null_'")
++/* This used to be a failure test, but raw_tp nullable arguments can now
++ * directly be dereferenced, whether they have nullable annotation or not,
++ * and don't need to be explicitly checked.
++ */
++__success
+ int BPF_PROG(handle_tp_btf_nullable_bare1, struct bpf_testmod_test_read_ctx *nullable_ctx)
+ {
+ return nullable_ctx->len;
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index c7a70e1a1085a5..fa829a7854f24c 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -20,11 +20,13 @@
+
+ #include "network_helpers.h"
+
++/* backtrace() and backtrace_symbols_fd() are glibc specific,
++ * use header file when glibc is available and provide stub
++ * implementations when another libc implementation is used.
++ */
+ #ifdef __GLIBC__
+ #include <execinfo.h> /* backtrace */
+-#endif
+-
+-/* Default backtrace funcs if missing at link */
++#else
+ __weak int backtrace(void **buffer, int size)
+ {
+ return 0;
+@@ -34,6 +36,7 @@ __weak void backtrace_symbols_fd(void *const *buffer, int size, int fd)
+ {
+ dprintf(fd, "<backtrace not supported>\n");
+ }
++#endif /*__GLIBC__ */
+
+ int env_verbosity = 0;
+
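
The test_progs.c hunk above inverts the fallback strategy: rather than always
defining __weak backtrace stubs and relying on glibc's strong symbols winning
at link time, the stubs are now compiled only when glibc (and thus
<execinfo.h>) is absent. The resulting pattern, self-contained:

#include <stdio.h>

#ifdef __GLIBC__
#include <execinfo.h>   /* backtrace(), backtrace_symbols_fd() */
#else
static int backtrace(void **buffer, int size) { return 0; }
static void backtrace_symbols_fd(void *const *buffer, int size, int fd)
{
        dprintf(fd, "<backtrace not supported>\n");
}
#endif

int main(void)
{
        void *frames[8];
        int n = backtrace(frames, 8);

        backtrace_symbols_fd(frames, n, 2 /* stderr */);
        return 0;
}
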
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 3e02d7267de8bb..61a747afcd05fb 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -56,6 +56,8 @@ static void running_handler(int a);
+ #define BPF_SOCKHASH_FILENAME "test_sockhash_kern.bpf.o"
+ #define CG_PATH "/sockmap"
+
++#define EDATAINTEGRITY 2001
++
+ /* global sockets */
+ int s1, s2, c1, c2, p1, p2;
+ int test_cnt;
+@@ -86,6 +88,10 @@ int ktls;
+ int peek_flag;
+ int skb_use_parser;
+ int txmsg_omit_skb_parser;
++int verify_push_start;
++int verify_push_len;
++int verify_pop_start;
++int verify_pop_len;
+
+ static const struct option long_options[] = {
+ {"help", no_argument, NULL, 'h' },
+@@ -418,16 +424,18 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt,
+ {
+ bool drop = opt->drop_expected;
+ unsigned char k = 0;
++ int i, j, fp;
+ FILE *file;
+- int i, fp;
+
+ file = tmpfile();
+ if (!file) {
+ perror("create file for sendpage");
+ return 1;
+ }
+- for (i = 0; i < iov_length * cnt; i++, k++)
+- fwrite(&k, sizeof(char), 1, file);
++ for (i = 0; i < cnt; i++, k = 0) {
++ for (j = 0; j < iov_length; j++, k++)
++ fwrite(&k, sizeof(char), 1, file);
++ }
+ fflush(file);
+ fseek(file, 0, SEEK_SET);
+
+@@ -510,42 +518,111 @@ static int msg_alloc_iov(struct msghdr *msg,
+ return -ENOMEM;
+ }
+
+-static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz)
++/* In push or pop test, we need to do some calculations for msg_verify_data */
++static void msg_verify_date_prep(void)
+ {
+- int i, j = 0, bytes_cnt = 0;
+- unsigned char k = 0;
++ int push_range_end = txmsg_start_push + txmsg_end_push - 1;
++ int pop_range_end = txmsg_start_pop + txmsg_pop - 1;
++
++ if (txmsg_end_push && txmsg_pop &&
++ txmsg_start_push <= pop_range_end && txmsg_start_pop <= push_range_end) {
++ /* The push range and the pop range overlap */
++ int overlap_len;
++
++ verify_push_start = txmsg_start_push;
++ verify_pop_start = txmsg_start_pop;
++ if (txmsg_start_push < txmsg_start_pop)
++ overlap_len = min(push_range_end - txmsg_start_pop + 1, txmsg_pop);
++ else
++ overlap_len = min(pop_range_end - txmsg_start_push + 1, txmsg_end_push);
++ verify_push_len = max(txmsg_end_push - overlap_len, 0);
++ verify_pop_len = max(txmsg_pop - overlap_len, 0);
++ } else {
++ /* Otherwise */
++ verify_push_start = txmsg_start_push;
++ verify_pop_start = txmsg_start_pop;
++ verify_push_len = txmsg_end_push;
++ verify_pop_len = txmsg_pop;
++ }
++}
++
++static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz,
++ unsigned char *k_p, int *bytes_cnt_p,
++ int *check_cnt_p, int *push_p)
++{
++ int bytes_cnt = *bytes_cnt_p, check_cnt = *check_cnt_p, push = *push_p;
++ unsigned char k = *k_p;
++ int i, j;
+
+- for (i = 0; i < msg->msg_iovlen; i++) {
++ for (i = 0, j = 0; i < msg->msg_iovlen && size; i++, j = 0) {
+ unsigned char *d = msg->msg_iov[i].iov_base;
+
+ /* Special case test for skb ingress + ktls */
+ if (i == 0 && txmsg_ktls_skb) {
+ if (msg->msg_iov[i].iov_len < 4)
+- return -EIO;
++ return -EDATAINTEGRITY;
+ if (memcmp(d, "PASS", 4) != 0) {
+ fprintf(stderr,
+ "detected skb data error with skb ingress update @iov[%i]:%i \"%02x %02x %02x %02x\" != \"PASS\"\n",
+ i, 0, d[0], d[1], d[2], d[3]);
+- return -EIO;
++ return -EDATAINTEGRITY;
+ }
+ j = 4; /* advance index past PASS header */
+ }
+
+ for (; j < msg->msg_iov[i].iov_len && size; j++) {
++ if (push > 0 &&
++ check_cnt == verify_push_start + verify_push_len - push) {
++ int skipped;
++revisit_push:
++ skipped = push;
++ if (j + push >= msg->msg_iov[i].iov_len)
++ skipped = msg->msg_iov[i].iov_len - j;
++ push -= skipped;
++ size -= skipped;
++ j += skipped - 1;
++ check_cnt += skipped;
++ continue;
++ }
++
++ if (verify_pop_len > 0 && check_cnt == verify_pop_start) {
++ bytes_cnt += verify_pop_len;
++ check_cnt += verify_pop_len;
++ k += verify_pop_len;
++
++ if (bytes_cnt == chunk_sz) {
++ k = 0;
++ bytes_cnt = 0;
++ check_cnt = 0;
++ push = verify_push_len;
++ }
++
++ if (push > 0 &&
++ check_cnt == verify_push_start + verify_push_len - push)
++ goto revisit_push;
++ }
++
+ if (d[j] != k++) {
+ fprintf(stderr,
+ "detected data corruption @iov[%i]:%i %02x != %02x, %02x ?= %02x\n",
+ i, j, d[j], k - 1, d[j+1], k);
+- return -EIO;
++ return -EDATAINTEGRITY;
+ }
+ bytes_cnt++;
++ check_cnt++;
+ if (bytes_cnt == chunk_sz) {
+ k = 0;
+ bytes_cnt = 0;
++ check_cnt = 0;
++ push = verify_push_len;
+ }
+ size--;
+ }
+ }
++ *k_p = k;
++ *bytes_cnt_p = bytes_cnt;
++ *check_cnt_p = check_cnt;
++ *push_p = push;
+ return 0;
+ }
+
+@@ -598,10 +675,14 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ }
+ clock_gettime(CLOCK_MONOTONIC, &s->end);
+ } else {
++ float total_bytes, txmsg_pop_total, txmsg_push_total;
+ int slct, recvp = 0, recv, max_fd = fd;
+- float total_bytes, txmsg_pop_total;
+ int fd_flags = O_NONBLOCK;
+ struct timeval timeout;
++ unsigned char k = 0;
++ int bytes_cnt = 0;
++ int check_cnt = 0;
++ int push = 0;
+ fd_set w;
+
+ fcntl(fd, fd_flags);
+@@ -615,12 +696,22 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ * This is really only useful for testing edge cases in code
+ * paths.
+ */
+- total_bytes = (float)iov_count * (float)iov_length * (float)cnt;
+- if (txmsg_apply)
++ total_bytes = (float)iov_length * (float)cnt;
++ if (!opt->sendpage)
++ total_bytes *= (float)iov_count;
++ if (txmsg_apply) {
++ txmsg_push_total = txmsg_end_push * (total_bytes / txmsg_apply);
+ txmsg_pop_total = txmsg_pop * (total_bytes / txmsg_apply);
+- else
++ } else {
++ txmsg_push_total = txmsg_end_push * cnt;
+ txmsg_pop_total = txmsg_pop * cnt;
++ }
++ total_bytes += txmsg_push_total;
+ total_bytes -= txmsg_pop_total;
++ if (data) {
++ msg_verify_date_prep();
++ push = verify_push_len;
++ }
+ err = clock_gettime(CLOCK_MONOTONIC, &s->start);
+ if (err < 0)
+ perror("recv start time");
+@@ -693,10 +784,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+
+ if (data) {
+ int chunk_sz = opt->sendpage ?
+- iov_length * cnt :
++ iov_length :
+ iov_length * iov_count;
+
+- errno = msg_verify_data(&msg, recv, chunk_sz);
++ errno = msg_verify_data(&msg, recv, chunk_sz, &k, &bytes_cnt,
++ &check_cnt, &push);
+ if (errno) {
+ perror("data verify msg failed");
+ goto out_errno;
+@@ -704,7 +796,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ if (recvp) {
+ errno = msg_verify_data(&msg_peek,
+ recvp,
+- chunk_sz);
++ chunk_sz,
++ &k,
++ &bytes_cnt,
++ &check_cnt,
++ &push);
+ if (errno) {
+ perror("data verify msg_peek failed");
+ goto out_errno;
+@@ -786,8 +882,6 @@ static int sendmsg_test(struct sockmap_options *opt)
+
+ rxpid = fork();
+ if (rxpid == 0) {
+- if (txmsg_pop || txmsg_start_pop)
+- iov_buf -= (txmsg_pop - txmsg_start_pop + 1);
+ if (opt->drop_expected || txmsg_ktls_skb_drop)
+ _exit(0);
+
+@@ -812,7 +906,7 @@ static int sendmsg_test(struct sockmap_options *opt)
+ s.bytes_sent, sent_Bps, sent_Bps/giga,
+ s.bytes_recvd, recvd_Bps, recvd_Bps/giga,
+ peek_flag ? "(peek_msg)" : "");
+- if (err && txmsg_cork)
++ if (err && err != -EDATAINTEGRITY && txmsg_cork)
+ err = 0;
+ exit(err ? 1 : 0);
+ } else if (rxpid == -1) {
+@@ -1456,8 +1550,8 @@ static void test_send_many(struct sockmap_options *opt, int cgrp)
+
+ static void test_send_large(struct sockmap_options *opt, int cgrp)
+ {
+- opt->iov_length = 256;
+- opt->iov_count = 1024;
++ opt->iov_length = 8192;
++ opt->iov_count = 32;
+ opt->rate = 2;
+ test_exec(cgrp, opt);
+ }
+@@ -1586,17 +1680,19 @@ static void test_txmsg_cork_hangs(int cgrp, struct sockmap_options *opt)
+ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
+ {
+ /* Test basic start/end */
++ txmsg_pass = 1;
+ txmsg_start = 1;
+ txmsg_end = 2;
+ test_send(opt, cgrp);
+
+ /* Test >4k pull */
++ txmsg_pass = 1;
+ txmsg_start = 4096;
+ txmsg_end = 9182;
+ test_send_large(opt, cgrp);
+
+ /* Test pull + redirect */
+- txmsg_redir = 0;
++ txmsg_redir = 1;
+ txmsg_start = 1;
+ txmsg_end = 2;
+ test_send(opt, cgrp);
+@@ -1618,12 +1714,16 @@ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
+
+ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ {
++ bool data = opt->data_test;
++
+ /* Test basic pop */
++ txmsg_pass = 1;
+ txmsg_start_pop = 1;
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
+
+ /* Test pop with >4k */
++ txmsg_pass = 1;
+ txmsg_start_pop = 4096;
+ txmsg_pop = 4096;
+ test_send_large(opt, cgrp);
+@@ -1634,6 +1734,12 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
+
++ /* TODO: Test for pop + cork should be different,
++ * - It makes the layout of the received data difficult
++ * - It makes it hard to calculate the total_bytes in the recvmsg
++ * Temporarily skip the data integrity test for this case now.
++ */
++ opt->data_test = false;
+ /* Test pop + cork */
+ txmsg_redir = 0;
+ txmsg_cork = 512;
+@@ -1647,16 +1753,21 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ txmsg_start_pop = 1;
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
++ opt->data_test = data;
+ }
+
+ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
+ {
++ bool data = opt->data_test;
++
+ /* Test basic push */
++ txmsg_pass = 1;
+ txmsg_start_push = 1;
+ txmsg_end_push = 1;
+ test_send(opt, cgrp);
+
+ /* Test push 4kB >4k */
++ txmsg_pass = 1;
+ txmsg_start_push = 4096;
+ txmsg_end_push = 4096;
+ test_send_large(opt, cgrp);
+@@ -1667,16 +1778,24 @@ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
+ txmsg_end_push = 2;
+ test_send_many(opt, cgrp);
+
++ /* TODO: Test for push + cork should be different,
++ * - It makes the layout of the received data difficult
++ * - It makes it hard to calculate the total_bytes in the recvmsg
++ * Temporarily skip the data integrity test for this case now.
++ */
++ opt->data_test = false;
+ /* Test push + cork */
+ txmsg_redir = 0;
+ txmsg_cork = 512;
+ txmsg_start_push = 1;
+ txmsg_end_push = 2;
+ test_send_many(opt, cgrp);
++ opt->data_test = data;
+ }
+
+ static void test_txmsg_push_pop(int cgrp, struct sockmap_options *opt)
+ {
++ txmsg_pass = 1;
+ txmsg_start_push = 1;
+ txmsg_end_push = 10;
+ txmsg_start_pop = 5;
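+
The accounting rework in the hunks above condenses to one formula: pushed bytes are added to and popped bytes subtracted from the expected total, scaled either per apply-window or per sent message. A minimal standalone sketch of that calculation — the parameter names only mirror the selftest globals, this is not the selftest itself:

    #include <stdio.h>

    /* Sketch of the expected-byte computation from the hunks above. */
    static long expected_total(long total_bytes, long cnt, long txmsg_apply,
                               long txmsg_end_push, long txmsg_pop)
    {
            long push_total, pop_total;

            if (txmsg_apply) {
                    /* push/pop fire once per apply window */
                    push_total = txmsg_end_push * (total_bytes / txmsg_apply);
                    pop_total  = txmsg_pop * (total_bytes / txmsg_apply);
            } else {
                    /* otherwise once per sent message */
                    push_total = txmsg_end_push * cnt;
                    pop_total  = txmsg_pop * cnt;
            }
            return total_bytes + push_total - pop_total;
    }

    int main(void)
    {
            /* 1024 one-byte sends, push 1 and pop 2 bytes per message */
            printf("expect %ld bytes\n", expected_total(1024, 1024, 0, 1, 2));
            return 0;
    }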
+diff --git a/tools/testing/selftests/bpf/uprobe_multi.c b/tools/testing/selftests/bpf/uprobe_multi.c
+index c7828b13e5ffd8..dd38dc68f63592 100644
+--- a/tools/testing/selftests/bpf/uprobe_multi.c
++++ b/tools/testing/selftests/bpf/uprobe_multi.c
+@@ -12,6 +12,10 @@
+ #define MADV_POPULATE_READ 22
+ #endif
+
++#ifndef MADV_PAGEOUT
++#define MADV_PAGEOUT 21
++#endif
++
+ int __attribute__((weak)) uprobe(void)
+ {
+ return 0;
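+
The MADV_PAGEOUT hunk follows the common selftest idiom for uapi constants that older system headers may lack: define the value locally when the header does not provide it, keeping the number in sync with the kernel's definition. A minimal sketch of the same idiom — the madvise() call here is illustrative, not part of the selftest:

    #include <stdlib.h>
    #include <sys/mman.h>

    #ifndef MADV_PAGEOUT
    #define MADV_PAGEOUT 21 /* matches include/uapi/asm-generic/mman-common.h */
    #endif

    int main(void)
    {
            size_t len = 4096;
            void *buf = aligned_alloc(4096, len);

            if (!buf)
                    return 1;
            /* ask the kernel to reclaim the range; result ignored here */
            madvise(buf, len, MADV_PAGEOUT);
            free(buf);
            return 0;
    }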
+diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+index 68801e1a9ec2d1..70f65eb320a7a7 100644
+--- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c
++++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+@@ -1026,7 +1026,7 @@ FIXTURE_SETUP(mount_setattr_idmapped)
+ "size=100000,mode=700"), 0);
+
+ ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
+- "size=100000,mode=700"), 0);
++ "size=2m,mode=700"), 0);
+
+ ASSERT_EQ(mkdir("/mnt/A", 0777), 0);
+
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 5e86f7a51b43c5..2c4b6e404a7c7f 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -97,6 +97,7 @@ TEST_PROGS += fdb_flush.sh
+ TEST_PROGS += fq_band_pktlimit.sh
+ TEST_PROGS += vlan_hw_filter.sh
+ TEST_PROGS += bpf_offload.py
++TEST_PROGS += ipv6_route_update_soft_lockup.sh
+
+ # YNL files, must be before "include ..lib.mk"
+ EXTRA_CLEAN += $(OUTPUT)/libynl.a
+diff --git a/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
+new file mode 100755
+index 00000000000000..a6b2b1f9c641c9
+--- /dev/null
++++ b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
+@@ -0,0 +1,262 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++#
++# Testing for potential kernel soft lockup during IPv6 routing table
++# refresh under heavy outgoing IPv6 traffic. If a kernel soft lockup
++# occurs, a kernel panic will be triggered to prevent associated issues.
++#
++#
++# Test Environment Layout
++#
++# ┌----------------┐ ┌----------------┐
++# | SOURCE_NS | | SINK_NS |
++# | NAMESPACE | | NAMESPACE |
++# |(iperf3 clients)| |(iperf3 servers)|
++# | | | |
++# | | | |
++# | ┌-----------| nexthops |---------┐ |
++# | |veth_source|<--------------------------------------->|veth_sink|<┐ |
++# | └-----------|2001:0DB8:1::0:1/96 2001:0DB8:1::1:1/96 |---------┘ | |
++# | | ^ 2001:0DB8:1::1:2/96 | | |
++# | | . . | fwd | |
++# | ┌---------┐ | . . | | |
++# | | IPv6 | | . . | V |
++# | | routing | | . 2001:0DB8:1::1:80/96| ┌-----┐ |
++# | | table | | . | | lo | |
++# | | nexthop | | . └--------┴-----┴-┘
++# | | update | | ............................> 2001:0DB8:2::1:1/128
++# | └-------- ┘ |
++# └----------------┘
++#
++# The test script sets up two network namespaces, source_ns and sink_ns,
++# connected via a veth link. Within source_ns, it continuously updates the
++# IPv6 routing table by flushing and inserting IPV6_NEXTHOP_ADDR_COUNT nexthop
++# IPs destined for SINK_LOOPBACK_IP_ADDR in sink_ns. This refresh occurs at a
++# rate of 1/ROUTING_TABLE_REFRESH_PERIOD per second for TEST_DURATION seconds.
++#
++# Simultaneously, multiple iperf3 clients within source_ns generate heavy
++# outgoing IPv6 traffic. Each client is assigned a unique port number starting
++# at 5000 and incrementing sequentially. Each client targets a unique iperf3
++# server running in sink_ns, connected to the SINK_LOOPBACK_IFACE interface
++# using the same port number.
++#
++# The number of iperf3 servers and clients is set to half of the total
++# available cores on each machine.
++#
++# NOTE: We have tested this script on machines with various CPU specifications,
++# ranging from lower to higher performance as listed below. The test script
++# effectively triggered a kernel soft lockup on machines running an unpatched
++# kernel in under a minute:
++#
++# - 1x Intel Xeon E-2278G 8-Core Processor @ 3.40GHz
++# - 1x Intel Xeon E-2378G Processor 8-Core @ 2.80GHz
++# - 1x AMD EPYC 7401P 24-Core Processor @ 2.00GHz
++# - 1x AMD EPYC 7402P 24-Core Processor @ 2.80GHz
++# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz
++# - 1x Ampere Altra Q80-30 80-Core Processor @ 3.00GHz
++# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz
++# - 2x Intel Xeon Silver 4214 24-Core Processor @ 2.20GHz
++# - 1x AMD EPYC 7502P 32-Core @ 2.50GHz
++# - 1x Intel Xeon Gold 6314U 32-Core Processor @ 2.30GHz
++# - 2x Intel Xeon Gold 6338 32-Core Processor @ 2.00GHz
++#
++# On less performant machines, you may need to increase the TEST_DURATION
++# parameter to enhance the likelihood of encountering a race condition leading
++# to a kernel soft lockup and avoid a false negative result.
++#
++# NOTE: The test may not produce the expected result in virtualized
++# environments (e.g., qemu) due to differences in timing and CPU handling,
++# which can affect the conditions needed to trigger a soft lockup.
++
++source lib.sh
++source net_helper.sh
++
++TEST_DURATION=300
++ROUTING_TABLE_REFRESH_PERIOD=0.01
++
++IPERF3_BITRATE="300m"
++
++
++IPV6_NEXTHOP_ADDR_COUNT="128"
++IPV6_NEXTHOP_ADDR_MASK="96"
++IPV6_NEXTHOP_PREFIX="2001:0DB8:1"
++
++
++SOURCE_TEST_IFACE="veth_source"
++SOURCE_TEST_IP_ADDR="2001:0DB8:1::0:1/96"
++
++SINK_TEST_IFACE="veth_sink"
++# ${SINK_TEST_IFACE} is populated with the following range of IPv6 addresses:
++# 2001:0DB8:1::1:1 to 2001:0DB8:1::1:${IPV6_NEXTHOP_ADDR_COUNT}
++SINK_LOOPBACK_IFACE="lo"
++SINK_LOOPBACK_IP_MASK="128"
++SINK_LOOPBACK_IP_ADDR="2001:0DB8:2::1:1"
++
++nexthop_ip_list=""
++termination_signal=""
++kernel_softlokup_panic_prev_val=""
++
++terminate_ns_processes_by_pattern() {
++ local ns=$1
++ local pattern=$2
++
++ for pid in $(ip netns pids ${ns}); do
++ [ -e /proc/$pid/cmdline ] && grep -qe "${pattern}" /proc/$pid/cmdline && kill -9 $pid
++ done
++}
++
++cleanup() {
++ echo "info: cleaning up namespaces and terminating all processes within them..."
++
++
++ # Terminate iperf3 instances running in the source_ns. To avoid race
++ # conditions, first iterate over the PIDs and terminate those
++ # associated with the bash shells running the
++ # `while true; do iperf3 -c ...; done` loops. In a second iteration,
++ # terminate the individual `iperf3 -c ...` instances.
++ terminate_ns_processes_by_pattern ${source_ns} while
++ terminate_ns_processes_by_pattern ${source_ns} iperf3
++
++ # Repeat the same process for sink_ns
++ terminate_ns_processes_by_pattern ${sink_ns} while
++ terminate_ns_processes_by_pattern ${sink_ns} iperf3
++
++ # Check if any iperf3 instances are still running. This could happen
++ # if a core has entered an infinite loop and the timeout for detecting
++ # the soft lockup has not expired, but either the test interval has
++ # already elapsed or the test was terminated manually (e.g., with ^C)
++ for pid in $(ip netns pids ${source_ns}); do
++ if [ -e /proc/$pid/cmdline ] && grep -qe 'iperf3' /proc/$pid/cmdline; then
++ echo "FAIL: unable to terminate some iperf3 instances. Soft lockup is underway. A kernel panic is on the way!"
++ exit ${ksft_fail}
++ fi
++ done
++
++ if [ "$termination_signal" == "SIGINT" ]; then
++ echo "SKIP: Termination due to ^C (SIGINT)"
++ elif [ "$termination_signal" == "SIGALRM" ]; then
++ echo "PASS: No kernel soft lockup occurred during this ${TEST_DURATION} second test"
++ fi
++
++ cleanup_ns ${source_ns} ${sink_ns}
++
++ sysctl -qw kernel.softlockup_panic=${kernel_softlokup_panic_prev_val}
++}
++
++setup_prepare() {
++ setup_ns source_ns sink_ns
++
++ ip -n ${source_ns} link add name ${SOURCE_TEST_IFACE} type veth peer name ${SINK_TEST_IFACE} netns ${sink_ns}
++
++ # Setting up the Source namespace
++ ip -n ${source_ns} addr add ${SOURCE_TEST_IP_ADDR} dev ${SOURCE_TEST_IFACE}
++ ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} qlen 10000
++ ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} up
++ ip netns exec ${source_ns} sysctl -qw net.ipv6.fib_multipath_hash_policy=1
++
++ # Setting up the Sink namespace
++ ip -n ${sink_ns} addr add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} dev ${SINK_LOOPBACK_IFACE}
++ ip -n ${sink_ns} link set dev ${SINK_LOOPBACK_IFACE} up
++ ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_LOOPBACK_IFACE}.forwarding=1
++
++ ip -n ${sink_ns} link set ${SINK_TEST_IFACE} up
++ ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_TEST_IFACE}.forwarding=1
++
++
++ # Populate nexthop IPv6 addresses on the test interface in the sink_ns
++ echo "info: populating ${IPV6_NEXTHOP_ADDR_COUNT} IPv6 addresses on the ${SINK_TEST_IFACE} interface ..."
++ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do
++ ip -n ${sink_ns} addr add ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" "${IP}")/${IPV6_NEXTHOP_ADDR_MASK} dev ${SINK_TEST_IFACE};
++ done
++
++ # Preparing list of nexthops
++ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do
++ nexthop_ip_list=$nexthop_ip_list" nexthop via ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" $IP) dev ${SOURCE_TEST_IFACE} weight 1"
++ done
++}
++
++
++test_soft_lockup_during_routing_table_refresh() {
++ # Start num_of_iperf_servers iperf3 servers in the sink_ns namespace,
++ # each listening on ports starting at 5001 and incrementing
++ # sequentially. Since iperf3 instances may terminate unexpectedly, a
++ # while loop is used to automatically restart them in such cases.
++ echo "info: starting ${num_of_iperf_servers} iperf3 servers in the sink_ns namespace ..."
++ for i in $(seq 1 ${num_of_iperf_servers}); do
++ cmd="iperf3 --bind ${SINK_LOOPBACK_IP_ADDR} -s -p $(printf '5%03d' ${i}) --rcv-timeout 200 &>/dev/null"
++ ip netns exec ${sink_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null
++ done
++
++ # Wait for the iperf3 servers to be ready
++ for i in $(seq ${num_of_iperf_servers}); do
++ port=$(printf '5%03d' ${i});
++ wait_local_port_listen ${sink_ns} ${port} tcp
++ done
++
++ # Continuously refresh the routing table in the background within
++ # the source_ns namespace
++ ip netns exec ${source_ns} bash -c "
++ while \$(ip netns list | grep -q ${source_ns}); do
++ ip -6 route add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} ${nexthop_ip_list};
++ sleep ${ROUTING_TABLE_REFRESH_PERIOD};
++ ip -6 route delete ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK};
++ done &"
++
++ # Start num_of_iperf_servers iperf3 clients in the source_ns namespace,
++ # each sending TCP traffic on sequential ports starting at 5001.
++ # Since iperf3 instances may terminate unexpectedly (e.g., if the route
++ # to the server is deleted in the background during a route refresh), a
++ # while loop is used to automatically restart them in such cases.
++ echo "info: starting ${num_of_iperf_servers} iperf3 clients in the source_ns namespace ..."
++ for i in $(seq 1 ${num_of_iperf_servers}); do
++ cmd="iperf3 -c ${SINK_LOOPBACK_IP_ADDR} -p $(printf '5%03d' ${i}) --length 64 --bitrate ${IPERF3_BITRATE} -t 0 --connect-timeout 150 &>/dev/null"
++ ip netns exec ${source_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null
++ done
++
++ echo "info: IPv6 routing table is being updated at the rate of $(echo "1/${ROUTING_TABLE_REFRESH_PERIOD}" | bc)/s for ${TEST_DURATION} seconds ..."
++ echo "info: A kernel soft lockup, if detected, results in a kernel panic!"
++
++ wait
++}
++
++# Make sure 'iperf3' is installed, skip the test otherwise
++if [ ! -x "$(command -v "iperf3")" ]; then
++ echo "SKIP: 'iperf3' is not installed. Skipping the test."
++ exit ${ksft_skip}
++fi
++
++# Determine the number of cores on the machine
++num_of_iperf_servers=$(( $(nproc)/2 ))
++
++# Check if we are running on a multi-core machine, skip the test otherwise
++if [ "${num_of_iperf_servers}" -eq 0 ]; then
++ echo "SKIP: This test is not valid on a single core machine!"
++ exit ${ksft_skip}
++fi
++
++# Since the kernel soft lockup we're testing causes at least one core to enter
++# an infinite loop, destabilizing the host and likely affecting subsequent
++# tests, we trigger a kernel panic instead of reporting a failure and
++# continuing
++kernel_softlokup_panic_prev_val=$(sysctl -n kernel.softlockup_panic)
++sysctl -qw kernel.softlockup_panic=1
++
++handle_sigint() {
++ termination_signal="SIGINT"
++ cleanup
++ exit ${ksft_skip}
++}
++
++handle_sigalrm() {
++ termination_signal="SIGALRM"
++ cleanup
++ exit ${ksft_pass}
++}
++
++trap handle_sigint SIGINT
++trap handle_sigalrm SIGALRM
++
++(sleep ${TEST_DURATION} && kill -s SIGALRM $$)&
++
++setup_prepare
++test_soft_lockup_during_routing_table_refresh
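+
Before starting the clients, the script blocks in wait_local_port_listen (from net_helper.sh) until each server socket is up. A rough C equivalent of that readiness gate, assuming a plain TCP connect probe rather than the helper's /proc scan:

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Retry a TCP connect to [addr]:port until it succeeds or we give
     * up; sketch only, the selftest helper works differently. */
    static int wait_port_listen6(const char *addr, int port, int retries)
    {
            struct sockaddr_in6 sa = {
                    .sin6_family = AF_INET6,
                    .sin6_port = htons(port),
            };

            if (inet_pton(AF_INET6, addr, &sa.sin6_addr) != 1)
                    return -1;
            while (retries-- > 0) {
                    int fd = socket(AF_INET6, SOCK_STREAM, 0);

                    if (fd < 0)
                            return -1;
                    if (!connect(fd, (struct sockaddr *)&sa, sizeof(sa))) {
                            close(fd);
                            return 0;
                    }
                    close(fd);
                    usleep(100 * 1000); /* 100 ms between probes */
            }
            return -1;
    }

    int main(void)
    {
            return wait_port_listen6("::1", 5001, 50) ? 1 : 0;
    }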
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+index 254ff03297f06c..5f827e10717d19 100644
+--- a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
++++ b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+@@ -43,6 +43,8 @@ static int build_cta_tuple_v4(struct nlmsghdr *nlh, int type,
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type,
+@@ -71,6 +73,8 @@ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type,
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int build_cta_proto(struct nlmsghdr *nlh)
+@@ -90,6 +94,8 @@ static int build_cta_proto(struct nlmsghdr *nlh)
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int conntrack_data_insert(struct mnl_socket *sock, struct nlmsghdr *nlh,
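+
All three hunks above patch the same class of bug: a function declared to return int could fall off its end without a return statement, leaving the caller's error check to read an indeterminate value (undefined behavior, and exactly what -Wreturn-type flags). A minimal sketch of the defect and the fix, with hypothetical names:

    /* Before (sketch): declared int, but the success path never
     * returns a value, so the caller reads garbage.
     *
     *      static int build_attrs(...)
     *      {
     *              ...append netlink attributes...
     *      }                       // falls off the end
     *
     * After: every path returns explicitly, as in the hunks above. */
    static int build_attrs_fixed(void)
    {
            /* ...attribute construction would go here... */
            return 0;
    }

    int main(void)
    {
            return build_attrs_fixed();
    }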
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 569bce8b6383ee..6c651c880fe83d 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -2056,7 +2056,7 @@ check_running() {
+ pid=${1}
+ cmd=${2}
+
+- [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "{cmd}" ]
++ [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "${cmd}" ]
+ }
+
+ test_cleanup_vxlanX_exception() {
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index ae120f1735c0bc..34e5df721430ee 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -127,7 +127,7 @@ unsigned char *alloc_buffer(size_t buf_size, int memflush)
+ {
+ void *buf = NULL;
+ uint64_t *p64;
+- size_t s64;
++ ssize_t s64;
+ int ret;
+
+ ret = posix_memalign(&buf, PAGE_SIZE, buf_size);
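+
The size_t to ssize_t change above matters because the fill loop counts s64 down by the amount consumed and exits once it is no longer positive; with an unsigned type the subtraction wraps to a huge value and the loop never terminates. A compact demonstration of the failure mode, with a hypothetical chunk size:

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
            size_t  u = 10; /* unsigned: 10 - 16 wraps to SIZE_MAX - 5 */
            ssize_t s = 10; /* signed:   10 - 16 == -6, loop can stop  */

            u -= 16;
            s -= 16;
            printf("unsigned still \"positive\": %d\n", u > 0); /* 1 */
            printf("signed went negative:       %d\n", s < 0);  /* 1 */
            return 0;
    }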
+diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
+index 6b5a3b52d861b8..cf08ba5e314e2a 100644
+--- a/tools/testing/selftests/resctrl/mbm_test.c
++++ b/tools/testing/selftests/resctrl/mbm_test.c
+@@ -40,7 +40,8 @@ show_bw_info(unsigned long *bw_imc, unsigned long *bw_resc, size_t span)
+ ksft_print_msg("%s Check MBM diff within %d%%\n",
+ ret ? "Fail:" : "Pass:", MAX_DIFF_PERCENT);
+ ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per);
+- ksft_print_msg("Span (MB): %zu\n", span / MB);
++ if (span)
++ ksft_print_msg("Span (MB): %zu\n", span / MB);
+ ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc);
+ ksft_print_msg("avg_bw_resc: %lu\n", avg_bw_resc);
+
+@@ -138,15 +139,26 @@ static int mbm_run_test(const struct resctrl_test *test, const struct user_param
+ .setup = mbm_setup,
+ .measure = mbm_measure,
+ };
++ char *endptr = NULL;
++ size_t span = 0;
+ int ret;
+
+ remove(RESULT_FILE_NAME);
+
++ if (uparams->benchmark_cmd[0] && strcmp(uparams->benchmark_cmd[0], "fill_buf") == 0) {
++ if (uparams->benchmark_cmd[1] && *uparams->benchmark_cmd[1] != '\0') {
++ errno = 0;
++ span = strtoul(uparams->benchmark_cmd[1], &endptr, 10);
++ if (errno || *endptr != '\0')
++ return -EINVAL;
++ }
++ }
++
+ ret = resctrl_val(test, uparams, uparams->benchmark_cmd, ¶m);
+ if (ret)
+ return ret;
+
+- ret = check_results(DEFAULT_SPAN);
++ ret = check_results(span);
+ if (ret && (get_vendor() == ARCH_INTEL))
+ ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
+
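+
The argument parsing added above is the standard strtoul() validation recipe: clear errno before the call, then reject the input if errno was set (range error) or if endptr shows the string was not fully consumed. The same recipe as a standalone sketch:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a decimal byte count; 0 on success, -EINVAL otherwise. */
    static int parse_span(const char *arg, unsigned long *out)
    {
            char *endptr = NULL;
            unsigned long val;

            errno = 0;
            val = strtoul(arg, &endptr, 10);
            if (errno || endptr == arg || *endptr != '\0')
                    return -EINVAL;
            *out = val;
            return 0;
    }

    int main(int argc, char **argv)
    {
            unsigned long span;

            if (argc > 1 && !parse_span(argv[1], &span))
                    printf("span = %lu\n", span);
            return 0;
    }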
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index 8c275f6b4dd777..f118f659e89600 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -83,13 +83,12 @@ void get_event_and_umask(char *cas_count_cfg, int count, bool op)
+ char *token[MAX_TOKENS];
+ int i = 0;
+
+- strcat(cas_count_cfg, ",");
+ token[0] = strtok(cas_count_cfg, "=,");
+
+ for (i = 1; i < MAX_TOKENS; i++)
+ token[i] = strtok(NULL, "=,");
+
+- for (i = 0; i < MAX_TOKENS; i++) {
++ for (i = 0; i < MAX_TOKENS - 1; i++) {
+ if (!token[i])
+ break;
+ if (strcmp(token[i], "event") == 0) {
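+
The new loop bound above exists because each "event"/"umask" key consumes its value from token[i + 1]; walking i all the way to MAX_TOKENS would read one slot past the array on a fully populated input. A standalone sketch of the bounded key/value walk, with a made-up config string:

    #include <stdio.h>
    #include <string.h>

    #define MAX_TOKENS 5

    int main(void)
    {
            char cfg[] = "event=0x04,umask=0x03";
            char *token[MAX_TOKENS];
            int i;

            token[0] = strtok(cfg, "=,");
            for (i = 1; i < MAX_TOKENS; i++)
                    token[i] = strtok(NULL, "=,");

            /* stop one short: each key peeks at token[i + 1] */
            for (i = 0; i < MAX_TOKENS - 1; i++) {
                    if (!token[i])
                            break;
                    if (!strcmp(token[i], "event") ||
                        !strcmp(token[i], "umask"))
                            printf("%s -> %s\n", token[i], token[i + 1]);
            }
            return 0;
    }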
+diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
+index 7dd5668ea8a6e3..28f35620c49919 100644
+--- a/tools/testing/selftests/vDSO/parse_vdso.c
++++ b/tools/testing/selftests/vDSO/parse_vdso.c
+@@ -222,8 +222,7 @@ void *vdso_sym(const char *version, const char *name)
+ ELF(Sym) *sym = &vdso_info.symtab[chain];
+
+ /* Check for a defined global or weak function w/ right name. */
+- if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC &&
+- ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE)
++ if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC)
+ continue;
+ if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL &&
+ ELF64_ST_BIND(sym->st_info) != STB_WEAK)
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 405ff262ca93d4..55500f901fbc36 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -332,6 +332,7 @@ waitiface $netns1 vethc
+ waitiface $netns2 veths
+
+ n0 bash -c 'printf 1 > /proc/sys/net/ipv4/ip_forward'
++[[ -e /proc/sys/net/netfilter/nf_conntrack_udp_timeout ]] || modprobe nf_conntrack
+ n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout'
+ n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout_stream'
+ n0 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 10.0.0.0/24 -j SNAT --to 10.0.0.1
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index a3907c390d67a5..829511a712224f 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1064,7 +1064,7 @@ timerlat_hist_apply_config(struct osnoise_tool *tool, struct timerlat_hist_param
+ * If the user did not specify a type of thread, try user-threads first.
+ * Fall back to kernel threads otherwise.
+ */
+- if (!params->kernel_workload && !params->user_workload) {
++ if (!params->kernel_workload && !params->user_hist) {
+ retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd");
+ if (retval) {
+ debug_msg("User-space interface detected, setting user-threads\n");
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 210b0f533534ab..3b62519a412fc9 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -830,7 +830,7 @@ timerlat_top_apply_config(struct osnoise_tool *top, struct timerlat_top_params *
+ * If the user did not specify a type of thread, try user-threads first.
+ * Fall back to kernel threads otherwise.
+ */
+- if (!params->kernel_workload && !params->user_workload) {
++ if (!params->kernel_workload && !params->user_top) {
+ retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd");
+ if (retval) {
+ debug_msg("User-space interface detected, setting user-threads\n");
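+
Both rtla fixes above repair the same probe-then-fallback logic: prefer the user-space timerlat interface when its tracefs file exists, and fall back to kernel threads otherwise — but only when the user requested neither explicitly. A reduced sketch of that detection step, assuming the default tracefs mount point rather than querying it as rtla does via libtracefs:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            const char *fd_path =
                    "/sys/kernel/tracing/osnoise/per_cpu/cpu0/timerlat_fd";
            int user_threads = access(fd_path, F_OK) == 0;

            printf("timerlat will use %s threads\n",
                   user_threads ? "user-space" : "kernel");
            return 0;
    }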
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 6ca7a1045bbb75..279e03029ce149 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -6561,106 +6561,3 @@ void kvm_exit(void)
+ kvm_irqfd_exit();
+ }
+ EXPORT_SYMBOL_GPL(kvm_exit);
+-
+-struct kvm_vm_worker_thread_context {
+- struct kvm *kvm;
+- struct task_struct *parent;
+- struct completion init_done;
+- kvm_vm_thread_fn_t thread_fn;
+- uintptr_t data;
+- int err;
+-};
+-
+-static int kvm_vm_worker_thread(void *context)
+-{
+- /*
+- * The init_context is allocated on the stack of the parent thread, so
+- * we have to locally copy anything that is needed beyond initialization
+- */
+- struct kvm_vm_worker_thread_context *init_context = context;
+- struct task_struct *parent;
+- struct kvm *kvm = init_context->kvm;
+- kvm_vm_thread_fn_t thread_fn = init_context->thread_fn;
+- uintptr_t data = init_context->data;
+- int err;
+-
+- err = kthread_park(current);
+- /* kthread_park(current) is never supposed to return an error */
+- WARN_ON(err != 0);
+- if (err)
+- goto init_complete;
+-
+- err = cgroup_attach_task_all(init_context->parent, current);
+- if (err) {
+- kvm_err("%s: cgroup_attach_task_all failed with err %d\n",
+- __func__, err);
+- goto init_complete;
+- }
+-
+- set_user_nice(current, task_nice(init_context->parent));
+-
+-init_complete:
+- init_context->err = err;
+- complete(&init_context->init_done);
+- init_context = NULL;
+-
+- if (err)
+- goto out;
+-
+- /* Wait to be woken up by the spawner before proceeding. */
+- kthread_parkme();
+-
+- if (!kthread_should_stop())
+- err = thread_fn(kvm, data);
+-
+-out:
+- /*
+- * Move kthread back to its original cgroup to prevent it lingering in
+- * the cgroup of the VM process, after the latter finishes its
+- * execution.
+- *
+- * kthread_stop() waits on the 'exited' completion condition which is
+- * set in exit_mm(), via mm_release(), in do_exit(). However, the
+- * kthread is removed from the cgroup in the cgroup_exit() which is
+- * called after the exit_mm(). This causes the kthread_stop() to return
+- * before the kthread actually quits the cgroup.
+- */
+- rcu_read_lock();
+- parent = rcu_dereference(current->real_parent);
+- get_task_struct(parent);
+- rcu_read_unlock();
+- cgroup_attach_task_all(parent, current);
+- put_task_struct(parent);
+-
+- return err;
+-}
+-
+-int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
+- uintptr_t data, const char *name,
+- struct task_struct **thread_ptr)
+-{
+- struct kvm_vm_worker_thread_context init_context = {};
+- struct task_struct *thread;
+-
+- *thread_ptr = NULL;
+- init_context.kvm = kvm;
+- init_context.parent = current;
+- init_context.thread_fn = thread_fn;
+- init_context.data = data;
+- init_completion(&init_context.init_done);
+-
+- thread = kthread_run(kvm_vm_worker_thread, &init_context,
+- "%s-%d", name, task_pid_nr(current));
+- if (IS_ERR(thread))
+- return PTR_ERR(thread);
+-
+- /* kthread_run is never supposed to return NULL */
+- WARN_ON(thread == NULL);
+-
+- wait_for_completion(&init_context.init_done);
+-
+- if (!init_context.err)
+- *thread_ptr = thread;
+-
+- return init_context.err;
+-}
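+
The helper removed above is a textbook example of the stack-allocated init-context handshake: the spawner blocks on a completion while the new thread copies everything it needs out of the spawner's stack frame, and only then signals that initialization is done. A userspace analogue of the same handshake, assuming pthreads in place of kthread_run() and the kernel completion API:

    #include <pthread.h>
    #include <stdio.h>

    /* The context lives on the spawner's stack, so the worker must
     * copy out what it needs before signalling init_done (cf. the
     * comment in the removed kvm_vm_worker_thread()). */
    struct init_ctx {
            int data;       /* consumed by the worker */
            int err;        /* reported back to the spawner */
            int done;
            pthread_mutex_t lock;
            pthread_cond_t init_done;
    };

    static void *worker(void *arg)
    {
            struct init_ctx *ctx = arg;
            int data = ctx->data;   /* local copy; ctx dies after signal */

            pthread_mutex_lock(&ctx->lock);
            ctx->err = 0;
            ctx->done = 1;
            pthread_cond_signal(&ctx->init_done);
            pthread_mutex_unlock(&ctx->lock);

            /* only the local copy may be used from here on */
            printf("worker running with data=%d\n", data);
            return NULL;
    }

    int main(void)
    {
            struct init_ctx ctx = {
                    .data = 42,
                    .lock = PTHREAD_MUTEX_INITIALIZER,
                    .init_done = PTHREAD_COND_INITIALIZER,
            };
            pthread_t t;

            if (pthread_create(&t, NULL, worker, &ctx))
                    return 1;
            pthread_mutex_lock(&ctx.lock);
            while (!ctx.done)
                    pthread_cond_wait(&ctx.init_done, &ctx.lock);
            pthread_mutex_unlock(&ctx.lock);
            /* ctx could now go out of scope as far as the worker cares */
            pthread_join(t, NULL);
            return ctx.err;
    }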
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-05 20:05 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-05 20:05 UTC (permalink / raw
To: gentoo-commits
commit: 2fcc7a615b8b2de79d0b1b3ce13cb5430b8c80d4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 5 20:05:18 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 5 20:05:18 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2fcc7a61
sched: Initialize idle tasks only once
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
1800_sched-init-idle-tasks-only-once.patch | 80 ++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+)
diff --git a/0000_README b/0000_README
index ac1104a1..f7334645 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+Patch: 1800_sched-init-idle-tasks-only-once.patch
+From: https://git.kernel.org/
+Desc: sched: Initialize idle tasks only once
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1800_sched-init-idle-tasks-only-once.patch b/1800_sched-init-idle-tasks-only-once.patch
new file mode 100644
index 00000000..013a45fc
--- /dev/null
+++ b/1800_sched-init-idle-tasks-only-once.patch
@@ -0,0 +1,80 @@
+From b23decf8ac9102fc52c4de5196f4dc0a5f3eb80b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Mon, 28 Oct 2024 11:43:42 +0100
+Subject: sched: Initialize idle tasks only once
+
+Idle tasks are initialized via __sched_fork() twice:
+
+ fork_idle()
+ copy_process()
+ sched_fork()
+ __sched_fork()
+ init_idle()
+ __sched_fork()
+
+Instead of cleaning this up, sched_ext hacked around it. Even when analysis
+and solution were provided in a discussion, nobody cared to clean this up.
+
+init_idle() is also invoked from sched_init() to initialize the boot CPU's
+idle task, which requires the __sched_fork() invocation. But this can be
+trivially solved by invoking __sched_fork() before init_idle() in
+sched_init() and removing the __sched_fork() invocation from init_idle().
+
+Do so and clean up the comments explaining this historical leftover.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://lore.kernel.org/r/20241028103142.359584747@linutronix.de
+---
+ kernel/sched/core.c | 12 +++++-------
+ 1 file changed, 5 insertions(+), 7 deletions(-)
+
+(limited to 'kernel/sched/core.c')
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c57a79e3491103..aad48850c1ef0d 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4423,7 +4423,8 @@ int wake_up_state(struct task_struct *p, unsigned int state)
+ * Perform scheduler related setup for a newly forked process p.
+ * p is forked by current.
+ *
+- * __sched_fork() is basic setup used by init_idle() too:
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
+ */
+ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+@@ -7697,8 +7698,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
+- __sched_fork(0, idle);
+-
+ raw_spin_lock_irqsave(&idle->pi_lock, flags);
+ raw_spin_rq_lock(rq);
+
+@@ -7713,10 +7712,8 @@ void __init init_idle(struct task_struct *idle, int cpu)
+
+ #ifdef CONFIG_SMP
+ /*
+- * It's possible that init_idle() gets called multiple times on a task,
+- * in that case do_set_cpus_allowed() will not do the right thing.
+- *
+- * And since this is boot we can forgo the serialization.
++ * No validation and serialization required at boot time and for
++ * setting up the idle tasks of not yet online CPUs.
+ */
+ set_cpus_allowed_common(idle, &ac);
+ #endif
+@@ -8561,6 +8558,7 @@ void __init sched_init(void)
+ * but because we are the idle thread, we just pick up running again
+ * when this runqueue becomes "idle".
+ */
++ __sched_fork(0, current);
+ init_idle(current, smp_processor_id());
+
+ calc_load_update = jiffies + LOAD_FREQ;
+--
+cgit 1.2.3-korg
+
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-06 12:44 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-06 12:44 UTC (permalink / raw
To: gentoo-commits
commit: 7ff281e950aa65bc7416b43f48eeb7cabcbb7195
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 6 12:43:43 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 6 12:43:43 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7ff281e9
Linux patch 6.12.3, remove redundant patch
Removed:
1800_sched-init-idle-tasks-only-once.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +--
1002_linux-6.12.3.patch | 57 +++++++++++++++++++++
1800_sched-init-idle-tasks-only-once.patch | 80 ------------------------------
3 files changed, 61 insertions(+), 84 deletions(-)
diff --git a/0000_README b/0000_README
index f7334645..c7f77bd5 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-6.12.2.patch
From: https://www.kernel.org
Desc: Linux 6.12.2
+Patch: 1002_linux-6.12.3.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.3
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
@@ -63,10 +67,6 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
-Patch: 1800_sched-init-idle-tasks-only-once.patch
-From: https://git.kernel.org/
-Desc: sched: Initialize idle tasks only once
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1002_linux-6.12.3.patch b/1002_linux-6.12.3.patch
new file mode 100644
index 00000000..2e07970b
--- /dev/null
+++ b/1002_linux-6.12.3.patch
@@ -0,0 +1,57 @@
+diff --git a/Makefile b/Makefile
+index da6e99309a4da4..e81030ec683143 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a1c353a62c5684..76b27b2a9c56ad 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4424,7 +4424,8 @@ int wake_up_state(struct task_struct *p, unsigned int state)
+ * Perform scheduler related setup for a newly forked process p.
+ * p is forked by current.
+ *
+- * __sched_fork() is basic setup used by init_idle() too:
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
+ */
+ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+@@ -7683,8 +7684,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
+- __sched_fork(0, idle);
+-
+ raw_spin_lock_irqsave(&idle->pi_lock, flags);
+ raw_spin_rq_lock(rq);
+
+@@ -7699,10 +7698,8 @@ void __init init_idle(struct task_struct *idle, int cpu)
+
+ #ifdef CONFIG_SMP
+ /*
+- * It's possible that init_idle() gets called multiple times on a task,
+- * in that case do_set_cpus_allowed() will not do the right thing.
+- *
+- * And since this is boot we can forgo the serialization.
++ * No validation and serialization required at boot time and for
++ * setting up the idle tasks of not yet online CPUs.
+ */
+ set_cpus_allowed_common(idle, &ac);
+ #endif
+@@ -8546,6 +8543,7 @@ void __init sched_init(void)
+ * but because we are the idle thread, we just pick up running again
+ * when this runqueue becomes "idle".
+ */
++ __sched_fork(0, current);
+ init_idle(current, smp_processor_id());
+
+ calc_load_update = jiffies + LOAD_FREQ;
diff --git a/1800_sched-init-idle-tasks-only-once.patch b/1800_sched-init-idle-tasks-only-once.patch
deleted file mode 100644
index 013a45fc..00000000
--- a/1800_sched-init-idle-tasks-only-once.patch
+++ /dev/null
@@ -1,80 +0,0 @@
-From b23decf8ac9102fc52c4de5196f4dc0a5f3eb80b Mon Sep 17 00:00:00 2001
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 28 Oct 2024 11:43:42 +0100
-Subject: sched: Initialize idle tasks only once
-
-Idle tasks are initialized via __sched_fork() twice:
-
- fork_idle()
- copy_process()
- sched_fork()
- __sched_fork()
- init_idle()
- __sched_fork()
-
-Instead of cleaning this up, sched_ext hacked around it. Even when analysis
-and solution were provided in a discussion, nobody cared to clean this up.
-
-init_idle() is also invoked from sched_init() to initialize the boot CPU's
-idle task, which requires the __sched_fork() invocation. But this can be
-trivially solved by invoking __sched_fork() before init_idle() in
-sched_init() and removing the __sched_fork() invocation from init_idle().
-
-Do so and clean up the comments explaining this historical leftover.
-
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-Link: https://lore.kernel.org/r/20241028103142.359584747@linutronix.de
----
- kernel/sched/core.c | 12 +++++-------
- 1 file changed, 5 insertions(+), 7 deletions(-)
-
-(limited to 'kernel/sched/core.c')
-
-diff --git a/kernel/sched/core.c b/kernel/sched/core.c
-index c57a79e3491103..aad48850c1ef0d 100644
---- a/kernel/sched/core.c
-+++ b/kernel/sched/core.c
-@@ -4423,7 +4423,8 @@ int wake_up_state(struct task_struct *p, unsigned int state)
- * Perform scheduler related setup for a newly forked process p.
- * p is forked by current.
- *
-- * __sched_fork() is basic setup used by init_idle() too:
-+ * __sched_fork() is basic setup which is also used by sched_init() to
-+ * initialize the boot CPU's idle task.
- */
- static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
- {
-@@ -7697,8 +7698,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
- struct rq *rq = cpu_rq(cpu);
- unsigned long flags;
-
-- __sched_fork(0, idle);
--
- raw_spin_lock_irqsave(&idle->pi_lock, flags);
- raw_spin_rq_lock(rq);
-
-@@ -7713,10 +7712,8 @@ void __init init_idle(struct task_struct *idle, int cpu)
-
- #ifdef CONFIG_SMP
- /*
-- * It's possible that init_idle() gets called multiple times on a task,
-- * in that case do_set_cpus_allowed() will not do the right thing.
-- *
-- * And since this is boot we can forgo the serialization.
-+ * No validation and serialization required at boot time and for
-+ * setting up the idle tasks of not yet online CPUs.
- */
- set_cpus_allowed_common(idle, &ac);
- #endif
-@@ -8561,6 +8558,7 @@ void __init sched_init(void)
- * but because we are the idle thread, we just pick up running again
- * when this runqueue becomes "idle".
- */
-+ __sched_fork(0, current);
- init_idle(current, smp_processor_id());
-
- calc_load_update = jiffies + LOAD_FREQ;
---
-cgit 1.2.3-korg
-
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-09 11:35 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-09 11:35 UTC (permalink / raw
To: gentoo-commits
commit: a86bef4a2fd2b250f44dcf0c300bf7d7b26f05e5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 9 11:34:48 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 9 11:34:48 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a86bef4a
Linux patch 6.12.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-6.12.4.patch | 4189 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4193 insertions(+)
diff --git a/0000_README b/0000_README
index c7f77bd5..87f43cf7 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-6.12.3.patch
From: https://www.kernel.org
Desc: Linux 6.12.3
+Patch: 1003_linux-6.12.4.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.4
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1003_linux-6.12.4.patch b/1003_linux-6.12.4.patch
new file mode 100644
index 00000000..42f90cf9
--- /dev/null
+++ b/1003_linux-6.12.4.patch
@@ -0,0 +1,4189 @@
+diff --git a/Documentation/devicetree/bindings/net/fsl,fec.yaml b/Documentation/devicetree/bindings/net/fsl,fec.yaml
+index 5536c06139cae5..24e863fdbdab08 100644
+--- a/Documentation/devicetree/bindings/net/fsl,fec.yaml
++++ b/Documentation/devicetree/bindings/net/fsl,fec.yaml
+@@ -183,6 +183,13 @@ properties:
+ description:
+ Register bits of stop mode control, the format is <&gpr req_gpr req_bit>.
+
++ fsl,pps-channel:
++ $ref: /schemas/types.yaml#/definitions/uint32
++ default: 0
++ description:
++ Specifies to which timer instance the PPS signal is routed.
++ enum: [0, 1, 2, 3]
++
+ mdio:
+ $ref: mdio.yaml#
+ unevaluatedProperties: false
+diff --git a/Makefile b/Makefile
+index e81030ec683143..87dc2f81086021 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 1dfae1af8e31b0..ef6a657c8d1306 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -25,6 +25,7 @@
+ #include <asm/tls.h>
+ #include <asm/system_info.h>
+ #include <asm/uaccess-asm.h>
++#include <asm/kasan_def.h>
+
+ #include "entry-header.S"
+ #include <asm/probes.h>
+@@ -561,6 +562,13 @@ ENTRY(__switch_to)
+ @ entries covering the vmalloc region.
+ @
+ ldr r2, [ip]
++#ifdef CONFIG_KASAN_VMALLOC
++ @ Also dummy read from the KASAN shadow memory for the new stack if we
++ @ are using KASAN
++ mov_l r2, KASAN_SHADOW_OFFSET
++ add r2, r2, ip, lsr #KASAN_SHADOW_SCALE_SHIFT
++ ldr r2, [r2]
++#endif
+ #endif
+
+ @ When CONFIG_THREAD_INFO_IN_TASK=n, the update of SP itself is what
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index 794cfea9f9d4c8..89f1c97f3079c1 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -23,6 +23,7 @@
+ */
+ #include <linux/module.h>
+ #include <linux/errno.h>
++#include <linux/kasan.h>
+ #include <linux/mm.h>
+ #include <linux/vmalloc.h>
+ #include <linux/io.h>
+@@ -115,16 +116,40 @@ int ioremap_page(unsigned long virt, unsigned long phys,
+ }
+ EXPORT_SYMBOL(ioremap_page);
+
++#ifdef CONFIG_KASAN
++static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
++{
++ return (unsigned long)kasan_mem_to_shadow((void *)addr);
++}
++#else
++static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
++{
++ return 0;
++}
++#endif
++
++static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
++ unsigned long end)
++{
++ end = ALIGN(end, PGDIR_SIZE);
++ memcpy(pgd_offset(mm, start), pgd_offset_k(start),
++ sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
++}
++
+ void __check_vmalloc_seq(struct mm_struct *mm)
+ {
+ int seq;
+
+ do {
+- seq = atomic_read(&init_mm.context.vmalloc_seq);
+- memcpy(pgd_offset(mm, VMALLOC_START),
+- pgd_offset_k(VMALLOC_START),
+- sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
+- pgd_index(VMALLOC_START)));
++ seq = atomic_read_acquire(&init_mm.context.vmalloc_seq);
++ memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
++ if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
++ unsigned long start =
++ arm_kasan_mem_to_shadow(VMALLOC_START);
++ unsigned long end =
++ arm_kasan_mem_to_shadow(VMALLOC_END);
++ memcpy_pgd(mm, start, end);
++ }
+ /*
+ * Use a store-release so that other CPUs that observe the
+ * counter's new value are guaranteed to see the results of the
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+index 6eab61a12cd8f8..b844759f52c0d8 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+@@ -212,6 +212,9 @@ accelerometer@68 {
+ interrupts = <7 5 IRQ_TYPE_EDGE_RISING>; /* PH5 */
+ vdd-supply = <®_dldo1>;
+ vddio-supply = <®_dldo1>;
++ mount-matrix = "0", "1", "0",
++ "-1", "0", "0",
++ "0", "0", "1";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+index 5fa39591419115..aee79a50d0e26a 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+@@ -162,7 +162,7 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
+ regulator-max-microvolt = <3300000>;
+ regulator-min-microvolt = <3300000>;
+ regulator-name = "+V3.3_SD";
+- startup-delay-us = <2000>;
++ startup-delay-us = <20000>;
+ };
+
+ reserved-memory {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+index a19ad5ee7f792b..1689fe44099396 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+@@ -175,7 +175,7 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
+ regulator-max-microvolt = <3300000>;
+ regulator-min-microvolt = <3300000>;
+ regulator-name = "+V3.3_SD";
+- startup-delay-us = <2000>;
++ startup-delay-us = <20000>;
+ };
+
+ reserved-memory {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index 0c0b3ac5974525..cfcc7909dfe68d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -423,7 +423,7 @@ it6505dptx: dp-bridge@5c {
+ #sound-dai-cells = <0>;
+ ovdd-supply = <&mt6366_vsim2_reg>;
+ pwr18-supply = <&pp1800_dpbrdg_dx>;
+- reset-gpios = <&pio 177 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&pio 177 GPIO_ACTIVE_LOW>;
+
+ ports {
+ #address-cells = <1>;
+@@ -1336,7 +1336,7 @@ mt6366_vgpu_reg: vgpu {
+ regulator-allowed-modes = <MT6397_BUCK_MODE_AUTO
+ MT6397_BUCK_MODE_FORCE_PWM>;
+ regulator-coupled-with = <&mt6366_vsram_gpu_reg>;
+- regulator-coupled-max-spread = <10000>;
++ regulator-coupled-max-spread = <100000>;
+ };
+
+ mt6366_vproc11_reg: vproc11 {
+@@ -1545,7 +1545,7 @@ mt6366_vsram_gpu_reg: vsram-gpu {
+ regulator-ramp-delay = <6250>;
+ regulator-enable-ramp-delay = <240>;
+ regulator-coupled-with = <&mt6366_vgpu_reg>;
+- regulator-coupled-max-spread = <10000>;
++ regulator-coupled-max-spread = <100000>;
+ };
+
+ mt6366_vsram_others_reg: vsram-others {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+index 5bef31b8577be5..f0eac05f7483ea 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+@@ -160,7 +160,7 @@ reg_sdhc1_vmmc: regulator-sdhci1 {
+ regulator-max-microvolt = <3300000>;
+ regulator-min-microvolt = <3300000>;
+ regulator-name = "+V3.3_SD";
+- startup-delay-us = <2000>;
++ startup-delay-us = <20000>;
+ };
+
+ reg_sdhc1_vqmmc: regulator-sdhci1-vqmmc {
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 1a2ff0276365b4..c7b420d6787ca1 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -275,8 +275,8 @@ config PPC
+ select HAVE_RSEQ
+ select HAVE_SETUP_PER_CPU_AREA if PPC64
+ select HAVE_SOFTIRQ_ON_OWN_STACK
+- select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
+- select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
++ select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
++ select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
+ select HAVE_STATIC_CALL if PPC32
+ select HAVE_SYSCALL_TRACEPOINTS
+ select HAVE_VIRT_CPU_ACCOUNTING
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index bbfe4a1f06ef9d..cbb353ddacb7ad 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -100,13 +100,6 @@ KBUILD_AFLAGS += -m$(BITS)
+ KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
+ endif
+
+-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard=tls
+-ifdef CONFIG_PPC64
+-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r13
+-else
+-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r2
+-endif
+-
+ LDFLAGS_vmlinux-y := -Bstatic
+ LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+ LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) += -z notext
+@@ -402,9 +395,11 @@ prepare: stack_protector_prepare
+ PHONY += stack_protector_prepare
+ stack_protector_prepare: prepare0
+ ifdef CONFIG_PPC64
+- $(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
++ $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 \
++ -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ else
+- $(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
++ $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 \
++ -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ endif
+ endif
+
+diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
+index 31ca5a5470047e..c568cad6a22e6b 100644
+--- a/arch/powerpc/kernel/vdso/Makefile
++++ b/arch/powerpc/kernel/vdso/Makefile
+@@ -54,10 +54,14 @@ ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -W
+
+ CC32FLAGS := -m32
+ CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc
+- # This flag is supported by clang for 64-bit but not 32-bit so it will cause
+- # an unused command line flag warning for this file.
+ ifdef CONFIG_CC_IS_CLANG
++# This flag is supported by clang for 64-bit but not 32-bit so it will cause
++# an unused command line flag warning for this file.
+ CC32FLAGSREMOVE += -fno-stack-clash-protection
++# -mstack-protector-guard values from the 64-bit build are not valid for the
++# 32-bit one. clang validates the values passed to these arguments during
++# parsing, even when -fno-stack-protector is passed afterwards.
++CC32FLAGSREMOVE += -mstack-protector-guard%
+ endif
+ LD32FLAGS := -Wl,-soname=linux-vdso32.so.1
+ AS32FLAGS := -D__VDSO32__
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index d6d5317f768e82..594da4cba707a6 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -450,9 +450,13 @@ SYM_CODE_START(\name)
+ SYM_CODE_END(\name)
+ .endm
+
++ .section .irqentry.text, "ax"
++
+ INT_HANDLER ext_int_handler,__LC_EXT_OLD_PSW,do_ext_irq
+ INT_HANDLER io_int_handler,__LC_IO_OLD_PSW,do_io_irq
+
++ .section .kprobes.text, "ax"
++
+ /*
+ * Machine check handler routines
+ */
+diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c
+index 6295faf0987d86..8b80ea57125f3c 100644
+--- a/arch/s390/kernel/kprobes.c
++++ b/arch/s390/kernel/kprobes.c
+@@ -489,6 +489,12 @@ int __init arch_init_kprobes(void)
+ return 0;
+ }
+
++int __init arch_populate_kprobe_blacklist(void)
++{
++ return kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
++ (unsigned long)__irqentry_text_end);
++}
++
+ int arch_trampoline_kprobe(struct kprobe *p)
+ {
+ return 0;
+diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
+index 9f59837d159e0c..40edfde25f5b97 100644
+--- a/arch/s390/kernel/stacktrace.c
++++ b/arch/s390/kernel/stacktrace.c
+@@ -151,7 +151,7 @@ void arch_stack_walk_user_common(stack_trace_consume_fn consume_entry, void *coo
+ break;
+ }
+ if (!store_ip(consume_entry, cookie, entry, perf, ip))
+- return;
++ break;
+ first = false;
+ }
+ pagefault_enable();
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 978740537a1aac..ef353ca13c356a 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1225,6 +1225,12 @@ static void binder_cleanup_ref_olocked(struct binder_ref *ref)
+ binder_dequeue_work(ref->proc, &ref->death->work);
+ binder_stats_deleted(BINDER_STAT_DEATH);
+ }
++
++ if (ref->freeze) {
++ binder_dequeue_work(ref->proc, &ref->freeze->work);
++ binder_stats_deleted(BINDER_STAT_FREEZE);
++ }
++
+ binder_stats_deleted(BINDER_STAT_REF);
+ }
+
+@@ -3850,7 +3856,6 @@ binder_request_freeze_notification(struct binder_proc *proc,
+ {
+ struct binder_ref_freeze *freeze;
+ struct binder_ref *ref;
+- bool is_frozen;
+
+ freeze = kzalloc(sizeof(*freeze), GFP_KERNEL);
+ if (!freeze)
+@@ -3866,32 +3871,31 @@ binder_request_freeze_notification(struct binder_proc *proc,
+ }
+
+ binder_node_lock(ref->node);
+-
+- if (ref->freeze || !ref->node->proc) {
+- binder_user_error("%d:%d invalid BC_REQUEST_FREEZE_NOTIFICATION %s\n",
+- proc->pid, thread->pid,
+- ref->freeze ? "already set" : "dead node");
++ if (ref->freeze) {
++ binder_user_error("%d:%d BC_REQUEST_FREEZE_NOTIFICATION already set\n",
++ proc->pid, thread->pid);
+ binder_node_unlock(ref->node);
+ binder_proc_unlock(proc);
+ kfree(freeze);
+ return -EINVAL;
+ }
+- binder_inner_proc_lock(ref->node->proc);
+- is_frozen = ref->node->proc->is_frozen;
+- binder_inner_proc_unlock(ref->node->proc);
+
+ binder_stats_created(BINDER_STAT_FREEZE);
+ INIT_LIST_HEAD(&freeze->work.entry);
+ freeze->cookie = handle_cookie->cookie;
+ freeze->work.type = BINDER_WORK_FROZEN_BINDER;
+- freeze->is_frozen = is_frozen;
+-
+ ref->freeze = freeze;
+
+- binder_inner_proc_lock(proc);
+- binder_enqueue_work_ilocked(&ref->freeze->work, &proc->todo);
+- binder_wakeup_proc_ilocked(proc);
+- binder_inner_proc_unlock(proc);
++ if (ref->node->proc) {
++ binder_inner_proc_lock(ref->node->proc);
++ freeze->is_frozen = ref->node->proc->is_frozen;
++ binder_inner_proc_unlock(ref->node->proc);
++
++ binder_inner_proc_lock(proc);
++ binder_enqueue_work_ilocked(&freeze->work, &proc->todo);
++ binder_wakeup_proc_ilocked(proc);
++ binder_inner_proc_unlock(proc);
++ }
+
+ binder_node_unlock(ref->node);
+ binder_proc_unlock(proc);
+@@ -5151,6 +5155,16 @@ static void binder_release_work(struct binder_proc *proc,
+ } break;
+ case BINDER_WORK_NODE:
+ break;
++ case BINDER_WORK_CLEAR_FREEZE_NOTIFICATION: {
++ struct binder_ref_freeze *freeze;
++
++ freeze = container_of(w, struct binder_ref_freeze, work);
++ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
++ "undelivered freeze notification, %016llx\n",
++ (u64)freeze->cookie);
++ kfree(freeze);
++ binder_stats_deleted(BINDER_STAT_FREEZE);
++ } break;
+ default:
+ pr_err("unexpected work type, %d, not freed\n",
+ wtype);
+@@ -5552,6 +5566,7 @@ static bool binder_txns_pending_ilocked(struct binder_proc *proc)
+
+ static void binder_add_freeze_work(struct binder_proc *proc, bool is_frozen)
+ {
++ struct binder_node *prev = NULL;
+ struct rb_node *n;
+ struct binder_ref *ref;
+
+@@ -5560,7 +5575,10 @@ static void binder_add_freeze_work(struct binder_proc *proc, bool is_frozen)
+ struct binder_node *node;
+
+ node = rb_entry(n, struct binder_node, rb_node);
++ binder_inc_node_tmpref_ilocked(node);
+ binder_inner_proc_unlock(proc);
++ if (prev)
++ binder_put_node(prev);
+ binder_node_lock(node);
+ hlist_for_each_entry(ref, &node->refs, node_entry) {
+ /*
+@@ -5586,10 +5604,15 @@ static void binder_add_freeze_work(struct binder_proc *proc, bool is_frozen)
+ }
+ binder_inner_proc_unlock(ref->proc);
+ }
++ prev = node;
+ binder_node_unlock(node);
+ binder_inner_proc_lock(proc);
++ if (proc->is_dead)
++ break;
+ }
+ binder_inner_proc_unlock(proc);
++ if (prev)
++ binder_put_node(prev);
+ }
+
+ static int binder_ioctl_freeze(struct binder_freeze_info *info,
+@@ -6260,6 +6283,7 @@ static void binder_deferred_release(struct binder_proc *proc)
+
+ binder_release_work(proc, &proc->todo);
+ binder_release_work(proc, &proc->delivered_death);
++ binder_release_work(proc, &proc->delivered_freeze);
+
+ binder_debug(BINDER_DEBUG_OPEN_CLOSE,
+ "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d\n",
+@@ -6393,6 +6417,12 @@ static void print_binder_work_ilocked(struct seq_file *m,
+ case BINDER_WORK_CLEAR_DEATH_NOTIFICATION:
+ seq_printf(m, "%shas cleared death notification\n", prefix);
+ break;
++ case BINDER_WORK_FROZEN_BINDER:
++ seq_printf(m, "%shas frozen binder\n", prefix);
++ break;
++ case BINDER_WORK_CLEAR_FREEZE_NOTIFICATION:
++ seq_printf(m, "%shas cleared freeze notification\n", prefix);
++ break;
+ default:
+ seq_printf(m, "%sunknown work: type %d\n", prefix, w->type);
+ break;
+@@ -6539,6 +6569,10 @@ static void print_binder_proc(struct seq_file *m,
+ seq_puts(m, " has delivered dead binder\n");
+ break;
+ }
++ list_for_each_entry(w, &proc->delivered_freeze, entry) {
++ seq_puts(m, " has delivered freeze binder\n");
++ break;
++ }
+ binder_inner_proc_unlock(proc);
+ if (!print_all && m->count == header_pos)
+ m->count = start_pos;
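+
The binder_add_freeze_work() hunk above applies a standard pattern for iterating a lock-protected tree when the per-element work must run unlocked: pin the current node with a temporary reference before dropping the lock, and release the previous node's pin only afterwards, so neither can vanish mid-walk. A toy model of the same shape, with hypothetical refcount and lock stand-ins:

    #include <stdio.h>

    struct node {
            int refs;
            struct node *next;
            int id;
    };

    static void get_node(struct node *n) { n->refs++; }
    static void put_node(struct node *n) { n->refs--; }
    static void lock(void)   { /* stand-in for binder_inner_proc_lock */ }
    static void unlock(void) { /* stand-in */ }

    /* Pin the current node before unlocking; drop the previous pin
     * only after the lock is released (mirrors the hunk above). */
    static void walk(struct node *head)
    {
            struct node *prev = NULL, *n;

            lock();
            for (n = head; n; n = n->next) {
                    get_node(n);
                    unlock();
                    if (prev)
                            put_node(prev);
                    printf("processing node %d (unlocked)\n", n->id);
                    prev = n;
                    lock();
            }
            unlock();
            if (prev)
                    put_node(prev);
    }

    int main(void)
    {
            struct node c = { 0, NULL, 3 }, b = { 0, &c, 2 },
                        a = { 0, &b, 1 };

            walk(&a);
            return 0;
    }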
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 048ff98dbdfd84..d922cefc1e6625 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -1989,10 +1989,10 @@ static struct device *fwnode_get_next_parent_dev(const struct fwnode_handle *fwn
+ *
+ * Return true if one or more cycles were found. Otherwise, return false.
+ */
+-static bool __fw_devlink_relax_cycles(struct device *con,
++static bool __fw_devlink_relax_cycles(struct fwnode_handle *con_handle,
+ struct fwnode_handle *sup_handle)
+ {
+- struct device *sup_dev = NULL, *par_dev = NULL;
++ struct device *sup_dev = NULL, *par_dev = NULL, *con_dev = NULL;
+ struct fwnode_link *link;
+ struct device_link *dev_link;
+ bool ret = false;
+@@ -2009,22 +2009,22 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+
+ sup_handle->flags |= FWNODE_FLAG_VISITED;
+
+- sup_dev = get_dev_from_fwnode(sup_handle);
+-
+ /* Termination condition. */
+- if (sup_dev == con) {
++ if (sup_handle == con_handle) {
+ pr_debug("----- cycle: start -----\n");
+ ret = true;
+ goto out;
+ }
+
++ sup_dev = get_dev_from_fwnode(sup_handle);
++ con_dev = get_dev_from_fwnode(con_handle);
+ /*
+ * If sup_dev is bound to a driver and @con hasn't started binding to a
+ * driver, sup_dev can't be a consumer of @con. So, no need to check
+ * further.
+ */
+ if (sup_dev && sup_dev->links.status == DL_DEV_DRIVER_BOUND &&
+- con->links.status == DL_DEV_NO_DRIVER) {
++ con_dev && con_dev->links.status == DL_DEV_NO_DRIVER) {
+ ret = false;
+ goto out;
+ }
+@@ -2033,7 +2033,7 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+ if (link->flags & FWLINK_FLAG_IGNORE)
+ continue;
+
+- if (__fw_devlink_relax_cycles(con, link->supplier)) {
++ if (__fw_devlink_relax_cycles(con_handle, link->supplier)) {
+ __fwnode_link_cycle(link);
+ ret = true;
+ }
+@@ -2048,7 +2048,7 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+ else
+ par_dev = fwnode_get_next_parent_dev(sup_handle);
+
+- if (par_dev && __fw_devlink_relax_cycles(con, par_dev->fwnode)) {
++ if (par_dev && __fw_devlink_relax_cycles(con_handle, par_dev->fwnode)) {
+ pr_debug("%pfwf: cycle: child of %pfwf\n", sup_handle,
+ par_dev->fwnode);
+ ret = true;
+@@ -2066,7 +2066,7 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+ !(dev_link->flags & DL_FLAG_CYCLE))
+ continue;
+
+- if (__fw_devlink_relax_cycles(con,
++ if (__fw_devlink_relax_cycles(con_handle,
+ dev_link->supplier->fwnode)) {
+ pr_debug("%pfwf: cycle: depends on %pfwf\n", sup_handle,
+ dev_link->supplier->fwnode);
+@@ -2114,11 +2114,6 @@ static int fw_devlink_create_devlink(struct device *con,
+ if (link->flags & FWLINK_FLAG_IGNORE)
+ return 0;
+
+- if (con->fwnode == link->consumer)
+- flags = fw_devlink_get_flags(link->flags);
+- else
+- flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+-
+ /*
+ * In some cases, a device P might also be a supplier to its child node
+ * C. However, this would defer the probe of C until the probe of P
+@@ -2139,25 +2134,23 @@ static int fw_devlink_create_devlink(struct device *con,
+ return -EINVAL;
+
+ /*
+- * SYNC_STATE_ONLY device links don't block probing and supports cycles.
+- * So, one might expect that cycle detection isn't necessary for them.
+- * However, if the device link was marked as SYNC_STATE_ONLY because
+- * it's part of a cycle, then we still need to do cycle detection. This
+- * is because the consumer and supplier might be part of multiple cycles
+- * and we need to detect all those cycles.
++ * Don't try to optimize by not calling the cycle detection logic under
++ * certain conditions. There's always some corner case that won't get
++ * detected.
+ */
+- if (!device_link_flag_is_sync_state_only(flags) ||
+- flags & DL_FLAG_CYCLE) {
+- device_links_write_lock();
+- if (__fw_devlink_relax_cycles(con, sup_handle)) {
+- __fwnode_link_cycle(link);
+- flags = fw_devlink_get_flags(link->flags);
+- pr_debug("----- cycle: end -----\n");
+- dev_info(con, "Fixed dependency cycle(s) with %pfwf\n",
+- sup_handle);
+- }
+- device_links_write_unlock();
++ device_links_write_lock();
++ if (__fw_devlink_relax_cycles(link->consumer, sup_handle)) {
++ __fwnode_link_cycle(link);
++ pr_debug("----- cycle: end -----\n");
++ pr_info("%pfwf: Fixed dependency cycle(s) with %pfwf\n",
++ link->consumer, sup_handle);
+ }
++ device_links_write_unlock();
++
++ if (con->fwnode == link->consumer)
++ flags = fw_devlink_get_flags(link->flags);
++ else
++ flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+
+ if (sup_handle->flags & FWNODE_FLAG_NOT_DEVICE)
+ sup_dev = fwnode_get_next_parent_dev(sup_handle);
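+
For context, the rework above walks fwnode handles instead of struct devices when hunting for dependency cycles; at its core it is a depth-first search with visited marking and a termination test against the starting node. A minimal generic sketch of that shape, assuming nothing about the driver-core types (node and suppliers are illustrative, not the real API):

    /* Generic DFS cycle check; types are illustrative, not driver core's. */
    #include <stdbool.h>
    #include <stddef.h>

    struct node {
            bool visited;
            size_t nr_suppliers;
            struct node **suppliers;
    };

    static bool reaches(struct node *start, struct node *n)
    {
            if (n == start)
                    return true;            /* closed a cycle */
            if (n->visited)
                    return false;           /* subtree already explored */
            n->visited = true;
            for (size_t i = 0; i < n->nr_suppliers; i++)
                    if (reaches(start, n->suppliers[i]))
                            return true;
            return false;
    }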
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index e682797cdee783..d6a1ba969266a4 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1692,6 +1692,13 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
+ if (ret)
+ return ret;
+
++ /*
++ * We touched this entry so mark it as non-IDLE. This makes sure that
++ * we don't preserve the IDLE flag and don't incorrectly pick this entry
++ * for a different post-processing type (e.g. writeback).
++ */
++ zram_clear_flag(zram, index, ZRAM_IDLE);
++
+ class_index_old = zs_lookup_class_index(zram->mem_pool, comp_len_old);
+ /*
+ * Iterate the secondary comp algorithms list (in order of priority)
+diff --git a/drivers/clk/qcom/gcc-qcs404.c b/drivers/clk/qcom/gcc-qcs404.c
+index c3cfd572e7c1e0..5ca003c9bfba89 100644
+--- a/drivers/clk/qcom/gcc-qcs404.c
++++ b/drivers/clk/qcom/gcc-qcs404.c
+@@ -131,6 +131,7 @@ static struct clk_alpha_pll gpll1_out_main = {
+ /* 930MHz configuration */
+ static const struct alpha_pll_config gpll3_config = {
+ .l = 48,
++ .alpha_hi = 0x70,
+ .alpha = 0x0,
+ .alpha_en_mask = BIT(24),
+ .post_div_mask = 0xf << 8,
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 5892c73e129d2b..07d6f9a9b7c820 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -287,7 +287,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+ ret = cpufreq_enable_boost_support();
+ if (ret) {
+ dev_warn(cpu_dev, "failed to enable boost: %d\n", ret);
+- goto out_free_opp;
++ goto out_free_table;
+ } else {
+ scmi_cpufreq_hw_attr[1] = &cpufreq_freq_attr_scaling_boost_freqs;
+ scmi_cpufreq_driver.boost_enabled = true;
+@@ -296,6 +296,8 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+
+ return 0;
+
++out_free_table:
++ dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+ out_free_opp:
+ dev_pm_opp_remove_all_dynamic(cpu_dev);
+
+diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
+index 2a1b43f9e0fa2b..df5ffe23644298 100644
+--- a/drivers/firmware/efi/libstub/efi-stub.c
++++ b/drivers/firmware/efi/libstub/efi-stub.c
+@@ -149,7 +149,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
+ return EFI_SUCCESS;
+
+ fail_free_cmdline:
+- efi_bs_call(free_pool, cmdline_ptr);
++ efi_bs_call(free_pool, cmdline);
+ return status;
+ }
+
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index cf5bc77e2362c4..610e159d362ad6 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -327,7 +327,7 @@ config DRM_TTM_HELPER
+ config DRM_GEM_DMA_HELPER
+ tristate
+ depends on DRM
+- select FB_DMAMEM_HELPERS if DRM_FBDEV_EMULATION
++ select FB_DMAMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION
+ help
+ Choose this if you need the GEM DMA helper functions
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index c2394c8b4d6b21..1f08cb88d51be5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4584,8 +4584,8 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ int idx;
+ bool px;
+
+- amdgpu_fence_driver_sw_fini(adev);
+ amdgpu_device_ip_fini(adev);
++ amdgpu_fence_driver_sw_fini(adev);
+ amdgpu_ucode_release(&adev->firmware.gpu_info_fw);
+ adev->accel_working = false;
+ dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index 74fdbf71d95b74..599d3ca4e0ef9e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -214,15 +214,15 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
+
+ drm_sched_entity_destroy(&adev->vce.entity);
+
+- amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
+- (void **)&adev->vce.cpu_addr);
+-
+ for (i = 0; i < adev->vce.num_rings; i++)
+ amdgpu_ring_fini(&adev->vce.ring[i]);
+
+ amdgpu_ucode_release(&adev->vce.fw);
+ mutex_destroy(&adev->vce.idle_mutex);
+
++ amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
++ (void **)&adev->vce.cpu_addr);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+index 7a9adfda5814a6..814ab59fdd4a3a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+@@ -275,6 +275,15 @@ static void nbio_v7_11_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF_BIF256_CI256_RC3X4_USB4_PCIE_MST_CTRL_3, data);
+
++ switch (adev->ip_versions[NBIO_HWIP][0]) {
++ case IP_VERSION(7, 11, 0):
++ case IP_VERSION(7, 11, 1):
++ case IP_VERSION(7, 11, 2):
++ case IP_VERSION(7, 11, 3):
++ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23);
++ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
++ break;
++ }
+ }
+
+ static void nbio_v7_11_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+index 4843dcb9a5f796..d6037577c53278 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+@@ -125,7 +125,7 @@ static bool kq_initialize(struct kernel_queue *kq, struct kfd_node *dev,
+
+ memset(kq->pq_kernel_addr, 0, queue_size);
+ memset(kq->rptr_kernel, 0, sizeof(*kq->rptr_kernel));
+- memset(kq->wptr_kernel, 0, sizeof(*kq->wptr_kernel));
++ memset(kq->wptr_kernel, 0, dev->kfd->device_info.doorbell_size);
+
+ prop.queue_size = queue_size;
+ prop.is_interop = false;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index a88f1b6ea64cfa..a6911bb2cf0c6c 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -3066,7 +3066,10 @@ static void restore_planes_and_stream_state(
+ return;
+
+ for (i = 0; i < status->plane_count; i++) {
++ /* refcount will always be valid, restore everything else */
++ struct kref refcount = status->plane_states[i]->refcount;
+ *status->plane_states[i] = scratch->plane_states[i];
++ status->plane_states[i]->refcount = refcount;
+ }
+ *stream = scratch->stream_state;
+ }
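+
The dc hunk above shows a general hazard worth calling out: bulk-restoring a refcounted object with a struct assignment clobbers its live reference count. A minimal sketch of the save/restore pattern (struct obj is an illustrative type):

    #include <linux/kref.h>

    struct obj {
            struct kref refcount;
            int state;      /* ...everything else that should be restored... */
    };

    static void restore_obj(struct obj *live, const struct obj *scratch)
    {
            struct kref saved = live->refcount;     /* keep the live count */

            *live = *scratch;                       /* bulk restore */
            live->refcount = saved;                 /* put the count back */
    }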
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index 838d72eaa87fbd..b363f5360818d8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -1392,10 +1392,10 @@ static void dccg35_set_dtbclk_dto(
+
+ /* The recommended programming sequence to enable DTBCLK DTO to generate
+ * valid pixel HPO DPSTREAM ENCODER, specifies that DTO source select should
+- * be set only after DTO is enabled
++ * be set only after DTO is enabled.
++ * PIPEx_DTO_SRC_SEL should not be programmed during DTBCLK update since OTG may still be on, and the
++ * programming is handled in program_pix_clk() regardless, so it can be removed from here.
+ */
+- REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+- PIPE_DTO_SRC_SEL[params->otg_inst], 2);
+ } else {
+ switch (params->otg_inst) {
+ case 0:
+@@ -1412,9 +1412,12 @@ static void dccg35_set_dtbclk_dto(
+ break;
+ }
+
+- REG_UPDATE_2(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+- DTBCLK_DTO_ENABLE[params->otg_inst], 0,
+- PIPE_DTO_SRC_SEL[params->otg_inst], params->is_hdmi ? 0 : 1);
++ /**
++ * PIPEx_DTO_SRC_SEL should not be programmed during DTBCLK update since OTG may still be on, and the
++ * programming is handled in program_pix_clk() regardless, so it can be removed from here.
++ */
++ REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
++ DTBCLK_DTO_ENABLE[params->otg_inst], 0);
+
+ REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0);
+ REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+index 6eccf0241d857d..1ed21c1b86a5bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+@@ -258,12 +258,25 @@ static unsigned int find_preferred_pipe_candidates(const struct dc_state *existi
+ * However this condition comes with a caveat. We need to ignore pipes that will
+ * require a change in OPP but still have the same stream id. For example during
+ * an MPC to ODM transition.
++ *
++ * Add a check to avoid selecting the head pipe, using the dc resource
++ * helper resource_get_primary_dpp_pipe() and comparing the pipe index.
+ */
+ if (existing_state) {
+ for (i = 0; i < pipe_count; i++) {
+ if (existing_state->res_ctx.pipe_ctx[i].stream && existing_state->res_ctx.pipe_ctx[i].stream->stream_id == stream_id) {
++ struct pipe_ctx *head_pipe =
++ resource_is_pipe_type(&existing_state->res_ctx.pipe_ctx[i], DPP_PIPE) ?
++ resource_get_primary_dpp_pipe(&existing_state->res_ctx.pipe_ctx[i]) :
++ NULL;
++
++ // always leave the head pipe out of candidate selection
++ if (head_pipe && head_pipe->pipe_idx == i)
++ continue;
+ if (existing_state->res_ctx.pipe_ctx[i].plane_res.hubp &&
+- existing_state->res_ctx.pipe_ctx[i].plane_res.hubp->opp_id != i)
++ existing_state->res_ctx.pipe_ctx[i].plane_res.hubp->opp_id != i &&
++ (existing_state->res_ctx.pipe_ctx[i].prev_odm_pipe ||
++ existing_state->res_ctx.pipe_ctx[i].next_odm_pipe))
+ continue;
+
+ preferred_pipe_candidates[num_preferred_candidates++] = i;
+@@ -292,6 +305,14 @@ static unsigned int find_last_resort_pipe_candidates(const struct dc_state *exis
+ */
+ if (existing_state) {
+ for (i = 0; i < pipe_count; i++) {
++ struct pipe_ctx *head_pipe =
++ resource_is_pipe_type(&existing_state->res_ctx.pipe_ctx[i], DPP_PIPE) ?
++ resource_get_primary_dpp_pipe(&existing_state->res_ctx.pipe_ctx[i]) :
++ NULL;
++
++ // always leave the head pipe out of candidate selection
++ if (head_pipe && head_pipe->pipe_idx == i)
++ continue;
+ if ((existing_state->res_ctx.pipe_ctx[i].plane_res.hubp &&
+ existing_state->res_ctx.pipe_ctx[i].plane_res.hubp->opp_id != i) ||
+ existing_state->res_ctx.pipe_ctx[i].stream_res.tg)
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h
+index 5ebe4cb40f9db6..c38a01742d6f0e 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h
+@@ -7571,6 +7571,8 @@
+ // base address: 0x10100000
+ #define regRCC_STRAP0_RCC_DEV0_EPF0_STRAP0 0xd000
+ #define regRCC_STRAP0_RCC_DEV0_EPF0_STRAP0_BASE_IDX 5
++#define regRCC_DEV0_EPF5_STRAP4 0xd284
++#define regRCC_DEV0_EPF5_STRAP4_BASE_IDX 5
+
+
+ // addressBlock: nbio_nbif0_bif_rst_bif_rst_regblk
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h
+index eb8c556d9c9300..3b96f1e5a1802c 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h
+@@ -50665,6 +50665,19 @@
+ #define RCC_STRAP0_RCC_DEV0_EPF0_STRAP0__STRAP_D1_SUPPORT_DEV0_F0_MASK 0x40000000L
+ #define RCC_STRAP0_RCC_DEV0_EPF0_STRAP0__STRAP_D2_SUPPORT_DEV0_F0_MASK 0x80000000L
+
++//RCC_DEV0_EPF5_STRAP4
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_64BIT_EN_DEV0_F5__SHIFT 0x14
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_EN_DEV0_F5__SHIFT 0x15
++#define RCC_DEV0_EPF5_STRAP4__STRAP_FLR_EN_DEV0_F5__SHIFT 0x16
++#define RCC_DEV0_EPF5_STRAP4__STRAP_PME_SUPPORT_DEV0_F5__SHIFT 0x17
++#define RCC_DEV0_EPF5_STRAP4__STRAP_INTERRUPT_PIN_DEV0_F5__SHIFT 0x1c
++#define RCC_DEV0_EPF5_STRAP4__STRAP_AUXPWR_SUPPORT_DEV0_F5__SHIFT 0x1f
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_64BIT_EN_DEV0_F5_MASK 0x00100000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_EN_DEV0_F5_MASK 0x00200000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_FLR_EN_DEV0_F5_MASK 0x00400000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_PME_SUPPORT_DEV0_F5_MASK 0x0F800000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_INTERRUPT_PIN_DEV0_F5_MASK 0x70000000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_AUXPWR_SUPPORT_DEV0_F5_MASK 0x80000000L
+
+ // addressBlock: nbio_nbif0_bif_rst_bif_rst_regblk
+ //HARD_RST_CTRL
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 80e60ea2d11e3c..32bdeac2676b5c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1695,7 +1695,9 @@ static int smu_smc_hw_setup(struct smu_context *smu)
+ return ret;
+ }
+
+- if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
++ if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN5)
++ pcie_gen = 4;
++ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+ pcie_gen = 3;
+ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+ pcie_gen = 2;
+@@ -1708,7 +1710,9 @@ static int smu_smc_hw_setup(struct smu_context *smu)
+ * Bit 15:8: PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
+ * Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32
+ */
+- if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
++ if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X32)
++ pcie_width = 7;
++ else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
+ pcie_width = 6;
+ else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
+ pcie_width = 5;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
+index 727d5b405435d0..3c1b4aa4a68d7e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
+@@ -53,7 +53,7 @@
+ #define CTF_OFFSET_MEM 5
+
+ extern const int decoded_link_speed[5];
+-extern const int decoded_link_width[7];
++extern const int decoded_link_width[8];
+
+ #define DECODE_GEN_SPEED(gen_speed_idx) (decoded_link_speed[gen_speed_idx])
+ #define DECODE_LANE_WIDTH(lane_width_idx) (decoded_link_width[lane_width_idx])
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index c0f6b59369b7c4..d52512f5f1bd9d 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1344,8 +1344,12 @@ static int arcturus_get_power_limit(struct smu_context *smu,
+ *default_power_limit = power_limit;
+ if (max_power_limit)
+ *max_power_limit = power_limit;
++ /**
++ * No lower bound is imposed on the limit. Any unreasonable limit set
++ * will result in frequent throttling.
++ */
+ if (min_power_limit)
+- *min_power_limit = power_limit;
++ *min_power_limit = 0;
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index b891a5e0a3969a..ceaf4572db2527 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2061,6 +2061,8 @@ static ssize_t smu_v13_0_7_get_gpu_metrics(struct smu_context *smu,
+ gpu_metrics->average_dclk1_frequency = metrics->AverageDclk1Frequency;
+
+ gpu_metrics->current_gfxclk = metrics->CurrClock[PPCLK_GFXCLK];
++ gpu_metrics->current_socclk = metrics->CurrClock[PPCLK_SOCCLK];
++ gpu_metrics->current_uclk = metrics->CurrClock[PPCLK_UCLK];
+ gpu_metrics->current_vclk0 = metrics->CurrClock[PPCLK_VCLK_0];
+ gpu_metrics->current_dclk0 = metrics->CurrClock[PPCLK_DCLK_0];
+ gpu_metrics->current_vclk1 = metrics->CurrClock[PPCLK_VCLK_1];
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+index 865e916fc42544..452589adaf0468 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+@@ -49,7 +49,7 @@
+ #define regMP1_SMN_IH_SW_INT_CTRL_mp1_14_0_0_BASE_IDX 0
+
+ const int decoded_link_speed[5] = {1, 2, 3, 4, 5};
+-const int decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16};
++const int decoded_link_width[8] = {0, 1, 2, 4, 8, 12, 16, 32};
+ /*
+ * DO NOT use these for err/warn/info/debug messages.
+ * Use dev_err, dev_warn, dev_info and dev_dbg instead.
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 1e16a281f2dcde..82aef8626afa97 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1186,13 +1186,15 @@ static int smu_v14_0_2_print_clk_levels(struct smu_context *smu,
+ (pcie_table->pcie_gen[i] == 0) ? "2.5GT/s," :
+ (pcie_table->pcie_gen[i] == 1) ? "5.0GT/s," :
+ (pcie_table->pcie_gen[i] == 2) ? "8.0GT/s," :
+- (pcie_table->pcie_gen[i] == 3) ? "16.0GT/s," : "",
++ (pcie_table->pcie_gen[i] == 3) ? "16.0GT/s," :
++ (pcie_table->pcie_gen[i] == 4) ? "32.0GT/s," : "",
+ (pcie_table->pcie_lane[i] == 1) ? "x1" :
+ (pcie_table->pcie_lane[i] == 2) ? "x2" :
+ (pcie_table->pcie_lane[i] == 3) ? "x4" :
+ (pcie_table->pcie_lane[i] == 4) ? "x8" :
+ (pcie_table->pcie_lane[i] == 5) ? "x12" :
+- (pcie_table->pcie_lane[i] == 6) ? "x16" : "",
++ (pcie_table->pcie_lane[i] == 6) ? "x16" :
++ (pcie_table->pcie_lane[i] == 7) ? "x32" : "",
+ pcie_table->clk_freq[i],
+ (gen_speed == DECODE_GEN_SPEED(pcie_table->pcie_gen[i])) &&
+ (lane_width == DECODE_LANE_WIDTH(pcie_table->pcie_lane[i])) ?
+@@ -1475,15 +1477,35 @@ static int smu_v14_0_2_update_pcie_parameters(struct smu_context *smu,
+ struct smu_14_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
+ struct smu_14_0_pcie_table *pcie_table =
+ &dpm_context->dpm_tables.pcie_table;
++ int num_of_levels = pcie_table->num_of_link_levels;
+ uint32_t smu_pcie_arg;
+ int ret, i;
+
+- for (i = 0; i < pcie_table->num_of_link_levels; i++) {
+- if (pcie_table->pcie_gen[i] > pcie_gen_cap)
++ if (!num_of_levels)
++ return 0;
++
++ if (!(smu->adev->pm.pp_feature & PP_PCIE_DPM_MASK)) {
++ if (pcie_table->pcie_gen[num_of_levels - 1] < pcie_gen_cap)
++ pcie_gen_cap = pcie_table->pcie_gen[num_of_levels - 1];
++
++ if (pcie_table->pcie_lane[num_of_levels - 1] < pcie_width_cap)
++ pcie_width_cap = pcie_table->pcie_lane[num_of_levels - 1];
++
++ /* Force all levels to use the same settings */
++ for (i = 0; i < num_of_levels; i++) {
+ pcie_table->pcie_gen[i] = pcie_gen_cap;
+- if (pcie_table->pcie_lane[i] > pcie_width_cap)
+ pcie_table->pcie_lane[i] = pcie_width_cap;
++ }
++ } else {
++ for (i = 0; i < num_of_levels; i++) {
++ if (pcie_table->pcie_gen[i] > pcie_gen_cap)
++ pcie_table->pcie_gen[i] = pcie_gen_cap;
++ if (pcie_table->pcie_lane[i] > pcie_width_cap)
++ pcie_table->pcie_lane[i] = pcie_width_cap;
++ }
++ }
+
++ for (i = 0; i < num_of_levels; i++) {
+ smu_pcie_arg = i << 16;
+ smu_pcie_arg |= pcie_table->pcie_gen[i] << 8;
+ smu_pcie_arg |= pcie_table->pcie_lane[i];
+@@ -2767,7 +2789,6 @@ static const struct pptable_funcs smu_v14_0_2_ppt_funcs = {
+ .get_unique_id = smu_v14_0_2_get_unique_id,
+ .get_power_limit = smu_v14_0_2_get_power_limit,
+ .set_power_limit = smu_v14_0_2_set_power_limit,
+- .set_power_source = smu_v14_0_set_power_source,
+ .get_power_profile_mode = smu_v14_0_2_get_power_profile_mode,
+ .set_power_profile_mode = smu_v14_0_2_set_power_profile_mode,
+ .run_btc = smu_v14_0_run_btc,
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index e3a9832c742cb1..65b57de20203f5 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -2614,9 +2614,9 @@ static int it6505_poweron(struct it6505 *it6505)
+ /* time interval between OVDD and SYSRSTN must be at least 10ms */
+ if (pdata->gpiod_reset) {
+ usleep_range(10000, 20000);
+- gpiod_set_value_cansleep(pdata->gpiod_reset, 0);
+- usleep_range(1000, 2000);
+ gpiod_set_value_cansleep(pdata->gpiod_reset, 1);
++ usleep_range(1000, 2000);
++ gpiod_set_value_cansleep(pdata->gpiod_reset, 0);
+ usleep_range(25000, 35000);
+ }
+
+@@ -2647,7 +2647,7 @@ static int it6505_poweroff(struct it6505 *it6505)
+ disable_irq_nosync(it6505->irq);
+
+ if (pdata->gpiod_reset)
+- gpiod_set_value_cansleep(pdata->gpiod_reset, 0);
++ gpiod_set_value_cansleep(pdata->gpiod_reset, 1);
+
+ if (pdata->pwr18) {
+ err = regulator_disable(pdata->pwr18);
+@@ -3135,7 +3135,7 @@ static int it6505_init_pdata(struct it6505 *it6505)
+ return PTR_ERR(pdata->ovdd);
+ }
+
+- pdata->gpiod_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
++ pdata->gpiod_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(pdata->gpiod_reset)) {
+ dev_err(dev, "gpiod_reset gpio not found");
+ return PTR_ERR(pdata->gpiod_reset);
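+
The it6505 hunks above move the driver to gpiod's logical convention: a value of 1 means "reset asserted", and the active-low physical polarity is described by the device tree. A minimal sketch of that convention, with my_chip_reset() as an illustrative name:

    #include <linux/gpio/consumer.h>
    #include <linux/delay.h>

    static int my_chip_reset(struct device *dev)
    {
            struct gpio_desc *rst;

            /* request with reset logically asserted; DT flags set polarity */
            rst = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
            if (IS_ERR(rst))
                    return PTR_ERR(rst);

            usleep_range(10000, 20000);
            gpiod_set_value_cansleep(rst, 0);       /* deassert, leave reset */
            return 0;
    }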
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 43cdf39019a445..5186d2114a5037 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -3015,7 +3015,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
+ bool stall)
+ {
+ int i, ret;
+- unsigned long flags;
++ unsigned long flags = 0;
+ struct drm_connector *connector;
+ struct drm_connector_state *old_conn_state, *new_conn_state;
+ struct drm_crtc *crtc;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+index 384df1659be60d..b13a17276d07cd 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+@@ -482,7 +482,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
+ } else {
+ CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE,
+ VIVS_GL_FLUSH_CACHE_DEPTH |
+- VIVS_GL_FLUSH_CACHE_COLOR);
++ VIVS_GL_FLUSH_CACHE_COLOR |
++ VIVS_GL_FLUSH_CACHE_SHADER_L1);
+ if (has_blt) {
+ CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x1);
+ CMD_LOAD_STATE(buffer, VIVS_BLT_SET_COMMAND, 0x1);
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 3e807195a0d03a..2c1cb335d8623f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -405,8 +405,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ if (temp_drm_priv->mtk_drm_bound)
+ cnt++;
+
+- if (cnt == MAX_CRTC)
++ if (cnt == MAX_CRTC) {
++ of_node_put(node);
+ break;
++ }
+ }
+
+ if (drm_priv->data->mmsys_dev_num == cnt) {
+diff --git a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+index 44897e5218a69f..45d09e6fa667fd 100644
+--- a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
++++ b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+@@ -26,7 +26,6 @@ struct jadard_panel_desc {
+ unsigned int lanes;
+ enum mipi_dsi_pixel_format format;
+ int (*init)(struct jadard *jadard);
+- u32 num_init_cmds;
+ bool lp11_before_reset;
+ bool reset_before_power_off_vcioo;
+ unsigned int vcioo_to_lp11_delay_ms;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index f9c73c55f04f76..f9996304d94313 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -1255,16 +1255,6 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
+ goto exit;
+ }
+ }
+-
+- if (dret && radeon_connector->hpd.hpd != RADEON_HPD_NONE &&
+- !radeon_hpd_sense(rdev, radeon_connector->hpd.hpd) &&
+- connector->connector_type == DRM_MODE_CONNECTOR_HDMIA) {
+- DRM_DEBUG_KMS("EDID is readable when HPD disconnected\n");
+- schedule_delayed_work(&rdev->hotplug_work, msecs_to_jiffies(1000));
+- ret = connector_status_disconnected;
+- goto exit;
+- }
+-
+ if (dret) {
+ radeon_connector->detected_by_load = false;
+ radeon_connector_free_edid(connector);
+diff --git a/drivers/gpu/drm/sti/sti_cursor.c b/drivers/gpu/drm/sti/sti_cursor.c
+index db0a1eb535328f..c59fcb4dca3249 100644
+--- a/drivers/gpu/drm/sti/sti_cursor.c
++++ b/drivers/gpu/drm/sti/sti_cursor.c
+@@ -200,6 +200,9 @@ static int sti_cursor_atomic_check(struct drm_plane *drm_plane,
+ return 0;
+
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++
+ mode = &crtc_state->mode;
+ dst_x = new_plane_state->crtc_x;
+ dst_y = new_plane_state->crtc_y;
+diff --git a/drivers/gpu/drm/sti/sti_gdp.c b/drivers/gpu/drm/sti/sti_gdp.c
+index 43c72c2604a0cd..f046f5f7ad259d 100644
+--- a/drivers/gpu/drm/sti/sti_gdp.c
++++ b/drivers/gpu/drm/sti/sti_gdp.c
+@@ -638,6 +638,9 @@ static int sti_gdp_atomic_check(struct drm_plane *drm_plane,
+
+ mixer = to_sti_mixer(crtc);
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++
+ mode = &crtc_state->mode;
+ dst_x = new_plane_state->crtc_x;
+ dst_y = new_plane_state->crtc_y;
+diff --git a/drivers/gpu/drm/sti/sti_hqvdp.c b/drivers/gpu/drm/sti/sti_hqvdp.c
+index acbf70b95aeb97..5793cf2cb8972c 100644
+--- a/drivers/gpu/drm/sti/sti_hqvdp.c
++++ b/drivers/gpu/drm/sti/sti_hqvdp.c
+@@ -1037,6 +1037,9 @@ static int sti_hqvdp_atomic_check(struct drm_plane *drm_plane,
+ return 0;
+
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++
+ mode = &crtc_state->mode;
+ dst_x = new_plane_state->crtc_x;
+ dst_y = new_plane_state->crtc_y;
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 4f5d00aea7168b..2927745d689549 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1846,16 +1846,29 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
+ xe_gt_assert(guc_to_gt(guc), runnable_state == 0);
+ xe_gt_assert(guc_to_gt(guc), exec_queue_pending_disable(q));
+
+- clear_exec_queue_pending_disable(q);
+ if (q->guc->suspend_pending) {
+ suspend_fence_signal(q);
++ clear_exec_queue_pending_disable(q);
+ } else {
+ if (exec_queue_banned(q) || check_timeout) {
+ smp_wmb();
+ wake_up_all(&guc->ct.wq);
+ }
+- if (!check_timeout)
++ if (!check_timeout && exec_queue_destroyed(q)) {
++ /*
++ * Make sure to clear the pending_disable only
++ * after sampling the destroyed state. We want
++ * to ensure we don't trigger the unregister too
++ * early with something intending to only
++ * disable scheduling. The caller doing the
++ * destroy must wait for an ongoing
++ * pending_disable before marking as destroyed.
++ */
++ clear_exec_queue_pending_disable(q);
+ deregister_exec_queue(guc, q);
++ } else {
++ clear_exec_queue_pending_disable(q);
++ }
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index cfd31ae49cc1f7..1b97d90aaddaf4 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -209,7 +209,8 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
+ num_entries * XE_PAGE_SIZE,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+- XE_BO_FLAG_PINNED);
++ XE_BO_FLAG_PINNED |
++ XE_BO_FLAG_PAGETABLE);
+ if (IS_ERR(bo))
+ return PTR_ERR(bo);
+
+@@ -1350,6 +1351,7 @@ __xe_migrate_update_pgtables(struct xe_migrate *m,
+
+ /* For sysmem PTE's, need to map them in our hole.. */
+ if (!IS_DGFX(xe)) {
++ u16 pat_index = xe->pat.idx[XE_CACHE_WB];
+ u32 ptes, ofs;
+
+ ppgtt_ofs = NUM_KERNEL_PDE - 1;
+@@ -1409,7 +1411,7 @@ __xe_migrate_update_pgtables(struct xe_migrate *m,
+ pt_bo->update_index = current_update;
+
+ addr = vm->pt_ops->pte_encode_bo(pt_bo, 0,
+- XE_CACHE_WB, 0);
++ pat_index, 0);
+ bb->cs[bb->len++] = lower_32_bits(addr);
+ bb->cs[bb->len++] = upper_32_bits(addr);
+ }
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+index 4556af2faa0f19..1565a7dd4f04d0 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+@@ -509,12 +509,12 @@ int zynqmp_dpsub_drm_init(struct zynqmp_dpsub *dpsub)
+ if (ret)
+ return ret;
+
+- drm_kms_helper_poll_init(drm);
+-
+ ret = zynqmp_dpsub_kms_init(dpsub);
+ if (ret < 0)
+ goto err_poll_fini;
+
++ drm_kms_helper_poll_init(drm);
++
+ /* Reset all components and register the DRM device. */
+ drm_mode_config_reset(drm);
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index ffe99f0c6acef5..da83c49223b33e 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1417,7 +1417,7 @@ static void i3c_master_put_i3c_addrs(struct i3c_dev_desc *dev)
+ I3C_ADDR_SLOT_FREE);
+
+ if (dev->boardinfo && dev->boardinfo->init_dyn_addr)
+- i3c_bus_set_addr_slot_status(&master->bus, dev->info.dyn_addr,
++ i3c_bus_set_addr_slot_status(&master->bus, dev->boardinfo->init_dyn_addr,
+ I3C_ADDR_SLOT_FREE);
+ }
+
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index a7bfc678153e6c..565af3759813bd 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -130,8 +130,8 @@
+ #define SVC_I3C_PPBAUD_MAX 15
+ #define SVC_I3C_QUICK_I2C_CLK 4170000
+
+-#define SVC_I3C_EVENT_IBI BIT(0)
+-#define SVC_I3C_EVENT_HOTJOIN BIT(1)
++#define SVC_I3C_EVENT_IBI GENMASK(7, 0)
++#define SVC_I3C_EVENT_HOTJOIN BIT(31)
+
+ struct svc_i3c_cmd {
+ u8 addr;
+@@ -214,7 +214,7 @@ struct svc_i3c_master {
+ spinlock_t lock;
+ } ibi;
+ struct mutex lock;
+- int enabled_events;
++ u32 enabled_events;
+ u32 mctrl_config;
+ };
+
+@@ -1056,12 +1056,27 @@ static int svc_i3c_master_do_daa(struct i3c_master_controller *m)
+ if (ret)
+ goto rpm_out;
+
+- /* Register all devices who participated to the core */
+- for (i = 0; i < dev_nb; i++) {
+- ret = i3c_master_add_i3c_dev_locked(m, addrs[i]);
+- if (ret)
+- goto rpm_out;
+- }
++ /*
++ * Register with the core all devices that took part in DAA
++ *
++ * If two devices (A and B) are detected in DAA and address 0xa is assigned to
++ * device A and 0xb to device B, a failure in i3c_master_add_i3c_dev_locked()
++ * for device A (addr: 0xa) could prevent device B (addr: 0xb) from being
++ * registered on the bus. The I3C stack might still consider 0xb a free
++ * address. If a subsequent Hotjoin occurs, 0xb might be assigned to Device A,
++ * causing both devices A and B to use the same address 0xb, violating the I3C
++ * specification.
++ *
++ * The return value for i3c_master_add_i3c_dev_locked() should not be checked
++ * because subsequent steps will scan the entire I3C bus, independent of
++ * whether i3c_master_add_i3c_dev_locked() returns success.
++ *
++ * If device A registration fails, there is still a chance to register device
++ * B. i3c_master_add_i3c_dev_locked() can reset DAA if a failure occurs while
++ * retrieving device information.
++ */
++ for (i = 0; i < dev_nb; i++)
++ i3c_master_add_i3c_dev_locked(m, addrs[i]);
+
+ /* Configure IBI auto-rules */
+ ret = svc_i3c_update_ibirules(master);
+@@ -1624,7 +1639,7 @@ static int svc_i3c_master_enable_ibi(struct i3c_dev_desc *dev)
+ return ret;
+ }
+
+- master->enabled_events |= SVC_I3C_EVENT_IBI;
++ master->enabled_events++;
+ svc_i3c_master_enable_interrupts(master, SVC_I3C_MINT_SLVSTART);
+
+ return i3c_master_enec_locked(m, dev->info.dyn_addr, I3C_CCC_EVENT_SIR);
+@@ -1636,7 +1651,7 @@ static int svc_i3c_master_disable_ibi(struct i3c_dev_desc *dev)
+ struct svc_i3c_master *master = to_svc_i3c_master(m);
+ int ret;
+
+- master->enabled_events &= ~SVC_I3C_EVENT_IBI;
++ master->enabled_events--;
+ if (!master->enabled_events)
+ svc_i3c_master_disable_interrupts(master);
+
+@@ -1827,8 +1842,8 @@ static int svc_i3c_master_probe(struct platform_device *pdev)
+ rpm_disable:
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+- pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_set_suspended(&pdev->dev);
+
+ err_disable_clks:
+ svc_i3c_master_unprepare_clks(master);
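+
The svc-i3c change above turns enabled_events from a one-bit flag into a counter in the low byte: several targets can have IBIs enabled at once, and only the last disable may mask the interrupt, while hot-join keeps a dedicated high bit. A rough sketch of the idea (not the driver's actual locking or register handling):

    #include <linux/bits.h>
    #include <linux/types.h>

    #define EVENT_IBI       GENMASK(7, 0)   /* count of IBI-enabled devices */
    #define EVENT_HOTJOIN   BIT(31)

    static u32 events;

    static void ibi_enable(void)
    {
            events++;                       /* one increment per device */
    }

    static bool ibi_disable(void)
    {
            events--;
            return !events;                 /* true: last user, mask the IRQ */
    }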
+diff --git a/drivers/iio/accel/kionix-kx022a.c b/drivers/iio/accel/kionix-kx022a.c
+index 53d59a04ae15e9..b6a828a6df934f 100644
+--- a/drivers/iio/accel/kionix-kx022a.c
++++ b/drivers/iio/accel/kionix-kx022a.c
+@@ -594,7 +594,7 @@ static int kx022a_get_axis(struct kx022a_data *data,
+ if (ret)
+ return ret;
+
+- *val = le16_to_cpu(data->buffer[0]);
++ *val = (s16)le16_to_cpu(data->buffer[0]);
+
+ return IIO_VAL_INT;
+ }
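+
The kx022a fix above is the classic sign-extension bug: the register holds a signed 16-bit sample, and assigning it to an int without the cast zero-extends it instead. A standalone illustration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint16_t raw = 0xfff6;          /* -10 in two's complement */
            int wrong = raw;                /* zero-extends to 65526 */
            int right = (int16_t)raw;       /* sign-extends to -10 */

            printf("wrong=%d right=%d\n", wrong, right);
            return 0;
    }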
+diff --git a/drivers/iio/adc/ad7780.c b/drivers/iio/adc/ad7780.c
+index e9b0c577c9cca4..8ccb74f470309f 100644
+--- a/drivers/iio/adc/ad7780.c
++++ b/drivers/iio/adc/ad7780.c
+@@ -152,7 +152,7 @@ static int ad7780_write_raw(struct iio_dev *indio_dev,
+
+ switch (m) {
+ case IIO_CHAN_INFO_SCALE:
+- if (val != 0)
++ if (val != 0 || val2 == 0)
+ return -EINVAL;
+
+ vref = st->int_vref_mv * 1000000LL;
+diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
+index 09680015a7ab54..acc44cb34f8245 100644
+--- a/drivers/iio/adc/ad7923.c
++++ b/drivers/iio/adc/ad7923.c
+@@ -48,7 +48,7 @@
+
+ struct ad7923_state {
+ struct spi_device *spi;
+- struct spi_transfer ring_xfer[5];
++ struct spi_transfer ring_xfer[9];
+ struct spi_transfer scan_single_xfer[2];
+ struct spi_message ring_msg;
+ struct spi_message scan_single_msg;
+@@ -64,7 +64,7 @@ struct ad7923_state {
+ * Length = 8 channels + 4 extra for 8 byte timestamp
+ */
+ __be16 rx_buf[12] __aligned(IIO_DMA_MINALIGN);
+- __be16 tx_buf[4];
++ __be16 tx_buf[8];
+ };
+
+ struct ad7923_chip_info {
+diff --git a/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c b/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
+index f44458c380d928..37d0bdaa8d824f 100644
+--- a/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
++++ b/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
+@@ -70,6 +70,10 @@ int inv_sensors_timestamp_update_odr(struct inv_sensors_timestamp *ts,
+ if (mult != ts->mult)
+ ts->new_mult = mult;
+
++ /* When FIFO is off, directly apply the new ODR */
++ if (!fifo)
++ inv_sensors_timestamp_apply_odr(ts, 0, 0, 0);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(inv_sensors_timestamp_update_odr, IIO_INV_SENSORS_TIMESTAMP);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+index 56ac198142500a..7968aa27f9fd79 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+@@ -200,7 +200,6 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev,
+ {
+ struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+ struct inv_icm42600_sensor_state *accel_st = iio_priv(indio_dev);
+- struct inv_sensors_timestamp *ts = &accel_st->ts;
+ struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+ unsigned int fifo_en = 0;
+ unsigned int sleep_temp = 0;
+@@ -229,7 +228,6 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev,
+ }
+
+ /* update data FIFO write */
+- inv_sensors_timestamp_apply_odr(ts, 0, 0, 0);
+ ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en);
+
+ out_unlock:
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+index 938af5b640b00f..c6bb68bf5e1449 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+@@ -99,8 +99,6 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev,
+ const unsigned long *scan_mask)
+ {
+ struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+- struct inv_icm42600_sensor_state *gyro_st = iio_priv(indio_dev);
+- struct inv_sensors_timestamp *ts = &gyro_st->ts;
+ struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+ unsigned int fifo_en = 0;
+ unsigned int sleep_gyro = 0;
+@@ -128,7 +126,6 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev,
+ }
+
+ /* update data FIFO write */
+- inv_sensors_timestamp_apply_odr(ts, 0, 0, 0);
+ ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en);
+
+ out_unlock:
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+index 3bfeabab0ec4f6..5b1088cc3704f1 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+@@ -112,7 +112,6 @@ int inv_mpu6050_prepare_fifo(struct inv_mpu6050_state *st, bool enable)
+ if (enable) {
+ /* reset timestamping */
+ inv_sensors_timestamp_reset(&st->timestamp);
+- inv_sensors_timestamp_apply_odr(&st->timestamp, 0, 0, 0);
+ /* reset FIFO */
+ d = st->chip_config.user_ctrl | INV_MPU6050_BIT_FIFO_RST;
+ ret = regmap_write(st->map, st->reg->user_ctrl, d);
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 4ad949672210ba..291c0fc332c978 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -205,7 +205,7 @@ static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
+ memcpy(all_gains, gains[time_idx], gain_bytes);
+ new_idx = gts->num_hwgain;
+
+- while (time_idx--) {
++ while (time_idx-- > 0) {
+ for (j = 0; j < gts->num_hwgain; j++) {
+ int candidate = gains[time_idx][j];
+ int chk;
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 151099be2863c6..3305ebbdbc0787 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -269,7 +269,7 @@ struct iio_channel *fwnode_iio_channel_get_by_name(struct fwnode_handle *fwnode,
+ return ERR_PTR(-ENODEV);
+ }
+
+- chan = __fwnode_iio_channel_get_by_name(fwnode, name);
++ chan = __fwnode_iio_channel_get_by_name(parent, name);
+ if (!IS_ERR(chan) || PTR_ERR(chan) != -ENODEV) {
+ fwnode_handle_put(parent);
+ return chan;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index 6b479592140c47..c8ec74f089f3d6 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -801,7 +801,9 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+ return 0;
+ }
+
++#ifdef CONFIG_IOMMU_DEBUGFS
+ static struct dentry *cmdqv_debugfs_dir;
++#endif
+
+ static struct arm_smmu_device *
+ __tegra241_cmdqv_probe(struct arm_smmu_device *smmu, struct resource *res,
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index 8321962b37148b..14618772a3d6e4 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -1437,6 +1437,17 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
+ goto out_free;
+ } else {
+ smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
++
++ /*
++ * Defer probe if the relevant SMMU instance hasn't finished
++ * probing yet. This is a fragile hack and we'd ideally
++ * avoid this race in the core code. Until that's ironed
++ * out, however, this is the most pragmatic option on the
++ * table.
++ */
++ if (!smmu)
++ return ERR_PTR(dev_err_probe(dev, -EPROBE_DEFER,
++ "smmu dev has not bound yet\n"));
+ }
+
+ ret = -EINVAL;
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 0e67f1721a3d98..a286c5404ea701 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -199,6 +199,18 @@ static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
+ return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
+ }
+
++/*
++ * Convert an index returned by ARM_LPAE_PGD_IDX(), which can point into
++ * a concatenated PGD, into the maximum number of entries that can be
++ * mapped in the same table page.
++ */
++static inline int arm_lpae_max_entries(int i, struct arm_lpae_io_pgtable *data)
++{
++ int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data);
++
++ return ptes_per_table - (i & (ptes_per_table - 1));
++}
++
+ static bool selftest_running = false;
+
+ static dma_addr_t __arm_lpae_dma_addr(void *pages)
+@@ -390,7 +402,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
+
+ /* If we can install a leaf entry at this level, then do so */
+ if (size == block_size) {
+- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
++ max_entries = arm_lpae_max_entries(map_idx_start, data);
+ num_entries = min_t(int, pgcount, max_entries);
+ ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
+ if (!ret)
+@@ -592,7 +604,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+
+ if (size == split_sz) {
+ unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+- max_entries = ptes_per_table - unmap_idx_start;
++ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
+ num_entries = min_t(int, pgcount, max_entries);
+ }
+
+@@ -650,7 +662,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+
+ /* If the size matches this level, we're in the right place */
+ if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
+- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
++ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
+ num_entries = min_t(int, pgcount, max_entries);
+
+ /* Find and handle non-leaf entries */
+diff --git a/drivers/leds/flash/leds-mt6360.c b/drivers/leds/flash/leds-mt6360.c
+index 4c74f1cf01f00d..676236c19ec415 100644
+--- a/drivers/leds/flash/leds-mt6360.c
++++ b/drivers/leds/flash/leds-mt6360.c
+@@ -784,7 +784,6 @@ static void mt6360_v4l2_flash_release(struct mt6360_priv *priv)
+ static int mt6360_led_probe(struct platform_device *pdev)
+ {
+ struct mt6360_priv *priv;
+- struct fwnode_handle *child;
+ size_t count;
+ int i = 0, ret;
+
+@@ -811,7 +810,7 @@ static int mt6360_led_probe(struct platform_device *pdev)
+ return -ENODEV;
+ }
+
+- device_for_each_child_node(&pdev->dev, child) {
++ device_for_each_child_node_scoped(&pdev->dev, child) {
+ struct mt6360_led *led = priv->leds + i;
+ struct led_init_data init_data = { .fwnode = child, };
+ u32 reg, led_color;
+diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
+index 5a2e259679cfdf..e71456a56ab8da 100644
+--- a/drivers/leds/leds-lp55xx-common.c
++++ b/drivers/leds/leds-lp55xx-common.c
+@@ -1132,9 +1132,6 @@ static int lp55xx_parse_common_child(struct device_node *np,
+ if (ret)
+ return ret;
+
+- if (*chan_nr < 0 || *chan_nr > cfg->max_channel)
+- return -EINVAL;
+-
+ return 0;
+ }
+
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 89632ce9776056..c9f47d0cccf9bb 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2484,6 +2484,7 @@ static void pool_work_wait(struct pool_work *pw, struct pool *pool,
+ init_completion(&pw->complete);
+ queue_work(pool->wq, &pw->worker);
+ wait_for_completion(&pw->complete);
++ destroy_work_on_stack(&pw->worker);
+ }
+
+ /*----------------------------------------------------------------*/
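+
The dm-thin hunk (and the md-bitmap one below) pairs wait_for_completion() with destroy_work_on_stack(), which tears down the debug-objects state that INIT_WORK_ONSTACK set up. A minimal sketch of the full on-stack lifecycle, with the my_work names being illustrative:

    #include <linux/workqueue.h>
    #include <linux/completion.h>

    struct my_work {
            struct work_struct worker;
            struct completion done;
    };

    static void my_work_fn(struct work_struct *ws)
    {
            struct my_work *w = container_of(ws, struct my_work, worker);

            /* ...do the deferred work... */
            complete(&w->done);
    }

    static void run_and_wait(struct workqueue_struct *wq)
    {
            struct my_work w;

            INIT_WORK_ONSTACK(&w.worker, my_work_fn);
            init_completion(&w.done);
            queue_work(wq, &w.worker);
            wait_for_completion(&w.done);
            destroy_work_on_stack(&w.worker); /* pairs with INIT_WORK_ONSTACK */
    }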
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 29da10e6f703e2..c3a42dd66ce551 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1285,6 +1285,7 @@ static void bitmap_unplug_async(struct bitmap *bitmap)
+
+ queue_work(md_bitmap_wq, &unplug_work.work);
+ wait_for_completion(&done);
++ destroy_work_on_stack(&unplug_work.work);
+ }
+
+ static void bitmap_unplug(struct mddev *mddev, bool sync)
+diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
+index 3a19124ee27932..22a551c407da49 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.c
++++ b/drivers/md/persistent-data/dm-space-map-common.c
+@@ -51,7 +51,7 @@ static int index_check(const struct dm_block_validator *v,
+ block_size - sizeof(__le32),
+ INDEX_CSUM_XOR));
+ if (csum_disk != mi_le->csum) {
+- DMERR_LIMIT("i%s failed: csum %u != wanted %u", __func__,
++ DMERR_LIMIT("%s failed: csum %u != wanted %u", __func__,
+ le32_to_cpu(csum_disk), le32_to_cpu(mi_le->csum));
+ return -EILSEQ;
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index dc2ea636d17342..2fa1f270fb1d3c 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7177,6 +7177,8 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
+ err = mddev_suspend_and_lock(mddev);
+ if (err)
+ return err;
++ raid5_quiesce(mddev, true);
++
+ conf = mddev->private;
+ if (!conf)
+ err = -ENODEV;
+@@ -7198,6 +7200,8 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
+ kfree(old_groups);
+ }
+ }
++
++ raid5_quiesce(mddev, false);
+ mddev_unlock_and_resume(mddev);
+
+ return err ?: len;
+diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
+index a5baca2449c76d..e25add6cc38e94 100644
+--- a/drivers/media/dvb-frontends/ts2020.c
++++ b/drivers/media/dvb-frontends/ts2020.c
+@@ -553,13 +553,19 @@ static void ts2020_regmap_unlock(void *__dev)
+ static int ts2020_probe(struct i2c_client *client)
+ {
+ struct ts2020_config *pdata = client->dev.platform_data;
+- struct dvb_frontend *fe = pdata->fe;
++ struct dvb_frontend *fe;
+ struct ts2020_priv *dev;
+ int ret;
+ u8 u8tmp;
+ unsigned int utmp;
+ char *chip_str;
+
++ if (!pdata) {
++ dev_err(&client->dev, "platform data is mandatory\n");
++ return -EINVAL;
++ }
++
++ fe = pdata->fe;
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ ret = -ENOMEM;
+diff --git a/drivers/media/i2c/dw9768.c b/drivers/media/i2c/dw9768.c
+index 18ef2b35c9aa3d..87a7c3ceeb119e 100644
+--- a/drivers/media/i2c/dw9768.c
++++ b/drivers/media/i2c/dw9768.c
+@@ -471,10 +471,9 @@ static int dw9768_probe(struct i2c_client *client)
+ * to be powered on in an ACPI system. Similarly for power off in
+ * remove.
+ */
+- pm_runtime_enable(dev);
+ full_power = (is_acpi_node(dev_fwnode(dev)) &&
+ acpi_dev_state_d0(dev)) ||
+- (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev));
++ (is_of_node(dev_fwnode(dev)) && !IS_ENABLED(CONFIG_PM));
+ if (full_power) {
+ ret = dw9768_runtime_resume(dev);
+ if (ret < 0) {
+@@ -484,6 +483,7 @@ static int dw9768_probe(struct i2c_client *client)
+ pm_runtime_set_active(dev);
+ }
+
++ pm_runtime_enable(dev);
+ ret = v4l2_async_register_subdev(&dw9768->sd);
+ if (ret < 0) {
+ dev_err(dev, "failed to register V4L2 subdev: %d", ret);
+@@ -495,12 +495,12 @@ static int dw9768_probe(struct i2c_client *client)
+ return 0;
+
+ err_power_off:
++ pm_runtime_disable(dev);
+ if (full_power) {
+ dw9768_runtime_suspend(dev);
+ pm_runtime_set_suspended(dev);
+ }
+ err_clean_entity:
+- pm_runtime_disable(dev);
+ media_entity_cleanup(&dw9768->sd.entity);
+ err_free_handler:
+ v4l2_ctrl_handler_free(&dw9768->ctrls);
+@@ -517,12 +517,12 @@ static void dw9768_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(&dw9768->sd);
+ v4l2_ctrl_handler_free(&dw9768->ctrls);
+ media_entity_cleanup(&dw9768->sd.entity);
++ pm_runtime_disable(dev);
+ if ((is_acpi_node(dev_fwnode(dev)) && acpi_dev_state_d0(dev)) ||
+- (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev))) {
++ (is_of_node(dev_fwnode(dev)) && !IS_ENABLED(CONFIG_PM))) {
+ dw9768_runtime_suspend(dev);
+ pm_runtime_set_suspended(dev);
+ }
+- pm_runtime_disable(dev);
+ }
+
+ static const struct of_device_id dw9768_of_table[] = {
+diff --git a/drivers/media/i2c/ov08x40.c b/drivers/media/i2c/ov08x40.c
+index 7ead3c720e0e11..67b86dabc67eb1 100644
+--- a/drivers/media/i2c/ov08x40.c
++++ b/drivers/media/i2c/ov08x40.c
+@@ -1339,15 +1339,13 @@ static int ov08x40_read_reg(struct ov08x40 *ov08x,
+ return 0;
+ }
+
+-static int ov08x40_burst_fill_regs(struct ov08x40 *ov08x, u16 first_reg,
+- u16 last_reg, u8 val)
++static int __ov08x40_burst_fill_regs(struct i2c_client *client, u16 first_reg,
++ u16 last_reg, size_t num_regs, u8 val)
+ {
+- struct i2c_client *client = v4l2_get_subdevdata(&ov08x->sd);
+ struct i2c_msg msgs;
+- size_t i, num_regs;
++ size_t i;
+ int ret;
+
+- num_regs = last_reg - first_reg + 1;
+ msgs.addr = client->addr;
+ msgs.flags = 0;
+ msgs.len = 2 + num_regs;
+@@ -1373,6 +1371,31 @@ static int ov08x40_burst_fill_regs(struct ov08x40 *ov08x, u16 first_reg,
+ return 0;
+ }
+
++static int ov08x40_burst_fill_regs(struct ov08x40 *ov08x, u16 first_reg,
++ u16 last_reg, u8 val)
++{
++ struct i2c_client *client = v4l2_get_subdevdata(&ov08x->sd);
++ size_t num_regs, num_write_regs;
++ int ret;
++
++ num_regs = last_reg - first_reg + 1;
++ num_write_regs = num_regs;
++
++ if (client->adapter->quirks && client->adapter->quirks->max_write_len)
++ num_write_regs = client->adapter->quirks->max_write_len - 2;
++
++ while (first_reg < last_reg) {
++ ret = __ov08x40_burst_fill_regs(client, first_reg, last_reg,
++ num_write_regs, val);
++ if (ret)
++ return ret;
++
++ first_reg += num_write_regs;
++ }
++
++ return 0;
++}
++
+ /* Write registers up to 4 at a time */
+ static int ov08x40_write_reg(struct ov08x40 *ov08x,
+ u16 reg, u32 len, u32 __val)
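+
The ov08x40 rework above bounds each burst by the adapter's max_write_len quirk. A hedged sketch of the chunking loop, with write_regs() standing in for the actual message construction (it is hypothetical, not a kernel API):

    static int burst_fill(struct i2c_client *client, u16 first, u16 last, u8 val)
    {
            size_t left = last - first + 1;
            size_t chunk = left;
            int ret;

            if (client->adapter->quirks && client->adapter->quirks->max_write_len)
                    chunk = client->adapter->quirks->max_write_len - 2; /* reg addr */

            while (left) {
                    size_t n = min(left, chunk);

                    ret = write_regs(client, first, n, val); /* hypothetical */
                    if (ret)
                            return ret;
                    first += n;
                    left -= n;
            }
            return 0;
    }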
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 65d58ddf02870d..344a670e732fa5 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -2168,8 +2168,10 @@ static int tc358743_probe(struct i2c_client *client)
+
+ err_work_queues:
+ cec_unregister_adapter(state->cec_adap);
+- if (!state->i2c_client->irq)
++ if (!state->i2c_client->irq) {
++ del_timer(&state->timer);
+ flush_work(&state->work_i2c_poll);
++ }
+ cancel_delayed_work(&state->delayed_work_enable_hotplug);
+ mutex_destroy(&state->confctl_mutex);
+ err_hdl:
+diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c
+index 73606cee586ede..88c36eb6174ad6 100644
+--- a/drivers/media/platform/allegro-dvt/allegro-core.c
++++ b/drivers/media/platform/allegro-dvt/allegro-core.c
+@@ -1509,8 +1509,10 @@ static int allocate_buffers_internal(struct allegro_channel *channel,
+ INIT_LIST_HEAD(&buffer->head);
+
+ err = allegro_alloc_buffer(dev, buffer, size);
+- if (err)
++ if (err) {
++ kfree(buffer);
+ goto err;
++ }
+ list_add(&buffer->head, list);
+ }
+
+diff --git a/drivers/media/platform/amphion/vpu_drv.c b/drivers/media/platform/amphion/vpu_drv.c
+index 2bf70aafd2baab..51d5234869f57d 100644
+--- a/drivers/media/platform/amphion/vpu_drv.c
++++ b/drivers/media/platform/amphion/vpu_drv.c
+@@ -151,8 +151,8 @@ static int vpu_probe(struct platform_device *pdev)
+ media_device_cleanup(&vpu->mdev);
+ v4l2_device_unregister(&vpu->v4l2_dev);
+ err_vpu_deinit:
+- pm_runtime_set_suspended(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
+
+ return ret;
+ }
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
+index 83db57bc80b70f..f0b1ec79d2961c 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.c
++++ b/drivers/media/platform/amphion/vpu_v4l2.c
+@@ -841,6 +841,7 @@ int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
+ vfd->fops = vdec_get_fops();
+ vfd->ioctl_ops = vdec_get_ioctl_ops();
+ }
++ video_set_drvdata(vfd, vpu);
+
+ ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+ if (ret) {
+@@ -848,7 +849,6 @@ int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
+ v4l2_m2m_release(func->m2m_dev);
+ return ret;
+ }
+- video_set_drvdata(vfd, vpu);
+ func->vfd = vfd;
+
+ ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function);
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index ac48658e2de403..ff269467635561 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1293,6 +1293,11 @@ static int mtk_jpeg_single_core_init(struct platform_device *pdev,
+ return 0;
+ }
+
++static void mtk_jpeg_destroy_workqueue(void *data)
++{
++ destroy_workqueue(data);
++}
++
+ static int mtk_jpeg_probe(struct platform_device *pdev)
+ {
+ struct mtk_jpeg_dev *jpeg;
+@@ -1337,6 +1342,11 @@ static int mtk_jpeg_probe(struct platform_device *pdev)
+ | WQ_FREEZABLE);
+ if (!jpeg->workqueue)
+ return -EINVAL;
++ ret = devm_add_action_or_reset(&pdev->dev,
++ mtk_jpeg_destroy_workqueue,
++ jpeg->workqueue);
++ if (ret)
++ return ret;
+ }
+
+ ret = v4l2_device_register(&pdev->dev, &jpeg->v4l2_dev);
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
+index 4a6ee211e18f97..2c5d74939d0a92 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
+@@ -578,11 +578,6 @@ static int mtk_jpegdec_hw_init_irq(struct mtk_jpegdec_comp_dev *dev)
+ return 0;
+ }
+
+-static void mtk_jpegdec_destroy_workqueue(void *data)
+-{
+- destroy_workqueue(data);
+-}
+-
+ static int mtk_jpegdec_hw_probe(struct platform_device *pdev)
+ {
+ struct mtk_jpegdec_clk *jpegdec_clk;
+@@ -606,12 +601,6 @@ static int mtk_jpegdec_hw_probe(struct platform_device *pdev)
+ dev->plat_dev = pdev;
+ dev->dev = &pdev->dev;
+
+- ret = devm_add_action_or_reset(&pdev->dev,
+- mtk_jpegdec_destroy_workqueue,
+- master_dev->workqueue);
+- if (ret)
+- return ret;
+-
+ spin_lock_init(&dev->hw_lock);
+ dev->hw_state = MTK_JPEG_HW_IDLE;
+
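+
Taken together, the two mtk-jpeg hunks move the destroy action next to the workqueue allocation, so cleanup is registered on the device that actually owns the resource. The devm pattern in isolation (my-wq and my_probe are illustrative names):

    static void destroy_wq(void *wq)
    {
            destroy_workqueue(wq);
    }

    static int my_probe(struct platform_device *pdev)
    {
            struct workqueue_struct *wq;

            wq = alloc_ordered_workqueue("my-wq", WQ_MEM_RECLAIM | WQ_FREEZABLE);
            if (!wq)
                    return -ENOMEM;

            /* runs at detach, or immediately if registration itself fails */
            return devm_add_action_or_reset(&pdev->dev, destroy_wq, wq);
    }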
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 1d891381303722..1bf85c1cf96435 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -2679,6 +2679,8 @@ static void mxc_jpeg_detach_pm_domains(struct mxc_jpeg_dev *jpeg)
+ int i;
+
+ for (i = 0; i < jpeg->num_domains; i++) {
++ if (jpeg->pd_dev[i] && !pm_runtime_suspended(jpeg->pd_dev[i]))
++ pm_runtime_force_suspend(jpeg->pd_dev[i]);
+ if (jpeg->pd_link[i] && !IS_ERR(jpeg->pd_link[i]))
+ device_link_del(jpeg->pd_link[i]);
+ if (jpeg->pd_dev[i] && !IS_ERR(jpeg->pd_dev[i]))
+@@ -2842,6 +2844,7 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
+ jpeg->dec_vdev->vfl_dir = VFL_DIR_M2M;
+ jpeg->dec_vdev->device_caps = V4L2_CAP_STREAMING |
+ V4L2_CAP_VIDEO_M2M_MPLANE;
++ video_set_drvdata(jpeg->dec_vdev, jpeg);
+ if (mode == MXC_JPEG_ENCODE) {
+ v4l2_disable_ioctl(jpeg->dec_vdev, VIDIOC_DECODER_CMD);
+ v4l2_disable_ioctl(jpeg->dec_vdev, VIDIOC_TRY_DECODER_CMD);
+@@ -2854,7 +2857,6 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
+ dev_err(dev, "failed to register video device\n");
+ goto err_vdev_register;
+ }
+- video_set_drvdata(jpeg->dec_vdev, jpeg);
+ if (mode == MXC_JPEG_ENCODE)
+ v4l2_info(&jpeg->v4l2_dev,
+ "encoder device registered as /dev/video%d (%d,%d)\n",
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index d64985ca6e884f..8c3bce738f2a8f 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -2130,10 +2130,8 @@ static int camss_configure_pd(struct camss *camss)
+ if (camss->res->pd_name) {
+ camss->genpd = dev_pm_domain_attach_by_name(camss->dev,
+ camss->res->pd_name);
+- if (IS_ERR(camss->genpd)) {
+- ret = PTR_ERR(camss->genpd);
+- goto fail_pm;
+- }
++ if (IS_ERR(camss->genpd))
++ return PTR_ERR(camss->genpd);
+ }
+
+ if (!camss->genpd) {
+@@ -2143,14 +2141,13 @@ static int camss_configure_pd(struct camss *camss)
+ */
+ camss->genpd = dev_pm_domain_attach_by_id(camss->dev,
+ camss->genpd_num - 1);
++ if (IS_ERR(camss->genpd))
++ return PTR_ERR(camss->genpd);
+ }
+- if (IS_ERR_OR_NULL(camss->genpd)) {
+- if (!camss->genpd)
+- ret = -ENODEV;
+- else
+- ret = PTR_ERR(camss->genpd);
+- goto fail_pm;
+- }
++
++ if (!camss->genpd)
++ return -ENODEV;
++
+ camss->genpd_link = device_link_add(camss->dev, camss->genpd,
+ DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 84e95a46dfc983..cabcf710c0462a 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -412,8 +412,8 @@ static int venus_probe(struct platform_device *pdev)
+ of_platform_depopulate(dev);
+ err_runtime_disable:
+ pm_runtime_put_noidle(dev);
+- pm_runtime_set_suspended(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
+ hfi_destroy(core);
+ err_core_deinit:
+ hfi_core_deinit(core, false);
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 0e768f3e9edab4..de532b7ecd74c1 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -102,7 +102,7 @@ queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq)
+ src_vq->drv_priv = ctx;
+ src_vq->ops = &rga_qops;
+ src_vq->mem_ops = &vb2_dma_sg_memops;
+- dst_vq->gfp_flags = __GFP_DMA32;
++ src_vq->gfp_flags = __GFP_DMA32;
+ src_vq->buf_struct_size = sizeof(struct rga_vb_buffer);
+ src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ src_vq->lock = &ctx->rga->mutex;
+diff --git a/drivers/media/platform/samsung/exynos4-is/media-dev.h b/drivers/media/platform/samsung/exynos4-is/media-dev.h
+index 786264cf79dc14..a50e58ab7ef773 100644
+--- a/drivers/media/platform/samsung/exynos4-is/media-dev.h
++++ b/drivers/media/platform/samsung/exynos4-is/media-dev.h
+@@ -178,8 +178,9 @@ int fimc_md_set_camclk(struct v4l2_subdev *sd, bool on);
+ #ifdef CONFIG_OF
+ static inline bool fimc_md_is_isp_available(struct device_node *node)
+ {
+- node = of_get_child_by_name(node, FIMC_IS_OF_NODE_NAME);
+- return node ? of_device_is_available(node) : false;
++ struct device_node *child __free(device_node) =
++ of_get_child_by_name(node, FIMC_IS_OF_NODE_NAME);
++ return child ? of_device_is_available(child) : false;
+ }
+ #else
+ #define fimc_md_is_isp_available(node) (false)
+diff --git a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
+index 65e8f2d074005c..e54f5fac325bd6 100644
+--- a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
++++ b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
+@@ -161,8 +161,7 @@ static int rockchip_vpu981_av1_dec_frame_ref(struct hantro_ctx *ctx,
+ av1_dec->frame_refs[i].timestamp = timestamp;
+ av1_dec->frame_refs[i].frame_type = frame->frame_type;
+ av1_dec->frame_refs[i].order_hint = frame->order_hint;
+- if (!av1_dec->frame_refs[i].vb2_ref)
+- av1_dec->frame_refs[i].vb2_ref = hantro_get_dst_buf(ctx);
++ av1_dec->frame_refs[i].vb2_ref = hantro_get_dst_buf(ctx);
+
+ for (j = 0; j < V4L2_AV1_TOTAL_REFS_PER_FRAME; j++)
+ av1_dec->frame_refs[i].order_hints[j] = frame->order_hints[j];
+diff --git a/drivers/media/usb/gspca/ov534.c b/drivers/media/usb/gspca/ov534.c
+index 8b6a57f170d0dd..bdff64a29a33a2 100644
+--- a/drivers/media/usb/gspca/ov534.c
++++ b/drivers/media/usb/gspca/ov534.c
+@@ -847,7 +847,7 @@ static void set_frame_rate(struct gspca_dev *gspca_dev)
+ r = rate_1;
+ i = ARRAY_SIZE(rate_1);
+ }
+- while (--i > 0) {
++ while (--i >= 0) {
+ if (sd->frame_rate >= r->fps)
+ break;
+ r++;
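The ov534 loop guard is a classic reverse-scan off-by-one: with "--i > 0" the scan performs one iteration too few, so one end of the rate table is never examined and the final candidate can never be selected. A standalone illustration of the difference (made-up numbers, not the driver's tables):

#include <stdio.h>

int main(void)
{
        const int fps[] = { 60, 30, 15, 5 };
        int i, sum = 0;

        /*
         * "--i >= 0" visits indices 3, 2, 1, 0.
         * "--i > 0" would visit only 3, 2, 1 and silently skip
         * element 0 -- the same class of bug fixed above.
         */
        i = sizeof(fps) / sizeof(fps[0]);
        while (--i >= 0)
                sum += fps[i];

        printf("sum=%d\n", sum);        /* 110; with "> 0" it would be 50 */
        return 0;
}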
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 13db0026dc1aad..675be4858366f0 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -775,14 +775,27 @@ static const u8 uvc_media_transport_input_guid[16] =
+ UVC_GUID_UVC_MEDIA_TRANSPORT_INPUT;
+ static const u8 uvc_processing_guid[16] = UVC_GUID_UVC_PROCESSING;
+
+-static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
+- unsigned int num_pads, unsigned int extra_size)
++static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
++ u16 id, unsigned int num_pads,
++ unsigned int extra_size)
+ {
+ struct uvc_entity *entity;
+ unsigned int num_inputs;
+ unsigned int size;
+ unsigned int i;
+
++ /* Per UVC 1.1+ spec 3.7.2, the ID should be non-zero. */
++ if (id == 0) {
++ dev_err(&dev->udev->dev, "Found Unit with invalid ID 0.\n");
++ return ERR_PTR(-EINVAL);
++ }
++
++ /* Per UVC 1.1+ spec 3.7.2, the ID is unique. */
++ if (uvc_entity_by_id(dev, id)) {
++ dev_err(&dev->udev->dev, "Found multiple Units with ID %u\n", id);
++ return ERR_PTR(-EINVAL);
++ }
++
+ extra_size = roundup(extra_size, sizeof(*entity->pads));
+ if (num_pads)
+ num_inputs = type & UVC_TERM_OUTPUT ? num_pads : num_pads - 1;
+@@ -792,7 +805,7 @@ static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
+ + num_inputs;
+ entity = kzalloc(size, GFP_KERNEL);
+ if (entity == NULL)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ entity->id = id;
+ entity->type = type;
+@@ -904,10 +917,10 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
+ break;
+ }
+
+- unit = uvc_alloc_entity(UVC_VC_EXTENSION_UNIT, buffer[3],
+- p + 1, 2*n);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, UVC_VC_EXTENSION_UNIT,
++ buffer[3], p + 1, 2 * n);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1016,10 +1029,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- term = uvc_alloc_entity(type | UVC_TERM_INPUT, buffer[3],
+- 1, n + p);
+- if (term == NULL)
+- return -ENOMEM;
++ term = uvc_alloc_new_entity(dev, type | UVC_TERM_INPUT,
++ buffer[3], 1, n + p);
++ if (IS_ERR(term))
++ return PTR_ERR(term);
+
+ if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA) {
+ term->camera.bControlSize = n;
+@@ -1075,10 +1088,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return 0;
+ }
+
+- term = uvc_alloc_entity(type | UVC_TERM_OUTPUT, buffer[3],
+- 1, 0);
+- if (term == NULL)
+- return -ENOMEM;
++ term = uvc_alloc_new_entity(dev, type | UVC_TERM_OUTPUT,
++ buffer[3], 1, 0);
++ if (IS_ERR(term))
++ return PTR_ERR(term);
+
+ memcpy(term->baSourceID, &buffer[7], 1);
+
+@@ -1097,9 +1110,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, 0);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
++ p + 1, 0);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->baSourceID, &buffer[5], p);
+
+@@ -1119,9 +1133,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_entity(buffer[2], buffer[3], 2, n);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3], 2, n);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->baSourceID, &buffer[4], 1);
+ unit->processing.wMaxMultiplier =
+@@ -1148,9 +1162,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, n);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
++ p + 1, n);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1290,9 +1305,10 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ return dev_err_probe(&dev->udev->dev, irq,
+ "No IRQ for privacy GPIO\n");
+
+- unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
+- if (!unit)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, UVC_EXT_GPIO_UNIT,
++ UVC_EXT_GPIO_UNIT_ID, 0, 1);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ unit->gpio.gpio_privacy = gpio_privacy;
+ unit->gpio.irq = irq;
+@@ -1919,11 +1935,41 @@ static void uvc_unregister_video(struct uvc_device *dev)
+ struct uvc_streaming *stream;
+
+ list_for_each_entry(stream, &dev->streams, list) {
++ /* Nothing to do here, continue. */
+ if (!video_is_registered(&stream->vdev))
+ continue;
+
++ /*
++ * For stream->vdev we follow the same logic as:
++ * vb2_video_unregister_device().
++ */
++
++ /* 1. Take a reference to vdev */
++ get_device(&stream->vdev.dev);
++
++ /* 2. Ensure that no new ioctls can be called. */
+ video_unregister_device(&stream->vdev);
+- video_unregister_device(&stream->meta.vdev);
++
++ /* 3. Wait for old ioctls to finish. */
++ mutex_lock(&stream->mutex);
++
++ /* 4. Stop streaming. */
++ uvc_queue_release(&stream->queue);
++
++ mutex_unlock(&stream->mutex);
++
++ put_device(&stream->vdev.dev);
++
++ /*
++ * For stream->meta.vdev we can directly call:
++ * vb2_video_unregister_device().
++ */
++ vb2_video_unregister_device(&stream->meta.vdev);
++
++ /*
++ * Now both vdevs are not streaming and all the ioctls will
++ * return -ENODEV.
++ */
+
+ uvc_debugfs_cleanup_stream(stream);
+ }
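Throughout the uvc_driver.c hunks the allocator's contract changes from "NULL on failure" to ERR_PTR() encoding, so validation failures (-EINVAL for a zero or duplicate unit ID) become distinguishable from allocation failures (-ENOMEM). A minimal sketch of that convention, with hypothetical demo_* names:

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_entity { u16 id; };

static struct demo_entity *demo_alloc_entity(u16 id)
{
        struct demo_entity *e;

        if (id == 0)                    /* invalid descriptor */
                return ERR_PTR(-EINVAL);

        e = kzalloc(sizeof(*e), GFP_KERNEL);
        if (!e)
                return ERR_PTR(-ENOMEM);

        e->id = id;
        return e;
}

static int demo_parse(u16 id)
{
        struct demo_entity *e = demo_alloc_entity(id);

        /*
         * Callers must use IS_ERR()/PTR_ERR(), not a NULL check, so
         * the precise errno chosen by the allocator is propagated.
         */
        if (IS_ERR(e))
                return PTR_ERR(e);

        kfree(e);
        return 0;
}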
+diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
+index f3bb81d7e46045..a33ad04e99cc8e 100644
+--- a/drivers/mtd/nand/spi/winbond.c
++++ b/drivers/mtd/nand/spi/winbond.c
+@@ -201,30 +201,30 @@ static const struct spinand_info winbond_spinand_table[] = {
+ SPINAND_INFO("W25N01JW",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xbc, 0x21),
+ NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25m02gv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N02JWZEIF",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xbf, 0x22),
+ NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 2, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25n02kv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N512GW",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xba, 0x20),
+ NAND_MEMORG(1, 2048, 64, 64, 512, 10, 1, 1, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25n02kv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N02KWZEIR",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xba, 0x22),
+ NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+@@ -237,12 +237,12 @@ static const struct spinand_info winbond_spinand_table[] = {
+ SPINAND_INFO("W25N01GWZEIG",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xba, 0x21),
+ NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25m02gv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N04KV",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xaa, 0x23),
+ NAND_MEMORG(1, 2048, 128, 64, 4096, 40, 2, 1, 1),
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index a4eb6edb850add..7f6b5743207166 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -84,8 +84,7 @@
+ #define FEC_CC_MULT (1 << 31)
+ #define FEC_COUNTER_PERIOD (1 << 31)
+ #define PPS_OUPUT_RELOAD_PERIOD NSEC_PER_SEC
+-#define FEC_CHANNLE_0 0
+-#define DEFAULT_PPS_CHANNEL FEC_CHANNLE_0
++#define DEFAULT_PPS_CHANNEL 0
+
+ #define FEC_PTP_MAX_NSEC_PERIOD 4000000000ULL
+ #define FEC_PTP_MAX_NSEC_COUNTER 0x80000000ULL
+@@ -525,7 +524,6 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
+ int ret = 0;
+
+ if (rq->type == PTP_CLK_REQ_PPS) {
+- fep->pps_channel = DEFAULT_PPS_CHANNEL;
+ fep->reload_period = PPS_OUPUT_RELOAD_PERIOD;
+
+ ret = fec_ptp_enable_pps(fep, on);
+@@ -536,10 +534,9 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
+ if (rq->perout.flags)
+ return -EOPNOTSUPP;
+
+- if (rq->perout.index != DEFAULT_PPS_CHANNEL)
++ if (rq->perout.index != fep->pps_channel)
+ return -EOPNOTSUPP;
+
+- fep->pps_channel = DEFAULT_PPS_CHANNEL;
+ period.tv_sec = rq->perout.period.sec;
+ period.tv_nsec = rq->perout.period.nsec;
+ period_ns = timespec64_to_ns(&period);
+@@ -707,12 +704,16 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
+ {
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct fec_enet_private *fep = netdev_priv(ndev);
++ struct device_node *np = fep->pdev->dev.of_node;
+ int irq;
+ int ret;
+
+ fep->ptp_caps.owner = THIS_MODULE;
+ strscpy(fep->ptp_caps.name, "fec ptp", sizeof(fep->ptp_caps.name));
+
++ fep->pps_channel = DEFAULT_PPS_CHANNEL;
++ of_property_read_u32(np, "fsl,pps-channel", &fep->pps_channel);
++
+ fep->ptp_caps.max_adj = 250000000;
+ fep->ptp_caps.n_alarm = 0;
+ fep->ptp_caps.n_ext_ts = 0;
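The fec_ptp hunk relies on a property of of_property_read_u32(): it writes to the output argument only on success, so assigning the default first and then calling it yields "DT value if present, default otherwise" without checking the return code. Sketch (the "demo,channel" property is hypothetical):

#include <linux/of.h>

static u32 demo_get_channel(struct device_node *np)
{
        u32 channel = 0;        /* default when "demo,channel" is absent */

        /*
         * of_property_read_u32() only stores to &channel on success,
         * so the pre-assigned default survives a missing property and
         * the return value can safely be ignored.
         */
        of_property_read_u32(np, "demo,channel", &channel);

        return channel;
}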
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 7bf275f127c9d7..766213ee82c16e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1205,6 +1205,9 @@ static int stmmac_init_phy(struct net_device *dev)
+ return -ENODEV;
+ }
+
++ if (priv->dma_cap.eee)
++ phy_support_eee(phydev);
++
+ ret = phylink_connect_phy(priv->phylink, phydev);
+ } else {
+ fwnode_handle_put(phy_fwnode);
+diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
+index 059269557d9264..fba2c734f0ec7f 100644
+--- a/drivers/net/netkit.c
++++ b/drivers/net/netkit.c
+@@ -20,6 +20,7 @@ struct netkit {
+ struct net_device __rcu *peer;
+ struct bpf_mprog_entry __rcu *active;
+ enum netkit_action policy;
++ enum netkit_scrub scrub;
+ struct bpf_mprog_bundle bundle;
+
+ /* Needed in slow-path */
+@@ -50,12 +51,24 @@ netkit_run(const struct bpf_mprog_entry *entry, struct sk_buff *skb,
+ return ret;
+ }
+
+-static void netkit_prep_forward(struct sk_buff *skb, bool xnet)
++static void netkit_xnet(struct sk_buff *skb)
+ {
+- skb_scrub_packet(skb, xnet);
+ skb->priority = 0;
++ skb->mark = 0;
++}
++
++static void netkit_prep_forward(struct sk_buff *skb,
++ bool xnet, bool xnet_scrub)
++{
++ skb_scrub_packet(skb, false);
+ nf_skip_egress(skb, true);
+ skb_reset_mac_header(skb);
++ if (!xnet)
++ return;
++ ipvs_reset(skb);
++ skb_clear_tstamp(skb);
++ if (xnet_scrub)
++ netkit_xnet(skb);
+ }
+
+ static struct netkit *netkit_priv(const struct net_device *dev)
+@@ -80,7 +93,8 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
+ !pskb_may_pull(skb, ETH_HLEN) ||
+ skb_orphan_frags(skb, GFP_ATOMIC)))
+ goto drop;
+- netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)));
++ netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)),
++ nk->scrub);
+ eth_skb_pkt_type(skb, peer);
+ skb->dev = peer;
+ entry = rcu_dereference(nk->active);
+@@ -332,8 +346,10 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+ struct netlink_ext_ack *extack)
+ {
+ struct nlattr *peer_tb[IFLA_MAX + 1], **tbp = tb, *attr;
+- enum netkit_action default_prim = NETKIT_PASS;
+- enum netkit_action default_peer = NETKIT_PASS;
++ enum netkit_action policy_prim = NETKIT_PASS;
++ enum netkit_action policy_peer = NETKIT_PASS;
++ enum netkit_scrub scrub_prim = NETKIT_SCRUB_DEFAULT;
++ enum netkit_scrub scrub_peer = NETKIT_SCRUB_DEFAULT;
+ enum netkit_mode mode = NETKIT_L3;
+ unsigned char ifname_assign_type;
+ struct ifinfomsg *ifmp = NULL;
+@@ -362,17 +378,21 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+ return err;
+ tbp = peer_tb;
+ }
++ if (data[IFLA_NETKIT_SCRUB])
++ scrub_prim = nla_get_u32(data[IFLA_NETKIT_SCRUB]);
++ if (data[IFLA_NETKIT_PEER_SCRUB])
++ scrub_peer = nla_get_u32(data[IFLA_NETKIT_PEER_SCRUB]);
+ if (data[IFLA_NETKIT_POLICY]) {
+ attr = data[IFLA_NETKIT_POLICY];
+- default_prim = nla_get_u32(attr);
+- err = netkit_check_policy(default_prim, attr, extack);
++ policy_prim = nla_get_u32(attr);
++ err = netkit_check_policy(policy_prim, attr, extack);
+ if (err < 0)
+ return err;
+ }
+ if (data[IFLA_NETKIT_PEER_POLICY]) {
+ attr = data[IFLA_NETKIT_PEER_POLICY];
+- default_peer = nla_get_u32(attr);
+- err = netkit_check_policy(default_peer, attr, extack);
++ policy_peer = nla_get_u32(attr);
++ err = netkit_check_policy(policy_peer, attr, extack);
+ if (err < 0)
+ return err;
+ }
+@@ -409,7 +429,8 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+
+ nk = netkit_priv(peer);
+ nk->primary = false;
+- nk->policy = default_peer;
++ nk->policy = policy_peer;
++ nk->scrub = scrub_peer;
+ nk->mode = mode;
+ bpf_mprog_bundle_init(&nk->bundle);
+
+@@ -434,7 +455,8 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+
+ nk = netkit_priv(dev);
+ nk->primary = true;
+- nk->policy = default_prim;
++ nk->policy = policy_prim;
++ nk->scrub = scrub_prim;
+ nk->mode = mode;
+ bpf_mprog_bundle_init(&nk->bundle);
+
+@@ -874,6 +896,18 @@ static int netkit_change_link(struct net_device *dev, struct nlattr *tb[],
+ return -EACCES;
+ }
+
++ if (data[IFLA_NETKIT_SCRUB]) {
++ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_SCRUB],
++ "netkit scrubbing cannot be changed after device creation");
++ return -EACCES;
++ }
++
++ if (data[IFLA_NETKIT_PEER_SCRUB]) {
++ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_PEER_SCRUB],
++ "netkit scrubbing cannot be changed after device creation");
++ return -EACCES;
++ }
++
+ if (data[IFLA_NETKIT_PEER_INFO]) {
+ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_PEER_INFO],
+ "netkit peer info cannot be changed after device creation");
+@@ -908,8 +942,10 @@ static size_t netkit_get_size(const struct net_device *dev)
+ {
+ return nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_POLICY */
+ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_PEER_POLICY */
+- nla_total_size(sizeof(u8)) + /* IFLA_NETKIT_PRIMARY */
++ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_SCRUB */
++ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_PEER_SCRUB */
+ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_MODE */
++ nla_total_size(sizeof(u8)) + /* IFLA_NETKIT_PRIMARY */
+ 0;
+ }
+
+@@ -924,11 +960,15 @@ static int netkit_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ return -EMSGSIZE;
+ if (nla_put_u32(skb, IFLA_NETKIT_MODE, nk->mode))
+ return -EMSGSIZE;
++ if (nla_put_u32(skb, IFLA_NETKIT_SCRUB, nk->scrub))
++ return -EMSGSIZE;
+
+ if (peer) {
+ nk = netkit_priv(peer);
+ if (nla_put_u32(skb, IFLA_NETKIT_PEER_POLICY, nk->policy))
+ return -EMSGSIZE;
++ if (nla_put_u32(skb, IFLA_NETKIT_PEER_SCRUB, nk->scrub))
++ return -EMSGSIZE;
+ }
+
+ return 0;
+@@ -936,9 +976,11 @@ static int netkit_fill_info(struct sk_buff *skb, const struct net_device *dev)
+
+ static const struct nla_policy netkit_policy[IFLA_NETKIT_MAX + 1] = {
+ [IFLA_NETKIT_PEER_INFO] = { .len = sizeof(struct ifinfomsg) },
+- [IFLA_NETKIT_POLICY] = { .type = NLA_U32 },
+ [IFLA_NETKIT_MODE] = { .type = NLA_U32 },
++ [IFLA_NETKIT_POLICY] = { .type = NLA_U32 },
+ [IFLA_NETKIT_PEER_POLICY] = { .type = NLA_U32 },
++ [IFLA_NETKIT_SCRUB] = NLA_POLICY_MAX(NLA_U32, NETKIT_SCRUB_DEFAULT),
++ [IFLA_NETKIT_PEER_SCRUB] = NLA_POLICY_MAX(NLA_U32, NETKIT_SCRUB_DEFAULT),
+ [IFLA_NETKIT_PRIMARY] = { .type = NLA_REJECT,
+ .reject_message = "Primary attribute is read-only" },
+ };
+diff --git a/drivers/net/phy/dp83869.c b/drivers/net/phy/dp83869.c
+index 5f056d7db83eed..b6b38caf9c0ed0 100644
+--- a/drivers/net/phy/dp83869.c
++++ b/drivers/net/phy/dp83869.c
+@@ -153,19 +153,32 @@ struct dp83869_private {
+ int mode;
+ };
+
++static int dp83869_config_aneg(struct phy_device *phydev)
++{
++ struct dp83869_private *dp83869 = phydev->priv;
++
++ if (dp83869->mode != DP83869_RGMII_1000_BASE)
++ return genphy_config_aneg(phydev);
++
++ return genphy_c37_config_aneg(phydev);
++}
++
+ static int dp83869_read_status(struct phy_device *phydev)
+ {
+ struct dp83869_private *dp83869 = phydev->priv;
++ bool changed;
+ int ret;
+
++ if (dp83869->mode == DP83869_RGMII_1000_BASE)
++ return genphy_c37_read_status(phydev, &changed);
++
+ ret = genphy_read_status(phydev);
+ if (ret)
+ return ret;
+
+- if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, phydev->supported)) {
++ if (dp83869->mode == DP83869_RGMII_100_BASE) {
+ if (phydev->link) {
+- if (dp83869->mode == DP83869_RGMII_100_BASE)
+- phydev->speed = SPEED_100;
++ phydev->speed = SPEED_100;
+ } else {
+ phydev->speed = SPEED_UNKNOWN;
+ phydev->duplex = DUPLEX_UNKNOWN;
+@@ -898,6 +911,7 @@ static int dp83869_phy_reset(struct phy_device *phydev)
+ .soft_reset = dp83869_phy_reset, \
+ .config_intr = dp83869_config_intr, \
+ .handle_interrupt = dp83869_handle_interrupt, \
++ .config_aneg = dp83869_config_aneg, \
+ .read_status = dp83869_read_status, \
+ .get_tunable = dp83869_get_tunable, \
+ .set_tunable = dp83869_set_tunable, \
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 33ffa2aa4c1152..e1a15fbc6ad025 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -267,7 +267,7 @@ static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj,
+
+ count = round_down(count, nvmem->word_size);
+
+- if (!nvmem->reg_write)
++ if (!nvmem->reg_write || nvmem->read_only)
+ return -EPERM;
+
+ rc = nvmem_reg_write(nvmem, pos, buf, count);
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 808d1f10541733..c8d5c90aa4d45b 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -82,6 +82,11 @@ enum imx_pcie_variants {
+ #define IMX_PCIE_FLAG_HAS_SERDES BIT(6)
+ #define IMX_PCIE_FLAG_SUPPORT_64BIT BIT(7)
+ #define IMX_PCIE_FLAG_CPU_ADDR_FIXUP BIT(8)
++/*
++ * Because of ERR005723 (PCIe does not support L2 power down) we need to
++ * work around suspend/resume on some devices which are affected by this erratum.
++ */
++#define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9)
+
+ #define imx_check_flag(pci, val) (pci->drvdata->flags & val)
+
+@@ -1237,9 +1242,19 @@ static int imx_pcie_suspend_noirq(struct device *dev)
+ return 0;
+
+ imx_pcie_msi_save_restore(imx_pcie, true);
+- imx_pcie_pm_turnoff(imx_pcie);
+- imx_pcie_stop_link(imx_pcie->pci);
+- imx_pcie_host_exit(pp);
++ if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
++ /*
++ * The minimum for a workaround would be to set PERST# and to
++ * set the PCIE_TEST_PD flag. However, we can also disable the
++ * clock which saves some power.
++ */
++ imx_pcie_assert_core_reset(imx_pcie);
++ imx_pcie->drvdata->enable_ref_clk(imx_pcie, false);
++ } else {
++ imx_pcie_pm_turnoff(imx_pcie);
++ imx_pcie_stop_link(imx_pcie->pci);
++ imx_pcie_host_exit(pp);
++ }
+
+ return 0;
+ }
+@@ -1253,14 +1268,32 @@ static int imx_pcie_resume_noirq(struct device *dev)
+ if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND))
+ return 0;
+
+- ret = imx_pcie_host_init(pp);
+- if (ret)
+- return ret;
+- imx_pcie_msi_save_restore(imx_pcie, false);
+- dw_pcie_setup_rc(pp);
++ if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
++ ret = imx_pcie->drvdata->enable_ref_clk(imx_pcie, true);
++ if (ret)
++ return ret;
++ ret = imx_pcie_deassert_core_reset(imx_pcie);
++ if (ret)
++ return ret;
++ /*
++ * Using PCIE_TEST_PD seems to disable MSI and power down the
++ * root complex. This is why we have to set up the rc again and
++ * why we have to restore the MSI register.
++ */
++ ret = dw_pcie_setup_rc(&imx_pcie->pci->pp);
++ if (ret)
++ return ret;
++ imx_pcie_msi_save_restore(imx_pcie, false);
++ } else {
++ ret = imx_pcie_host_init(pp);
++ if (ret)
++ return ret;
++ imx_pcie_msi_save_restore(imx_pcie, false);
++ dw_pcie_setup_rc(pp);
+
+- if (imx_pcie->link_is_up)
+- imx_pcie_start_link(imx_pcie->pci);
++ if (imx_pcie->link_is_up)
++ imx_pcie_start_link(imx_pcie->pci);
++ }
+
+ return 0;
+ }
+@@ -1485,7 +1518,9 @@ static const struct imx_pcie_drvdata drvdata[] = {
+ [IMX6Q] = {
+ .variant = IMX6Q,
+ .flags = IMX_PCIE_FLAG_IMX_PHY |
+- IMX_PCIE_FLAG_IMX_SPEED_CHANGE,
++ IMX_PCIE_FLAG_IMX_SPEED_CHANGE |
++ IMX_PCIE_FLAG_BROKEN_SUSPEND |
++ IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
+ .dbi_length = 0x200,
+ .gpr = "fsl,imx6q-iomuxc-gpr",
+ .clk_names = imx6q_clks,
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 2219b1a866faf2..44b34559de1ac5 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -455,6 +455,17 @@ static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+ u32 reg;
+
++ /*
++ * Checking whether the link is up here is a last line of defense
++ * against platforms that forward errors on the system bus as
++ * SError upon PCI configuration transactions issued when the link
++ * is down. This check is racy by definition and does not stop
++ * the system from triggering an SError if the link goes down
++ * after this check is performed.
++ */
++ if (!dw_pcie_link_up(pci))
++ return NULL;
++
+ reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
+ CFG_FUNC(PCI_FUNC(devfn));
+ if (!pci_is_root_bus(bus->parent))
+@@ -1093,6 +1104,7 @@ static int ks_pcie_am654_set_mode(struct device *dev,
+
+ static const struct ks_pcie_of_data ks_pcie_rc_of_data = {
+ .host_ops = &ks_pcie_host_ops,
++ .mode = DW_PCIE_RC_TYPE,
+ .version = DW_PCIE_VER_365A,
+ };
+
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 43ba5c6738df1a..cc8ff4a014368c 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -689,7 +689,7 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
+ * for 1 MB BAR size only.
+ */
+ for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+- dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
++ dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
+ }
+
+ dw_pcie_setup(pci);
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 2b33d03ed05416..b5447228696dc4 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1845,7 +1845,7 @@ static const struct of_device_id qcom_pcie_match[] = {
+ { .compatible = "qcom,pcie-sm8450-pcie0", .data = &cfg_1_9_0 },
+ { .compatible = "qcom,pcie-sm8450-pcie1", .data = &cfg_1_9_0 },
+ { .compatible = "qcom,pcie-sm8550", .data = &cfg_1_9_0 },
+- { .compatible = "qcom,pcie-x1e80100", .data = &cfg_1_9_0 },
++ { .compatible = "qcom,pcie-x1e80100", .data = &cfg_sc8280xp },
+ { }
+ };
+
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 1362745336568e..a6805b005798c3 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -63,15 +63,25 @@ static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
+ ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region));
+ }
+
++static int rockchip_pcie_ep_ob_atu_num_bits(struct rockchip_pcie *rockchip,
++ u64 pci_addr, size_t size)
++{
++ int num_pass_bits = fls64(pci_addr ^ (pci_addr + size - 1));
++
++ return clamp(num_pass_bits,
++ ROCKCHIP_PCIE_AT_MIN_NUM_BITS,
++ ROCKCHIP_PCIE_AT_MAX_NUM_BITS);
++}
++
+ static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
+ u32 r, u64 cpu_addr, u64 pci_addr,
+ size_t size)
+ {
+- int num_pass_bits = fls64(size - 1);
++ int num_pass_bits;
+ u32 addr0, addr1, desc0;
+
+- if (num_pass_bits < 8)
+- num_pass_bits = 8;
++ num_pass_bits = rockchip_pcie_ep_ob_atu_num_bits(rockchip,
++ pci_addr, size);
+
+ addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
+ (lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
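The new rockchip helper sizes the outbound window from the span of PCI addresses actually touched: fls64(pci_addr ^ (pci_addr + size - 1)) is the position of the highest bit that differs between the first and last byte of the mapping, i.e. how many low address bits the ATU must pass through, clamped to the controller's 8..20-bit limits. A userspace rendering of the arithmetic (fls64_demo stands in for the kernel's fls64()):

#include <stdio.h>
#include <stdint.h>

static int fls64_demo(uint64_t x)       /* highest set bit, 1-based; 0 if x == 0 */
{
        int r = 0;
        while (x) { r++; x >>= 1; }
        return r;
}

int main(void)
{
        uint64_t size = 0x200;                  /* 512-byte window */
        uint64_t aligned = 0x10000200;          /* 512-byte aligned */
        uint64_t unaligned = 0x10000100;        /* crosses a 512-byte boundary */

        /*
         * Aligned: 0x...200 ^ 0x...3ff = 0x1ff -> 9 bits, same as the
         * old fls64(size - 1). Unaligned: 0x...100 ^ 0x...2ff = 0x3ff
         * -> 10 bits; sizing from the size alone under-counts and the
         * window would not cover the whole range.
         */
        printf("aligned:   %d bits\n",
               fls64_demo(aligned ^ (aligned + size - 1)));
        printf("unaligned: %d bits\n",
               fls64_demo(unaligned ^ (unaligned + size - 1)));
        return 0;
}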
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 6111de35f84ca2..15ee949f2485e3 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -245,6 +245,10 @@
+ (PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12)))
+ #define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \
+ (PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12)))
++
++#define ROCKCHIP_PCIE_AT_MIN_NUM_BITS 8
++#define ROCKCHIP_PCIE_AT_MAX_NUM_BITS 20
++
+ #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
+ (PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008)
+ #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index 17f00710925508..62f7dff437309f 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -660,18 +660,18 @@ void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
+ if (IS_ERR_OR_NULL(epc) || !epf)
+ return;
+
++ mutex_lock(&epc->list_lock);
+ if (type == PRIMARY_INTERFACE) {
+ func_no = epf->func_no;
+ list = &epf->list;
++ epf->epc = NULL;
+ } else {
+ func_no = epf->sec_epc_func_no;
+ list = &epf->sec_epc_list;
++ epf->sec_epc = NULL;
+ }
+-
+- mutex_lock(&epc->list_lock);
+ clear_bit(func_no, &epc->function_num_map);
+ list_del(list);
+- epf->epc = NULL;
+ mutex_unlock(&epc->list_lock);
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
+@@ -837,11 +837,10 @@ EXPORT_SYMBOL_GPL(pci_epc_bus_master_enable_notify);
+ void pci_epc_destroy(struct pci_epc *epc)
+ {
+ pci_ep_cfs_remove_epc_group(epc->group);
+- device_unregister(&epc->dev);
+-
+ #ifdef CONFIG_PCI_DOMAINS_GENERIC
+- pci_bus_release_domain_nr(&epc->dev, epc->domain_nr);
++ pci_bus_release_domain_nr(epc->dev.parent, epc->domain_nr);
+ #endif
++ device_unregister(&epc->dev);
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_destroy);
+
+diff --git a/drivers/pci/of_property.c b/drivers/pci/of_property.c
+index 5a0b98e697954a..886c236e5de6e6 100644
+--- a/drivers/pci/of_property.c
++++ b/drivers/pci/of_property.c
+@@ -126,7 +126,7 @@ static int of_pci_prop_ranges(struct pci_dev *pdev, struct of_changeset *ocs,
+ if (of_pci_get_addr_flags(&res[j], &flags))
+ continue;
+
+- val64 = res[j].start;
++ val64 = pci_bus_address(pdev, &res[j] - pdev->resource);
+ of_pci_set_address(pdev, rp[i].parent_addr, val64, 0, flags,
+ false);
+ if (pci_is_bridge(pdev)) {
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index f4f4b3df3884ef..793b1d274be33a 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -1356,7 +1356,7 @@ static const struct adsp_data sc7280_wpss_resource = {
+ .crash_reason_smem = 626,
+ .firmware_name = "wpss.mdt",
+ .pas_id = 6,
+- .auto_boot = true,
++ .auto_boot = false,
+ .proxy_pd_names = (char*[]){
+ "cx",
+ "mx",
+diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
+index 9ba9495fcc4bae..ea843159b745d5 100644
+--- a/drivers/spmi/spmi-pmic-arb.c
++++ b/drivers/spmi/spmi-pmic-arb.c
+@@ -1763,14 +1763,13 @@ static int spmi_pmic_arb_register_buses(struct spmi_pmic_arb *pmic_arb,
+ {
+ struct device *dev = &pdev->dev;
+ struct device_node *node = dev->of_node;
+- struct device_node *child;
+ int ret;
+
+ /* legacy mode doesn't provide child node for the bus */
+ if (of_device_is_compatible(node, "qcom,spmi-pmic-arb"))
+ return spmi_pmic_arb_bus_init(pdev, node, pmic_arb);
+
+- for_each_available_child_of_node(node, child) {
++ for_each_available_child_of_node_scoped(node, child) {
+ if (of_node_name_eq(child, "spmi")) {
+ ret = spmi_pmic_arb_bus_init(pdev, child, pmic_arb);
+ if (ret)
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index b0c0f0ffdcb046..f547d386ae805b 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -137,7 +137,7 @@ static ssize_t current_uuid_show(struct device *dev,
+ struct int3400_thermal_priv *priv = dev_get_drvdata(dev);
+ int i, length = 0;
+
+- if (priv->current_uuid_index > 0)
++ if (priv->current_uuid_index >= 0)
+ return sprintf(buf, "%s\n",
+ int3400_thermal_uuids[priv->current_uuid_index]);
+
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index 5867e633856233..fb550a7c16b34b 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -724,6 +724,9 @@ static void exynos_ufs_config_smu(struct exynos_ufs *ufs)
+ {
+ u32 reg, val;
+
++ if (ufs->opts & EXYNOS_UFS_OPT_UFSPR_SECURE)
++ return;
++
+ exynos_ufs_disable_auto_ctrl_hcc_save(ufs, &val);
+
+ /* make encryption disabled by default */
+@@ -1440,8 +1443,8 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ if (ret)
+ goto out;
+ exynos_ufs_specify_phy_time_attr(ufs);
+- if (!(ufs->opts & EXYNOS_UFS_OPT_UFSPR_SECURE))
+- exynos_ufs_config_smu(ufs);
++
++ exynos_ufs_config_smu(ufs);
+
+ hba->host->dma_alignment = DATA_UNIT_SIZE - 1;
+ return 0;
+@@ -1484,12 +1487,12 @@ static void exynos_ufs_dev_hw_reset(struct ufs_hba *hba)
+ hci_writel(ufs, 1 << 0, HCI_GPIO_OUT);
+ }
+
+-static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, u8 enter)
++static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, enum uic_cmd_dme cmd)
+ {
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+ struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr;
+
+- if (!enter) {
++ if (cmd == UIC_CMD_DME_HIBER_EXIT) {
+ if (ufs->opts & EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL)
+ exynos_ufs_disable_auto_ctrl_hcc(ufs);
+ exynos_ufs_ungate_clks(ufs);
+@@ -1517,11 +1520,11 @@ static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, u8 enter)
+ }
+ }
+
+-static void exynos_ufs_post_hibern8(struct ufs_hba *hba, u8 enter)
++static void exynos_ufs_post_hibern8(struct ufs_hba *hba, enum uic_cmd_dme cmd)
+ {
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+
+- if (!enter) {
++ if (cmd == UIC_CMD_DME_HIBER_EXIT) {
+ u32 cur_mode = 0;
+ u32 pwrmode;
+
+@@ -1540,7 +1543,7 @@ static void exynos_ufs_post_hibern8(struct ufs_hba *hba, u8 enter)
+
+ if (!(ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB))
+ exynos_ufs_establish_connt(ufs);
+- } else {
++ } else if (cmd == UIC_CMD_DME_HIBER_ENTER) {
+ ufs->entry_hibern8_t = ktime_get();
+ exynos_ufs_gate_clks(ufs);
+ if (ufs->opts & EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL)
+@@ -1627,15 +1630,15 @@ static int exynos_ufs_pwr_change_notify(struct ufs_hba *hba,
+ }
+
+ static void exynos_ufs_hibern8_notify(struct ufs_hba *hba,
+- enum uic_cmd_dme enter,
++ enum uic_cmd_dme cmd,
+ enum ufs_notify_change_status notify)
+ {
+ switch ((u8)notify) {
+ case PRE_CHANGE:
+- exynos_ufs_pre_hibern8(hba, enter);
++ exynos_ufs_pre_hibern8(hba, cmd);
+ break;
+ case POST_CHANGE:
+- exynos_ufs_post_hibern8(hba, enter);
++ exynos_ufs_post_hibern8(hba, cmd);
+ break;
+ }
+ }
+diff --git a/drivers/vfio/pci/qat/main.c b/drivers/vfio/pci/qat/main.c
+index be3644ced17be4..c78cb6de93906c 100644
+--- a/drivers/vfio/pci/qat/main.c
++++ b/drivers/vfio/pci/qat/main.c
+@@ -304,7 +304,7 @@ static ssize_t qat_vf_resume_write(struct file *filp, const char __user *buf,
+ offs = &filp->f_pos;
+
+ if (*offs < 0 ||
+- check_add_overflow((loff_t)len, *offs, &end))
++ check_add_overflow(len, *offs, &end))
+ return -EOVERFLOW;
+
+ if (end > mig_dev->state_size)
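The qat hunk drops the (loff_t) cast on len. My reading of the fix: check_add_overflow() wraps __builtin_add_overflow(), which evaluates the sum in infinite precision and checks whether it fits the destination, so mixed signed/unsigned operands are safe, whereas casting first can wrap the value before the check ever sees it. A userspace demonstration of that hazard:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

int main(void)
{
        size_t len = SIZE_MAX;          /* oversized write length */
        int64_t offs = 1, end;          /* loff_t is a signed 64-bit type */

        /* The builtin sees the full unsigned value: 2^64 - 1 + 1 does
         * not fit in a signed 64-bit result, so this reports overflow. */
        if (__builtin_add_overflow(len, offs, &end))
                printf("overflow detected\n");

        /* Casting first wraps before the check: (int64_t)SIZE_MAX is
         * -1 on two's-complement targets, -1 + 1 == 0 "fits", and the
         * overflow goes unnoticed. */
        if (!__builtin_add_overflow((int64_t)len, offs, &end))
                printf("cast version misses it: end=%lld\n", (long long)end);
        return 0;
}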
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index e152fde888fc9a..db53a3263fbd05 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -613,11 +613,17 @@ int btrfs_writepage_cow_fixup(struct folio *folio);
+ int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info,
+ int compress_type);
+ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+- u64 file_offset, u64 disk_bytenr,
+- u64 disk_io_size,
++ u64 disk_bytenr, u64 disk_io_size,
+ struct page **pages);
+ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+- struct btrfs_ioctl_encoded_io_args *encoded);
++ struct btrfs_ioctl_encoded_io_args *encoded,
++ struct extent_state **cached_state,
++ u64 *disk_bytenr, u64 *disk_io_size);
++ssize_t btrfs_encoded_read_regular(struct kiocb *iocb, struct iov_iter *iter,
++ u64 start, u64 lockend,
++ struct extent_state **cached_state,
++ u64 disk_bytenr, u64 disk_io_size,
++ size_t count, bool compressed, bool *unlocked);
+ ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from,
+ const struct btrfs_ioctl_encoded_io_args *encoded);
+
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 0cc919d15b1441..9c05cab473f577 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2010,7 +2010,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ const struct btrfs_key *key, struct btrfs_path *p,
+ int ins_len, int cow)
+ {
+- struct btrfs_fs_info *fs_info = root->fs_info;
++ struct btrfs_fs_info *fs_info;
+ struct extent_buffer *b;
+ int slot;
+ int ret;
+@@ -2023,6 +2023,10 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ int min_write_lock_level;
+ int prev_cmp;
+
++ if (!root)
++ return -EINVAL;
++
++ fs_info = root->fs_info;
+ might_sleep();
+
+ lowest_level = p->lowest_level;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d9f511babd89ab..b43a8611aca5c6 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2446,7 +2446,7 @@ int btrfs_cross_ref_exist(struct btrfs_root *root, u64 objectid, u64 offset,
+ goto out;
+
+ ret = check_delayed_ref(root, path, objectid, offset, bytenr);
+- } while (ret == -EAGAIN);
++ } while (ret == -EAGAIN && !path->nowait);
+
+ out:
+ btrfs_release_path(path);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 1e4ca1e7d2e58d..d067db2619713f 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9126,26 +9126,31 @@ static void btrfs_encoded_read_endio(struct btrfs_bio *bbio)
+ */
+ WRITE_ONCE(priv->status, bbio->bio.bi_status);
+ }
+- if (!atomic_dec_return(&priv->pending))
++ if (atomic_dec_and_test(&priv->pending))
+ wake_up(&priv->wait);
+ bio_put(&bbio->bio);
+ }
+
+ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+- u64 file_offset, u64 disk_bytenr,
+- u64 disk_io_size, struct page **pages)
++ u64 disk_bytenr, u64 disk_io_size,
++ struct page **pages)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+- struct btrfs_encoded_read_private priv = {
+- .pending = ATOMIC_INIT(1),
+- };
++ struct btrfs_encoded_read_private *priv;
+ unsigned long i = 0;
+ struct btrfs_bio *bbio;
++ int ret;
++
++ priv = kmalloc(sizeof(struct btrfs_encoded_read_private), GFP_NOFS);
++ if (!priv)
++ return -ENOMEM;
+
+- init_waitqueue_head(&priv.wait);
++ init_waitqueue_head(&priv->wait);
++ atomic_set(&priv->pending, 1);
++ priv->status = 0;
+
+ bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info,
+- btrfs_encoded_read_endio, &priv);
++ btrfs_encoded_read_endio, priv);
+ bbio->bio.bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
+ bbio->inode = inode;
+
+@@ -9153,11 +9158,11 @@ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+ size_t bytes = min_t(u64, disk_io_size, PAGE_SIZE);
+
+ if (bio_add_page(&bbio->bio, pages[i], bytes, 0) < bytes) {
+- atomic_inc(&priv.pending);
++ atomic_inc(&priv->pending);
+ btrfs_submit_bbio(bbio, 0);
+
+ bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info,
+- btrfs_encoded_read_endio, &priv);
++ btrfs_encoded_read_endio, priv);
+ bbio->bio.bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
+ bbio->inode = inode;
+ continue;
+@@ -9168,22 +9173,22 @@ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+ disk_io_size -= bytes;
+ } while (disk_io_size);
+
+- atomic_inc(&priv.pending);
++ atomic_inc(&priv->pending);
+ btrfs_submit_bbio(bbio, 0);
+
+- if (atomic_dec_return(&priv.pending))
+- io_wait_event(priv.wait, !atomic_read(&priv.pending));
++ if (atomic_dec_return(&priv->pending))
++ io_wait_event(priv->wait, !atomic_read(&priv->pending));
+ /* See btrfs_encoded_read_endio() for ordering. */
+- return blk_status_to_errno(READ_ONCE(priv.status));
++ ret = blk_status_to_errno(READ_ONCE(priv->status));
++ kfree(priv);
++ return ret;
+ }
+
+-static ssize_t btrfs_encoded_read_regular(struct kiocb *iocb,
+- struct iov_iter *iter,
+- u64 start, u64 lockend,
+- struct extent_state **cached_state,
+- u64 disk_bytenr, u64 disk_io_size,
+- size_t count, bool compressed,
+- bool *unlocked)
++ssize_t btrfs_encoded_read_regular(struct kiocb *iocb, struct iov_iter *iter,
++ u64 start, u64 lockend,
++ struct extent_state **cached_state,
++ u64 disk_bytenr, u64 disk_io_size,
++ size_t count, bool compressed, bool *unlocked)
+ {
+ struct btrfs_inode *inode = BTRFS_I(file_inode(iocb->ki_filp));
+ struct extent_io_tree *io_tree = &inode->io_tree;
+@@ -9203,7 +9208,7 @@ static ssize_t btrfs_encoded_read_regular(struct kiocb *iocb,
+ goto out;
+ }
+
+- ret = btrfs_encoded_read_regular_fill_pages(inode, start, disk_bytenr,
++ ret = btrfs_encoded_read_regular_fill_pages(inode, disk_bytenr,
+ disk_io_size, pages);
+ if (ret)
+ goto out;
+@@ -9244,15 +9249,16 @@ static ssize_t btrfs_encoded_read_regular(struct kiocb *iocb,
+ }
+
+ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+- struct btrfs_ioctl_encoded_io_args *encoded)
++ struct btrfs_ioctl_encoded_io_args *encoded,
++ struct extent_state **cached_state,
++ u64 *disk_bytenr, u64 *disk_io_size)
+ {
+ struct btrfs_inode *inode = BTRFS_I(file_inode(iocb->ki_filp));
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct extent_io_tree *io_tree = &inode->io_tree;
+ ssize_t ret;
+ size_t count = iov_iter_count(iter);
+- u64 start, lockend, disk_bytenr, disk_io_size;
+- struct extent_state *cached_state = NULL;
++ u64 start, lockend;
+ struct extent_map *em;
+ bool unlocked = false;
+
+@@ -9278,13 +9284,13 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ lockend - start + 1);
+ if (ret)
+ goto out_unlock_inode;
+- lock_extent(io_tree, start, lockend, &cached_state);
++ lock_extent(io_tree, start, lockend, cached_state);
+ ordered = btrfs_lookup_ordered_range(inode, start,
+ lockend - start + 1);
+ if (!ordered)
+ break;
+ btrfs_put_ordered_extent(ordered);
+- unlock_extent(io_tree, start, lockend, &cached_state);
++ unlock_extent(io_tree, start, lockend, cached_state);
+ cond_resched();
+ }
+
+@@ -9304,7 +9310,7 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ free_extent_map(em);
+ em = NULL;
+ ret = btrfs_encoded_read_inline(iocb, iter, start, lockend,
+- &cached_state, extent_start,
++ cached_state, extent_start,
+ count, encoded, &unlocked);
+ goto out;
+ }
+@@ -9317,12 +9323,12 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ inode->vfs_inode.i_size) - iocb->ki_pos;
+ if (em->disk_bytenr == EXTENT_MAP_HOLE ||
+ (em->flags & EXTENT_FLAG_PREALLOC)) {
+- disk_bytenr = EXTENT_MAP_HOLE;
++ *disk_bytenr = EXTENT_MAP_HOLE;
+ count = min_t(u64, count, encoded->len);
+ encoded->len = count;
+ encoded->unencoded_len = count;
+ } else if (extent_map_is_compressed(em)) {
+- disk_bytenr = em->disk_bytenr;
++ *disk_bytenr = em->disk_bytenr;
+ /*
+ * Bail if the buffer isn't large enough to return the whole
+ * compressed extent.
+@@ -9331,7 +9337,7 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ ret = -ENOBUFS;
+ goto out_em;
+ }
+- disk_io_size = em->disk_num_bytes;
++ *disk_io_size = em->disk_num_bytes;
+ count = em->disk_num_bytes;
+ encoded->unencoded_len = em->ram_bytes;
+ encoded->unencoded_offset = iocb->ki_pos - (em->start - em->offset);
+@@ -9341,35 +9347,32 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ goto out_em;
+ encoded->compression = ret;
+ } else {
+- disk_bytenr = extent_map_block_start(em) + (start - em->start);
++ *disk_bytenr = extent_map_block_start(em) + (start - em->start);
+ if (encoded->len > count)
+ encoded->len = count;
+ /*
+ * Don't read beyond what we locked. This also limits the page
+ * allocations that we'll do.
+ */
+- disk_io_size = min(lockend + 1, iocb->ki_pos + encoded->len) - start;
+- count = start + disk_io_size - iocb->ki_pos;
++ *disk_io_size = min(lockend + 1, iocb->ki_pos + encoded->len) - start;
++ count = start + *disk_io_size - iocb->ki_pos;
+ encoded->len = count;
+ encoded->unencoded_len = count;
+- disk_io_size = ALIGN(disk_io_size, fs_info->sectorsize);
++ *disk_io_size = ALIGN(*disk_io_size, fs_info->sectorsize);
+ }
+ free_extent_map(em);
+ em = NULL;
+
+- if (disk_bytenr == EXTENT_MAP_HOLE) {
+- unlock_extent(io_tree, start, lockend, &cached_state);
++ if (*disk_bytenr == EXTENT_MAP_HOLE) {
++ unlock_extent(io_tree, start, lockend, cached_state);
+ btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
+ unlocked = true;
+ ret = iov_iter_zero(count, iter);
+ if (ret != count)
+ ret = -EFAULT;
+ } else {
+- ret = btrfs_encoded_read_regular(iocb, iter, start, lockend,
+- &cached_state, disk_bytenr,
+- disk_io_size, count,
+- encoded->compression,
+- &unlocked);
++ ret = -EIOCBQUEUED;
++ goto out_em;
+ }
+
+ out:
+@@ -9378,10 +9381,11 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ out_em:
+ free_extent_map(em);
+ out_unlock_extent:
+- if (!unlocked)
+- unlock_extent(io_tree, start, lockend, &cached_state);
++ /* Leave inode and extent locked if we need to do a read. */
++ if (!unlocked && ret != -EIOCBQUEUED)
++ unlock_extent(io_tree, start, lockend, cached_state);
+ out_unlock_inode:
+- if (!unlocked)
++ if (!unlocked && ret != -EIOCBQUEUED)
+ btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
+ return ret;
+ }
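The btrfs_encoded_read_private changes combine two things: the state moves from the stack to the heap because, with the new -EIOCBQUEUED flow, completions may fire after the submitting function has returned; and the endio path switches to atomic_dec_and_test(). The counter starts at 1 as the submitter's own reference, each in-flight bio takes another, and whoever drops it to zero wakes the waiter. A condensed sketch of the pattern (demo_* names are placeholders; bio submission is elided):

#include <linux/atomic.h>
#include <linux/slab.h>
#include <linux/wait.h>

struct demo_read_private {
        wait_queue_head_t wait;
        atomic_t pending;
        int status;
};

static void demo_endio(struct demo_read_private *priv)
{
        /*
         * atomic_dec_and_test() returns true only for the final
         * reference, so exactly one path issues the wake-up; the
         * open-coded "!atomic_dec_return()" it replaces behaves the
         * same but reads as a double negative.
         */
        if (atomic_dec_and_test(&priv->pending))
                wake_up(&priv->wait);
}

static int demo_submit_all(int nr_bios)
{
        struct demo_read_private *priv;
        int i, ret;

        /* Heap, not stack: the completions may outlive the caller. */
        priv = kmalloc(sizeof(*priv), GFP_NOFS);
        if (!priv)
                return -ENOMEM;

        init_waitqueue_head(&priv->wait);
        atomic_set(&priv->pending, 1);          /* submitter's own reference */
        priv->status = 0;

        for (i = 0; i < nr_bios; i++) {
                atomic_inc(&priv->pending);     /* one per in-flight bio */
                /* ...submit a bio whose completion calls demo_endio()... */
        }

        /* Drop the self-reference; wait only if bios are still pending. */
        if (!atomic_dec_and_test(&priv->pending))
                wait_event(priv->wait, !atomic_read(&priv->pending));

        ret = priv->status;
        kfree(priv);
        return ret;
}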
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 226c91fe31a707..3e3722a7323936 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4514,12 +4514,17 @@ static int btrfs_ioctl_encoded_read(struct file *file, void __user *argp,
+ size_t copy_end_kernel = offsetofend(struct btrfs_ioctl_encoded_io_args,
+ flags);
+ size_t copy_end;
++ struct btrfs_inode *inode = BTRFS_I(file_inode(file));
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
++ struct extent_io_tree *io_tree = &inode->io_tree;
+ struct iovec iovstack[UIO_FASTIOV];
+ struct iovec *iov = iovstack;
+ struct iov_iter iter;
+ loff_t pos;
+ struct kiocb kiocb;
+ ssize_t ret;
++ u64 disk_bytenr, disk_io_size;
++ struct extent_state *cached_state = NULL;
+
+ if (!capable(CAP_SYS_ADMIN)) {
+ ret = -EPERM;
+@@ -4572,7 +4577,32 @@ static int btrfs_ioctl_encoded_read(struct file *file, void __user *argp,
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = pos;
+
+- ret = btrfs_encoded_read(&kiocb, &iter, &args);
++ ret = btrfs_encoded_read(&kiocb, &iter, &args, &cached_state,
++ &disk_bytenr, &disk_io_size);
++
++ if (ret == -EIOCBQUEUED) {
++ bool unlocked = false;
++ u64 start, lockend, count;
++
++ start = ALIGN_DOWN(kiocb.ki_pos, fs_info->sectorsize);
++ lockend = start + BTRFS_MAX_UNCOMPRESSED - 1;
++
++ if (args.compression)
++ count = disk_io_size;
++ else
++ count = args.len;
++
++ ret = btrfs_encoded_read_regular(&kiocb, &iter, start, lockend,
++ &cached_state, disk_bytenr,
++ disk_io_size, count,
++ args.compression, &unlocked);
++
++ if (!unlocked) {
++ unlock_extent(io_tree, start, lockend, &cached_state);
++ btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
++ }
++ }
++
+ if (ret >= 0) {
+ fsnotify_access(file);
+ if (copy_to_user(argp + copy_end,
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index 9522a8b79d22b5..2928abf7eb8271 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -857,6 +857,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
+ "dropping a ref for a root that doesn't have a ref on the block");
+ dump_block_entry(fs_info, be);
+ dump_ref_action(fs_info, ra);
++ rb_erase(&ref->node, &be->refs);
+ kfree(ref);
+ kfree(ra);
+ goto out_unlock;
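The ref-verify fix adds rb_erase() before kfree() on an error path that previously freed a ref while it was still linked into the block entry's rbtree, leaving a dangling node for the next tree walk. The general rule, sketched with hypothetical types:

#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_ref {
        struct rb_node node;
        u64 key;
};

static void demo_drop_ref(struct rb_root *root, struct demo_ref *ref)
{
        /*
         * Unlink before freeing: kfree() alone would leave the tree
         * pointing at freed memory, and the next insert or lookup that
         * walks it becomes a use-after-free -- the bug fixed above.
         */
        rb_erase(&ref->node, root);
        kfree(ref);
}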
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index b068469871f8e5..0cb11dcd10cd4b 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5677,7 +5677,7 @@ static int send_encoded_extent(struct send_ctx *sctx, struct btrfs_path *path,
+ * Note that send_buf is a mapping of send_buf_pages, so this is really
+ * reading into send_buf.
+ */
+- ret = btrfs_encoded_read_regular_fill_pages(BTRFS_I(inode), offset,
++ ret = btrfs_encoded_read_regular_fill_pages(BTRFS_I(inode),
+ disk_bytenr, disk_num_bytes,
+ sctx->send_buf_pages +
+ (data_offset >> PAGE_SHIFT));
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index c4a5fd94bbbb3b..cf92b75745e2a5 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -5609,9 +5609,9 @@ void send_flush_mdlog(struct ceph_mds_session *s)
+
+ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+ struct ceph_mds_cap_auth *auth,
++ const struct cred *cred,
+ char *tpath)
+ {
+- const struct cred *cred = get_current_cred();
+ u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid);
+ u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid);
+ struct ceph_client *cl = mdsc->fsc->client;
+@@ -5734,8 +5734,9 @@ int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath, int mask)
+ for (i = 0; i < mdsc->s_cap_auths_num; i++) {
+ struct ceph_mds_cap_auth *s = &mdsc->s_cap_auths[i];
+
+- err = ceph_mds_auth_match(mdsc, s, tpath);
++ err = ceph_mds_auth_match(mdsc, s, cred, tpath);
+ if (err < 0) {
++ put_cred(cred);
+ return err;
+ } else if (err > 0) {
+ /* always follow the last auth caps' permision */
+@@ -5751,6 +5752,8 @@ int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath, int mask)
+ }
+ }
+
++ put_cred(cred);
++
+ doutc(cl, "root_squash_perms %d, rw_perms_s %p\n", root_squash_perms,
+ rw_perms_s);
+ if (root_squash_perms && rw_perms_s == NULL) {
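The ceph change hoists the credential reference out of ceph_mds_auth_match() into its single caller: the helper now borrows the caller's reference, and one put_cred() per exit path balances the single get_current_cred(), where previously the early-error return leaked a reference taken inside the loop. The shape of the pattern (hypothetical demo_* functions):

#include <linux/cred.h>

static int demo_match(const struct cred *cred)
{
        /* Borrow the caller's reference; take none, put none. */
        return uid_eq(cred->fsuid, GLOBAL_ROOT_UID) ? 1 : 0;
}

static int demo_check_access(void)
{
        const struct cred *cred = get_current_cred();   /* +1 ref */
        int err = demo_match(cred);

        /* A single release point covers every way out of the function. */
        put_cred(cred);                                 /* -1 ref */
        return err;
}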
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 73f321b52895ea..86480e5a215e51 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -285,7 +285,9 @@ static int ceph_parse_new_source(const char *dev_name, const char *dev_name_end,
+ size_t len;
+ struct ceph_fsid fsid;
+ struct ceph_parse_opts_ctx *pctx = fc->fs_private;
++ struct ceph_options *opts = pctx->copts;
+ struct ceph_mount_options *fsopt = pctx->opts;
++ const char *name_start = dev_name;
+ char *fsid_start, *fs_name_start;
+
+ if (*dev_name_end != '=') {
+@@ -296,8 +298,14 @@ static int ceph_parse_new_source(const char *dev_name, const char *dev_name_end,
+ fsid_start = strchr(dev_name, '@');
+ if (!fsid_start)
+ return invalfc(fc, "missing cluster fsid");
+- ++fsid_start; /* start of cluster fsid */
++ len = fsid_start - name_start;
++ kfree(opts->name);
++ opts->name = kstrndup(name_start, len, GFP_KERNEL);
++ if (!opts->name)
++ return -ENOMEM;
++ dout("using %s entity name", opts->name);
+
++ ++fsid_start; /* start of cluster fsid */
+ fs_name_start = strchr(fsid_start, '.');
+ if (!fs_name_start)
+ return invalfc(fc, "missing file system name");
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index edf205093f4358..b9ffb2ee9548ae 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1290,16 +1290,18 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ wait_list, issued);
+ return 0;
+ }
+-
+- /*
+- * Issue discard for conventional zones only if the device
+- * supports discard.
+- */
+- if (!bdev_max_discard_sectors(bdev))
+- return -EOPNOTSUPP;
+ }
+ #endif
+
++ /*
++ * Stop issuing discard in any of the below cases:
++ * 1. the device is a conventional zone, but it doesn't support discard.
++ * 2. the device is a regular device which, after a snapshot, doesn't
++ * support discard.
++ */
++ if (!bdev_max_discard_sectors(bdev))
++ return -EOPNOTSUPP;
++
+ trace_f2fs_issue_discard(bdev, dc->di.start, dc->di.len);
+
+ lstart = dc->di.lstart;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 983fdd98fc3755..a622056f27f3a2 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1748,6 +1748,18 @@ static int f2fs_freeze(struct super_block *sb)
+
+ static int f2fs_unfreeze(struct super_block *sb)
+ {
++ struct f2fs_sb_info *sbi = F2FS_SB(sb);
++
++ /*
++ * Creating a snapshot on a mounted lvm device sets its
++ * discard_max_bytes to zero, so let's drop all remaining
++ * discards here.
++ * We don't need to disable real-time discard because
++ * discard_max_bytes will recover once the snapshot is removed.
++ */
++ if (test_opt(sbi, DISCARD) && !f2fs_hw_support_discard(sbi))
++ f2fs_issue_discard_timeout(sbi);
++
+ clear_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+ return 0;
+ }
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 984f8e6379dd47..6d0455973d641e 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -1425,9 +1425,12 @@ static int e_show(struct seq_file *m, void *p)
+ return 0;
+ }
+
+- exp_get(exp);
++ if (!cache_get_rcu(&exp->h))
++ return 0;
++
+ if (cache_check(cd, &exp->h, NULL))
+ return 0;
++
+ exp_put(exp);
+ return svc_export_show(m, cd, cp);
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d3cfc647153993..57f8818aa47c5f 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1660,6 +1660,14 @@ static void release_open_stateid(struct nfs4_ol_stateid *stp)
+ free_ol_stateid_reaplist(&reaplist);
+ }
+
++static bool nfs4_openowner_unhashed(struct nfs4_openowner *oo)
++{
++ lockdep_assert_held(&oo->oo_owner.so_client->cl_lock);
++
++ return list_empty(&oo->oo_owner.so_strhash) &&
++ list_empty(&oo->oo_perclient);
++}
++
+ static void unhash_openowner_locked(struct nfs4_openowner *oo)
+ {
+ struct nfs4_client *clp = oo->oo_owner.so_client;
+@@ -4975,6 +4983,12 @@ init_open_stateid(struct nfs4_file *fp, struct nfsd4_open *open)
+ spin_lock(&oo->oo_owner.so_client->cl_lock);
+ spin_lock(&fp->fi_lock);
+
++ if (nfs4_openowner_unhashed(oo)) {
++ mutex_unlock(&stp->st_mutex);
++ stp = NULL;
++ goto out_unlock;
++ }
++
+ retstp = nfsd4_find_existing_open(fp, open);
+ if (retstp)
+ goto out_unlock;
+@@ -6126,6 +6140,11 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+
+ if (!stp) {
+ stp = init_open_stateid(fp, open);
++ if (!stp) {
++ status = nfserr_jukebox;
++ goto out;
++ }
++
+ if (!open->op_stp)
+ new_stp = true;
+ }
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 35fd3e3e177807..baa54c718bd722 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -616,8 +616,13 @@ static int ovl_security_fileattr(const struct path *realpath, struct fileattr *f
+ struct file *file;
+ unsigned int cmd;
+ int err;
++ unsigned int flags;
++
++ flags = O_RDONLY;
++ if (force_o_largefile())
++ flags |= O_LARGEFILE;
+
+- file = dentry_open(realpath, O_RDONLY, current_cred());
++ file = dentry_open(realpath, flags, current_cred());
+ if (IS_ERR(file))
+ return PTR_ERR(file);
+
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index edc9216f6e27ad..8f080046c59d9a 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -197,6 +197,9 @@ void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
+
+ bool ovl_dentry_weird(struct dentry *dentry)
+ {
++ if (!d_can_lookup(dentry) && !d_is_file(dentry) && !d_is_symlink(dentry))
++ return true;
++
+ return dentry->d_flags & (DCACHE_NEED_AUTOMOUNT |
+ DCACHE_MANAGE_TRANSIT |
+ DCACHE_OP_HASH |
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 7a85735d584f35..e376f48c4b8bf4 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -600,6 +600,7 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ ret = -EFAULT;
+ goto out;
+ }
++ ret = 0;
+ /*
+ * We know the bounce buffer is safe to copy from, so
+ * use _copy_to_iter() directly.
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index b40410cd39af42..71c0ce31a4c4db 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -689,6 +689,8 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
+
+ WARN_ON_ONCE(!rwsem_is_locked(&sb->s_umount));
+
++ flush_delayed_work("a_release_work);
++
+ for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+ if (type != -1 && cnt != type)
+ continue;
+diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
+index d95409f3cba667..02ebcbc4882f5b 100644
+--- a/fs/xfs/libxfs/xfs_sb.c
++++ b/fs/xfs/libxfs/xfs_sb.c
+@@ -297,13 +297,6 @@ xfs_validate_sb_write(
+ * the kernel cannot support since we checked for unsupported bits in
+ * the read verifier, which means that memory is corrupt.
+ */
+- if (xfs_sb_has_compat_feature(sbp, XFS_SB_FEAT_COMPAT_UNKNOWN)) {
+- xfs_warn(mp,
+-"Corruption detected in superblock compatible features (0x%x)!",
+- (sbp->sb_features_compat & XFS_SB_FEAT_COMPAT_UNKNOWN));
+- return -EFSCORRUPTED;
+- }
+-
+ if (!xfs_is_readonly(mp) &&
+ xfs_sb_has_ro_compat_feature(sbp, XFS_SB_FEAT_RO_COMPAT_UNKNOWN)) {
+ xfs_alert(mp,
+diff --git a/include/drm/drm_panic.h b/include/drm/drm_panic.h
+index 54085d5d05c345..f4e1fa9ae607a8 100644
+--- a/include/drm/drm_panic.h
++++ b/include/drm/drm_panic.h
+@@ -64,6 +64,8 @@ struct drm_scanout_buffer {
+
+ };
+
++#ifdef CONFIG_DRM_PANIC
++
+ /**
+ * drm_panic_trylock - try to enter the panic printing critical section
+ * @dev: struct drm_device
+@@ -149,4 +151,16 @@ struct drm_scanout_buffer {
+ #define drm_panic_unlock(dev, flags) \
+ raw_spin_unlock_irqrestore(&(dev)->mode_config.panic_lock, flags)
+
++#else
++
++static inline bool drm_panic_trylock(struct drm_device *dev, unsigned long flags)
++{
++ return true;
++}
++
++static inline void drm_panic_lock(struct drm_device *dev, unsigned long flags) {}
++static inline void drm_panic_unlock(struct drm_device *dev, unsigned long flags) {}
++
++#endif
++
+ #endif /* __DRM_PANIC_H__ */
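The new #else branch follows the familiar kernel pattern of no-op static inline stubs, so shared callers compile unchanged when CONFIG_DRM_PANIC is disabled. A minimal illustrative caller (a sketch, not code from the tree) would be:

	unsigned long flags;

	if (drm_panic_trylock(dev, flags)) {
		/* Panic handler may draw to the scanout buffer here. */
		drm_panic_unlock(dev, flags);
	}

With the config off, the helpers collapse to nothing, so no #ifdef is needed at the call site.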
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index 00a3bf7c0d8f0e..6bbfc8aa42e8f4 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -29,6 +29,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
+ #define KASAN_VMALLOC_VM_ALLOC ((__force kasan_vmalloc_flags_t)0x02u)
+ #define KASAN_VMALLOC_PROT_NORMAL ((__force kasan_vmalloc_flags_t)0x04u)
+
++#define KASAN_VMALLOC_PAGE_RANGE 0x1 /* Apply to existing page range */
++#define KASAN_VMALLOC_TLB_FLUSH 0x2 /* TLB flush */
++
+ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+
+ #include <linux/pgtable.h>
+@@ -564,7 +567,8 @@ void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+ int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end);
++ unsigned long free_region_end,
++ unsigned long flags);
+
+ #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+@@ -579,7 +583,8 @@ static inline int kasan_populate_vmalloc(unsigned long start,
+ static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end) { }
++ unsigned long free_region_end,
++ unsigned long flags) { }
+
+ #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+@@ -614,7 +619,8 @@ static inline int kasan_populate_vmalloc(unsigned long start,
+ static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end) { }
++ unsigned long free_region_end,
++ unsigned long flags) { }
+
+ static inline void *kasan_unpoison_vmalloc(const void *start,
+ unsigned long size,
+diff --git a/include/linux/util_macros.h b/include/linux/util_macros.h
+index 6bb460c3e818b3..825487fb66faf9 100644
+--- a/include/linux/util_macros.h
++++ b/include/linux/util_macros.h
+@@ -4,19 +4,6 @@
+
+ #include <linux/math.h>
+
+-#define __find_closest(x, a, as, op) \
+-({ \
+- typeof(as) __fc_i, __fc_as = (as) - 1; \
+- typeof(x) __fc_x = (x); \
+- typeof(*a) const *__fc_a = (a); \
+- for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \
+- if (__fc_x op DIV_ROUND_CLOSEST(__fc_a[__fc_i] + \
+- __fc_a[__fc_i + 1], 2)) \
+- break; \
+- } \
+- (__fc_i); \
+-})
+-
+ /**
+ * find_closest - locate the closest element in a sorted array
+ * @x: The reference value.
+@@ -25,8 +12,27 @@
+ * @as: Size of 'a'.
+ *
+ * Returns the index of the element closest to 'x'.
++ * Note: If using an array of negative numbers (or a mix of positive and
++ * negative numbers), be sure that 'x' is of a signed type to get good results.
+ */
+-#define find_closest(x, a, as) __find_closest(x, a, as, <=)
++#define find_closest(x, a, as) \
++({ \
++ typeof(as) __fc_i, __fc_as = (as) - 1; \
++ long __fc_mid_x, __fc_x = (x); \
++ long __fc_left, __fc_right; \
++ typeof(*a) const *__fc_a = (a); \
++ for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \
++ __fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i + 1]) / 2; \
++ if (__fc_x <= __fc_mid_x) { \
++ __fc_left = __fc_x - __fc_a[__fc_i]; \
++ __fc_right = __fc_a[__fc_i + 1] - __fc_x; \
++ if (__fc_right < __fc_left) \
++ __fc_i++; \
++ break; \
++ } \
++ } \
++ (__fc_i); \
++})
+
+ /**
+ * find_closest_descending - locate the closest element in a sorted array
+@@ -36,9 +42,27 @@
+ * @as: Size of 'a'.
+ *
+ * Similar to find_closest() but 'a' is expected to be sorted in descending
+- * order.
++ * order. The iteration is done in reverse order, so that the comparison
++ * of '__fc_right' & '__fc_left' also works for unsigned numbers.
+ */
+-#define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
++#define find_closest_descending(x, a, as) \
++({ \
++ typeof(as) __fc_i, __fc_as = (as) - 1; \
++ long __fc_mid_x, __fc_x = (x); \
++ long __fc_left, __fc_right; \
++ typeof(*a) const *__fc_a = (a); \
++ for (__fc_i = __fc_as; __fc_i >= 1; __fc_i--) { \
++ __fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i - 1]) / 2; \
++ if (__fc_x <= __fc_mid_x) { \
++ __fc_left = __fc_x - __fc_a[__fc_i]; \
++ __fc_right = __fc_a[__fc_i - 1] - __fc_x; \
++ if (__fc_right < __fc_left) \
++ __fc_i--; \
++ break; \
++ } \
++ } \
++ (__fc_i); \
++})
+
+ /**
+ * is_insidevar - check if the @ptr points inside the @var memory range.
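Because the rewritten macros compare the left and right distances explicitly instead of rounding the midpoint, a caller's only obligation is the one stated in the new kerneldoc note: use a signed 'x' when the array contains negative values. A small illustrative use (hypothetical calibration table, not from the patch):

	#include <linux/kernel.h>      /* ARRAY_SIZE() */
	#include <linux/util_macros.h> /* find_closest() */

	static const int temp_points[] = { -40, -20, 0, 20, 40 };

	/* Index of the calibration point nearest to 'temp' (signed input). */
	static int nearest_temp_index(int temp)
	{
		return find_closest(temp, temp_points, ARRAY_SIZE(temp_points));
	}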
+diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
+index 6dc258993b1770..2acc7687e017a9 100644
+--- a/include/uapi/linux/if_link.h
++++ b/include/uapi/linux/if_link.h
+@@ -1292,6 +1292,19 @@ enum netkit_mode {
+ NETKIT_L3,
+ };
+
++/* NETKIT_SCRUB_NONE leaves clearing skb->{mark,priority} up to
++ * the BPF program if attached. This also means the latter can
++ * consume the two fields if they were populated earlier.
++ *
++ * NETKIT_SCRUB_DEFAULT zeroes skb->{mark,priority} fields before
++ * invoking the attached BPF program when the peer device resides
++ * in a different network namespace. This is the default behavior.
++ */
++enum netkit_scrub {
++ NETKIT_SCRUB_NONE,
++ NETKIT_SCRUB_DEFAULT,
++};
++
+ enum {
+ IFLA_NETKIT_UNSPEC,
+ IFLA_NETKIT_PEER_INFO,
+@@ -1299,6 +1312,8 @@ enum {
+ IFLA_NETKIT_POLICY,
+ IFLA_NETKIT_PEER_POLICY,
+ IFLA_NETKIT_MODE,
++ IFLA_NETKIT_SCRUB,
++ IFLA_NETKIT_PEER_SCRUB,
+ __IFLA_NETKIT_MAX,
+ };
+ #define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
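The comment block added above fully specifies the two scrub policies; schematically, the behavior the new attributes select looks like the following sketch (simplified pseudo-logic; same_netns() is a stand-in helper, not the driver's actual code):

	/* NETKIT_SCRUB_DEFAULT: zero metadata when crossing namespaces. */
	if (scrub == NETKIT_SCRUB_DEFAULT && !same_netns(dev, peer)) {
		skb->mark = 0;
		skb->priority = 0;
	}
	/* NETKIT_SCRUB_NONE: leave skb->mark/skb->priority for the BPF program. */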
+diff --git a/kernel/signal.c b/kernel/signal.c
+index cbabb2d05e0ac8..2ae45e6eb6bb8e 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1986,14 +1986,15 @@ int send_sigqueue(struct sigqueue *q, struct pid *pid, enum pid_type type)
+ * into t->pending).
+ *
+ * Where type is not PIDTYPE_PID, signals must be delivered to the
+- * process. In this case, prefer to deliver to current if it is in
+- * the same thread group as the target process, which avoids
+- * unnecessarily waking up a potentially idle task.
++ * process. In this case, prefer to deliver to current if it is in the
++ * same thread group as the target process and its sighand is stable,
++ * which avoids unnecessarily waking up a potentially idle task.
+ */
+ t = pid_task(pid, type);
+ if (!t)
+ goto ret;
+- if (type != PIDTYPE_PID && same_thread_group(t, current))
++ if (type != PIDTYPE_PID &&
++ same_thread_group(t, current) && !current->exit_state)
+ t = current;
+ if (!likely(lock_task_sighand(t, &flags)))
+ goto ret;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 4c28dd177ca650..3dd3b97d8049ae 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -883,6 +883,10 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
+ }
+
+ static struct fgraph_ops fprofiler_ops = {
++ .ops = {
++ .flags = FTRACE_OPS_FL_INITIALIZED,
++ INIT_OPS_HASH(fprofiler_ops.ops)
++ },
+ .entryfunc = &profile_graph_entry,
+ .retfunc = &profile_graph_return,
+ };
+@@ -5076,6 +5080,9 @@ ftrace_mod_callback(struct trace_array *tr, struct ftrace_hash *hash,
+ char *func;
+ int ret;
+
++ if (!tr)
++ return -ENODEV;
++
+ /* match_records() modifies func, and we need the original */
+ func = kstrdup(func_orig, GFP_KERNEL);
+ if (!func)
+diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
+index d548750a325ace..b25d214b93e161 100644
+--- a/lib/kunit/debugfs.c
++++ b/lib/kunit/debugfs.c
+@@ -212,8 +212,11 @@ void kunit_debugfs_create_suite(struct kunit_suite *suite)
+
+ err:
+ string_stream_destroy(suite->log);
+- kunit_suite_for_each_test_case(suite, test_case)
++ suite->log = NULL;
++ kunit_suite_for_each_test_case(suite, test_case) {
+ string_stream_destroy(test_case->log);
++ test_case->log = NULL;
++ }
+ }
+
+ void kunit_debugfs_destroy_suite(struct kunit_suite *suite)
+diff --git a/lib/kunit/kunit-test.c b/lib/kunit/kunit-test.c
+index 37e02be1e71015..d9c781c859fde1 100644
+--- a/lib/kunit/kunit-test.c
++++ b/lib/kunit/kunit-test.c
+@@ -805,6 +805,8 @@ static void kunit_device_driver_test(struct kunit *test)
+ struct device *test_device;
+ struct driver_test_state *test_state = kunit_kzalloc(test, sizeof(*test_state), GFP_KERNEL);
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_state);
++
+ test->priv = test_state;
+ test_driver = kunit_driver_create(test, "my_driver");
+
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 3619301dda2ebe..8d83e217271967 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -3439,9 +3439,20 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
+ return slot;
+ }
+
++/*
++ * mas_store_root() - Storing value into root.
++ * @mas: The maple state
++ * @entry: The entry to store.
++ *
++ * There is no root node now and we are storing a value into the root - this
++ * function either assigns the pointer or expands into a node.
++ */
+ static inline void mas_store_root(struct ma_state *mas, void *entry)
+ {
+- if (likely((mas->last != 0) || (mas->index != 0)))
++ if (!entry) {
++ if (!mas->index)
++ rcu_assign_pointer(mas->tree->ma_root, NULL);
++ } else if (likely((mas->last != 0) || (mas->index != 0)))
+ mas_root_expand(mas, entry);
+ else if (((unsigned long) (entry) & 3) == 2)
+ mas_root_expand(mas, entry);
+diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
+index a339d117150fba..a149e354bb2689 100644
+--- a/mm/damon/tests/vaddr-kunit.h
++++ b/mm/damon/tests/vaddr-kunit.h
+@@ -300,6 +300,7 @@ static void damon_test_split_evenly(struct kunit *test)
+ damon_test_split_evenly_fail(test, 0, 100, 0);
+ damon_test_split_evenly_succ(test, 0, 100, 10);
+ damon_test_split_evenly_succ(test, 5, 59, 5);
++ damon_test_split_evenly_succ(test, 0, 3, 2);
+ damon_test_split_evenly_fail(test, 5, 6, 2);
+ }
+
+diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
+index 08cfd22b524925..dba3b2f4d75813 100644
+--- a/mm/damon/vaddr.c
++++ b/mm/damon/vaddr.c
+@@ -67,6 +67,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
+ unsigned long sz_orig, sz_piece, orig_end;
+ struct damon_region *n = NULL, *next;
+ unsigned long start;
++ unsigned int i;
+
+ if (!r || !nr_pieces)
+ return -EINVAL;
+@@ -80,8 +81,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
+
+ r->ar.end = r->ar.start + sz_piece;
+ next = damon_next_region(r);
+- for (start = r->ar.end; start + sz_piece <= orig_end;
+- start += sz_piece) {
++ for (start = r->ar.end, i = 1; i < nr_pieces; start += sz_piece, i++) {
+ n = damon_new_region(start, start + sz_piece);
+ if (!n)
+ return -ENOMEM;
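The new test case pins down the bug being fixed: splitting [0, 3) into nr_pieces = 2 gives sz_piece = 1, and the old size-driven loop condition ('start + sz_piece <= orig_end') manufactured three one-unit regions, whereas the count-driven loop creates exactly two, with the final region later stretched to absorb the remainder and cover [1, 3).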
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index d6210ca48ddab9..88d1c9dcb50721 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -489,7 +489,8 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
+ */
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end)
++ unsigned long free_region_end,
++ unsigned long flags)
+ {
+ void *shadow_start, *shadow_end;
+ unsigned long region_start, region_end;
+@@ -522,12 +523,17 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ __memset(shadow_start, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+ return;
+ }
+- apply_to_existing_page_range(&init_mm,
++
++
++ if (flags & KASAN_VMALLOC_PAGE_RANGE)
++ apply_to_existing_page_range(&init_mm,
+ (unsigned long)shadow_start,
+ size, kasan_depopulate_vmalloc_pte,
+ NULL);
+- flush_tlb_kernel_range((unsigned long)shadow_start,
+- (unsigned long)shadow_end);
++
++ if (flags & KASAN_VMALLOC_TLB_FLUSH)
++ flush_tlb_kernel_range((unsigned long)shadow_start,
++ (unsigned long)shadow_end);
+ }
+ }
+
+diff --git a/mm/slab.h b/mm/slab.h
+index 6c6fe6d630ce3d..92ca5ff2037534 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -73,6 +73,11 @@ struct slab {
+ struct {
+ unsigned inuse:16;
+ unsigned objects:15;
++ /*
++ * If slab debugging is enabled then the
++ * frozen bit can be reused to indicate
++ * that the slab was corrupted
++ */
+ unsigned frozen:1;
+ };
+ };
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 893d3205991518..477fa471da1859 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -230,7 +230,7 @@ static struct kmem_cache *create_cache(const char *name,
+ if (args->use_freeptr_offset &&
+ (args->freeptr_offset >= object_size ||
+ !(flags & SLAB_TYPESAFE_BY_RCU) ||
+- !IS_ALIGNED(args->freeptr_offset, sizeof(freeptr_t))))
++ !IS_ALIGNED(args->freeptr_offset, __alignof__(freeptr_t))))
+ goto out;
+
+ err = -ENOMEM;
+diff --git a/mm/slub.c b/mm/slub.c
+index 5b832512044e3e..15ba89fef89a1f 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
+ slab->inuse, slab->objects);
+ return 0;
+ }
++ if (slab->frozen) {
++ slab_err(s, slab, "Slab disabled since SLUB metadata consistency check failed");
++ return 0;
++ }
++
+ /* Slab_pad_check fixes things up after itself */
+ slab_pad_check(s, slab);
+ return 1;
+@@ -1603,6 +1608,7 @@ static noinline bool alloc_debug_processing(struct kmem_cache *s,
+ slab_fix(s, "Marking all objects used");
+ slab->inuse = slab->objects;
+ slab->freelist = NULL;
++ slab->frozen = 1; /* mark consistency-failed slab as frozen */
+ }
+ return false;
+ }
+@@ -2744,7 +2750,8 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
+ slab->inuse++;
+
+ if (!alloc_debug_processing(s, slab, object, orig_size)) {
+- remove_partial(n, slab);
++ if (folio_test_slab(slab_folio(slab)))
++ remove_partial(n, slab);
+ return NULL;
+ }
+
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 634162271c0045..5480b77f4167d7 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2182,6 +2182,25 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
+ reclaim_list_global(&decay_list);
+ }
+
++static void
++kasan_release_vmalloc_node(struct vmap_node *vn)
++{
++ struct vmap_area *va;
++ unsigned long start, end;
++
++ start = list_first_entry(&vn->purge_list, struct vmap_area, list)->va_start;
++ end = list_last_entry(&vn->purge_list, struct vmap_area, list)->va_end;
++
++ list_for_each_entry(va, &vn->purge_list, list) {
++ if (is_vmalloc_or_module_addr((void *) va->va_start))
++ kasan_release_vmalloc(va->va_start, va->va_end,
++ va->va_start, va->va_end,
++ KASAN_VMALLOC_PAGE_RANGE);
++ }
++
++ kasan_release_vmalloc(start, end, start, end, KASAN_VMALLOC_TLB_FLUSH);
++}
++
+ static void purge_vmap_node(struct work_struct *work)
+ {
+ struct vmap_node *vn = container_of(work,
+@@ -2190,20 +2209,17 @@ static void purge_vmap_node(struct work_struct *work)
+ struct vmap_area *va, *n_va;
+ LIST_HEAD(local_list);
+
++ if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
++ kasan_release_vmalloc_node(vn);
++
+ vn->nr_purged = 0;
+
+ list_for_each_entry_safe(va, n_va, &vn->purge_list, list) {
+ unsigned long nr = va_size(va) >> PAGE_SHIFT;
+- unsigned long orig_start = va->va_start;
+- unsigned long orig_end = va->va_end;
+ unsigned int vn_id = decode_vn_id(va->flags);
+
+ list_del_init(&va->list);
+
+- if (is_vmalloc_or_module_addr((void *)orig_start))
+- kasan_release_vmalloc(orig_start, orig_end,
+- va->va_start, va->va_end);
+-
+ nr_purged_pages += nr;
+ vn->nr_purged++;
+
+@@ -4784,7 +4800,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+ &free_vmap_area_list);
+ if (va)
+ kasan_release_vmalloc(orig_start, orig_end,
+- va->va_start, va->va_end);
++ va->va_start, va->va_end,
++ KASAN_VMALLOC_PAGE_RANGE | KASAN_VMALLOC_TLB_FLUSH);
+ vas[area] = NULL;
+ }
+
+@@ -4834,7 +4851,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+ &free_vmap_area_list);
+ if (va)
+ kasan_release_vmalloc(orig_start, orig_end,
+- va->va_start, va->va_end);
++ va->va_start, va->va_end,
++ KASAN_VMALLOC_PAGE_RANGE | KASAN_VMALLOC_TLB_FLUSH);
+ vas[area] = NULL;
+ kfree(vms[area]);
+ }
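The vmalloc reshuffle above is a batching optimization: kasan_release_vmalloc_node() first depopulates the shadow of each purged area with KASAN_VMALLOC_PAGE_RANGE alone, then issues one KASAN_VMALLOC_TLB_FLUSH call spanning the whole purge list, turning one kernel TLB flush per vmap_area into one per purge pass. The pcpu_get_vm_areas() error paths keep both flags because they release a single region at a time.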
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index ac6a5aa34eabba..3f41344239126b 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1780,6 +1780,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
+ zone_page_state(zone, i));
+
+ #ifdef CONFIG_NUMA
++ fold_vm_zone_numa_events(zone);
+ for (i = 0; i < NR_VM_NUMA_EVENT_ITEMS; i++)
+ seq_printf(m, "\n %-12s %lu", numa_stat_name(i),
+ zone_numa_event_state(zone, i));
+diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
+index 873e9fb2041f02..a9263bd948c41d 100644
+--- a/tools/perf/pmu-events/empty-pmu-events.c
++++ b/tools/perf/pmu-events/empty-pmu-events.c
+@@ -539,17 +539,7 @@ const struct pmu_metrics_table *perf_pmu__find_metrics_table(struct perf_pmu *pm
+ if (!map)
+ return NULL;
+
+- if (!pmu)
+- return &map->metric_table;
+-
+- for (size_t i = 0; i < map->metric_table.num_pmus; i++) {
+- const struct pmu_table_entry *table_pmu = &map->metric_table.pmus[i];
+- const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+-
+- if (pmu__name_match(pmu, pmu_name))
+- return &map->metric_table;
+- }
+- return NULL;
++ return &map->metric_table;
+ }
+
+ const struct pmu_events_table *find_core_events_table(const char *arch, const char *cpuid)
+diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
+index d46a22fb5573de..4145e027775316 100755
+--- a/tools/perf/pmu-events/jevents.py
++++ b/tools/perf/pmu-events/jevents.py
+@@ -1089,17 +1089,7 @@ const struct pmu_metrics_table *perf_pmu__find_metrics_table(struct perf_pmu *pm
+ if (!map)
+ return NULL;
+
+- if (!pmu)
+- return &map->metric_table;
+-
+- for (size_t i = 0; i < map->metric_table.num_pmus; i++) {
+- const struct pmu_table_entry *table_pmu = &map->metric_table.pmus[i];
+- const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+-
+- if (pmu__name_match(pmu, pmu_name))
+- return &map->metric_table;
+- }
+- return NULL;
++ return &map->metric_table;
+ }
+
+ const struct pmu_events_table *find_core_events_table(const char *arch, const char *cpuid)
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-09 23:13 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-09 23:13 UTC (permalink / raw
To: gentoo-commits
commit: 42337dcbb74c47c507f2628074a83f937cd1cf1a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 9 23:12:52 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 9 23:12:52 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=42337dcb
drm/display: Fix building with GCC 15
Bug: https://bugs.gentoo.org/946130
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
2700_drm-display-GCC15.patch | 52 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/0000_README b/0000_README
index 87f43cf7..b2e6beb3 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2700_drm-display-GCC15.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc: drm/display: Fix building with GCC 15
+
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2700_drm-display-GCC15.patch b/2700_drm-display-GCC15.patch
new file mode 100644
index 00000000..0be775ea
--- /dev/null
+++ b/2700_drm-display-GCC15.patch
@@ -0,0 +1,52 @@
+From a500f3751d3c861be7e4463c933cf467240cca5d Mon Sep 17 00:00:00 2001
+From: Brahmajit Das <brahmajit.xyz@gmail.com>
+Date: Wed, 2 Oct 2024 14:53:11 +0530
+Subject: drm/display: Fix building with GCC 15
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+GCC 15 enables -Werror=unterminated-string-initialization by default.
+This results in the following build error
+
+drivers/gpu/drm/display/drm_dp_dual_mode_helper.c: In function ‘is_hdmi_adaptor’:
+drivers/gpu/drm/display/drm_dp_dual_mode_helper.c:164:17: error: initializer-string for array of
+ ‘char’ is too long [-Werror=unterminated-string-initialization]
+ 164 | "DP-HDMI ADAPTOR\x04";
+ | ^~~~~~~~~~~~~~~~~~~~~
+
+After discussion with Ville, the fix was to increase the size of
+dp_dual_mode_hdmi_id array by one, so that it can accommodate the
+terminating NUL character. This should let us build the kernel with GCC 15.
+
+Signed-off-by: Brahmajit Das <brahmajit.xyz@gmail.com>
+Reviewed-by: Jani Nikula <jani.nikula@intel.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20241002092311.942822-1-brahmajit.xyz@gmail.com
+Signed-off-by: Jani Nikula <jani.nikula@intel.com>
+---
+ drivers/gpu/drm/display/drm_dp_dual_mode_helper.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+(limited to 'drivers/gpu/drm/display/drm_dp_dual_mode_helper.c')
+
+diff --git a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+index 14a2a8473682b0..c491e3203bf11c 100644
+--- a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+@@ -160,11 +160,11 @@ EXPORT_SYMBOL(drm_dp_dual_mode_write);
+
+ static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
+ {
+- static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
++ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN + 1] =
+ "DP-HDMI ADAPTOR\x04";
+
+ return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
+- sizeof(dp_dual_mode_hdmi_id)) == 0;
++ DP_DUAL_MODE_HDMI_ID_LEN) == 0;
+ }
+
+ static bool is_type1_adaptor(uint8_t adaptor_id)
+--
+cgit 1.2.3-korg
+
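As a standalone illustration of the warning class the patch works around (plain C, not kernel code): under GCC 15's new default, a string literal that exactly fills its array is rejected because there is no room for the terminating NUL, while reserving an extra byte or using an explicit character array is accepted:

	/* gcc-15 -Werror=unterminated-string-initialization */
	static const char id_bad[4] = "abcd";  /* error: no room for '\0' */
	static const char id_ok[5] = "abcd";   /* ok: byte reserved for '\0' */
	static const char id_raw[4] = { 'a', 'b', 'c', 'd' }; /* ok: not a C string */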
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-11 21:01 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-11 21:01 UTC (permalink / raw
To: gentoo-commits
commit: 3cf228ef3b389e949f1242512c85121af823b397
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 11 21:01:01 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 11 21:01:01 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3cf228ef
Add x86/pkeys fixes
Bug: https://bugs.gentoo.org/946182
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 ++
...-change-caller-of-update_pkru_in_sigframe.patch | 107 +++++++++++++++++++++
...eys-ensure-updated-pkru-value-is-xrstor-d.patch | 96 ++++++++++++++++++
3 files changed, 211 insertions(+)
diff --git a/0000_README b/0000_README
index b2e6beb3..81375872 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,14 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+Patch: 1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
+From: https://git.kernel.org/
+Desc: x86/pkeys: Change caller of update_pkru_in_sigframe()
+
+Patch: 1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
+From: https://git.kernel.org/
+Desc: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch b/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
new file mode 100644
index 00000000..3a1fbd82
--- /dev/null
+++ b/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
@@ -0,0 +1,107 @@
+From 5683d0ce8fb46f36315a2b508f90ec6221cda018 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 17:45:19 +0000
+Subject: x86/pkeys: Change caller of update_pkru_in_sigframe()
+
+From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+
+[ Upstream commit 6a1853bdf17874392476b552398df261f75503e0 ]
+
+update_pkru_in_sigframe() will shortly need some information which
+is only available inside xsave_to_user_sigframe(). Move
+update_pkru_in_sigframe() inside the other function to make it
+easier to provide it that information.
+
+No functional changes.
+
+Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Link: https://lore.kernel.org/all/20241119174520.3987538-2-aruna.ramakrishna%40oracle.com
+Stable-dep-of: ae6012d72fa6 ("x86/pkeys: Ensure updated PKRU value is XRSTOR'd")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/fpu/signal.c | 20 ++------------------
+ arch/x86/kernel/fpu/xstate.h | 15 ++++++++++++++-
+ 2 files changed, 16 insertions(+), 19 deletions(-)
+
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 1065ab995305c..8f62e0666dea5 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -63,16 +63,6 @@ static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
+ return true;
+ }
+
+-/*
+- * Update the value of PKRU register that was already pushed onto the signal frame.
+- */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
+-{
+- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+- return 0;
+- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
+-}
+-
+ /*
+ * Signal frame handlers.
+ */
+@@ -168,14 +158,8 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+
+ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+- int err = 0;
+-
+- if (use_xsave()) {
+- err = xsave_to_user_sigframe(buf);
+- if (!err)
+- err = update_pkru_in_sigframe(buf, pkru);
+- return err;
+- }
++ if (use_xsave())
++ return xsave_to_user_sigframe(buf, pkru);
+
+ if (use_fxsr())
+ return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 0b86a5002c846..6b2924fbe5b8d 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -69,6 +69,16 @@ static inline u64 xfeatures_mask_independent(void)
+ return fpu_kernel_cfg.independent_features;
+ }
+
++/*
++ * Update the value of PKRU register that was already pushed onto the signal frame.
++ */
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
++{
++ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
++ return 0;
++ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
++}
++
+ /* XSAVE/XRSTOR wrapper functions */
+
+ #ifdef CONFIG_X86_64
+@@ -256,7 +266,7 @@ static inline u64 xfeatures_need_sigframe_write(void)
+ * The caller has to zero buf::header before calling this because XSAVE*
+ * does not touch the reserved fields in the header.
+ */
+-static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
++static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+ /*
+ * Include the features which are not xsaved/rstored by the kernel
+@@ -281,6 +291,9 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
+ XSTATE_OP(XSAVE, buf, lmask, hmask, err);
+ clac();
+
++ if (!err)
++ err = update_pkru_in_sigframe(buf, pkru);
++
+ return err;
+ }
+
+--
+2.43.0
+
diff --git a/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch b/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
new file mode 100644
index 00000000..11b1f768
--- /dev/null
+++ b/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
@@ -0,0 +1,96 @@
+From 24fedf2768fd57e0d767137044c4f7493357b325 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 17:45:20 +0000
+Subject: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
+
+From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+
+[ Upstream commit ae6012d72fa60c9ff92de5bac7a8021a47458e5b ]
+
+When XSTATE_BV[i] is 0, and XRSTOR attempts to restore state component
+'i' it ignores any value in the XSAVE buffer and instead restores the
+state component's init value.
+
+This means that if XSAVE writes XSTATE_BV[PKRU]=0 then XRSTOR will
+ignore the value that update_pkru_in_sigframe() writes to the XSAVE buffer.
+
+XSTATE_BV[PKRU] only gets written as 0 if PKRU is in its init state. On
+Intel CPUs, this basically never happens because the kernel usually
+overwrites the init value (aside: this is why we didn't notice this bug
+until now). But on AMD, the init tracker is more aggressive and will
+track PKRU as being in its init state upon any wrpkru(0x0).
+Unfortunately, sig_prepare_pkru() does just that: wrpkru(0x0).
+
+This writes XSTATE_BV[PKRU]=0 which makes XRSTOR ignore the PKRU value
+in the sigframe.
+
+To fix this, always overwrite the sigframe XSTATE_BV with a value that
+has XSTATE_BV[PKRU]==1. This ensures that XRSTOR will not ignore what
+update_pkru_in_sigframe() wrote.
+
+The problematic sequence of events is something like this:
+
+Userspace does:
+ * wrpkru(0xffff0000) (or whatever)
+ * Hardware sets: XINUSE[PKRU]=1
+Signal happens, kernel is entered:
+ * sig_prepare_pkru() => wrpkru(0x00000000)
+ * Hardware sets: XINUSE[PKRU]=0 (aggressive AMD init tracker)
+ * XSAVE writes most of XSAVE buffer, including
+ XSTATE_BV[PKRU]=XINUSE[PKRU]=0
+ * update_pkru_in_sigframe() overwrites PKRU in XSAVE buffer
+... signal handling
+ * XRSTOR sees XSTATE_BV[PKRU]==0, ignores just-written value
+ from update_pkru_in_sigframe()
+
+Fixes: 70044df250d0 ("x86/pkeys: Update PKRU to enable all pkeys before XSAVE")
+Suggested-by: Rudi Horn <rudi.horn@oracle.com>
+Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
+Link: https://lore.kernel.org/all/20241119174520.3987538-3-aruna.ramakrishna%40oracle.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/fpu/xstate.h | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 6b2924fbe5b8d..aa16f1a1bbcf1 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -72,10 +72,22 @@ static inline u64 xfeatures_mask_independent(void)
+ /*
+ * Update the value of PKRU register that was already pushed onto the signal frame.
+ */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
+ {
++ u64 xstate_bv;
++ int err;
++
+ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+ return 0;
++
++ /* Mark PKRU as in-use so that it is restored correctly. */
++ xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
++
++ err = __put_user(xstate_bv, &buf->header.xfeatures);
++ if (err)
++ return err;
++
++ /* Update PKRU value in the userspace xsave buffer. */
+ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
+ }
+
+@@ -292,7 +304,7 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkr
+ clac();
+
+ if (!err)
+- err = update_pkru_in_sigframe(buf, pkru);
++ err = update_pkru_in_sigframe(buf, mask, pkru);
+
+ return err;
+ }
+--
+2.43.0
+
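The mechanics of the fix reduce to forcing the PKRU bit on in the sigframe's feature bitmap so that XRSTOR cannot fall back to the init value. Condensed from the hunk above (schematic only; 'pkru_slot' stands in for the get_xsave_addr_user() pointer and this is not compilable on its own):

	xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
	err = __put_user(xstate_bv, &buf->header.xfeatures); /* mark PKRU in-use */
	if (!err)
		err = __put_user(pkru, pkru_slot); /* value XRSTOR will now honor */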
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-14 23:47 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-14 23:47 UTC (permalink / raw
To: gentoo-commits
commit: 4c68b8a5598beeb003b0f59ae33deba3b220de9a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 14 23:47:32 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 14 23:47:32 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4c68b8a5
Linux patch 6.12.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-6.12.5.patch | 33366 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 33370 insertions(+)
diff --git a/0000_README b/0000_README
index 81375872..6429d035 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-6.12.4.patch
From: https://www.kernel.org
Desc: Linux 6.12.4
+Patch: 1004_linux-6.12.5.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.5
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1004_linux-6.12.5.patch b/1004_linux-6.12.5.patch
new file mode 100644
index 00000000..6347cd6c
--- /dev/null
+++ b/1004_linux-6.12.5.patch
@@ -0,0 +1,33366 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
+index 7f63c7e9777358..5da6a14dc326bd 100644
+--- a/Documentation/ABI/testing/sysfs-bus-pci
++++ b/Documentation/ABI/testing/sysfs-bus-pci
+@@ -163,6 +163,17 @@ Description:
+ will be present in sysfs. Writing 1 to this file
+ will perform reset.
+
++What: /sys/bus/pci/devices/.../reset_subordinate
++Date: October 2024
++Contact: linux-pci@vger.kernel.org
++Description:
++ This is visible only for bridge devices. If you want to reset
++ all devices attached through the subordinate bus of a specific
++ bridge device, writing 1 to this will try to do it. This will
++ affect all devices attached to the system through this bridge
++ similiar to writing 1 to their individual "reset" file, so use
++	similar to writing 1 to their individual "reset" file, so use
++
+ What: /sys/bus/pci/devices/.../vpd
+ Date: February 2008
+ Contact: Ben Hutchings <bwh@kernel.org>
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index 513296bb6f297f..3e1630c70d8ae7 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -822,3 +822,9 @@ Description: It controls the valid block ratio threshold not to trigger excessiv
+ for zoned deivces. The initial value of it is 95(%). F2FS will stop the
+ background GC thread from intiating GC for sections having valid blocks
+ exceeding the ratio.
++
++What: /sys/fs/f2fs/<disk>/max_read_extent_count
++Date: November 2024
++Contact: "Chao Yu" <chao@kernel.org>
++Description: It controls max read extent count for per-inode, the value of threshold
++ is 10240 by default.
+diff --git a/Documentation/accel/qaic/aic080.rst b/Documentation/accel/qaic/aic080.rst
+new file mode 100644
+index 00000000000000..d563771ea6ce48
+--- /dev/null
++++ b/Documentation/accel/qaic/aic080.rst
+@@ -0,0 +1,14 @@
++.. SPDX-License-Identifier: GPL-2.0-only
++
++===============================
++ Qualcomm Cloud AI 80 (AIC080)
++===============================
++
++Overview
++========
++
++The Qualcomm Cloud AI 80/AIC080 family of products are a derivative of AIC100.
++The number of NSPs and clock rates are reduced to fit within resource
++constrained solutions. The PCIe Product ID is 0xa080.
++
++As a derivative product, all AIC100 documentation applies.
+diff --git a/Documentation/accel/qaic/index.rst b/Documentation/accel/qaic/index.rst
+index ad19b88d1a669e..967b9dd8baceac 100644
+--- a/Documentation/accel/qaic/index.rst
++++ b/Documentation/accel/qaic/index.rst
+@@ -10,4 +10,5 @@ accelerator cards.
+ .. toctree::
+
+ qaic
++ aic080
+ aic100
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 65bfab1b186146..77db10e944f039 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -258,6 +258,8 @@ stable kernels.
+ | Hisilicon | Hip{08,09,10,10C| #162001900 | N/A |
+ | | ,11} SMMU PMCG | | |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Hisilicon | Hip09 | #162100801 | HISILICON_ERRATUM_162100801 |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/i2c/busses/i2c-i801.rst b/Documentation/i2c/busses/i2c-i801.rst
+index c840b597912c87..47e8ac5b7099f7 100644
+--- a/Documentation/i2c/busses/i2c-i801.rst
++++ b/Documentation/i2c/busses/i2c-i801.rst
+@@ -49,6 +49,7 @@ Supported adapters:
+ * Intel Meteor Lake (SOC and PCH)
+ * Intel Birch Stream (SOC)
+ * Intel Arrow Lake (SOC)
++ * Intel Panther Lake (SOC)
+
+ Datasheets: Publicly available at the Intel website
+
+diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
+index 6a050d755b9cb4..f6c5d8214c7e98 100644
+--- a/Documentation/netlink/specs/ethtool.yaml
++++ b/Documentation/netlink/specs/ethtool.yaml
+@@ -96,7 +96,12 @@ attribute-sets:
+ name: bits
+ type: nest
+ nested-attributes: bitset-bits
+-
++ -
++ name: value
++ type: binary
++ -
++ name: mask
++ type: binary
+ -
+ name: string
+ attributes:
+diff --git a/Makefile b/Makefile
+index 87dc2f81086021..f158bfe6407ac9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -456,6 +456,7 @@ export rust_common_flags := --edition=2021 \
+ -Wclippy::mut_mut \
+ -Wclippy::needless_bitwise_bool \
+ -Wclippy::needless_continue \
++ -Aclippy::needless_lifetimes \
+ -Wclippy::no_mangle_with_rust_abi \
+ -Wclippy::dbg_macro
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 22f8a7bca6d21c..a11a7a42edbfb5 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1232,6 +1232,17 @@ config HISILICON_ERRATUM_161600802
+
+ If unsure, say Y.
+
++config HISILICON_ERRATUM_162100801
++ bool "Hip09 162100801 erratum support"
++ default y
++ help
++ When enabling GICv4.1 in hip09, VMAPP will fail to clear some caches
++ during unmapping operation, which will cause some vSGIs lost.
++ To fix the issue, invalidate related vPE cache through GICR_INVALLR
++ after VMOVP.
++
++ If unsure, say Y.
++
+ config QCOM_FALKOR_ERRATUM_1003
+ bool "Falkor E1003: Incorrect translation due to ASID change"
+ default y
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index b756578aeaeea1..1559a239137f32 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -719,6 +719,8 @@ static int fpmr_set(struct task_struct *target, const struct user_regset *regset
+ if (!system_supports_fpmr())
+ return -EINVAL;
+
++ fpmr = target->thread.uw.fpmr;
++
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &fpmr, 0, count);
+ if (ret)
+ return ret;
+@@ -1418,7 +1420,7 @@ static int tagged_addr_ctrl_get(struct task_struct *target,
+ {
+ long ctrl = get_tagged_addr_ctrl(target);
+
+- if (IS_ERR_VALUE(ctrl))
++ if (WARN_ON_ONCE(IS_ERR_VALUE(ctrl)))
+ return ctrl;
+
+ return membuf_write(&to, &ctrl, sizeof(ctrl));
+@@ -1432,6 +1434,10 @@ static int tagged_addr_ctrl_set(struct task_struct *target, const struct
+ int ret;
+ long ctrl;
+
++ ctrl = get_tagged_addr_ctrl(target);
++ if (WARN_ON_ONCE(IS_ERR_VALUE(ctrl)))
++ return ctrl;
++
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ctrl, 0, -1);
+ if (ret)
+ return ret;
+@@ -1463,6 +1469,8 @@ static int poe_set(struct task_struct *target, const struct
+ if (!system_supports_poe())
+ return -EINVAL;
+
++ ctrl = target->thread.por_el0;
++
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ctrl, 0, -1);
+ if (ret)
+ return ret;
+diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
+index 188197590fc9ce..b2ac062463273f 100644
+--- a/arch/arm64/mm/context.c
++++ b/arch/arm64/mm/context.c
+@@ -32,9 +32,9 @@ static unsigned long nr_pinned_asids;
+ static unsigned long *pinned_asid_map;
+
+ #define ASID_MASK (~GENMASK(asid_bits - 1, 0))
+-#define ASID_FIRST_VERSION (1UL << asid_bits)
++#define ASID_FIRST_VERSION (1UL << 16)
+
+-#define NUM_USER_ASIDS ASID_FIRST_VERSION
++#define NUM_USER_ASIDS (1UL << asid_bits)
+ #define ctxid2asid(asid) ((asid) & ~ASID_MASK)
+ #define asid2ctxid(asid, genid) ((asid) | (genid))
+
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 27a32ff15412aa..93ba66de160ce4 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -116,15 +116,6 @@ static void __init arch_reserve_crashkernel(void)
+
+ static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
+ {
+- /**
+- * Information we get from firmware (e.g. DT dma-ranges) describe DMA
+- * bus constraints. Devices using DMA might have their own limitations.
+- * Some of them rely on DMA zone in low 32-bit memory. Keep low RAM
+- * DMA zone on platforms that have RAM there.
+- */
+- if (memblock_start_of_DRAM() < U32_MAX)
+- zone_limit = min(zone_limit, U32_MAX);
+-
+ return min(zone_limit, memblock_end_of_DRAM() - 1) + 1;
+ }
+
+@@ -140,6 +131,14 @@ static void __init zone_sizes_init(void)
+ acpi_zone_dma_limit = acpi_iort_dma_get_max_cpu_address();
+ dt_zone_dma_limit = of_dma_get_max_cpu_address(NULL);
+ zone_dma_limit = min(dt_zone_dma_limit, acpi_zone_dma_limit);
++ /*
++ * Information we get from firmware (e.g. DT dma-ranges) describe DMA
++ * bus constraints. Devices using DMA might have their own limitations.
++ * Some of them rely on DMA zone in low 32-bit memory. Keep low RAM
++ * DMA zone on platforms that have RAM there.
++ */
++ if (memblock_start_of_DRAM() < U32_MAX)
++ zone_dma_limit = min(zone_dma_limit, U32_MAX);
+ arm64_dma_phys_limit = max_zone_phys(zone_dma_limit);
+ max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
+ #endif
+diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
+index 5da32c00d483fb..376c0708e2979b 100644
+--- a/arch/loongarch/include/asm/hugetlb.h
++++ b/arch/loongarch/include/asm/hugetlb.h
+@@ -29,6 +29,16 @@ static inline int prepare_hugepage_range(struct file *file,
+ return 0;
+ }
+
++#define __HAVE_ARCH_HUGE_PTE_CLEAR
++static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, unsigned long sz)
++{
++ pte_t clear;
++
++ pte_val(clear) = (unsigned long)invalid_pte_table;
++ set_pte_at(mm, addr, ptep, clear);
++}
++
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 174734a23d0ac8..9d53eca66fcc70 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -240,7 +240,7 @@ static void kvm_late_check_requests(struct kvm_vcpu *vcpu)
+ */
+ static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
+ {
+- int ret;
++ int idx, ret;
+
+ /*
+ * Check conditions before entering the guest
+@@ -249,7 +249,9 @@ static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
+ if (ret < 0)
+ return ret;
+
++ idx = srcu_read_lock(&vcpu->kvm->srcu);
+ ret = kvm_check_requests(vcpu);
++ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+ return ret;
+ }
+diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
+index 5ac9beb5f0935e..3b427b319db21d 100644
+--- a/arch/loongarch/mm/tlb.c
++++ b/arch/loongarch/mm/tlb.c
+@@ -289,7 +289,7 @@ static void setup_tlb_handler(int cpu)
+ /* Avoid lockdep warning */
+ rcutree_report_cpu_starting(cpu);
+
+-#ifdef CONFIG_NUMA
++#if defined(CONFIG_NUMA) && !defined(CONFIG_PREEMPT_RT)
+ vec_sz = sizeof(exception_handlers);
+
+ if (pcpu_handlers[cpu])
+diff --git a/arch/mips/boot/dts/loongson/ls7a-pch.dtsi b/arch/mips/boot/dts/loongson/ls7a-pch.dtsi
+index cce9428afc41fc..ee71045883e7e7 100644
+--- a/arch/mips/boot/dts/loongson/ls7a-pch.dtsi
++++ b/arch/mips/boot/dts/loongson/ls7a-pch.dtsi
+@@ -70,7 +70,6 @@ pci@1a000000 {
+ device_type = "pci";
+ #address-cells = <3>;
+ #size-cells = <2>;
+- #interrupt-cells = <2>;
+ msi-parent = <&msi>;
+
+ reg = <0 0x1a000000 0 0x02000000>,
+@@ -234,7 +233,7 @@ phy1: ethernet-phy@1 {
+ };
+ };
+
+- pci_bridge@9,0 {
++ pcie@9,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -244,12 +243,16 @@ pci_bridge@9,0 {
+ interrupts = <32 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 32 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@a,0 {
++ pcie@a,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -259,12 +262,16 @@ pci_bridge@a,0 {
+ interrupts = <33 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 33 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@b,0 {
++ pcie@b,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -274,12 +281,16 @@ pci_bridge@b,0 {
+ interrupts = <34 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 34 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@c,0 {
++ pcie@c,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -289,12 +300,16 @@ pci_bridge@c,0 {
+ interrupts = <35 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 35 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@d,0 {
++ pcie@d,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -304,12 +319,16 @@ pci_bridge@d,0 {
+ interrupts = <36 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 36 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@e,0 {
++ pcie@e,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -319,12 +338,16 @@ pci_bridge@e,0 {
+ interrupts = <37 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 37 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@f,0 {
++ pcie@f,0 {
+ compatible = "pci0014,7a29.1",
+ "pci0014,7a29",
+ "pciclass060400",
+@@ -334,12 +357,16 @@ pci_bridge@f,0 {
+ interrupts = <40 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 40 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@10,0 {
++ pcie@10,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -349,12 +376,16 @@ pci_bridge@10,0 {
+ interrupts = <41 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 41 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@11,0 {
++ pcie@11,0 {
+ compatible = "pci0014,7a29.1",
+ "pci0014,7a29",
+ "pciclass060400",
+@@ -364,12 +395,16 @@ pci_bridge@11,0 {
+ interrupts = <42 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 42 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@12,0 {
++ pcie@12,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -379,12 +414,16 @@ pci_bridge@12,0 {
+ interrupts = <43 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 43 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@13,0 {
++ pcie@13,0 {
+ compatible = "pci0014,7a29.1",
+ "pci0014,7a29",
+ "pciclass060400",
+@@ -394,12 +433,16 @@ pci_bridge@13,0 {
+ interrupts = <38 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 38 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@14,0 {
++ pcie@14,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -409,9 +452,13 @@ pci_bridge@14,0 {
+ interrupts = <39 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 39 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+ };
+
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index fbb68fc28ed3a5..935568d68196d0 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -2932,7 +2932,7 @@ static void __init fixup_device_tree_chrp(void)
+ #endif
+
+ #if defined(CONFIG_PPC64) && defined(CONFIG_PPC_PMAC)
+-static void __init fixup_device_tree_pmac(void)
++static void __init fixup_device_tree_pmac64(void)
+ {
+ phandle u3, i2c, mpic;
+ u32 u3_rev;
+@@ -2972,7 +2972,31 @@ static void __init fixup_device_tree_pmac(void)
+ &parent, sizeof(parent));
+ }
+ #else
+-#define fixup_device_tree_pmac()
++#define fixup_device_tree_pmac64()
++#endif
++
++#ifdef CONFIG_PPC_PMAC
++static void __init fixup_device_tree_pmac(void)
++{
++ __be32 val = 1;
++ char type[8];
++ phandle node;
++
++ // Some pmacs are missing #size-cells on escc nodes
++ for (node = 0; prom_next_node(&node); ) {
++ type[0] = '\0';
++ prom_getprop(node, "device_type", type, sizeof(type));
++ if (prom_strcmp(type, "escc"))
++ continue;
++
++ if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
++ continue;
++
++ prom_setprop(node, NULL, "#size-cells", &val, sizeof(val));
++ }
++}
++#else
++static inline void fixup_device_tree_pmac(void) { }
+ #endif
+
+ #ifdef CONFIG_PPC_EFIKA
+@@ -3197,6 +3221,7 @@ static void __init fixup_device_tree(void)
+ fixup_device_tree_maple_memory_controller();
+ fixup_device_tree_chrp();
+ fixup_device_tree_pmac();
++ fixup_device_tree_pmac64();
+ fixup_device_tree_efika();
+ fixup_device_tree_pasemi();
+ }
+diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
+index 2341393cfac1ae..26c01b9e3434c4 100644
+--- a/arch/riscv/configs/defconfig
++++ b/arch/riscv/configs/defconfig
+@@ -301,7 +301,6 @@ CONFIG_DEBUG_MEMORY_INIT=y
+ CONFIG_DEBUG_PER_CPU_MAPS=y
+ CONFIG_SOFTLOCKUP_DETECTOR=y
+ CONFIG_WQ_WATCHDOG=y
+-CONFIG_DEBUG_TIMEKEEPING=y
+ CONFIG_DEBUG_RT_MUTEXES=y
+ CONFIG_DEBUG_SPINLOCK=y
+ CONFIG_DEBUG_MUTEXES=y
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 30b20ce9a70033..83789e39d1d5e5 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -106,9 +106,10 @@ struct zpci_bus {
+ struct list_head resources;
+ struct list_head bus_next;
+ struct resource bus_resource;
+- int pchid;
++ int topo; /* TID if topo_is_tid, PCHID otherwise */
+ int domain_nr;
+- bool multifunction;
++ u8 multifunction : 1;
++ u8 topo_is_tid : 1;
+ enum pci_bus_speed max_bus_speed;
+ };
+
+@@ -129,6 +130,8 @@ struct zpci_dev {
+ u16 vfn; /* virtual function number */
+ u16 pchid; /* physical channel ID */
+ u16 maxstbl; /* Maximum store block size */
++ u16 rid; /* RID as supplied by firmware */
++ u16 tid; /* Topology for which RID is valid */
+ u8 pfgid; /* function group ID */
+ u8 pft; /* pci function type */
+ u8 port;
+@@ -139,7 +142,8 @@ struct zpci_dev {
+ u8 is_physfn : 1;
+ u8 util_str_avail : 1;
+ u8 irqs_registered : 1;
+- u8 reserved : 2;
++ u8 tid_avail : 1;
++ u8 reserved : 1;
+ unsigned int devfn; /* DEVFN part of the RID*/
+
+ u8 pfip[CLP_PFIP_NR_SEGMENTS]; /* pci function internal path */
+@@ -210,12 +214,14 @@ extern struct airq_iv *zpci_aif_sbv;
+ ----------------------------------------------------------------------------- */
+ /* Base stuff */
+ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
++int zpci_add_device(struct zpci_dev *zdev);
+ int zpci_enable_device(struct zpci_dev *);
+ int zpci_disable_device(struct zpci_dev *);
+ int zpci_scan_configured_device(struct zpci_dev *zdev, u32 fh);
+ int zpci_deconfigure_device(struct zpci_dev *zdev);
+ void zpci_device_reserved(struct zpci_dev *zdev);
+ bool zpci_is_device_configured(struct zpci_dev *zdev);
++int zpci_scan_devices(void);
+
+ int zpci_hot_reset_device(struct zpci_dev *zdev);
+ int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64, u8 *);
+@@ -225,7 +231,7 @@ void zpci_update_fh(struct zpci_dev *zdev, u32 fh);
+
+ /* CLP */
+ int clp_setup_writeback_mio(void);
+-int clp_scan_pci_devices(void);
++int clp_scan_pci_devices(struct list_head *scan_list);
+ int clp_query_pci_fn(struct zpci_dev *zdev);
+ int clp_enable_fh(struct zpci_dev *zdev, u32 *fh, u8 nr_dma_as);
+ int clp_disable_fh(struct zpci_dev *zdev, u32 *fh);
+diff --git a/arch/s390/include/asm/pci_clp.h b/arch/s390/include/asm/pci_clp.h
+index f0c677ddd27060..14afb9ce91f3bc 100644
+--- a/arch/s390/include/asm/pci_clp.h
++++ b/arch/s390/include/asm/pci_clp.h
+@@ -110,7 +110,8 @@ struct clp_req_query_pci {
+ struct clp_rsp_query_pci {
+ struct clp_rsp_hdr hdr;
+ u16 vfn; /* virtual fn number */
+- u16 : 3;
++ u16 : 2;
++ u16 tid_avail : 1;
+ u16 rid_avail : 1;
+ u16 is_physfn : 1;
+ u16 reserved1 : 1;
+@@ -130,8 +131,9 @@ struct clp_rsp_query_pci {
+ u64 edma; /* end dma as */
+ #define ZPCI_RID_MASK_DEVFN 0x00ff
+ u16 rid; /* BUS/DEVFN PCI address */
+- u16 reserved0;
+- u32 reserved[10];
++ u32 reserved0;
++ u16 tid;
++ u32 reserved[9];
+ u32 uid; /* user defined id */
+ u8 util_str[CLP_UTIL_STR_LEN]; /* utility string */
+ u32 reserved2[16];
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 3317f4878eaa70..331e0654d61d78 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1780,7 +1780,9 @@ static void cpumsf_pmu_stop(struct perf_event *event, int flags)
+ event->hw.state |= PERF_HES_STOPPED;
+
+ if ((flags & PERF_EF_UPDATE) && !(event->hw.state & PERF_HES_UPTODATE)) {
+- hw_perf_event_update(event, 1);
++ /* CPU hotplug off removes SDBs. No samples to extract. */
++ if (cpuhw->flags & PMU_F_RESERVED)
++ hw_perf_event_update(event, 1);
+ event->hw.state |= PERF_HES_UPTODATE;
+ }
+ perf_pmu_enable(event->pmu);
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 635fd8f2acbaa2..88f72745fa59e1 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -29,6 +29,7 @@
+ #include <linux/pci.h>
+ #include <linux/printk.h>
+ #include <linux/lockdep.h>
++#include <linux/list_sort.h>
+
+ #include <asm/isc.h>
+ #include <asm/airq.h>
+@@ -778,8 +779,9 @@ int zpci_hot_reset_device(struct zpci_dev *zdev)
+ * @fh: Current Function Handle of the device to be created
+ * @state: Initial state after creation either Standby or Configured
+ *
+- * Creates a new zpci device and adds it to its, possibly newly created, zbus
+- * as well as zpci_list.
++ * Allocates a new struct zpci_dev and queries the platform for its details.
++ * If successful the device can subsequently be added to the zPCI subsystem
++ * using zpci_add_device().
+ *
+ * Returns: the zdev on success or an error pointer otherwise
+ */
+@@ -788,7 +790,6 @@ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ struct zpci_dev *zdev;
+ int rc;
+
+- zpci_dbg(1, "add fid:%x, fh:%x, c:%d\n", fid, fh, state);
+ zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
+ if (!zdev)
+ return ERR_PTR(-ENOMEM);
+@@ -803,11 +804,34 @@ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ goto error;
+ zdev->state = state;
+
+- kref_init(&zdev->kref);
+ mutex_init(&zdev->state_lock);
+ mutex_init(&zdev->fmb_lock);
+ mutex_init(&zdev->kzdev_lock);
+
++ return zdev;
++
++error:
++ zpci_dbg(0, "crt fid:%x, rc:%d\n", fid, rc);
++ kfree(zdev);
++ return ERR_PTR(rc);
++}
++
++/**
++ * zpci_add_device() - Add a previously created zPCI device to the zPCI subsystem
++ * @zdev: The zPCI device to be added
++ *
++ * A struct zpci_dev is added to the zPCI subsystem and to a virtual PCI bus, creating
++ * a new one as necessary. A hotplug slot is created and events start to be handled.
++ * If successful, from this point on zpci_zdev_get() and zpci_zdev_put() must be used.
++ * If adding the struct zpci_dev fails, the device was not added and should be freed.
++ *
++ * Return: 0 on success, or an error code otherwise
++ */
++int zpci_add_device(struct zpci_dev *zdev)
++{
++ int rc;
++
++ zpci_dbg(1, "add fid:%x, fh:%x, c:%d\n", zdev->fid, zdev->fh, zdev->state);
+ rc = zpci_init_iommu(zdev);
+ if (rc)
+ goto error;
+@@ -816,18 +840,17 @@ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ if (rc)
+ goto error_destroy_iommu;
+
++ kref_init(&zdev->kref);
+ spin_lock(&zpci_list_lock);
+ list_add_tail(&zdev->entry, &zpci_list);
+ spin_unlock(&zpci_list_lock);
+-
+- return zdev;
++ return 0;
+
+ error_destroy_iommu:
+ zpci_destroy_iommu(zdev);
+ error:
+- zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
+- kfree(zdev);
+- return ERR_PTR(rc);
++ zpci_dbg(0, "add fid:%x, rc:%d\n", zdev->fid, rc);
++ return rc;
+ }
+
+ bool zpci_is_device_configured(struct zpci_dev *zdev)
+@@ -1069,6 +1092,50 @@ bool zpci_is_enabled(void)
+ return s390_pci_initialized;
+ }
+
++static int zpci_cmp_rid(void *priv, const struct list_head *a,
++ const struct list_head *b)
++{
++ struct zpci_dev *za = container_of(a, struct zpci_dev, entry);
++ struct zpci_dev *zb = container_of(b, struct zpci_dev, entry);
++
++ /*
++ * PCI functions without RID available maintain original order
++ * between themselves but sort before those with RID.
++ */
++ if (za->rid == zb->rid)
++ return za->rid_available > zb->rid_available;
++ /*
++ * PCI functions with RID sort by RID ascending.
++ */
++ return za->rid > zb->rid;
++}
++
++static void zpci_add_devices(struct list_head *scan_list)
++{
++ struct zpci_dev *zdev, *tmp;
++
++ list_sort(NULL, scan_list, &zpci_cmp_rid);
++ list_for_each_entry_safe(zdev, tmp, scan_list, entry) {
++ list_del_init(&zdev->entry);
++ if (zpci_add_device(zdev))
++ kfree(zdev);
++ }
++}
++
++int zpci_scan_devices(void)
++{
++ LIST_HEAD(scan_list);
++ int rc;
++
++ rc = clp_scan_pci_devices(&scan_list);
++ if (rc)
++ return rc;
++
++ zpci_add_devices(&scan_list);
++ zpci_bus_scan_busses();
++ return 0;
++}
++
+ static int __init pci_base_init(void)
+ {
+ int rc;
+@@ -1098,10 +1165,9 @@ static int __init pci_base_init(void)
+ if (rc)
+ goto out_irq;
+
+- rc = clp_scan_pci_devices();
++ rc = zpci_scan_devices();
+ if (rc)
+ goto out_find;
+- zpci_bus_scan_busses();
+
+ s390_pci_initialized = 1;
+ return 0;
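The comparator above relies on list_sort() being a stable merge sort, so RID-less functions keep their relative order while sorting ahead of RID-bearing ones. A small standalone sketch of the same ordering rule; insertion sort is used here only because it is also stable:

#include <stdio.h>

struct fn { unsigned int rid; int rid_available; };

/* Same rule as zpci_cmp_rid(): no-RID entries first (original order
 * preserved by stability), RID entries sorted by RID ascending. */
static int cmp(const struct fn *a, const struct fn *b)
{
	if (a->rid == b->rid)
		return a->rid_available > b->rid_available;
	return a->rid > b->rid;
}

int main(void)
{
	struct fn v[] = { {8, 1}, {0, 0}, {3, 1}, {0, 0} };
	int n = 4;

	for (int i = 1; i < n; i++)
		for (int j = i; j > 0 && cmp(&v[j - 1], &v[j]) > 0; j--) {
			struct fn t = v[j];

			v[j] = v[j - 1];
			v[j - 1] = t;
		}
	for (int i = 0; i < n; i++)
		printf("rid=%u avail=%d\n", v[i].rid, v[i].rid_available);
	return 0;
}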
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index daa5d7450c7d38..1b74a000ff6459 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -168,9 +168,16 @@ void zpci_bus_scan_busses(void)
+ mutex_unlock(&zbus_list_lock);
+ }
+
++static bool zpci_bus_is_multifunction_root(struct zpci_dev *zdev)
++{
++ return !s390_pci_no_rid && zdev->rid_available &&
++ zpci_is_device_configured(zdev) &&
++ !zdev->vfn;
++}
++
+ /* zpci_bus_create_pci_bus - Create the PCI bus associated with this zbus
+ * @zbus: the zbus holding the zdevices
+- * @fr: PCI root function that will determine the bus's domain, and bus speeed
++ * @fr: PCI root function that will determine the bus's domain, and bus speed
+ * @ops: the pci operations
+ *
+ * The PCI function @fr determines the domain (its UID), multifunction property
+@@ -188,7 +195,7 @@ static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *fr, s
+ return domain;
+
+ zbus->domain_nr = domain;
+- zbus->multifunction = fr->rid_available;
++ zbus->multifunction = zpci_bus_is_multifunction_root(fr);
+ zbus->max_bus_speed = fr->max_bus_speed;
+
+ /*
+@@ -232,13 +239,15 @@ static void zpci_bus_put(struct zpci_bus *zbus)
+ kref_put(&zbus->kref, zpci_bus_release);
+ }
+
+-static struct zpci_bus *zpci_bus_get(int pchid)
++static struct zpci_bus *zpci_bus_get(int topo, bool topo_is_tid)
+ {
+ struct zpci_bus *zbus;
+
+ mutex_lock(&zbus_list_lock);
+ list_for_each_entry(zbus, &zbus_list, bus_next) {
+- if (pchid == zbus->pchid) {
++ if (!zbus->multifunction)
++ continue;
++ if (topo_is_tid == zbus->topo_is_tid && topo == zbus->topo) {
+ kref_get(&zbus->kref);
+ goto out_unlock;
+ }
+@@ -249,7 +258,7 @@ static struct zpci_bus *zpci_bus_get(int pchid)
+ return zbus;
+ }
+
+-static struct zpci_bus *zpci_bus_alloc(int pchid)
++static struct zpci_bus *zpci_bus_alloc(int topo, bool topo_is_tid)
+ {
+ struct zpci_bus *zbus;
+
+@@ -257,7 +266,8 @@ static struct zpci_bus *zpci_bus_alloc(int pchid)
+ if (!zbus)
+ return NULL;
+
+- zbus->pchid = pchid;
++ zbus->topo = topo;
++ zbus->topo_is_tid = topo_is_tid;
+ INIT_LIST_HEAD(&zbus->bus_next);
+ mutex_lock(&zbus_list_lock);
+ list_add_tail(&zbus->bus_next, &zbus_list);
+@@ -292,19 +302,22 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ {
+ int rc = -EINVAL;
+
++ if (zbus->multifunction) {
++ if (!zdev->rid_available) {
++ WARN_ONCE(1, "rid_available not set for multifunction\n");
++ return rc;
++ }
++ zdev->devfn = zdev->rid & ZPCI_RID_MASK_DEVFN;
++ }
++
+ if (zbus->function[zdev->devfn]) {
+ pr_err("devfn %04x is already assigned\n", zdev->devfn);
+ return rc;
+ }
+-
+ zdev->zbus = zbus;
+ zbus->function[zdev->devfn] = zdev;
+ zpci_nb_devices++;
+
+- if (zbus->multifunction && !zdev->rid_available) {
+- WARN_ONCE(1, "rid_available not set for multifunction\n");
+- goto error;
+- }
+ rc = zpci_init_slot(zdev);
+ if (rc)
+ goto error;
+@@ -321,8 +334,9 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+
+ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+ {
++ bool topo_is_tid = zdev->tid_avail;
+ struct zpci_bus *zbus = NULL;
+- int rc = -EBADF;
++ int topo, rc = -EBADF;
+
+ if (zpci_nb_devices == ZPCI_NR_DEVICES) {
+ pr_warn("Adding PCI function %08x failed because the configured limit of %d is reached\n",
+@@ -330,14 +344,10 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+ return -ENOSPC;
+ }
+
+- if (zdev->devfn >= ZPCI_FUNCTIONS_PER_BUS)
+- return -EINVAL;
+-
+- if (!s390_pci_no_rid && zdev->rid_available)
+- zbus = zpci_bus_get(zdev->pchid);
+-
++ topo = topo_is_tid ? zdev->tid : zdev->pchid;
++ zbus = zpci_bus_get(topo, topo_is_tid);
+ if (!zbus) {
+- zbus = zpci_bus_alloc(zdev->pchid);
++ zbus = zpci_bus_alloc(topo, topo_is_tid);
+ if (!zbus)
+ return -ENOMEM;
+ }
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index 6f55a59a087115..74dac6da03d5bb 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -164,10 +164,13 @@ static int clp_store_query_pci_fn(struct zpci_dev *zdev,
+ zdev->port = response->port;
+ zdev->uid = response->uid;
+ zdev->fmb_length = sizeof(u32) * response->fmb_len;
+- zdev->rid_available = response->rid_avail;
+ zdev->is_physfn = response->is_physfn;
+- if (!s390_pci_no_rid && zdev->rid_available)
+- zdev->devfn = response->rid & ZPCI_RID_MASK_DEVFN;
++ zdev->rid_available = response->rid_avail;
++ if (zdev->rid_available)
++ zdev->rid = response->rid;
++ zdev->tid_avail = response->tid_avail;
++ if (zdev->tid_avail)
++ zdev->tid = response->tid;
+
+ memcpy(zdev->pfip, response->pfip, sizeof(zdev->pfip));
+ if (response->util_str_avail) {
+@@ -407,6 +410,7 @@ static int clp_find_pci(struct clp_req_rsp_list_pci *rrb, u32 fid,
+
+ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ {
++ struct list_head *scan_list = data;
+ struct zpci_dev *zdev;
+
+ if (!entry->vendor_id)
+@@ -417,10 +421,11 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ zpci_zdev_put(zdev);
+ return;
+ }
+- zpci_create_device(entry->fid, entry->fh, entry->config_state);
++ zdev = zpci_create_device(entry->fid, entry->fh, entry->config_state);
++ list_add_tail(&zdev->entry, scan_list);
+ }
+
+-int clp_scan_pci_devices(void)
++int clp_scan_pci_devices(struct list_head *scan_list)
+ {
+ struct clp_req_rsp_list_pci *rrb;
+ int rc;
+@@ -429,7 +434,7 @@ int clp_scan_pci_devices(void)
+ if (!rrb)
+ return -ENOMEM;
+
+- rc = clp_list_pci(rrb, NULL, __clp_add);
++ rc = clp_list_pci(rrb, scan_list, __clp_add);
+
+ clp_free_block(rrb);
+ return rc;
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index d4f19d33914cbc..7f7b732b3f3efa 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -340,6 +340,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ zdev = zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_CONFIGURED);
+ if (IS_ERR(zdev))
+ break;
++ if (zpci_add_device(zdev)) {
++ kfree(zdev);
++ break;
++ }
+ } else {
+ /* the configuration request may be stale */
+ if (zdev->state != ZPCI_FN_STATE_STANDBY)
+@@ -349,10 +353,17 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ zpci_scan_configured_device(zdev, ccdf->fh);
+ break;
+ case 0x0302: /* Reserved -> Standby */
+- if (!zdev)
+- zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
+- else
++ if (!zdev) {
++ zdev = zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
++ if (IS_ERR(zdev))
++ break;
++ if (zpci_add_device(zdev)) {
++ kfree(zdev);
++ break;
++ }
++ } else {
+ zpci_update_fh(zdev, ccdf->fh);
++ }
+ break;
+ case 0x0303: /* Deconfiguration requested */
+ if (zdev) {
+@@ -381,7 +392,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ break;
+ case 0x0306: /* 0x308 or 0x302 for multiple devices */
+ zpci_remove_reserved_devices();
+- clp_scan_pci_devices();
++ zpci_scan_devices();
+ break;
+ case 0x0308: /* Standby -> Reserved */
+ if (!zdev)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 7b9a7e8f39acc8..171be04eca1f5d 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -145,7 +145,6 @@ config X86
+ select ARCH_HAS_PARANOID_L1D_FLUSH
+ select BUILDTIME_TABLE_SORT
+ select CLKEVT_I8253
+- select CLOCKSOURCE_VALIDATE_LAST_CYCLE
+ select CLOCKSOURCE_WATCHDOG
+ # Word-size accesses may read uninitialized data past the trailing \0
+ # in strings and cause false KMSAN reports.
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 920e3a640caddd..b4a1a2576510e0 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -943,11 +943,12 @@ static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, u
+ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ static atomic64_t status_warned = ATOMIC64_INIT(0);
++ u64 reserved, status, mask, new_bits, prev_bits;
+ struct perf_sample_data data;
+ struct hw_perf_event *hwc;
+ struct perf_event *event;
+ int handled = 0, idx;
+- u64 reserved, status, mask;
+ bool pmu_enabled;
+
+ /*
+@@ -1012,7 +1013,12 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
+ * the corresponding PMCs are expected to be inactive according to the
+ * active_mask
+ */
+- WARN_ON(status > 0);
++ if (status > 0) {
++ prev_bits = atomic64_fetch_or(status, &status_warned);
++ // A new bit was set for the very first time.
++ new_bits = status & ~prev_bits;
++ WARN(new_bits, "New overflows for inactive PMCs: %llx\n", new_bits);
++ }
+
+ /* Clear overflow and freeze bits */
+ amd_pmu_ack_global_status(~status);
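The status_warned logic above turns a blanket WARN_ON into warn-once-per-bit: atomic64_fetch_or() records every overflow bit ever observed, and only bits set for the very first time trigger the message. A standalone sketch of the pattern using C11 atomics:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t seen;

/* fetch_or returns the old mask; bits not in it are new. */
static void report_new_bits(uint64_t status)
{
	uint64_t prev = atomic_fetch_or(&seen, status);
	uint64_t new_bits = status & ~prev;

	if (new_bits)
		printf("new overflow bits: %llx\n",
		       (unsigned long long)new_bits);
}

int main(void)
{
	report_new_bits(0x5);	/* reports 0x5 */
	report_new_bits(0x7);	/* reports only 0x2 */
	report_new_bits(0x7);	/* silent */
	return 0;
}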
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 6f82e75b61494e..4b804531b03c3c 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -36,10 +36,12 @@
+ #define _PAGE_BIT_DEVMAP _PAGE_BIT_SOFTW4
+
+ #ifdef CONFIG_X86_64
+-#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW5 /* Saved Dirty bit */
++#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW5 /* Saved Dirty bit (leaf) */
++#define _PAGE_BIT_NOPTISHADOW _PAGE_BIT_SOFTW5 /* No PTI shadow (root PGD) */
+ #else
+ /* Shared with _PAGE_BIT_UFFD_WP which is not supported on 32 bit */
+-#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW2 /* Saved Dirty bit */
++#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW2 /* Saved Dirty bit (leaf) */
++#define _PAGE_BIT_NOPTISHADOW _PAGE_BIT_SOFTW2 /* No PTI shadow (root PGD) */
+ #endif
+
+ /* If _PAGE_BIT_PRESENT is clear, we use these: */
+@@ -139,6 +141,8 @@
+
+ #define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
+
++#define _PAGE_NOPTISHADOW (_AT(pteval_t, 1) << _PAGE_BIT_NOPTISHADOW)
++
+ /*
+ * Set of bits not changed in pte_modify. The pte's
+ * protection key is treated like _PAGE_RW, for
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index d8408aafeed988..79d2e17f6582e9 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1065,7 +1065,7 @@ static void init_amd(struct cpuinfo_x86 *c)
+ */
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+ cpu_has(c, X86_FEATURE_AUTOIBRS))
+- WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));
++ WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS) < 0);
+
+ /* AMD CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */
+ clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
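The one-character fix above matters because msr_set_bit() returns a negative errno on failure, 0 when the bit was already set, and a positive value when the write changed it; warning on any non-zero return therefore fires on the normal success path. A sketch of the tri-state convention (the helper here is a stand-in, not the kernel function):

#include <stdio.h>

/* Mirrors the assumed convention: <0 error, 0 already set, >0 changed. */
static int set_bit_ts(unsigned long long *reg, int bit)
{
	unsigned long long mask = 1ULL << bit;

	if (bit > 63)
		return -1;		/* error */
	if (*reg & mask)
		return 0;		/* nothing to do */
	*reg |= mask;
	return 1;			/* changed by this call */
}

int main(void)
{
	unsigned long long efer = 0;

	if (set_bit_ts(&efer, 21) < 0)	/* warn only on real failure */
		fprintf(stderr, "write failed\n");
	printf("efer=%llx\n", efer);
	return 0;
}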
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index 392d09c936d60c..e6fa03ed9172c0 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -178,8 +178,6 @@ struct _cpuid4_info_regs {
+ struct amd_northbridge *nb;
+ };
+
+-static unsigned short num_cache_leaves;
+-
+ /* AMD doesn't have CPUID4. Emulate it here to report the same
+ information to the user. This makes some assumptions about the machine:
+ L2 not shared, no SMT etc. that is currently true on AMD CPUs.
+@@ -717,20 +715,23 @@ void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c)
+
+ void init_amd_cacheinfo(struct cpuinfo_x86 *c)
+ {
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
+
+ if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
+- num_cache_leaves = find_num_cache_leaves(c);
++ ci->num_leaves = find_num_cache_leaves(c);
+ } else if (c->extended_cpuid_level >= 0x80000006) {
+ if (cpuid_edx(0x80000006) & 0xf000)
+- num_cache_leaves = 4;
++ ci->num_leaves = 4;
+ else
+- num_cache_leaves = 3;
++ ci->num_leaves = 3;
+ }
+ }
+
+ void init_hygon_cacheinfo(struct cpuinfo_x86 *c)
+ {
+- num_cache_leaves = find_num_cache_leaves(c);
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
++
++ ci->num_leaves = find_num_cache_leaves(c);
+ }
+
+ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+@@ -740,21 +741,21 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+ unsigned int new_l1d = 0, new_l1i = 0; /* Cache sizes from cpuid(4) */
+ unsigned int new_l2 = 0, new_l3 = 0, i; /* Cache sizes from cpuid(4) */
+ unsigned int l2_id = 0, l3_id = 0, num_threads_sharing, index_msb;
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
+
+ if (c->cpuid_level > 3) {
+- static int is_initialized;
+-
+- if (is_initialized == 0) {
+- /* Init num_cache_leaves from boot CPU */
+- num_cache_leaves = find_num_cache_leaves(c);
+- is_initialized++;
+- }
++ /*
++ * There should be at least one leaf. A non-zero value means
++ * that the number of leaves has been initialized.
++ */
++ if (!ci->num_leaves)
++ ci->num_leaves = find_num_cache_leaves(c);
+
+ /*
+ * Whenever possible use cpuid(4), deterministic cache
+ * parameters cpuid leaf to find the cache details
+ */
+- for (i = 0; i < num_cache_leaves; i++) {
++ for (i = 0; i < ci->num_leaves; i++) {
+ struct _cpuid4_info_regs this_leaf = {};
+ int retval;
+
+@@ -790,14 +791,14 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+ * Don't use cpuid2 if cpuid4 is supported. For P4, we use cpuid2 for
+ * trace cache
+ */
+- if ((num_cache_leaves == 0 || c->x86 == 15) && c->cpuid_level > 1) {
++ if ((!ci->num_leaves || c->x86 == 15) && c->cpuid_level > 1) {
+ /* supports eax=2 call */
+ int j, n;
+ unsigned int regs[4];
+ unsigned char *dp = (unsigned char *)regs;
+ int only_trace = 0;
+
+- if (num_cache_leaves != 0 && c->x86 == 15)
++ if (ci->num_leaves && c->x86 == 15)
+ only_trace = 1;
+
+ /* Number of times to iterate */
+@@ -991,14 +992,12 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+
+ int init_cache_level(unsigned int cpu)
+ {
+- struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
+
+- if (!num_cache_leaves)
++ /* There should be at least one leaf. */
++ if (!ci->num_leaves)
+ return -ENOENT;
+- if (!this_cpu_ci)
+- return -EINVAL;
+- this_cpu_ci->num_levels = 3;
+- this_cpu_ci->num_leaves = num_cache_leaves;
++
+ return 0;
+ }
+
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index e7656cbef68d54..4b5f3d0521517a 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -586,7 +586,9 @@ static void init_intel(struct cpuinfo_x86 *c)
+ c->x86_vfm == INTEL_WESTMERE_EX))
+ set_cpu_bug(c, X86_BUG_CLFLUSH_MONITOR);
+
+- if (boot_cpu_has(X86_FEATURE_MWAIT) && c->x86_vfm == INTEL_ATOM_GOLDMONT)
++ if (boot_cpu_has(X86_FEATURE_MWAIT) &&
++ (c->x86_vfm == INTEL_ATOM_GOLDMONT ||
++ c->x86_vfm == INTEL_LUNARLAKE_M))
+ set_cpu_bug(c, X86_BUG_MONITOR);
+
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 621a151ccf7d0a..b2e313ea17bf6f 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -428,8 +428,8 @@ void __init topology_apply_cmdline_limits_early(void)
+ {
+ unsigned int possible = nr_cpu_ids;
+
+- /* 'maxcpus=0' 'nosmp' 'nolapic' 'disableapic' 'noapic' */
+- if (!setup_max_cpus || ioapic_is_disabled || apic_is_disabled)
++ /* 'maxcpus=0' 'nosmp' 'nolapic' 'disableapic' */
++ if (!setup_max_cpus || apic_is_disabled)
+ possible = 1;
+
+ /* 'possible_cpus=N' */
+@@ -443,7 +443,7 @@ void __init topology_apply_cmdline_limits_early(void)
+
+ static __init bool restrict_to_up(void)
+ {
+- if (!smp_found_config || ioapic_is_disabled)
++ if (!smp_found_config)
+ return true;
+ /*
+ * XEN PV is special as it does not advertise the local APIC
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 1065ab995305cd..8f62e0666dea51 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -63,16 +63,6 @@ static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
+ return true;
+ }
+
+-/*
+- * Update the value of PKRU register that was already pushed onto the signal frame.
+- */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
+-{
+- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+- return 0;
+- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
+-}
+-
+ /*
+ * Signal frame handlers.
+ */
+@@ -168,14 +158,8 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+
+ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+- int err = 0;
+-
+- if (use_xsave()) {
+- err = xsave_to_user_sigframe(buf);
+- if (!err)
+- err = update_pkru_in_sigframe(buf, pkru);
+- return err;
+- }
++ if (use_xsave())
++ return xsave_to_user_sigframe(buf, pkru);
+
+ if (use_fxsr())
+ return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 0b86a5002c846d..aa16f1a1bbcf17 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -69,6 +69,28 @@ static inline u64 xfeatures_mask_independent(void)
+ return fpu_kernel_cfg.independent_features;
+ }
+
++/*
++ * Update the value of PKRU register that was already pushed onto the signal frame.
++ */
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
++{
++ u64 xstate_bv;
++ int err;
++
++ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
++ return 0;
++
++ /* Mark PKRU as in-use so that it is restored correctly. */
++ xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
++
++ err = __put_user(xstate_bv, &buf->header.xfeatures);
++ if (err)
++ return err;
++
++ /* Update PKRU value in the userspace xsave buffer. */
++ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
++}
++
+ /* XSAVE/XRSTOR wrapper functions */
+
+ #ifdef CONFIG_X86_64
+@@ -256,7 +278,7 @@ static inline u64 xfeatures_need_sigframe_write(void)
+ * The caller has to zero buf::header before calling this because XSAVE*
+ * does not touch the reserved fields in the header.
+ */
+-static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
++static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+ /*
+ * Include the features which are not xsaved/rstored by the kernel
+@@ -281,6 +303,9 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
+ XSTATE_OP(XSAVE, buf, lmask, hmask, err);
+ clac();
+
++ if (!err)
++ err = update_pkru_in_sigframe(buf, mask, pkru);
++
+ return err;
+ }
+
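Moving the PKRU fixup into xsave_to_user_sigframe() lets it also set the PKRU bit in the header's xfeatures (xstate_bv), so that XRSTOR restores the value written into the frame instead of the init state. A sketch of the mask arithmetic, assuming PKRU is xstate component 9 as on current x86:

#include <stdint.h>
#include <stdio.h>

#define XFEATURE_MASK_PKRU	(1ULL << 9)	/* assumed component number */

int main(void)
{
	uint64_t mask = 0x207;		/* features requested for XSAVE */
	uint64_t in_use = 0x007;	/* features the CPU actually wrote */
	uint64_t xstate_bv;

	/* Mirror the patch: force-mark PKRU in-use in the frame header. */
	xstate_bv = (mask & in_use) | XFEATURE_MASK_PKRU;
	printf("xstate_bv=%llx\n", (unsigned long long)xstate_bv);
	return 0;
}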
+diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
+index e9e88c342f752e..540443d699e3c2 100644
+--- a/arch/x86/kernel/relocate_kernel_64.S
++++ b/arch/x86/kernel/relocate_kernel_64.S
+@@ -13,6 +13,7 @@
+ #include <asm/pgtable_types.h>
+ #include <asm/nospec-branch.h>
+ #include <asm/unwind_hints.h>
++#include <asm/asm-offsets.h>
+
+ /*
+ * Must be relocatable PIC code callable as a C function, in particular
+@@ -242,6 +243,13 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
+ movq CR0(%r8), %r8
+ movq %rax, %cr3
+ movq %r8, %cr0
++
++#ifdef CONFIG_KEXEC_JUMP
++ /* Saved in save_processor_state. */
++ movq $saved_context, %rax
++ lgdt saved_context_gdt_desc(%rax)
++#endif
++
+ movq %rbp, %rax
+
+ popf
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 3e353ed1f76736..1b4438e24814b4 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4580,6 +4580,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
+
+ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+ {
++ kvm_pfn_t orig_pfn;
+ int r;
+
+ /* Dummy roots are used only for shadowing bad guest roots. */
+@@ -4601,6 +4602,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+ if (r != RET_PF_CONTINUE)
+ return r;
+
++ orig_pfn = fault->pfn;
++
+ r = RET_PF_RETRY;
+ write_lock(&vcpu->kvm->mmu_lock);
+
+@@ -4615,7 +4618,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+
+ out_unlock:
+ write_unlock(&vcpu->kvm->mmu_lock);
+- kvm_release_pfn_clean(fault->pfn);
++ kvm_release_pfn_clean(orig_pfn);
+ return r;
+ }
+
+@@ -4675,6 +4678,7 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
+ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
+ struct kvm_page_fault *fault)
+ {
++ kvm_pfn_t orig_pfn;
+ int r;
+
+ if (page_fault_handle_page_track(vcpu, fault))
+@@ -4692,6 +4696,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
+ if (r != RET_PF_CONTINUE)
+ return r;
+
++ orig_pfn = fault->pfn;
++
+ r = RET_PF_RETRY;
+ read_lock(&vcpu->kvm->mmu_lock);
+
+@@ -4702,7 +4708,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
+
+ out_unlock:
+ read_unlock(&vcpu->kvm->mmu_lock);
+- kvm_release_pfn_clean(fault->pfn);
++ kvm_release_pfn_clean(orig_pfn);
+ return r;
+ }
+ #endif
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index ae7d39ff2d07f0..b08017683920f0 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -778,6 +778,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
+ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+ {
+ struct guest_walker walker;
++ kvm_pfn_t orig_pfn;
+ int r;
+
+ WARN_ON_ONCE(fault->is_tdp);
+@@ -836,6 +837,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+ walker.pte_access &= ~ACC_EXEC_MASK;
+ }
+
++ orig_pfn = fault->pfn;
++
+ r = RET_PF_RETRY;
+ write_lock(&vcpu->kvm->mmu_lock);
+
+@@ -849,7 +852,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+
+ out_unlock:
+ write_unlock(&vcpu->kvm->mmu_lock);
+- kvm_release_pfn_clean(fault->pfn);
++ kvm_release_pfn_clean(orig_pfn);
+ return r;
+ }
+
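The pattern in these three KVM hunks is identical: fault->pfn may be overwritten while the fault is resolved under mmu_lock, so the reference taken earlier must be released against a snapshot of the original pfn. A toy illustration of why the snapshot is needed:

#include <stdio.h>

struct fault { unsigned long pfn; };

static void put_pfn(unsigned long pfn)
{
	printf("release pfn %lu\n", pfn);
}

/* The handler may legitimately replace fault->pfn while mapping. */
static void handle(struct fault *f)
{
	f->pfn = 99;
}

int main(void)
{
	struct fault f = { .pfn = 42 };
	unsigned long orig_pfn = f.pfn;	/* snapshot before handling */

	handle(&f);
	put_pfn(orig_pfn);		/* release what was acquired, not 99 */
	return 0;
}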
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index 437e96fb497734..5ab7bd2f1983c1 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -174,7 +174,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
+ if (result)
+ return result;
+
+- set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
++ set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag | _PAGE_NOPTISHADOW));
+ }
+
+ return 0;
+@@ -218,14 +218,14 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
+ if (result)
+ return result;
+ if (pgtable_l5_enabled()) {
+- set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag));
++ set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag | _PAGE_NOPTISHADOW));
+ } else {
+ /*
+ * With p4d folded, pgd is equal to p4d.
+ * The pgd entry has to point to the pud page table in this case.
+ */
+ pud_t *pud = pud_offset(p4d, 0);
+- set_pgd(pgd, __pgd(__pa(pud) | info->kernpg_flag));
++ set_pgd(pgd, __pgd(__pa(pud) | info->kernpg_flag | _PAGE_NOPTISHADOW));
+ }
+ }
+
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 851ec8f1363a8b..5f0d579932c688 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -132,7 +132,7 @@ pgd_t __pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd)
+ * Top-level entries added to init_mm's usermode pgd after boot
+ * will not be automatically propagated to other mms.
+ */
+- if (!pgdp_maps_userspace(pgdp))
++ if (!pgdp_maps_userspace(pgdp) || (pgd.pgd & _PAGE_NOPTISHADOW))
+ return pgd;
+
+ /*
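The new software bit lets kernel_ident_mapping_init() mark its top-level entries so that __pti_set_user_pgtbl() skips cloning them into the PTI user-mode shadow. A sketch of the check, assuming the 64-bit layout where _PAGE_BIT_SOFTW5 is bit 58:

#include <stdint.h>
#include <stdio.h>

#define _PAGE_BIT_SOFTW5	58	/* assumed x86-64 software bit */
#define _PAGE_NOPTISHADOW	(1ULL << _PAGE_BIT_SOFTW5)

/* Mirror of the pti.c condition: entries carrying the bit are not
 * propagated to the user shadow page table. */
static int propagate_to_user_shadow(uint64_t pgd)
{
	return !(pgd & _PAGE_NOPTISHADOW);
}

int main(void)
{
	uint64_t ident = 0x1000 | 0x63 | _PAGE_NOPTISHADOW;
	uint64_t normal = 0x2000 | 0x63;

	printf("ident: %d, normal: %d\n",
	       propagate_to_user_shadow(ident),
	       propagate_to_user_shadow(normal));
	return 0;
}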
+diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
+index 55c4b07ec1f631..0c316bae1726ee 100644
+--- a/arch/x86/pci/acpi.c
++++ b/arch/x86/pci/acpi.c
+@@ -250,6 +250,125 @@ void __init pci_acpi_crs_quirks(void)
+ pr_info("Please notify linux-pci@vger.kernel.org so future kernels can do this automatically\n");
+ }
+
++/*
++ * Check if pdev is part of a PCIe switch that is directly below the
++ * specified bridge.
++ */
++static bool pcie_switch_directly_under(struct pci_dev *bridge,
++ struct pci_dev *pdev)
++{
++ struct pci_dev *parent = pci_upstream_bridge(pdev);
++
++ /* If the device doesn't have a parent, it's not under anything */
++ if (!parent)
++ return false;
++
++ /*
++ * If the device has a PCIe type, check if it is below the
++ * corresponding PCIe switch components (if applicable). Then check
++ * if its upstream port is directly beneath the specified bridge.
++ */
++ switch (pci_pcie_type(pdev)) {
++ case PCI_EXP_TYPE_UPSTREAM:
++ return parent == bridge;
++
++ case PCI_EXP_TYPE_DOWNSTREAM:
++ if (pci_pcie_type(parent) != PCI_EXP_TYPE_UPSTREAM)
++ return false;
++ parent = pci_upstream_bridge(parent);
++ return parent == bridge;
++
++ case PCI_EXP_TYPE_ENDPOINT:
++ if (pci_pcie_type(parent) != PCI_EXP_TYPE_DOWNSTREAM)
++ return false;
++ parent = pci_upstream_bridge(parent);
++ if (!parent || pci_pcie_type(parent) != PCI_EXP_TYPE_UPSTREAM)
++ return false;
++ parent = pci_upstream_bridge(parent);
++ return parent == bridge;
++ }
++
++ return false;
++}
++
++static bool pcie_has_usb4_host_interface(struct pci_dev *pdev)
++{
++ struct fwnode_handle *fwnode;
++
++ /*
++ * For USB4, the tunneled PCIe Root or Downstream Ports are marked
++ * with the "usb4-host-interface" ACPI property, so we look for
++ * that first. This should cover most cases.
++ */
++ fwnode = fwnode_find_reference(dev_fwnode(&pdev->dev),
++ "usb4-host-interface", 0);
++ if (!IS_ERR(fwnode)) {
++ fwnode_handle_put(fwnode);
++ return true;
++ }
++
++ /*
++ * Any integrated Thunderbolt 3/4 PCIe Root Ports from Intel
++ * before Alder Lake do not have the "usb4-host-interface"
++ * property so we use their PCI IDs instead. All these are
++ * tunneled. This list is not expected to grow.
++ */
++ if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
++ switch (pdev->device) {
++ /* Ice Lake Thunderbolt 3 PCIe Root Ports */
++ case 0x8a1d:
++ case 0x8a1f:
++ case 0x8a21:
++ case 0x8a23:
++ /* Tiger Lake-LP Thunderbolt 4 PCIe Root Ports */
++ case 0x9a23:
++ case 0x9a25:
++ case 0x9a27:
++ case 0x9a29:
++ /* Tiger Lake-H Thunderbolt 4 PCIe Root Ports */
++ case 0x9a2b:
++ case 0x9a2d:
++ case 0x9a2f:
++ case 0x9a31:
++ return true;
++ }
++ }
++
++ return false;
++}
++
++bool arch_pci_dev_is_removable(struct pci_dev *pdev)
++{
++ struct pci_dev *parent, *root;
++
++ /* pdev without a parent or Root Port is never tunneled */
++ parent = pci_upstream_bridge(pdev);
++ if (!parent)
++ return false;
++ root = pcie_find_root_port(pdev);
++ if (!root)
++ return false;
++
++ /* Internal PCIe devices are not tunneled */
++ if (!root->external_facing)
++ return false;
++
++ /* Anything directly behind a "usb4-host-interface" is tunneled */
++ if (pcie_has_usb4_host_interface(parent))
++ return true;
++
++ /*
++ * Check if this is a discrete Thunderbolt/USB4 controller that is
++ * directly behind the non-USB4 PCIe Root Port marked as
++ * "ExternalFacingPort". Those are not behind a PCIe tunnel.
++ */
++ if (pcie_switch_directly_under(root, pdev))
++ return false;
++
++ /* PCIe devices after the discrete chip are tunneled */
++ return true;
++}
++
+ #ifdef CONFIG_PCI_MMCONFIG
+ static int check_segment(u16 seg, struct device *dev, char *estr)
+ {
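pcie_switch_directly_under() climbs at most three upstream links depending on the device type. A reduced model of the endpoint case (endpoint -> downstream port -> upstream port -> bridge), with toy types standing in for struct pci_dev:

#include <stdio.h>

enum type { ROOT, UP, DOWN, EP };

struct dev { enum type t; struct dev *parent; };

static int switch_directly_under(struct dev *bridge, struct dev *ep)
{
	struct dev *p = ep->parent;

	if (!p || p->t != DOWN)		/* endpoint sits on a downstream port */
		return 0;
	p = p->parent;
	if (!p || p->t != UP)		/* which hangs off an upstream port */
		return 0;
	return p->parent == bridge;	/* which must sit right below bridge */
}

int main(void)
{
	struct dev root = { ROOT, NULL };
	struct dev up = { UP, &root };
	struct dev down = { DOWN, &up };
	struct dev ep = { EP, &down };

	printf("%d\n", switch_directly_under(&root, &ep));	/* 1 */
	return 0;
}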
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 95e517723db3e4..0b1184176ce77a 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -350,9 +350,15 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
+
+ static inline bool disk_zone_is_conv(struct gendisk *disk, sector_t sector)
+ {
+- if (!disk->conv_zones_bitmap)
+- return false;
+- return test_bit(disk_zone_no(disk, sector), disk->conv_zones_bitmap);
++ unsigned long *bitmap;
++ bool is_conv;
++
++ rcu_read_lock();
++ bitmap = rcu_dereference(disk->conv_zones_bitmap);
++ is_conv = bitmap && test_bit(disk_zone_no(disk, sector), bitmap);
++ rcu_read_unlock();
++
++ return is_conv;
+ }
+
+ static bool disk_zone_is_last(struct gendisk *disk, struct blk_zone *zone)
+@@ -1455,6 +1461,24 @@ static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
+ disk->zone_wplugs_hash_bits = 0;
+ }
+
++static unsigned int disk_set_conv_zones_bitmap(struct gendisk *disk,
++ unsigned long *bitmap)
++{
++ unsigned int nr_conv_zones = 0;
++ unsigned long flags;
++
++ spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
++ if (bitmap)
++ nr_conv_zones = bitmap_weight(bitmap, disk->nr_zones);
++ bitmap = rcu_replace_pointer(disk->conv_zones_bitmap, bitmap,
++ lockdep_is_held(&disk->zone_wplugs_lock));
++ spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
++
++ kfree_rcu_mightsleep(bitmap);
++
++ return nr_conv_zones;
++}
++
+ void disk_free_zone_resources(struct gendisk *disk)
+ {
+ if (!disk->zone_wplugs_pool)
+@@ -1478,8 +1502,7 @@ void disk_free_zone_resources(struct gendisk *disk)
+ mempool_destroy(disk->zone_wplugs_pool);
+ disk->zone_wplugs_pool = NULL;
+
+- bitmap_free(disk->conv_zones_bitmap);
+- disk->conv_zones_bitmap = NULL;
++ disk_set_conv_zones_bitmap(disk, NULL);
+ disk->zone_capacity = 0;
+ disk->last_zone_capacity = 0;
+ disk->nr_zones = 0;
+@@ -1538,7 +1561,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ struct blk_revalidate_zone_args *args)
+ {
+ struct request_queue *q = disk->queue;
+- unsigned int nr_seq_zones, nr_conv_zones = 0;
++ unsigned int nr_seq_zones, nr_conv_zones;
+ unsigned int pool_size;
+ struct queue_limits lim;
+ int ret;
+@@ -1546,10 +1569,8 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ disk->nr_zones = args->nr_zones;
+ disk->zone_capacity = args->zone_capacity;
+ disk->last_zone_capacity = args->last_zone_capacity;
+- swap(disk->conv_zones_bitmap, args->conv_zones_bitmap);
+- if (disk->conv_zones_bitmap)
+- nr_conv_zones = bitmap_weight(disk->conv_zones_bitmap,
+- disk->nr_zones);
++ nr_conv_zones =
++ disk_set_conv_zones_bitmap(disk, args->conv_zones_bitmap);
+ if (nr_conv_zones >= disk->nr_zones) {
+ pr_warn("%s: Invalid number of conventional zones %u / %u\n",
+ disk->disk_name, nr_conv_zones, disk->nr_zones);
+@@ -1829,8 +1850,6 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ blk_mq_unfreeze_queue(q);
+ }
+
+- kfree(args.conv_zones_bitmap);
+-
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(blk_revalidate_disk_zones);
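The conversion above makes conv_zones_bitmap an RCU-protected pointer: readers take one snapshot and tolerate NULL, while the updater publishes a replacement and frees the old bitmap only after a grace period. A simplified single-threaded model using C11 atomics; real RCU semantics exist only in the kernel primitives named above:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static _Atomic(unsigned long *) bitmap;

static int zone_is_conv(unsigned int zno)
{
	unsigned long *b = atomic_load(&bitmap);	/* one snapshot */

	return b && (b[zno / (8 * sizeof(long))] >>
		     (zno % (8 * sizeof(long)))) & 1;
}

int main(void)
{
	unsigned long *b = calloc(1, sizeof(*b));

	b[0] = 0x5;
	atomic_store(&bitmap, b);	/* publish */
	printf("zone0=%d zone1=%d\n", zone_is_conv(0), zone_is_conv(1));

	b = atomic_exchange(&bitmap, NULL);	/* retire */
	free(b);	/* the kernel defers this via kfree_rcu_mightsleep() */
	return 0;
}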
+diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c
+index d5a10959ec281c..80ef16ae6a40b4 100644
+--- a/crypto/ecdsa.c
++++ b/crypto/ecdsa.c
+@@ -36,29 +36,24 @@ static int ecdsa_get_signature_rs(u64 *dest, size_t hdrlen, unsigned char tag,
+ const void *value, size_t vlen, unsigned int ndigits)
+ {
+ size_t bufsize = ndigits * sizeof(u64);
+- ssize_t diff = vlen - bufsize;
+ const char *d = value;
+
+- if (!value || !vlen)
++ if (!value || !vlen || vlen > bufsize + 1)
+ return -EINVAL;
+
+- /* diff = 0: 'value' has exacly the right size
+- * diff > 0: 'value' has too many bytes; one leading zero is allowed that
+- * makes the value a positive integer; error on more
+- * diff < 0: 'value' is missing leading zeros
++ /*
++ * vlen may be 1 byte larger than bufsize due to a leading zero byte
++ * (necessary if the most significant bit of the integer is set).
+ */
+- if (diff > 0) {
++ if (vlen > bufsize) {
+ /* skip over leading zeros that make 'value' a positive int */
+ if (*d == 0) {
+ vlen -= 1;
+- diff--;
+ d++;
+- }
+- if (diff)
++ } else {
+ return -EINVAL;
++ }
+ }
+- if (-diff >= bufsize)
+- return -EINVAL;
+
+ ecc_digits_from_bytes(d, vlen, dest, ndigits);
+
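The rewritten length check encodes the DER rule directly: INTEGERs are signed, so an r or s value whose top bit is set carries exactly one leading zero octet, and anything longer than bufsize + 1 is invalid. A standalone sketch of the accepted cases:

#include <stddef.h>
#include <stdio.h>

static int check_rs_len(const unsigned char *d, size_t vlen, size_t bufsize,
			const unsigned char **out, size_t *outlen)
{
	if (!d || !vlen || vlen > bufsize + 1)
		return -1;
	if (vlen > bufsize) {		/* one leading zero is allowed */
		if (*d != 0)
			return -1;
		d++;
		vlen--;
	}
	*out = d;
	*outlen = vlen;
	return 0;
}

int main(void)
{
	unsigned char r[] = { 0x00, 0x80, 0x01 };	/* top bit set */
	const unsigned char *p;
	size_t n;

	if (!check_rs_len(r, sizeof(r), 2, &p, &n))
		printf("accepted %zu bytes starting %02x\n", n, p[0]);
	return 0;
}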
+diff --git a/drivers/accel/qaic/qaic_drv.c b/drivers/accel/qaic/qaic_drv.c
+index bf10156c334e71..f139c564eadf9f 100644
+--- a/drivers/accel/qaic/qaic_drv.c
++++ b/drivers/accel/qaic/qaic_drv.c
+@@ -34,6 +34,7 @@
+
+ MODULE_IMPORT_NS(DMA_BUF);
+
++#define PCI_DEV_AIC080 0xa080
+ #define PCI_DEV_AIC100 0xa100
+ #define QAIC_NAME "qaic"
+ #define QAIC_DESC "Qualcomm Cloud AI Accelerators"
+@@ -365,7 +366,7 @@ static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_de
+ return NULL;
+
+ qdev->dev_state = QAIC_OFFLINE;
+- if (id->device == PCI_DEV_AIC100) {
++ if (id->device == PCI_DEV_AIC080 || id->device == PCI_DEV_AIC100) {
+ qdev->num_dbc = 16;
+ qdev->dbc = devm_kcalloc(dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
+ if (!qdev->dbc)
+@@ -607,6 +608,7 @@ static struct mhi_driver qaic_mhi_driver = {
+ };
+
+ static const struct pci_device_id qaic_ids[] = {
++ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC080), },
+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC100), },
+ { }
+ };
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 015bd8e66c1cf8..d507d5e084354b 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -549,6 +549,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "iMac12,2"),
+ },
+ },
++ {
++ .callback = video_detect_force_native,
++ /* Apple MacBook Air 7,2 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir7,2"),
++ },
++ },
+ {
+ .callback = video_detect_force_native,
+ /* Apple MacBook Air 9,1 */
+@@ -565,6 +573,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,2"),
+ },
+ },
++ {
++ .callback = video_detect_force_native,
++ /* Apple MacBook Pro 11,2 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro11,2"),
++ },
++ },
+ {
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ .callback = video_detect_force_native,
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 6af546b21574f9..cb45ef5240dab6 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -12,6 +12,7 @@
+
+ #include <linux/acpi.h>
+ #include <linux/dmi.h>
++#include <linux/pci.h>
+ #include <linux/platform_device.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+@@ -295,6 +296,7 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ /*
+ * 2. Devices which also have the skip i2c/serdev quirks and which
+ * need the x86-android-tablets module to properly work.
++ * Sorted alphabetically.
+ */
+ #if IS_ENABLED(CONFIG_X86_ANDROID_TABLETS)
+ {
+@@ -308,6 +310,19 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
++ /* Acer Iconia One 8 A1-840 (non FHD version) */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "BayTrail"),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "04/01/2014"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
++ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
++ },
++ {
++ /* Asus ME176C tablet */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ME176C"),
+@@ -318,23 +333,24 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
+- /* Lenovo Yoga Book X90F/L */
++ /* Asus TF103C transformer 2-in-1 */
+ .matches = {
+- DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "TF103C"),
+ },
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+- ACPI_QUIRK_UART1_SKIP |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
++ /* Lenovo Yoga Book X90F/L */
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "TF103C"),
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
+ },
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_UART1_SKIP |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+@@ -391,6 +407,19 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+ },
++ {
++ /* Vexia Edu Atla 10 tablet 9V version */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_UART1_SKIP |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
++ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
++ },
+ {
+ /* Whitelabel (sold as various brands) TM800A550L */
+ .matches = {
+@@ -411,6 +440,7 @@ static const struct acpi_device_id i2c_acpi_known_good_ids[] = {
+ { "10EC5640", 0 }, /* RealTek ALC5640 audio codec */
+ { "10EC5651", 0 }, /* RealTek ALC5651 audio codec */
+ { "INT33F4", 0 }, /* X-Powers AXP288 PMIC */
++ { "INT33F5", 0 }, /* TI Dollar Cove PMIC */
+ { "INT33FD", 0 }, /* Intel Crystal Cove PMIC */
+ { "INT34D3", 0 }, /* Intel Whiskey Cove PMIC */
+ { "NPCE69A", 0 }, /* Asus Transformer keyboard dock */
+@@ -439,18 +469,35 @@ static int acpi_dmi_skip_serdev_enumeration(struct device *controller_parent, bo
+ struct acpi_device *adev = ACPI_COMPANION(controller_parent);
+ const struct dmi_system_id *dmi_id;
+ long quirks = 0;
+- u64 uid;
+- int ret;
++ u64 uid = 0;
+
+- ret = acpi_dev_uid_to_integer(adev, &uid);
+- if (ret)
++ dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
++ if (!dmi_id)
+ return 0;
+
+- dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
+- if (dmi_id)
+- quirks = (unsigned long)dmi_id->driver_data;
++ quirks = (unsigned long)dmi_id->driver_data;
++
++ /* uid is left at 0 on errors and 0 is not a valid UART UID */
++ acpi_dev_uid_to_integer(adev, &uid);
++
++ /* For PCI UARTs without a UID */
++ if (!uid && dev_is_pci(controller_parent)) {
++ struct pci_dev *pdev = to_pci_dev(controller_parent);
++
++ /*
++ * Devfn values for PCI UARTs on Bay Trail SoCs, which are
++ * the only devices where this fallback is necessary.
++ */
++ if (pdev->devfn == PCI_DEVFN(0x1e, 3))
++ uid = 1;
++ else if (pdev->devfn == PCI_DEVFN(0x1e, 4))
++ uid = 2;
++ }
++
++ if (!uid)
++ return 0;
+
+- if (!dev_is_platform(controller_parent)) {
++ if (!dev_is_platform(controller_parent) && !dev_is_pci(controller_parent)) {
+ /* PNP enumerated UARTs */
+ if ((quirks & ACPI_QUIRK_PNP_UART1_SKIP) && uid == 1)
+ *skip = true;
+@@ -505,7 +552,7 @@ int acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *s
+ * Set skip to true so that the tty core creates a serdev ctrl device.
+ * The backlight driver will manually create the serdev client device.
+ */
+- if (acpi_dev_hid_match(adev, "DELL0501")) {
++ if (adev && acpi_dev_hid_match(adev, "DELL0501")) {
+ *skip = true;
+ /*
+ * Create a platform dev for dell-uart-backlight to bind to.
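The devfn fallback above works because PCI_DEVFN() packs slot and function as (slot << 3) | func, so device 0x1e functions 3 and 4 are stable identifiers for the two Bay Trail HSUARTs:

#include <stdio.h>

#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))

int main(void)
{
	/* device 0x1e, functions 3 and 4 -> UART1 and UART2 */
	printf("uart1 devfn=%#x uart2 devfn=%#x\n",
	       PCI_DEVFN(0x1e, 3), PCI_DEVFN(0x1e, 4));
	return 0;
}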
+diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
+index e1870167642658..c99f2ab105e5b7 100644
+--- a/drivers/base/arch_numa.c
++++ b/drivers/base/arch_numa.c
+@@ -208,6 +208,10 @@ static int __init numa_register_nodes(void)
+ {
+ int nid;
+
++ /* Check the validity of the memblock/node mapping */
++ if (!memblock_validate_numa_coverage(0))
++ return -EINVAL;
++
+ /* Finally register nodes. */
+ for_each_node_mask(nid, numa_nodes_parsed) {
+ unsigned long start_pfn, end_pfn;
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 7a7609298e18bd..89410127089b93 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -58,7 +58,7 @@ bool last_level_cache_is_valid(unsigned int cpu)
+ {
+ struct cacheinfo *llc;
+
+- if (!cache_leaves(cpu))
++ if (!cache_leaves(cpu) || !per_cpu_cacheinfo(cpu))
+ return false;
+
+ llc = per_cpu_cacheinfo_idx(cpu, cache_leaves(cpu) - 1);
+@@ -463,11 +463,9 @@ int __weak populate_cache_leaves(unsigned int cpu)
+ return -ENOENT;
+ }
+
+-static inline
+-int allocate_cache_info(int cpu)
++static inline int allocate_cache_info(int cpu)
+ {
+- per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu),
+- sizeof(struct cacheinfo), GFP_ATOMIC);
++ per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu), sizeof(struct cacheinfo), GFP_ATOMIC);
+ if (!per_cpu_cacheinfo(cpu)) {
+ cache_leaves(cpu) = 0;
+ return -ENOMEM;
+@@ -539,7 +537,11 @@ static inline int init_level_allocate_ci(unsigned int cpu)
+ */
+ ci_cacheinfo(cpu)->early_ci_levels = false;
+
+- if (cache_leaves(cpu) <= early_leaves)
++ /*
++ * Some architectures (e.g., x86) do not use early initialization.
++ * Allocate memory now in such case.
++ */
++ if (cache_leaves(cpu) <= early_leaves && per_cpu_cacheinfo(cpu))
+ return 0;
+
+ kfree(per_cpu_cacheinfo(cpu));
+diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
+index 83acccdc100897..bdb450436cbc53 100644
+--- a/drivers/base/regmap/internal.h
++++ b/drivers/base/regmap/internal.h
+@@ -59,6 +59,7 @@ struct regmap {
+ unsigned long raw_spinlock_flags;
+ };
+ };
++ struct lock_class_key *lock_key;
+ regmap_lock lock;
+ regmap_unlock unlock;
+ void *lock_arg; /* This is passed to lock/unlock functions */
+diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
+index 8d27d3653ea3e7..23da7b31d71534 100644
+--- a/drivers/base/regmap/regcache-maple.c
++++ b/drivers/base/regmap/regcache-maple.c
+@@ -355,6 +355,9 @@ static int regcache_maple_init(struct regmap *map)
+
+ mt_init(mt);
+
++ if (!mt_external_lock(mt) && map->lock_key)
++ lockdep_set_class_and_subclass(&mt->ma_lock, map->lock_key, 1);
++
+ if (!map->num_reg_defaults)
+ return 0;
+
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 4ded93687c1f0a..e3e2afc2c83c6b 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -598,6 +598,17 @@ int regmap_attach_dev(struct device *dev, struct regmap *map,
+ }
+ EXPORT_SYMBOL_GPL(regmap_attach_dev);
+
++static int dev_get_regmap_match(struct device *dev, void *res, void *data);
++
++static int regmap_detach_dev(struct device *dev, struct regmap *map)
++{
++ if (!dev)
++ return 0;
++
++ return devres_release(dev, dev_get_regmap_release,
++ dev_get_regmap_match, (void *)map->name);
++}
++
+ static enum regmap_endian regmap_get_reg_endian(const struct regmap_bus *bus,
+ const struct regmap_config *config)
+ {
+@@ -745,6 +756,7 @@ struct regmap *__regmap_init(struct device *dev,
+ lock_key, lock_name);
+ }
+ map->lock_arg = map;
++ map->lock_key = lock_key;
+ }
+
+ /*
+@@ -1444,6 +1456,7 @@ void regmap_exit(struct regmap *map)
+ {
+ struct regmap_async *async;
+
++ regmap_detach_dev(map->dev, map);
+ regcache_exit(map);
+
+ regmap_debugfs_exit(map);
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index d6a1ba969266a4..d0432b1707ceb6 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -298,17 +298,30 @@ static void mark_idle(struct zram *zram, ktime_t cutoff)
+ /*
+ * Do not mark ZRAM_UNDER_WB slot as ZRAM_IDLE to close race.
+ * See the comment in writeback_store.
++ *
++ * Also do not mark ZRAM_SAME slots as ZRAM_IDLE, because no
++ * post-processing (recompress, writeback) happens to the
++ * ZRAM_SAME slot.
++ *
++ * And ZRAM_WB slots simply cannot be ZRAM_IDLE.
+ */
+ zram_slot_lock(zram, index);
+- if (zram_allocated(zram, index) &&
+- !zram_test_flag(zram, index, ZRAM_UNDER_WB)) {
++ if (!zram_allocated(zram, index) ||
++ zram_test_flag(zram, index, ZRAM_WB) ||
++ zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
++ zram_test_flag(zram, index, ZRAM_SAME)) {
++ zram_slot_unlock(zram, index);
++ continue;
++ }
++
+ #ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME
+- is_idle = !cutoff || ktime_after(cutoff,
+- zram->table[index].ac_time);
++ is_idle = !cutoff ||
++ ktime_after(cutoff, zram->table[index].ac_time);
+ #endif
+- if (is_idle)
+- zram_set_flag(zram, index, ZRAM_IDLE);
+- }
++ if (is_idle)
++ zram_set_flag(zram, index, ZRAM_IDLE);
++ else
++ zram_clear_flag(zram, index, ZRAM_IDLE);
+ zram_slot_unlock(zram, index);
+ }
+ }
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 4ccaddb46ddd81..11755cb1eb1635 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -524,6 +524,8 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3591), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe123), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe125), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+
+@@ -563,6 +565,16 @@ static const struct usb_device_id quirks_table[] = {
+ { USB_DEVICE(0x043e, 0x3109), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+
++ /* Additional MediaTek MT7920 Bluetooth devices */
++ { USB_DEVICE(0x0489, 0xe134), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3620), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3621), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3622), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++
+ /* Additional MediaTek MT7921 Bluetooth devices */
+ { USB_DEVICE(0x0489, 0xe0c8), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+@@ -630,12 +642,24 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+
+ /* Additional MediaTek MT7925 Bluetooth devices */
++ { USB_DEVICE(0x0489, 0xe111), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe113), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe118), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe11e), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe124), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe139), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe14f), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe150), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe151), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3602), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3603), .driver_info = BTUSB_MEDIATEK |
+@@ -3897,6 +3921,8 @@ static int btusb_probe(struct usb_interface *intf,
+ set_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, &hdev->quirks);
+ }
+
+ if (!reset)
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index fdd8ea989ed24a..bc21b292144926 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -508,6 +508,8 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+ u32 rate;
+ int i;
+
++ clk_data->num = EN7523_NUM_CLOCKS;
++
+ for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
+ const struct en_clk_desc *desc = &en7523_base_clks[i];
+ u32 reg = desc->div_reg ? desc->div_reg : desc->base_reg;
+@@ -529,8 +531,6 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+
+ hw = en7523_register_pcie_clk(dev, np_base);
+ clk_data->hws[EN7523_CLK_PCIE] = hw;
+-
+- clk_data->num = EN7523_NUM_CLOCKS;
+ }
+
+ static int en7523_clk_hw_init(struct platform_device *pdev,
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 4444dafa4e3dfa..9ba675f229b144 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -959,10 +959,10 @@ config SM_DISPCC_8450
+ config SM_DISPCC_8550
+ tristate "SM8550 Display Clock Controller"
+ depends on ARM64 || COMPILE_TEST
+- depends on SM_GCC_8550 || SM_GCC_8650
++ depends on SM_GCC_8550 || SM_GCC_8650 || SAR_GCC_2130P
+ help
+ Support for the display clock controller on Qualcomm Technologies, Inc
+- SM8550 or SM8650 devices.
++ SAR2130P, SM8550 or SM8650 devices.
+ Say Y if you want to support display devices and functionality such as
+ splash screen.
+
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index be9bee6ab65f6e..49687512184b92 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -267,6 +267,17 @@ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ [PLL_OFF_OPMODE] = 0x30,
+ [PLL_OFF_STATUS] = 0x3c,
+ },
++ [CLK_ALPHA_PLL_TYPE_NSS_HUAYRA] = {
++ [PLL_OFF_L_VAL] = 0x04,
++ [PLL_OFF_ALPHA_VAL] = 0x08,
++ [PLL_OFF_TEST_CTL] = 0x0c,
++ [PLL_OFF_TEST_CTL_U] = 0x10,
++ [PLL_OFF_USER_CTL] = 0x14,
++ [PLL_OFF_CONFIG_CTL] = 0x18,
++ [PLL_OFF_CONFIG_CTL_U] = 0x1c,
++ [PLL_OFF_STATUS] = 0x20,
++ },
+ };
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_regs);
+
+diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h
+index 55eca04b23a1fc..c6d1b8429f951a 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.h
++++ b/drivers/clk/qcom/clk-alpha-pll.h
+@@ -32,6 +32,7 @@ enum {
+ CLK_ALPHA_PLL_TYPE_BRAMMO_EVO,
+ CLK_ALPHA_PLL_TYPE_STROMER,
+ CLK_ALPHA_PLL_TYPE_STROMER_PLUS,
++ CLK_ALPHA_PLL_TYPE_NSS_HUAYRA,
+ CLK_ALPHA_PLL_TYPE_MAX,
+ };
+
+diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
+index 8e0f3372dc7a83..80f1f4fcd52a68 100644
+--- a/drivers/clk/qcom/clk-rcg.h
++++ b/drivers/clk/qcom/clk-rcg.h
+@@ -198,6 +198,7 @@ extern const struct clk_ops clk_byte2_ops;
+ extern const struct clk_ops clk_pixel_ops;
+ extern const struct clk_ops clk_gfx3d_ops;
+ extern const struct clk_ops clk_rcg2_shared_ops;
++extern const struct clk_ops clk_rcg2_shared_floor_ops;
+ extern const struct clk_ops clk_rcg2_shared_no_init_park_ops;
+ extern const struct clk_ops clk_dp_ops;
+
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index bf26c5448f0067..bf6406f5279a4c 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -1186,15 +1186,23 @@ clk_rcg2_shared_force_enable_clear(struct clk_hw *hw, const struct freq_tbl *f)
+ return clk_rcg2_clear_force_enable(hw);
+ }
+
+-static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
+- unsigned long parent_rate)
++static int __clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate,
++ enum freq_policy policy)
+ {
+ struct clk_rcg2 *rcg = to_clk_rcg2(hw);
+ const struct freq_tbl *f;
+
+- f = qcom_find_freq(rcg->freq_tbl, rate);
+- if (!f)
++ switch (policy) {
++ case FLOOR:
++ f = qcom_find_freq_floor(rcg->freq_tbl, rate);
++ break;
++ case CEIL:
++ f = qcom_find_freq(rcg->freq_tbl, rate);
++ break;
++ default:
+ return -EINVAL;
++ }
+
+ /*
+ * In case clock is disabled, update the M, N and D registers, cache
+@@ -1207,10 +1215,28 @@ static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
+ return clk_rcg2_shared_force_enable_clear(hw, f);
+ }
+
++static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate)
++{
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, CEIL);
++}
++
+ static int clk_rcg2_shared_set_rate_and_parent(struct clk_hw *hw,
+ unsigned long rate, unsigned long parent_rate, u8 index)
+ {
+- return clk_rcg2_shared_set_rate(hw, rate, parent_rate);
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, CEIL);
++}
++
++static int clk_rcg2_shared_set_floor_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate)
++{
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, FLOOR);
++}
++
++static int clk_rcg2_shared_set_floor_rate_and_parent(struct clk_hw *hw,
++ unsigned long rate, unsigned long parent_rate, u8 index)
++{
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, FLOOR);
+ }
+
+ static int clk_rcg2_shared_enable(struct clk_hw *hw)
+@@ -1348,6 +1374,18 @@ const struct clk_ops clk_rcg2_shared_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_rcg2_shared_ops);
+
++const struct clk_ops clk_rcg2_shared_floor_ops = {
++ .enable = clk_rcg2_shared_enable,
++ .disable = clk_rcg2_shared_disable,
++ .get_parent = clk_rcg2_shared_get_parent,
++ .set_parent = clk_rcg2_shared_set_parent,
++ .recalc_rate = clk_rcg2_shared_recalc_rate,
++ .determine_rate = clk_rcg2_determine_floor_rate,
++ .set_rate = clk_rcg2_shared_set_floor_rate,
++ .set_rate_and_parent = clk_rcg2_shared_set_floor_rate_and_parent,
++};
++EXPORT_SYMBOL_GPL(clk_rcg2_shared_floor_ops);
++
+ static int clk_rcg2_shared_no_init_park(struct clk_hw *hw)
+ {
+ struct clk_rcg2 *rcg = to_clk_rcg2(hw);
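The new shared_floor ops differ from clk_rcg2_shared_ops only in lookup policy: CEIL picks the lowest table rate at or above the request, FLOOR the highest at or below it. A standalone sketch of the two lookups; the table values are borrowed from the SAR2130P MDP table above, in MHz:

#include <stdio.h>

static const unsigned long tbl[] = { 200, 325, 514, 0 };

static unsigned long find_ceil(unsigned long rate)
{
	for (int i = 0; tbl[i]; i++)
		if (tbl[i] >= rate)
			return tbl[i];
	return 0;	/* no entry high enough */
}

static unsigned long find_floor(unsigned long rate)
{
	unsigned long best = 0;

	for (int i = 0; tbl[i]; i++)
		if (tbl[i] <= rate)
			best = tbl[i];
	return best;	/* 0 if no entry low enough */
}

int main(void)
{
	printf("ceil(300)=%lu floor(300)=%lu\n",
	       find_ceil(300), find_floor(300));	/* 325 and 200 */
	return 0;
}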
+diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
+index 4acde937114af3..eefc322ce36798 100644
+--- a/drivers/clk/qcom/clk-rpmh.c
++++ b/drivers/clk/qcom/clk-rpmh.c
+@@ -389,6 +389,18 @@ DEFINE_CLK_RPMH_BCM(ipa, "IP0");
+ DEFINE_CLK_RPMH_BCM(pka, "PKA0");
+ DEFINE_CLK_RPMH_BCM(qpic_clk, "QP0");
+
++static struct clk_hw *sar2130p_rpmh_clocks[] = {
++ [RPMH_CXO_CLK] = &clk_rpmh_bi_tcxo_div1.hw,
++ [RPMH_CXO_CLK_A] = &clk_rpmh_bi_tcxo_div1_ao.hw,
++ [RPMH_RF_CLK1] = &clk_rpmh_rf_clk1_a.hw,
++ [RPMH_RF_CLK1_A] = &clk_rpmh_rf_clk1_a_ao.hw,
++};
++
++static const struct clk_rpmh_desc clk_rpmh_sar2130p = {
++ .clks = sar2130p_rpmh_clocks,
++ .num_clks = ARRAY_SIZE(sar2130p_rpmh_clocks),
++};
++
+ static struct clk_hw *sdm845_rpmh_clocks[] = {
+ [RPMH_CXO_CLK] = &clk_rpmh_bi_tcxo_div2.hw,
+ [RPMH_CXO_CLK_A] = &clk_rpmh_bi_tcxo_div2_ao.hw,
+@@ -880,6 +892,7 @@ static int clk_rpmh_probe(struct platform_device *pdev)
+ static const struct of_device_id clk_rpmh_match_table[] = {
+ { .compatible = "qcom,qdu1000-rpmh-clk", .data = &clk_rpmh_qdu1000},
+ { .compatible = "qcom,sa8775p-rpmh-clk", .data = &clk_rpmh_sa8775p},
++ { .compatible = "qcom,sar2130p-rpmh-clk", .data = &clk_rpmh_sar2130p},
+ { .compatible = "qcom,sc7180-rpmh-clk", .data = &clk_rpmh_sc7180},
+ { .compatible = "qcom,sc8180x-rpmh-clk", .data = &clk_rpmh_sc8180x},
+ { .compatible = "qcom,sc8280xp-rpmh-clk", .data = &clk_rpmh_sc8280xp},
+diff --git a/drivers/clk/qcom/dispcc-sm8550.c b/drivers/clk/qcom/dispcc-sm8550.c
+index 7f9021ca0ecb0e..e41d4104d77021 100644
+--- a/drivers/clk/qcom/dispcc-sm8550.c
++++ b/drivers/clk/qcom/dispcc-sm8550.c
+@@ -75,7 +75,7 @@ static struct pll_vco lucid_ole_vco[] = {
+ { 249600000, 2000000000, 0 },
+ };
+
+-static const struct alpha_pll_config disp_cc_pll0_config = {
++static struct alpha_pll_config disp_cc_pll0_config = {
+ .l = 0xd,
+ .alpha = 0x6492,
+ .config_ctl_val = 0x20485699,
+@@ -106,7 +106,7 @@ static struct clk_alpha_pll disp_cc_pll0 = {
+ },
+ };
+
+-static const struct alpha_pll_config disp_cc_pll1_config = {
++static struct alpha_pll_config disp_cc_pll1_config = {
+ .l = 0x1f,
+ .alpha = 0x4000,
+ .config_ctl_val = 0x20485699,
+@@ -594,6 +594,13 @@ static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src[] = {
+ { }
+ };
+
++static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src_sar2130p[] = {
++ F(200000000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
++ F(325000000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
++ F(514000000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
++ { }
++};
++
+ static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src_sm8650[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(85714286, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
+@@ -1750,6 +1757,7 @@ static struct qcom_cc_desc disp_cc_sm8550_desc = {
+ };
+
+ static const struct of_device_id disp_cc_sm8550_match_table[] = {
++ { .compatible = "qcom,sar2130p-dispcc" },
+ { .compatible = "qcom,sm8550-dispcc" },
+ { .compatible = "qcom,sm8650-dispcc" },
+ { }
+@@ -1780,6 +1788,12 @@ static int disp_cc_sm8550_probe(struct platform_device *pdev)
+ disp_cc_mdss_mdp_clk_src.freq_tbl = ftbl_disp_cc_mdss_mdp_clk_src_sm8650;
+ disp_cc_mdss_dptx1_usb_router_link_intf_clk.clkr.hw.init->parent_hws[0] =
+ &disp_cc_mdss_dptx1_link_div_clk_src.clkr.hw;
++ } else if (of_device_is_compatible(pdev->dev.of_node, "qcom,sar2130p-dispcc")) {
++ disp_cc_pll0_config.l = 0x1f;
++ disp_cc_pll0_config.alpha = 0x4000;
++ disp_cc_pll0_config.user_ctl_val = 0x1;
++ disp_cc_pll1_config.user_ctl_val = 0x1;
++ disp_cc_mdss_mdp_clk_src.freq_tbl = ftbl_disp_cc_mdss_mdp_clk_src_sar2130p;
+ }
+
+ clk_lucid_ole_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
+diff --git a/drivers/clk/qcom/tcsrcc-sm8550.c b/drivers/clk/qcom/tcsrcc-sm8550.c
+index e5e8f2e82b949d..41d73f92a000ab 100644
+--- a/drivers/clk/qcom/tcsrcc-sm8550.c
++++ b/drivers/clk/qcom/tcsrcc-sm8550.c
+@@ -129,6 +129,13 @@ static struct clk_branch tcsr_usb3_clkref_en = {
+ },
+ };
+
++static struct clk_regmap *tcsr_cc_sar2130p_clocks[] = {
++ [TCSR_PCIE_0_CLKREF_EN] = &tcsr_pcie_0_clkref_en.clkr,
++ [TCSR_PCIE_1_CLKREF_EN] = &tcsr_pcie_1_clkref_en.clkr,
++ [TCSR_USB2_CLKREF_EN] = &tcsr_usb2_clkref_en.clkr,
++ [TCSR_USB3_CLKREF_EN] = &tcsr_usb3_clkref_en.clkr,
++};
++
+ static struct clk_regmap *tcsr_cc_sm8550_clocks[] = {
+ [TCSR_PCIE_0_CLKREF_EN] = &tcsr_pcie_0_clkref_en.clkr,
+ [TCSR_PCIE_1_CLKREF_EN] = &tcsr_pcie_1_clkref_en.clkr,
+@@ -146,6 +153,12 @@ static const struct regmap_config tcsr_cc_sm8550_regmap_config = {
+ .fast_io = true,
+ };
+
++static const struct qcom_cc_desc tcsr_cc_sar2130p_desc = {
++ .config = &tcsr_cc_sm8550_regmap_config,
++ .clks = tcsr_cc_sar2130p_clocks,
++ .num_clks = ARRAY_SIZE(tcsr_cc_sar2130p_clocks),
++};
++
+ static const struct qcom_cc_desc tcsr_cc_sm8550_desc = {
+ .config = &tcsr_cc_sm8550_regmap_config,
+ .clks = tcsr_cc_sm8550_clocks,
+@@ -153,7 +166,8 @@ static const struct qcom_cc_desc tcsr_cc_sm8550_desc = {
+ };
+
+ static const struct of_device_id tcsr_cc_sm8550_match_table[] = {
+- { .compatible = "qcom,sm8550-tcsr" },
++ { .compatible = "qcom,sar2130p-tcsr", .data = &tcsr_cc_sar2130p_desc },
++ { .compatible = "qcom,sm8550-tcsr", .data = &tcsr_cc_sm8550_desc },
+ { }
+ };
+ MODULE_DEVICE_TABLE(of, tcsr_cc_sm8550_match_table);
+@@ -162,7 +176,7 @@ static int tcsr_cc_sm8550_probe(struct platform_device *pdev)
+ {
+ struct regmap *regmap;
+
+- regmap = qcom_cc_map(pdev, &tcsr_cc_sm8550_desc);
++ regmap = qcom_cc_map(pdev, of_device_get_match_data(&pdev->dev));
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
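The tcsrcc change just above replaces the hard-coded descriptor with per-compatible match data, so one probe routine serves both SoCs. A small userspace model of that lookup; the table contents and the of_device_get_match_data() stand-in are illustrative only:

    #include <stdio.h>
    #include <string.h>

    struct desc { const char *name; int num_clks; };

    static const struct desc sar2130p = { "sar2130p", 4 };
    static const struct desc sm8550   = { "sm8550", 5 };

    struct of_id { const char *compatible; const void *data; };

    static const struct of_id match_table[] = {
        { "qcom,sar2130p-tcsr", &sar2130p },
        { "qcom,sm8550-tcsr",   &sm8550 },
        { NULL, NULL }
    };

    /* Stand-in for of_device_get_match_data(): walk the table. */
    static const void *get_match_data(const char *compatible)
    {
        const struct of_id *id;

        for (id = match_table; id->compatible; id++)
            if (!strcmp(id->compatible, compatible))
                return id->data;
        return NULL;
    }

    int main(void)
    {
        const struct desc *d = get_match_data("qcom,sar2130p-tcsr");

        if (d)
            printf("%s: %d clocks\n", d->name, d->num_clks);
        return 0;
    }
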
+diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
+index 8a08ffde31e758..6657d4b30af9dc 100644
+--- a/drivers/dma-buf/dma-fence-array.c
++++ b/drivers/dma-buf/dma-fence-array.c
+@@ -103,10 +103,36 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
+ static bool dma_fence_array_signaled(struct dma_fence *fence)
+ {
+ struct dma_fence_array *array = to_dma_fence_array(fence);
++ int num_pending;
++ unsigned int i;
+
+- if (atomic_read(&array->num_pending) > 0)
++ /*
++ * We need to read num_pending before checking the enable_signal bit
++ * to avoid racing with the enable_signaling() implementation, which
++ * might decrement the counter, and cause a partial check.
++ * atomic_read_acquire() pairs with atomic_dec_and_test() in
++ * dma_fence_array_enable_signaling()
++ *
++ * The !--num_pending check is here to account for the any_signaled case
++ * if we race with enable_signaling(), that means the !num_pending check
++ * in the is_signalling_enabled branch might be outdated (num_pending
++ * might have been decremented), but that's fine. The user will get the
++ * right value when testing again later.
++ */
++ num_pending = atomic_read_acquire(&array->num_pending);
++ if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &array->base.flags)) {
++ if (num_pending <= 0)
++ goto signal;
+ return false;
++ }
++
++ for (i = 0; i < array->num_fences; ++i) {
++ if (dma_fence_is_signaled(array->fences[i]) && !--num_pending)
++ goto signal;
++ }
++ return false;
+
++signal:
+ dma_fence_array_clear_pending_error(array);
+ return true;
+ }
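+
The dma-fence-array fix reads num_pending with acquire ordering before testing the enable-signaling bit, and falls back to counting signaled fences by hand when signaling was never enabled. A userspace model of that two-phase check, with the fence array reduced to plain flags:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct fence_array {
        atomic_int num_pending;
        bool signaling_enabled;
        bool signaled[4];
        int num_fences;
    };

    static bool array_signaled(struct fence_array *a)
    {
        /* acquire pairs with the release done by the signaling path */
        int num_pending = atomic_load_explicit(&a->num_pending,
                                               memory_order_acquire);
        int i;

        if (a->signaling_enabled)
            return num_pending <= 0;

        /* signaling not enabled yet: count signaled fences by hand */
        for (i = 0; i < a->num_fences; i++)
            if (a->signaled[i] && !--num_pending)
                return true;
        return false;
    }

    int main(void)
    {
        struct fence_array a = {
            .num_pending = 1, .signaling_enabled = false,
            .signaled = { true, false, true, false }, .num_fences = 4,
        };

        printf("signaled: %d\n", array_signaled(&a));  /* any-signaled: 1 */
        return 0;
    }
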
+diff --git a/drivers/dma-buf/dma-fence-unwrap.c b/drivers/dma-buf/dma-fence-unwrap.c
+index 628af51c81af3d..6345062731f153 100644
+--- a/drivers/dma-buf/dma-fence-unwrap.c
++++ b/drivers/dma-buf/dma-fence-unwrap.c
+@@ -12,6 +12,7 @@
+ #include <linux/dma-fence-chain.h>
+ #include <linux/dma-fence-unwrap.h>
+ #include <linux/slab.h>
++#include <linux/sort.h>
+
+ /* Internal helper to start new array iteration, don't use directly */
+ static struct dma_fence *
+@@ -59,6 +60,25 @@ struct dma_fence *dma_fence_unwrap_next(struct dma_fence_unwrap *cursor)
+ }
+ EXPORT_SYMBOL_GPL(dma_fence_unwrap_next);
+
++
++static int fence_cmp(const void *_a, const void *_b)
++{
++ struct dma_fence *a = *(struct dma_fence **)_a;
++ struct dma_fence *b = *(struct dma_fence **)_b;
++
++ if (a->context < b->context)
++ return -1;
++ else if (a->context > b->context)
++ return 1;
++
++ if (dma_fence_is_later(b, a))
++ return 1;
++ else if (dma_fence_is_later(a, b))
++ return -1;
++
++ return 0;
++}
++
+ /* Implementation for the dma_fence_merge() macro, don't use directly */
+ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
+ struct dma_fence **fences,
+@@ -67,8 +87,7 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
+ struct dma_fence_array *result;
+ struct dma_fence *tmp, **array;
+ ktime_t timestamp;
+- unsigned int i;
+- size_t count;
++ int i, j, count;
+
+ count = 0;
+ timestamp = ns_to_ktime(0);
+@@ -96,78 +115,55 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
+ if (!array)
+ return NULL;
+
+- /*
+- * This trashes the input fence array and uses it as position for the
+- * following merge loop. This works because the dma_fence_merge()
+- * wrapper macro is creating this temporary array on the stack together
+- * with the iterators.
+- */
+- for (i = 0; i < num_fences; ++i)
+- fences[i] = dma_fence_unwrap_first(fences[i], &iter[i]);
+-
+ count = 0;
+- do {
+- unsigned int sel;
+-
+-restart:
+- tmp = NULL;
+- for (i = 0; i < num_fences; ++i) {
+- struct dma_fence *next;
+-
+- while (fences[i] && dma_fence_is_signaled(fences[i]))
+- fences[i] = dma_fence_unwrap_next(&iter[i]);
+-
+- next = fences[i];
+- if (!next)
+- continue;
+-
+- /*
+- * We can't guarantee that inpute fences are ordered by
+- * context, but it is still quite likely when this
+- * function is used multiple times. So attempt to order
+- * the fences by context as we pass over them and merge
+- * fences with the same context.
+- */
+- if (!tmp || tmp->context > next->context) {
+- tmp = next;
+- sel = i;
+-
+- } else if (tmp->context < next->context) {
+- continue;
+-
+- } else if (dma_fence_is_later(tmp, next)) {
+- fences[i] = dma_fence_unwrap_next(&iter[i]);
+- goto restart;
++ for (i = 0; i < num_fences; ++i) {
++ dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
++ if (!dma_fence_is_signaled(tmp)) {
++ array[count++] = dma_fence_get(tmp);
+ } else {
+- fences[sel] = dma_fence_unwrap_next(&iter[sel]);
+- goto restart;
++ ktime_t t = dma_fence_timestamp(tmp);
++
++ if (ktime_after(t, timestamp))
++ timestamp = t;
+ }
+ }
++ }
+
+- if (tmp) {
+- array[count++] = dma_fence_get(tmp);
+- fences[sel] = dma_fence_unwrap_next(&iter[sel]);
+- }
+- } while (tmp);
++ if (count == 0 || count == 1)
++ goto return_fastpath;
+
+- if (count == 0) {
+- tmp = dma_fence_allocate_private_stub(ktime_get());
+- goto return_tmp;
+- }
++ sort(array, count, sizeof(*array), fence_cmp, NULL);
+
+- if (count == 1) {
+- tmp = array[0];
+- goto return_tmp;
++ /*
++ * Only keep the most recent fence for each context.
++ */
++ j = 0;
++ for (i = 1; i < count; i++) {
++ if (array[i]->context == array[j]->context)
++ dma_fence_put(array[i]);
++ else
++ array[++j] = array[i];
+ }
+-
+- result = dma_fence_array_create(count, array,
+- dma_fence_context_alloc(1),
+- 1, false);
+- if (!result) {
+- tmp = NULL;
+- goto return_tmp;
++ count = ++j;
++
++ if (count > 1) {
++ result = dma_fence_array_create(count, array,
++ dma_fence_context_alloc(1),
++ 1, false);
++ if (!result) {
++ for (i = 0; i < count; i++)
++ dma_fence_put(array[i]);
++ tmp = NULL;
++ goto return_tmp;
++ }
++ return &result->base;
+ }
+- return &result->base;
++
++return_fastpath:
++ if (count == 0)
++ tmp = dma_fence_allocate_private_stub(timestamp);
++ else
++ tmp = array[0];
+
+ return_tmp:
+ kfree(array);
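+
The rewritten merge path collects all unsignaled fences, sorts them so fences of the same context sit together with the most recent first, then compacts the array to one fence per context. A standalone sketch of that sort-and-dedup step, using a plain seqno comparison in place of dma_fence_is_later():

    #include <stdio.h>
    #include <stdlib.h>

    struct fence { unsigned long context; unsigned long seqno; };

    static int fence_cmp(const void *_a, const void *_b)
    {
        const struct fence *a = _a, *b = _b;

        if (a->context != b->context)
            return a->context < b->context ? -1 : 1;
        /* the later fence sorts first, so the keeper is first per context */
        if (a->seqno != b->seqno)
            return a->seqno > b->seqno ? -1 : 1;
        return 0;
    }

    int main(void)
    {
        struct fence array[] = { { 2, 7 }, { 1, 3 }, { 2, 9 }, { 1, 5 } };
        int count = 4, i, j = 0;

        qsort(array, count, sizeof(*array), fence_cmp);

        /* keep only the most recent fence of each context */
        for (i = 1; i < count; i++) {
            if (array[i].context == array[j].context)
                continue;               /* older duplicate: drop it */
            array[++j] = array[i];
        }
        count = j + 1;

        for (i = 0; i < count; i++)
            printf("ctx %lu seq %lu\n", array[i].context, array[i].seqno);
        return 0;                       /* ctx 1 seq 5, ctx 2 seq 9 */
    }
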
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 2e4260ba5f793c..14afd68664a911 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -1742,9 +1742,11 @@ EXPORT_SYMBOL_GPL(qcom_scm_qseecom_app_send);
+ * any potential issues with this, only allow validated machines for now.
+ */
+ static const struct of_device_id qcom_scm_qseecom_allowlist[] __maybe_unused = {
++ { .compatible = "dell,xps13-9345" },
+ { .compatible = "lenovo,flex-5g" },
+ { .compatible = "lenovo,thinkpad-t14s" },
+ { .compatible = "lenovo,thinkpad-x13s", },
++ { .compatible = "lenovo,yoga-slim7x" },
+ { .compatible = "microsoft,romulus13", },
+ { .compatible = "microsoft,romulus15", },
+ { .compatible = "qcom,sc8180x-primus" },
+diff --git a/drivers/gpio/gpio-grgpio.c b/drivers/gpio/gpio-grgpio.c
+index 017c7170eb57c4..620793740c6681 100644
+--- a/drivers/gpio/gpio-grgpio.c
++++ b/drivers/gpio/gpio-grgpio.c
+@@ -328,6 +328,7 @@ static const struct irq_domain_ops grgpio_irq_domain_ops = {
+ static int grgpio_probe(struct platform_device *ofdev)
+ {
+ struct device_node *np = ofdev->dev.of_node;
++ struct device *dev = &ofdev->dev;
+ void __iomem *regs;
+ struct gpio_chip *gc;
+ struct grgpio_priv *priv;
+@@ -337,7 +338,7 @@ static int grgpio_probe(struct platform_device *ofdev)
+ int size;
+ int i;
+
+- priv = devm_kzalloc(&ofdev->dev, sizeof(*priv), GFP_KERNEL);
++ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+@@ -346,28 +347,31 @@ static int grgpio_probe(struct platform_device *ofdev)
+ return PTR_ERR(regs);
+
+ gc = &priv->gc;
+- err = bgpio_init(gc, &ofdev->dev, 4, regs + GRGPIO_DATA,
++ err = bgpio_init(gc, dev, 4, regs + GRGPIO_DATA,
+ regs + GRGPIO_OUTPUT, NULL, regs + GRGPIO_DIR, NULL,
+ BGPIOF_BIG_ENDIAN_BYTE_ORDER);
+ if (err) {
+- dev_err(&ofdev->dev, "bgpio_init() failed\n");
++ dev_err(dev, "bgpio_init() failed\n");
+ return err;
+ }
+
+ priv->regs = regs;
+ priv->imask = gc->read_reg(regs + GRGPIO_IMASK);
+- priv->dev = &ofdev->dev;
++ priv->dev = dev;
+
+ gc->owner = THIS_MODULE;
+ gc->to_irq = grgpio_to_irq;
+- gc->label = devm_kasprintf(&ofdev->dev, GFP_KERNEL, "%pOF", np);
++ gc->label = devm_kasprintf(dev, GFP_KERNEL, "%pOF", np);
++ if (!gc->label)
++ return -ENOMEM;
++
+ gc->base = -1;
+
+ err = of_property_read_u32(np, "nbits", &prop);
+ if (err || prop <= 0 || prop > GRGPIO_MAX_NGPIO) {
+ gc->ngpio = GRGPIO_MAX_NGPIO;
+- dev_dbg(&ofdev->dev,
+- "No or invalid nbits property: assume %d\n", gc->ngpio);
++ dev_dbg(dev, "No or invalid nbits property: assume %d\n",
++ gc->ngpio);
+ } else {
+ gc->ngpio = prop;
+ }
+@@ -379,7 +383,7 @@ static int grgpio_probe(struct platform_device *ofdev)
+ irqmap = (s32 *)of_get_property(np, "irqmap", &size);
+ if (irqmap) {
+ if (size < gc->ngpio) {
+- dev_err(&ofdev->dev,
++ dev_err(dev,
+ "irqmap shorter than ngpio (%d < %d)\n",
+ size, gc->ngpio);
+ return -EINVAL;
+@@ -389,7 +393,7 @@ static int grgpio_probe(struct platform_device *ofdev)
+ &grgpio_irq_domain_ops,
+ priv);
+ if (!priv->domain) {
+- dev_err(&ofdev->dev, "Could not add irq domain\n");
++ dev_err(dev, "Could not add irq domain\n");
+ return -EINVAL;
+ }
+
+@@ -419,13 +423,13 @@ static int grgpio_probe(struct platform_device *ofdev)
+
+ err = gpiochip_add_data(gc, priv);
+ if (err) {
+- dev_err(&ofdev->dev, "Could not add gpiochip\n");
++ dev_err(dev, "Could not add gpiochip\n");
+ if (priv->domain)
+ irq_domain_remove(priv->domain);
+ return err;
+ }
+
+- dev_info(&ofdev->dev, "regs=0x%p, base=%d, ngpio=%d, irqs=%s\n",
++ dev_info(dev, "regs=0x%p, base=%d, ngpio=%d, irqs=%s\n",
+ priv->regs, gc->base, gc->ngpio, priv->domain ? "on" : "off");
+
+ return 0;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 2b02655abb56ea..44372f8647d51a 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -14,6 +14,7 @@
+ #include <linux/idr.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
++#include <linux/irqdesc.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+ #include <linux/lockdep.h>
+@@ -713,6 +714,45 @@ bool gpiochip_line_is_valid(const struct gpio_chip *gc,
+ }
+ EXPORT_SYMBOL_GPL(gpiochip_line_is_valid);
+
++static void gpiod_free_irqs(struct gpio_desc *desc)
++{
++ int irq = gpiod_to_irq(desc);
++ struct irq_desc *irqd = irq_to_desc(irq);
++ void *cookie;
++
++ for (;;) {
++ /*
++ * Make sure the action doesn't go away while we're
++ * dereferencing it. Retrieve and store the cookie value.
++ * If the irq is freed after we release the lock, that's
++ * alright - the underlying maple tree lookup will return NULL
++ * and nothing will happen in free_irq().
++ */
++ scoped_guard(mutex, &irqd->request_mutex) {
++ if (!irq_desc_has_action(irqd))
++ return;
++
++ cookie = irqd->action->dev_id;
++ }
++
++ free_irq(irq, cookie);
++ }
++}
++
++/*
++ * The chip is going away but there may be users who had requested interrupts
++ * on its GPIO lines who have no idea about its removal and have no way of
++ * being notified about it. We need to free any interrupts still in use here or
++ * we'll leak memory and resources (like procfs files).
++ */
++static void gpiochip_free_remaining_irqs(struct gpio_chip *gc)
++{
++ struct gpio_desc *desc;
++
++ for_each_gpio_desc_with_flag(gc, desc, FLAG_USED_AS_IRQ)
++ gpiod_free_irqs(desc);
++}
++
+ static void gpiodev_release(struct device *dev)
+ {
+ struct gpio_device *gdev = to_gpio_device(dev);
+@@ -1125,6 +1165,7 @@ void gpiochip_remove(struct gpio_chip *gc)
+ /* FIXME: should the legacy sysfs handling be moved to gpio_device? */
+ gpiochip_sysfs_unregister(gdev);
+ gpiochip_free_hogs(gc);
++ gpiochip_free_remaining_irqs(gc);
+
+ scoped_guard(mutex, &gpio_devices_lock)
+ list_del_rcu(&gdev->list);
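+
gpiod_free_irqs() above has to drop the request_mutex before calling free_irq(), so it re-checks for a remaining action on every pass and only carries the cookie across the unlock. A userspace model of that lock/peek/unlock/free loop, with pthread standing in for the kernel mutex and a simple list standing in for the irq actions:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct action { void *dev_id; struct action *next; };

    static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct action *actions;

    static void free_irq(void *cookie)
    {
        pthread_mutex_lock(&request_mutex);
        if (actions && actions->dev_id == cookie) {
            struct action *a = actions;

            actions = a->next;
            free(a);
            printf("freed handler %p\n", cookie);
        }
        pthread_mutex_unlock(&request_mutex);
    }

    static void free_remaining_irqs(void)
    {
        for (;;) {
            void *cookie;

            pthread_mutex_lock(&request_mutex);
            if (!actions) {
                pthread_mutex_unlock(&request_mutex);
                return;
            }
            cookie = actions->dev_id;   /* grab cookie under the lock */
            pthread_mutex_unlock(&request_mutex);

            free_irq(cookie);           /* free outside the lock */
        }
    }

    int main(void)
    {
        for (int i = 0; i < 2; i++) {
            struct action *a = malloc(sizeof(*a));

            a->dev_id = a;
            a->next = actions;
            actions = a;
        }
        free_remaining_irqs();
        return 0;
    }
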
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 7dd55ed57c1d97..b8d4e07d2043ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -800,6 +800,7 @@ int amdgpu_acpi_power_shift_control(struct amdgpu_device *adev,
+ return -EIO;
+ }
+
++ kfree(info);
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 1f08cb88d51be5..51904906545e59 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3666,7 +3666,7 @@ static int amdgpu_device_ip_resume_phase1(struct amdgpu_device *adev)
+ *
+ * @adev: amdgpu_device pointer
+ *
+- * First resume function for hardware IPs. The list of all the hardware
++ * Second resume function for hardware IPs. The list of all the hardware
+ * IPs that make up the asic is walked and the resume callbacks are run for
+ * all blocks except COMMON, GMC, and IH. resume puts the hardware into a
+ * functional state after a suspend and updates the software state as
+@@ -3684,6 +3684,7 @@ static int amdgpu_device_ip_resume_phase2(struct amdgpu_device *adev)
+ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_IH ||
++ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_DCE ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_PSP)
+ continue;
+ r = adev->ip_blocks[i].version->funcs->resume(adev);
+@@ -3698,6 +3699,36 @@ static int amdgpu_device_ip_resume_phase2(struct amdgpu_device *adev)
+ return 0;
+ }
+
++/**
++ * amdgpu_device_ip_resume_phase3 - run resume for hardware IPs
++ *
++ * @adev: amdgpu_device pointer
++ *
++ * Third resume function for hardware IPs. The list of all the hardware
++ * IPs that make up the asic is walked and the resume callbacks are run for
++ * all DCE. resume puts the hardware into a functional state after a suspend
++ * and updates the software state as necessary. This function is also used
++ * for restoring the GPU after a GPU reset.
++ *
++ * Returns 0 on success, negative error code on failure.
++ */
++static int amdgpu_device_ip_resume_phase3(struct amdgpu_device *adev)
++{
++ int i, r;
++
++ for (i = 0; i < adev->num_ip_blocks; i++) {
++ if (!adev->ip_blocks[i].status.valid || adev->ip_blocks[i].status.hw)
++ continue;
++ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_DCE) {
++ r = adev->ip_blocks[i].version->funcs->resume(adev);
++ if (r)
++ return r;
++ }
++ }
++
++ return 0;
++}
++
+ /**
+ * amdgpu_device_ip_resume - run resume for hardware IPs
+ *
+@@ -3727,6 +3758,13 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
+ if (adev->mman.buffer_funcs_ring->sched.ready)
+ amdgpu_ttm_set_buffer_funcs_status(adev, true);
+
++ if (r)
++ return r;
++
++ amdgpu_fence_driver_hw_init(adev);
++
++ r = amdgpu_device_ip_resume_phase3(adev);
++
+ return r;
+ }
+
+@@ -4809,7 +4847,6 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
+ dev_err(adev->dev, "amdgpu_device_ip_resume failed (%d).\n", r);
+ goto exit;
+ }
+- amdgpu_fence_driver_hw_init(adev);
+
+ if (!adev->in_s0ix) {
+ r = amdgpu_amdkfd_resume(adev, adev->in_runpm);
+@@ -5431,6 +5468,10 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
+ amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
+
++ r = amdgpu_device_ip_resume_phase3(tmp_adev);
++ if (r)
++ goto out;
++
+ if (vram_lost)
+ amdgpu_device_fill_reset_magic(tmp_adev);
+
+@@ -6344,6 +6385,9 @@ bool amdgpu_device_cache_pci_state(struct pci_dev *pdev)
+ struct amdgpu_device *adev = drm_to_adev(dev);
+ int r;
+
++ if (amdgpu_sriov_vf(adev))
++ return false;
++
+ r = pci_save_state(pdev);
+ if (!r) {
+ kfree(adev->pci_state);
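+
The new resume phase 3 exists so the display (DCE) block comes back only after amdgpu_fence_driver_hw_init(), while phase 2 now skips DCE alongside COMMON, GMC, IH and PSP. A toy model of the type-filtered walk; the block types and bookkeeping are invented:

    #include <stdio.h>

    enum ip_type { IP_COMMON, IP_GMC, IP_IH, IP_PSP, IP_DCE, IP_GFX };

    struct ip_block {
        enum ip_type type;
        const char *name;
        int hw_done;
    };

    static void resume_phase2(struct ip_block *blocks, int n)
    {
        for (int i = 0; i < n; i++) {
            if (blocks[i].type == IP_COMMON || blocks[i].type == IP_GMC ||
                blocks[i].type == IP_IH || blocks[i].type == IP_PSP ||
                blocks[i].type == IP_DCE)
                continue;               /* handled in another phase */
            blocks[i].hw_done = 1;
            printf("phase2: %s\n", blocks[i].name);
        }
    }

    static void resume_phase3(struct ip_block *blocks, int n)
    {
        for (int i = 0; i < n; i++) {
            if (blocks[i].hw_done || blocks[i].type != IP_DCE)
                continue;
            blocks[i].hw_done = 1;
            printf("phase3: %s\n", blocks[i].name);
        }
    }

    int main(void)
    {
        struct ip_block blocks[] = {
            { IP_GFX, "gfx", 0 }, { IP_DCE, "dce", 0 },
        };

        resume_phase2(blocks, 2);   /* gfx */
        /* fence driver HW init runs here, before display comes back */
        resume_phase3(blocks, 2);   /* dce */
        return 0;
    }
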
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 74adb983ab03e0..9f922ec50ea2dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -812,7 +812,7 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_device *bdev,
+ /* Map SG to device */
+ r = dma_map_sgtable(adev->dev, ttm->sg, direction, 0);
+ if (r)
+- goto release_sg;
++ goto release_sg_table;
+
+ /* convert SG to linear array of pages and dma addresses */
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+@@ -820,6 +820,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_device *bdev,
+
+ return 0;
+
++release_sg_table:
++ sg_free_table(ttm->sg);
+ release_sg:
+ kfree(ttm->sg);
+ ttm->sg = NULL;
+@@ -1849,6 +1851,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
+
+ mutex_init(&adev->mman.gtt_window_lock);
+
++ dma_set_max_seg_size(adev->dev, UINT_MAX);
+ /* No others user of address space so set it to 0 */
+ r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
+ adev_to_drm(adev)->anon_inode->i_mapping,
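+
The amdgpu_ttm fix adds a second unwind label so a failed dma_map_sgtable() also releases the table contents via sg_free_table(), not just the kfree() of the containing struct. A compact illustration of that layered goto unwind, modeling the two resources with malloc/free:

    #include <stdlib.h>

    struct sg_table { void *pages; };

    static int map_userptr(struct sg_table **out, int fail_map)
    {
        struct sg_table *sg = malloc(sizeof(*sg));

        if (!sg)
            return -1;

        sg->pages = malloc(64);         /* table built from user pages */
        if (!sg->pages)
            goto release_sg;

        if (fail_map)                   /* dma_map_sgtable() failed */
            goto release_sg_table;

        *out = sg;
        return 0;

    release_sg_table:
        free(sg->pages);                /* sg_free_table() */
    release_sg:
        free(sg);
        return -1;
    }

    int main(void)
    {
        struct sg_table *sg = NULL;

        (void)map_userptr(&sg, 1);      /* error path: everything freed */
        return 0;
    }
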
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 785a343a95f0ff..e7cd51c95141e1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2223,6 +2223,18 @@ static int gfx_v9_0_sw_init(void *handle)
+ }
+
+ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
++ case IP_VERSION(9, 4, 2):
++ adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex;
++ adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex);
++ if (adev->gfx.mec_fw_version >= 88) {
++ adev->gfx.enable_cleaner_shader = true;
++ r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
++ if (r) {
++ adev->gfx.enable_cleaner_shader = false;
++ dev_err(adev->dev, "Failed to initialize cleaner shader\n");
++ }
++ }
++ break;
+ default:
+ adev->gfx.enable_cleaner_shader = false;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h
+index 36c0292b511067..0b6bd09b752993 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: MIT */
+ /*
+- * Copyright 2018 Advanced Micro Devices, Inc.
++ * Copyright 2024 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+@@ -24,3 +24,45 @@
+ static const u32 __maybe_unused gfx_9_0_cleaner_shader_hex[] = {
+ /* Add the cleaner shader code here */
+ };
++
++/* Define the cleaner shader gfx_9_4_2 */
++static const u32 gfx_9_4_2_cleaner_shader_hex[] = {
++ 0xbf068100, 0xbf84003b,
++ 0xbf8a0000, 0xb07c0000,
++ 0xbe8200ff, 0x00000078,
++ 0xbf110802, 0x7e000280,
++ 0x7e020280, 0x7e040280,
++ 0x7e060280, 0x7e080280,
++ 0x7e0a0280, 0x7e0c0280,
++ 0x7e0e0280, 0x80828802,
++ 0xbe803202, 0xbf84fff5,
++ 0xbf9c0000, 0xbe8200ff,
++ 0x80000000, 0x86020102,
++ 0xbf840011, 0xbefe00c1,
++ 0xbeff00c1, 0xd28c0001,
++ 0x0001007f, 0xd28d0001,
++ 0x0002027e, 0x10020288,
++ 0xbe8200bf, 0xbefc00c1,
++ 0xd89c2000, 0x00020201,
++ 0xd89c6040, 0x00040401,
++ 0x320202ff, 0x00000400,
++ 0x80828102, 0xbf84fff8,
++ 0xbefc00ff, 0x0000005c,
++ 0xbf800000, 0xbe802c80,
++ 0xbe812c80, 0xbe822c80,
++ 0xbe832c80, 0x80fc847c,
++ 0xbf84fffa, 0xbee60080,
++ 0xbee70080, 0xbeea0180,
++ 0xbeec0180, 0xbeee0180,
++ 0xbef00180, 0xbef20180,
++ 0xbef40180, 0xbef60180,
++ 0xbef80180, 0xbefa0180,
++ 0xbf810000, 0xbf8d0001,
++ 0xbefc00ff, 0x0000005c,
++ 0xbf800000, 0xbe802c80,
++ 0xbe812c80, 0xbe822c80,
++ 0xbe832c80, 0x80fc847c,
++ 0xbf84fffa, 0xbee60080,
++ 0xbee70080, 0xbeea01ff,
++ 0x000000ee, 0xbf810000,
++};
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2_cleaner_shader.asm b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2_cleaner_shader.asm
+new file mode 100644
+index 00000000000000..35b8cf9070bd98
+--- /dev/null
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2_cleaner_shader.asm
+@@ -0,0 +1,153 @@
++/* SPDX-License-Identifier: MIT */
++/*
++ * Copyright 2024 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++
++// This shader is to clean LDS, SGPRs and VGPRs. It is the first 64 Dwords (256 bytes) of the 192-Dword cleaner shader.
++// To turn this shader program on for compilation, change this to main and the lower shader main to main_1
++
++// MI200 : Clear SGPRs, VGPRs and LDS
++// Uses two kernels launched separately:
++// 1. Clean VGPRs, LDS, and lower SGPRs
++// Launches one workgroup per CU, each workgroup with 4x wave64 per SIMD in the CU
++// Waves are "wave64" and have 128 VGPRs each, which uses all 512 VGPRs per SIMD
++// Waves in the workgroup share the 64KB of LDS
++// Each wave clears SGPRs 0 - 95. Because there are 4 waves/SIMD, this is physical SGPRs 0-383
++// Each wave clears 128 VGPRs, so all 512 in the SIMD
++// The first wave of the workgroup clears its 64KB of LDS
++// The shader starts with "S_BARRIER" to ensure SPI has launched all waves of the workgroup
++// before any wave in the workgroup could end. Without this, it is possible not all SGPRs get cleared.
++// 2. Clean remaining SGPRs
++// Launches a workgroup with 24 waves per workgroup, yielding 6 waves per SIMD in each CU
++// Waves are allocating 96 SGPRs
++// CP sets up SPI_RESOURCE_RESERVE_* registers to prevent these waves from allocating SGPRs 0-223.
++// As such, these 6 waves per SIMD are allocated physical SGPRs 224-799
++// Barriers do not work for >16 waves per workgroup, so we cannot start with S_BARRIER
++// Instead, the shader starts with an S_SETHALT 1. Once all waves are launched CP will send unhalt command
++// The shader then clears all SGPRs allocated to it, cleaning out physical SGPRs 224-799
++
++shader main
++ asic(MI200)
++ type(CS)
++ wave_size(64)
++// Note: original source code from SQ team
++
++// (theoretical fastest = ~512 clks vgpr + 1536 lds + ~128 sgpr = 2176 clks)
++
++ s_cmp_eq_u32 s0, 1 // If bit0 of sgpr0 is set, clear VGPRs and LDS, as FW sets COMPUTE_USER_DATA_3
++ s_cbranch_scc0 label_0023 // Clean VGPRs and LDS if sgpr0 of wave is set, scc = (s0 == 1)
++ S_BARRIER
++
++ s_movk_i32 m0, 0x0000
++ s_mov_b32 s2, 0x00000078 // Loop 128/8=16 times (loop unrolled for performance)
++ //
++ // CLEAR VGPRs
++ //
++ s_set_gpr_idx_on s2, 0x8 // enable Dest VGPR indexing
++label_0005:
++ v_mov_b32 v0, 0
++ v_mov_b32 v1, 0
++ v_mov_b32 v2, 0
++ v_mov_b32 v3, 0
++ v_mov_b32 v4, 0
++ v_mov_b32 v5, 0
++ v_mov_b32 v6, 0
++ v_mov_b32 v7, 0
++ s_sub_u32 s2, s2, 8
++ s_set_gpr_idx_idx s2
++ s_cbranch_scc0 label_0005
++ s_set_gpr_idx_off
++
++ //
++ //
++
++ s_mov_b32 s2, 0x80000000 // Bit31 is first_wave
++ s_and_b32 s2, s2, s1 // sgpr0 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
++ s_cbranch_scc0 label_clean_sgpr_1 // Clean LDS if its first wave of ThreadGroup/WorkGroup
++ // CLEAR LDS
++ //
++ s_mov_b32 exec_lo, 0xffffffff
++ s_mov_b32 exec_hi, 0xffffffff
++ v_mbcnt_lo_u32_b32 v1, exec_hi, 0 // Set V1 to thread-ID (0..63)
++ v_mbcnt_hi_u32_b32 v1, exec_lo, v1 // Set V1 to thread-ID (0..63)
++ v_mul_u32_u24 v1, 0x00000008, v1 // * 8, so each thread is a double-dword address (8byte)
++ s_mov_b32 s2, 0x00000003f // 64 loop iterations
++ s_mov_b32 m0, 0xffffffff
++ // Clear all of LDS space
++ // Each FirstWave of WorkGroup clears 64kbyte block
++
++label_001F:
++ ds_write2_b64 v1, v[2:3], v[2:3] offset1:32
++ ds_write2_b64 v1, v[4:5], v[4:5] offset0:64 offset1:96
++ v_add_co_u32 v1, vcc, 0x00000400, v1
++ s_sub_u32 s2, s2, 1
++ s_cbranch_scc0 label_001F
++ //
++ // CLEAR SGPRs
++ //
++label_clean_sgpr_1:
++ s_mov_b32 m0, 0x0000005c // Loop 96/4=24 times (loop unrolled for performance)
++ s_nop 0
++label_sgpr_loop:
++ s_movreld_b32 s0, 0
++ s_movreld_b32 s1, 0
++ s_movreld_b32 s2, 0
++ s_movreld_b32 s3, 0
++ s_sub_u32 m0, m0, 4
++ s_cbranch_scc0 label_sgpr_loop
++
++ //clear vcc, flat scratch
++ s_mov_b32 flat_scratch_lo, 0 //clear flat scratch lo SGPR
++ s_mov_b32 flat_scratch_hi, 0 //clear flat scratch hi SGPR
++ s_mov_b64 vcc, 0 //clear vcc
++ s_mov_b64 ttmp0, 0 //Clear ttmp0 and ttmp1
++ s_mov_b64 ttmp2, 0 //Clear ttmp2 and ttmp3
++ s_mov_b64 ttmp4, 0 //Clear ttmp4 and ttmp5
++ s_mov_b64 ttmp6, 0 //Clear ttmp6 and ttmp7
++ s_mov_b64 ttmp8, 0 //Clear ttmp8 and ttmp9
++ s_mov_b64 ttmp10, 0 //Clear ttmp10 and ttmp11
++ s_mov_b64 ttmp12, 0 //Clear ttmp12 and ttmp13
++ s_mov_b64 ttmp14, 0 //Clear ttmp14 and ttmp15
++s_endpgm
++
++label_0023:
++
++ s_sethalt 1
++
++ s_mov_b32 m0, 0x0000005c // Loop 96/4=24 times (loop unrolled for performance)
++ s_nop 0
++label_sgpr_loop1:
++
++ s_movreld_b32 s0, 0
++ s_movreld_b32 s1, 0
++ s_movreld_b32 s2, 0
++ s_movreld_b32 s3, 0
++ s_sub_u32 m0, m0, 4
++ s_cbranch_scc0 label_sgpr_loop1
++
++ //clear vcc, flat scratch
++ s_mov_b32 flat_scratch_lo, 0 //clear flat scratch lo SGPR
++ s_mov_b32 flat_scratch_hi, 0 //clear flat scratch hi SGPR
++ s_mov_b64 vcc, 0xee //clear vcc
++
++s_endpgm
++end
++
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
+index e019249883fb2f..194026e9be3331 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
+@@ -40,10 +40,12 @@
+ static void hdp_v4_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v4_0_invalidate_hdp(struct amdgpu_device *adev,
+@@ -54,11 +56,13 @@ static void hdp_v4_0_invalidate_hdp(struct amdgpu_device *adev,
+ amdgpu_ip_version(adev, HDP_HWIP, 0) == IP_VERSION(4, 4, 5))
+ return;
+
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE, 1);
+- else
++ RREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE);
++ } else {
+ amdgpu_ring_emit_wreg(ring, SOC15_REG_OFFSET(
+ HDP, 0, mmHDP_READ_CACHE_INVALIDATE), 1);
++ }
+ }
+
+ static void hdp_v4_0_query_ras_error_count(struct amdgpu_device *adev,
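+
The hdp_v4_0 change (and the matching hdp_v5_0, hdp_v5_2, hdp_v6_0 and hdp_v7_0 hunks below) follows each MMIO write with a read of the same register: on PCIe the write is posted, and the read back forces it to land before the flush helper returns. The sketch below only shows the call shape; plain memory obviously cannot reproduce the bus-ordering effect:

    #include <stdint.h>
    #include <stdio.h>

    static volatile uint32_t hdp_mem_flush_cntl;

    static inline void wreg32(volatile uint32_t *reg, uint32_t val)
    {
        *reg = val;
    }

    static inline uint32_t rreg32(volatile uint32_t *reg)
    {
        return *reg;
    }

    static void flush_hdp(void)
    {
        wreg32(&hdp_mem_flush_cntl, 0);
        (void)rreg32(&hdp_mem_flush_cntl);  /* read back to post the write */
    }

    int main(void)
    {
        flush_hdp();
        printf("flush issued and posted\n");
        return 0;
    }
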
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
+index ed7facacf2fe30..d3962d46908811 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
+@@ -31,10 +31,12 @@
+ static void hdp_v5_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v5_0_invalidate_hdp(struct amdgpu_device *adev,
+@@ -42,6 +44,7 @@ static void hdp_v5_0_invalidate_hdp(struct amdgpu_device *adev,
+ {
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE, 1);
++ RREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE);
+ } else {
+ amdgpu_ring_emit_wreg(ring, SOC15_REG_OFFSET(
+ HDP, 0, mmHDP_READ_CACHE_INVALIDATE), 1);
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c b/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
+index 29c3484ae1f166..f52552c5fa27b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
+@@ -31,13 +31,15 @@
+ static void hdp_v5_2_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+ 0);
+- else
++ RREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring,
+ (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+ 0);
++ }
+ }
+
+ static void hdp_v5_2_update_mem_power_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
+index 33736d361dd0bc..6948fe9956ce47 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
+@@ -34,10 +34,12 @@
+ static void hdp_v6_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v6_0_update_clock_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
+index 1c99bb09e2a129..63820329f67eb6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
+@@ -31,10 +31,12 @@
+ static void hdp_v7_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v7_0_update_clock_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index 0fda703363004f..6fca2915ea8fd5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -116,6 +116,20 @@ static int vcn_v4_0_3_early_init(void *handle)
+ return amdgpu_vcn_early_init(adev);
+ }
+
++static int vcn_v4_0_3_fw_shared_init(struct amdgpu_device *adev, int inst_idx)
++{
++ struct amdgpu_vcn4_fw_shared *fw_shared;
++
++ fw_shared = adev->vcn.inst[inst_idx].fw_shared.cpu_addr;
++ fw_shared->present_flag_0 = cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE);
++ fw_shared->sq.is_enabled = 1;
++
++ if (amdgpu_vcnfw_log)
++ amdgpu_vcn_fwlog_init(&adev->vcn.inst[inst_idx]);
++
++ return 0;
++}
++
+ /**
+ * vcn_v4_0_3_sw_init - sw init for VCN block
+ *
+@@ -148,8 +162,6 @@ static int vcn_v4_0_3_sw_init(void *handle)
+ return r;
+
+ for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+- volatile struct amdgpu_vcn4_fw_shared *fw_shared;
+-
+ vcn_inst = GET_INST(VCN, i);
+
+ ring = &adev->vcn.inst[i].ring_enc[0];
+@@ -172,12 +184,7 @@ static int vcn_v4_0_3_sw_init(void *handle)
+ if (r)
+ return r;
+
+- fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+- fw_shared->present_flag_0 = cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE);
+- fw_shared->sq.is_enabled = true;
+-
+- if (amdgpu_vcnfw_log)
+- amdgpu_vcn_fwlog_init(&adev->vcn.inst[i]);
++ vcn_v4_0_3_fw_shared_init(adev, i);
+ }
+
+ if (amdgpu_sriov_vf(adev)) {
+@@ -273,6 +280,8 @@ static int vcn_v4_0_3_hw_init(void *handle)
+ }
+ } else {
+ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++ struct amdgpu_vcn4_fw_shared *fw_shared;
++
+ vcn_inst = GET_INST(VCN, i);
+ ring = &adev->vcn.inst[i].ring_enc[0];
+
+@@ -296,6 +305,11 @@ static int vcn_v4_0_3_hw_init(void *handle)
+ regVCN_RB1_DB_CTRL);
+ }
+
++ /* Re-init fw_shared when RAS fatal error occurred */
++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
++ if (!fw_shared->sq.is_enabled)
++ vcn_v4_0_3_fw_shared_init(adev, i);
++
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+index ac439f0565e357..16f5561fb86ec5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+@@ -114,6 +114,33 @@ static int vega20_ih_toggle_ring_interrupts(struct amdgpu_device *adev,
+ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, RB_ENABLE, (enable ? 1 : 0));
+ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, RB_GPU_TS_ENABLE, 1);
+
++ if (enable) {
++ /* Unset the CLEAR_OVERFLOW bit to make sure the next step
++ * is switching the bit from 0 to 1
++ */
++ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++ if (amdgpu_sriov_vf(adev) && amdgpu_sriov_reg_indirect_ih(adev)) {
++ if (psp_reg_program(&adev->psp, ih_regs->psp_reg_id, tmp))
++ return -ETIMEDOUT;
++ } else {
++ WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++ }
++
++ /* Clear RB_OVERFLOW bit */
++ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
++ if (amdgpu_sriov_vf(adev) && amdgpu_sriov_reg_indirect_ih(adev)) {
++ if (psp_reg_program(&adev->psp, ih_regs->psp_reg_id, tmp))
++ return -ETIMEDOUT;
++ } else {
++ WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++ }
++
++ /* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++ * can be detected.
++ */
++ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++ }
++
+ /* enable_intr field is only valid in ring0 */
+ if (ih == &adev->irq.ih)
+ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, ENABLE_INTR, (enable ? 1 : 0));
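+
The vega20_ih change drives WPTR_OVERFLOW_CLEAR through an explicit 0 -> 1 -> 0 sequence, because the clear acts on a rising edge and must be de-asserted again for the next overflow to be observable. A minimal model of the sequence; the bit position and register are invented for the sketch:

    #include <stdint.h>
    #include <stdio.h>

    #define WPTR_OVERFLOW_CLEAR (1u << 3)

    static uint32_t ih_rb_cntl;    /* stands in for the real MMIO register */

    static void write_cntl(uint32_t v)
    {
        printf("IH_RB_CNTL <- 0x%08x\n", v);
        ih_rb_cntl = v;
    }

    int main(void)
    {
        uint32_t tmp = ih_rb_cntl;

        tmp &= ~WPTR_OVERFLOW_CLEAR;   /* ensure next step is a 0->1 edge */
        write_cntl(tmp);

        tmp |= WPTR_OVERFLOW_CLEAR;    /* rising edge clears RB_OVERFLOW */
        write_cntl(tmp);

        tmp &= ~WPTR_OVERFLOW_CLEAR;   /* re-arm overflow detection */
        write_cntl(tmp);
        return 0;
    }
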
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 48caecf7e72ed1..8de61cc524c943 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1509,6 +1509,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gfx.config.gc_tcp_size_per_cu) {
+ pcache_info[i].cache_size = adev->gfx.config.gc_tcp_size_per_cu;
+ pcache_info[i].cache_level = 1;
++ /* Cacheline size not available in IP discovery for gc943,gc944 */
++ pcache_info[i].cache_line_size = 128;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1520,6 +1522,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ pcache_info[i].cache_size =
+ adev->gfx.config.gc_l1_instruction_cache_size_per_sqc;
+ pcache_info[i].cache_level = 1;
++ pcache_info[i].cache_line_size = 64;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_INST_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1530,6 +1533,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gfx.config.gc_l1_data_cache_size_per_sqc) {
+ pcache_info[i].cache_size = adev->gfx.config.gc_l1_data_cache_size_per_sqc;
+ pcache_info[i].cache_level = 1;
++ pcache_info[i].cache_line_size = 64;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1540,6 +1544,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gfx.config.gc_tcc_size) {
+ pcache_info[i].cache_size = adev->gfx.config.gc_tcc_size;
+ pcache_info[i].cache_level = 2;
++ pcache_info[i].cache_line_size = 128;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1550,6 +1555,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gmc.mall_size) {
+ pcache_info[i].cache_size = adev->gmc.mall_size / 1024;
+ pcache_info[i].cache_level = 3;
++ pcache_info[i].cache_line_size = 64;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index fad1c8f2bc8334..b05be24531e187 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -235,6 +235,9 @@ static void kfd_device_info_init(struct kfd_dev *kfd,
+ */
+ kfd->device_info.needs_pci_atomics = true;
+ kfd->device_info.no_atomic_fw_version = kfd->adev->gfx.rs64_enable ? 509 : 0;
++ } else if (gc_version < IP_VERSION(13, 0, 0)) {
++ kfd->device_info.needs_pci_atomics = true;
++ kfd->device_info.no_atomic_fw_version = 2090;
+ } else {
+ kfd->device_info.needs_pci_atomics = true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 24fbde7dd1c425..ad3a3aa72b51f3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1910,7 +1910,11 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ else
+ init_data.flags.gpu_vm_support = (amdgpu_sg_display != 0);
+ } else {
+- init_data.flags.gpu_vm_support = (amdgpu_sg_display != 0) && (adev->flags & AMD_IS_APU);
++ if (amdgpu_ip_version(adev, DCE_HWIP, 0) == IP_VERSION(2, 0, 3))
++ init_data.flags.gpu_vm_support = (amdgpu_sg_display == 1);
++ else
++ init_data.flags.gpu_vm_support =
++ (amdgpu_sg_display != 0) && (adev->flags & AMD_IS_APU);
+ }
+
+ adev->mode_info.gpu_vm_support = init_data.flags.gpu_vm_support;
+@@ -7337,10 +7341,15 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ const struct drm_connector_state *drm_state = dm_state ? &dm_state->base : NULL;
+ int requested_bpc = drm_state ? drm_state->max_requested_bpc : 8;
+ enum dc_status dc_result = DC_OK;
++ uint8_t bpc_limit = 6;
+
+ if (!dm_state)
+ return NULL;
+
++ if (aconnector->dc_link->connector_signal == SIGNAL_TYPE_HDMI_TYPE_A ||
++ aconnector->dc_link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER)
++ bpc_limit = 8;
++
+ do {
+ stream = create_stream_for_sink(connector, drm_mode,
+ dm_state, old_stream,
+@@ -7361,11 +7370,12 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ dc_result = dm_validate_stream_and_context(adev->dm.dc, stream);
+
+ if (dc_result != DC_OK) {
+- DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d (%s)\n",
++ DRM_DEBUG_KMS("Mode %dx%d (clk %d) pixel_encoding:%s color_depth:%s failed validation -- %s\n",
+ drm_mode->hdisplay,
+ drm_mode->vdisplay,
+ drm_mode->clock,
+- dc_result,
++ dc_pixel_encoding_to_str(stream->timing.pixel_encoding),
++ dc_color_depth_to_str(stream->timing.display_color_depth),
+ dc_status_to_str(dc_result));
+
+ dc_stream_release(stream);
+@@ -7373,10 +7383,13 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ requested_bpc -= 2; /* lower bpc to retry validation */
+ }
+
+- } while (stream == NULL && requested_bpc >= 6);
++ } while (stream == NULL && requested_bpc >= bpc_limit);
+
+- if (dc_result == DC_FAIL_ENC_VALIDATE && !aconnector->force_yuv420_output) {
+- DRM_DEBUG_KMS("Retry forcing YCbCr420 encoding\n");
++ if ((dc_result == DC_FAIL_ENC_VALIDATE ||
++ dc_result == DC_EXCEED_DONGLE_CAP) &&
++ !aconnector->force_yuv420_output) {
++ DRM_DEBUG_KMS("%s:%d Retry forcing yuv420 encoding\n",
++ __func__, __LINE__);
+
+ aconnector->force_yuv420_output = true;
+ stream = create_validate_stream_for_sink(aconnector, drm_mode,
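+
create_validate_stream_for_sink() now stops the bpc fallback at 8 for HDMI sinks and DP-to-HDMI dongles instead of always going down to 6. The retry loop itself reduces to the following shape, with validate() standing in for DC's stream validation:

    #include <stdbool.h>
    #include <stdio.h>

    static bool validate(int bpc)
    {
        return bpc <= 10;   /* pretend the sink only accepts up to 10 bpc */
    }

    int main(void)
    {
        bool is_hdmi = false;
        int bpc_limit = is_hdmi ? 8 : 6;
        int requested_bpc = 16;
        int ok_bpc = -1;

        while (ok_bpc < 0 && requested_bpc >= bpc_limit) {
            if (validate(requested_bpc))
                ok_bpc = requested_bpc;
            else
                requested_bpc -= 2;     /* lower bpc and retry */
        }

        printf("validated bpc: %d\n", ok_bpc);  /* 10 */
        return 0;
    }
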
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index b46a3afe48ca7c..3bd0d46c170109 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -132,6 +132,8 @@ static void dcn35_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *
+ for (i = 0; i < dc->res_pool->pipe_count; ++i) {
+ struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+ struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i];
++ struct clk_mgr_internal *clk_mgr_internal = TO_CLK_MGR_INTERNAL(clk_mgr_base);
++ struct dccg *dccg = clk_mgr_internal->dccg;
+ struct pipe_ctx *pipe = safe_to_lower
+ ? &context->res_ctx.pipe_ctx[i]
+ : &dc->current_state->res_ctx.pipe_ctx[i];
+@@ -148,8 +150,13 @@ static void dcn35_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *
+ new_pipe->stream_res.stream_enc &&
+ new_pipe->stream_res.stream_enc->funcs->is_fifo_enabled &&
+ new_pipe->stream_res.stream_enc->funcs->is_fifo_enabled(new_pipe->stream_res.stream_enc);
+- if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal) ||
+- !pipe->stream->link_enc) && !stream_changed_otg_dig_on) {
++ bool has_active_hpo = dccg->ctx->dc->link_srv->dp_is_128b_132b_signal(old_pipe) && dccg->ctx->dc->link_srv->dp_is_128b_132b_signal(new_pipe);
++
++ if (!has_active_hpo && !dccg->ctx->dc->link_srv->dp_is_128b_132b_signal(pipe) &&
++ (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal) ||
++ !pipe->stream->link_enc) && !stream_changed_otg_dig_on)) {
++
++
+ /* This w/a should not trigger when we have a dig active */
+ if (disable) {
+ if (pipe->stream_res.tg && pipe->stream_res.tg->funcs->immediate_disable_crtc)
+@@ -257,11 +264,11 @@ static void dcn35_notify_host_router_bw(struct clk_mgr *clk_mgr_base, struct dc_
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ uint32_t host_router_bw_kbps[MAX_HOST_ROUTERS_NUM] = { 0 };
+ int i;
+-
+ for (i = 0; i < context->stream_count; ++i) {
+ const struct dc_stream_state *stream = context->streams[i];
+ const struct dc_link *link = stream->link;
+- uint8_t lowest_dpia_index = 0, hr_index = 0;
++ uint8_t lowest_dpia_index = 0;
++ unsigned int hr_index = 0;
+
+ if (!link)
+ continue;
+@@ -271,6 +278,8 @@ static void dcn35_notify_host_router_bw(struct clk_mgr *clk_mgr_base, struct dc_
+ continue;
+
+ hr_index = (link->link_index - lowest_dpia_index) / 2;
++ if (hr_index >= MAX_HOST_ROUTERS_NUM)
++ continue;
+ host_router_bw_kbps[hr_index] += dc_bandwidth_in_kbps_from_timing(
+ &stream->timing, dc_link_get_highest_encoding_format(link));
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index a6911bb2cf0c6c..9f570d447c2099 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -6006,3 +6006,21 @@ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state
+
+ return profile;
+ }
++
++/*
++ **********************************************************************************
++ * dc_get_det_buffer_size_from_state() - extracts detile buffer size from dc state
++ *
++ * Called when DM wants to log detile buffer size from dc_state
++ *
++ **********************************************************************************
++ */
++unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context)
++{
++ struct dc *dc = context->clk_mgr->ctx->dc;
++
++ if (dc->res_pool->funcs->get_det_buffer_size)
++ return dc->res_pool->funcs->get_det_buffer_size(context);
++ else
++ return 0;
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+index 801cdbc8117d9b..e255c204b7e855 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+@@ -434,3 +434,43 @@ char *dc_status_to_str(enum dc_status status)
+
+ return "Unexpected status error";
+ }
++
++char *dc_pixel_encoding_to_str(enum dc_pixel_encoding pixel_encoding)
++{
++ switch (pixel_encoding) {
++ case PIXEL_ENCODING_RGB:
++ return "RGB";
++ case PIXEL_ENCODING_YCBCR422:
++ return "YUV422";
++ case PIXEL_ENCODING_YCBCR444:
++ return "YUV444";
++ case PIXEL_ENCODING_YCBCR420:
++ return "YUV420";
++ default:
++ return "Unknown";
++ }
++}
++
++char *dc_color_depth_to_str(enum dc_color_depth color_depth)
++{
++ switch (color_depth) {
++ case COLOR_DEPTH_666:
++ return "6-bpc";
++ case COLOR_DEPTH_888:
++ return "8-bpc";
++ case COLOR_DEPTH_101010:
++ return "10-bpc";
++ case COLOR_DEPTH_121212:
++ return "12-bpc";
++ case COLOR_DEPTH_141414:
++ return "14-bpc";
++ case COLOR_DEPTH_161616:
++ return "16-bpc";
++ case COLOR_DEPTH_999:
++ return "9-bpc";
++ case COLOR_DEPTH_111111:
++ return "11-bpc";
++ default:
++ return "Unknown";
++ }
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index c7599c40d4be38..d915020a429582 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -765,25 +765,6 @@ static inline void get_vp_scan_direction(
+ *flip_horz_scan_dir = !*flip_horz_scan_dir;
+ }
+
+-/*
+- * This is a preliminary vp size calculation to allow us to check taps support.
+- * The result is completely overridden afterwards.
+- */
+-static void calculate_viewport_size(struct pipe_ctx *pipe_ctx)
+-{
+- struct scaler_data *data = &pipe_ctx->plane_res.scl_data;
+-
+- data->viewport.width = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.horz, data->recout.width));
+- data->viewport.height = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.vert, data->recout.height));
+- data->viewport_c.width = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.horz_c, data->recout.width));
+- data->viewport_c.height = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.vert_c, data->recout.height));
+- if (pipe_ctx->plane_state->rotation == ROTATION_ANGLE_90 ||
+- pipe_ctx->plane_state->rotation == ROTATION_ANGLE_270) {
+- swap(data->viewport.width, data->viewport.height);
+- swap(data->viewport_c.width, data->viewport_c.height);
+- }
+-}
+-
+ static struct rect intersect_rec(const struct rect *r0, const struct rect *r1)
+ {
+ struct rect rec;
+@@ -1468,6 +1449,7 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
+ const struct rect odm_slice_src = resource_get_odm_slice_src_rect(pipe_ctx);
++ struct scaling_taps temp = {0};
+ bool res = false;
+
+ DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
+@@ -1519,14 +1501,16 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ res = spl_calculate_scaler_params(spl_in, spl_out);
+ // Convert respective out params from SPL to scaler data
+ translate_SPL_out_params_to_pipe_ctx(pipe_ctx, spl_out);
++
++ /* Ignore scaler failure if pipe context plane is phantom plane */
++ if (!res && plane_state->is_phantom)
++ res = true;
+ } else {
+ #endif
+ /* depends on h_active */
+ calculate_recout(pipe_ctx);
+ /* depends on pixel format */
+ calculate_scaling_ratios(pipe_ctx);
+- /* depends on scaling ratios and recout, does not calculate offset yet */
+- calculate_viewport_size(pipe_ctx);
+
+ /*
+ * LB calculations depend on vp size, h/v_active and scaling ratios
+@@ -1547,6 +1531,24 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+
+ pipe_ctx->plane_res.scl_data.lb_params.alpha_en = plane_state->per_pixel_alpha;
+
++ // Get the TAP value with 100x100 dummy data for max scaling quality; override
++ // if a new scaling quality is required
++ pipe_ctx->plane_res.scl_data.viewport.width = 100;
++ pipe_ctx->plane_res.scl_data.viewport.height = 100;
++ pipe_ctx->plane_res.scl_data.viewport_c.width = 100;
++ pipe_ctx->plane_res.scl_data.viewport_c.height = 100;
++ if (pipe_ctx->plane_res.xfm != NULL)
++ res = pipe_ctx->plane_res.xfm->funcs->transform_get_optimal_number_of_taps(
++ pipe_ctx->plane_res.xfm, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
++
++ if (pipe_ctx->plane_res.dpp != NULL)
++ res = pipe_ctx->plane_res.dpp->funcs->dpp_get_optimal_number_of_taps(
++ pipe_ctx->plane_res.dpp, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
++
++ temp = pipe_ctx->plane_res.scl_data.taps;
++
++ calculate_inits_and_viewports(pipe_ctx);
++
+ if (pipe_ctx->plane_res.xfm != NULL)
+ res = pipe_ctx->plane_res.xfm->funcs->transform_get_optimal_number_of_taps(
+ pipe_ctx->plane_res.xfm, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
+@@ -1573,11 +1575,14 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ &plane_state->scaling_quality);
+ }
+
+- /*
+- * Depends on recout, scaling ratios, h_active and taps
+- * May need to re-check lb size after this in some obscure scenario
+- */
+- if (res)
++ /* Ignore scaler failure if pipe context plane is phantom plane */
++ if (!res && plane_state->is_phantom)
++ res = true;
++
++ if (res && (pipe_ctx->plane_res.scl_data.taps.v_taps != temp.v_taps ||
++ pipe_ctx->plane_res.scl_data.taps.h_taps != temp.h_taps ||
++ pipe_ctx->plane_res.scl_data.taps.v_taps_c != temp.v_taps_c ||
++ pipe_ctx->plane_res.scl_data.taps.h_taps_c != temp.h_taps_c))
+ calculate_inits_and_viewports(pipe_ctx);
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index 9a406d74c0dd76..3d93efdc1026dd 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -819,12 +819,12 @@ void dc_stream_log(const struct dc *dc, const struct dc_stream_state *stream)
+ stream->dst.height,
+ stream->output_color_space);
+ DC_LOG_DC(
+- "\tpix_clk_khz: %d, h_total: %d, v_total: %d, pixelencoder:%d, displaycolorDepth:%d\n",
++ "\tpix_clk_khz: %d, h_total: %d, v_total: %d, pixel_encoding:%s, color_depth:%s\n",
+ stream->timing.pix_clk_100hz / 10,
+ stream->timing.h_total,
+ stream->timing.v_total,
+- stream->timing.pixel_encoding,
+- stream->timing.display_color_depth);
++ dc_pixel_encoding_to_str(stream->timing.pixel_encoding),
++ dc_color_depth_to_str(stream->timing.display_color_depth));
+ DC_LOG_DC(
+ "\tlink: %d\n",
+ stream->link->link_index);
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 3992ad73165bc6..7c163aa7e8bd2d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -285,6 +285,7 @@ struct dc_caps {
+ uint16_t subvp_vertical_int_margin_us;
+ bool seamless_odm;
+ uint32_t max_v_total;
++ bool vtotal_limited_by_fp2;
+ uint32_t max_disp_clock_khz_at_vmin;
+ uint8_t subvp_drr_vblank_start_margin_us;
+ bool cursor_not_scaled;
+@@ -2543,6 +2544,8 @@ struct dc_power_profile {
+
+ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state *context);
+
++unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context);
++
+ /* DSC Interfaces */
+ #include "dc_dsc.h"
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index 1e7de0f03290a3..ec5009f411eb0d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -1294,6 +1294,8 @@ static void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
+
+ memset(&new_signals, 0, sizeof(new_signals));
+
++ new_signals.bits.allow_idle = 1; /* always set */
++
+ if (dc->config.disable_ips == DMUB_IPS_ENABLE ||
+ dc->config.disable_ips == DMUB_IPS_DISABLE_DYNAMIC) {
+ new_signals.bits.allow_pg = 1;
+@@ -1389,7 +1391,7 @@ static void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ */
+ dc_dmub_srv->needs_idle_wake = false;
+
+- if (prev_driver_signals.bits.allow_ips2 &&
++ if ((prev_driver_signals.bits.allow_ips2 || prev_driver_signals.all == 0) &&
+ (!dc->debug.optimize_ips_handshake ||
+ ips_fw->signals.bits.ips2_commit || !ips_fw->signals.bits.in_idle)) {
+ DC_LOG_IPS(
+@@ -1450,7 +1452,7 @@ static void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ }
+
+ dc_dmub_srv_notify_idle(dc, false);
+- if (prev_driver_signals.bits.allow_ips1) {
++ if (prev_driver_signals.bits.allow_ips1 || prev_driver_signals.all == 0) {
+ DC_LOG_IPS(
+ "wait for IPS1 commit clear (ips1_commit=%u ips2_commit=%u)",
+ ips_fw->signals.bits.ips1_commit,
+diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c
+index 5b343f745cf333..ae81451a3a725c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c
+@@ -83,6 +83,15 @@ void enc314_disable_fifo(struct stream_encoder *enc)
+ REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, 0);
+ }
+
++static bool enc314_is_fifo_enabled(struct stream_encoder *enc)
++{
++ struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
++ uint32_t reset_val;
++
++ REG_GET(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, &reset_val);
++ return (reset_val != 0);
++}
++
+ void enc314_dp_set_odm_combine(
+ struct stream_encoder *enc,
+ bool odm_combine)
+@@ -468,6 +477,7 @@ static const struct stream_encoder_funcs dcn314_str_enc_funcs = {
+
+ .enable_fifo = enc314_enable_fifo,
+ .disable_fifo = enc314_disable_fifo,
++ .is_fifo_enabled = enc314_is_fifo_enabled,
+ .set_input_mode = enc314_set_dig_input_mode,
+ };
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index d851c081e3768a..8dabb1ac0b684d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -1222,6 +1222,7 @@ static dml_bool_t CalculatePrefetchSchedule(struct display_mode_lib_scratch_st *
+ s->dst_y_prefetch_oto = s->Tvm_oto_lines + 2 * s->Tr0_oto_lines + s->Lsw_oto;
+
+ s->dst_y_prefetch_equ = p->VStartup - (*p->TSetup + dml_max(p->TWait + p->TCalc, *p->Tdmdl)) / s->LineTime - (*p->DSTYAfterScaler + (dml_float_t) *p->DSTXAfterScaler / (dml_float_t)p->myPipe->HTotal);
++ s->dst_y_prefetch_equ = dml_min(s->dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
+
+ #ifdef __DML_VBA_DEBUG__
+ dml_print("DML::%s: HTotal = %u\n", __func__, p->myPipe->HTotal);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 8697eac1e1f7e1..8dee0d397e0322 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -339,11 +339,22 @@ void dml21_apply_soc_bb_overrides(struct dml2_initialize_instance_in_out *dml_in
+ // }
+ }
+
++static unsigned int calc_max_hardware_v_total(const struct dc_stream_state *stream)
++{
++ unsigned int max_hw_v_total = stream->ctx->dc->caps.max_v_total;
++
++ if (stream->ctx->dc->caps.vtotal_limited_by_fp2) {
++ max_hw_v_total -= stream->timing.v_front_porch + 1;
++ }
++
++ return max_hw_v_total;
++}
++
+ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cfg *timing,
+ struct dc_stream_state *stream,
+ struct dml2_context *dml_ctx)
+ {
+- unsigned int hblank_start, vblank_start;
++ unsigned int hblank_start, vblank_start, min_hardware_refresh_in_uhz;
+
+ timing->h_active = stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right;
+ timing->v_active = stream->timing.v_addressable + stream->timing.v_border_bottom + stream->timing.v_border_top;
+@@ -371,11 +382,23 @@ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cf
+ - stream->timing.v_border_top - stream->timing.v_border_bottom;
+
+ timing->drr_config.enabled = stream->ignore_msa_timing_param;
+- timing->drr_config.min_refresh_uhz = stream->timing.min_refresh_in_uhz;
+ timing->drr_config.drr_active_variable = stream->vrr_active_variable;
+ timing->drr_config.drr_active_fixed = stream->vrr_active_fixed;
+ timing->drr_config.disallowed = !stream->allow_freesync;
+
++ /* limit min refresh rate to DC cap */
++ min_hardware_refresh_in_uhz = stream->timing.min_refresh_in_uhz;
++ if (stream->ctx->dc->caps.max_v_total != 0) {
++ min_hardware_refresh_in_uhz = div64_u64((stream->timing.pix_clk_100hz * 100000000ULL),
++ (stream->timing.h_total * (long long)calc_max_hardware_v_total(stream)));
++ }
++
++ if (stream->timing.min_refresh_in_uhz > min_hardware_refresh_in_uhz) {
++ timing->drr_config.min_refresh_uhz = stream->timing.min_refresh_in_uhz;
++ } else {
++ timing->drr_config.min_refresh_uhz = min_hardware_refresh_in_uhz;
++ }
++
+ if (dml_ctx->config.callbacks.get_max_flickerless_instant_vtotal_increase &&
+ stream->ctx->dc->config.enable_fpo_flicker_detection == 1)
+ timing->drr_config.max_instant_vtotal_delta = dml_ctx->config.callbacks.get_max_flickerless_instant_vtotal_increase(stream, false);
+@@ -859,7 +882,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+ plane->immediate_flip = plane_state->flip_immediate;
+
+ plane->composition.rect_out_height_spans_vactive =
+- plane_state->dst_rect.height >= stream->timing.v_addressable &&
++ plane_state->dst_rect.height >= stream->src.height &&
+ stream->dst.height >= stream->timing.v_addressable;
+ }
+
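The hunk above clamps the DRR minimum refresh to what the hardware can program: the lowest reachable refresh follows from the pixel clock divided by the largest h_total * v_total product, with calc_max_hardware_v_total() shaving off the FP2 margin first. A standalone sketch of that arithmetic, using invented timing numbers rather than anything from the patch:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical 4K timing; values are illustrative only. */
	uint64_t pix_clk_100hz = 11880000;	/* 1188 MHz in 100 Hz units */
	uint64_t h_total = 4400;
	uint64_t max_hw_v_total = (1 << 15) - 1;
	uint64_t v_front_porch = 3;

	/* FP2 margin, as in calc_max_hardware_v_total() */
	max_hw_v_total -= v_front_porch + 1;

	uint64_t min_hw_refresh_uhz =
		pix_clk_100hz * 100000000ULL / (h_total * max_hw_v_total);
	uint64_t stream_min_uhz = 48000000;	/* stream asked for 48 Hz */

	/* DRR minimum is whichever bound is higher. */
	uint64_t drr_min_uhz = stream_min_uhz > min_hw_refresh_uhz ?
			       stream_min_uhz : min_hw_refresh_uhz;

	printf("hw min %llu uHz, drr min %llu uHz\n",
	       (unsigned long long)min_hw_refresh_uhz,
	       (unsigned long long)drr_min_uhz);
	return 0;
}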
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+index 4e93eeedfc1bbd..efcc1a6b364c27 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+@@ -355,6 +355,20 @@ void dcn314_calculate_pix_rate_divider(
+ }
+ }
+
++static bool dcn314_is_pipe_dig_fifo_on(struct pipe_ctx *pipe)
++{
++ return pipe && pipe->stream
++ // Check dig's otg instance.
++ && pipe->stream_res.stream_enc
++ && pipe->stream_res.stream_enc->funcs->dig_source_otg
++ && pipe->stream_res.tg->inst == pipe->stream_res.stream_enc->funcs->dig_source_otg(pipe->stream_res.stream_enc)
++ && pipe->stream->link && pipe->stream->link->link_enc
++ && pipe->stream->link->link_enc->funcs->is_dig_enabled
++ && pipe->stream->link->link_enc->funcs->is_dig_enabled(pipe->stream->link->link_enc)
++ && pipe->stream_res.stream_enc->funcs->is_fifo_enabled
++ && pipe->stream_res.stream_enc->funcs->is_fifo_enabled(pipe->stream_res.stream_enc);
++}
++
+ void dcn314_resync_fifo_dccg_dio(struct dce_hwseq *hws, struct dc *dc, struct dc_state *context, unsigned int current_pipe_idx)
+ {
+ unsigned int i;
+@@ -371,7 +385,11 @@ void dcn314_resync_fifo_dccg_dio(struct dce_hwseq *hws, struct dc *dc, struct dc
+ if (pipe->top_pipe || pipe->prev_odm_pipe)
+ continue;
+
+- if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) {
++ if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal)) &&
++ !pipe->stream->apply_seamless_boot_optimization &&
++ !pipe->stream->apply_edp_fast_boot_optimization) {
++ if (dcn314_is_pipe_dig_fifo_on(pipe))
++ continue;
+ pipe->stream_res.tg->funcs->disable_crtc(pipe->stream_res.tg);
+ reset_sync_context_for_pipe(dc, context, i);
+ otg_disabled[i] = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_status.h b/drivers/gpu/drm/amd/display/dc/inc/core_status.h
+index fa5edd03d00439..b5afd8c3103dba 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_status.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_status.h
+@@ -60,5 +60,7 @@ enum dc_status {
+ };
+
+ char *dc_status_to_str(enum dc_status status);
++char *dc_pixel_encoding_to_str(enum dc_pixel_encoding pixel_encoding);
++char *dc_color_depth_to_str(enum dc_color_depth color_depth);
+
+ #endif /* _CORE_STATUS_H_ */
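dc_pixel_encoding_to_str() and dc_color_depth_to_str() are implemented elsewhere in DC; their shape is the usual switch-over-enum string helper. A reduced sketch with invented demo names and only two enum values, purely for illustration:

#include <stdio.h>

enum demo_pixel_encoding { DEMO_PIXEL_RGB, DEMO_PIXEL_YCBCR422 };

static const char *demo_pixel_encoding_to_str(enum demo_pixel_encoding e)
{
	switch (e) {
	case DEMO_PIXEL_RGB:
		return "RGB";
	case DEMO_PIXEL_YCBCR422:
		return "YCbCr422";
	}
	return "Unknown";
}

int main(void)
{
	printf("%s\n", demo_pixel_encoding_to_str(DEMO_PIXEL_YCBCR422));
	return 0;
}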
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+index bfb8b8502d2026..e1e3142cdc00ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+@@ -215,6 +215,7 @@ struct resource_funcs {
+
+ void (*get_panel_config_defaults)(struct dc_panel_config *panel_config);
+ void (*build_pipe_pix_clk_params)(struct pipe_ctx *pipe_ctx);
++ unsigned int (*get_det_buffer_size)(const struct dc_state *context);
+ };
+
+ struct audio_support{
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+index eea2b3b307cd5f..45e4de8d5cff8d 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+@@ -1511,6 +1511,7 @@ bool dcn20_split_stream_for_odm(
+
+ if (prev_odm_pipe->plane_state) {
+ struct scaler_data *sd = &prev_odm_pipe->plane_res.scl_data;
++ struct output_pixel_processor *opp = next_odm_pipe->stream_res.opp;
+ int new_width;
+
+ /* HACTIVE halved for odm combine */
+@@ -1544,7 +1545,28 @@ bool dcn20_split_stream_for_odm(
+ sd->viewport_c.x += dc_fixpt_floor(dc_fixpt_mul_int(
+ sd->ratios.horz_c, sd->h_active - sd->recout.x));
+ sd->recout.x = 0;
++
++ /*
++ * When odm is used in YcbCr422 or 420 colour space, a split screen
++ * will be seen with the previous calculations since the extra left
++ * edge pixel is accounted for in fmt but not in viewport.
++ *
++ * Below are the adjustments that repair the split by accounting for
++ * the extra left edge pixel when one is present.
++ */
++ if (opp && opp->funcs->opp_get_left_edge_extra_pixel_count
++ && opp->funcs->opp_get_left_edge_extra_pixel_count(
++ opp, next_odm_pipe->stream->timing.pixel_encoding,
++ resource_is_pipe_type(next_odm_pipe, OTG_MASTER)) == 1) {
++ sd->h_active += 1;
++ sd->recout.width += 1;
++ sd->viewport.x -= dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ sd->viewport_c.x -= dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ sd->viewport_c.width += dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ sd->viewport.width += dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ }
+ }
++
+ if (!next_odm_pipe->top_pipe)
+ next_odm_pipe->stream_res.opp = pool->opps[next_odm_pipe->pipe_idx];
+ else
+@@ -2133,6 +2155,7 @@ bool dcn20_fast_validate_bw(
+ ASSERT(0);
+ }
+ }
++
+ /* Actual dsc count per stream dsc validation*/
+ if (!dcn20_validate_dsc(dc, context)) {
+ context->bw_ctx.dml.vba.ValidationStatus[context->bw_ctx.dml.vba.soc.num_states] =
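The ODM left-edge fix above can be followed numerically: when the OPP reports one extra left-edge pixel for YCbCr 4:2:2/4:2:0, the active width and recout grow by a pixel while the viewport slides left and widens by the scaled equivalent, so both ODM halves sample the same source region. A toy sketch with plain doubles standing in for the driver's dc_fixpt type (all numbers invented):

#include <stdio.h>
#include <math.h>

int main(void)
{
	double horz_ratio = 1.5;	/* hypothetical scaling ratio */
	int h_active = 1920;		/* per-pipe width after the ODM split */
	int recout_width = 1920;
	double viewport_x = 2880.0;
	double viewport_width = 2880.0;
	int extra_left_edge_pixel = 1;	/* as reported by the OPP hook */

	if (extra_left_edge_pixel == 1) {
		/* Same adjustments the patch applies to the scaler data;
		 * the real code also moves the chroma viewport (viewport_c). */
		h_active += 1;
		recout_width += 1;
		viewport_x -= ceil(horz_ratio * 1);
		viewport_width += ceil(horz_ratio * 1);
	}

	printf("h_active=%d recout=%d viewport_x=%.0f width=%.0f\n",
	       h_active, recout_width, viewport_x, viewport_width);
	return 0;
}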
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+index 347e6aaea582fb..14b28841657d21 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+@@ -1298,7 +1298,7 @@ static struct link_encoder *dcn21_link_encoder_create(
+ kzalloc(sizeof(struct dcn21_link_encoder), GFP_KERNEL);
+ int link_regs_id;
+
+- if (!enc21)
++ if (!enc21 || enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
+ return NULL;
+
+ link_regs_id =
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
+index 5040a4c6ed1862..75cc84473a577e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
+@@ -2354,6 +2354,7 @@ static bool dcn30_resource_construct(
+
+ dc->caps.dp_hdmi21_pcon_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* read VBIOS LTTPR caps */
+ {
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
+index 5791b5cc287529..320b040d591d1e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
+@@ -1234,6 +1234,7 @@ static bool dcn302_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
+index 63f0f882c8610c..297cf4b5600dae 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
+@@ -1179,6 +1179,7 @@ static bool dcn303_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+index ac8cb20e2e3b64..80386f698ae4de 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+@@ -1721,6 +1721,12 @@ int dcn31_populate_dml_pipes_from_context(
+ return pipe_cnt;
+ }
+
++unsigned int dcn31_get_det_buffer_size(
++ const struct dc_state *context)
++{
++ return context->bw_ctx.dml.ip.det_buffer_size_kbytes;
++}
++
+ void dcn31_calculate_wm_and_dlg(
+ struct dc *dc, struct dc_state *context,
+ display_e2e_pipe_params_st *pipes,
+@@ -1843,6 +1849,7 @@ static struct resource_funcs dcn31_res_pool_funcs = {
+ .update_bw_bounding_box = dcn31_update_bw_bounding_box,
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn31_get_panel_config_defaults,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static struct clock_source *dcn30_clock_source_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h
+index 901436591ed45c..551ad912f7bea8 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h
+@@ -63,6 +63,9 @@ struct resource_pool *dcn31_create_resource_pool(
+ const struct dc_init_data *init_data,
+ struct dc *dc);
+
++unsigned int dcn31_get_det_buffer_size(
++ const struct dc_state *context);
++
+ /*temp: B0 specific before switch to dcn313 headers*/
+ #ifndef regPHYPLLF_PIXCLK_RESYNC_CNTL
+ #define regPHYPLLF_PIXCLK_RESYNC_CNTL 0x007e
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+index 169924d0a8393e..01d95108ce662b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+@@ -1778,6 +1778,7 @@ static struct resource_funcs dcn314_res_pool_funcs = {
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn314_get_panel_config_defaults,
+ .get_preferred_eng_id_dpia = dcn314_get_preferred_eng_id_dpia,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static struct clock_source *dcn30_clock_source_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+index 3f4b9dba411244..f2ce687c0e03ca 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+@@ -1840,6 +1840,7 @@ static struct resource_funcs dcn315_res_pool_funcs = {
+ .update_bw_bounding_box = dcn315_update_bw_bounding_box,
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn315_get_panel_config_defaults,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn315_resource_construct(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+index 5fd52c5fcee458..af82e13029c9e4 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+@@ -1720,6 +1720,7 @@ static struct resource_funcs dcn316_res_pool_funcs = {
+ .update_bw_bounding_box = dcn316_update_bw_bounding_box,
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn316_get_panel_config_defaults,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn316_resource_construct(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+index a124ad9bd108c8..6b889c8be0ca3f 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+@@ -2186,6 +2186,7 @@ static bool dcn32_resource_construct(
+ dc->caps.dmcub_support = true;
+ dc->caps.seamless_odm = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+index 827a94f84f1001..74113c578bac40 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+@@ -1743,6 +1743,7 @@ static bool dcn321_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index 893a9d9ee870df..d0c4693c12241b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -1779,6 +1779,7 @@ static struct resource_funcs dcn35_res_pool_funcs = {
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn35_get_panel_config_defaults,
+ .get_preferred_eng_id_dpia = dcn35_get_preferred_eng_id_dpia,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn35_resource_construct(
+@@ -1850,6 +1851,7 @@ static bool dcn35_resource_construct(
+ dc->caps.zstate_support = true;
+ dc->caps.ips_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index 70abd32ce2ad18..575c0aa12229cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -1758,6 +1758,7 @@ static struct resource_funcs dcn351_res_pool_funcs = {
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn35_get_panel_config_defaults,
+ .get_preferred_eng_id_dpia = dcn351_get_preferred_eng_id_dpia,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn351_resource_construct(
+@@ -1829,6 +1830,7 @@ static bool dcn351_resource_construct(
+ dc->caps.zstate_support = true;
+ dc->caps.ips_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+index 9d56fbdcd06afd..4aa975418fb18d 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+@@ -1826,6 +1826,7 @@ static bool dcn401_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ if (ASICREV_IS_GC_12_0_1_A0(dc->ctx->asic_id.hw_internal_rev))
+ dc->caps.dcc_plane_width_limit = 7680;
+diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+index ebcf68bfae2b32..7835100b37c41e 100644
+--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
++++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+@@ -747,7 +747,8 @@ union dmub_shared_state_ips_driver_signals {
+ uint32_t allow_ips1 : 1; /**< 1 is IPS1 is allowed */
+ uint32_t allow_ips2 : 1; /**< 1 is IPS1 is allowed */
+ uint32_t allow_z10 : 1; /**< 1 if Z10 is allowed */
+- uint32_t reserved_bits : 28; /**< Reversed bits */
++ uint32_t allow_idle : 1; /**< 1 if driver is allowing idle */
++ uint32_t reserved_bits : 27; /**< Reserved bits */
+ } bits;
+ uint32_t all;
+ };
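The bits/all union above is what makes the prev_driver_signals.all == 0 checks earlier in this patch cheap: every flag shares storage with one 32-bit word, so "no signals latched" is a single compare. A reduced standalone sketch of the pattern (field set trimmed; not the driver struct):

#include <stdio.h>
#include <stdint.h>

union demo_driver_signals {
	struct {
		uint32_t allow_pg   : 1;
		uint32_t allow_ips1 : 1;
		uint32_t allow_ips2 : 1;
		uint32_t allow_z10  : 1;
		uint32_t allow_idle : 1;
		uint32_t reserved   : 27;
	} bits;
	uint32_t all;
};

int main(void)
{
	union demo_driver_signals sig = { .all = 0 };

	sig.bits.allow_idle = 1;	/* always set, as in the patch */
	sig.bits.allow_ips1 = 1;

	printf("all=0x%08x empty=%d\n", (unsigned)sig.all, sig.all == 0);
	return 0;
}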
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index bbd259cea4f4f6..ab62a76d48cf76 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -121,6 +121,17 @@ static unsigned int calc_duration_in_us_from_v_total(
+ return duration_in_us;
+ }
+
++static unsigned int calc_max_hardware_v_total(const struct dc_stream_state *stream)
++{
++ unsigned int max_hw_v_total = stream->ctx->dc->caps.max_v_total;
++
++ if (stream->ctx->dc->caps.vtotal_limited_by_fp2) {
++ max_hw_v_total -= stream->timing.v_front_porch + 1;
++ }
++
++ return max_hw_v_total;
++}
++
+ unsigned int mod_freesync_calc_v_total_from_refresh(
+ const struct dc_stream_state *stream,
+ unsigned int refresh_in_uhz)
+@@ -1002,7 +1013,7 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
+
+ if (stream->ctx->dc->caps.max_v_total != 0 && stream->timing.h_total != 0) {
+ min_hardware_refresh_in_uhz = div64_u64((stream->timing.pix_clk_100hz * 100000000ULL),
+- (stream->timing.h_total * (long long)stream->ctx->dc->caps.max_v_total));
++ (stream->timing.h_total * (long long)calc_max_hardware_v_total(stream)));
+ }
+ /* Limit minimum refresh rate to what can be supported by hardware */
+ min_refresh_in_uhz = min_hardware_refresh_in_uhz > in_config->min_refresh_in_uhz ?
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index d5d6ab484e5add..0fa6fbee197899 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -1409,7 +1409,11 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
+ * create a custom set of heuristics, write a string of numbers to the file
+ * starting with the number of the custom profile along with a setting
+ * for each heuristic parameter. Due to differences across asic families
+- * the heuristic parameters vary from family to family.
++ * the heuristic parameters vary from family to family. Additionally,
++ * you can apply the custom heuristics to different clock domains. Each
++ * clock domain is considered a distinct operation, so if you modify the
++ * gfxclk heuristics and then the memclk heuristics, all of the
++ * custom heuristics will be retained until you switch to another profile.
+ *
+ */
+
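The documentation above describes a write-only sysfs protocol: the custom profile number first, then one clock-domain selector and its heuristic values. A hedged userspace sketch follows; the card path, the profile number 6, and every parameter value below are examples only, and the real column layout is ASIC-specific, as the text says:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *path =
		"/sys/class/drm/card0/device/pp_power_profile_mode";
	/* "6" selects a hypothetical CUSTOM profile; the remaining
	 * numbers are a made-up clock selector plus heuristic values. */
	const char *cmd = "6 0 30 60 25 1 800 4587520 65536 0 0\n";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("open pp_power_profile_mode");
		return 1;
	}
	fwrite(cmd, 1, strlen(cmd), f);
	fclose(f);
	return 0;
}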
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 32bdeac2676b5c..0c0b9aa44dfa3a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -72,6 +72,10 @@ static int smu_set_power_limit(void *handle, uint32_t limit);
+ static int smu_set_fan_speed_rpm(void *handle, uint32_t speed);
+ static int smu_set_gfx_cgpg(struct smu_context *smu, bool enabled);
+ static int smu_set_mp1_state(void *handle, enum pp_mp1_state mp1_state);
++static void smu_power_profile_mode_get(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode);
++static void smu_power_profile_mode_put(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode);
+
+ static int smu_sys_get_pp_feature_mask(void *handle,
+ char *buf)
+@@ -1257,35 +1261,19 @@ static int smu_sw_init(void *handle)
+ INIT_WORK(&smu->interrupt_work, smu_interrupt_work_fn);
+ atomic64_set(&smu->throttle_int_counter, 0);
+ smu->watermarks_bitmap = 0;
+- smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+- smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+
+ atomic_set(&smu->smu_power.power_gate.vcn_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.jpeg_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
+
+- smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
+-
+ if (smu->is_apu ||
+ !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D))
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
++ smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ else
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
+-
+- smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+- smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+- smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
+- smu->workload_setting[3] = PP_SMC_POWER_PROFILE_VIDEO;
+- smu->workload_setting[4] = PP_SMC_POWER_PROFILE_VR;
+- smu->workload_setting[5] = PP_SMC_POWER_PROFILE_COMPUTE;
+- smu->workload_setting[6] = PP_SMC_POWER_PROFILE_CUSTOM;
++ smu->power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
++ smu_power_profile_mode_get(smu, smu->power_profile_mode);
++
+ smu->display_config = &adev->pm.pm_display_cfg;
+
+ smu->smu_dpm.dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
+@@ -1338,6 +1326,11 @@ static int smu_sw_fini(void *handle)
+ return ret;
+ }
+
++ if (smu->custom_profile_params) {
++ kfree(smu->custom_profile_params);
++ smu->custom_profile_params = NULL;
++ }
++
+ smu_fini_microcode(smu);
+
+ return 0;
+@@ -2117,6 +2110,9 @@ static int smu_suspend(void *handle)
+ if (!ret)
+ adev->gfx.gfx_off_entrycount = count;
+
++ /* clear this on suspend so it will get reprogrammed on resume */
++ smu->workload_mask = 0;
++
+ return 0;
+ }
+
+@@ -2229,25 +2225,49 @@ static int smu_enable_umd_pstate(void *handle,
+ }
+
+ static int smu_bump_power_profile_mode(struct smu_context *smu,
+- long *param,
+- uint32_t param_size)
++ long *custom_params,
++ u32 custom_params_max_idx)
+ {
+- int ret = 0;
++ u32 workload_mask = 0;
++ int i, ret = 0;
++
++ for (i = 0; i < PP_SMC_POWER_PROFILE_COUNT; i++) {
++ if (smu->workload_refcount[i])
++ workload_mask |= 1 << i;
++ }
++
++ if (smu->workload_mask == workload_mask)
++ return 0;
+
+ if (smu->ppt_funcs->set_power_profile_mode)
+- ret = smu->ppt_funcs->set_power_profile_mode(smu, param, param_size);
++ ret = smu->ppt_funcs->set_power_profile_mode(smu, workload_mask,
++ custom_params,
++ custom_params_max_idx);
++
++ if (!ret)
++ smu->workload_mask = workload_mask;
+
+ return ret;
+ }
+
++static void smu_power_profile_mode_get(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode)
++{
++ smu->workload_refcount[profile_mode]++;
++}
++
++static void smu_power_profile_mode_put(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode)
++{
++ if (smu->workload_refcount[profile_mode])
++ smu->workload_refcount[profile_mode]--;
++}
++
+ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ enum amd_dpm_forced_level level,
+- bool skip_display_settings,
+- bool init)
++ bool skip_display_settings)
+ {
+ int ret = 0;
+- int index = 0;
+- long workload[1];
+ struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+
+ if (!skip_display_settings) {
+@@ -2284,14 +2304,8 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ }
+
+ if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+- smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
+- index = fls(smu->workload_mask);
+- index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- workload[0] = smu->workload_setting[index];
+-
+- if (init || smu->power_profile_mode != workload[0])
+- smu_bump_power_profile_mode(smu, workload, 0);
+- }
++ smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
++ smu_bump_power_profile_mode(smu, NULL, 0);
+
+ return ret;
+ }
+@@ -2310,13 +2324,13 @@ static int smu_handle_task(struct smu_context *smu,
+ ret = smu_pre_display_config_changed(smu);
+ if (ret)
+ return ret;
+- ret = smu_adjust_power_state_dynamic(smu, level, false, false);
++ ret = smu_adjust_power_state_dynamic(smu, level, false);
+ break;
+ case AMD_PP_TASK_COMPLETE_INIT:
+- ret = smu_adjust_power_state_dynamic(smu, level, true, true);
++ ret = smu_adjust_power_state_dynamic(smu, level, true);
+ break;
+ case AMD_PP_TASK_READJUST_POWER_STATE:
+- ret = smu_adjust_power_state_dynamic(smu, level, true, false);
++ ret = smu_adjust_power_state_dynamic(smu, level, true);
+ break;
+ default:
+ break;
+@@ -2338,12 +2352,11 @@ static int smu_handle_dpm_task(void *handle,
+
+ static int smu_switch_power_profile(void *handle,
+ enum PP_SMC_POWER_PROFILE type,
+- bool en)
++ bool enable)
+ {
+ struct smu_context *smu = handle;
+ struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+- long workload[1];
+- uint32_t index;
++ int ret;
+
+ if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled)
+ return -EOPNOTSUPP;
+@@ -2351,21 +2364,21 @@ static int smu_switch_power_profile(void *handle,
+ if (!(type < PP_SMC_POWER_PROFILE_CUSTOM))
+ return -EINVAL;
+
+- if (!en) {
+- smu->workload_mask &= ~(1 << smu->workload_prority[type]);
+- index = fls(smu->workload_mask);
+- index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- workload[0] = smu->workload_setting[index];
+- } else {
+- smu->workload_mask |= (1 << smu->workload_prority[type]);
+- index = fls(smu->workload_mask);
+- index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- workload[0] = smu->workload_setting[index];
+- }
+-
+ if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+- smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+- smu_bump_power_profile_mode(smu, workload, 0);
++ smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
++ if (enable)
++ smu_power_profile_mode_get(smu, type);
++ else
++ smu_power_profile_mode_put(smu, type);
++ ret = smu_bump_power_profile_mode(smu, NULL, 0);
++ if (ret) {
++ if (enable)
++ smu_power_profile_mode_put(smu, type);
++ else
++ smu_power_profile_mode_get(smu, type);
++ return ret;
++ }
++ }
+
+ return 0;
+ }
+@@ -3053,12 +3066,35 @@ static int smu_set_power_profile_mode(void *handle,
+ uint32_t param_size)
+ {
+ struct smu_context *smu = handle;
++ bool custom = false;
++ int ret = 0;
+
+ if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled ||
+ !smu->ppt_funcs->set_power_profile_mode)
+ return -EOPNOTSUPP;
+
+- return smu_bump_power_profile_mode(smu, param, param_size);
++ if (param[param_size] == PP_SMC_POWER_PROFILE_CUSTOM) {
++ custom = true;
++ /* clear frontend mask so custom changes propagate */
++ smu->workload_mask = 0;
++ }
++
++ if ((param[param_size] != smu->power_profile_mode) || custom) {
++ /* clear the old user preference */
++ smu_power_profile_mode_put(smu, smu->power_profile_mode);
++ /* set the new user preference */
++ smu_power_profile_mode_get(smu, param[param_size]);
++ ret = smu_bump_power_profile_mode(smu,
++ custom ? param : NULL,
++ custom ? param_size : 0);
++ if (ret)
++ smu_power_profile_mode_put(smu, param[param_size]);
++ else
++ /* store the user's preference */
++ smu->power_profile_mode = param[param_size];
++ }
++
++ return ret;
+ }
+
+ static int smu_get_fan_control_mode(void *handle, u32 *fan_mode)
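The core of the rework above is that the old priority/settings tables are gone: each PP_SMC_POWER_PROFILE_* slot now carries a refcount of requesters, and smu_bump_power_profile_mode() derives the workload mask as the set of nonzero slots. A compact standalone sketch of that bookkeeping, with the enum reduced to three invented slots:

#include <stdio.h>
#include <stdint.h>

enum { PROF_DEFAULT, PROF_FULLSCREEN3D, PROF_VIDEO, PROF_COUNT };

static unsigned int refcount[PROF_COUNT];

/* Mirrors the mask-building loop in smu_bump_power_profile_mode(). */
static uint32_t mask_from_refcounts(void)
{
	uint32_t mask = 0;
	int i;

	for (i = 0; i < PROF_COUNT; i++)
		if (refcount[i])
			mask |= 1u << i;
	return mask;
}

int main(void)
{
	refcount[PROF_FULLSCREEN3D]++;	/* like smu_power_profile_mode_get() */
	refcount[PROF_VIDEO]++;
	printf("mask=0x%x\n", mask_from_refcounts());	/* 0x6 */

	refcount[PROF_VIDEO]--;		/* like smu_power_profile_mode_put() */
	printf("mask=0x%x\n", mask_from_refcounts());	/* 0x2 */
	return 0;
}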
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+index b44a185d07e84c..2b8a18ce25d943 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+@@ -556,11 +556,13 @@ struct smu_context {
+ uint32_t hard_min_uclk_req_from_dal;
+ bool disable_uclk_switch;
+
++ /* asic agnostic workload mask */
+ uint32_t workload_mask;
+- uint32_t workload_prority[WORKLOAD_POLICY_MAX];
+- uint32_t workload_setting[WORKLOAD_POLICY_MAX];
++ /* default/user workload preference */
+ uint32_t power_profile_mode;
+- uint32_t default_power_profile_mode;
++ uint32_t workload_refcount[PP_SMC_POWER_PROFILE_COUNT];
++ /* backend specific custom workload settings */
++ long *custom_profile_params;
+ bool pm_enabled;
+ bool is_apu;
+
+@@ -731,9 +733,12 @@ struct pptable_funcs {
+ * @set_power_profile_mode: Set a power profile mode. Also used to
+ * create/set custom power profile modes.
+ * &input: Power profile mode parameters.
+- * &size: Size of &input.
++ * &workload_mask: mask of workloads to enable
++ * &custom_params: custom profile parameters
++ * &custom_params_max_idx: max valid idx into custom_params
+ */
+- int (*set_power_profile_mode)(struct smu_context *smu, long *input, uint32_t size);
++ int (*set_power_profile_mode)(struct smu_context *smu, u32 workload_mask,
++ long *custom_params, u32 custom_params_max_idx);
+
+ /**
+ * @dpm_set_vcn_enable: Enable/disable VCN engine dynamic power
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index d52512f5f1bd9d..fc1297fecc62e0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1445,98 +1445,120 @@ static int arcturus_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int arcturus_set_power_profile_mode(struct smu_context *smu,
+- long *input,
+- uint32_t size)
++#define ARCTURUS_CUSTOM_PARAMS_COUNT 10
++#define ARCTURUS_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define ARCTURUS_CUSTOM_PARAMS_SIZE (ARCTURUS_CUSTOM_PARAMS_CLOCK_COUNT * ARCTURUS_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int arcturus_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffInt_t activity_monitor;
+- int workload_type = 0;
+- uint32_t profile_mode = input[size];
+- int ret = 0;
++ int ret, idx;
+
+- if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor),
++ false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
+ }
+
++ idx = 0 * ARCTURUS_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor.Gfx_FPS = input[idx + 1];
++ activity_monitor.Gfx_UseRlcBusy = input[idx + 2];
++ activity_monitor.Gfx_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Gfx_MinActiveFreq = input[idx + 4];
++ activity_monitor.Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor.Gfx_BoosterFreq = input[idx + 6];
++ activity_monitor.Gfx_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Gfx_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Gfx_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 1 * ARCTURUS_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Uclk */
++ activity_monitor.Mem_FPS = input[idx + 1];
++ activity_monitor.Mem_UseRlcBusy = input[idx + 2];
++ activity_monitor.Mem_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Mem_MinActiveFreq = input[idx + 4];
++ activity_monitor.Mem_BoosterFreqType = input[idx + 5];
++ activity_monitor.Mem_BoosterFreq = input[idx + 6];
++ activity_monitor.Mem_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Mem_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Mem_PD_Data_error_rate_coeff = input[idx + 9];
++ }
+
+- if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
+- (smu->smc_fw_version >= 0x360d00)) {
+- if (size != 10)
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor),
++ true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor),
+- false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ return ret;
++}
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor.Gfx_FPS = input[1];
+- activity_monitor.Gfx_UseRlcBusy = input[2];
+- activity_monitor.Gfx_MinActiveFreqType = input[3];
+- activity_monitor.Gfx_MinActiveFreq = input[4];
+- activity_monitor.Gfx_BoosterFreqType = input[5];
+- activity_monitor.Gfx_BoosterFreq = input[6];
+- activity_monitor.Gfx_PD_Data_limit_c = input[7];
+- activity_monitor.Gfx_PD_Data_error_coeff = input[8];
+- activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 1: /* Uclk */
+- activity_monitor.Mem_FPS = input[1];
+- activity_monitor.Mem_UseRlcBusy = input[2];
+- activity_monitor.Mem_MinActiveFreqType = input[3];
+- activity_monitor.Mem_MinActiveFreq = input[4];
+- activity_monitor.Mem_BoosterFreqType = input[5];
+- activity_monitor.Mem_BoosterFreq = input[6];
+- activity_monitor.Mem_PD_Data_limit_c = input[7];
+- activity_monitor.Mem_PD_Data_error_coeff = input[8];
+- activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+- break;
+- default:
++static int arcturus_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (smu->smc_fw_version < 0x360d00)
+ return -EINVAL;
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(ARCTURUS_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
+ }
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor),
+- true);
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != ARCTURUS_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= ARCTURUS_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * ARCTURUS_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = arcturus_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
+- }
+-
+- /*
+- * Conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT
+- * Not all profile modes are supported on arcturus.
+- */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- profile_mode);
+- if (workload_type < 0) {
+- dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on arcturus\n", profile_mode);
+- return -EINVAL;
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, ARCTURUS_CUSTOM_PARAMS_SIZE);
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- 1 << workload_type,
+- NULL);
++ SMU_MSG_SetWorkloadMask,
++ backend_workload_mask,
++ NULL);
+ if (ret) {
+- dev_err(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
+-
+- return 0;
++ return ret;
+ }
+
+ static int arcturus_set_performance_level(struct smu_context *smu,
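The ARCTURUS_CUSTOM_PARAMS_* constants above lay the cached custom settings out as a flat table: one row of 10 longs per clock domain, slot 0 marking the row as populated and slots 1-9 holding the coefficients, so a later SetWorkloadMask can replay rows written earlier. A standalone sketch of that layout with invented input values:

#include <stdio.h>
#include <stdlib.h>

#define PARAMS_COUNT 10	/* valid flag + 9 coefficients per clock */
#define CLOCK_COUNT  2	/* gfxclk and uclk rows, as on arcturus */

int main(void)
{
	long *cache = calloc(CLOCK_COUNT * PARAMS_COUNT, sizeof(*cache));
	long user_input[PARAMS_COUNT] = { 1, 300, 1, 0, 0, 0, 0, 0, 0, 0 };
	long clock = user_input[0];	/* row selector: here 1 == uclk */
	int idx, i;

	if (!cache)
		return 1;

	/* Mark the row valid, then copy the user's coefficients in. */
	idx = clock * PARAMS_COUNT;
	cache[idx] = 1;
	for (i = 1; i < PARAMS_COUNT; i++)
		cache[idx + i] = user_input[i];

	printf("uclk row valid=%ld, first coeff=%ld\n",
	       cache[1 * PARAMS_COUNT], cache[1 * PARAMS_COUNT + 1]);
	free(cache);
	return 0;
}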
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 16af1a329621f1..27c1892b2c7493 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2004,87 +2004,122 @@ static int navi10_get_power_profile_mode(struct smu_context *smu, char *buf)
+ return size;
+ }
+
+-static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++#define NAVI10_CUSTOM_PARAMS_COUNT 10
++#define NAVI10_CUSTOM_PARAMS_CLOCKS_COUNT 3
++#define NAVI10_CUSTOM_PARAMS_SIZE (NAVI10_CUSTOM_PARAMS_CLOCKS_COUNT * NAVI10_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int navi10_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffInt_t activity_monitor;
+- int workload_type, ret = 0;
++ int ret, idx;
+
+- smu->power_profile_mode = input[size];
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor), false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
++ }
+
+- if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ idx = 0 * NAVI10_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor.Gfx_FPS = input[idx + 1];
++ activity_monitor.Gfx_MinFreqStep = input[idx + 2];
++ activity_monitor.Gfx_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Gfx_MinActiveFreq = input[idx + 4];
++ activity_monitor.Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor.Gfx_BoosterFreq = input[idx + 6];
++ activity_monitor.Gfx_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Gfx_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Gfx_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 1 * NAVI10_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Socclk */
++ activity_monitor.Soc_FPS = input[idx + 1];
++ activity_monitor.Soc_MinFreqStep = input[idx + 2];
++ activity_monitor.Soc_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Soc_MinActiveFreq = input[idx + 4];
++ activity_monitor.Soc_BoosterFreqType = input[idx + 5];
++ activity_monitor.Soc_BoosterFreq = input[idx + 6];
++ activity_monitor.Soc_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Soc_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Soc_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 2 * NAVI10_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Memclk */
++ activity_monitor.Mem_FPS = input[idx + 1];
++ activity_monitor.Mem_MinFreqStep = input[idx + 2];
++ activity_monitor.Mem_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Mem_MinActiveFreq = input[idx + 4];
++ activity_monitor.Mem_BoosterFreqType = input[idx + 5];
++ activity_monitor.Mem_BoosterFreq = input[idx + 6];
++ activity_monitor.Mem_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Mem_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Mem_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor), true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 10)
+- return -EINVAL;
++ return ret;
++}
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor), false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++static int navi10_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor.Gfx_FPS = input[1];
+- activity_monitor.Gfx_MinFreqStep = input[2];
+- activity_monitor.Gfx_MinActiveFreqType = input[3];
+- activity_monitor.Gfx_MinActiveFreq = input[4];
+- activity_monitor.Gfx_BoosterFreqType = input[5];
+- activity_monitor.Gfx_BoosterFreq = input[6];
+- activity_monitor.Gfx_PD_Data_limit_c = input[7];
+- activity_monitor.Gfx_PD_Data_error_coeff = input[8];
+- activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 1: /* Socclk */
+- activity_monitor.Soc_FPS = input[1];
+- activity_monitor.Soc_MinFreqStep = input[2];
+- activity_monitor.Soc_MinActiveFreqType = input[3];
+- activity_monitor.Soc_MinActiveFreq = input[4];
+- activity_monitor.Soc_BoosterFreqType = input[5];
+- activity_monitor.Soc_BoosterFreq = input[6];
+- activity_monitor.Soc_PD_Data_limit_c = input[7];
+- activity_monitor.Soc_PD_Data_error_coeff = input[8];
+- activity_monitor.Soc_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 2: /* Memclk */
+- activity_monitor.Mem_FPS = input[1];
+- activity_monitor.Mem_MinFreqStep = input[2];
+- activity_monitor.Mem_MinActiveFreqType = input[3];
+- activity_monitor.Mem_MinActiveFreq = input[4];
+- activity_monitor.Mem_BoosterFreqType = input[5];
+- activity_monitor.Mem_BoosterFreq = input[6];
+- activity_monitor.Mem_PD_Data_limit_c = input[7];
+- activity_monitor.Mem_PD_Data_error_coeff = input[8];
+- activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+- break;
+- default:
+- return -EINVAL;
+- }
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor), true);
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params = kzalloc(NAVI10_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
++ }
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != NAVI10_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= NAVI10_CUSTOM_PARAMS_CLOCKS_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * NAVI10_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = navi10_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, NAVI10_CUSTOM_PARAMS_SIZE);
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
+- if (ret)
+- dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ backend_workload_mask, NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 9c3c48297cba03..1af90990d05c8f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1706,90 +1706,126 @@ static int sienna_cichlid_get_power_profile_mode(struct smu_context *smu, char *
+ return size;
+ }
+
+-static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++#define SIENNA_CICHLID_CUSTOM_PARAMS_COUNT 10
++#define SIENNA_CICHLID_CUSTOM_PARAMS_CLOCK_COUNT 3
++#define SIENNA_CICHLID_CUSTOM_PARAMS_SIZE (SIENNA_CICHLID_CUSTOM_PARAMS_CLOCK_COUNT * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int sienna_cichlid_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
++ int ret, idx;
+
+- smu->power_profile_mode = input[size];
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
++ }
+
+- if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ idx = 0 * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_FPS = input[idx + 1];
++ activity_monitor->Gfx_MinFreqStep = input[idx + 2];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 3];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 6];
++ activity_monitor->Gfx_PD_Data_limit_c = input[idx + 7];
++ activity_monitor->Gfx_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor->Gfx_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 1 * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Socclk */
++ activity_monitor->Fclk_FPS = input[idx + 1];
++ activity_monitor->Fclk_MinFreqStep = input[idx + 2];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 3];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 5];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 6];
++ activity_monitor->Fclk_PD_Data_limit_c = input[idx + 7];
++ activity_monitor->Fclk_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor->Fclk_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 2 * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Memclk */
++ activity_monitor->Mem_FPS = input[idx + 1];
++ activity_monitor->Mem_MinFreqStep = input[idx + 2];
++ activity_monitor->Mem_MinActiveFreqType = input[idx + 3];
++ activity_monitor->Mem_MinActiveFreq = input[idx + 4];
++ activity_monitor->Mem_BoosterFreqType = input[idx + 5];
++ activity_monitor->Mem_BoosterFreq = input[idx + 6];
++ activity_monitor->Mem_PD_Data_limit_c = input[idx + 7];
++ activity_monitor->Mem_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor->Mem_PD_Data_error_rate_coeff = input[idx + 9];
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 10)
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ return ret;
++}
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_FPS = input[1];
+- activity_monitor->Gfx_MinFreqStep = input[2];
+- activity_monitor->Gfx_MinActiveFreqType = input[3];
+- activity_monitor->Gfx_MinActiveFreq = input[4];
+- activity_monitor->Gfx_BoosterFreqType = input[5];
+- activity_monitor->Gfx_BoosterFreq = input[6];
+- activity_monitor->Gfx_PD_Data_limit_c = input[7];
+- activity_monitor->Gfx_PD_Data_error_coeff = input[8];
+- activity_monitor->Gfx_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 1: /* Socclk */
+- activity_monitor->Fclk_FPS = input[1];
+- activity_monitor->Fclk_MinFreqStep = input[2];
+- activity_monitor->Fclk_MinActiveFreqType = input[3];
+- activity_monitor->Fclk_MinActiveFreq = input[4];
+- activity_monitor->Fclk_BoosterFreqType = input[5];
+- activity_monitor->Fclk_BoosterFreq = input[6];
+- activity_monitor->Fclk_PD_Data_limit_c = input[7];
+- activity_monitor->Fclk_PD_Data_error_coeff = input[8];
+- activity_monitor->Fclk_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 2: /* Memclk */
+- activity_monitor->Mem_FPS = input[1];
+- activity_monitor->Mem_MinFreqStep = input[2];
+- activity_monitor->Mem_MinActiveFreqType = input[3];
+- activity_monitor->Mem_MinActiveFreq = input[4];
+- activity_monitor->Mem_BoosterFreqType = input[5];
+- activity_monitor->Mem_BoosterFreq = input[6];
+- activity_monitor->Mem_PD_Data_limit_c = input[7];
+- activity_monitor->Mem_PD_Data_error_coeff = input[8];
+- activity_monitor->Mem_PD_Data_error_rate_coeff = input[9];
+- break;
+- default:
+- return -EINVAL;
++static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SIENNA_CICHLID_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
+ }
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), true);
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SIENNA_CICHLID_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SIENNA_CICHLID_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = sienna_cichlid_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SIENNA_CICHLID_CUSTOM_PARAMS_SIZE);
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
+- if (ret)
+- dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ backend_workload_mask, NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 1fe020f1f4dbe2..9bca748ac2e947 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -1054,42 +1054,27 @@ static int vangogh_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++static int vangogh_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
+ {
+- int workload_type, ret;
+- uint32_t profile_mode = input[size];
++ u32 backend_workload_mask = 0;
++ int ret;
+
+- if (profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
+- return -EINVAL;
+- }
+-
+- if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
+- profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
+- return 0;
+-
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- profile_mode);
+- if (workload_type < 0) {
+- dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on VANGOGH\n",
+- profile_mode);
+- return -EINVAL;
+- }
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- 1 << workload_type,
+- NULL);
++ backend_workload_mask,
++ NULL);
+ if (ret) {
+- dev_err_once(smu->adev->dev, "Fail to set workload type %d\n",
+- workload_type);
++ dev_err_once(smu->adev->dev, "Fail to set workload mask 0x%08x\n",
++ workload_mask);
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
+-
+- return 0;
++ return ret;
+ }
+
+ static int vangogh_set_soft_freq_limited_range(struct smu_context *smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index cc0504b063fa3a..1a8a42b176e520 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -862,44 +862,27 @@ static int renoir_force_clk_levels(struct smu_context *smu,
+ return ret;
+ }
+
+-static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++static int renoir_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
+ {
+- int workload_type, ret;
+- uint32_t profile_mode = input[size];
++ int ret;
++ u32 backend_workload_mask = 0;
+
+- if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
+- return -EINVAL;
+- }
+-
+- if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
+- profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
+- return 0;
+-
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- profile_mode);
+- if (workload_type < 0) {
+- /*
+- * TODO: If some case need switch to powersave/default power mode
+- * then can consider enter WORKLOAD_COMPUTE/WORKLOAD_CUSTOM for power saving.
+- */
+- dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on RENOIR\n", profile_mode);
+- return -EINVAL;
+- }
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- 1 << workload_type,
+- NULL);
++ backend_workload_mask,
++ NULL);
+ if (ret) {
+- dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
++ dev_err_once(smu->adev->dev, "Failed to set workload mask 0x08%x\n",
++ workload_mask);
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
+-
+- return 0;
++ return ret;
+ }
+
+ static int renoir_set_peak_clock_by_device(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index d53e162dcd8de2..a9373968807164 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2477,82 +2477,76 @@ static int smu_v13_0_0_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+- long *input,
+- uint32_t size)
++#define SMU_13_0_0_CUSTOM_PARAMS_COUNT 9
++#define SMU_13_0_0_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define SMU_13_0_0_CUSTOM_PARAMS_SIZE (SMU_13_0_0_CUSTOM_PARAMS_CLOCK_COUNT * SMU_13_0_0_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int smu_v13_0_0_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
+- u32 workload_mask, selected_workload_mask;
+-
+- smu->power_profile_mode = input[size];
++ int ret, idx;
+
+- if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 9)
+- return -EINVAL;
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
+-
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_FPS = input[1];
+- activity_monitor->Gfx_MinActiveFreqType = input[2];
+- activity_monitor->Gfx_MinActiveFreq = input[3];
+- activity_monitor->Gfx_BoosterFreqType = input[4];
+- activity_monitor->Gfx_BoosterFreq = input[5];
+- activity_monitor->Gfx_PD_Data_limit_c = input[6];
+- activity_monitor->Gfx_PD_Data_error_coeff = input[7];
+- activity_monitor->Gfx_PD_Data_error_rate_coeff = input[8];
+- break;
+- case 1: /* Fclk */
+- activity_monitor->Fclk_FPS = input[1];
+- activity_monitor->Fclk_MinActiveFreqType = input[2];
+- activity_monitor->Fclk_MinActiveFreq = input[3];
+- activity_monitor->Fclk_BoosterFreqType = input[4];
+- activity_monitor->Fclk_BoosterFreq = input[5];
+- activity_monitor->Fclk_PD_Data_limit_c = input[6];
+- activity_monitor->Fclk_PD_Data_error_coeff = input[7];
+- activity_monitor->Fclk_PD_Data_error_rate_coeff = input[8];
+- break;
+- default:
+- return -EINVAL;
+- }
++ idx = 0 * SMU_13_0_0_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_FPS = input[idx + 1];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 3];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 5];
++ activity_monitor->Gfx_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Gfx_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Gfx_PD_Data_error_rate_coeff = input[idx + 8];
++ }
++ idx = 1 * SMU_13_0_0_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Fclk */
++ activity_monitor->Fclk_FPS = input[idx + 1];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 3];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 5];
++ activity_monitor->Fclk_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Fclk_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Fclk_PD_Data_error_rate_coeff = input[idx + 8];
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- true);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
+- return ret;
+- }
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
++ return ret;
++}
+
+- if (workload_type < 0)
+- return -EINVAL;
++static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int workload_type, ret, idx = -1, i;
+
+- selected_workload_mask = workload_mask = 1 << workload_type;
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+ if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+@@ -2564,15 +2558,48 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ CMN2ASIC_MAPPING_WORKLOAD,
+ PP_SMC_POWER_PROFILE_POWERSAVING);
+ if (workload_type >= 0)
+- workload_mask |= 1 << workload_type;
++ backend_workload_mask |= 1 << workload_type;
++ }
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SMU_13_0_0_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
++ }
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SMU_13_0_0_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SMU_13_0_0_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SMU_13_0_0_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = smu_v13_0_0_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
++ if (ret) {
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SMU_13_0_0_CUSTOM_PARAMS_SIZE);
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- workload_mask,
+- NULL);
+- if (!ret)
+- smu->workload_mask = selected_workload_mask;
++ SMU_MSG_SetWorkloadMask,
++ backend_workload_mask,
++ NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index ceaf4572db2527..d0e6d051e9cf9f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2436,78 +2436,110 @@ do { \
+ return result;
+ }
+
+-static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++#define SMU_13_0_7_CUSTOM_PARAMS_COUNT 8
++#define SMU_13_0_7_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define SMU_13_0_7_CUSTOM_PARAMS_SIZE (SMU_13_0_7_CUSTOM_PARAMS_CLOCK_COUNT * SMU_13_0_7_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int smu_v13_0_7_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
++ int ret, idx;
+
+- smu->power_profile_mode = input[size];
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
++ }
+
+- if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_WINDOW3D) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ idx = 0 * SMU_13_0_7_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_ActiveHystLimit = input[idx + 1];
++ activity_monitor->Gfx_IdleHystLimit = input[idx + 2];
++ activity_monitor->Gfx_FPS = input[idx + 3];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 6];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 7];
++ }
++ idx = 1 * SMU_13_0_7_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Fclk */
++ activity_monitor->Fclk_ActiveHystLimit = input[idx + 1];
++ activity_monitor->Fclk_IdleHystLimit = input[idx + 2];
++ activity_monitor->Fclk_FPS = input[idx + 3];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 5];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 6];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 7];
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 8)
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ return ret;
++}
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_ActiveHystLimit = input[1];
+- activity_monitor->Gfx_IdleHystLimit = input[2];
+- activity_monitor->Gfx_FPS = input[3];
+- activity_monitor->Gfx_MinActiveFreqType = input[4];
+- activity_monitor->Gfx_BoosterFreqType = input[5];
+- activity_monitor->Gfx_MinActiveFreq = input[6];
+- activity_monitor->Gfx_BoosterFreq = input[7];
+- break;
+- case 1: /* Fclk */
+- activity_monitor->Fclk_ActiveHystLimit = input[1];
+- activity_monitor->Fclk_IdleHystLimit = input[2];
+- activity_monitor->Fclk_FPS = input[3];
+- activity_monitor->Fclk_MinActiveFreqType = input[4];
+- activity_monitor->Fclk_BoosterFreqType = input[5];
+- activity_monitor->Fclk_MinActiveFreq = input[6];
+- activity_monitor->Fclk_BoosterFreq = input[7];
+- break;
+- default:
+- return -EINVAL;
++static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SMU_13_0_7_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
+ }
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), true);
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SMU_13_0_7_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SMU_13_0_7_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SMU_13_0_7_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = smu_v13_0_7_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SMU_13_0_7_CUSTOM_PARAMS_SIZE);
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
++ backend_workload_mask, NULL);
+
+- if (ret)
+- dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
+- else
+- smu->workload_mask = (1 << workload_type);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 82aef8626afa97..b22fb7eafcd3f2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1751,90 +1751,120 @@ static int smu_v14_0_2_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+- long *input,
+- uint32_t size)
++#define SMU_14_0_2_CUSTOM_PARAMS_COUNT 9
++#define SMU_14_0_2_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define SMU_14_0_2_CUSTOM_PARAMS_SIZE (SMU_14_0_2_CUSTOM_PARAMS_CLOCK_COUNT * SMU_14_0_2_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int smu_v14_0_2_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
+- uint32_t current_profile_mode = smu->power_profile_mode;
+- smu->power_profile_mode = input[size];
++ int ret, idx;
+
+- if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 9)
+- return -EINVAL;
++ idx = 0 * SMU_14_0_2_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_FPS = input[idx + 1];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 3];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 5];
++ activity_monitor->Gfx_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Gfx_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Gfx_PD_Data_error_rate_coeff = input[idx + 8];
++ }
++ idx = 1 * SMU_14_0_2_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Fclk */
++ activity_monitor->Fclk_FPS = input[idx + 1];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 3];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 5];
++ activity_monitor->Fclk_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Fclk_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Fclk_PD_Data_error_rate_coeff = input[idx + 8];
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_FPS = input[1];
+- activity_monitor->Gfx_MinActiveFreqType = input[2];
+- activity_monitor->Gfx_MinActiveFreq = input[3];
+- activity_monitor->Gfx_BoosterFreqType = input[4];
+- activity_monitor->Gfx_BoosterFreq = input[5];
+- activity_monitor->Gfx_PD_Data_limit_c = input[6];
+- activity_monitor->Gfx_PD_Data_error_coeff = input[7];
+- activity_monitor->Gfx_PD_Data_error_rate_coeff = input[8];
+- break;
+- case 1: /* Fclk */
+- activity_monitor->Fclk_FPS = input[1];
+- activity_monitor->Fclk_MinActiveFreqType = input[2];
+- activity_monitor->Fclk_MinActiveFreq = input[3];
+- activity_monitor->Fclk_BoosterFreqType = input[4];
+- activity_monitor->Fclk_BoosterFreq = input[5];
+- activity_monitor->Fclk_PD_Data_limit_c = input[6];
+- activity_monitor->Fclk_PD_Data_error_coeff = input[7];
+- activity_monitor->Fclk_PD_Data_error_rate_coeff = input[8];
+- break;
+- default:
+- return -EINVAL;
+- }
++ return ret;
++}
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- true);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
+- return ret;
+- }
+- }
++static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
++ /* disable deep sleep if compute is enabled */
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_COMPUTE))
+ smu_v14_0_deep_sleep_control(smu, false);
+- else if (current_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
++ else
+ smu_v14_0_deep_sleep_control(smu, true);
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SMU_14_0_2_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
++ }
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SMU_14_0_2_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SMU_14_0_2_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SMU_14_0_2_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = smu_v14_0_2_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
++ if (ret) {
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SMU_14_0_2_CUSTOM_PARAMS_SIZE);
++ }
+
+- ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- 1 << workload_type,
+- NULL);
+- if (!ret)
+- smu->workload_mask = 1 << workload_type;
++ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
++ backend_workload_mask, NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index 91ad434bcdaeb4..0d71db7be325da 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -1215,3 +1215,28 @@ void smu_cmn_generic_plpd_policy_desc(struct smu_dpm_policy *policy)
+ {
+ policy->desc = &xgmi_plpd_policy_desc;
+ }
++
++void smu_cmn_get_backend_workload_mask(struct smu_context *smu,
++ u32 workload_mask,
++ u32 *backend_workload_mask)
++{
++ int workload_type;
++ u32 profile_mode;
++
++ *backend_workload_mask = 0;
++
++ for (profile_mode = 0; profile_mode < PP_SMC_POWER_PROFILE_COUNT; profile_mode++) {
++ if (!(workload_mask & (1 << profile_mode)))
++ continue;
++
++ /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
++ workload_type = smu_cmn_to_asic_specific_index(smu,
++ CMN2ASIC_MAPPING_WORKLOAD,
++ profile_mode);
++
++ if (workload_type < 0)
++ continue;
++
++ *backend_workload_mask |= 1 << workload_type;
++ }
++}
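
The helper above replaces the per-driver single-profile lookup: every bit set in the
driver-level workload mask is translated through the CMN2ASIC workload mapping, and
profiles with no ASIC equivalent are skipped instead of failing the whole request. As a
rough standalone sketch of the same bit-remapping idea (the table contents and the
names here are hypothetical, not the real mapping):

#include <stdint.h>
#include <stdio.h>

#define PROFILE_COUNT 6

/* Hypothetical frontend-index -> backend-bit table; -1 = not supported. */
static const int asic_workload_map[PROFILE_COUNT] = { 0, 1, -1, 4, 5, 7 };

static uint32_t to_backend_mask(uint32_t frontend_mask)
{
	uint32_t backend_mask = 0;

	for (int profile = 0; profile < PROFILE_COUNT; profile++) {
		if (!(frontend_mask & (1u << profile)))
			continue;
		if (asic_workload_map[profile] < 0)
			continue;	/* profile has no ASIC equivalent */
		backend_mask |= 1u << asic_workload_map[profile];
	}
	return backend_mask;
}

int main(void)
{
	/* Profiles 1 and 2 requested; 2 is unmappable, so only bit 1 survives. */
	printf("0x%08x\n", to_backend_mask((1u << 1) | (1u << 2)));	/* 0x00000002 */
	return 0;
}
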
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 1de685defe85b1..a020277dec3e96 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -147,5 +147,9 @@ bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev);
+ void smu_cmn_generic_soc_policy_desc(struct smu_dpm_policy *policy);
+ void smu_cmn_generic_plpd_policy_desc(struct smu_dpm_policy *policy);
+
++void smu_cmn_get_backend_workload_mask(struct smu_context *smu,
++ u32 workload_mask,
++ u32 *backend_workload_mask);
++
+ #endif
+ #endif
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 65b57de20203f5..008d86cc562af7 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3507,6 +3507,7 @@ static const struct of_device_id it6505_of_match[] = {
+ { .compatible = "ite,it6505" },
+ { }
+ };
++MODULE_DEVICE_TABLE(of, it6505_of_match);
+
+ static struct i2c_driver it6505_i2c_driver = {
+ .driver = {
+diff --git a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+index 14a2a8473682b0..c491e3203bf11c 100644
+--- a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+@@ -160,11 +160,11 @@ EXPORT_SYMBOL(drm_dp_dual_mode_write);
+
+ static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
+ {
+- static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
++ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN + 1] =
+ "DP-HDMI ADAPTOR\x04";
+
+ return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
+- sizeof(dp_dual_mode_hdmi_id)) == 0;
++ DP_DUAL_MODE_HDMI_ID_LEN) == 0;
+ }
+
+ static bool is_type1_adaptor(uint8_t adaptor_id)
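
For context, the one-byte enlargement works around a C string-literal subtlety:
"DP-HDMI ADAPTOR\x04" is 16 characters plus an implicit NUL, so initializing a
char[DP_DUAL_MODE_HDMI_ID_LEN] array with it silently drops the terminator (legal C,
but it trips newer compiler warnings about unterminated string initialization), while
comparing exactly DP_DUAL_MODE_HDMI_ID_LEN bytes keeps the old behavior. A minimal
illustration of the sizing rules, independent of the DRM code:

#include <stdio.h>
#include <string.h>

#define ID_LEN 16

int main(void)
{
	/* Exactly fills the array: the terminating NUL is dropped (legal C). */
	static const char tight[ID_LEN] = "DP-HDMI ADAPTOR\x04";
	/* One spare byte keeps the NUL, so the array is a proper C string. */
	static const char roomy[ID_LEN + 1] = "DP-HDMI ADAPTOR\x04";

	printf("sizeof(tight) = %zu\n", sizeof(tight));	/* 16 */
	printf("sizeof(roomy) = %zu\n", sizeof(roomy));	/* 17 */
	printf("strlen(roomy) = %zu\n", strlen(roomy));	/* 16 */

	/* Compare only the 16 ID bytes, as the fixed code does. */
	printf("match = %d\n", memcmp(tight, roomy, ID_LEN) == 0);	/* 1 */
	return 0;
}
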
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index ac90118b9e7a81..bcf3a33123be1c 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -320,6 +320,9 @@ static bool drm_dp_decode_sideband_msg_hdr(const struct drm_dp_mst_topology_mgr
+ hdr->broadcast = (buf[idx] >> 7) & 0x1;
+ hdr->path_msg = (buf[idx] >> 6) & 0x1;
+ hdr->msg_len = buf[idx] & 0x3f;
++ if (hdr->msg_len < 1) /* min space for body CRC */
++ return false;
++
+ idx++;
+ hdr->somt = (buf[idx] >> 7) & 0x1;
+ hdr->eomt = (buf[idx] >> 6) & 0x1;
+@@ -3697,8 +3700,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
+ ret = 0;
+ mgr->payload_id_table_cleared = false;
+
+- memset(&mgr->down_rep_recv, 0, sizeof(mgr->down_rep_recv));
+- memset(&mgr->up_req_recv, 0, sizeof(mgr->up_req_recv));
++ mgr->reset_rx_state = true;
+ }
+
+ out_unlock:
+@@ -3856,6 +3858,11 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
+ }
+ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume);
+
++static void reset_msg_rx_state(struct drm_dp_sideband_msg_rx *msg)
++{
++ memset(msg, 0, sizeof(*msg));
++}
++
+ static bool
+ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
+ struct drm_dp_mst_branch **mstb)
+@@ -3934,6 +3941,34 @@ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
+ return true;
+ }
+
++static int get_msg_request_type(u8 data)
++{
++ return data & 0x7f;
++}
++
++static bool verify_rx_request_type(struct drm_dp_mst_topology_mgr *mgr,
++ const struct drm_dp_sideband_msg_tx *txmsg,
++ const struct drm_dp_sideband_msg_rx *rxmsg)
++{
++ const struct drm_dp_sideband_msg_hdr *hdr = &rxmsg->initial_hdr;
++ const struct drm_dp_mst_branch *mstb = txmsg->dst;
++ int tx_req_type = get_msg_request_type(txmsg->msg[0]);
++ int rx_req_type = get_msg_request_type(rxmsg->msg[0]);
++ char rad_str[64];
++
++ if (tx_req_type == rx_req_type)
++ return true;
++
++ drm_dp_mst_rad_to_str(mstb->rad, mstb->lct, rad_str, sizeof(rad_str));
++ drm_dbg_kms(mgr->dev,
++ "Got unexpected MST reply, mstb: %p seqno: %d lct: %d rad: %s rx_req_type: %s (%02x) != tx_req_type: %s (%02x)\n",
++ mstb, hdr->seqno, mstb->lct, rad_str,
++ drm_dp_mst_req_type_str(rx_req_type), rx_req_type,
++ drm_dp_mst_req_type_str(tx_req_type), tx_req_type);
++
++ return false;
++}
++
+ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+ {
+ struct drm_dp_sideband_msg_tx *txmsg;
+@@ -3963,6 +3998,9 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+ goto out_clear_reply;
+ }
+
++ if (!verify_rx_request_type(mgr, txmsg, msg))
++ goto out_clear_reply;
++
+ drm_dp_sideband_parse_reply(mgr, msg, &txmsg->reply);
+
+ if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
+@@ -4138,6 +4176,17 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ return 0;
+ }
+
++static void update_msg_rx_state(struct drm_dp_mst_topology_mgr *mgr)
++{
++ mutex_lock(&mgr->lock);
++ if (mgr->reset_rx_state) {
++ mgr->reset_rx_state = false;
++ reset_msg_rx_state(&mgr->down_rep_recv);
++ reset_msg_rx_state(&mgr->up_req_recv);
++ }
++ mutex_unlock(&mgr->lock);
++}
++
+ /**
+ * drm_dp_mst_hpd_irq_handle_event() - MST hotplug IRQ handle MST event
+ * @mgr: manager to notify irq for.
+@@ -4172,6 +4221,8 @@ int drm_dp_mst_hpd_irq_handle_event(struct drm_dp_mst_topology_mgr *mgr, const u
+ *handled = true;
+ }
+
++ update_msg_rx_state(mgr);
++
+ if (esi[1] & DP_DOWN_REP_MSG_RDY) {
+ ret = drm_dp_mst_handle_down_rep(mgr);
+ *handled = true;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 2d84d7ea1ab7a0..4a73821b81f6fd 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -184,6 +184,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* AYA NEO AYANEO 2 */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYANEO 2"),
++ },
++ .driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* AYA NEO 2021 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYADEVICE"),
+@@ -196,6 +202,18 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "AIR"),
+ },
+ .driver_data = (void *)&lcd1080x1920_leftside_up,
++ }, { /* AYA NEO Founder */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYA NEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "AYA NEO Founder"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* AYA NEO GEEK */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GEEK"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* AYA NEO NEXT */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"),
+diff --git a/drivers/gpu/drm/drm_panic.c b/drivers/gpu/drm/drm_panic.c
+index 74412b7bf936c2..0a9ecc1380d2a4 100644
+--- a/drivers/gpu/drm/drm_panic.c
++++ b/drivers/gpu/drm/drm_panic.c
+@@ -209,6 +209,14 @@ static u32 convert_xrgb8888_to_argb2101010(u32 pix)
+ return GENMASK(31, 30) /* set alpha bits */ | pix | ((pix >> 8) & 0x00300C03);
+ }
+
++static u32 convert_xrgb8888_to_abgr2101010(u32 pix)
++{
++ pix = ((pix & 0x00FF0000) >> 14) |
++ ((pix & 0x0000FF00) << 4) |
++ ((pix & 0x000000FF) << 22);
++ return GENMASK(31, 30) /* set alpha bits */ | pix | ((pix >> 8) & 0x00300C03);
++}
++
+ /*
+ * convert_from_xrgb8888 - convert one pixel from xrgb8888 to the desired format
+ * @color: input color, in xrgb8888 format
+@@ -242,6 +250,8 @@ static u32 convert_from_xrgb8888(u32 color, u32 format)
+ return convert_xrgb8888_to_xrgb2101010(color);
+ case DRM_FORMAT_ARGB2101010:
+ return convert_xrgb8888_to_argb2101010(color);
++ case DRM_FORMAT_ABGR2101010:
++ return convert_xrgb8888_to_abgr2101010(color);
+ default:
+ WARN_ONCE(1, "Can't convert to %p4cc\n", &format);
+ return 0;
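
The new converter mirrors the ARGB2101010 one but swaps the red and blue channel
positions; the final (pix >> 8) & 0x00300C03 term replicates each channel's top two
bits into its two new low bits, so an 8-bit 0xFF widens to the full 10-bit 0x3FF. A
channel-at-a-time reference version of the same math (equivalent to the shift-and-mask
form in the patch, just slower and clearer):

#include <stdint.h>
#include <stdio.h>

/* Widen one 8-bit channel to 10 bits by replicating its top two bits. */
static uint32_t widen8to10(uint32_t v8)
{
	return (v8 << 2) | (v8 >> 6);	/* 0x00 -> 0x000, 0xFF -> 0x3FF */
}

static uint32_t xrgb8888_to_abgr2101010(uint32_t pix)
{
	uint32_t r = widen8to10((pix >> 16) & 0xFF);
	uint32_t g = widen8to10((pix >> 8) & 0xFF);
	uint32_t b = widen8to10(pix & 0xFF);

	return (0x3u << 30) | (b << 20) | (g << 10) | r;	/* A = opaque */
}

int main(void)
{
	/* R=0xFF, G=0x80, B=0x01 -> A=0x3, B10=0x004, G10=0x202, R10=0x3FF */
	printf("0x%08x\n", xrgb8888_to_abgr2101010(0x00FF8001));	/* 0xc0480bff */
	return 0;
}
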
+diff --git a/drivers/gpu/drm/mcde/mcde_drv.c b/drivers/gpu/drm/mcde/mcde_drv.c
+index 10c06440c7e73e..f1bb38f4e67349 100644
+--- a/drivers/gpu/drm/mcde/mcde_drv.c
++++ b/drivers/gpu/drm/mcde/mcde_drv.c
+@@ -473,6 +473,7 @@ static const struct of_device_id mcde_of_match[] = {
+ },
+ {},
+ };
++MODULE_DEVICE_TABLE(of, mcde_of_match);
+
+ static struct platform_driver mcde_driver = {
+ .driver = {
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 86735430462fa6..06381c62820975 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -4565,6 +4565,31 @@ static const struct panel_desc yes_optoelectronics_ytc700tlag_05_201c = {
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+
++static const struct drm_display_mode mchp_ac69t88a_mode = {
++ .clock = 25000,
++ .hdisplay = 800,
++ .hsync_start = 800 + 88,
++ .hsync_end = 800 + 88 + 5,
++ .htotal = 800 + 88 + 5 + 40,
++ .vdisplay = 480,
++ .vsync_start = 480 + 23,
++ .vsync_end = 480 + 23 + 5,
++ .vtotal = 480 + 23 + 5 + 1,
++};
++
++static const struct panel_desc mchp_ac69t88a = {
++ .modes = &mchp_ac69t88a_mode,
++ .num_modes = 1,
++ .bpc = 8,
++ .size = {
++ .width = 108,
++ .height = 65,
++ },
++ .bus_flags = DRM_BUS_FLAG_DE_HIGH,
++ .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA,
++ .connector_type = DRM_MODE_CONNECTOR_LVDS,
++};
++
+ static const struct drm_display_mode arm_rtsm_mode[] = {
+ {
+ .clock = 65000,
+@@ -5048,6 +5073,9 @@ static const struct of_device_id platform_of_match[] = {
+ }, {
+ .compatible = "yes-optoelectronics,ytc700tlag-05-201c",
+ .data = &yes_optoelectronics_ytc700tlag_05_201c,
++ }, {
++ .compatible = "microchip,ac69t88a",
++ .data = &mchp_ac69t88a,
+ }, {
+ /* Must be the last entry */
+ .compatible = "panel-dpi",
+diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
+index 1b2d31c4d77caa..ac77d1246b9453 100644
+--- a/drivers/gpu/drm/radeon/r600_cs.c
++++ b/drivers/gpu/drm/radeon/r600_cs.c
+@@ -2104,7 +2104,7 @@ static int r600_packet3_check(struct radeon_cs_parser *p,
+ return -EINVAL;
+ }
+
+- offset = radeon_get_ib_value(p, idx+1) << 8;
++ offset = (u64)radeon_get_ib_value(p, idx+1) << 8;
+ if (offset != track->vgt_strmout_bo_offset[idx_value]) {
+ DRM_ERROR("bad STRMOUT_BASE_UPDATE, bo offset does not match: 0x%llx, 0x%x\n",
+ offset, track->vgt_strmout_bo_offset[idx_value]);
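
The one-character fix here is a classic integer-width bug: radeon_get_ib_value()
yields a 32-bit value, so shifting it left by 8 discards the top 8 bits before the
assignment to the 64-bit offset ever happens; casting first performs the shift in 64
bits. A small demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t val = 0x01000000;	/* plausible dword offset from the IB */

	/* Without the cast the shift happens in 32 bits and bits are lost... */
	uint64_t bad  = (uint64_t)(val << 8);	/* 0x0 */
	/* ...with the cast the full result survives. */
	uint64_t good = (uint64_t)val << 8;	/* 0x100000000 */

	printf("bad=0x%llx good=0x%llx\n",
	       (unsigned long long)bad, (unsigned long long)good);
	return 0;
}
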
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index e97c6c60bc96ef..416590ea0dc3d6 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -803,6 +803,14 @@ int drm_sched_job_init(struct drm_sched_job *job,
+ return -EINVAL;
+ }
+
++ /*
++	 * We don't know for sure how the user has allocated. Thus, zero the
++	 * struct so that disallowed (i.e., too early) use of pointers that
++	 * this function does not set is guaranteed to lead to a NULL pointer
++	 * dereference instead of undefined behavior.
++ */
++ memset(job, 0, sizeof(*job));
++
+ job->entity = entity;
+ job->credits = credits;
+ job->s_fence = drm_sched_fence_alloc(entity, owner);
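
The zeroing added above is a defensive-initialization pattern: the job struct is
typically embedded in a driver-allocated object whose memory contents are unknown, so
clearing it makes any premature use of a not-yet-set pointer fail as a deterministic
NULL dereference rather than undefined behavior on a garbage address. A toy
illustration of the pattern (struct and field names made up for this sketch):

#include <stdio.h>
#include <string.h>

struct job {
	void *entity;
	void *fence;	/* set later by a separate allocation step */
};

/* Zero-first init in the spirit of the patch: whatever the caller's
 * allocation left behind, unset pointers read back as NULL afterwards. */
static void job_init(struct job *job, void *entity)
{
	memset(job, 0, sizeof(*job));
	job->entity = entity;
}

int main(void)
{
	struct job j;

	memset(&j, 0xAB, sizeof(j));	/* simulate stale stack/heap garbage */
	job_init(&j, &j);
	printf("fence is %s\n", j.fence ? "garbage" : "NULL");	/* NULL */
	return 0;
}
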
+diff --git a/drivers/gpu/drm/sti/sti_mixer.c b/drivers/gpu/drm/sti/sti_mixer.c
+index 7e5f14646625b4..06c1b81912f79f 100644
+--- a/drivers/gpu/drm/sti/sti_mixer.c
++++ b/drivers/gpu/drm/sti/sti_mixer.c
+@@ -137,7 +137,7 @@ static void mixer_dbg_crb(struct seq_file *s, int val)
+ }
+ }
+
+-static void mixer_dbg_mxn(struct seq_file *s, void *addr)
++static void mixer_dbg_mxn(struct seq_file *s, void __iomem *addr)
+ {
+ int i;
+
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 00cd081d787327..6ee56cbd3f1bfc 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -254,9 +254,9 @@ void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon)
+ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_SRC_X(source), channel);
+ }
+
++ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_EN, mask);
+ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_CLR, mask);
+ V3D_CORE_WRITE(0, V3D_PCTR_0_OVERFLOW, mask);
+- V3D_CORE_WRITE(0, V3D_V4_PCTR_0_EN, mask);
+
+ v3d->active_perfmon = perfmon;
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 2d7d3e90f3be44..7e0a5ea7ab859a 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1924,7 +1924,7 @@ static int vc4_hdmi_audio_startup(struct device *dev, void *data)
+ }
+
+ if (!vc4_hdmi_audio_can_stream(vc4_hdmi)) {
+- ret = -ENODEV;
++ ret = -ENOTSUPP;
+ goto out_dev_exit;
+ }
+
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 863539e1f7e04b..c389e82463bfdb 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -963,6 +963,17 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_SCLEIRQ);
+
+
++ /* Set AXI panic mode.
++ * VC4 panics when < 2 lines in FIFO.
++ * VC5 panics when less than 1 line in the FIFO.
++ */
++ dispctrl &= ~(SCALER_DISPCTRL_PANIC0_MASK |
++ SCALER_DISPCTRL_PANIC1_MASK |
++ SCALER_DISPCTRL_PANIC2_MASK);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC0);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2);
++
+ /* Set AXI panic mode.
+ * VC4 panics when < 2 lines in FIFO.
+ * VC5 panics when less than 1 line in the FIFO.
+diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+index 81b71903675e0d..7c78496e6213cc 100644
+--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+@@ -186,6 +186,7 @@
+
+ #define VDBOX_CGCTL3F10(base) XE_REG((base) + 0x3f10)
+ #define IECPUNIT_CLKGATE_DIS REG_BIT(22)
++#define RAMDFTUNIT_CLKGATE_DIS REG_BIT(9)
+
+ #define VDBOX_CGCTL3F18(base) XE_REG((base) + 0x3f18)
+ #define ALNUNIT_CLKGATE_DIS REG_BIT(13)
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index bd604b9f08e4fa..5404de2aea5457 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -286,6 +286,9 @@
+ #define GAMTLBVEBOX0_CLKGATE_DIS REG_BIT(16)
+ #define LTCDD_CLKGATE_DIS REG_BIT(10)
+
++#define UNSLCGCTL9454 XE_REG(0x9454)
++#define LSCFE_CLKGATE_DIS REG_BIT(4)
++
+ #define XEHP_SLICE_UNIT_LEVEL_CLKGATE XE_REG_MCR(0x94d4)
+ #define L3_CR2X_CLKGATE_DIS REG_BIT(17)
+ #define L3_CLKGATE_DIS REG_BIT(16)
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index bdb76e834e4c36..5221ee3f12149b 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -6,6 +6,7 @@
+ #include "xe_devcoredump.h"
+ #include "xe_devcoredump_types.h"
+
++#include <linux/ascii85.h>
+ #include <linux/devcoredump.h>
+ #include <generated/utsrelease.h>
+
+@@ -85,9 +86,9 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+
+ p = drm_coredump_printer(&iter);
+
+- drm_printf(&p, "**** Xe Device Coredump ****\n");
+- drm_printf(&p, "kernel: " UTS_RELEASE "\n");
+- drm_printf(&p, "module: " KBUILD_MODNAME "\n");
++ drm_puts(&p, "**** Xe Device Coredump ****\n");
++ drm_puts(&p, "kernel: " UTS_RELEASE "\n");
++ drm_puts(&p, "module: " KBUILD_MODNAME "\n");
+
+ ts = ktime_to_timespec64(ss->snapshot_time);
+ drm_printf(&p, "Snapshot time: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec);
+@@ -96,20 +97,25 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+ drm_printf(&p, "Process: %s\n", ss->process_name);
+ xe_device_snapshot_print(xe, &p);
+
+- drm_printf(&p, "\n**** GuC CT ****\n");
+- xe_guc_ct_snapshot_print(coredump->snapshot.ct, &p);
+- xe_guc_exec_queue_snapshot_print(coredump->snapshot.ge, &p);
++ drm_printf(&p, "\n**** GT #%d ****\n", ss->gt->info.id);
++ drm_printf(&p, "\tTile: %d\n", ss->gt->tile->id);
+
+- drm_printf(&p, "\n**** Job ****\n");
+- xe_sched_job_snapshot_print(coredump->snapshot.job, &p);
++ drm_puts(&p, "\n**** GuC CT ****\n");
++ xe_guc_ct_snapshot_print(ss->ct, &p);
+
+- drm_printf(&p, "\n**** HW Engines ****\n");
++ drm_puts(&p, "\n**** Contexts ****\n");
++ xe_guc_exec_queue_snapshot_print(ss->ge, &p);
++
++ drm_puts(&p, "\n**** Job ****\n");
++ xe_sched_job_snapshot_print(ss->job, &p);
++
++ drm_puts(&p, "\n**** HW Engines ****\n");
+ for (i = 0; i < XE_NUM_HW_ENGINES; i++)
+- if (coredump->snapshot.hwe[i])
+- xe_hw_engine_snapshot_print(coredump->snapshot.hwe[i],
+- &p);
+- drm_printf(&p, "\n**** VM state ****\n");
+- xe_vm_snapshot_print(coredump->snapshot.vm, &p);
++ if (ss->hwe[i])
++ xe_hw_engine_snapshot_print(ss->hwe[i], &p);
++
++ drm_puts(&p, "\n**** VM state ****\n");
++ xe_vm_snapshot_print(ss->vm, &p);
+
+ return count - iter.remain;
+ }
+@@ -141,13 +147,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
+ {
+ struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
+ struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
++ unsigned int fw_ref;
+
+ /* keep going if fw fails as we still want to save the memory and SW data */
+- if (xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL))
++ fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
++ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+ xe_vm_snapshot_capture_delayed(ss->vm);
+ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+- xe_force_wake_put(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
++ xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+
+ /* Calculate devcoredump size */
+ ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
+@@ -220,8 +228,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
+ u32 width_mask = (0x1 << q->width) - 1;
+ const char *process_name = "no process";
+
+- int i;
++ unsigned int fw_ref;
+ bool cookie;
++ int i;
+
+ ss->snapshot_time = ktime_get_real();
+ ss->boot_time = ktime_get_boottime();
+@@ -244,26 +253,25 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
+ }
+
+ /* keep going if fw fails as we still want to save the memory and SW data */
+- if (xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL))
+- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
++ fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+
+- coredump->snapshot.ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
+- coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(q);
+- coredump->snapshot.job = xe_sched_job_snapshot_capture(job);
+- coredump->snapshot.vm = xe_vm_snapshot_capture(q->vm);
++ ss->ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
++ ss->ge = xe_guc_exec_queue_snapshot_capture(q);
++ ss->job = xe_sched_job_snapshot_capture(job);
++ ss->vm = xe_vm_snapshot_capture(q->vm);
+
+ for_each_hw_engine(hwe, q->gt, id) {
+ if (hwe->class != q->hwe->class ||
+ !(BIT(hwe->logical_instance) & adj_logical_mask)) {
+- coredump->snapshot.hwe[id] = NULL;
++ ss->hwe[id] = NULL;
+ continue;
+ }
+- coredump->snapshot.hwe[id] = xe_hw_engine_snapshot_capture(hwe);
++ ss->hwe[id] = xe_hw_engine_snapshot_capture(hwe);
+ }
+
+ queue_work(system_unbound_wq, &ss->work);
+
+- xe_force_wake_put(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
++ xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
+ dma_fence_end_signalling(cookie);
+ }
+
+@@ -310,3 +318,89 @@ int xe_devcoredump_init(struct xe_device *xe)
+ }
+
+ #endif
++
++/**
++ * xe_print_blob_ascii85 - print a BLOB to some useful location in ASCII85
++ *
++ * The output is split into multiple lines because some print targets, e.g.
++ * dmesg, cannot handle arbitrarily long lines. Note also that printing to
++ * dmesg piecemeal is not possible: each separate call to drm_puts() gets a
++ * line-feed automatically appended. Therefore, the entire output line must
++ * be constructed in a local buffer first, then printed in one atomic call.
++ *
++ * There is also a scheduler yield call to prevent the 'task has been stuck for
++ * 120s' kernel hang check feature from firing when printing to a slow target
++ * such as dmesg over a serial port.
++ *
++ * TODO: Add compression prior to the ASCII85 encoding to shrink huge buffers down.
++ *
++ * @p: the printer object to output to
++ * @prefix: optional prefix to add to output string
++ * @blob: the Binary Large OBject to dump out
++ * @offset: offset in bytes to skip from the front of the BLOB, must be a multiple of sizeof(u32)
++ * @size: the size in bytes of the BLOB, must be a multiple of sizeof(u32)
++ */
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++ const void *blob, size_t offset, size_t size)
++{
++ const u32 *blob32 = (const u32 *)blob;
++ char buff[ASCII85_BUFSZ], *line_buff;
++ size_t line_pos = 0;
++
++#define DMESG_MAX_LINE_LEN 800
++#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
++
++ if (size & 3)
++ drm_printf(p, "Size not word aligned: %zu", size);
++ if (offset & 3)
++ drm_printf(p, "Offset not word aligned: %zu", size);
++
++ line_buff = kzalloc(DMESG_MAX_LINE_LEN, GFP_KERNEL);
++	if (!line_buff) {
++		drm_puts(p, "Failed to allocate line buffer\n");
++ return;
++ }
++
++ blob32 += offset / sizeof(*blob32);
++ size /= sizeof(*blob32);
++
++ if (prefix) {
++ strscpy(line_buff, prefix, DMESG_MAX_LINE_LEN - MIN_SPACE - 2);
++ line_pos = strlen(line_buff);
++
++ line_buff[line_pos++] = ':';
++ line_buff[line_pos++] = ' ';
++ }
++
++ while (size--) {
++ u32 val = *(blob32++);
++
++ strscpy(line_buff + line_pos, ascii85_encode(val, buff),
++ DMESG_MAX_LINE_LEN - line_pos);
++ line_pos += strlen(line_buff + line_pos);
++
++ if ((line_pos + MIN_SPACE) >= DMESG_MAX_LINE_LEN) {
++ line_buff[line_pos++] = '\n';
++ line_buff[line_pos++] = 0;
++
++ drm_puts(p, line_buff);
++
++ line_pos = 0;
++
++ /* Prevent 'stuck thread' time out errors */
++ cond_resched();
++ }
++ }
++
++ if (line_pos) {
++ line_buff[line_pos++] = '\n';
++ line_buff[line_pos++] = 0;
++
++ drm_puts(p, line_buff);
++ }
++
++ kfree(line_buff);
++
++#undef MIN_SPACE
++#undef DMESG_MAX_LINE_LEN
++}
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.h b/drivers/gpu/drm/xe/xe_devcoredump.h
+index e2fa65ce093226..a4eebc285fc837 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.h
++++ b/drivers/gpu/drm/xe/xe_devcoredump.h
+@@ -6,6 +6,9 @@
+ #ifndef _XE_DEVCOREDUMP_H_
+ #define _XE_DEVCOREDUMP_H_
+
++#include <linux/types.h>
++
++struct drm_printer;
+ struct xe_device;
+ struct xe_sched_job;
+
+@@ -23,4 +26,7 @@ static inline int xe_devcoredump_init(struct xe_device *xe)
+ }
+ #endif
+
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++ const void *blob, size_t offset, size_t size);
++
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump_types.h b/drivers/gpu/drm/xe/xe_devcoredump_types.h
+index 440d05d77a5af8..3cc2f095fdfbd1 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump_types.h
++++ b/drivers/gpu/drm/xe/xe_devcoredump_types.h
+@@ -37,7 +37,8 @@ struct xe_devcoredump_snapshot {
+ /* GuC snapshots */
+ /** @ct: GuC CT snapshot */
+ struct xe_guc_ct_snapshot *ct;
+- /** @ge: Guc Engine snapshot */
++
++ /** @ge: GuC Submission Engine snapshot */
+ struct xe_guc_submit_exec_queue_snapshot *ge;
+
+ /** @hwe: HW Engine snapshot array */
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index a1987b554a8d2a..bb85208cf1a94c 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -919,6 +919,7 @@ void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p)
+
+ for_each_gt(gt, xe, id) {
+ drm_printf(p, "GT id: %u\n", id);
++ drm_printf(p, "\tTile: %u\n", gt->tile->id);
+ drm_printf(p, "\tType: %s\n",
+ gt->info.type == XE_GT_TYPE_MAIN ? "main" : "media");
+ drm_printf(p, "\tIP ver: %u.%u.%u\n",
+diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
+index a2577672f4e3e6..1608a55edc846e 100644
+--- a/drivers/gpu/drm/xe/xe_force_wake.h
++++ b/drivers/gpu/drm/xe/xe_force_wake.h
+@@ -46,4 +46,20 @@ xe_force_wake_assert_held(struct xe_force_wake *fw,
+ xe_gt_assert(fw->gt, fw->awake_domains & domain);
+ }
+
++/**
++ * xe_force_wake_ref_has_domain - verifies if the domain is in fw_ref
++ * @fw_ref : the force_wake reference
++ * @domain : forcewake domain to verify
++ *
++ * This function confirms whether the @fw_ref includes a reference to the
++ * specified @domain.
++ *
++ * Return: true if domain is refcounted.
++ */
++static inline bool
++xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains domain)
++{
++ return fw_ref & domain;
++}
++
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_gt_topology.c b/drivers/gpu/drm/xe/xe_gt_topology.c
+index 0662f71c6ede78..3e113422b88de2 100644
+--- a/drivers/gpu/drm/xe/xe_gt_topology.c
++++ b/drivers/gpu/drm/xe/xe_gt_topology.c
+@@ -5,6 +5,7 @@
+
+ #include "xe_gt_topology.h"
+
++#include <generated/xe_wa_oob.h>
+ #include <linux/bitmap.h>
+ #include <linux/compiler.h>
+
+@@ -12,6 +13,7 @@
+ #include "xe_assert.h"
+ #include "xe_gt.h"
+ #include "xe_mmio.h"
++#include "xe_wa.h"
+
+ static void
+ load_dss_mask(struct xe_gt *gt, xe_dss_mask_t mask, int numregs, ...)
+@@ -129,6 +131,18 @@ load_l3_bank_mask(struct xe_gt *gt, xe_l3_bank_mask_t l3_bank_mask)
+ struct xe_device *xe = gt_to_xe(gt);
+ u32 fuse3 = xe_mmio_read32(gt, MIRROR_FUSE3);
+
++ /*
++ * PTL platforms with media version 30.00 do not provide proper values
++ * for the media GT's L3 bank registers. Skip the readout since we
++ * don't have any way to obtain real values.
++ *
++ * This may get re-described as an official workaround in the future,
++ * but there's no tracking number assigned yet so we use a custom
++ * OOB workaround descriptor.
++ */
++ if (XE_WA(gt, no_media_l3))
++ return;
++
+ if (GRAPHICS_VER(xe) >= 20) {
+ xe_l3_bank_mask_t per_node = {};
+ u32 meml3_en = REG_FIELD_GET(XE2_NODE_ENABLE_MASK, fuse3);
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
+index 9c505d3517cd1a..cd6a5f09d631e4 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.c
++++ b/drivers/gpu/drm/xe/xe_guc_ct.c
+@@ -906,6 +906,24 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+ }
+ }
+
++ /*
++	 * Occasionally the G2H worker is seen to start running more than a second after
++	 * being queued and activated by the Linux workqueue subsystem, which leads to a
++	 * G2H timeout error. The root cause is the scheduling latency of the Lunar Lake
++	 * hybrid CPU; the issue disappears if the Lunar Lake Atom cores are disabled in
++	 * the BIOS, which is beyond the xe KMD's control.
++	 *
++	 * TODO: Drop this change once the workqueue scheduling delay is fixed on LNL hybrid CPUs.
++ */
++ if (!ret) {
++ flush_work(&ct->g2h_worker);
++ if (g2h_fence.done) {
++ xe_gt_warn(gt, "G2H fence %u, action %04x, done\n",
++ g2h_fence.seqno, action[0]);
++ ret = 1;
++ }
++ }
++
+ /*
+ * Ensure we serialize with completion side to prevent UAF with fence going out of scope on
+ * the stack, since we have no clue if it will fire after the timeout before we can erase
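
The workaround follows a general "flush and re-check" pattern for racing a timeout
against a late completion: after the wait times out, force the queued handler to run to
completion and look at the done flag once more before declaring failure. A toy pthread
analogue of the pattern, not the kernel workqueue API (build with -pthread):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static volatile bool done;

static void *slow_handler(void *arg)
{
	(void)arg;
	sleep(2);	/* handler was scheduled late */
	done = true;
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, slow_handler, NULL);

	sleep(1);	/* our "wait with timeout" expires first */

	if (!done) {
		/* Analogue of flush_work(): wait for the queued handler
		 * to finish, then re-check before reporting a timeout. */
		pthread_join(worker, NULL);
		if (done) {
			puts("late completion rescued after flush");
			return 0;
		}
		puts("real timeout");
		return 1;
	}
	puts("completed in time");
	pthread_join(worker, NULL);
	return 0;
}
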
+diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
+index a37ee341942844..be47780ec2a7e7 100644
+--- a/drivers/gpu/drm/xe/xe_guc_log.c
++++ b/drivers/gpu/drm/xe/xe_guc_log.c
+@@ -6,9 +6,12 @@
+ #include "xe_guc_log.h"
+
+ #include <drm/drm_managed.h>
++#include <linux/vmalloc.h>
+
+ #include "xe_bo.h"
++#include "xe_devcoredump.h"
+ #include "xe_gt.h"
++#include "xe_gt_printk.h"
+ #include "xe_map.h"
+ #include "xe_module.h"
+
+@@ -49,32 +52,35 @@ static size_t guc_log_size(void)
+ CAPTURE_BUFFER_SIZE;
+ }
+
++/**
++ * xe_guc_log_print - dump a copy of the GuC log to some useful location
++ * @log: GuC log structure
++ * @p: the printer object to output to
++ */
+ void xe_guc_log_print(struct xe_guc_log *log, struct drm_printer *p)
+ {
+ struct xe_device *xe = log_to_xe(log);
+ size_t size;
+- int i, j;
++ void *copy;
+
+- xe_assert(xe, log->bo);
++ if (!log->bo) {
++ drm_puts(p, "GuC log buffer not allocated");
++ return;
++ }
+
+ size = log->bo->size;
+
+-#define DW_PER_READ 128
+- xe_assert(xe, !(size % (DW_PER_READ * sizeof(u32))));
+- for (i = 0; i < size / sizeof(u32); i += DW_PER_READ) {
+- u32 read[DW_PER_READ];
+-
+- xe_map_memcpy_from(xe, read, &log->bo->vmap, i * sizeof(u32),
+- DW_PER_READ * sizeof(u32));
+-#define DW_PER_PRINT 4
+- for (j = 0; j < DW_PER_READ / DW_PER_PRINT; ++j) {
+- u32 *print = read + j * DW_PER_PRINT;
+-
+- drm_printf(p, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+- *(print + 0), *(print + 1),
+- *(print + 2), *(print + 3));
+- }
++ copy = vmalloc(size);
++ if (!copy) {
++ drm_printf(p, "Failed to allocate %zu", size);
++ return;
+ }
++
++ xe_map_memcpy_from(xe, copy, &log->bo->vmap, 0, size);
++
++ xe_print_blob_ascii85(p, "Log data", copy, 0, size);
++
++ vfree(copy);
+ }
+
+ int xe_guc_log_init(struct xe_guc_log *log)
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 2927745d689549..fed23304e4da58 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -2193,7 +2193,7 @@ xe_guc_exec_queue_snapshot_print(struct xe_guc_submit_exec_queue_snapshot *snaps
+ if (!snapshot)
+ return;
+
+- drm_printf(p, "\nGuC ID: %d\n", snapshot->guc.id);
++ drm_printf(p, "GuC ID: %d\n", snapshot->guc.id);
+ drm_printf(p, "\tName: %s\n", snapshot->name);
+ drm_printf(p, "\tClass: %d\n", snapshot->class);
+ drm_printf(p, "\tLogical mask: 0x%x\n", snapshot->logical_mask);
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
+index c9c3beb3ce8d06..547919e8ce9e45 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine.c
+@@ -1053,7 +1053,6 @@ void xe_hw_engine_snapshot_print(struct xe_hw_engine_snapshot *snapshot,
+ if (snapshot->hwe->class == XE_ENGINE_CLASS_COMPUTE)
+ drm_printf(p, "\tRCU_MODE: 0x%08x\n",
+ snapshot->reg.rcu_mode);
+- drm_puts(p, "\n");
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index 5e962e72c97ea6..025d649434673d 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -383,10 +383,12 @@ static const struct pci_device_id pciidlist[] = {
+ XE_ADLS_IDS(INTEL_VGA_DEVICE, &adl_s_desc),
+ XE_ADLP_IDS(INTEL_VGA_DEVICE, &adl_p_desc),
+ XE_ADLN_IDS(INTEL_VGA_DEVICE, &adl_n_desc),
++ XE_RPLU_IDS(INTEL_VGA_DEVICE, &adl_p_desc),
+ XE_RPLP_IDS(INTEL_VGA_DEVICE, &adl_p_desc),
+ XE_RPLS_IDS(INTEL_VGA_DEVICE, &adl_s_desc),
+ XE_DG1_IDS(INTEL_VGA_DEVICE, &dg1_desc),
+ XE_ATS_M_IDS(INTEL_VGA_DEVICE, &ats_m_desc),
++ XE_ARL_IDS(INTEL_VGA_DEVICE, &mtl_desc),
+ XE_DG2_IDS(INTEL_VGA_DEVICE, &dg2_desc),
+ XE_MTL_IDS(INTEL_VGA_DEVICE, &mtl_desc),
+ XE_LNL_IDS(INTEL_VGA_DEVICE, &lnl_desc),
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 848da8e68c7a83..1c96375bd7df75 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -9,6 +9,7 @@
+ #include <linux/sched/clock.h>
+
+ #include <drm/ttm/ttm_placement.h>
++#include <generated/xe_wa_oob.h>
+ #include <uapi/drm/xe_drm.h>
+
+ #include "regs/xe_engine_regs.h"
+@@ -23,6 +24,7 @@
+ #include "xe_macros.h"
+ #include "xe_mmio.h"
+ #include "xe_ttm_vram_mgr.h"
++#include "xe_wa.h"
+
+ static const u16 xe_to_user_engine_class[] = {
+ [XE_ENGINE_CLASS_RENDER] = DRM_XE_ENGINE_CLASS_RENDER,
+@@ -458,12 +460,23 @@ static int query_hwconfig(struct xe_device *xe,
+
+ static size_t calc_topo_query_size(struct xe_device *xe)
+ {
+- return xe->info.gt_count *
+- (4 * sizeof(struct drm_xe_query_topology_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.g_dss_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.c_dss_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.l3_bank_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.eu_mask_per_dss));
++ struct xe_gt *gt;
++ size_t query_size = 0;
++ int id;
++
++ for_each_gt(gt, xe, id) {
++ query_size += 3 * sizeof(struct drm_xe_query_topology_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.g_dss_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.c_dss_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.eu_mask_per_dss);
++
++ /* L3bank mask may not be available for some GTs */
++ if (!XE_WA(gt, no_media_l3))
++ query_size += sizeof(struct drm_xe_query_topology_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.l3_bank_mask);
++ }
++
++ return query_size;
+ }
+
+ static int copy_mask(void __user **ptr,
+@@ -516,11 +529,18 @@ static int query_gt_topology(struct xe_device *xe,
+ if (err)
+ return err;
+
+- topo.type = DRM_XE_TOPO_L3_BANK;
+- err = copy_mask(&query_ptr, &topo, gt->fuse_topo.l3_bank_mask,
+- sizeof(gt->fuse_topo.l3_bank_mask));
+- if (err)
+- return err;
++ /*
++ * If the kernel doesn't have a way to obtain a correct L3bank
++ * mask, then it's better to omit L3 from the query rather than
++ * reporting bogus or zeroed information to userspace.
++ */
++ if (!XE_WA(gt, no_media_l3)) {
++ topo.type = DRM_XE_TOPO_L3_BANK;
++ err = copy_mask(&query_ptr, &topo, gt->fuse_topo.l3_bank_mask,
++ sizeof(gt->fuse_topo.l3_bank_mask));
++ if (err)
++ return err;
++ }
+
+ topo.type = gt->fuse_topo.eu_type == XE_GT_EU_TYPE_SIMD16 ?
+ DRM_XE_TOPO_SIMD16_EU_PER_DSS :
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 353936a0f877de..37e592b2bf062a 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -251,6 +251,34 @@ static const struct xe_rtp_entry_sr gt_was[] = {
+ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
+ },
+
++ /* Xe3_LPG */
++
++ { XE_RTP_NAME("14021871409"),
++ XE_RTP_RULES(GRAPHICS_VERSION(3000), GRAPHICS_STEP(A0, B0)),
++ XE_RTP_ACTIONS(SET(UNSLCGCTL9454, LSCFE_CLKGATE_DIS))
++ },
++
++ /* Xe3_LPM */
++
++ { XE_RTP_NAME("16021867713"),
++ XE_RTP_RULES(MEDIA_VERSION(3000),
++ ENGINE_CLASS(VIDEO_DECODE)),
++ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F1C(0), MFXPIPE_CLKGATE_DIS)),
++ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
++ },
++ { XE_RTP_NAME("16021865536"),
++ XE_RTP_RULES(MEDIA_VERSION(3000),
++ ENGINE_CLASS(VIDEO_DECODE)),
++ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F10(0), IECPUNIT_CLKGATE_DIS)),
++ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
++ },
++ { XE_RTP_NAME("14021486841"),
++ XE_RTP_RULES(MEDIA_VERSION(3000), MEDIA_STEP(A0, B0),
++ ENGINE_CLASS(VIDEO_DECODE)),
++ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F10(0), RAMDFTUNIT_CLKGATE_DIS)),
++ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
++ },
++
+ {}
+ };
+
+@@ -567,6 +595,13 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ XE_RTP_ACTION_FLAG(ENGINE_BASE)))
+ },
+
++ /* Xe3_LPG */
++
++ { XE_RTP_NAME("14021402888"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3001), FUNC(xe_rtp_match_first_render_or_compute)),
++ XE_RTP_ACTIONS(SET(HALF_SLICE_CHICKEN7, CLEAR_OPTIMIZATION_DISABLE))
++ },
++
+ {}
+ };
+
+@@ -742,6 +777,18 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
+ XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX))
+ },
+
++ /* Xe3_LPG */
++ { XE_RTP_NAME("14021490052"),
++ XE_RTP_RULES(GRAPHICS_VERSION(3000), GRAPHICS_STEP(A0, B0),
++ ENGINE_CLASS(RENDER)),
++ XE_RTP_ACTIONS(SET(FF_MODE,
++ DIS_MESH_PARTIAL_AUTOSTRIP |
++ DIS_MESH_AUTOSTRIP),
++ SET(VFLSKPD,
++ DIS_PARTIAL_AUTOSTRIP |
++ DIS_AUTOSTRIP))
++ },
++
+ {}
+ };
+
+diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
+index 920ca506014661..264d6e116499ce 100644
+--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
++++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
+@@ -33,7 +33,9 @@
+ GRAPHICS_VERSION(2004)
+ 22019338487 MEDIA_VERSION(2000)
+ GRAPHICS_VERSION(2001)
++ MEDIA_VERSION(3000), MEDIA_STEP(A0, B0)
+ 22019338487_display PLATFORM(LUNARLAKE)
+ 16023588340 GRAPHICS_VERSION(2001)
+ 14019789679 GRAPHICS_VERSION(1255)
+ GRAPHICS_VERSION_RANGE(1270, 2004)
++no_media_l3 MEDIA_VERSION(3000)
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 582fd234eec789..935ccc38d12958 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2674,9 +2674,10 @@ static bool hid_check_device_match(struct hid_device *hdev,
+ /*
+ * hid-generic implements .match(), so we must be dealing with a
+ * different HID driver here, and can simply check if
+- * hid_ignore_special_drivers is set or not.
++ * hid_ignore_special_drivers or HID_QUIRK_IGNORE_SPECIAL_DRIVER
++ * are set or not.
+ */
+- return !hid_ignore_special_drivers;
++ return !hid_ignore_special_drivers && !(hdev->quirks & HID_QUIRK_IGNORE_SPECIAL_DRIVER);
+ }
+
+ static int __hid_device_probe(struct hid_device *hdev, struct hid_driver *hdrv)
+diff --git a/drivers/hid/hid-generic.c b/drivers/hid/hid-generic.c
+index d2439399fb357a..9e04c6d0fcc874 100644
+--- a/drivers/hid/hid-generic.c
++++ b/drivers/hid/hid-generic.c
+@@ -40,6 +40,9 @@ static bool hid_generic_match(struct hid_device *hdev,
+ if (ignore_special_driver)
+ return true;
+
++ if (hdev->quirks & HID_QUIRK_IGNORE_SPECIAL_DRIVER)
++ return true;
++
+ if (hdev->quirks & HID_QUIRK_HAVE_SPECIAL_DRIVER)
+ return false;
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 92cff3f2658cf5..0f23be98c56e22 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -94,6 +94,7 @@
+ #define USB_DEVICE_ID_APPLE_MAGICMOUSE2 0x0269
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD 0x030e
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 0x0265
++#define USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC 0x0324
+ #define USB_DEVICE_ID_APPLE_FOUNTAIN_ANSI 0x020e
+ #define USB_DEVICE_ID_APPLE_FOUNTAIN_ISO 0x020f
+ #define USB_DEVICE_ID_APPLE_GEYSER_ANSI 0x0214
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index 8a73b59e0827b9..ec110dea87726d 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -227,7 +227,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ touch_minor = tdata[4];
+ state = tdata[7] & TOUCH_STATE_MASK;
+ down = state != TOUCH_STATE_NONE;
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ id = tdata[8] & 0xf;
+ x = (tdata[1] << 27 | tdata[0] << 19) >> 19;
+ y = -((tdata[3] << 30 | tdata[2] << 22 | tdata[1] << 14) >> 19);
+@@ -259,8 +261,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ /* If requested, emulate a scroll wheel by detecting small
+ * vertical touch motions.
+ */
+- if (emulate_scroll_wheel && (input->id.product !=
+- USB_DEVICE_ID_APPLE_MAGICTRACKPAD2)) {
++ if (emulate_scroll_wheel &&
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ unsigned long now = jiffies;
+ int step_x = msc->touches[id].scroll_x - x;
+ int step_y = msc->touches[id].scroll_y - y;
+@@ -359,7 +362,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ input_report_abs(input, ABS_MT_POSITION_X, x);
+ input_report_abs(input, ABS_MT_POSITION_Y, y);
+
+- if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2)
++ if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC)
+ input_report_abs(input, ABS_MT_PRESSURE, pressure);
+
+ if (report_undeciphered) {
+@@ -367,7 +372,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ input->id.product == USB_DEVICE_ID_APPLE_MAGICMOUSE2)
+ input_event(input, EV_MSC, MSC_RAW, tdata[7]);
+ else if (input->id.product !=
+- USB_DEVICE_ID_APPLE_MAGICTRACKPAD2)
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ input->id.product !=
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC)
+ input_event(input, EV_MSC, MSC_RAW, tdata[8]);
+ }
+ }
+@@ -493,7 +500,9 @@ static int magicmouse_raw_event(struct hid_device *hdev,
+ magicmouse_emit_buttons(msc, clicks & 3);
+ input_report_rel(input, REL_X, x);
+ input_report_rel(input, REL_Y, y);
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ input_mt_sync_frame(input);
+ input_report_key(input, BTN_MOUSE, clicks & 1);
+ } else { /* USB_DEVICE_ID_APPLE_MAGICTRACKPAD */
+@@ -545,7 +554,9 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ __set_bit(REL_WHEEL_HI_RES, input->relbit);
+ __set_bit(REL_HWHEEL_HI_RES, input->relbit);
+ }
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ /* If the trackpad has been connected to a Mac, the name is
+ * automatically personalized, e.g., "José Expósito's Trackpad".
+ * When connected through Bluetooth, the personalized name is
+@@ -621,7 +632,9 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ MOUSE_RES_X);
+ input_abs_set_res(input, ABS_MT_POSITION_Y,
+ MOUSE_RES_Y);
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ input_set_abs_params(input, ABS_MT_PRESSURE, 0, 253, 0, 0);
+ input_set_abs_params(input, ABS_PRESSURE, 0, 253, 0, 0);
+ input_set_abs_params(input, ABS_MT_ORIENTATION, -3, 4, 0, 0);
+@@ -660,7 +673,8 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ input_set_events_per_packet(input, 60);
+
+ if (report_undeciphered &&
+- input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ __set_bit(EV_MSC, input->evbit);
+ __set_bit(MSC_RAW, input->mscbit);
+ }
+@@ -685,7 +699,9 @@ static int magicmouse_input_mapping(struct hid_device *hdev,
+
+ /* Magic Trackpad does not give relative data after switching to MT */
+ if ((hi->input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD ||
+- hi->input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) &&
++ hi->input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ hi->input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
+ field->flags & HID_MAIN_ITEM_RELATIVE)
+ return -1;
+
+@@ -721,7 +737,8 @@ static int magicmouse_enable_multitouch(struct hid_device *hdev)
+ int ret;
+ int feature_size;
+
+- if (hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ if (hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ if (hdev->vendor == BT_VENDOR_ID_APPLE) {
+ feature_size = sizeof(feature_mt_trackpad2_bt);
+ feature = feature_mt_trackpad2_bt;
+@@ -766,7 +783,8 @@ static int magicmouse_fetch_battery(struct hid_device *hdev)
+
+ if (!hdev->battery || hdev->vendor != USB_VENDOR_ID_APPLE ||
+ (hdev->product != USB_DEVICE_ID_APPLE_MAGICMOUSE2 &&
+- hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2))
++ hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC))
+ return -1;
+
+ report_enum = &hdev->report_enum[hdev->battery_report_type];
+@@ -835,7 +853,9 @@ static int magicmouse_probe(struct hid_device *hdev,
+
+ if (id->vendor == USB_VENDOR_ID_APPLE &&
+ (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+- (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 && hdev->type != HID_TYPE_USBMOUSE)))
++ ((id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
++ hdev->type != HID_TYPE_USBMOUSE)))
+ return 0;
+
+ if (!msc->input) {
+@@ -850,7 +870,8 @@ static int magicmouse_probe(struct hid_device *hdev,
+ else if (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2)
+ report = hid_register_report(hdev, HID_INPUT_REPORT,
+ MOUSE2_REPORT_ID, 0);
+- else if (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ else if (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ if (id->vendor == BT_VENDOR_ID_APPLE)
+ report = hid_register_report(hdev, HID_INPUT_REPORT,
+ TRACKPAD2_BT_REPORT_ID, 0);
+@@ -920,7 +941,8 @@ static const __u8 *magicmouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ */
+ if (hdev->vendor == USB_VENDOR_ID_APPLE &&
+ (hdev->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+- hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) &&
++ hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
+ *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) {
+ hid_info(hdev,
+ "fixing up magicmouse battery report descriptor\n");
+@@ -951,6 +973,10 @@ static const struct hid_device_id magic_mice[] = {
+ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2), .driver_data = 0 },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE,
+ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2), .driver_data = 0 },
++ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE,
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC), .driver_data = 0 },
++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE,
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC), .driver_data = 0 },
+ { }
+ };
+ MODULE_DEVICE_TABLE(hid, magic_mice);
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 43664a24176fca..4e87380d3edd6b 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -414,7 +414,19 @@ static int i2c_hid_set_power(struct i2c_hid *ihid, int power_state)
+
+ i2c_hid_dbg(ihid, "%s\n", __func__);
+
++ /*
++ * Some STM-based devices need 400µs after a rising clock edge to wake
++ * from deep sleep, in which case the first request will fail due to
++ * the address not being acknowledged. Try after a short sleep to see
++ * if the device came alive on the bus. Certain Weida Tech devices also
++ * need this.
++ */
+ ret = i2c_hid_set_power_command(ihid, power_state);
++ if (ret && power_state == I2C_HID_PWR_ON) {
++ usleep_range(400, 500);
++ ret = i2c_hid_set_power_command(ihid, I2C_HID_PWR_ON);
++ }
++
+ if (ret)
+ dev_err(&ihid->client->dev,
+ "failed to change power setting.\n");
+@@ -976,14 +988,6 @@ static int i2c_hid_core_resume(struct i2c_hid *ihid)
+
+ enable_irq(client->irq);
+
+- /* Make sure the device is awake on the bus */
+- ret = i2c_hid_probe_address(ihid);
+- if (ret < 0) {
+- dev_err(&client->dev, "nothing at address after resume: %d\n",
+- ret);
+- return -ENXIO;
+- }
+-
+ /* On Goodix 27c6:0d42 wait extra time before device wakeup.
+ * It's not clear why but if we send wakeup too early, the device will
+ * never trigger input interrupts.
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 2bc45b24075c3f..9843b52bd017a0 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2241,7 +2241,8 @@ static void wacom_update_name(struct wacom *wacom, const char *suffix)
+ if (hid_is_usb(wacom->hdev)) {
+ struct usb_interface *intf = to_usb_interface(wacom->hdev->dev.parent);
+ struct usb_device *dev = interface_to_usbdev(intf);
+- product_name = dev->product;
++ if (dev->product)
++ product_name = dev->product;
+ }
+
+ if (wacom->hdev->bus == BUS_I2C) {
+diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
+index 096f1daa8f2bcf..1218a3b449a801 100644
+--- a/drivers/hwmon/nct6775-platform.c
++++ b/drivers/hwmon/nct6775-platform.c
+@@ -1350,6 +1350,8 @@ static const char * const asus_msi_boards[] = {
+ "Pro H610M-CT D4",
+ "Pro H610T D4",
+ "Pro Q670M-C",
++ "Pro WS 600M-CL",
++ "Pro WS 665-ACE",
+ "Pro WS W680-ACE",
+ "Pro WS W680-ACE IPMI",
+ "Pro WS W790-ACE",
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index 6b3ba7e5723aa1..2254abda5c46c9 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -160,6 +160,7 @@ config I2C_I801
+ Meteor Lake (SOC and PCH)
+ Birch Stream (SOC)
+ Arrow Lake (SOC)
++ Panther Lake (SOC)
+
+ This driver can also be built as a module. If so, the module
+ will be called i2c-i801.
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 299fe9d3afab0a..75dab01d43a750 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -81,6 +81,8 @@
+ * Meteor Lake PCH-S (PCH) 0x7f23 32 hard yes yes yes
+ * Birch Stream (SOC) 0x5796 32 hard yes yes yes
+ * Arrow Lake-H (SOC) 0x7722 32 hard yes yes yes
++ * Panther Lake-H (SOC) 0xe322 32 hard yes yes yes
++ * Panther Lake-P (SOC) 0xe422 32 hard yes yes yes
+ *
+ * Features supported by this driver:
+ * Software PEC no
+@@ -261,6 +263,8 @@
+ #define PCI_DEVICE_ID_INTEL_CANNONLAKE_H_SMBUS 0xa323
+ #define PCI_DEVICE_ID_INTEL_COMETLAKE_V_SMBUS 0xa3a3
+ #define PCI_DEVICE_ID_INTEL_METEOR_LAKE_SOC_S_SMBUS 0xae22
++#define PCI_DEVICE_ID_INTEL_PANTHER_LAKE_H_SMBUS 0xe322
++#define PCI_DEVICE_ID_INTEL_PANTHER_LAKE_P_SMBUS 0xe422
+
+ struct i801_mux_config {
+ char *gpio_chip;
+@@ -1055,6 +1059,8 @@ static const struct pci_device_id i801_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
++ { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
++ { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { 0, }
+ };
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index da83c49223b33e..42310c9a00c2d1 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -282,7 +282,8 @@ static int i3c_device_uevent(const struct device *dev, struct kobj_uevent_env *e
+ struct i3c_device_info devinfo;
+ u16 manuf, part, ext;
+
+- i3c_device_get_info(i3cdev, &devinfo);
++ if (i3cdev->desc)
++ devinfo = i3cdev->desc->info;
+ manuf = I3C_PID_MANUF_ID(devinfo.pid);
+ part = I3C_PID_PART_ID(devinfo.pid);
+ ext = I3C_PID_EXTRA_INFO(devinfo.pid);
+@@ -345,10 +346,10 @@ const struct bus_type i3c_bus_type = {
+ EXPORT_SYMBOL_GPL(i3c_bus_type);
+
+ static enum i3c_addr_slot_status
+-i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
++i3c_bus_get_addr_slot_status_mask(struct i3c_bus *bus, u16 addr, u32 mask)
+ {
+ unsigned long status;
+- int bitpos = addr * 2;
++ int bitpos = addr * I3C_ADDR_SLOT_STATUS_BITS;
+
+ if (addr > I2C_MAX_ADDR)
+ return I3C_ADDR_SLOT_RSVD;
+@@ -356,22 +357,33 @@ i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
+ status = bus->addrslots[bitpos / BITS_PER_LONG];
+ status >>= bitpos % BITS_PER_LONG;
+
+- return status & I3C_ADDR_SLOT_STATUS_MASK;
++ return status & mask;
+ }
+
+-static void i3c_bus_set_addr_slot_status(struct i3c_bus *bus, u16 addr,
+- enum i3c_addr_slot_status status)
++static enum i3c_addr_slot_status
++i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
++{
++ return i3c_bus_get_addr_slot_status_mask(bus, addr, I3C_ADDR_SLOT_STATUS_MASK);
++}
++
++static void i3c_bus_set_addr_slot_status_mask(struct i3c_bus *bus, u16 addr,
++ enum i3c_addr_slot_status status, u32 mask)
+ {
+- int bitpos = addr * 2;
++ int bitpos = addr * I3C_ADDR_SLOT_STATUS_BITS;
+ unsigned long *ptr;
+
+ if (addr > I2C_MAX_ADDR)
+ return;
+
+ ptr = bus->addrslots + (bitpos / BITS_PER_LONG);
+- *ptr &= ~((unsigned long)I3C_ADDR_SLOT_STATUS_MASK <<
+- (bitpos % BITS_PER_LONG));
+- *ptr |= (unsigned long)status << (bitpos % BITS_PER_LONG);
++ *ptr &= ~((unsigned long)mask << (bitpos % BITS_PER_LONG));
++ *ptr |= ((unsigned long)status & mask) << (bitpos % BITS_PER_LONG);
++}
++
++static void i3c_bus_set_addr_slot_status(struct i3c_bus *bus, u16 addr,
++ enum i3c_addr_slot_status status)
++{
++ i3c_bus_set_addr_slot_status_mask(bus, addr, status, I3C_ADDR_SLOT_STATUS_MASK);
+ }
+
+ static bool i3c_bus_dev_addr_is_avail(struct i3c_bus *bus, u8 addr)
+@@ -383,13 +395,44 @@ static bool i3c_bus_dev_addr_is_avail(struct i3c_bus *bus, u8 addr)
+ return status == I3C_ADDR_SLOT_FREE;
+ }
+
++/*
++ * ┌────┬─────────────┬───┬─────────┬───┐
++ * │S/Sr│ 7'h7E RnW=0 │ACK│ ENTDAA │ T ├────┐
++ * └────┴─────────────┴───┴─────────┴───┘ │
++ * ┌─────────────────────────────────────────┘
++ * │ ┌──┬─────────────┬───┬─────────────────┬────────────────┬───┬─────────┐
++ * └─►│Sr│7'h7E RnW=1 │ACK│48bit UID BCR DCR│Assign 7bit Addr│PAR│ ACK/NACK│
++ * └──┴─────────────┴───┴─────────────────┴────────────────┴───┴─────────┘
++ * Some master controllers (such as HCI) need to prepare the entire above transaction before
++ * sending it out to the I3C bus. This means that a 7-bit dynamic address needs to be allocated
++ * before knowing the target device's UID information.
++ *
++ * However, some I3C targets may request specific addresses (called as "init_dyn_addr"), which is
++ * typically specified by the DT-'s assigned-address property. Lower addresses having higher IBI
++ * priority. If it is available, i3c_bus_get_free_addr() preferably return a free address that is
++ * not in the list of desired addresses (called as "init_dyn_addr"). This allows the device with
++ * the "init_dyn_addr" to switch to its "init_dyn_addr" when it hot-joins the I3C bus. Otherwise,
++ * if the "init_dyn_addr" is already in use by another I3C device, the target device will not be
++ * able to switch to its desired address.
++ *
++ * If the previous step fails, fallback returning one of the remaining unassigned address,
++ * regardless of its state in the desired list.
++ */
+ static int i3c_bus_get_free_addr(struct i3c_bus *bus, u8 start_addr)
+ {
+ enum i3c_addr_slot_status status;
+ u8 addr;
+
+ for (addr = start_addr; addr < I3C_MAX_ADDR; addr++) {
+- status = i3c_bus_get_addr_slot_status(bus, addr);
++ status = i3c_bus_get_addr_slot_status_mask(bus, addr,
++ I3C_ADDR_SLOT_EXT_STATUS_MASK);
++ if (status == I3C_ADDR_SLOT_FREE)
++ return addr;
++ }
++
++ for (addr = start_addr; addr < I3C_MAX_ADDR; addr++) {
++ status = i3c_bus_get_addr_slot_status_mask(bus, addr,
++ I3C_ADDR_SLOT_STATUS_MASK);
+ if (status == I3C_ADDR_SLOT_FREE)
+ return addr;
+ }
+@@ -1506,16 +1549,9 @@ static int i3c_master_reattach_i3c_dev(struct i3c_dev_desc *dev,
+ u8 old_dyn_addr)
+ {
+ struct i3c_master_controller *master = i3c_dev_get_master(dev);
+- enum i3c_addr_slot_status status;
+ int ret;
+
+- if (dev->info.dyn_addr != old_dyn_addr &&
+- (!dev->boardinfo ||
+- dev->info.dyn_addr != dev->boardinfo->init_dyn_addr)) {
+- status = i3c_bus_get_addr_slot_status(&master->bus,
+- dev->info.dyn_addr);
+- if (status != I3C_ADDR_SLOT_FREE)
+- return -EBUSY;
++ if (dev->info.dyn_addr != old_dyn_addr) {
+ i3c_bus_set_addr_slot_status(&master->bus,
+ dev->info.dyn_addr,
+ I3C_ADDR_SLOT_I3C_DEV);
+@@ -1918,9 +1954,11 @@ static int i3c_master_bus_init(struct i3c_master_controller *master)
+ goto err_rstdaa;
+ }
+
+- i3c_bus_set_addr_slot_status(&master->bus,
+- i3cboardinfo->init_dyn_addr,
+- I3C_ADDR_SLOT_I3C_DEV);
++ /* Do not mark as occupied until a real device exists on the bus */
++ i3c_bus_set_addr_slot_status_mask(&master->bus,
++ i3cboardinfo->init_dyn_addr,
++ I3C_ADDR_SLOT_EXT_DESIRED,
++ I3C_ADDR_SLOT_EXT_STATUS_MASK);
+
+ /*
+ * Only try to create/attach devices that have a static
+@@ -2088,7 +2126,8 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
+ else
+ expected_dyn_addr = newdev->info.dyn_addr;
+
+- if (newdev->info.dyn_addr != expected_dyn_addr) {
++ if (newdev->info.dyn_addr != expected_dyn_addr &&
++ i3c_bus_get_addr_slot_status(&master->bus, expected_dyn_addr) == I3C_ADDR_SLOT_FREE) {
+ /*
+ * Try to apply the expected dynamic address. If it fails, keep
+ * the address assigned by the master.
+diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
+index a918e96b21fddc..13adc584009429 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
++++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
+@@ -159,10 +159,10 @@ static void hci_dma_cleanup(struct i3c_hci *hci)
+ for (i = 0; i < rings->total; i++) {
+ rh = &rings->headers[i];
+
++ rh_reg_write(INTR_SIGNAL_ENABLE, 0);
+ rh_reg_write(RING_CONTROL, 0);
+ rh_reg_write(CR_SETUP, 0);
+ rh_reg_write(IBI_SETUP, 0);
+- rh_reg_write(INTR_SIGNAL_ENABLE, 0);
+
+ if (rh->xfer)
+ dma_free_coherent(&hci->master.dev,
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 7042ddfdfc03ee..955e9eff0099e5 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -1394,6 +1394,9 @@ static int ad7192_probe(struct spi_device *spi)
+ st->int_vref_mv = ret == -ENODEV ? avdd_mv : ret / MILLI;
+
+ st->chip_info = spi_get_device_match_data(spi);
++ if (!st->chip_info)
++ return -ENODEV;
++
+ indio_dev->name = st->chip_info->name;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->info = st->chip_info->info;
+diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
+index 8c516ede911619..640a5d3aa2c6e7 100644
+--- a/drivers/iio/light/ltr501.c
++++ b/drivers/iio/light/ltr501.c
+@@ -1613,6 +1613,8 @@ static const struct acpi_device_id ltr_acpi_match[] = {
+ { "LTER0501", ltr501 },
+ { "LTER0559", ltr559 },
+ { "LTER0301", ltr301 },
++ /* https://www.catalog.update.microsoft.com/Search.aspx?q=lter0303 */
++ { "LTER0303", ltr303 },
+ { },
+ };
+ MODULE_DEVICE_TABLE(acpi, ltr_acpi_match);
+diff --git a/drivers/iio/magnetometer/af8133j.c b/drivers/iio/magnetometer/af8133j.c
+index d81d89af6283b7..acd291f3e7924c 100644
+--- a/drivers/iio/magnetometer/af8133j.c
++++ b/drivers/iio/magnetometer/af8133j.c
+@@ -312,10 +312,11 @@ static int af8133j_set_scale(struct af8133j_data *data,
+ * When suspended, just store the new range to data->range to be
+ * applied later during power up.
+ */
+- if (!pm_runtime_status_suspended(dev))
++ if (!pm_runtime_status_suspended(dev)) {
+ scoped_guard(mutex, &data->mutex)
+ ret = regmap_write(data->regmap,
+ AF8133J_REG_RANGE, range);
++ }
+
+ pm_runtime_enable(dev);
+
+diff --git a/drivers/iio/magnetometer/yamaha-yas530.c b/drivers/iio/magnetometer/yamaha-yas530.c
+index 65011a8598d332..c55a38650c0d47 100644
+--- a/drivers/iio/magnetometer/yamaha-yas530.c
++++ b/drivers/iio/magnetometer/yamaha-yas530.c
+@@ -372,6 +372,7 @@ static int yas537_measure(struct yas5xx *yas5xx, u16 *t, u16 *x, u16 *y1, u16 *y
+ u8 data[8];
+ u16 xy1y2[3];
+ s32 h[3], s[3];
++ int half_range = BIT(13);
+ int i, ret;
+
+ mutex_lock(&yas5xx->lock);
+@@ -406,13 +407,13 @@ static int yas537_measure(struct yas5xx *yas5xx, u16 *t, u16 *x, u16 *y1, u16 *y
+ /* The second version of YAS537 needs to include calibration coefficients */
+ if (yas5xx->version == YAS537_VERSION_1) {
+ for (i = 0; i < 3; i++)
+- s[i] = xy1y2[i] - BIT(13);
+- h[0] = (c->k * (128 * s[0] + c->a2 * s[1] + c->a3 * s[2])) / BIT(13);
+- h[1] = (c->k * (c->a4 * s[0] + c->a5 * s[1] + c->a6 * s[2])) / BIT(13);
+- h[2] = (c->k * (c->a7 * s[0] + c->a8 * s[1] + c->a9 * s[2])) / BIT(13);
++ s[i] = xy1y2[i] - half_range;
++ h[0] = (c->k * (128 * s[0] + c->a2 * s[1] + c->a3 * s[2])) / half_range;
++ h[1] = (c->k * (c->a4 * s[0] + c->a5 * s[1] + c->a6 * s[2])) / half_range;
++ h[2] = (c->k * (c->a7 * s[0] + c->a8 * s[1] + c->a9 * s[2])) / half_range;
+ for (i = 0; i < 3; i++) {
+- clamp_val(h[i], -BIT(13), BIT(13) - 1);
+- xy1y2[i] = h[i] + BIT(13);
++ h[i] = clamp(h[i], -half_range, half_range - 1);
++ xy1y2[i] = h[i] + half_range;
+ }
+ }
+
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 804b788f3f167d..f3399087859fd1 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -118,6 +118,7 @@ static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
+ */
+ static bool increase_address_space(struct amd_io_pgtable *pgtable,
+ unsigned long address,
++ unsigned int page_size_level,
+ gfp_t gfp)
+ {
+ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+@@ -133,7 +134,8 @@ static bool increase_address_space(struct amd_io_pgtable *pgtable,
+
+ spin_lock_irqsave(&domain->lock, flags);
+
+- if (address <= PM_LEVEL_SIZE(pgtable->mode))
++ if (address <= PM_LEVEL_SIZE(pgtable->mode) &&
++ pgtable->mode - 1 >= page_size_level)
+ goto out;
+
+ ret = false;
+@@ -163,18 +165,21 @@ static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
+ gfp_t gfp,
+ bool *updated)
+ {
++ unsigned long last_addr = address + (page_size - 1);
+ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+ int level, end_lvl;
+ u64 *pte, *page;
+
+ BUG_ON(!is_power_of_2(page_size));
+
+- while (address > PM_LEVEL_SIZE(pgtable->mode)) {
++ while (last_addr > PM_LEVEL_SIZE(pgtable->mode) ||
++ pgtable->mode - 1 < PAGE_SIZE_LEVEL(page_size)) {
+ /*
+ * Return an error if there is no memory to update the
+ * page-table.
+ */
+- if (!increase_address_space(pgtable, address, gfp))
++ if (!increase_address_space(pgtable, last_addr,
++ PAGE_SIZE_LEVEL(page_size), gfp))
+ return NULL;
+ }
+
+diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
+index e590973ce5cfa2..b8393a8c075396 100644
+--- a/drivers/iommu/iommufd/fault.c
++++ b/drivers/iommu/iommufd/fault.c
+@@ -415,8 +415,6 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ put_unused_fd(fdno);
+ out_fput:
+ fput(filep);
+- refcount_dec(&fault->obj.users);
+- iommufd_ctx_put(fault->ictx);
+ out_abort:
+ iommufd_object_abort_and_destroy(ucmd->ictx, &fault->obj);
+
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index d82bcab233a1b0..66ce15027f28d7 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -407,7 +407,7 @@ config PARTITION_PERCPU
+ config STM32MP_EXTI
+ tristate "STM32MP extended interrupts and event controller"
+ depends on (ARCH_STM32 && !ARM_SINGLE_ARMV7M) || COMPILE_TEST
+- default y
++ default ARCH_STM32 && !ARM_SINGLE_ARMV7M
+ select IRQ_DOMAIN_HIERARCHY
+ select GENERIC_IRQ_CHIP
+ help
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 52f625e07658cb..d9b6ec844cdda0 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -44,6 +44,7 @@
+ #define ITS_FLAGS_WORKAROUND_CAVIUM_22375 (1ULL << 1)
+ #define ITS_FLAGS_WORKAROUND_CAVIUM_23144 (1ULL << 2)
+ #define ITS_FLAGS_FORCE_NON_SHAREABLE (1ULL << 3)
++#define ITS_FLAGS_WORKAROUND_HISILICON_162100801 (1ULL << 4)
+
+ #define RD_LOCAL_LPI_ENABLED BIT(0)
+ #define RD_LOCAL_PENDTABLE_PREALLOCATED BIT(1)
+@@ -61,6 +62,7 @@ static u32 lpi_id_bits;
+ #define LPI_PENDBASE_SZ ALIGN(BIT(LPI_NRBITS) / 8, SZ_64K)
+
+ static u8 __ro_after_init lpi_prop_prio;
++static struct its_node *find_4_1_its(void);
+
+ /*
+ * Collection structure - just an ID, and a redistributor address to
+@@ -3797,6 +3799,20 @@ static void its_vpe_db_proxy_move(struct its_vpe *vpe, int from, int to)
+ raw_spin_unlock_irqrestore(&vpe_proxy.lock, flags);
+ }
+
++static void its_vpe_4_1_invall_locked(int cpu, struct its_vpe *vpe)
++{
++ void __iomem *rdbase;
++ u64 val;
++
++ val = GICR_INVALLR_V;
++ val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
++
++ guard(raw_spinlock)(&gic_data_rdist_cpu(cpu)->rd_lock);
++ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
++ gic_write_lpir(val, rdbase + GICR_INVALLR);
++ wait_for_syncr(rdbase);
++}
++
+ static int its_vpe_set_affinity(struct irq_data *d,
+ const struct cpumask *mask_val,
+ bool force)
+@@ -3804,6 +3820,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+ unsigned int from, cpu = nr_cpu_ids;
+ struct cpumask *table_mask;
++ struct its_node *its;
+ unsigned long flags;
+
+ /*
+@@ -3866,6 +3883,11 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ vpe->col_idx = cpu;
+
+ its_send_vmovp(vpe);
++
++ its = find_4_1_its();
++ if (its && its->flags & ITS_FLAGS_WORKAROUND_HISILICON_162100801)
++ its_vpe_4_1_invall_locked(cpu, vpe);
++
+ its_vpe_db_proxy_move(vpe, from, cpu);
+
+ out:
+@@ -4173,22 +4195,12 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+
+ static void its_vpe_4_1_invall(struct its_vpe *vpe)
+ {
+- void __iomem *rdbase;
+ unsigned long flags;
+- u64 val;
+ int cpu;
+
+- val = GICR_INVALLR_V;
+- val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
+-
+ /* Target the redistributor this vPE is currently known on */
+ cpu = vpe_to_cpuid_lock(vpe, &flags);
+- raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
+- rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+- gic_write_lpir(val, rdbase + GICR_INVALLR);
+-
+- wait_for_syncr(rdbase);
+- raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
++ its_vpe_4_1_invall_locked(cpu, vpe);
+ vpe_to_cpuid_unlock(vpe, flags);
+ }
+
+@@ -4781,6 +4793,14 @@ static bool its_set_non_coherent(void *data)
+ return true;
+ }
+
++static bool __maybe_unused its_enable_quirk_hip09_162100801(void *data)
++{
++ struct its_node *its = data;
++
++ its->flags |= ITS_FLAGS_WORKAROUND_HISILICON_162100801;
++ return true;
++}
++
+ static const struct gic_quirk its_quirks[] = {
+ #ifdef CONFIG_CAVIUM_ERRATUM_22375
+ {
+@@ -4827,6 +4847,14 @@ static const struct gic_quirk its_quirks[] = {
+ .init = its_enable_quirk_hip07_161600802,
+ },
+ #endif
++#ifdef CONFIG_HISILICON_ERRATUM_162100801
++ {
++ .desc = "ITS: Hip09 erratum 162100801",
++ .iidr = 0x00051736,
++ .mask = 0xffffffff,
++ .init = its_enable_quirk_hip09_162100801,
++ },
++#endif
+ #ifdef CONFIG_ROCKCHIP_ERRATUM_3588001
+ {
+ .desc = "ITS: Rockchip erratum RK3588001",
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 06b97fd49ad9a2..f69f4e928d6143 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -29,11 +29,14 @@ static ssize_t brightness_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
++ unsigned int brightness;
+
+- /* no lock needed for this */
++ mutex_lock(&led_cdev->led_access);
+ led_update_brightness(led_cdev);
++ brightness = led_cdev->brightness;
++ mutex_unlock(&led_cdev->led_access);
+
+- return sprintf(buf, "%u\n", led_cdev->brightness);
++ return sprintf(buf, "%u\n", brightness);
+ }
+
+ static ssize_t brightness_store(struct device *dev,
+@@ -70,8 +73,13 @@ static ssize_t max_brightness_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
++ unsigned int max_brightness;
++
++ mutex_lock(&led_cdev->led_access);
++ max_brightness = led_cdev->max_brightness;
++ mutex_unlock(&led_cdev->led_access);
+
+- return sprintf(buf, "%u\n", led_cdev->max_brightness);
++ return sprintf(buf, "%u\n", max_brightness);
+ }
+ static DEVICE_ATTR_RO(max_brightness);
+
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 94885e411085ad..82102a4c5d6883 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -269,6 +269,35 @@ static bool pcc_mbox_cmd_complete_check(struct pcc_chan_info *pchan)
+ return !!val;
+ }
+
++static void check_and_ack(struct pcc_chan_info *pchan, struct mbox_chan *chan)
++{
++ struct acpi_pcct_ext_pcc_shared_memory pcc_hdr;
++
++ if (pchan->type != ACPI_PCCT_TYPE_EXT_PCC_SLAVE_SUBSPACE)
++ return;
++ /* If the memory region has not been mapped, we cannot
++ * determine if we need to send the message, but we still
++ * need to set the cmd_update flag before returning.
++ */
++ if (pchan->chan.shmem == NULL) {
++ pcc_chan_reg_read_modify_write(&pchan->cmd_update);
++ return;
++ }
++ memcpy_fromio(&pcc_hdr, pchan->chan.shmem,
++ sizeof(struct acpi_pcct_ext_pcc_shared_memory));
++ /*
++ * The PCC slave subspace channel needs to set the command complete bit
++ * after processing the message. If the PCC_ACK_FLAG is set, it should also
++ * ring the doorbell.
++ *
++ * The PCC master subspace channel clears chan_in_use to free the channel.
++ */
++ if (le32_to_cpup(&pcc_hdr.flags) & PCC_ACK_FLAG_MASK)
++ pcc_send_data(chan, NULL);
++ else
++ pcc_chan_reg_read_modify_write(&pchan->cmd_update);
++}
++
+ /**
+ * pcc_mbox_irq - PCC mailbox interrupt handler
+ * @irq: interrupt number
+@@ -306,14 +335,7 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
+
+ mbox_chan_received_data(chan, NULL);
+
+- /*
+- * The PCC slave subspace channel needs to set the command complete bit
+- * and ring doorbell after processing message.
+- *
+- * The PCC master subspace channel clears chan_in_use to free channel.
+- */
+- if (pchan->type == ACPI_PCCT_TYPE_EXT_PCC_SLAVE_SUBSPACE)
+- pcc_send_data(chan, NULL);
++ check_and_ack(pchan, chan);
+ pchan->chan_in_use = false;
+
+ return IRQ_HANDLED;
+@@ -365,14 +387,37 @@ EXPORT_SYMBOL_GPL(pcc_mbox_request_channel);
+ void pcc_mbox_free_channel(struct pcc_mbox_chan *pchan)
+ {
+ struct mbox_chan *chan = pchan->mchan;
++ struct pcc_chan_info *pchan_info;
++ struct pcc_mbox_chan *pcc_mbox_chan;
+
+ if (!chan || !chan->cl)
+ return;
++ pchan_info = chan->con_priv;
++ pcc_mbox_chan = &pchan_info->chan;
++ if (pcc_mbox_chan->shmem) {
++ iounmap(pcc_mbox_chan->shmem);
++ pcc_mbox_chan->shmem = NULL;
++ }
+
+ mbox_free_channel(chan);
+ }
+ EXPORT_SYMBOL_GPL(pcc_mbox_free_channel);
+
++int pcc_mbox_ioremap(struct mbox_chan *chan)
++{
++ struct pcc_chan_info *pchan_info;
++ struct pcc_mbox_chan *pcc_mbox_chan;
++
++ if (!chan || !chan->cl)
++ return -1;
++ pchan_info = chan->con_priv;
++ pcc_mbox_chan = &pchan_info->chan;
++ pcc_mbox_chan->shmem = ioremap(pcc_mbox_chan->shmem_base_addr,
++ pcc_mbox_chan->shmem_size);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(pcc_mbox_ioremap);
++
+ /**
+ * pcc_send_data - Called from Mailbox Controller code. Used
+ * here only to ring the channel doorbell. The PCC client
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index e7abfdd77c3b66..e42f1400cea9d7 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1718,7 +1718,7 @@ static CLOSURE_CALLBACK(cache_set_flush)
+ if (!IS_ERR_OR_NULL(c->gc_thread))
+ kthread_stop(c->gc_thread);
+
+- if (!IS_ERR(c->root))
++ if (!IS_ERR_OR_NULL(c->root))
+ list_add(&c->root->list, &c->btree_cache);
+
+ /*
+diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig
+index a4537818a58c05..cd1c545293574a 100644
+--- a/drivers/media/pci/intel/ipu6/Kconfig
++++ b/drivers/media/pci/intel/ipu6/Kconfig
+@@ -8,7 +8,7 @@ config VIDEO_INTEL_IPU6
+ select IOMMU_IOVA
+ select VIDEO_V4L2_SUBDEV_API
+ select MEDIA_CONTROLLER
+- select VIDEOBUF2_DMA_CONTIG
++ select VIDEOBUF2_DMA_SG
+ select V4L2_FWNODE
+ help
+ This is the 6th Gen Intel Image Processing Unit, found in Intel SoCs
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
+index 03dbb0e0ea7957..bbb66b56ee88c9 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
+@@ -13,17 +13,48 @@
+
+ #include <media/media-entity.h>
+ #include <media/v4l2-subdev.h>
+-#include <media/videobuf2-dma-contig.h>
++#include <media/videobuf2-dma-sg.h>
+ #include <media/videobuf2-v4l2.h>
+
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-fw-isys.h"
+ #include "ipu6-isys.h"
+ #include "ipu6-isys-video.h"
+
+-static int queue_setup(struct vb2_queue *q, unsigned int *num_buffers,
+- unsigned int *num_planes, unsigned int sizes[],
+- struct device *alloc_devs[])
++static int ipu6_isys_buf_init(struct vb2_buffer *vb)
++{
++ struct ipu6_isys *isys = vb2_get_drv_priv(vb->vb2_queue);
++ struct sg_table *sg = vb2_dma_sg_plane_desc(vb, 0);
++ struct vb2_v4l2_buffer *vvb = to_vb2_v4l2_buffer(vb);
++ struct ipu6_isys_video_buffer *ivb =
++ vb2_buffer_to_ipu6_isys_video_buffer(vvb);
++ int ret;
++
++ ret = ipu6_dma_map_sgtable(isys->adev, sg, DMA_TO_DEVICE, 0);
++ if (ret)
++ return ret;
++
++ ivb->dma_addr = sg_dma_address(sg->sgl);
++
++ return 0;
++}
++
++static void ipu6_isys_buf_cleanup(struct vb2_buffer *vb)
++{
++ struct ipu6_isys *isys = vb2_get_drv_priv(vb->vb2_queue);
++ struct sg_table *sg = vb2_dma_sg_plane_desc(vb, 0);
++ struct vb2_v4l2_buffer *vvb = to_vb2_v4l2_buffer(vb);
++ struct ipu6_isys_video_buffer *ivb =
++ vb2_buffer_to_ipu6_isys_video_buffer(vvb);
++
++ ivb->dma_addr = 0;
++ ipu6_dma_unmap_sgtable(isys->adev, sg, DMA_TO_DEVICE, 0);
++}
++
++static int ipu6_isys_queue_setup(struct vb2_queue *q, unsigned int *num_buffers,
++ unsigned int *num_planes, unsigned int sizes[],
++ struct device *alloc_devs[])
+ {
+ struct ipu6_isys_queue *aq = vb2_queue_to_isys_queue(q);
+ struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq);
+@@ -207,9 +238,11 @@ ipu6_isys_buf_to_fw_frame_buf_pin(struct vb2_buffer *vb,
+ struct ipu6_fw_isys_frame_buff_set_abi *set)
+ {
+ struct ipu6_isys_queue *aq = vb2_queue_to_isys_queue(vb->vb2_queue);
++ struct vb2_v4l2_buffer *vvb = to_vb2_v4l2_buffer(vb);
++ struct ipu6_isys_video_buffer *ivb =
++ vb2_buffer_to_ipu6_isys_video_buffer(vvb);
+
+- set->output_pins[aq->fw_output].addr =
+- vb2_dma_contig_plane_dma_addr(vb, 0);
++ set->output_pins[aq->fw_output].addr = ivb->dma_addr;
+ set->output_pins[aq->fw_output].out_buf_id = vb->index + 1;
+ }
+
+@@ -332,7 +365,7 @@ static void buf_queue(struct vb2_buffer *vb)
+
+ dev_dbg(dev, "queue buffer %u for %s\n", vb->index, av->vdev.name);
+
+- dma = vb2_dma_contig_plane_dma_addr(vb, 0);
++ dma = ivb->dma_addr;
+ dev_dbg(dev, "iova: iova %pad\n", &dma);
+
+ spin_lock_irqsave(&aq->lock, flags);
+@@ -724,10 +757,14 @@ void ipu6_isys_queue_buf_ready(struct ipu6_isys_stream *stream,
+ }
+
+ list_for_each_entry_reverse(ib, &aq->active, head) {
++ struct ipu6_isys_video_buffer *ivb;
++ struct vb2_v4l2_buffer *vvb;
+ dma_addr_t addr;
+
+ vb = ipu6_isys_buffer_to_vb2_buffer(ib);
+- addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++ vvb = to_vb2_v4l2_buffer(vb);
++ ivb = vb2_buffer_to_ipu6_isys_video_buffer(vvb);
++ addr = ivb->dma_addr;
+
+ if (info->pin.addr != addr) {
+ if (first)
+@@ -766,10 +803,12 @@ void ipu6_isys_queue_buf_ready(struct ipu6_isys_stream *stream,
+ }
+
+ static const struct vb2_ops ipu6_isys_queue_ops = {
+- .queue_setup = queue_setup,
++ .queue_setup = ipu6_isys_queue_setup,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
++ .buf_init = ipu6_isys_buf_init,
+ .buf_prepare = ipu6_isys_buf_prepare,
++ .buf_cleanup = ipu6_isys_buf_cleanup,
+ .start_streaming = start_streaming,
+ .stop_streaming = stop_streaming,
+ .buf_queue = buf_queue,
+@@ -779,16 +818,17 @@ int ipu6_isys_queue_init(struct ipu6_isys_queue *aq)
+ {
+ struct ipu6_isys *isys = ipu6_isys_queue_to_video(aq)->isys;
+ struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq);
++ struct ipu6_bus_device *adev = isys->adev;
+ int ret;
+
+ /* no support for userptr */
+ if (!aq->vbq.io_modes)
+ aq->vbq.io_modes = VB2_MMAP | VB2_DMABUF;
+
+- aq->vbq.drv_priv = aq;
++ aq->vbq.drv_priv = isys;
+ aq->vbq.ops = &ipu6_isys_queue_ops;
+ aq->vbq.lock = &av->mutex;
+- aq->vbq.mem_ops = &vb2_dma_contig_memops;
++ aq->vbq.mem_ops = &vb2_dma_sg_memops;
+ aq->vbq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ aq->vbq.min_queued_buffers = 1;
+ aq->vbq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+@@ -797,8 +837,8 @@ int ipu6_isys_queue_init(struct ipu6_isys_queue *aq)
+ if (ret)
+ return ret;
+
+- aq->dev = &isys->adev->auxdev.dev;
+- aq->vbq.dev = &isys->adev->auxdev.dev;
++ aq->dev = &adev->auxdev.dev;
++ aq->vbq.dev = &adev->isp->pdev->dev;
+ spin_lock_init(&aq->lock);
+ INIT_LIST_HEAD(&aq->active);
+ INIT_LIST_HEAD(&aq->incoming);
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h
+index 95cfd4869d9356..fe8fc796a58f5d 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h
+@@ -38,6 +38,7 @@ struct ipu6_isys_buffer {
+ struct ipu6_isys_video_buffer {
+ struct vb2_v4l2_buffer vb_v4l2;
+ struct ipu6_isys_buffer ib;
++ dma_addr_t dma_addr;
+ };
+
+ #define IPU6_ISYS_BUFFER_LIST_FL_INCOMING BIT(0)
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys.c b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+index c4aff2e2009bab..c85e056cb904b2 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+@@ -34,6 +34,7 @@
+
+ #include "ipu6-bus.h"
+ #include "ipu6-cpd.h"
++#include "ipu6-dma.h"
+ #include "ipu6-isys.h"
+ #include "ipu6-isys-csi2.h"
+ #include "ipu6-mmu.h"
+@@ -933,29 +934,27 @@ static const struct dev_pm_ops isys_pm_ops = {
+
+ static void free_fw_msg_bufs(struct ipu6_isys *isys)
+ {
+- struct device *dev = &isys->adev->auxdev.dev;
+ struct isys_fw_msgs *fwmsg, *safe;
+
+ list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist, head)
+- dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg,
+- fwmsg->dma_addr, 0);
++ ipu6_dma_free(isys->adev, sizeof(struct isys_fw_msgs), fwmsg,
++ fwmsg->dma_addr, 0);
+
+ list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist_fw, head)
+- dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg,
+- fwmsg->dma_addr, 0);
++ ipu6_dma_free(isys->adev, sizeof(struct isys_fw_msgs), fwmsg,
++ fwmsg->dma_addr, 0);
+ }
+
+ static int alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount)
+ {
+- struct device *dev = &isys->adev->auxdev.dev;
+ struct isys_fw_msgs *addr;
+ dma_addr_t dma_addr;
+ unsigned long flags;
+ unsigned int i;
+
+ for (i = 0; i < amount; i++) {
+- addr = dma_alloc_attrs(dev, sizeof(struct isys_fw_msgs),
+- &dma_addr, GFP_KERNEL, 0);
++ addr = ipu6_dma_alloc(isys->adev, sizeof(*addr),
++ &dma_addr, GFP_KERNEL, 0);
+ if (!addr)
+ break;
+ addr->dma_addr = dma_addr;
+@@ -974,8 +973,8 @@ static int alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount)
+ struct isys_fw_msgs, head);
+ list_del(&addr->head);
+ spin_unlock_irqrestore(&isys->listlock, flags);
+- dma_free_attrs(dev, sizeof(struct isys_fw_msgs), addr,
+- addr->dma_addr, 0);
++ ipu6_dma_free(isys->adev, sizeof(struct isys_fw_msgs), addr,
++ addr->dma_addr, 0);
+ spin_lock_irqsave(&isys->listlock, flags);
+ }
+ spin_unlock_irqrestore(&isys->listlock, flags);
+diff --git a/drivers/media/usb/cx231xx/cx231xx-cards.c b/drivers/media/usb/cx231xx/cx231xx-cards.c
+index 92efe6c1f47bae..bda729b42d05fe 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-cards.c
++++ b/drivers/media/usb/cx231xx/cx231xx-cards.c
+@@ -994,6 +994,8 @@ const unsigned int cx231xx_bcount = ARRAY_SIZE(cx231xx_boards);
+
+ /* table of devices that work with this driver */
+ struct usb_device_id cx231xx_id_table[] = {
++ {USB_DEVICE(0x1D19, 0x6108),
++ .driver_info = CX231XX_BOARD_PV_XCAPTURE_USB},
+ {USB_DEVICE(0x1D19, 0x6109),
+ .driver_info = CX231XX_BOARD_PV_XCAPTURE_USB},
+ {USB_DEVICE(0x0572, 0x5A3C),
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 675be4858366f0..9f38a9b23c0181 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2474,9 +2474,22 @@ static const struct uvc_device_info uvc_quirk_force_y8 = {
+ * The Logitech cameras listed below have their interface class set to
+ * VENDOR_SPEC because they don't announce themselves as UVC devices, even
+ * though they are compliant.
++ *
++ * Sort these by vendor/product ID.
+ */
+ static const struct usb_device_id uvc_ids[] = {
+ /* Quanta ACER HD User Facing */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x0408,
++ .idProduct = 0x4033,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = UVC_PC_PROTOCOL_15,
++ .driver_info = (kernel_ulong_t)&(const struct uvc_device_info){
++ .uvc_version = 0x010a,
++ } },
++ /* Quanta ACER HD User Facing */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x0408,
+@@ -3010,6 +3023,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
+ | UVC_QUIRK_IGNORE_SELECTOR_UNIT) },
++ /* NXP Semiconductors IR VIDEO */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x1fc9,
++ .idProduct = 0x009b,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
+ /* Oculus VR Positional Tracker DK2 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+@@ -3118,6 +3140,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_META(V4L2_META_FMT_D4XX) },
++ /* Intel D421 Depth Module */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x8086,
++ .idProduct = 0x1155,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_META(V4L2_META_FMT_D4XX) },
+ /* Generic USB Video Class */
+ { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 1, UVC_PC_PROTOCOL_UNDEFINED) },
+ { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 1, UVC_PC_PROTOCOL_15) },
+diff --git a/drivers/misc/eeprom/eeprom_93cx6.c b/drivers/misc/eeprom/eeprom_93cx6.c
+index 9627294fe3e951..4c9827fe921731 100644
+--- a/drivers/misc/eeprom/eeprom_93cx6.c
++++ b/drivers/misc/eeprom/eeprom_93cx6.c
+@@ -186,6 +186,11 @@ void eeprom_93cx6_read(struct eeprom_93cx6 *eeprom, const u8 word,
+ eeprom_93cx6_write_bits(eeprom, command,
+ PCI_EEPROM_WIDTH_OPCODE + eeprom->width);
+
++ if (has_quirk_extra_read_cycle(eeprom)) {
++ eeprom_93cx6_pulse_high(eeprom);
++ eeprom_93cx6_pulse_low(eeprom);
++ }
++
+ /*
+ * Read the requested 16 bits.
+ */
+@@ -252,6 +257,11 @@ void eeprom_93cx6_readb(struct eeprom_93cx6 *eeprom, const u8 byte,
+ eeprom_93cx6_write_bits(eeprom, command,
+ PCI_EEPROM_WIDTH_OPCODE + eeprom->width + 1);
+
++ if (has_quirk_extra_read_cycle(eeprom)) {
++ eeprom_93cx6_pulse_high(eeprom);
++ eeprom_93cx6_pulse_low(eeprom);
++ }
++
+ /*
+ * Read the requested 8 bits.
+ */
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index ef06a4d5d65bb2..1d08009f2bd83f 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -50,6 +50,7 @@
+ #include <linux/mmc/sd.h>
+
+ #include <linux/uaccess.h>
++#include <linux/unaligned.h>
+
+ #include "queue.h"
+ #include "block.h"
+@@ -993,11 +994,12 @@ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
+ int err;
+ u32 result;
+ __be32 *blocks;
++ u8 resp_sz = mmc_card_ult_capacity(card) ? 8 : 4;
++ unsigned int noio_flag;
+
+ struct mmc_request mrq = {};
+ struct mmc_command cmd = {};
+ struct mmc_data data = {};
+-
+ struct scatterlist sg;
+
+ err = mmc_app_cmd(card->host, card);
+@@ -1008,7 +1010,7 @@ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
+ cmd.arg = 0;
+ cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+
+- data.blksz = 4;
++ data.blksz = resp_sz;
+ data.blocks = 1;
+ data.flags = MMC_DATA_READ;
+ data.sg = &sg;
+@@ -1018,15 +1020,29 @@ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
+ mrq.cmd = &cmd;
+ mrq.data = &data;
+
+- blocks = kmalloc(4, GFP_KERNEL);
++ noio_flag = memalloc_noio_save();
++ blocks = kmalloc(resp_sz, GFP_KERNEL);
++ memalloc_noio_restore(noio_flag);
+ if (!blocks)
+ return -ENOMEM;
+
+- sg_init_one(&sg, blocks, 4);
++ sg_init_one(&sg, blocks, resp_sz);
+
+ mmc_wait_for_req(card->host, &mrq);
+
+- result = ntohl(*blocks);
++ if (mmc_card_ult_capacity(card)) {
++ /*
++ * Normally, ACMD22 returns the number of written sectors as
++ * u32. SDUC, however, returns it as u64. This is not a
++ * superfluous requirement, because SDUC writes may exceed 2TB.
++ * For Linux mmc, however, the previous write operation could
++ * not have exceeded the block layer limits, so just make room
++ * for a u64 and cast the response back to u32.
++ */
++ result = clamp_val(get_unaligned_be64(blocks), 0, UINT_MAX);
++ } else {
++ result = ntohl(*blocks);
++ }
+ kfree(blocks);
+
+ if (cmd.error || data.error)
+diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
+index 0ddaee0eae54f0..4f3a26676ccb86 100644
+--- a/drivers/mmc/core/bus.c
++++ b/drivers/mmc/core/bus.c
+@@ -149,6 +149,8 @@ static void mmc_bus_shutdown(struct device *dev)
+ if (dev->driver && drv->shutdown)
+ drv->shutdown(card);
+
++ __mmc_stop_host(host);
++
+ if (host->bus_ops->shutdown) {
+ ret = host->bus_ops->shutdown(host);
+ if (ret)
+@@ -321,7 +323,9 @@ int mmc_add_card(struct mmc_card *card)
+ case MMC_TYPE_SD:
+ type = "SD";
+ if (mmc_card_blockaddr(card)) {
+- if (mmc_card_ext_capacity(card))
++ if (mmc_card_ult_capacity(card))
++ type = "SDUC";
++ else if (mmc_card_ext_capacity(card))
+ type = "SDXC";
+ else
+ type = "SDHC";
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index b7754a1b8d9788..3205feb1e8ff6a 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -23,6 +23,7 @@
+ #define MMC_CARD_SDXC (1<<3) /* card is SDXC */
+ #define MMC_CARD_REMOVED (1<<4) /* card has been removed */
+ #define MMC_STATE_SUSPENDED (1<<5) /* card is suspended */
++#define MMC_CARD_SDUC (1<<6) /* card is SDUC */
+
+ #define mmc_card_present(c) ((c)->state & MMC_STATE_PRESENT)
+ #define mmc_card_readonly(c) ((c)->state & MMC_STATE_READONLY)
+@@ -30,11 +31,13 @@
+ #define mmc_card_ext_capacity(c) ((c)->state & MMC_CARD_SDXC)
+ #define mmc_card_removed(c) ((c) && ((c)->state & MMC_CARD_REMOVED))
+ #define mmc_card_suspended(c) ((c)->state & MMC_STATE_SUSPENDED)
++#define mmc_card_ult_capacity(c) ((c)->state & MMC_CARD_SDUC)
+
+ #define mmc_card_set_present(c) ((c)->state |= MMC_STATE_PRESENT)
+ #define mmc_card_set_readonly(c) ((c)->state |= MMC_STATE_READONLY)
+ #define mmc_card_set_blockaddr(c) ((c)->state |= MMC_STATE_BLOCKADDR)
+ #define mmc_card_set_ext_capacity(c) ((c)->state |= MMC_CARD_SDXC)
++#define mmc_card_set_ult_capacity(c) ((c)->state |= MMC_CARD_SDUC)
+ #define mmc_card_set_removed(c) ((c)->state |= MMC_CARD_REMOVED)
+ #define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED)
+ #define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED)
+@@ -82,6 +85,7 @@ struct mmc_fixup {
+ #define CID_MANFID_SANDISK_SD 0x3
+ #define CID_MANFID_ATP 0x9
+ #define CID_MANFID_TOSHIBA 0x11
++#define CID_MANFID_GIGASTONE 0x12
+ #define CID_MANFID_MICRON 0x13
+ #define CID_MANFID_SAMSUNG 0x15
+ #define CID_MANFID_APACER 0x27
+@@ -284,4 +288,10 @@ static inline int mmc_card_broken_cache_flush(const struct mmc_card *c)
+ {
+ return c->quirks & MMC_QUIRK_BROKEN_CACHE_FLUSH;
+ }
++
++static inline int mmc_card_broken_sd_poweroff_notify(const struct mmc_card *c)
++{
++ return c->quirks & MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index d6c819dd68ed47..327029f5c59b79 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2296,6 +2296,9 @@ void mmc_start_host(struct mmc_host *host)
+
+ void __mmc_stop_host(struct mmc_host *host)
+ {
++ if (host->rescan_disable)
++ return;
++
+ if (host->slot.cd_irq >= 0) {
+ mmc_gpio_set_cd_wake(host, false);
+ disable_irq(host->slot.cd_irq);
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 92905fc46436dd..89b512905be140 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -25,6 +25,15 @@ static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = {
+ 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
+ MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY),
+
++ /*
++ * GIGASTONE Gaming Plus microSD cards manufactured in 02/2022 never
++ * clear the Flush Cache bit or set the Poweroff Notification Ready bit.
++ */
++ _FIXUP_EXT("ASTC", CID_MANFID_GIGASTONE, 0x3456, 2022, 2,
++ 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
++ MMC_QUIRK_BROKEN_SD_CACHE | MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY,
++ EXT_CSD_REV_ANY),
++
+ END_FIXUP
+ };
+
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 12fe282bea77ef..63915541c0e494 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -100,7 +100,7 @@ void mmc_decode_cid(struct mmc_card *card)
+ /*
+ * Given a 128-bit response, decode to our card CSD structure.
+ */
+-static int mmc_decode_csd(struct mmc_card *card)
++static int mmc_decode_csd(struct mmc_card *card, bool is_sduc)
+ {
+ struct mmc_csd *csd = &card->csd;
+ unsigned int e, m, csd_struct;
+@@ -144,9 +144,10 @@ static int mmc_decode_csd(struct mmc_card *card)
+ mmc_card_set_readonly(card);
+ break;
+ case 1:
++ case 2:
+ /*
+- * This is a block-addressed SDHC or SDXC card. Most
+- * interesting fields are unused and have fixed
++ * This is a block-addressed SDHC, SDXC or SDUC card.
++ * Most interesting fields are unused and have fixed
+ * values. To avoid getting tripped by buggy cards,
+ * we assume those fixed values ourselves.
+ */
+@@ -159,14 +160,19 @@ static int mmc_decode_csd(struct mmc_card *card)
+ e = unstuff_bits(resp, 96, 3);
+ csd->max_dtr = tran_exp[e] * tran_mant[m];
+ csd->cmdclass = unstuff_bits(resp, 84, 12);
+- csd->c_size = unstuff_bits(resp, 48, 22);
+
+- /* SDXC cards have a minimum C_SIZE of 0x00FFFF */
+- if (csd->c_size >= 0xFFFF)
++ if (csd_struct == 1)
++ m = unstuff_bits(resp, 48, 22);
++ else
++ m = unstuff_bits(resp, 48, 28);
++ csd->c_size = m;
++
++ if (csd->c_size >= 0x400000 && is_sduc)
++ mmc_card_set_ult_capacity(card);
++ else if (csd->c_size >= 0xFFFF)
+ mmc_card_set_ext_capacity(card);
+
+- m = unstuff_bits(resp, 48, 22);
+- csd->capacity = (1 + m) << 10;
++ csd->capacity = (1 + (typeof(sector_t))m) << 10;
+
+ csd->read_blkbits = 9;
+ csd->read_partial = 0;
+@@ -876,7 +882,7 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
+ return err;
+ }
+
+-int mmc_sd_get_csd(struct mmc_card *card)
++int mmc_sd_get_csd(struct mmc_card *card, bool is_sduc)
+ {
+ int err;
+
+@@ -887,7 +893,7 @@ int mmc_sd_get_csd(struct mmc_card *card)
+ if (err)
+ return err;
+
+- err = mmc_decode_csd(card);
++ err = mmc_decode_csd(card, is_sduc);
+ if (err)
+ return err;
+
+@@ -1107,7 +1113,7 @@ static int sd_parse_ext_reg_power(struct mmc_card *card, u8 fno, u8 page,
+ card->ext_power.rev = reg_buf[0] & 0xf;
+
+ /* Power Off Notification support at bit 4. */
+- if (reg_buf[1] & BIT(4))
++ if ((reg_buf[1] & BIT(4)) && !mmc_card_broken_sd_poweroff_notify(card))
+ card->ext_power.feature_support |= SD_EXT_POWER_OFF_NOTIFY;
+
+ /* Power Sustenance support at bit 5. */
+@@ -1442,7 +1448,7 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
+ }
+
+ if (!oldcard) {
+- err = mmc_sd_get_csd(card);
++ err = mmc_sd_get_csd(card, false);
+ if (err)
+ goto free_card;
+
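
The widening cast added above matters once C_SIZE grows to 28 bits for SDUC: (1 + m) << 10 no longer fits in 32 bits. A minimal standalone sketch of the arithmetic (plain C; uint64_t stands in for sector_t, and the C_SIZE value is just the 28-bit maximum, not a real card's):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t c_size = 0x0FFFFFFF;   /* max 28-bit C_SIZE (SDUC) */

            /* 32-bit arithmetic wraps: 0x10000000 << 10 is 2^38, which
             * is 0 modulo 2^32 */
            uint32_t bad = (1 + c_size) << 10;

            /* widen before shifting, as the hunk does with
             * (typeof(sector_t))m */
            uint64_t good = (1 + (uint64_t)c_size) << 10;

            printf("32-bit: %u sectors\n", bad);    /* prints 0 */
            printf("64-bit: %llu sectors\n", (unsigned long long)good);
            return 0;
    }

2^38 sectors of 512 bytes is 128 TiB, which matches the SDUC capacity range and is exactly what a 32-bit capacity field cannot represent.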
+diff --git a/drivers/mmc/core/sd.h b/drivers/mmc/core/sd.h
+index fe6dd46927a423..7e8beface2ca61 100644
+--- a/drivers/mmc/core/sd.h
++++ b/drivers/mmc/core/sd.h
+@@ -10,7 +10,7 @@ struct mmc_host;
+ struct mmc_card;
+
+ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr);
+-int mmc_sd_get_csd(struct mmc_card *card);
++int mmc_sd_get_csd(struct mmc_card *card, bool is_sduc);
+ void mmc_decode_cid(struct mmc_card *card);
+ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
+ bool reinit);
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 4fb247fde5c080..9566837c9848e6 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -769,7 +769,7 @@ static int mmc_sdio_init_card(struct mmc_host *host, u32 ocr,
+ * Read CSD, before selecting the card
+ */
+ if (!oldcard && mmc_card_sd_combo(card)) {
+- err = mmc_sd_get_csd(card);
++ err = mmc_sd_get_csd(card, false);
+ if (err)
+ goto remove;
+
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 89018b6c97b9a7..813bc20cfb5a6c 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2736,20 +2736,18 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ }
+
+ /* Allocate MMC host for this device */
+- mmc = mmc_alloc_host(sizeof(struct msdc_host), &pdev->dev);
++ mmc = devm_mmc_alloc_host(&pdev->dev, sizeof(struct msdc_host));
+ if (!mmc)
+ return -ENOMEM;
+
+ host = mmc_priv(mmc);
+ ret = mmc_of_parse(mmc);
+ if (ret)
+- goto host_free;
++ return ret;
+
+ host->base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(host->base)) {
+- ret = PTR_ERR(host->base);
+- goto host_free;
+- }
++ if (IS_ERR(host->base))
++ return PTR_ERR(host->base);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res) {
+@@ -2760,53 +2758,45 @@ static int msdc_drv_probe(struct platform_device *pdev)
+
+ ret = mmc_regulator_get_supply(mmc);
+ if (ret)
+- goto host_free;
++ return ret;
+
+ ret = msdc_of_clock_parse(pdev, host);
+ if (ret)
+- goto host_free;
++ return ret;
+
+ host->reset = devm_reset_control_get_optional_exclusive(&pdev->dev,
+ "hrst");
+- if (IS_ERR(host->reset)) {
+- ret = PTR_ERR(host->reset);
+- goto host_free;
+- }
++ if (IS_ERR(host->reset))
++ return PTR_ERR(host->reset);
+
+ /* only eMMC has crypto property */
+ if (!(mmc->caps2 & MMC_CAP2_NO_MMC)) {
+ host->crypto_clk = devm_clk_get_optional(&pdev->dev, "crypto");
+ if (IS_ERR(host->crypto_clk))
+- host->crypto_clk = NULL;
+- else
++ return PTR_ERR(host->crypto_clk);
++ else if (host->crypto_clk)
+ mmc->caps2 |= MMC_CAP2_CRYPTO;
+ }
+
+ host->irq = platform_get_irq(pdev, 0);
+- if (host->irq < 0) {
+- ret = host->irq;
+- goto host_free;
+- }
++ if (host->irq < 0)
++ return host->irq;
+
+ host->pinctrl = devm_pinctrl_get(&pdev->dev);
+- if (IS_ERR(host->pinctrl)) {
+- ret = PTR_ERR(host->pinctrl);
+- dev_err(&pdev->dev, "Cannot find pinctrl!\n");
+- goto host_free;
+- }
++ if (IS_ERR(host->pinctrl))
++ return dev_err_probe(&pdev->dev, PTR_ERR(host->pinctrl),
++ "Cannot find pinctrl");
+
+ host->pins_default = pinctrl_lookup_state(host->pinctrl, "default");
+ if (IS_ERR(host->pins_default)) {
+- ret = PTR_ERR(host->pins_default);
+ dev_err(&pdev->dev, "Cannot find pinctrl default!\n");
+- goto host_free;
++ return PTR_ERR(host->pins_default);
+ }
+
+ host->pins_uhs = pinctrl_lookup_state(host->pinctrl, "state_uhs");
+ if (IS_ERR(host->pins_uhs)) {
+- ret = PTR_ERR(host->pins_uhs);
+ dev_err(&pdev->dev, "Cannot find pinctrl uhs!\n");
+- goto host_free;
++ return PTR_ERR(host->pins_uhs);
+ }
+
+ /* Support for SDIO eint irq ? */
+@@ -2885,7 +2875,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ ret = msdc_ungate_clock(host);
+ if (ret) {
+ dev_err(&pdev->dev, "Cannot ungate clocks!\n");
+- goto release_mem;
++ goto release_clk;
+ }
+ msdc_init_hw(host);
+
+@@ -2895,14 +2885,14 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ GFP_KERNEL);
+ if (!host->cq_host) {
+ ret = -ENOMEM;
+- goto host_free;
++ goto release;
+ }
+ host->cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
+ host->cq_host->mmio = host->base + 0x800;
+ host->cq_host->ops = &msdc_cmdq_ops;
+ ret = cqhci_init(host->cq_host, mmc, true);
+ if (ret)
+- goto host_free;
++ goto release;
+ mmc->max_segs = 128;
+ /* cqhci 16bit length */
+ /* 0 size, means 65536 so we don't have to -1 here */
+@@ -2929,9 +2919,10 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ end:
+ pm_runtime_disable(host->dev);
+ release:
+- platform_set_drvdata(pdev, NULL);
+ msdc_deinit_hw(host);
++release_clk:
+ msdc_gate_clock(host);
++ platform_set_drvdata(pdev, NULL);
+ release_mem:
+ if (host->dma.gpd)
+ dma_free_coherent(&pdev->dev,
+@@ -2939,11 +2930,8 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ host->dma.gpd, host->dma.gpd_addr);
+ if (host->dma.bd)
+ dma_free_coherent(&pdev->dev,
+- MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+- host->dma.bd, host->dma.bd_addr);
+-host_free:
+- mmc_free_host(mmc);
+-
++ MAX_BD_NUM * sizeof(struct mt_bdma_desc),
++ host->dma.bd, host->dma.bd_addr);
+ return ret;
+ }
+
+@@ -2968,9 +2956,7 @@ static void msdc_drv_remove(struct platform_device *pdev)
+ 2 * sizeof(struct mt_gpdma_desc),
+ host->dma.gpd, host->dma.gpd_addr);
+ dma_free_coherent(&pdev->dev, MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+- host->dma.bd, host->dma.bd_addr);
+-
+- mmc_free_host(mmc);
++ host->dma.bd, host->dma.bd_addr);
+ }
+
+ static void msdc_save_reg(struct msdc_host *host)
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 8f0bc6dca2b040..ef3a44f2dff16d 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -238,6 +238,7 @@ struct esdhc_platform_data {
+
+ struct esdhc_soc_data {
+ u32 flags;
++ u32 quirks;
+ };
+
+ static const struct esdhc_soc_data esdhc_imx25_data = {
+@@ -309,10 +310,12 @@ static struct esdhc_soc_data usdhc_imx7ulp_data = {
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ | ESDHC_FLAG_PMQOS | ESDHC_FLAG_HS400
+ | ESDHC_FLAG_STATE_LOST_IN_LPMODE,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+ static struct esdhc_soc_data usdhc_imxrt1050_data = {
+ .flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_STD_TUNING
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ static struct esdhc_soc_data usdhc_imx8qxp_data = {
+@@ -321,6 +324,7 @@ static struct esdhc_soc_data usdhc_imx8qxp_data = {
+ | ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ | ESDHC_FLAG_STATE_LOST_IN_LPMODE
+ | ESDHC_FLAG_CLK_RATE_LOST_IN_PM_RUNTIME,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ static struct esdhc_soc_data usdhc_imx8mm_data = {
+@@ -328,6 +332,7 @@ static struct esdhc_soc_data usdhc_imx8mm_data = {
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ | ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ | ESDHC_FLAG_STATE_LOST_IN_LPMODE,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ struct pltfm_imx_data {
+@@ -1687,6 +1692,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+
+ imx_data->socdata = device_get_match_data(&pdev->dev);
+
++ host->quirks |= imx_data->socdata->quirks;
+ if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
+ cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
+
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index ed45ed0bdafd96..2e2e15e2d8fb8b 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -21,6 +21,7 @@
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
+ #include <linux/gpio.h>
++#include <linux/gpio/machine.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/pm_qos.h>
+ #include <linux/debugfs.h>
+@@ -1235,6 +1236,29 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
+ .priv_size = sizeof(struct intel_host),
+ };
+
++/* DMI quirks for devices with missing or broken CD GPIO info */
++static const struct gpiod_lookup_table vexia_edu_atla10_cd_gpios = {
++ .dev_id = "0000:00:12.0",
++ .table = {
++ GPIO_LOOKUP("INT33FC:00", 38, "cd", GPIO_ACTIVE_HIGH),
++ { }
++ },
++};
++
++static const struct dmi_system_id sdhci_intel_byt_cd_gpio_override[] = {
++ {
++ /* Vexia Edu Atla 10 tablet 9V version */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
++ },
++ .driver_data = (void *)&vexia_edu_atla10_cd_gpios,
++ },
++ { }
++};
++
+ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
+ #ifdef CONFIG_PM_SLEEP
+ .resume = byt_resume,
+@@ -1253,6 +1277,7 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
+ .add_host = byt_add_host,
+ .remove_slot = byt_remove_slot,
+ .ops = &sdhci_intel_byt_ops,
++ .cd_gpio_override = sdhci_intel_byt_cd_gpio_override,
+ .priv_size = sizeof(struct intel_host),
+ };
+
+@@ -2054,6 +2079,42 @@ static const struct dev_pm_ops sdhci_pci_pm_ops = {
+ * *
+ \*****************************************************************************/
+
++static struct gpiod_lookup_table *sdhci_pci_add_gpio_lookup_table(
++ struct sdhci_pci_chip *chip)
++{
++ struct gpiod_lookup_table *cd_gpio_lookup_table;
++ const struct dmi_system_id *dmi_id = NULL;
++ size_t count;
++
++ if (chip->fixes && chip->fixes->cd_gpio_override)
++ dmi_id = dmi_first_match(chip->fixes->cd_gpio_override);
++
++ if (!dmi_id)
++ return NULL;
++
++ cd_gpio_lookup_table = dmi_id->driver_data;
++ for (count = 0; cd_gpio_lookup_table->table[count].key; count++)
++ ;
++
++ cd_gpio_lookup_table = kmemdup(dmi_id->driver_data,
++ /* count + 1 terminating entry */
++ struct_size(cd_gpio_lookup_table, table, count + 1),
++ GFP_KERNEL);
++ if (!cd_gpio_lookup_table)
++ return ERR_PTR(-ENOMEM);
++
++ gpiod_add_lookup_table(cd_gpio_lookup_table);
++ return cd_gpio_lookup_table;
++}
++
++static void sdhci_pci_remove_gpio_lookup_table(struct gpiod_lookup_table *lookup_table)
++{
++ if (lookup_table) {
++ gpiod_remove_lookup_table(lookup_table);
++ kfree(lookup_table);
++ }
++}
++
+ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
+ struct pci_dev *pdev, struct sdhci_pci_chip *chip, int first_bar,
+ int slotno)
+@@ -2129,8 +2190,19 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
+ device_init_wakeup(&pdev->dev, true);
+
+ if (slot->cd_idx >= 0) {
++ struct gpiod_lookup_table *cd_gpio_lookup_table;
++
++ cd_gpio_lookup_table = sdhci_pci_add_gpio_lookup_table(chip);
++ if (IS_ERR(cd_gpio_lookup_table)) {
++ ret = PTR_ERR(cd_gpio_lookup_table);
++ goto remove;
++ }
++
+ ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx,
+ slot->cd_override_level, 0);
++
++ sdhci_pci_remove_gpio_lookup_table(cd_gpio_lookup_table);
++
+ if (ret && ret != -EPROBE_DEFER)
+ ret = mmc_gpiod_request_cd(host->mmc, NULL,
+ slot->cd_idx,
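
The kmemdup()/struct_size() pair in sdhci_pci_add_gpio_lookup_table() above copies a sentinel-terminated table while reserving space for the terminator. A self-contained sketch of that counting logic (simplified stand-in types, not the real gpiod structures; only the dev_id string is taken from the hunk above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct lookup { const char *key; };
    struct lookup_table {
            const char *dev_id;
            struct lookup table[];  /* terminated by .key == NULL */
    };

    static struct lookup_table *dup_table(const struct lookup_table *src)
    {
            size_t count, bytes;
            struct lookup_table *dst;

            /* walk to the sentinel, then allocate count + 1 entries so
             * the copy keeps its terminator, like
             * struct_size(..., table, count + 1) above */
            for (count = 0; src->table[count].key; count++)
                    ;
            bytes = sizeof(*dst) + (count + 1) * sizeof(struct lookup);
            dst = malloc(bytes);
            if (dst)
                    memcpy(dst, src, bytes);
            return dst;
    }

    int main(void)
    {
            struct lookup_table *src, *copy;

            src = malloc(sizeof(*src) + 2 * sizeof(struct lookup));
            src->dev_id = "0000:00:12.0";
            src->table[0].key = "INT33FC:00";
            src->table[1].key = NULL;       /* sentinel */

            copy = dup_table(src);
            if (copy)
                    printf("%s / %s\n", copy->dev_id, copy->table[0].key);
            free(copy);
            free(src);
            return 0;
    }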
+diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
+index 153704f812edc8..4973fa8592175e 100644
+--- a/drivers/mmc/host/sdhci-pci.h
++++ b/drivers/mmc/host/sdhci-pci.h
+@@ -156,6 +156,7 @@ struct sdhci_pci_fixes {
+ #endif
+
+ const struct sdhci_ops *ops;
++ const struct dmi_system_id *cd_gpio_override;
+ size_t priv_size;
+ };
+
+diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
+index 511615dc334196..cc371d0c9f3c76 100644
+--- a/drivers/net/can/c_can/c_can_main.c
++++ b/drivers/net/can/c_can/c_can_main.c
+@@ -1014,49 +1014,57 @@ static int c_can_handle_bus_err(struct net_device *dev,
+
+ /* propagate the error condition to the CAN stack */
+ skb = alloc_can_err_skb(dev, &cf);
+- if (unlikely(!skb))
+- return 0;
+
+ /* check for 'last error code' which tells us the
+ * type of the last error to occur on the CAN bus
+ */
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (likely(skb))
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (lec_type) {
+ case LEC_STUFF_ERROR:
+ netdev_dbg(dev, "stuff error\n");
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ stats->rx_errors++;
+ break;
+ case LEC_FORM_ERROR:
+ netdev_dbg(dev, "form error\n");
+- cf->data[2] |= CAN_ERR_PROT_FORM;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_FORM;
+ stats->rx_errors++;
+ break;
+ case LEC_ACK_ERROR:
+ netdev_dbg(dev, "ack error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+ stats->tx_errors++;
+ break;
+ case LEC_BIT1_ERROR:
+ netdev_dbg(dev, "bit1 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT1;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT1;
+ stats->tx_errors++;
+ break;
+ case LEC_BIT0_ERROR:
+ netdev_dbg(dev, "bit0 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT0;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT0;
+ stats->tx_errors++;
+ break;
+ case LEC_CRC_ERROR:
+ netdev_dbg(dev, "CRC error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+ stats->rx_errors++;
+ break;
+ default:
+ break;
+ }
+
++ if (unlikely(!skb))
++ return 0;
++
+ netif_receive_skb(skb);
+ return 1;
+ }
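
This hunk establishes the shape that the CAN fixes below all repeat: error statistics are bumped unconditionally, while the error frame is only populated and delivered when the skb allocation succeeded. A compact standalone sketch of that shape (stand-in types; allocation and delivery are modeled with calloc()/free(), and the flag value is a placeholder, not a real CAN_ERR_* constant):

    #include <stdio.h>
    #include <stdlib.h>

    struct can_frame { unsigned char data[8]; };
    struct stats { unsigned long rx_errors, tx_errors; };

    /* stand-in for alloc_can_err_skb(); may return NULL under pressure */
    static struct can_frame *alloc_err_frame(void)
    {
            return calloc(1, sizeof(struct can_frame));
    }

    static int handle_bus_error(struct stats *stats, int tx_side)
    {
            struct can_frame *cf = alloc_err_frame();

            /* counters must not depend on the allocation outcome */
            if (tx_side)
                    stats->tx_errors++;
            else
                    stats->rx_errors++;

            if (!cf)
                    return 0;       /* stats kept, error frame dropped */

            cf->data[2] |= 0x04;    /* record error details (placeholder) */
            free(cf);               /* netif_receive_skb() in the driver */
            return 1;
    }

    int main(void)
    {
            struct stats s = { 0 };

            handle_bus_error(&s, 1);
            printf("tx_errors=%lu rx_errors=%lu\n",
                   s.tx_errors, s.rx_errors);
            return 0;
    }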
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+index 6792c14fd7eb00..681643ab37804e 100644
+--- a/drivers/net/can/dev/dev.c
++++ b/drivers/net/can/dev/dev.c
+@@ -468,7 +468,7 @@ static int can_set_termination(struct net_device *ndev, u16 term)
+ else
+ set = 0;
+
+- gpiod_set_value(priv->termination_gpio, set);
++ gpiod_set_value_cansleep(priv->termination_gpio, set);
+
+ return 0;
+ }
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index d32b10900d2f62..c86b57d47085fd 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -390,36 +390,55 @@ static int ifi_canfd_handle_lec_err(struct net_device *ndev)
+ return 0;
+
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ /* Propagate the error condition to the CAN stack. */
+ skb = alloc_can_err_skb(ndev, &cf);
+- if (unlikely(!skb))
+- return 0;
+
+ /* Read the error counter register and check for new errors. */
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (likely(skb))
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- if (errctr & IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_OVERLOAD;
++ if (errctr & IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_OVERLOAD;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST)
+- cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ if (errctr & IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST) {
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_BIT0;
++ if (errctr & IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST) {
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT0;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_BIT1;
++ if (errctr & IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST) {
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT1;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
++ if (errctr & IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST)
+- cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ if (errctr & IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_FORM;
++ if (errctr & IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ }
+
+ /* Reset the error counter, ack the IRQ and re-enable the counter. */
+ writel(IFI_CANFD_ERROR_CTR_ER_RESET, priv->base + IFI_CANFD_ERROR_CTR);
+@@ -427,6 +446,9 @@ static int ifi_canfd_handle_lec_err(struct net_device *ndev)
+ priv->base + IFI_CANFD_INTERRUPT);
+ writel(IFI_CANFD_ERROR_CTR_ER_ENABLE, priv->base + IFI_CANFD_ERROR_CTR);
+
++ if (unlikely(!skb))
++ return 0;
++
+ netif_receive_skb(skb);
+
+ return 1;
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 16e9e7d7527d97..533bcb77c9f934 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -695,47 +695,60 @@ static int m_can_handle_lec_err(struct net_device *dev,
+ u32 timestamp = 0;
+
+ cdev->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ /* propagate the error condition to the CAN stack */
+ skb = alloc_can_err_skb(dev, &cf);
+- if (unlikely(!skb))
+- return 0;
+
+ /* check for 'last error code' which tells us the
+ * type of the last error to occur on the CAN bus
+ */
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (likely(skb))
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (lec_type) {
+ case LEC_STUFF_ERROR:
+ netdev_dbg(dev, "stuff error\n");
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ break;
+ case LEC_FORM_ERROR:
+ netdev_dbg(dev, "form error\n");
+- cf->data[2] |= CAN_ERR_PROT_FORM;
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_FORM;
+ break;
+ case LEC_ACK_ERROR:
+ netdev_dbg(dev, "ack error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+ break;
+ case LEC_BIT1_ERROR:
+ netdev_dbg(dev, "bit1 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT1;
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT1;
+ break;
+ case LEC_BIT0_ERROR:
+ netdev_dbg(dev, "bit0 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT0;
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT0;
+ break;
+ case LEC_CRC_ERROR:
+ netdev_dbg(dev, "CRC error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+ break;
+ default:
+ break;
+ }
+
++ if (unlikely(!skb))
++ return 0;
++
+ if (cdev->is_peripheral)
+ timestamp = m_can_get_timestamp(cdev);
+
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index ddb3247948ad2f..4d245857ef1cec 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -416,8 +416,6 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ int ret = 0;
+
+ skb = alloc_can_err_skb(dev, &cf);
+- if (skb == NULL)
+- return -ENOMEM;
+
+ txerr = priv->read_reg(priv, SJA1000_TXERR);
+ rxerr = priv->read_reg(priv, SJA1000_RXERR);
+@@ -425,8 +423,11 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ if (isrc & IRQ_DOI) {
+ /* data overrun interrupt */
+ netdev_dbg(dev, "data overrun interrupt\n");
+- cf->can_id |= CAN_ERR_CRTL;
+- cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ if (skb) {
++ cf->can_id |= CAN_ERR_CRTL;
++ cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ }
++
+ stats->rx_over_errors++;
+ stats->rx_errors++;
+ sja1000_write_cmdreg(priv, CMD_CDO); /* clear bit */
+@@ -452,7 +453,7 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ else
+ state = CAN_STATE_ERROR_ACTIVE;
+ }
+- if (state != CAN_STATE_BUS_OFF) {
++ if (state != CAN_STATE_BUS_OFF && skb) {
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = txerr;
+ cf->data[7] = rxerr;
+@@ -460,33 +461,38 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ if (isrc & IRQ_BEI) {
+ /* bus error interrupt */
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ ecc = priv->read_reg(priv, SJA1000_ECC);
++ if (skb) {
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+-
+- /* set error type */
+- switch (ecc & ECC_MASK) {
+- case ECC_BIT:
+- cf->data[2] |= CAN_ERR_PROT_BIT;
+- break;
+- case ECC_FORM:
+- cf->data[2] |= CAN_ERR_PROT_FORM;
+- break;
+- case ECC_STUFF:
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
+- break;
+- default:
+- break;
+- }
++ /* set error type */
++ switch (ecc & ECC_MASK) {
++ case ECC_BIT:
++ cf->data[2] |= CAN_ERR_PROT_BIT;
++ break;
++ case ECC_FORM:
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ break;
++ case ECC_STUFF:
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ break;
++ default:
++ break;
++ }
+
+- /* set error location */
+- cf->data[3] = ecc & ECC_SEG;
++ /* set error location */
++ cf->data[3] = ecc & ECC_SEG;
++ }
+
+ /* Error occurred during transmission? */
+- if ((ecc & ECC_DIR) == 0)
+- cf->data[2] |= CAN_ERR_PROT_TX;
++ if ((ecc & ECC_DIR) == 0) {
++ stats->tx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_TX;
++ } else {
++ stats->rx_errors++;
++ }
+ }
+ if (isrc & IRQ_EPI) {
+ /* error passive interrupt */
+@@ -502,8 +508,10 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ netdev_dbg(dev, "arbitration lost interrupt\n");
+ alc = priv->read_reg(priv, SJA1000_ALC);
+ priv->can.can_stats.arbitration_lost++;
+- cf->can_id |= CAN_ERR_LOSTARB;
+- cf->data[0] = alc & 0x1f;
++ if (skb) {
++ cf->can_id |= CAN_ERR_LOSTARB;
++ cf->data[0] = alc & 0x1f;
++ }
+ }
+
+ if (state != priv->can.state) {
+@@ -516,6 +524,9 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ can_bus_off(dev);
+ }
+
++ if (!skb)
++ return -ENOMEM;
++
+ netif_rx(skb);
+
+ return ret;
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 148d974ebb2107..1b9501ee10deb5 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -671,9 +671,9 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ tx_state = txerr >= rxerr ? new_state : 0;
+ rx_state = txerr <= rxerr ? new_state : 0;
+ can_change_state(net, cf, tx_state, rx_state);
+- netif_rx(skb);
+
+ if (new_state == CAN_STATE_BUS_OFF) {
++ netif_rx(skb);
+ can_bus_off(net);
+ if (priv->can.restart_ms == 0) {
+ priv->force_quit = 1;
+@@ -684,6 +684,7 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = txerr;
+ cf->data[7] = rxerr;
++ netif_rx(skb);
+ }
+ }
+
+@@ -696,27 +697,38 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ /* Check for protocol errors */
+ if (eflag & HI3110_ERR_PROTOCOL_MASK) {
+ skb = alloc_can_err_skb(net, &cf);
+- if (!skb)
+- break;
++ if (skb)
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+ priv->can.can_stats.bus_error++;
+- priv->net->stats.rx_errors++;
+- if (eflag & HI3110_ERR_BITERR)
+- cf->data[2] |= CAN_ERR_PROT_BIT;
+- else if (eflag & HI3110_ERR_FRMERR)
+- cf->data[2] |= CAN_ERR_PROT_FORM;
+- else if (eflag & HI3110_ERR_STUFERR)
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
+- else if (eflag & HI3110_ERR_CRCERR)
+- cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ;
+- else if (eflag & HI3110_ERR_ACKERR)
+- cf->data[3] |= CAN_ERR_PROT_LOC_ACK;
+-
+- cf->data[6] = hi3110_read(spi, HI3110_READ_TEC);
+- cf->data[7] = hi3110_read(spi, HI3110_READ_REC);
++ if (eflag & HI3110_ERR_BITERR) {
++ priv->net->stats.tx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_BIT;
++ } else if (eflag & HI3110_ERR_FRMERR) {
++ priv->net->stats.rx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ } else if (eflag & HI3110_ERR_STUFERR) {
++ priv->net->stats.rx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ } else if (eflag & HI3110_ERR_CRCERR) {
++ priv->net->stats.rx_errors++;
++ if (skb)
++ cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ;
++ } else if (eflag & HI3110_ERR_ACKERR) {
++ priv->net->stats.tx_errors++;
++ if (skb)
++ cf->data[3] |= CAN_ERR_PROT_LOC_ACK;
++ }
++
+ netdev_dbg(priv->net, "Bus Error\n");
+- netif_rx(skb);
++ if (skb) {
++ cf->data[6] = hi3110_read(spi, HI3110_READ_TEC);
++ cf->data[7] = hi3110_read(spi, HI3110_READ_REC);
++ netif_rx(skb);
++ }
+ }
+ }
+
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+index d3ac865933fdf6..e94321849fd7e6 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+@@ -21,6 +21,11 @@ static inline bool mcp251xfd_tx_fifo_sta_empty(u32 fifo_sta)
+ return fifo_sta & MCP251XFD_REG_FIFOSTA_TFERFFIF;
+ }
+
++static inline bool mcp251xfd_tx_fifo_sta_less_than_half_full(u32 fifo_sta)
++{
++ return fifo_sta & MCP251XFD_REG_FIFOSTA_TFHRFHIF;
++}
++
+ static inline int
+ mcp251xfd_tef_tail_get_from_chip(const struct mcp251xfd_priv *priv,
+ u8 *tef_tail)
+@@ -147,7 +152,29 @@ mcp251xfd_get_tef_len(struct mcp251xfd_priv *priv, u8 *len_p)
+ BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(len));
+
+ len = (chip_tx_tail << shift) - (tail << shift);
+- *len_p = len >> shift;
++ len >>= shift;
++
++ /* According to mcp2518fd erratum DS80000789E, section 6, the
++ * FIFOCI bits of a FIFOSTA register (here, the TX-FIFO tail
++ * index) might be corrupted.
++ *
++ * However, here it seems that the bit indicating that the TX-FIFO
++ * is empty (MCP251XFD_REG_FIFOSTA_TFERFFIF) is not correct,
++ * while the TX-FIFO tail index is.
++ *
++ * We assume the TX-FIFO is empty, i.e. all pending CAN frames
++ * have been sent, if:
++ * - Chip's head and tail index are equal (len == 0).
++ * - The TX-FIFO is less than half full.
++ * (The TX-FIFO empty case has already been checked at the
++ * beginning of this function.)
++ * - No free buffers in the TX ring.
++ */
++ if (len == 0 && mcp251xfd_tx_fifo_sta_less_than_half_full(fifo_sta) &&
++ mcp251xfd_get_tx_free(tx_ring) == 0)
++ len = tx_ring->obj_num;
++
++ *len_p = len;
+
+ return 0;
+ }
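
The workaround above reduces to a small predicate: a tail that wrapped back onto the head (len == 0) is only trusted when the hardware still reports free space and the driver still has free TX buffers; otherwise the whole ring is assumed complete. A standalone sketch of that decision (names and the status-bit value are illustrative, not the chip's real register layout):

    #include <stdbool.h>
    #include <stdio.h>

    #define FIFOSTA_TFHRFHIF 0x04   /* "less than half full" bit (assumed) */

    static bool fifo_less_than_half_full(unsigned int fifo_sta)
    {
            return fifo_sta & FIFOSTA_TFHRFHIF;
    }

    /* A corrupted tail can make head == tail (len == 0) look like
     * "nothing completed" even though every TX object is in flight. */
    static unsigned int tef_len(unsigned int len, unsigned int fifo_sta,
                                unsigned int tx_free, unsigned int obj_num)
    {
            if (len == 0 && fifo_less_than_half_full(fifo_sta) &&
                tx_free == 0)
                    return obj_num; /* assume the whole ring completed */
            return len;
    }

    int main(void)
    {
            /* erratum case: len 0, below half full, no free TX buffers */
            printf("%u\n", tef_len(0, FIFOSTA_TFHRFHIF, 0, 8)); /* 8 */
            /* normal case: the computed length is trusted */
            printf("%u\n", tef_len(3, 0, 5, 8));                /* 3 */
            return 0;
    }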
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 360158c295d348..4311c1f0eafd8d 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -579,11 +579,9 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ /* bus error interrupt */
+ netdev_dbg(dev, "bus error interrupt\n");
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
++ ecc = readl(priv->base + SUN4I_REG_STA_ADDR);
+
+ if (likely(skb)) {
+- ecc = readl(priv->base + SUN4I_REG_STA_ADDR);
+-
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (ecc & SUN4I_STA_MASK_ERR) {
+@@ -601,9 +599,15 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ >> 16;
+ break;
+ }
+- /* error occurred during transmission? */
+- if ((ecc & SUN4I_STA_ERR_DIR) == 0)
++ }
++
++ /* error occurred during transmission? */
++ if ((ecc & SUN4I_STA_ERR_DIR) == 0) {
++ if (likely(skb))
+ cf->data[2] |= CAN_ERR_PROT_TX;
++ stats->tx_errors++;
++ } else {
++ stats->rx_errors++;
+ }
+ }
+ if (isrc & SUN4I_INT_ERR_PASSIVE) {
+@@ -629,10 +633,10 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ tx_state = txerr >= rxerr ? state : 0;
+ rx_state = txerr <= rxerr ? state : 0;
+
+- if (likely(skb))
+- can_change_state(dev, cf, tx_state, rx_state);
+- else
+- priv->can.state = state;
++ /* The skb allocation might fail, but can_change_state()
++ * handles cf == NULL.
++ */
++ can_change_state(dev, cf, tx_state, rx_state);
+ if (state == CAN_STATE_BUS_OFF)
+ can_bus_off(dev);
+ }
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 050c0b49938a42..5355bac4dccbe0 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -335,15 +335,14 @@ static void ems_usb_rx_err(struct ems_usb *dev, struct ems_cpc_msg *msg)
+ struct net_device_stats *stats = &dev->netdev->stats;
+
+ skb = alloc_can_err_skb(dev->netdev, &cf);
+- if (skb == NULL)
+- return;
+
+ if (msg->type == CPC_MSG_TYPE_CAN_STATE) {
+ u8 state = msg->msg.can_state;
+
+ if (state & SJA1000_SR_BS) {
+ dev->can.state = CAN_STATE_BUS_OFF;
+- cf->can_id |= CAN_ERR_BUSOFF;
++ if (skb)
++ cf->can_id |= CAN_ERR_BUSOFF;
+
+ dev->can.can_stats.bus_off++;
+ can_bus_off(dev->netdev);
+@@ -361,44 +360,53 @@ static void ems_usb_rx_err(struct ems_usb *dev, struct ems_cpc_msg *msg)
+
+ /* bus error interrupt */
+ dev->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (skb) {
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- switch (ecc & SJA1000_ECC_MASK) {
+- case SJA1000_ECC_BIT:
+- cf->data[2] |= CAN_ERR_PROT_BIT;
+- break;
+- case SJA1000_ECC_FORM:
+- cf->data[2] |= CAN_ERR_PROT_FORM;
+- break;
+- case SJA1000_ECC_STUFF:
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
+- break;
+- default:
+- cf->data[3] = ecc & SJA1000_ECC_SEG;
+- break;
++ switch (ecc & SJA1000_ECC_MASK) {
++ case SJA1000_ECC_BIT:
++ cf->data[2] |= CAN_ERR_PROT_BIT;
++ break;
++ case SJA1000_ECC_FORM:
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ break;
++ case SJA1000_ECC_STUFF:
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ break;
++ default:
++ cf->data[3] = ecc & SJA1000_ECC_SEG;
++ break;
++ }
+ }
+
+ /* Error occurred during transmission? */
+- if ((ecc & SJA1000_ECC_DIR) == 0)
+- cf->data[2] |= CAN_ERR_PROT_TX;
++ if ((ecc & SJA1000_ECC_DIR) == 0) {
++ stats->tx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_TX;
++ } else {
++ stats->rx_errors++;
++ }
+
+- if (dev->can.state == CAN_STATE_ERROR_WARNING ||
+- dev->can.state == CAN_STATE_ERROR_PASSIVE) {
++ if (skb && (dev->can.state == CAN_STATE_ERROR_WARNING ||
++ dev->can.state == CAN_STATE_ERROR_PASSIVE)) {
+ cf->can_id |= CAN_ERR_CRTL;
+ cf->data[1] = (txerr > rxerr) ?
+ CAN_ERR_CRTL_TX_PASSIVE : CAN_ERR_CRTL_RX_PASSIVE;
+ }
+ } else if (msg->type == CPC_MSG_TYPE_OVERRUN) {
+- cf->can_id |= CAN_ERR_CRTL;
+- cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ if (skb) {
++ cf->can_id |= CAN_ERR_CRTL;
++ cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ }
+
+ stats->rx_over_errors++;
+ stats->rx_errors++;
+ }
+
+- netif_rx(skb);
++ if (skb)
++ netif_rx(skb);
+ }
+
+ /*
+diff --git a/drivers/net/can/usb/f81604.c b/drivers/net/can/usb/f81604.c
+index bc0c8903fe7794..e0cfa1460b0b83 100644
+--- a/drivers/net/can/usb/f81604.c
++++ b/drivers/net/can/usb/f81604.c
+@@ -526,7 +526,6 @@ static void f81604_handle_can_bus_errors(struct f81604_port_priv *priv,
+ netdev_dbg(netdev, "bus error interrupt\n");
+
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ if (skb) {
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+@@ -548,10 +547,15 @@ static void f81604_handle_can_bus_errors(struct f81604_port_priv *priv,
+
+ /* set error location */
+ cf->data[3] = data->ecc & F81604_SJA1000_ECC_SEG;
++ }
+
+- /* Error occurred during transmission? */
+- if ((data->ecc & F81604_SJA1000_ECC_DIR) == 0)
++ /* Error occurred during transmission? */
++ if ((data->ecc & F81604_SJA1000_ECC_DIR) == 0) {
++ stats->tx_errors++;
++ if (skb)
+ cf->data[2] |= CAN_ERR_PROT_TX;
++ } else {
++ stats->rx_errors++;
+ }
+
+ set_bit(F81604_CLEAR_ECC, &priv->clear_flags);
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index bc86e9b329fd10..b6f4de375df75d 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -43,9 +43,6 @@
+ #define USB_XYLANTA_SAINT3_VENDOR_ID 0x16d0
+ #define USB_XYLANTA_SAINT3_PRODUCT_ID 0x0f30
+
+-#define GS_USB_ENDPOINT_IN 1
+-#define GS_USB_ENDPOINT_OUT 2
+-
+ /* Timestamp 32 bit timer runs at 1 MHz (1 µs tick). Worker accounts
+ * for timer overflow (will be after ~71 minutes)
+ */
+@@ -336,6 +333,9 @@ struct gs_usb {
+
+ unsigned int hf_size_rx;
+ u8 active_channels;
++
++ unsigned int pipe_in;
++ unsigned int pipe_out;
+ };
+
+ /* 'allocate' a tx context.
+@@ -687,7 +687,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+
+ resubmit_urb:
+ usb_fill_bulk_urb(urb, parent->udev,
+- usb_rcvbulkpipe(parent->udev, GS_USB_ENDPOINT_IN),
++ parent->pipe_in,
+ hf, dev->parent->hf_size_rx,
+ gs_usb_receive_bulk_callback, parent);
+
+@@ -819,7 +819,7 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
+ }
+
+ usb_fill_bulk_urb(urb, dev->udev,
+- usb_sndbulkpipe(dev->udev, GS_USB_ENDPOINT_OUT),
++ dev->parent->pipe_out,
+ hf, dev->hf_size_tx,
+ gs_usb_xmit_callback, txc);
+
+@@ -925,8 +925,7 @@ static int gs_can_open(struct net_device *netdev)
+ /* fill, anchor, and submit rx urb */
+ usb_fill_bulk_urb(urb,
+ dev->udev,
+- usb_rcvbulkpipe(dev->udev,
+- GS_USB_ENDPOINT_IN),
++ dev->parent->pipe_in,
+ buf,
+ dev->parent->hf_size_rx,
+ gs_usb_receive_bulk_callback, parent);
+@@ -1413,6 +1412,7 @@ static int gs_usb_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+ {
+ struct usb_device *udev = interface_to_usbdev(intf);
++ struct usb_endpoint_descriptor *ep_in, *ep_out;
+ struct gs_host_frame *hf;
+ struct gs_usb *parent;
+ struct gs_host_config hconf = {
+@@ -1422,6 +1422,13 @@ static int gs_usb_probe(struct usb_interface *intf,
+ unsigned int icount, i;
+ int rc;
+
++ rc = usb_find_common_endpoints(intf->cur_altsetting,
++ &ep_in, &ep_out, NULL, NULL);
++ if (rc) {
++ dev_err(&intf->dev, "Required endpoints not found\n");
++ return rc;
++ }
++
+ /* send host config */
+ rc = usb_control_msg_send(udev, 0,
+ GS_USB_BREQ_HOST_FORMAT,
+@@ -1466,6 +1473,10 @@ static int gs_usb_probe(struct usb_interface *intf,
+ usb_set_intfdata(intf, parent);
+ parent->udev = udev;
+
++ /* store the detected endpoints */
++ parent->pipe_in = usb_rcvbulkpipe(parent->udev, ep_in->bEndpointAddress);
++ parent->pipe_out = usb_sndbulkpipe(parent->udev, ep_out->bEndpointAddress);
++
+ for (i = 0; i < icount; i++) {
+ unsigned int hf_size_rx = 0;
+
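
The gs_usb change above stops hard-coding endpoint numbers 1 and 2 and instead discovers the first bulk-IN/bulk-OUT pair from the interface descriptors. A toy version of that scan (a made-up descriptor type; the only real piece is the USB convention that bit 7 of the endpoint address marks direction, 1 = IN):

    #include <stdint.h>
    #include <stdio.h>

    struct ep { uint8_t address; uint8_t is_bulk; };

    static int find_bulk_pair(const struct ep *eps, int n,
                              const struct ep **in, const struct ep **out)
    {
            *in = *out = NULL;
            for (int i = 0; i < n; i++) {
                    if (!eps[i].is_bulk)
                            continue;
                    if ((eps[i].address & 0x80) && !*in)
                            *in = &eps[i];          /* first bulk-IN */
                    else if (!(eps[i].address & 0x80) && !*out)
                            *out = &eps[i];         /* first bulk-OUT */
            }
            return (*in && *out) ? 0 : -1;
    }

    int main(void)
    {
            /* device exposing bulk-IN at 0x82 and bulk-OUT at 0x03,
             * not the fixed 1/2 pair the old code assumed */
            const struct ep eps[] = { { 0x82, 1 }, { 0x03, 1 } };
            const struct ep *in, *out;

            if (find_bulk_pair(eps, 2, &in, &out) == 0)
                    printf("in=0x%02x out=0x%02x\n",
                           in->address, out->address);
            return 0;
    }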
+diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
+index f8d8c70642c4ff..59b4a7240b5832 100644
+--- a/drivers/net/dsa/qca/qca8k-8xxx.c
++++ b/drivers/net/dsa/qca/qca8k-8xxx.c
+@@ -673,7 +673,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
+ * We therefore need to lock the MDIO bus onto which the switch is
+ * connected.
+ */
+- mutex_lock(&priv->bus->mdio_lock);
++ mutex_lock_nested(&priv->bus->mdio_lock, MDIO_MUTEX_NESTED);
+
+ /* Actually start the request:
+ * 1. Send mdio master packet
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 20ba14eb87e00b..b901ecb57f2552 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1193,10 +1193,14 @@ static int bnxt_grxclsrule(struct bnxt *bp, struct ethtool_rxnfc *cmd)
+ }
+ }
+
+- if (fltr->base.flags & BNXT_ACT_DROP)
++ if (fltr->base.flags & BNXT_ACT_DROP) {
+ fs->ring_cookie = RX_CLS_FLOW_DISC;
+- else
++ } else if (fltr->base.flags & BNXT_ACT_RSS_CTX) {
++ fs->flow_type |= FLOW_RSS;
++ cmd->rss_context = fltr->base.fw_vnic_id;
++ } else {
+ fs->ring_cookie = fltr->base.rxq;
++ }
+ rc = 0;
+
+ fltr_err:
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index c09370eab319b2..16a7908c79f703 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -28,6 +28,9 @@ EXPORT_SYMBOL_GPL(enetc_port_mac_wr);
+ static void enetc_change_preemptible_tcs(struct enetc_ndev_priv *priv,
+ u8 preemptible_tcs)
+ {
++ if (!(priv->si->hw_features & ENETC_SI_F_QBU))
++ return;
++
+ priv->preemptible_tcs = preemptible_tcs;
+ enetc_mm_commit_preemptible_tcs(priv);
+ }
+diff --git a/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c b/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c
+index 39689826cc8ffc..ce253aac5344cc 100644
+--- a/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c
++++ b/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c
+@@ -94,7 +94,7 @@ static int mpc52xx_fec_mdio_probe(struct platform_device *of)
+ goto out_free;
+ }
+
+- snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
++ snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res.start);
+ bus->priv = priv;
+
+ bus->parent = dev;
+diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
+index d96028f01770cf..fb416d60dcd727 100644
+--- a/drivers/net/ethernet/freescale/fman/fman.c
++++ b/drivers/net/ethernet/freescale/fman/fman.c
+@@ -24,7 +24,6 @@
+
+ /* General defines */
+ #define FMAN_LIODN_TBL 64 /* size of LIODN table */
+-#define MAX_NUM_OF_MACS 10
+ #define FM_NUM_OF_FMAN_CTRL_EVENT_REGS 4
+ #define BASE_RX_PORTID 0x08
+ #define BASE_TX_PORTID 0x28
+diff --git a/drivers/net/ethernet/freescale/fman/fman.h b/drivers/net/ethernet/freescale/fman/fman.h
+index 2ea575a46675b0..74eb62eba0d7ff 100644
+--- a/drivers/net/ethernet/freescale/fman/fman.h
++++ b/drivers/net/ethernet/freescale/fman/fman.h
+@@ -74,6 +74,9 @@
+ #define BM_MAX_NUM_OF_POOLS 64 /* Buffers pools */
+ #define FMAN_PORT_MAX_EXT_POOLS_NUM 8 /* External BM pools per Rx port */
+
++/* General defines */
++#define MAX_NUM_OF_MACS 10
++
+ struct fman; /* FMan data */
+
+ /* Enum for defining port types */
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index 11da139082e1bf..1916a2ac48b9f1 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -259,6 +259,11 @@ static int mac_probe(struct platform_device *_of_dev)
+ err = -EINVAL;
+ goto _return_dev_put;
+ }
++ if (val >= MAX_NUM_OF_MACS) {
++ dev_err(dev, "cell-index value is too big for %pOF\n", mac_node);
++ err = -EINVAL;
++ goto _return_dev_put;
++ }
+ priv->cell_index = (u8)val;
+
+ /* Get the MAC address */
+diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
+index 2e210a00355843..249b482e32d3bd 100644
+--- a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
++++ b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
+@@ -123,7 +123,7 @@ static int fs_mii_bitbang_init(struct mii_bus *bus, struct device_node *np)
+ * we get is an int, and the odds of multiple bitbang mdio buses
+ * is low enough that it's not worth going too crazy.
+ */
+- snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
++ snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res.start);
+
+ data = of_get_property(np, "fsl,mdio-pin", &len);
+ if (!data || len != 4)
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 009716a12a26af..f1324e25b2af1c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -542,7 +542,8 @@ ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
+ /**
+ * ice_find_netlist_node
+ * @hw: pointer to the hw struct
+- * @node_type_ctx: type of netlist node to look for
++ * @node_type: type of netlist node to look for
++ * @ctx: context of the search
+ * @node_part_number: node part number to look for
+ * @node_handle: output parameter if node found - optional
+ *
+@@ -552,10 +553,12 @@ ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
+ * valid if the function returns zero, and should be ignored on any non-zero
+ * return value.
+ *
+- * Returns: 0 if the node is found, -ENOENT if no handle was found, and
+- * a negative error code on failure to access the AQ.
++ * Return:
++ * * 0 if the node is found,
++ * * -ENOENT if no handle was found,
++ * * negative error code on failure to access the AQ.
+ */
+-static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx,
++static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type, u8 ctx,
+ u8 node_part_number, u16 *node_handle)
+ {
+ u8 idx;
+@@ -566,8 +569,8 @@ static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx,
+ int status;
+
+ cmd.addr.topo_params.node_type_ctx =
+- FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_TYPE_M,
+- node_type_ctx);
++ FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_TYPE_M, node_type) |
++ FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_CTX_M, ctx);
+ cmd.addr.topo_params.index = idx;
+
+ status = ice_aq_get_netlist_node(hw, &cmd,
+@@ -2726,9 +2729,11 @@ bool ice_is_pf_c827(struct ice_hw *hw)
+ */
+ bool ice_is_phy_rclk_in_netlist(struct ice_hw *hw)
+ {
+- if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_PHY,
++ ICE_AQC_LINK_TOPO_NODE_CTX_PORT,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_C827, NULL) &&
+- ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_PHY,
++ ICE_AQC_LINK_TOPO_NODE_CTX_PORT,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_E822_PHY, NULL))
+ return false;
+
+@@ -2744,6 +2749,7 @@ bool ice_is_phy_rclk_in_netlist(struct ice_hw *hw)
+ bool ice_is_clock_mux_in_netlist(struct ice_hw *hw)
+ {
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX,
+ NULL))
+ return false;
+@@ -2764,12 +2770,14 @@ bool ice_is_clock_mux_in_netlist(struct ice_hw *hw)
+ bool ice_is_cgu_in_netlist(struct ice_hw *hw)
+ {
+ if (!ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032,
+ NULL)) {
+ hw->cgu_part_number = ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032;
+ return true;
+ } else if (!ice_find_netlist_node(hw,
+ ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384,
+ NULL)) {
+ hw->cgu_part_number = ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384;
+@@ -2788,6 +2796,7 @@ bool ice_is_cgu_in_netlist(struct ice_hw *hw)
+ bool ice_is_gps_in_netlist(struct ice_hw *hw)
+ {
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_GPS,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_GPS, NULL))
+ return false;
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index b1e7727b8677f9..8f2e758c394277 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6361,10 +6361,12 @@ ice_set_vlan_filtering_features(struct ice_vsi *vsi, netdev_features_t features)
+ int err = 0;
+
+ /* support Single VLAN Mode (SVM) and Double VLAN Mode (DVM) by checking
+- * if either bit is set
++ * if either bit is set. In switchdev mode Rx filtering should never be
++ * enabled.
+ */
+- if (features &
+- (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER))
++ if ((features &
++ (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)) &&
++ !ice_is_eswitch_mode_switchdev(vsi->back))
+ err = vlan_ops->ena_rx_filtering(vsi);
+ else
+ err = vlan_ops->dis_rx_filtering(vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index ec8db830ac73ae..3816e45b6ab44a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -1495,7 +1495,8 @@ static int ice_read_ptp_tstamp_eth56g(struct ice_hw *hw, u8 port, u8 idx,
+ * lower 8 bits in the low register, and the upper 32 bits in the high
+ * register.
+ */
+- *tstamp = ((u64)hi) << TS_PHY_HIGH_S | ((u64)lo & TS_PHY_LOW_M);
++ *tstamp = FIELD_PREP(TS_PHY_HIGH_M, hi) |
++ FIELD_PREP(TS_PHY_LOW_M, lo);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 6cedc1a906afb6..4c8b8457134427 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -663,9 +663,8 @@ static inline u64 ice_get_base_incval(struct ice_hw *hw)
+ #define TS_HIGH_M 0xFF
+ #define TS_HIGH_S 32
+
+-#define TS_PHY_LOW_M 0xFF
+-#define TS_PHY_HIGH_M 0xFFFFFFFF
+-#define TS_PHY_HIGH_S 8
++#define TS_PHY_LOW_M GENMASK(7, 0)
++#define TS_PHY_HIGH_M GENMASK_ULL(39, 8)
+
+ #define BYTES_PER_IDX_ADDR_L_U 8
+ #define BYTES_PER_IDX_ADDR_L 4
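
With the two masks above, the FIELD_PREP pair in ice_read_ptp_tstamp_eth56g() composes a 40-bit timestamp: the upper 32 bits come from the high register, the lower 8 bits from the low register. A standalone check of that arithmetic (the mask values mirror GENMASK_ULL(39, 8) and GENMASK(7, 0) from the hunk):

    #include <stdint.h>
    #include <stdio.h>

    #define TS_PHY_HIGH_M 0xFFFFFFFF00ULL   /* GENMASK_ULL(39, 8) */
    #define TS_PHY_LOW_M  0xFFULL           /* GENMASK(7, 0) */

    /* FIELD_PREP shifts a value into its mask's position; for these
     * masks it reduces to (hi << 8) | (lo & 0xFF) */
    static uint64_t phy_tstamp(uint32_t hi, uint32_t lo)
    {
            return (((uint64_t)hi << 8) & TS_PHY_HIGH_M) |
                   ((uint64_t)lo & TS_PHY_LOW_M);
    }

    int main(void)
    {
            /* hi = 0x12345678, low register carries 0xAB in its low byte */
            printf("0x%010llx\n",
                   (unsigned long long)phy_tstamp(0x12345678, 0xCDAB));
            /* prints 0x12345678ab */
            return 0;
    }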
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index d4e6f0e104872d..60d15b3e6e2faa 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -2448,6 +2448,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ * rest of the packet.
+ */
+ tx_buf->type = LIBETH_SQE_EMPTY;
++ idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
+
+ /* Adjust the DMA offset and the remaining size of the
+ * fragment. On the first iteration of this loop,
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index f1d0881687233e..18284a838e2424 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -637,6 +637,10 @@ static int __init igb_init_module(void)
+ dca_register_notify(&dca_notifier);
+ #endif
+ ret = pci_register_driver(&igb_driver);
++#ifdef CONFIG_IGB_DCA
++ if (ret)
++ dca_unregister_notify(&dca_notifier);
++#endif
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
+index 6493abf189de5e..6639069ad52834 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
+@@ -194,6 +194,8 @@ u32 ixgbe_read_reg(struct ixgbe_hw *hw, u32 reg);
+ dev_err(&adapter->pdev->dev, format, ## arg)
+ #define e_dev_notice(format, arg...) \
+ dev_notice(&adapter->pdev->dev, format, ## arg)
++#define e_dbg(msglvl, format, arg...) \
++ netif_dbg(adapter, msglvl, adapter->netdev, format, ## arg)
+ #define e_info(msglvl, format, arg...) \
+ netif_info(adapter, msglvl, adapter->netdev, format, ## arg)
+ #define e_err(msglvl, format, arg...) \
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+index 14aa2ca51f70ec..81179c60af4e01 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+@@ -40,7 +40,7 @@
+ #define IXGBE_SFF_1GBASESX_CAPABLE 0x1
+ #define IXGBE_SFF_1GBASELX_CAPABLE 0x2
+ #define IXGBE_SFF_1GBASET_CAPABLE 0x8
+-#define IXGBE_SFF_BASEBX10_CAPABLE 0x64
++#define IXGBE_SFF_BASEBX10_CAPABLE 0x40
+ #define IXGBE_SFF_10GBASESR_CAPABLE 0x10
+ #define IXGBE_SFF_10GBASELR_CAPABLE 0x20
+ #define IXGBE_SFF_SOFT_RS_SELECT_MASK 0x8
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index e71715f5da2287..20415c1238ef8d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -1047,7 +1047,7 @@ static int ixgbe_negotiate_vf_api(struct ixgbe_adapter *adapter,
+ break;
+ }
+
+- e_info(drv, "VF %d requested invalid api version %u\n", vf, api);
++ e_dbg(drv, "VF %d requested unsupported api version %u\n", vf, api);
+
+ return -1;
+ }
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index 66cf17f1940820..f804b35d79c726 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -629,7 +629,6 @@ void ixgbevf_init_ipsec_offload(struct ixgbevf_adapter *adapter)
+
+ switch (adapter->hw.api_version) {
+ case ixgbe_mbox_api_14:
+- case ixgbe_mbox_api_15:
+ break;
+ default:
+ return;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index 878cbdbf5ec8b4..e7e01f3298efb0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -5,6 +5,7 @@
+ #include <net/nexthop.h>
+ #include <net/ip_tunnels.h>
+ #include "tc_tun_encap.h"
++#include "fs_core.h"
+ #include "en_tc.h"
+ #include "tc_tun.h"
+ #include "rep/tc.h"
+@@ -24,10 +25,18 @@ static int mlx5e_set_int_port_tunnel(struct mlx5e_priv *priv,
+
+ route_dev = dev_get_by_index(dev_net(e->out_dev), e->route_dev_ifindex);
+
+- if (!route_dev || !netif_is_ovs_master(route_dev) ||
+- attr->parse_attr->filter_dev == e->out_dev)
++ if (!route_dev || !netif_is_ovs_master(route_dev))
+ goto out;
+
++ if (priv->mdev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_DMFS &&
++ mlx5e_eswitch_uplink_rep(attr->parse_attr->filter_dev) &&
++ (attr->esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) {
++ mlx5_core_warn(priv->mdev,
++ "Matching on external port with encap + fwd to table actions is not allowed for firmware steering\n");
++ err = -EINVAL;
++ goto out;
++ }
++
+ err = mlx5e_set_fwd_to_int_port_actions(priv, attr, e->route_dev_ifindex,
+ MLX5E_TC_INT_PORT_EGRESS,
+ &attr->action, out_index);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 13a3fa8dc0cb09..c14bef83d84d0f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2652,11 +2652,11 @@ void mlx5e_trigger_napi_sched(struct napi_struct *napi)
+
+ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ struct mlx5e_params *params,
+- struct mlx5e_channel_param *cparam,
+ struct xsk_buff_pool *xsk_pool,
+ struct mlx5e_channel **cp)
+ {
+ struct net_device *netdev = priv->netdev;
++ struct mlx5e_channel_param *cparam;
+ struct mlx5_core_dev *mdev;
+ struct mlx5e_xsk_param xsk;
+ struct mlx5e_channel *c;
+@@ -2678,8 +2678,15 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ return err;
+
+ c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
+- if (!c)
+- return -ENOMEM;
++ cparam = kvzalloc(sizeof(*cparam), GFP_KERNEL);
++ if (!c || !cparam) {
++ err = -ENOMEM;
++ goto err_free;
++ }
++
++ err = mlx5e_build_channel_param(mdev, params, cparam);
++ if (err)
++ goto err_free;
+
+ c->priv = priv;
+ c->mdev = mdev;
+@@ -2713,6 +2720,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+
+ *cp = c;
+
++ kvfree(cparam);
+ return 0;
+
+ err_close_queues:
+@@ -2721,6 +2729,8 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ err_napi_del:
+ netif_napi_del(&c->napi);
+
++err_free:
++ kvfree(cparam);
+ kvfree(c);
+
+ return err;
+@@ -2779,20 +2789,14 @@ static void mlx5e_close_channel(struct mlx5e_channel *c)
+ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ struct mlx5e_channels *chs)
+ {
+- struct mlx5e_channel_param *cparam;
+ int err = -ENOMEM;
+ int i;
+
+ chs->num = chs->params.num_channels;
+
+ chs->c = kcalloc(chs->num, sizeof(struct mlx5e_channel *), GFP_KERNEL);
+- cparam = kvzalloc(sizeof(struct mlx5e_channel_param), GFP_KERNEL);
+- if (!chs->c || !cparam)
+- goto err_free;
+-
+- err = mlx5e_build_channel_param(priv->mdev, &chs->params, cparam);
+- if (err)
+- goto err_free;
++ if (!chs->c)
++ goto err_out;
+
+ for (i = 0; i < chs->num; i++) {
+ struct xsk_buff_pool *xsk_pool = NULL;
+@@ -2800,7 +2804,7 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ if (chs->params.xdp_prog)
+ xsk_pool = mlx5e_xsk_get_pool(&chs->params, chs->params.xsk, i);
+
+- err = mlx5e_open_channel(priv, i, &chs->params, cparam, xsk_pool, &chs->c[i]);
++ err = mlx5e_open_channel(priv, i, &chs->params, xsk_pool, &chs->c[i]);
+ if (err)
+ goto err_close_channels;
+ }
+@@ -2818,7 +2822,6 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ }
+
+ mlx5e_health_channels_update(priv);
+- kvfree(cparam);
+ return 0;
+
+ err_close_ptp:
+@@ -2829,9 +2832,8 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ for (i--; i >= 0; i--)
+ mlx5e_close_channel(chs->c[i]);
+
+-err_free:
+ kfree(chs->c);
+- kvfree(cparam);
++err_out:
+ chs->num = 0;
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6e4f8aaf8d2f21..2eabfcc247c6ae 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -3698,6 +3698,7 @@ void mlx5_fs_core_free(struct mlx5_core_dev *dev)
+ int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_flow_steering *steering;
++ char name[80];
+ int err = 0;
+
+ err = mlx5_init_fc_stats(dev);
+@@ -3722,10 +3723,12 @@ int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
+ else
+ steering->mode = MLX5_FLOW_STEERING_MODE_DMFS;
+
+- steering->fgs_cache = kmem_cache_create("mlx5_fs_fgs",
++ snprintf(name, sizeof(name), "%s-mlx5_fs_fgs", dev_name(dev->device));
++ steering->fgs_cache = kmem_cache_create(name,
+ sizeof(struct mlx5_flow_group), 0,
+ 0, NULL);
+- steering->ftes_cache = kmem_cache_create("mlx5_fs_ftes", sizeof(struct fs_fte), 0,
++ snprintf(name, sizeof(name), "%s-mlx5_fs_ftes", dev_name(dev->device));
++ steering->ftes_cache = kmem_cache_create(name, sizeof(struct fs_fte), 0,
+ 0, NULL);
+ if (!steering->ftes_cache || !steering->fgs_cache) {
+ err = -ENOMEM;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
+index 601fad5fc54a39..ee4058bafe119b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
+@@ -39,6 +39,8 @@ bool mlx5hws_bwc_match_params_is_complex(struct mlx5hws_context *ctx,
+ } else {
+ mlx5hws_err(ctx, "Failed to calculate matcher definer layout\n");
+ }
++ } else {
++ kfree(mt->fc);
+ }
+
+ mlx5hws_match_template_destroy(mt);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
+index 6d443e6ee8d9e9..08be034bd1e16d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
+@@ -990,6 +990,7 @@ static int hws_bwc_send_queues_init(struct mlx5hws_context *ctx)
+ for (i = 0; i < bwc_queues; i++) {
+ mutex_init(&ctx->bwc_send_queue_locks[i]);
+ lockdep_register_key(ctx->bwc_lock_class_keys + i);
++ lockdep_set_class(ctx->bwc_send_queue_locks + i, ctx->bwc_lock_class_keys + i);
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
+index 947500f8ed7142..7aa1a462a1035b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
+@@ -67,7 +67,7 @@ static bool mlxsw_afk_blocks_check(struct mlxsw_afk *mlxsw_afk)
+
+ for (j = 0; j < block->instances_count; j++) {
+ const struct mlxsw_afk_element_info *elinfo;
+- struct mlxsw_afk_element_inst *elinst;
++ const struct mlxsw_afk_element_inst *elinst;
+
+ elinst = &block->instances[j];
+ elinfo = &mlxsw_afk_element_infos[elinst->element];
+@@ -154,7 +154,7 @@ static void mlxsw_afk_picker_count_hits(struct mlxsw_afk *mlxsw_afk,
+ const struct mlxsw_afk_block *block = &mlxsw_afk->blocks[i];
+
+ for (j = 0; j < block->instances_count; j++) {
+- struct mlxsw_afk_element_inst *elinst;
++ const struct mlxsw_afk_element_inst *elinst;
+
+ elinst = &block->instances[j];
+ if (elinst->element == element) {
+@@ -386,7 +386,7 @@ mlxsw_afk_block_elinst_get(const struct mlxsw_afk_block *block,
+ int i;
+
+ for (i = 0; i < block->instances_count; i++) {
+- struct mlxsw_afk_element_inst *elinst;
++ const struct mlxsw_afk_element_inst *elinst;
+
+ elinst = &block->instances[i];
+ if (elinst->element == element)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
+index 98a05598178b3b..5aa1afb3f2ca81 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
+@@ -117,7 +117,7 @@ struct mlxsw_afk_element_inst { /* element instance in actual block */
+
+ struct mlxsw_afk_block {
+ u16 encoding; /* block ID */
+- struct mlxsw_afk_element_inst *instances;
++ const struct mlxsw_afk_element_inst *instances;
+ unsigned int instances_count;
+ bool high_entropy;
+ };
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
+index eaad7860560271..1850a975b38044 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
+@@ -7,7 +7,7 @@
+ #include "item.h"
+ #include "core_acl_flex_keys.h"
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_32_47, 0x00, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_0_31, 0x02, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 13, 3),
+@@ -15,7 +15,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_32_47, 0x00, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x02, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 13, 3),
+@@ -23,27 +23,27 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac_ex[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac_ex[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_32_47, 0x02, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x04, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(ETHERTYPE, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_sip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_sip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(L4_PORT_RANGE, 0x04, 16, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_dip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_dip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(L4_PORT_RANGE, 0x04, 16, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_ECN, 0x04, 4, 2),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_TTL_, 0x04, 24, 8),
+@@ -51,35 +51,35 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(TCP_FLAGS, 0x08, 8, 9), /* TCP_CONTROL+TCP_ECN */
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_ex[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_ex[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x00, 0, 12),
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 29, 3),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_L4_PORT, 0x08, 0, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(DST_L4_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_dip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_dip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_32_63, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_ex1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_ex1[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_64_95, 0x04, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_32_63, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip_ex[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip_ex[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_96_127, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_64_95, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_packet_type[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_packet_type[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(ETHERTYPE, 0x00, 0, 16),
+ };
+
+@@ -124,90 +124,90 @@ const struct mlxsw_afk_ops mlxsw_sp1_afk_ops = {
+ .clear_block = mlxsw_sp1_afk_clear_block,
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(FDB_MISS, 0x00, 3, 1),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_1[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(FDB_MISS, 0x00, 3, 1),
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_2[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_32_47, 0x04, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_32_47, 0x06, 2),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_3[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_3[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x00, 0, 3),
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_32_47, 0x06, 2),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_4[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_4[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x00, 0, 3),
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
+ MLXSW_AFK_ELEMENT_INST_U32(ETHERTYPE, 0x04, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 8, -1, true), /* RX_ACL_SYSTEM_PORT */
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_0[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_1[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(IP_DSCP, 0x04, 0, 6),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_ECN, 0x04, 6, 2),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_TTL_, 0x04, 8, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x04, 16, 8),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5[] = {
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER, 0x04, 20, 11, 0, true),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_0_3, 0x00, 0, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_32_63, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_1[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_4_7, 0x00, 0, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_64_95, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2[] = {
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x00, 0, 3, 0, true),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_3[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_3[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_32_63, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_4[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_4[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_64_95, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_5[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_5[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_96_127, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_L4_PORT, 0x04, 16, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(DST_L4_PORT, 0x04, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_2[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(TCP_FLAGS, 0x04, 16, 9), /* TCP_CONTROL + TCP_ECN */
+ MLXSW_AFK_ELEMENT_INST_U32(L4_PORT_RANGE, 0x04, 0, 16),
+ };
+@@ -319,16 +319,20 @@ const struct mlxsw_afk_ops mlxsw_sp2_afk_ops = {
+ .clear_block = mlxsw_sp2_afk_clear_block,
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5b[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 18, 12),
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 9, -1, true), /* RX_ACL_SYSTEM_PORT */
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5b[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_1b[] = {
++ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x04, 4),
++};
++
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER, 0x04, 20, 12),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x00, 0, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x04, 4),
+ };
+@@ -341,7 +345,7 @@ static const struct mlxsw_afk_block mlxsw_sp4_afk_blocks[] = {
+ MLXSW_AFK_BLOCK(0x14, mlxsw_sp_afk_element_info_mac_4),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x1A, mlxsw_sp_afk_element_info_mac_5b),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x38, mlxsw_sp_afk_element_info_ipv4_0),
+- MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x39, mlxsw_sp_afk_element_info_ipv4_1),
++ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x3F, mlxsw_sp_afk_element_info_ipv4_1b),
+ MLXSW_AFK_BLOCK(0x3A, mlxsw_sp_afk_element_info_ipv4_2),
+ MLXSW_AFK_BLOCK(0x36, mlxsw_sp_afk_element_info_ipv4_5b),
+ MLXSW_AFK_BLOCK(0x40, mlxsw_sp_afk_element_info_ipv6_0),
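The mlxsw changes above are one const-correctness sweep: once the
element-instance tables are declared const, every pointer that can reach them
must be const-qualified as well, which is why the struct field in
core_acl_flex_keys.h and all the local variables change in the same patch. A
reduced illustration of the propagation:

	static const struct mlxsw_afk_element_inst table[] = { /* ... */ };

	struct block {
		/* Must be const to be allowed to point at 'table'. */
		const struct mlxsw_afk_element_inst *instances;
	};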
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index c47266d1c7c279..b2d206dec70c8a 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -2439,6 +2439,7 @@ void mana_query_gf_stats(struct mana_port_context *apc)
+
+ mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_GF_STAT,
+ sizeof(req), sizeof(resp));
++ req.hdr.resp.msg_version = GDMA_MESSAGE_V2;
+ req.req_stats = STATISTICS_FLAGS_RX_DISCARDS_NO_WQE |
+ STATISTICS_FLAGS_RX_ERRORS_VPORT_DISABLED |
+ STATISTICS_FLAGS_HC_RX_BYTES |
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index 16e6bd4661433f..6218d9c2685546 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -3314,7 +3314,9 @@ int qed_mcp_bist_nvm_get_num_images(struct qed_hwfn *p_hwfn,
+ if (rc)
+ return rc;
+
+- if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
++ if (((rsp & FW_MSG_CODE_MASK) == FW_MSG_CODE_UNSUPPORTED))
++ rc = -EOPNOTSUPP;
++ else if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
+ rc = -EINVAL;
+
+ return rc;
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 713a89bb21e93b..5ed2818bac257c 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4233,8 +4233,8 @@ static unsigned int rtl8125_quirk_udp_padto(struct rtl8169_private *tp,
+ {
+ unsigned int padto = 0, len = skb->len;
+
+- if (rtl_is_8125(tp) && len < 128 + RTL_MIN_PATCH_LEN &&
+- rtl_skb_is_udp(skb) && skb_transport_header_was_set(skb)) {
++ if (len < 128 + RTL_MIN_PATCH_LEN && rtl_skb_is_udp(skb) &&
++ skb_transport_header_was_set(skb)) {
+ unsigned int trans_data_len = skb_tail_pointer(skb) -
+ skb_transport_header(skb);
+
+@@ -4258,9 +4258,15 @@ static unsigned int rtl8125_quirk_udp_padto(struct rtl8169_private *tp,
+ static unsigned int rtl_quirk_packet_padto(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+ {
+- unsigned int padto;
++ unsigned int padto = 0;
+
+- padto = rtl8125_quirk_udp_padto(tp, skb);
++ switch (tp->mac_version) {
++ case RTL_GIGA_MAC_VER_61 ... RTL_GIGA_MAC_VER_63:
++ padto = rtl8125_quirk_udp_padto(tp, skb);
++ break;
++ default:
++ break;
++ }
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_34:
+diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
+index 84fa911c78db55..fe0bf1d3217af2 100644
+--- a/drivers/net/ethernet/rocker/rocker_main.c
++++ b/drivers/net/ethernet/rocker/rocker_main.c
+@@ -2502,7 +2502,7 @@ static void rocker_carrier_init(const struct rocker_port *rocker_port)
+ u64 link_status = rocker_read64(rocker, PORT_PHYS_LINK_STATUS);
+ bool link_up;
+
+- link_up = link_status & (1 << rocker_port->pport);
++ link_up = link_status & (1ULL << rocker_port->pport);
+ if (link_up)
+ netif_carrier_on(rocker_port->dev);
+ else
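The rocker fix above is the classic shift-width bug: the integer constant 1 is
a 32-bit int, so (1 << pport) is undefined once pport reaches 32 even though
link_status is a u64, and ports above 31 would always read as link-down. A
minimal sketch of the corrected helper:

	static bool port_link_up(u64 link_status, unsigned int pport)
	{
		/* 1ULL forces a 64-bit shift; a plain 1 would be an int
		 * shift, undefined for pport >= 32. */
		return link_status & (1ULL << pport);
	}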
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+index 93a78fd0737b6c..28fff6cab812e4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+@@ -44,6 +44,7 @@
+ #define GMAC_MDIO_DATA 0x00000204
+ #define GMAC_GPIO_STATUS 0x0000020C
+ #define GMAC_ARP_ADDR 0x00000210
++#define GMAC_EXT_CFG1 0x00000238
+ #define GMAC_ADDR_HIGH(reg) (0x300 + reg * 8)
+ #define GMAC_ADDR_LOW(reg) (0x304 + reg * 8)
+ #define GMAC_L3L4_CTRL(reg) (0x900 + (reg) * 0x30)
+@@ -284,6 +285,10 @@ enum power_event {
+ #define GMAC_HW_FEAT_DVLAN BIT(5)
+ #define GMAC_HW_FEAT_NRVF GENMASK(2, 0)
+
++/* MAC extended config 1 */
++#define GMAC_CONFIG1_SAVE_EN BIT(24)
++#define GMAC_CONFIG1_SPLM(v) FIELD_PREP(GENMASK(9, 8), v)
++
+ /* GMAC GPIO Status reg */
+ #define GMAC_GPO0 BIT(16)
+ #define GMAC_GPO1 BIT(17)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index 77b35abc6f6fa4..22a044d93e172f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -534,6 +534,11 @@ static void dwmac4_enable_sph(struct stmmac_priv *priv, void __iomem *ioaddr,
+ value |= GMAC_CONFIG_HDSMS_256; /* Segment max 256 bytes */
+ writel(value, ioaddr + GMAC_EXT_CONFIG);
+
++ value = readl(ioaddr + GMAC_EXT_CFG1);
++ value |= GMAC_CONFIG1_SPLM(1); /* Split mode set to L2OFST */
++ value |= GMAC_CONFIG1_SAVE_EN; /* Enable Split AV mode */
++ writel(value, ioaddr + GMAC_EXT_CFG1);
++
+ value = readl(ioaddr + DMA_CHAN_CONTROL(dwmac4_addrs, chan));
+ if (en)
+ value |= DMA_CONTROL_SPH;
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 7f611c74eb629b..ba15a0a4ce629e 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -895,7 +895,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ if (geneve->cfg.df == GENEVE_DF_SET) {
+ df = htons(IP_DF);
+ } else if (geneve->cfg.df == GENEVE_DF_INHERIT) {
+- struct ethhdr *eth = eth_hdr(skb);
++ struct ethhdr *eth = skb_eth_hdr(skb);
+
+ if (ntohs(eth->h_proto) == ETH_P_IPV6) {
+ df = htons(IP_DF);
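The geneve change swaps two accessors that look interchangeable but are not on
the transmit path: eth_hdr() trusts skb->mac_header, which may not be set yet
when the device xmit routine runs, while skb_eth_hdr() reads the Ethernet
header straight from skb->data. Their definitions (per the semantics in
include/linux/if_ether.h) make the difference plain:

	static inline struct ethhdr *eth_hdr(const struct sk_buff *skb)
	{
		return (struct ethhdr *)skb_mac_header(skb);
	}

	static inline struct ethhdr *skb_eth_hdr(const struct sk_buff *skb)
	{
		return (struct ethhdr *)skb->data;
	}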
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index d3273bc0da4a1f..691969a4910f2b 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -351,6 +351,22 @@ static int lan88xx_config_aneg(struct phy_device *phydev)
+ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ {
+ int temp;
++ int ret;
++
++ /* Reset PHY to ensure MII_LPA provides up-to-date information. This
++ * issue is reproducible only after parallel detection, as described
++ * in IEEE 802.3-2022, Section 28.2.3.1 ("Parallel detection function"),
++ * where the link partner does not support auto-negotiation.
++ */
++ if (phydev->state == PHY_NOLINK) {
++ ret = phy_init_hw(phydev);
++ if (ret < 0)
++ goto link_change_notify_failed;
++
++ ret = _phy_start_aneg(phydev);
++ if (ret < 0)
++ goto link_change_notify_failed;
++ }
+
+ /* At forced 100 F/H mode, chip may fail to set mode correctly
+ * when cable is switched between long(~50+m) and short one.
+@@ -377,6 +393,11 @@ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
+ phy_write(phydev, LAN88XX_INT_MASK, temp);
+ }
++
++ return;
++
++link_change_notify_failed:
++ phydev_err(phydev, "Link change process failed %pe\n", ERR_PTR(ret));
+ }
+
+ /**
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index a5684ef5884bda..dcec92625cf651 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -466,7 +466,8 @@ static void sfp_quirk_ubnt_uf_instant(const struct sfp_eeprom_id *id,
+ static const struct sfp_quirk sfp_quirks[] = {
+ // Alcatel Lucent G-010S-P can operate at 2500base-X, but incorrectly
+ // report 2500MBd NRZ in their EEPROM
+- SFP_QUIRK_M("ALCATELLUCENT", "G010SP", sfp_quirk_2500basex),
++ SFP_QUIRK("ALCATELLUCENT", "G010SP", sfp_quirk_2500basex,
++ sfp_fixup_ignore_tx_fault),
+
+ // Alcatel Lucent G-010S-A can operate at 2500base-X, but report 3.2GBd
+ // NRZ in their EEPROM
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 53a038fcbe991d..c897afef0b414c 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -946,9 +946,6 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
+ void *buf, *head;
+ dma_addr_t addr;
+
+- if (unlikely(!skb_page_frag_refill(size, alloc_frag, gfp)))
+- return NULL;
+-
+ head = page_address(alloc_frag->page);
+
+ if (rq->do_dma) {
+@@ -2443,6 +2440,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
+ len = SKB_DATA_ALIGN(len) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
++ if (unlikely(!skb_page_frag_refill(len, &rq->alloc_frag, gfp)))
++ return -ENOMEM;
++
+ buf = virtnet_rq_alloc(rq, len, gfp);
+ if (unlikely(!buf))
+ return -ENOMEM;
+@@ -2545,6 +2545,12 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
+ */
+ len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
+
++ if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
++ return -ENOMEM;
++
++ if (!alloc_frag->offset && len + room + sizeof(struct virtnet_rq_dma) > alloc_frag->size)
++ len -= sizeof(struct virtnet_rq_dma);
++
+ buf = virtnet_rq_alloc(rq, len + room, gfp);
+ if (unlikely(!buf))
+ return -ENOMEM;
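The virtio_net hunks move the page-frag refill out of virtnet_rq_alloc() and
into its callers, so the mergeable path can size the buffer first and, when a
fresh frag is started, shrink the length to leave room for the struct
virtnet_rq_dma bookkeeping kept at the head of the page. A sketch of the
resulting caller contract:

	/* Caller guarantees the frag can hold the buffer... */
	if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
		return -ENOMEM;
	/* ...and virtnet_rq_alloc() only carves it out, no refill inside. */
	buf = virtnet_rq_alloc(rq, len + room, gfp);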
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index 08a6f36a6be9cb..6805357ee29e6d 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -3,7 +3,7 @@
+ * Copyright (c) 2004-2011 Atheros Communications Inc.
+ * Copyright (c) 2011-2012,2017 Qualcomm Atheros, Inc.
+ * Copyright (c) 2016-2017 Erik Stromdahl <erik.stromdahl@gmail.com>
+- * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -2648,9 +2648,9 @@ static void ath10k_sdio_remove(struct sdio_func *func)
+
+ netif_napi_del(&ar->napi);
+
+- ath10k_core_destroy(ar);
+-
+ destroy_workqueue(ar_sdio->workqueue);
++
++ ath10k_core_destroy(ar);
+ }
+
+ static const struct sdio_device_id ath10k_sdio_devices[] = {
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 6d0784a21558ea..8946141aa0dce6 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -8186,9 +8186,9 @@ ath12k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ arvif->vdev_id, ret);
+ goto out;
+ }
+- ieee80211_iterate_stations_atomic(hw,
+- ath12k_mac_disable_peer_fixed_rate,
+- arvif);
++ ieee80211_iterate_stations_mtx(hw,
++ ath12k_mac_disable_peer_fixed_rate,
++ arvif);
+ } else if (ath12k_mac_bitrate_mask_get_single_nss(ar, band, mask,
+ &single_nss)) {
+ rate = WMI_FIXED_RATE_NONE;
+@@ -8233,16 +8233,16 @@ ath12k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ goto out;
+ }
+
+- ieee80211_iterate_stations_atomic(hw,
+- ath12k_mac_disable_peer_fixed_rate,
+- arvif);
++ ieee80211_iterate_stations_mtx(hw,
++ ath12k_mac_disable_peer_fixed_rate,
++ arvif);
+
+ mutex_lock(&ar->conf_mutex);
+
+ arvif->bitrate_mask = *mask;
+- ieee80211_iterate_stations_atomic(hw,
+- ath12k_mac_set_bitrate_mask_iter,
+- arvif);
++ ieee80211_iterate_stations_mtx(hw,
++ ath12k_mac_set_bitrate_mask_iter,
++ arvif);
+
+ mutex_unlock(&ar->conf_mutex);
+ }
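The ath12k substitution is about iterator context: the _atomic variant walks
the station list under RCU, where the WMI commands issued by the callbacks
here must not sleep, while ieee80211_iterate_stations_mtx() iterates with the
wiphy mutex held and allows blocking. As a rule of thumb (iterator names are
placeholders):

	/* Iterator may sleep (wiphy mutex held): */
	ieee80211_iterate_stations_mtx(hw, iter_may_sleep, data);

	/* Iterator must not sleep (RCU read-side): */
	ieee80211_iterate_stations_atomic(hw, iter_atomic_only, data);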
+diff --git a/drivers/net/wireless/ath/ath5k/pci.c b/drivers/net/wireless/ath/ath5k/pci.c
+index b51fce5ae26020..f5ca2fe0d07490 100644
+--- a/drivers/net/wireless/ath/ath5k/pci.c
++++ b/drivers/net/wireless/ath/ath5k/pci.c
+@@ -46,6 +46,8 @@ static const struct pci_device_id ath5k_pci_id_table[] = {
+ { PCI_VDEVICE(ATHEROS, 0x001b) }, /* 5413 Eagle */
+ { PCI_VDEVICE(ATHEROS, 0x001c) }, /* PCI-E cards */
+ { PCI_VDEVICE(ATHEROS, 0x001d) }, /* 2417 Nala */
++ { PCI_VDEVICE(ATHEROS, 0xff16) }, /* Gigaset SX76[23] AR241[34]A */
++ { PCI_VDEVICE(ATHEROS, 0xff1a) }, /* Arcadyan ARV45XX AR2417 */
+ { PCI_VDEVICE(ATHEROS, 0xff1b) }, /* AR5BXB63 */
+ { 0 }
+ };
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index d35262335eaf79..8a1e3376424487 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -770,7 +770,7 @@ void brcmf_sdiod_sgtable_alloc(struct brcmf_sdio_dev *sdiodev)
+
+ nents = max_t(uint, BRCMF_DEFAULT_RXGLOM_SIZE,
+ sdiodev->settings->bus.sdio.txglomsz);
+- nents += (nents >> 4) + 1;
++ nents *= 2;
+
+ WARN_ON(nents > sdiodev->max_segment_count);
+
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_rx.c b/drivers/net/wireless/intel/ipw2x00/libipw_rx.c
+index 48d6870bbf4e25..9a97ab9b89ae8b 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_rx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_rx.c
+@@ -870,8 +870,8 @@ void libipw_rx_any(struct libipw_device *ieee,
+ switch (ieee->iw_mode) {
+ case IW_MODE_ADHOC:
+ /* our BSS and not from/to DS */
+- if (ether_addr_equal(hdr->addr3, ieee->bssid))
+- if ((fc & (IEEE80211_FCTL_TODS+IEEE80211_FCTL_FROMDS)) == 0) {
++ if (ether_addr_equal(hdr->addr3, ieee->bssid) &&
++ ((fc & (IEEE80211_FCTL_TODS + IEEE80211_FCTL_FROMDS)) == 0)) {
+ /* promisc: get all */
+ if (ieee->dev->flags & IFF_PROMISC)
+ is_packet_for_us = 1;
+@@ -885,8 +885,8 @@ void libipw_rx_any(struct libipw_device *ieee,
+ break;
+ case IW_MODE_INFRA:
+ /* our BSS (== from our AP) and from DS */
+- if (ether_addr_equal(hdr->addr2, ieee->bssid))
+- if ((fc & (IEEE80211_FCTL_TODS+IEEE80211_FCTL_FROMDS)) == IEEE80211_FCTL_FROMDS) {
++ if (ether_addr_equal(hdr->addr2, ieee->bssid) &&
++ ((fc & (IEEE80211_FCTL_TODS + IEEE80211_FCTL_FROMDS)) == IEEE80211_FCTL_FROMDS)) {
+ /* promisc: get all */
+ if (ieee->dev->flags & IFF_PROMISC)
+ is_packet_for_us = 1;
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 21d0754dd7f6ac..b67e551fcee3ef 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -1297,12 +1297,12 @@ static void rtw_sdio_deinit_tx(struct rtw_dev *rtwdev)
+ struct rtw_sdio *rtwsdio = (struct rtw_sdio *)rtwdev->priv;
+ int i;
+
+- for (i = 0; i < RTK_MAX_TX_QUEUE_NUM; i++)
+- skb_queue_purge(&rtwsdio->tx_queue[i]);
+-
+ flush_workqueue(rtwsdio->txwq);
+ destroy_workqueue(rtwsdio->txwq);
+ kfree(rtwsdio->tx_handler_data);
++
++ for (i = 0; i < RTK_MAX_TX_QUEUE_NUM; i++)
++ ieee80211_purge_tx_queue(rtwdev->hw, &rtwsdio->tx_queue[i]);
+ }
+
+ int rtw_sdio_probe(struct sdio_func *sdio_func,
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index b17a429bcd2994..07695294767acb 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -423,10 +423,11 @@ static void rtw_usb_tx_handler(struct work_struct *work)
+
+ static void rtw_usb_tx_queue_purge(struct rtw_usb *rtwusb)
+ {
++ struct rtw_dev *rtwdev = rtwusb->rtwdev;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(rtwusb->tx_queue); i++)
+- skb_queue_purge(&rtwusb->tx_queue[i]);
++ ieee80211_purge_tx_queue(rtwdev->hw, &rtwusb->tx_queue[i]);
+ }
+
+ static void rtw_usb_write_port_complete(struct urb *urb)
+@@ -888,9 +889,9 @@ static void rtw_usb_deinit_tx(struct rtw_dev *rtwdev)
+ {
+ struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+
+- rtw_usb_tx_queue_purge(rtwusb);
+ flush_workqueue(rtwusb->txwq);
+ destroy_workqueue(rtwusb->txwq);
++ rtw_usb_tx_queue_purge(rtwusb);
+ }
+
+ static int rtw_usb_intf_init(struct rtw_dev *rtwdev,
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 13a7c39ceb6f55..e6bceef691e9be 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6074,6 +6074,9 @@ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
+
+ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ NULL, 0, req->ie_len);
++ if (!skb)
++ return -ENOMEM;
++
+ skb_put_data(skb, ies->ies[NL80211_BAND_6GHZ], ies->len[NL80211_BAND_6GHZ]);
+ skb_put_data(skb, ies->common_ies, ies->common_ie_len);
+ hdr = (struct ieee80211_hdr *)skb->data;
+diff --git a/drivers/nvdimm/dax_devs.c b/drivers/nvdimm/dax_devs.c
+index 6b4922de30477e..37b743acbb7bad 100644
+--- a/drivers/nvdimm/dax_devs.c
++++ b/drivers/nvdimm/dax_devs.c
+@@ -106,12 +106,12 @@ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns)
+
+ nvdimm_bus_lock(&ndns->dev);
+ nd_dax = nd_dax_alloc(nd_region);
+- nd_pfn = &nd_dax->nd_pfn;
+- dax_dev = nd_pfn_devinit(nd_pfn, ndns);
++ dax_dev = nd_dax_devinit(nd_dax, ndns);
+ nvdimm_bus_unlock(&ndns->dev);
+ if (!dax_dev)
+ return -ENOMEM;
+ pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
++ nd_pfn = &nd_dax->nd_pfn;
+ nd_pfn->pfn_sb = pfn_sb;
+ rc = nd_pfn_validate(nd_pfn, DAX_SIG);
+ dev_dbg(dev, "dax: %s\n", rc == 0 ? dev_name(dax_dev) : "<none>");
+diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
+index 2dbb1dca17b534..5ca06e9a2d2925 100644
+--- a/drivers/nvdimm/nd.h
++++ b/drivers/nvdimm/nd.h
+@@ -600,6 +600,13 @@ struct nd_dax *to_nd_dax(struct device *dev);
+ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns);
+ bool is_nd_dax(const struct device *dev);
+ struct device *nd_dax_create(struct nd_region *nd_region);
++static inline struct device *nd_dax_devinit(struct nd_dax *nd_dax,
++ struct nd_namespace_common *ndns)
++{
++ if (!nd_dax)
++ return NULL;
++ return nd_pfn_devinit(&nd_dax->nd_pfn, ndns);
++}
+ #else
+ static inline int nd_dax_probe(struct device *dev,
+ struct nd_namespace_common *ndns)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f0d4c6f3cb0555..249914b90dbfa7 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1303,9 +1303,10 @@ static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
+ queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
+ }
+
+-static void nvme_keep_alive_finish(struct request *rq,
+- blk_status_t status, struct nvme_ctrl *ctrl)
++static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
++ blk_status_t status)
+ {
++ struct nvme_ctrl *ctrl = rq->end_io_data;
+ unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
+ unsigned long delay = nvme_keep_alive_work_period(ctrl);
+ enum nvme_ctrl_state state = nvme_ctrl_state(ctrl);
+@@ -1322,17 +1323,20 @@ static void nvme_keep_alive_finish(struct request *rq,
+ delay = 0;
+ }
+
++ blk_mq_free_request(rq);
++
+ if (status) {
+ dev_err(ctrl->device,
+ "failed nvme_keep_alive_end_io error=%d\n",
+ status);
+- return;
++ return RQ_END_IO_NONE;
+ }
+
+ ctrl->ka_last_check_time = jiffies;
+ ctrl->comp_seen = false;
+ if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)
+ queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
++ return RQ_END_IO_NONE;
+ }
+
+ static void nvme_keep_alive_work(struct work_struct *work)
+@@ -1341,7 +1345,6 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ struct nvme_ctrl, ka_work);
+ bool comp_seen = ctrl->comp_seen;
+ struct request *rq;
+- blk_status_t status;
+
+ ctrl->ka_last_check_time = jiffies;
+
+@@ -1364,9 +1367,9 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ nvme_init_request(rq, &ctrl->ka_cmd);
+
+ rq->timeout = ctrl->kato * HZ;
+- status = blk_execute_rq(rq, false);
+- nvme_keep_alive_finish(rq, status, ctrl);
+- blk_mq_free_request(rq);
++ rq->end_io = nvme_keep_alive_end_io;
++ rq->end_io_data = ctrl;
++ blk_execute_rq_nowait(rq, false);
+ }
+
+ static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)
+@@ -2064,7 +2067,8 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
+ lim->physical_block_size = min(phys_bs, atomic_bs);
+ lim->io_min = phys_bs;
+ lim->io_opt = io_opt;
+- if (ns->ctrl->quirks & NVME_QUIRK_DEALLOCATE_ZEROES)
++ if ((ns->ctrl->quirks & NVME_QUIRK_DEALLOCATE_ZEROES) &&
++ (ns->ctrl->oncs & NVME_CTRL_ONCS_DSM))
+ lim->max_write_zeroes_sectors = UINT_MAX;
+ else
+ lim->max_write_zeroes_sectors = ns->ctrl->max_zeroes_sectors;
+@@ -3250,8 +3254,9 @@ static int nvme_check_ctrl_fabric_info(struct nvme_ctrl *ctrl, struct nvme_id_ct
+ }
+
+ if (!ctrl->maxcmd) {
+- dev_err(ctrl->device, "Maximum outstanding commands is 0\n");
+- return -EINVAL;
++ dev_warn(ctrl->device,
++ "Firmware bug: maximum outstanding commands is 0\n");
++ ctrl->maxcmd = ctrl->sqsize + 1;
+ }
+
+ return 0;
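The nvme/core.c rework turns the keep-alive into a fire-and-forget
blk_execute_rq_nowait() with an end_io callback, so a slow controller can no
longer pin the submitting work item for the full duration of the command. A
minimal sketch of the async pattern, with assumed driver-local names:

	static enum rq_end_io_ret my_end_io(struct request *rq,
					    blk_status_t status)
	{
		struct my_ctrl *ctrl = rq->end_io_data;

		blk_mq_free_request(rq);
		if (!status)
			my_rearm(ctrl);	/* assumed re-arm helper */
		return RQ_END_IO_NONE;
	}

	rq->end_io = my_end_io;
	rq->end_io_data = ctrl;
	blk_execute_rq_nowait(rq, false);	/* returns immediately */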
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 24a2759798d01e..913e6e5a80705f 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1091,13 +1091,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ }
+ destroy_admin:
+ nvme_stop_keep_alive(&ctrl->ctrl);
+- nvme_quiesce_admin_queue(&ctrl->ctrl);
+- blk_sync_queue(ctrl->ctrl.admin_q);
+- nvme_rdma_stop_queue(&ctrl->queues[0]);
+- nvme_cancel_admin_tagset(&ctrl->ctrl);
+- if (new)
+- nvme_remove_admin_tag_set(&ctrl->ctrl);
+- nvme_rdma_destroy_admin_queue(ctrl);
++ nvme_rdma_teardown_admin_queue(ctrl, new);
+ return ret;
+ }
+
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 3e416af2659f19..55abfe5e1d2548 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2278,7 +2278,7 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+ }
+ destroy_admin:
+ nvme_stop_keep_alive(ctrl);
+- nvme_tcp_teardown_admin_queue(ctrl, false);
++ nvme_tcp_teardown_admin_queue(ctrl, new);
+ return ret;
+ }
+
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index b5447228696dc4..6483e1874477ef 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1830,6 +1830,7 @@ static const struct of_device_id qcom_pcie_match[] = {
+ { .compatible = "qcom,pcie-ipq8064-v2", .data = &cfg_2_1_0 },
+ { .compatible = "qcom,pcie-ipq8074", .data = &cfg_2_3_3 },
+ { .compatible = "qcom,pcie-ipq8074-gen3", .data = &cfg_2_9_0 },
++ { .compatible = "qcom,pcie-ipq9574", .data = &cfg_2_9_0 },
+ { .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
+ { .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
+ { .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
+diff --git a/drivers/pci/controller/plda/pcie-starfive.c b/drivers/pci/controller/plda/pcie-starfive.c
+index c9933ecf683382..0564fdce47c2a3 100644
+--- a/drivers/pci/controller/plda/pcie-starfive.c
++++ b/drivers/pci/controller/plda/pcie-starfive.c
+@@ -404,6 +404,9 @@ static int starfive_pcie_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ pm_runtime_enable(&pdev->dev);
++ pm_runtime_get_sync(&pdev->dev);
++
+ plda->host_ops = &sf_host_ops;
+ plda->num_events = PLDA_MAX_EVENT_NUM;
+ /* mask doorbell event */
+@@ -413,11 +416,12 @@ static int starfive_pcie_probe(struct platform_device *pdev)
+ plda->events_bitmap <<= PLDA_NUM_DMA_EVENTS;
+ ret = plda_pcie_host_init(&pcie->plda, &starfive_pcie_ops,
+ &stf_pcie_event);
+- if (ret)
++ if (ret) {
++ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+ return ret;
++ }
+
+- pm_runtime_enable(&pdev->dev);
+- pm_runtime_get_sync(&pdev->dev);
+ platform_set_drvdata(pdev, pcie);
+
+ return 0;
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 264a180403a0ec..9d9596947350f5 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -740,11 +740,9 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
+ if (!(features & VMD_FEAT_BIOS_PM_QUIRK))
+ return 0;
+
+- pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
+-
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR);
+ if (!pos)
+- return 0;
++ goto out_state_change;
+
+ /*
+ * Skip if the max snoop LTR is non-zero, indicating BIOS has set it
+@@ -752,7 +750,7 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
+ */
+ pci_read_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, &ltr_reg);
+ if (!!(ltr_reg & (PCI_LTR_VALUE_MASK | PCI_LTR_SCALE_MASK)))
+- return 0;
++ goto out_state_change;
+
+ /*
+ * Set the default values to the maximum required by the platform to
+@@ -764,6 +762,13 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
+ pci_write_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, ltr_reg);
+ pci_info(pdev, "VMD: Default LTR value set by driver\n");
+
++out_state_change:
++ /*
++ * Ensure devices are in D0 before enabling PCI-PM L1 PM Substates, per
++ * PCIe r6.0, sec 5.5.4.
++ */
++ pci_set_power_state_locked(pdev, PCI_D0);
++ pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
+ return 0;
+ }
+
+@@ -1100,6 +1105,10 @@ static const struct pci_device_id vmd_ids[] = {
+ .driver_data = VMD_FEATS_CLIENT,},
+ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
+ .driver_data = VMD_FEATS_CLIENT,},
++ {PCI_VDEVICE(INTEL, 0xb60b),
++ .driver_data = VMD_FEATS_CLIENT,},
++ {PCI_VDEVICE(INTEL, 0xb06f),
++ .driver_data = VMD_FEATS_CLIENT,},
+ {0,}
+ };
+ MODULE_DEVICE_TABLE(pci, vmd_ids);
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 5d0f4db1cab786..3e5a117f5b5d60 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -521,6 +521,31 @@ static ssize_t bus_rescan_store(struct device *dev,
+ static struct device_attribute dev_attr_bus_rescan = __ATTR(rescan, 0200, NULL,
+ bus_rescan_store);
+
++static ssize_t reset_subordinate_store(struct device *dev,
++ struct device_attribute *attr,
++ const char *buf, size_t count)
++{
++ struct pci_dev *pdev = to_pci_dev(dev);
++ struct pci_bus *bus = pdev->subordinate;
++ unsigned long val;
++
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ if (kstrtoul(buf, 0, &val) < 0)
++ return -EINVAL;
++
++ if (val) {
++ int ret = __pci_reset_bus(bus);
++
++ if (ret)
++ return ret;
++ }
++
++ return count;
++}
++static DEVICE_ATTR_WO(reset_subordinate);
++
+ #if defined(CONFIG_PM) && defined(CONFIG_ACPI)
+ static ssize_t d3cold_allowed_store(struct device *dev,
+ struct device_attribute *attr,
+@@ -625,6 +650,7 @@ static struct attribute *pci_dev_attrs[] = {
+ static struct attribute *pci_bridge_attrs[] = {
+ &dev_attr_subordinate_bus_number.attr,
+ &dev_attr_secondary_bus_number.attr,
++ &dev_attr_reset_subordinate.attr,
+ NULL,
+ };
+
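The new attribute gives bridge devices a user-visible reset knob: writing a
non-zero value (for example, "echo 1 > /sys/bus/pci/devices/<bridge>/reset_subordinate",
which requires CAP_SYS_ADMIN) resets the whole secondary bus through
__pci_reset_bus(), which the pci.c and pci.h hunks below promote from static
scope for exactly this caller.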
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 08f170fd3efb3e..dd3c6dcb47ae4a 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5885,7 +5885,7 @@ EXPORT_SYMBOL_GPL(pci_probe_reset_bus);
+ *
+ * Same as above except return -EAGAIN if the bus cannot be locked
+ */
+-static int __pci_reset_bus(struct pci_bus *bus)
++int __pci_reset_bus(struct pci_bus *bus)
+ {
+ int rc;
+
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 14d00ce45bfa95..1cdc2c9547a7e1 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -104,6 +104,7 @@ bool pci_reset_supported(struct pci_dev *dev);
+ void pci_init_reset_methods(struct pci_dev *dev);
+ int pci_bridge_secondary_bus_reset(struct pci_dev *dev);
+ int pci_bus_error_reset(struct pci_dev *dev);
++int __pci_reset_bus(struct pci_bus *bus);
+
+ struct pci_cap_saved_data {
+ u16 cap_nr;
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index f1615805f5b078..ebb0c1d5cae255 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1633,23 +1633,33 @@ static void set_pcie_thunderbolt(struct pci_dev *dev)
+
+ static void set_pcie_untrusted(struct pci_dev *dev)
+ {
+- struct pci_dev *parent;
++ struct pci_dev *parent = pci_upstream_bridge(dev);
+
++ if (!parent)
++ return;
+ /*
+- * If the upstream bridge is untrusted we treat this device
++ * If the upstream bridge is untrusted we treat this device as
+ * untrusted as well.
+ */
+- parent = pci_upstream_bridge(dev);
+- if (parent && (parent->untrusted || parent->external_facing))
++ if (parent->untrusted) {
++ dev->untrusted = true;
++ return;
++ }
++
++ if (arch_pci_dev_is_removable(dev)) {
++ pci_dbg(dev, "marking as untrusted\n");
+ dev->untrusted = true;
++ }
+ }
+
+ static void pci_set_removable(struct pci_dev *dev)
+ {
+ struct pci_dev *parent = pci_upstream_bridge(dev);
+
++ if (!parent)
++ return;
+ /*
+- * We (only) consider everything downstream from an external_facing
++ * We (only) consider everything tunneled below an external_facing
+ * device to be removable by the user. We're mainly concerned with
+ * consumer platforms with user accessible thunderbolt ports that are
+ * vulnerable to DMA attacks, and we expect those ports to be marked by
+@@ -1659,9 +1669,15 @@ static void pci_set_removable(struct pci_dev *dev)
+ * accessible to user / may not be removed by end user, and thus not
+ * exposed as "removable" to userspace.
+ */
+- if (parent &&
+- (parent->external_facing || dev_is_removable(&parent->dev)))
++ if (dev_is_removable(&parent->dev)) {
++ dev_set_removable(&dev->dev, DEVICE_REMOVABLE);
++ return;
++ }
++
++ if (arch_pci_dev_is_removable(dev)) {
++ pci_dbg(dev, "marking as removable\n");
+ dev_set_removable(&dev->dev, DEVICE_REMOVABLE);
++ }
+ }
+
+ /**
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index dccb60c1d9cc3d..8103bc24a54ea4 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4996,18 +4996,21 @@ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
+ }
+
+ /*
+- * Wangxun 10G/1G NICs have no ACS capability, and on multi-function
+- * devices, peer-to-peer transactions are not be used between the functions.
+- * So add an ACS quirk for below devices to isolate functions.
++ * Wangxun 40G/25G/10G/1G NICs have no ACS capability, but on
++ * multi-function devices, the hardware isolates the functions by
++ * directing all peer-to-peer traffic upstream as though PCI_ACS_RR and
++ * PCI_ACS_CR were set.
+ * SFxxx 1G NICs(em).
+ * RP1000/RP2000 10G NICs(sp).
++ * FF5xxx 40G/25G/10G NICs(aml).
+ */
+ static int pci_quirk_wangxun_nic_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ switch (dev->device) {
+- case 0x0100 ... 0x010F:
+- case 0x1001:
+- case 0x2001:
++ case 0x0100 ... 0x010F: /* EM */
++ case 0x1001: case 0x2001: /* SP */
++ case 0x5010: case 0x5025: case 0x5040: /* AML */
++ case 0x5110: case 0x5125: case 0x5140: /* AML */
+ return pci_acs_ctrl_enabled(acs_flags,
+ PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 4061890a174835..b3eec63c00ba04 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -220,6 +220,9 @@ static int pinctrl_register_one_pin(struct pinctrl_dev *pctldev,
+
+ /* Set owner */
+ pindesc->pctldev = pctldev;
++#ifdef CONFIG_PINMUX
++ mutex_init(&pindesc->mux_lock);
++#endif
+
+ /* Copy basic pin info */
+ if (pin->name) {
+diff --git a/drivers/pinctrl/core.h b/drivers/pinctrl/core.h
+index 4e07707d2435bd..d6c24978e7081a 100644
+--- a/drivers/pinctrl/core.h
++++ b/drivers/pinctrl/core.h
+@@ -177,6 +177,7 @@ struct pin_desc {
+ const char *mux_owner;
+ const struct pinctrl_setting_mux *mux_setting;
+ const char *gpio_owner;
++ struct mutex mux_lock;
+ #endif
+ };
+
+diff --git a/drivers/pinctrl/freescale/Kconfig b/drivers/pinctrl/freescale/Kconfig
+index 3b59d71890045b..139bc0fb8a9dbf 100644
+--- a/drivers/pinctrl/freescale/Kconfig
++++ b/drivers/pinctrl/freescale/Kconfig
+@@ -20,7 +20,7 @@ config PINCTRL_IMX_SCMI
+
+ config PINCTRL_IMX_SCU
+ tristate
+- depends on IMX_SCU
++ depends on IMX_SCU || COMPILE_TEST
+ select PINCTRL_IMX
+
+ config PINCTRL_IMX1_CORE
+diff --git a/drivers/pinctrl/pinmux.c b/drivers/pinctrl/pinmux.c
+index 02033ea1c64384..0743190da59e81 100644
+--- a/drivers/pinctrl/pinmux.c
++++ b/drivers/pinctrl/pinmux.c
+@@ -14,6 +14,7 @@
+
+ #include <linux/array_size.h>
+ #include <linux/ctype.h>
++#include <linux/cleanup.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+@@ -93,6 +94,7 @@ bool pinmux_can_be_used_for_gpio(struct pinctrl_dev *pctldev, unsigned int pin)
+ if (!desc || !ops)
+ return true;
+
++ guard(mutex)(&desc->mux_lock);
+ if (ops->strict && desc->mux_usecount)
+ return false;
+
+@@ -127,29 +129,31 @@ static int pin_request(struct pinctrl_dev *pctldev,
+ dev_dbg(pctldev->dev, "request pin %d (%s) for %s\n",
+ pin, desc->name, owner);
+
+- if ((!gpio_range || ops->strict) &&
+- desc->mux_usecount && strcmp(desc->mux_owner, owner)) {
+- dev_err(pctldev->dev,
+- "pin %s already requested by %s; cannot claim for %s\n",
+- desc->name, desc->mux_owner, owner);
+- goto out;
+- }
++ scoped_guard(mutex, &desc->mux_lock) {
++ if ((!gpio_range || ops->strict) &&
++ desc->mux_usecount && strcmp(desc->mux_owner, owner)) {
++ dev_err(pctldev->dev,
++ "pin %s already requested by %s; cannot claim for %s\n",
++ desc->name, desc->mux_owner, owner);
++ goto out;
++ }
+
+- if ((gpio_range || ops->strict) && desc->gpio_owner) {
+- dev_err(pctldev->dev,
+- "pin %s already requested by %s; cannot claim for %s\n",
+- desc->name, desc->gpio_owner, owner);
+- goto out;
+- }
++ if ((gpio_range || ops->strict) && desc->gpio_owner) {
++ dev_err(pctldev->dev,
++ "pin %s already requested by %s; cannot claim for %s\n",
++ desc->name, desc->gpio_owner, owner);
++ goto out;
++ }
+
+- if (gpio_range) {
+- desc->gpio_owner = owner;
+- } else {
+- desc->mux_usecount++;
+- if (desc->mux_usecount > 1)
+- return 0;
++ if (gpio_range) {
++ desc->gpio_owner = owner;
++ } else {
++ desc->mux_usecount++;
++ if (desc->mux_usecount > 1)
++ return 0;
+
+- desc->mux_owner = owner;
++ desc->mux_owner = owner;
++ }
+ }
+
+ /* Let each pin increase references to this module */
+@@ -178,12 +182,14 @@ static int pin_request(struct pinctrl_dev *pctldev,
+
+ out_free_pin:
+ if (status) {
+- if (gpio_range) {
+- desc->gpio_owner = NULL;
+- } else {
+- desc->mux_usecount--;
+- if (!desc->mux_usecount)
+- desc->mux_owner = NULL;
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (gpio_range) {
++ desc->gpio_owner = NULL;
++ } else {
++ desc->mux_usecount--;
++ if (!desc->mux_usecount)
++ desc->mux_owner = NULL;
++ }
+ }
+ }
+ out:
+@@ -219,15 +225,17 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ return NULL;
+ }
+
+- if (!gpio_range) {
+- /*
+- * A pin should not be freed more times than allocated.
+- */
+- if (WARN_ON(!desc->mux_usecount))
+- return NULL;
+- desc->mux_usecount--;
+- if (desc->mux_usecount)
+- return NULL;
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (!gpio_range) {
++ /*
++ * A pin should not be freed more times than allocated.
++ */
++ if (WARN_ON(!desc->mux_usecount))
++ return NULL;
++ desc->mux_usecount--;
++ if (desc->mux_usecount)
++ return NULL;
++ }
+ }
+
+ /*
+@@ -239,13 +247,15 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ else if (ops->free)
+ ops->free(pctldev, pin);
+
+- if (gpio_range) {
+- owner = desc->gpio_owner;
+- desc->gpio_owner = NULL;
+- } else {
+- owner = desc->mux_owner;
+- desc->mux_owner = NULL;
+- desc->mux_setting = NULL;
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (gpio_range) {
++ owner = desc->gpio_owner;
++ desc->gpio_owner = NULL;
++ } else {
++ owner = desc->mux_owner;
++ desc->mux_owner = NULL;
++ desc->mux_setting = NULL;
++ }
+ }
+
+ module_put(pctldev->owner);
+@@ -458,7 +468,8 @@ int pinmux_enable_setting(const struct pinctrl_setting *setting)
+ pins[i]);
+ continue;
+ }
+- desc->mux_setting = &(setting->data.mux);
++ scoped_guard(mutex, &desc->mux_lock)
++ desc->mux_setting = &(setting->data.mux);
+ }
+
+ ret = ops->set_mux(pctldev, setting->data.mux.func,
+@@ -472,8 +483,10 @@ int pinmux_enable_setting(const struct pinctrl_setting *setting)
+ err_set_mux:
+ for (i = 0; i < num_pins; i++) {
+ desc = pin_desc_get(pctldev, pins[i]);
+- if (desc)
+- desc->mux_setting = NULL;
++ if (desc) {
++ scoped_guard(mutex, &desc->mux_lock)
++ desc->mux_setting = NULL;
++ }
+ }
+ err_pin_request:
+ /* On error release all taken pins */
+@@ -492,6 +505,7 @@ void pinmux_disable_setting(const struct pinctrl_setting *setting)
+ unsigned int num_pins = 0;
+ int i;
+ struct pin_desc *desc;
++ bool is_equal;
+
+ if (pctlops->get_group_pins)
+ ret = pctlops->get_group_pins(pctldev, setting->data.mux.group,
+@@ -517,7 +531,10 @@ void pinmux_disable_setting(const struct pinctrl_setting *setting)
+ pins[i]);
+ continue;
+ }
+- if (desc->mux_setting == &(setting->data.mux)) {
++ scoped_guard(mutex, &desc->mux_lock)
++ is_equal = (desc->mux_setting == &(setting->data.mux));
++
++ if (is_equal) {
+ pin_free(pctldev, pins[i], NULL);
+ } else {
+ const char *gname;
+@@ -608,40 +625,42 @@ static int pinmux_pins_show(struct seq_file *s, void *what)
+ if (desc == NULL)
+ continue;
+
+- if (desc->mux_owner &&
+- !strcmp(desc->mux_owner, pinctrl_dev_get_name(pctldev)))
+- is_hog = true;
+-
+- if (pmxops->strict) {
+- if (desc->mux_owner)
+- seq_printf(s, "pin %d (%s): device %s%s",
+- pin, desc->name, desc->mux_owner,
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (desc->mux_owner &&
++ !strcmp(desc->mux_owner, pinctrl_dev_get_name(pctldev)))
++ is_hog = true;
++
++ if (pmxops->strict) {
++ if (desc->mux_owner)
++ seq_printf(s, "pin %d (%s): device %s%s",
++ pin, desc->name, desc->mux_owner,
++ is_hog ? " (HOG)" : "");
++ else if (desc->gpio_owner)
++ seq_printf(s, "pin %d (%s): GPIO %s",
++ pin, desc->name, desc->gpio_owner);
++ else
++ seq_printf(s, "pin %d (%s): UNCLAIMED",
++ pin, desc->name);
++ } else {
++ /* For non-strict controllers */
++ seq_printf(s, "pin %d (%s): %s %s%s", pin, desc->name,
++ desc->mux_owner ? desc->mux_owner
++ : "(MUX UNCLAIMED)",
++ desc->gpio_owner ? desc->gpio_owner
++ : "(GPIO UNCLAIMED)",
+ is_hog ? " (HOG)" : "");
+- else if (desc->gpio_owner)
+- seq_printf(s, "pin %d (%s): GPIO %s",
+- pin, desc->name, desc->gpio_owner);
++ }
++
++ /* If mux: print function+group claiming the pin */
++ if (desc->mux_setting)
++ seq_printf(s, " function %s group %s\n",
++ pmxops->get_function_name(pctldev,
++ desc->mux_setting->func),
++ pctlops->get_group_name(pctldev,
++ desc->mux_setting->group));
+ else
+- seq_printf(s, "pin %d (%s): UNCLAIMED",
+- pin, desc->name);
+- } else {
+- /* For non-strict controllers */
+- seq_printf(s, "pin %d (%s): %s %s%s", pin, desc->name,
+- desc->mux_owner ? desc->mux_owner
+- : "(MUX UNCLAIMED)",
+- desc->gpio_owner ? desc->gpio_owner
+- : "(GPIO UNCLAIMED)",
+- is_hog ? " (HOG)" : "");
++ seq_putc(s, '\n');
+ }
+-
+- /* If mux: print function+group claiming the pin */
+- if (desc->mux_setting)
+- seq_printf(s, " function %s group %s\n",
+- pmxops->get_function_name(pctldev,
+- desc->mux_setting->func),
+- pctlops->get_group_name(pctldev,
+- desc->mux_setting->group));
+- else
+- seq_putc(s, '\n');
+ }
+
+ mutex_unlock(&pctldev->mutex);
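The pinmux.c conversion leans on the <linux/cleanup.h> lock guards:
guard(mutex)(&m) takes the mutex and releases it automatically when the
enclosing scope ends, while scoped_guard(mutex, &m) confines the hold to the
attached statement or block, which is why the hunks can drop explicit unlocks
and still return early safely. A minimal sketch against the mux_lock added in
core.h:

	static bool pin_is_unclaimed(struct pin_desc *desc)
	{
		guard(mutex)(&desc->mux_lock);	/* dropped on any return */
		return !desc->mux_usecount && !desc->gpio_owner;
	}

	static void pin_get(struct pin_desc *desc)
	{
		scoped_guard(mutex, &desc->mux_lock)
			desc->mux_usecount++;	/* lock held only here */
	}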
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index a0eb4e01b3a755..1b7eecff3ffa43 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -1226,6 +1226,8 @@ static const struct of_device_id pmic_gpio_of_match[] = {
+ { .compatible = "qcom,pm8550ve-gpio", .data = (void *) 8 },
+ { .compatible = "qcom,pm8550vs-gpio", .data = (void *) 6 },
+ { .compatible = "qcom,pm8916-gpio", .data = (void *) 4 },
++ /* pm8937 has 8 GPIOs with holes on 3, 4 and 6 */
++ { .compatible = "qcom,pm8937-gpio", .data = (void *) 8 },
+ { .compatible = "qcom,pm8941-gpio", .data = (void *) 36 },
+ /* pm8950 has 8 GPIOs with holes on 3 */
+ { .compatible = "qcom,pm8950-gpio", .data = (void *) 8 },
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+index d16ece90d926cf..5fa04e7c1d5c4d 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+@@ -983,6 +983,7 @@ static const struct of_device_id pmic_mpp_of_match[] = {
+ { .compatible = "qcom,pm8226-mpp", .data = (void *) 8 },
+ { .compatible = "qcom,pm8841-mpp", .data = (void *) 4 },
+ { .compatible = "qcom,pm8916-mpp", .data = (void *) 4 },
++ { .compatible = "qcom,pm8937-mpp", .data = (void *) 4 },
+ { .compatible = "qcom,pm8941-mpp", .data = (void *) 8 },
+ { .compatible = "qcom,pm8950-mpp", .data = (void *) 4 },
+ { .compatible = "qcom,pmi8950-mpp", .data = (void *) 4 },
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 89f5f44857d555..1101e5b2488e52 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -3696,7 +3696,6 @@ static int asus_wmi_custom_fan_curve_init(struct asus_wmi *asus)
+ /* Throttle thermal policy ****************************************************/
+ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ {
+- u32 retval;
+ u8 value;
+ int err;
+
+@@ -3718,8 +3717,8 @@ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ value = asus->throttle_thermal_policy_mode;
+ }
+
+- err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev,
+- value, &retval);
++ /* Some machines do not return an error code as a result, so we ignore it */
++ err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev, value, NULL);
+
+ sysfs_notify(&asus->platform_device->dev.kobj, NULL,
+ "throttle_thermal_policy");
+@@ -3729,12 +3728,6 @@ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ return err;
+ }
+
+- if (retval != 1) {
+- pr_warn("Failed to set throttle thermal policy (retval): 0x%x\n",
+- retval);
+- return -EIO;
+- }
+-
+ /* Must set to disabled if mode is toggled */
+ if (asus->cpu_fan_curve_available)
+ asus->custom_fan_curves[FAN_CURVE_DEV_CPU].enabled = false;
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 29ad510e881c39..778ff187ac59e6 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -2171,8 +2171,24 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+ }
+
+ genpd->gd = gd;
+- return 0;
++ device_initialize(&genpd->dev);
++
++ if (!genpd_is_dev_name_fw(genpd)) {
++ dev_set_name(&genpd->dev, "%s", genpd->name);
++ } else {
++ ret = ida_alloc(&genpd_ida, GFP_KERNEL);
++ if (ret < 0)
++ goto put;
+
++ genpd->device_id = ret;
++ dev_set_name(&genpd->dev, "%s_%u", genpd->name, genpd->device_id);
++ }
++
++ return 0;
++put:
++ put_device(&genpd->dev);
++ if (genpd->free_states == genpd_free_default_power_state)
++ kfree(genpd->states);
+ free:
+ if (genpd_is_cpu_domain(genpd))
+ free_cpumask_var(genpd->cpus);
+@@ -2182,6 +2198,9 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+
+ static void genpd_free_data(struct generic_pm_domain *genpd)
+ {
++ put_device(&genpd->dev);
++ if (genpd->device_id != -ENXIO)
++ ida_free(&genpd_ida, genpd->device_id);
+ if (genpd_is_cpu_domain(genpd))
+ free_cpumask_var(genpd->cpus);
+ if (genpd->free_states)
+@@ -2270,20 +2289,6 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
+ if (ret)
+ return ret;
+
+- device_initialize(&genpd->dev);
+-
+- if (!genpd_is_dev_name_fw(genpd)) {
+- dev_set_name(&genpd->dev, "%s", genpd->name);
+- } else {
+- ret = ida_alloc(&genpd_ida, GFP_KERNEL);
+- if (ret < 0) {
+- put_device(&genpd->dev);
+- return ret;
+- }
+- genpd->device_id = ret;
+- dev_set_name(&genpd->dev, "%s_%u", genpd->name, genpd->device_id);
+- }
+-
+ mutex_lock(&gpd_list_lock);
+ list_add(&genpd->gpd_list_node, &gpd_list);
+ mutex_unlock(&gpd_list_lock);
+@@ -2324,8 +2329,6 @@ static int genpd_remove(struct generic_pm_domain *genpd)
+ genpd_unlock(genpd);
+ genpd_debug_remove(genpd);
+ cancel_work_sync(&genpd->power_off_work);
+- if (genpd->device_id != -ENXIO)
+- ida_free(&genpd_ida, genpd->device_id);
+ genpd_free_data(genpd);
+
+ pr_debug("%s: removed %s\n", __func__, dev_name(&genpd->dev));
+diff --git a/drivers/pmdomain/imx/gpcv2.c b/drivers/pmdomain/imx/gpcv2.c
+index 963d61c5af6d5e..3f0e6960f47fc2 100644
+--- a/drivers/pmdomain/imx/gpcv2.c
++++ b/drivers/pmdomain/imx/gpcv2.c
+@@ -403,7 +403,7 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
+ * already reaches target before udelay()
+ */
+ regmap_read_bypassed(domain->regmap, domain->regs->hsk, &reg_val);
+- udelay(5);
++ udelay(10);
+ }
+
+ /* Disable reset clocks for all devices in the domain */
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index c56cd0f63909a2..77a36e7bddd54e 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -150,7 +150,8 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct __kernel_timex *tx)
+ if (ppb > ops->max_adj || ppb < -ops->max_adj)
+ return -ERANGE;
+ err = ops->adjfine(ops, tx->freq);
+- ptp->dialed_frequency = tx->freq;
++ if (!err)
++ ptp->dialed_frequency = tx->freq;
+ } else if (tx->modes & ADJ_OFFSET) {
+ if (ops->adjphase) {
+ s32 max_phase_adj = ops->getmaxphase(ops);
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index 6c343b4b9d15a8..7870722b6ee21c 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -843,26 +843,15 @@ static const struct rpmh_vreg_hw_data pmic5_ftsmps520 = {
+ .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+
+-static const struct rpmh_vreg_hw_data pmic5_ftsmps525_lv = {
++static const struct rpmh_vreg_hw_data pmic5_ftsmps525 = {
+ .regulator_type = VRM,
+ .ops = &rpmh_regulator_vrm_ops,
+ .voltage_ranges = (struct linear_range[]) {
+ REGULATOR_LINEAR_RANGE(300000, 0, 267, 4000),
++ REGULATOR_LINEAR_RANGE(1376000, 268, 438, 8000),
+ },
+- .n_linear_ranges = 1,
+- .n_voltages = 268,
+- .pmic_mode_map = pmic_mode_map_pmic5_smps,
+- .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+-};
+-
+-static const struct rpmh_vreg_hw_data pmic5_ftsmps525_mv = {
+- .regulator_type = VRM,
+- .ops = &rpmh_regulator_vrm_ops,
+- .voltage_ranges = (struct linear_range[]) {
+- REGULATOR_LINEAR_RANGE(600000, 0, 267, 8000),
+- },
+- .n_linear_ranges = 1,
+- .n_voltages = 268,
++ .n_linear_ranges = 2,
++ .n_voltages = 439,
+ .pmic_mode_map = pmic_mode_map_pmic5_smps,
+ .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+@@ -1190,12 +1179,12 @@ static const struct rpmh_vreg_init_data pm8550_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pm8550vs_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_lv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_mv, "vdd-s6"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+@@ -1203,14 +1192,14 @@ static const struct rpmh_vreg_init_data pm8550vs_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pm8550ve_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_mv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+- RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+- RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
++ RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525, "vdd-s7"),
++ RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525, "vdd-s8"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+@@ -1218,14 +1207,14 @@ static const struct rpmh_vreg_init_data pm8550ve_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pmc8380_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_mv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+- RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+- RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
++ RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525, "vdd-s7"),
++ RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525, "vdd-s8"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+@@ -1409,16 +1398,16 @@ static const struct rpmh_vreg_init_data pmx65_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pmx75_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_mv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+- RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+- RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
+- RPMH_VREG("smps9", "smp%s9", &pmic5_ftsmps525_lv, "vdd-s9"),
+- RPMH_VREG("smps10", "smp%s10", &pmic5_ftsmps525_lv, "vdd-s10"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
++ RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525, "vdd-s7"),
++ RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525, "vdd-s8"),
++ RPMH_VREG("smps9", "smp%s9", &pmic5_ftsmps525, "vdd-s9"),
++ RPMH_VREG("smps10", "smp%s10", &pmic5_ftsmps525, "vdd-s10"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2-18"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 793b1d274be33a..1a2d08ec9de9ef 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -1433,6 +1433,7 @@ static const struct of_device_id adsp_of_match[] = {
+ { .compatible = "qcom,sa8775p-cdsp1-pas", .data = &sa8775p_cdsp1_resource},
+ { .compatible = "qcom,sa8775p-gpdsp0-pas", .data = &sa8775p_gpdsp0_resource},
+ { .compatible = "qcom,sa8775p-gpdsp1-pas", .data = &sa8775p_gpdsp1_resource},
++ { .compatible = "qcom,sar2130p-adsp-pas", .data = &sm8350_adsp_resource},
+ { .compatible = "qcom,sc7180-adsp-pas", .data = &sm8250_adsp_resource},
+ { .compatible = "qcom,sc7180-mpss-pas", .data = &mpss_resource_init},
+ { .compatible = "qcom,sc7280-adsp-pas", .data = &sm8350_adsp_resource},
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 35dca2accbb8df..5849d2970bba45 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -645,18 +645,17 @@ static int cmos_nvram_read(void *priv, unsigned int off, void *val,
+ unsigned char *buf = val;
+
+ off += NVRAM_OFFSET;
+- spin_lock_irq(&rtc_lock);
+- for (; count; count--, off++) {
++ for (; count; count--, off++, buf++) {
++ guard(spinlock_irq)(&rtc_lock);
+ if (off < 128)
+- *buf++ = CMOS_READ(off);
++ *buf = CMOS_READ(off);
+ else if (can_bank2)
+- *buf++ = cmos_read_bank2(off);
++ *buf = cmos_read_bank2(off);
+ else
+- break;
++ return -EIO;
+ }
+- spin_unlock_irq(&rtc_lock);
+
+- return count ? -EIO : 0;
++ return 0;
+ }
+
+ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+@@ -671,23 +670,23 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ * NVRAM to update, updating checksums is also part of its job.
+ */
+ off += NVRAM_OFFSET;
+- spin_lock_irq(&rtc_lock);
+- for (; count; count--, off++) {
++ for (; count; count--, off++, buf++) {
+ /* don't trash RTC registers */
+ if (off == cmos->day_alrm
+ || off == cmos->mon_alrm
+ || off == cmos->century)
+- buf++;
+- else if (off < 128)
+- CMOS_WRITE(*buf++, off);
++ continue;
++
++ guard(spinlock_irq)(&rtc_lock);
++ if (off < 128)
++ CMOS_WRITE(*buf, off);
+ else if (can_bank2)
+- cmos_write_bank2(*buf++, off);
++ cmos_write_bank2(*buf, off);
+ else
+- break;
++ return -EIO;
+ }
+- spin_unlock_irq(&rtc_lock);
+
+- return count ? -EIO : 0;
++ return 0;
+ }
+
+ /*----------------------------------------------------------------*/
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 4cd3a3eab6f1c4..cd394d8c9f07f0 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2493,6 +2493,7 @@ static int complete_v3_hw(struct hisi_sas_cq *cq)
+ /* update rd_point */
+ cq->rd_point = rd_point;
+ hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point);
++ cond_resched();
+
+ return completed;
+ }
+@@ -3550,6 +3551,11 @@ debugfs_to_reg_name_v3_hw(int off, int base_off,
+ return NULL;
+ }
+
++static bool debugfs_dump_is_generated_v3_hw(void *p)
++{
++ return p ? true : false;
++}
++
+ static void debugfs_print_reg_v3_hw(u32 *regs_val, struct seq_file *s,
+ const struct hisi_sas_debugfs_reg *reg)
+ {
+@@ -3575,6 +3581,9 @@ static int debugfs_global_v3_hw_show(struct seq_file *s, void *p)
+ {
+ struct hisi_sas_debugfs_regs *global = s->private;
+
++ if (!debugfs_dump_is_generated_v3_hw(global->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(global->data, s,
+ &debugfs_global_reg);
+
+@@ -3586,6 +3595,9 @@ static int debugfs_axi_v3_hw_show(struct seq_file *s, void *p)
+ {
+ struct hisi_sas_debugfs_regs *axi = s->private;
+
++ if (!debugfs_dump_is_generated_v3_hw(axi->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(axi->data, s,
+ &debugfs_axi_reg);
+
+@@ -3597,6 +3609,9 @@ static int debugfs_ras_v3_hw_show(struct seq_file *s, void *p)
+ {
+ struct hisi_sas_debugfs_regs *ras = s->private;
+
++ if (!debugfs_dump_is_generated_v3_hw(ras->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(ras->data, s,
+ &debugfs_ras_reg);
+
+@@ -3609,6 +3624,9 @@ static int debugfs_port_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_debugfs_port *port = s->private;
+ const struct hisi_sas_debugfs_reg *reg_port = &debugfs_port_reg;
+
++ if (!debugfs_dump_is_generated_v3_hw(port->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(port->data, s, reg_port);
+
+ return 0;
+@@ -3664,6 +3682,9 @@ static int debugfs_cq_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_debugfs_cq *debugfs_cq = s->private;
+ int slot;
+
++ if (!debugfs_dump_is_generated_v3_hw(debugfs_cq->complete_hdr))
++ return -EPERM;
++
+ for (slot = 0; slot < HISI_SAS_QUEUE_SLOTS; slot++)
+ debugfs_cq_show_slot_v3_hw(s, slot, debugfs_cq);
+
+@@ -3685,8 +3706,12 @@ static void debugfs_dq_show_slot_v3_hw(struct seq_file *s, int slot,
+
+ static int debugfs_dq_v3_hw_show(struct seq_file *s, void *p)
+ {
++ struct hisi_sas_debugfs_dq *debugfs_dq = s->private;
+ int slot;
+
++ if (!debugfs_dump_is_generated_v3_hw(debugfs_dq->hdr))
++ return -EPERM;
++
+ for (slot = 0; slot < HISI_SAS_QUEUE_SLOTS; slot++)
+ debugfs_dq_show_slot_v3_hw(s, slot, s->private);
+
+@@ -3700,6 +3725,9 @@ static int debugfs_iost_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_iost *iost = debugfs_iost->iost;
+ int i, max_command_entries = HISI_SAS_MAX_COMMANDS;
+
++ if (!debugfs_dump_is_generated_v3_hw(iost))
++ return -EPERM;
++
+ for (i = 0; i < max_command_entries; i++, iost++) {
+ __le64 *data = &iost->qw0;
+
+@@ -3719,6 +3747,9 @@ static int debugfs_iost_cache_v3_hw_show(struct seq_file *s, void *p)
+ int i, tab_idx;
+ __le64 *iost;
+
++ if (!debugfs_dump_is_generated_v3_hw(iost_cache))
++ return -EPERM;
++
+ for (i = 0; i < HISI_SAS_IOST_ITCT_CACHE_NUM; i++, iost_cache++) {
+ /*
+ * Data struct of IOST cache:
+@@ -3742,6 +3773,9 @@ static int debugfs_itct_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_debugfs_itct *debugfs_itct = s->private;
+ struct hisi_sas_itct *itct = debugfs_itct->itct;
+
++ if (!debugfs_dump_is_generated_v3_hw(itct))
++ return -EPERM;
++
+ for (i = 0; i < HISI_SAS_MAX_ITCT_ENTRIES; i++, itct++) {
+ __le64 *data = &itct->qw0;
+
+@@ -3761,6 +3795,9 @@ static int debugfs_itct_cache_v3_hw_show(struct seq_file *s, void *p)
+ int i, tab_idx;
+ __le64 *itct;
+
++ if (!debugfs_dump_is_generated_v3_hw(itct_cache))
++ return -EPERM;
++
+ for (i = 0; i < HISI_SAS_IOST_ITCT_CACHE_NUM; i++, itct_cache++) {
+ /*
+ * Data struct of ITCT cache:
+@@ -3778,10 +3815,9 @@ static int debugfs_itct_cache_v3_hw_show(struct seq_file *s, void *p)
+ }
+ DEFINE_SHOW_ATTRIBUTE(debugfs_itct_cache_v3_hw);
+
+-static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
++static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba, int index)
+ {
+ u64 *debugfs_timestamp;
+- int dump_index = hisi_hba->debugfs_dump_index;
+ struct dentry *dump_dentry;
+ struct dentry *dentry;
+ char name[256];
+@@ -3789,17 +3825,17 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ int c;
+ int d;
+
+- snprintf(name, 256, "%d", dump_index);
++ snprintf(name, 256, "%d", index);
+
+ dump_dentry = debugfs_create_dir(name, hisi_hba->debugfs_dump_dentry);
+
+- debugfs_timestamp = &hisi_hba->debugfs_timestamp[dump_index];
++ debugfs_timestamp = &hisi_hba->debugfs_timestamp[index];
+
+ debugfs_create_u64("timestamp", 0400, dump_dentry,
+ debugfs_timestamp);
+
+ debugfs_create_file("global", 0400, dump_dentry,
+- &hisi_hba->debugfs_regs[dump_index][DEBUGFS_GLOBAL],
++ &hisi_hba->debugfs_regs[index][DEBUGFS_GLOBAL],
+ &debugfs_global_v3_hw_fops);
+
+ /* Create port dir and files */
+@@ -3808,7 +3844,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ snprintf(name, 256, "%d", p);
+
+ debugfs_create_file(name, 0400, dentry,
+- &hisi_hba->debugfs_port_reg[dump_index][p],
++ &hisi_hba->debugfs_port_reg[index][p],
+ &debugfs_port_v3_hw_fops);
+ }
+
+@@ -3818,7 +3854,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ snprintf(name, 256, "%d", c);
+
+ debugfs_create_file(name, 0400, dentry,
+- &hisi_hba->debugfs_cq[dump_index][c],
++ &hisi_hba->debugfs_cq[index][c],
+ &debugfs_cq_v3_hw_fops);
+ }
+
+@@ -3828,32 +3864,32 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ snprintf(name, 256, "%d", d);
+
+ debugfs_create_file(name, 0400, dentry,
+- &hisi_hba->debugfs_dq[dump_index][d],
++ &hisi_hba->debugfs_dq[index][d],
+ &debugfs_dq_v3_hw_fops);
+ }
+
+ debugfs_create_file("iost", 0400, dump_dentry,
+- &hisi_hba->debugfs_iost[dump_index],
++ &hisi_hba->debugfs_iost[index],
+ &debugfs_iost_v3_hw_fops);
+
+ debugfs_create_file("iost_cache", 0400, dump_dentry,
+- &hisi_hba->debugfs_iost_cache[dump_index],
++ &hisi_hba->debugfs_iost_cache[index],
+ &debugfs_iost_cache_v3_hw_fops);
+
+ debugfs_create_file("itct", 0400, dump_dentry,
+- &hisi_hba->debugfs_itct[dump_index],
++ &hisi_hba->debugfs_itct[index],
+ &debugfs_itct_v3_hw_fops);
+
+ debugfs_create_file("itct_cache", 0400, dump_dentry,
+- &hisi_hba->debugfs_itct_cache[dump_index],
++ &hisi_hba->debugfs_itct_cache[index],
+ &debugfs_itct_cache_v3_hw_fops);
+
+ debugfs_create_file("axi", 0400, dump_dentry,
+- &hisi_hba->debugfs_regs[dump_index][DEBUGFS_AXI],
++ &hisi_hba->debugfs_regs[index][DEBUGFS_AXI],
+ &debugfs_axi_v3_hw_fops);
+
+ debugfs_create_file("ras", 0400, dump_dentry,
+- &hisi_hba->debugfs_regs[dump_index][DEBUGFS_RAS],
++ &hisi_hba->debugfs_regs[index][DEBUGFS_RAS],
+ &debugfs_ras_v3_hw_fops);
+ }
+
+@@ -4516,22 +4552,34 @@ static void debugfs_release_v3_hw(struct hisi_hba *hisi_hba, int dump_index)
+ int i;
+
+ devm_kfree(dev, hisi_hba->debugfs_iost_cache[dump_index].cache);
++ hisi_hba->debugfs_iost_cache[dump_index].cache = NULL;
+ devm_kfree(dev, hisi_hba->debugfs_itct_cache[dump_index].cache);
++ hisi_hba->debugfs_itct_cache[dump_index].cache = NULL;
+ devm_kfree(dev, hisi_hba->debugfs_iost[dump_index].iost);
++ hisi_hba->debugfs_iost[dump_index].iost = NULL;
+ devm_kfree(dev, hisi_hba->debugfs_itct[dump_index].itct);
++ hisi_hba->debugfs_itct[dump_index].itct = NULL;
+
+- for (i = 0; i < hisi_hba->queue_count; i++)
++ for (i = 0; i < hisi_hba->queue_count; i++) {
+ devm_kfree(dev, hisi_hba->debugfs_dq[dump_index][i].hdr);
++ hisi_hba->debugfs_dq[dump_index][i].hdr = NULL;
++ }
+
+- for (i = 0; i < hisi_hba->queue_count; i++)
++ for (i = 0; i < hisi_hba->queue_count; i++) {
+ devm_kfree(dev,
+ hisi_hba->debugfs_cq[dump_index][i].complete_hdr);
++ hisi_hba->debugfs_cq[dump_index][i].complete_hdr = NULL;
++ }
+
+- for (i = 0; i < DEBUGFS_REGS_NUM; i++)
++ for (i = 0; i < DEBUGFS_REGS_NUM; i++) {
+ devm_kfree(dev, hisi_hba->debugfs_regs[dump_index][i].data);
++ hisi_hba->debugfs_regs[dump_index][i].data = NULL;
++ }
+
+- for (i = 0; i < hisi_hba->n_phy; i++)
++ for (i = 0; i < hisi_hba->n_phy; i++) {
+ devm_kfree(dev, hisi_hba->debugfs_port_reg[dump_index][i].data);
++ hisi_hba->debugfs_port_reg[dump_index][i].data = NULL;
++ }
+ }
+
+ static const struct hisi_sas_debugfs_reg *debugfs_reg_array_v3_hw[DEBUGFS_REGS_NUM] = {
+@@ -4658,8 +4706,6 @@ static int debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba)
+ debugfs_snapshot_itct_reg_v3_hw(hisi_hba);
+ debugfs_snapshot_iost_reg_v3_hw(hisi_hba);
+
+- debugfs_create_files_v3_hw(hisi_hba);
+-
+ debugfs_snapshot_restore_v3_hw(hisi_hba);
+ hisi_hba->debugfs_dump_index++;
+
+@@ -4743,6 +4789,17 @@ static void debugfs_bist_init_v3_hw(struct hisi_hba *hisi_hba)
+ hisi_hba->debugfs_bist_linkrate = SAS_LINK_RATE_1_5_GBPS;
+ }
+
++static void debugfs_dump_init_v3_hw(struct hisi_hba *hisi_hba)
++{
++ int i;
++
++ hisi_hba->debugfs_dump_dentry =
++ debugfs_create_dir("dump", hisi_hba->debugfs_dir);
++
++ for (i = 0; i < hisi_sas_debugfs_dump_count; i++)
++ debugfs_create_files_v3_hw(hisi_hba, i);
++}
++
+ static void debugfs_exit_v3_hw(struct hisi_hba *hisi_hba)
+ {
+ debugfs_remove_recursive(hisi_hba->debugfs_dir);
+@@ -4763,8 +4820,7 @@ static void debugfs_init_v3_hw(struct hisi_hba *hisi_hba)
+ /* create bist structures */
+ debugfs_bist_init_v3_hw(hisi_hba);
+
+- hisi_hba->debugfs_dump_dentry =
+- debugfs_create_dir("dump", hisi_hba->debugfs_dir);
++ debugfs_dump_init_v3_hw(hisi_hba);
+
+ debugfs_phy_down_cnt_init_v3_hw(hisi_hba);
+ debugfs_fifo_init_v3_hw(hisi_hba);
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 134bc96dd13400..ce3a1f42713dd8 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -2226,6 +2226,11 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ ulp_status, ulp_word4, latt);
+
+ if (latt || ulp_status) {
++ lpfc_printf_vlog(vport, KERN_WARNING, LOG_DISCOVERY,
++ "0229 FDMI cmd %04x failed, latt = %d "
++ "ulp_status: (x%x/x%x), sli_flag x%x\n",
++ be16_to_cpu(fdmi_cmd), latt, ulp_status,
++ ulp_word4, phba->sli.sli_flag);
+
+ /* Look for a retryable error */
+ if (ulp_status == IOSTAT_LOCAL_REJECT) {
+@@ -2234,8 +2239,16 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ case IOERR_SLI_DOWN:
+ /* Driver aborted this IO. No retry as error
+ * is likely Offline->Online or some adapter
+- * error. Recovery will try again.
++ * error. Recovery will try again, but if port
++ * is not active there's no point to continue
++ * issuing follow up FDMI commands.
+ */
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE)) {
++ free_ndlp = cmdiocb->ndlp;
++ lpfc_ct_free_iocb(phba, cmdiocb);
++ lpfc_nlp_put(free_ndlp);
++ return;
++ }
+ break;
+ case IOERR_ABORT_IN_PROGRESS:
+ case IOERR_SEQUENCE_TIMEOUT:
+@@ -2256,12 +2269,6 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ break;
+ }
+ }
+-
+- lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+- "0229 FDMI cmd %04x latt = %d "
+- "ulp_status: x%x, rid x%x\n",
+- be16_to_cpu(fdmi_cmd), latt, ulp_status,
+- ulp_word4);
+ }
+
+ free_ndlp = cmdiocb->ndlp;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 9241075f72fa4b..6e8d8a96c54fb3 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -155,6 +155,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ struct lpfc_hba *phba;
+ struct lpfc_work_evt *evtp;
+ unsigned long iflags;
++ bool nvme_reg = false;
+
+ ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode;
+ if (!ndlp)
+@@ -177,38 +178,49 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ /* Don't schedule a worker thread event if the vport is going down. */
+ if (test_bit(FC_UNLOADING, &vport->load_flag) ||
+ !test_bit(HBA_SETUP, &phba->hba_flag)) {
++
+ spin_lock_irqsave(&ndlp->lock, iflags);
+ ndlp->rport = NULL;
+
++ if (ndlp->fc4_xpt_flags & NVME_XPT_REGD)
++ nvme_reg = true;
++
+ /* The scsi_transport is done with the rport so lpfc cannot
+- * call to unregister. Remove the scsi transport reference
+- * and clean up the SCSI transport node details.
++ * call to unregister.
+ */
+- if (ndlp->fc4_xpt_flags & (NLP_XPT_REGD | SCSI_XPT_REGD)) {
++ if (ndlp->fc4_xpt_flags & SCSI_XPT_REGD) {
+ ndlp->fc4_xpt_flags &= ~SCSI_XPT_REGD;
+
+- /* NVME transport-registered rports need the
+- * NLP_XPT_REGD flag to complete an unregister.
++ /* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node,
++ * unregister calls were made to the scsi and nvme
++ * transports and refcnt was already decremented. Clear
++ * the NLP_XPT_REGD flag only if the NVME Rport is
++ * confirmed unregistered.
+ */
+- if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++ if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
+ ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
++ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ lpfc_nlp_put(ndlp); /* may free ndlp */
++ } else {
++ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ }
++ } else {
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
+- lpfc_nlp_put(ndlp);
+- spin_lock_irqsave(&ndlp->lock, iflags);
+ }
+
++ spin_lock_irqsave(&ndlp->lock, iflags);
++
+ /* Only 1 thread can drop the initial node reference. If
+ * another thread has set NLP_DROPPED, this thread is done.
+ */
+- if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD) &&
+- !(ndlp->nlp_flag & NLP_DROPPED)) {
+- ndlp->nlp_flag |= NLP_DROPPED;
++ if (nvme_reg || (ndlp->nlp_flag & NLP_DROPPED)) {
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
+- lpfc_nlp_put(ndlp);
+ return;
+ }
+
++ ndlp->nlp_flag |= NLP_DROPPED;
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ lpfc_nlp_put(ndlp);
+ return;
+ }
+
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 0dd451009b0791..a3658ef1141b26 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13518,6 +13518,8 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
+ /* Disable FW logging to host memory */
+ lpfc_ras_stop_fwlog(phba);
+
++ lpfc_sli4_queue_unset(phba);
++
+ /* Reset SLI4 HBA FCoE function */
+ lpfc_pci_function_reset(phba);
+
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 2ec6e55771b45a..6748fba48a07ed 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -5291,6 +5291,8 @@ lpfc_sli_brdrestart_s4(struct lpfc_hba *phba)
+ "0296 Restart HBA Data: x%x x%x\n",
+ phba->pport->port_state, psli->sli_flag);
+
++ lpfc_sli4_queue_unset(phba);
++
+ rc = lpfc_sli4_brdreset(phba);
+ if (rc) {
+ phba->link_state = LPFC_HBA_ERROR;
+@@ -17625,6 +17627,9 @@ lpfc_eq_destroy(struct lpfc_hba *phba, struct lpfc_queue *eq)
+ if (!eq)
+ return -ENODEV;
+
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(eq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17651,10 +17656,12 @@ lpfc_eq_destroy(struct lpfc_hba *phba, struct lpfc_queue *eq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, eq->phba->mbox_mem_pool);
+
++list_remove:
+ /* Remove eq from any list */
+ list_del_init(&eq->list);
+- mempool_free(mbox, eq->phba->mbox_mem_pool);
++
+ return status;
+ }
+
+@@ -17682,6 +17689,10 @@ lpfc_cq_destroy(struct lpfc_hba *phba, struct lpfc_queue *cq)
+ /* sanity check on queue memory */
+ if (!cq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(cq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17707,9 +17718,11 @@ lpfc_cq_destroy(struct lpfc_hba *phba, struct lpfc_queue *cq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, cq->phba->mbox_mem_pool);
++
++list_remove:
+ /* Remove cq from any list */
+ list_del_init(&cq->list);
+- mempool_free(mbox, cq->phba->mbox_mem_pool);
+ return status;
+ }
+
+@@ -17737,6 +17750,10 @@ lpfc_mq_destroy(struct lpfc_hba *phba, struct lpfc_queue *mq)
+ /* sanity check on queue memory */
+ if (!mq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(mq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17762,9 +17779,11 @@ lpfc_mq_destroy(struct lpfc_hba *phba, struct lpfc_queue *mq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, mq->phba->mbox_mem_pool);
++
++list_remove:
+ /* Remove mq from any list */
+ list_del_init(&mq->list);
+- mempool_free(mbox, mq->phba->mbox_mem_pool);
+ return status;
+ }
+
+@@ -17792,6 +17811,10 @@ lpfc_wq_destroy(struct lpfc_hba *phba, struct lpfc_queue *wq)
+ /* sanity check on queue memory */
+ if (!wq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(wq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17816,11 +17839,13 @@ lpfc_wq_destroy(struct lpfc_hba *phba, struct lpfc_queue *wq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, wq->phba->mbox_mem_pool);
++
++list_remove:
+ /* Remove wq from any list */
+ list_del_init(&wq->list);
+ kfree(wq->pring);
+ wq->pring = NULL;
+- mempool_free(mbox, wq->phba->mbox_mem_pool);
+ return status;
+ }
+
+@@ -17850,6 +17875,10 @@ lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ /* sanity check on queue memory */
+ if (!hrq || !drq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(hrq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17890,9 +17919,11 @@ lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, hrq->phba->mbox_mem_pool);
++
++list_remove:
+ list_del_init(&hrq->list);
+ list_del_init(&drq->list);
+- mempool_free(mbox, hrq->phba->mbox_mem_pool);
+ return status;
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 2810608acd963a..e6ece30c43486c 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -3304,6 +3304,7 @@ struct fc_function_template qla2xxx_transport_vport_functions = {
+ .show_host_node_name = 1,
+ .show_host_port_name = 1,
+ .show_host_supported_classes = 1,
++ .show_host_supported_speeds = 1,
+
+ .get_host_port_id = qla2x00_get_host_port_id,
+ .show_host_port_id = 1,
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 52dc9604f56746..10431a67d202bb 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -24,6 +24,7 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ {
+ struct bsg_job *bsg_job = sp->u.bsg_job;
+ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++ struct completion *comp = sp->comp;
+
+ ql_dbg(ql_dbg_user, sp->vha, 0x7009,
+ "%s: sp hdl %x, result=%x bsg ptr %p\n",
+@@ -35,6 +36,9 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ bsg_reply->result = res;
+ bsg_job_done(bsg_job, bsg_reply->result,
+ bsg_reply->reply_payload_rcv_len);
++
++ if (comp)
++ complete(comp);
+ }
+
+ void qla2x00_bsg_sp_free(srb_t *sp)
+@@ -490,16 +494,6 @@ qla2x00_process_ct(struct bsg_job *bsg_job)
+ goto done;
+ }
+
+- if ((req_sg_cnt != bsg_job->request_payload.sg_cnt) ||
+- (rsp_sg_cnt != bsg_job->reply_payload.sg_cnt)) {
+- ql_log(ql_log_warn, vha, 0x7011,
+- "request_sg_cnt: %x dma_request_sg_cnt: %x reply_sg_cnt:%x "
+- "dma_reply_sg_cnt: %x\n", bsg_job->request_payload.sg_cnt,
+- req_sg_cnt, bsg_job->reply_payload.sg_cnt, rsp_sg_cnt);
+- rval = -EAGAIN;
+- goto done_unmap_sg;
+- }
+-
+ if (!vha->flags.online) {
+ ql_log(ql_log_warn, vha, 0x7012,
+ "Host is not online.\n");
+@@ -3061,7 +3055,7 @@ qla24xx_bsg_request(struct bsg_job *bsg_job)
+
+ static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ {
+- bool found = false;
++ bool found, do_bsg_done;
+ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
+ scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
+ struct qla_hw_data *ha = vha->hw;
+@@ -3069,6 +3063,11 @@ static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ int cnt;
+ unsigned long flags;
+ struct req_que *req;
++ int rval;
++ DECLARE_COMPLETION_ONSTACK(comp);
++ uint32_t ratov_j;
++
++ found = do_bsg_done = false;
+
+ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ req = qpair->req;
+@@ -3080,42 +3079,104 @@ static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ sp->type == SRB_ELS_CMD_HST ||
+ sp->type == SRB_ELS_CMD_HST_NOLOGIN) &&
+ sp->u.bsg_job == bsg_job) {
+- req->outstanding_cmds[cnt] = NULL;
+- spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+-
+- if (!ha->flags.eeh_busy && ha->isp_ops->abort_command(sp)) {
+- ql_log(ql_log_warn, vha, 0x7089,
+- "mbx abort_command failed.\n");
+- bsg_reply->result = -EIO;
+- } else {
+- ql_dbg(ql_dbg_user, vha, 0x708a,
+- "mbx abort_command success.\n");
+- bsg_reply->result = 0;
+- }
+- /* ref: INIT */
+- kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ found = true;
+- goto done;
++ sp->comp = &comp;
++ break;
+ }
+ }
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+
+-done:
+- return found;
++ if (!found)
++ return false;
++
++ if (ha->flags.eeh_busy) {
++ /* skip over abort. EEH handling will return the bsg. Wait for it */
++ rval = QLA_SUCCESS;
++ ql_dbg(ql_dbg_user, vha, 0x802c,
++ "eeh encounter. bsg %p sp=%p handle=%x \n",
++ bsg_job, sp, sp->handle);
++ } else {
++ rval = ha->isp_ops->abort_command(sp);
++ ql_dbg(ql_dbg_user, vha, 0x802c,
++ "Aborting bsg %p sp=%p handle=%x rval=%x\n",
++ bsg_job, sp, sp->handle, rval);
++ }
++
++ switch (rval) {
++ case QLA_SUCCESS:
++ /* Wait for the command completion. */
++ ratov_j = ha->r_a_tov / 10 * 4 * 1000;
++ ratov_j = msecs_to_jiffies(ratov_j);
++
++ if (!wait_for_completion_timeout(&comp, ratov_j)) {
++ ql_log(ql_log_info, vha, 0x7089,
++ "bsg abort timeout. bsg=%p sp=%p handle %#x .\n",
++ bsg_job, sp, sp->handle);
++
++ do_bsg_done = true;
++ } else {
++ /* fw had returned the bsg */
++ ql_dbg(ql_dbg_user, vha, 0x708a,
++ "bsg abort success. bsg %p sp=%p handle=%#x\n",
++ bsg_job, sp, sp->handle);
++ do_bsg_done = false;
++ }
++ break;
++ default:
++ ql_log(ql_log_info, vha, 0x704f,
++ "bsg abort fail. bsg=%p sp=%p rval=%x.\n",
++ bsg_job, sp, rval);
++
++ do_bsg_done = true;
++ break;
++ }
++
++ if (!do_bsg_done)
++ return true;
++
++ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
++ /*
++ * recheck to make sure it's still the same bsg_job due to
++ * qp_lock_ptr was released earlier.
++ */
++ if (req->outstanding_cmds[cnt] &&
++ req->outstanding_cmds[cnt]->u.bsg_job != bsg_job) {
++ /* fw had returned the bsg */
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++ return true;
++ }
++ req->outstanding_cmds[cnt] = NULL;
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++
++ /* ref: INIT */
++ sp->comp = NULL;
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
++ bsg_reply->result = -ENXIO;
++ bsg_reply->reply_payload_rcv_len = 0;
++
++ ql_dbg(ql_dbg_user, vha, 0x7051,
++ "%s bsg_job_done : bsg %p result %#x sp %p.\n",
++ __func__, bsg_job, bsg_reply->result, sp);
++
++ bsg_job_done(bsg_job, bsg_reply->result, bsg_reply->reply_payload_rcv_len);
++
++ return true;
+ }
+
+ int
+ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ {
+- struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++ struct fc_bsg_request *bsg_request = bsg_job->request;
+ scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
+ struct qla_hw_data *ha = vha->hw;
+ int i;
+ struct qla_qpair *qpair;
+
+- ql_log(ql_log_info, vha, 0x708b, "%s CMD timeout. bsg ptr %p.\n",
+- __func__, bsg_job);
++ ql_log(ql_log_info, vha, 0x708b,
++ "%s CMD timeout. bsg ptr %p msgcode %x vendor cmd %x\n",
++ __func__, bsg_job, bsg_request->msgcode,
++ bsg_request->rqst_data.h_vendor.vendor_cmd[0]);
+
+ if (qla2x00_isp_reg_stat(ha)) {
+ ql_log(ql_log_info, vha, 0x9007,
+@@ -3136,7 +3197,6 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ }
+
+ ql_log(ql_log_info, vha, 0x708b, "SRB not found to abort.\n");
+- bsg_reply->result = -ENXIO;
+
+ done:
+ return 0;
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index 76703f2706b8e3..79879c4743e6dc 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -506,6 +506,7 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
+ return(NULL);
+ }
+
++ vha->irq_offset = QLA_BASE_VECTORS;
+ host = vha->host;
+ fc_vport->dd_data = vha;
+ /* New host info */
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 7f980e6141c282..7ab717ed72327e 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6902,12 +6902,15 @@ qla2x00_do_dpc(void *data)
+ set_user_nice(current, MIN_NICE);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+- while (!kthread_should_stop()) {
++ while (1) {
+ ql_dbg(ql_dbg_dpc, base_vha, 0x4000,
+ "DPC handler sleeping.\n");
+
+ schedule();
+
++ if (kthread_should_stop())
++ break;
++
+ if (test_and_clear_bit(DO_EEH_RECOVERY, &base_vha->dpc_flags))
+ qla_pci_set_eeh_busy(base_vha);
+
+@@ -6920,15 +6923,16 @@ qla2x00_do_dpc(void *data)
+ goto end_loop;
+ }
+
++ if (test_bit(UNLOADING, &base_vha->dpc_flags))
++ /* don't do any work. Wait to be terminated by kthread_stop */
++ goto end_loop;
++
+ ha->dpc_active = 1;
+
+ ql_dbg(ql_dbg_dpc + ql_dbg_verbose, base_vha, 0x4001,
+ "DPC handler waking up, dpc_flags=0x%lx.\n",
+ base_vha->dpc_flags);
+
+- if (test_bit(UNLOADING, &base_vha->dpc_flags))
+- break;
+-
+ if (IS_P3P_TYPE(ha)) {
+ if (IS_QLA8044(ha)) {
+ if (test_and_clear_bit(ISP_UNRECOVERABLE,
+@@ -7241,9 +7245,6 @@ qla2x00_do_dpc(void *data)
+ */
+ ha->dpc_active = 0;
+
+- /* Cleanup any residual CTX SRBs. */
+- qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
+-
+ return 0;
+ }
+
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index b52513eeeafa75..680ba180a67252 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -6447,7 +6447,7 @@ static int schedule_resp(struct scsi_cmnd *cmnd, struct sdebug_dev_info *devip,
+ }
+ sd_dp = &sqcp->sd_dp;
+
+- if (polled)
++ if (polled || (ndelay > 0 && ndelay < INCLUSIVE_TIMING_MAX_NS))
+ ns_from_boot = ktime_get_boottime_ns();
+
+ /* one of the resp_*() response functions is called here */
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 84334ab39c8107..94127868bedf8a 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -386,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp)
+ SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
+
+ mutex_lock(&sdp->open_rel_lock);
+- kref_put(&sfp->f_ref, sg_remove_sfp);
+ sdp->open_cnt--;
+
+ /* possibly many open()s waiting on exlude clearing, start many;
+@@ -398,6 +397,7 @@ sg_release(struct inode *inode, struct file *filp)
+ wake_up_interruptible(&sdp->open_wait);
+ }
+ mutex_unlock(&sdp->open_rel_lock);
++ kref_put(&sfp->f_ref, sg_remove_sfp);
+ return 0;
+ }
+
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index beb88f25dbb993..c9038284bc893d 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -3506,6 +3506,7 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ int i, cmd_nr, cmd_type, bt;
+ int retval = 0;
+ unsigned int blk;
++ bool cmd_mtiocget;
+ struct scsi_tape *STp = file->private_data;
+ struct st_modedef *STm;
+ struct st_partstat *STps;
+@@ -3619,6 +3620,7 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ */
+ if (mtc.mt_op != MTREW &&
+ mtc.mt_op != MTOFFL &&
++ mtc.mt_op != MTLOAD &&
+ mtc.mt_op != MTRETEN &&
+ mtc.mt_op != MTERASE &&
+ mtc.mt_op != MTSEEK &&
+@@ -3732,17 +3734,28 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ goto out;
+ }
+
++ cmd_mtiocget = cmd_type == _IOC_TYPE(MTIOCGET) && cmd_nr == _IOC_NR(MTIOCGET);
++
+ if ((i = flush_buffer(STp, 0)) < 0) {
+- retval = i;
+- goto out;
+- }
+- if (STp->can_partitions &&
+- (i = switch_partition(STp)) < 0) {
+- retval = i;
+- goto out;
++ if (cmd_mtiocget && STp->pos_unknown) {
++ /* flush fails -> modify status accordingly */
++ reset_state(STp);
++ STp->pos_unknown = 1;
++ } else { /* return error */
++ retval = i;
++ goto out;
++ }
++ } else { /* flush_buffer succeeds */
++ if (STp->can_partitions) {
++ i = switch_partition(STp);
++ if (i < 0) {
++ retval = i;
++ goto out;
++ }
++ }
+ }
+
+- if (cmd_type == _IOC_TYPE(MTIOCGET) && cmd_nr == _IOC_NR(MTIOCGET)) {
++ if (cmd_mtiocget) {
+ struct mtget mt_status;
+
+ if (_IOC_SIZE(cmd_in) != sizeof(struct mtget)) {
+@@ -3756,7 +3769,7 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ ((STp->density << MT_ST_DENSITY_SHIFT) & MT_ST_DENSITY_MASK);
+ mt_status.mt_blkno = STps->drv_block;
+ mt_status.mt_fileno = STps->drv_file;
+- if (STp->block_size != 0) {
++ if (STp->block_size != 0 && mt_status.mt_blkno >= 0) {
+ if (STps->rw == ST_WRITING)
+ mt_status.mt_blkno +=
+ (STp->buffer)->buffer_bytes / STp->block_size;
+diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
+index fe111bae38c8e1..5ea8887828c064 100644
+--- a/drivers/soc/imx/soc-imx8m.c
++++ b/drivers/soc/imx/soc-imx8m.c
+@@ -30,7 +30,7 @@
+
+ struct imx8_soc_data {
+ char *name;
+- u32 (*soc_revision)(void);
++ int (*soc_revision)(u32 *socrev);
+ };
+
+ static u64 soc_uid;
+@@ -51,24 +51,29 @@ static u32 imx8mq_soc_revision_from_atf(void)
+ static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
+ #endif
+
+-static u32 __init imx8mq_soc_revision(void)
++static int imx8mq_soc_revision(u32 *socrev)
+ {
+ struct device_node *np;
+ void __iomem *ocotp_base;
+ u32 magic;
+ u32 rev;
+ struct clk *clk;
++ int ret;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
+ if (!np)
+- return 0;
++ return -EINVAL;
+
+ ocotp_base = of_iomap(np, 0);
+- WARN_ON(!ocotp_base);
++ if (!ocotp_base) {
++ ret = -EINVAL;
++ goto err_iomap;
++ }
++
+ clk = of_clk_get_by_name(np, NULL);
+ if (IS_ERR(clk)) {
+- WARN_ON(IS_ERR(clk));
+- return 0;
++ ret = PTR_ERR(clk);
++ goto err_clk;
+ }
+
+ clk_prepare_enable(clk);
+@@ -88,32 +93,45 @@ static u32 __init imx8mq_soc_revision(void)
+ soc_uid <<= 32;
+ soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
+
++ *socrev = rev;
++
+ clk_disable_unprepare(clk);
+ clk_put(clk);
+ iounmap(ocotp_base);
+ of_node_put(np);
+
+- return rev;
++ return 0;
++
++err_clk:
++ iounmap(ocotp_base);
++err_iomap:
++ of_node_put(np);
++ return ret;
+ }
+
+-static void __init imx8mm_soc_uid(void)
++static int imx8mm_soc_uid(void)
+ {
+ void __iomem *ocotp_base;
+ struct device_node *np;
+ struct clk *clk;
++ int ret = 0;
+ u32 offset = of_machine_is_compatible("fsl,imx8mp") ?
+ IMX8MP_OCOTP_UID_OFFSET : 0;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
+ if (!np)
+- return;
++ return -EINVAL;
+
+ ocotp_base = of_iomap(np, 0);
+- WARN_ON(!ocotp_base);
++ if (!ocotp_base) {
++ ret = -EINVAL;
++ goto err_iomap;
++ }
++
+ clk = of_clk_get_by_name(np, NULL);
+ if (IS_ERR(clk)) {
+- WARN_ON(IS_ERR(clk));
+- return;
++ ret = PTR_ERR(clk);
++ goto err_clk;
+ }
+
+ clk_prepare_enable(clk);
+@@ -124,31 +142,41 @@ static void __init imx8mm_soc_uid(void)
+
+ clk_disable_unprepare(clk);
+ clk_put(clk);
++
++err_clk:
+ iounmap(ocotp_base);
++err_iomap:
+ of_node_put(np);
++
++ return ret;
+ }
+
+-static u32 __init imx8mm_soc_revision(void)
++static int imx8mm_soc_revision(u32 *socrev)
+ {
+ struct device_node *np;
+ void __iomem *anatop_base;
+- u32 rev;
++ int ret;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
+ if (!np)
+- return 0;
++ return -EINVAL;
+
+ anatop_base = of_iomap(np, 0);
+- WARN_ON(!anatop_base);
++ if (!anatop_base) {
++ ret = -EINVAL;
++ goto err_iomap;
++ }
+
+- rev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
++ *socrev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
+
+ iounmap(anatop_base);
+ of_node_put(np);
+
+- imx8mm_soc_uid();
++ return imx8mm_soc_uid();
+
+- return rev;
++err_iomap:
++ of_node_put(np);
++ return ret;
+ }
+
+ static const struct imx8_soc_data imx8mq_soc_data = {
+@@ -184,7 +212,7 @@ static __maybe_unused const struct of_device_id imx8_soc_match[] = {
+ kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \
+ "unknown"
+
+-static int __init imx8_soc_init(void)
++static int imx8m_soc_probe(struct platform_device *pdev)
+ {
+ struct soc_device_attribute *soc_dev_attr;
+ struct soc_device *soc_dev;
+@@ -212,8 +240,11 @@ static int __init imx8_soc_init(void)
+ data = id->data;
+ if (data) {
+ soc_dev_attr->soc_id = data->name;
+- if (data->soc_revision)
+- soc_rev = data->soc_revision();
++ if (data->soc_revision) {
++ ret = data->soc_revision(&soc_rev);
++ if (ret)
++ goto free_soc;
++ }
+ }
+
+ soc_dev_attr->revision = imx8_revision(soc_rev);
+@@ -251,6 +282,38 @@ static int __init imx8_soc_init(void)
+ kfree(soc_dev_attr);
+ return ret;
+ }
++
++static struct platform_driver imx8m_soc_driver = {
++ .probe = imx8m_soc_probe,
++ .driver = {
++ .name = "imx8m-soc",
++ },
++};
++
++static int __init imx8_soc_init(void)
++{
++ struct platform_device *pdev;
++ int ret;
++
++ /* No match means this is non-i.MX8M hardware, do nothing. */
++ if (!of_match_node(imx8_soc_match, of_root))
++ return 0;
++
++ ret = platform_driver_register(&imx8m_soc_driver);
++ if (ret) {
++ pr_err("Failed to register imx8m-soc platform driver: %d\n", ret);
++ return ret;
++ }
++
++ pdev = platform_device_register_simple("imx8m-soc", -1, NULL, 0);
++ if (IS_ERR(pdev)) {
++ pr_err("Failed to register imx8m-soc platform device: %ld\n", PTR_ERR(pdev));
++ platform_driver_unregister(&imx8m_soc_driver);
++ return PTR_ERR(pdev);
++ }
++
++ return 0;
++}
+ device_initcall(imx8_soc_init);
+ MODULE_DESCRIPTION("NXP i.MX8M SoC driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 28bcc65e91beb3..a470285f54a875 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -153,325 +153,2431 @@ enum llcc_reg_offset {
+ };
+
+ static const struct llcc_slice_config sa8775p_data[] = {
+- {LLCC_CPUSS, 1, 2048, 1, 0, 0x00FF, 0x0, 0, 0, 0, 1, 1, 0, 0},
+- {LLCC_VIDSC0, 2, 512, 3, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_CPUSS1, 3, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_CPUHWT, 5, 512, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPT, 10, 4096, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_GPUHTW, 11, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_GPU, 12, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 1, 0},
+- {LLCC_MMUHWT, 13, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 1, 0, 0},
+- {LLCC_CMPTDMA, 15, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_DISP, 16, 4096, 2, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_VIDFW, 17, 3072, 1, 0, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CVP, 28, 256, 3, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0},
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 1, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 2048,
++ .priority = 1,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 4096,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 3072,
++ .priority = 1,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config sc7180_data[] = {
+- { LLCC_CPUSS, 1, 256, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_MDM, 8, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 256,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 128,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 128,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 128,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sc7280_data[] = {
+- { LLCC_CPUSS, 1, 768, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 1, 0},
+- { LLCC_MDMHPGRW, 7, 512, 2, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_CMPT, 10, 768, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_GPUHTW, 11, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_GPU, 12, 512, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_MMUHWT, 13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 0, 1, 0},
+- { LLCC_MDMPNG, 21, 768, 0, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_WLHW, 24, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_MODPE, 29, 64, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 768,
++ .priority = 1,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 768,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sc8180x_data[] = {
+- { LLCC_CPUSS, 1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_VIDSC1, 3, 512, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPGRW, 7, 3072, 1, 1, 0x3ff, 0xc00, 0, 0, 0, 1, 0 },
+- { LLCC_MDM, 8, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPT, 10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 5120, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1 },
+- { LLCC_CMPTDMA, 15, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_VIDFW, 17, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPFX, 20, 1024, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0xc, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_WLHW, 24, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 512, 1, 1, 0xc, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_APTCM, 30, 512, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0 },
+- { LLCC_WRCACHE, 31, 128, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDSC1,
++ .slice_id = 3,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3ff,
++ .res_ways = 0xc00,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 5120,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 20,
++ .max_cap = 1024,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xc,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xc,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sc8280xp_data[] = {
+- { LLCC_CPUSS, 1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+- { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_CMPT, 10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_GPU, 12, 4096, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDHW, 22, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_ECC, 26, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0, 0 },
+- { LLCC_WRCACHE, 31, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CVPFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CPUSS1, 3, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CVPFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+-static const struct llcc_slice_config sdm845_data[] = {
+- { LLCC_CPUSS, 1, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0 },
+- { LLCC_VIDSC1, 3, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0 },
+- { LLCC_ROTATOR, 4, 563, 2, 1, 0x0, 0x00e, 2, 0, 1, 1, 0 },
+- { LLCC_VOICE, 5, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_AUDIO, 6, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_MDMHPGRW, 7, 1024, 2, 0, 0xfc, 0xf00, 0, 0, 1, 1, 0 },
+- { LLCC_MDM, 8, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_CMPT, 10, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_GPUHTW, 11, 512, 1, 1, 0xc, 0x0, 0, 0, 1, 1, 0 },
+- { LLCC_GPU, 12, 2304, 1, 0, 0xff0, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_MMUHWT, 13, 256, 2, 0, 0x0, 0x1, 0, 0, 1, 0, 1 },
+- { LLCC_CMPTDMA, 15, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_DISP, 16, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_VIDFW, 17, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_MDMHPFX, 20, 1024, 2, 1, 0x0, 0xf00, 0, 0, 1, 1, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0x1e, 0x0, 0, 0, 1, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xffc, 0x2, 0, 0, 1, 1, 0 },
++static const struct llcc_slice_config sdm845_data[] = {{
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDSC1,
++ .slice_id = 3,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ROTATOR,
++ .slice_id = 4,
++ .max_cap = 563,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xe,
++ .cache_mode = 2,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VOICE,
++ .slice_id = 5,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 2,
++ .bonus_ways = 0xfc,
++ .res_ways = 0xf00,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xc,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 2304,
++ .priority = 1,
++ .bonus_ways = 0xff0,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 256,
++ .priority = 2,
++ .res_ways = 0x1,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 20,
++ .max_cap = 1024,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xf00,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x1e,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm6350_data[] = {
+- { LLCC_CPUSS, 1, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 1 },
+- { LLCC_MDM, 8, 512, 2, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 256, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 512, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMPNG, 21, 768, 0, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 64, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 768,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 512,
++ .priority = 2,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 256,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 768,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 768,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm7150_data[] = {
+- { LLCC_CPUSS, 1, 512, 1, 0, 0xF, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_MDM, 8, 128, 2, 0, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 512, 1, 0, 0xF, 0x0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 128,
++ .priority = 2,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm8150_data[] = {
+- { LLCC_CPUSS, 1, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_VIDSC1, 3, 512, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPGRW, 7, 3072, 1, 0, 0xFF, 0xF00, 0, 0, 0, 1, 0 },
+- { LLCC_MDM, 8, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPT, 10, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW , 11, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 2560, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1 },
+- { LLCC_CMPTDMA, 15, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_DISP, 16, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPFX, 20, 1024, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPFX, 21, 1024, 0, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_WLHW, 24, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_APTCM, 30, 256, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0 },
+- { LLCC_WRCACHE, 31, 128, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDSC1,
++ .slice_id = 3,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 3072,
++ .priority = 1,
++ .bonus_ways = 0xff,
++ .res_ways = 0xf00,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 2560,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 20,
++ .max_cap = 1024,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sm8250_data[] = {
+- { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+- { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_CMPT, 10, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPTDMA, 15, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_DISP, 16, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_VIDFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_NPU, 23, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_WLHW, 24, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CVP, 28, 256, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_APTCM, 30, 128, 3, 0, 0x0, 0x3, 1, 0, 0, 1, 0, 0 },
+- { LLCC_WRCACHE, 31, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 128,
++ .priority = 3,
++ .res_ways = 0x3,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm8350_data[] = {
+- { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPT, 10, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 },
+- { LLCC_DISP, 16, 3072, 2, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0xf, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 256, 1, 1, 0xf, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 0, 1, 0 },
+- { LLCC_WRCACHE, 31, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 },
+- { LLCC_CVPFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CPUSS1, 3, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 3,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 3072,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_CVPFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm8450_data[] = {
+- {LLCC_CPUSS, 1, 3072, 1, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 },
+- {LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
+- {LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_MODHW, 9, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_GPU, 12, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 1, 0 },
+- {LLCC_MMUHWT, 13, 768, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- {LLCC_DISP, 16, 4096, 2, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_MDMPNG, 21, 1024, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
+- {LLCC_CVP, 28, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_MODPE, 29, 64, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0 },
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- {LLCC_CVPFW, 17, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CPUSS1, 3, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CAMEXP0, 4, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CPUMTE, 23, 256, 1, 1, 0x0FFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- {LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 },
+- {LLCC_CAMEXP1, 27, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_AENPU, 8, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 3,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 4096,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf000,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf000,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CVPFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUMTE,
++ .slice_id = 23,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 27,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 8,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sm8550_data[] = {
+- {LLCC_CPUSS, 1, 5120, 1, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_VIDSC0, 2, 512, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MDMHPGRW, 25, 1024, 4, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MODHW, 26, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_GPU, 9, 3096, 1, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MMUHWT, 18, 768, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_DISP, 16, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MDMPNG, 27, 1024, 0, 1, 0xF00000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CVP, 8, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MODPE, 29, 64, 1, 1, 0xF00000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, },
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP0, 4, 256, 4, 1, 0xF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP1, 7, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CMPTHCP, 17, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_LCPDARE, 30, 128, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, },
+- {LLCC_AENPU, 3, 3072, 1, 1, 0xFE01FF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_ISLAND1, 12, 1792, 7, 1, 0xFE00, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_ISLAND4, 15, 256, 7, 1, 0x10000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP2, 19, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP3, 20, 3200, 2, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP4, 21, 3200, 2, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_DISP_WB, 23, 1024, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_DISP_1, 24, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_VIDVSP, 28, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 5120,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 25,
++ .max_cap = 1024,
++ .priority = 4,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 9,
++ .max_cap = 3096,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ .write_scid_cacheable_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 18,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 27,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xf00000,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 8,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf00000,
++ .cache_mode = 0,
++ .alloc_oneway_en = true,
++ .vict_prio = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 7,
++ .max_cap = 3200,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CMPTHCP,
++ .slice_id = 17,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_LCPDARE,
++ .slice_id = 30,
++ .max_cap = 128,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .alloc_oneway_en = true,
++ .vict_prio = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 3,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfe01ff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_ISLAND1,
++ .slice_id = 12,
++ .max_cap = 1792,
++ .priority = 7,
++ .fixed_size = true,
++ .bonus_ways = 0xfe00,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_ISLAND4,
++ .slice_id = 15,
++ .max_cap = 256,
++ .priority = 7,
++ .fixed_size = true,
++ .bonus_ways = 0x10000,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP2,
++ .slice_id = 19,
++ .max_cap = 3200,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP3,
++ .slice_id = 20,
++ .max_cap = 3200,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP4,
++ .slice_id = 21,
++ .max_cap = 3200,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_DISP_WB,
++ .slice_id = 23,
++ .max_cap = 1024,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_DISP_1,
++ .slice_id = 24,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_VIDVSP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sm8650_data[] = {
+- {LLCC_CPUSS, 1, 5120, 1, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDIO, 6, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MDMHPGRW, 25, 1024, 3, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MODHW, 26, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPU, 9, 3096, 1, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MMUHWT, 18, 768, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_DISP, 16, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MDMHPFX, 24, 1024, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MDMPNG, 27, 1024, 0, 1, 0x000000, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CVP, 8, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MODPE, 29, 128, 1, 1, 0xF00000, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP0, 4, 256, 3, 1, 0xF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP1, 7, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPTHCP, 17, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_LCPDARE, 30, 128, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AENPU, 3, 3072, 1, 1, 0xFFFFFF, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_ISLAND1, 12, 5888, 7, 1, 0x0, 0x7FFFFF, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_DISP_WB, 23, 1024, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_VIDVSP, 28, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 5120,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .stale_en = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 25,
++ .max_cap = 1024,
++ .priority = 3,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 9,
++ .max_cap = 3096,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ .write_scid_cacheable_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 18,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 24,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 27,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 8,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf00000,
++ .cache_mode = 0,
++ .alloc_oneway_en = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 7,
++ .max_cap = 3200,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CMPTHCP,
++ .slice_id = 17,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_LCPDARE,
++ .slice_id = 30,
++ .max_cap = 128,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .alloc_oneway_en = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 3,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_ISLAND1,
++ .slice_id = 12,
++ .max_cap = 5888,
++ .priority = 7,
++ .fixed_size = true,
++ .res_ways = 0x7fffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_DISP_WB,
++ .slice_id = 23,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_VIDVSP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config qdu1000_data_2ch[] = {
+- { LLCC_MDMHPGRW, 7, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MODHW, 9, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MDMPNG, 21, 256, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_ECC, 26, 512, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_MODPE, 29, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_APTCM, 30, 256, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+- { LLCC_WRCACHE, 31, 128, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
++ {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 256,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xc,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config qdu1000_data_4ch[] = {
+- { LLCC_MDMHPGRW, 7, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MODHW, 9, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MDMPNG, 21, 512, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_ECC, 26, 1024, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_MODPE, 29, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_APTCM, 30, 512, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+- { LLCC_WRCACHE, 31, 256, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
++ {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 512,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xc,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config qdu1000_data_8ch[] = {
+- { LLCC_MDMHPGRW, 7, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_ECC, 26, 2048, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_MODPE, 29, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+- { LLCC_WRCACHE, 31, 512, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
++ {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 2048,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xc,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config x1e80100_data[] = {
+- {LLCC_CPUSS, 1, 6144, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_VIDSC0, 2, 512, 4, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPT, 10, 6144, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPU, 9, 4608, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MMUHWT, 18, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CVP, 8, 512, 4, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_WRCACHE, 31, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP0, 4, 256, 4, 1, 0x3, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP1, 7, 3072, 3, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_LCPDARE, 30, 512, 3, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AENPU, 3, 3072, 1, 1, 0xFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_ISLAND1, 12, 2048, 7, 1, 0x0, 0xF, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP2, 19, 3072, 3, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP3, 20, 3072, 2, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP4, 21, 3072, 2, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 9,
++ .max_cap = 4608,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ .write_scid_cacheable_en = true,
++ .stale_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 18,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 8,
++ .max_cap = 512,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 7,
++ .max_cap = 3072,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_LCPDARE,
++ .slice_id = 30,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .alloc_oneway_en = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 3,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_ISLAND1,
++ .slice_id = 12,
++ .max_cap = 2048,
++ .priority = 7,
++ .fixed_size = true,
++ .res_ways = 0xf,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP2,
++ .slice_id = 19,
++ .max_cap = 3072,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP3,
++ .slice_id = 20,
++ .max_cap = 3072,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP4,
++ .slice_id = 21,
++ .max_cap = 3072,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ },
+ };
+
+ static const struct llcc_edac_reg_offset llcc_v1_edac_reg_offset = {
+diff --git a/drivers/soc/qcom/qcom_pd_mapper.c b/drivers/soc/qcom/qcom_pd_mapper.c
+index c940f4da28ed5c..6e30f08761aa43 100644
+--- a/drivers/soc/qcom/qcom_pd_mapper.c
++++ b/drivers/soc/qcom/qcom_pd_mapper.c
+@@ -540,6 +540,7 @@ static const struct of_device_id qcom_pdm_domains[] __maybe_unused = {
+ { .compatible = "qcom,msm8996", .data = msm8996_domains, },
+ { .compatible = "qcom,msm8998", .data = msm8998_domains, },
+ { .compatible = "qcom,qcm2290", .data = qcm2290_domains, },
++ { .compatible = "qcom,qcm6490", .data = sc7280_domains, },
+ { .compatible = "qcom,qcs404", .data = qcs404_domains, },
+ { .compatible = "qcom,sc7180", .data = sc7180_domains, },
+ { .compatible = "qcom,sc7280", .data = sc7280_domains, },
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 9573b8fa4fbfc6..29b9676fe43d89 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -315,9 +315,10 @@ static void fsl_lpspi_set_watermark(struct fsl_lpspi_data *fsl_lpspi)
+ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ {
+ struct lpspi_config config = fsl_lpspi->config;
+- unsigned int perclk_rate, scldiv, div;
++ unsigned int perclk_rate, div;
+ u8 prescale_max;
+ u8 prescale;
++ int scldiv;
+
+ perclk_rate = clk_get_rate(fsl_lpspi->clk_per);
+ prescale_max = fsl_lpspi->devtype_data->prescale_max;
+@@ -338,13 +339,13 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+
+ for (prescale = 0; prescale <= prescale_max; prescale++) {
+ scldiv = div / (1 << prescale) - 2;
+- if (scldiv < 256) {
++ if (scldiv >= 0 && scldiv < 256) {
+ fsl_lpspi->config.prescale = prescale;
+ break;
+ }
+ }
+
+- if (scldiv >= 256)
++ if (scldiv < 0 || scldiv >= 256)
+ return -EINVAL;
+
+ writel(scldiv | (scldiv << 8) | ((scldiv >> 1) << 16),
+diff --git a/drivers/spi/spi-mpc52xx.c b/drivers/spi/spi-mpc52xx.c
+index d5ac60c135c20a..159f359d7501aa 100644
+--- a/drivers/spi/spi-mpc52xx.c
++++ b/drivers/spi/spi-mpc52xx.c
+@@ -520,6 +520,7 @@ static void mpc52xx_spi_remove(struct platform_device *op)
+ struct mpc52xx_spi *ms = spi_controller_get_devdata(host);
+ int i;
+
++ cancel_work_sync(&ms->work);
+ free_irq(ms->irq0, ms);
+ free_irq(ms->irq1, ms);
+
+diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c
+index dc1c4ae2d8b01b..1a7874676f68e4 100644
+--- a/drivers/thermal/qcom/tsens-v1.c
++++ b/drivers/thermal/qcom/tsens-v1.c
+@@ -162,28 +162,35 @@ struct tsens_plat_data data_tsens_v1 = {
+ .fields = tsens_v1_regfields,
+ };
+
+-static const struct tsens_ops ops_8956 = {
+- .init = init_8956,
++static const struct tsens_ops ops_common = {
++ .init = init_common,
+ .calibrate = tsens_calibrate_common,
+ .get_temp = get_temp_tsens_valid,
+ };
+
+-struct tsens_plat_data data_8956 = {
++struct tsens_plat_data data_8937 = {
+ .num_sensors = 11,
+- .ops = &ops_8956,
++ .ops = &ops_common,
+ .feat = &tsens_v1_feat,
+ .fields = tsens_v1_regfields,
+ };
+
+-static const struct tsens_ops ops_8976 = {
+- .init = init_common,
++static const struct tsens_ops ops_8956 = {
++ .init = init_8956,
+ .calibrate = tsens_calibrate_common,
+ .get_temp = get_temp_tsens_valid,
+ };
+
++struct tsens_plat_data data_8956 = {
++ .num_sensors = 11,
++ .ops = &ops_8956,
++ .feat = &tsens_v1_feat,
++ .fields = tsens_v1_regfields,
++};
++
+ struct tsens_plat_data data_8976 = {
+ .num_sensors = 11,
+- .ops = &ops_8976,
++ .ops = &ops_common,
+ .feat = &tsens_v1_feat,
+ .fields = tsens_v1_regfields,
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index 0b4421bf478544..d2db804692f01d 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -1119,6 +1119,9 @@ static const struct of_device_id tsens_table[] = {
+ }, {
+ .compatible = "qcom,msm8916-tsens",
+ .data = &data_8916,
++ }, {
++ .compatible = "qcom,msm8937-tsens",
++ .data = &data_8937,
+ }, {
+ .compatible = "qcom,msm8939-tsens",
+ .data = &data_8939,
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index cab39de045b100..7b36a0318fa6a0 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -647,7 +647,7 @@ extern struct tsens_plat_data data_8960;
+ extern struct tsens_plat_data data_8226, data_8909, data_8916, data_8939, data_8974, data_9607;
+
+ /* TSENS v1 targets */
+-extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
++extern struct tsens_plat_data data_tsens_v1, data_8937, data_8976, data_8956;
+
+ /* TSENS v2 targets */
+ extern struct tsens_plat_data data_8996, data_ipq8074, data_tsens_v2;
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index ab9e7f20426025..51894c93c8a313 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -750,7 +750,7 @@ static const struct dw8250_platform_data dw8250_renesas_rzn1_data = {
+ .quirks = DW_UART_QUIRK_CPR_VALUE | DW_UART_QUIRK_IS_DMA_FC,
+ };
+
+-static const struct dw8250_platform_data dw8250_starfive_jh7100_data = {
++static const struct dw8250_platform_data dw8250_skip_set_rate_data = {
+ .usr_reg = DW_UART_USR,
+ .quirks = DW_UART_QUIRK_SKIP_SET_RATE,
+ };
+@@ -760,7 +760,8 @@ static const struct of_device_id dw8250_of_match[] = {
+ { .compatible = "cavium,octeon-3860-uart", .data = &dw8250_octeon_3860_data },
+ { .compatible = "marvell,armada-38x-uart", .data = &dw8250_armada_38x_data },
+ { .compatible = "renesas,rzn1-uart", .data = &dw8250_renesas_rzn1_data },
+- { .compatible = "starfive,jh7100-uart", .data = &dw8250_starfive_jh7100_data },
++ { .compatible = "sophgo,sg2044-uart", .data = &dw8250_skip_set_rate_data },
++ { .compatible = "starfive,jh7100-uart", .data = &dw8250_skip_set_rate_data },
+ { /* Sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, dw8250_of_match);
+diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
+index 265f21133b633e..796e37a1d859f2 100644
+--- a/drivers/ufs/core/ufs-sysfs.c
++++ b/drivers/ufs/core/ufs-sysfs.c
+@@ -670,6 +670,9 @@ static ssize_t read_req_latency_avg_show(struct device *dev,
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+ struct ufs_hba_monitor *m = &hba->monitor;
+
++ if (!m->nr_req[READ])
++ return sysfs_emit(buf, "0\n");
++
+ return sysfs_emit(buf, "%llu\n", div_u64(ktime_to_us(m->lat_sum[READ]),
+ m->nr_req[READ]));
+ }
+@@ -737,6 +740,9 @@ static ssize_t write_req_latency_avg_show(struct device *dev,
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+ struct ufs_hba_monitor *m = &hba->monitor;
+
++ if (!m->nr_req[WRITE])
++ return sysfs_emit(buf, "0\n");
++
+ return sysfs_emit(buf, "%llu\n", div_u64(ktime_to_us(m->lat_sum[WRITE]),
+ m->nr_req[WRITE]));
+ }
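
Both sysfs hunks above add the same guard: when no READ or WRITE request has been monitored yet, `nr_req` is still zero and the `div_u64()` would divide by zero. A minimal sketch of the guarded average, with plain integers standing in for the ktime values:

#include <stdint.h>
#include <stdio.h>

/* Average latency in microseconds, or 0 when no request has been
 * monitored yet -- mirroring the hunks above, which would otherwise
 * divide by a zero nr_req counter. */
static uint64_t avg_lat_us(uint64_t lat_sum_us, uint64_t nr_req)
{
	if (!nr_req)
		return 0;
	return lat_sum_us / nr_req;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)avg_lat_us(0, 0));	/* 0, no div-by-zero */
	printf("%llu\n", (unsigned long long)avg_lat_us(1500, 3));	/* 500 */
	return 0;
}
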
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 433d0480391ea6..6c09d97ae00658 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -170,7 +170,7 @@ static int ufs_bsg_request(struct bsg_job *job)
+ break;
+ case UPIU_TRANSACTION_UIC_CMD:
+ memcpy(&uc, &bsg_request->upiu_req.uc, UIC_CMD_SIZE);
+- ret = ufshcd_send_uic_cmd(hba, &uc);
++ ret = ufshcd_send_bsg_uic_cmd(hba, &uc);
+ if (ret)
+ dev_err(hba->dev, "send uic cmd: error code %d\n", ret);
+
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index 7aea8fbaeee882..9ffd94ddf8c7ce 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -84,6 +84,7 @@ int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
+ u8 **buf, bool ascii);
+
+ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
++int ufshcd_send_bsg_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
+
+ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
+ struct utp_upiu_req *req_upiu,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index abbe7135a97787..cfebe4a1af9e84 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2411,8 +2411,6 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+ int err;
+
+ hba->capabilities = ufshcd_readl(hba, REG_CONTROLLER_CAPABILITIES);
+- if (hba->quirks & UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS)
+- hba->capabilities &= ~MASK_64_ADDRESSING_SUPPORT;
+
+ /* nutrs and nutmrs are 0 based values */
+ hba->nutrs = (hba->capabilities & MASK_TRANSFER_REQUESTS_SLOTS_SDB) + 1;
+@@ -2551,13 +2549,11 @@ ufshcd_wait_for_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ * __ufshcd_send_uic_cmd - Send UIC commands and retrieve the result
+ * @hba: per adapter instance
+ * @uic_cmd: UIC command
+- * @completion: initialize the completion only if this is set to true
+ *
+ * Return: 0 only if success.
+ */
+ static int
+-__ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd,
+- bool completion)
++__ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ {
+ lockdep_assert_held(&hba->uic_cmd_mutex);
+
+@@ -2567,8 +2563,7 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd,
+ return -EIO;
+ }
+
+- if (completion)
+- init_completion(&uic_cmd->done);
++ init_completion(&uic_cmd->done);
+
+ uic_cmd->cmd_active = 1;
+ ufshcd_dispatch_uic_cmd(hba, uic_cmd);
+@@ -2594,7 +2589,7 @@ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ mutex_lock(&hba->uic_cmd_mutex);
+ ufshcd_add_delay_before_dme_cmd(hba);
+
+- ret = __ufshcd_send_uic_cmd(hba, uic_cmd, true);
++ ret = __ufshcd_send_uic_cmd(hba, uic_cmd);
+ if (!ret)
+ ret = ufshcd_wait_for_uic_cmd(hba, uic_cmd);
+
+@@ -4288,7 +4283,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ reenable_intr = true;
+ }
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+- ret = __ufshcd_send_uic_cmd(hba, cmd, false);
++ ret = __ufshcd_send_uic_cmd(hba, cmd);
+ if (ret) {
+ dev_err(hba->dev,
+ "pwr ctrl cmd 0x%x with mode 0x%x uic error %d\n",
+@@ -4343,6 +4338,42 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ return ret;
+ }
+
++/**
++ * ufshcd_send_bsg_uic_cmd - Send UIC commands requested via BSG layer and retrieve the result
++ * @hba: per adapter instance
++ * @uic_cmd: UIC command
++ *
++ * Return: 0 only if success.
++ */
++int ufshcd_send_bsg_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
++{
++ int ret;
++
++ if (hba->quirks & UFSHCD_QUIRK_BROKEN_UIC_CMD)
++ return 0;
++
++ ufshcd_hold(hba);
++
++ if (uic_cmd->argument1 == UIC_ARG_MIB(PA_PWRMODE) &&
++ uic_cmd->command == UIC_CMD_DME_SET) {
++ ret = ufshcd_uic_pwr_ctrl(hba, uic_cmd);
++ goto out;
++ }
++
++ mutex_lock(&hba->uic_cmd_mutex);
++ ufshcd_add_delay_before_dme_cmd(hba);
++
++ ret = __ufshcd_send_uic_cmd(hba, uic_cmd);
++ if (!ret)
++ ret = ufshcd_wait_for_uic_cmd(hba, uic_cmd);
++
++ mutex_unlock(&hba->uic_cmd_mutex);
++
++out:
++ ufshcd_release(hba);
++ return ret;
++}
++
+ /**
+ * ufshcd_uic_change_pwr_mode - Perform the UIC power mode change
+ * using DME_SET primitives.
+@@ -4651,9 +4682,6 @@ static int ufshcd_change_power_mode(struct ufs_hba *hba,
+ dev_err(hba->dev,
+ "%s: power mode change failed %d\n", __func__, ret);
+ } else {
+- ufshcd_vops_pwr_change_notify(hba, POST_CHANGE, NULL,
+- pwr_mode);
+-
+ memcpy(&hba->pwr_info, pwr_mode,
+ sizeof(struct ufs_pa_layer_attr));
+ }
+@@ -4682,6 +4710,10 @@ int ufshcd_config_pwr_mode(struct ufs_hba *hba,
+
+ ret = ufshcd_change_power_mode(hba, &final_params);
+
++ if (!ret)
++ ufshcd_vops_pwr_change_notify(hba, POST_CHANGE, NULL,
++ &final_params);
++
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_config_pwr_mode);
+@@ -10231,6 +10263,7 @@ void ufshcd_remove(struct ufs_hba *hba)
+ ufs_hwmon_remove(hba);
+ ufs_bsg_remove(hba);
+ ufs_sysfs_remove_nodes(hba->dev);
++ cancel_delayed_work_sync(&hba->ufs_rtc_update_work);
+ blk_mq_destroy_queue(hba->tmf_queue);
+ blk_put_queue(hba->tmf_queue);
+ blk_mq_free_tag_set(&hba->tmf_tag_set);
+@@ -10309,6 +10342,8 @@ EXPORT_SYMBOL_GPL(ufshcd_dealloc_host);
+ */
+ static int ufshcd_set_dma_mask(struct ufs_hba *hba)
+ {
++ if (hba->vops && hba->vops->set_dma_mask)
++ return hba->vops->set_dma_mask(hba);
+ if (hba->capabilities & MASK_64_ADDRESSING_SUPPORT) {
+ if (!dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(64)))
+ return 0;
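
The `set_dma_mask` branch added to `ufshcd_set_dma_mask()` is the classic optional-ops-hook pattern: a vendor callback, when present, replaces the capability-based default (the Renesas glue further below uses it to force 32-bit DMA instead of the removed BROKEN_64BIT_ADDRESS quirk). A minimal sketch of the dispatch shape, with illustrative structure and function names rather than the ufshcd ones:

#include <stdio.h>

struct host;

struct host_ops {
	/* optional hook: when NULL the core's default policy applies */
	int (*set_dma_mask)(struct host *h);
};

struct host {
	const struct host_ops *ops;
	int supports_64bit;
};

static int default_set_dma_mask(struct host *h)
{
	printf("core: %d-bit DMA mask\n", h->supports_64bit ? 64 : 32);
	return 0;
}

static int core_set_dma_mask(struct host *h)
{
	/* vendor hook wins when present, mirroring the ufshcd hunk */
	if (h->ops && h->ops->set_dma_mask)
		return h->ops->set_dma_mask(h);
	return default_set_dma_mask(h);
}

static int renesas_like_hook(struct host *h)
{
	/* e.g. force 32-bit addressing regardless of capabilities */
	(void)h;
	printf("vendor: 32-bit DMA mask\n");
	return 0;
}

int main(void)
{
	const struct host_ops vops = { .set_dma_mask = renesas_like_hook };
	struct host a = { .ops = NULL, .supports_64bit = 1 };
	struct host b = { .ops = &vops, .supports_64bit = 1 };

	core_set_dma_mask(&a);	/* core: 64-bit DMA mask */
	core_set_dma_mask(&b);	/* vendor: 32-bit DMA mask */
	return 0;
}
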
+diff --git a/drivers/ufs/host/cdns-pltfrm.c b/drivers/ufs/host/cdns-pltfrm.c
+index 66811d8d1929c1..b31aa84111511b 100644
+--- a/drivers/ufs/host/cdns-pltfrm.c
++++ b/drivers/ufs/host/cdns-pltfrm.c
+@@ -307,9 +307,7 @@ static int cdns_ufs_pltfrm_probe(struct platform_device *pdev)
+ */
+ static void cdns_ufs_pltfrm_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops cdns_ufs_dev_pm_ops = {
+diff --git a/drivers/ufs/host/tc-dwc-g210-pltfrm.c b/drivers/ufs/host/tc-dwc-g210-pltfrm.c
+index a3877592604d5d..c6f8565ede21a1 100644
+--- a/drivers/ufs/host/tc-dwc-g210-pltfrm.c
++++ b/drivers/ufs/host/tc-dwc-g210-pltfrm.c
+@@ -76,10 +76,7 @@ static int tc_dwc_g210_pltfm_probe(struct platform_device *pdev)
+ */
+ static void tc_dwc_g210_pltfm_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops tc_dwc_g210_pltfm_pm_ops = {
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index fb550a7c16b34b..98505c68103d0e 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -1963,8 +1963,7 @@ static void exynos_ufs_remove(struct platform_device *pdev)
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+
+ phy_power_off(ufs->phy);
+ phy_exit(ufs->phy);
+diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c
+index 5ee73ff052512b..501609521b2609 100644
+--- a/drivers/ufs/host/ufs-hisi.c
++++ b/drivers/ufs/host/ufs-hisi.c
+@@ -576,9 +576,7 @@ static int ufs_hisi_probe(struct platform_device *pdev)
+
+ static void ufs_hisi_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops ufs_hisi_pm_ops = {
+diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
+index 9a5919434c4e0d..c834d38921b6cb 100644
+--- a/drivers/ufs/host/ufs-mediatek.c
++++ b/drivers/ufs/host/ufs-mediatek.c
+@@ -1869,10 +1869,7 @@ static int ufs_mtk_probe(struct platform_device *pdev)
+ */
+ static void ufs_mtk_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index ecdfff2456e31d..91127fb171864f 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -1843,10 +1843,11 @@ static int ufs_qcom_probe(struct platform_device *pdev)
+ static void ufs_qcom_remove(struct platform_device *pdev)
+ {
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
++ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
+- platform_device_msi_free_irqs_all(hba->dev);
++ ufshcd_pltfrm_remove(pdev);
++ if (host->esi_enabled)
++ platform_device_msi_free_irqs_all(hba->dev);
+ }
+
+ static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = {
+diff --git a/drivers/ufs/host/ufs-renesas.c b/drivers/ufs/host/ufs-renesas.c
+index 8711e5cbc9680a..21a64b34397d8c 100644
+--- a/drivers/ufs/host/ufs-renesas.c
++++ b/drivers/ufs/host/ufs-renesas.c
+@@ -7,6 +7,7 @@
+
+ #include <linux/clk.h>
+ #include <linux/delay.h>
++#include <linux/dma-mapping.h>
+ #include <linux/err.h>
+ #include <linux/iopoll.h>
+ #include <linux/kernel.h>
+@@ -364,14 +365,20 @@ static int ufs_renesas_init(struct ufs_hba *hba)
+ return -ENOMEM;
+ ufshcd_set_variant(hba, priv);
+
+- hba->quirks |= UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS | UFSHCD_QUIRK_HIBERN_FASTAUTO;
++ hba->quirks |= UFSHCD_QUIRK_HIBERN_FASTAUTO;
+
+ return 0;
+ }
+
++static int ufs_renesas_set_dma_mask(struct ufs_hba *hba)
++{
++ return dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(32));
++}
++
+ static const struct ufs_hba_variant_ops ufs_renesas_vops = {
+ .name = "renesas",
+ .init = ufs_renesas_init,
++ .set_dma_mask = ufs_renesas_set_dma_mask,
+ .setup_clocks = ufs_renesas_setup_clocks,
+ .hce_enable_notify = ufs_renesas_hce_enable_notify,
+ .dbg_register_dump = ufs_renesas_dbg_register_dump,
+@@ -390,9 +397,7 @@ static int ufs_renesas_probe(struct platform_device *pdev)
+
+ static void ufs_renesas_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static struct platform_driver ufs_renesas_platform = {
+diff --git a/drivers/ufs/host/ufs-sprd.c b/drivers/ufs/host/ufs-sprd.c
+index d8b165908809d6..d220978c2d8c8a 100644
+--- a/drivers/ufs/host/ufs-sprd.c
++++ b/drivers/ufs/host/ufs-sprd.c
+@@ -427,10 +427,7 @@ static int ufs_sprd_probe(struct platform_device *pdev)
+
+ static void ufs_sprd_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops ufs_sprd_pm_ops = {
+diff --git a/drivers/ufs/host/ufshcd-pltfrm.c b/drivers/ufs/host/ufshcd-pltfrm.c
+index 1f4f30d6cb4234..505572d4fa878c 100644
+--- a/drivers/ufs/host/ufshcd-pltfrm.c
++++ b/drivers/ufs/host/ufshcd-pltfrm.c
+@@ -524,6 +524,22 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_pltfrm_init);
+
++/**
++ * ufshcd_pltfrm_remove - Remove ufshcd platform
++ * @pdev: pointer to Platform device handle
++ */
++void ufshcd_pltfrm_remove(struct platform_device *pdev)
++{
++ struct ufs_hba *hba = platform_get_drvdata(pdev);
++
++ pm_runtime_get_sync(&pdev->dev);
++ ufshcd_remove(hba);
++ ufshcd_dealloc_host(hba);
++ pm_runtime_disable(&pdev->dev);
++ pm_runtime_put_noidle(&pdev->dev);
++}
++EXPORT_SYMBOL_GPL(ufshcd_pltfrm_remove);
++
+ MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>");
+ MODULE_AUTHOR("Vinayak Holikatti <h.vinayak@samsung.com>");
+ MODULE_DESCRIPTION("UFS host controller Platform bus based glue driver");
+diff --git a/drivers/ufs/host/ufshcd-pltfrm.h b/drivers/ufs/host/ufshcd-pltfrm.h
+index df387be5216bd4..3017f8e8f93c67 100644
+--- a/drivers/ufs/host/ufshcd-pltfrm.h
++++ b/drivers/ufs/host/ufshcd-pltfrm.h
+@@ -31,6 +31,7 @@ int ufshcd_negotiate_pwr_params(const struct ufs_host_params *host_params,
+ void ufshcd_init_host_params(struct ufs_host_params *host_params);
+ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ const struct ufs_hba_variant_ops *vops);
++void ufshcd_pltfrm_remove(struct platform_device *pdev);
+ int ufshcd_populate_vreg(struct device *dev, const char *name,
+ struct ufs_vreg **out_vreg, bool skip_current);
+
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index 2a38e1eb65466c..97437de52ef681 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -25,6 +25,7 @@
+ #define TD_PAGE_COUNT 5
+ #define CI_HDRC_PAGE_SIZE 4096ul /* page size for TD's */
+ #define ENDPT_MAX 32
++#define CI_MAX_REQ_SIZE (4 * CI_HDRC_PAGE_SIZE)
+ #define CI_MAX_BUF_SIZE (TD_PAGE_COUNT * CI_HDRC_PAGE_SIZE)
+
+ /******************************************************************************
+@@ -260,6 +261,7 @@ struct ci_hdrc {
+ bool b_sess_valid_event;
+ bool imx28_write_fix;
+ bool has_portsc_pec_bug;
++ bool has_short_pkt_limit;
+ bool supports_runtime_pm;
+ bool in_lpm;
+ bool wakeup_int;
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index c64ab0e07ea030..17b3ac2ac8a1e8 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -342,6 +342,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ struct ci_hdrc_platform_data pdata = {
+ .name = dev_name(&pdev->dev),
+ .capoffset = DEF_CAPOFFSET,
++ .flags = CI_HDRC_HAS_SHORT_PKT_LIMIT,
+ .notify_event = ci_hdrc_imx_notify_event,
+ };
+ int ret;
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 835bf2428dc6ec..5aa16dbfc289ce 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -1076,6 +1076,8 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ CI_HDRC_SUPPORTS_RUNTIME_PM);
+ ci->has_portsc_pec_bug = !!(ci->platdata->flags &
+ CI_HDRC_HAS_PORTSC_PEC_MISSED);
++ ci->has_short_pkt_limit = !!(ci->platdata->flags &
++ CI_HDRC_HAS_SHORT_PKT_LIMIT);
+ platform_set_drvdata(pdev, ci);
+
+ ret = hw_device_init(ci, base);
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 69ef3cd8d4f836..fd6032874bf33a 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -10,6 +10,7 @@
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/dmapool.h>
++#include <linux/dma-direct.h>
+ #include <linux/err.h>
+ #include <linux/irqreturn.h>
+ #include <linux/kernel.h>
+@@ -540,6 +541,126 @@ static int prepare_td_for_sg(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+ return ret;
+ }
+
++/*
++ * Verify that the scatterlist is valid by iterating over each sg entry.
++ * Returns the index of the first invalid sg entry (less than num_sgs),
++ * or num_sgs when every entry is valid.
++ */
++static int sglist_get_invalid_entry(struct device *dma_dev, u8 dir,
++ struct usb_request *req)
++{
++ int i;
++ struct scatterlist *s = req->sg;
++
++ if (req->num_sgs == 1)
++ return 1;
++
++ dir = dir ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
++
++ for (i = 0; i < req->num_sgs; i++, s = sg_next(s)) {
++		/* Only a small sg (generally the last one) may be bounced.
++		 * If that happens, we can't ensure the address is still
++		 * page-aligned after the DMA mapping.
++ */
++ if (dma_kmalloc_needs_bounce(dma_dev, s->length, dir))
++ break;
++
++		/* Make sure each sg's start address (except the first sg)
++		 * is page-aligned, and that each sg's end address (except
++		 * the last sg) is page-aligned as well.
++ */
++ if (i == 0) {
++ if (!IS_ALIGNED(s->offset + s->length,
++ CI_HDRC_PAGE_SIZE))
++ break;
++ } else {
++ if (s->offset)
++ break;
++ if (!sg_is_last(s) && !IS_ALIGNED(s->length,
++ CI_HDRC_PAGE_SIZE))
++ break;
++ }
++ }
++
++ return i;
++}
++
++static int sglist_do_bounce(struct ci_hw_req *hwreq, int index,
++ bool copy, unsigned int *bounced)
++{
++ void *buf;
++ int i, ret, nents, num_sgs;
++ unsigned int rest, rounded;
++ struct scatterlist *sg, *src, *dst;
++
++ nents = index + 1;
++ ret = sg_alloc_table(&hwreq->sgt, nents, GFP_KERNEL);
++ if (ret)
++ return ret;
++
++ sg = src = hwreq->req.sg;
++ num_sgs = hwreq->req.num_sgs;
++ rest = hwreq->req.length;
++ dst = hwreq->sgt.sgl;
++
++ for (i = 0; i < index; i++) {
++ memcpy(dst, src, sizeof(*src));
++ rest -= src->length;
++ src = sg_next(src);
++ dst = sg_next(dst);
++ }
++
++ /* create one bounce buffer */
++ rounded = round_up(rest, CI_HDRC_PAGE_SIZE);
++ buf = kmalloc(rounded, GFP_KERNEL);
++ if (!buf) {
++ sg_free_table(&hwreq->sgt);
++ return -ENOMEM;
++ }
++
++ sg_set_buf(dst, buf, rounded);
++
++ hwreq->req.sg = hwreq->sgt.sgl;
++ hwreq->req.num_sgs = nents;
++ hwreq->sgt.sgl = sg;
++ hwreq->sgt.nents = num_sgs;
++
++ if (copy)
++ sg_copy_to_buffer(src, num_sgs - index, buf, rest);
++
++ *bounced = rest;
++
++ return 0;
++}
++
++static void sglist_do_debounce(struct ci_hw_req *hwreq, bool copy)
++{
++ void *buf;
++ int i, nents, num_sgs;
++ struct scatterlist *sg, *src, *dst;
++
++ sg = hwreq->req.sg;
++ num_sgs = hwreq->req.num_sgs;
++ src = sg_last(sg, num_sgs);
++ buf = sg_virt(src);
++
++ if (copy) {
++ dst = hwreq->sgt.sgl;
++ for (i = 0; i < num_sgs - 1; i++)
++ dst = sg_next(dst);
++
++ nents = hwreq->sgt.nents - num_sgs + 1;
++ sg_copy_from_buffer(dst, nents, buf, sg_dma_len(src));
++ }
++
++ hwreq->req.sg = hwreq->sgt.sgl;
++ hwreq->req.num_sgs = hwreq->sgt.nents;
++ hwreq->sgt.sgl = sg;
++ hwreq->sgt.nents = num_sgs;
++
++ kfree(buf);
++ sg_free_table(&hwreq->sgt);
++}
++
+ /**
+ * _hardware_enqueue: configures a request at hardware level
+ * @hwep: endpoint
+@@ -552,6 +673,8 @@ static int _hardware_enqueue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+ struct ci_hdrc *ci = hwep->ci;
+ int ret = 0;
+ struct td_node *firstnode, *lastnode;
++ unsigned int bounced_size;
++ struct scatterlist *sg;
+
+ /* don't queue twice */
+ if (hwreq->req.status == -EALREADY)
+@@ -559,11 +682,29 @@ static int _hardware_enqueue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+
+ hwreq->req.status = -EALREADY;
+
++ if (hwreq->req.num_sgs && hwreq->req.length &&
++ ci->has_short_pkt_limit) {
++ ret = sglist_get_invalid_entry(ci->dev->parent, hwep->dir,
++ &hwreq->req);
++ if (ret < hwreq->req.num_sgs) {
++ ret = sglist_do_bounce(hwreq, ret, hwep->dir == TX,
++ &bounced_size);
++ if (ret)
++ return ret;
++ }
++ }
++
+ ret = usb_gadget_map_request_by_dev(ci->dev->parent,
+ &hwreq->req, hwep->dir);
+ if (ret)
+ return ret;
+
++ if (hwreq->sgt.sgl) {
++ /* We've mapped a bigger buffer, now recover the actual size */
++ sg = sg_last(hwreq->req.sg, hwreq->req.num_sgs);
++ sg_dma_len(sg) = min(sg_dma_len(sg), bounced_size);
++ }
++
+ if (hwreq->req.num_mapped_sgs)
+ ret = prepare_td_for_sg(hwep, hwreq);
+ else
+@@ -733,6 +874,10 @@ static int _hardware_dequeue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+ usb_gadget_unmap_request_by_dev(hwep->ci->dev->parent,
+ &hwreq->req, hwep->dir);
+
++ /* sglist bounced */
++ if (hwreq->sgt.sgl)
++ sglist_do_debounce(hwreq, hwep->dir == RX);
++
+ hwreq->req.actual += actual;
+
+ if (hwreq->req.status)
+@@ -960,6 +1105,12 @@ static int _ep_queue(struct usb_ep *ep, struct usb_request *req,
+ return -EMSGSIZE;
+ }
+
++ if (ci->has_short_pkt_limit &&
++ hwreq->req.length > CI_MAX_REQ_SIZE) {
++ dev_err(hwep->ci->dev, "request length too big (max 16KB)\n");
++ return -EMSGSIZE;
++ }
++
+ /* first nuke then test link, e.g. previous status has not sent */
+ if (!list_empty(&hwreq->queue)) {
+ dev_err(hwep->ci->dev, "request already in queue\n");
+@@ -1574,6 +1725,9 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
+
+ usb_gadget_unmap_request(&hwep->ci->gadget, req, hwep->dir);
+
++ if (hwreq->sgt.sgl)
++ sglist_do_debounce(hwreq, false);
++
+ req->status = -ECONNRESET;
+
+ if (hwreq->req.complete != NULL) {
+@@ -2063,7 +2217,7 @@ static irqreturn_t udc_irq(struct ci_hdrc *ci)
+ }
+ }
+
+- if (USBi_UI & intr)
++ if ((USBi_UI | USBi_UEI) & intr)
+ isr_tr_complete_handler(ci);
+
+ if ((USBi_SLI & intr) && !(ci->suspended)) {
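
The validity rule in `sglist_get_invalid_entry()` above is geometric: the first entry must end on a page boundary, middle entries must start and end on one, and only the last entry may be ragged; anything else (or a kmalloc buffer that the DMA layer would bounce) forces the driver's own bounce path. The alignment rule in isolation, over plain (offset, length) pairs and without the `dma_kmalloc_needs_bounce()` check:

#include <stdio.h>

#define PAGE_SZ 4096u

struct seg { unsigned int offset, length; };

/* Return the index of the first segment that breaks the alignment
 * rule used by sglist_get_invalid_entry(), or n when all are fine. */
static unsigned int first_invalid(const struct seg *s, unsigned int n)
{
	unsigned int i;

	if (n == 1)
		return 1;	/* a single segment is always acceptable */

	for (i = 0; i < n; i++) {
		if (i == 0) {
			/* first segment must end page-aligned */
			if ((s[i].offset + s[i].length) % PAGE_SZ)
				break;
		} else {
			/* middle/last segments must start page-aligned ... */
			if (s[i].offset)
				break;
			/* ... and middle ones must end page-aligned too */
			if (i != n - 1 && s[i].length % PAGE_SZ)
				break;
		}
	}
	return i;
}

int main(void)
{
	struct seg ok[]  = { {0, 4096}, {0, 8192}, {0, 100} };
	struct seg bad[] = { {0, 4096}, {0, 100},  {0, 4096} };

	printf("ok:  first invalid = %u of 3\n", first_invalid(ok, 3));	/* 3: all valid */
	printf("bad: first invalid = %u of 3\n", first_invalid(bad, 3));	/* 1: ragged middle sg */
	return 0;
}
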
+diff --git a/drivers/usb/chipidea/udc.h b/drivers/usb/chipidea/udc.h
+index 5193df1e18c75b..c8a47389a46bbb 100644
+--- a/drivers/usb/chipidea/udc.h
++++ b/drivers/usb/chipidea/udc.h
+@@ -69,11 +69,13 @@ struct td_node {
+ * @req: request structure for gadget drivers
+ * @queue: link to QH list
+ * @tds: link to TD list
++ * @sgt: holds the original sglist while the request's sglist is bounced
+ */
+ struct ci_hw_req {
+ struct usb_request req;
+ struct list_head queue;
+ struct list_head tds;
++ struct sg_table sgt;
+ };
+
+ #ifdef CONFIG_USB_CHIPIDEA_UDC
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 7a5dff8d9cc6c3..accf15ff1306a2 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -61,9 +61,11 @@ static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci)
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+ int ret;
+
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
++ if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
++ ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
++ if (ret)
++ return ret;
++ }
+
+ memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
+
+@@ -73,11 +75,6 @@ static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci)
+ static int ucsi_acpi_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+ {
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+- int ret;
+-
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
+
+ memcpy(val, ua->base + UCSI_MESSAGE_IN, val_len);
+
+@@ -102,42 +99,6 @@ static const struct ucsi_operations ucsi_acpi_ops = {
+ .async_control = ucsi_acpi_async_control
+ };
+
+-static int
+-ucsi_zenbook_read_cci(struct ucsi *ucsi, u32 *cci)
+-{
+- struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+- int ret;
+-
+- if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
+- }
+-
+- memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
+-
+- return 0;
+-}
+-
+-static int
+-ucsi_zenbook_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+-{
+- struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+-
+- /* UCSI_MESSAGE_IN is never read for PPM_RESET, return stored data */
+- memcpy(val, ua->base + UCSI_MESSAGE_IN, val_len);
+-
+- return 0;
+-}
+-
+-static const struct ucsi_operations ucsi_zenbook_ops = {
+- .read_version = ucsi_acpi_read_version,
+- .read_cci = ucsi_zenbook_read_cci,
+- .read_message_in = ucsi_zenbook_read_message_in,
+- .sync_control = ucsi_sync_control_common,
+- .async_control = ucsi_acpi_async_control
+-};
+-
+ static int ucsi_gram_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+ {
+ u16 bogus_change = UCSI_CONSTAT_POWER_LEVEL_CHANGE |
+@@ -190,13 +151,6 @@ static const struct ucsi_operations ucsi_gram_ops = {
+ };
+
+ static const struct dmi_system_id ucsi_acpi_quirks[] = {
+- {
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
+- },
+- .driver_data = (void *)&ucsi_zenbook_ops,
+- },
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index f7000d383a4e62..9b6cb76e632807 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -172,12 +172,12 @@ static int pmic_glink_ucsi_async_control(struct ucsi *__ucsi, u64 command)
+ static void pmic_glink_ucsi_update_connector(struct ucsi_connector *con)
+ {
+ struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
+- int i;
+
+- for (i = 0; i < PMIC_GLINK_MAX_PORTS; i++) {
+- if (ucsi->port_orientation[i])
+- con->typec_cap.orientation_aware = true;
+- }
++ if (con->num > PMIC_GLINK_MAX_PORTS ||
++ !ucsi->port_orientation[con->num - 1])
++ return;
++
++ con->typec_cap.orientation_aware = true;
+ }
+
+ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 7527e277c89897..eb7387ee6ebd10 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -1517,7 +1517,8 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
+ struct mlx5_vhca_qp *host_qp;
+ struct mlx5_vhca_qp *fw_qp;
+ struct mlx5_core_dev *mdev;
+- u32 max_msg_size = PAGE_SIZE;
++ u32 log_max_msg_size;
++ u32 max_msg_size;
+ u64 rq_size = SZ_2M;
+ u32 max_recv_wr;
+ int err;
+@@ -1534,6 +1535,12 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
+ }
+
+ mdev = mvdev->mdev;
++ log_max_msg_size = MLX5_CAP_ADV_VIRTUALIZATION(mdev, pg_track_log_max_msg_size);
++ max_msg_size = (1ULL << log_max_msg_size);
++ /* The RQ must hold at least 4 WQEs/messages for successful QP creation */
++ if (rq_size < 4 * max_msg_size)
++ rq_size = 4 * max_msg_size;
++
+ memset(tracker, 0, sizeof(*tracker));
+ tracker->uar = mlx5_get_uars_page(mdev);
+ if (IS_ERR(tracker->uar)) {
+@@ -1623,25 +1630,41 @@ set_report_output(u32 size, int index, struct mlx5_vhca_qp *qp,
+ {
+ u32 entry_size = MLX5_ST_SZ_BYTES(page_track_report_entry);
+ u32 nent = size / entry_size;
++ u32 nent_in_page;
++ u32 nent_to_set;
+ struct page *page;
++ u32 page_offset;
++ u32 page_index;
++ u32 buf_offset;
++ void *kaddr;
+ u64 addr;
+ u64 *buf;
+ int i;
+
+- if (WARN_ON(index >= qp->recv_buf.npages ||
++ buf_offset = index * qp->max_msg_size;
++ if (WARN_ON(buf_offset + size >= qp->recv_buf.npages * PAGE_SIZE ||
+ (nent > qp->max_msg_size / entry_size)))
+ return;
+
+- page = qp->recv_buf.page_list[index];
+- buf = kmap_local_page(page);
+- for (i = 0; i < nent; i++) {
+- addr = MLX5_GET(page_track_report_entry, buf + i,
+- dirty_address_low);
+- addr |= (u64)MLX5_GET(page_track_report_entry, buf + i,
+- dirty_address_high) << 32;
+- iova_bitmap_set(dirty, addr, qp->tracked_page_size);
+- }
+- kunmap_local(buf);
++ do {
++ page_index = buf_offset / PAGE_SIZE;
++ page_offset = buf_offset % PAGE_SIZE;
++ nent_in_page = (PAGE_SIZE - page_offset) / entry_size;
++ page = qp->recv_buf.page_list[page_index];
++ kaddr = kmap_local_page(page);
++ buf = kaddr + page_offset;
++ nent_to_set = min(nent, nent_in_page);
++ for (i = 0; i < nent_to_set; i++) {
++ addr = MLX5_GET(page_track_report_entry, buf + i,
++ dirty_address_low);
++ addr |= (u64)MLX5_GET(page_track_report_entry, buf + i,
++ dirty_address_high) << 32;
++ iova_bitmap_set(dirty, addr, qp->tracked_page_size);
++ }
++ kunmap_local(kaddr);
++ buf_offset += (nent_to_set * entry_size);
++ nent -= nent_to_set;
++ } while (nent);
+ }
+
+ static void
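
The rewritten `set_report_output()` replaces the old one-page-per-message assumption with byte-offset arithmetic, chopping the run of report entries at every page boundary. The index math on its own, with illustrative sizes (a 16-byte entry and an 8 KiB power-of-two max message):

#include <stdio.h>

#define PAGE_SZ 4096u

int main(void)
{
	unsigned int max_msg_size = 8192;	/* illustrative 1 << log_max_msg_size */
	unsigned int entry_size = 16;
	unsigned int index = 3;			/* which message in the RQ */
	unsigned int nent = 300;		/* entries left to consume */
	unsigned int buf_offset = index * max_msg_size;

	/* Same walk as set_report_output(): cut the run of entries at
	 * every page boundary instead of assuming one page per message. */
	while (nent) {
		unsigned int page_index  = buf_offset / PAGE_SZ;
		unsigned int page_offset = buf_offset % PAGE_SZ;
		unsigned int nent_in_page = (PAGE_SZ - page_offset) / entry_size;
		unsigned int nent_to_set = nent < nent_in_page ? nent : nent_in_page;

		printf("page %u, offset %4u: %u entries\n",
		       page_index, page_offset, nent_to_set);

		buf_offset += nent_to_set * entry_size;
		nent -= nent_to_set;
	}
	/* prints: page 6, offset 0: 256 entries / page 7, offset 0: 44 entries */
	return 0;
}
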
+diff --git a/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c b/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c
+index 56a3859dda8a15..4230b817a80bd8 100644
+--- a/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c
++++ b/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c
+@@ -87,12 +87,8 @@ static int mmio_guard_ioremap_hook(phys_addr_t phys, size_t size,
+
+ while (phys < end) {
+ const int func_id = ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_FUNC_ID;
+- int err;
+-
+- err = arm_smccc_do_one_page(func_id, phys);
+- if (err)
+- return err;
+
++ WARN_ON_ONCE(arm_smccc_do_one_page(func_id, phys));
+ phys += PAGE_SIZE;
+ }
+
+diff --git a/drivers/watchdog/apple_wdt.c b/drivers/watchdog/apple_wdt.c
+index d4f739932f0be8..62dabf223d9096 100644
+--- a/drivers/watchdog/apple_wdt.c
++++ b/drivers/watchdog/apple_wdt.c
+@@ -130,7 +130,7 @@ static int apple_wdt_restart(struct watchdog_device *wdd, unsigned long mode,
+ * can take up to ~20-25ms until the SoC is actually reset. Just wait
+ * 50ms here to be safe.
+ */
+- (void)readl_relaxed(wdt->regs + APPLE_WDT_WD1_CUR_TIME);
++ (void)readl(wdt->regs + APPLE_WDT_WD1_CUR_TIME);
+ mdelay(50);
+
+ return 0;
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 35b358bcf94ce6..f01ed38aba6751 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -82,6 +82,13 @@
+ #define TCO2_CNT(p) (TCOBASE(p) + 0x0a) /* TCO2 Control Register */
+ #define TCOv2_TMR(p) (TCOBASE(p) + 0x12) /* TCOv2 Timer Initial Value*/
+
++/*
++ * NMI_NOW is bit 8 of TCO1_CNT register
++ * Read/Write
++ * This bit is implemented as RW but has no effect on HW.
++ */
++#define NMI_NOW BIT(8)
++
+ /* internal variables */
+ struct iTCO_wdt_private {
+ struct watchdog_device wddev;
+@@ -219,13 +226,23 @@ static int update_no_reboot_bit_cnt(void *priv, bool set)
+ struct iTCO_wdt_private *p = priv;
+ u16 val, newval;
+
+- val = inw(TCO1_CNT(p));
++ /*
++	 * Writing 1 back to NMI_NOW in the TCO1_CNT register inverts
++	 * the bit, which makes a direct comparison of the written and
++	 * re-read register values unreliable.
++	 *
++	 * Masking NMI_NOW out of the TCO1_CNT values avoids any such
++	 * inversion on the following write operation.
++ */
++ val = inw(TCO1_CNT(p)) & ~NMI_NOW;
+ if (set)
+ val |= BIT(0);
+ else
+ val &= ~BIT(0);
+ outw(val, TCO1_CNT(p));
+- newval = inw(TCO1_CNT(p));
++ newval = inw(TCO1_CNT(p)) & ~NMI_NOW;
+
+ /* make sure the update is successful */
+ return val != newval ? -EIO : 0;
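
The masking is easiest to see with a toy register whose NMI_NOW bit toggles when 1 is written to it, which is how the comment above describes the hardware; without the mask, the written-vs-reread comparison reports a spurious failure even though bit 0 was stored correctly. A self-contained simulation (the toggling model is an assumption taken from that comment):

#include <stdint.h>
#include <stdio.h>

#define NMI_NOW (1u << 8)

static uint16_t tco1_cnt = NMI_NOW;	/* pretend bit 8 is currently set */

static uint16_t reg_read(void) { return tco1_cnt; }

static void reg_write(uint16_t v)
{
	/* model: writing 1 to NMI_NOW toggles it instead of storing it */
	uint16_t nmi = (v & NMI_NOW) ? (uint16_t)((tco1_cnt ^ NMI_NOW) & NMI_NOW)
				     : (uint16_t)(tco1_cnt & NMI_NOW);
	tco1_cnt = (uint16_t)((v & ~NMI_NOW) | nmi);
}

static int update_no_reboot_bit(int set, int use_mask)
{
	uint16_t mask = use_mask ? (uint16_t)~NMI_NOW : 0xffff;
	uint16_t val = reg_read() & mask;
	uint16_t newval;

	if (set)
		val |= 1;
	else
		val &= (uint16_t)~1u;
	reg_write(val);

	newval = reg_read() & mask;
	return val != newval ? -1 : 0;	/* -EIO in the driver */
}

int main(void)
{
	printf("unmasked: %s\n", update_no_reboot_bit(1, 0) ? "spurious -EIO" : "ok");
	tco1_cnt = NMI_NOW;
	printf("masked:   %s\n", update_no_reboot_bit(1, 1) ? "spurious -EIO" : "ok");
	return 0;
}
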
+diff --git a/drivers/watchdog/mtk_wdt.c b/drivers/watchdog/mtk_wdt.c
+index c35f85ce8d69cc..e2d7a57d6ea2e7 100644
+--- a/drivers/watchdog/mtk_wdt.c
++++ b/drivers/watchdog/mtk_wdt.c
+@@ -225,9 +225,15 @@ static int mtk_wdt_restart(struct watchdog_device *wdt_dev,
+ {
+ struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
+ void __iomem *wdt_base;
++ u32 reg;
+
+ wdt_base = mtk_wdt->wdt_base;
+
++ /* Enable reset in order to issue a system reset instead of an IRQ */
++ reg = readl(wdt_base + WDT_MODE);
++ reg &= ~WDT_MODE_IRQ_EN;
++ writel(reg | WDT_MODE_KEY, wdt_base + WDT_MODE);
++
+ while (1) {
+ writel(WDT_SWRST_KEY, wdt_base + WDT_SWRST);
+ mdelay(5);
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 4895a69015a8ea..563d842014dfba 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -61,7 +61,7 @@
+
+ #define MAX_HW_ERROR 250
+
+-static int heartbeat = DEFAULT_HEARTBEAT;
++static int heartbeat;
+
+ /*
+ * struct to hold data for each WDT device
+@@ -252,6 +252,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ wdd->min_timeout = 1;
+ wdd->max_hw_heartbeat_ms = (WDT_PRELOAD_MAX << WDT_PRELOAD_SHIFT) /
+ wdt->freq * 1000;
++ wdd->timeout = DEFAULT_HEARTBEAT;
+ wdd->parent = dev;
+
+ watchdog_set_drvdata(wdd, wdt);
+diff --git a/drivers/watchdog/xilinx_wwdt.c b/drivers/watchdog/xilinx_wwdt.c
+index d271e2e8d6e271..3d2a156f718009 100644
+--- a/drivers/watchdog/xilinx_wwdt.c
++++ b/drivers/watchdog/xilinx_wwdt.c
+@@ -2,7 +2,7 @@
+ /*
+ * Window watchdog device driver for Xilinx Versal WWDT
+ *
+- * Copyright (C) 2022 - 2023, Advanced Micro Devices, Inc.
++ * Copyright (C) 2022 - 2024, Advanced Micro Devices, Inc.
+ */
+
+ #include <linux/clk.h>
+@@ -36,6 +36,12 @@
+
+ #define XWWDT_CLOSE_WINDOW_PERCENT 50
+
++/* Maximum count value of each 32 bit window */
++#define XWWDT_MAX_COUNT_WINDOW GENMASK(31, 0)
++
++/* Maximum count value of closed and open window combined */
++#define XWWDT_MAX_COUNT_WINDOW_COMBINED GENMASK_ULL(32, 1)
++
+ static int wwdt_timeout;
+ static int closed_window_percent;
+
+@@ -54,6 +60,8 @@ MODULE_PARM_DESC(closed_window_percent,
+ * @xilinx_wwdt_wdd: watchdog device structure
+ * @freq: source clock frequency of WWDT
+ * @close_percent: Closed window percent
++ * @closed_timeout: Closed window timeout in ticks
++ * @open_timeout: Open window timeout in ticks
+ */
+ struct xwwdt_device {
+ void __iomem *base;
+@@ -61,27 +69,22 @@ struct xwwdt_device {
+ struct watchdog_device xilinx_wwdt_wdd;
+ unsigned long freq;
+ u32 close_percent;
++ u64 closed_timeout;
++ u64 open_timeout;
+ };
+
+ static int xilinx_wwdt_start(struct watchdog_device *wdd)
+ {
+ struct xwwdt_device *xdev = watchdog_get_drvdata(wdd);
+ struct watchdog_device *xilinx_wwdt_wdd = &xdev->xilinx_wwdt_wdd;
+- u64 time_out, closed_timeout, open_timeout;
+ u32 control_status_reg;
+
+- /* Calculate timeout count */
+- time_out = xdev->freq * wdd->timeout;
+- closed_timeout = div_u64(time_out * xdev->close_percent, 100);
+- open_timeout = time_out - closed_timeout;
+- wdd->min_hw_heartbeat_ms = xdev->close_percent * 10 * wdd->timeout;
+-
+ spin_lock(&xdev->spinlock);
+
+ iowrite32(XWWDT_MWR_MASK, xdev->base + XWWDT_MWR_OFFSET);
+ iowrite32(~(u32)XWWDT_ESR_WEN_MASK, xdev->base + XWWDT_ESR_OFFSET);
+- iowrite32((u32)closed_timeout, xdev->base + XWWDT_FWR_OFFSET);
+- iowrite32((u32)open_timeout, xdev->base + XWWDT_SWR_OFFSET);
++ iowrite32((u32)xdev->closed_timeout, xdev->base + XWWDT_FWR_OFFSET);
++ iowrite32((u32)xdev->open_timeout, xdev->base + XWWDT_SWR_OFFSET);
+
+ /* Enable the window watchdog timer */
+ control_status_reg = ioread32(xdev->base + XWWDT_ESR_OFFSET);
+@@ -133,7 +136,12 @@ static int xwwdt_probe(struct platform_device *pdev)
+ struct watchdog_device *xilinx_wwdt_wdd;
+ struct device *dev = &pdev->dev;
+ struct xwwdt_device *xdev;
++ u64 max_per_window_ms;
++ u64 min_per_window_ms;
++ u64 timeout_count;
+ struct clk *clk;
++ u32 timeout_ms;
++ u64 ms_count;
+ int ret;
+
+ xdev = devm_kzalloc(dev, sizeof(*xdev), GFP_KERNEL);
+@@ -154,12 +162,13 @@ static int xwwdt_probe(struct platform_device *pdev)
+ return PTR_ERR(clk);
+
+ xdev->freq = clk_get_rate(clk);
+- if (!xdev->freq)
++ if (xdev->freq < 1000000)
+ return -EINVAL;
+
+ xilinx_wwdt_wdd->min_timeout = XWWDT_MIN_TIMEOUT;
+ xilinx_wwdt_wdd->timeout = XWWDT_DEFAULT_TIMEOUT;
+- xilinx_wwdt_wdd->max_hw_heartbeat_ms = 1000 * xilinx_wwdt_wdd->timeout;
++ xilinx_wwdt_wdd->max_hw_heartbeat_ms =
++ div64_u64(XWWDT_MAX_COUNT_WINDOW_COMBINED, xdev->freq) * 1000;
+
+ if (closed_window_percent == 0 || closed_window_percent >= 100)
+ xdev->close_percent = XWWDT_CLOSE_WINDOW_PERCENT;
+@@ -167,6 +176,48 @@ static int xwwdt_probe(struct platform_device *pdev)
+ xdev->close_percent = closed_window_percent;
+
+ watchdog_init_timeout(xilinx_wwdt_wdd, wwdt_timeout, &pdev->dev);
++
++	/* Calculate the number of ticks in one millisecond */
++ ms_count = div_u64(xdev->freq, 1000);
++ timeout_ms = xilinx_wwdt_wdd->timeout * 1000;
++ timeout_count = timeout_ms * ms_count;
++
++ if (timeout_ms > xilinx_wwdt_wdd->max_hw_heartbeat_ms) {
++ /*
++		 * To avoid restricting pings until the minimum hardware
++		 * heartbeat has elapsed, rely solely on the open window
++		 * and set the minimum hardware heartbeat to 0.
++ */
++ xdev->closed_timeout = 0;
++ xdev->open_timeout = XWWDT_MAX_COUNT_WINDOW;
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms = 0;
++ xilinx_wwdt_wdd->max_hw_heartbeat_ms = xilinx_wwdt_wdd->max_hw_heartbeat_ms / 2;
++ } else {
++ xdev->closed_timeout = div64_u64(timeout_count * xdev->close_percent, 100);
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms =
++ div64_u64(timeout_ms * xdev->close_percent, 100);
++
++ if (timeout_ms > xilinx_wwdt_wdd->max_hw_heartbeat_ms / 2) {
++ max_per_window_ms = xilinx_wwdt_wdd->max_hw_heartbeat_ms / 2;
++ min_per_window_ms = timeout_ms - max_per_window_ms;
++
++ if (xilinx_wwdt_wdd->min_hw_heartbeat_ms > max_per_window_ms) {
++ dev_info(xilinx_wwdt_wdd->parent,
++ "Closed window cannot be set to %d%%. Using maximum supported value.\n",
++ xdev->close_percent);
++ xdev->closed_timeout = max_per_window_ms * ms_count;
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms = max_per_window_ms;
++ } else if (xilinx_wwdt_wdd->min_hw_heartbeat_ms < min_per_window_ms) {
++ dev_info(xilinx_wwdt_wdd->parent,
++ "Closed window cannot be set to %d%%. Using minimum supported value.\n",
++ xdev->close_percent);
++ xdev->closed_timeout = min_per_window_ms * ms_count;
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms = min_per_window_ms;
++ }
++ }
++ xdev->open_timeout = timeout_count - xdev->closed_timeout;
++ }
++
+ spin_lock_init(&xdev->spinlock);
+ watchdog_set_drvdata(xilinx_wwdt_wdd, xdev);
+ watchdog_set_nowayout(xilinx_wwdt_wdd, 1);
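
Worked through with concrete numbers, assuming a 100 MHz source clock, a 10 s timeout and the default 50 % closed window (all illustrative), the probe math above splits the timeout into equal closed and open tick counts, both comfortably inside a 32-bit window register:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t freq = 100000000;	/* assumed 100 MHz clock */
	uint32_t timeout_s = 10;
	uint32_t close_percent = 50;

	uint64_t ms_count = freq / 1000;	/* ticks per millisecond */
	uint32_t timeout_ms = timeout_s * 1000;
	uint64_t timeout_count = (uint64_t)timeout_ms * ms_count;

	/* Same split as xwwdt_probe(): closed window first, the rest open */
	uint64_t closed = timeout_count * close_percent / 100;
	uint64_t open = timeout_count - closed;
	uint32_t min_hb_ms = timeout_ms * close_percent / 100;

	printf("closed window: %llu ticks (%u ms min heartbeat)\n",
	       (unsigned long long)closed, min_hb_ms);	/* 500000000 ticks, 5000 ms */
	printf("open window:   %llu ticks\n",
	       (unsigned long long)open);		/* 500000000 ticks */
	return 0;
}
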
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 83d5cdd77f293e..604399e59a3d10 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -641,6 +641,7 @@ static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ return ret;
+
+ down_write(&dev_replace->rwsem);
++ dev_replace->replace_task = current;
+ switch (dev_replace->replace_state) {
+ case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED:
+ case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED:
+@@ -994,6 +995,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ list_add(&tgt_device->dev_alloc_list, &fs_devices->alloc_list);
+ fs_devices->rw_devices++;
+
++ dev_replace->replace_task = NULL;
+ up_write(&dev_replace->rwsem);
+ btrfs_rm_dev_replace_blocked(fs_info);
+
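
`replace_task` records which task holds `dev_replace->rwsem`, so that `btrfs_map_block()` (patched further below in fs/btrfs/volumes.c) can skip taking the read side when it is called from the replace task itself, avoiding a read-after-write self-deadlock on the semaphore. A minimal pthread sketch of that skip-the-lock-if-we-are-the-writer shape, simplified so the owner field is only ever touched under the write lock:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_t writer;	/* valid only while write-locked */
static int have_writer;

static void read_side(const char *who)
{
	/* mirror of the btrfs hunks: the write holder re-enters without
	 * taking the read lock, everyone else locks as usual */
	int self = have_writer && pthread_equal(writer, pthread_self());

	if (!self)
		pthread_rwlock_rdlock(&lock);
	printf("%s: reading (%s)\n", who, self ? "as writer, no rdlock" : "rdlock");
	if (!self)
		pthread_rwlock_unlock(&lock);
}

int main(void)
{
	read_side("plain reader");

	pthread_rwlock_wrlock(&lock);
	writer = pthread_self();
	have_writer = 1;
	read_side("replace worker");	/* a rdlock here would self-deadlock */
	have_writer = 0;
	pthread_rwlock_unlock(&lock);
	return 0;
}
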
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b11bfe68dd65fb..43b7b331b2da36 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3202,8 +3202,7 @@ int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)
+ return 0;
+ }
+
+-int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices,
+- const char *options)
++int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices)
+ {
+ u32 sectorsize;
+ u32 nodesize;
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 99af64d3f27781..127e31e0834709 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -52,8 +52,7 @@ struct extent_buffer *btrfs_find_create_tree_block(
+ int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info);
+ int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+ const struct btrfs_super_block *disk_sb);
+-int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices,
+- const char *options);
++int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices);
+ void __cold close_ctree(struct btrfs_fs_info *fs_info);
+ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
+ const struct btrfs_super_block *sb, int mirror_num);
+diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
+index 79f64e383eddf8..cbfb225858a59f 100644
+--- a/fs/btrfs/fs.h
++++ b/fs/btrfs/fs.h
+@@ -317,6 +317,8 @@ struct btrfs_dev_replace {
+
+ struct percpu_counter bio_counter;
+ wait_queue_head_t replace_wait;
++
++ struct task_struct *replace_task;
+ };
+
+ /*
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d067db2619713f..58ffe78132d9d6 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9857,6 +9857,7 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (btrfs_root_dead(root)) {
+ spin_unlock(&root->root_item_lock);
+
++ btrfs_drew_write_unlock(&root->snapshot_lock);
+ btrfs_exclop_finish(fs_info);
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because subvolume %llu is being deleted",
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index c64d0713412231..8292e488d3d777 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -946,8 +946,7 @@ static int get_default_subvol_objectid(struct btrfs_fs_info *fs_info, u64 *objec
+ }
+
+ static int btrfs_fill_super(struct super_block *sb,
+- struct btrfs_fs_devices *fs_devices,
+- void *data)
++ struct btrfs_fs_devices *fs_devices)
+ {
+ struct inode *inode;
+ struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+@@ -971,7 +970,7 @@ static int btrfs_fill_super(struct super_block *sb,
+ return err;
+ }
+
+- err = open_ctree(sb, fs_devices, (char *)data);
++ err = open_ctree(sb, fs_devices);
+ if (err) {
+ btrfs_err(fs_info, "open_ctree failed");
+ return err;
+@@ -1887,18 +1886,21 @@ static int btrfs_get_tree_super(struct fs_context *fc)
+
+ if (sb->s_root) {
+ btrfs_close_devices(fs_devices);
+- if ((fc->sb_flags ^ sb->s_flags) & SB_RDONLY)
+- ret = -EBUSY;
++ /*
++		 * At this stage we may have an RO flag mismatch between
++		 * fc->sb_flags and sb->s_flags. The caller should detect
++		 * such a mismatch and reconfigure with the sb->s_umount
++		 * rwsem held if needed.
++ */
+ } else {
+ snprintf(sb->s_id, sizeof(sb->s_id), "%pg", bdev);
+ shrinker_debugfs_rename(sb->s_shrink, "sb-btrfs:%s", sb->s_id);
+ btrfs_sb(sb)->bdev_holder = &btrfs_fs_type;
+- ret = btrfs_fill_super(sb, fs_devices, NULL);
+- }
+-
+- if (ret) {
+- deactivate_locked_super(sb);
+- return ret;
++ ret = btrfs_fill_super(sb, fs_devices);
++ if (ret) {
++ deactivate_locked_super(sb);
++ return ret;
++ }
+ }
+
+ btrfs_clear_oneshot_options(fs_info);
+@@ -1984,39 +1986,18 @@ static int btrfs_get_tree_super(struct fs_context *fc)
+ * btrfs or not, setting the whole super block RO. To make per-subvolume mounting
+ * work with different options work we need to keep backward compatibility.
+ */
+-static struct vfsmount *btrfs_reconfigure_for_mount(struct fs_context *fc)
++static int btrfs_reconfigure_for_mount(struct fs_context *fc, struct vfsmount *mnt)
+ {
+- struct vfsmount *mnt;
+- int ret;
+- const bool ro2rw = !(fc->sb_flags & SB_RDONLY);
+-
+- /*
+- * We got an EBUSY because our SB_RDONLY flag didn't match the existing
+- * super block, so invert our setting here and retry the mount so we
+- * can get our vfsmount.
+- */
+- if (ro2rw)
+- fc->sb_flags |= SB_RDONLY;
+- else
+- fc->sb_flags &= ~SB_RDONLY;
+-
+- mnt = fc_mount(fc);
+- if (IS_ERR(mnt))
+- return mnt;
++ int ret = 0;
+
+- if (!ro2rw)
+- return mnt;
++ if (fc->sb_flags & SB_RDONLY)
++ return ret;
+
+- /* We need to convert to rw, call reconfigure. */
+- fc->sb_flags &= ~SB_RDONLY;
+ down_write(&mnt->mnt_sb->s_umount);
+- ret = btrfs_reconfigure(fc);
++ if (!(fc->sb_flags & SB_RDONLY) && (mnt->mnt_sb->s_flags & SB_RDONLY))
++ ret = btrfs_reconfigure(fc);
+ up_write(&mnt->mnt_sb->s_umount);
+- if (ret) {
+- mntput(mnt);
+- return ERR_PTR(ret);
+- }
+- return mnt;
++ return ret;
+ }
+
+ static int btrfs_get_tree_subvol(struct fs_context *fc)
+@@ -2026,6 +2007,7 @@ static int btrfs_get_tree_subvol(struct fs_context *fc)
+ struct fs_context *dup_fc;
+ struct dentry *dentry;
+ struct vfsmount *mnt;
++ int ret = 0;
+
+ /*
+ * Setup a dummy root and fs_info for test/set super. This is because
+@@ -2068,11 +2050,16 @@ static int btrfs_get_tree_subvol(struct fs_context *fc)
+ fc->security = NULL;
+
+ mnt = fc_mount(dup_fc);
+- if (PTR_ERR_OR_ZERO(mnt) == -EBUSY)
+- mnt = btrfs_reconfigure_for_mount(dup_fc);
+- put_fs_context(dup_fc);
+- if (IS_ERR(mnt))
++ if (IS_ERR(mnt)) {
++ put_fs_context(dup_fc);
+ return PTR_ERR(mnt);
++ }
++ ret = btrfs_reconfigure_for_mount(dup_fc, mnt);
++ put_fs_context(dup_fc);
++ if (ret) {
++ mntput(mnt);
++ return ret;
++ }
+
+ /*
+ * This free's ->subvol_name, because if it isn't set we have to
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index eb51b609190fb5..0c4d14c59ebec5 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -732,6 +732,114 @@ const u8 *btrfs_sb_fsid_ptr(const struct btrfs_super_block *sb)
+ return has_metadata_uuid ? sb->metadata_uuid : sb->fsid;
+ }
+
++/*
++ * Very unusual symlinks can be passed in; one example is
++ * "/proc/self/fd/<fd>", which can point at a block device.
++ *
++ * Storing such names is never a good idea. Here we check whether the
++ * path (without following symlinks) is a proper one directly inside
++ * "/dev/".
++ */
++static bool is_good_dev_path(const char *dev_path)
++{
++ struct path path = { .mnt = NULL, .dentry = NULL };
++ char *path_buf = NULL;
++ char *resolved_path;
++ bool is_good = false;
++ int ret;
++
++ if (!dev_path)
++ goto out;
++
++ path_buf = kmalloc(PATH_MAX, GFP_KERNEL);
++ if (!path_buf)
++ goto out;
++
++ /*
++	 * Do not follow the symlink; just check whether the original path
++	 * is inside "/dev/".
++ */
++ ret = kern_path(dev_path, 0, &path);
++ if (ret)
++ goto out;
++ resolved_path = d_path(&path, path_buf, PATH_MAX);
++ if (IS_ERR(resolved_path))
++ goto out;
++ if (strncmp(resolved_path, "/dev/", strlen("/dev/")))
++ goto out;
++ is_good = true;
++out:
++ kfree(path_buf);
++ path_put(&path);
++ return is_good;
++}
++
++static int get_canonical_dev_path(const char *dev_path, char *canonical)
++{
++ struct path path = { .mnt = NULL, .dentry = NULL };
++ char *path_buf = NULL;
++ char *resolved_path;
++ int ret;
++
++ if (!dev_path) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ path_buf = kmalloc(PATH_MAX, GFP_KERNEL);
++ if (!path_buf) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ ret = kern_path(dev_path, LOOKUP_FOLLOW, &path);
++ if (ret)
++ goto out;
++ resolved_path = d_path(&path, path_buf, PATH_MAX);
++ ret = strscpy(canonical, resolved_path, PATH_MAX);
++out:
++ kfree(path_buf);
++ path_put(&path);
++ return ret;
++}
++
++static bool is_same_device(struct btrfs_device *device, const char *new_path)
++{
++ struct path old = { .mnt = NULL, .dentry = NULL };
++ struct path new = { .mnt = NULL, .dentry = NULL };
++ char *old_path = NULL;
++ bool is_same = false;
++ int ret;
++
++ if (!device->name)
++ goto out;
++
++ old_path = kzalloc(PATH_MAX, GFP_NOFS);
++ if (!old_path)
++ goto out;
++
++ rcu_read_lock();
++ ret = strscpy(old_path, rcu_str_deref(device->name), PATH_MAX);
++ rcu_read_unlock();
++ if (ret < 0)
++ goto out;
++
++ ret = kern_path(old_path, LOOKUP_FOLLOW, &old);
++ if (ret)
++ goto out;
++ ret = kern_path(new_path, LOOKUP_FOLLOW, &new);
++ if (ret)
++ goto out;
++ if (path_equal(&old, &new))
++ is_same = true;
++out:
++ kfree(old_path);
++ path_put(&old);
++ path_put(&new);
++ return is_same;
++}
++
+ /*
+ * Add new device to list of registered devices
+ *
+@@ -852,7 +960,7 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ MAJOR(path_devt), MINOR(path_devt),
+ current->comm, task_pid_nr(current));
+
+- } else if (!device->name || strcmp(device->name->str, path)) {
++ } else if (!device->name || !is_same_device(device, path)) {
+ /*
+ * When FS is already mounted.
+ * 1. If you are here and if the device->name is NULL that
+@@ -1383,12 +1491,23 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ bool new_device_added = false;
+ struct btrfs_device *device = NULL;
+ struct file *bdev_file;
++ char *canonical_path = NULL;
+ u64 bytenr;
+ dev_t devt;
+ int ret;
+
+ lockdep_assert_held(&uuid_mutex);
+
++ if (!is_good_dev_path(path)) {
++ canonical_path = kmalloc(PATH_MAX, GFP_KERNEL);
++ if (canonical_path) {
++ ret = get_canonical_dev_path(path, canonical_path);
++ if (ret < 0) {
++ kfree(canonical_path);
++ canonical_path = NULL;
++ }
++ }
++ }
+ /*
+ * Avoid an exclusive open here, as the systemd-udev may initiate the
+ * device scan which may race with the user's mount or mkfs command,
+@@ -1433,7 +1552,8 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ goto free_disk_super;
+ }
+
+- device = device_list_add(path, disk_super, &new_device_added);
++ device = device_list_add(canonical_path ? : path, disk_super,
++ &new_device_added);
+ if (!IS_ERR(device) && new_device_added)
+ btrfs_free_stale_devices(device->devt, device);
+
+@@ -1442,6 +1562,7 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+
+ error_bdev_put:
+ fput(bdev_file);
++ kfree(canonical_path);
+
+ return device;
+ }
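
The effect of the canonicalization above is that `btrfs_scan_one_device()` registers a resolved "/dev/..." name instead of whatever alias the caller passed, so later comparisons against the stored name behave. A userspace analogue of the resolution step, using realpath(3) on a "/proc/self/fd/<fd>" alias (with /dev/null standing in for a block device):

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	char link[64], resolved[PATH_MAX];
	int fd = open("/dev/null", O_RDONLY);	/* stand-in for a block device */

	if (fd < 0)
		return 1;
	snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);

	/* realpath() follows the magic symlink, much like the
	 * kern_path(LOOKUP_FOLLOW) + d_path() pair in the hunk above */
	if (realpath(link, resolved))
		printf("%s -> %s\n", link, resolved);	/* ... -> /dev/null */

	close(fd);
	return 0;
}
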
+@@ -2721,8 +2842,6 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ set_blocksize(device->bdev_file, BTRFS_BDEV_BLOCKSIZE);
+
+ if (seeding_dev) {
+- btrfs_clear_sb_rdonly(sb);
+-
+ /* GFP_KERNEL allocation must not be under device_list_mutex */
+ seed_devices = btrfs_init_sprout(fs_info);
+ if (IS_ERR(seed_devices)) {
+@@ -2865,8 +2984,6 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ mutex_unlock(&fs_info->chunk_mutex);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ error_trans:
+- if (seeding_dev)
+- btrfs_set_sb_rdonly(sb);
+ if (trans)
+ btrfs_end_transaction(trans);
+ error_free_zone:
+@@ -6481,13 +6598,15 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
+ max_len = btrfs_max_io_len(map, map_offset, &io_geom);
+ *length = min_t(u64, map->chunk_len - map_offset, max_len);
+
+- down_read(&dev_replace->rwsem);
++ if (dev_replace->replace_task != current)
++ down_read(&dev_replace->rwsem);
++
+ dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(dev_replace);
+ /*
+ * Hold the semaphore for read during the whole operation, write is
+ * requested at commit time but must wait.
+ */
+- if (!dev_replace_is_ongoing)
++ if (!dev_replace_is_ongoing && dev_replace->replace_task != current)
+ up_read(&dev_replace->rwsem);
+
+ switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
+@@ -6627,7 +6746,7 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
+ bioc->mirror_num = io_geom.mirror_num;
+
+ out:
+- if (dev_replace_is_ongoing) {
++ if (dev_replace_is_ongoing && dev_replace->replace_task != current) {
+ lockdep_assert_held(&dev_replace->rwsem);
+ /* Unlock and let waiting writers proceed */
+ up_read(&dev_replace->rwsem);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 865dc70a9dfc47..dddedaef5e93dd 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -2861,16 +2861,14 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
+ case -EINVAL:
+ /* annoy the user because dlm usage is wrong */
+ WARN_ON(1);
+- log_error(ls, "%s %d %x %x %x %d %d %s", __func__,
++ log_error(ls, "%s %d %x %x %x %d %d", __func__,
+ rv, lkb->lkb_id, dlm_iflags_val(lkb), args->flags,
+- lkb->lkb_status, lkb->lkb_wait_type,
+- lkb->lkb_resource->res_name);
++ lkb->lkb_status, lkb->lkb_wait_type);
+ break;
+ default:
+- log_debug(ls, "%s %d %x %x %x %d %d %s", __func__,
++ log_debug(ls, "%s %d %x %x %x %d %d", __func__,
+ rv, lkb->lkb_id, dlm_iflags_val(lkb), args->flags,
+- lkb->lkb_status, lkb->lkb_wait_type,
+- lkb->lkb_resource->res_name);
++ lkb->lkb_status, lkb->lkb_wait_type);
+ break;
+ }
+
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 1ae4542f0bd88b..90fbab6b6f0363 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -823,7 +823,8 @@ static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force)
+ to_free = NULL;
+ head = file->f_ep;
+ if (head->first == &epi->fllink && !epi->fllink.next) {
+- file->f_ep = NULL;
++ /* See eventpoll_release() for details. */
++ WRITE_ONCE(file->f_ep, NULL);
+ if (!is_file_epoll(file)) {
+ struct epitems_head *v;
+ v = container_of(head, struct epitems_head, epitems);
+@@ -1603,7 +1604,8 @@ static int attach_epitem(struct file *file, struct epitem *epi)
+ spin_unlock(&file->f_lock);
+ goto allocate;
+ }
+- file->f_ep = head;
++ /* See eventpoll_release() for details. */
++ WRITE_ONCE(file->f_ep, head);
+ to_free = NULL;
+ }
+ hlist_add_head_rcu(&epi->fllink, file->f_ep);
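
Both hunks annotate the `file->f_ep` stores with WRITE_ONCE() because `eventpoll_release()` reads the field locklessly; the pairing tells the compiler the access is concurrent, so neither side may be torn or reloaded. A userspace sketch of the same annotation, with READ_ONCE/WRITE_ONCE defined via volatile casts (a common definition, assumed here) and an int pointer standing in for f_ep:

#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static int *shared_ep;	/* stands in for file->f_ep */
static int head = 42;

static void *reader(void *arg)
{
	/* lockless check: load the pointer exactly once and only
	 * dereference that single snapshot */
	int *ep = READ_ONCE(shared_ep);

	(void)arg;
	if (ep)
		printf("reader saw head %d\n", *ep);
	else
		printf("reader saw NULL\n");
	return NULL;
}

int main(void)
{
	pthread_t t;

	WRITE_ONCE(shared_ep, &head);	/* attach; done under a lock in the kernel */
	pthread_create(&t, NULL, reader, NULL);
	pthread_join(t, NULL);
	WRITE_ONCE(shared_ep, NULL);	/* detach */
	return 0;
}
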
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 88f98dc4402753..60909af2d4a537 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4482,7 +4482,7 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
+ int depth = 0;
+ struct ext4_map_blocks map;
+ unsigned int credits;
+- loff_t epos;
++ loff_t epos, old_size = i_size_read(inode);
+
+ BUG_ON(!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS));
+ map.m_lblk = offset;
+@@ -4541,6 +4541,11 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
+ if (ext4_update_inode_size(inode, epos) & 0x1)
+ inode_set_mtime_to_ts(inode,
+ inode_get_ctime(inode));
++ if (epos > old_size) {
++ pagecache_isize_extended(inode, old_size, epos);
++ ext4_zero_partial_blocks(handle, inode,
++ old_size, epos - old_size);
++ }
+ }
+ ret2 = ext4_mark_inode_dirty(handle, inode);
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 99d09cd9c6a37e..67a5b937f5a92d 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1307,8 +1307,10 @@ static int ext4_write_end(struct file *file,
+ folio_unlock(folio);
+ folio_put(folio);
+
+- if (old_size < pos && !verity)
++ if (old_size < pos && !verity) {
+ pagecache_isize_extended(inode, old_size, pos);
++ ext4_zero_partial_blocks(handle, inode, old_size, pos - old_size);
++ }
+ /*
+ * Don't mark the inode dirty under folio lock. First, it unnecessarily
+ * makes the holding time of folio lock longer. Second, it forces lock
+@@ -1423,8 +1425,10 @@ static int ext4_journalled_write_end(struct file *file,
+ folio_unlock(folio);
+ folio_put(folio);
+
+- if (old_size < pos && !verity)
++ if (old_size < pos && !verity) {
+ pagecache_isize_extended(inode, old_size, pos);
++ ext4_zero_partial_blocks(handle, inode, old_size, pos - old_size);
++ }
+
+ if (size_changed) {
+ ret2 = ext4_mark_inode_dirty(handle, inode);
+@@ -2985,7 +2989,8 @@ static int ext4_da_do_write_end(struct address_space *mapping,
+ struct inode *inode = mapping->host;
+ loff_t old_size = inode->i_size;
+ bool disksize_changed = false;
+- loff_t new_i_size;
++ loff_t new_i_size, zero_len = 0;
++ handle_t *handle;
+
+ if (unlikely(!folio_buffers(folio))) {
+ folio_unlock(folio);
+@@ -3029,18 +3034,21 @@ static int ext4_da_do_write_end(struct address_space *mapping,
+ folio_unlock(folio);
+ folio_put(folio);
+
+- if (old_size < pos)
++ if (pos > old_size) {
+ pagecache_isize_extended(inode, old_size, pos);
++ zero_len = pos - old_size;
++ }
+
+- if (disksize_changed) {
+- handle_t *handle;
++ if (!disksize_changed && !zero_len)
++ return copied;
+
+- handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
+- if (IS_ERR(handle))
+- return PTR_ERR(handle);
+- ext4_mark_inode_dirty(handle, inode);
+- ext4_journal_stop(handle);
+- }
++ handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
++ if (IS_ERR(handle))
++ return PTR_ERR(handle);
++ if (zero_len)
++ ext4_zero_partial_blocks(handle, inode, old_size, zero_len);
++ ext4_mark_inode_dirty(handle, inode);
++ ext4_journal_stop(handle);
+
+ return copied;
+ }
+@@ -5426,6 +5434,14 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ }
+
+ if (attr->ia_size != inode->i_size) {
++ /* attach jbd2 jinode for EOF folio tail zeroing */
++ if (attr->ia_size & (inode->i_sb->s_blocksize - 1) ||
++ oldsize & (inode->i_sb->s_blocksize - 1)) {
++ error = ext4_inode_attach_jinode(inode);
++ if (error)
++ goto err_out;
++ }
++
+ handle = ext4_journal_start(inode, EXT4_HT_INODE, 3);
+ if (IS_ERR(handle)) {
+ error = PTR_ERR(handle);
+@@ -5436,12 +5452,17 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ orphan = 1;
+ }
+ /*
+- * Update c/mtime on truncate up, ext4_truncate() will
+- * update c/mtime in shrink case below
++ * Update c/mtime and tail zero the EOF folio on
++ * truncate up. ext4_truncate() handles the shrink case
++ * below.
+ */
+- if (!shrink)
++ if (!shrink) {
+ inode_set_mtime_to_ts(inode,
+ inode_set_ctime_current(inode));
++ if (oldsize & (inode->i_sb->s_blocksize - 1))
++ ext4_block_truncate_page(handle,
++ inode->i_mapping, oldsize);
++ }
+
+ if (shrink)
+ ext4_fc_track_range(handle, inode,
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 9efe4c00d75bb3..da0960d496ae09 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1819,16 +1819,6 @@ bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len)
+ return true;
+ }
+
+-static inline u64 bytes_to_blks(struct inode *inode, u64 bytes)
+-{
+- return (bytes >> inode->i_blkbits);
+-}
+-
+-static inline u64 blks_to_bytes(struct inode *inode, u64 blks)
+-{
+- return (blks << inode->i_blkbits);
+-}
+-
+ static int f2fs_xattr_fiemap(struct inode *inode,
+ struct fiemap_extent_info *fieinfo)
+ {
+@@ -1854,7 +1844,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return err;
+ }
+
+- phys = blks_to_bytes(inode, ni.blk_addr);
++ phys = F2FS_BLK_TO_BYTES(ni.blk_addr);
+ offset = offsetof(struct f2fs_inode, i_addr) +
+ sizeof(__le32) * (DEF_ADDRS_PER_INODE -
+ get_inline_xattr_addrs(inode));
+@@ -1886,7 +1876,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return err;
+ }
+
+- phys = blks_to_bytes(inode, ni.blk_addr);
++ phys = F2FS_BLK_TO_BYTES(ni.blk_addr);
+ len = inode->i_sb->s_blocksize;
+
+ f2fs_put_page(page, 1);
+@@ -1906,7 +1896,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+ {
+ struct f2fs_map_blocks map;
+- sector_t start_blk, last_blk;
++ sector_t start_blk, last_blk, blk_len, max_len;
+ pgoff_t next_pgofs;
+ u64 logical = 0, phys = 0, size = 0;
+ u32 flags = 0;
+@@ -1948,16 +1938,15 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ goto out;
+ }
+
+- if (bytes_to_blks(inode, len) == 0)
+- len = blks_to_bytes(inode, 1);
+-
+- start_blk = bytes_to_blks(inode, start);
+- last_blk = bytes_to_blks(inode, start + len - 1);
++ start_blk = F2FS_BYTES_TO_BLK(start);
++ last_blk = F2FS_BYTES_TO_BLK(start + len - 1);
++ blk_len = last_blk - start_blk + 1;
++ max_len = F2FS_BYTES_TO_BLK(maxbytes) - start_blk;
+
+ next:
+ memset(&map, 0, sizeof(map));
+ map.m_lblk = start_blk;
+- map.m_len = bytes_to_blks(inode, len);
++ map.m_len = blk_len;
+ map.m_next_pgofs = &next_pgofs;
+ map.m_seg_type = NO_CHECK_TYPE;
+
+@@ -1974,12 +1963,23 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) {
+ start_blk = next_pgofs;
+
+- if (blks_to_bytes(inode, start_blk) < maxbytes)
++ if (F2FS_BLK_TO_BYTES(start_blk) < maxbytes)
+ goto prep_next;
+
+ flags |= FIEMAP_EXTENT_LAST;
+ }
+
++ /*
++ * current extent may cross boundary of inquiry, increase len to
++ * requery.
++ */
++ if (!compr_cluster && (map.m_flags & F2FS_MAP_MAPPED) &&
++ map.m_lblk + map.m_len - 1 == last_blk &&
++ blk_len != max_len) {
++ blk_len = max_len;
++ goto next;
++ }
++
+ compr_appended = false;
+ /* In a case of compressed cluster, append this to the last extent */
+ if (compr_cluster && ((map.m_flags & F2FS_MAP_DELALLOC) ||
+@@ -2011,14 +2011,14 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ } else if (compr_appended) {
+ unsigned int appended_blks = cluster_size -
+ count_in_cluster + 1;
+- size += blks_to_bytes(inode, appended_blks);
++ size += F2FS_BLK_TO_BYTES(appended_blks);
+ start_blk += appended_blks;
+ compr_cluster = false;
+ } else {
+- logical = blks_to_bytes(inode, start_blk);
++ logical = F2FS_BLK_TO_BYTES(start_blk);
+ phys = __is_valid_data_blkaddr(map.m_pblk) ?
+- blks_to_bytes(inode, map.m_pblk) : 0;
+- size = blks_to_bytes(inode, map.m_len);
++ F2FS_BLK_TO_BYTES(map.m_pblk) : 0;
++ size = F2FS_BLK_TO_BYTES(map.m_len);
+ flags = 0;
+
+ if (compr_cluster) {
+@@ -2026,13 +2026,13 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ count_in_cluster += map.m_len;
+ if (count_in_cluster == cluster_size) {
+ compr_cluster = false;
+- size += blks_to_bytes(inode, 1);
++ size += F2FS_BLKSIZE;
+ }
+ } else if (map.m_flags & F2FS_MAP_DELALLOC) {
+ flags = FIEMAP_EXTENT_UNWRITTEN;
+ }
+
+- start_blk += bytes_to_blks(inode, size);
++ start_blk += F2FS_BYTES_TO_BLK(size);
+ }
+
+ prep_next:
+@@ -2070,7 +2070,7 @@ static int f2fs_read_single_page(struct inode *inode, struct folio *folio,
+ struct readahead_control *rac)
+ {
+ struct bio *bio = *bio_ret;
+- const unsigned blocksize = blks_to_bytes(inode, 1);
++ const unsigned int blocksize = F2FS_BLKSIZE;
+ sector_t block_in_file;
+ sector_t last_block;
+ sector_t last_block_in_file;
+@@ -2080,8 +2080,8 @@ static int f2fs_read_single_page(struct inode *inode, struct folio *folio,
+
+ block_in_file = (sector_t)index;
+ last_block = block_in_file + nr_pages;
+- last_block_in_file = bytes_to_blks(inode,
+- f2fs_readpage_limit(inode) + blocksize - 1);
++ last_block_in_file = F2FS_BYTES_TO_BLK(f2fs_readpage_limit(inode) +
++ blocksize - 1);
+ if (last_block > last_block_in_file)
+ last_block = last_block_in_file;
+
+@@ -2181,7 +2181,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ struct bio *bio = *bio_ret;
+ unsigned int start_idx = cc->cluster_idx << cc->log_cluster_size;
+ sector_t last_block_in_file;
+- const unsigned blocksize = blks_to_bytes(inode, 1);
++ const unsigned int blocksize = F2FS_BLKSIZE;
+ struct decompress_io_ctx *dic = NULL;
+ struct extent_info ei = {};
+ bool from_dnode = true;
+@@ -2190,8 +2190,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+
+ f2fs_bug_on(sbi, f2fs_cluster_is_empty(cc));
+
+- last_block_in_file = bytes_to_blks(inode,
+- f2fs_readpage_limit(inode) + blocksize - 1);
++ last_block_in_file = F2FS_BYTES_TO_BLK(f2fs_readpage_limit(inode) +
++ blocksize - 1);
+
+ /* get rid of pages beyond EOF */
+ for (i = 0; i < cc->cluster_size; i++) {
+@@ -3952,7 +3952,7 @@ static int check_swap_activate(struct swap_info_struct *sis,
+ * to be very smart.
+ */
+ cur_lblock = 0;
+- last_lblock = bytes_to_blks(inode, i_size_read(inode));
++ last_lblock = F2FS_BYTES_TO_BLK(i_size_read(inode));
+
+ while (cur_lblock < last_lblock && cur_lblock < sis->max) {
+ struct f2fs_map_blocks map;
+@@ -4195,8 +4195,8 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ pgoff_t next_pgofs = 0;
+ int err;
+
+- map.m_lblk = bytes_to_blks(inode, offset);
+- map.m_len = bytes_to_blks(inode, offset + length - 1) - map.m_lblk + 1;
++ map.m_lblk = F2FS_BYTES_TO_BLK(offset);
++ map.m_len = F2FS_BYTES_TO_BLK(offset + length - 1) - map.m_lblk + 1;
+ map.m_next_pgofs = &next_pgofs;
+ map.m_seg_type = f2fs_rw_hint_to_seg_type(F2FS_I_SB(inode),
+ inode->i_write_hint);
+@@ -4207,7 +4207,7 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ if (err)
+ return err;
+
+- iomap->offset = blks_to_bytes(inode, map.m_lblk);
++ iomap->offset = F2FS_BLK_TO_BYTES(map.m_lblk);
+
+ /*
+ * When inline encryption is enabled, sometimes I/O to an encrypted file
+@@ -4227,21 +4227,21 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ if (WARN_ON_ONCE(map.m_pblk == NEW_ADDR))
+ return -EINVAL;
+
+- iomap->length = blks_to_bytes(inode, map.m_len);
++ iomap->length = F2FS_BLK_TO_BYTES(map.m_len);
+ iomap->type = IOMAP_MAPPED;
+ iomap->flags |= IOMAP_F_MERGED;
+ iomap->bdev = map.m_bdev;
+- iomap->addr = blks_to_bytes(inode, map.m_pblk);
++ iomap->addr = F2FS_BLK_TO_BYTES(map.m_pblk);
+ } else {
+ if (flags & IOMAP_WRITE)
+ return -ENOTBLK;
+
+ if (map.m_pblk == NULL_ADDR) {
+- iomap->length = blks_to_bytes(inode, next_pgofs) -
+- iomap->offset;
++ iomap->length = F2FS_BLK_TO_BYTES(next_pgofs) -
++ iomap->offset;
+ iomap->type = IOMAP_HOLE;
+ } else if (map.m_pblk == NEW_ADDR) {
+- iomap->length = blks_to_bytes(inode, map.m_len);
++ iomap->length = F2FS_BLK_TO_BYTES(map.m_len);
+ iomap->type = IOMAP_UNWRITTEN;
+ } else {
+ f2fs_bug_on(F2FS_I_SB(inode), 1);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 62ac440d94168a..fb09c8e9bc5732 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -346,21 +346,22 @@ static struct extent_tree *__grab_extent_tree(struct inode *inode,
+ }
+
+ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+- struct extent_tree *et)
++ struct extent_tree *et, unsigned int nr_shrink)
+ {
+ struct rb_node *node, *next;
+ struct extent_node *en;
+- unsigned int count = atomic_read(&et->node_cnt);
++ unsigned int count;
+
+ node = rb_first_cached(&et->root);
+- while (node) {
++
++ for (count = 0; node && count < nr_shrink; count++) {
+ next = rb_next(node);
+ en = rb_entry(node, struct extent_node, rb_node);
+ __release_extent_node(sbi, et, en);
+ node = next;
+ }
+
+- return count - atomic_read(&et->node_cnt);
++ return count;
+ }
+
+ static void __drop_largest_extent(struct extent_tree *et,
+@@ -579,6 +580,30 @@ static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ return en;
+ }
+
++static unsigned int __destroy_extent_node(struct inode *inode,
++ enum extent_type type)
++{
++ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
++ struct extent_tree *et = F2FS_I(inode)->extent_tree[type];
++ unsigned int nr_shrink = type == EX_READ ?
++ READ_EXTENT_CACHE_SHRINK_NUMBER :
++ AGE_EXTENT_CACHE_SHRINK_NUMBER;
++ unsigned int node_cnt = 0;
++
++ if (!et || !atomic_read(&et->node_cnt))
++ return 0;
++
++ while (atomic_read(&et->node_cnt)) {
++ write_lock(&et->lock);
++ node_cnt += __free_extent_tree(sbi, et, nr_shrink);
++ write_unlock(&et->lock);
++ }
++
++ f2fs_bug_on(sbi, atomic_read(&et->node_cnt));
++
++ return node_cnt;
++}
++
+ static void __update_extent_tree_range(struct inode *inode,
+ struct extent_info *tei, enum extent_type type)
+ {
+@@ -649,7 +674,9 @@ static void __update_extent_tree_range(struct inode *inode,
+ }
+
+ if (end < org_end && (type != EX_READ ||
+- org_end - end >= F2FS_MIN_EXTENT_LEN)) {
++ (org_end - end >= F2FS_MIN_EXTENT_LEN &&
++ atomic_read(&et->node_cnt) <
++ sbi->max_read_extent_count))) {
+ if (parts) {
+ __set_extent_info(&ei,
+ end, org_end - end,
+@@ -717,9 +744,6 @@ static void __update_extent_tree_range(struct inode *inode,
+ }
+ }
+
+- if (is_inode_flag_set(inode, FI_NO_EXTENT))
+- __free_extent_tree(sbi, et);
+-
+ if (et->largest_updated) {
+ et->largest_updated = false;
+ updated = true;
+@@ -737,6 +761,9 @@ static void __update_extent_tree_range(struct inode *inode,
+ out_read_extent_cache:
+ write_unlock(&et->lock);
+
++ if (is_inode_flag_set(inode, FI_NO_EXTENT))
++ __destroy_extent_node(inode, EX_READ);
++
+ if (updated)
+ f2fs_mark_inode_dirty_sync(inode, true);
+ }
+@@ -899,10 +926,14 @@ static unsigned int __shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink
+ list_for_each_entry_safe(et, next, &eti->zombie_list, list) {
+ if (atomic_read(&et->node_cnt)) {
+ write_lock(&et->lock);
+- node_cnt += __free_extent_tree(sbi, et);
++ node_cnt += __free_extent_tree(sbi, et,
++ nr_shrink - node_cnt - tree_cnt);
+ write_unlock(&et->lock);
+ }
+- f2fs_bug_on(sbi, atomic_read(&et->node_cnt));
++
++ if (atomic_read(&et->node_cnt))
++ goto unlock_out;
++
+ list_del_init(&et->list);
+ radix_tree_delete(&eti->extent_tree_root, et->ino);
+ kmem_cache_free(extent_tree_slab, et);
+@@ -1041,23 +1072,6 @@ unsigned int f2fs_shrink_age_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink
+ return __shrink_extent_tree(sbi, nr_shrink, EX_BLOCK_AGE);
+ }
+
+-static unsigned int __destroy_extent_node(struct inode *inode,
+- enum extent_type type)
+-{
+- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+- struct extent_tree *et = F2FS_I(inode)->extent_tree[type];
+- unsigned int node_cnt = 0;
+-
+- if (!et || !atomic_read(&et->node_cnt))
+- return 0;
+-
+- write_lock(&et->lock);
+- node_cnt = __free_extent_tree(sbi, et);
+- write_unlock(&et->lock);
+-
+- return node_cnt;
+-}
+-
+ void f2fs_destroy_extent_node(struct inode *inode)
+ {
+ __destroy_extent_node(inode, EX_READ);
+@@ -1066,7 +1080,6 @@ void f2fs_destroy_extent_node(struct inode *inode)
+
+ static void __drop_extent_tree(struct inode *inode, enum extent_type type)
+ {
+- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et = F2FS_I(inode)->extent_tree[type];
+ bool updated = false;
+
+@@ -1074,7 +1087,6 @@ static void __drop_extent_tree(struct inode *inode, enum extent_type type)
+ return;
+
+ write_lock(&et->lock);
+- __free_extent_tree(sbi, et);
+ if (type == EX_READ) {
+ set_inode_flag(inode, FI_NO_EXTENT);
+ if (et->largest.len) {
+@@ -1083,6 +1095,9 @@ static void __drop_extent_tree(struct inode *inode, enum extent_type type)
+ }
+ }
+ write_unlock(&et->lock);
++
++ __destroy_extent_node(inode, type);
++
+ if (updated)
+ f2fs_mark_inode_dirty_sync(inode, true);
+ }
+@@ -1156,6 +1171,7 @@ void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi)
+ sbi->hot_data_age_threshold = DEF_HOT_DATA_AGE_THRESHOLD;
+ sbi->warm_data_age_threshold = DEF_WARM_DATA_AGE_THRESHOLD;
+ sbi->last_age_weight = LAST_AGE_WEIGHT;
++ sbi->max_read_extent_count = DEF_MAX_READ_EXTENT_COUNT;
+ }
+
+ int __init f2fs_create_extent_cache(void)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 93a5e1c24e566e..cec3dd205b3df8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -634,6 +634,9 @@ enum {
+ #define DEF_HOT_DATA_AGE_THRESHOLD 262144
+ #define DEF_WARM_DATA_AGE_THRESHOLD 2621440
+
++/* default max read extent count per inode */
++#define DEF_MAX_READ_EXTENT_COUNT 10240
++
+ /* extent cache type */
+ enum extent_type {
+ EX_READ,
+@@ -1619,6 +1622,7 @@ struct f2fs_sb_info {
+ /* for extent tree cache */
+ struct extent_tree_info extent_tree[NR_EXTENT_CACHES];
+ atomic64_t allocated_data_blocks; /* for block age extent_cache */
++ unsigned int max_read_extent_count; /* max read extent count per inode */
+
+ /* The threshold used for hot and warm data seperation*/
+ unsigned int hot_data_age_threshold;
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 1ed86df343a5d1..10780e37fc7b68 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -775,8 +775,10 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ !is_inode_flag_set(inode, FI_DIRTY_INODE))
+ return 0;
+
+- if (!f2fs_is_checkpoint_ready(sbi))
++ if (!f2fs_is_checkpoint_ready(sbi)) {
++ f2fs_mark_inode_dirty_sync(inode, true);
+ return -ENOSPC;
++ }
+
+ /*
+ * We need to balance fs here to prevent from producing dirty node pages
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index af36c6d6542b8c..4d7b9fd6ef31ab 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1341,7 +1341,12 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ err = -EFSCORRUPTED;
+ dec_valid_node_count(sbi, dn->inode, !ofs);
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+- f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
++ f2fs_warn_ratelimited(sbi,
++ "f2fs_new_node_page: inconsistent nat entry, "
++ "ino:%u, nid:%u, blkaddr:%u, ver:%u, flag:%u",
++ new_ni.ino, new_ni.nid, new_ni.blk_addr,
++ new_ni.version, new_ni.flag);
++ f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
+ goto fail;
+ }
+ #endif
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index c56e8c87393523..d9a44f03e558bf 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -789,6 +789,13 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ return count;
+ }
+
++ if (!strcmp(a->attr.name, "max_read_extent_count")) {
++ if (t > UINT_MAX)
++ return -EINVAL;
++ *ui = (unsigned int)t;
++ return count;
++ }
++
+ if (!strcmp(a->attr.name, "ipu_policy")) {
+ if (t >= BIT(F2FS_IPU_MAX))
+ return -EINVAL;
+@@ -1054,6 +1061,8 @@ F2FS_SBI_GENERAL_RW_ATTR(revoked_atomic_block);
+ F2FS_SBI_GENERAL_RW_ATTR(hot_data_age_threshold);
+ F2FS_SBI_GENERAL_RW_ATTR(warm_data_age_threshold);
+ F2FS_SBI_GENERAL_RW_ATTR(last_age_weight);
++/* read extent cache */
++F2FS_SBI_GENERAL_RW_ATTR(max_read_extent_count);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
+ F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy);
+@@ -1244,6 +1253,7 @@ static struct attribute *f2fs_attrs[] = {
+ ATTR_LIST(hot_data_age_threshold),
+ ATTR_LIST(warm_data_age_threshold),
+ ATTR_LIST(last_age_weight),
++ ATTR_LIST(max_read_extent_count),
+ NULL,
+ };
+ ATTRIBUTE_GROUPS(f2fs);
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index e22c1edc32b39e..b9cef63c78717f 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1537,11 +1537,13 @@ static struct inode *gfs2_alloc_inode(struct super_block *sb)
+ if (!ip)
+ return NULL;
+ ip->i_no_addr = 0;
++ ip->i_no_formal_ino = 0;
+ ip->i_flags = 0;
+ ip->i_gl = NULL;
+ gfs2_holder_mark_uninitialized(&ip->i_iopen_gh);
+ memset(&ip->i_res, 0, sizeof(ip->i_res));
+ RB_CLEAR_NODE(&ip->i_res.rs_node);
++ ip->i_diskflags = 0;
+ ip->i_rahead = 0;
+ return &ip->i_inode;
+ }
+diff --git a/fs/jffs2/compr_rtime.c b/fs/jffs2/compr_rtime.c
+index 79e771ab624f47..3bd9d2f3bece20 100644
+--- a/fs/jffs2/compr_rtime.c
++++ b/fs/jffs2/compr_rtime.c
+@@ -95,6 +95,9 @@ static int jffs2_rtime_decompress(unsigned char *data_in,
+
+ positions[value]=outpos;
+ if (repeat) {
++ if ((outpos + repeat) > destlen) {
++ return 1;
++ }
+ if (backoffs + repeat >= outpos) {
+ while(repeat) {
+ cpage_out[outpos++] = cpage_out[backoffs++];
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 3ab410059dc202..f9009e4f9ffd89 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1820,6 +1820,9 @@ dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
+ return -EIO;
+ dp = (struct dmap *) mp->data;
+
++ if (dp->tree.budmin < 0)
++ return -EIO;
++
+ /* try to allocate the blocks.
+ */
+ rc = dbAllocDmapLev(bmp, dp, (int) nblocks, l2nb, results);
+@@ -2888,6 +2891,9 @@ static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ /* bubble the new value up the tree as required.
+ */
+ for (k = 0; k < le32_to_cpu(tp->dmt_height); k++) {
++ if (lp == 0)
++ break;
++
+ /* get the index of the first leaf of the 4 leaf
+ * group containing the specified leaf (leafno).
+ */
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 5d3127ca68a42d..8f85177f284b5a 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -2891,6 +2891,14 @@ int jfs_readdir(struct file *file, struct dir_context *ctx)
+ stbl = DT_GETSTBL(p);
+
+ for (i = index; i < p->header.nextindex; i++) {
++ if (stbl[i] < 0 || stbl[i] > 127) {
++ jfs_err("JFS: Invalid stbl[%d] = %d for inode %ld, block = %lld",
++ i, stbl[i], (long)ip->i_ino, (long long)bn);
++ free_page(dirent_buf);
++ DT_PUTPAGE(mp);
++ return -EIO;
++ }
++
+ d = (struct ldtentry *) & p->slot[stbl[i]];
+
+ if (((long) jfs_dirent + d->namlen + 1) >
+@@ -3086,6 +3094,13 @@ static int dtReadFirst(struct inode *ip, struct btstack * btstack)
+
+ /* get the leftmost entry */
+ stbl = DT_GETSTBL(p);
++
++ if (stbl[0] < 0 || stbl[0] > 127) {
++ DT_PUTPAGE(mp);
++ jfs_error(ip->i_sb, "stbl[0] out of bound\n");
++ return -EIO;
++ }
++
+ xd = (pxd_t *) & p->slot[stbl[0]];
+
+ /* get the child page block address */
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index a8602729586ab7..f61c58fbf117d3 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -70,7 +70,7 @@ static inline unsigned int nilfs_chunk_size(struct inode *inode)
+ */
+ static unsigned int nilfs_last_byte(struct inode *inode, unsigned long page_nr)
+ {
+- unsigned int last_byte = inode->i_size;
++ u64 last_byte = inode->i_size;
+
+ last_byte -= page_nr << PAGE_SHIFT;
+ if (last_byte > PAGE_SIZE)
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 9644bc72e4573b..8e2d43fc6f7c1f 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -266,13 +266,6 @@ static int create_fd(struct fsnotify_group *group, const struct path *path,
+ group->fanotify_data.f_flags | __FMODE_NONOTIFY,
+ current_cred());
+ if (IS_ERR(new_file)) {
+- /*
+- * we still send an event even if we can't open the file. this
+- * can happen when say tasks are gone and we try to open their
+- * /proc files or we try to open a WRONLY file like in sysfs
+- * we just send the errno to userspace since there isn't much
+- * else we can do.
+- */
+ put_unused_fd(client_fd);
+ client_fd = PTR_ERR(new_file);
+ } else {
+@@ -663,7 +656,7 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ unsigned int info_mode = FAN_GROUP_FLAG(group, FANOTIFY_INFO_MODES);
+ unsigned int pidfd_mode = info_mode & FAN_REPORT_PIDFD;
+ struct file *f = NULL, *pidfd_file = NULL;
+- int ret, pidfd = FAN_NOPIDFD, fd = FAN_NOFD;
++ int ret, pidfd = -ESRCH, fd = -EBADF;
+
+ pr_debug("%s: group=%p event=%p\n", __func__, group, event);
+
+@@ -691,10 +684,39 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ if (!FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&
+ path && path->mnt && path->dentry) {
+ fd = create_fd(group, path, &f);
+- if (fd < 0)
+- return fd;
++ /*
++ * Opening an fd from dentry can fail for several reasons.
++ * For example, when tasks are gone and we try to open their
++ * /proc files or we try to open a WRONLY file like in sysfs
++ * or when trying to open a file that was deleted on the
++ * remote network server.
++ *
++ * For a group with FAN_REPORT_FD_ERROR, we will send the
++ * event with the error instead of the open fd, otherwise
++ * Userspace may not get the error at all.
++ * In any case, userspace will not know which file failed to
++ * open, so add a debug print for further investigation.
++ */
++ if (fd < 0) {
++ pr_debug("fanotify: create_fd(%pd2) failed err=%d\n",
++ path->dentry, fd);
++ if (!FAN_GROUP_FLAG(group, FAN_REPORT_FD_ERROR)) {
++ /*
++ * Historically, we've handled EOPENSTALE in a
++ * special way and silently dropped such
++ * events. Now we have to keep it to maintain
++ * backward compatibility...
++ */
++ if (fd == -EOPENSTALE)
++ fd = 0;
++ return fd;
++ }
++ }
+ }
+- metadata.fd = fd;
++ if (FAN_GROUP_FLAG(group, FAN_REPORT_FD_ERROR))
++ metadata.fd = fd;
++ else
++ metadata.fd = fd >= 0 ? fd : FAN_NOFD;
+
+ if (pidfd_mode) {
+ /*
+@@ -709,18 +731,16 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ * The PIDTYPE_TGID check for an event->pid is performed
+ * preemptively in an attempt to catch out cases where the event
+ * listener reads events after the event generating process has
+- * already terminated. Report FAN_NOPIDFD to the event listener
+- * in those cases, with all other pidfd creation errors being
+- * reported as FAN_EPIDFD.
++ * already terminated. Depending on flag FAN_REPORT_FD_ERROR,
++ * report either -ESRCH or FAN_NOPIDFD to the event listener in
++ * those cases with all other pidfd creation errors reported as
++ * the error code itself or as FAN_EPIDFD.
+ */
+- if (metadata.pid == 0 ||
+- !pid_has_task(event->pid, PIDTYPE_TGID)) {
+- pidfd = FAN_NOPIDFD;
+- } else {
++ if (metadata.pid && pid_has_task(event->pid, PIDTYPE_TGID))
+ pidfd = pidfd_prepare(event->pid, 0, &pidfd_file);
+- if (pidfd < 0)
+- pidfd = FAN_EPIDFD;
+- }
++
++ if (!FAN_GROUP_FLAG(group, FAN_REPORT_FD_ERROR) && pidfd < 0)
++ pidfd = pidfd == -ESRCH ? FAN_NOPIDFD : FAN_EPIDFD;
+ }
+
+ ret = -EFAULT;
+@@ -737,9 +757,6 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ buf += FAN_EVENT_METADATA_LEN;
+ count -= FAN_EVENT_METADATA_LEN;
+
+- if (fanotify_is_perm_event(event->mask))
+- FANOTIFY_PERM(event)->fd = fd;
+-
+ if (info_mode) {
+ ret = copy_info_records_to_user(event, info, info_mode, pidfd,
+ buf, count);
+@@ -753,15 +770,18 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ if (pidfd_file)
+ fd_install(pidfd, pidfd_file);
+
++ if (fanotify_is_perm_event(event->mask))
++ FANOTIFY_PERM(event)->fd = fd;
++
+ return metadata.event_len;
+
+ out_close_fd:
+- if (fd != FAN_NOFD) {
++ if (f) {
+ put_unused_fd(fd);
+ fput(f);
+ }
+
+- if (pidfd >= 0) {
++ if (pidfd_file) {
+ put_unused_fd(pidfd);
+ fput(pidfd_file);
+ }
+@@ -828,15 +848,6 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
+ }
+
+ ret = copy_event_to_user(group, event, buf, count);
+- if (unlikely(ret == -EOPENSTALE)) {
+- /*
+- * We cannot report events with stale fd so drop it.
+- * Setting ret to 0 will continue the event loop and
+- * do the right thing if there are no more events to
+- * read (i.e. return bytes read, -EAGAIN or wait).
+- */
+- ret = 0;
+- }
+
+ /*
+ * Permission events get queued to wait for response. Other
+@@ -845,7 +856,7 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
+ if (!fanotify_is_perm_event(event->mask)) {
+ fsnotify_destroy_event(group, &event->fse);
+ } else {
+- if (ret <= 0) {
++ if (ret <= 0 || FANOTIFY_PERM(event)->fd < 0) {
+ spin_lock(&group->notification_lock);
+ finish_permission_event(group,
+ FANOTIFY_PERM(event), FAN_DENY, NULL);
+@@ -1954,7 +1965,7 @@ static int __init fanotify_user_setup(void)
+ FANOTIFY_DEFAULT_MAX_USER_MARKS);
+
+ BUILD_BUG_ON(FANOTIFY_INIT_FLAGS & FANOTIFY_INTERNAL_GROUP_FLAGS);
+- BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 12);
++ BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 13);
+ BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 11);
+
+ fanotify_mark_cache = KMEM_CACHE(fanotify_mark,
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 0763202d00c992..8d789b017fa9b6 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -977,7 +977,7 @@ int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+
+ /* Check for compressed frame. */
+ err = attr_is_frame_compressed(ni, attr_b, vcn >> NTFS_LZNT_CUNIT,
+- &hint);
++ &hint, run);
+ if (err)
+ goto out;
+
+@@ -1521,16 +1521,16 @@ int attr_wof_frame_info(struct ntfs_inode *ni, struct ATTRIB *attr,
+ * attr_is_frame_compressed - Used to detect compressed frame.
+ *
+ * attr - base (primary) attribute segment.
++ * run - run to use, usually == &ni->file.run.
+ * Only base segments contains valid 'attr->nres.c_unit'
+ */
+ int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+- CLST frame, CLST *clst_data)
++ CLST frame, CLST *clst_data, struct runs_tree *run)
+ {
+ int err;
+ u32 clst_frame;
+ CLST clen, lcn, vcn, alen, slen, vcn_next;
+ size_t idx;
+- struct runs_tree *run;
+
+ *clst_data = 0;
+
+@@ -1542,7 +1542,6 @@ int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+
+ clst_frame = 1u << attr->nres.c_unit;
+ vcn = frame * clst_frame;
+- run = &ni->file.run;
+
+ if (!run_lookup_entry(run, vcn, &lcn, &clen, &idx)) {
+ err = attr_load_runs_vcn(ni, attr->type, attr_name(attr),
+@@ -1678,7 +1677,7 @@ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ if (err)
+ goto out;
+
+- err = attr_is_frame_compressed(ni, attr_b, frame, &clst_data);
++ err = attr_is_frame_compressed(ni, attr_b, frame, &clst_data, run);
+ if (err)
+ goto out;
+
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 41c7ffad279016..c33e818b3164cd 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1900,46 +1900,6 @@ enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
+ return REPARSE_LINK;
+ }
+
+-/*
+- * fiemap_fill_next_extent_k - a copy of fiemap_fill_next_extent
+- * but it uses 'fe_k' instead of fieinfo->fi_extents_start
+- */
+-static int fiemap_fill_next_extent_k(struct fiemap_extent_info *fieinfo,
+- struct fiemap_extent *fe_k, u64 logical,
+- u64 phys, u64 len, u32 flags)
+-{
+- struct fiemap_extent extent;
+-
+- /* only count the extents */
+- if (fieinfo->fi_extents_max == 0) {
+- fieinfo->fi_extents_mapped++;
+- return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
+- }
+-
+- if (fieinfo->fi_extents_mapped >= fieinfo->fi_extents_max)
+- return 1;
+-
+- if (flags & FIEMAP_EXTENT_DELALLOC)
+- flags |= FIEMAP_EXTENT_UNKNOWN;
+- if (flags & FIEMAP_EXTENT_DATA_ENCRYPTED)
+- flags |= FIEMAP_EXTENT_ENCODED;
+- if (flags & (FIEMAP_EXTENT_DATA_TAIL | FIEMAP_EXTENT_DATA_INLINE))
+- flags |= FIEMAP_EXTENT_NOT_ALIGNED;
+-
+- memset(&extent, 0, sizeof(extent));
+- extent.fe_logical = logical;
+- extent.fe_physical = phys;
+- extent.fe_length = len;
+- extent.fe_flags = flags;
+-
+- memcpy(fe_k + fieinfo->fi_extents_mapped, &extent, sizeof(extent));
+-
+- fieinfo->fi_extents_mapped++;
+- if (fieinfo->fi_extents_mapped == fieinfo->fi_extents_max)
+- return 1;
+- return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
+-}
+-
+ /*
+ * ni_fiemap - Helper for file_fiemap().
+ *
+@@ -1950,11 +1910,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ __u64 vbo, __u64 len)
+ {
+ int err = 0;
+- struct fiemap_extent *fe_k = NULL;
+ struct ntfs_sb_info *sbi = ni->mi.sbi;
+ u8 cluster_bits = sbi->cluster_bits;
+- struct runs_tree *run;
+- struct rw_semaphore *run_lock;
++ struct runs_tree run;
+ struct ATTRIB *attr;
+ CLST vcn = vbo >> cluster_bits;
+ CLST lcn, clen;
+@@ -1965,13 +1923,11 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ u32 flags;
+ bool ok;
+
++ run_init(&run);
+ if (S_ISDIR(ni->vfs_inode.i_mode)) {
+- run = &ni->dir.alloc_run;
+ attr = ni_find_attr(ni, NULL, NULL, ATTR_ALLOC, I30_NAME,
+ ARRAY_SIZE(I30_NAME), NULL, NULL);
+- run_lock = &ni->dir.run_lock;
+ } else {
+- run = &ni->file.run;
+ attr = ni_find_attr(ni, NULL, NULL, ATTR_DATA, NULL, 0, NULL,
+ NULL);
+ if (!attr) {
+@@ -1986,7 +1942,6 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ "fiemap is not supported for compressed file (cp -r)");
+ goto out;
+ }
+- run_lock = &ni->file.run_lock;
+ }
+
+ if (!attr || !attr->non_res) {
+@@ -1998,51 +1953,33 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ goto out;
+ }
+
+- /*
+- * To avoid lock problems replace pointer to user memory by pointer to kernel memory.
+- */
+- fe_k = kmalloc_array(fieinfo->fi_extents_max,
+- sizeof(struct fiemap_extent),
+- GFP_NOFS | __GFP_ZERO);
+- if (!fe_k) {
+- err = -ENOMEM;
+- goto out;
+- }
+-
+ end = vbo + len;
+ alloc_size = le64_to_cpu(attr->nres.alloc_size);
+ if (end > alloc_size)
+ end = alloc_size;
+
+- down_read(run_lock);
+
+ while (vbo < end) {
+ if (idx == -1) {
+- ok = run_lookup_entry(run, vcn, &lcn, &clen, &idx);
++ ok = run_lookup_entry(&run, vcn, &lcn, &clen, &idx);
+ } else {
+ CLST vcn_next = vcn;
+
+- ok = run_get_entry(run, ++idx, &vcn, &lcn, &clen) &&
++ ok = run_get_entry(&run, ++idx, &vcn, &lcn, &clen) &&
+ vcn == vcn_next;
+ if (!ok)
+ vcn = vcn_next;
+ }
+
+ if (!ok) {
+- up_read(run_lock);
+- down_write(run_lock);
+-
+ err = attr_load_runs_vcn(ni, attr->type,
+ attr_name(attr),
+- attr->name_len, run, vcn);
+-
+- up_write(run_lock);
+- down_read(run_lock);
++ attr->name_len, &run, vcn);
+
+ if (err)
+ break;
+
+- ok = run_lookup_entry(run, vcn, &lcn, &clen, &idx);
++ ok = run_lookup_entry(&run, vcn, &lcn, &clen, &idx);
+
+ if (!ok) {
+ err = -EINVAL;
+@@ -2067,8 +2004,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ } else if (is_attr_compressed(attr)) {
+ CLST clst_data;
+
+- err = attr_is_frame_compressed(
+- ni, attr, vcn >> attr->nres.c_unit, &clst_data);
++ err = attr_is_frame_compressed(ni, attr,
++ vcn >> attr->nres.c_unit,
++ &clst_data, &run);
+ if (err)
+ break;
+ if (clst_data < NTFS_LZNT_CLUSTERS)
+@@ -2097,8 +2035,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ if (vbo + dlen >= end)
+ flags |= FIEMAP_EXTENT_LAST;
+
+- err = fiemap_fill_next_extent_k(fieinfo, fe_k, vbo, lbo,
+- dlen, flags);
++ err = fiemap_fill_next_extent(fieinfo, vbo, lbo, dlen,
++ flags);
+
+ if (err < 0)
+ break;
+@@ -2119,8 +2057,7 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ if (vbo + bytes >= end)
+ flags |= FIEMAP_EXTENT_LAST;
+
+- err = fiemap_fill_next_extent_k(fieinfo, fe_k, vbo, lbo, bytes,
+- flags);
++ err = fiemap_fill_next_extent(fieinfo, vbo, lbo, bytes, flags);
+ if (err < 0)
+ break;
+ if (err == 1) {
+@@ -2131,19 +2068,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ vbo += bytes;
+ }
+
+- up_read(run_lock);
+-
+- /*
+- * Copy to user memory out of lock
+- */
+- if (copy_to_user(fieinfo->fi_extents_start, fe_k,
+- fieinfo->fi_extents_max *
+- sizeof(struct fiemap_extent))) {
+- err = -EFAULT;
+- }
+-
+ out:
+- kfree(fe_k);
++ run_close(&run);
+ return err;
+ }
+
+@@ -2672,7 +2598,8 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ down_write(&ni->file.run_lock);
+ run_truncate_around(run, le64_to_cpu(attr->nres.svcn));
+ frame = frame_vbo >> (cluster_bits + NTFS_LZNT_CUNIT);
+- err = attr_is_frame_compressed(ni, attr, frame, &clst_data);
++ err = attr_is_frame_compressed(ni, attr, frame, &clst_data,
++ run);
+ up_write(&ni->file.run_lock);
+ if (err)
+ goto out1;
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index 26e1e1379c04e2..cd8e8374bb5a0a 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -446,7 +446,8 @@ int attr_wof_frame_info(struct ntfs_inode *ni, struct ATTRIB *attr,
+ struct runs_tree *run, u64 frame, u64 frames,
+ u8 frame_bits, u32 *ondisk_size, u64 *vbo_data);
+ int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+- CLST frame, CLST *clst_data);
++ CLST frame, CLST *clst_data,
++ struct runs_tree *run);
+ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ u64 new_valid);
+ int attr_collapse_range(struct ntfs_inode *ni, u64 vbo, u64 bytes);
+diff --git a/fs/ntfs3/run.c b/fs/ntfs3/run.c
+index 58e988cd80490d..48566dff0dc92b 100644
+--- a/fs/ntfs3/run.c
++++ b/fs/ntfs3/run.c
+@@ -1055,8 +1055,8 @@ int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+ {
+ int ret, err;
+ CLST next_vcn, lcn, len;
+- size_t index;
+- bool ok;
++ size_t index, done;
++ bool ok, zone;
+ struct wnd_bitmap *wnd;
+
+ ret = run_unpack(run, sbi, ino, svcn, evcn, vcn, run_buf, run_buf_size);
+@@ -1087,8 +1087,9 @@ int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+ continue;
+
+ down_read_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS);
++ zone = max(wnd->zone_bit, lcn) < min(wnd->zone_end, lcn + len);
+ /* Check for free blocks. */
+- ok = wnd_is_used(wnd, lcn, len);
++ ok = !zone && wnd_is_used(wnd, lcn, len);
+ up_read(&wnd->rw_lock);
+ if (ok)
+ continue;
+@@ -1096,14 +1097,33 @@ int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+ /* Looks like volume is corrupted. */
+ ntfs_set_state(sbi, NTFS_DIRTY_ERROR);
+
+- if (down_write_trylock(&wnd->rw_lock)) {
+- /* Mark all zero bits as used in range [lcn, lcn+len). */
+- size_t done;
+- err = wnd_set_used_safe(wnd, lcn, len, &done);
+- up_write(&wnd->rw_lock);
+- if (err)
+- return err;
++ if (!down_write_trylock(&wnd->rw_lock))
++ continue;
++
++ if (zone) {
++ /*
++ * Range [lcn, lcn + len) intersects with zone.
++ * To avoid complex with zone just turn it off.
++ */
++ wnd_zone_set(wnd, 0, 0);
++ }
++
++ /* Mark all zero bits as used in range [lcn, lcn+len). */
++ err = wnd_set_used_safe(wnd, lcn, len, &done);
++ if (zone) {
++ /* Restore zone. Lock mft run. */
++ struct rw_semaphore *lock;
++ lock = is_mounted(sbi) ? &sbi->mft.ni->file.run_lock :
++ NULL;
++ if (lock)
++ down_read(lock);
++ ntfs_refresh_zone(sbi);
++ if (lock)
++ up_read(lock);
+ }
++ up_write(&wnd->rw_lock);
++ if (err)
++ return err;
+ }
+
+ return ret;
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 60df52e4c1f878..764ecbd5ad41dd 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -3110,6 +3110,7 @@ static void *ocfs2_dlm_seq_next(struct seq_file *m, void *v, loff_t *pos)
+ struct ocfs2_lock_res *iter = v;
+ struct ocfs2_lock_res *dummy = &priv->p_iter_res;
+
++ (*pos)++;
+ spin_lock(&ocfs2_dlm_tracking_lock);
+ iter = ocfs2_dlm_next_res(iter, priv);
+ list_del_init(&dummy->l_debug_list);
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 8ac42ea81a17bd..5df34561c551c6 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -1002,25 +1002,6 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ start = bit_off + 1;
+ }
+
+- /* clear the contiguous bits until the end boundary */
+- if (count) {
+- blkno = la_start_blk +
+- ocfs2_clusters_to_blocks(osb->sb,
+- start - count);
+-
+- trace_ocfs2_sync_local_to_main_free(
+- count, start - count,
+- (unsigned long long)la_start_blk,
+- (unsigned long long)blkno);
+-
+- status = ocfs2_release_clusters(handle,
+- main_bm_inode,
+- main_bm_bh, blkno,
+- count);
+- if (status < 0)
+- mlog_errno(status);
+- }
+-
+ bail:
+ if (status)
+ mlog_errno(status);
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 59c92353151a85..5550f8afa43802 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -200,8 +200,10 @@ static struct inode *ocfs2_get_init_inode(struct inode *dir, umode_t mode)
+ mode = mode_strip_sgid(&nop_mnt_idmap, dir, mode);
+ inode_init_owner(&nop_mnt_idmap, inode, dir, mode);
+ status = dquot_initialize(inode);
+- if (status)
++ if (status) {
++ iput(inode);
+ return ERR_PTR(status);
++ }
+
+ return inode;
+ }
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 0c6468844c4b54..a697e53ccee2be 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -677,6 +677,7 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ int cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, umode_t mode, dev_t dev);
++umode_t wire_mode_to_posix(u32 wire, bool is_dir);
+
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ static inline int get_dfs_path(const unsigned int xid, struct cifs_ses *ses,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index c6f15dbe860a41..0eae60731c20c0 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -5406,7 +5406,7 @@ CIFSSMBSetPathInfo(const unsigned int xid, struct cifs_tcon *tcon,
+ param_offset = offsetof(struct smb_com_transaction2_spi_req,
+ InformationLevel) - 4;
+ offset = param_offset + params;
+- data_offset = (char *) (&pSMB->hdr.Protocol) + offset;
++ data_offset = (char *)pSMB + offsetof(typeof(*pSMB), hdr.Protocol) + offset;
+ pSMB->ParameterOffset = cpu_to_le16(param_offset);
+ pSMB->DataOffset = cpu_to_le16(offset);
+ pSMB->SetupCount = 1;
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index a94c538ff86368..feff3324d39c6d 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -2512,9 +2512,6 @@ cifs_put_tcon(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace)
+
+ list_del_init(&tcon->tcon_list);
+ tcon->status = TID_EXITING;
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+- list_replace_init(&tcon->dfs_ses_list, &ses_list);
+-#endif
+ spin_unlock(&tcon->tc_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+
+@@ -2522,6 +2519,7 @@ cifs_put_tcon(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace)
+ cancel_delayed_work_sync(&tcon->query_interfaces);
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ cancel_delayed_work_sync(&tcon->dfs_cache_work);
++ list_replace_init(&tcon->dfs_ses_list, &ses_list);
+ #endif
+
+ if (tcon->use_witness) {
+diff --git a/fs/smb/client/dfs.c b/fs/smb/client/dfs.c
+index 3f6077c68d68aa..c35953843373ea 100644
+--- a/fs/smb/client/dfs.c
++++ b/fs/smb/client/dfs.c
+@@ -321,49 +321,6 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx)
+ return rc;
+ }
+
+-/* Update dfs referral path of superblock */
+-static int update_server_fullpath(struct TCP_Server_Info *server, struct cifs_sb_info *cifs_sb,
+- const char *target)
+-{
+- int rc = 0;
+- size_t len = strlen(target);
+- char *refpath, *npath;
+-
+- if (unlikely(len < 2 || *target != '\\'))
+- return -EINVAL;
+-
+- if (target[1] == '\\') {
+- len += 1;
+- refpath = kmalloc(len, GFP_KERNEL);
+- if (!refpath)
+- return -ENOMEM;
+-
+- scnprintf(refpath, len, "%s", target);
+- } else {
+- len += sizeof("\\");
+- refpath = kmalloc(len, GFP_KERNEL);
+- if (!refpath)
+- return -ENOMEM;
+-
+- scnprintf(refpath, len, "\\%s", target);
+- }
+-
+- npath = dfs_cache_canonical_path(refpath, cifs_sb->local_nls, cifs_remap(cifs_sb));
+- kfree(refpath);
+-
+- if (IS_ERR(npath)) {
+- rc = PTR_ERR(npath);
+- } else {
+- mutex_lock(&server->refpath_lock);
+- spin_lock(&server->srv_lock);
+- kfree(server->leaf_fullpath);
+- server->leaf_fullpath = npath;
+- spin_unlock(&server->srv_lock);
+- mutex_unlock(&server->refpath_lock);
+- }
+- return rc;
+-}
+-
+ static int target_share_matches_server(struct TCP_Server_Info *server, char *share,
+ bool *target_match)
+ {
+@@ -388,77 +345,22 @@ static int target_share_matches_server(struct TCP_Server_Info *server, char *sha
+ return rc;
+ }
+
+-static void __tree_connect_ipc(const unsigned int xid, char *tree,
+- struct cifs_sb_info *cifs_sb,
+- struct cifs_ses *ses)
+-{
+- struct TCP_Server_Info *server = ses->server;
+- struct cifs_tcon *tcon = ses->tcon_ipc;
+- int rc;
+-
+- spin_lock(&ses->ses_lock);
+- spin_lock(&ses->chan_lock);
+- if (cifs_chan_needs_reconnect(ses, server) ||
+- ses->ses_status != SES_GOOD) {
+- spin_unlock(&ses->chan_lock);
+- spin_unlock(&ses->ses_lock);
+- cifs_server_dbg(FYI, "%s: skipping ipc reconnect due to disconnected ses\n",
+- __func__);
+- return;
+- }
+- spin_unlock(&ses->chan_lock);
+- spin_unlock(&ses->ses_lock);
+-
+- cifs_server_lock(server);
+- scnprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", server->hostname);
+- cifs_server_unlock(server);
+-
+- rc = server->ops->tree_connect(xid, ses, tree, tcon,
+- cifs_sb->local_nls);
+- cifs_server_dbg(FYI, "%s: tree_reconnect %s: %d\n", __func__, tree, rc);
+- spin_lock(&tcon->tc_lock);
+- if (rc) {
+- tcon->status = TID_NEED_TCON;
+- } else {
+- tcon->status = TID_GOOD;
+- tcon->need_reconnect = false;
+- }
+- spin_unlock(&tcon->tc_lock);
+-}
+-
+-static void tree_connect_ipc(const unsigned int xid, char *tree,
+- struct cifs_sb_info *cifs_sb,
+- struct cifs_tcon *tcon)
+-{
+- struct cifs_ses *ses = tcon->ses;
+-
+- __tree_connect_ipc(xid, tree, cifs_sb, ses);
+- __tree_connect_ipc(xid, tree, cifs_sb, CIFS_DFS_ROOT_SES(ses));
+-}
+-
+-static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, char *tree, bool islink,
+- struct dfs_cache_tgt_list *tl)
++static int tree_connect_dfs_target(const unsigned int xid,
++ struct cifs_tcon *tcon,
++ struct cifs_sb_info *cifs_sb,
++ char *tree, bool islink,
++ struct dfs_cache_tgt_list *tl)
+ {
+- int rc;
++ const struct smb_version_operations *ops = tcon->ses->server->ops;
+ struct TCP_Server_Info *server = tcon->ses->server;
+- const struct smb_version_operations *ops = server->ops;
+- struct cifs_ses *root_ses = CIFS_DFS_ROOT_SES(tcon->ses);
+- char *share = NULL, *prefix = NULL;
+ struct dfs_cache_tgt_iterator *tit;
++ char *share = NULL, *prefix = NULL;
+ bool target_match;
+-
+- tit = dfs_cache_get_tgt_iterator(tl);
+- if (!tit) {
+- rc = -ENOENT;
+- goto out;
+- }
++ int rc = -ENOENT;
+
+ /* Try to tree connect to all dfs targets */
+- for (; tit; tit = dfs_cache_get_next_tgt(tl, tit)) {
+- const char *target = dfs_cache_get_tgt_name(tit);
+- DFS_CACHE_TGT_LIST(ntl);
+-
++ for (tit = dfs_cache_get_tgt_iterator(tl);
++ tit; tit = dfs_cache_get_next_tgt(tl, tit)) {
+ kfree(share);
+ kfree(prefix);
+ share = prefix = NULL;
+@@ -479,69 +381,16 @@ static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *t
+ }
+
+ dfs_cache_noreq_update_tgthint(server->leaf_fullpath + 1, tit);
+- tree_connect_ipc(xid, tree, cifs_sb, tcon);
+-
+ scnprintf(tree, MAX_TREE_SIZE, "\\%s", share);
+- if (!islink) {
+- rc = ops->tree_connect(xid, tcon->ses, tree, tcon, cifs_sb->local_nls);
+- break;
+- }
+-
+- /*
+- * If no dfs referrals were returned from link target, then just do a TREE_CONNECT
+- * to it. Otherwise, cache the dfs referral and then mark current tcp ses for
+- * reconnect so either the demultiplex thread or the echo worker will reconnect to
+- * newly resolved target.
+- */
+- if (dfs_cache_find(xid, root_ses, cifs_sb->local_nls, cifs_remap(cifs_sb), target,
+- NULL, &ntl)) {
+- rc = ops->tree_connect(xid, tcon->ses, tree, tcon, cifs_sb->local_nls);
+- if (rc)
+- continue;
+-
++ rc = ops->tree_connect(xid, tcon->ses, tree,
++ tcon, tcon->ses->local_nls);
++ if (islink && !rc && cifs_sb)
+ rc = cifs_update_super_prepath(cifs_sb, prefix);
+- } else {
+- /* Target is another dfs share */
+- rc = update_server_fullpath(server, cifs_sb, target);
+- dfs_cache_free_tgts(tl);
+-
+- if (!rc) {
+- rc = -EREMOTE;
+- list_replace_init(&ntl.tl_list, &tl->tl_list);
+- } else
+- dfs_cache_free_tgts(&ntl);
+- }
+ break;
+ }
+
+-out:
+ kfree(share);
+ kfree(prefix);
+-
+- return rc;
+-}
+-
+-static int tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, char *tree, bool islink,
+- struct dfs_cache_tgt_list *tl)
+-{
+- int rc;
+- int num_links = 0;
+- struct TCP_Server_Info *server = tcon->ses->server;
+- char *old_fullpath = server->leaf_fullpath;
+-
+- do {
+- rc = __tree_connect_dfs_target(xid, tcon, cifs_sb, tree, islink, tl);
+- if (!rc || rc != -EREMOTE)
+- break;
+- } while (rc = -ELOOP, ++num_links < MAX_NESTED_LINKS);
+- /*
+- * If we couldn't tree connect to any targets from last referral path, then
+- * retry it from newly resolved dfs referral.
+- */
+- if (rc && server->leaf_fullpath != old_fullpath)
+- cifs_signal_cifsd_for_reconnect(server, true);
+-
+ dfs_cache_free_tgts(tl);
+ return rc;
+ }
+@@ -596,14 +445,11 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ if (!IS_ERR(sb))
+ cifs_sb = CIFS_SB(sb);
+
+- /*
+- * Tree connect to last share in @tcon->tree_name whether dfs super or
+- * cached dfs referral was not found.
+- */
+- if (!cifs_sb || !server->leaf_fullpath ||
++ /* Tree connect to last share in @tcon->tree_name if no DFS referral */
++ if (!server->leaf_fullpath ||
+ dfs_cache_noreq_find(server->leaf_fullpath + 1, &ref, &tl)) {
+- rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon,
+- cifs_sb ? cifs_sb->local_nls : nlsc);
++ rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name,
++ tcon, tcon->ses->local_nls);
+ goto out;
+ }
+
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 6d567b16998119..b35fe1075503e1 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -724,6 +724,88 @@ static int cifs_sfu_mode(struct cifs_fattr *fattr, const unsigned char *path,
+ #endif
+ }
+
++#define POSIX_TYPE_FILE 0
++#define POSIX_TYPE_DIR 1
++#define POSIX_TYPE_SYMLINK 2
++#define POSIX_TYPE_CHARDEV 3
++#define POSIX_TYPE_BLKDEV 4
++#define POSIX_TYPE_FIFO 5
++#define POSIX_TYPE_SOCKET 6
++
++#define POSIX_X_OTH 0000001
++#define POSIX_W_OTH 0000002
++#define POSIX_R_OTH 0000004
++#define POSIX_X_GRP 0000010
++#define POSIX_W_GRP 0000020
++#define POSIX_R_GRP 0000040
++#define POSIX_X_USR 0000100
++#define POSIX_W_USR 0000200
++#define POSIX_R_USR 0000400
++#define POSIX_STICKY 0001000
++#define POSIX_SET_GID 0002000
++#define POSIX_SET_UID 0004000
++
++#define POSIX_OTH_MASK 0000007
++#define POSIX_GRP_MASK 0000070
++#define POSIX_USR_MASK 0000700
++#define POSIX_PERM_MASK 0000777
++#define POSIX_FILETYPE_MASK 0070000
++
++#define POSIX_FILETYPE_SHIFT 12
++
++static u32 wire_perms_to_posix(u32 wire)
++{
++ u32 mode = 0;
++
++ mode |= (wire & POSIX_X_OTH) ? S_IXOTH : 0;
++ mode |= (wire & POSIX_W_OTH) ? S_IWOTH : 0;
++ mode |= (wire & POSIX_R_OTH) ? S_IROTH : 0;
++ mode |= (wire & POSIX_X_GRP) ? S_IXGRP : 0;
++ mode |= (wire & POSIX_W_GRP) ? S_IWGRP : 0;
++ mode |= (wire & POSIX_R_GRP) ? S_IRGRP : 0;
++ mode |= (wire & POSIX_X_USR) ? S_IXUSR : 0;
++ mode |= (wire & POSIX_W_USR) ? S_IWUSR : 0;
++ mode |= (wire & POSIX_R_USR) ? S_IRUSR : 0;
++ mode |= (wire & POSIX_STICKY) ? S_ISVTX : 0;
++ mode |= (wire & POSIX_SET_GID) ? S_ISGID : 0;
++ mode |= (wire & POSIX_SET_UID) ? S_ISUID : 0;
++
++ return mode;
++}
++
++static u32 posix_filetypes[] = {
++ S_IFREG,
++ S_IFDIR,
++ S_IFLNK,
++ S_IFCHR,
++ S_IFBLK,
++ S_IFIFO,
++ S_IFSOCK
++};
++
++static u32 wire_filetype_to_posix(u32 wire_type)
++{
++ if (wire_type >= ARRAY_SIZE(posix_filetypes)) {
++ pr_warn("Unexpected type %u", wire_type);
++ return 0;
++ }
++ return posix_filetypes[wire_type];
++}
++
++umode_t wire_mode_to_posix(u32 wire, bool is_dir)
++{
++ u32 wire_type;
++ u32 mode;
++
++ wire_type = (wire & POSIX_FILETYPE_MASK) >> POSIX_FILETYPE_SHIFT;
++ /* older servers do not set POSIX file type in the mode field in the response */
++ if ((wire_type == 0) && is_dir)
++ mode = wire_perms_to_posix(wire) | S_IFDIR;
++ else
++ mode = (wire_perms_to_posix(wire) | wire_filetype_to_posix(wire_type));
++ return (umode_t)mode;
++}
++
+ /* Fill a cifs_fattr struct with info from POSIX info struct */
+ static void smb311_posix_info_to_fattr(struct cifs_fattr *fattr,
+ struct cifs_open_info_data *data,
+@@ -760,20 +842,14 @@ static void smb311_posix_info_to_fattr(struct cifs_fattr *fattr,
+ fattr->cf_bytes = le64_to_cpu(info->AllocationSize);
+ fattr->cf_createtime = le64_to_cpu(info->CreationTime);
+ fattr->cf_nlink = le32_to_cpu(info->HardLinks);
+- fattr->cf_mode = (umode_t) le32_to_cpu(info->Mode);
++ fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
++ fattr->cf_cifsattrs & ATTR_DIRECTORY);
+
+ if (cifs_open_data_reparse(data) &&
+ cifs_reparse_point_to_fattr(cifs_sb, fattr, data))
+ goto out_reparse;
+
+- fattr->cf_mode &= ~S_IFMT;
+- if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
+- fattr->cf_mode |= S_IFDIR;
+- fattr->cf_dtype = DT_DIR;
+- } else { /* file */
+- fattr->cf_mode |= S_IFREG;
+- fattr->cf_dtype = DT_REG;
+- }
++ fattr->cf_dtype = S_DT(fattr->cf_mode);
+
+ out_reparse:
+ if (S_ISLNK(fattr->cf_mode)) {
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index b3a8f9c6fcff6f..273358d20a46c9 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -71,6 +71,8 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ struct inode *inode;
+ struct super_block *sb = parent->d_sb;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
++ bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
++ bool reparse_need_reval = false;
+ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ int rc;
+
+@@ -85,7 +87,21 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ * this spares us an invalidation.
+ */
+ retry:
+- if ((fattr->cf_cifsattrs & ATTR_REPARSE) ||
++ if (posix) {
++ switch (fattr->cf_mode & S_IFMT) {
++ case S_IFLNK:
++ case S_IFBLK:
++ case S_IFCHR:
++ reparse_need_reval = true;
++ break;
++ default:
++ break;
++ }
++ } else if (fattr->cf_cifsattrs & ATTR_REPARSE) {
++ reparse_need_reval = true;
++ }
++
++ if (reparse_need_reval ||
+ (fattr->cf_flags & CIFS_FATTR_NEED_REVAL))
+ return;
+
+@@ -241,31 +257,29 @@ cifs_posix_to_fattr(struct cifs_fattr *fattr, struct smb2_posix_info *info,
+ fattr->cf_nlink = le32_to_cpu(info->HardLinks);
+ fattr->cf_cifsattrs = le32_to_cpu(info->DosAttributes);
+
+- /*
+- * Since we set the inode type below we need to mask off
+- * to avoid strange results if bits set above.
+- * XXX: why not make server&client use the type bits?
+- */
+- fattr->cf_mode = le32_to_cpu(info->Mode) & ~S_IFMT;
++ if (fattr->cf_cifsattrs & ATTR_REPARSE)
++ fattr->cf_cifstag = le32_to_cpu(info->ReparseTag);
++
++ /* The Mode field in the response can now include the file type as well */
++ fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
++ fattr->cf_cifsattrs & ATTR_DIRECTORY);
++ fattr->cf_dtype = S_DT(le32_to_cpu(info->Mode));
++
++ switch (fattr->cf_mode & S_IFMT) {
++ case S_IFLNK:
++ case S_IFBLK:
++ case S_IFCHR:
++ fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;
++ break;
++ default:
++ break;
++ }
+
+ cifs_dbg(FYI, "posix fattr: dev %d, reparse %d, mode %o\n",
+ le32_to_cpu(info->DeviceId),
+ le32_to_cpu(info->ReparseTag),
+ le32_to_cpu(info->Mode));
+
+- if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
+- fattr->cf_mode |= S_IFDIR;
+- fattr->cf_dtype = DT_DIR;
+- } else {
+- /*
+- * mark anything that is not a dir as regular
+- * file. special files should have the REPARSE
+- * attribute and will be marked as needing revaluation
+- */
+- fattr->cf_mode |= S_IFREG;
+- fattr->cf_dtype = DT_REG;
+- }
+-
+ sid_to_id(cifs_sb, &parsed.owner, fattr, SIDOWNER);
+ sid_to_id(cifs_sb, &parsed.group, fattr, SIDGROUP);
+ }
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index f74d0a86f44a4e..d3abb99cc99094 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -730,44 +730,60 @@ static void wsl_to_fattr(struct cifs_open_info_data *data,
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
+ }
+
+-bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+- struct cifs_fattr *fattr,
+- struct cifs_open_info_data *data)
++static bool posix_reparse_to_fattr(struct cifs_sb_info *cifs_sb,
++ struct cifs_fattr *fattr,
++ struct cifs_open_info_data *data)
+ {
+ struct reparse_posix_data *buf = data->reparse.posix;
+- u32 tag = data->reparse.tag;
+
+- if (tag == IO_REPARSE_TAG_NFS && buf) {
+- if (le16_to_cpu(buf->ReparseDataLength) < sizeof(buf->InodeType))
++
++ if (buf == NULL)
++ return true;
++
++ if (le16_to_cpu(buf->ReparseDataLength) < sizeof(buf->InodeType)) {
++ WARN_ON_ONCE(1);
++ return false;
++ }
++
++ switch (le64_to_cpu(buf->InodeType)) {
++ case NFS_SPECFILE_CHR:
++ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8) {
++ WARN_ON_ONCE(1);
+ return false;
+- switch (le64_to_cpu(buf->InodeType)) {
+- case NFS_SPECFILE_CHR:
+- if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
+- return false;
+- fattr->cf_mode |= S_IFCHR;
+- fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
+- break;
+- case NFS_SPECFILE_BLK:
+- if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
+- return false;
+- fattr->cf_mode |= S_IFBLK;
+- fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
+- break;
+- case NFS_SPECFILE_FIFO:
+- fattr->cf_mode |= S_IFIFO;
+- break;
+- case NFS_SPECFILE_SOCK:
+- fattr->cf_mode |= S_IFSOCK;
+- break;
+- case NFS_SPECFILE_LNK:
+- fattr->cf_mode |= S_IFLNK;
+- break;
+- default:
++ }
++ fattr->cf_mode |= S_IFCHR;
++ fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
++ break;
++ case NFS_SPECFILE_BLK:
++ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8) {
+ WARN_ON_ONCE(1);
+ return false;
+ }
+- goto out;
++ fattr->cf_mode |= S_IFBLK;
++ fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
++ break;
++ case NFS_SPECFILE_FIFO:
++ fattr->cf_mode |= S_IFIFO;
++ break;
++ case NFS_SPECFILE_SOCK:
++ fattr->cf_mode |= S_IFSOCK;
++ break;
++ case NFS_SPECFILE_LNK:
++ fattr->cf_mode |= S_IFLNK;
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ return false;
+ }
++ return true;
++}
++
++bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
++ struct cifs_fattr *fattr,
++ struct cifs_open_info_data *data)
++{
++ u32 tag = data->reparse.tag;
++ bool ok;
+
+ switch (tag) {
+ case IO_REPARSE_TAG_INTERNAL:
+@@ -787,15 +803,19 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ case IO_REPARSE_TAG_LX_BLK:
+ wsl_to_fattr(data, cifs_sb, tag, fattr);
+ break;
++ case IO_REPARSE_TAG_NFS:
++ ok = posix_reparse_to_fattr(cifs_sb, fattr, data);
++ if (!ok)
++ return false;
++ break;
+ case 0: /* SMB1 symlink */
+ case IO_REPARSE_TAG_SYMLINK:
+- case IO_REPARSE_TAG_NFS:
+ fattr->cf_mode |= S_IFLNK;
+ break;
+ default:
+ return false;
+ }
+-out:
++
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
+ return true;
+ }
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index a188908914fe8f..a55f0044d30bde 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -943,7 +943,8 @@ int smb2_query_path_info(const unsigned int xid,
+ if (rc || !data->reparse_point)
+ goto out;
+
+- cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
++ if (!tcon->posix_extensions)
++ cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
+ /*
+ * Skip SMB2_OP_GET_REPARSE if symlink already parsed in create
+ * response.
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 599118aed20539..d0836d710f1814 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -6651,6 +6651,10 @@ int smb2_read(struct ksmbd_work *work)
+ }
+
+ offset = le64_to_cpu(req->Offset);
++ if (offset < 0) {
++ err = -EINVAL;
++ goto out;
++ }
+ length = le32_to_cpu(req->Length);
+ mincount = le32_to_cpu(req->MinimumCount);
+
+@@ -6864,6 +6868,8 @@ int smb2_write(struct ksmbd_work *work)
+ }
+
+ offset = le64_to_cpu(req->Offset);
++ if (offset < 0)
++ return -EINVAL;
+ length = le32_to_cpu(req->Length);
+
+ if (req->Channel == SMB2_CHANNEL_RDMA_V1 ||
+diff --git a/fs/unicode/mkutf8data.c b/fs/unicode/mkutf8data.c
+index b2bd08250c7a09..77b685db827511 100644
+--- a/fs/unicode/mkutf8data.c
++++ b/fs/unicode/mkutf8data.c
+@@ -2230,6 +2230,75 @@ static void nfdicf_init(void)
+ file_fail(fold_name);
+ }
+
++static void ignore_init(void)
++{
++ FILE *file;
++ unsigned int unichar;
++ unsigned int first;
++ unsigned int last;
++ unsigned int *um;
++ int count;
++ int ret;
++
++ if (verbose > 0)
++ printf("Parsing %s\n", prop_name);
++ file = fopen(prop_name, "r");
++ if (!file)
++ open_fail(prop_name, errno);
++ assert(file);
++ count = 0;
++ while (fgets(line, LINESIZE, file)) {
++ ret = sscanf(line, "%X..%X ; %s # ", &first, &last, buf0);
++ if (ret == 3) {
++ if (strcmp(buf0, "Default_Ignorable_Code_Point"))
++ continue;
++ if (!utf32valid(first) || !utf32valid(last))
++ line_fail(prop_name, line);
++ for (unichar = first; unichar <= last; unichar++) {
++ free(unicode_data[unichar].utf32nfdi);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdi = um;
++ free(unicode_data[unichar].utf32nfdicf);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdicf = um;
++ count++;
++ }
++ if (verbose > 1)
++ printf(" %X..%X Default_Ignorable_Code_Point\n",
++ first, last);
++ continue;
++ }
++ ret = sscanf(line, "%X ; %s # ", &unichar, buf0);
++ if (ret == 2) {
++ if (strcmp(buf0, "Default_Ignorable_Code_Point"))
++ continue;
++ if (!utf32valid(unichar))
++ line_fail(prop_name, line);
++ free(unicode_data[unichar].utf32nfdi);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdi = um;
++ free(unicode_data[unichar].utf32nfdicf);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdicf = um;
++ if (verbose > 1)
++ printf(" %X Default_Ignorable_Code_Point\n",
++ unichar);
++ count++;
++ continue;
++ }
++ }
++ fclose(file);
++
++ if (verbose > 0)
++ printf("Found %d entries\n", count);
++ if (count == 0)
++ file_fail(prop_name);
++}
++
+ static void corrections_init(void)
+ {
+ FILE *file;
+@@ -3342,6 +3411,7 @@ int main(int argc, char *argv[])
+ ccc_init();
+ nfdi_init();
+ nfdicf_init();
++ ignore_init();
+ corrections_init();
+ hangul_decompose();
+ nfdi_decompose();
+diff --git a/fs/unicode/utf8data.c_shipped b/fs/unicode/utf8data.c_shipped
+index ac2da4ba2dc0f9..dafa5fed761d83 100644
+--- a/fs/unicode/utf8data.c_shipped
++++ b/fs/unicode/utf8data.c_shipped
+@@ -82,58 +82,58 @@ static const struct utf8data utf8nfdidata[] = {
+ { 0xc0100, 20736 }
+ };
+
+-static const unsigned char utf8data[64080] = {
++static const unsigned char utf8data[64256] = {
+ /* nfdicf_30100 */
+- 0xd7,0x07,0x66,0x84,0x0c,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x96,0x1a,0xe3,0x60,0x15,
+- 0xe2,0x49,0x0e,0xc1,0xe0,0x4b,0x0d,0xcf,0x86,0x65,0x2d,0x0d,0x01,0x00,0xd4,0xb8,
+- 0xd3,0x27,0xe2,0x03,0xa3,0xe1,0xcb,0x35,0xe0,0x29,0x22,0xcf,0x86,0xc5,0xe4,0xfa,
+- 0x6c,0xe3,0x45,0x68,0xe2,0xdb,0x65,0xe1,0x0e,0x65,0xe0,0xd3,0x64,0xcf,0x86,0xe5,
+- 0x98,0x64,0x64,0x7b,0x64,0x0b,0x00,0xd2,0x0e,0xe1,0xb3,0x3c,0xe0,0x34,0xa3,0xcf,
+- 0x86,0xcf,0x06,0x01,0x00,0xd1,0x0c,0xe0,0x98,0xa8,0xcf,0x86,0xcf,0x06,0x02,0xff,
++ 0xd7,0x07,0x66,0x84,0x0c,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x99,0x1a,0xe3,0x63,0x15,
++ 0xe2,0x4c,0x0e,0xc1,0xe0,0x4e,0x0d,0xcf,0x86,0x65,0x2d,0x0d,0x01,0x00,0xd4,0xb8,
++ 0xd3,0x27,0xe2,0x89,0xa3,0xe1,0xce,0x35,0xe0,0x2c,0x22,0xcf,0x86,0xc5,0xe4,0x15,
++ 0x6d,0xe3,0x60,0x68,0xe2,0xf6,0x65,0xe1,0x29,0x65,0xe0,0xee,0x64,0xcf,0x86,0xe5,
++ 0xb3,0x64,0x64,0x96,0x64,0x0b,0x00,0xd2,0x0e,0xe1,0xb5,0x3c,0xe0,0xba,0xa3,0xcf,
++ 0x86,0xcf,0x06,0x01,0x00,0xd1,0x0c,0xe0,0x1e,0xa9,0xcf,0x86,0xcf,0x06,0x02,0xff,
+ 0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,
+- 0x00,0xe4,0xdf,0x45,0xe3,0x39,0x45,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x01,0xad,
+- 0xd0,0x21,0xcf,0x86,0xe5,0xfb,0xa9,0xe4,0x7a,0xa9,0xe3,0x39,0xa9,0xe2,0x18,0xa9,
+- 0xe1,0x07,0xa9,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,
+- 0x00,0xcf,0x86,0xe5,0xdd,0xab,0xd4,0x19,0xe3,0x1c,0xab,0xe2,0xfb,0xaa,0xe1,0xea,
+- 0xaa,0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,
+- 0x83,0xab,0xe2,0x62,0xab,0xe1,0x51,0xab,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
+- 0x01,0xff,0xe9,0x9b,0xbb,0x00,0x83,0xe2,0x68,0xf9,0xe1,0x52,0xf6,0xe0,0xcf,0xf4,
+- 0xcf,0x86,0xd5,0x31,0xc4,0xe3,0x51,0x4e,0xe2,0xf2,0x4c,0xe1,0x09,0xcc,0xe0,0x99,
+- 0x4b,0xcf,0x86,0xe5,0x8b,0x49,0xe4,0xac,0x46,0xe3,0x76,0xbc,0xe2,0xcd,0xbb,0xe1,
+- 0xa8,0xbb,0xe0,0x81,0xbb,0xcf,0x86,0xe5,0x4e,0xbb,0x94,0x07,0x63,0x39,0xbb,0x07,
+- 0x00,0x07,0x00,0xe4,0x3b,0xf4,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,
+- 0xe1,0x4a,0xe1,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x39,0xe2,0xcf,0x86,
+- 0xe5,0xfe,0xe1,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x39,0xe2,0xcf,0x06,
+- 0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xd4,0xf3,0xe3,0xbd,0xf2,
+- 0xd2,0xa0,0xe1,0x73,0xe6,0xd0,0x21,0xcf,0x86,0xe5,0x74,0xe3,0xe4,0xf0,0xe2,0xe3,
+- 0xae,0xe2,0xe2,0x8d,0xe2,0xe1,0x7b,0xe2,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,
+- 0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xd0,0xe4,0xe3,0x8f,0xe4,
+- 0xe2,0x6e,0xe4,0xe1,0x5d,0xe4,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,
+- 0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0x57,0xe5,0xe1,0x46,0xe5,0x10,0x09,
+- 0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x77,
+- 0xe5,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,
+- 0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0xbd,0xe5,0xd2,0x14,0xe1,0x8c,0xe5,
++ 0x00,0xe4,0xe1,0x45,0xe3,0x3b,0x45,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x87,0xad,
++ 0xd0,0x21,0xcf,0x86,0xe5,0x81,0xaa,0xe4,0x00,0xaa,0xe3,0xbf,0xa9,0xe2,0x9e,0xa9,
++ 0xe1,0x8d,0xa9,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,
++ 0x00,0xcf,0x86,0xe5,0x63,0xac,0xd4,0x19,0xe3,0xa2,0xab,0xe2,0x81,0xab,0xe1,0x70,
++ 0xab,0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,
++ 0x09,0xac,0xe2,0xe8,0xab,0xe1,0xd7,0xab,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
++ 0x01,0xff,0xe9,0x9b,0xbb,0x00,0x83,0xe2,0x19,0xfa,0xe1,0xf2,0xf6,0xe0,0x6f,0xf5,
++ 0xcf,0x86,0xd5,0x31,0xc4,0xe3,0x54,0x4e,0xe2,0xf5,0x4c,0xe1,0xa4,0xcc,0xe0,0x9c,
++ 0x4b,0xcf,0x86,0xe5,0x8e,0x49,0xe4,0xaf,0x46,0xe3,0x11,0xbd,0xe2,0x68,0xbc,0xe1,
++ 0x43,0xbc,0xe0,0x1c,0xbc,0xcf,0x86,0xe5,0xe9,0xbb,0x94,0x07,0x63,0xd4,0xbb,0x07,
++ 0x00,0x07,0x00,0xe4,0xdb,0xf4,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,
++ 0xe1,0xea,0xe1,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xd9,0xe2,0xcf,0x86,
++ 0xe5,0x9e,0xe2,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xd9,0xe2,0xcf,0x06,
++ 0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x74,0xf4,0xe3,0x5d,0xf3,
++ 0xd2,0xa0,0xe1,0x13,0xe7,0xd0,0x21,0xcf,0x86,0xe5,0x14,0xe4,0xe4,0x90,0xe3,0xe3,
++ 0x4e,0xe3,0xe2,0x2d,0xe3,0xe1,0x1b,0xe3,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,
++ 0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x70,0xe5,0xe3,0x2f,0xe5,
++ 0xe2,0x0e,0xe5,0xe1,0xfd,0xe4,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,
++ 0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xf7,0xe5,0xe1,0xe6,0xe5,0x10,0x09,
++ 0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x17,
++ 0xe6,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,
++ 0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x5d,0xe6,0xd2,0x14,0xe1,0x2c,0xe6,
+ 0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,
+- 0x98,0xe5,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,
+- 0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0xed,0xea,0xd4,0x19,0xe3,0x26,0xea,0xe2,0x04,
+- 0xea,0xe1,0xf3,0xe9,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,
+- 0xb7,0x00,0xd3,0x18,0xe2,0x70,0xea,0xe1,0x5f,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,
+- 0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x88,0xea,0x10,
++ 0x38,0xe6,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,
++ 0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x8d,0xeb,0xd4,0x19,0xe3,0xc6,0xea,0xe2,0xa4,
++ 0xea,0xe1,0x93,0xea,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,
++ 0xb7,0x00,0xd3,0x18,0xe2,0x10,0xeb,0xe1,0xff,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,
++ 0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x28,0xeb,0x10,
+ 0x08,0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,
+ 0x08,0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,
+- 0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x8a,
+- 0xec,0xd4,0x1a,0xe3,0xc2,0xeb,0xe2,0xa8,0xeb,0xe1,0x95,0xeb,0x10,0x08,0x05,0xff,
+- 0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x0a,0xec,
+- 0xe1,0xf8,0xeb,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,
+- 0x00,0xd2,0x13,0xe1,0x26,0xec,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,
++ 0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x2a,
++ 0xed,0xd4,0x1a,0xe3,0x62,0xec,0xe2,0x48,0xec,0xe1,0x35,0xec,0x10,0x08,0x05,0xff,
++ 0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0xaa,0xec,
++ 0xe1,0x98,0xec,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,
++ 0x00,0xd2,0x13,0xe1,0xc6,0xec,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,
+ 0xe7,0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,
+ 0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,
+- 0xff,0xe7,0xaa,0xae,0x00,0xe0,0x3c,0xef,0xcf,0x86,0xd5,0x1d,0xe4,0xb1,0xed,0xe3,
+- 0x6d,0xed,0xe2,0x4b,0xed,0xe1,0x3a,0xed,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,
+- 0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x58,0xee,0xe2,0x34,0xee,0xe1,
+- 0x23,0xee,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,
+- 0xd3,0x18,0xe2,0xa3,0xee,0xe1,0x92,0xee,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,
+- 0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xbb,0xee,0x10,0x08,0x05,
++ 0xff,0xe7,0xaa,0xae,0x00,0xe0,0xdc,0xef,0xcf,0x86,0xd5,0x1d,0xe4,0x51,0xee,0xe3,
++ 0x0d,0xee,0xe2,0xeb,0xed,0xe1,0xda,0xed,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,
++ 0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xf8,0xee,0xe2,0xd4,0xee,0xe1,
++ 0xc3,0xee,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,
++ 0xd3,0x18,0xe2,0x43,0xef,0xe1,0x32,0xef,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,
++ 0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x5b,0xef,0x10,0x08,0x05,
+ 0xff,0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,
+ 0xff,0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,
+ 0x9e,0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+@@ -141,152 +141,152 @@ static const unsigned char utf8data[64080] = {
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdi_30100 */
+- 0x57,0x04,0x01,0x00,0xc6,0xd5,0x13,0xe4,0xa8,0x59,0xe3,0xe2,0x54,0xe2,0x5b,0x4f,
+- 0xc1,0xe0,0x87,0x4d,0xcf,0x06,0x01,0x00,0xd4,0xb8,0xd3,0x27,0xe2,0x89,0x9f,0xe1,
+- 0x91,0x8d,0xe0,0x21,0x71,0xcf,0x86,0xc5,0xe4,0x80,0x69,0xe3,0xcb,0x64,0xe2,0x61,
+- 0x62,0xe1,0x94,0x61,0xe0,0x59,0x61,0xcf,0x86,0xe5,0x1e,0x61,0x64,0x01,0x61,0x0b,
+- 0x00,0xd2,0x0e,0xe1,0x3f,0xa0,0xe0,0xba,0x9f,0xcf,0x86,0xcf,0x06,0x01,0x00,0xd1,
+- 0x0c,0xe0,0x1e,0xa5,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,
+- 0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x1b,0xb6,0xe3,0x95,
+- 0xad,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x87,0xa9,0xd0,0x21,0xcf,0x86,0xe5,0x81,
+- 0xa6,0xe4,0x00,0xa6,0xe3,0xbf,0xa5,0xe2,0x9e,0xa5,0xe1,0x8d,0xa5,0x10,0x08,0x01,
+- 0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0xcf,0x86,0xe5,0x63,0xa8,
+- 0xd4,0x19,0xe3,0xa2,0xa7,0xe2,0x81,0xa7,0xe1,0x70,0xa7,0x10,0x08,0x01,0xff,0xe9,
+- 0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,0x09,0xa8,0xe2,0xe8,0xa7,0xe1,
+- 0xd7,0xa7,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,0x9b,0xbb,0x00,
+- 0x83,0xe2,0xee,0xf5,0xe1,0xd8,0xf2,0xe0,0x55,0xf1,0xcf,0x86,0xd5,0x31,0xc4,0xe3,
+- 0xd5,0xcb,0xe2,0xae,0xc9,0xe1,0x8f,0xc8,0xe0,0x1f,0xbf,0xcf,0x86,0xe5,0x12,0xbb,
+- 0xe4,0x0b,0xba,0xe3,0xfc,0xb8,0xe2,0x53,0xb8,0xe1,0x2e,0xb8,0xe0,0x07,0xb8,0xcf,
+- 0x86,0xe5,0xd4,0xb7,0x94,0x07,0x63,0xbf,0xb7,0x07,0x00,0x07,0x00,0xe4,0xc1,0xf0,
+- 0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0xd0,0xdd,0xcf,0x86,0xcf,
+- 0x06,0x05,0x00,0xd1,0x0e,0xe0,0xbf,0xde,0xcf,0x86,0xe5,0x84,0xde,0xcf,0x06,0x11,
+- 0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xbf,0xde,0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,
+- 0xcf,0x06,0x00,0x00,0xe4,0x5a,0xf0,0xe3,0x43,0xef,0xd2,0xa0,0xe1,0xf9,0xe2,0xd0,
+- 0x21,0xcf,0x86,0xe5,0xfa,0xdf,0xe4,0x76,0xdf,0xe3,0x34,0xdf,0xe2,0x13,0xdf,0xe1,
+- 0x01,0xdf,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,
+- 0xcf,0x86,0xd5,0x1c,0xe4,0x56,0xe1,0xe3,0x15,0xe1,0xe2,0xf4,0xe0,0xe1,0xe3,0xe0,
+- 0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,
+- 0xd3,0x18,0xe2,0xdd,0xe1,0xe1,0xcc,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa1,0x9a,0xa8,
+- 0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0xfd,0xe1,0x91,0x11,0x10,0x09,0x05,
+- 0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,
+- 0xbe,0x00,0xe3,0x43,0xe2,0xd2,0x14,0xe1,0x12,0xe2,0x10,0x08,0x05,0xff,0xe5,0xaf,
+- 0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x1e,0xe2,0x10,0x08,0x05,0xff,
+- 0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,
+- 0xe5,0x73,0xe7,0xd4,0x19,0xe3,0xac,0xe6,0xe2,0x8a,0xe6,0xe1,0x79,0xe6,0x10,0x08,
+- 0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0xf6,
+- 0xe6,0xe1,0xe5,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,
+- 0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x0e,0xe7,0x10,0x08,0x05,0xff,0xe7,0x81,0xbd,
+- 0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,0xe7,0x85,0x85,
+- 0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,0x86,0x9c,0x00,
+- 0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x10,0xe9,0xd4,0x1a,0xe3,0x48,0xe8,
+- 0xe2,0x2e,0xe8,0xe1,0x1b,0xe8,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,0x00,0x05,0xff,
+- 0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x90,0xe8,0xe1,0x7e,0xe8,0x10,0x08,0x05,
+- 0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,0xe1,0xac,0xe8,
+- 0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,0x00,0xd1,0x12,
+- 0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,
+- 0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,
+- 0xc2,0xeb,0xcf,0x86,0xd5,0x1d,0xe4,0x37,0xea,0xe3,0xf3,0xe9,0xe2,0xd1,0xe9,0xe1,
+- 0xc0,0xe9,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,
+- 0x00,0xd4,0x19,0xe3,0xde,0xea,0xe2,0xba,0xea,0xe1,0xa9,0xea,0x10,0x08,0x05,0xff,
+- 0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,0x29,0xeb,0xe1,
+- 0x18,0xeb,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,
+- 0x92,0x00,0xd2,0x13,0xe1,0x41,0xeb,0x10,0x08,0x05,0xff,0xe8,0x9a,0x88,0x00,0x05,
+- 0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,0xa8,0x00,0x05,
+- 0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,0x05,0xff,0xe4,
+- 0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x57,0x04,0x01,0x00,0xc6,0xd5,0x16,0xe4,0xc2,0x59,0xe3,0xfb,0x54,0xe2,0x74,0x4f,
++ 0xc1,0xe0,0xa0,0x4d,0xcf,0x86,0x65,0x84,0x4d,0x01,0x00,0xd4,0xb8,0xd3,0x27,0xe2,
++ 0x0c,0xa0,0xe1,0xdf,0x8d,0xe0,0x39,0x71,0xcf,0x86,0xc5,0xe4,0x98,0x69,0xe3,0xe3,
++ 0x64,0xe2,0x79,0x62,0xe1,0xac,0x61,0xe0,0x71,0x61,0xcf,0x86,0xe5,0x36,0x61,0x64,
++ 0x19,0x61,0x0b,0x00,0xd2,0x0e,0xe1,0xc2,0xa0,0xe0,0x3d,0xa0,0xcf,0x86,0xcf,0x06,
++ 0x01,0x00,0xd1,0x0c,0xe0,0xa1,0xa5,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd0,0x08,
++ 0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x9e,
++ 0xb6,0xe3,0x18,0xae,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x0a,0xaa,0xd0,0x21,0xcf,
++ 0x86,0xe5,0x04,0xa7,0xe4,0x83,0xa6,0xe3,0x42,0xa6,0xe2,0x21,0xa6,0xe1,0x10,0xa6,
++ 0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0xcf,0x86,
++ 0xe5,0xe6,0xa8,0xd4,0x19,0xe3,0x25,0xa8,0xe2,0x04,0xa8,0xe1,0xf3,0xa7,0x10,0x08,
++ 0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,0x8c,0xa8,0xe2,
++ 0x6b,0xa8,0xe1,0x5a,0xa8,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,
++ 0x9b,0xbb,0x00,0x83,0xe2,0x9c,0xf6,0xe1,0x75,0xf3,0xe0,0xf2,0xf1,0xcf,0x86,0xd5,
++ 0x31,0xc4,0xe3,0x6d,0xcc,0xe2,0x46,0xca,0xe1,0x27,0xc9,0xe0,0xb7,0xbf,0xcf,0x86,
++ 0xe5,0xaa,0xbb,0xe4,0xa3,0xba,0xe3,0x94,0xb9,0xe2,0xeb,0xb8,0xe1,0xc6,0xb8,0xe0,
++ 0x9f,0xb8,0xcf,0x86,0xe5,0x6c,0xb8,0x94,0x07,0x63,0x57,0xb8,0x07,0x00,0x07,0x00,
++ 0xe4,0x5e,0xf1,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0x6d,0xde,
++ 0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x5c,0xdf,0xcf,0x86,0xe5,0x21,0xdf,
++ 0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x5c,0xdf,0xcf,0x06,0x13,0x00,0xcf,
++ 0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xf7,0xf0,0xe3,0xe0,0xef,0xd2,0xa0,0xe1,
++ 0x96,0xe3,0xd0,0x21,0xcf,0x86,0xe5,0x97,0xe0,0xe4,0x13,0xe0,0xe3,0xd1,0xdf,0xe2,
++ 0xb0,0xdf,0xe1,0x9e,0xdf,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,
++ 0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xf3,0xe1,0xe3,0xb2,0xe1,0xe2,0x91,0xe1,
++ 0xe1,0x80,0xe1,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,
++ 0x00,0xd4,0x34,0xd3,0x18,0xe2,0x7a,0xe2,0xe1,0x69,0xe2,0x10,0x09,0x05,0xff,0xf0,
++ 0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x9a,0xe2,0x91,0x11,
++ 0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,
++ 0xff,0xe5,0xac,0xbe,0x00,0xe3,0xe0,0xe2,0xd2,0x14,0xe1,0xaf,0xe2,0x10,0x08,0x05,
++ 0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0xbb,0xe2,0x10,
++ 0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,
++ 0x6a,0xcf,0x86,0xe5,0x10,0xe8,0xd4,0x19,0xe3,0x49,0xe7,0xe2,0x27,0xe7,0xe1,0x16,
++ 0xe7,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,
++ 0x18,0xe2,0x93,0xe7,0xe1,0x82,0xe7,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,
++ 0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xab,0xe7,0x10,0x08,0x05,0xff,
++ 0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,
++ 0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,
++ 0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0xad,0xe9,0xd4,0x1a,
++ 0xe3,0xe5,0xe8,0xe2,0xcb,0xe8,0xe1,0xb8,0xe8,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,
++ 0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x2d,0xe9,0xe1,0x1b,0xe9,
++ 0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,
++ 0xe1,0x49,0xe9,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,
++ 0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,
++ 0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,
++ 0xae,0x00,0xe0,0x5f,0xec,0xcf,0x86,0xd5,0x1d,0xe4,0xd4,0xea,0xe3,0x90,0xea,0xe2,
++ 0x6e,0xea,0xe1,0x5d,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,
++ 0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x7b,0xeb,0xe2,0x57,0xeb,0xe1,0x46,0xeb,0x10,
++ 0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,
++ 0xc6,0xeb,0xe1,0xb5,0xeb,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,
++ 0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xde,0xeb,0x10,0x08,0x05,0xff,0xe8,0x9a,
++ 0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,
++ 0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,
++ 0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdicf_30200 */
+- 0xd7,0x07,0x66,0x84,0x05,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x96,0x13,0xe3,0x60,0x0e,
+- 0xe2,0x49,0x07,0xc1,0xe0,0x4b,0x06,0xcf,0x86,0x65,0x2d,0x06,0x01,0x00,0xd4,0x2a,
+- 0xe3,0xce,0x35,0xe2,0x02,0x9c,0xe1,0xca,0x2e,0xe0,0x28,0x1b,0xcf,0x86,0xc5,0xe4,
+- 0xf9,0x65,0xe3,0x44,0x61,0xe2,0xda,0x5e,0xe1,0x0d,0x5e,0xe0,0xd2,0x5d,0xcf,0x86,
+- 0xe5,0x97,0x5d,0x64,0x7a,0x5d,0x0b,0x00,0x83,0xe2,0xf6,0xf2,0xe1,0xe0,0xef,0xe0,
+- 0x5d,0xee,0xcf,0x86,0xd5,0x31,0xc4,0xe3,0xdf,0x47,0xe2,0x80,0x46,0xe1,0x97,0xc5,
+- 0xe0,0x27,0x45,0xcf,0x86,0xe5,0x19,0x43,0xe4,0x3a,0x40,0xe3,0x04,0xb6,0xe2,0x5b,
+- 0xb5,0xe1,0x36,0xb5,0xe0,0x0f,0xb5,0xcf,0x86,0xe5,0xdc,0xb4,0x94,0x07,0x63,0xc7,
+- 0xb4,0x07,0x00,0x07,0x00,0xe4,0xc9,0xed,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,
+- 0xd2,0x0b,0xe1,0xd8,0xda,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xc7,0xdb,
+- 0xcf,0x86,0xe5,0x8c,0xdb,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xc7,0xdb,
+- 0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x62,0xed,0xe3,
+- 0x4b,0xec,0xd2,0xa0,0xe1,0x01,0xe0,0xd0,0x21,0xcf,0x86,0xe5,0x02,0xdd,0xe4,0x7e,
+- 0xdc,0xe3,0x3c,0xdc,0xe2,0x1b,0xdc,0xe1,0x09,0xdc,0x10,0x08,0x05,0xff,0xe4,0xb8,
+- 0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x5e,0xde,0xe3,
+- 0x1d,0xde,0xe2,0xfc,0xdd,0xe1,0xeb,0xdd,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,
+- 0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xe5,0xde,0xe1,0xd4,0xde,
++ 0xd7,0x07,0x66,0x84,0x05,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x99,0x13,0xe3,0x63,0x0e,
++ 0xe2,0x4c,0x07,0xc1,0xe0,0x4e,0x06,0xcf,0x86,0x65,0x2d,0x06,0x01,0x00,0xd4,0x2a,
++ 0xe3,0xd0,0x35,0xe2,0x88,0x9c,0xe1,0xcd,0x2e,0xe0,0x2b,0x1b,0xcf,0x86,0xc5,0xe4,
++ 0x14,0x66,0xe3,0x5f,0x61,0xe2,0xf5,0x5e,0xe1,0x28,0x5e,0xe0,0xed,0x5d,0xcf,0x86,
++ 0xe5,0xb2,0x5d,0x64,0x95,0x5d,0x0b,0x00,0x83,0xe2,0xa7,0xf3,0xe1,0x80,0xf0,0xe0,
++ 0xfd,0xee,0xcf,0x86,0xd5,0x31,0xc4,0xe3,0xe2,0x47,0xe2,0x83,0x46,0xe1,0x32,0xc6,
++ 0xe0,0x2a,0x45,0xcf,0x86,0xe5,0x1c,0x43,0xe4,0x3d,0x40,0xe3,0x9f,0xb6,0xe2,0xf6,
++ 0xb5,0xe1,0xd1,0xb5,0xe0,0xaa,0xb5,0xcf,0x86,0xe5,0x77,0xb5,0x94,0x07,0x63,0x62,
++ 0xb5,0x07,0x00,0x07,0x00,0xe4,0x69,0xee,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,
++ 0xd2,0x0b,0xe1,0x78,0xdb,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x67,0xdc,
++ 0xcf,0x86,0xe5,0x2c,0xdc,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x67,0xdc,
++ 0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x02,0xee,0xe3,
++ 0xeb,0xec,0xd2,0xa0,0xe1,0xa1,0xe0,0xd0,0x21,0xcf,0x86,0xe5,0xa2,0xdd,0xe4,0x1e,
++ 0xdd,0xe3,0xdc,0xdc,0xe2,0xbb,0xdc,0xe1,0xa9,0xdc,0x10,0x08,0x05,0xff,0xe4,0xb8,
++ 0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xfe,0xde,0xe3,
++ 0xbd,0xde,0xe2,0x9c,0xde,0xe1,0x8b,0xde,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,
++ 0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0x85,0xdf,0xe1,0x74,0xdf,
+ 0x10,0x09,0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,
+- 0xe2,0x05,0xdf,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,
+- 0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x4b,0xdf,0xd2,0x14,0xe1,
+- 0x1a,0xdf,0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,
+- 0x00,0xe1,0x26,0xdf,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,
+- 0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x7b,0xe4,0xd4,0x19,0xe3,0xb4,0xe3,
+- 0xe2,0x92,0xe3,0xe1,0x81,0xe3,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,
+- 0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0xfe,0xe3,0xe1,0xed,0xe3,0x10,0x09,0x05,0xff,
+- 0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x16,
++ 0xe2,0xa5,0xdf,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,
++ 0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0xeb,0xdf,0xd2,0x14,0xe1,
++ 0xba,0xdf,0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,
++ 0x00,0xe1,0xc6,0xdf,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,
++ 0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x1b,0xe5,0xd4,0x19,0xe3,0x54,0xe4,
++ 0xe2,0x32,0xe4,0xe1,0x21,0xe4,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,
++ 0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0x9e,0xe4,0xe1,0x8d,0xe4,0x10,0x09,0x05,0xff,
++ 0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xb6,
+ 0xe4,0x10,0x08,0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,
+ 0x11,0x10,0x08,0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,
+ 0x10,0x08,0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,
+- 0xe5,0x18,0xe6,0xd4,0x1a,0xe3,0x50,0xe5,0xe2,0x36,0xe5,0xe1,0x23,0xe5,0x10,0x08,
++ 0xe5,0xb8,0xe6,0xd4,0x1a,0xe3,0xf0,0xe5,0xe2,0xd6,0xe5,0xe1,0xc3,0xe5,0x10,0x08,
+ 0x05,0xff,0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,
+- 0x98,0xe5,0xe1,0x86,0xe5,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,
+- 0x83,0xa3,0x00,0xd2,0x13,0xe1,0xb4,0xe5,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,
++ 0x38,0xe6,0xe1,0x26,0xe6,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,
++ 0x83,0xa3,0x00,0xd2,0x13,0xe1,0x54,0xe6,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,
+ 0x05,0xff,0xe7,0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,
+ 0x00,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,
+- 0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,0xca,0xe8,0xcf,0x86,0xd5,0x1d,0xe4,0x3f,
+- 0xe7,0xe3,0xfb,0xe6,0xe2,0xd9,0xe6,0xe1,0xc8,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa3,
+- 0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xe6,0xe7,0xe2,0xc2,
+- 0xe7,0xe1,0xb1,0xe7,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,
+- 0x8a,0x00,0xd3,0x18,0xe2,0x31,0xe8,0xe1,0x20,0xe8,0x10,0x09,0x05,0xff,0xf0,0xa6,
+- 0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x49,0xe8,0x10,
++ 0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,0x6a,0xe9,0xcf,0x86,0xd5,0x1d,0xe4,0xdf,
++ 0xe7,0xe3,0x9b,0xe7,0xe2,0x79,0xe7,0xe1,0x68,0xe7,0x10,0x09,0x05,0xff,0xf0,0xa3,
++ 0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x86,0xe8,0xe2,0x62,
++ 0xe8,0xe1,0x51,0xe8,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,
++ 0x8a,0x00,0xd3,0x18,0xe2,0xd1,0xe8,0xe1,0xc0,0xe8,0x10,0x09,0x05,0xff,0xf0,0xa6,
++ 0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xe9,0xe8,0x10,
+ 0x08,0x05,0xff,0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,
+ 0x08,0x05,0xff,0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,
+ 0xff,0xe8,0x9e,0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdi_30200 */
+- 0x57,0x04,0x01,0x00,0xc6,0xd5,0x13,0xe4,0x68,0x53,0xe3,0xa2,0x4e,0xe2,0x1b,0x49,
+- 0xc1,0xe0,0x47,0x47,0xcf,0x06,0x01,0x00,0xd4,0x2a,0xe3,0x99,0x99,0xe2,0x48,0x99,
+- 0xe1,0x50,0x87,0xe0,0xe0,0x6a,0xcf,0x86,0xc5,0xe4,0x3f,0x63,0xe3,0x8a,0x5e,0xe2,
+- 0x20,0x5c,0xe1,0x53,0x5b,0xe0,0x18,0x5b,0xcf,0x86,0xe5,0xdd,0x5a,0x64,0xc0,0x5a,
+- 0x0b,0x00,0x83,0xe2,0x3c,0xf0,0xe1,0x26,0xed,0xe0,0xa3,0xeb,0xcf,0x86,0xd5,0x31,
+- 0xc4,0xe3,0x23,0xc6,0xe2,0xfc,0xc3,0xe1,0xdd,0xc2,0xe0,0x6d,0xb9,0xcf,0x86,0xe5,
+- 0x60,0xb5,0xe4,0x59,0xb4,0xe3,0x4a,0xb3,0xe2,0xa1,0xb2,0xe1,0x7c,0xb2,0xe0,0x55,
+- 0xb2,0xcf,0x86,0xe5,0x22,0xb2,0x94,0x07,0x63,0x0d,0xb2,0x07,0x00,0x07,0x00,0xe4,
+- 0x0f,0xeb,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0x1e,0xd8,0xcf,
+- 0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x0d,0xd9,0xcf,0x86,0xe5,0xd2,0xd8,0xcf,
+- 0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x0d,0xd9,0xcf,0x06,0x13,0x00,0xcf,0x86,
+- 0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xa8,0xea,0xe3,0x91,0xe9,0xd2,0xa0,0xe1,0x47,
+- 0xdd,0xd0,0x21,0xcf,0x86,0xe5,0x48,0xda,0xe4,0xc4,0xd9,0xe3,0x82,0xd9,0xe2,0x61,
+- 0xd9,0xe1,0x4f,0xd9,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,0xb8,
+- 0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xa4,0xdb,0xe3,0x63,0xdb,0xe2,0x42,0xdb,0xe1,
+- 0x31,0xdb,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,0x00,
+- 0xd4,0x34,0xd3,0x18,0xe2,0x2b,0xdc,0xe1,0x1a,0xdc,0x10,0x09,0x05,0xff,0xf0,0xa1,
+- 0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x4b,0xdc,0x91,0x11,0x10,
+- 0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,0xff,
+- 0xe5,0xac,0xbe,0x00,0xe3,0x91,0xdc,0xd2,0x14,0xe1,0x60,0xdc,0x10,0x08,0x05,0xff,
+- 0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x6c,0xdc,0x10,0x08,
+- 0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,0x6a,
+- 0xcf,0x86,0xe5,0xc1,0xe1,0xd4,0x19,0xe3,0xfa,0xe0,0xe2,0xd8,0xe0,0xe1,0xc7,0xe0,
+- 0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,0x18,
+- 0xe2,0x44,0xe1,0xe1,0x33,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,0x05,
+- 0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x5c,0xe1,0x10,0x08,0x05,0xff,0xe7,
+- 0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,0xe7,
+- 0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,0x86,
+- 0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x5e,0xe3,0xd4,0x1a,0xe3,
+- 0x96,0xe2,0xe2,0x7c,0xe2,0xe1,0x69,0xe2,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,0x00,
+- 0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0xde,0xe2,0xe1,0xcc,0xe2,0x10,
+- 0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,0xe1,
+- 0xfa,0xe2,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,0x00,
+- 0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,0xaa,
+- 0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,0xae,
+- 0x00,0xe0,0x10,0xe6,0xcf,0x86,0xd5,0x1d,0xe4,0x85,0xe4,0xe3,0x41,0xe4,0xe2,0x1f,
+- 0xe4,0xe1,0x0e,0xe4,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,0xe4,
+- 0x8f,0x95,0x00,0xd4,0x19,0xe3,0x2c,0xe5,0xe2,0x08,0xe5,0xe1,0xf7,0xe4,0x10,0x08,
+- 0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,0x77,
+- 0xe5,0xe1,0x66,0xe5,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,0xf0,
+- 0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x8f,0xe5,0x10,0x08,0x05,0xff,0xe8,0x9a,0x88,
+- 0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,0xa8,
+- 0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,0x05,
+- 0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x57,0x04,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x82,0x53,0xe3,0xbb,0x4e,0xe2,0x34,0x49,
++ 0xc1,0xe0,0x60,0x47,0xcf,0x86,0x65,0x44,0x47,0x01,0x00,0xd4,0x2a,0xe3,0x1c,0x9a,
++ 0xe2,0xcb,0x99,0xe1,0x9e,0x87,0xe0,0xf8,0x6a,0xcf,0x86,0xc5,0xe4,0x57,0x63,0xe3,
++ 0xa2,0x5e,0xe2,0x38,0x5c,0xe1,0x6b,0x5b,0xe0,0x30,0x5b,0xcf,0x86,0xe5,0xf5,0x5a,
++ 0x64,0xd8,0x5a,0x0b,0x00,0x83,0xe2,0xea,0xf0,0xe1,0xc3,0xed,0xe0,0x40,0xec,0xcf,
++ 0x86,0xd5,0x31,0xc4,0xe3,0xbb,0xc6,0xe2,0x94,0xc4,0xe1,0x75,0xc3,0xe0,0x05,0xba,
++ 0xcf,0x86,0xe5,0xf8,0xb5,0xe4,0xf1,0xb4,0xe3,0xe2,0xb3,0xe2,0x39,0xb3,0xe1,0x14,
++ 0xb3,0xe0,0xed,0xb2,0xcf,0x86,0xe5,0xba,0xb2,0x94,0x07,0x63,0xa5,0xb2,0x07,0x00,
++ 0x07,0x00,0xe4,0xac,0xeb,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,
++ 0xbb,0xd8,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xaa,0xd9,0xcf,0x86,0xe5,
++ 0x6f,0xd9,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xaa,0xd9,0xcf,0x06,0x13,
++ 0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x45,0xeb,0xe3,0x2e,0xea,0xd2,
++ 0xa0,0xe1,0xe4,0xdd,0xd0,0x21,0xcf,0x86,0xe5,0xe5,0xda,0xe4,0x61,0xda,0xe3,0x1f,
++ 0xda,0xe2,0xfe,0xd9,0xe1,0xec,0xd9,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,
++ 0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x41,0xdc,0xe3,0x00,0xdc,0xe2,
++ 0xdf,0xdb,0xe1,0xce,0xdb,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,
++ 0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xc8,0xdc,0xe1,0xb7,0xdc,0x10,0x09,0x05,
++ 0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0xe8,0xdc,
++ 0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,
++ 0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x2e,0xdd,0xd2,0x14,0xe1,0xfd,0xdc,0x10,
++ 0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x09,
++ 0xdd,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,
++ 0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x5e,0xe2,0xd4,0x19,0xe3,0x97,0xe1,0xe2,0x75,0xe1,
++ 0xe1,0x64,0xe1,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,
++ 0x00,0xd3,0x18,0xe2,0xe1,0xe1,0xe1,0xd0,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,
++ 0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xf9,0xe1,0x10,0x08,
++ 0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,
++ 0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,
++ 0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0xfb,0xe3,
++ 0xd4,0x1a,0xe3,0x33,0xe3,0xe2,0x19,0xe3,0xe1,0x06,0xe3,0x10,0x08,0x05,0xff,0xe7,
++ 0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x7b,0xe3,0xe1,
++ 0x69,0xe3,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,
++ 0xd2,0x13,0xe1,0x97,0xe3,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,
++ 0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,
++ 0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,
++ 0xe7,0xaa,0xae,0x00,0xe0,0xad,0xe6,0xcf,0x86,0xd5,0x1d,0xe4,0x22,0xe5,0xe3,0xde,
++ 0xe4,0xe2,0xbc,0xe4,0xe1,0xab,0xe4,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,
++ 0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xc9,0xe5,0xe2,0xa5,0xe5,0xe1,0x94,
++ 0xe5,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,
++ 0x18,0xe2,0x14,0xe6,0xe1,0x03,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,
++ 0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x2c,0xe6,0x10,0x08,0x05,0xff,
++ 0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,
++ 0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,
++ 0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdicf_c0100 */
+ 0xd7,0xb0,0x56,0x04,0x01,0x00,0x95,0xa8,0xd4,0x5e,0xd3,0x2e,0xd2,0x16,0xd1,0x0a,
+ 0x10,0x04,0x01,0x00,0x01,0xff,0x61,0x00,0x10,0x06,0x01,0xff,0x62,0x00,0x01,0xff,
+@@ -299,3174 +299,3184 @@ static const unsigned char utf8data[64080] = {
+ 0xd1,0x0c,0x10,0x06,0x01,0xff,0x74,0x00,0x01,0xff,0x75,0x00,0x10,0x06,0x01,0xff,
+ 0x76,0x00,0x01,0xff,0x77,0x00,0x92,0x16,0xd1,0x0c,0x10,0x06,0x01,0xff,0x78,0x00,
+ 0x01,0xff,0x79,0x00,0x10,0x06,0x01,0xff,0x7a,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xc6,0xe5,0xf6,0x14,0xe4,0x6c,0x0d,0xe3,0x36,0x08,0xe2,0x1f,0x01,0xc1,0xd0,0x21,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x13,0x52,0x04,0x01,0x00,
+- 0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xce,0xbc,0x00,0x01,0x00,0x01,0x00,0xcf,
+- 0x86,0xe5,0x9d,0x44,0xd4,0x7f,0xd3,0x3f,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,
+- 0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,
+- 0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x07,0x01,0xff,0xc3,0xa6,0x00,0x01,
+- 0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x80,
+- 0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x82,0x00,0x01,
+- 0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x80,0x00,0x01,
+- 0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,0x01,0xff,0x69,
+- 0xcc,0x88,0x00,0xd3,0x3b,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb0,0x00,
+- 0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,0x00,0x01,0xff,
+- 0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,0x00,0x01,0xff,
+- 0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1f,
+- 0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb8,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,0x10,
+- 0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x07,0x01,
+- 0xff,0xc3,0xbe,0x00,0x01,0xff,0x73,0x73,0x00,0xe1,0xd4,0x03,0xe0,0xeb,0x01,0xcf,
+- 0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,
+- 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x86,
+- 0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa8,
+- 0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,0x81,0x00,0x01,
+- 0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,0x82,
+- 0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,0x87,0x00,0x01,
+- 0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,0x8c,0x00,0x01,
+- 0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x8c,0x00,0x01,0xff,0x64,
+- 0xcc,0x8c,0x00,0xd3,0x3b,0xd2,0x1b,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc4,0x91,0x00,
+- 0x01,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x84,0x00,0x01,0xff,0x65,0xcc,0x84,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0x86,0x00,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,0x00,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x67,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,0x00,0x10,0x08,
+- 0x01,0xff,0x67,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,0x7b,0xd3,0x3b,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x87,0x00,0x01,0xff,0x67,0xcc,
+- 0x87,0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0xff,0x68,0xcc,0x82,0x00,
+- 0x10,0x07,0x01,0xff,0xc4,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x69,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x69,
+- 0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,
+- 0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0xa8,
+- 0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x37,0xd2,0x17,0xd1,0x0c,0x10,0x08,0x01,
+- 0xff,0x69,0xcc,0x87,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc4,0xb3,0x00,0x01,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x6a,0xcc,0x82,0x00,0x01,0xff,0x6a,0xcc,0x82,0x00,
+- 0x10,0x08,0x01,0xff,0x6b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,0x00,0xd2,0x1c,
+- 0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6c,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
+- 0x6c,0xcc,0x81,0x00,0x01,0xff,0x6c,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x6c,0xcc,0xa7,0x00,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,
+- 0x8c,0x00,0x01,0xff,0xc5,0x80,0x00,0xcf,0x86,0xd5,0xed,0xd4,0x72,0xd3,0x37,0xd2,
+- 0x17,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc5,0x82,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x6e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,
+- 0x01,0xff,0x6e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,
+- 0x6e,0xcc,0x8c,0x00,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,
+- 0x01,0xff,0xca,0xbc,0x6e,0x00,0x10,0x07,0x01,0xff,0xc5,0x8b,0x00,0x01,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0x84,0x00,0x10,
+- 0x08,0x01,0xff,0x6f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,0xd3,0x3b,0xd2,
+- 0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0xff,0x6f,0xcc,0x8b,
+- 0x00,0x10,0x07,0x01,0xff,0xc5,0x93,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x72,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,
+- 0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x72,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,
+- 0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x73,0xcc,
+- 0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,0xa7,0x00,
+- 0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x7b,0xd3,0x3b,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x73,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,
+- 0x74,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x74,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x10,0x07,0x01,0xff,0xc5,0xa7,
+- 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x83,0x00,0x01,
+- 0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x84,0x00,0x01,0xff,0x75,
+- 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x86,0x00,0x01,0xff,0x75,
+- 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x8b,0x00,0x01,
+- 0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa8,0x00,0x01,0xff,0x75,
+- 0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x82,0x00,0x01,0xff,0x77,
+- 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x82,0x00,0x01,0xff,0x79,0xcc,0x82,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,0x88,0x00,0x01,0xff,0x7a,
+- 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,0x7a,0xcc,0x87,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,0x7a,0xcc,0x8c,
+- 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0xff,0x73,0x00,0xe0,0x65,0x01,
+- 0xcf,0x86,0xd5,0xb4,0xd4,0x5a,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xc9,0x93,0x00,0x10,0x07,0x01,0xff,0xc6,0x83,0x00,0x01,0x00,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xc6,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0x94,0x00,
+- 0x01,0xff,0xc6,0x88,0x00,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc9,
+- 0x96,0x00,0x10,0x07,0x01,0xff,0xc9,0x97,0x00,0x01,0xff,0xc6,0x8c,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x07,0x01,0xff,0xc7,0x9d,0x00,0x01,0xff,0xc9,0x99,0x00,0xd3,0x32,
+- 0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0x9b,0x00,0x01,0xff,0xc6,0x92,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xa0,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc9,
+- 0xa3,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0xa9,0x00,0x01,0xff,0xc9,0xa8,0x00,
+- 0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0x99,0x00,0x01,0x00,0x01,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xc9,0xaf,0x00,0x01,0xff,0xc9,0xb2,0x00,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xc9,0xb5,0x00,0xd4,0x5d,0xd3,0x34,0xd2,0x1b,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x10,0x07,0x01,0xff,
+- 0xc6,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0xa5,0x00,0x01,0x00,
+- 0x10,0x07,0x01,0xff,0xca,0x80,0x00,0x01,0xff,0xc6,0xa8,0x00,0xd2,0x0f,0x91,0x0b,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x83,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,
+- 0xff,0xc6,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x88,0x00,0x01,0xff,0x75,
+- 0xcc,0x9b,0x00,0xd3,0x33,0xd2,0x1d,0xd1,0x0f,0x10,0x08,0x01,0xff,0x75,0xcc,0x9b,
+- 0x00,0x01,0xff,0xca,0x8a,0x00,0x10,0x07,0x01,0xff,0xca,0x8b,0x00,0x01,0xff,0xc6,
+- 0xb4,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc6,0xb6,0x00,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xca,0x92,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xb9,
+- 0x00,0x01,0x00,0x01,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xbd,0x00,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x44,0xd3,0x16,0x52,0x04,0x01,0x00,0x51,0x07,
+- 0x01,0xff,0xc7,0x86,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc7,0x89,0x00,0xd2,0x12,
+- 0x91,0x0b,0x10,0x07,0x01,0xff,0xc7,0x89,0x00,0x01,0x00,0x01,0xff,0xc7,0x8c,0x00,
+- 0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x61,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,
+- 0x61,0xcc,0x8c,0x00,0x01,0xff,0x69,0xcc,0x8c,0x00,0xd3,0x46,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x8c,0x00,0xd1,0x12,0x10,0x08,
+- 0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x10,0x0a,
+- 0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,
+- 0x75,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,
+- 0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0x75,0xcc,
+- 0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,
+- 0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x87,0xd3,0x41,0xd2,0x26,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x87,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x01,0xff,0xc3,0xa6,0xcc,
+- 0x84,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc7,0xa5,0x00,0x01,0x00,0x10,0x08,0x01,
+- 0xff,0x67,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x10,0x08,0x01,
+- 0xff,0x6f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,0x10,0x0a,0x01,
+- 0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x10,
+- 0x09,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0xd3,
+- 0x38,0xd2,0x1a,0xd1,0x0f,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,0x01,0xff,0xc7,
+- 0xb3,0x00,0x10,0x07,0x01,0xff,0xc7,0xb3,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x67,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x10,0x07,0x04,0xff,0xc6,
+- 0x95,0x00,0x04,0xff,0xc6,0xbf,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x6e,
+- 0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x8a,
+- 0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xc3,0xa6,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,
+- 0xff,0xc3,0xb8,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x31,0x02,
+- 0xe1,0xad,0x44,0xe0,0xc8,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,
+- 0x10,0x08,0x01,0xff,0x61,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,
+- 0x01,0xff,0x65,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,
+- 0x6f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x72,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,
+- 0x01,0xff,0x72,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x75,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,
+- 0x75,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x04,0xff,0x73,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,
+- 0x74,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,
+- 0xc8,0x9d,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x68,0xcc,0x8c,0x00,0x04,0xff,0x68,
+- 0xcc,0x8c,0x00,0xd4,0x79,0xd3,0x31,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xc6,
+- 0x9e,0x00,0x07,0x00,0x10,0x07,0x04,0xff,0xc8,0xa3,0x00,0x04,0x00,0xd1,0x0b,0x10,
+- 0x07,0x04,0xff,0xc8,0xa5,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x61,0xcc,0x87,0x00,
+- 0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x65,0xcc,
+- 0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x88,0xcc,
+- 0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,
+- 0x6f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,
+- 0x04,0xff,0x6f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0xd3,0x27,0xe2,0x0b,
+- 0x43,0xd1,0x14,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x04,0xff,0x6f,
+- 0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x79,0xcc,0x84,0x00,0x04,0xff,0x79,
+- 0xcc,0x84,0x00,0xd2,0x13,0x51,0x04,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa5,
+- 0x00,0x08,0xff,0xc8,0xbc,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,0xc6,0x9a,
+- 0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa6,0x00,0x08,0x00,0xcf,0x86,0x95,0x5f,0x94,
+- 0x5b,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,0xc9,0x82,0x00,
+- 0x10,0x04,0x09,0x00,0x09,0xff,0xc6,0x80,0x00,0xd1,0x0e,0x10,0x07,0x09,0xff,0xca,
+- 0x89,0x00,0x09,0xff,0xca,0x8c,0x00,0x10,0x07,0x09,0xff,0xc9,0x87,0x00,0x09,0x00,
+- 0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x89,0x00,0x09,0x00,0x10,0x07,0x09,
+- 0xff,0xc9,0x8b,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x8d,0x00,0x09,
+- 0x00,0x10,0x07,0x09,0xff,0xc9,0x8f,0x00,0x09,0x00,0x01,0x00,0x01,0x00,0xd1,0x8b,
+- 0xd0,0x0c,0xcf,0x86,0xe5,0xfa,0x42,0x64,0xd9,0x42,0x01,0xe6,0xcf,0x86,0xd5,0x2a,
+- 0xe4,0x82,0x43,0xe3,0x69,0x43,0xd2,0x11,0xe1,0x48,0x43,0x10,0x07,0x01,0xff,0xcc,
+- 0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0xe1,0x4f,0x43,0x10,0x09,0x01,0xff,0xcc,0x88,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0x00,0xd4,0x0f,0x93,0x0b,0x92,0x07,0x61,0x94,
+- 0x43,0x01,0xea,0x06,0xe6,0x06,0xe6,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x0a,
+- 0xff,0xcd,0xb1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb3,0x00,0x0a,0x00,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb7,
+- 0x00,0x0a,0x00,0xd2,0x07,0x61,0x80,0x43,0x00,0x00,0x51,0x04,0x09,0x00,0x10,0x06,
+- 0x01,0xff,0x3b,0x00,0x10,0xff,0xcf,0xb3,0x00,0xe0,0x31,0x01,0xcf,0x86,0xd5,0xd3,
+- 0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,0x00,0x01,0xff,
+- 0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,
+- 0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x00,0x00,0x10,
+- 0x09,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0xd3,
+- 0x3c,0xd2,0x20,0xd1,0x12,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0xb1,0x00,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb3,
+- 0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb4,0x00,0x01,0xff,0xce,0xb5,0x00,0x10,
+- 0x07,0x01,0xff,0xce,0xb6,0x00,0x01,0xff,0xce,0xb7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,
+- 0x07,0x01,0xff,0xce,0xb8,0x00,0x01,0xff,0xce,0xb9,0x00,0x10,0x07,0x01,0xff,0xce,
+- 0xba,0x00,0x01,0xff,0xce,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xbc,0x00,
+- 0x01,0xff,0xce,0xbd,0x00,0x10,0x07,0x01,0xff,0xce,0xbe,0x00,0x01,0xff,0xce,0xbf,
+- 0x00,0xe4,0x6e,0x43,0xd3,0x35,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcf,0x80,
+- 0x00,0x01,0xff,0xcf,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x83,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xcf,0x84,0x00,0x01,0xff,0xcf,0x85,0x00,0x10,0x07,0x01,
+- 0xff,0xcf,0x86,0x00,0x01,0xff,0xcf,0x87,0x00,0xe2,0x14,0x43,0xd1,0x0e,0x10,0x07,
+- 0x01,0xff,0xcf,0x88,0x00,0x01,0xff,0xcf,0x89,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xcf,0x86,0xd5,0x94,0xd4,0x3c,
+- 0xd3,0x13,0x92,0x0f,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0x83,0x00,0x01,
+- 0x00,0x01,0x00,0xd2,0x07,0x61,0x23,0x43,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
+- 0xcf,0x89,0xcc,0x81,0x00,0x0a,0xff,0xcf,0x97,0x00,0xd3,0x2c,0xd2,0x11,0xe1,0x2f,
+- 0x43,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb8,0x00,0xd1,0x10,0x10,
+- 0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0xff,0xcf,0x86,0x00,0x10,0x07,0x01,
+- 0xff,0xcf,0x80,0x00,0x04,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xcf,0x99,
+- 0x00,0x06,0x00,0x10,0x07,0x01,0xff,0xcf,0x9b,0x00,0x04,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xcf,0x9d,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0x9f,0x00,0x04,0x00,
+- 0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa1,0x00,0x04,
+- 0x00,0x10,0x07,0x01,0xff,0xcf,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xcf,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xa7,0x00,0x01,0x00,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,
+- 0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xad,0x00,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xcf,0xaf,0x00,0x01,0x00,0xd3,0x2b,0xd2,0x12,0x91,0x0e,0x10,0x07,
+- 0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xcf,0x81,0x00,0x01,0x00,0xd1,0x0e,0x10,0x07,
+- 0x05,0xff,0xce,0xb8,0x00,0x05,0xff,0xce,0xb5,0x00,0x10,0x04,0x06,0x00,0x07,0xff,
+- 0xcf,0xb8,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x07,0x00,0x07,0xff,0xcf,0xb2,0x00,
+- 0x10,0x07,0x07,0xff,0xcf,0xbb,0x00,0x07,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,
+- 0xff,0xcd,0xbb,0x00,0x10,0x07,0x08,0xff,0xcd,0xbc,0x00,0x08,0xff,0xcd,0xbd,0x00,
+- 0xe3,0xd6,0x46,0xe2,0x3d,0x05,0xe1,0x27,0x02,0xe0,0x66,0x01,0xcf,0x86,0xd5,0xf0,
+- 0xd4,0x7e,0xd3,0x40,0xd2,0x22,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0xb5,0xcc,0x80,
+- 0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x07,0x01,0xff,0xd1,0x92,0x00,0x01,
+- 0xff,0xd0,0xb3,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x94,0x00,0x01,
+- 0xff,0xd1,0x95,0x00,0x10,0x07,0x01,0xff,0xd1,0x96,0x00,0x01,0xff,0xd1,0x96,0xcc,
+- 0x88,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x98,0x00,0x01,0xff,0xd1,
+- 0x99,0x00,0x10,0x07,0x01,0xff,0xd1,0x9a,0x00,0x01,0xff,0xd1,0x9b,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,0x00,
+- 0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0xff,0xd1,0x9f,0x00,0xd3,0x38,
+- 0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xb0,0x00,0x01,0xff,0xd0,0xb1,0x00,
+- 0x10,0x07,0x01,0xff,0xd0,0xb2,0x00,0x01,0xff,0xd0,0xb3,0x00,0xd1,0x0e,0x10,0x07,
+- 0x01,0xff,0xd0,0xb4,0x00,0x01,0xff,0xd0,0xb5,0x00,0x10,0x07,0x01,0xff,0xd0,0xb6,
+- 0x00,0x01,0xff,0xd0,0xb7,0x00,0xd2,0x1e,0xd1,0x10,0x10,0x07,0x01,0xff,0xd0,0xb8,
+- 0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x10,0x07,0x01,0xff,0xd0,0xba,0x00,0x01,
+- 0xff,0xd0,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xbc,0x00,0x01,0xff,0xd0,
+- 0xbd,0x00,0x10,0x07,0x01,0xff,0xd0,0xbe,0x00,0x01,0xff,0xd0,0xbf,0x00,0xe4,0x0e,
+- 0x42,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x80,0x00,0x01,0xff,
+- 0xd1,0x81,0x00,0x10,0x07,0x01,0xff,0xd1,0x82,0x00,0x01,0xff,0xd1,0x83,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xd1,0x84,0x00,0x01,0xff,0xd1,0x85,0x00,0x10,0x07,0x01,
+- 0xff,0xd1,0x86,0x00,0x01,0xff,0xd1,0x87,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,
+- 0xff,0xd1,0x88,0x00,0x01,0xff,0xd1,0x89,0x00,0x10,0x07,0x01,0xff,0xd1,0x8a,0x00,
+- 0x01,0xff,0xd1,0x8b,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x8c,0x00,0x01,0xff,
+- 0xd1,0x8d,0x00,0x10,0x07,0x01,0xff,0xd1,0x8e,0x00,0x01,0xff,0xd1,0x8f,0x00,0xcf,
+- 0x86,0xd5,0x07,0x64,0xb8,0x41,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xd1,0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xa3,0x00,
+- 0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,
+- 0xff,0xd1,0xa7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa9,
+- 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd1,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xaf,0x00,0x01,0x00,
+- 0xd3,0x33,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb1,0x00,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xd1,0xb3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb5,
+- 0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,0xff,0xd1,0xb5,
+- 0xcc,0x8f,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb9,0x00,0x01,0x00,
+- 0x10,0x07,0x01,0xff,0xd1,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,
+- 0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbf,0x00,0x01,0x00,0xe0,0x41,0x01,
+- 0xcf,0x86,0xd5,0x8e,0xd4,0x36,0xd3,0x11,0xe2,0x7a,0x41,0xe1,0x71,0x41,0x10,0x07,
+- 0x01,0xff,0xd2,0x81,0x00,0x01,0x00,0xd2,0x0f,0x51,0x04,0x04,0x00,0x10,0x07,0x06,
+- 0xff,0xd2,0x8b,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,0xd2,0x8d,0x00,0x04,
+- 0x00,0x10,0x07,0x04,0xff,0xd2,0x8f,0x00,0x04,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xd2,0x91,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x93,0x00,
+- 0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x95,0x00,0x01,0x00,0x10,0x07,0x01,
+- 0xff,0xd2,0x97,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x99,
+- 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9b,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd2,0x9d,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9f,0x00,0x01,0x00,
+- 0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa1,0x00,0x01,
+- 0x00,0x10,0x07,0x01,0xff,0xd2,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xd2,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa7,0x00,0x01,0x00,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,
+- 0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xad,0x00,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xd2,0xaf,0x00,0x01,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd2,0xb1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xb3,0x00,0x01,0x00,
+- 0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,
+- 0xb7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb9,0x00,0x01,
+- 0x00,0x10,0x07,0x01,0xff,0xd2,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xd2,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbf,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xdc,0xd4,0x5a,0xd3,0x36,0xd2,0x20,0xd1,0x10,0x10,0x07,0x01,0xff,0xd3,0x8f,
+- 0x00,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb6,0xcc,0x86,
+- 0x00,0x01,0xff,0xd3,0x84,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x86,
+- 0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x88,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,
+- 0x01,0x00,0x06,0xff,0xd3,0x8a,0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x8c,0x00,
+- 0xe1,0x52,0x40,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8e,0x00,0xd3,0x41,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb0,0xcc,
+- 0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb0,0xcc,
+- 0x88,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x95,0x00,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xd0,0xb5,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0xd2,0x1d,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xd3,0x99,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x99,
+- 0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xd0,0xb6,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,
+- 0xd0,0xb7,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,0x82,0xd3,0x41,
+- 0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa1,0x00,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xd0,0xb8,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x10,
+- 0x09,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0xd2,
+- 0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa9,0x00,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xd3,0xa9,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,
+- 0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,
+- 0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x41,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x8b,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,0x01,
+- 0xff,0xd1,0x87,0xcc,0x88,0x00,0x10,0x07,0x08,0xff,0xd3,0xb7,0x00,0x08,0x00,0xd2,
+- 0x1d,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x01,0xff,0xd1,0x8b,
+- 0xcc,0x88,0x00,0x10,0x07,0x09,0xff,0xd3,0xbb,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,
+- 0x09,0xff,0xd3,0xbd,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd3,0xbf,0x00,0x09,0x00,
+- 0xe1,0x26,0x02,0xe0,0x78,0x01,0xcf,0x86,0xd5,0xb0,0xd4,0x58,0xd3,0x2c,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x81,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,
+- 0x83,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x85,0x00,0x06,0x00,0x10,
+- 0x07,0x06,0xff,0xd4,0x87,0x00,0x06,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,
+- 0xd4,0x89,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8b,0x00,0x06,0x00,0xd1,0x0b,
+- 0x10,0x07,0x06,0xff,0xd4,0x8d,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8f,0x00,
+- 0x06,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xd4,0x91,0x00,0x09,
+- 0x00,0x10,0x07,0x09,0xff,0xd4,0x93,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,
+- 0xd4,0x95,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x97,0x00,0x0a,0x00,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x99,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,
+- 0x9b,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x9d,0x00,0x0a,0x00,0x10,
+- 0x07,0x0a,0xff,0xd4,0x9f,0x00,0x0a,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x0a,0xff,0xd4,0xa1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0xa3,0x00,
+- 0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xd4,0xa5,0x00,0x0b,0x00,0x10,0x07,0x0c,
+- 0xff,0xd4,0xa7,0x00,0x0c,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x10,0xff,0xd4,0xa9,
+- 0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xab,0x00,0x10,0x00,0xd1,0x0b,0x10,0x07,
+- 0x10,0xff,0xd4,0xad,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xaf,0x00,0x10,0x00,
+- 0xd3,0x35,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xa1,0x00,0x10,
+- 0x07,0x01,0xff,0xd5,0xa2,0x00,0x01,0xff,0xd5,0xa3,0x00,0xd1,0x0e,0x10,0x07,0x01,
+- 0xff,0xd5,0xa4,0x00,0x01,0xff,0xd5,0xa5,0x00,0x10,0x07,0x01,0xff,0xd5,0xa6,0x00,
+- 0x01,0xff,0xd5,0xa7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xa8,0x00,
+- 0x01,0xff,0xd5,0xa9,0x00,0x10,0x07,0x01,0xff,0xd5,0xaa,0x00,0x01,0xff,0xd5,0xab,
+- 0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xac,0x00,0x01,0xff,0xd5,0xad,0x00,0x10,
+- 0x07,0x01,0xff,0xd5,0xae,0x00,0x01,0xff,0xd5,0xaf,0x00,0xcf,0x86,0xe5,0xf1,0x3e,
+- 0xd4,0x70,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb0,0x00,0x01,
+- 0xff,0xd5,0xb1,0x00,0x10,0x07,0x01,0xff,0xd5,0xb2,0x00,0x01,0xff,0xd5,0xb3,0x00,
+- 0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb4,0x00,0x01,0xff,0xd5,0xb5,0x00,0x10,0x07,
+- 0x01,0xff,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,
+- 0x01,0xff,0xd5,0xb8,0x00,0x01,0xff,0xd5,0xb9,0x00,0x10,0x07,0x01,0xff,0xd5,0xba,
+- 0x00,0x01,0xff,0xd5,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xbc,0x00,0x01,
+- 0xff,0xd5,0xbd,0x00,0x10,0x07,0x01,0xff,0xd5,0xbe,0x00,0x01,0xff,0xd5,0xbf,0x00,
+- 0xe3,0x70,0x3e,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x80,0x00,0x01,0xff,
+- 0xd6,0x81,0x00,0x10,0x07,0x01,0xff,0xd6,0x82,0x00,0x01,0xff,0xd6,0x83,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xd6,0x84,0x00,0x01,0xff,0xd6,0x85,0x00,0x10,0x07,0x01,
+- 0xff,0xd6,0x86,0x00,0x00,0x00,0xe0,0x18,0x3f,0xcf,0x86,0xe5,0xa9,0x3e,0xe4,0x80,
+- 0x3e,0xe3,0x5f,0x3e,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xd5,0xa5,0xd6,0x82,0x00,0xe4,0x3e,0x25,0xe3,0xc4,0x1a,0xe2,0xf8,0x80,
+- 0xe1,0xc0,0x13,0xd0,0x1e,0xcf,0x86,0xc5,0xe4,0xf0,0x4a,0xe3,0x3b,0x46,0xe2,0xd1,
+- 0x43,0xe1,0x04,0x43,0xe0,0xc9,0x42,0xcf,0x86,0xe5,0x8e,0x42,0x64,0x71,0x42,0x0b,
+- 0x00,0xcf,0x86,0xe5,0xfa,0x01,0xe4,0xd5,0x55,0xe3,0x76,0x01,0xe2,0x76,0x53,0xd1,
+- 0x0c,0xe0,0xd7,0x52,0xcf,0x86,0x65,0x75,0x52,0x04,0x00,0xe0,0x0d,0x01,0xcf,0x86,
+- 0xd5,0x0a,0xe4,0xf8,0x52,0x63,0xe7,0x52,0x0a,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x80,0x00,0x01,0xff,0xe2,0xb4,0x81,0x00,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x82,0x00,0x01,0xff,0xe2,0xb4,0x83,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x84,0x00,0x01,0xff,0xe2,0xb4,0x85,0x00,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x86,0x00,0x01,0xff,0xe2,0xb4,0x87,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x88,0x00,0x01,0xff,0xe2,0xb4,0x89,0x00,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x8a,0x00,0x01,0xff,0xe2,0xb4,0x8b,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x8c,0x00,0x01,0xff,0xe2,0xb4,0x8d,0x00,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x8e,0x00,0x01,0xff,0xe2,0xb4,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x90,0x00,0x01,0xff,0xe2,0xb4,0x91,0x00,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x92,0x00,0x01,0xff,0xe2,0xb4,0x93,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x94,0x00,0x01,0xff,0xe2,0xb4,0x95,0x00,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x96,0x00,0x01,0xff,0xe2,0xb4,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x98,0x00,0x01,0xff,0xe2,0xb4,0x99,0x00,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x9a,0x00,0x01,0xff,0xe2,0xb4,0x9b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x9c,0x00,0x01,0xff,0xe2,0xb4,0x9d,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,
+- 0x9e,0x00,0x01,0xff,0xe2,0xb4,0x9f,0x00,0xcf,0x86,0xe5,0x2a,0x52,0x94,0x50,0xd3,
+- 0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa0,0x00,0x01,0xff,0xe2,
+- 0xb4,0xa1,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa2,0x00,0x01,0xff,0xe2,0xb4,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa4,0x00,0x01,0xff,0xe2,0xb4,0xa5,
+- 0x00,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xa7,0x00,0x52,0x04,0x00,0x00,0x91,
+- 0x0c,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xad,0x00,0x00,0x00,0x01,0x00,0xd2,
+- 0x1b,0xe1,0xce,0x52,0xe0,0x7f,0x52,0xcf,0x86,0x95,0x0f,0x94,0x0b,0x93,0x07,0x62,
+- 0x64,0x52,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd1,0x13,0xe0,0xa5,0x53,0xcf,
+- 0x86,0x95,0x0a,0xe4,0x7a,0x53,0x63,0x69,0x53,0x04,0x00,0x04,0x00,0xd0,0x0d,0xcf,
+- 0x86,0x95,0x07,0x64,0xf4,0x53,0x08,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,
+- 0x54,0x04,0x04,0x00,0xd3,0x07,0x62,0x01,0x54,0x04,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x11,0xff,0xe1,0x8f,0xb0,0x00,0x11,0xff,0xe1,0x8f,0xb1,0x00,0x10,0x08,0x11,
+- 0xff,0xe1,0x8f,0xb2,0x00,0x11,0xff,0xe1,0x8f,0xb3,0x00,0x91,0x10,0x10,0x08,0x11,
+- 0xff,0xe1,0x8f,0xb4,0x00,0x11,0xff,0xe1,0x8f,0xb5,0x00,0x00,0x00,0xd4,0x1c,0xe3,
+- 0x92,0x56,0xe2,0xc9,0x55,0xe1,0x8c,0x55,0xe0,0x6d,0x55,0xcf,0x86,0x95,0x0a,0xe4,
+- 0x56,0x55,0x63,0x45,0x55,0x04,0x00,0x04,0x00,0xe3,0xd2,0x01,0xe2,0xdd,0x59,0xd1,
+- 0x0c,0xe0,0xfe,0x58,0xcf,0x86,0x65,0xd7,0x58,0x0a,0x00,0xe0,0x4e,0x59,0xcf,0x86,
+- 0xd5,0xc5,0xd4,0x45,0xd3,0x31,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x12,0xff,0xd0,0xb2,
+- 0x00,0x12,0xff,0xd0,0xb4,0x00,0x10,0x07,0x12,0xff,0xd0,0xbe,0x00,0x12,0xff,0xd1,
+- 0x81,0x00,0x51,0x07,0x12,0xff,0xd1,0x82,0x00,0x10,0x07,0x12,0xff,0xd1,0x8a,0x00,
+- 0x12,0xff,0xd1,0xa3,0x00,0x92,0x10,0x91,0x0c,0x10,0x08,0x12,0xff,0xea,0x99,0x8b,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,
+- 0xff,0xe1,0x83,0x90,0x00,0x14,0xff,0xe1,0x83,0x91,0x00,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0x92,0x00,0x14,0xff,0xe1,0x83,0x93,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0x94,0x00,0x14,0xff,0xe1,0x83,0x95,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x96,
+- 0x00,0x14,0xff,0xe1,0x83,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0x98,0x00,0x14,0xff,0xe1,0x83,0x99,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x9a,
+- 0x00,0x14,0xff,0xe1,0x83,0x9b,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0x9c,
+- 0x00,0x14,0xff,0xe1,0x83,0x9d,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x9e,0x00,0x14,
+- 0xff,0xe1,0x83,0x9f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,
+- 0xff,0xe1,0x83,0xa0,0x00,0x14,0xff,0xe1,0x83,0xa1,0x00,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xa2,0x00,0x14,0xff,0xe1,0x83,0xa3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xa4,0x00,0x14,0xff,0xe1,0x83,0xa5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xa6,
+- 0x00,0x14,0xff,0xe1,0x83,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xa8,0x00,0x14,0xff,0xe1,0x83,0xa9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xaa,
+- 0x00,0x14,0xff,0xe1,0x83,0xab,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xac,
+- 0x00,0x14,0xff,0xe1,0x83,0xad,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xae,0x00,0x14,
+- 0xff,0xe1,0x83,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xb0,0x00,0x14,0xff,0xe1,0x83,0xb1,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xb2,
+- 0x00,0x14,0xff,0xe1,0x83,0xb3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xb4,
+- 0x00,0x14,0xff,0xe1,0x83,0xb5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xb6,0x00,0x14,
+- 0xff,0xe1,0x83,0xb7,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xb8,
+- 0x00,0x14,0xff,0xe1,0x83,0xb9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xba,0x00,0x00,
+- 0x00,0xd1,0x0c,0x10,0x04,0x00,0x00,0x14,0xff,0xe1,0x83,0xbd,0x00,0x10,0x08,0x14,
+- 0xff,0xe1,0x83,0xbe,0x00,0x14,0xff,0xe1,0x83,0xbf,0x00,0xe2,0x9d,0x08,0xe1,0x48,
+- 0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa5,0x00,0x01,0xff,0x61,0xcc,0xa5,0x00,0x10,
+- 0x08,0x01,0xff,0x62,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x62,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,0x00,0x10,0x08,0x01,
+- 0xff,0x62,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,0x24,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,
+- 0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x87,0x00,0x01,0xff,0x64,0xcc,0x87,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa3,0x00,0x01,0xff,0x64,0xcc,0xa3,0x00,0x10,
+- 0x08,0x01,0xff,0x64,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,0x00,0xd3,0x48,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa7,0x00,0x01,0xff,0x64,0xcc,0xa7,
+- 0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xad,0x00,0x01,0xff,0x64,0xcc,0xad,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x84,
+- 0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0x01,0xff,0x65,
+- 0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xad,
+- 0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0xb0,0x00,0x01,
+- 0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,
+- 0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x66,0xcc,0x87,
+- 0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x67,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,0x00,0x10,0x08,0x01,
+- 0xff,0x68,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x68,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x68,
+- 0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x68,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x68,
+- 0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,
+- 0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,0xff,0x69,0xcc,0x88,
+- 0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x81,0x00,0x01,0xff,0x6b,0xcc,0x81,0x00,0x10,
+- 0x08,0x01,0xff,0x6b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x10,0x08,0x01,
+- 0xff,0x6c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,0x24,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0xb1,0x00,0x01,0xff,0x6c,0xcc,0xb1,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xad,0x00,0x01,0xff,0x6c,0xcc,0xad,0x00,0x10,
+- 0x08,0x01,0xff,0x6d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,0x00,0xcf,0x86,0xe5,
+- 0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6d,0xcc,
+- 0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6d,0xcc,0xa3,0x00,
+- 0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x87,0x00,
+- 0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa3,0x00,0x01,0xff,
+- 0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0xb1,0x00,
+- 0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xad,0x00,0x01,0xff,
+- 0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,
+- 0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,0xcc,
+- 0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,0xd2,0x28,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x84,0xcc,
+- 0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
+- 0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x70,0xcc,0x81,0x00,0x01,0xff,
+- 0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x70,0xcc,0x87,0x00,0x01,0xff,0x70,0xcc,
+- 0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x87,0x00,0x01,0xff,
+- 0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xa3,0x00,0x01,0xff,0x72,0xcc,
+- 0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,
+- 0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xb1,0x00,0x01,0xff,
+- 0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x73,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,
+- 0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,
+- 0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x10,0x0a,0x01,0xff,
+- 0x73,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,0x00,0xd2,0x24,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,
+- 0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0x87,0x00,0x01,0xff,0x74,0xcc,
+- 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xa3,0x00,0x01,0xff,0x74,0xcc,
+- 0xa3,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,0xb1,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xad,0x00,0x01,0xff,
+- 0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa4,0x00,0x01,0xff,0x75,0xcc,
+- 0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xb0,0x00,0x01,0xff,0x75,0xcc,
+- 0xb0,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xad,0x00,0x01,0xff,0x75,0xcc,0xad,0x00,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x01,0xff,
+- 0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,
+- 0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x76,0xcc,
+- 0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x76,0xcc,0xa3,0x00,
+- 0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x11,0x02,0xcf,0x86,0xd5,0xe2,0xd4,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x80,0x00,0x01,0xff,0x77,
+- 0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x81,0x00,0x01,0xff,0x77,0xcc,0x81,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x88,0x00,0x01,0xff,0x77,0xcc,0x88,
+- 0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x87,0x00,0x01,0xff,0x77,0xcc,0x87,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0xa3,0x00,0x01,0xff,0x77,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x78,0xcc,0x87,0x00,0x01,0xff,0x78,0xcc,0x87,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x78,0xcc,0x88,0x00,0x01,0xff,0x78,0xcc,0x88,0x00,0x10,
+- 0x08,0x01,0xff,0x79,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,0x00,0xd3,0x33,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x82,0x00,0x01,0xff,0x7a,0xcc,0x82,
+- 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0xa3,0x00,0x01,0xff,0x7a,0xcc,0xa3,0x00,0xe1,
+- 0xc4,0x58,0x10,0x08,0x01,0xff,0x7a,0xcc,0xb1,0x00,0x01,0xff,0x7a,0xcc,0xb1,0x00,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,0x79,0xcc,
+- 0x8a,0x00,0x10,0x08,0x01,0xff,0x61,0xca,0xbe,0x00,0x02,0xff,0x73,0xcc,0x87,0x00,
+- 0x51,0x04,0x0a,0x00,0x10,0x07,0x0a,0xff,0x73,0x73,0x00,0x0a,0x00,0xd4,0x98,0xd3,
+- 0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa3,0x00,0x01,0xff,0x61,
+- 0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x89,
+- 0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x01,0xff,0x61,
+- 0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0x01,
+- 0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,
+- 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
+- 0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0xa3,
+- 0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0x01,0xff,0x61,
+- 0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,
+- 0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,0x10,0x0a,0x01,
+- 0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x86,
+- 0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0x01,0xff,0x61,
+- 0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa3,
+- 0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x89,0x00,0x01,
+- 0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x83,0x00,0x01,
+- 0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0x01,
+- 0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,0x90,0xd3,0x50,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x01,0xff,
+- 0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,
+- 0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,
+- 0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,
+- 0x65,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x89,0x00,0x01,0xff,0x69,0xcc,0x89,0x00,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,0x50,0xd2,0x28,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
+- 0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0x01,0xff,
+- 0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,
+- 0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,
+- 0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,0x28,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,0xcc,0xa3,0xcc,
+- 0x82,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
+- 0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,
+- 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,
+- 0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,0x48,0xd2,0x28,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,
+- 0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,
+- 0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xa3,0x00,
+- 0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x89,0x00,0x01,0xff,
+- 0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,
+- 0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,
+- 0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,
+- 0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,
+- 0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,
+- 0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,
+- 0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,
+- 0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x89,0x00,
+- 0x01,0xff,0x79,0xcc,0x89,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,
+- 0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbb,0x00,
+- 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbd,0x00,0x0a,0x00,0x10,0x08,
+- 0x0a,0xff,0xe1,0xbb,0xbf,0x00,0x0a,0x00,0xe1,0xbf,0x02,0xe0,0xa1,0x01,0xcf,0x86,
+- 0xd5,0xc6,0xd4,0x6c,0xd3,0x18,0xe2,0xc0,0x58,0xe1,0xa9,0x58,0x10,0x09,0x01,0xff,
+- 0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,
+- 0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,
+- 0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,
+- 0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x18,
+- 0xe2,0xfc,0x58,0xe1,0xe5,0x58,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x93,0x00,0x01,
+- 0xff,0xce,0xb5,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,
+- 0xcc,0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,
+- 0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,
+- 0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,
+- 0x94,0xcc,0x81,0x00,0x00,0x00,0xd4,0x6c,0xd3,0x18,0xe2,0x26,0x59,0xe1,0x0f,0x59,
+- 0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,
+- 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,
+- 0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,
+- 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,
+- 0x82,0x00,0xd3,0x18,0xe2,0x62,0x59,0xe1,0x4b,0x59,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,
+- 0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,
+- 0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,0xcf,0x86,0xd5,0xac,
+- 0xd4,0x5a,0xd3,0x18,0xe2,0x9f,0x59,0xe1,0x88,0x59,0x10,0x09,0x01,0xff,0xce,0xbf,
+- 0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,
+- 0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x18,0xe2,0xc9,0x59,0xe1,
+- 0xb2,0x59,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,
+- 0x94,0x00,0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,
+- 0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,
+- 0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xe4,0x85,0x5a,0xd3,0x18,0xe2,
+- 0x04,0x5a,0xe1,0xed,0x59,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,
+- 0xcf,0x89,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
+- 0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,
+- 0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xe0,0xd9,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,
+- 0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,
+- 0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,
+- 0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
+- 0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,
+- 0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,
+- 0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,
+- 0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xce,0xb9,
+- 0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,
+- 0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,
+- 0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,
+- 0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,
+- 0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,
+- 0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd4,0xc8,0xd3,
+- 0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xce,0xb9,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0xce,0xb9,
+- 0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,
+- 0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,
+- 0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xce,
+- 0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,
+- 0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,
+- 0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0xce,
+- 0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,
+- 0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
+- 0xcd,0x82,0xce,0xb9,0x00,0xd3,0x49,0xd2,0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,
+- 0xb1,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,0xd1,0x0f,0x10,
+- 0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x81,0x00,0xe1,0xa5,0x5a,0x10,0x09,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,0x01,0x00,
+- 0xcf,0x86,0xd5,0xbd,0xd4,0x7e,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x80,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,
+- 0xb7,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcd,0x82,
+- 0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,0x09,
+- 0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0xe1,0xb4,
+- 0x5a,0x10,0x09,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0x01,0xff,0xe1,0xbe,0xbf,0xcc,
+- 0x80,0x00,0xd3,0x18,0xe2,0xda,0x5a,0xe1,0xc3,0x5a,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0xe2,0xfe,0x5a,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd4,
+- 0x51,0xd3,0x18,0xe2,0x21,0x5b,0xe1,0x0a,0x5b,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,
+- 0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0x10,0x09,0x01,
+- 0xff,0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0xe1,0x41,0x5b,
+- 0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,
+- 0xd3,0x3b,0xd2,0x18,0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,
+- 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
+- 0xcf,0x89,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,
+- 0x82,0x00,0x01,0xff,0xcf,0x89,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0xe1,
+- 0x4b,0x5b,0x10,0x09,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0x01,0xff,0xc2,0xb4,0x00,
+- 0xe0,0xa2,0x67,0xcf,0x86,0xe5,0x24,0x02,0xe4,0x26,0x01,0xe3,0x1b,0x5e,0xd2,0x2b,
+- 0xe1,0xf5,0x5b,0xe0,0x7a,0x5b,0xcf,0x86,0xe5,0x5f,0x5b,0x94,0x1c,0x93,0x18,0x92,
+- 0x14,0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd1,0xd6,0xd0,0x46,0xcf,0x86,0x55,
+- 0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xcf,0x89,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,0x00,0x10,0x06,
+- 0x01,0xff,0x6b,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x01,0x00,0xe3,0xba,0x5c,0x92,
+- 0x10,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0x8e,0x00,0x01,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0x0a,0xe4,0xd7,0x5c,0x63,0xc2,0x5c,0x06,0x00,0x94,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb0,0x00,0x01,0xff,0xe2,
+- 0x85,0xb1,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb2,0x00,0x01,0xff,0xe2,0x85,0xb3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb4,0x00,0x01,0xff,0xe2,0x85,0xb5,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb6,0x00,0x01,0xff,0xe2,0x85,0xb7,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb8,0x00,0x01,0xff,0xe2,0x85,0xb9,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xba,0x00,0x01,0xff,0xe2,0x85,0xbb,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xbc,0x00,0x01,0xff,0xe2,0x85,0xbd,0x00,0x10,
+- 0x08,0x01,0xff,0xe2,0x85,0xbe,0x00,0x01,0xff,0xe2,0x85,0xbf,0x00,0x01,0x00,0xe0,
+- 0xc9,0x5c,0xcf,0x86,0xe5,0xa8,0x5c,0xe4,0x87,0x5c,0xe3,0x76,0x5c,0xe2,0x69,0x5c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0xff,0xe2,0x86,0x84,0x00,0xe3,0xb8,
+- 0x60,0xe2,0x85,0x60,0xd1,0x0c,0xe0,0x32,0x60,0xcf,0x86,0x65,0x13,0x60,0x01,0x00,
+- 0xd0,0x62,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x18,0x52,0x04,
+- 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x90,0x00,0x01,0xff,
+- 0xe2,0x93,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x92,0x00,
+- 0x01,0xff,0xe2,0x93,0x93,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x94,0x00,0x01,0xff,
+- 0xe2,0x93,0x95,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x96,0x00,0x01,0xff,
+- 0xe2,0x93,0x97,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x98,0x00,0x01,0xff,0xe2,0x93,
+- 0x99,0x00,0xcf,0x86,0xe5,0xec,0x5f,0x94,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe2,0x93,0x9a,0x00,0x01,0xff,0xe2,0x93,0x9b,0x00,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0x9c,0x00,0x01,0xff,0xe2,0x93,0x9d,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0x9e,0x00,0x01,0xff,0xe2,0x93,0x9f,0x00,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa0,0x00,0x01,0xff,0xe2,0x93,0xa1,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0xa2,0x00,0x01,0xff,0xe2,0x93,0xa3,0x00,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa4,0x00,0x01,0xff,0xe2,0x93,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa6,0x00,0x01,0xff,0xe2,0x93,0xa7,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0xa8,
+- 0x00,0x01,0xff,0xe2,0x93,0xa9,0x00,0x01,0x00,0xd4,0x0c,0xe3,0xc8,0x61,0xe2,0xc1,
+- 0x61,0xcf,0x06,0x04,0x00,0xe3,0xa1,0x64,0xe2,0x94,0x63,0xe1,0x2e,0x02,0xe0,0x84,
+- 0x01,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe2,0xb0,0xb0,0x00,0x08,0xff,0xe2,0xb0,0xb1,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb2,0x00,0x08,0xff,0xe2,0xb0,0xb3,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb4,0x00,0x08,0xff,0xe2,0xb0,0xb5,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xb6,0x00,0x08,0xff,0xe2,0xb0,0xb7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb8,0x00,0x08,0xff,0xe2,0xb0,0xb9,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xba,0x00,0x08,0xff,0xe2,0xb0,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xbc,0x00,0x08,0xff,0xe2,0xb0,0xbd,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,0xbe,0x00,
+- 0x08,0xff,0xe2,0xb0,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x80,0x00,0x08,0xff,0xe2,0xb1,0x81,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x82,0x00,0x08,0xff,0xe2,0xb1,0x83,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x84,0x00,0x08,0xff,0xe2,0xb1,0x85,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x86,0x00,
+- 0x08,0xff,0xe2,0xb1,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x88,0x00,0x08,0xff,0xe2,0xb1,0x89,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8a,0x00,
+- 0x08,0xff,0xe2,0xb1,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8c,0x00,
+- 0x08,0xff,0xe2,0xb1,0x8d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8e,0x00,0x08,0xff,
+- 0xe2,0xb1,0x8f,0x00,0x94,0x7c,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x90,0x00,0x08,0xff,0xe2,0xb1,0x91,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x92,0x00,0x08,0xff,0xe2,0xb1,0x93,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x94,0x00,0x08,0xff,0xe2,0xb1,0x95,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x96,0x00,
+- 0x08,0xff,0xe2,0xb1,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x98,0x00,0x08,0xff,0xe2,0xb1,0x99,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9a,0x00,
+- 0x08,0xff,0xe2,0xb1,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9c,0x00,
+- 0x08,0xff,0xe2,0xb1,0x9d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9e,0x00,0x00,0x00,
+- 0x08,0x00,0xcf,0x86,0xd5,0x07,0x64,0x84,0x61,0x08,0x00,0xd4,0x63,0xd3,0x32,0xd2,
+- 0x1b,0xd1,0x0c,0x10,0x08,0x09,0xff,0xe2,0xb1,0xa1,0x00,0x09,0x00,0x10,0x07,0x09,
+- 0xff,0xc9,0xab,0x00,0x09,0xff,0xe1,0xb5,0xbd,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,
+- 0xc9,0xbd,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xa8,0x00,0xd2,
+- 0x18,0xd1,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xaa,0x00,0x10,0x04,0x09,
+- 0x00,0x09,0xff,0xe2,0xb1,0xac,0x00,0xd1,0x0b,0x10,0x04,0x09,0x00,0x0a,0xff,0xc9,
+- 0x91,0x00,0x10,0x07,0x0a,0xff,0xc9,0xb1,0x00,0x0a,0xff,0xc9,0x90,0x00,0xd3,0x27,
+- 0xd2,0x17,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xc9,0x92,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xe2,0xb1,0xb3,0x00,0x0a,0x00,0x91,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,
+- 0xb1,0xb6,0x00,0x09,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x07,0x0b,
+- 0xff,0xc8,0xbf,0x00,0x0b,0xff,0xc9,0x80,0x00,0xe0,0x83,0x01,0xcf,0x86,0xd5,0xc0,
+- 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x81,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x83,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x87,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x89,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8f,0x00,0x08,0x00,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x91,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x97,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x99,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x9f,0x00,0x08,0x00,0xd4,0x60,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa1,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0xa3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0xa5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa7,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa9,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0xab,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0xad,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xaf,0x00,0x08,0x00,0xd3,0x30,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb1,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0xb3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0xb5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb7,0x00,0x08,0x00,0xd2,0x18,
+- 0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb9,0x00,0x08,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0xbb,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbd,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbf,0x00,0x08,0x00,0xcf,0x86,0xd5,0xc0,
+- 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x81,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x83,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb3,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x87,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x89,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb3,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8f,0x00,0x08,0x00,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x91,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb3,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x97,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x99,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb3,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,
+- 0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x9f,0x00,0x08,0x00,0xd4,0x3b,
+- 0xd3,0x1c,0x92,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa1,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0xa3,0x00,0x08,0x00,0x08,0x00,0xd2,0x10,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0xff,0xe2,0xb3,0xac,0x00,0xe1,0xd0,0x5e,0x10,
+- 0x04,0x0b,0x00,0x0b,0xff,0xe2,0xb3,0xae,0x00,0xe3,0xd5,0x5e,0x92,0x10,0x51,0x04,
+- 0x0b,0xe6,0x10,0x08,0x0d,0xff,0xe2,0xb3,0xb3,0x00,0x0d,0x00,0x00,0x00,0xe2,0x98,
+- 0x08,0xd1,0x0b,0xe0,0x8d,0x66,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe0,0xe1,0x6b,0xcf,
+- 0x86,0xe5,0xa7,0x05,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x0c,0xe2,0x74,0x67,0xe1,
+- 0x0b,0x67,0xcf,0x06,0x04,0x00,0xe2,0xdb,0x01,0xe1,0x26,0x01,0xd0,0x09,0xcf,0x86,
+- 0x65,0x70,0x67,0x0a,0x00,0xcf,0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,
+- 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
+- 0x99,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x85,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x8b,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x8d,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x93,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x95,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x97,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x99,0x99,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x9b,0x00,0x0a,
+- 0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x99,0x9f,0x00,0x0a,0x00,0xe4,0xd9,0x66,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x0c,0xff,0xea,0x99,0xa1,0x00,0x0c,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,
+- 0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0xa5,0x00,0x0a,0x00,
+- 0x10,0x08,0x0a,0xff,0xea,0x99,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0a,0xff,0xea,0x99,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0xab,0x00,
+- 0x0a,0x00,0xe1,0x88,0x66,0x10,0x08,0x0a,0xff,0xea,0x99,0xad,0x00,0x0a,0x00,0xe0,
+- 0xb1,0x66,0xcf,0x86,0x95,0xab,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x83,0x00,
+- 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x85,0x00,0x0a,0x00,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8b,0x00,0x0a,0x00,
+- 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x93,0x00,0x0a,0x00,
+- 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x97,0x00,0x0a,0x00,0xe2,0x0e,0x66,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,
+- 0x9a,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9a,0x9b,0x00,0x10,0x00,0x0b,
+- 0x00,0xe1,0x10,0x02,0xd0,0xb9,0xcf,0x86,0xd5,0x07,0x64,0x1a,0x66,0x08,0x00,0xd4,
+- 0x58,0xd3,0x28,0xd2,0x10,0x51,0x04,0x09,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa3,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa5,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xab,0x00,0x0a,
+- 0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xaf,0x00,0x0a,0x00,0xd3,0x28,0xd2,0x10,0x51,0x04,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xb3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9c,0xb5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb7,0x00,0x0a,0x00,0xd2,
+- 0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb9,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbd,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbf,0x00,0x0a,0x00,0xcf,0x86,0xd5,
+- 0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x81,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0x85,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x87,
+- 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x89,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8f,0x00,0x0a,
+- 0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x91,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x97,0x00,0x0a,
+- 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x99,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0x9b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9d,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9f,0x00,0x0a,0x00,0xd4,
+- 0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa1,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0xa5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa7,0x00,0x0a,
+- 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa9,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0xab,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9d,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xaf,0x00,0x0a,0x00,0x53,
+- 0x04,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xba,
+- 0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xbc,0x00,0xd1,0x0c,0x10,0x04,0x0a,
+- 0x00,0x0a,0xff,0xe1,0xb5,0xb9,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xbf,0x00,0x0a,
+- 0x00,0xe0,0x71,0x01,0xcf,0x86,0xd5,0xa6,0xd4,0x4e,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x0a,0xff,0xea,0x9e,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9e,
+- 0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x85,0x00,0x0a,0x00,
+- 0x10,0x08,0x0a,0xff,0xea,0x9e,0x87,0x00,0x0a,0x00,0xd2,0x10,0x51,0x04,0x0a,0x00,
+- 0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9e,0x8c,0x00,0xe1,0x16,0x64,0x10,0x04,0x0a,
+- 0x00,0x0c,0xff,0xc9,0xa5,0x00,0xd3,0x28,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,
+- 0xea,0x9e,0x91,0x00,0x0c,0x00,0x10,0x08,0x0d,0xff,0xea,0x9e,0x93,0x00,0x0d,0x00,
+- 0x51,0x04,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x97,0x00,0x10,0x00,0xd2,0x18,
+- 0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,
+- 0xea,0x9e,0x9b,0x00,0x10,0x00,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x9d,0x00,
+- 0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x9f,0x00,0x10,0x00,0xd4,0x63,0xd3,0x30,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa1,0x00,0x0c,0x00,0x10,0x08,
+- 0x0c,0xff,0xea,0x9e,0xa3,0x00,0x0c,0x00,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,
+- 0xa5,0x00,0x0c,0x00,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa7,0x00,0x0c,0x00,0xd2,0x1a,
+- 0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa9,0x00,0x0c,0x00,0x10,0x07,0x0d,0xff,
+- 0xc9,0xa6,0x00,0x10,0xff,0xc9,0x9c,0x00,0xd1,0x0e,0x10,0x07,0x10,0xff,0xc9,0xa1,
+- 0x00,0x10,0xff,0xc9,0xac,0x00,0x10,0x07,0x12,0xff,0xc9,0xaa,0x00,0x14,0x00,0xd3,
+- 0x35,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x10,0xff,0xca,0x9e,0x00,0x10,0xff,0xca,0x87,
+- 0x00,0x10,0x07,0x11,0xff,0xca,0x9d,0x00,0x11,0xff,0xea,0xad,0x93,0x00,0xd1,0x0c,
+- 0x10,0x08,0x11,0xff,0xea,0x9e,0xb5,0x00,0x11,0x00,0x10,0x08,0x11,0xff,0xea,0x9e,
+- 0xb7,0x00,0x11,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x14,0xff,0xea,0x9e,0xb9,0x00,
+- 0x14,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbb,0x00,0x15,0x00,0xd1,0x0c,0x10,0x08,
+- 0x15,0xff,0xea,0x9e,0xbd,0x00,0x15,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbf,0x00,
+- 0x15,0x00,0xcf,0x86,0xe5,0x50,0x63,0x94,0x2f,0x93,0x2b,0xd2,0x10,0x51,0x04,0x00,
+- 0x00,0x10,0x08,0x15,0xff,0xea,0x9f,0x83,0x00,0x15,0x00,0xd1,0x0f,0x10,0x08,0x15,
+- 0xff,0xea,0x9e,0x94,0x00,0x15,0xff,0xca,0x82,0x00,0x10,0x08,0x15,0xff,0xe1,0xb6,
+- 0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe4,0x30,0x66,0xd3,0x1d,0xe2,0xd7,0x63,
+- 0xe1,0x86,0x63,0xe0,0x73,0x63,0xcf,0x86,0xe5,0x54,0x63,0x94,0x0b,0x93,0x07,0x62,
+- 0x3f,0x63,0x08,0x00,0x08,0x00,0x08,0x00,0xd2,0x0f,0xe1,0xd6,0x64,0xe0,0xa3,0x64,
+- 0xcf,0x86,0x65,0x88,0x64,0x0a,0x00,0xd1,0xab,0xd0,0x1a,0xcf,0x86,0xe5,0x93,0x65,
+- 0xe4,0x76,0x65,0xe3,0x5d,0x65,0xe2,0x50,0x65,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,
+- 0x00,0x0c,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x0b,0x93,0x07,0x62,0xa3,0x65,
+- 0x11,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
+- 0xa0,0x00,0x11,0xff,0xe1,0x8e,0xa1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa2,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa4,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa6,0x00,0x11,0xff,
+- 0xe1,0x8e,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa8,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xaa,0x00,0x11,0xff,
+- 0xe1,0x8e,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xac,0x00,0x11,0xff,
+- 0xe1,0x8e,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xae,0x00,0x11,0xff,0xe1,0x8e,
+- 0xaf,0x00,0xe0,0x2e,0x65,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb0,0x00,0x11,0xff,0xe1,0x8e,0xb1,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb2,0x00,0x11,0xff,0xe1,0x8e,0xb3,0x00,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb4,0x00,0x11,0xff,0xe1,0x8e,0xb5,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xb6,0x00,0x11,0xff,0xe1,0x8e,0xb7,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb8,0x00,0x11,0xff,0xe1,0x8e,0xb9,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xba,0x00,0x11,0xff,0xe1,0x8e,0xbb,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xbc,0x00,0x11,0xff,0xe1,0x8e,0xbd,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8e,0xbe,0x00,0x11,0xff,0xe1,0x8e,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x80,0x00,0x11,0xff,0xe1,0x8f,0x81,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x82,0x00,0x11,0xff,0xe1,0x8f,0x83,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x84,0x00,0x11,0xff,0xe1,0x8f,0x85,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x86,0x00,0x11,0xff,0xe1,0x8f,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x88,0x00,0x11,0xff,0xe1,0x8f,0x89,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x8a,0x00,0x11,0xff,0xe1,0x8f,0x8b,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x8c,0x00,0x11,0xff,0xe1,0x8f,0x8d,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0x8e,0x00,0x11,0xff,0xe1,0x8f,0x8f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x90,0x00,0x11,0xff,0xe1,0x8f,0x91,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x92,0x00,0x11,0xff,0xe1,0x8f,0x93,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x94,0x00,0x11,0xff,0xe1,0x8f,0x95,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x96,0x00,0x11,0xff,0xe1,0x8f,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x98,0x00,0x11,0xff,0xe1,0x8f,0x99,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x9a,0x00,0x11,0xff,0xe1,0x8f,0x9b,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x9c,0x00,0x11,0xff,0xe1,0x8f,0x9d,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0x9e,0x00,0x11,0xff,0xe1,0x8f,0x9f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0xa0,0x00,0x11,0xff,0xe1,0x8f,0xa1,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa2,0x00,0x11,0xff,0xe1,0x8f,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa4,0x00,0x11,0xff,0xe1,0x8f,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xa6,0x00,0x11,0xff,0xe1,0x8f,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa8,0x00,0x11,0xff,0xe1,0x8f,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xaa,0x00,0x11,0xff,0xe1,0x8f,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xac,0x00,0x11,0xff,0xe1,0x8f,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,0xae,0x00,
+- 0x11,0xff,0xe1,0x8f,0xaf,0x00,0xd1,0x0c,0xe0,0x67,0x63,0xcf,0x86,0xcf,0x06,0x02,
+- 0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,
+- 0x01,0x00,0xd4,0xae,0xd3,0x09,0xe2,0xd0,0x63,0xcf,0x06,0x01,0x00,0xd2,0x27,0xe1,
+- 0x9b,0x6f,0xe0,0xa2,0x6d,0xcf,0x86,0xe5,0xbb,0x6c,0xe4,0x4a,0x6c,0xe3,0x15,0x6c,
+- 0xe2,0xf4,0x6b,0xe1,0xe3,0x6b,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,0x01,0xff,
+- 0xe5,0xba,0xa6,0x00,0xe1,0xf0,0x73,0xe0,0x64,0x73,0xcf,0x86,0xe5,0x9e,0x72,0xd4,
+- 0x3b,0x93,0x37,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x01,0xff,0x66,0x66,0x00,0x01,0xff,
+- 0x66,0x69,0x00,0x10,0x07,0x01,0xff,0x66,0x6c,0x00,0x01,0xff,0x66,0x66,0x69,0x00,
+- 0xd1,0x0f,0x10,0x08,0x01,0xff,0x66,0x66,0x6c,0x00,0x01,0xff,0x73,0x74,0x00,0x10,
+- 0x07,0x01,0xff,0x73,0x74,0x00,0x00,0x00,0x00,0x00,0xe3,0x44,0x72,0xd2,0x11,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xb6,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xd5,0xb4,0xd5,0xa5,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xab,0x00,
+- 0x10,0x09,0x01,0xff,0xd5,0xbe,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xad,0x00,
+- 0xd3,0x09,0xe2,0xbc,0x73,0xcf,0x06,0x01,0x00,0xd2,0x12,0xe1,0xab,0x74,0xe0,0x3c,
+- 0x74,0xcf,0x86,0xe5,0x19,0x74,0x64,0x08,0x74,0x06,0x00,0xe1,0x11,0x75,0xe0,0xde,
+- 0x74,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x7c,0xd3,0x3c,0xd2,
+- 0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xef,0xbd,0x81,0x00,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x82,0x00,0x01,0xff,0xef,0xbd,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x84,0x00,0x01,0xff,0xef,0xbd,0x85,0x00,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x86,0x00,0x01,0xff,0xef,0xbd,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x88,0x00,0x01,0xff,0xef,0xbd,0x89,0x00,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x8a,0x00,0x01,0xff,0xef,0xbd,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x8c,0x00,0x01,0xff,0xef,0xbd,0x8d,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x8e,
+- 0x00,0x01,0xff,0xef,0xbd,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x90,0x00,0x01,0xff,0xef,0xbd,0x91,0x00,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x92,0x00,0x01,0xff,0xef,0xbd,0x93,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x94,0x00,0x01,0xff,0xef,0xbd,0x95,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x96,
+- 0x00,0x01,0xff,0xef,0xbd,0x97,0x00,0x92,0x1c,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x98,0x00,0x01,0xff,0xef,0xbd,0x99,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x9a,
+- 0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0xd9,0xb2,0xe1,0xc3,0xaf,0xe0,0x40,0xae,0xcf,
+- 0x86,0xe5,0xe4,0x9a,0xc4,0xe3,0xc1,0x07,0xe2,0x62,0x06,0xe1,0x79,0x85,0xe0,0x09,
+- 0x05,0xcf,0x86,0xe5,0xfb,0x02,0xd4,0x1c,0xe3,0xe7,0x75,0xe2,0x3e,0x75,0xe1,0x19,
+- 0x75,0xe0,0xf2,0x74,0xcf,0x86,0xe5,0xbf,0x74,0x94,0x07,0x63,0xaa,0x74,0x07,0x00,
+- 0x07,0x00,0xe3,0x93,0x77,0xe2,0x58,0x77,0xe1,0x77,0x01,0xe0,0xf0,0x76,0xcf,0x86,
+- 0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xa8,0x00,0x05,0xff,0xf0,0x90,0x90,0xa9,0x00,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xaa,0x00,0x05,0xff,0xf0,0x90,0x90,0xab,0x00,0xd1,0x12,0x10,0x09,0x05,
+- 0xff,0xf0,0x90,0x90,0xac,0x00,0x05,0xff,0xf0,0x90,0x90,0xad,0x00,0x10,0x09,0x05,
+- 0xff,0xf0,0x90,0x90,0xae,0x00,0x05,0xff,0xf0,0x90,0x90,0xaf,0x00,0xd2,0x24,0xd1,
+- 0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb0,0x00,0x05,0xff,0xf0,0x90,0x90,0xb1,
+- 0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb2,0x00,0x05,0xff,0xf0,0x90,0x90,0xb3,
+- 0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb4,0x00,0x05,0xff,0xf0,0x90,
+- 0x90,0xb5,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb6,0x00,0x05,0xff,0xf0,0x90,
+- 0x90,0xb7,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,
+- 0xb8,0x00,0x05,0xff,0xf0,0x90,0x90,0xb9,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,
+- 0xba,0x00,0x05,0xff,0xf0,0x90,0x90,0xbb,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xbc,0x00,0x05,0xff,0xf0,0x90,0x90,0xbd,0x00,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xbe,0x00,0x05,0xff,0xf0,0x90,0x90,0xbf,0x00,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x05,0xff,0xf0,0x90,0x91,0x80,0x00,0x05,0xff,0xf0,0x90,0x91,0x81,0x00,0x10,
+- 0x09,0x05,0xff,0xf0,0x90,0x91,0x82,0x00,0x05,0xff,0xf0,0x90,0x91,0x83,0x00,0xd1,
+- 0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x84,0x00,0x05,0xff,0xf0,0x90,0x91,0x85,
+- 0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x86,0x00,0x05,0xff,0xf0,0x90,0x91,0x87,
+- 0x00,0x94,0x4c,0x93,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,
+- 0x88,0x00,0x05,0xff,0xf0,0x90,0x91,0x89,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,
+- 0x8a,0x00,0x05,0xff,0xf0,0x90,0x91,0x8b,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x91,0x8c,0x00,0x05,0xff,0xf0,0x90,0x91,0x8d,0x00,0x10,0x09,0x07,0xff,0xf0,
+- 0x90,0x91,0x8e,0x00,0x07,0xff,0xf0,0x90,0x91,0x8f,0x00,0x05,0x00,0x05,0x00,0xd0,
+- 0xa0,0xcf,0x86,0xd5,0x07,0x64,0x98,0x75,0x07,0x00,0xd4,0x07,0x63,0xa5,0x75,0x07,
+- 0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0x98,0x00,
+- 0x12,0xff,0xf0,0x90,0x93,0x99,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0x9a,0x00,
+- 0x12,0xff,0xf0,0x90,0x93,0x9b,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,
+- 0x9c,0x00,0x12,0xff,0xf0,0x90,0x93,0x9d,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,
+- 0x9e,0x00,0x12,0xff,0xf0,0x90,0x93,0x9f,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,
+- 0xff,0xf0,0x90,0x93,0xa0,0x00,0x12,0xff,0xf0,0x90,0x93,0xa1,0x00,0x10,0x09,0x12,
+- 0xff,0xf0,0x90,0x93,0xa2,0x00,0x12,0xff,0xf0,0x90,0x93,0xa3,0x00,0xd1,0x12,0x10,
+- 0x09,0x12,0xff,0xf0,0x90,0x93,0xa4,0x00,0x12,0xff,0xf0,0x90,0x93,0xa5,0x00,0x10,
+- 0x09,0x12,0xff,0xf0,0x90,0x93,0xa6,0x00,0x12,0xff,0xf0,0x90,0x93,0xa7,0x00,0xcf,
+- 0x86,0xe5,0x2e,0x75,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x90,0x93,0xa8,0x00,0x12,0xff,0xf0,0x90,0x93,0xa9,0x00,0x10,0x09,0x12,0xff,
+- 0xf0,0x90,0x93,0xaa,0x00,0x12,0xff,0xf0,0x90,0x93,0xab,0x00,0xd1,0x12,0x10,0x09,
+- 0x12,0xff,0xf0,0x90,0x93,0xac,0x00,0x12,0xff,0xf0,0x90,0x93,0xad,0x00,0x10,0x09,
+- 0x12,0xff,0xf0,0x90,0x93,0xae,0x00,0x12,0xff,0xf0,0x90,0x93,0xaf,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb0,0x00,0x12,0xff,0xf0,0x90,0x93,
+- 0xb1,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb2,0x00,0x12,0xff,0xf0,0x90,0x93,
+- 0xb3,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb4,0x00,0x12,0xff,0xf0,
+- 0x90,0x93,0xb5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb6,0x00,0x12,0xff,0xf0,
+- 0x90,0x93,0xb7,0x00,0x93,0x28,0x92,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,
+- 0x93,0xb8,0x00,0x12,0xff,0xf0,0x90,0x93,0xb9,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,
+- 0x93,0xba,0x00,0x12,0xff,0xf0,0x90,0x93,0xbb,0x00,0x00,0x00,0x12,0x00,0xd4,0x1f,
+- 0xe3,0x47,0x76,0xe2,0xd2,0x75,0xe1,0x71,0x75,0xe0,0x52,0x75,0xcf,0x86,0xe5,0x1f,
+- 0x75,0x94,0x0a,0xe3,0x0a,0x75,0x62,0x01,0x75,0x07,0x00,0x07,0x00,0xe3,0x46,0x78,
+- 0xe2,0x17,0x78,0xd1,0x09,0xe0,0xb4,0x77,0xcf,0x06,0x0b,0x00,0xe0,0xe7,0x77,0xcf,
+- 0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x80,0x00,0x11,0xff,0xf0,0x90,0xb3,0x81,0x00,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x82,0x00,0x11,0xff,0xf0,0x90,0xb3,0x83,0x00,0xd1,0x12,0x10,0x09,
+- 0x11,0xff,0xf0,0x90,0xb3,0x84,0x00,0x11,0xff,0xf0,0x90,0xb3,0x85,0x00,0x10,0x09,
+- 0x11,0xff,0xf0,0x90,0xb3,0x86,0x00,0x11,0xff,0xf0,0x90,0xb3,0x87,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x88,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x89,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8a,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x8b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8c,0x00,0x11,0xff,0xf0,
+- 0x90,0xb3,0x8d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8e,0x00,0x11,0xff,0xf0,
+- 0x90,0xb3,0x8f,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0x90,0x00,0x11,0xff,0xf0,0x90,0xb3,0x91,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0x92,0x00,0x11,0xff,0xf0,0x90,0xb3,0x93,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x94,0x00,0x11,0xff,0xf0,0x90,0xb3,0x95,0x00,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x96,0x00,0x11,0xff,0xf0,0x90,0xb3,0x97,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x98,0x00,0x11,0xff,0xf0,0x90,0xb3,0x99,0x00,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9a,0x00,0x11,0xff,0xf0,0x90,0xb3,0x9b,0x00,
+- 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9c,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x9d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9e,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x9f,0x00,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0xa0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa1,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0xa2,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa3,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0xa4,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa5,0x00,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0xa6,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa7,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xa8,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa9,0x00,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xaa,0x00,0x11,0xff,0xf0,0x90,0xb3,0xab,0x00,
+- 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xac,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0xad,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xae,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0xaf,0x00,0x93,0x23,0x92,0x1f,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xb0,
+- 0x00,0x11,0xff,0xf0,0x90,0xb3,0xb1,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xb2,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x15,0xe4,0xf9,0x7a,0xe3,0x03,
+- 0x79,0xe2,0xfc,0x77,0xe1,0x4c,0x77,0xe0,0x05,0x77,0xcf,0x06,0x0c,0x00,0xe4,0x53,
+- 0x7e,0xe3,0xac,0x7d,0xe2,0x55,0x7d,0xd1,0x0c,0xe0,0x1a,0x7d,0xcf,0x86,0x65,0xfb,
+- 0x7c,0x14,0x00,0xe0,0x1e,0x7d,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x90,0xd3,0x48,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x80,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x81,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x82,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x83,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x84,0x00,0x10,
+- 0xff,0xf0,0x91,0xa3,0x85,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x86,0x00,0x10,
+- 0xff,0xf0,0x91,0xa3,0x87,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x88,0x00,0x10,0xff,0xf0,0x91,0xa3,0x89,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x8a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8b,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,
+- 0xf0,0x91,0xa3,0x8c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8d,0x00,0x10,0x09,0x10,0xff,
+- 0xf0,0x91,0xa3,0x8e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8f,0x00,0xd3,0x48,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x90,0x00,0x10,0xff,0xf0,0x91,0xa3,
+- 0x91,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x92,0x00,0x10,0xff,0xf0,0x91,0xa3,
+- 0x93,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x94,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x95,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x96,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x97,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x98,
+- 0x00,0x10,0xff,0xf0,0x91,0xa3,0x99,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x9a,
+- 0x00,0x10,0xff,0xf0,0x91,0xa3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x9c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9d,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x9e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9f,0x00,0xd1,0x11,0xe0,0x7a,0x80,0xcf,
+- 0x86,0xe5,0x71,0x80,0xe4,0x3a,0x80,0xcf,0x06,0x00,0x00,0xe0,0x43,0x82,0xcf,0x86,
+- 0xd5,0x06,0xcf,0x06,0x00,0x00,0xd4,0x09,0xe3,0x78,0x80,0xcf,0x06,0x0c,0x00,0xd3,
+- 0x06,0xcf,0x06,0x00,0x00,0xe2,0xa3,0x81,0xe1,0x7e,0x81,0xd0,0x06,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0xa5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xa0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa1,0x00,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xa2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa3,0x00,0xd1,0x12,
+- 0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa5,0x00,
+- 0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa7,0x00,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa8,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xa9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xaa,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xab,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xac,0x00,0x14,
+- 0xff,0xf0,0x96,0xb9,0xad,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xae,0x00,0x14,
+- 0xff,0xf0,0x96,0xb9,0xaf,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,
+- 0xf0,0x96,0xb9,0xb0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb1,0x00,0x10,0x09,0x14,0xff,
+- 0xf0,0x96,0xb9,0xb2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb3,0x00,0xd1,0x12,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xb4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb5,0x00,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xb6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb7,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb8,0x00,0x14,0xff,0xf0,0x96,0xb9,
+- 0xb9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xba,0x00,0x14,0xff,0xf0,0x96,0xb9,
+- 0xbb,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbc,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xbd,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbe,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xbf,0x00,0x14,0x00,0xd2,0x14,0xe1,0x8d,0x81,0xe0,0x84,0x81,0xcf,0x86,
+- 0xe5,0x45,0x81,0xe4,0x02,0x81,0xcf,0x06,0x12,0x00,0xd1,0x0b,0xe0,0xb8,0x82,0xcf,
+- 0x86,0xcf,0x06,0x00,0x00,0xe0,0xf8,0x8a,0xcf,0x86,0xd5,0x22,0xe4,0x33,0x88,0xe3,
+- 0xf6,0x87,0xe2,0x9b,0x87,0xe1,0x94,0x87,0xe0,0x8d,0x87,0xcf,0x86,0xe5,0x5e,0x87,
+- 0xe4,0x45,0x87,0x93,0x07,0x62,0x34,0x87,0x12,0xe6,0x12,0xe6,0xe4,0x99,0x88,0xe3,
+- 0x92,0x88,0xd2,0x09,0xe1,0x1b,0x88,0xcf,0x06,0x10,0x00,0xe1,0x82,0x88,0xe0,0x4f,
+- 0x88,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xa2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa3,0x00,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xa4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa5,0x00,0xd1,0x12,
+- 0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa7,0x00,
+- 0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa9,0x00,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xaa,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa4,0xab,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xac,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa4,0xad,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xae,0x00,0x12,
+- 0xff,0xf0,0x9e,0xa4,0xaf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb0,0x00,0x12,
+- 0xff,0xf0,0x9e,0xa4,0xb1,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x9e,0xa4,0xb2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb3,0x00,0x10,0x09,0x12,0xff,
+- 0xf0,0x9e,0xa4,0xb4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb5,0x00,0xd1,0x12,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xb6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb7,0x00,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xb8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb9,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xba,0x00,0x12,0xff,0xf0,0x9e,0xa4,
+- 0xbb,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbc,0x00,0x12,0xff,0xf0,0x9e,0xa4,
+- 0xbd,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbe,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa4,0xbf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa5,0x80,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa5,0x81,0x00,0x94,0x1e,0x93,0x1a,0x92,0x16,0x91,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x9e,0xa5,0x82,0x00,0x12,0xff,0xf0,0x9e,0xa5,0x83,0x00,0x12,0x00,0x12,0x00,
+- 0x12,0x00,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- /* nfdi_c0100 */
+- 0x57,0x04,0x01,0x00,0xc6,0xe5,0x91,0x13,0xe4,0x27,0x0c,0xe3,0x61,0x07,0xe2,0xda,
+- 0x01,0xc1,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0xe4,0xd4,0x7c,0xd3,0x3c,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x80,0x00,0x01,0xff,0x41,0xcc,
+- 0x81,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x82,0x00,0x01,0xff,0x41,0xcc,0x83,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x88,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0x43,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x45,0xcc,0x80,0x00,0x01,0xff,0x45,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
+- 0x45,0xcc,0x82,0x00,0x01,0xff,0x45,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x49,0xcc,0x80,0x00,0x01,0xff,0x49,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,
+- 0x82,0x00,0x01,0xff,0x49,0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0x4e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x80,0x00,
+- 0x01,0xff,0x4f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x82,0x00,
+- 0x01,0xff,0x4f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x88,0x00,0x01,0x00,
+- 0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x55,0xcc,0x80,0x00,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x81,0x00,0x01,0xff,0x55,0xcc,0x82,0x00,0x91,0x10,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x88,0x00,0x01,0xff,0x59,0xcc,0x81,0x00,0x01,0x00,0xd4,0x7c,
+- 0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,
+- 0x61,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,
+- 0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,
+- 0x8a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,
+- 0x01,0xff,0x65,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0x80,0x00,0x01,0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
+- 0x69,0xcc,0x82,0x00,0x01,0xff,0x69,0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,
+- 0x80,0x00,0x01,0xff,0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,
+- 0x82,0x00,0x01,0xff,0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,
+- 0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,
+- 0x10,0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0x79,0xcc,0x88,0x00,0xe1,0x9a,0x03,0xe0,0xd3,0x01,0xcf,0x86,
+- 0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,
+- 0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x86,0x00,
+- 0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa8,0x00,
+- 0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x43,0xcc,0x81,0x00,0x01,0xff,
+- 0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x82,0x00,
+- 0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x43,0xcc,0x87,0x00,0x01,0xff,
+- 0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x8c,0x00,0x01,0xff,
+- 0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x8c,0x00,0x01,0xff,0x64,0xcc,
+- 0x8c,0x00,0xd3,0x34,0xd2,0x14,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x84,0x00,0x01,0xff,0x65,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x86,0x00,0x01,0xff,0x65,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x87,0x00,
+- 0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x8c,0x00,
+- 0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x82,0x00,
+- 0x01,0xff,0x67,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0x86,0x00,0x01,0xff,
+- 0x67,0xcc,0x86,0x00,0xd4,0x74,0xd3,0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x47,0xcc,0x87,0x00,0x01,0xff,0x67,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,
+- 0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,
+- 0x82,0x00,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x49,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
+- 0x49,0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x49,0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,
+- 0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x30,0xd2,0x10,0x91,0x0c,0x10,0x08,
+- 0x01,0xff,0x49,0xcc,0x87,0x00,0x01,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x4a,0xcc,0x82,0x00,0x01,0xff,0x6a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,
+- 0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x4c,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,
+- 0x4c,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,
+- 0x4c,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xd4,0xd4,0x60,0xd3,0x30,0xd2,0x10,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x4e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,
+- 0x01,0xff,0x4e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,
+- 0x4e,0xcc,0x8c,0x00,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,
+- 0x01,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x84,0x00,0x01,0xff,
+- 0x6f,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,
+- 0x86,0x00,0xd3,0x34,0xd2,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8b,0x00,
+- 0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa7,0x00,
+- 0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0x81,0x00,
+- 0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x82,0x00,
+- 0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0xa7,0x00,0x01,0xff,
+- 0x73,0xcc,0xa7,0x00,0xd4,0x74,0xd3,0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x53,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,
+- 0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,
+- 0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x83,0x00,0x01,0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0x86,0x00,0x01,0xff,0x75,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,
+- 0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x8b,0x00,0x01,0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0xa8,0x00,0x01,0xff,0x75,0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x57,0xcc,0x82,0x00,0x01,0xff,0x77,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,
+- 0x82,0x00,0x01,0xff,0x79,0xcc,0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x59,0xcc,0x88,0x00,0x01,0xff,0x5a,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,
+- 0x81,0x00,0x01,0xff,0x5a,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,
+- 0x87,0x00,0x01,0xff,0x5a,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,
+- 0x01,0x00,0xd0,0x4a,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x2c,0xd3,0x18,0x92,0x14,
+- 0x91,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,
+- 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x55,0xcc,0x9b,0x00,0x93,0x14,0x92,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,
+- 0x75,0xcc,0x9b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xb4,
+- 0xd4,0x24,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x41,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,
+- 0x49,0xcc,0x8c,0x00,0xd3,0x46,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,
+- 0x8c,0x00,0x01,0xff,0x4f,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,
+- 0x01,0xff,0x55,0xcc,0x8c,0x00,0xd1,0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,
+- 0x01,0xff,0x55,0xcc,0x88,0xcc,0x84,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,
+- 0x84,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x8c,0x00,
+- 0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,
+- 0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,
+- 0x10,0x0a,0x01,0xff,0x41,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,
+- 0x84,0x00,0xd4,0x80,0xd3,0x3a,0xd2,0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,
+- 0x87,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,
+- 0xc3,0x86,0xcc,0x84,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x08,0x01,0xff,0x47,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,
+- 0x10,0x08,0x01,0xff,0x4f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xc6,0xb7,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,
+- 0x8c,0x00,0xd3,0x24,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,
+- 0x01,0x00,0x01,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x81,0x00,0x01,0xff,
+- 0x67,0xcc,0x81,0x00,0x04,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x4e,0xcc,
+- 0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x8a,0xcc,
+- 0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xc3,0x86,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
+- 0xc3,0x98,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x07,0x02,0xe1,
+- 0xae,0x01,0xe0,0x93,0x01,0xcf,0x86,0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,0x10,
+- 0x08,0x01,0xff,0x41,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x45,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,0x01,
+- 0xff,0x45,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x49,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,0x01,
+- 0xff,0x49,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x4f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x4f,
+- 0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x52,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,0x01,
+- 0xff,0x52,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x55,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x55,
+- 0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x04,
+- 0xff,0x53,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,0x54,
+- 0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,
+- 0xff,0x48,0xcc,0x8c,0x00,0x04,0xff,0x68,0xcc,0x8c,0x00,0xd4,0x68,0xd3,0x20,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x07,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
+- 0x08,0x04,0xff,0x41,0xcc,0x87,0x00,0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,
+- 0x10,0x10,0x08,0x04,0xff,0x45,0xcc,0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,
+- 0x0a,0x04,0xff,0x4f,0xcc,0x88,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,
+- 0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,
+- 0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x4f,0xcc,0x87,0x00,0x04,0xff,0x6f,
+- 0xcc,0x87,0x00,0x93,0x30,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x87,
+- 0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x59,
+- 0xcc,0x84,0x00,0x04,0xff,0x79,0xcc,0x84,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,
+- 0x00,0x08,0x00,0x08,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,
+- 0x04,0x08,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,0xcf,
+- 0x86,0x55,0x04,0x01,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x07,0x00,0x01,0x00,0xcf,
+- 0x86,0xd5,0x18,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,
+- 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,
+- 0x00,0x07,0x00,0xe1,0x34,0x01,0xd0,0x72,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0xe6,
+- 0xd3,0x10,0x52,0x04,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0xe8,0x01,0xdc,
+- 0x92,0x0c,0x51,0x04,0x01,0xdc,0x10,0x04,0x01,0xe8,0x01,0xd8,0x01,0xdc,0xd4,0x2c,
+- 0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0xdc,0x01,0xca,0x10,0x04,0x01,0xca,
+- 0x01,0xdc,0x51,0x04,0x01,0xdc,0x10,0x04,0x01,0xdc,0x01,0xca,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x01,0xca,0x01,0xdc,0x01,0xdc,0x01,0xdc,0xd3,0x08,0x12,0x04,0x01,0xdc,
+- 0x01,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x01,0x01,0xdc,0x01,0xdc,0x91,0x08,
+- 0x10,0x04,0x01,0xdc,0x01,0xe6,0x01,0xe6,0xcf,0x86,0xd5,0x7e,0xd4,0x46,0xd3,0x2e,
+- 0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,
+- 0x10,0x04,0x01,0xe6,0x01,0xff,0xcc,0x93,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcc,
+- 0x88,0xcc,0x81,0x00,0x01,0xf0,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x08,0x11,0x04,
+- 0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x04,0xdc,
+- 0x06,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x07,0xe6,0x10,0x04,0x07,0xe6,0x07,0xdc,
+- 0x51,0x04,0x07,0xdc,0x10,0x04,0x07,0xdc,0x07,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x08,0xe8,0x08,0xdc,0x10,0x04,0x08,0xdc,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xe9,
+- 0x07,0xea,0x10,0x04,0x07,0xea,0x07,0xe9,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,
+- 0x01,0xea,0x10,0x04,0x04,0xe9,0x06,0xe6,0x06,0xe6,0x06,0xe6,0xd3,0x13,0x52,0x04,
+- 0x0a,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x0a,0x00,0xd2,
+- 0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x01,0x00,0x09,0x00,0x51,0x04,0x09,0x00,0x10,
+- 0x06,0x01,0xff,0x3b,0x00,0x10,0x00,0xd0,0xe1,0xcf,0x86,0xd5,0x7a,0xd4,0x5f,0xd3,
+- 0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcc,
+- 0x81,0x00,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0x01,0xff,0xc2,0xb7,0x00,
+- 0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x01,0xff,0xce,
+- 0x97,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0x00,0x00,0xd1,
+- 0x0d,0x10,0x09,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0xa5,0xcc,0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,
+- 0x91,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0xd4,0x4a,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xce,0x99,0xcc,0x88,0x00,0x01,0xff,0xce,0xa5,0xcc,0x88,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0x93,
+- 0x17,0x92,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x81,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x39,0x53,0x04,
+- 0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x88,
+- 0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,
+- 0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,
+- 0xcc,0x81,0x00,0x0a,0x00,0xd3,0x26,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xcf,0x92,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcf,0x92,
+- 0xcc,0x88,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd2,0x0c,0x51,0x04,0x06,
+- 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
+- 0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,
+- 0x04,0x05,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0x12,0x04,0x07,0x00,0x08,0x00,0xe3,
+- 0x47,0x04,0xe2,0xbe,0x02,0xe1,0x07,0x01,0xd0,0x8b,0xcf,0x86,0xd5,0x6c,0xd4,0x53,
+- 0xd3,0x30,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0x95,0xcc,0x80,0x00,0x01,
+- 0xff,0xd0,0x95,0xcc,0x88,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x93,0xcc,0x81,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x86,0xcc,0x88,0x00,
+- 0x52,0x04,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x9a,0xcc,0x81,0x00,0x04,
+- 0xff,0xd0,0x98,0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x86,0x00,0x01,
+- 0x00,0x53,0x04,0x01,0x00,0x92,0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,
+- 0x98,0xcc,0x86,0x00,0x01,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0x92,0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x01,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x57,0x54,0x04,0x01,0x00,0xd3,0x30,0xd2,0x1f,0xd1,
+- 0x12,0x10,0x09,0x04,0xff,0xd0,0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,
+- 0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0xb3,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xd1,0x96,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,
+- 0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0x00,0x54,0x04,0x01,0x00,
+- 0x93,0x1a,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb4,
+- 0xcc,0x8f,0x00,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,0x00,0xd0,0x2e,0xcf,0x86,
+- 0x95,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xe6,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,0x0a,0xe6,0x92,0x08,0x11,0x04,
+- 0x04,0x00,0x06,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xbe,0xd4,0x4a,
+- 0xd3,0x2a,0xd2,0x1a,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x96,0xcc,0x86,
+- 0x00,0x10,0x09,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,
+- 0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,
+- 0x06,0x00,0x10,0x04,0x06,0x00,0x09,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xd0,0x90,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,
+- 0x01,0xff,0xd0,0x90,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x09,0x01,0xff,0xd0,0x95,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,
+- 0x86,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x98,0xcc,0x88,
+- 0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x96,
+- 0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x97,
+- 0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,0x74,0xd3,0x3a,0xd2,0x16,
+- 0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x84,0x00,0x01,0xff,0xd0,
+- 0xb8,0xcc,0x84,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x88,0x00,0x01,
+- 0xff,0xd0,0xb8,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x9e,0xcc,0x88,0x00,0x01,
+- 0xff,0xd0,0xbe,0xcc,0x88,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xd3,0xa8,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,
+- 0x04,0xff,0xd0,0xad,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,
+- 0x01,0xff,0xd0,0xa3,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x3a,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x88,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x8b,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x8b,0x00,0x91,0x12,0x10,0x09,0x01,0xff,0xd0,0xa7,0xcc,0x88,0x00,0x01,
+- 0xff,0xd1,0x87,0xcc,0x88,0x00,0x08,0x00,0x92,0x16,0x91,0x12,0x10,0x09,0x01,0xff,
+- 0xd0,0xab,0xcc,0x88,0x00,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x09,0x00,0x09,0x00,
+- 0xd1,0x74,0xd0,0x36,0xcf,0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,
+- 0x09,0x00,0x0a,0x00,0x0a,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,
+- 0x0b,0x00,0x0c,0x00,0x10,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0x00,
+- 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xd0,0xba,0xcf,0x86,0xd5,0x4c,0xd4,0x24,0x53,0x04,0x01,0x00,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0xd1,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x0d,0x00,0xd3,0x18,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,0xdc,0x02,0xe6,0x51,0x04,0x02,0xe6,
+- 0x10,0x04,0x02,0xdc,0x02,0xe6,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xde,
+- 0x02,0xdc,0x02,0xe6,0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,
+- 0x08,0xdc,0x02,0xdc,0x02,0xdc,0xd2,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,
+- 0x02,0xe6,0xd1,0x08,0x10,0x04,0x02,0xe6,0x02,0xde,0x10,0x04,0x02,0xe4,0x02,0xe6,
+- 0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x0a,0x01,0x0b,0x10,0x04,0x01,0x0c,
+- 0x01,0x0d,0xd1,0x08,0x10,0x04,0x01,0x0e,0x01,0x0f,0x10,0x04,0x01,0x10,0x01,0x11,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x12,0x01,0x13,0x10,0x04,0x09,0x13,0x01,0x14,
+- 0xd1,0x08,0x10,0x04,0x01,0x15,0x01,0x16,0x10,0x04,0x01,0x00,0x01,0x17,0xcf,0x86,
+- 0xd5,0x28,0x94,0x24,0x93,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x18,
+- 0x10,0x04,0x01,0x19,0x01,0x00,0xd1,0x08,0x10,0x04,0x02,0xe6,0x08,0xdc,0x10,0x04,
+- 0x08,0x00,0x08,0x12,0x00,0x00,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x14,0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xe2,0xfa,0x01,0xe1,0x2a,0x01,0xd0,0xa7,0xcf,0x86,
+- 0xd5,0x54,0xd4,0x28,0xd3,0x10,0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,
+- 0x10,0x00,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x08,0x00,
+- 0x91,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0xd3,0x0c,0x52,0x04,0x07,0xe6,
+- 0x11,0x04,0x07,0xe6,0x0a,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0a,0x1e,0x0a,0x1f,
+- 0x10,0x04,0x0a,0x20,0x01,0x00,0xd1,0x08,0x10,0x04,0x0f,0x00,0x00,0x00,0x10,0x04,
+- 0x08,0x00,0x01,0x00,0xd4,0x3d,0x93,0x39,0xd2,0x1a,0xd1,0x08,0x10,0x04,0x0c,0x00,
+- 0x01,0x00,0x10,0x09,0x01,0xff,0xd8,0xa7,0xd9,0x93,0x00,0x01,0xff,0xd8,0xa7,0xd9,
+- 0x94,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd9,0x88,0xd9,0x94,0x00,0x01,0xff,0xd8,
+- 0xa7,0xd9,0x95,0x00,0x10,0x09,0x01,0xff,0xd9,0x8a,0xd9,0x94,0x00,0x01,0x00,0x01,
+- 0x00,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0a,
+- 0x00,0x0a,0x00,0xcf,0x86,0xd5,0x5c,0xd4,0x20,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,
+- 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0x1b,0xd1,0x08,0x10,0x04,0x01,0x1c,0x01,
+- 0x1d,0x10,0x04,0x01,0x1e,0x01,0x1f,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,
+- 0x20,0x01,0x21,0x10,0x04,0x01,0x22,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,
+- 0xdc,0x10,0x04,0x07,0xdc,0x07,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0xe6,0x08,
+- 0xe6,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xdc,0x08,0xe6,0x10,0x04,0x08,0xe6,0x0c,
+- 0xdc,0xd4,0x10,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,
+- 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x23,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,
+- 0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x04,0x00,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0xcf,0x86,0xd5,0x5b,0xd4,0x2e,0xd3,0x1e,0x92,0x1a,0xd1,
+- 0x0d,0x10,0x09,0x01,0xff,0xdb,0x95,0xd9,0x94,0x00,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xdb,0x81,0xd9,0x94,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd3,0x19,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xdb,0x92,0xd9,0x94,0x00,0x11,0x04,0x01,0x00,0x01,0xe6,
+- 0x52,0x04,0x01,0xe6,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xe6,0xd4,0x38,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,
+- 0x01,0xdc,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0xdc,0x01,0xe6,
+- 0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0xdc,0x07,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,
+- 0x11,0x04,0x01,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,
+- 0xd1,0xc8,0xd0,0x76,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,
+- 0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x93,0x10,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x04,0x00,0x04,0x24,0x04,0x00,0x04,0x00,0x04,0x00,0xd4,0x14,
+- 0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,0x00,
+- 0x07,0x00,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x04,0xe6,
+- 0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x0c,
+- 0x51,0x04,0x04,0xdc,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd1,0x08,0x10,0x04,0x04,0xdc,
+- 0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xcf,0x86,0xd5,0x3c,0x94,0x38,0xd3,0x1c,
+- 0xd2,0x0c,0x51,0x04,0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,
+- 0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x07,0x00,0x07,0x00,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,
+- 0x11,0x04,0x08,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x04,0x00,
+- 0x54,0x04,0x04,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x14,0x53,0x04,
+- 0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xe6,0x09,0xe6,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0x00,
+- 0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x14,0xdc,0x14,0x00,0xe4,0x78,0x57,0xe3,0xda,0x3e,0xe2,0x89,0x3e,0xe1,
+- 0x91,0x2c,0xe0,0x21,0x10,0xcf,0x86,0xc5,0xe4,0x80,0x08,0xe3,0xcb,0x03,0xe2,0x61,
+- 0x01,0xd1,0x94,0xd0,0x5a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,
+- 0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,0x92,0x0c,0x51,0x04,0x0b,0xe6,0x10,
+- 0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,0xd4,0x24,0xd3,0x10,0x52,0x04,0x0b,0xe6,0x91,
+- 0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,
+- 0x00,0x0b,0xe6,0x0b,0xe6,0x11,0x04,0x0b,0xe6,0x00,0x00,0x53,0x04,0x0b,0x00,0x52,
+- 0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x86,0xd5,
+- 0x20,0x54,0x04,0x0c,0x00,0x53,0x04,0x0c,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0c,
+- 0x00,0x0c,0xdc,0x0c,0xdc,0x51,0x04,0x00,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x94,
+- 0x14,0x53,0x04,0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xd0,0x4a,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x20,0xd3,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x0d,0x00,0x0d,0x00,0x52,
+- 0x04,0x0d,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,
+- 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x11,0x00,0x91,0x08,0x10,0x04,0x11,
+- 0x00,0x00,0x00,0x12,0x00,0x52,0x04,0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x18,0x54,0x04,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x14,0xdc,0x12,0xe6,0x12,0xe6,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x51,
+- 0x04,0x12,0xe6,0x10,0x04,0x12,0x00,0x11,0xdc,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,
+- 0xdc,0x0d,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xe6,0x91,
+- 0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xdc,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,
+- 0x04,0x0d,0x1b,0x0d,0x1c,0x10,0x04,0x0d,0x1d,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,
+- 0x04,0x0d,0xdc,0x0d,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x10,
+- 0x04,0x0d,0xdc,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x10,0xe6,0xe1,
+- 0x3a,0x01,0xd0,0x77,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x1b,0x53,0x04,0x01,0x00,0x92,0x13,0x91,0x0f,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xa8,0xe0,0xa4,0xbc,0x00,0x01,0x00,0x01,
+- 0x00,0xd3,0x26,0xd2,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xb0,
+- 0xe0,0xa4,0xbc,0x00,0x01,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xb3,0xe0,
+- 0xa4,0xbc,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x8c,0xd4,0x18,0x53,
+- 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x10,
+- 0x04,0x0b,0x00,0x0c,0x00,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,
+- 0xe6,0x10,0x04,0x01,0xdc,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x0b,0x00,0x0c,
+- 0x00,0xd2,0x2c,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x95,0xe0,0xa4,0xbc,0x00,
+- 0x01,0xff,0xe0,0xa4,0x96,0xe0,0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x97,
+- 0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0x9c,0xe0,0xa4,0xbc,0x00,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xe0,0xa4,0xa1,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xa2,0xe0,
+- 0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xab,0xe0,0xa4,0xbc,0x00,0x01,0xff,
+- 0xe0,0xa4,0xaf,0xe0,0xa4,0xbc,0x00,0x54,0x04,0x01,0x00,0xd3,0x14,0x92,0x10,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0c,0x00,0x0c,0x00,0xd2,
+- 0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x0b,0x00,0x10,0x04,0x0b,0x00,0x09,0x00,0x91,
+- 0x08,0x10,0x04,0x09,0x00,0x08,0x00,0x09,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,
+- 0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,
+- 0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x07,0x00,0x01,0x00,0xcf,
+- 0x86,0xd5,0x7b,0xd4,0x42,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,
+- 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,
+- 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa6,0xbe,0x00,
+- 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa7,0x97,0x00,0x01,0x09,0x10,
+- 0x04,0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,
+- 0xa6,0xa1,0xe0,0xa6,0xbc,0x00,0x01,0xff,0xe0,0xa6,0xa2,0xe0,0xa6,0xbc,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0xff,0xe0,0xa6,0xaf,0xe0,0xa6,0xbc,0x00,0xd4,0x10,0x93,0x0c,
+- 0x52,0x04,0x01,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x13,0x00,
+- 0x10,0x04,0x14,0xe6,0x00,0x00,0xe2,0x48,0x02,0xe1,0x4f,0x01,0xd0,0xa4,0xcf,0x86,
+- 0xd5,0x4c,0xd4,0x34,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x07,0x00,
+- 0x10,0x04,0x01,0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
+- 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x2e,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe0,0xa8,0xb2,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0xb8,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd2,0x08,
+- 0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x00,0x00,0x01,0x00,
+- 0xcf,0x86,0xd5,0x80,0xd4,0x34,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x10,
+- 0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0a,0x00,0x00,0x00,0x00,0x00,0xd2,0x25,0xd1,0x0f,0x10,0x04,0x00,0x00,
+- 0x01,0xff,0xe0,0xa8,0x96,0xe0,0xa8,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0x97,
+- 0xe0,0xa8,0xbc,0x00,0x01,0xff,0xe0,0xa8,0x9c,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0xab,0xe0,0xa8,0xbc,0x00,
+- 0x00,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x93,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,
+- 0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0xd0,0x82,0xcf,0x86,0xd5,0x40,0xd4,0x2c,
+- 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,
+- 0x07,0x00,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
+- 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,
+- 0x91,0x08,0x10,0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x3c,0xd4,0x28,
+- 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x01,0x09,0x00,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x07,0x00,0x00,0x00,0x00,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x11,0x00,0x13,0x00,0x13,0x00,0xe1,0x24,
+- 0x01,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,
+- 0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x07,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,
+- 0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x45,0xd3,0x14,0x52,
+- 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x96,0x00,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xad,0x87,0xe0,0xac,0xbe,0x00,0x91,
+- 0x0f,0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x97,0x00,0x01,0x09,0x00,0x00,
+- 0xd3,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xac,0xa1,0xe0,0xac,0xbc,0x00,0x01,0xff,0xe0,
+- 0xac,0xa2,0xe0,0xac,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd4,0x14,0x93,0x10,
+- 0xd2,0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x0c,0x00,0x0c,0x00,
+- 0x00,0x00,0xd0,0xb1,0xcf,0x86,0xd5,0x63,0xd4,0x28,0xd3,0x14,0xd2,0x08,0x11,0x04,
+- 0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0xd3,0x1f,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x0f,
+- 0x10,0x0b,0x01,0xff,0xe0,0xae,0x92,0xe0,0xaf,0x97,0x00,0x01,0x00,0x00,0x00,0xd2,
+- 0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,
+- 0x04,0x00,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x08,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x61,0xd4,0x45,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,0xae,
+- 0xbe,0x00,0x01,0xff,0xe0,0xaf,0x87,0xe0,0xae,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe0,0xaf,0x86,0xe0,0xaf,0x97,0x00,0x01,0x09,0x00,0x00,0x93,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,
+- 0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xe3,0x1c,0x04,0xe2,0x1a,0x02,0xd1,0xf3,
+- 0xd0,0x76,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,
+- 0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x10,0x00,
+- 0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x53,0xd4,0x2f,0xd3,0x10,0x52,0x04,
+- 0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd2,0x13,0x91,0x0f,
+- 0x10,0x0b,0x01,0xff,0xe0,0xb1,0x86,0xe0,0xb1,0x96,0x00,0x00,0x00,0x01,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x54,0x10,0x04,0x01,0x5b,0x00,0x00,0x92,0x0c,0x51,
+- 0x04,0x0a,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,
+- 0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0a,
+- 0x00,0xd0,0x76,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x10,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,
+- 0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,
+- 0x04,0x07,0x07,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x82,0xd4,0x5e,0xd3,0x2a,0xd2,
+- 0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb2,0xbf,0xe0,0xb3,0x95,0x00,0x01,0x00,
+- 0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xe0,0xb3,0x86,0xe0,0xb3,0x95,0x00,0xd2,0x28,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,
+- 0xb3,0x86,0xe0,0xb3,0x96,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,
+- 0xb3,0x82,0x00,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x82,0xe0,0xb3,0x95,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x00,
+- 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x09,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,
+- 0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xe1,0x06,0x01,0xd0,0x6e,0xcf,0x86,0xd5,0x3c,0xd4,0x28,
+- 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x10,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x0c,0x00,0x13,0x09,0x91,0x08,0x10,0x04,
+- 0x13,0x09,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x65,0xd4,0x45,0xd3,0x10,0x52,0x04,
+- 0x01,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb4,0xbe,
+- 0x00,0x01,0xff,0xe0,0xb5,0x87,0xe0,0xb4,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
+- 0xe0,0xb5,0x86,0xe0,0xb5,0x97,0x00,0x01,0x09,0x10,0x04,0x0c,0x00,0x12,0x00,0xd3,
+- 0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x01,0x00,0x52,
+- 0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x11,0x00,0xd4,0x14,0x93,
+- 0x10,0xd2,0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,
+- 0x00,0xd3,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x12,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x12,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x5a,0xcf,0x86,0xd5,
+- 0x34,0xd4,0x18,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x04,
+- 0x00,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,
+- 0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x04,0x00,0x00,0x00,0xcf,0x86,0xd5,0x77,0xd4,0x28,0xd3,0x10,0x52,0x04,0x04,
+- 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,
+- 0x00,0x10,0x04,0x04,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x04,
+- 0x00,0xd3,0x14,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0xd2,0x13,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe0,
+- 0xb7,0x99,0xe0,0xb7,0x8a,0x00,0x04,0x00,0xd1,0x19,0x10,0x0b,0x04,0xff,0xe0,0xb7,
+- 0x99,0xe0,0xb7,0x8f,0x00,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0xe0,0xb7,0x8a,
+- 0x00,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x9f,0x00,0x04,0x00,0xd4,0x10,
+- 0x93,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x14,
+- 0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xe2,0x31,0x01,0xd1,0x58,0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,
+- 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,
+- 0x04,0x01,0x67,0x10,0x04,0x01,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0xcf,0x86,0x95,0x18,0xd4,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,
+- 0x6b,0x01,0x00,0x53,0x04,0x01,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd0,
+- 0x9e,0xcf,0x86,0xd5,0x54,0xd4,0x3c,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x04,0x15,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x15,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x15,
+- 0x00,0xd3,0x08,0x12,0x04,0x15,0x00,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x15,0x00,0x01,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x15,0x00,0x01,0x00,0x91,0x08,0x10,
+- 0x04,0x15,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
+- 0x76,0x10,0x04,0x15,0x09,0x01,0x00,0x11,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0x95,
+- 0x34,0xd4,0x20,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x01,0x7a,0x11,0x04,0x01,0x00,0x00,
+- 0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x01,
+- 0x00,0x0d,0x00,0x00,0x00,0xe1,0x2b,0x01,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,
+- 0x02,0x00,0x53,0x04,0x02,0x00,0x92,0x08,0x11,0x04,0x02,0xdc,0x02,0x00,0x02,0x00,
+- 0x54,0x04,0x02,0x00,0xd3,0x14,0x52,0x04,0x02,0x00,0xd1,0x08,0x10,0x04,0x02,0x00,
+- 0x02,0xdc,0x10,0x04,0x02,0x00,0x02,0xdc,0x92,0x0c,0x91,0x08,0x10,0x04,0x02,0x00,
+- 0x02,0xd8,0x02,0x00,0x02,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x36,0xd3,0x17,0x92,0x13,
+- 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x82,0xe0,0xbe,0xb7,
+- 0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,
+- 0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x8c,0xe0,0xbe,0xb7,0x00,0x02,0x00,
+- 0xd3,0x26,0xd2,0x13,0x51,0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x91,0xe0,
+- 0xbe,0xb7,0x00,0x02,0x00,0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,
+- 0xbd,0x96,0xe0,0xbe,0xb7,0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,
+- 0xe0,0xbd,0x9b,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x02,0x00,0xd4,0x27,0x53,0x04,0x02,
+- 0x00,0xd2,0x17,0xd1,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x80,0xe0,0xbe,
+- 0xb5,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,
+- 0x00,0x00,0xd3,0x35,0xd2,0x17,0xd1,0x08,0x10,0x04,0x00,0x00,0x02,0x81,0x10,0x04,
+- 0x02,0x82,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbd,0xb2,0x00,0xd1,0x0f,0x10,0x04,0x02,
+- 0x84,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbd,0xb4,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,
+- 0xb2,0xe0,0xbe,0x80,0x00,0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,
+- 0xbe,0xb3,0xe0,0xbe,0x80,0x00,0x02,0x00,0x02,0x82,0x11,0x04,0x02,0x82,0x02,0x00,
+- 0xd0,0xd3,0xcf,0x86,0xd5,0x65,0xd4,0x27,0xd3,0x1f,0xd2,0x13,0x91,0x0f,0x10,0x04,
+- 0x02,0x82,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbe,0x80,0x00,0x02,0xe6,0x91,0x08,0x10,
+- 0x04,0x02,0x09,0x02,0x00,0x02,0xe6,0x12,0x04,0x02,0x00,0x0c,0x00,0xd3,0x1f,0xd2,
+- 0x13,0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x92,0xe0,0xbe,
+- 0xb7,0x00,0x51,0x04,0x02,0x00,0x10,0x04,0x04,0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,
+- 0xe0,0xbe,0x9c,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd4,0x3d,0xd3,0x26,0xd2,0x13,0x51,
+- 0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xa1,0xe0,0xbe,0xb7,0x00,0x02,0x00,
+- 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0xa6,0xe0,0xbe,0xb7,
+- 0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xab,0xe0,0xbe,
+- 0xb7,0x00,0x02,0x00,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,
+- 0x02,0x00,0x02,0x00,0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x04,0x04,0x00,0x02,0xff,
+- 0xe0,0xbe,0x90,0xe0,0xbe,0xb5,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0x04,0x00,0xcf,0x86,0x95,0x4c,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0xdc,0x04,0x00,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0x10,0x04,0x0a,0x00,0x04,0x00,0xd3,0x14,0xd2,0x08,0x11,
+- 0x04,0x08,0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,
+- 0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0xcf,0x86,0xe5,0xcc,0x04,0xe4,0x63,0x03,0xe3,0x65,0x01,0xe2,0x04,
+- 0x01,0xd1,0x7f,0xd0,0x65,0xcf,0x86,0x55,0x04,0x04,0x00,0xd4,0x33,0xd3,0x1f,0xd2,
+- 0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x0a,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
+- 0x0b,0x04,0xff,0xe1,0x80,0xa5,0xe1,0x80,0xae,0x00,0x04,0x00,0x92,0x10,0xd1,0x08,
+- 0x10,0x04,0x0a,0x00,0x04,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x04,0x00,0xd3,0x18,
+- 0xd2,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x51,0x04,0x0a,0x00,
+- 0x10,0x04,0x04,0x00,0x04,0x07,0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0x09,
+- 0x10,0x04,0x0a,0x09,0x0a,0x00,0x0a,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,0x04,0x00,
+- 0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,
+- 0xd0,0x2e,0xcf,0x86,0x95,0x28,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,
+- 0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,
+- 0x11,0x04,0x0a,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,0x00,
+- 0x00,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x06,0x00,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0d,0x00,
+- 0x0d,0x00,0xd1,0x28,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x1c,0x54,0x04,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,
+- 0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x0b,0x00,0x0b,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x0b,0x00,
+- 0xe2,0x21,0x01,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x52,
+- 0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x04,
+- 0x00,0x04,0x00,0xcf,0x86,0x95,0x48,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,
+- 0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x04,
+- 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd0,
+- 0x62,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,0x04,
+- 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,
+- 0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,
+- 0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,
+- 0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x93,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x94,0x14,0x53,0x04,0x04,
+- 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,
+- 0x00,0xd1,0x9c,0xd0,0x3e,0xcf,0x86,0x95,0x38,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,0x14,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,
+- 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,
+- 0x00,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0xd2,0x0c,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x0c,
+- 0xe6,0x10,0x04,0x0c,0xe6,0x08,0xe6,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x08,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,
+- 0x86,0x95,0x14,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,
+- 0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,
+- 0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x11,0x00,0x00,
+- 0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0xd3,0x30,0xd2,0x2a,0xd1,
+- 0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x0b,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,
+- 0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd2,0x6c,0xd1,0x24,0xd0,
+- 0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,
+- 0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0b,0x00,0x0b,
+- 0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,
+- 0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x04,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x46,0xcf,0x86,0xd5,0x28,0xd4,
+- 0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x00,
+- 0x00,0x06,0x00,0x93,0x10,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x09,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x06,0x00,0x93,0x14,0x52,0x04,0x06,0x00,0xd1,
+- 0x08,0x10,0x04,0x06,0x09,0x06,0x00,0x10,0x04,0x06,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x06,0x00,0x00,0x00,0x00,
+- 0x00,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,
+- 0x00,0x00,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x00,
+- 0x00,0x06,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0xd5,
+- 0x24,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,
+- 0x09,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,
+- 0xe6,0x00,0x00,0xd4,0x10,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x00,
+- 0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x08,0x11,0x04,0x07,0x00,0x00,0x00,0x00,
+- 0x00,0xe4,0xac,0x03,0xe3,0x4d,0x01,0xd2,0x84,0xd1,0x48,0xd0,0x2a,0xcf,0x86,0x95,
+- 0x24,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x00,
+- 0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x53,
+- 0x04,0x04,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x04,0x00,0x94,0x18,0x53,0x04,0x04,0x00,0x92,
+- 0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0xe4,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,
+- 0x00,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,0x52,
+- 0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x42,0xcf,
+- 0x86,0xd5,0x1c,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0xd1,
+- 0x08,0x10,0x04,0x07,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd4,0x0c,0x53,
+- 0x04,0x07,0x00,0x12,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x10,0xd1,
+- 0x08,0x10,0x04,0x07,0x00,0x07,0xde,0x10,0x04,0x07,0xe6,0x07,0xdc,0x00,0x00,0xcf,
+- 0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x00,
+- 0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,
+- 0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x07,0x00,0x91,
+- 0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,
+- 0x04,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x0b,
+- 0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x95,0x28,0xd4,0x10,0x53,0x04,0x08,0x00,0x92,
+- 0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,
+- 0x04,0x08,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x08,0x00,0x07,
+- 0x00,0xd2,0xe4,0xd1,0x80,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x54,0x04,0x08,0x00,0xd3,
+- 0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x08,0xe6,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x08,0xdc,0x08,0x00,0x08,0x00,0x11,0x04,0x00,0x00,0x08,
+- 0x00,0x0b,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,
+- 0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xd4,0x14,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x0b,
+- 0x00,0xd3,0x10,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,
+- 0xe6,0x52,0x04,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xe6,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x0b,0xdc,0xd0,0x5e,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0b,0x00,0x92,
+- 0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,0x11,
+- 0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd4,0x10,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,
+- 0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x10,0xe6,0x91,0x08,0x10,
+- 0x04,0x10,0xe6,0x10,0xdc,0x10,0xdc,0xd2,0x0c,0x51,0x04,0x10,0xdc,0x10,0x04,0x10,
+- 0xdc,0x10,0xe6,0xd1,0x08,0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xe1,0x1e,0x01,0xd0,0xaa,0xcf,0x86,0xd5,0x6e,0xd4,0x53,
+- 0xd3,0x17,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,
+- 0x85,0xe1,0xac,0xb5,0x00,0x09,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,
+- 0xac,0x87,0xe1,0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x89,0xe1,
+- 0xac,0xb5,0x00,0x09,0x00,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8b,0xe1,0xac,
+- 0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8d,0xe1,0xac,0xb5,0x00,0x09,
+- 0x00,0x93,0x17,0x92,0x13,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x91,
+- 0xe1,0xac,0xb5,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x54,0x04,0x09,0x00,0xd3,0x10,
+- 0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x07,0x09,0x00,0x09,0x00,0xd2,0x13,
+- 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xba,0xe1,0xac,0xb5,
+- 0x00,0x91,0x0f,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xbc,0xe1,0xac,0xb5,0x00,
+- 0x09,0x00,0xcf,0x86,0xd5,0x3d,0x94,0x39,0xd3,0x31,0xd2,0x25,0xd1,0x16,0x10,0x0b,
+- 0x09,0xff,0xe1,0xac,0xbe,0xe1,0xac,0xb5,0x00,0x09,0xff,0xe1,0xac,0xbf,0xe1,0xac,
+- 0xb5,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xad,0x82,0xe1,0xac,0xb5,0x00,0x91,
+- 0x08,0x10,0x04,0x09,0x09,0x09,0x00,0x09,0x00,0x12,0x04,0x09,0x00,0x00,0x00,0x09,
+- 0x00,0xd4,0x1c,0x53,0x04,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,
+- 0x00,0x09,0xe6,0x91,0x08,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0xe6,0xd3,0x08,0x12,
+- 0x04,0x09,0xe6,0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x00,
+- 0x00,0x00,0x00,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x18,0x53,0x04,0x0a,
+- 0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x09,0x0d,0x09,0x11,0x04,0x0d,
+- 0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x0d,0x00,0x0d,
+- 0x00,0xcf,0x86,0x55,0x04,0x0c,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x0c,0x00,0x51,
+- 0x04,0x0c,0x00,0x10,0x04,0x0c,0x07,0x0c,0x00,0x0c,0x00,0xd3,0x0c,0x92,0x08,0x11,
+- 0x04,0x0c,0x00,0x0c,0x09,0x00,0x00,0x12,0x04,0x00,0x00,0x0c,0x00,0xe3,0xb2,0x01,
+- 0xe2,0x09,0x01,0xd1,0x4c,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,0x0a,
+- 0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,
+- 0x07,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0xcf,
+- 0x86,0x95,0x1c,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,0x00,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,
+- 0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x54,0x04,0x14,0x00,0x53,
+- 0x04,0x14,0x00,0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x14,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x08,0x13,
+- 0x04,0x0d,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,0x0b,
+- 0xe6,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x01,0x0b,0xdc,0x0b,0xdc,0x92,0x08,0x11,
+- 0x04,0x0b,0xdc,0x0b,0xe6,0x0b,0xdc,0xd4,0x28,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x01,0x0b,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,
+- 0x01,0x0b,0x00,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xdc,0x0b,0x00,0xd3,
+- 0x1c,0xd2,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0d,0x00,0xd1,0x08,0x10,
+- 0x04,0x0d,0xe6,0x0d,0x00,0x10,0x04,0x0d,0x00,0x13,0x00,0x92,0x0c,0x51,0x04,0x10,
+- 0xe6,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x07,
+- 0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x94,0x0c,0x53,0x04,0x07,0x00,0x12,0x04,0x07,
+- 0x00,0x08,0x00,0x08,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0xd5,0x40,0xd4,
+- 0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0xe6,0x10,0x04,0x08,0xdc,0x08,0xe6,0x09,
+- 0xe6,0xd2,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x0a,0xe6,0xd1,0x08,0x10,
+- 0x04,0x0a,0xe6,0x0a,0xea,0x10,0x04,0x0a,0xd6,0x0a,0xdc,0x93,0x10,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x0a,0xca,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0xd4,0x14,0x93,
+- 0x10,0x52,0x04,0x0a,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xe6,0x10,
+- 0xe6,0xd3,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x13,0xe8,0x13,
+- 0xe4,0xd2,0x10,0xd1,0x08,0x10,0x04,0x13,0xe4,0x13,0xdc,0x10,0x04,0x00,0x00,0x12,
+- 0xe6,0xd1,0x08,0x10,0x04,0x0c,0xe9,0x0b,0xdc,0x10,0x04,0x09,0xe6,0x09,0xdc,0xe2,
+- 0x80,0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa5,0x00,0x01,0xff,0x61,
+- 0xcc,0xa5,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x42,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,
+- 0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x43,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,
+- 0xcc,0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x87,0x00,0x01,0xff,0x64,
+- 0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa3,0x00,0x01,0xff,0x64,
+- 0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,
+- 0x00,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa7,0x00,0x01,
+- 0xff,0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xad,0x00,0x01,0xff,0x64,
+- 0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x80,0x00,0x01,
+- 0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x81,
+- 0x00,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x45,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x45,
+- 0xcc,0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,
+- 0xcc,0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,
+- 0xff,0x46,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0x48,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,
+- 0x08,0x01,0xff,0x48,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,
+- 0x08,0x01,0xff,0x48,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x49,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,
+- 0xff,0x49,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x81,0x00,0x01,0xff,0x6b,
+- 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,
+- 0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,
+- 0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,
+- 0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xb1,0x00,0x01,0xff,0x6c,
+- 0xcc,0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4c,0xcc,0xad,0x00,0x01,0xff,0x6c,
+- 0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x4d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,
+- 0x00,0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x4d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,
+- 0x4d,0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x4e,0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x4e,0xcc,
+- 0xa3,0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x4e,0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x4e,0xcc,
+- 0xad,0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,
+- 0x83,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,
+- 0x4f,0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,
+- 0x6f,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x81,0x00,
+- 0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x50,0xcc,
+- 0x81,0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x50,0xcc,0x87,0x00,
+- 0x01,0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0x87,0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa3,0x00,
+- 0x01,0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x52,0xcc,0xa3,0xcc,
+- 0x84,0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0xb1,0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x53,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,
+- 0x01,0xff,0x53,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x53,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,
+- 0x10,0x0a,0x01,0xff,0x53,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,
+- 0x87,0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x53,0xcc,0xa3,0xcc,0x87,0x00,
+- 0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0x87,0x00,
+- 0x01,0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0xa3,0x00,
+- 0x01,0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xb1,0x00,0x01,0xff,
+- 0x74,0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,
+- 0xad,0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa4,0x00,
+- 0x01,0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0xb0,0x00,
+- 0x01,0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xad,0x00,0x01,0xff,
+- 0x75,0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x83,0xcc,
+- 0x81,0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,
+- 0x84,0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x56,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
+- 0x56,0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x10,0x02,0xcf,0x86,0xd5,
+- 0xe1,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x80,
+- 0x00,0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x81,0x00,0x01,
+- 0xff,0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x88,0x00,0x01,
+- 0xff,0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x87,0x00,0x01,0xff,0x77,
+- 0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0xa3,0x00,0x01,
+- 0xff,0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x58,0xcc,0x87,0x00,0x01,0xff,0x78,
+- 0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x58,0xcc,0x88,0x00,0x01,0xff,0x78,
+- 0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0x82,0x00,0x01,
+- 0xff,0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x5a,0xcc,0xa3,0x00,0x01,0xff,0x7a,
+- 0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0xb1,0x00,0x01,0xff,0x7a,
+- 0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x68,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,0x88,
+- 0x00,0x92,0x1d,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,0x79,
+- 0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x02,0xff,0xc5,0xbf,0xcc,0x87,0x00,0x0a,0x00,
+- 0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa3,0x00,
+- 0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x89,0x00,0x01,0xff,
+- 0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x81,0x00,
+- 0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,
++ 0xc6,0xe5,0xf9,0x14,0xe4,0x6f,0x0d,0xe3,0x39,0x08,0xe2,0x22,0x01,0xc1,0xd0,0x24,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x07,0x63,0xd8,0x43,0x01,0x00,0x93,0x13,0x52,
++ 0x04,0x01,0x00,0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xce,0xbc,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xe5,0xb3,0x44,0xd4,0x7f,0xd3,0x3f,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x07,0x01,0xff,0xc3,
++ 0xa6,0x00,0x01,0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x65,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0x82,0x00,0x01,0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,
++ 0x80,0x00,0x01,0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,
++ 0x01,0xff,0x69,0xcc,0x88,0x00,0xd3,0x3b,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,
++ 0xc3,0xb0,0x00,0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,
++ 0x00,0x01,0xff,0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,
++ 0x00,0x01,0xff,0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,
++ 0x00,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb8,0x00,0x01,0xff,0x75,0xcc,
++ 0x80,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,
++ 0x10,0x07,0x01,0xff,0xc3,0xbe,0x00,0x01,0xff,0x73,0x73,0x00,0xe1,0xd4,0x03,0xe0,
++ 0xeb,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x61,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,
++ 0x61,0xcc,0x86,0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x61,0xcc,0xa8,0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,
++ 0x81,0x00,0x01,0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x63,0xcc,0x82,0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,
++ 0x87,0x00,0x01,0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,
++ 0x8c,0x00,0x01,0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x8c,0x00,
++ 0x01,0xff,0x64,0xcc,0x8c,0x00,0xd3,0x3b,0xd2,0x1b,0xd1,0x0b,0x10,0x07,0x01,0xff,
++ 0xc4,0x91,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x84,0x00,0x01,0xff,0x65,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x86,0x00,0x01,0xff,0x65,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa8,0x00,0x01,0xff,0x65,
++ 0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,
++ 0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,
++ 0x7b,0xd3,0x3b,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x87,0x00,0x01,
++ 0xff,0x67,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0xa7,0x00,0x01,0xff,0x67,
++ 0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0xff,0x68,
++ 0xcc,0x82,0x00,0x10,0x07,0x01,0xff,0xc4,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,
++ 0x69,0xcc,0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x37,0xd2,0x17,0xd1,0x0c,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0x87,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc4,0xb3,
++ 0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6a,0xcc,0x82,0x00,0x01,0xff,0x6a,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x6b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,
++ 0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6c,0xcc,0x81,0x00,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,0x6c,0xcc,0xa7,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x6c,0xcc,0x8c,0x00,0x01,0xff,0xc5,0x80,0x00,0xcf,0x86,0xd5,0xed,0xd4,0x72,
++ 0xd3,0x37,0xd2,0x17,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc5,0x82,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0x6e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0x81,0x00,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,
++ 0x00,0x01,0xff,0x6e,0xcc,0x8c,0x00,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0x8c,0x00,0x01,0xff,0xca,0xbc,0x6e,0x00,0x10,0x07,0x01,0xff,0xc5,0x8b,0x00,
++ 0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,
++ 0x84,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,
++ 0xd3,0x3b,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0xff,
++ 0x6f,0xcc,0x8b,0x00,0x10,0x07,0x01,0xff,0xc5,0x93,0x00,0x01,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x72,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0x72,0xcc,0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x72,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x73,0xcc,0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x73,0xcc,0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x73,
++ 0xcc,0xa7,0x00,0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x7b,0xd3,0x3b,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x73,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0x74,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x74,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x10,0x07,0x01,
++ 0xff,0xc5,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,
++ 0x83,0x00,0x01,0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x84,0x00,
++ 0x01,0xff,0x75,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x86,0x00,
++ 0x01,0xff,0x75,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x8a,0x00,0x01,0xff,
++ 0x75,0xcc,0x8a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,
++ 0x8b,0x00,0x01,0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa8,0x00,
++ 0x01,0xff,0x75,0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x82,0x00,
++ 0x01,0xff,0x77,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x82,0x00,0x01,0xff,
++ 0x79,0xcc,0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,0x88,0x00,
++ 0x01,0xff,0x7a,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,
++ 0x7a,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,
++ 0x7a,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0xff,0x73,0x00,
++ 0xe0,0x65,0x01,0xcf,0x86,0xd5,0xb4,0xd4,0x5a,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xc9,0x93,0x00,0x10,0x07,0x01,0xff,0xc6,0x83,0x00,0x01,
++ 0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
++ 0xc9,0x94,0x00,0x01,0xff,0xc6,0x88,0x00,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xc9,0x96,0x00,0x10,0x07,0x01,0xff,0xc9,0x97,0x00,0x01,0xff,0xc6,0x8c,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xc7,0x9d,0x00,0x01,0xff,0xc9,0x99,
++ 0x00,0xd3,0x32,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0x9b,0x00,0x01,0xff,
++ 0xc6,0x92,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xa0,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xc9,0xa3,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0xa9,0x00,0x01,0xff,
++ 0xc9,0xa8,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0x99,0x00,0x01,0x00,
++ 0x01,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0xaf,0x00,0x01,0xff,0xc9,0xb2,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xb5,0x00,0xd4,0x5d,0xd3,0x34,0xd2,0x1b,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x10,
++ 0x07,0x01,0xff,0xc6,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0xa5,
++ 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x80,0x00,0x01,0xff,0xc6,0xa8,0x00,0xd2,
++ 0x0f,0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x83,0x00,0x01,0x00,0xd1,0x0b,
++ 0x10,0x07,0x01,0xff,0xc6,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x88,0x00,
++ 0x01,0xff,0x75,0xcc,0x9b,0x00,0xd3,0x33,0xd2,0x1d,0xd1,0x0f,0x10,0x08,0x01,0xff,
++ 0x75,0xcc,0x9b,0x00,0x01,0xff,0xca,0x8a,0x00,0x10,0x07,0x01,0xff,0xca,0x8b,0x00,
++ 0x01,0xff,0xc6,0xb4,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc6,0xb6,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x92,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,
++ 0xff,0xc6,0xb9,0x00,0x01,0x00,0x01,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xbd,
++ 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x44,0xd3,0x16,0x52,0x04,0x01,
++ 0x00,0x51,0x07,0x01,0xff,0xc7,0x86,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc7,0x89,
++ 0x00,0xd2,0x12,0x91,0x0b,0x10,0x07,0x01,0xff,0xc7,0x89,0x00,0x01,0x00,0x01,0xff,
++ 0xc7,0x8c,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x61,0xcc,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,0x69,0xcc,0x8c,0x00,0xd3,0x46,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x6f,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x8c,0x00,0xd1,
++ 0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,
++ 0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x88,
++ 0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,
++ 0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,
++ 0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,
++ 0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x88,
++ 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x87,0xd3,0x41,0xd2,
++ 0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,
++ 0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x01,0xff,
++ 0xc3,0xa6,0xcc,0x84,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc7,0xa5,0x00,0x01,0x00,
++ 0x10,0x08,0x01,0xff,0x67,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,
++ 0x10,0x08,0x01,0xff,0x6f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,
++ 0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,
++ 0x8c,0x00,0xd3,0x38,0xd2,0x1a,0xd1,0x0f,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,
++ 0x01,0xff,0xc7,0xb3,0x00,0x10,0x07,0x01,0xff,0xc7,0xb3,0x00,0x01,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x67,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x10,0x07,
++ 0x04,0xff,0xc6,0x95,0x00,0x04,0xff,0xc6,0xbf,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,
++ 0x04,0xff,0x6e,0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,
++ 0x61,0xcc,0x8a,0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,
++ 0x10,0x09,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,
++ 0xe2,0x31,0x02,0xe1,0xc3,0x44,0xe0,0xc8,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x8f,0x00,0x01,0xff,0x61,
++ 0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,
++ 0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,
++ 0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,
++ 0x08,0x01,0xff,0x6f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,
++ 0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,
++ 0x08,0x01,0xff,0x75,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x04,0xff,0x73,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,
++ 0x08,0x04,0xff,0x74,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0xd1,0x0b,0x10,
++ 0x07,0x04,0xff,0xc8,0x9d,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x68,0xcc,0x8c,0x00,
++ 0x04,0xff,0x68,0xcc,0x8c,0x00,0xd4,0x79,0xd3,0x31,0xd2,0x16,0xd1,0x0b,0x10,0x07,
++ 0x06,0xff,0xc6,0x9e,0x00,0x07,0x00,0x10,0x07,0x04,0xff,0xc8,0xa3,0x00,0x04,0x00,
++ 0xd1,0x0b,0x10,0x07,0x04,0xff,0xc8,0xa5,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x61,
++ 0xcc,0x87,0x00,0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,
++ 0xff,0x65,0xcc,0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x6f,
++ 0xcc,0x88,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,
++ 0x0a,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,
++ 0x00,0x10,0x08,0x04,0xff,0x6f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0xd3,
++ 0x27,0xe2,0x21,0x43,0xd1,0x14,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,
++ 0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x79,0xcc,0x84,0x00,
++ 0x04,0xff,0x79,0xcc,0x84,0x00,0xd2,0x13,0x51,0x04,0x08,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0xa5,0x00,0x08,0xff,0xc8,0xbc,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,
++ 0xff,0xc6,0x9a,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa6,0x00,0x08,0x00,0xcf,0x86,
++ 0x95,0x5f,0x94,0x5b,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,
++ 0xc9,0x82,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xc6,0x80,0x00,0xd1,0x0e,0x10,0x07,
++ 0x09,0xff,0xca,0x89,0x00,0x09,0xff,0xca,0x8c,0x00,0x10,0x07,0x09,0xff,0xc9,0x87,
++ 0x00,0x09,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x89,0x00,0x09,0x00,
++ 0x10,0x07,0x09,0xff,0xc9,0x8b,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,
++ 0x8d,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xc9,0x8f,0x00,0x09,0x00,0x01,0x00,0x01,
++ 0x00,0xd1,0x8b,0xd0,0x0c,0xcf,0x86,0xe5,0x10,0x43,0x64,0xef,0x42,0x01,0xe6,0xcf,
++ 0x86,0xd5,0x2a,0xe4,0x99,0x43,0xe3,0x7f,0x43,0xd2,0x11,0xe1,0x5e,0x43,0x10,0x07,
++ 0x01,0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0xe1,0x65,0x43,0x10,0x09,0x01,
++ 0xff,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0x00,0xd4,0x0f,0x93,0x0b,0x92,
++ 0x07,0x61,0xab,0x43,0x01,0xea,0x06,0xe6,0x06,0xe6,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
++ 0x10,0x07,0x0a,0xff,0xcd,0xb1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb3,0x00,
++ 0x0a,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x10,0x07,0x0a,
++ 0xff,0xcd,0xb7,0x00,0x0a,0x00,0xd2,0x07,0x61,0x97,0x43,0x00,0x00,0x51,0x04,0x09,
++ 0x00,0x10,0x06,0x01,0xff,0x3b,0x00,0x10,0xff,0xcf,0xb3,0x00,0xe0,0x31,0x01,0xcf,
++ 0x86,0xd5,0xd3,0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,
++ 0x00,0x01,0xff,0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,
++ 0xcc,0x81,0x00,0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,
++ 0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x81,0x00,0xd3,0x3c,0xd2,0x20,0xd1,0x12,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0x00,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,
++ 0xff,0xce,0xb3,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb4,0x00,0x01,0xff,0xce,
++ 0xb5,0x00,0x10,0x07,0x01,0xff,0xce,0xb6,0x00,0x01,0xff,0xce,0xb7,0x00,0xd2,0x1c,
++ 0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb8,0x00,0x01,0xff,0xce,0xb9,0x00,0x10,0x07,
++ 0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xce,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,
++ 0xce,0xbc,0x00,0x01,0xff,0xce,0xbd,0x00,0x10,0x07,0x01,0xff,0xce,0xbe,0x00,0x01,
++ 0xff,0xce,0xbf,0x00,0xe4,0x85,0x43,0xd3,0x35,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,
++ 0xff,0xcf,0x80,0x00,0x01,0xff,0xcf,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,
++ 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcf,0x84,0x00,0x01,0xff,0xcf,0x85,0x00,
++ 0x10,0x07,0x01,0xff,0xcf,0x86,0x00,0x01,0xff,0xcf,0x87,0x00,0xe2,0x2b,0x43,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xcf,0x88,0x00,0x01,0xff,0xcf,0x89,0x00,0x10,0x09,0x01,
++ 0xff,0xce,0xb9,0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xcf,0x86,0xd5,
++ 0x94,0xd4,0x3c,0xd3,0x13,0x92,0x0f,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,
++ 0x83,0x00,0x01,0x00,0x01,0x00,0xd2,0x07,0x61,0x3a,0x43,0x01,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,
++ 0x09,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x0a,0xff,0xcf,0x97,0x00,0xd3,0x2c,0xd2,
++ 0x11,0xe1,0x46,0x43,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb8,0x00,
++ 0xd1,0x10,0x10,0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0xff,0xcf,0x86,0x00,
++ 0x10,0x07,0x01,0xff,0xcf,0x80,0x00,0x04,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,
++ 0xff,0xcf,0x99,0x00,0x06,0x00,0x10,0x07,0x01,0xff,0xcf,0x9b,0x00,0x04,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xcf,0x9d,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0x9f,
++ 0x00,0x04,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,
++ 0xa1,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,
++ 0x07,0x01,0xff,0xcf,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xa7,0x00,0x01,
++ 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa9,0x00,0x01,0x00,0x10,0x07,
++ 0x01,0xff,0xcf,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xad,0x00,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xaf,0x00,0x01,0x00,0xd3,0x2b,0xd2,0x12,0x91,
++ 0x0e,0x10,0x07,0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xcf,0x81,0x00,0x01,0x00,0xd1,
++ 0x0e,0x10,0x07,0x05,0xff,0xce,0xb8,0x00,0x05,0xff,0xce,0xb5,0x00,0x10,0x04,0x06,
++ 0x00,0x07,0xff,0xcf,0xb8,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x07,0x00,0x07,0xff,
++ 0xcf,0xb2,0x00,0x10,0x07,0x07,0xff,0xcf,0xbb,0x00,0x07,0x00,0xd1,0x0b,0x10,0x04,
++ 0x08,0x00,0x08,0xff,0xcd,0xbb,0x00,0x10,0x07,0x08,0xff,0xcd,0xbc,0x00,0x08,0xff,
++ 0xcd,0xbd,0x00,0xe3,0xed,0x46,0xe2,0x3d,0x05,0xe1,0x27,0x02,0xe0,0x66,0x01,0xcf,
++ 0x86,0xd5,0xf0,0xd4,0x7e,0xd3,0x40,0xd2,0x22,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,
++ 0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x07,0x01,0xff,0xd1,
++ 0x92,0x00,0x01,0xff,0xd0,0xb3,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,
++ 0x94,0x00,0x01,0xff,0xd1,0x95,0x00,0x10,0x07,0x01,0xff,0xd1,0x96,0x00,0x01,0xff,
++ 0xd1,0x96,0xcc,0x88,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x98,0x00,
++ 0x01,0xff,0xd1,0x99,0x00,0x10,0x07,0x01,0xff,0xd1,0x9a,0x00,0x01,0xff,0xd1,0x9b,
++ 0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,
++ 0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0xff,0xd1,0x9f,
++ 0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xb0,0x00,0x01,0xff,
++ 0xd0,0xb1,0x00,0x10,0x07,0x01,0xff,0xd0,0xb2,0x00,0x01,0xff,0xd0,0xb3,0x00,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xd0,0xb4,0x00,0x01,0xff,0xd0,0xb5,0x00,0x10,0x07,0x01,
++ 0xff,0xd0,0xb6,0x00,0x01,0xff,0xd0,0xb7,0x00,0xd2,0x1e,0xd1,0x10,0x10,0x07,0x01,
++ 0xff,0xd0,0xb8,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x10,0x07,0x01,0xff,0xd0,
++ 0xba,0x00,0x01,0xff,0xd0,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xbc,0x00,
++ 0x01,0xff,0xd0,0xbd,0x00,0x10,0x07,0x01,0xff,0xd0,0xbe,0x00,0x01,0xff,0xd0,0xbf,
++ 0x00,0xe4,0x25,0x42,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x80,
++ 0x00,0x01,0xff,0xd1,0x81,0x00,0x10,0x07,0x01,0xff,0xd1,0x82,0x00,0x01,0xff,0xd1,
++ 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x84,0x00,0x01,0xff,0xd1,0x85,0x00,
++ 0x10,0x07,0x01,0xff,0xd1,0x86,0x00,0x01,0xff,0xd1,0x87,0x00,0xd2,0x1c,0xd1,0x0e,
++ 0x10,0x07,0x01,0xff,0xd1,0x88,0x00,0x01,0xff,0xd1,0x89,0x00,0x10,0x07,0x01,0xff,
++ 0xd1,0x8a,0x00,0x01,0xff,0xd1,0x8b,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x8c,
++ 0x00,0x01,0xff,0xd1,0x8d,0x00,0x10,0x07,0x01,0xff,0xd1,0x8e,0x00,0x01,0xff,0xd1,
++ 0x8f,0x00,0xcf,0x86,0xd5,0x07,0x64,0xcf,0x41,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,
++ 0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
++ 0xd1,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa5,0x00,0x01,0x00,
++ 0x10,0x07,0x01,0xff,0xd1,0xa7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xd1,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xab,0x00,0x01,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd1,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xaf,
++ 0x00,0x01,0x00,0xd3,0x33,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb1,0x00,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xb3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xd1,0xb5,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,
++ 0xff,0xd1,0xb5,0xcc,0x8f,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb9,
++ 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xd1,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbf,0x00,0x01,0x00,
++ 0xe0,0x41,0x01,0xcf,0x86,0xd5,0x8e,0xd4,0x36,0xd3,0x11,0xe2,0x91,0x41,0xe1,0x88,
++ 0x41,0x10,0x07,0x01,0xff,0xd2,0x81,0x00,0x01,0x00,0xd2,0x0f,0x51,0x04,0x04,0x00,
++ 0x10,0x07,0x06,0xff,0xd2,0x8b,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,0xd2,
++ 0x8d,0x00,0x04,0x00,0x10,0x07,0x04,0xff,0xd2,0x8f,0x00,0x04,0x00,0xd3,0x2c,0xd2,
++ 0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x91,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
++ 0xd2,0x93,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x95,0x00,0x01,0x00,
++ 0x10,0x07,0x01,0xff,0xd2,0x97,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xd2,0x99,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9b,0x00,0x01,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd2,0x9d,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9f,
++ 0x00,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,
++ 0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,
++ 0x07,0x01,0xff,0xd2,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa7,0x00,0x01,
++ 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa9,0x00,0x01,0x00,0x10,0x07,
++ 0x01,0xff,0xd2,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xad,0x00,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xaf,0x00,0x01,0x00,0xd3,0x2c,0xd2,0x16,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd2,0xb1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xb3,
++ 0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb5,0x00,0x01,0x00,0x10,0x07,
++ 0x01,0xff,0xd2,0xb7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,
++ 0xb9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,
++ 0x07,0x01,0xff,0xd2,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbf,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0xdc,0xd4,0x5a,0xd3,0x36,0xd2,0x20,0xd1,0x10,0x10,0x07,0x01,
++ 0xff,0xd3,0x8f,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,
++ 0xb6,0xcc,0x86,0x00,0x01,0xff,0xd3,0x84,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x06,
++ 0xff,0xd3,0x86,0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x88,0x00,0xd2,0x16,0xd1,
++ 0x0b,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8a,0x00,0x10,0x04,0x06,0x00,0x01,0xff,
++ 0xd3,0x8c,0x00,0xe1,0x69,0x40,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8e,0x00,0xd3,
++ 0x41,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x01,0xff,
++ 0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x01,0xff,
++ 0xd0,0xb0,0xcc,0x88,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x95,0x00,0x01,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,
++ 0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x99,0x00,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xd3,0x99,0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,
++ 0x09,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,
++ 0x82,0xd3,0x41,0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa1,0x00,0x01,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,
++ 0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,
++ 0x88,0x00,0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa9,0x00,0x01,0x00,0x10,
++ 0x09,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,
++ 0x12,0x10,0x09,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,
++ 0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,
++ 0x00,0xd3,0x41,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,
++ 0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,
++ 0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x87,0xcc,
++ 0x88,0x00,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,0x10,0x07,0x08,0xff,0xd3,0xb7,0x00,
++ 0x08,0x00,0xd2,0x1d,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x01,
++ 0xff,0xd1,0x8b,0xcc,0x88,0x00,0x10,0x07,0x09,0xff,0xd3,0xbb,0x00,0x09,0x00,0xd1,
++ 0x0b,0x10,0x07,0x09,0xff,0xd3,0xbd,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd3,0xbf,
++ 0x00,0x09,0x00,0xe1,0x26,0x02,0xe0,0x78,0x01,0xcf,0x86,0xd5,0xb0,0xd4,0x58,0xd3,
++ 0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x81,0x00,0x06,0x00,0x10,0x07,
++ 0x06,0xff,0xd4,0x83,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x85,0x00,
++ 0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x87,0x00,0x06,0x00,0xd2,0x16,0xd1,0x0b,0x10,
++ 0x07,0x06,0xff,0xd4,0x89,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8b,0x00,0x06,
++ 0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x8d,0x00,0x06,0x00,0x10,0x07,0x06,0xff,
++ 0xd4,0x8f,0x00,0x06,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xd4,
++ 0x91,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd4,0x93,0x00,0x09,0x00,0xd1,0x0b,0x10,
++ 0x07,0x0a,0xff,0xd4,0x95,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x97,0x00,0x0a,
++ 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x99,0x00,0x0a,0x00,0x10,0x07,
++ 0x0a,0xff,0xd4,0x9b,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x9d,0x00,
++ 0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x9f,0x00,0x0a,0x00,0xd4,0x58,0xd3,0x2c,0xd2,
++ 0x16,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0xa1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,
++ 0xd4,0xa3,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xd4,0xa5,0x00,0x0b,0x00,
++ 0x10,0x07,0x0c,0xff,0xd4,0xa7,0x00,0x0c,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x10,
++ 0xff,0xd4,0xa9,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xab,0x00,0x10,0x00,0xd1,
++ 0x0b,0x10,0x07,0x10,0xff,0xd4,0xad,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xaf,
++ 0x00,0x10,0x00,0xd3,0x35,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,
++ 0xa1,0x00,0x10,0x07,0x01,0xff,0xd5,0xa2,0x00,0x01,0xff,0xd5,0xa3,0x00,0xd1,0x0e,
++ 0x10,0x07,0x01,0xff,0xd5,0xa4,0x00,0x01,0xff,0xd5,0xa5,0x00,0x10,0x07,0x01,0xff,
++ 0xd5,0xa6,0x00,0x01,0xff,0xd5,0xa7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,
++ 0xd5,0xa8,0x00,0x01,0xff,0xd5,0xa9,0x00,0x10,0x07,0x01,0xff,0xd5,0xaa,0x00,0x01,
++ 0xff,0xd5,0xab,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xac,0x00,0x01,0xff,0xd5,
++ 0xad,0x00,0x10,0x07,0x01,0xff,0xd5,0xae,0x00,0x01,0xff,0xd5,0xaf,0x00,0xcf,0x86,
++ 0xe5,0x08,0x3f,0xd4,0x70,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,
++ 0xb0,0x00,0x01,0xff,0xd5,0xb1,0x00,0x10,0x07,0x01,0xff,0xd5,0xb2,0x00,0x01,0xff,
++ 0xd5,0xb3,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb4,0x00,0x01,0xff,0xd5,0xb5,
++ 0x00,0x10,0x07,0x01,0xff,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb7,0x00,0xd2,0x1c,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xd5,0xb8,0x00,0x01,0xff,0xd5,0xb9,0x00,0x10,0x07,0x01,
++ 0xff,0xd5,0xba,0x00,0x01,0xff,0xd5,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,
++ 0xbc,0x00,0x01,0xff,0xd5,0xbd,0x00,0x10,0x07,0x01,0xff,0xd5,0xbe,0x00,0x01,0xff,
++ 0xd5,0xbf,0x00,0xe3,0x87,0x3e,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x80,
++ 0x00,0x01,0xff,0xd6,0x81,0x00,0x10,0x07,0x01,0xff,0xd6,0x82,0x00,0x01,0xff,0xd6,
++ 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x84,0x00,0x01,0xff,0xd6,0x85,0x00,
++ 0x10,0x07,0x01,0xff,0xd6,0x86,0x00,0x00,0x00,0xe0,0x2f,0x3f,0xcf,0x86,0xe5,0xc0,
++ 0x3e,0xe4,0x97,0x3e,0xe3,0x76,0x3e,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xd5,0xa5,0xd6,0x82,0x00,0xe4,0x3e,0x25,0xe3,0xc3,0x1a,
++ 0xe2,0x7b,0x81,0xe1,0xc0,0x13,0xd0,0x1e,0xcf,0x86,0xc5,0xe4,0x08,0x4b,0xe3,0x53,
++ 0x46,0xe2,0xe9,0x43,0xe1,0x1c,0x43,0xe0,0xe1,0x42,0xcf,0x86,0xe5,0xa6,0x42,0x64,
++ 0x89,0x42,0x0b,0x00,0xcf,0x86,0xe5,0xfa,0x01,0xe4,0x03,0x56,0xe3,0x76,0x01,0xe2,
++ 0x8e,0x53,0xd1,0x0c,0xe0,0xef,0x52,0xcf,0x86,0x65,0x8d,0x52,0x04,0x00,0xe0,0x0d,
++ 0x01,0xcf,0x86,0xd5,0x0a,0xe4,0x10,0x53,0x63,0xff,0x52,0x0a,0x00,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x80,0x00,0x01,0xff,0xe2,
++ 0xb4,0x81,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x82,0x00,0x01,0xff,0xe2,0xb4,0x83,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x84,0x00,0x01,0xff,0xe2,0xb4,0x85,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x86,0x00,0x01,0xff,0xe2,0xb4,0x87,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x88,0x00,0x01,0xff,0xe2,0xb4,0x89,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x8a,0x00,0x01,0xff,0xe2,0xb4,0x8b,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x8c,0x00,0x01,0xff,0xe2,0xb4,0x8d,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x8e,0x00,0x01,0xff,0xe2,0xb4,0x8f,0x00,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x90,0x00,0x01,0xff,0xe2,0xb4,0x91,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x92,0x00,0x01,0xff,0xe2,0xb4,0x93,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x94,0x00,0x01,0xff,0xe2,0xb4,0x95,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x96,0x00,0x01,0xff,0xe2,0xb4,0x97,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x98,0x00,0x01,0xff,0xe2,0xb4,0x99,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x9a,0x00,0x01,0xff,0xe2,0xb4,0x9b,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x9c,0x00,0x01,0xff,0xe2,0xb4,0x9d,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0xb4,0x9e,0x00,0x01,0xff,0xe2,0xb4,0x9f,0x00,0xcf,0x86,0xe5,0x42,0x52,
++ 0x94,0x50,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa0,0x00,
++ 0x01,0xff,0xe2,0xb4,0xa1,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa2,0x00,0x01,0xff,
++ 0xe2,0xb4,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa4,0x00,0x01,0xff,
++ 0xe2,0xb4,0xa5,0x00,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xa7,0x00,0x52,0x04,
++ 0x00,0x00,0x91,0x0c,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xad,0x00,0x00,0x00,
++ 0x01,0x00,0xd2,0x1b,0xe1,0xfc,0x52,0xe0,0xad,0x52,0xcf,0x86,0x95,0x0f,0x94,0x0b,
++ 0x93,0x07,0x62,0x92,0x52,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd1,0x13,0xe0,
++ 0xd3,0x53,0xcf,0x86,0x95,0x0a,0xe4,0xa8,0x53,0x63,0x97,0x53,0x04,0x00,0x04,0x00,
++ 0xd0,0x0d,0xcf,0x86,0x95,0x07,0x64,0x22,0x54,0x08,0x00,0x04,0x00,0xcf,0x86,0x55,
++ 0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x07,0x62,0x2f,0x54,0x04,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0xb0,0x00,0x11,0xff,0xe1,0x8f,0xb1,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0xb2,0x00,0x11,0xff,0xe1,0x8f,0xb3,0x00,0x91,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0xb4,0x00,0x11,0xff,0xe1,0x8f,0xb5,0x00,0x00,0x00,
++ 0xd4,0x1c,0xe3,0xe0,0x56,0xe2,0x17,0x56,0xe1,0xda,0x55,0xe0,0xbb,0x55,0xcf,0x86,
++ 0x95,0x0a,0xe4,0xa4,0x55,0x63,0x88,0x55,0x04,0x00,0x04,0x00,0xe3,0xd2,0x01,0xe2,
++ 0x2b,0x5a,0xd1,0x0c,0xe0,0x4c,0x59,0xcf,0x86,0x65,0x25,0x59,0x0a,0x00,0xe0,0x9c,
++ 0x59,0xcf,0x86,0xd5,0xc5,0xd4,0x45,0xd3,0x31,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x12,
++ 0xff,0xd0,0xb2,0x00,0x12,0xff,0xd0,0xb4,0x00,0x10,0x07,0x12,0xff,0xd0,0xbe,0x00,
++ 0x12,0xff,0xd1,0x81,0x00,0x51,0x07,0x12,0xff,0xd1,0x82,0x00,0x10,0x07,0x12,0xff,
++ 0xd1,0x8a,0x00,0x12,0xff,0xd1,0xa3,0x00,0x92,0x10,0x91,0x0c,0x10,0x08,0x12,0xff,
++ 0xea,0x99,0x8b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x14,0xff,0xe1,0x83,0x90,0x00,0x14,0xff,0xe1,0x83,0x91,0x00,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0x92,0x00,0x14,0xff,0xe1,0x83,0x93,0x00,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0x94,0x00,0x14,0xff,0xe1,0x83,0x95,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0x96,0x00,0x14,0xff,0xe1,0x83,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0x98,0x00,0x14,0xff,0xe1,0x83,0x99,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0x9a,0x00,0x14,0xff,0xe1,0x83,0x9b,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0x9c,0x00,0x14,0xff,0xe1,0x83,0x9d,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0x9e,0x00,0x14,0xff,0xe1,0x83,0x9f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x14,0xff,0xe1,0x83,0xa0,0x00,0x14,0xff,0xe1,0x83,0xa1,0x00,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xa2,0x00,0x14,0xff,0xe1,0x83,0xa3,0x00,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xa4,0x00,0x14,0xff,0xe1,0x83,0xa5,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xa6,0x00,0x14,0xff,0xe1,0x83,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xa8,0x00,0x14,0xff,0xe1,0x83,0xa9,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xaa,0x00,0x14,0xff,0xe1,0x83,0xab,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xac,0x00,0x14,0xff,0xe1,0x83,0xad,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0xae,0x00,0x14,0xff,0xe1,0x83,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xb0,0x00,0x14,0xff,0xe1,0x83,0xb1,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xb2,0x00,0x14,0xff,0xe1,0x83,0xb3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xb4,0x00,0x14,0xff,0xe1,0x83,0xb5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0xb6,0x00,0x14,0xff,0xe1,0x83,0xb7,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xb8,0x00,0x14,0xff,0xe1,0x83,0xb9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0xba,0x00,0x00,0x00,0xd1,0x0c,0x10,0x04,0x00,0x00,0x14,0xff,0xe1,0x83,0xbd,0x00,
++ 0x10,0x08,0x14,0xff,0xe1,0x83,0xbe,0x00,0x14,0xff,0xe1,0x83,0xbf,0x00,0xe2,0x9d,
++ 0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa5,0x00,0x01,0xff,0x61,0xcc,
++ 0xa5,0x00,0x10,0x08,0x01,0xff,0x62,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x62,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,0x00,
++ 0x10,0x08,0x01,0xff,0x62,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,
++ 0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x87,0x00,0x01,0xff,0x64,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa3,0x00,0x01,0xff,0x64,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,0x00,
++ 0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa7,0x00,0x01,0xff,
++ 0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xad,0x00,0x01,0xff,0x64,0xcc,
++ 0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,
++ 0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,
++ 0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x65,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,
++ 0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,
++ 0x66,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,0x00,
++ 0x10,0x08,0x01,0xff,0x68,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x68,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,0x08,
++ 0x01,0xff,0x68,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x68,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,0x08,
++ 0x01,0xff,0x68,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,0xff,
++ 0x69,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x81,0x00,0x01,0xff,0x6b,0xcc,
++ 0x81,0x00,0x10,0x08,0x01,0xff,0x6b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,0x00,
++ 0x10,0x08,0x01,0xff,0x6c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,0xcc,
++ 0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0xb1,0x00,0x01,0xff,0x6c,0xcc,
++ 0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xad,0x00,0x01,0xff,0x6c,0xcc,
++ 0xad,0x00,0x10,0x08,0x01,0xff,0x6d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,0x00,
++ 0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x6d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6d,
++ 0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa3,
++ 0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xad,
++ 0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,
++ 0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,
++ 0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,0xd2,
++ 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x6f,
++ 0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x70,0xcc,0x81,
++ 0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x70,0xcc,0x87,0x00,0x01,
++ 0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x87,
++ 0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xa3,0x00,0x01,
++ 0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,
++ 0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xb1,
++ 0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x73,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,0x01,
++ 0xff,0x73,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,
++ 0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x10,
++ 0x0a,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,
++ 0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x01,
++ 0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0x87,0x00,0x01,
++ 0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xa3,0x00,0x01,
++ 0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0xb1,0x00,0x01,0xff,0x74,
++ 0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xad,
++ 0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa4,0x00,0x01,
++ 0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xb0,0x00,0x01,
++ 0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xad,0x00,0x01,0xff,0x75,
++ 0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,
++ 0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x84,
++ 0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x76,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x76,
++ 0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x11,0x02,0xcf,0x86,0xd5,0xe2,
++ 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x80,0x00,
++ 0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x81,0x00,0x01,0xff,
++ 0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x88,0x00,0x01,0xff,
++ 0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x87,0x00,0x01,0xff,0x77,0xcc,
++ 0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0xa3,0x00,0x01,0xff,
++ 0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x78,0xcc,0x87,0x00,0x01,0xff,0x78,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x78,0xcc,0x88,0x00,0x01,0xff,0x78,0xcc,
++ 0x88,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,0x00,
++ 0xd3,0x33,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x82,0x00,0x01,0xff,
++ 0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0xa3,0x00,0x01,0xff,0x7a,0xcc,
++ 0xa3,0x00,0xe1,0x12,0x59,0x10,0x08,0x01,0xff,0x7a,0xcc,0xb1,0x00,0x01,0xff,0x7a,
++ 0xcc,0xb1,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,
++ 0xff,0x79,0xcc,0x8a,0x00,0x10,0x08,0x01,0xff,0x61,0xca,0xbe,0x00,0x02,0xff,0x73,
++ 0xcc,0x87,0x00,0x51,0x04,0x0a,0x00,0x10,0x07,0x0a,0xff,0x73,0x73,0x00,0x0a,0x00,
++ 0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa3,0x00,
++ 0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x89,0x00,0x01,0xff,
++ 0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,
++ 0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,
+ 0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x41,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,
+- 0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,
+- 0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
+- 0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x81,0x00,
++ 0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,
++ 0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,
++ 0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
++ 0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,
+ 0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x41,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,
+- 0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,
+- 0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,
+- 0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x86,0x00,
++ 0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,
++ 0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,
++ 0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,
++ 0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,
+ 0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x45,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,
++ 0x65,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,
+ 0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,
+- 0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,0x80,
+- 0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,
++ 0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,
++ 0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,
+ 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,
+- 0xff,0x45,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,
+- 0x0a,0x01,0xff,0x45,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x89,0x00,0x01,0xff,0x69,
+- 0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,
+- 0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x80,
+- 0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,
++ 0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,
++ 0x0a,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x89,0x00,0x01,0xff,0x69,
++ 0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,
++ 0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,
++ 0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,
+ 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
+- 0xff,0x4f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,
+- 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,
+- 0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,
+- 0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,
++ 0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,
++ 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,
++ 0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,
++ 0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,
+ 0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,
+- 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x83,0x00,0x01,
+- 0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0xa3,
+- 0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,
+- 0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x89,
+- 0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,
++ 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x01,
++ 0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,
++ 0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,
++ 0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x89,
++ 0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,
+ 0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,
+- 0xff,0x55,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,
+- 0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,
+- 0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,
++ 0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,
++ 0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,
++ 0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,
++ 0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,
+ 0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,
+- 0xff,0x59,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x59,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x59,
+- 0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0x92,0x14,0x91,0x10,0x10,0x08,0x01,
+- 0xff,0x59,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x0a,0x00,0x0a,0x00,0xe1,
+- 0xc0,0x04,0xe0,0x80,0x02,0xcf,0x86,0xe5,0x2d,0x01,0xd4,0xa8,0xd3,0x54,0xd2,0x28,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
+- 0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0x00,
+- 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x93,0x00,0x01,0xff,0xce,
+- 0x91,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x91,
+- 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,
+- 0x82,0x00,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x93,
+- 0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,
+- 0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,
+- 0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x93,
+- 0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x95,0xcc,0x93,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,
+- 0x01,0xff,0xce,0x95,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,
+- 0x81,0x00,0x00,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,
+- 0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,
+- 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0x97,0xcc,0x93,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,
+- 0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,
+- 0xcd,0x82,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x54,0xd2,0x28,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,
+- 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
+- 0xb9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,
+- 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x93,0x00,0x01,0xff,0xce,
+- 0x99,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0x99,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x99,
+- 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0x99,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcd,
+- 0x82,0x00,0xcf,0x86,0xe5,0x13,0x01,0xd4,0x84,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,
+- 0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0x9f,0xcc,0x93,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,
+- 0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0x9f,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x54,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,
+- 0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,
+- 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,
+- 0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,
+- 0x85,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xd2,
+- 0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,
+- 0xce,0xa5,0xcc,0x94,0xcd,0x82,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
+- 0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xce,0xa9,0xcc,0x93,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
+- 0x00,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,
+- 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,
+- 0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,
+- 0xa9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0x00,0xd3,
+- 0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb1,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,
+- 0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,
+- 0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x91,0x12,0x10,0x09,0x01,
+- 0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x00,0x00,0xe0,
+- 0xe1,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,
+- 0x16,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,
+- 0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,
+- 0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,
+- 0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,
+- 0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,
+- 0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,
+- 0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xcd,0x85,
+- 0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,
+- 0xb7,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,
+- 0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,
+- 0x97,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x80,
+- 0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,
+- 0xff,0xce,0x97,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,
+- 0xcd,0x82,0xcd,0x85,0x00,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,
+- 0xff,0xcf,0x89,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x85,
+- 0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
+- 0xcf,0x89,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,
+- 0x89,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,
+- 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
+- 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,
+- 0xff,0xce,0xa9,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
+- 0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x82,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,0x49,0xd2,
+- 0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,
+- 0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
+- 0xce,0xb1,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xcd,
+- 0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,
+- 0xb1,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,
+- 0xcc,0x86,0x00,0x01,0xff,0xce,0x91,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x91,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,
+- 0xce,0x91,0xcd,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xb9,0x00,0x01,0x00,
+- 0xcf,0x86,0xe5,0x16,0x01,0xd4,0x8f,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x80,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
+- 0xce,0xb7,0xcc,0x81,0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcd,
+- 0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0x95,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0x97,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,0xd1,
+- 0x13,0x10,0x09,0x01,0xff,0xce,0x97,0xcd,0x85,0x00,0x01,0xff,0xe1,0xbe,0xbf,0xcc,
+- 0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbe,0xbf,0xcc,0x81,0x00,0x01,0xff,0xe1,0xbe,
+- 0xbf,0xcd,0x82,0x00,0xd3,0x40,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,
+- 0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,
+- 0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x86,
+- 0x00,0x01,0xff,0xce,0x99,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x80,
+- 0x00,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x04,0x00,0x00,0x01,0xff,
+- 0xe1,0xbf,0xbe,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbf,0xbe,0xcc,0x81,0x00,
+- 0x01,0xff,0xe1,0xbf,0xbe,0xcd,0x82,0x00,0xd4,0x93,0xd3,0x4e,0xd2,0x28,0xd1,0x12,
++ 0xff,0x79,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x79,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,
++ 0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x79,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x10,0x08,0x0a,0xff,0xe1,
++ 0xbb,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbd,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbf,0x00,0x0a,0x00,0xe1,0xbf,0x02,0xe0,0xa1,
++ 0x01,0xcf,0x86,0xd5,0xc6,0xd4,0x6c,0xd3,0x18,0xe2,0x0e,0x59,0xe1,0xf7,0x58,0x10,
++ 0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,0xd2,
++ 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,
++ 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
++ 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,
++ 0x00,0xd3,0x18,0xe2,0x4a,0x59,0xe1,0x33,0x59,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,
++ 0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xce,0xb5,0xcc,0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb5,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,
++ 0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,
++ 0xce,0xb5,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd4,0x6c,0xd3,0x18,0xe2,0x74,0x59,
++ 0xe1,0x5d,0x59,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,
++ 0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,
++ 0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,
++ 0xcc,0x94,0xcd,0x82,0x00,0xd3,0x18,0xe2,0xb0,0x59,0xe1,0x99,0x59,0x10,0x09,0x01,
++ 0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,
++ 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,
++ 0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,0xcf,
++ 0x86,0xd5,0xac,0xd4,0x5a,0xd3,0x18,0xe2,0xed,0x59,0xe1,0xd6,0x59,0x10,0x09,0x01,
++ 0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,
++ 0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x18,0xe2,
++ 0x17,0x5a,0xe1,0x00,0x5a,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,
++ 0xcf,0x85,0xcc,0x94,0x00,0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,
++ 0x85,0xcc,0x94,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x80,
++ 0x00,0xd1,0x0f,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,
++ 0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xe4,0xd3,0x5a,
++ 0xd3,0x18,0xe2,0x52,0x5a,0xe1,0x3b,0x5a,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0x00,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xcf,
++ 0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xe0,0xd9,0x02,0xcf,0x86,0xe5,
++ 0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,
++ 0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,
++ 0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,
++ 0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,
++ 0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,
++ 0xce,0xb1,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,
++ 0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,
++ 0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,
++ 0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,
++ 0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x64,0xd2,0x30,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,
++ 0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,
++ 0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,
++ 0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,
++ 0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb7,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,
++ 0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,
++ 0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb7,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,
++ 0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,
++ 0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,
++ 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,
++ 0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,
++ 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,
++ 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,
++ 0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,
++ 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,
++ 0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,
++ 0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,
++ 0x89,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x49,0xd2,0x26,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,0x84,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0xb1,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,
++ 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,
++ 0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcd,0x82,0xce,0xb9,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x81,0x00,0xe1,0xf3,0x5a,0x10,0x09,0x01,0xff,0xce,0xb1,0xce,0xb9,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0xbd,0xd4,0x7e,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,
++ 0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xce,0xb7,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,
++ 0xb7,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,
++ 0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,
++ 0x00,0xe1,0x02,0x5b,0x10,0x09,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0x01,0xff,0xe1,
++ 0xbe,0xbf,0xcc,0x80,0x00,0xd3,0x18,0xe2,0x28,0x5b,0xe1,0x11,0x5b,0x10,0x09,0x01,
++ 0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0xe2,0x4c,0x5b,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,
++ 0x81,0x00,0xd4,0x51,0xd3,0x18,0xe2,0x6f,0x5b,0xe1,0x58,0x5b,0x10,0x09,0x01,0xff,
++ 0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0xd2,0x24,0xd1,0x12,
+ 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,
+- 0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,
+- 0x88,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x93,0x00,0x01,
+- 0xff,0xcf,0x81,0xcc,0x94,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcd,0x82,0x00,0x01,
+- 0xff,0xcf,0x85,0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xa5,0xcc,0x86,0x00,0x01,0xff,0xce,0xa5,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0xa5,0xcc,0x80,0x00,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xa1,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,0x10,0x09,
+- 0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x01,0xff,0x60,0x00,0xd3,0x3b,0xd2,0x18,0x51,
+- 0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
+- 0xcf,0x89,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,0xcd,
+- 0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,0xcf,
+- 0x89,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x9f,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xa9,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0xd1,0x10,0x10,0x09,0x01,0xff,
+- 0xce,0xa9,0xcd,0x85,0x00,0x01,0xff,0xc2,0xb4,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0xe0,0x62,0x0c,0xcf,0x86,0xe5,0x9f,0x08,0xe4,0xf8,0x05,0xe3,0xdb,0x02,0xe2,0xa1,
+- 0x01,0xd1,0xb4,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0x92,0x14,0x91,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x01,0x00,0xcf,0x86,0xd5,
+- 0x48,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x06,0x00,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x06,0x00,0xd3,0x1c,0xd2,
+- 0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0xd1,0x08,0x10,0x04,0x07,
+- 0x00,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,
+- 0x00,0x10,0x04,0x08,0x00,0x06,0x00,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x06,0x00,0x91,
+- 0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x0f,0x00,0x92,0x08,0x11,0x04,0x0f,0x00,0x01,
+- 0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd0,0x7e,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x53,0x04,0x01,
+- 0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd3,
+- 0x10,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0c,0x00,0x0c,0x00,0x52,
+- 0x04,0x0c,0x00,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xd4,0x1c,0x53,
+- 0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x02,0x00,0x91,
+- 0x08,0x10,0x04,0x03,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0xd2,0x08,0x11,0x04,0x06,
+- 0x00,0x08,0x00,0x11,0x04,0x08,0x00,0x0b,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0b,
+- 0x00,0x0c,0x00,0x10,0x04,0x0e,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,
+- 0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0x54,0x04,0x00,0x00,0xd3,0x0c,0x92,0x08,0x11,
+- 0x04,0x01,0xe6,0x01,0x01,0x01,0xe6,0xd2,0x0c,0x51,0x04,0x01,0x01,0x10,0x04,0x01,
+- 0x01,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,
+- 0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x04,0x00,0xd1,0x08,0x10,
+- 0x04,0x06,0x00,0x06,0x01,0x10,0x04,0x06,0x01,0x06,0xe6,0x92,0x10,0xd1,0x08,0x10,
+- 0x04,0x06,0xdc,0x06,0xe6,0x10,0x04,0x06,0x01,0x08,0x01,0x09,0xdc,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x0a,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
+- 0x81,0xd0,0x4f,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xa9,0x00,0x01,0x00,0x92,0x12,
+- 0x51,0x04,0x01,0x00,0x10,0x06,0x01,0xff,0x4b,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,
+- 0x10,0x04,0x04,0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x06,0x00,0x06,0x00,
+- 0xcf,0x86,0x95,0x2c,0xd4,0x18,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0xd1,0x08,
+- 0x10,0x04,0x08,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xd0,0x68,0xcf,0x86,0xd5,0x48,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x11,0x00,0x00,0x00,0x53,0x04,
+- 0x01,0x00,0x92,0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x90,0xcc,
+- 0xb8,0x00,0x01,0xff,0xe2,0x86,0x92,0xcc,0xb8,0x00,0x01,0x00,0x94,0x1a,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,
+- 0x94,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x87,
+- 0x90,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x87,0x94,0xcc,0xb8,0x00,0x01,0xff,
+- 0xe2,0x87,0x92,0xcc,0xb8,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x93,0x08,0x12,0x04,
+- 0x04,0x00,0x06,0x00,0x06,0x00,0xe2,0x38,0x02,0xe1,0x3f,0x01,0xd0,0x68,0xcf,0x86,
+- 0xd5,0x3e,0x94,0x3a,0xd3,0x16,0x52,0x04,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,
+- 0xe2,0x88,0x83,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd2,0x12,0x91,0x0e,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe2,0x88,0x88,0xcc,0xb8,0x00,0x01,0x00,0x91,0x0e,0x10,0x0a,
+- 0x01,0xff,0xe2,0x88,0x8b,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x24,
+- 0x93,0x20,0x52,0x04,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa3,0xcc,
+- 0xb8,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa5,0xcc,0xb8,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x48,0x94,0x44,0xd3,0x2e,0xd2,0x12,0x91,0x0e,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x88,0xbc,0xcc,0xb8,0x00,0x01,0x00,0xd1,0x0e,
+- 0x10,0x0a,0x01,0xff,0xe2,0x89,0x83,0xcc,0xb8,0x00,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe2,0x89,0x85,0xcc,0xb8,0x00,0x92,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe2,0x89,0x88,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x40,
+- 0xd3,0x1e,0x92,0x1a,0xd1,0x0c,0x10,0x08,0x01,0xff,0x3d,0xcc,0xb8,0x00,0x01,0x00,
+- 0x10,0x0a,0x01,0xff,0xe2,0x89,0xa1,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x52,0x04,
+- 0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x89,0x8d,0xcc,0xb8,0x00,
+- 0x10,0x08,0x01,0xff,0x3c,0xcc,0xb8,0x00,0x01,0xff,0x3e,0xcc,0xb8,0x00,0xd3,0x30,
+- 0xd2,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xa4,0xcc,0xb8,0x00,0x01,0xff,
+- 0xe2,0x89,0xa5,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,
+- 0xb2,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xb3,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,
+- 0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xb6,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,
+- 0xb7,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x50,0x94,0x4c,
+- 0xd3,0x30,0xd2,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xba,0xcc,0xb8,0x00,
+- 0x01,0xff,0xe2,0x89,0xbb,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,
+- 0xe2,0x8a,0x82,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x83,0xcc,0xb8,0x00,0x01,0x00,
+- 0x92,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x86,0xcc,0xb8,0x00,0x01,0xff,
+- 0xe2,0x8a,0x87,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x30,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa2,0xcc,
+- 0xb8,0x00,0x01,0xff,0xe2,0x8a,0xa8,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,
+- 0xa9,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xab,0xcc,0xb8,0x00,0x01,0x00,0xcf,0x86,
+- 0x55,0x04,0x01,0x00,0xd4,0x5c,0xd3,0x2c,0x92,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,
+- 0xe2,0x89,0xbc,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xbd,0xcc,0xb8,0x00,0x10,0x0a,
+- 0x01,0xff,0xe2,0x8a,0x91,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x92,0xcc,0xb8,0x00,
+- 0x01,0x00,0xd2,0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb2,0xcc,
+- 0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb3,0xcc,0xb8,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,
+- 0xe2,0x8a,0xb4,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb5,0xcc,0xb8,0x00,0x01,0x00,
+- 0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xd1,0x64,
+- 0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x20,0x53,0x04,
+- 0x01,0x00,0x92,0x18,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x80,0x88,0x00,
+- 0x10,0x08,0x01,0xff,0xe3,0x80,0x89,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,
+- 0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,
+- 0x04,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,
+- 0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x06,0x00,0x06,0x00,
+- 0xcf,0x86,0xd5,0x2c,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,
+- 0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0x12,0x04,0x08,0x00,0x09,0x00,0xd4,0x14,
+- 0x53,0x04,0x09,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x0c,0x00,
+- 0x0c,0x00,0xd3,0x08,0x12,0x04,0x0c,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x10,0x00,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x13,0x00,
+- 0xd3,0xa6,0xd2,0x74,0xd1,0x40,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,
+- 0x93,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,
+- 0x04,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,
+- 0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,
+- 0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x06,0x00,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,
+- 0x10,0x04,0x06,0x00,0x07,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x1a,0xcf,0x86,
+- 0x95,0x14,0x54,0x04,0x01,0x00,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,
+- 0x06,0x00,0x06,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,
+- 0x13,0x04,0x04,0x00,0x06,0x00,0xd2,0xdc,0xd1,0x48,0xd0,0x26,0xcf,0x86,0x95,0x20,
+- 0x54,0x04,0x01,0x00,0xd3,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x07,0x00,0x06,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x08,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,
+- 0x04,0x00,0x06,0x00,0x06,0x00,0x52,0x04,0x06,0x00,0x11,0x04,0x06,0x00,0x08,0x00,
+- 0xd0,0x5e,0xcf,0x86,0xd5,0x2c,0xd4,0x10,0x53,0x04,0x06,0x00,0x92,0x08,0x11,0x04,
+- 0x06,0x00,0x07,0x00,0x07,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,
+- 0x08,0x00,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0a,0x00,0x0b,0x00,
+- 0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,
+- 0xd5,0x1c,0x94,0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x51,0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0b,0x00,0x94,0x14,0x93,0x10,
+- 0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0c,0x00,0x0b,0x00,
+- 0x0b,0x00,0xd1,0xa8,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x0c,0x00,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,
+- 0x94,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x18,0x53,0x04,0x01,0x00,
+- 0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x04,0x0c,0x00,
+- 0x01,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,
+- 0x51,0x04,0x0c,0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x06,0x00,0x93,0x0c,0x52,0x04,
+- 0x06,0x00,0x11,0x04,0x06,0x00,0x01,0x00,0x01,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,
+- 0x54,0x04,0x01,0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x0c,0x00,0x0c,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,
+- 0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0xd2,0x0c,
+- 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0d,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
+- 0x0d,0x00,0x0c,0x00,0x06,0x00,0x94,0x0c,0x53,0x04,0x06,0x00,0x12,0x04,0x06,0x00,
+- 0x0a,0x00,0x06,0x00,0xe4,0x39,0x01,0xd3,0x0c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xcf,
+- 0x06,0x06,0x00,0xd2,0x30,0xd1,0x06,0xcf,0x06,0x06,0x00,0xd0,0x06,0xcf,0x06,0x06,
+- 0x00,0xcf,0x86,0x95,0x1e,0x54,0x04,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,
+- 0x00,0x91,0x0e,0x10,0x0a,0x06,0xff,0xe2,0xab,0x9d,0xcc,0xb8,0x00,0x06,0x00,0x06,
+- 0x00,0x06,0x00,0xd1,0x80,0xd0,0x3a,0xcf,0x86,0xd5,0x28,0xd4,0x10,0x53,0x04,0x07,
+- 0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x08,0x00,0xd3,0x08,0x12,0x04,0x08,
+- 0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,
+- 0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,
+- 0x86,0xd5,0x30,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,
+- 0x04,0x0a,0x00,0x10,0x00,0x10,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,
+- 0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x10,0x00,0x10,
+- 0x00,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,
+- 0x00,0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,
+- 0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x53,
+- 0x04,0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x14,0x00,0x91,0x08,0x10,0x04,0x14,
+- 0x00,0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x15,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x92,
+- 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd4,
+- 0x0c,0x53,0x04,0x14,0x00,0x12,0x04,0x14,0x00,0x11,0x00,0x53,0x04,0x14,0x00,0x52,
+- 0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0xe3,0xb9,0x01,
+- 0xd2,0xac,0xd1,0x68,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x14,0x53,0x04,
+- 0x08,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,
+- 0x08,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,
+- 0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xd4,0x14,0x53,0x04,
+- 0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0a,0x00,0x0a,0x00,0x09,0x00,
+- 0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x0b,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,
+- 0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,
+- 0x0b,0xe6,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0d,0x00,0x00,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd1,0x6c,0xd0,0x2a,
+- 0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0d,0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,
+- 0xd3,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,
+- 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x0c,0x09,0xd0,0x5a,0xcf,0x86,0xd5,0x18,0x54,0x04,
+- 0x08,0x00,0x93,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,
+- 0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
+- 0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,
+- 0x00,0x00,0xcf,0x86,0x95,0x40,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
+- 0x10,0x04,0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
+- 0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x00,0x00,0x0a,0xe6,0xd2,0x9c,0xd1,0x68,0xd0,0x32,0xcf,0x86,0xd5,0x14,
+- 0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,0x08,0x00,
+- 0x0a,0x00,0x54,0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,
+- 0x0b,0x00,0x0d,0x00,0x0d,0x00,0x12,0x04,0x0d,0x00,0x10,0x00,0xcf,0x86,0x95,0x30,
+- 0x94,0x2c,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x12,0x00,
+- 0x91,0x08,0x10,0x04,0x12,0x00,0x13,0x00,0x13,0x00,0xd2,0x08,0x11,0x04,0x13,0x00,
+- 0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0x00,0x00,0x00,0x00,
+- 0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,
+- 0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x00,0x00,
+- 0x00,0x00,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,
+- 0xd5,0x14,0x54,0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,
+- 0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x04,0x00,0x12,0x04,0x04,0x00,
+- 0x00,0x00,0xcf,0x86,0xe5,0x8d,0x05,0xe4,0x86,0x05,0xe3,0x7d,0x04,0xe2,0xe4,0x03,
+- 0xe1,0xc0,0x01,0xd0,0x3e,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,
+- 0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0xda,0x01,0xe4,0x91,0x08,0x10,
+- 0x04,0x01,0xe8,0x01,0xde,0x01,0xe0,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x04,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0xaa,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe3,0x81,0x8b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0x8d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe3,0x81,0x8f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0x91,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x93,
+- 0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x95,0xe3,0x82,0x99,
+- 0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x97,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x99,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9b,0xe3,0x82,0x99,0x00,0x01,0x00,
+- 0x10,0x0b,0x01,0xff,0xe3,0x81,0x9d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd4,0x53,0xd3,
+- 0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9f,0xe3,0x82,0x99,0x00,
+- 0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xa1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,
+- 0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xa4,0xe3,0x82,0x99,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x81,0xa6,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,0x0f,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xa8,0xe3,0x82,0x99,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x99,
+- 0x00,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xe3,0x81,0xb2,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb2,
+- 0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x99,
+- 0x00,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x81,0xb8,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x81,0xb8,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0xbb,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,0x9a,0x00,0x01,0x00,
+- 0xd0,0xee,0xcf,0x86,0xd5,0x42,0x54,0x04,0x01,0x00,0xd3,0x1b,0x52,0x04,0x01,0x00,
+- 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x86,0xe3,0x82,0x99,0x00,0x06,0x00,0x10,
+- 0x04,0x06,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x08,0x10,
+- 0x04,0x01,0x08,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0x9d,
+- 0xe3,0x82,0x99,0x00,0x06,0x00,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x82,0xab,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x82,0xad,0xe3,0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x82,0xaf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x82,0xb1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,
+- 0xb3,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb5,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb7,0xe3,
+- 0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb9,0xe3,0x82,0x99,0x00,
+- 0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbb,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbd,0xe3,0x82,0x99,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xd5,0xd4,0x53,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,
+- 0xbf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x81,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x84,0xe3,
+- 0x82,0x99,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x86,0xe3,0x82,0x99,0x00,
+- 0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x88,0xe3,0x82,0x99,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,
+- 0x83,0x8f,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x8f,0xe3,0x82,0x9a,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x83,0x95,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x95,0xe3,0x82,0x9a,0x00,0xd2,
+- 0x1e,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x99,0x00,
+- 0x10,0x0b,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,
+- 0x0b,0x01,0xff,0xe3,0x83,0x9b,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x9b,0xe3,
+- 0x82,0x9a,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x22,0x52,0x04,0x01,0x00,0xd1,
+- 0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xa6,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x83,0xaf,0xe3,0x82,0x99,0x00,0xd2,0x25,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xe3,0x83,0xb0,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0xb1,0xe3,
+- 0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xb2,0xe3,0x82,0x99,0x00,0x01,0x00,
+- 0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xbd,0xe3,0x82,0x99,0x00,0x06,
+- 0x00,0xd1,0x4c,0xd0,0x46,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x00,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,
+- 0x18,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,
+- 0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x06,0x01,0x00,0xd0,0x32,0xcf,
+- 0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x54,0x04,0x04,0x00,0x53,0x04,0x04,
+- 0x00,0x92,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x08,0x14,0x04,0x08,0x00,0x0a,0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x0a,
+- 0x00,0x00,0x00,0x00,0x00,0x06,0x00,0xd2,0xa4,0xd1,0x5c,0xd0,0x22,0xcf,0x86,0x95,
+- 0x1c,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x01,0x00,0xcf,0x86,0xd5,
+- 0x20,0xd4,0x0c,0x93,0x08,0x12,0x04,0x01,0x00,0x0b,0x00,0x0b,0x00,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x54,
+- 0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x07,0x00,0x10,
+- 0x04,0x08,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,
+- 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,
+- 0x00,0x06,0x00,0xcf,0x86,0xd5,0x10,0x94,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,
+- 0x00,0x07,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x16,0x00,0xd1,0x30,0xd0,0x06,0xcf,
+- 0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0x92,0x0c,0x51,
+- 0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,
+- 0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x11,0x04,0x01,0x00,0x07,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0xcf,0x06,0x04,
+- 0x00,0xcf,0x06,0x04,0x00,0xd1,0x48,0xd0,0x40,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x04,
+- 0x00,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x2c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xd1,
+- 0x06,0xcf,0x06,0x04,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,
+- 0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x06,0x07,0x00,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,
+- 0x06,0x01,0x00,0xe2,0x71,0x05,0xd1,0x8c,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,
+- 0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xd4,0x06,0xcf,0x06,0x01,0x00,0xd3,0x06,
+- 0xcf,0x06,0x01,0x00,0xd2,0x06,0xcf,0x06,0x01,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,
+- 0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x01,0x00,
+- 0x11,0x04,0x01,0x00,0x08,0x00,0x08,0x00,0x53,0x04,0x08,0x00,0x12,0x04,0x08,0x00,
+- 0x0a,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,
+- 0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x11,0x00,0x11,0x00,0x93,0x0c,
+- 0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x13,0x00,0x13,0x00,0x94,0x14,0x53,0x04,
+- 0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,
+- 0x00,0x00,0xe0,0xdb,0x04,0xcf,0x86,0xe5,0xdf,0x01,0xd4,0x06,0xcf,0x06,0x04,0x00,
+- 0xd3,0x74,0xd2,0x6e,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,
+- 0x94,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,
+- 0x00,0x00,0x00,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x04,0x00,
+- 0x06,0x00,0x04,0x00,0x04,0x00,0x93,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,0x95,0x24,0x94,0x20,0x93,0x1c,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x04,0x00,0xd1,0x08,0x10,0x04,
+- 0x04,0x00,0x06,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0x0b,0x00,
+- 0xcf,0x06,0x0a,0x00,0xd2,0x84,0xd1,0x4c,0xd0,0x16,0xcf,0x86,0x55,0x04,0x0a,0x00,
+- 0x94,0x0c,0x53,0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
+- 0x55,0x04,0x0a,0x00,0xd4,0x1c,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x0a,0x00,
+- 0x0a,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xe6,
+- 0xd3,0x08,0x12,0x04,0x0a,0x00,0x0d,0xe6,0x52,0x04,0x0d,0xe6,0x11,0x04,0x0a,0xe6,
+- 0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,
+- 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,0xe6,0x0d,0xe6,0x0b,0x00,
+- 0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,
+- 0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x24,
+- 0x54,0x04,0x08,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,
+- 0x0a,0x00,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0x0a,0x00,0x0a,0x00,0xcf,0x06,0x0a,0x00,0xd0,0x5e,0xcf,0x86,0xd5,0x28,0xd4,0x18,
+- 0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0xd1,0x08,0x10,0x04,0x0a,0x00,0x0c,0x00,
+- 0x10,0x04,0x0c,0x00,0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x0d,0x00,
+- 0x10,0x00,0x10,0x00,0xd4,0x1c,0x53,0x04,0x0c,0x00,0xd2,0x0c,0x51,0x04,0x0c,0x00,
+- 0x10,0x04,0x0d,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x12,0x00,0x14,0x00,
+- 0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0x92,0x08,0x11,0x04,
+- 0x14,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x1c,0x94,0x18,0x93,0x14,0xd2,0x08,
+- 0x11,0x04,0x00,0x00,0x15,0x00,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x92,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,
+- 0x0c,0x00,0x0a,0x00,0x0a,0x00,0xe4,0xf2,0x02,0xe3,0x65,0x01,0xd2,0x98,0xd1,0x48,
+- 0xd0,0x36,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x09,0x08,0x00,0x08,0x00,0x08,0x00,0xd4,0x0c,0x53,0x04,
+- 0x08,0x00,0x12,0x04,0x08,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,0x11,0x04,
+- 0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0x54,0x04,0x09,0x00,
+- 0x13,0x04,0x09,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x0a,0x00,0xcf,0x86,0xd5,0x2c,
+- 0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x09,0x12,0x00,
+- 0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,
+- 0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x0b,0xe6,0xd3,0x0c,
+- 0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x11,0x04,
+- 0x11,0x00,0x14,0x00,0xd1,0x60,0xd0,0x22,0xcf,0x86,0x55,0x04,0x0a,0x00,0x94,0x18,
+- 0x53,0x04,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xdc,
+- 0x11,0x04,0x0a,0xdc,0x0a,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x0a,0x00,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0x09,0x00,0x00,
+- 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x54,0x04,
+- 0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,
+- 0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,
+- 0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0x07,0x0b,0x00,
+- 0x0b,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x20,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,
+- 0x10,0x04,0x00,0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x08,0x11,0x04,0x0b,0x00,
+- 0x00,0x00,0x11,0x04,0x00,0x00,0x0b,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
+- 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd2,0xd0,
+- 0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,0x0a,0x00,0x93,0x10,
+- 0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,
+- 0x0a,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,
+- 0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x10,0x00,
+- 0xd0,0x3a,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0xd3,0x1c,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xdc,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0xe6,
+- 0x0b,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,0xcf,0x86,0xd5,0x2c,0xd4,0x18,
+- 0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x10,0x04,0x0b,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,0x0d,0x00,0x93,0x10,0x52,0x04,
+- 0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x00,0x00,0x00,0x00,0xd1,0x8c,
+- 0xd0,0x72,0xcf,0x86,0xd5,0x4c,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,
++ 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,
++ 0xe1,0x8f,0x5b,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,
++ 0xcc,0x80,0x00,0xd3,0x3b,0xd2,0x18,0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,
++ 0x89,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0xd1,0x0f,0x10,
++ 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
++ 0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,
++ 0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x81,0x00,0xe1,0x99,0x5b,0x10,0x09,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0x01,0xff,
++ 0xc2,0xb4,0x00,0xe0,0x0c,0x68,0xcf,0x86,0xe5,0x23,0x02,0xe4,0x25,0x01,0xe3,0x85,
++ 0x5e,0xd2,0x2a,0xe1,0x5f,0x5c,0xe0,0xdd,0x5b,0xcf,0x86,0xe5,0xbb,0x5b,0x94,0x1b,
++ 0xe3,0xa4,0x5b,0x92,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,
++ 0xff,0xe2,0x80,0x83,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd1,0xd6,0xd0,0x46,0xcf,
++ 0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
++ 0x00,0x10,0x07,0x01,0xff,0xcf,0x89,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,0x00,
++ 0x10,0x06,0x01,0xff,0x6b,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x01,0x00,0xe3,0x25,
++ 0x5d,0x92,0x10,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0x8e,0x00,0x01,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x0a,0xe4,0x42,0x5d,0x63,0x2d,0x5d,0x06,0x00,0x94,
++ 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb0,0x00,0x01,
++ 0xff,0xe2,0x85,0xb1,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb2,0x00,0x01,0xff,0xe2,
++ 0x85,0xb3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb4,0x00,0x01,0xff,0xe2,
++ 0x85,0xb5,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb6,0x00,0x01,0xff,0xe2,0x85,0xb7,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb8,0x00,0x01,0xff,0xe2,
++ 0x85,0xb9,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xba,0x00,0x01,0xff,0xe2,0x85,0xbb,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xbc,0x00,0x01,0xff,0xe2,0x85,0xbd,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xbe,0x00,0x01,0xff,0xe2,0x85,0xbf,0x00,0x01,
++ 0x00,0xe0,0x34,0x5d,0xcf,0x86,0xe5,0x13,0x5d,0xe4,0xf2,0x5c,0xe3,0xe1,0x5c,0xe2,
++ 0xd4,0x5c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0xff,0xe2,0x86,0x84,0x00,
++ 0xe3,0x23,0x61,0xe2,0xf0,0x60,0xd1,0x0c,0xe0,0x9d,0x60,0xcf,0x86,0x65,0x7e,0x60,
++ 0x01,0x00,0xd0,0x62,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x18,
++ 0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x90,0x00,
++ 0x01,0xff,0xe2,0x93,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,
++ 0x92,0x00,0x01,0xff,0xe2,0x93,0x93,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x94,0x00,
++ 0x01,0xff,0xe2,0x93,0x95,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x96,0x00,
++ 0x01,0xff,0xe2,0x93,0x97,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x98,0x00,0x01,0xff,
++ 0xe2,0x93,0x99,0x00,0xcf,0x86,0xe5,0x57,0x60,0x94,0x80,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x9a,0x00,0x01,0xff,0xe2,0x93,0x9b,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0x9c,0x00,0x01,0xff,0xe2,0x93,0x9d,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0x9e,0x00,0x01,0xff,0xe2,0x93,0x9f,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa0,0x00,0x01,0xff,0xe2,0x93,0xa1,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0xa2,0x00,0x01,0xff,0xe2,0x93,0xa3,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa4,0x00,0x01,0xff,0xe2,0x93,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa6,0x00,0x01,0xff,0xe2,0x93,0xa7,0x00,0x10,0x08,0x01,0xff,0xe2,
++ 0x93,0xa8,0x00,0x01,0xff,0xe2,0x93,0xa9,0x00,0x01,0x00,0xd4,0x0c,0xe3,0x33,0x62,
++ 0xe2,0x2c,0x62,0xcf,0x06,0x04,0x00,0xe3,0x0c,0x65,0xe2,0xff,0x63,0xe1,0x2e,0x02,
++ 0xe0,0x84,0x01,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe2,0xb0,0xb0,0x00,0x08,0xff,0xe2,0xb0,0xb1,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb2,0x00,0x08,0xff,0xe2,0xb0,0xb3,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb4,0x00,0x08,0xff,0xe2,0xb0,0xb5,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xb6,0x00,0x08,0xff,0xe2,0xb0,0xb7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb8,0x00,0x08,0xff,0xe2,0xb0,0xb9,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xba,0x00,0x08,0xff,0xe2,0xb0,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xbc,0x00,0x08,0xff,0xe2,0xb0,0xbd,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
++ 0xbe,0x00,0x08,0xff,0xe2,0xb0,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb1,0x80,0x00,0x08,0xff,0xe2,0xb1,0x81,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x82,0x00,0x08,0xff,0xe2,0xb1,0x83,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x84,0x00,0x08,0xff,0xe2,0xb1,0x85,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x86,0x00,0x08,0xff,0xe2,0xb1,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x88,0x00,0x08,0xff,0xe2,0xb1,0x89,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x8a,0x00,0x08,0xff,0xe2,0xb1,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x8c,0x00,0x08,0xff,0xe2,0xb1,0x8d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8e,0x00,
++ 0x08,0xff,0xe2,0xb1,0x8f,0x00,0x94,0x7c,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb1,0x90,0x00,0x08,0xff,0xe2,0xb1,0x91,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x92,0x00,0x08,0xff,0xe2,0xb1,0x93,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x94,0x00,0x08,0xff,0xe2,0xb1,0x95,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x96,0x00,0x08,0xff,0xe2,0xb1,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x98,0x00,0x08,0xff,0xe2,0xb1,0x99,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x9a,0x00,0x08,0xff,0xe2,0xb1,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x9c,0x00,0x08,0xff,0xe2,0xb1,0x9d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9e,0x00,
++ 0x00,0x00,0x08,0x00,0xcf,0x86,0xd5,0x07,0x64,0xef,0x61,0x08,0x00,0xd4,0x63,0xd3,
++ 0x32,0xd2,0x1b,0xd1,0x0c,0x10,0x08,0x09,0xff,0xe2,0xb1,0xa1,0x00,0x09,0x00,0x10,
++ 0x07,0x09,0xff,0xc9,0xab,0x00,0x09,0xff,0xe1,0xb5,0xbd,0x00,0xd1,0x0b,0x10,0x07,
++ 0x09,0xff,0xc9,0xbd,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xa8,
++ 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xaa,0x00,0x10,
++ 0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xac,0x00,0xd1,0x0b,0x10,0x04,0x09,0x00,0x0a,
++ 0xff,0xc9,0x91,0x00,0x10,0x07,0x0a,0xff,0xc9,0xb1,0x00,0x0a,0xff,0xc9,0x90,0x00,
++ 0xd3,0x27,0xd2,0x17,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xc9,0x92,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xe2,0xb1,0xb3,0x00,0x0a,0x00,0x91,0x0c,0x10,0x04,0x09,0x00,0x09,
++ 0xff,0xe2,0xb1,0xb6,0x00,0x09,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,
++ 0x07,0x0b,0xff,0xc8,0xbf,0x00,0x0b,0xff,0xc9,0x80,0x00,0xe0,0x83,0x01,0xcf,0x86,
++ 0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0x81,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x83,0x00,0x08,0x00,0xd1,0x0c,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0x87,0x00,0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x89,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8f,0x00,
++ 0x08,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x91,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x97,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x99,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x9f,0x00,0x08,0x00,
++ 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa1,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0xa5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa7,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa9,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0xab,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0xad,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xaf,0x00,0x08,0x00,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb1,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0xb3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0xb5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb7,0x00,0x08,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb9,0x00,0x08,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0xbb,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0xbd,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbf,0x00,0x08,0x00,0xcf,0x86,
++ 0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,
++ 0x81,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x83,0x00,0x08,0x00,0xd1,0x0c,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,
++ 0x87,0x00,0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x89,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb3,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8f,0x00,
++ 0x08,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x91,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb3,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x97,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x99,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb3,0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x9f,0x00,0x08,0x00,
++ 0xd4,0x3b,0xd3,0x1c,0x92,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa1,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa3,0x00,0x08,0x00,0x08,0x00,0xd2,0x10,
++ 0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0xff,0xe2,0xb3,0xac,0x00,0xe1,0x3b,
++ 0x5f,0x10,0x04,0x0b,0x00,0x0b,0xff,0xe2,0xb3,0xae,0x00,0xe3,0x40,0x5f,0x92,0x10,
++ 0x51,0x04,0x0b,0xe6,0x10,0x08,0x0d,0xff,0xe2,0xb3,0xb3,0x00,0x0d,0x00,0x00,0x00,
++ 0xe2,0x98,0x08,0xd1,0x0b,0xe0,0x11,0x67,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe0,0x65,
++ 0x6c,0xcf,0x86,0xe5,0xa7,0x05,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x0c,0xe2,0xf8,
++ 0x67,0xe1,0x8f,0x67,0xcf,0x06,0x04,0x00,0xe2,0xdb,0x01,0xe1,0x26,0x01,0xd0,0x09,
++ 0xcf,0x86,0x65,0xf4,0x67,0x0a,0x00,0xcf,0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,
++ 0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,
++ 0xff,0xea,0x99,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x85,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x99,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x8d,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x99,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x95,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x97,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x99,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x9b,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x9d,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x9f,0x00,0x0a,0x00,0xe4,0x5d,0x67,0xd3,0x30,0xd2,0x18,
++ 0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x99,0xa1,0x00,0x0c,0x00,0x10,0x08,0x0a,0xff,
++ 0xea,0x99,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0xa5,0x00,
++ 0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x0a,0xff,0xea,0x99,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,
++ 0xab,0x00,0x0a,0x00,0xe1,0x0c,0x67,0x10,0x08,0x0a,0xff,0xea,0x99,0xad,0x00,0x0a,
++ 0x00,0xe0,0x35,0x67,0xcf,0x86,0x95,0xab,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x0a,0xff,0xea,0x9a,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,
++ 0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x85,0x00,0x0a,0x00,
++ 0x10,0x08,0x0a,0xff,0xea,0x9a,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8b,0x00,
++ 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8d,0x00,0x0a,0x00,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x93,0x00,
++ 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x95,0x00,0x0a,0x00,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x97,0x00,0x0a,0x00,0xe2,0x92,0x66,0xd1,0x0c,0x10,0x08,0x10,
++ 0xff,0xea,0x9a,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9a,0x9b,0x00,0x10,
++ 0x00,0x0b,0x00,0xe1,0x10,0x02,0xd0,0xb9,0xcf,0x86,0xd5,0x07,0x64,0x9e,0x66,0x08,
++ 0x00,0xd4,0x58,0xd3,0x28,0xd2,0x10,0x51,0x04,0x09,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x9c,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa5,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xab,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xad,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xaf,0x00,0x0a,0x00,0xd3,0x28,0xd2,0x10,0x51,0x04,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9c,0xb5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb7,0x00,0x0a,
++ 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb9,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9c,0xbd,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbf,0x00,0x0a,0x00,0xcf,
++ 0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9d,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x83,0x00,0x0a,0x00,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x85,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x9d,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x89,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8f,
++ 0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x91,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x97,
++ 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x99,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9f,0x00,0x0a,
++ 0x00,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa1,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0xa5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa7,
++ 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa9,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xab,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xaf,0x00,0x0a,
++ 0x00,0x53,0x04,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,
++ 0x9d,0xba,0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xbc,0x00,0xd1,0x0c,0x10,
++ 0x04,0x0a,0x00,0x0a,0xff,0xe1,0xb5,0xb9,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xbf,
++ 0x00,0x0a,0x00,0xe0,0x71,0x01,0xcf,0x86,0xd5,0xa6,0xd4,0x4e,0xd3,0x30,0xd2,0x18,
++ 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
++ 0xea,0x9e,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x85,0x00,
++ 0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9e,0x87,0x00,0x0a,0x00,0xd2,0x10,0x51,0x04,
++ 0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9e,0x8c,0x00,0xe1,0x9a,0x64,0x10,
++ 0x04,0x0a,0x00,0x0c,0xff,0xc9,0xa5,0x00,0xd3,0x28,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0c,0xff,0xea,0x9e,0x91,0x00,0x0c,0x00,0x10,0x08,0x0d,0xff,0xea,0x9e,0x93,0x00,
++ 0x0d,0x00,0x51,0x04,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x97,0x00,0x10,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x99,0x00,0x10,0x00,0x10,0x08,
++ 0x10,0xff,0xea,0x9e,0x9b,0x00,0x10,0x00,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,
++ 0x9d,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x9f,0x00,0x10,0x00,0xd4,0x63,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa1,0x00,0x0c,0x00,
++ 0x10,0x08,0x0c,0xff,0xea,0x9e,0xa3,0x00,0x0c,0x00,0xd1,0x0c,0x10,0x08,0x0c,0xff,
++ 0xea,0x9e,0xa5,0x00,0x0c,0x00,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa7,0x00,0x0c,0x00,
++ 0xd2,0x1a,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa9,0x00,0x0c,0x00,0x10,0x07,
++ 0x0d,0xff,0xc9,0xa6,0x00,0x10,0xff,0xc9,0x9c,0x00,0xd1,0x0e,0x10,0x07,0x10,0xff,
++ 0xc9,0xa1,0x00,0x10,0xff,0xc9,0xac,0x00,0x10,0x07,0x12,0xff,0xc9,0xaa,0x00,0x14,
++ 0x00,0xd3,0x35,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x10,0xff,0xca,0x9e,0x00,0x10,0xff,
++ 0xca,0x87,0x00,0x10,0x07,0x11,0xff,0xca,0x9d,0x00,0x11,0xff,0xea,0xad,0x93,0x00,
++ 0xd1,0x0c,0x10,0x08,0x11,0xff,0xea,0x9e,0xb5,0x00,0x11,0x00,0x10,0x08,0x11,0xff,
++ 0xea,0x9e,0xb7,0x00,0x11,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x14,0xff,0xea,0x9e,
++ 0xb9,0x00,0x14,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbb,0x00,0x15,0x00,0xd1,0x0c,
++ 0x10,0x08,0x15,0xff,0xea,0x9e,0xbd,0x00,0x15,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,
++ 0xbf,0x00,0x15,0x00,0xcf,0x86,0xe5,0xd4,0x63,0x94,0x2f,0x93,0x2b,0xd2,0x10,0x51,
++ 0x04,0x00,0x00,0x10,0x08,0x15,0xff,0xea,0x9f,0x83,0x00,0x15,0x00,0xd1,0x0f,0x10,
++ 0x08,0x15,0xff,0xea,0x9e,0x94,0x00,0x15,0xff,0xca,0x82,0x00,0x10,0x08,0x15,0xff,
++ 0xe1,0xb6,0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe4,0xb4,0x66,0xd3,0x1d,0xe2,
++ 0x5b,0x64,0xe1,0x0a,0x64,0xe0,0xf7,0x63,0xcf,0x86,0xe5,0xd8,0x63,0x94,0x0b,0x93,
++ 0x07,0x62,0xc3,0x63,0x08,0x00,0x08,0x00,0x08,0x00,0xd2,0x0f,0xe1,0x5a,0x65,0xe0,
++ 0x27,0x65,0xcf,0x86,0x65,0x0c,0x65,0x0a,0x00,0xd1,0xab,0xd0,0x1a,0xcf,0x86,0xe5,
++ 0x17,0x66,0xe4,0xfa,0x65,0xe3,0xe1,0x65,0xe2,0xd4,0x65,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x0c,0x00,0x0c,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x0b,0x93,0x07,0x62,
++ 0x27,0x66,0x11,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8e,0xa0,0x00,0x11,0xff,0xe1,0x8e,0xa1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa2,0x00,0x11,0xff,0xe1,0x8e,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa4,0x00,0x11,0xff,0xe1,0x8e,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa6,0x00,
++ 0x11,0xff,0xe1,0x8e,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa8,0x00,0x11,0xff,0xe1,0x8e,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xaa,0x00,
++ 0x11,0xff,0xe1,0x8e,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xac,0x00,
++ 0x11,0xff,0xe1,0x8e,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xae,0x00,0x11,0xff,
++ 0xe1,0x8e,0xaf,0x00,0xe0,0xb2,0x65,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb0,0x00,0x11,0xff,0xe1,0x8e,
++ 0xb1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb2,0x00,0x11,0xff,0xe1,0x8e,0xb3,0x00,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb4,0x00,0x11,0xff,0xe1,0x8e,0xb5,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb6,0x00,0x11,0xff,0xe1,0x8e,0xb7,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb8,0x00,0x11,0xff,0xe1,0x8e,0xb9,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xba,0x00,0x11,0xff,0xe1,0x8e,0xbb,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xbc,0x00,0x11,0xff,0xe1,0x8e,0xbd,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8e,0xbe,0x00,0x11,0xff,0xe1,0x8e,0xbf,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0x80,0x00,0x11,0xff,0xe1,0x8f,0x81,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x82,0x00,0x11,0xff,0xe1,0x8f,0x83,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x84,0x00,0x11,0xff,0xe1,0x8f,0x85,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x86,0x00,0x11,0xff,0xe1,0x8f,0x87,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x88,0x00,0x11,0xff,0xe1,0x8f,0x89,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x8a,0x00,0x11,0xff,0xe1,0x8f,0x8b,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x8c,0x00,0x11,0xff,0xe1,0x8f,0x8d,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x8e,0x00,0x11,0xff,0xe1,0x8f,0x8f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0x90,0x00,0x11,0xff,0xe1,0x8f,0x91,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x92,0x00,0x11,0xff,0xe1,0x8f,0x93,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x94,0x00,0x11,0xff,0xe1,0x8f,0x95,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x96,0x00,0x11,0xff,0xe1,0x8f,0x97,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x98,0x00,0x11,0xff,0xe1,0x8f,0x99,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x9a,0x00,0x11,0xff,0xe1,0x8f,0x9b,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x9c,0x00,0x11,0xff,0xe1,0x8f,0x9d,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x9e,0x00,0x11,0xff,0xe1,0x8f,0x9f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0xa0,0x00,0x11,0xff,0xe1,0x8f,0xa1,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa2,0x00,0x11,0xff,0xe1,0x8f,0xa3,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa4,0x00,0x11,0xff,0xe1,0x8f,0xa5,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xa6,0x00,0x11,0xff,0xe1,0x8f,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa8,0x00,0x11,0xff,0xe1,0x8f,0xa9,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xaa,0x00,0x11,0xff,0xe1,0x8f,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xac,0x00,0x11,0xff,0xe1,0x8f,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0xae,0x00,0x11,0xff,0xe1,0x8f,0xaf,0x00,0xd1,0x0c,0xe0,0xeb,0x63,0xcf,0x86,0xcf,
++ 0x06,0x02,0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,
++ 0xcf,0x06,0x01,0x00,0xd4,0xae,0xd3,0x09,0xe2,0x54,0x64,0xcf,0x06,0x01,0x00,0xd2,
++ 0x27,0xe1,0x1f,0x70,0xe0,0x26,0x6e,0xcf,0x86,0xe5,0x3f,0x6d,0xe4,0xce,0x6c,0xe3,
++ 0x99,0x6c,0xe2,0x78,0x6c,0xe1,0x67,0x6c,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,
++ 0x01,0xff,0xe5,0xba,0xa6,0x00,0xe1,0x74,0x74,0xe0,0xe8,0x73,0xcf,0x86,0xe5,0x22,
++ 0x73,0xd4,0x3b,0x93,0x37,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x01,0xff,0x66,0x66,0x00,
++ 0x01,0xff,0x66,0x69,0x00,0x10,0x07,0x01,0xff,0x66,0x6c,0x00,0x01,0xff,0x66,0x66,
++ 0x69,0x00,0xd1,0x0f,0x10,0x08,0x01,0xff,0x66,0x66,0x6c,0x00,0x01,0xff,0x73,0x74,
++ 0x00,0x10,0x07,0x01,0xff,0x73,0x74,0x00,0x00,0x00,0x00,0x00,0xe3,0xc8,0x72,0xd2,
++ 0x11,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xb6,0x00,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xd5,0xb4,0xd5,0xa5,0x00,0x01,0xff,0xd5,0xb4,0xd5,
++ 0xab,0x00,0x10,0x09,0x01,0xff,0xd5,0xbe,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb4,0xd5,
++ 0xad,0x00,0xd3,0x09,0xe2,0x40,0x74,0xcf,0x06,0x01,0x00,0xd2,0x13,0xe1,0x30,0x75,
++ 0xe0,0xc1,0x74,0xcf,0x86,0xe5,0x9e,0x74,0x64,0x8d,0x74,0x06,0xff,0x00,0xe1,0x96,
++ 0x75,0xe0,0x63,0x75,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x7c,
++ 0xd3,0x3c,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xef,0xbd,0x81,0x00,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x82,0x00,0x01,0xff,0xef,0xbd,0x83,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x84,0x00,0x01,0xff,0xef,0xbd,0x85,0x00,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x86,0x00,0x01,0xff,0xef,0xbd,0x87,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x88,0x00,0x01,0xff,0xef,0xbd,0x89,0x00,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x8a,0x00,0x01,0xff,0xef,0xbd,0x8b,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x8c,0x00,0x01,0xff,0xef,0xbd,0x8d,0x00,0x10,0x08,0x01,0xff,
++ 0xef,0xbd,0x8e,0x00,0x01,0xff,0xef,0xbd,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x90,0x00,0x01,0xff,0xef,0xbd,0x91,0x00,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x92,0x00,0x01,0xff,0xef,0xbd,0x93,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x94,0x00,0x01,0xff,0xef,0xbd,0x95,0x00,0x10,0x08,0x01,0xff,
++ 0xef,0xbd,0x96,0x00,0x01,0xff,0xef,0xbd,0x97,0x00,0x92,0x1c,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x98,0x00,0x01,0xff,0xef,0xbd,0x99,0x00,0x10,0x08,0x01,0xff,
++ 0xef,0xbd,0x9a,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0x87,0xb3,0xe1,0x60,0xb0,0xe0,
++ 0xdd,0xae,0xcf,0x86,0xe5,0x81,0x9b,0xc4,0xe3,0xc1,0x07,0xe2,0x62,0x06,0xe1,0x11,
++ 0x86,0xe0,0x09,0x05,0xcf,0x86,0xe5,0xfb,0x02,0xd4,0x1c,0xe3,0x7f,0x76,0xe2,0xd6,
++ 0x75,0xe1,0xb1,0x75,0xe0,0x8a,0x75,0xcf,0x86,0xe5,0x57,0x75,0x94,0x07,0x63,0x42,
++ 0x75,0x07,0x00,0x07,0x00,0xe3,0x2b,0x78,0xe2,0xf0,0x77,0xe1,0x77,0x01,0xe0,0x88,
++ 0x77,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xa8,0x00,0x05,0xff,0xf0,0x90,0x90,0xa9,0x00,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xaa,0x00,0x05,0xff,0xf0,0x90,0x90,0xab,0x00,0xd1,0x12,
++ 0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xac,0x00,0x05,0xff,0xf0,0x90,0x90,0xad,0x00,
++ 0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xae,0x00,0x05,0xff,0xf0,0x90,0x90,0xaf,0x00,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb0,0x00,0x05,0xff,0xf0,
++ 0x90,0x90,0xb1,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb2,0x00,0x05,0xff,0xf0,
++ 0x90,0x90,0xb3,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb4,0x00,0x05,
++ 0xff,0xf0,0x90,0x90,0xb5,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb6,0x00,0x05,
++ 0xff,0xf0,0x90,0x90,0xb7,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x90,0xb8,0x00,0x05,0xff,0xf0,0x90,0x90,0xb9,0x00,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x90,0xba,0x00,0x05,0xff,0xf0,0x90,0x90,0xbb,0x00,0xd1,0x12,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xbc,0x00,0x05,0xff,0xf0,0x90,0x90,0xbd,0x00,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xbe,0x00,0x05,0xff,0xf0,0x90,0x90,0xbf,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x80,0x00,0x05,0xff,0xf0,0x90,0x91,
++ 0x81,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x82,0x00,0x05,0xff,0xf0,0x90,0x91,
++ 0x83,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x84,0x00,0x05,0xff,0xf0,
++ 0x90,0x91,0x85,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x86,0x00,0x05,0xff,0xf0,
++ 0x90,0x91,0x87,0x00,0x94,0x4c,0x93,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x91,0x88,0x00,0x05,0xff,0xf0,0x90,0x91,0x89,0x00,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x91,0x8a,0x00,0x05,0xff,0xf0,0x90,0x91,0x8b,0x00,0xd1,0x12,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x91,0x8c,0x00,0x05,0xff,0xf0,0x90,0x91,0x8d,0x00,0x10,0x09,
++ 0x07,0xff,0xf0,0x90,0x91,0x8e,0x00,0x07,0xff,0xf0,0x90,0x91,0x8f,0x00,0x05,0x00,
++ 0x05,0x00,0xd0,0xa0,0xcf,0x86,0xd5,0x07,0x64,0x30,0x76,0x07,0x00,0xd4,0x07,0x63,
++ 0x3d,0x76,0x07,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,
++ 0x93,0x98,0x00,0x12,0xff,0xf0,0x90,0x93,0x99,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,
++ 0x93,0x9a,0x00,0x12,0xff,0xf0,0x90,0x93,0x9b,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,
++ 0xf0,0x90,0x93,0x9c,0x00,0x12,0xff,0xf0,0x90,0x93,0x9d,0x00,0x10,0x09,0x12,0xff,
++ 0xf0,0x90,0x93,0x9e,0x00,0x12,0xff,0xf0,0x90,0x93,0x9f,0x00,0xd2,0x24,0xd1,0x12,
++ 0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa0,0x00,0x12,0xff,0xf0,0x90,0x93,0xa1,0x00,
++ 0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa2,0x00,0x12,0xff,0xf0,0x90,0x93,0xa3,0x00,
++ 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa4,0x00,0x12,0xff,0xf0,0x90,0x93,
++ 0xa5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa6,0x00,0x12,0xff,0xf0,0x90,0x93,
++ 0xa7,0x00,0xcf,0x86,0xe5,0xc6,0x75,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x90,0x93,0xa8,0x00,0x12,0xff,0xf0,0x90,0x93,0xa9,0x00,0x10,
++ 0x09,0x12,0xff,0xf0,0x90,0x93,0xaa,0x00,0x12,0xff,0xf0,0x90,0x93,0xab,0x00,0xd1,
++ 0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xac,0x00,0x12,0xff,0xf0,0x90,0x93,0xad,
++ 0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xae,0x00,0x12,0xff,0xf0,0x90,0x93,0xaf,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb0,0x00,0x12,0xff,
++ 0xf0,0x90,0x93,0xb1,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb2,0x00,0x12,0xff,
++ 0xf0,0x90,0x93,0xb3,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb4,0x00,
++ 0x12,0xff,0xf0,0x90,0x93,0xb5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb6,0x00,
++ 0x12,0xff,0xf0,0x90,0x93,0xb7,0x00,0x93,0x28,0x92,0x24,0xd1,0x12,0x10,0x09,0x12,
++ 0xff,0xf0,0x90,0x93,0xb8,0x00,0x12,0xff,0xf0,0x90,0x93,0xb9,0x00,0x10,0x09,0x12,
++ 0xff,0xf0,0x90,0x93,0xba,0x00,0x12,0xff,0xf0,0x90,0x93,0xbb,0x00,0x00,0x00,0x12,
++ 0x00,0xd4,0x1f,0xe3,0xdf,0x76,0xe2,0x6a,0x76,0xe1,0x09,0x76,0xe0,0xea,0x75,0xcf,
++ 0x86,0xe5,0xb7,0x75,0x94,0x0a,0xe3,0xa2,0x75,0x62,0x99,0x75,0x07,0x00,0x07,0x00,
++ 0xe3,0xde,0x78,0xe2,0xaf,0x78,0xd1,0x09,0xe0,0x4c,0x78,0xcf,0x06,0x0b,0x00,0xe0,
++ 0x7f,0x78,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x80,0x00,0x11,0xff,0xf0,0x90,0xb3,0x81,0x00,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x82,0x00,0x11,0xff,0xf0,0x90,0xb3,0x83,0x00,0xd1,
++ 0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x84,0x00,0x11,0xff,0xf0,0x90,0xb3,0x85,
++ 0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x86,0x00,0x11,0xff,0xf0,0x90,0xb3,0x87,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x88,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x89,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8a,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x8b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8c,0x00,
++ 0x11,0xff,0xf0,0x90,0xb3,0x8d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8e,0x00,
++ 0x11,0xff,0xf0,0x90,0xb3,0x8f,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0x90,0x00,0x11,0xff,0xf0,0x90,0xb3,0x91,0x00,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0x92,0x00,0x11,0xff,0xf0,0x90,0xb3,0x93,0x00,0xd1,0x12,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x94,0x00,0x11,0xff,0xf0,0x90,0xb3,0x95,0x00,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x96,0x00,0x11,0xff,0xf0,0x90,0xb3,0x97,0x00,0xd2,
++ 0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x98,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0x99,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9a,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9c,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x9d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9e,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x9f,0x00,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0xa0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa1,0x00,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0xa2,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa3,0x00,0xd1,0x12,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0xa4,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa5,0x00,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0xa6,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa7,0x00,0xd2,
++ 0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xa8,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0xa9,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xaa,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0xab,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xac,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0xad,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xae,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0xaf,0x00,0x93,0x23,0x92,0x1f,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,
++ 0x90,0xb3,0xb0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xb1,0x00,0x10,0x09,0x11,0xff,0xf0,
++ 0x90,0xb3,0xb2,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x15,0xe4,0x91,
++ 0x7b,0xe3,0x9b,0x79,0xe2,0x94,0x78,0xe1,0xe4,0x77,0xe0,0x9d,0x77,0xcf,0x06,0x0c,
++ 0x00,0xe4,0xeb,0x7e,0xe3,0x44,0x7e,0xe2,0xed,0x7d,0xd1,0x0c,0xe0,0xb2,0x7d,0xcf,
++ 0x86,0x65,0x93,0x7d,0x14,0x00,0xe0,0xb6,0x7d,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,
++ 0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x80,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x81,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x82,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x83,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,
++ 0x84,0x00,0x10,0xff,0xf0,0x91,0xa3,0x85,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,
++ 0x86,0x00,0x10,0xff,0xf0,0x91,0xa3,0x87,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x88,0x00,0x10,0xff,0xf0,0x91,0xa3,0x89,0x00,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x8a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8b,0x00,0xd1,0x12,0x10,
++ 0x09,0x10,0xff,0xf0,0x91,0xa3,0x8c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8d,0x00,0x10,
++ 0x09,0x10,0xff,0xf0,0x91,0xa3,0x8e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8f,0x00,0xd3,
++ 0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x90,0x00,0x10,0xff,
++ 0xf0,0x91,0xa3,0x91,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x92,0x00,0x10,0xff,
++ 0xf0,0x91,0xa3,0x93,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x94,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x95,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x96,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x97,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,
++ 0x91,0xa3,0x98,0x00,0x10,0xff,0xf0,0x91,0xa3,0x99,0x00,0x10,0x09,0x10,0xff,0xf0,
++ 0x91,0xa3,0x9a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x9c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9d,0x00,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x9e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9f,0x00,0xd1,0x11,0xe0,
++ 0x12,0x81,0xcf,0x86,0xe5,0x09,0x81,0xe4,0xd2,0x80,0xcf,0x06,0x00,0x00,0xe0,0xdb,
++ 0x82,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xd4,0x09,0xe3,0x10,0x81,0xcf,0x06,
++ 0x0c,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xe2,0x3b,0x82,0xe1,0x16,0x82,0xd0,0x06,
++ 0xcf,0x06,0x00,0x00,0xcf,0x86,0xa5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa1,
++ 0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa3,
++ 0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa4,0x00,0x14,0xff,0xf0,0x96,
++ 0xb9,0xa5,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa6,0x00,0x14,0xff,0xf0,0x96,
++ 0xb9,0xa7,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa8,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xa9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xaa,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xab,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,
++ 0xac,0x00,0x14,0xff,0xf0,0x96,0xb9,0xad,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,
++ 0xae,0x00,0x14,0xff,0xf0,0x96,0xb9,0xaf,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x14,0xff,0xf0,0x96,0xb9,0xb0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb1,0x00,0x10,
++ 0x09,0x14,0xff,0xf0,0x96,0xb9,0xb2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb3,0x00,0xd1,
++ 0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb5,
++ 0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb7,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb8,0x00,0x14,0xff,
++ 0xf0,0x96,0xb9,0xb9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xba,0x00,0x14,0xff,
++ 0xf0,0x96,0xb9,0xbb,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbc,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xbd,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbe,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xbf,0x00,0x14,0x00,0xd2,0x14,0xe1,0x25,0x82,0xe0,0x1c,
++ 0x82,0xcf,0x86,0xe5,0xdd,0x81,0xe4,0x9a,0x81,0xcf,0x06,0x12,0x00,0xd1,0x0b,0xe0,
++ 0x51,0x83,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0x95,0x8b,0xcf,0x86,0xd5,0x22,0xe4,
++ 0xd0,0x88,0xe3,0x93,0x88,0xe2,0x38,0x88,0xe1,0x31,0x88,0xe0,0x2a,0x88,0xcf,0x86,
++ 0xe5,0xfb,0x87,0xe4,0xe2,0x87,0x93,0x07,0x62,0xd1,0x87,0x12,0xe6,0x12,0xe6,0xe4,
++ 0x36,0x89,0xe3,0x2f,0x89,0xd2,0x09,0xe1,0xb8,0x88,0xcf,0x06,0x10,0x00,0xe1,0x1f,
++ 0x89,0xe0,0xec,0x88,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa3,
++ 0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa5,
++ 0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa6,0x00,0x12,0xff,0xf0,0x9e,
++ 0xa4,0xa7,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa8,0x00,0x12,0xff,0xf0,0x9e,
++ 0xa4,0xa9,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xaa,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xab,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xac,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xad,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,
++ 0xae,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xaf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,
++ 0xb0,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb1,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb3,0x00,0x10,
++ 0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb5,0x00,0xd1,
++ 0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb7,
++ 0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb9,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xba,0x00,0x12,0xff,
++ 0xf0,0x9e,0xa4,0xbb,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbc,0x00,0x12,0xff,
++ 0xf0,0x9e,0xa4,0xbd,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbe,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xbf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa5,0x80,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa5,0x81,0x00,0x94,0x1e,0x93,0x1a,0x92,0x16,0x91,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x9e,0xa5,0x82,0x00,0x12,0xff,0xf0,0x9e,0xa5,0x83,0x00,0x12,
++ 0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ /* nfdi_c0100 */
++ 0x57,0x04,0x01,0x00,0xc6,0xe5,0xac,0x13,0xe4,0x41,0x0c,0xe3,0x7a,0x07,0xe2,0xf3,
++ 0x01,0xc1,0xd0,0x1f,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x15,0x53,0x04,0x01,0x00,
++ 0x52,0x04,0x01,0x00,0x91,0x09,0x10,0x04,0x01,0x00,0x01,0xff,0x00,0x01,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0xe4,0xd4,0x7c,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x41,0xcc,0x80,0x00,0x01,0xff,0x41,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x41,
++ 0xcc,0x82,0x00,0x01,0xff,0x41,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,
++ 0xcc,0x88,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x43,
++ 0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x80,0x00,0x01,
++ 0xff,0x45,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x82,0x00,0x01,0xff,0x45,
++ 0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x80,0x00,0x01,0xff,0x49,
++ 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x82,0x00,0x01,0xff,0x49,0xcc,0x88,
++ 0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x4e,0xcc,0x83,
++ 0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x80,0x00,0x01,0xff,0x4f,0xcc,0x81,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x82,0x00,0x01,0xff,0x4f,0xcc,0x83,0x00,0x10,
++ 0x08,0x01,0xff,0x4f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0x55,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x81,0x00,0x01,
++ 0xff,0x55,0xcc,0x82,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x88,0x00,0x01,
++ 0xff,0x59,0xcc,0x81,0x00,0x01,0x00,0xd4,0x7c,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x80,
++ 0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x82,0x00,0x01,
++ 0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x80,0x00,0x01,
++ 0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,0x01,0xff,0x69,
++ 0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6e,
++ 0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x81,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,0x00,0x01,0xff,0x6f,0xcc,0x83,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x81,
++ 0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x88,
++ 0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x79,0xcc,0x88,
++ 0x00,0xe1,0x9a,0x03,0xe0,0xd3,0x01,0xcf,0x86,0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,
++ 0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x86,0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa8,0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,
++ 0x08,0x01,0xff,0x43,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x82,0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,
++ 0x08,0x01,0xff,0x43,0xcc,0x87,0x00,0x01,0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x43,0xcc,0x8c,0x00,0x01,0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x44,0xcc,0x8c,0x00,0x01,0xff,0x64,0xcc,0x8c,0x00,0xd3,0x34,0xd2,0x14,0x51,
++ 0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x84,0x00,0x01,0xff,0x65,0xcc,0x84,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0x86,
++ 0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,
++ 0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,0x00,0x10,
++ 0x08,0x01,0xff,0x47,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,0x74,0xd3,
++ 0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x87,0x00,0x01,0xff,0x67,
++ 0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,
++ 0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0x82,0x00,0x01,0xff,0x68,0xcc,0x82,
++ 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x83,0x00,0x01,
++ 0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x84,0x00,0x01,0xff,0x69,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x86,0x00,0x01,0xff,0x69,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,
++ 0x00,0xd3,0x30,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x49,0xcc,0x87,0x00,0x01,
++ 0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4a,0xcc,0x82,0x00,0x01,0xff,0x6a,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,
++ 0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x4c,0xcc,0x81,0x00,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,0x4c,0xcc,0xa7,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,0x4c,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x6c,0xcc,0x8c,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x60,0xd3,0x30,0xd2,
++ 0x10,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x4e,0xcc,0x81,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,0x01,0xff,0x4e,0xcc,0xa7,0x00,0x10,
++ 0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,0x4e,0xcc,0x8c,0x00,0xd2,0x10,0x91,
++ 0x0c,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,0x01,0x00,0x01,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x4f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0x84,0x00,0x10,0x08,0x01,
++ 0xff,0x4f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,0xd3,0x34,0xd2,0x14,0x91,
++ 0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8b,0x00,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,
++ 0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,
++ 0x08,0x01,0xff,0x53,0xcc,0xa7,0x00,0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x74,0xd3,
++ 0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x8c,0x00,0x01,0xff,0x73,
++ 0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,
++ 0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,
++ 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x83,0x00,0x01,
++ 0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x84,0x00,0x01,0xff,0x75,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x86,0x00,0x01,0xff,0x75,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,
++ 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x8b,0x00,0x01,
++ 0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa8,0x00,0x01,0xff,0x75,
++ 0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x82,0x00,0x01,0xff,0x77,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x82,0x00,0x01,0xff,0x79,0xcc,0x82,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x59,0xcc,0x88,0x00,0x01,0xff,0x5a,
++ 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,0x5a,0xcc,0x87,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,0x5a,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0x00,0xd0,0x4a,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0xd4,0x2c,0xd3,0x18,0x92,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0x4f,
++ 0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x55,0xcc,0x9b,0x00,0x93,
++ 0x14,0x92,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x75,0xcc,0x9b,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xb4,0xd4,0x24,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x41,0xcc,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,0x49,0xcc,0x8c,0x00,0xd3,0x46,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x4f,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x8c,0x00,0xd1,
++ 0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x84,
++ 0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x55,0xcc,0x88,
++ 0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,
++ 0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,
++ 0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,
++ 0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x88,
++ 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x80,0xd3,0x3a,0xd2,
++ 0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,
++ 0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0x86,0xcc,0x84,0x00,0x01,0xff,
++ 0xc3,0xa6,0xcc,0x84,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0x8c,
++ 0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,
++ 0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa8,
++ 0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa8,
++ 0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc6,
++ 0xb7,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0xd3,0x24,0xd2,0x10,0x91,
++ 0x0c,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,0x01,0x00,0x01,0x00,0x91,0x10,0x10,
++ 0x08,0x01,0xff,0x47,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x04,0x00,0xd2,
++ 0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x4e,0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,
++ 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x8a,0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,
++ 0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xc3,0x86,0xcc,0x81,0x00,0x01,0xff,
++ 0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xc3,0x98,0xcc,0x81,0x00,0x01,0xff,
++ 0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x07,0x02,0xe1,0xae,0x01,0xe0,0x93,0x01,0xcf,0x86,
++ 0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,
++ 0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x91,0x00,
++ 0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x8f,0x00,
++ 0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x91,0x00,0x01,0xff,
++ 0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x8f,0x00,
++ 0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x91,0x00,0x01,0xff,
++ 0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8f,0x00,0x01,0xff,
++ 0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,
++ 0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x8f,0x00,
++ 0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0x91,0x00,0x01,0xff,
++ 0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x8f,0x00,0x01,0xff,
++ 0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,
++ 0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x04,0xff,0x53,0xcc,0xa6,0x00,0x04,0xff,
++ 0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,0x54,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,
++ 0xa6,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,0xff,0x48,0xcc,0x8c,0x00,0x04,0xff,
++ 0x68,0xcc,0x8c,0x00,0xd4,0x68,0xd3,0x20,0xd2,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,
++ 0x07,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,0xff,0x41,0xcc,0x87,0x00,
++ 0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x45,0xcc,
++ 0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x88,0xcc,
++ 0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,
++ 0x4f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,
++ 0x04,0xff,0x4f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0x93,0x30,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x87,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,
++ 0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x59,0xcc,0x84,0x00,0x04,0xff,0x79,0xcc,
++ 0x84,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0xcf,0x86,
++ 0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x08,0x00,0x09,0x00,0x09,0x00,
++ 0x09,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,
++ 0x53,0x04,0x01,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,
++ 0x11,0x04,0x04,0x00,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,0x00,
++ 0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x04,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,0x07,0x00,0xe1,0x35,0x01,0xd0,
++ 0x72,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0xe6,0xd3,0x10,0x52,0x04,0x01,0xe6,0x91,
++ 0x08,0x10,0x04,0x01,0xe6,0x01,0xe8,0x01,0xdc,0x92,0x0c,0x51,0x04,0x01,0xdc,0x10,
++ 0x04,0x01,0xe8,0x01,0xd8,0x01,0xdc,0xd4,0x2c,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,
++ 0x04,0x01,0xdc,0x01,0xca,0x10,0x04,0x01,0xca,0x01,0xdc,0x51,0x04,0x01,0xdc,0x10,
++ 0x04,0x01,0xdc,0x01,0xca,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0xca,0x01,0xdc,0x01,
++ 0xdc,0x01,0xdc,0xd3,0x08,0x12,0x04,0x01,0xdc,0x01,0x01,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x01,0x01,0x01,0xdc,0x01,0xdc,0x91,0x08,0x10,0x04,0x01,0xdc,0x01,0xe6,0x01,
++ 0xe6,0xcf,0x86,0xd5,0x7f,0xd4,0x47,0xd3,0x2e,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,
++ 0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0x10,0x04,0x01,0xe6,0x01,0xff,0xcc,
++ 0x93,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcc,0x88,0xcc,0x81,0x00,0x01,0xf0,0x10,
++ 0x04,0x04,0xe6,0x04,0xdc,0xd2,0x08,0x11,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,
++ 0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x04,0xdc,0x06,0xff,0x00,0xd3,0x18,0xd2,0x0c,
++ 0x51,0x04,0x07,0xe6,0x10,0x04,0x07,0xe6,0x07,0xdc,0x51,0x04,0x07,0xdc,0x10,0x04,
++ 0x07,0xdc,0x07,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe8,0x08,0xdc,0x10,0x04,
++ 0x08,0xdc,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xe9,0x07,0xea,0x10,0x04,0x07,0xea,
++ 0x07,0xe9,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0xea,0x10,0x04,0x04,0xe9,
++ 0x06,0xe6,0x06,0xe6,0x06,0xe6,0xd3,0x13,0x52,0x04,0x0a,0x00,0x91,0x0b,0x10,0x07,
++ 0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x01,0x00,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x06,0x01,0xff,0x3b,0x00,0x10,
++ 0x00,0xd0,0xe1,0xcf,0x86,0xd5,0x7a,0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,
++ 0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
++ 0xce,0x91,0xcc,0x81,0x00,0x01,0xff,0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,0x10,0x09,
++ 0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,
++ 0x9f,0xcc,0x81,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0x01,
++ 0xff,0xce,0xa9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,
++ 0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,
++ 0x4a,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,
++ 0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x88,0x00,
++ 0x01,0xff,0xce,0xa5,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,0x91,0x0f,0x10,
++ 0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x39,0x53,0x04,0x01,0x00,0xd2,0x16,0x51,0x04,
++ 0x01,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,
++ 0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,
++ 0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x0a,0x00,0xd3,
++ 0x26,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xcf,0x92,0xcc,
++ 0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0xd2,0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x01,0x00,0x04,
++ 0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd4,
++ 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x06,
++ 0x00,0x07,0x00,0x12,0x04,0x07,0x00,0x08,0x00,0xe3,0x47,0x04,0xe2,0xbe,0x02,0xe1,
++ 0x07,0x01,0xd0,0x8b,0xcf,0x86,0xd5,0x6c,0xd4,0x53,0xd3,0x30,0xd2,0x1f,0xd1,0x12,
++ 0x10,0x09,0x04,0xff,0xd0,0x95,0xcc,0x80,0x00,0x01,0xff,0xd0,0x95,0xcc,0x88,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x93,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xd0,0x86,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0x9a,0xcc,0x81,0x00,0x04,0xff,0xd0,0x98,0xcc,0x80,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x86,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0x92,
++ 0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x98,0xcc,0x86,0x00,0x01,0x00,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x11,0x91,0x0d,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,
++ 0x57,0x54,0x04,0x01,0x00,0xd3,0x30,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,
++ 0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xd0,0xb3,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xd1,0x96,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,
++ 0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd1,
++ 0x83,0xcc,0x86,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x1a,0x52,0x04,0x01,0x00,
++ 0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb4,0xcc,0x8f,0x00,0x01,0xff,0xd1,
++ 0xb5,0xcc,0x8f,0x00,0x01,0x00,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x94,0x24,0xd3,0x18,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0x51,0x04,0x01,0xe6,
++ 0x10,0x04,0x01,0xe6,0x0a,0xe6,0x92,0x08,0x11,0x04,0x04,0x00,0x06,0x00,0x04,0x00,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xbe,0xd4,0x4a,0xd3,0x2a,0xd2,0x1a,0xd1,0x0d,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x96,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,
++ 0xb6,0xcc,0x86,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
++ 0x06,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
++ 0x06,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,
++ 0x09,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x90,0xcc,0x86,
++ 0x00,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0x90,0xcc,0x88,
++ 0x00,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xd0,0x95,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0xd2,0x16,0x51,0x04,
++ 0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x98,0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,
++ 0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x96,0xcc,0x88,0x00,0x01,0xff,0xd0,
++ 0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x97,0xcc,0x88,0x00,0x01,0xff,0xd0,
++ 0xb7,0xcc,0x88,0x00,0xd4,0x74,0xd3,0x3a,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,
++ 0x01,0xff,0xd0,0x98,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0x9e,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,
++ 0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0xa8,0xcc,0x88,0x00,0x01,
++ 0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0xad,0xcc,0x88,
++ 0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x84,
++ 0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xd0,0xa3,0xcc,0x88,0x00,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x10,0x09,
++ 0x01,0xff,0xd0,0xa3,0xcc,0x8b,0x00,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0x91,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0xa7,0xcc,0x88,0x00,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,
++ 0x08,0x00,0x92,0x16,0x91,0x12,0x10,0x09,0x01,0xff,0xd0,0xab,0xcc,0x88,0x00,0x01,
++ 0xff,0xd1,0x8b,0xcc,0x88,0x00,0x09,0x00,0x09,0x00,0xd1,0x74,0xd0,0x36,0xcf,0x86,
++ 0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
++ 0xd4,0x10,0x93,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,0x0b,0x00,0x0c,0x00,0x10,0x00,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0xba,
++ 0xcf,0x86,0xd5,0x4c,0xd4,0x24,0x53,0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x14,0x00,0x01,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x10,0x00,0x10,0x04,0x10,0x00,0x0d,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x02,0xdc,0x02,0xe6,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,0x02,0xe6,
++ 0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xde,0x02,0xdc,0x02,0xe6,0xd4,0x2c,
++ 0xd3,0x10,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x08,0xdc,0x02,0xdc,0x02,0xdc,
++ 0xd2,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,0x02,0xe6,0xd1,0x08,0x10,0x04,
++ 0x02,0xe6,0x02,0xde,0x10,0x04,0x02,0xe4,0x02,0xe6,0xd3,0x20,0xd2,0x10,0xd1,0x08,
++ 0x10,0x04,0x01,0x0a,0x01,0x0b,0x10,0x04,0x01,0x0c,0x01,0x0d,0xd1,0x08,0x10,0x04,
++ 0x01,0x0e,0x01,0x0f,0x10,0x04,0x01,0x10,0x01,0x11,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x01,0x12,0x01,0x13,0x10,0x04,0x09,0x13,0x01,0x14,0xd1,0x08,0x10,0x04,0x01,0x15,
++ 0x01,0x16,0x10,0x04,0x01,0x00,0x01,0x17,0xcf,0x86,0xd5,0x28,0x94,0x24,0x93,0x20,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x18,0x10,0x04,0x01,0x19,0x01,0x00,
++ 0xd1,0x08,0x10,0x04,0x02,0xe6,0x08,0xdc,0x10,0x04,0x08,0x00,0x08,0x12,0x00,0x00,
++ 0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x93,0x10,
++ 0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xe2,0xfb,0x01,0xe1,0x2b,0x01,0xd0,0xa8,0xcf,0x86,0xd5,0x55,0xd4,0x28,0xd3,0x10,
++ 0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x0a,0x00,0xd2,0x0c,
++ 0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x08,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x07,0x00,0x07,0x00,0xd3,0x0c,0x52,0x04,0x07,0xe6,0x11,0x04,0x07,0xe6,0x0a,0xe6,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x0a,0x1e,0x0a,0x1f,0x10,0x04,0x0a,0x20,0x01,0x00,
++ 0xd1,0x09,0x10,0x05,0x0f,0xff,0x00,0x00,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0xd4,
++ 0x3d,0x93,0x39,0xd2,0x1a,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xd8,0xa7,0xd9,0x93,0x00,0x01,0xff,0xd8,0xa7,0xd9,0x94,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xd9,0x88,0xd9,0x94,0x00,0x01,0xff,0xd8,0xa7,0xd9,0x95,0x00,0x10,
++ 0x09,0x01,0xff,0xd9,0x8a,0xd9,0x94,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x86,
++ 0xd5,0x5c,0xd4,0x20,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x01,0x1b,0xd1,0x08,0x10,0x04,0x01,0x1c,0x01,0x1d,0x10,0x04,0x01,0x1e,
++ 0x01,0x1f,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x20,0x01,0x21,0x10,0x04,
++ 0x01,0x22,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x07,0xdc,
++ 0x07,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0xe6,0x08,0xe6,0x08,0xe6,0xd1,0x08,
++ 0x10,0x04,0x08,0xdc,0x08,0xe6,0x10,0x04,0x08,0xe6,0x0c,0xdc,0xd4,0x10,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x01,0x23,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,
++ 0x11,0x04,0x04,0x00,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,
++ 0xcf,0x86,0xd5,0x5b,0xd4,0x2e,0xd3,0x1e,0x92,0x1a,0xd1,0x0d,0x10,0x09,0x01,0xff,
++ 0xdb,0x95,0xd9,0x94,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xdb,0x81,0xd9,0x94,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x04,0x00,0xd3,0x19,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xdb,0x92,0xd9,0x94,0x00,0x11,0x04,0x01,0x00,0x01,0xe6,0x52,0x04,0x01,0xe6,0xd1,
++ 0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0xd4,0x38,0xd3,
++ 0x1c,0xd2,0x0c,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,0x01,0xdc,0xd1,0x08,0x10,
++ 0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0xd2,0x10,0xd1,0x08,0x10,
++ 0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0xdc,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,
++ 0xe6,0x01,0xdc,0x07,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x04,
++ 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,0xd1,0xc8,0xd0,0x76,0xcf,
++ 0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,
++ 0x00,0x04,0x24,0x04,0x00,0x04,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,
++ 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,0x00,0x07,0x00,0xd3,0x1c,0xd2,
++ 0x0c,0x91,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,
++ 0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x0c,0x51,0x04,0x04,0xdc,0x10,
++ 0x04,0x04,0xe6,0x04,0xdc,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,
++ 0xdc,0x04,0xe6,0xcf,0x86,0xd5,0x3c,0x94,0x38,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x04,
++ 0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,
++ 0x04,0x04,0xdc,0x04,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,
++ 0x04,0x04,0xe6,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x08,
++ 0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0a,
++ 0x00,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x14,0x53,0x04,0x09,0x00,0x92,0x0c,0x51,
++ 0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xe6,0x09,0xe6,0xd3,0x10,0x92,0x0c,0x51,
++ 0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,
++ 0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x14,0xdc,0x14,
++ 0x00,0xe4,0xf8,0x57,0xe3,0x45,0x3f,0xe2,0xf4,0x3e,0xe1,0xc7,0x2c,0xe0,0x21,0x10,
++ 0xcf,0x86,0xc5,0xe4,0x80,0x08,0xe3,0xcb,0x03,0xe2,0x61,0x01,0xd1,0x94,0xd0,0x5a,
++ 0xcf,0x86,0xd5,0x20,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,
++ 0x0b,0x00,0x0b,0xe6,0x92,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,0x0b,0x00,0x0b,0xe6,
++ 0x0b,0xe6,0xd4,0x24,0xd3,0x10,0x52,0x04,0x0b,0xe6,0x91,0x08,0x10,0x04,0x0b,0x00,
++ 0x0b,0xe6,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,
++ 0x11,0x04,0x0b,0xe6,0x00,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,
++ 0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x86,0xd5,0x20,0x54,0x04,0x0c,0x00,
++ 0x53,0x04,0x0c,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x0c,0xdc,0x0c,0xdc,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x13,0x00,
++ 0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x4a,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x20,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0d,0x00,0x10,0x00,0x0d,0x00,0x0d,0x00,0x52,0x04,0x0d,0x00,0x91,0x08,
++ 0x10,0x04,0x0d,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x10,0x00,0x11,0x00,0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x12,0x00,
++ 0x52,0x04,0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,
++ 0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x14,0xdc,
++ 0x12,0xe6,0x12,0xe6,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x12,0xe6,0x10,0x04,
++ 0x12,0x00,0x11,0xdc,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xdc,0x0d,0xe6,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xe6,0x91,0x08,0x10,0x04,0x0d,0xe6,
++ 0x0d,0xdc,0x0d,0xdc,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0x1b,0x0d,0x1c,
++ 0x10,0x04,0x0d,0x1d,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xdc,0x0d,0xe6,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x10,0x04,0x0d,0xdc,0x0d,0xe6,
++ 0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x10,0xe6,0xe1,0x3a,0x01,0xd0,0x77,0xcf,
++ 0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x01,
++ 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0xd4,0x1b,0x53,0x04,0x01,0x00,0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe0,0xa4,0xa8,0xe0,0xa4,0xbc,0x00,0x01,0x00,0x01,0x00,0xd3,0x26,0xd2,0x13,
++ 0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xb0,0xe0,0xa4,0xbc,0x00,0x01,
++ 0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xb3,0xe0,0xa4,0xbc,0x00,0x01,0x00,
++ 0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x91,0x08,0x10,0x04,0x01,0x07,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x8c,0xd4,0x18,0x53,0x04,0x01,0x00,0x52,0x04,
++ 0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x10,0x04,0x0b,0x00,0x0c,0x00,
++ 0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x10,0x04,0x01,0xdc,
++ 0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x0b,0x00,0x0c,0x00,0xd2,0x2c,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xe0,0xa4,0x95,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0x96,
++ 0xe0,0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x97,0xe0,0xa4,0xbc,0x00,0x01,
++ 0xff,0xe0,0xa4,0x9c,0xe0,0xa4,0xbc,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa4,
++ 0xa1,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xa2,0xe0,0xa4,0xbc,0x00,0x10,0x0b,
++ 0x01,0xff,0xe0,0xa4,0xab,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xaf,0xe0,0xa4,
++ 0xbc,0x00,0x54,0x04,0x01,0x00,0xd3,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x0a,0x00,0x10,0x04,0x0a,0x00,0x0c,0x00,0x0c,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x10,0x00,0x0b,0x00,0x10,0x04,0x0b,0x00,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,
++ 0x08,0x00,0x09,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
++ 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd3,0x18,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,
++ 0x91,0x08,0x10,0x04,0x01,0x07,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x42,
++ 0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa6,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,
++ 0xff,0xe0,0xa7,0x87,0xe0,0xa7,0x97,0x00,0x01,0x09,0x10,0x04,0x08,0x00,0x00,0x00,
++ 0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa6,0xa1,0xe0,0xa6,0xbc,
++ 0x00,0x01,0xff,0xe0,0xa6,0xa2,0xe0,0xa6,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0xff,
++ 0xe0,0xa6,0xaf,0xe0,0xa6,0xbc,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,
++ 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x14,0xe6,0x00,
++ 0x00,0xe2,0x48,0x02,0xe1,0x4f,0x01,0xd0,0xa4,0xcf,0x86,0xd5,0x4c,0xd4,0x34,0xd3,
++ 0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x10,0x04,0x01,0x00,0x07,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x2e,0xd2,0x17,0xd1,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa8,0xb2,
++ 0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,
++ 0xe0,0xa8,0xb8,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,
++ 0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x00,0x00,0x01,0x00,0xcf,0x86,0xd5,0x80,0xd4,
++ 0x34,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,
++ 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x01,
++ 0x09,0x00,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x00,
++ 0x00,0x00,0x00,0xd2,0x25,0xd1,0x0f,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xa8,0x96,
++ 0xe0,0xa8,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0x97,0xe0,0xa8,0xbc,0x00,0x01,
++ 0xff,0xe0,0xa8,0x9c,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x10,0x0b,0x01,0xff,0xe0,0xa8,0xab,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd4,0x10,0x93,
++ 0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x14,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,0x14,0x00,0x00,
++ 0x00,0x00,0x00,0xd0,0x82,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,
++ 0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x10,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,
++ 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x07,
++ 0x00,0x07,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x0d,0x00,0x07,0x00,0x00,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x11,0x00,0x13,0x00,0x13,0x00,0xe1,0x24,0x01,0xd0,0x86,0xcf,0x86,
++ 0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,
++ 0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,
++ 0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x45,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,
++ 0x10,0x04,0x0a,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,
++ 0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x96,0x00,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0xe0,0xad,0x87,0xe0,0xac,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,
++ 0xe0,0xad,0x87,0xe0,0xad,0x97,0x00,0x01,0x09,0x00,0x00,0xd3,0x0c,0x52,0x04,0x00,
++ 0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,
++ 0xff,0xe0,0xac,0xa1,0xe0,0xac,0xbc,0x00,0x01,0xff,0xe0,0xac,0xa2,0xe0,0xac,0xbc,
++ 0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,
++ 0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x0c,0x00,0x0c,0x00,0x00,0x00,0xd0,0xb1,0xcf,
++ 0x86,0xd5,0x63,0xd4,0x28,0xd3,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd3,0x1f,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,
++ 0xae,0x92,0xe0,0xaf,0x97,0x00,0x01,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x01,0x00,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
++ 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x08,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0x61,0xd4,0x45,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,0xae,0xbe,0x00,0x01,0xff,0xe0,
++ 0xaf,0x87,0xe0,0xae,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,
++ 0xaf,0x97,0x00,0x01,0x09,0x00,0x00,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0a,
++ 0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x00,
++ 0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x08,
++ 0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x07,0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
++ 0x00,0x00,0x00,0xe3,0x1c,0x04,0xe2,0x1a,0x02,0xd1,0xf3,0xd0,0x76,0xcf,0x86,0xd5,
++ 0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,
++ 0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,
++ 0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0xd2,
++ 0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x53,0xd4,0x2f,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,
++ 0xb1,0x86,0xe0,0xb1,0x96,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x54,0x10,0x04,0x01,0x5b,0x00,0x00,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,
++ 0x11,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,0x00,
++ 0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0a,0x00,0xd0,0x76,0xcf,0x86,
++ 0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x10,0x00,
++ 0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,
++ 0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
++ 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x07,0x07,0x07,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x82,0xd4,0x5e,0xd3,0x2a,0xd2,0x13,0x91,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe0,0xb2,0xbf,0xe0,0xb3,0x95,0x00,0x01,0x00,0x01,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,
++ 0x95,0x00,0xd2,0x28,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x96,
++ 0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x82,0x00,0x01,0xff,
++ 0xe0,0xb3,0x86,0xe0,0xb3,0x82,0xe0,0xb3,0x95,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,0x00,
++ 0x09,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,
++ 0x10,0x04,0x00,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xe1,0x06,0x01,0xd0,0x6e,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x13,0x00,0x10,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,
++ 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
++ 0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x0c,0x00,0x13,0x09,0x91,0x08,0x10,0x04,0x13,0x09,0x0a,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x65,0xd4,0x45,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,
++ 0x04,0x0a,0x00,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb4,0xbe,0x00,0x01,0xff,0xe0,0xb5,
++ 0x87,0xe0,0xb4,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb5,
++ 0x97,0x00,0x01,0x09,0x10,0x04,0x0c,0x00,0x12,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x01,0x00,0x52,0x04,0x12,0x00,0x51,0x04,
++ 0x12,0x00,0x10,0x04,0x12,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,
++ 0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x0c,0x52,0x04,
++ 0x0a,0x00,0x11,0x04,0x0a,0x00,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,
++ 0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x5a,0xcf,0x86,0xd5,0x34,0xd4,0x18,0x93,0x14,
++ 0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x04,0x00,
++ 0x04,0x00,0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x54,0x04,
++ 0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,
++ 0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x04,0x00,0x00,0x00,
++ 0xcf,0x86,0xd5,0x77,0xd4,0x28,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,
++ 0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x04,0x09,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0xd3,0x14,0x52,0x04,
++ 0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0xd2,0x13,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8a,
++ 0x00,0x04,0x00,0xd1,0x19,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0x00,
++ 0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0xe0,0xb7,0x8a,0x00,0x10,0x0b,0x04,0xff,
++ 0xe0,0xb7,0x99,0xe0,0xb7,0x9f,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x00,
++ 0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,
++ 0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe2,
++ 0x31,0x01,0xd1,0x58,0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x67,0x10,0x04,
++ 0x01,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xcf,0x86,
++ 0x95,0x18,0xd4,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,0x6b,0x01,0x00,0x53,0x04,
++ 0x01,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd0,0x9e,0xcf,0x86,0xd5,0x54,
++ 0xd4,0x3c,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x15,0x00,
++ 0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x15,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x15,0x00,0xd3,0x08,0x12,0x04,
++ 0x15,0x00,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,
++ 0x01,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x15,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,
++ 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x76,0x10,0x04,0x15,0x09,
++ 0x01,0x00,0x11,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0x95,0x34,0xd4,0x20,0xd3,0x14,
++ 0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x52,0x04,0x01,0x7a,0x11,0x04,0x01,0x00,0x00,0x00,0x53,0x04,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x01,0x00,0x0d,0x00,0x00,0x00,
++ 0xe1,0x2b,0x01,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,0x02,0x00,0x53,0x04,0x02,
++ 0x00,0x92,0x08,0x11,0x04,0x02,0xdc,0x02,0x00,0x02,0x00,0x54,0x04,0x02,0x00,0xd3,
++ 0x14,0x52,0x04,0x02,0x00,0xd1,0x08,0x10,0x04,0x02,0x00,0x02,0xdc,0x10,0x04,0x02,
++ 0x00,0x02,0xdc,0x92,0x0c,0x91,0x08,0x10,0x04,0x02,0x00,0x02,0xd8,0x02,0x00,0x02,
++ 0x00,0xcf,0x86,0xd5,0x73,0xd4,0x36,0xd3,0x17,0x92,0x13,0x51,0x04,0x02,0x00,0x10,
++ 0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x82,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,
++ 0x02,0xff,0xe0,0xbd,0x8c,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd3,0x26,0xd2,0x13,0x51,
++ 0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x91,0xe0,0xbe,0xb7,0x00,0x02,0x00,
++ 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x96,0xe0,0xbe,0xb7,
++ 0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x9b,0xe0,0xbe,
++ 0xb7,0x00,0x02,0x00,0x02,0x00,0xd4,0x27,0x53,0x04,0x02,0x00,0xd2,0x17,0xd1,0x0f,
++ 0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x80,0xe0,0xbe,0xb5,0x00,0x10,0x04,0x04,
++ 0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0xd3,0x35,0xd2,
++ 0x17,0xd1,0x08,0x10,0x04,0x00,0x00,0x02,0x81,0x10,0x04,0x02,0x82,0x02,0xff,0xe0,
++ 0xbd,0xb1,0xe0,0xbd,0xb2,0x00,0xd1,0x0f,0x10,0x04,0x02,0x84,0x02,0xff,0xe0,0xbd,
++ 0xb1,0xe0,0xbd,0xb4,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xb2,0xe0,0xbe,0x80,0x00,
++ 0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xb3,0xe0,0xbe,0x80,
++ 0x00,0x02,0x00,0x02,0x82,0x11,0x04,0x02,0x82,0x02,0x00,0xd0,0xd3,0xcf,0x86,0xd5,
++ 0x65,0xd4,0x27,0xd3,0x1f,0xd2,0x13,0x91,0x0f,0x10,0x04,0x02,0x82,0x02,0xff,0xe0,
++ 0xbd,0xb1,0xe0,0xbe,0x80,0x00,0x02,0xe6,0x91,0x08,0x10,0x04,0x02,0x09,0x02,0x00,
++ 0x02,0xe6,0x12,0x04,0x02,0x00,0x0c,0x00,0xd3,0x1f,0xd2,0x13,0x51,0x04,0x02,0x00,
++ 0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x92,0xe0,0xbe,0xb7,0x00,0x51,0x04,0x02,
++ 0x00,0x10,0x04,0x04,0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,
++ 0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x9c,0xe0,0xbe,
++ 0xb7,0x00,0x02,0x00,0xd4,0x3d,0xd3,0x26,0xd2,0x13,0x51,0x04,0x02,0x00,0x10,0x0b,
++ 0x02,0xff,0xe0,0xbe,0xa1,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x51,0x04,0x02,0x00,0x10,
++ 0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0xa6,0xe0,0xbe,0xb7,0x00,0x52,0x04,0x02,0x00,
++ 0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xab,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x04,
++ 0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x02,0x00,0x02,0x00,0x02,
++ 0x00,0xd2,0x13,0x91,0x0f,0x10,0x04,0x04,0x00,0x02,0xff,0xe0,0xbe,0x90,0xe0,0xbe,
++ 0xb5,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,
++ 0x95,0x4c,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,
++ 0x04,0xdc,0x04,0x00,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0x10,0x04,0x0a,0x00,0x04,0x00,0xd3,0x14,0xd2,0x08,0x11,0x04,0x08,0x00,0x0a,0x00,
++ 0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,
++ 0x0b,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
++ 0xe5,0xf7,0x04,0xe4,0x79,0x03,0xe3,0x7b,0x01,0xe2,0x04,0x01,0xd1,0x7f,0xd0,0x65,
++ 0xcf,0x86,0x55,0x04,0x04,0x00,0xd4,0x33,0xd3,0x1f,0xd2,0x0c,0x51,0x04,0x04,0x00,
++ 0x10,0x04,0x0a,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe1,0x80,
++ 0xa5,0xe1,0x80,0xae,0x00,0x04,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0a,0x00,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x04,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x04,0x00,0x04,
++ 0x07,0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0x09,0x10,0x04,0x0a,0x09,0x0a,
++ 0x00,0x0a,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,
++ 0x08,0x11,0x04,0x04,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x2e,0xcf,0x86,0x95,
++ 0x28,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,
++ 0x00,0x0a,0xdc,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,0x0b,
++ 0x00,0x11,0x04,0x0b,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,
++ 0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x52,
++ 0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,0x00,0x00,0x00,0x01,0x00,0x54,
++ 0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x06,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x06,0x00,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0d,0x00,0x0d,0x00,0xd1,0x3e,0xd0,
++ 0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x1d,0x54,0x04,0x01,0x00,0x53,0x04,0x01,
++ 0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
++ 0x00,0x01,0xff,0x00,0x94,0x15,0x93,0x11,0x92,0x0d,0x91,0x09,0x10,0x05,0x01,0xff,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x0b,0x00,0x0b,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,
++ 0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x0b,
++ 0x00,0xe2,0x21,0x01,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,
++ 0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,
++ 0x04,0x00,0x04,0x00,0xcf,0x86,0x95,0x48,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,
++ 0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,
++ 0xd0,0x62,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,
++ 0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,
++ 0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,
++ 0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
++ 0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,
++ 0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,
++ 0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x93,0x10,0x52,0x04,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x94,0x14,0x53,0x04,
++ 0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
++ 0x04,0x00,0xd1,0x9c,0xd0,0x3e,0xcf,0x86,0x95,0x38,0xd4,0x14,0x53,0x04,0x04,0x00,
++ 0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,0x14,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,
++ 0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
++ 0x04,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,
++ 0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0xd2,0x0c,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x0c,0xe6,0x10,0x04,0x0c,0xe6,0x08,0xe6,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x08,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x53,0x04,0x04,0x00,
++ 0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,
++ 0xcf,0x86,0x95,0x14,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,
++ 0x08,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,
++ 0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x11,0x00,
++ 0x00,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0xd3,0x30,0xd2,0x2a,
++ 0xd1,0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0b,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,
++ 0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd2,0x6c,0xd1,0x24,
++ 0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,
++ 0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0b,0x00,
++ 0x0b,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,
++ 0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,
++ 0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x04,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x46,0xcf,0x86,0xd5,0x28,
++ 0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x00,
++ 0x00,0x00,0x06,0x00,0x93,0x10,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x09,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x06,0x00,0x93,0x14,0x52,0x04,0x06,0x00,
++ 0xd1,0x08,0x10,0x04,0x06,0x09,0x06,0x00,0x10,0x04,0x06,0x00,0x00,0x00,0x00,0x00,
++ 0xcf,0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x06,0x00,0x00,0x00,
++ 0x00,0x00,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x00,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,
++ 0x00,0x00,0x06,0x00,0x00,0x00,0x00,0x00,0xd0,0x1b,0xcf,0x86,0x55,0x04,0x04,0x00,
++ 0x54,0x04,0x04,0x00,0x93,0x0d,0x52,0x04,0x04,0x00,0x11,0x05,0x04,0xff,0x00,0x04,
++ 0x00,0x04,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x09,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,
++ 0x08,0x10,0x04,0x04,0x00,0x07,0xe6,0x00,0x00,0xd4,0x10,0x53,0x04,0x04,0x00,0x92,
++ 0x08,0x11,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x08,0x11,
++ 0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xe4,0xb7,0x03,0xe3,0x58,0x01,0xd2,0x8f,0xd1,
++ 0x53,0xd0,0x35,0xcf,0x86,0x95,0x2f,0xd4,0x1f,0x53,0x04,0x04,0x00,0xd2,0x0d,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x04,0xff,0x00,0x51,0x05,0x04,0xff,0x00,0x10,
++ 0x05,0x04,0xff,0x00,0x00,0x00,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,
++ 0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,
++ 0x53,0x04,0x04,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x04,0x00,0x94,0x18,0x53,0x04,0x04,0x00,
++ 0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0xe4,0x10,0x04,0x0a,0x00,0x00,0x00,
++ 0x00,0x00,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,
++ 0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x42,
++ 0xcf,0x86,0xd5,0x1c,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,
++ 0xd1,0x08,0x10,0x04,0x07,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd4,0x0c,
++ 0x53,0x04,0x07,0x00,0x12,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x10,
++ 0xd1,0x08,0x10,0x04,0x07,0x00,0x07,0xde,0x10,0x04,0x07,0xe6,0x07,0xdc,0x00,0x00,
++ 0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd4,0x10,0x53,0x04,0x07,0x00,
++ 0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x07,0x00,
++ 0x91,0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,0x86,
++ 0x55,0x04,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,
++ 0x0b,0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x95,0x28,0xd4,0x10,0x53,0x04,0x08,0x00,
++ 0x92,0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0xd2,0x0c,
++ 0x51,0x04,0x08,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x08,0x00,
++ 0x07,0x00,0xd2,0xe4,0xd1,0x80,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x54,0x04,0x08,0x00,
++ 0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x08,0xe6,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x08,0xdc,0x08,0x00,0x08,0x00,0x11,0x04,0x00,0x00,
++ 0x08,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,
++ 0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xd4,0x14,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,
++ 0x0b,0x00,0xd3,0x10,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,
++ 0x0b,0xe6,0x52,0x04,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xe6,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x0b,0xdc,0xd0,0x5e,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0b,0x00,
++ 0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,
++ 0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd4,0x10,0x53,0x04,0x0b,0x00,0x52,0x04,
++ 0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x10,0xe6,0x91,0x08,
++ 0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0xdc,0xd2,0x0c,0x51,0x04,0x10,0xdc,0x10,0x04,
++ 0x10,0xdc,0x10,0xe6,0xd1,0x08,0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0x04,0x10,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xe1,0x1e,0x01,0xd0,0xaa,0xcf,0x86,0xd5,0x6e,0xd4,
++ 0x53,0xd3,0x17,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,
++ 0xac,0x85,0xe1,0xac,0xb5,0x00,0x09,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x09,0xff,
++ 0xe1,0xac,0x87,0xe1,0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x89,
++ 0xe1,0xac,0xb5,0x00,0x09,0x00,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8b,0xe1,
++ 0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8d,0xe1,0xac,0xb5,0x00,
++ 0x09,0x00,0x93,0x17,0x92,0x13,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,
++ 0x91,0xe1,0xac,0xb5,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x54,0x04,0x09,0x00,0xd3,
++ 0x10,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x07,0x09,0x00,0x09,0x00,0xd2,
++ 0x13,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xba,0xe1,0xac,
++ 0xb5,0x00,0x91,0x0f,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xbc,0xe1,0xac,0xb5,
++ 0x00,0x09,0x00,0xcf,0x86,0xd5,0x3d,0x94,0x39,0xd3,0x31,0xd2,0x25,0xd1,0x16,0x10,
++ 0x0b,0x09,0xff,0xe1,0xac,0xbe,0xe1,0xac,0xb5,0x00,0x09,0xff,0xe1,0xac,0xbf,0xe1,
++ 0xac,0xb5,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xad,0x82,0xe1,0xac,0xb5,0x00,
++ 0x91,0x08,0x10,0x04,0x09,0x09,0x09,0x00,0x09,0x00,0x12,0x04,0x09,0x00,0x00,0x00,
++ 0x09,0x00,0xd4,0x1c,0x53,0x04,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,
++ 0x09,0x00,0x09,0xe6,0x91,0x08,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0xe6,0xd3,0x08,
++ 0x12,0x04,0x09,0xe6,0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,
++ 0x00,0x00,0x00,0x00,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x18,0x53,0x04,
++ 0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x09,0x0d,0x09,0x11,0x04,
++ 0x0d,0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x0d,0x00,
++ 0x0d,0x00,0xcf,0x86,0x55,0x04,0x0c,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x0c,0x00,
++ 0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x07,0x0c,0x00,0x0c,0x00,0xd3,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x0c,0x09,0x00,0x00,0x12,0x04,0x00,0x00,0x0c,0x00,0xe3,0xb2,
++ 0x01,0xe2,0x09,0x01,0xd1,0x4c,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,
++ 0x0a,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,
++ 0x0a,0x07,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,
++ 0xcf,0x86,0x95,0x1c,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,
++ 0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,
++ 0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x54,0x04,0x14,0x00,
++ 0x53,0x04,0x14,0x00,0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x14,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x08,
++ 0x13,0x04,0x0d,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,
++ 0x0b,0xe6,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x01,0x0b,0xdc,0x0b,0xdc,0x92,0x08,
++ 0x11,0x04,0x0b,0xdc,0x0b,0xe6,0x0b,0xdc,0xd4,0x28,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x01,0x0b,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x0b,0x01,0x0b,0x00,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xdc,0x0b,0x00,
++ 0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0d,0x00,0xd1,0x08,
++ 0x10,0x04,0x0d,0xe6,0x0d,0x00,0x10,0x04,0x0d,0x00,0x13,0x00,0x92,0x0c,0x51,0x04,
++ 0x10,0xe6,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,
++ 0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x94,0x0c,0x53,0x04,0x07,0x00,0x12,0x04,
++ 0x07,0x00,0x08,0x00,0x08,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0xd5,0x40,
++ 0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0xe6,0x10,0x04,0x08,0xdc,0x08,0xe6,
++ 0x09,0xe6,0xd2,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x0a,0xe6,0xd1,0x08,
++ 0x10,0x04,0x0a,0xe6,0x0a,0xea,0x10,0x04,0x0a,0xd6,0x0a,0xdc,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x0a,0xca,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0xd4,0x14,
++ 0x93,0x10,0x52,0x04,0x0a,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xe6,
++ 0x10,0xe6,0xd3,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x13,0xe8,
++ 0x13,0xe4,0xd2,0x10,0xd1,0x08,0x10,0x04,0x13,0xe4,0x13,0xdc,0x10,0x04,0x00,0x00,
++ 0x12,0xe6,0xd1,0x08,0x10,0x04,0x0c,0xe9,0x0b,0xdc,0x10,0x04,0x09,0xe6,0x09,0xdc,
++ 0xe2,0x80,0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa5,0x00,0x01,0xff,
++ 0x61,0xcc,0xa5,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x42,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,
++ 0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x43,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,
++ 0x63,0xcc,0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x87,0x00,0x01,0xff,
++ 0x64,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa3,0x00,0x01,0xff,
++ 0x64,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,
++ 0xb1,0x00,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa7,0x00,
++ 0x01,0xff,0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xad,0x00,0x01,0xff,
++ 0x64,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x80,0x00,
++ 0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,
++ 0x81,0x00,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x45,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,
++ 0x45,0xcc,0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x45,0xcc,0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,
++ 0x01,0xff,0x46,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,
++ 0x84,0x00,0x10,0x08,0x01,0xff,0x48,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,
++ 0x10,0x08,0x01,0xff,0x48,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,
++ 0x10,0x08,0x01,0xff,0x48,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x49,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,
++ 0x01,0xff,0x49,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x81,0x00,0x01,0xff,
++ 0x6b,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,
++ 0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,
++ 0xb1,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,
++ 0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,
++ 0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xb1,0x00,0x01,0xff,
++ 0x6c,0xcc,0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4c,0xcc,0xad,0x00,0x01,0xff,
++ 0x6c,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x4d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,
++ 0x81,0x00,0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x4d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,
++ 0xff,0x4d,0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x4e,0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x4e,
++ 0xcc,0xa3,0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x4e,0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x4e,
++ 0xcc,0xad,0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,
++ 0xcc,0x83,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,
++ 0xff,0x4f,0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,
++ 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x80,0x00,0x01,
++ 0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x81,
++ 0x00,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x50,
++ 0xcc,0x81,0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x50,0xcc,0x87,
++ 0x00,0x01,0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,
++ 0xcc,0x87,0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa3,
++ 0x00,0x01,0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x52,0xcc,0xa3,
++ 0xcc,0x84,0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x52,
++ 0xcc,0xb1,0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,
++ 0x08,0x01,0xff,0x53,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x53,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,
++ 0x00,0x10,0x0a,0x01,0xff,0x53,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,
++ 0xcc,0x87,0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x53,0xcc,0xa3,0xcc,0x87,
++ 0x00,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0x87,
++ 0x00,0x01,0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0xa3,
++ 0x00,0x01,0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xb1,0x00,0x01,
++ 0xff,0x74,0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,
++ 0xcc,0xad,0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa4,
++ 0x00,0x01,0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0xb0,
++ 0x00,0x01,0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xad,0x00,0x01,
++ 0xff,0x75,0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x83,
++ 0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x55,
++ 0xcc,0x84,0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x56,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,
++ 0xff,0x56,0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x10,0x02,0xcf,0x86,
++ 0xd5,0xe1,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,
++ 0x80,0x00,0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x81,0x00,
++ 0x01,0xff,0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x88,0x00,
++ 0x01,0xff,0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x87,0x00,0x01,0xff,
++ 0x77,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0xa3,0x00,
++ 0x01,0xff,0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x58,0xcc,0x87,0x00,0x01,0xff,
++ 0x78,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x58,0xcc,0x88,0x00,0x01,0xff,
++ 0x78,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,
++ 0x87,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0x82,0x00,
++ 0x01,0xff,0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x5a,0xcc,0xa3,0x00,0x01,0xff,
++ 0x7a,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0xb1,0x00,0x01,0xff,
++ 0x7a,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x68,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,
++ 0x88,0x00,0x92,0x1d,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,
++ 0x79,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x02,0xff,0xc5,0xbf,0xcc,0x87,0x00,0x0a,
++ 0x00,0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa3,
++ 0x00,0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x89,0x00,0x01,
++ 0xff,0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x81,
++ 0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,
++ 0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,
++ 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,
++ 0xcc,0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x82,0x00,0x01,
++ 0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x81,
++ 0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,
++ 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,
++ 0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x83,0x00,0x01,
++ 0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x86,
++ 0x00,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x45,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x45,
++ 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,
++ 0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,
++ 0xcc,0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,
++ 0xd4,0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,
++ 0x80,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,
++ 0x82,0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x45,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,
++ 0x10,0x0a,0x01,0xff,0x45,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,
++ 0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x89,0x00,0x01,0xff,
++ 0x69,0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,
++ 0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,
++ 0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x81,0x00,
++ 0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,
++ 0x80,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x4f,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,
++ 0x01,0xff,0x4f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,
++ 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
++ 0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x81,0x00,
++ 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,
++ 0x9b,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,
++ 0x4f,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,
++ 0xd3,0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x83,0x00,
++ 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,
++ 0xa3,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x55,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,
++ 0x89,0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x55,0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,
++ 0x01,0xff,0x55,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,
++ 0x9b,0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,
++ 0x75,0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x55,0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,
++ 0x01,0xff,0x59,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x59,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,
++ 0x59,0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0x92,0x14,0x91,0x10,0x10,0x08,
++ 0x01,0xff,0x59,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x0a,0x00,0x0a,0x00,
++ 0xe1,0xc0,0x04,0xe0,0x80,0x02,0xcf,0x86,0xe5,0x2d,0x01,0xd4,0xa8,0xd3,0x54,0xd2,
++ 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,
++ 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
++ 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,
++ 0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x93,0x00,0x01,0xff,
++ 0xce,0x91,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0x00,
++ 0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,
++ 0x91,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x81,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,
++ 0xcd,0x82,0x00,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,
++ 0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,
++ 0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,
++ 0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,
++ 0x93,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x95,0xcc,
++ 0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0x95,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,
++ 0xcc,0x81,0x00,0x00,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,
++ 0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,
++ 0xce,0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,
++ 0x82,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0x97,0xcc,0x93,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,
++ 0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0x00,
++ 0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,
++ 0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x54,0xd2,
++ 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,
++ 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,
++ 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,
++ 0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x93,0x00,0x01,0xff,
++ 0xce,0x99,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcc,0x80,0x00,
++ 0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,
++ 0x99,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x81,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,
++ 0xcd,0x82,0x00,0xcf,0x86,0xe5,0x13,0x01,0xd4,0x84,0xd3,0x42,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,
++ 0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0x9f,0xcc,0x93,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,
++ 0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x54,0xd2,0x28,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,
++ 0x94,0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,
++ 0x85,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,
++ 0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
++ 0xcf,0x85,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,
++ 0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,0x10,0x04,
++ 0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,0x00,0x01,
++ 0xff,0xce,0xa5,0xcc,0x94,0xcd,0x82,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,
++ 0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xa9,0xcc,0x93,0x00,0x01,0xff,0xce,0xa9,0xcc,
++ 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
++ 0xa9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
++ 0xce,0xa9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0x00,
++ 0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0xb1,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0xb5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,
++ 0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
++ 0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x91,0x12,0x10,0x09,
++ 0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x00,0x00,
++ 0xe0,0xe1,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,
++ 0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,
++ 0x91,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,
++ 0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,
++ 0x91,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,
++ 0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,
++ 0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,
++ 0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xcd,
++ 0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,
++ 0xce,0xb7,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,
++ 0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,
++ 0xce,0x97,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,
++ 0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,
++ 0x01,0xff,0xce,0x97,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,
++ 0x94,0xcd,0x82,0xcd,0x85,0x00,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,
++ 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,
++ 0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,
++ 0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,
++ 0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xcd,0x85,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,
++ 0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,
++ 0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,
++ 0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x82,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,0x49,
++ 0xd2,0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x80,0xcd,0x85,0x00,0x01,
++ 0xff,0xce,0xb1,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,
++ 0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,
++ 0xce,0xb1,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0x91,0xcc,0x86,0x00,0x01,0xff,0xce,0x91,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,
++ 0x91,0xcc,0x80,0x00,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,
++ 0xff,0xce,0x91,0xcd,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xb9,0x00,0x01,
++ 0x00,0xcf,0x86,0xe5,0x16,0x01,0xd4,0x8f,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,
++ 0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x81,0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,
++ 0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,
++ 0x10,0x09,0x01,0xff,0xce,0x97,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,
++ 0xd1,0x13,0x10,0x09,0x01,0xff,0xce,0x97,0xcd,0x85,0x00,0x01,0xff,0xe1,0xbe,0xbf,
++ 0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbe,0xbf,0xcc,0x81,0x00,0x01,0xff,0xe1,
++ 0xbe,0xbf,0xcd,0x82,0x00,0xd3,0x40,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,
++ 0xb9,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,
++ 0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,
++ 0x86,0x00,0x01,0xff,0xce,0x99,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,
++ 0x80,0x00,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x04,0x00,0x00,0x01,
++ 0xff,0xe1,0xbf,0xbe,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbf,0xbe,0xcc,0x81,
++ 0x00,0x01,0xff,0xe1,0xbf,0xbe,0xcd,0x82,0x00,0xd4,0x93,0xd3,0x4e,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,
++ 0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,
++ 0xcc,0x88,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x93,0x00,
++ 0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcd,0x82,0x00,
++ 0x01,0xff,0xcf,0x85,0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xce,0xa5,0xcc,0x86,0x00,0x01,0xff,0xce,0xa5,0xcc,0x84,0x00,0x10,0x09,0x01,
++ 0xff,0xce,0xa5,0xcc,0x80,0x00,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xa1,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,0x10,
++ 0x09,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x01,0xff,0x60,0x00,0xd3,0x3b,0xd2,0x18,
++ 0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,0xcd,0x85,0x00,0x01,
++ 0xff,0xcf,0x89,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,
++ 0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,
++ 0xcf,0x89,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0x9f,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,
++ 0xa9,0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0xd1,0x10,0x10,0x09,0x01,
++ 0xff,0xce,0xa9,0xcd,0x85,0x00,0x01,0xff,0xc2,0xb4,0x00,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0xe0,0x7e,0x0c,0xcf,0x86,0xe5,0xbb,0x08,0xe4,0x14,0x06,0xe3,0xf7,0x02,0xe2,
++ 0xbd,0x01,0xd1,0xd0,0xd0,0x4f,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0xd3,0x18,0x92,0x14,
++ 0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,0x00,
++ 0x01,0x00,0x01,0x00,0x92,0x0d,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0x00,0x01,0xff,0x00,0x01,0x00,0x94,0x1b,0x53,0x04,0x01,0x00,0xd2,0x09,0x11,0x04,
++ 0x01,0x00,0x01,0xff,0x00,0x51,0x05,0x01,0xff,0x00,0x10,0x05,0x01,0xff,0x00,0x04,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x48,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,
++ 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x52,0x04,0x04,0x00,0x11,0x04,0x04,
++ 0x00,0x06,0x00,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x06,0x00,0x07,
++ 0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0x52,
++ 0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0xd4,0x23,0xd3,
++ 0x14,0x52,0x05,0x06,0xff,0x00,0x91,0x0a,0x10,0x05,0x0a,0xff,0x00,0x00,0xff,0x00,
++ 0x0f,0xff,0x00,0x92,0x0a,0x11,0x05,0x0f,0xff,0x00,0x01,0xff,0x00,0x01,0xff,0x00,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0xd0,0x7e,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x53,0x04,0x01,0x00,0x52,0x04,
++ 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,
++ 0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0c,0x00,0x0c,0x00,0x52,0x04,0x0c,0x00,
++ 0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x02,0x00,0x91,0x08,0x10,0x04,
++ 0x03,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0xd2,0x08,0x11,0x04,0x06,0x00,0x08,0x00,
++ 0x11,0x04,0x08,0x00,0x0b,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,
++ 0x10,0x04,0x0e,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,0x00,0x13,0x00,
++ 0xcf,0x86,0xd5,0x28,0x54,0x04,0x00,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x01,0xe6,
++ 0x01,0x01,0x01,0xe6,0xd2,0x0c,0x51,0x04,0x01,0x01,0x10,0x04,0x01,0x01,0x01,0xe6,
++ 0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x04,0x00,0xd1,0x08,0x10,0x04,0x06,0x00,
++ 0x06,0x01,0x10,0x04,0x06,0x01,0x06,0xe6,0x92,0x10,0xd1,0x08,0x10,0x04,0x06,0xdc,
++ 0x06,0xe6,0x10,0x04,0x06,0x01,0x08,0x01,0x09,0xdc,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0a,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x81,0xd0,0x4f,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xa9,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,
++ 0x00,0x10,0x06,0x01,0xff,0x4b,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x04,
++ 0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0x95,
++ 0x2c,0xd4,0x18,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0xd1,0x08,0x10,0x04,0x08,
++ 0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,
++ 0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x68,0xcf,
++ 0x86,0xd5,0x48,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x11,0x00,0x00,0x00,0x53,0x04,0x01,0x00,0x92,
++ 0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x90,0xcc,0xb8,0x00,0x01,
++ 0xff,0xe2,0x86,0x92,0xcc,0xb8,0x00,0x01,0x00,0x94,0x1a,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x94,0xcc,0xb8,
++ 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x87,0x90,0xcc,0xb8,
++ 0x00,0x10,0x0a,0x01,0xff,0xe2,0x87,0x94,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x87,0x92,
++ 0xcc,0xb8,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x06,
++ 0x00,0x06,0x00,0xe2,0x38,0x02,0xe1,0x3f,0x01,0xd0,0x68,0xcf,0x86,0xd5,0x3e,0x94,
++ 0x3a,0xd3,0x16,0x52,0x04,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0x83,
++ 0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd2,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe2,0x88,0x88,0xcc,0xb8,0x00,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,0xe2,
++ 0x88,0x8b,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x24,0x93,0x20,0x52,
++ 0x04,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa3,0xcc,0xb8,0x00,0x01,
++ 0x00,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa5,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x48,0x94,0x44,0xd3,0x2e,0xd2,0x12,0x91,0x0e,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xe2,0x88,0xbc,0xcc,0xb8,0x00,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,
++ 0xff,0xe2,0x89,0x83,0xcc,0xb8,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,
++ 0x89,0x85,0xcc,0xb8,0x00,0x92,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,
++ 0x89,0x88,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x40,0xd3,0x1e,0x92,
++ 0x1a,0xd1,0x0c,0x10,0x08,0x01,0xff,0x3d,0xcc,0xb8,0x00,0x01,0x00,0x10,0x0a,0x01,
++ 0xff,0xe2,0x89,0xa1,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,
++ 0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x89,0x8d,0xcc,0xb8,0x00,0x10,0x08,0x01,
++ 0xff,0x3c,0xcc,0xb8,0x00,0x01,0xff,0x3e,0xcc,0xb8,0x00,0xd3,0x30,0xd2,0x18,0x91,
++ 0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xa4,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xa5,
++ 0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xb2,0xcc,0xb8,
++ 0x00,0x01,0xff,0xe2,0x89,0xb3,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,0x91,0x14,0x10,
++ 0x0a,0x01,0xff,0xe2,0x89,0xb6,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xb7,0xcc,0xb8,
++ 0x00,0x01,0x00,0x01,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x50,0x94,0x4c,0xd3,0x30,0xd2,
++ 0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xba,0xcc,0xb8,0x00,0x01,0xff,0xe2,
++ 0x89,0xbb,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x82,
++ 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x83,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,0x91,
++ 0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x86,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x87,
++ 0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x30,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa2,0xcc,0xb8,0x00,0x01,
++ 0xff,0xe2,0x8a,0xa8,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa9,0xcc,0xb8,
++ 0x00,0x01,0xff,0xe2,0x8a,0xab,0xcc,0xb8,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,
++ 0x00,0xd4,0x5c,0xd3,0x2c,0x92,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xbc,
++ 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xbd,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,
++ 0x8a,0x91,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x92,0xcc,0xb8,0x00,0x01,0x00,0xd2,
++ 0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb2,0xcc,0xb8,0x00,0x01,
++ 0xff,0xe2,0x8a,0xb3,0xcc,0xb8,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb4,
++ 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb5,0xcc,0xb8,0x00,0x01,0x00,0x93,0x0c,0x92,
++ 0x08,0x11,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xd1,0x64,0xd0,0x3e,0xcf,
++ 0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x20,0x53,0x04,0x01,0x00,0x92,
++ 0x18,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x80,0x88,0x00,0x10,0x08,0x01,
++ 0xff,0xe3,0x80,0x89,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,
++ 0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,0x04,0x00,0xd0,
++ 0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0xd5,
++ 0x2c,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,0x10,
++ 0x04,0x06,0x00,0x07,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x08,
++ 0x00,0x08,0x00,0x08,0x00,0x12,0x04,0x08,0x00,0x09,0x00,0xd4,0x14,0x53,0x04,0x09,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0xd3,
++ 0x08,0x12,0x04,0x0c,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,
++ 0x00,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x13,0x00,0xd3,0xa6,0xd2,
++ 0x74,0xd1,0x40,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,0x93,0x14,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x04,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x92,
++ 0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x01,
++ 0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x14,0x53,
++ 0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x06,
++ 0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x06,
++ 0x00,0x07,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,
++ 0x04,0x01,0x00,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,0x00,0x06,
++ 0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x13,0x04,0x04,
++ 0x00,0x06,0x00,0xd2,0xdc,0xd1,0x48,0xd0,0x26,0xcf,0x86,0x95,0x20,0x54,0x04,0x01,
++ 0x00,0xd3,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x07,0x00,0x06,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x08,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x04,0x00,0x06,
++ 0x00,0x06,0x00,0x52,0x04,0x06,0x00,0x11,0x04,0x06,0x00,0x08,0x00,0xd0,0x5e,0xcf,
++ 0x86,0xd5,0x2c,0xd4,0x10,0x53,0x04,0x06,0x00,0x92,0x08,0x11,0x04,0x06,0x00,0x07,
++ 0x00,0x07,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x52,
++ 0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0a,0x00,0x0b,0x00,0xd4,0x10,0x93,
++ 0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd3,0x10,0x92,
++ 0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x52,0x04,0x0a,
++ 0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x1c,0x94,
++ 0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,
++ 0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0b,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,
++ 0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0c,0x00,0x0b,0x00,0x0b,0x00,0xd1,
++ 0xa8,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x01,
++ 0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x53,
++ 0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x18,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x04,0x0c,0x00,0x01,0x00,0xd3,
++ 0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0x51,0x04,0x0c,
++ 0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x0c,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x06,0x00,0x93,0x0c,0x52,0x04,0x06,0x00,0x11,
++ 0x04,0x06,0x00,0x01,0x00,0x01,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,
++ 0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x0c,
++ 0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,0x52,0x04,0x08,
++ 0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,
++ 0x00,0x10,0x04,0x09,0x00,0x0d,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0d,0x00,0x0c,
++ 0x00,0x06,0x00,0x94,0x0c,0x53,0x04,0x06,0x00,0x12,0x04,0x06,0x00,0x0a,0x00,0x06,
++ 0x00,0xe4,0x39,0x01,0xd3,0x0c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xcf,0x06,0x06,0x00,
++ 0xd2,0x30,0xd1,0x06,0xcf,0x06,0x06,0x00,0xd0,0x06,0xcf,0x06,0x06,0x00,0xcf,0x86,
++ 0x95,0x1e,0x54,0x04,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x0e,
++ 0x10,0x0a,0x06,0xff,0xe2,0xab,0x9d,0xcc,0xb8,0x00,0x06,0x00,0x06,0x00,0x06,0x00,
++ 0xd1,0x80,0xd0,0x3a,0xcf,0x86,0xd5,0x28,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,
++ 0x07,0x00,0x11,0x04,0x07,0x00,0x08,0x00,0xd3,0x08,0x12,0x04,0x08,0x00,0x09,0x00,
++ 0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x94,0x0c,
++ 0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x30,
++ 0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
++ 0x10,0x00,0x10,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
++ 0x0b,0x00,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x10,0x00,0x10,0x00,0x54,0x04,
++ 0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
++ 0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,
++ 0x11,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
++ 0xd2,0x08,0x11,0x04,0x10,0x00,0x14,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x10,0x00,
++ 0x10,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x10,0x00,0x15,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd4,0x0c,0x53,0x04,
++ 0x14,0x00,0x12,0x04,0x14,0x00,0x11,0x00,0x53,0x04,0x14,0x00,0x52,0x04,0x14,0x00,
++ 0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0xe3,0xb9,0x01,0xd2,0xac,0xd1,
++ 0x68,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x14,0x53,0x04,0x08,0x00,0x52,
++ 0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x08,0x00,0xcf,
++ 0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,0x51,
++ 0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xd4,0x14,0x53,0x04,0x09,0x00,0x52,
++ 0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0xd3,0x10,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0a,0x00,0x0a,0x00,0x09,0x00,0x52,0x04,0x0a,
++ 0x00,0x11,0x04,0x0a,0x00,0x0b,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0x55,
++ 0x04,0x08,0x00,0xd4,0x1c,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,
++ 0x04,0x08,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd3,
++ 0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0d,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd1,0x6c,0xd0,0x2a,0xcf,0x86,0x55,
++ 0x04,0x08,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,
++ 0x04,0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,
++ 0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,0xd3,0x0c,0x52,
++ 0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,
++ 0x00,0x10,0x04,0x00,0x00,0x08,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x00,0x00,0x0c,0x09,0xd0,0x5a,0xcf,0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x93,
++ 0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x00,
++ 0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
++ 0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
++ 0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xcf,
++ 0x86,0x95,0x40,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,
++ 0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
++ 0x00,0x0a,0xe6,0xd2,0x9c,0xd1,0x68,0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x08,
++ 0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,0x08,0x00,0x0a,0x00,0x54,
++ 0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0d,
++ 0x00,0x0d,0x00,0x12,0x04,0x0d,0x00,0x10,0x00,0xcf,0x86,0x95,0x30,0x94,0x2c,0xd3,
++ 0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x12,0x00,0x91,0x08,0x10,
++ 0x04,0x12,0x00,0x13,0x00,0x13,0x00,0xd2,0x08,0x11,0x04,0x13,0x00,0x14,0x00,0x51,
++ 0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,
++ 0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,
++ 0x00,0x54,0x04,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x06,0xcf,0x06,0x04,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0xd5,0x14,0x54,
++ 0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x00,
++ 0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x04,0x00,0x12,0x04,0x04,0x00,0x00,0x00,0xcf,
++ 0x86,0xe5,0xa6,0x05,0xe4,0x9f,0x05,0xe3,0x96,0x04,0xe2,0xe4,0x03,0xe1,0xc0,0x01,
++ 0xd0,0x3e,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0xda,0x01,0xe4,0x91,0x08,0x10,0x04,0x01,0xe8,
++ 0x01,0xde,0x01,0xe0,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,
++ 0x04,0x00,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x04,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0xaa,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0x8b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x8d,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0x8f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x91,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x93,0xe3,0x82,0x99,
++ 0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x95,0xe3,0x82,0x99,0x00,0x01,0x00,
++ 0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x97,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x99,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,
++ 0x10,0x0b,0x01,0xff,0xe3,0x81,0x9b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,
++ 0xff,0xe3,0x81,0x9d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd4,0x53,0xd3,0x3c,0xd2,0x1e,
++ 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,
++ 0x0b,0x01,0xff,0xe3,0x81,0xa1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xe3,0x81,0xa4,0xe3,0x82,0x99,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe3,0x81,0xa6,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe3,0x81,0xa8,0xe3,0x82,0x99,0x00,0x01,0x00,0x01,0x00,0xd3,0x4a,0xd2,
++ 0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x99,0x00,0x01,0xff,
++ 0xe3,0x81,0xaf,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xb2,
++ 0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb2,0xe3,0x82,0x9a,
++ 0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x99,0x00,0x01,0xff,
++ 0xe3,0x81,0xb5,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe3,0x81,0xb8,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb8,0xe3,
++ 0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,
++ 0x99,0x00,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,0x9a,0x00,0x01,0x00,0xd0,0xee,0xcf,
++ 0x86,0xd5,0x42,0x54,0x04,0x01,0x00,0xd3,0x1b,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,
++ 0x0b,0x01,0xff,0xe3,0x81,0x86,0xe3,0x82,0x99,0x00,0x06,0x00,0x10,0x04,0x06,0x00,
++ 0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x08,0x10,0x04,0x01,0x08,
++ 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0x9d,0xe3,0x82,0x99,
++ 0x00,0x06,0x00,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
++ 0x82,0xab,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xad,0xe3,
++ 0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
++ 0x82,0xaf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb1,0xe3,
++ 0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb3,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb5,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb7,0xe3,0x82,0x99,0x00,
++ 0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb9,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,
++ 0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbb,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,
++ 0x01,0xff,0xe3,0x82,0xbd,0xe3,0x82,0x99,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd5,0xd4,
++ 0x53,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbf,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x81,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x84,0xe3,0x82,0x99,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x86,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,
++ 0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x88,0xe3,0x82,0x99,0x00,0x01,0x00,
++ 0x01,0x00,0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x83,0x8f,0xe3,
++ 0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x8f,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
++ 0x83,0x92,0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x95,0xe3,
++ 0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x95,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,
++ 0xff,0xe3,0x83,0x98,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,
++ 0xe3,0x83,0x9b,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x9b,0xe3,0x82,0x9a,0x00,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x22,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe3,0x82,0xa6,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe3,0x83,0xaf,0xe3,0x82,0x99,0x00,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,
++ 0xe3,0x83,0xb0,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0xb1,0xe3,0x82,0x99,0x00,
++ 0x10,0x0b,0x01,0xff,0xe3,0x83,0xb2,0xe3,0x82,0x99,0x00,0x01,0x00,0x51,0x04,0x01,
++ 0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xbd,0xe3,0x82,0x99,0x00,0x06,0x00,0xd1,0x65,
++ 0xd0,0x46,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x18,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,
++ 0x13,0x00,0x14,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x15,0x93,0x11,
++ 0x52,0x04,0x01,0x00,0x91,0x09,0x10,0x05,0x01,0xff,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x54,
++ 0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x08,0x00,0x0a,0x00,0x94,
++ 0x0c,0x93,0x08,0x12,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x06,0x00,0xd2,0xa4,0xd1,
++ 0x5c,0xd0,0x22,0xcf,0x86,0x95,0x1c,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x20,0xd4,0x0c,0x93,0x08,0x12,0x04,0x01,0x00,0x0b,
++ 0x00,0x0b,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x06,0x00,0x06,
++ 0x00,0x06,0x00,0x06,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
++ 0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0xd5,0x10,0x94,0x0c,0x53,
++ 0x04,0x01,0x00,0x12,0x04,0x01,0x00,0x07,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x16,
++ 0x00,0xd1,0x30,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,
++ 0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x01,0x00,0x01,
++ 0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x07,0x00,0x54,0x04,0x01,
++ 0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x07,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd1,0x48,0xd0,0x40,0xcf,
++ 0x86,0xd5,0x06,0xcf,0x06,0x04,0x00,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x2c,0xd2,
++ 0x06,0xcf,0x06,0x04,0x00,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x1a,0xcf,0x86,0x55,
++ 0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x07,0x00,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,
++ 0x06,0x01,0x00,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe2,0x71,0x05,0xd1,0x8c,0xd0,0x08,
++ 0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xd4,0x06,
++ 0xcf,0x06,0x01,0x00,0xd3,0x06,0xcf,0x06,0x01,0x00,0xd2,0x06,0xcf,0x06,0x01,0x00,
++ 0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x10,
++ 0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x08,0x00,0x08,0x00,0x53,0x04,
++ 0x08,0x00,0x12,0x04,0x08,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0xd3,0x08,
++ 0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,
++ 0x11,0x00,0x11,0x00,0x93,0x0c,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x13,0x00,
++ 0x13,0x00,0x94,0x14,0x53,0x04,0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,
++ 0x13,0x00,0x14,0x00,0x14,0x00,0x00,0x00,0xe0,0xdb,0x04,0xcf,0x86,0xe5,0xdf,0x01,
++ 0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x74,0xd2,0x6e,0xd1,0x06,0xcf,0x06,0x04,0x00,
++ 0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,
++ 0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,
++ 0x92,0x08,0x11,0x04,0x04,0x00,0x06,0x00,0x04,0x00,0x04,0x00,0x93,0x10,0x52,0x04,
++ 0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,
++ 0x95,0x24,0x94,0x20,0x93,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,
++ 0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0x00,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x06,0x0a,0x00,0xd2,0x84,0xd1,0x4c,0xd0,0x16,
++ 0xcf,0x86,0x55,0x04,0x0a,0x00,0x94,0x0c,0x53,0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,
++ 0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x1c,0xd3,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x0a,0x00,0x0a,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,
++ 0x10,0x04,0x0a,0x00,0x0a,0xe6,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0d,0xe6,0x52,0x04,
++ 0x0d,0xe6,0x11,0x04,0x0a,0xe6,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
++ 0x0a,0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x11,0xe6,0x0d,0xe6,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,
++ 0x93,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x00,0x00,0xd1,0x40,
++ 0xd0,0x3a,0xcf,0x86,0xd5,0x24,0x54,0x04,0x08,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,
++ 0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,
++ 0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,
++ 0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x06,0x0a,0x00,0xd0,0x5e,
++ 0xcf,0x86,0xd5,0x28,0xd4,0x18,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0xd1,0x08,
++ 0x10,0x04,0x0a,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x11,0x00,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x0d,0x00,0x10,0x00,0x10,0x00,0xd4,0x1c,0x53,0x04,0x0c,0x00,
++ 0xd2,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0d,0x00,0x10,0x00,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x12,0x00,0x14,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,
++ 0x11,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x1c,
++ 0x94,0x18,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x51,0x04,0x15,0x00,
++ 0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0xd3,0x10,
++ 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x92,0x0c,
++ 0x51,0x04,0x0d,0x00,0x10,0x04,0x0c,0x00,0x0a,0x00,0x0a,0x00,0xe4,0xf2,0x02,0xe3,
++ 0x65,0x01,0xd2,0x98,0xd1,0x48,0xd0,0x36,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,
++ 0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x09,0x08,0x00,0x08,0x00,
++ 0x08,0x00,0xd4,0x0c,0x53,0x04,0x08,0x00,0x12,0x04,0x08,0x00,0x00,0x00,0x53,0x04,
++ 0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,
++ 0x09,0x00,0x54,0x04,0x09,0x00,0x13,0x04,0x09,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,
++ 0x0a,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,
++ 0x10,0x04,0x0a,0x09,0x12,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,
++ 0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,
++ 0x54,0x04,0x0b,0xe6,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,
++ 0x52,0x04,0x0b,0x00,0x11,0x04,0x11,0x00,0x14,0x00,0xd1,0x60,0xd0,0x22,0xcf,0x86,
++ 0x55,0x04,0x0a,0x00,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,
++ 0x10,0x04,0x0a,0x00,0x0a,0xdc,0x11,0x04,0x0a,0xdc,0x0a,0x00,0x0a,0x00,0xcf,0x86,
++ 0xd5,0x24,0x54,0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,
++ 0x0a,0x00,0x0a,0x09,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
++ 0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,
++ 0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,
++ 0x0b,0x00,0x0b,0x07,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x20,0xd3,0x10,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x52,0x04,
++ 0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,
++ 0xd2,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x0b,0x00,0x54,0x04,
++ 0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0xd2,0xd0,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0a,0x00,
++ 0x54,0x04,0x0a,0x00,0x93,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,
++ 0x0a,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0a,0x00,
++ 0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,
++ 0x11,0x04,0x0a,0x00,0x00,0x00,0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,
++ 0x12,0x04,0x0b,0x00,0x10,0x00,0xd0,0x3a,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,
++ 0x0b,0x00,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0xe6,
++ 0xd1,0x08,0x10,0x04,0x0b,0xdc,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,
++ 0xcf,0x86,0xd5,0x2c,0xd4,0x18,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,
++ 0x0b,0xe6,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x00,0x00,
++ 0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,
++ 0x0d,0x00,0x93,0x10,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,
++ 0x00,0x00,0x00,0x00,0xd1,0x8c,0xd0,0x72,0xcf,0x86,0xd5,0x4c,0xd4,0x30,0xd3,0x18,
+ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,
+- 0x10,0x04,0x0c,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
+- 0x94,0x20,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,
+- 0x00,0x00,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,
+- 0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x11,0x00,
+- 0x11,0x04,0x10,0x00,0x15,0x00,0x00,0x00,0x11,0x00,0xd0,0x06,0xcf,0x06,0x11,0x00,
+- 0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,
+- 0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x02,0xff,0xff,0xcf,0x86,0xcf,
+- 0x06,0x02,0xff,0xff,0xd1,0x76,0xd0,0x09,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xcf,
+- 0x86,0x85,0xd4,0x07,0xcf,0x06,0x02,0xff,0xff,0xd3,0x07,0xcf,0x06,0x02,0xff,0xff,
+- 0xd2,0x07,0xcf,0x06,0x02,0xff,0xff,0xd1,0x07,0xcf,0x06,0x02,0xff,0xff,0xd0,0x18,
+- 0xcf,0x86,0x55,0x05,0x02,0xff,0xff,0x94,0x0d,0x93,0x09,0x12,0x05,0x02,0xff,0xff,
+- 0x00,0x00,0x00,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,
+- 0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,0x0b,0x00,
+- 0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,
+- 0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x9c,0x10,0xe3,0x16,0x08,
+- 0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x08,0x04,0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,
+- 0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,
+- 0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0x10,0x08,0x01,0xff,0xe8,0xbb,0x8a,0x00,0x01,
+- 0xff,0xe8,0xb3,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xbb,0x91,0x00,0x01,
+- 0xff,0xe4,0xb8,0xb2,0x00,0x10,0x08,0x01,0xff,0xe5,0x8f,0xa5,0x00,0x01,0xff,0xe9,
+- 0xbe,0x9c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xbe,0x9c,0x00,0x01,
+- 0xff,0xe5,0xa5,0x91,0x00,0x10,0x08,0x01,0xff,0xe9,0x87,0x91,0x00,0x01,0xff,0xe5,
+- 0x96,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa5,0x88,0x00,0x01,0xff,0xe6,
+- 0x87,0xb6,0x00,0x10,0x08,0x01,0xff,0xe7,0x99,0xa9,0x00,0x01,0xff,0xe7,0xbe,0x85,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x98,0xbf,0x00,0x01,
+- 0xff,0xe8,0x9e,0xba,0x00,0x10,0x08,0x01,0xff,0xe8,0xa3,0xb8,0x00,0x01,0xff,0xe9,
+- 0x82,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x82,0x00,0x01,0xff,0xe6,
+- 0xb4,0x9b,0x00,0x10,0x08,0x01,0xff,0xe7,0x83,0x99,0x00,0x01,0xff,0xe7,0x8f,0x9e,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x90,0xbd,0x00,0x01,0xff,0xe9,
+- 0x85,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0xa7,0xb1,0x00,0x01,0xff,0xe4,0xba,0x82,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x8d,0xb5,0x00,0x01,0xff,0xe6,0xac,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x9b,0x00,0x01,0xff,0xe8,0x98,0xad,0x00,0xd4,
+- 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xb8,0x9e,0x00,0x01,
+- 0xff,0xe5,0xb5,0x90,0x00,0x10,0x08,0x01,0xff,0xe6,0xbf,0xab,0x00,0x01,0xff,0xe8,
+- 0x97,0x8d,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa5,0xa4,0x00,0x01,0xff,0xe6,
+- 0x8b,0x89,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0x98,0x00,0x01,0xff,0xe8,0xa0,0x9f,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xbb,0x8a,0x00,0x01,0xff,0xe6,
+- 0x9c,0x97,0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0xaa,0x00,0x01,0xff,0xe7,0x8b,0xbc,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x83,0x8e,0x00,0x01,0xff,0xe4,0xbe,0x86,
+- 0x00,0x10,0x08,0x01,0xff,0xe5,0x86,0xb7,0x00,0x01,0xff,0xe5,0x8b,0x9e,0x00,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x93,0x84,0x00,0x01,0xff,0xe6,
+- 0xab,0x93,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x90,0x00,0x01,0xff,0xe7,0x9b,0xa7,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x80,0x81,0x00,0x01,0xff,0xe8,0x98,0x86,
+- 0x00,0x10,0x08,0x01,0xff,0xe8,0x99,0x9c,0x00,0x01,0xff,0xe8,0xb7,0xaf,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9c,0xb2,0x00,0x01,0xff,0xe9,0xad,0xaf,
+- 0x00,0x10,0x08,0x01,0xff,0xe9,0xb7,0xba,0x00,0x01,0xff,0xe7,0xa2,0x8c,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe7,0xa5,0xbf,0x00,0x01,0xff,0xe7,0xb6,0xa0,0x00,0x10,
+- 0x08,0x01,0xff,0xe8,0x8f,0x89,0x00,0x01,0xff,0xe9,0x8c,0x84,0x00,0xcf,0x86,0xe5,
+- 0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xb9,
+- 0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0x10,0x08,0x01,0xff,0xe5,0xa3,0x9f,0x00,
+- 0x01,0xff,0xe5,0xbc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xb1,0xa0,0x00,
+- 0x01,0xff,0xe8,0x81,0xbe,0x00,0x10,0x08,0x01,0xff,0xe7,0x89,0xa2,0x00,0x01,0xff,
+- 0xe7,0xa3,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xb3,0x82,0x00,
+- 0x01,0xff,0xe9,0x9b,0xb7,0x00,0x10,0x08,0x01,0xff,0xe5,0xa3,0x98,0x00,0x01,0xff,
+- 0xe5,0xb1,0xa2,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x93,0x00,0x01,0xff,
+- 0xe6,0xb7,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,0x8f,0x00,0x01,0xff,0xe7,0xb4,
+- 0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
+- 0x01,0xff,0xe9,0x99,0x8b,0x00,0x10,0x08,0x01,0xff,0xe5,0x8b,0x92,0x00,0x01,0xff,
+- 0xe8,0x82,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x87,0x9c,0x00,0x01,0xff,
+- 0xe5,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe7,0xa8,0x9c,0x00,0x01,0xff,0xe7,0xb6,
+- 0xbe,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8f,0xb1,0x00,0x01,0xff,
+- 0xe9,0x99,0xb5,0x00,0x10,0x08,0x01,0xff,0xe8,0xae,0x80,0x00,0x01,0xff,0xe6,0x8b,
+- 0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x82,0x00,0x01,0xff,0xe8,0xab,
+- 0xbe,0x00,0x10,0x08,0x01,0xff,0xe4,0xb8,0xb9,0x00,0x01,0xff,0xe5,0xaf,0xa7,0x00,
+- 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x80,0x92,0x00,
+- 0x01,0xff,0xe7,0x8e,0x87,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xb0,0x00,0x01,0xff,
+- 0xe5,0x8c,0x97,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa3,0xbb,0x00,0x01,0xff,
+- 0xe4,0xbe,0xbf,0x00,0x10,0x08,0x01,0xff,0xe5,0xbe,0xa9,0x00,0x01,0xff,0xe4,0xb8,
+- 0x8d,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xb3,0x8c,0x00,0x01,0xff,
+- 0xe6,0x95,0xb8,0x00,0x10,0x08,0x01,0xff,0xe7,0xb4,0xa2,0x00,0x01,0xff,0xe5,0x8f,
+- 0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa1,0x9e,0x00,0x01,0xff,0xe7,0x9c,
+- 0x81,0x00,0x10,0x08,0x01,0xff,0xe8,0x91,0x89,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xae,0xba,0x00,0x01,0xff,
+- 0xe8,0xbe,0xb0,0x00,0x10,0x08,0x01,0xff,0xe6,0xb2,0x88,0x00,0x01,0xff,0xe6,0x8b,
+- 0xbe,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8b,0xa5,0x00,0x01,0xff,0xe6,0x8e,
+- 0xa0,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xa5,0x00,0x01,0xff,0xe4,0xba,0xae,0x00,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,0xa9,0x00,0x01,0xff,0xe5,0x87,
+- 0x89,0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0x81,0x00,0x01,0xff,0xe7,0xb3,0xa7,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x89,0xaf,0x00,0x01,0xff,0xe8,0xab,0x92,0x00,
+- 0x10,0x08,0x01,0xff,0xe9,0x87,0x8f,0x00,0x01,0xff,0xe5,0x8b,0xb5,0x00,0xe0,0x04,
+- 0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe5,0x91,0x82,0x00,0x01,0xff,0xe5,0xa5,0xb3,0x00,0x10,0x08,0x01,0xff,
+- 0xe5,0xbb,0xac,0x00,0x01,0xff,0xe6,0x97,0x85,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0xbf,0xbe,0x00,0x01,0xff,0xe7,0xa4,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0x96,
+- 0xad,0x00,0x01,0xff,0xe9,0xa9,0xaa,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe9,0xba,0x97,0x00,0x01,0xff,0xe9,0xbb,0x8e,0x00,0x10,0x08,0x01,0xff,0xe5,0x8a,
+- 0x9b,0x00,0x01,0xff,0xe6,0x9b,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xad,
+- 0xb7,0x00,0x01,0xff,0xe8,0xbd,0xa2,0x00,0x10,0x08,0x01,0xff,0xe5,0xb9,0xb4,0x00,
+- 0x01,0xff,0xe6,0x86,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0x88,0x80,0x00,0x01,0xff,0xe6,0x92,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,
+- 0xa3,0x00,0x01,0xff,0xe7,0x85,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0x92,
+- 0x89,0x00,0x01,0xff,0xe7,0xa7,0x8a,0x00,0x10,0x08,0x01,0xff,0xe7,0xb7,0xb4,0x00,
+- 0x01,0xff,0xe8,0x81,0xaf,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xbc,
+- 0xa6,0x00,0x01,0xff,0xe8,0x93,0xae,0x00,0x10,0x08,0x01,0xff,0xe9,0x80,0xa3,0x00,
+- 0x01,0xff,0xe9,0x8d,0x8a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x88,0x97,0x00,
+- 0x01,0xff,0xe5,0x8a,0xa3,0x00,0x10,0x08,0x01,0xff,0xe5,0x92,0xbd,0x00,0x01,0xff,
+- 0xe7,0x83,0x88,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe8,0xa3,0x82,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,0x10,0x08,0x01,0xff,0xe5,0xbb,
+- 0x89,0x00,0x01,0xff,0xe5,0xbf,0xb5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x8d,
+- 0xbb,0x00,0x01,0xff,0xe6,0xae,0xae,0x00,0x10,0x08,0x01,0xff,0xe7,0xb0,0xbe,0x00,
+- 0x01,0xff,0xe7,0x8d,0xb5,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe4,0xbb,
+- 0xa4,0x00,0x01,0xff,0xe5,0x9b,0xb9,0x00,0x10,0x08,0x01,0xff,0xe5,0xaf,0xa7,0x00,
+- 0x01,0xff,0xe5,0xb6,0xba,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x80,0x9c,0x00,
+- 0x01,0xff,0xe7,0x8e,0xb2,0x00,0x10,0x08,0x01,0xff,0xe7,0x91,0xa9,0x00,0x01,0xff,
+- 0xe7,0xbe,0x9a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x81,
+- 0x86,0x00,0x01,0xff,0xe9,0x88,0xb4,0x00,0x10,0x08,0x01,0xff,0xe9,0x9b,0xb6,0x00,
+- 0x01,0xff,0xe9,0x9d,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa0,0x98,0x00,
+- 0x01,0xff,0xe4,0xbe,0x8b,0x00,0x10,0x08,0x01,0xff,0xe7,0xa6,0xae,0x00,0x01,0xff,
+- 0xe9,0x86,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9a,0xb8,0x00,
+- 0x01,0xff,0xe6,0x83,0xa1,0x00,0x10,0x08,0x01,0xff,0xe4,0xba,0x86,0x00,0x01,0xff,
+- 0xe5,0x83,0x9a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xaf,0xae,0x00,0x01,0xff,
+- 0xe5,0xb0,0xbf,0x00,0x10,0x08,0x01,0xff,0xe6,0x96,0x99,0x00,0x01,0xff,0xe6,0xa8,
+- 0x82,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe7,0x87,0x8e,0x00,0x01,0xff,0xe7,0x99,0x82,0x00,0x10,0x08,0x01,
+- 0xff,0xe8,0x93,0xbc,0x00,0x01,0xff,0xe9,0x81,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe9,0xbe,0x8d,0x00,0x01,0xff,0xe6,0x9a,0x88,0x00,0x10,0x08,0x01,0xff,0xe9,
+- 0x98,0xae,0x00,0x01,0xff,0xe5,0x8a,0x89,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe6,0x9d,0xbb,0x00,0x01,0xff,0xe6,0x9f,0xb3,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0xb5,0x81,0x00,0x01,0xff,0xe6,0xba,0x9c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,
+- 0x90,0x89,0x00,0x01,0xff,0xe7,0x95,0x99,0x00,0x10,0x08,0x01,0xff,0xe7,0xa1,0xab,
+- 0x00,0x01,0xff,0xe7,0xb4,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe9,0xa1,0x9e,0x00,0x01,0xff,0xe5,0x85,0xad,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0x88,0xae,0x00,0x01,0xff,0xe9,0x99,0xb8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
+- 0x80,0xab,0x00,0x01,0xff,0xe5,0xb4,0x99,0x00,0x10,0x08,0x01,0xff,0xe6,0xb7,0xaa,
+- 0x00,0x01,0xff,0xe8,0xbc,0xaa,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
+- 0xbe,0x8b,0x00,0x01,0xff,0xe6,0x85,0x84,0x00,0x10,0x08,0x01,0xff,0xe6,0xa0,0x97,
+- 0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9a,0x86,
+- 0x00,0x01,0xff,0xe5,0x88,0xa9,0x00,0x10,0x08,0x01,0xff,0xe5,0x90,0x8f,0x00,0x01,
+- 0xff,0xe5,0xb1,0xa5,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe6,0x98,0x93,0x00,0x01,0xff,0xe6,0x9d,0x8e,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0xa2,0xa8,0x00,0x01,0xff,0xe6,0xb3,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,
+- 0x90,0x86,0x00,0x01,0xff,0xe7,0x97,0xa2,0x00,0x10,0x08,0x01,0xff,0xe7,0xbd,0xb9,
+- 0x00,0x01,0xff,0xe8,0xa3,0x8f,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
+- 0xa3,0xa1,0x00,0x01,0xff,0xe9,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe9,0x9b,0xa2,
+- 0x00,0x01,0xff,0xe5,0x8c,0xbf,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xba,0xba,
+- 0x00,0x01,0xff,0xe5,0x90,0x9d,0x00,0x10,0x08,0x01,0xff,0xe7,0x87,0x90,0x00,0x01,
+- 0xff,0xe7,0x92,0x98,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
+- 0x97,0xba,0x00,0x01,0xff,0xe9,0x9a,0xa3,0x00,0x10,0x08,0x01,0xff,0xe9,0xb1,0x97,
+- 0x00,0x01,0xff,0xe9,0xba,0x9f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x9e,0x97,
+- 0x00,0x01,0xff,0xe6,0xb7,0x8b,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0xa8,0x00,0x01,
+- 0xff,0xe7,0xab,0x8b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xac,0xa0,
+- 0x00,0x01,0xff,0xe7,0xb2,0x92,0x00,0x10,0x08,0x01,0xff,0xe7,0x8b,0x80,0x00,0x01,
+- 0xff,0xe7,0x82,0x99,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xad,0x98,0x00,0x01,
+- 0xff,0xe4,0xbb,0x80,0x00,0x10,0x08,0x01,0xff,0xe8,0x8c,0xb6,0x00,0x01,0xff,0xe5,
+- 0x88,0xba,0x00,0xe2,0xad,0x06,0xe1,0xc4,0x03,0xe0,0xcb,0x01,0xcf,0x86,0xd5,0xe4,
+- 0xd4,0x74,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,
+- 0x01,0xff,0xe5,0xba,0xa6,0x00,0x10,0x08,0x01,0xff,0xe6,0x8b,0x93,0x00,0x01,0xff,
+- 0xe7,0xb3,0x96,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xae,0x85,0x00,0x01,0xff,
+- 0xe6,0xb4,0x9e,0x00,0x10,0x08,0x01,0xff,0xe6,0x9a,0xb4,0x00,0x01,0xff,0xe8,0xbc,
+- 0xbb,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa1,0x8c,0x00,0x01,0xff,
+- 0xe9,0x99,0x8d,0x00,0x10,0x08,0x01,0xff,0xe8,0xa6,0x8b,0x00,0x01,0xff,0xe5,0xbb,
+- 0x93,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,0x80,0x00,0x01,0xff,0xe5,0x97,
+- 0x80,0x00,0x01,0x00,0xd3,0x34,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x01,0xff,0xe5,0xa1,
+- 0x9a,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe6,0x99,0xb4,0x00,0x01,0x00,0xd1,0x0c,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe5,0x87,0x9e,0x00,0x10,0x08,0x01,0xff,0xe7,0x8c,
+- 0xaa,0x00,0x01,0xff,0xe7,0x9b,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe7,0xa4,0xbc,0x00,0x01,0xff,0xe7,0xa5,0x9e,0x00,0x10,0x08,0x01,0xff,0xe7,0xa5,
+- 0xa5,0x00,0x01,0xff,0xe7,0xa6,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9d,
+- 0x96,0x00,0x01,0xff,0xe7,0xb2,0xbe,0x00,0x10,0x08,0x01,0xff,0xe7,0xbe,0xbd,0x00,
+- 0x01,0x00,0xd4,0x64,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x01,0xff,0xe8,0x98,
+- 0x92,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe8,0xab,0xb8,0x00,0x01,0x00,0xd1,0x0c,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe9,0x80,0xb8,0x00,0x10,0x08,0x01,0xff,0xe9,0x83,
+- 0xbd,0x00,0x01,0x00,0xd2,0x14,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe9,0xa3,
+- 0xaf,0x00,0x01,0xff,0xe9,0xa3,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa4,
+- 0xa8,0x00,0x01,0xff,0xe9,0xb6,0xb4,0x00,0x10,0x08,0x0d,0xff,0xe9,0x83,0x9e,0x00,
+- 0x0d,0xff,0xe9,0x9a,0xb7,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,
+- 0xe4,0xbe,0xae,0x00,0x06,0xff,0xe5,0x83,0xa7,0x00,0x10,0x08,0x06,0xff,0xe5,0x85,
+- 0x8d,0x00,0x06,0xff,0xe5,0x8b,0x89,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0x8b,
+- 0xa4,0x00,0x06,0xff,0xe5,0x8d,0x91,0x00,0x10,0x08,0x06,0xff,0xe5,0x96,0x9d,0x00,
+- 0x06,0xff,0xe5,0x98,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0x99,
+- 0xa8,0x00,0x06,0xff,0xe5,0xa1,0x80,0x00,0x10,0x08,0x06,0xff,0xe5,0xa2,0xa8,0x00,
+- 0x06,0xff,0xe5,0xb1,0xa4,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0xb1,0xae,0x00,
+- 0x06,0xff,0xe6,0x82,0x94,0x00,0x10,0x08,0x06,0xff,0xe6,0x85,0xa8,0x00,0x06,0xff,
+- 0xe6,0x86,0x8e,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x06,0xff,0xe6,0x87,0xb2,0x00,0x06,0xff,0xe6,0x95,0x8f,0x00,0x10,
+- 0x08,0x06,0xff,0xe6,0x97,0xa2,0x00,0x06,0xff,0xe6,0x9a,0x91,0x00,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe6,0xa2,0x85,0x00,0x06,0xff,0xe6,0xb5,0xb7,0x00,0x10,0x08,0x06,
+- 0xff,0xe6,0xb8,0x9a,0x00,0x06,0xff,0xe6,0xbc,0xa2,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe7,0x85,0xae,0x00,0x06,0xff,0xe7,0x88,0xab,0x00,0x10,0x08,0x06,
+- 0xff,0xe7,0x90,0xa2,0x00,0x06,0xff,0xe7,0xa2,0x91,0x00,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe7,0xa4,0xbe,0x00,0x06,0xff,0xe7,0xa5,0x89,0x00,0x10,0x08,0x06,0xff,0xe7,
+- 0xa5,0x88,0x00,0x06,0xff,0xe7,0xa5,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe7,0xa5,0x96,0x00,0x06,0xff,0xe7,0xa5,0x9d,0x00,0x10,0x08,0x06,
+- 0xff,0xe7,0xa6,0x8d,0x00,0x06,0xff,0xe7,0xa6,0x8e,0x00,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe7,0xa9,0x80,0x00,0x06,0xff,0xe7,0xaa,0x81,0x00,0x10,0x08,0x06,0xff,0xe7,
+- 0xaf,0x80,0x00,0x06,0xff,0xe7,0xb7,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe7,0xb8,0x89,0x00,0x06,0xff,0xe7,0xb9,0x81,0x00,0x10,0x08,0x06,0xff,0xe7,
+- 0xbd,0xb2,0x00,0x06,0xff,0xe8,0x80,0x85,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,
+- 0x87,0xad,0x00,0x06,0xff,0xe8,0x89,0xb9,0x00,0x10,0x08,0x06,0xff,0xe8,0x89,0xb9,
+- 0x00,0x06,0xff,0xe8,0x91,0x97,0x00,0xd4,0x75,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe8,0xa4,0x90,0x00,0x06,0xff,0xe8,0xa6,0x96,0x00,0x10,0x08,0x06,
+- 0xff,0xe8,0xac,0x81,0x00,0x06,0xff,0xe8,0xac,0xb9,0x00,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe8,0xb3,0x93,0x00,0x06,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,0x06,0xff,0xe8,
+- 0xbe,0xb6,0x00,0x06,0xff,0xe9,0x80,0xb8,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe9,0x9b,0xa3,0x00,0x06,0xff,0xe9,0x9f,0xbf,0x00,0x10,0x08,0x06,0xff,0xe9,
+- 0xa0,0xbb,0x00,0x0b,0xff,0xe6,0x81,0xb5,0x00,0x91,0x11,0x10,0x09,0x0b,0xff,0xf0,
+- 0xa4,0x8b,0xae,0x00,0x0b,0xff,0xe8,0x88,0x98,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x08,0xff,0xe4,0xb8,0xa6,0x00,0x08,0xff,0xe5,0x86,0xb5,0x00,
+- 0x10,0x08,0x08,0xff,0xe5,0x85,0xa8,0x00,0x08,0xff,0xe4,0xbe,0x80,0x00,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe5,0x85,0x85,0x00,0x08,0xff,0xe5,0x86,0x80,0x00,0x10,0x08,
+- 0x08,0xff,0xe5,0x8b,0x87,0x00,0x08,0xff,0xe5,0x8b,0xba,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe5,0x96,0x9d,0x00,0x08,0xff,0xe5,0x95,0x95,0x00,0x10,0x08,
+- 0x08,0xff,0xe5,0x96,0x99,0x00,0x08,0xff,0xe5,0x97,0xa2,0x00,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe5,0xa1,0x9a,0x00,0x08,0xff,0xe5,0xa2,0xb3,0x00,0x10,0x08,0x08,0xff,
+- 0xe5,0xa5,0x84,0x00,0x08,0xff,0xe5,0xa5,0x94,0x00,0xe0,0x04,0x02,0xcf,0x86,0xe5,
+- 0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xa9,
+- 0xa2,0x00,0x08,0xff,0xe5,0xac,0xa8,0x00,0x10,0x08,0x08,0xff,0xe5,0xbb,0x92,0x00,
+- 0x08,0xff,0xe5,0xbb,0x99,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xbd,0xa9,0x00,
+- 0x08,0xff,0xe5,0xbe,0xad,0x00,0x10,0x08,0x08,0xff,0xe6,0x83,0x98,0x00,0x08,0xff,
+- 0xe6,0x85,0x8e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x84,0x88,0x00,
+- 0x08,0xff,0xe6,0x86,0x8e,0x00,0x10,0x08,0x08,0xff,0xe6,0x85,0xa0,0x00,0x08,0xff,
+- 0xe6,0x87,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x88,0xb4,0x00,0x08,0xff,
+- 0xe6,0x8f,0x84,0x00,0x10,0x08,0x08,0xff,0xe6,0x90,0x9c,0x00,0x08,0xff,0xe6,0x91,
+- 0x92,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x95,0x96,0x00,
+- 0x08,0xff,0xe6,0x99,0xb4,0x00,0x10,0x08,0x08,0xff,0xe6,0x9c,0x97,0x00,0x08,0xff,
+- 0xe6,0x9c,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x9d,0x96,0x00,0x08,0xff,
+- 0xe6,0xad,0xb9,0x00,0x10,0x08,0x08,0xff,0xe6,0xae,0xba,0x00,0x08,0xff,0xe6,0xb5,
+- 0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0xbb,0x9b,0x00,0x08,0xff,
+- 0xe6,0xbb,0x8b,0x00,0x10,0x08,0x08,0xff,0xe6,0xbc,0xa2,0x00,0x08,0xff,0xe7,0x80,
+- 0x9e,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x85,0xae,0x00,0x08,0xff,0xe7,0x9e,
+- 0xa7,0x00,0x10,0x08,0x08,0xff,0xe7,0x88,0xb5,0x00,0x08,0xff,0xe7,0x8a,0xaf,0x00,
+- 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x8c,0xaa,0x00,
+- 0x08,0xff,0xe7,0x91,0xb1,0x00,0x10,0x08,0x08,0xff,0xe7,0x94,0x86,0x00,0x08,0xff,
+- 0xe7,0x94,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x98,0x9d,0x00,0x08,0xff,
+- 0xe7,0x98,0x9f,0x00,0x10,0x08,0x08,0xff,0xe7,0x9b,0x8a,0x00,0x08,0xff,0xe7,0x9b,
+- 0x9b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x9b,0xb4,0x00,0x08,0xff,
+- 0xe7,0x9d,0x8a,0x00,0x10,0x08,0x08,0xff,0xe7,0x9d,0x80,0x00,0x08,0xff,0xe7,0xa3,
+- 0x8c,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xaa,0xb1,0x00,0x08,0xff,0xe7,0xaf,
+- 0x80,0x00,0x10,0x08,0x08,0xff,0xe7,0xb1,0xbb,0x00,0x08,0xff,0xe7,0xb5,0x9b,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xb7,0xb4,0x00,0x08,0xff,
+- 0xe7,0xbc,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0x80,0x85,0x00,0x08,0xff,0xe8,0x8d,
+- 0x92,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0x8f,0xaf,0x00,0x08,0xff,0xe8,0x9d,
+- 0xb9,0x00,0x10,0x08,0x08,0xff,0xe8,0xa5,0x81,0x00,0x08,0xff,0xe8,0xa6,0x86,0x00,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xa6,0x96,0x00,0x08,0xff,0xe8,0xaa,
+- 0xbf,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xb8,0x00,0x08,0xff,0xe8,0xab,0x8b,0x00,
+- 0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xac,0x81,0x00,0x08,0xff,0xe8,0xab,0xbe,0x00,
+- 0x10,0x08,0x08,0xff,0xe8,0xab,0xad,0x00,0x08,0xff,0xe8,0xac,0xb9,0x00,0xcf,0x86,
+- 0x95,0xde,0xd4,0x81,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xae,
+- 0x8a,0x00,0x08,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,0x08,0xff,0xe8,0xbc,0xb8,0x00,
+- 0x08,0xff,0xe9,0x81,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0x86,0x99,0x00,
+- 0x08,0xff,0xe9,0x89,0xb6,0x00,0x10,0x08,0x08,0xff,0xe9,0x99,0xbc,0x00,0x08,0xff,
+- 0xe9,0x9b,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0x9d,0x96,0x00,
+- 0x08,0xff,0xe9,0x9f,0x9b,0x00,0x10,0x08,0x08,0xff,0xe9,0x9f,0xbf,0x00,0x08,0xff,
+- 0xe9,0xa0,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0xa0,0xbb,0x00,0x08,0xff,
+- 0xe9,0xac,0x92,0x00,0x10,0x08,0x08,0xff,0xe9,0xbe,0x9c,0x00,0x08,0xff,0xf0,0xa2,
+- 0xa1,0x8a,0x00,0xd3,0x45,0xd2,0x22,0xd1,0x12,0x10,0x09,0x08,0xff,0xf0,0xa2,0xa1,
+- 0x84,0x00,0x08,0xff,0xf0,0xa3,0x8f,0x95,0x00,0x10,0x08,0x08,0xff,0xe3,0xae,0x9d,
+- 0x00,0x08,0xff,0xe4,0x80,0x98,0x00,0xd1,0x11,0x10,0x08,0x08,0xff,0xe4,0x80,0xb9,
+- 0x00,0x08,0xff,0xf0,0xa5,0x89,0x89,0x00,0x10,0x09,0x08,0xff,0xf0,0xa5,0xb3,0x90,
+- 0x00,0x08,0xff,0xf0,0xa7,0xbb,0x93,0x00,0x92,0x14,0x91,0x10,0x10,0x08,0x08,0xff,
+- 0xe9,0xbd,0x83,0x00,0x08,0xff,0xe9,0xbe,0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xe1,0x94,0x01,0xe0,0x08,0x01,0xcf,0x86,0xd5,0x42,0xd4,0x14,0x93,0x10,0x52,0x04,
+- 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd3,0x10,
+- 0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,
+- 0x00,0x00,0xd1,0x0d,0x10,0x04,0x00,0x00,0x04,0xff,0xd7,0x99,0xd6,0xb4,0x00,0x10,
+- 0x04,0x01,0x1a,0x01,0xff,0xd7,0xb2,0xd6,0xb7,0x00,0xd4,0x42,0x53,0x04,0x01,0x00,
+- 0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd7,0xa9,0xd7,0x81,0x00,0x01,
+- 0xff,0xd7,0xa9,0xd7,0x82,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xd7,0xa9,0xd6,0xbc,
+- 0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,0x82,0x00,0x10,0x09,0x01,0xff,
+- 0xd7,0x90,0xd6,0xb7,0x00,0x01,0xff,0xd7,0x90,0xd6,0xb8,0x00,0xd3,0x43,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x91,0xd6,
+- 0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x92,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x93,0xd6,
+- 0xbc,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x94,0xd6,0xbc,0x00,0x01,0xff,0xd7,
+- 0x95,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x96,0xd6,0xbc,0x00,0x00,0x00,0xd2,
+- 0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x98,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x99,
+- 0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x9a,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x9b,
+- 0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0x9c,0xd6,0xbc,0x00,0x00,0x00,
+- 0x10,0x09,0x01,0xff,0xd7,0x9e,0xd6,0xbc,0x00,0x00,0x00,0xcf,0x86,0x95,0x85,0x94,
+- 0x81,0xd3,0x3e,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa0,0xd6,0xbc,0x00,
+- 0x01,0xff,0xd7,0xa1,0xd6,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd7,0xa3,0xd6,
+- 0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0xa4,0xd6,0xbc,0x00,0x00,0x00,0x10,
+- 0x09,0x01,0xff,0xd7,0xa6,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa7,0xd6,0xbc,0x00,0xd2,
+- 0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa8,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa9,
+- 0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0xaa,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x95,
+- 0xd6,0xb9,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x91,0xd6,0xbf,0x00,0x01,0xff,
+- 0xd7,0x9b,0xd6,0xbf,0x00,0x10,0x09,0x01,0xff,0xd7,0xa4,0xd6,0xbf,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,
+- 0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0xcf,0x86,
+- 0x95,0x24,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x5a,0xd2,0x06,0xcf,0x06,0x01,0x00,0xd1,0x14,
+- 0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x08,0x14,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,0x04,0x01,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x0c,
+- 0x94,0x08,0x13,0x04,0x01,0x00,0x00,0x00,0x05,0x00,0x54,0x04,0x05,0x00,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x07,0x00,0x00,0x00,
+- 0xd2,0xcc,0xd1,0xa4,0xd0,0x36,0xcf,0x86,0xd5,0x14,0x54,0x04,0x06,0x00,0x53,0x04,
+- 0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x94,0x1c,0xd3,0x10,
+- 0x52,0x04,0x01,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xdc,0x52,0x04,
+- 0x10,0xdc,0x11,0x04,0x10,0xdc,0x11,0xe6,0x01,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,
+- 0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
+- 0x06,0x00,0x07,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0xd4,0x18,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,
+- 0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd1,0x50,0xd0,0x1e,
+- 0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,
+- 0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x06,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,
+- 0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x18,
+- 0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x92,0x08,0x11,0x04,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0xd2,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x53,0x04,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x04,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x03,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,
+- 0x30,0x3e,0xe1,0x1a,0x3b,0xe0,0x97,0x39,0xcf,0x86,0xe5,0x3b,0x26,0xc4,0xe3,0x16,
+- 0x14,0xe2,0xef,0x11,0xe1,0xd0,0x10,0xe0,0x60,0x07,0xcf,0x86,0xe5,0x53,0x03,0xe4,
+- 0x4c,0x02,0xe3,0x3d,0x01,0xd2,0x94,0xd1,0x70,0xd0,0x4a,0xcf,0x86,0xd5,0x18,0x94,
+- 0x14,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,
+- 0x00,0x07,0x00,0x07,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,
+- 0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x07,0x00,0x53,0x04,0x07,0x00,0xd2,0x0c,0x51,
+- 0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,
+- 0x00,0x07,0x00,0xcf,0x86,0x95,0x20,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,0x07,
+- 0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,
+- 0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,
+- 0x04,0x07,0x00,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,
+- 0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,
+- 0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
+- 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,
+- 0x04,0x07,0x00,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x07,0x00,0x07,0x00,0xcf,0x06,0x08,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,
+- 0x20,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x10,
+- 0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x53,
+- 0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x08,0x14,0x04,0x00,0x00,0x0a,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,
+- 0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x00,0x00,0xd2,
+- 0x5e,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,
+- 0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,
+- 0x00,0x00,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0a,0x00,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,
+- 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0xdc,0x10,0x00,0x10,0x00,0x10,
+- 0x00,0x10,0x00,0x53,0x04,0x10,0x00,0x12,0x04,0x10,0x00,0x00,0x00,0xd1,0x70,0xd0,
+- 0x36,0xcf,0x86,0xd5,0x18,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,
+- 0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,
+- 0x04,0x05,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x13,
+- 0x00,0x13,0x00,0x05,0x00,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x05,0x00,0x92,
+- 0x0c,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0x54,
+- 0x04,0x10,0x00,0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x10,0xe6,0x92,
+- 0x0c,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,
+- 0x86,0x95,0x18,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x51,
+- 0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x08,0x00,0xcf,0x86,0x95,0x1c,0xd4,
+- 0x0c,0x93,0x08,0x12,0x04,0x08,0x00,0x00,0x00,0x08,0x00,0x93,0x0c,0x52,0x04,0x08,
+- 0x00,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0xba,0xd2,0x80,0xd1,
+- 0x34,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x05,
+- 0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0x95,0x14,0x94,
+- 0x10,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x07,
+- 0x00,0x07,0x00,0xd0,0x2a,0xcf,0x86,0xd5,0x14,0x54,0x04,0x07,0x00,0x53,0x04,0x07,
+- 0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x07,
+- 0x00,0x92,0x08,0x11,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xcf,0x86,0xd5,
+- 0x10,0x54,0x04,0x12,0x00,0x93,0x08,0x12,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x54,
+- 0x04,0x12,0x00,0x53,0x04,0x12,0x00,0x12,0x04,0x12,0x00,0x00,0x00,0xd1,0x34,0xd0,
+- 0x12,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x10,
+- 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x10,0x00,0x00,
+- 0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xd2,0x06,0xcf,0x06,0x10,0x00,0xd1,0x40,0xd0,0x1e,0xcf,
+- 0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,
+- 0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x14,0x54,
+- 0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x00,
+- 0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,
+- 0xce,0x02,0xe3,0x45,0x01,0xd2,0xd0,0xd1,0x70,0xd0,0x52,0xcf,0x86,0xd5,0x20,0x94,
+- 0x1c,0xd3,0x0c,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,
+- 0x00,0xd3,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,
+- 0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x08,0x10,
+- 0x04,0x07,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,0x95,0x18,0x54,
+- 0x04,0x0b,0x00,0x93,0x10,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,
+- 0x00,0x0b,0x00,0x0b,0x00,0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,
+- 0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,
+- 0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,
+- 0x04,0x11,0x00,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,
+- 0x00,0x11,0x04,0x11,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x11,0x00,0x11,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x1c,0x54,0x04,0x09,
+- 0x00,0x53,0x04,0x09,0x00,0xd2,0x08,0x11,0x04,0x09,0x00,0x0b,0x00,0x51,0x04,0x00,
+- 0x00,0x10,0x04,0x00,0x00,0x09,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,
+- 0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,
+- 0x00,0xcf,0x06,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,
+- 0x00,0x53,0x04,0x0d,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x11,0x00,0x0d,0x00,0xcf,
+- 0x86,0x95,0x14,0x54,0x04,0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x11,
+- 0x00,0x11,0x00,0x11,0x00,0x11,0x00,0xd2,0xec,0xd1,0xa4,0xd0,0x76,0xcf,0x86,0xd5,
+- 0x48,0xd4,0x28,0xd3,0x14,0x52,0x04,0x08,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x08,
+- 0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x08,
+- 0x00,0x08,0xdc,0x10,0x04,0x08,0x00,0x08,0xe6,0xd3,0x10,0x52,0x04,0x08,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x08,0x00,0x08,0x00,0x08,0x00,0x54,0x04,0x08,0x00,0xd3,0x0c,0x52,0x04,0x08,
+- 0x00,0x11,0x04,0x14,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe6,0x08,
+- 0x01,0x10,0x04,0x08,0xdc,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,
+- 0x09,0xcf,0x86,0x95,0x28,0xd4,0x14,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x0a,0xcf,
+- 0x86,0x15,0x04,0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x24,0xd3,
+- 0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0xe6,0x10,0x04,0x10,
+- 0xdc,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,
+- 0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0xd1,0x54,0xd0,0x26,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,
+- 0x00,0xd3,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x14,0x54,
+- 0x04,0x0b,0x00,0x93,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x0b,
+- 0x00,0x54,0x04,0x0b,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
+- 0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x54,0x04,0x10,
+- 0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x96,0xd2,
+- 0x68,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x0b,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,
+- 0x04,0x0b,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,
+- 0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0xd3,0x10,0x92,
+- 0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x92,0x08,0x11,
+- 0x04,0x00,0x00,0x11,0x00,0x11,0x00,0xd1,0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,
+- 0x00,0xd4,0x0c,0x93,0x08,0x12,0x04,0x14,0x00,0x14,0xe6,0x00,0x00,0x53,0x04,0x14,
+- 0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,
+- 0x06,0x00,0x00,0xd2,0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,
+- 0x04,0x00,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,
+- 0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x58,0xd0,
+- 0x12,0xcf,0x86,0x55,0x04,0x14,0x00,0x94,0x08,0x13,0x04,0x14,0x00,0x00,0x00,0x14,
+- 0x00,0xcf,0x86,0x95,0x40,0xd4,0x24,0xd3,0x0c,0x52,0x04,0x14,0x00,0x11,0x04,0x14,
+- 0x00,0x14,0xdc,0xd2,0x0c,0x51,0x04,0x14,0xe6,0x10,0x04,0x14,0xe6,0x14,0xdc,0x91,
+- 0x08,0x10,0x04,0x14,0xe6,0x14,0xdc,0x14,0xdc,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x14,0xdc,0x14,0x00,0x14,0x00,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,
+- 0x00,0x54,0x04,0x15,0x00,0x93,0x10,0x52,0x04,0x15,0x00,0x51,0x04,0x15,0x00,0x10,
+- 0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xe5,0x0f,0x06,0xe4,0xf8,0x03,0xe3,
+- 0x02,0x02,0xd2,0xfb,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0c,0x00,0xcf,0x86,0xd5,0x2c,
+- 0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x09,
+- 0x0c,0x00,0x52,0x04,0x0c,0x00,0x11,0x04,0x0c,0x00,0x00,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,
+- 0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x09,
+- 0xd0,0x69,0xcf,0x86,0xd5,0x32,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x15,
+- 0x51,0x04,0x0b,0x00,0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x99,0xf0,0x91,0x82,0xba,
+- 0x00,0x0b,0x00,0x91,0x11,0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x9b,0xf0,0x91,0x82,
+- 0xba,0x00,0x0b,0x00,0x0b,0x00,0xd4,0x1d,0x53,0x04,0x0b,0x00,0x92,0x15,0x51,0x04,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xff,0xf0,0x91,0x82,0xa5,0xf0,0x91,0x82,0xba,
+- 0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,
+- 0x09,0x10,0x04,0x0b,0x07,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,
+- 0x0c,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x00,0x00,0x0d,0x00,0xd4,0x14,0x53,0x04,0x0d,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,
+- 0x04,0x0d,0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0xd1,0x96,0xd0,
+- 0x5c,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x0d,0xe6,0x10,
+- 0x04,0x0d,0xe6,0x0d,0x00,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd4,0x26,0x53,0x04,0x0d,
+- 0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x0d,0x0d,0xff,0xf0,0x91,0x84,
+- 0xb1,0xf0,0x91,0x84,0xa7,0x00,0x0d,0xff,0xf0,0x91,0x84,0xb2,0xf0,0x91,0x84,0xa7,
+- 0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x0d,0x09,0x91,
+- 0x08,0x10,0x04,0x0d,0x09,0x00,0x00,0x0d,0x00,0x0d,0x00,0xcf,0x86,0xd5,0x18,0x94,
+- 0x14,0x93,0x10,0x52,0x04,0x0d,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,
+- 0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x10,
+- 0x00,0x10,0x04,0x10,0x00,0x10,0x07,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x09,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd2,
+- 0x10,0xd1,0x08,0x10,0x04,0x0d,0x00,0x11,0x00,0x10,0x04,0x11,0x07,0x11,0x00,0x91,
+- 0x08,0x10,0x04,0x11,0x00,0x10,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x0c,0x51,
+- 0x04,0x0d,0x00,0x10,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x93,
+- 0x10,0x52,0x04,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xd2,0xc8,0xd1,0x48,0xd0,0x42,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x93,
+- 0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,
+- 0x00,0x54,0x04,0x10,0x00,0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,
+- 0x00,0x10,0x09,0x10,0x04,0x10,0x07,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,
+- 0x00,0x10,0x04,0x12,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x52,0xcf,0x86,0xd5,
+- 0x3c,0xd4,0x28,0xd3,0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,
+- 0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x00,0x51,
+- 0x04,0x11,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,
+- 0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x94,0x10,0x53,0x04,0x11,
+- 0x00,0x92,0x08,0x11,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,
+- 0x04,0x10,0x00,0xd4,0x18,0x53,0x04,0x10,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x10,
+- 0x00,0x10,0x07,0x10,0x04,0x10,0x09,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,
+- 0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xe1,0x27,0x01,0xd0,0x8a,0xcf,0x86,
+- 0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x10,0x00,
+- 0x10,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,0x00,
+- 0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x93,0x14,
+- 0x92,0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0x10,0x00,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
+- 0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x14,0x07,0x91,0x08,0x10,0x04,
+- 0x10,0x07,0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x6a,0xd4,0x42,0xd3,0x14,0x52,0x04,
+- 0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0xd2,0x19,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0xff,
+- 0xf0,0x91,0x8d,0x87,0xf0,0x91,0x8c,0xbe,0x00,0x91,0x11,0x10,0x0d,0x10,0xff,0xf0,
+- 0x91,0x8d,0x87,0xf0,0x91,0x8d,0x97,0x00,0x10,0x09,0x00,0x00,0xd3,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x10,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0x10,0x00,0xd4,0x1c,0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0xe6,
+- 0x52,0x04,0x10,0xe6,0x91,0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x93,0x10,
+- 0x52,0x04,0x10,0xe6,0x91,0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x06,0x00,0x00,0xe3,0x30,0x01,0xd2,0xb7,0xd1,0x48,0xd0,0x06,0xcf,0x06,0x12,
+- 0x00,0xcf,0x86,0x95,0x3c,0xd4,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,
+- 0x04,0x12,0x09,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x07,0x12,0x00,0x12,
+- 0x00,0x53,0x04,0x12,0x00,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x00,0x00,0x12,
+- 0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x12,0x00,0x10,0x04,0x14,0xe6,0x15,0x00,0x00,
+- 0x00,0xd0,0x45,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,
+- 0x00,0xd2,0x15,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,0xff,0xf0,0x91,0x92,
+- 0xb9,0xf0,0x91,0x92,0xba,0x00,0xd1,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,
+- 0xf0,0x91,0x92,0xb0,0x00,0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,
+- 0x91,0x92,0xbd,0x00,0x10,0x00,0xcf,0x86,0x95,0x24,0xd4,0x14,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x09,0x10,0x07,0x10,0x00,0x00,0x00,0x53,0x04,
+- 0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x06,
+- 0xcf,0x06,0x00,0x00,0xd0,0x40,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,
+- 0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0xd2,0x1e,0x51,0x04,
+- 0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x96,0xb8,0xf0,0x91,0x96,0xaf,0x00,0x10,
+- 0xff,0xf0,0x91,0x96,0xb9,0xf0,0x91,0x96,0xaf,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
+- 0x10,0x00,0x10,0x09,0xcf,0x86,0x95,0x2c,0xd4,0x1c,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x10,0x07,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,
+- 0x11,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,
+- 0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,0x5c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,
+- 0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x10,0x00,0x10,0x09,0xcf,0x86,0xd5,0x24,0xd4,0x14,0x93,0x10,0x52,0x04,
+- 0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,
+- 0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,0x53,0x04,
+- 0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0xd3,0x10,
+- 0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x0d,0x07,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x14,
+- 0x94,0x10,0x53,0x04,0x0d,0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x11,0x00,
+- 0x53,0x04,0x11,0x00,0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x11,0x00,0x11,0x00,0x94,0x14,0x53,0x04,0x11,0x00,
+- 0x92,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x11,0x09,0x00,0x00,0x11,0x00,
+- 0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,0x59,0x01,0xd3,0xb2,0xd2,0x5c,0xd1,
+- 0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,
+- 0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x14,0x00,0x14,0x09,0x10,0x04,0x14,0x07,0x14,
+- 0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x00,0x00,0x10,
+- 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x10,0x92,0x0c,0x51,
+- 0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,
+- 0x1a,0xcf,0x86,0x55,0x04,0x00,0x00,0x94,0x10,0x53,0x04,0x15,0x00,0x92,0x08,0x11,
+- 0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x15,
+- 0x00,0x53,0x04,0x15,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x94,
+- 0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x15,0x09,0x15,0x00,0x15,0x00,0x91,
+- 0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,
+- 0x3c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x93,0x10,0x52,
+- 0x04,0x13,0x00,0x91,0x08,0x10,0x04,0x13,0x09,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,
+- 0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,
+- 0x04,0x13,0x00,0x13,0x09,0x00,0x00,0x13,0x00,0x13,0x00,0xd0,0x46,0xcf,0x86,0xd5,
+- 0x2c,0xd4,0x10,0x93,0x0c,0x52,0x04,0x13,0x00,0x11,0x04,0x15,0x00,0x13,0x00,0x13,
+- 0x00,0x53,0x04,0x13,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x13,0x09,0x13,
+- 0x00,0x91,0x08,0x10,0x04,0x13,0x00,0x14,0x00,0x13,0x00,0x94,0x14,0x93,0x10,0x92,
+- 0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xe3,0xa9,0x01,0xd2,0xb0,0xd1,0x6c,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,
+- 0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x12,0x00,
+- 0x12,0x00,0x12,0x00,0x54,0x04,0x12,0x00,0xd3,0x10,0x52,0x04,0x12,0x00,0x51,0x04,
+- 0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,
+- 0x10,0x04,0x12,0x00,0x12,0x09,0xcf,0x86,0xd5,0x14,0x94,0x10,0x93,0x0c,0x52,0x04,
+- 0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0x94,0x14,0x53,0x04,
+- 0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0x12,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,0x12,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x93,0x10,
+- 0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x06,0x00,0x00,0xd1,0xa0,0xd0,0x52,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,
+- 0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x92,0x0c,
+- 0x51,0x04,0x13,0x00,0x10,0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x54,0x04,
+- 0x13,0x00,0xd3,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,
+- 0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x51,0x04,
+- 0x13,0x00,0x10,0x04,0x00,0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0x93,0x14,
+- 0xd2,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x07,0x13,0x00,0x11,0x04,0x13,0x09,
+- 0x13,0x00,0x00,0x00,0x53,0x04,0x13,0x00,0x92,0x08,0x11,0x04,0x13,0x00,0x00,0x00,
+- 0x00,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,
+- 0x00,0x00,0x14,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x14,0x00,
+- 0x14,0x00,0x14,0x00,0xd0,0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x14,0x53,0x04,0x14,0x00,
+- 0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0xd3,0x18,
+- 0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x51,0x04,0x14,0x00,
+- 0x10,0x04,0x14,0x00,0x14,0x09,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,
+- 0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,
+- 0x14,0x00,0x53,0x04,0x14,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,
+- 0xcf,0x86,0x55,0x04,0x15,0x00,0x54,0x04,0x15,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,
+- 0x15,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x15,0x00,0xd0,0xca,0xcf,0x86,0xd5,0xc2,0xd4,0x54,0xd3,0x06,0xcf,0x06,
+- 0x09,0x00,0xd2,0x06,0xcf,0x06,0x09,0x00,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x09,0x00,
+- 0xcf,0x86,0x55,0x04,0x09,0x00,0x94,0x14,0x53,0x04,0x09,0x00,0x52,0x04,0x09,0x00,
+- 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,0x10,0x00,0xd0,0x1e,0xcf,0x86,
+- 0x95,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x68,
+- 0xd2,0x46,0xd1,0x40,0xd0,0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,
+- 0xd4,0x20,0xd3,0x10,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,
+- 0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,
+- 0x93,0x10,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x06,0x11,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,
+- 0x95,0x10,0x94,0x0c,0x93,0x08,0x12,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,
+- 0xd5,0x4c,0xd4,0x06,0xcf,0x06,0x0b,0x00,0xd3,0x40,0xd2,0x3a,0xd1,0x34,0xd0,0x2e,
+- 0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x53,0x04,0x15,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,
+- 0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,
+- 0xd1,0x4c,0xd0,0x44,0xcf,0x86,0xd5,0x3c,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,
+- 0xcf,0x06,0x11,0x00,0xd2,0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,
+- 0x95,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,
+- 0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0xd2,0x01,0xcf,
+- 0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x0b,0x01,0xd3,0x06,0xcf,0x06,0x0c,0x00,
+- 0xd2,0x84,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0c,0x00,0x54,0x04,0x0c,0x00,
+- 0x53,0x04,0x0c,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,
+- 0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,0x53,0x04,
+- 0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,
+- 0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x00,0x00,
+- 0x10,0x00,0xd4,0x10,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,
+- 0x00,0x00,0x93,0x10,0x52,0x04,0x10,0x01,0x91,0x08,0x10,0x04,0x10,0x01,0x10,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,
+- 0x10,0x00,0x93,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,
+- 0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x24,0xd4,0x10,0x93,0x0c,0x52,0x04,0x10,0x00,
+- 0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,
+- 0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,
+- 0x10,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
+- 0xd0,0x0e,0xcf,0x86,0x95,0x08,0x14,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,
+- 0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,0x30,0xd1,0x0c,0xd0,0x06,0xcf,0x06,
+- 0x00,0x00,0xcf,0x06,0x14,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x14,0x00,
+- 0x53,0x04,0x14,0x00,0x92,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0d,0x00,
+- 0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,0x52,0x04,0x0d,0x00,0x91,0x08,0x10,0x04,
+- 0x0d,0x00,0x15,0x00,0x15,0x00,0xd2,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,
+- 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0d,0x00,0x54,0x04,
+- 0x0d,0x00,0x53,0x04,0x0d,0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,
+- 0x0d,0x00,0x15,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x15,0x00,
+- 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x0d,0x00,
+- 0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x12,0x00,0x13,0x00,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
+- 0xcf,0x06,0x12,0x00,0xe2,0xc5,0x01,0xd1,0x8e,0xd0,0x86,0xcf,0x86,0xd5,0x48,0xd4,
+- 0x06,0xcf,0x06,0x12,0x00,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x06,0xcf,0x06,0x12,
+- 0x00,0xd1,0x06,0xcf,0x06,0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,
+- 0x04,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x14,0x00,0x14,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x14,0x00,0x15,
+- 0x00,0x15,0x00,0x00,0x00,0xd4,0x36,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x2a,0xd1,
+- 0x06,0xcf,0x06,0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,
+- 0x00,0x54,0x04,0x12,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,
++ 0x10,0x04,0x0c,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,
++ 0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,
++ 0x0c,0x00,0x00,0x00,0x00,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,
++ 0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,
++ 0x10,0x04,0x0c,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x10,
++ 0x93,0x0c,0x52,0x04,0x11,0x00,0x11,0x04,0x10,0x00,0x15,0x00,0x00,0x00,0x11,0x00,
++ 0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,
++ 0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x00,0x00,
++ 0x53,0x04,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,
++ 0x02,0xff,0xff,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd1,0x76,0xd0,0x09,0xcf,0x86,
++ 0xcf,0x06,0x02,0xff,0xff,0xcf,0x86,0x85,0xd4,0x07,0xcf,0x06,0x02,0xff,0xff,0xd3,
++ 0x07,0xcf,0x06,0x02,0xff,0xff,0xd2,0x07,0xcf,0x06,0x02,0xff,0xff,0xd1,0x07,0xcf,
++ 0x06,0x02,0xff,0xff,0xd0,0x18,0xcf,0x86,0x55,0x05,0x02,0xff,0xff,0x94,0x0d,0x93,
++ 0x09,0x12,0x05,0x02,0xff,0xff,0x00,0x00,0x00,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x24,
++ 0x94,0x20,0xd3,0x10,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,
++ 0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,
++ 0x0b,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x00,0x00,
++ 0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,
++ 0xe4,0x9c,0x10,0xe3,0x16,0x08,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x08,0x04,0xe0,
++ 0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0x10,0x08,0x01,
++ 0xff,0xe8,0xbb,0x8a,0x00,0x01,0xff,0xe8,0xb3,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe6,0xbb,0x91,0x00,0x01,0xff,0xe4,0xb8,0xb2,0x00,0x10,0x08,0x01,0xff,0xe5,
++ 0x8f,0xa5,0x00,0x01,0xff,0xe9,0xbe,0x9c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe9,0xbe,0x9c,0x00,0x01,0xff,0xe5,0xa5,0x91,0x00,0x10,0x08,0x01,0xff,0xe9,
++ 0x87,0x91,0x00,0x01,0xff,0xe5,0x96,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
++ 0xa5,0x88,0x00,0x01,0xff,0xe6,0x87,0xb6,0x00,0x10,0x08,0x01,0xff,0xe7,0x99,0xa9,
++ 0x00,0x01,0xff,0xe7,0xbe,0x85,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe8,0x98,0xbf,0x00,0x01,0xff,0xe8,0x9e,0xba,0x00,0x10,0x08,0x01,0xff,0xe8,
++ 0xa3,0xb8,0x00,0x01,0xff,0xe9,0x82,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,
++ 0xa8,0x82,0x00,0x01,0xff,0xe6,0xb4,0x9b,0x00,0x10,0x08,0x01,0xff,0xe7,0x83,0x99,
++ 0x00,0x01,0xff,0xe7,0x8f,0x9e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
++ 0x90,0xbd,0x00,0x01,0xff,0xe9,0x85,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0xa7,0xb1,
++ 0x00,0x01,0xff,0xe4,0xba,0x82,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x8d,0xb5,
++ 0x00,0x01,0xff,0xe6,0xac,0x84,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x9b,0x00,0x01,
++ 0xff,0xe8,0x98,0xad,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe9,0xb8,0x9e,0x00,0x01,0xff,0xe5,0xb5,0x90,0x00,0x10,0x08,0x01,0xff,0xe6,
++ 0xbf,0xab,0x00,0x01,0xff,0xe8,0x97,0x8d,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
++ 0xa5,0xa4,0x00,0x01,0xff,0xe6,0x8b,0x89,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0x98,
++ 0x00,0x01,0xff,0xe8,0xa0,0x9f,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
++ 0xbb,0x8a,0x00,0x01,0xff,0xe6,0x9c,0x97,0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0xaa,
++ 0x00,0x01,0xff,0xe7,0x8b,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x83,0x8e,
++ 0x00,0x01,0xff,0xe4,0xbe,0x86,0x00,0x10,0x08,0x01,0xff,0xe5,0x86,0xb7,0x00,0x01,
++ 0xff,0xe5,0x8b,0x9e,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,
++ 0x93,0x84,0x00,0x01,0xff,0xe6,0xab,0x93,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x90,
++ 0x00,0x01,0xff,0xe7,0x9b,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x80,0x81,
++ 0x00,0x01,0xff,0xe8,0x98,0x86,0x00,0x10,0x08,0x01,0xff,0xe8,0x99,0x9c,0x00,0x01,
++ 0xff,0xe8,0xb7,0xaf,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9c,0xb2,
++ 0x00,0x01,0xff,0xe9,0xad,0xaf,0x00,0x10,0x08,0x01,0xff,0xe9,0xb7,0xba,0x00,0x01,
++ 0xff,0xe7,0xa2,0x8c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa5,0xbf,0x00,0x01,
++ 0xff,0xe7,0xb6,0xa0,0x00,0x10,0x08,0x01,0xff,0xe8,0x8f,0x89,0x00,0x01,0xff,0xe9,
++ 0x8c,0x84,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0x10,0x08,
++ 0x01,0xff,0xe5,0xa3,0x9f,0x00,0x01,0xff,0xe5,0xbc,0x84,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe7,0xb1,0xa0,0x00,0x01,0xff,0xe8,0x81,0xbe,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0x89,0xa2,0x00,0x01,0xff,0xe7,0xa3,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe8,0xb3,0x82,0x00,0x01,0xff,0xe9,0x9b,0xb7,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0xa3,0x98,0x00,0x01,0xff,0xe5,0xb1,0xa2,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xa8,0x93,0x00,0x01,0xff,0xe6,0xb7,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,
++ 0x8f,0x00,0x01,0xff,0xe7,0xb4,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,0x99,0x8b,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0x8b,0x92,0x00,0x01,0xff,0xe8,0x82,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe5,0x87,0x9c,0x00,0x01,0xff,0xe5,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe7,0xa8,
++ 0x9c,0x00,0x01,0xff,0xe7,0xb6,0xbe,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe8,0x8f,0xb1,0x00,0x01,0xff,0xe9,0x99,0xb5,0x00,0x10,0x08,0x01,0xff,0xe8,0xae,
++ 0x80,0x00,0x01,0xff,0xe6,0x8b,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,
++ 0x82,0x00,0x01,0xff,0xe8,0xab,0xbe,0x00,0x10,0x08,0x01,0xff,0xe4,0xb8,0xb9,0x00,
++ 0x01,0xff,0xe5,0xaf,0xa7,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe6,0x80,0x92,0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0x95,0xb0,0x00,0x01,0xff,0xe5,0x8c,0x97,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe7,0xa3,0xbb,0x00,0x01,0xff,0xe4,0xbe,0xbf,0x00,0x10,0x08,0x01,0xff,0xe5,0xbe,
++ 0xa9,0x00,0x01,0xff,0xe4,0xb8,0x8d,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xb3,0x8c,0x00,0x01,0xff,0xe6,0x95,0xb8,0x00,0x10,0x08,0x01,0xff,0xe7,0xb4,
++ 0xa2,0x00,0x01,0xff,0xe5,0x8f,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa1,
++ 0x9e,0x00,0x01,0xff,0xe7,0x9c,0x81,0x00,0x10,0x08,0x01,0xff,0xe8,0x91,0x89,0x00,
++ 0x01,0xff,0xe8,0xaa,0xaa,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xae,0xba,0x00,0x01,0xff,0xe8,0xbe,0xb0,0x00,0x10,0x08,0x01,0xff,0xe6,0xb2,
++ 0x88,0x00,0x01,0xff,0xe6,0x8b,0xbe,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8b,
++ 0xa5,0x00,0x01,0xff,0xe6,0x8e,0xa0,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xa5,0x00,
++ 0x01,0xff,0xe4,0xba,0xae,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,
++ 0xa9,0x00,0x01,0xff,0xe5,0x87,0x89,0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0x81,0x00,
++ 0x01,0xff,0xe7,0xb3,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x89,0xaf,0x00,
++ 0x01,0xff,0xe8,0xab,0x92,0x00,0x10,0x08,0x01,0xff,0xe9,0x87,0x8f,0x00,0x01,0xff,
++ 0xe5,0x8b,0xb5,0x00,0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x91,0x82,0x00,0x01,0xff,0xe5,0xa5,
++ 0xb3,0x00,0x10,0x08,0x01,0xff,0xe5,0xbb,0xac,0x00,0x01,0xff,0xe6,0x97,0x85,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xbf,0xbe,0x00,0x01,0xff,0xe7,0xa4,0xaa,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0x96,0xad,0x00,0x01,0xff,0xe9,0xa9,0xaa,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xba,0x97,0x00,0x01,0xff,0xe9,0xbb,0x8e,0x00,
++ 0x10,0x08,0x01,0xff,0xe5,0x8a,0x9b,0x00,0x01,0xff,0xe6,0x9b,0x86,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe6,0xad,0xb7,0x00,0x01,0xff,0xe8,0xbd,0xa2,0x00,0x10,0x08,
++ 0x01,0xff,0xe5,0xb9,0xb4,0x00,0x01,0xff,0xe6,0x86,0x90,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x88,0x80,0x00,0x01,0xff,0xe6,0x92,0x9a,0x00,
++ 0x10,0x08,0x01,0xff,0xe6,0xbc,0xa3,0x00,0x01,0xff,0xe7,0x85,0x89,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe7,0x92,0x89,0x00,0x01,0xff,0xe7,0xa7,0x8a,0x00,0x10,0x08,
++ 0x01,0xff,0xe7,0xb7,0xb4,0x00,0x01,0xff,0xe8,0x81,0xaf,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe8,0xbc,0xa6,0x00,0x01,0xff,0xe8,0x93,0xae,0x00,0x10,0x08,
++ 0x01,0xff,0xe9,0x80,0xa3,0x00,0x01,0xff,0xe9,0x8d,0x8a,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe5,0x88,0x97,0x00,0x01,0xff,0xe5,0x8a,0xa3,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0x92,0xbd,0x00,0x01,0xff,0xe7,0x83,0x88,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa3,0x82,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,
++ 0x10,0x08,0x01,0xff,0xe5,0xbb,0x89,0x00,0x01,0xff,0xe5,0xbf,0xb5,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe6,0x8d,0xbb,0x00,0x01,0xff,0xe6,0xae,0xae,0x00,0x10,0x08,
++ 0x01,0xff,0xe7,0xb0,0xbe,0x00,0x01,0xff,0xe7,0x8d,0xb5,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe4,0xbb,0xa4,0x00,0x01,0xff,0xe5,0x9b,0xb9,0x00,0x10,0x08,
++ 0x01,0xff,0xe5,0xaf,0xa7,0x00,0x01,0xff,0xe5,0xb6,0xba,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe6,0x80,0x9c,0x00,0x01,0xff,0xe7,0x8e,0xb2,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0x91,0xa9,0x00,0x01,0xff,0xe7,0xbe,0x9a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe8,0x81,0x86,0x00,0x01,0xff,0xe9,0x88,0xb4,0x00,0x10,0x08,
++ 0x01,0xff,0xe9,0x9b,0xb6,0x00,0x01,0xff,0xe9,0x9d,0x88,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe9,0xa0,0x98,0x00,0x01,0xff,0xe4,0xbe,0x8b,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0xa6,0xae,0x00,0x01,0xff,0xe9,0x86,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe9,0x9a,0xb8,0x00,0x01,0xff,0xe6,0x83,0xa1,0x00,0x10,0x08,0x01,0xff,
++ 0xe4,0xba,0x86,0x00,0x01,0xff,0xe5,0x83,0x9a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe5,0xaf,0xae,0x00,0x01,0xff,0xe5,0xb0,0xbf,0x00,0x10,0x08,0x01,0xff,0xe6,0x96,
++ 0x99,0x00,0x01,0xff,0xe6,0xa8,0x82,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0x87,0x8e,0x00,0x01,0xff,0xe7,
++ 0x99,0x82,0x00,0x10,0x08,0x01,0xff,0xe8,0x93,0xbc,0x00,0x01,0xff,0xe9,0x81,0xbc,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xbe,0x8d,0x00,0x01,0xff,0xe6,0x9a,0x88,
++ 0x00,0x10,0x08,0x01,0xff,0xe9,0x98,0xae,0x00,0x01,0xff,0xe5,0x8a,0x89,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x9d,0xbb,0x00,0x01,0xff,0xe6,0x9f,0xb3,
++ 0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0x81,0x00,0x01,0xff,0xe6,0xba,0x9c,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe7,0x90,0x89,0x00,0x01,0xff,0xe7,0x95,0x99,0x00,0x10,
++ 0x08,0x01,0xff,0xe7,0xa1,0xab,0x00,0x01,0xff,0xe7,0xb4,0x90,0x00,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa1,0x9e,0x00,0x01,0xff,0xe5,0x85,0xad,
++ 0x00,0x10,0x08,0x01,0xff,0xe6,0x88,0xae,0x00,0x01,0xff,0xe9,0x99,0xb8,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe5,0x80,0xab,0x00,0x01,0xff,0xe5,0xb4,0x99,0x00,0x10,
++ 0x08,0x01,0xff,0xe6,0xb7,0xaa,0x00,0x01,0xff,0xe8,0xbc,0xaa,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe5,0xbe,0x8b,0x00,0x01,0xff,0xe6,0x85,0x84,0x00,0x10,
++ 0x08,0x01,0xff,0xe6,0xa0,0x97,0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe9,0x9a,0x86,0x00,0x01,0xff,0xe5,0x88,0xa9,0x00,0x10,0x08,0x01,
++ 0xff,0xe5,0x90,0x8f,0x00,0x01,0xff,0xe5,0xb1,0xa5,0x00,0xd4,0x80,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x98,0x93,0x00,0x01,0xff,0xe6,0x9d,0x8e,
++ 0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0xa8,0x00,0x01,0xff,0xe6,0xb3,0xa5,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe7,0x90,0x86,0x00,0x01,0xff,0xe7,0x97,0xa2,0x00,0x10,
++ 0x08,0x01,0xff,0xe7,0xbd,0xb9,0x00,0x01,0xff,0xe8,0xa3,0x8f,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe8,0xa3,0xa1,0x00,0x01,0xff,0xe9,0x87,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0xe9,0x9b,0xa2,0x00,0x01,0xff,0xe5,0x8c,0xbf,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe6,0xba,0xba,0x00,0x01,0xff,0xe5,0x90,0x9d,0x00,0x10,0x08,0x01,
++ 0xff,0xe7,0x87,0x90,0x00,0x01,0xff,0xe7,0x92,0x98,0x00,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe8,0x97,0xba,0x00,0x01,0xff,0xe9,0x9a,0xa3,0x00,0x10,
++ 0x08,0x01,0xff,0xe9,0xb1,0x97,0x00,0x01,0xff,0xe9,0xba,0x9f,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe6,0x9e,0x97,0x00,0x01,0xff,0xe6,0xb7,0x8b,0x00,0x10,0x08,0x01,
++ 0xff,0xe8,0x87,0xa8,0x00,0x01,0xff,0xe7,0xab,0x8b,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe7,0xac,0xa0,0x00,0x01,0xff,0xe7,0xb2,0x92,0x00,0x10,0x08,0x01,
++ 0xff,0xe7,0x8b,0x80,0x00,0x01,0xff,0xe7,0x82,0x99,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe8,0xad,0x98,0x00,0x01,0xff,0xe4,0xbb,0x80,0x00,0x10,0x08,0x01,0xff,0xe8,
++ 0x8c,0xb6,0x00,0x01,0xff,0xe5,0x88,0xba,0x00,0xe2,0xad,0x06,0xe1,0xc4,0x03,0xe0,
++ 0xcb,0x01,0xcf,0x86,0xd5,0xe4,0xd4,0x74,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe5,0x88,0x87,0x00,0x01,0xff,0xe5,0xba,0xa6,0x00,0x10,0x08,0x01,0xff,
++ 0xe6,0x8b,0x93,0x00,0x01,0xff,0xe7,0xb3,0x96,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe5,0xae,0x85,0x00,0x01,0xff,0xe6,0xb4,0x9e,0x00,0x10,0x08,0x01,0xff,0xe6,0x9a,
++ 0xb4,0x00,0x01,0xff,0xe8,0xbc,0xbb,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe8,0xa1,0x8c,0x00,0x01,0xff,0xe9,0x99,0x8d,0x00,0x10,0x08,0x01,0xff,0xe8,0xa6,
++ 0x8b,0x00,0x01,0xff,0xe5,0xbb,0x93,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,
++ 0x80,0x00,0x01,0xff,0xe5,0x97,0x80,0x00,0x01,0x00,0xd3,0x34,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x01,0xff,0xe5,0xa1,0x9a,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe6,0x99,
++ 0xb4,0x00,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe5,0x87,0x9e,0x00,
++ 0x10,0x08,0x01,0xff,0xe7,0x8c,0xaa,0x00,0x01,0xff,0xe7,0x9b,0x8a,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa4,0xbc,0x00,0x01,0xff,0xe7,0xa5,0x9e,0x00,
++ 0x10,0x08,0x01,0xff,0xe7,0xa5,0xa5,0x00,0x01,0xff,0xe7,0xa6,0x8f,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe9,0x9d,0x96,0x00,0x01,0xff,0xe7,0xb2,0xbe,0x00,0x10,0x08,
++ 0x01,0xff,0xe7,0xbe,0xbd,0x00,0x01,0x00,0xd4,0x64,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x01,0xff,0xe8,0x98,0x92,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe8,0xab,
++ 0xb8,0x00,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe9,0x80,0xb8,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0x83,0xbd,0x00,0x01,0x00,0xd2,0x14,0x51,0x04,0x01,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0xa3,0xaf,0x00,0x01,0xff,0xe9,0xa3,0xbc,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe9,0xa4,0xa8,0x00,0x01,0xff,0xe9,0xb6,0xb4,0x00,0x10,0x08,
++ 0x0d,0xff,0xe9,0x83,0x9e,0x00,0x0d,0xff,0xe9,0x9a,0xb7,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x06,0xff,0xe4,0xbe,0xae,0x00,0x06,0xff,0xe5,0x83,0xa7,0x00,
++ 0x10,0x08,0x06,0xff,0xe5,0x85,0x8d,0x00,0x06,0xff,0xe5,0x8b,0x89,0x00,0xd1,0x10,
++ 0x10,0x08,0x06,0xff,0xe5,0x8b,0xa4,0x00,0x06,0xff,0xe5,0x8d,0x91,0x00,0x10,0x08,
++ 0x06,0xff,0xe5,0x96,0x9d,0x00,0x06,0xff,0xe5,0x98,0x86,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x06,0xff,0xe5,0x99,0xa8,0x00,0x06,0xff,0xe5,0xa1,0x80,0x00,0x10,0x08,
++ 0x06,0xff,0xe5,0xa2,0xa8,0x00,0x06,0xff,0xe5,0xb1,0xa4,0x00,0xd1,0x10,0x10,0x08,
++ 0x06,0xff,0xe5,0xb1,0xae,0x00,0x06,0xff,0xe6,0x82,0x94,0x00,0x10,0x08,0x06,0xff,
++ 0xe6,0x85,0xa8,0x00,0x06,0xff,0xe6,0x86,0x8e,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,
++ 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe6,0x87,0xb2,0x00,0x06,
++ 0xff,0xe6,0x95,0x8f,0x00,0x10,0x08,0x06,0xff,0xe6,0x97,0xa2,0x00,0x06,0xff,0xe6,
++ 0x9a,0x91,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe6,0xa2,0x85,0x00,0x06,0xff,0xe6,
++ 0xb5,0xb7,0x00,0x10,0x08,0x06,0xff,0xe6,0xb8,0x9a,0x00,0x06,0xff,0xe6,0xbc,0xa2,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0x85,0xae,0x00,0x06,0xff,0xe7,
++ 0x88,0xab,0x00,0x10,0x08,0x06,0xff,0xe7,0x90,0xa2,0x00,0x06,0xff,0xe7,0xa2,0x91,
++ 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa4,0xbe,0x00,0x06,0xff,0xe7,0xa5,0x89,
++ 0x00,0x10,0x08,0x06,0xff,0xe7,0xa5,0x88,0x00,0x06,0xff,0xe7,0xa5,0x90,0x00,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa5,0x96,0x00,0x06,0xff,0xe7,
++ 0xa5,0x9d,0x00,0x10,0x08,0x06,0xff,0xe7,0xa6,0x8d,0x00,0x06,0xff,0xe7,0xa6,0x8e,
++ 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa9,0x80,0x00,0x06,0xff,0xe7,0xaa,0x81,
++ 0x00,0x10,0x08,0x06,0xff,0xe7,0xaf,0x80,0x00,0x06,0xff,0xe7,0xb7,0xb4,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xb8,0x89,0x00,0x06,0xff,0xe7,0xb9,0x81,
++ 0x00,0x10,0x08,0x06,0xff,0xe7,0xbd,0xb2,0x00,0x06,0xff,0xe8,0x80,0x85,0x00,0xd1,
++ 0x10,0x10,0x08,0x06,0xff,0xe8,0x87,0xad,0x00,0x06,0xff,0xe8,0x89,0xb9,0x00,0x10,
++ 0x08,0x06,0xff,0xe8,0x89,0xb9,0x00,0x06,0xff,0xe8,0x91,0x97,0x00,0xd4,0x75,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,0xa4,0x90,0x00,0x06,0xff,0xe8,
++ 0xa6,0x96,0x00,0x10,0x08,0x06,0xff,0xe8,0xac,0x81,0x00,0x06,0xff,0xe8,0xac,0xb9,
++ 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,0xb3,0x93,0x00,0x06,0xff,0xe8,0xb4,0x88,
++ 0x00,0x10,0x08,0x06,0xff,0xe8,0xbe,0xb6,0x00,0x06,0xff,0xe9,0x80,0xb8,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe9,0x9b,0xa3,0x00,0x06,0xff,0xe9,0x9f,0xbf,
++ 0x00,0x10,0x08,0x06,0xff,0xe9,0xa0,0xbb,0x00,0x0b,0xff,0xe6,0x81,0xb5,0x00,0x91,
++ 0x11,0x10,0x09,0x0b,0xff,0xf0,0xa4,0x8b,0xae,0x00,0x0b,0xff,0xe8,0x88,0x98,0x00,
++ 0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe4,0xb8,0xa6,0x00,
++ 0x08,0xff,0xe5,0x86,0xb5,0x00,0x10,0x08,0x08,0xff,0xe5,0x85,0xa8,0x00,0x08,0xff,
++ 0xe4,0xbe,0x80,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0x85,0x85,0x00,0x08,0xff,
++ 0xe5,0x86,0x80,0x00,0x10,0x08,0x08,0xff,0xe5,0x8b,0x87,0x00,0x08,0xff,0xe5,0x8b,
++ 0xba,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0x96,0x9d,0x00,0x08,0xff,
++ 0xe5,0x95,0x95,0x00,0x10,0x08,0x08,0xff,0xe5,0x96,0x99,0x00,0x08,0xff,0xe5,0x97,
++ 0xa2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xa1,0x9a,0x00,0x08,0xff,0xe5,0xa2,
++ 0xb3,0x00,0x10,0x08,0x08,0xff,0xe5,0xa5,0x84,0x00,0x08,0xff,0xe5,0xa5,0x94,0x00,
++ 0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe5,0xa9,0xa2,0x00,0x08,0xff,0xe5,0xac,0xa8,0x00,0x10,0x08,
++ 0x08,0xff,0xe5,0xbb,0x92,0x00,0x08,0xff,0xe5,0xbb,0x99,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe5,0xbd,0xa9,0x00,0x08,0xff,0xe5,0xbe,0xad,0x00,0x10,0x08,0x08,0xff,
++ 0xe6,0x83,0x98,0x00,0x08,0xff,0xe6,0x85,0x8e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe6,0x84,0x88,0x00,0x08,0xff,0xe6,0x86,0x8e,0x00,0x10,0x08,0x08,0xff,
++ 0xe6,0x85,0xa0,0x00,0x08,0xff,0xe6,0x87,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe6,0x88,0xb4,0x00,0x08,0xff,0xe6,0x8f,0x84,0x00,0x10,0x08,0x08,0xff,0xe6,0x90,
++ 0x9c,0x00,0x08,0xff,0xe6,0x91,0x92,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe6,0x95,0x96,0x00,0x08,0xff,0xe6,0x99,0xb4,0x00,0x10,0x08,0x08,0xff,
++ 0xe6,0x9c,0x97,0x00,0x08,0xff,0xe6,0x9c,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe6,0x9d,0x96,0x00,0x08,0xff,0xe6,0xad,0xb9,0x00,0x10,0x08,0x08,0xff,0xe6,0xae,
++ 0xba,0x00,0x08,0xff,0xe6,0xb5,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe6,0xbb,0x9b,0x00,0x08,0xff,0xe6,0xbb,0x8b,0x00,0x10,0x08,0x08,0xff,0xe6,0xbc,
++ 0xa2,0x00,0x08,0xff,0xe7,0x80,0x9e,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x85,
++ 0xae,0x00,0x08,0xff,0xe7,0x9e,0xa7,0x00,0x10,0x08,0x08,0xff,0xe7,0x88,0xb5,0x00,
++ 0x08,0xff,0xe7,0x8a,0xaf,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe7,0x8c,0xaa,0x00,0x08,0xff,0xe7,0x91,0xb1,0x00,0x10,0x08,0x08,0xff,
++ 0xe7,0x94,0x86,0x00,0x08,0xff,0xe7,0x94,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe7,0x98,0x9d,0x00,0x08,0xff,0xe7,0x98,0x9f,0x00,0x10,0x08,0x08,0xff,0xe7,0x9b,
++ 0x8a,0x00,0x08,0xff,0xe7,0x9b,0x9b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe7,0x9b,0xb4,0x00,0x08,0xff,0xe7,0x9d,0x8a,0x00,0x10,0x08,0x08,0xff,0xe7,0x9d,
++ 0x80,0x00,0x08,0xff,0xe7,0xa3,0x8c,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xaa,
++ 0xb1,0x00,0x08,0xff,0xe7,0xaf,0x80,0x00,0x10,0x08,0x08,0xff,0xe7,0xb1,0xbb,0x00,
++ 0x08,0xff,0xe7,0xb5,0x9b,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe7,0xb7,0xb4,0x00,0x08,0xff,0xe7,0xbc,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0x80,
++ 0x85,0x00,0x08,0xff,0xe8,0x8d,0x92,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0x8f,
++ 0xaf,0x00,0x08,0xff,0xe8,0x9d,0xb9,0x00,0x10,0x08,0x08,0xff,0xe8,0xa5,0x81,0x00,
++ 0x08,0xff,0xe8,0xa6,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xa6,
++ 0x96,0x00,0x08,0xff,0xe8,0xaa,0xbf,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xb8,0x00,
++ 0x08,0xff,0xe8,0xab,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xac,0x81,0x00,
++ 0x08,0xff,0xe8,0xab,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xad,0x00,0x08,0xff,
++ 0xe8,0xac,0xb9,0x00,0xcf,0x86,0x95,0xde,0xd4,0x81,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe8,0xae,0x8a,0x00,0x08,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,
++ 0x08,0xff,0xe8,0xbc,0xb8,0x00,0x08,0xff,0xe9,0x81,0xb2,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe9,0x86,0x99,0x00,0x08,0xff,0xe9,0x89,0xb6,0x00,0x10,0x08,0x08,0xff,
++ 0xe9,0x99,0xbc,0x00,0x08,0xff,0xe9,0x9b,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe9,0x9d,0x96,0x00,0x08,0xff,0xe9,0x9f,0x9b,0x00,0x10,0x08,0x08,0xff,
++ 0xe9,0x9f,0xbf,0x00,0x08,0xff,0xe9,0xa0,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe9,0xa0,0xbb,0x00,0x08,0xff,0xe9,0xac,0x92,0x00,0x10,0x08,0x08,0xff,0xe9,0xbe,
++ 0x9c,0x00,0x08,0xff,0xf0,0xa2,0xa1,0x8a,0x00,0xd3,0x45,0xd2,0x22,0xd1,0x12,0x10,
++ 0x09,0x08,0xff,0xf0,0xa2,0xa1,0x84,0x00,0x08,0xff,0xf0,0xa3,0x8f,0x95,0x00,0x10,
++ 0x08,0x08,0xff,0xe3,0xae,0x9d,0x00,0x08,0xff,0xe4,0x80,0x98,0x00,0xd1,0x11,0x10,
++ 0x08,0x08,0xff,0xe4,0x80,0xb9,0x00,0x08,0xff,0xf0,0xa5,0x89,0x89,0x00,0x10,0x09,
++ 0x08,0xff,0xf0,0xa5,0xb3,0x90,0x00,0x08,0xff,0xf0,0xa7,0xbb,0x93,0x00,0x92,0x14,
++ 0x91,0x10,0x10,0x08,0x08,0xff,0xe9,0xbd,0x83,0x00,0x08,0xff,0xe9,0xbe,0x8e,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xe1,0x94,0x01,0xe0,0x08,0x01,0xcf,0x86,0xd5,0x42,
++ 0xd4,0x14,0x93,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x00,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x00,0x00,0x04,0xff,
++ 0xd7,0x99,0xd6,0xb4,0x00,0x10,0x04,0x01,0x1a,0x01,0xff,0xd7,0xb2,0xd6,0xb7,0x00,
++ 0xd4,0x42,0x53,0x04,0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xd7,0xa9,0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd7,0x82,0x00,0xd1,0x16,0x10,0x0b,
++ 0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,
++ 0x82,0x00,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xb7,0x00,0x01,0xff,0xd7,0x90,0xd6,
++ 0xb8,0x00,0xd3,0x43,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xbc,
++ 0x00,0x01,0xff,0xd7,0x91,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x92,0xd6,0xbc,
++ 0x00,0x01,0xff,0xd7,0x93,0xd6,0xbc,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x94,
++ 0xd6,0xbc,0x00,0x01,0xff,0xd7,0x95,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x96,
++ 0xd6,0xbc,0x00,0x00,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x98,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0x99,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x9a,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0x9b,0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,
++ 0x9c,0xd6,0xbc,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xd7,0x9e,0xd6,0xbc,0x00,0x00,
++ 0x00,0xcf,0x86,0x95,0x85,0x94,0x81,0xd3,0x3e,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xd7,0xa0,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa1,0xd6,0xbc,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0xd7,0xa3,0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0xa4,
++ 0xd6,0xbc,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xd7,0xa6,0xd6,0xbc,0x00,0x01,0xff,
++ 0xd7,0xa7,0xd6,0xbc,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa8,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0xaa,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0x95,0xd6,0xb9,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,
++ 0x91,0xd6,0xbf,0x00,0x01,0xff,0xd7,0x9b,0xd6,0xbf,0x00,0x10,0x09,0x01,0xff,0xd7,
++ 0xa4,0xd6,0xbf,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,
++ 0x0c,0x00,0x0c,0x00,0xcf,0x86,0x95,0x24,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,
++ 0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x5a,0xd2,0x06,
++ 0xcf,0x06,0x01,0x00,0xd1,0x14,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x08,
++ 0x14,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,0x04,
++ 0x01,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x0c,0x94,0x08,0x13,0x04,0x01,0x00,0x00,0x00,0x05,0x00,
++ 0x54,0x04,0x05,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x07,0x00,0x00,0x00,0xd2,0xce,0xd1,0xa5,0xd0,0x37,0xcf,0x86,0xd5,0x15,
++ 0x54,0x05,0x06,0xff,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x00,
++ 0x00,0x00,0x00,0x94,0x1c,0xd3,0x10,0x52,0x04,0x01,0xe6,0x51,0x04,0x0a,0xe6,0x10,
++ 0x04,0x0a,0xe6,0x10,0xdc,0x52,0x04,0x10,0xdc,0x11,0x04,0x10,0xdc,0x11,0xe6,0x01,
++ 0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x07,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd4,0x18,0xd3,0x10,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x12,0x04,0x01,
++ 0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,
++ 0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,
++ 0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,
++ 0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0x00,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x94,0x14,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xd0,0x2f,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x15,0x93,0x11,
++ 0x92,0x0d,0x91,0x09,0x10,0x05,0x01,0xff,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x18,0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,
++ 0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x00,
++ 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,
++ 0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x53,0x05,0x00,
++ 0xff,0x00,0xd2,0x0d,0x91,0x09,0x10,0x05,0x00,0xff,0x00,0x04,0x00,0x04,0x00,0x91,
++ 0x08,0x10,0x04,0x03,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0x46,0x3e,0xe1,0x1f,0x3b,
++ 0xe0,0x9c,0x39,0xcf,0x86,0xe5,0x40,0x26,0xc4,0xe3,0x16,0x14,0xe2,0xef,0x11,0xe1,
++ 0xd0,0x10,0xe0,0x60,0x07,0xcf,0x86,0xe5,0x53,0x03,0xe4,0x4c,0x02,0xe3,0x3d,0x01,
++ 0xd2,0x94,0xd1,0x70,0xd0,0x4a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x07,0x00,
++ 0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,
++ 0xd4,0x14,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x07,0x00,0x53,0x04,0x07,0x00,0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,
++ 0x07,0x00,0x00,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,
++ 0x95,0x20,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,
++ 0x00,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,
++ 0x00,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x54,0x04,
++ 0x07,0x00,0x53,0x04,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,
++ 0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,0x00,0x93,0x10,
++ 0x52,0x04,0x07,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,
++ 0xcf,0x06,0x08,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,0x20,0x53,0x04,0x08,0x00,
++ 0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x10,0x00,0xd1,0x08,0x10,0x04,
++ 0x10,0x00,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x12,0x04,
++ 0x0a,0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,
++ 0x00,0x00,0x0a,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,
++ 0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x00,0x00,0xd2,0x5e,0xd1,0x06,0xcf,0x06,
++ 0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,
++ 0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x0a,0x00,
++ 0xcf,0x86,0xd5,0x18,0x54,0x04,0x0a,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x0a,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x10,0xdc,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,
++ 0x10,0x00,0x12,0x04,0x10,0x00,0x00,0x00,0xd1,0x70,0xd0,0x36,0xcf,0x86,0xd5,0x18,
++ 0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x51,0x04,0x05,0x00,
++ 0x10,0x04,0x05,0x00,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x05,0x00,0x00,0x00,
++ 0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x05,0x00,
++ 0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x05,0x00,0x92,0x0c,0x51,0x04,0x05,0x00,
++ 0x10,0x04,0x05,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x0c,
++ 0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x10,0xe6,0x92,0x0c,0x51,0x04,0x10,0xe6,
++ 0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
++ 0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,
++ 0x00,0x00,0x07,0x00,0x08,0x00,0xcf,0x86,0x95,0x1c,0xd4,0x0c,0x93,0x08,0x12,0x04,
++ 0x08,0x00,0x00,0x00,0x08,0x00,0x93,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0xba,0xd2,0x80,0xd1,0x34,0xd0,0x1a,0xcf,0x86,
++ 0x55,0x04,0x05,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,
++ 0x07,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x53,0x04,0x05,0x00,
++ 0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd0,0x2a,
++ 0xcf,0x86,0xd5,0x14,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,
++ 0x11,0x04,0x07,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x07,0x00,0x92,0x08,0x11,0x04,
++ 0x07,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xcf,0x86,0xd5,0x10,0x54,0x04,0x12,0x00,
++ 0x93,0x08,0x12,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x54,0x04,0x12,0x00,0x53,0x04,
++ 0x12,0x00,0x12,0x04,0x12,0x00,0x00,0x00,0xd1,0x34,0xd0,0x12,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x10,0x00,0x00,0x00,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,
++ 0xd2,0x06,0xcf,0x06,0x10,0x00,0xd1,0x40,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,
++ 0x54,0x04,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,
++ 0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x08,0x13,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,0xce,0x02,0xe3,0x45,0x01,
++ 0xd2,0xd0,0xd1,0x70,0xd0,0x52,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,0x0c,0x52,0x04,
++ 0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,0x00,0xd3,0x10,0x52,0x04,
++ 0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,0x95,0x18,0x54,0x04,0x0b,0x00,0x93,0x10,
++ 0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,
++ 0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
++ 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,
++ 0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
++ 0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x11,0x00,0xd3,0x14,
++ 0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x04,0x11,0x00,
++ 0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x11,0x00,
++ 0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x1c,0x54,0x04,0x09,0x00,0x53,0x04,0x09,0x00,
++ 0xd2,0x08,0x11,0x04,0x09,0x00,0x0b,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,
++ 0x09,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0xcf,0x06,0x00,0x00,
++ 0xd0,0x1a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0x53,0x04,0x0d,0x00,
++ 0x52,0x04,0x00,0x00,0x11,0x04,0x11,0x00,0x0d,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,
++ 0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x11,0x00,0x11,0x00,0x11,0x00,
++ 0x11,0x00,0xd2,0xec,0xd1,0xa4,0xd0,0x76,0xcf,0x86,0xd5,0x48,0xd4,0x28,0xd3,0x14,
++ 0x52,0x04,0x08,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x10,0x04,0x08,0x00,
++ 0x00,0x00,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x08,0x00,0x08,0xdc,0x10,0x04,
++ 0x08,0x00,0x08,0xe6,0xd3,0x10,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x08,0x00,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,
++ 0x08,0x00,0x54,0x04,0x08,0x00,0xd3,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x14,0x00,
++ 0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe6,0x08,0x01,0x10,0x04,0x08,0xdc,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,0x09,0xcf,0x86,0x95,0x28,
++ 0xd4,0x14,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x08,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x10,0x00,
++ 0x00,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x24,0xd3,0x14,0x52,0x04,0x10,0x00,
++ 0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0xe6,0x10,0x04,0x10,0xdc,0x00,0x00,0x92,0x0c,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x52,0x04,
++ 0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd1,0x54,
++ 0xd0,0x26,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,0x04,
++ 0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x0b,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x0b,0x00,0x93,0x0c,
++ 0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x0b,0x00,0x54,0x04,0x0b,0x00,
++ 0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,
++ 0x0b,0x00,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x54,0x04,0x10,0x00,0xd3,0x0c,0x92,0x08,
++ 0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x10,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,
++ 0x53,0x04,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
++ 0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x96,0xd2,0x68,0xd1,0x24,0xd0,0x06,
++ 0xcf,0x06,0x0b,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x0b,0x00,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x1e,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0x93,0x10,0x92,0x0c,
++ 0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
++ 0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x11,0x00,
++ 0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x11,0x00,
++ 0x11,0x00,0xd1,0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,0x00,0xd4,0x0c,0x93,0x08,
++ 0x12,0x04,0x14,0x00,0x14,0xe6,0x00,0x00,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,
++ 0x14,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,
++ 0xd1,0x24,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,
++ 0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,
++ 0x0b,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x58,0xd0,0x12,0xcf,0x86,0x55,0x04,
++ 0x14,0x00,0x94,0x08,0x13,0x04,0x14,0x00,0x00,0x00,0x14,0x00,0xcf,0x86,0x95,0x40,
++ 0xd4,0x24,0xd3,0x0c,0x52,0x04,0x14,0x00,0x11,0x04,0x14,0x00,0x14,0xdc,0xd2,0x0c,
++ 0x51,0x04,0x14,0xe6,0x10,0x04,0x14,0xe6,0x14,0xdc,0x91,0x08,0x10,0x04,0x14,0xe6,
++ 0x14,0xdc,0x14,0xdc,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0xdc,0x14,0x00,
++ 0x14,0x00,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x15,0x00,
++ 0x93,0x10,0x52,0x04,0x15,0x00,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x86,0xe5,0x0f,0x06,0xe4,0xf8,0x03,0xe3,0x02,0x02,0xd2,0xfb,0xd1,
++ 0x4c,0xd0,0x06,0xcf,0x06,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x1c,0xd3,0x10,0x52,
++ 0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x09,0x0c,0x00,0x52,0x04,0x0c,
++ 0x00,0x11,0x04,0x0c,0x00,0x00,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x0c,
++ 0x00,0x0c,0x00,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,0x00,0x00,0x52,0x04,0x00,
++ 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x09,0xd0,0x69,0xcf,0x86,0xd5,
++ 0x32,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x15,0x51,0x04,0x0b,0x00,0x10,
++ 0x0d,0x0b,0xff,0xf0,0x91,0x82,0x99,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x91,0x11,
++ 0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x9b,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x0b,
++ 0x00,0xd4,0x1d,0x53,0x04,0x0b,0x00,0x92,0x15,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
++ 0x00,0x0b,0xff,0xf0,0x91,0x82,0xa5,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x53,0x04,
++ 0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x10,0x04,0x0b,0x07,
++ 0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,0x0c,0x92,0x08,0x11,0x04,
++ 0x0b,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x14,0x00,0x00,0x00,0x0d,0x00,0xd4,0x14,0x53,0x04,0x0d,0x00,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x08,
++ 0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0xd1,0x96,0xd0,0x5c,0xcf,0x86,0xd5,0x18,
++ 0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x0d,0x00,
++ 0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd4,0x26,0x53,0x04,0x0d,0x00,0x52,0x04,0x0d,0x00,
++ 0x51,0x04,0x0d,0x00,0x10,0x0d,0x0d,0xff,0xf0,0x91,0x84,0xb1,0xf0,0x91,0x84,0xa7,
++ 0x00,0x0d,0xff,0xf0,0x91,0x84,0xb2,0xf0,0x91,0x84,0xa7,0x00,0x93,0x18,0xd2,0x0c,
++ 0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x0d,0x09,0x91,0x08,0x10,0x04,0x0d,0x09,
++ 0x00,0x00,0x0d,0x00,0x0d,0x00,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,
++ 0x0d,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x10,0x00,
++ 0x54,0x04,0x10,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,
++ 0x10,0x07,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,
++ 0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0d,0x09,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x0d,0x00,0x11,0x00,0x10,0x04,0x11,0x07,0x11,0x00,0x91,0x08,0x10,0x04,0x11,0x00,
++ 0x10,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,
++ 0x10,0x00,0x11,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xc8,0xd1,0x48,
++ 0xd0,0x42,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x54,0x04,0x10,0x00,
++ 0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0x09,0x10,0x04,
++ 0x10,0x07,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x12,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x10,
++ 0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,
++ 0x00,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,
++ 0x10,0x04,0x00,0x00,0x11,0x00,0x94,0x10,0x53,0x04,0x11,0x00,0x92,0x08,0x11,0x04,
++ 0x11,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x18,
++ 0x53,0x04,0x10,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0x07,0x10,0x04,
++ 0x10,0x09,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,
++ 0x00,0x00,0x00,0x00,0xe1,0x27,0x01,0xd0,0x8a,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,
++ 0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x10,0x00,0x10,0x00,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,
++ 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0xd4,
++ 0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,
++ 0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,
++ 0x00,0x10,0x04,0x00,0x00,0x14,0x07,0x91,0x08,0x10,0x04,0x10,0x07,0x10,0x00,0x10,
++ 0x00,0xcf,0x86,0xd5,0x6a,0xd4,0x42,0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0xd2,0x19,0xd1,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0xff,0xf0,0x91,0x8d,0x87,0xf0,
++ 0x91,0x8c,0xbe,0x00,0x91,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x8d,0x87,0xf0,0x91,
++ 0x8d,0x97,0x00,0x10,0x09,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,
++ 0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x52,
++ 0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd4,0x1c,0xd3,
++ 0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0xe6,0x52,0x04,0x10,0xe6,0x91,
++ 0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x10,0xe6,0x91,
++ 0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe3,
++ 0x30,0x01,0xd2,0xb7,0xd1,0x48,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x95,0x3c,
++ 0xd4,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x09,0x12,0x00,
++ 0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x07,0x12,0x00,0x12,0x00,0x53,0x04,0x12,0x00,
++ 0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x00,0x00,0x12,0x00,0xd1,0x08,0x10,0x04,
++ 0x00,0x00,0x12,0x00,0x10,0x04,0x14,0xe6,0x15,0x00,0x00,0x00,0xd0,0x45,0xcf,0x86,
++ 0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0xd2,0x15,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x10,0x00,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xba,
++ 0x00,0xd1,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xb0,0x00,
++ 0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xbd,0x00,0x10,
++ 0x00,0xcf,0x86,0x95,0x24,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,
++ 0x04,0x10,0x09,0x10,0x07,0x10,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,
++ 0x40,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x0c,0x52,0x04,0x10,
++ 0x00,0x11,0x04,0x10,0x00,0x00,0x00,0xd2,0x1e,0x51,0x04,0x10,0x00,0x10,0x0d,0x10,
++ 0xff,0xf0,0x91,0x96,0xb8,0xf0,0x91,0x96,0xaf,0x00,0x10,0xff,0xf0,0x91,0x96,0xb9,
++ 0xf0,0x91,0x96,0xaf,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,0x09,0xcf,
++ 0x86,0x95,0x2c,0xd4,0x1c,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x07,0x10,
++ 0x00,0x10,0x00,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0x53,
++ 0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0xd2,
++ 0xa0,0xd1,0x5c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,
++ 0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,
++ 0x09,0xcf,0x86,0xd5,0x24,0xd4,0x14,0x93,0x10,0x52,0x04,0x10,0x00,0x91,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,
++ 0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x2a,0xcf,
++ 0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0xd3,0x10,0x52,0x04,0x0d,0x00,0x51,
++ 0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x0d,0x07,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x53,0x04,0x0d,
++ 0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x11,0x00,0x53,0x04,0x11,0x00,0xd2,
++ 0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x11,0x00,0x11,0x00,0x94,0x14,0x53,0x04,0x11,0x00,0x92,0x0c,0x51,0x04,0x11,
++ 0x00,0x10,0x04,0x11,0x00,0x11,0x09,0x00,0x00,0x11,0x00,0xcf,0x06,0x00,0x00,0xcf,
++ 0x06,0x00,0x00,0xe4,0x59,0x01,0xd3,0xb2,0xd2,0x5c,0xd1,0x28,0xd0,0x22,0xcf,0x86,
++ 0x55,0x04,0x14,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,0x00,0x92,0x10,0xd1,0x08,
++ 0x10,0x04,0x14,0x00,0x14,0x09,0x10,0x04,0x14,0x07,0x14,0x00,0x00,0x00,0xcf,0x06,
++ 0x00,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x10,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,
++ 0x00,0x00,0x94,0x10,0x53,0x04,0x15,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,
++ 0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x15,0x00,0x53,0x04,0x15,0x00,
++ 0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x94,0x1c,0x93,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x15,0x09,0x15,0x00,0x15,0x00,0x91,0x08,0x10,0x04,0x15,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,0x3c,0xd0,0x1e,0xcf,0x86,
++ 0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x93,0x10,0x52,0x04,0x13,0x00,0x91,0x08,
++ 0x10,0x04,0x13,0x09,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,
++ 0x93,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x13,0x09,
++ 0x00,0x00,0x13,0x00,0x13,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,0x10,0x93,0x0c,
++ 0x52,0x04,0x13,0x00,0x11,0x04,0x15,0x00,0x13,0x00,0x13,0x00,0x53,0x04,0x13,0x00,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x13,0x09,0x13,0x00,0x91,0x08,0x10,0x04,
++ 0x13,0x00,0x14,0x00,0x13,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x13,0x00,
++ 0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe3,0xa9,0x01,0xd2,
++ 0xb0,0xd1,0x6c,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x54,
++ 0x04,0x12,0x00,0xd3,0x10,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,
++ 0x00,0x00,0x00,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x12,
++ 0x09,0xcf,0x86,0xd5,0x14,0x94,0x10,0x93,0x0c,0x52,0x04,0x12,0x00,0x11,0x04,0x12,
++ 0x00,0x00,0x00,0x00,0x00,0x12,0x00,0x94,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,
++ 0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xd0,0x3e,0xcf,
++ 0x86,0xd5,0x14,0x54,0x04,0x12,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x12,
++ 0x00,0x12,0x00,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x93,0x10,0x52,0x04,0x12,0x00,0x51,
++ 0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,
++ 0xa0,0xd0,0x52,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,0x13,0x00,0x51,
++ 0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,
++ 0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x54,0x04,0x13,0x00,0xd3,0x10,0x52,
++ 0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0xd2,0x0c,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x00,
++ 0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x51,0x04,0x13,
++ 0x00,0x10,0x04,0x13,0x07,0x13,0x00,0x11,0x04,0x13,0x09,0x13,0x00,0x00,0x00,0x53,
++ 0x04,0x13,0x00,0x92,0x08,0x11,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x94,0x20,0xd3,
++ 0x10,0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd0,
++ 0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x14,0x53,0x04,0x14,0x00,0x52,0x04,0x14,0x00,0x51,
++ 0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x14,
++ 0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x14,
++ 0x09,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x94,
++ 0x10,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,
++ 0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x15,
++ 0x00,0x54,0x04,0x15,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x15,0x00,0x00,0x00,0x00,
++ 0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0xd0,
++ 0xca,0xcf,0x86,0xd5,0xc2,0xd4,0x54,0xd3,0x06,0xcf,0x06,0x09,0x00,0xd2,0x06,0xcf,
++ 0x06,0x09,0x00,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,
++ 0x00,0x94,0x14,0x53,0x04,0x09,0x00,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,
++ 0x04,0x09,0x00,0x10,0x00,0x10,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x10,
++ 0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x11,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x68,0xd2,0x46,0xd1,0x40,0xd0,
++ 0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x20,0xd3,0x10,0x92,
++ 0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,
++ 0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x09,
++ 0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x11,
++ 0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x95,0x10,0x94,0x0c,0x93,
++ 0x08,0x12,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x4c,0xd4,0x06,0xcf,
++ 0x06,0x0b,0x00,0xd3,0x40,0xd2,0x3a,0xd1,0x34,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0b,
++ 0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,
++ 0x04,0x0b,0x00,0x00,0x00,0x53,0x04,0x15,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,
+- 0x86,0xcf,0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,
+- 0xa2,0xd4,0x9c,0xd3,0x74,0xd2,0x26,0xd1,0x20,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x94,
+- 0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,
+- 0x00,0x13,0x00,0xcf,0x06,0x13,0x00,0xcf,0x06,0x13,0x00,0xd1,0x48,0xd0,0x1e,0xcf,
+- 0x86,0x95,0x18,0x54,0x04,0x13,0x00,0x53,0x04,0x13,0x00,0x52,0x04,0x13,0x00,0x51,
+- 0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,
+- 0x04,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x00,0x00,0x15,0x00,0x00,
+- 0x00,0x13,0x00,0xcf,0x06,0x13,0x00,0xd2,0x22,0xd1,0x06,0xcf,0x06,0x13,0x00,0xd0,
+- 0x06,0xcf,0x06,0x13,0x00,0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x53,
+- 0x04,0x13,0x00,0x12,0x04,0x13,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x7e,0xd2,0x78,0xd1,0x34,0xd0,0x06,0xcf,
+- 0x06,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,
+- 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,
+- 0x00,0x52,0x04,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,
+- 0x3e,0xcf,0x86,0xd5,0x2c,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,
+- 0x04,0x10,0x00,0x00,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x01,0x10,0x00,0x94,
+- 0x0c,0x93,0x08,0x12,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xe1,0x92,0x04,0xd0,0x08,0xcf,0x86,
+- 0xcf,0x06,0x00,0x00,0xcf,0x86,0xe5,0x2f,0x04,0xe4,0x7f,0x02,0xe3,0xf4,0x01,0xd2,
+- 0x26,0xd1,0x06,0xcf,0x06,0x05,0x00,0xd0,0x06,0xcf,0x06,0x05,0x00,0xcf,0x86,0x55,
+- 0x04,0x05,0x00,0x54,0x04,0x05,0x00,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,
+- 0x00,0x00,0x00,0x00,0x00,0xd1,0xeb,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,
+- 0x20,0xd3,0x10,0x52,0x04,0x05,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x05,0x00,0x05,0x00,0x05,
+- 0x00,0xcf,0x86,0xd5,0x2a,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,
+- 0x00,0x51,0x04,0x05,0x00,0x10,0x0d,0x05,0xff,0xf0,0x9d,0x85,0x97,0xf0,0x9d,0x85,
+- 0xa5,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0x00,0xd4,0x75,0xd3,
+- 0x61,0xd2,0x44,0xd1,0x22,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,
+- 0xa5,0xf0,0x9d,0x85,0xae,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,
+- 0xf0,0x9d,0x85,0xaf,0x00,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,
+- 0xa5,0xf0,0x9d,0x85,0xb0,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,
+- 0xf0,0x9d,0x85,0xb1,0x00,0xd1,0x15,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,
+- 0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xb2,0x00,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x01,
+- 0xd2,0x08,0x11,0x04,0x05,0x01,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe2,
+- 0x05,0xd8,0xd3,0x10,0x92,0x0c,0x51,0x04,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x00,
+- 0x05,0x00,0x92,0x0c,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x05,0xdc,0x05,0xdc,
++ 0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x4c,0xd0,0x44,0xcf,
++ 0x86,0xd5,0x3c,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x11,0x00,0xd2,
++ 0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,
++ 0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0xd2,0x01,0xcf,0x86,0xd5,0x06,0xcf,0x06,
++ 0x00,0x00,0xe4,0x0b,0x01,0xd3,0x06,0xcf,0x06,0x0c,0x00,0xd2,0x84,0xd1,0x50,0xd0,
++ 0x1e,0xcf,0x86,0x55,0x04,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,0x0c,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,
++ 0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,
++ 0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x00,0x00,0xd0,0x06,0xcf,
++ 0x06,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x00,0x00,0x10,0x00,0xd4,0x10,0x53,
++ 0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x93,0x10,0x52,
++ 0x04,0x10,0x01,0x91,0x08,0x10,0x04,0x10,0x01,0x10,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x6c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x10,0x52,
++ 0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,0x10,0x00,0x10,0x00,0xcf,
++ 0x86,0xd5,0x24,0xd4,0x10,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,
++ 0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,
++ 0x00,0x10,0x00,0x10,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,
++ 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x00,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd0,0x0e,0xcf,0x86,0x95,
++ 0x08,0x14,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,
++ 0x06,0x00,0x00,0xd2,0x30,0xd1,0x0c,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x06,0x14,
++ 0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x14,0x00,0x53,0x04,0x14,0x00,0x92,
++ 0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,
++ 0x06,0x00,0x00,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x2c,0x94,
++ 0x28,0xd3,0x10,0x52,0x04,0x0d,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x15,0x00,0x15,
++ 0x00,0xd2,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,0x51,0x04,0x00,
++ 0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0d,0x00,0x54,0x04,0x0d,0x00,0x53,0x04,0x0d,
++ 0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x15,0x00,0xd0,
++ 0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x15,0x00,0x52,0x04,0x00,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x0d,0x00,0x00,0x00,0xcf,0x86,0x55,
++ 0x04,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x13,
++ 0x00,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xcf,0x06,0x12,0x00,0xe2,
++ 0xc6,0x01,0xd1,0x8e,0xd0,0x86,0xcf,0x86,0xd5,0x48,0xd4,0x06,0xcf,0x06,0x12,0x00,
++ 0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x06,0xcf,0x06,0x12,0x00,0xd1,0x06,0xcf,0x06,
++ 0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,0x00,0xd4,0x14,
++ 0x53,0x04,0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x14,0x00,
++ 0x14,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x14,0x00,0x15,0x00,0x15,0x00,0x00,0x00,
++ 0xd4,0x36,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,0x12,0x00,
++ 0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,0x00,0x54,0x04,0x12,0x00,
++ 0x93,0x10,0x92,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,
++ 0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0xa2,0xd4,0x9c,0xd3,0x74,
++ 0xd2,0x26,0xd1,0x20,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,0x06,
++ 0x13,0x00,0xcf,0x06,0x13,0x00,0xd1,0x48,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
++ 0x13,0x00,0x53,0x04,0x13,0x00,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,
++ 0x13,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x00,0x00,0x93,0x10,
++ 0x92,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x94,0x0c,0x93,0x08,0x12,0x04,0x00,0x00,0x15,0x00,0x00,0x00,0x13,0x00,0xcf,0x06,
++ 0x13,0x00,0xd2,0x22,0xd1,0x06,0xcf,0x06,0x13,0x00,0xd0,0x06,0xcf,0x06,0x13,0x00,
++ 0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x53,0x04,0x13,0x00,0x12,0x04,
++ 0x13,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,
++ 0x00,0x00,0xd3,0x7f,0xd2,0x79,0xd1,0x34,0xd0,0x06,0xcf,0x06,0x10,0x00,0xcf,0x86,
++ 0x55,0x04,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,0x3f,0xcf,0x86,0xd5,0x2c,
++ 0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x00,0x00,
++ 0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x01,0x10,0x00,0x94,0x0d,0x93,0x09,0x12,0x05,
++ 0x10,0xff,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xe1,0x96,0x04,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,
++ 0xcf,0x86,0xe5,0x33,0x04,0xe4,0x83,0x02,0xe3,0xf8,0x01,0xd2,0x26,0xd1,0x06,0xcf,
++ 0x06,0x05,0x00,0xd0,0x06,0xcf,0x06,0x05,0x00,0xcf,0x86,0x55,0x04,0x05,0x00,0x54,
++ 0x04,0x05,0x00,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x00,0x00,0x00,
++ 0x00,0xd1,0xef,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,0x20,0xd3,0x10,0x52,
++ 0x04,0x05,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x05,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0xd5,
++ 0x2a,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x51,0x04,0x05,
++ 0x00,0x10,0x0d,0x05,0xff,0xf0,0x9d,0x85,0x97,0xf0,0x9d,0x85,0xa5,0x00,0x05,0xff,
++ 0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0x00,0xd4,0x75,0xd3,0x61,0xd2,0x44,0xd1,
++ 0x22,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,
++ 0xae,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xaf,
++ 0x00,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,
++ 0xb0,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xb1,
++ 0x00,0xd1,0x15,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,
++ 0x9d,0x85,0xb2,0x00,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x01,0xd2,0x08,0x11,0x04,
++ 0x05,0x01,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe2,0x05,0xd8,0xd3,0x12,
++ 0x92,0x0d,0x51,0x04,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0xff,0x00,0x05,0xff,0x00,
++ 0x92,0x0e,0x51,0x05,0x05,0xff,0x00,0x10,0x05,0x05,0xff,0x00,0x05,0xdc,0x05,0xdc,
+ 0xd0,0x97,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x05,0xdc,
+ 0x10,0x04,0x05,0xdc,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe6,0x05,0xe6,
+ 0x92,0x08,0x11,0x04,0x05,0xe6,0x05,0xdc,0x05,0x00,0x05,0x00,0xd4,0x14,0x53,0x04,
+@@ -4080,20 +4090,21 @@ static const unsigned char utf8data[64080] = {
+ 0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,
+ 0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,
+ 0x04,0x00,0x00,0x53,0x04,0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,
+- 0x00,0xd4,0xc8,0xd3,0x70,0xd2,0x68,0xd1,0x60,0xd0,0x58,0xcf,0x86,0xd5,0x50,0xd4,
+- 0x4a,0xd3,0x44,0xd2,0x2a,0xd1,0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x05,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x05,0x00,0xcf,0x06,0x05,0x00,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,
+- 0x06,0x07,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x14,
+- 0x04,0x07,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,
+- 0x06,0x00,0x00,0xd2,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xd1,0x08,0xcf,0x86,0xcf,
+- 0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x06,0xcf,
+- 0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,
+- 0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x00,0x00,0x52,
+- 0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,0x00,0xcf,0x86,0xcf,0x06,0x02,0x00,0x81,
+- 0x80,0xcf,0x86,0x85,0x84,0xcf,0x86,0xcf,0x06,0x02,0x00,0x00,0x00,0x00,0x00,0x00
++ 0x00,0xd4,0xd9,0xd3,0x81,0xd2,0x79,0xd1,0x71,0xd0,0x69,0xcf,0x86,0xd5,0x60,0xd4,
++ 0x59,0xd3,0x52,0xd2,0x33,0xd1,0x2c,0xd0,0x25,0xcf,0x86,0x95,0x1e,0x94,0x19,0x93,
++ 0x14,0x92,0x0f,0x91,0x0a,0x10,0x05,0x00,0xff,0x00,0x05,0xff,0x00,0x00,0xff,0x00,
++ 0x00,0xff,0x00,0x00,0xff,0x00,0x00,0xff,0x00,0x05,0xff,0x00,0xcf,0x06,0x05,0xff,
++ 0x00,0xcf,0x06,0x00,0xff,0x00,0xd1,0x07,0xcf,0x06,0x07,0xff,0x00,0xd0,0x07,0xcf,
++ 0x06,0x07,0xff,0x00,0xcf,0x86,0x55,0x05,0x07,0xff,0x00,0x14,0x05,0x07,0xff,0x00,
++ 0x00,0xff,0x00,0xcf,0x06,0x00,0xff,0x00,0xcf,0x06,0x00,0xff,0x00,0xcf,0x06,0x00,
++ 0xff,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,
++ 0xcf,0x06,0x00,0x00,0xd2,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xd1,0x08,0xcf,0x86,
++ 0xcf,0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x06,
++ 0xcf,0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,
++ 0xd2,0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,
++ 0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x00,0x00,
++ 0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,0x00,0xcf,0x86,0xcf,0x06,0x02,0x00,
++ 0x81,0x80,0xcf,0x86,0x85,0x84,0xcf,0x86,0xcf,0x06,0x02,0x00,0x00,0x00,0x00,0x00
+ };
+
+ struct utf8data_table utf8_data_table = {
+diff --git a/include/acpi/pcc.h b/include/acpi/pcc.h
+index 9b373d172a7760..699c1a37b8e784 100644
+--- a/include/acpi/pcc.h
++++ b/include/acpi/pcc.h
+@@ -12,6 +12,7 @@
+ struct pcc_mbox_chan {
+ struct mbox_chan *mchan;
+ u64 shmem_base_addr;
++ void __iomem *shmem;
+ u64 shmem_size;
+ u32 latency;
+ u32 max_access_rate;
+@@ -31,11 +32,13 @@ struct pcc_mbox_chan {
+ #define PCC_CMD_COMPLETION_NOTIFY BIT(0)
+
+ #define MAX_PCC_SUBSPACES 256
++#define PCC_ACK_FLAG_MASK 0x1
+
+ #ifdef CONFIG_PCC
+ extern struct pcc_mbox_chan *
+ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id);
+ extern void pcc_mbox_free_channel(struct pcc_mbox_chan *chan);
++extern int pcc_mbox_ioremap(struct mbox_chan *chan);
+ #else
+ static inline struct pcc_mbox_chan *
+ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id)
+@@ -43,6 +46,10 @@ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id)
+ return ERR_PTR(-ENODEV);
+ }
+ static inline void pcc_mbox_free_channel(struct pcc_mbox_chan *chan) { }
++static inline int pcc_mbox_ioremap(struct mbox_chan *chan)
++{
++ return 0;
++};
+ #endif
+
+ #endif /* _PCC_H */
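
The new pcc_mbox_ioremap() lets a PCC client map the subspace shared memory
through the mailbox layer instead of ioremapping shmem_base_addr by hand. A
minimal sketch of the intended call sequence, assuming a client that has
already filled in its struct mbox_client cl and knows its subspace id
(PCC_STATUS_OFFSET and the error handling are illustrative):

    struct pcc_mbox_chan *pchan;
    u16 status;
    int ret;

    pchan = pcc_mbox_request_channel(&cl, subspace_id);
    if (IS_ERR(pchan))
            return PTR_ERR(pchan);

    ret = pcc_mbox_ioremap(pchan->mchan);
    if (ret) {
            pcc_mbox_free_channel(pchan);
            return ret;
    }

    /* pchan->shmem now maps shmem_base_addr/shmem_size */
    status = readw_relaxed(pchan->shmem + PCC_STATUS_OFFSET);
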
+diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h
+index f6a1cbb0f600fa..a80ba457a858f3 100644
+--- a/include/drm/display/drm_dp_mst_helper.h
++++ b/include/drm/display/drm_dp_mst_helper.h
+@@ -699,6 +699,13 @@ struct drm_dp_mst_topology_mgr {
+ */
+ bool payload_id_table_cleared : 1;
+
++ /**
++	 * @reset_rx_state: The receiver state for down request replies and
++	 * up request messages must be reset after the topology manager has
++	 * been removed. Protected by @lock.
++ */
++ bool reset_rx_state : 1;
++
+ /**
+ * @payload_count: The number of currently active payloads in hardware. This value is only
+ * intended to be used internally by MST helpers for payload tracking, and is only safe to
+diff --git a/include/drm/intel/xe_pciids.h b/include/drm/intel/xe_pciids.h
+index 644872a35c3526..4ba88d2dccd4b3 100644
+--- a/include/drm/intel/xe_pciids.h
++++ b/include/drm/intel/xe_pciids.h
+@@ -120,7 +120,6 @@
+
+ /* RPL-P */
+ #define XE_RPLP_IDS(MACRO__, ...) \
+- XE_RPLU_IDS(MACRO__, ## __VA_ARGS__), \
+ MACRO__(0xA720, ## __VA_ARGS__), \
+ MACRO__(0xA7A0, ## __VA_ARGS__), \
+ MACRO__(0xA7A8, ## __VA_ARGS__), \
+@@ -175,18 +174,38 @@
+ XE_ATS_M150_IDS(MACRO__, ## __VA_ARGS__),\
+ XE_ATS_M75_IDS(MACRO__, ## __VA_ARGS__)
+
+-/* MTL / ARL */
++/* ARL */
++#define XE_ARL_IDS(MACRO__, ...) \
++ MACRO__(0x7D41, ## __VA_ARGS__), \
++ MACRO__(0x7D51, ## __VA_ARGS__), \
++ MACRO__(0x7D67, ## __VA_ARGS__), \
++ MACRO__(0x7DD1, ## __VA_ARGS__), \
++ MACRO__(0xB640, ## __VA_ARGS__)
++
++/* MTL */
+ #define XE_MTL_IDS(MACRO__, ...) \
+ MACRO__(0x7D40, ## __VA_ARGS__), \
+- MACRO__(0x7D41, ## __VA_ARGS__), \
+ MACRO__(0x7D45, ## __VA_ARGS__), \
+- MACRO__(0x7D51, ## __VA_ARGS__), \
+ MACRO__(0x7D55, ## __VA_ARGS__), \
+ MACRO__(0x7D60, ## __VA_ARGS__), \
+- MACRO__(0x7D67, ## __VA_ARGS__), \
+- MACRO__(0x7DD1, ## __VA_ARGS__), \
+ MACRO__(0x7DD5, ## __VA_ARGS__)
+
++/* PVC */
++#define XE_PVC_IDS(MACRO__, ...) \
++ MACRO__(0x0B69, ## __VA_ARGS__), \
++ MACRO__(0x0B6E, ## __VA_ARGS__), \
++ MACRO__(0x0BD4, ## __VA_ARGS__), \
++ MACRO__(0x0BD5, ## __VA_ARGS__), \
++ MACRO__(0x0BD6, ## __VA_ARGS__), \
++ MACRO__(0x0BD7, ## __VA_ARGS__), \
++ MACRO__(0x0BD8, ## __VA_ARGS__), \
++ MACRO__(0x0BD9, ## __VA_ARGS__), \
++ MACRO__(0x0BDA, ## __VA_ARGS__), \
++ MACRO__(0x0BDB, ## __VA_ARGS__), \
++ MACRO__(0x0BE0, ## __VA_ARGS__), \
++ MACRO__(0x0BE1, ## __VA_ARGS__), \
++ MACRO__(0x0BE5, ## __VA_ARGS__)
++
+ #define XE_LNL_IDS(MACRO__, ...) \
+ MACRO__(0x6420, ## __VA_ARGS__), \
+ MACRO__(0x64A0, ## __VA_ARGS__), \
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index e84a93c4013207..6b4bc85f4999ba 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -195,7 +195,7 @@ struct gendisk {
+ unsigned int nr_zones;
+ unsigned int zone_capacity;
+ unsigned int last_zone_capacity;
+- unsigned long *conv_zones_bitmap;
++ unsigned long __rcu *conv_zones_bitmap;
+ unsigned int zone_wplugs_hash_bits;
+ spinlock_t zone_wplugs_lock;
+ struct mempool_s *zone_wplugs_pool;
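
Tagging conv_zones_bitmap as __rcu documents that the revalidation path
publishes a new bitmap with rcu_assign_pointer() while readers dereference it
under RCU. A sketch of what a lock-free reader then looks like (hypothetical
helper, not code from this patch):

    static bool disk_zone_is_conventional(struct gendisk *disk,
                                          unsigned int zno)
    {
            unsigned long *bitmap;
            bool conv = false;

            rcu_read_lock();
            bitmap = rcu_dereference(disk->conv_zones_bitmap);
            if (bitmap)
                    conv = test_bit(zno, bitmap);
            rcu_read_unlock();

            return conv;
    }
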
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bc2e3dab0487ea..cbe2350912460b 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1300,8 +1300,12 @@ void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
+ bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
+
+ #ifdef CONFIG_BPF_JIT
+-int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
+-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
++int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog);
++int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog);
+ struct bpf_trampoline *bpf_trampoline_get(u64 key,
+ struct bpf_attach_target_info *tgt_info);
+ void bpf_trampoline_put(struct bpf_trampoline *tr);
+@@ -1383,12 +1387,14 @@ void bpf_jit_uncharge_modmem(u32 size);
+ bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
+ #else
+ static inline int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+- struct bpf_trampoline *tr)
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ return -ENOTSUPP;
+ }
+ static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+- struct bpf_trampoline *tr)
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ return -ENOTSUPP;
+ }
+@@ -1492,6 +1498,9 @@ struct bpf_prog_aux {
+ bool xdp_has_frags;
+ bool exception_cb;
+ bool exception_boundary;
++ bool is_extended; /* true if extended by freplace program */
++ u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
++ struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
+ struct bpf_arena *arena;
+ /* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
+ const struct btf_type *attach_func_proto;
+diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
+index 518bd1fd86fbe0..0cc66f8d28e7b6 100644
+--- a/include/linux/cleanup.h
++++ b/include/linux/cleanup.h
+@@ -285,14 +285,20 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ * similar to scoped_guard(), except it does fail when the lock
+ * acquire fails.
+ *
++ * Only for conditional locks.
+ */
+
++#define __DEFINE_CLASS_IS_CONDITIONAL(_name, _is_cond) \
++static __maybe_unused const bool class_##_name##_is_conditional = _is_cond
++
+ #define DEFINE_GUARD(_name, _type, _lock, _unlock) \
++ __DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
+ DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \
+ static inline void * class_##_name##_lock_ptr(class_##_name##_t *_T) \
+ { return (void *)(__force unsigned long)*_T; }
+
+ #define DEFINE_GUARD_COND(_name, _ext, _condlock) \
++ __DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
+ EXTEND_CLASS(_name, _ext, \
+ ({ void *_t = _T; if (_T && !(_condlock)) _t = NULL; _t; }), \
+ class_##_name##_t _T) \
+@@ -303,17 +309,40 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ CLASS(_name, __UNIQUE_ID(guard))
+
+ #define __guard_ptr(_name) class_##_name##_lock_ptr
++#define __is_cond_ptr(_name) class_##_name##_is_conditional
+
+-#define scoped_guard(_name, args...) \
+- for (CLASS(_name, scope)(args), \
+- *done = NULL; __guard_ptr(_name)(&scope) && !done; done = (void *)1)
+-
+-#define scoped_cond_guard(_name, _fail, args...) \
+- for (CLASS(_name, scope)(args), \
+- *done = NULL; !done; done = (void *)1) \
+- if (!__guard_ptr(_name)(&scope)) _fail; \
+- else
+-
++/*
++ * Helper macro for scoped_guard().
++ *
++ * Note that the "!__is_cond_ptr(_name)" part of the condition ensures that
++ * the compiler can be sure that, for unconditional locks, the body of the
++ * loop (caller-provided code glued to the else clause) cannot be skipped.
++ * It is needed because the other part - "__guard_ptr(_name)(&scope)" - is too
++ * hard to deduce (even if it could be proven true for unconditional locks).
++ */
++#define __scoped_guard(_name, _label, args...) \
++ for (CLASS(_name, scope)(args); \
++ __guard_ptr(_name)(&scope) || !__is_cond_ptr(_name); \
++ ({ goto _label; })) \
++ if (0) { \
++_label: \
++ break; \
++ } else
++
++#define scoped_guard(_name, args...) \
++ __scoped_guard(_name, __UNIQUE_ID(label), args)
++
++#define __scoped_cond_guard(_name, _fail, _label, args...) \
++ for (CLASS(_name, scope)(args); true; ({ goto _label; })) \
++ if (!__guard_ptr(_name)(&scope)) { \
++ BUILD_BUG_ON(!__is_cond_ptr(_name)); \
++ _fail; \
++_label: \
++ break; \
++ } else
++
++#define scoped_cond_guard(_name, _fail, args...) \
++ __scoped_cond_guard(_name, _fail, __UNIQUE_ID(label), args)
+ /*
+ * Additional helper macros for generating lock guards with types, either for
+ * locks that don't have a native type (eg. RCU, preempt) or those that need a
+@@ -369,14 +398,17 @@ static inline class_##_name##_t class_##_name##_constructor(void) \
+ }
+
+ #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...) \
++__DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
+ __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__) \
+ __DEFINE_LOCK_GUARD_1(_name, _type, _lock)
+
+ #define DEFINE_LOCK_GUARD_0(_name, _lock, _unlock, ...) \
++__DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
+ __DEFINE_UNLOCK_GUARD(_name, void, _unlock, __VA_ARGS__) \
+ __DEFINE_LOCK_GUARD_0(_name, _lock)
+
+ #define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock) \
++ __DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
+ EXTEND_CLASS(_name, _ext, \
+ ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
+ if (_T->lock && !(_condlock)) _T->lock = NULL; \
+diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
+index d35b677b08fe13..c846436b64593e 100644
+--- a/include/linux/clocksource.h
++++ b/include/linux/clocksource.h
+@@ -49,6 +49,7 @@ struct module;
+ * @archdata: Optional arch-specific data
+ * @max_cycles: Maximum safe cycle value which won't overflow on
+ * multiplication
++ * @max_raw_delta: Maximum safe delta value for negative motion detection
+ * @name: Pointer to clocksource name
+ * @list: List head for registration (internal)
+ * @freq_khz: Clocksource frequency in khz.
+@@ -109,6 +110,7 @@ struct clocksource {
+ struct arch_clocksource_data archdata;
+ #endif
+ u64 max_cycles;
++ u64 max_raw_delta;
+ const char *name;
+ struct list_head list;
+ u32 freq_khz;
+diff --git a/include/linux/eeprom_93cx6.h b/include/linux/eeprom_93cx6.h
+index c860c72a921d03..3a485cc0e0fa0b 100644
+--- a/include/linux/eeprom_93cx6.h
++++ b/include/linux/eeprom_93cx6.h
+@@ -11,6 +11,8 @@
+ Supported chipsets: 93c46, 93c56 and 93c66.
+ */
+
++#include <linux/bits.h>
++
+ /*
+ * EEPROM operation defines.
+ */
+@@ -34,6 +36,7 @@
+ * @register_write(struct eeprom_93cx6 *eeprom): handler to
+ * write to the eeprom register by using all reg_* fields.
+ * @width: eeprom width, should be one of the PCI_EEPROM_WIDTH_* defines
++ * @quirks: eeprom or controller quirks
+ * @drive_data: Set if we're driving the data line.
+ * @reg_data_in: register field to indicate data input
+ * @reg_data_out: register field to indicate data output
+@@ -50,6 +53,9 @@ struct eeprom_93cx6 {
+ void (*register_write)(struct eeprom_93cx6 *eeprom);
+
+ int width;
++ unsigned int quirks;
++/* Some EEPROMs require an extra clock cycle before reading */
++#define PCI_EEPROM_QUIRK_EXTRA_READ_CYCLE BIT(0)
+
+ char drive_data;
+ char reg_data_in;
+@@ -71,3 +77,8 @@ extern void eeprom_93cx6_wren(struct eeprom_93cx6 *eeprom, bool enable);
+
+ extern void eeprom_93cx6_write(struct eeprom_93cx6 *eeprom,
+ u8 addr, u16 data);
++
++static inline bool has_quirk_extra_read_cycle(struct eeprom_93cx6 *eeprom)
++{
++ return eeprom->quirks & PCI_EEPROM_QUIRK_EXTRA_READ_CYCLE;
++}
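
A driver opts in by filling the new quirks field when it sets up struct
eeprom_93cx6; the 93cx6 core can then consult has_quirk_extra_read_cycle() in
its read path and clock out the extra cycle. A sketch, with the two register
callbacks assumed to exist in the driver:

    struct eeprom_93cx6 eeprom = {
            .data           = priv,
            .register_read  = mydrv_eeprom_register_read,   /* assumed */
            .register_write = mydrv_eeprom_register_write,  /* assumed */
            .width          = PCI_EEPROM_WIDTH_93C66,
            .quirks         = PCI_EEPROM_QUIRK_EXTRA_READ_CYCLE,
    };

    eeprom_93cx6_read(&eeprom, word_index, &value);
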
+diff --git a/include/linux/eventpoll.h b/include/linux/eventpoll.h
+index 3337745d81bd69..0c0d00fcd131f9 100644
+--- a/include/linux/eventpoll.h
++++ b/include/linux/eventpoll.h
+@@ -42,7 +42,7 @@ static inline void eventpoll_release(struct file *file)
+	 * because the file is on the way to be removed and nobody (but
+	 * eventpoll) still has a reference to this file.
+ */
+- if (likely(!file->f_ep))
++ if (likely(!READ_ONCE(file->f_ep)))
+ return;
+
+ /*
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index 3b2ad444c002ee..c24f8bc01045df 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -24,6 +24,7 @@
+ #define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+ #define COMPRESS_ADDR ((block_t)-2) /* used as compressed data flag */
+
++#define F2FS_BLKSIZE_MASK (F2FS_BLKSIZE - 1)
+ #define F2FS_BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_TO_BYTES(blk) ((unsigned long long)(blk) << F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_END_BYTES(blk) (F2FS_BLK_TO_BYTES(blk + 1) - 1)
+diff --git a/include/linux/fanotify.h b/include/linux/fanotify.h
+index 4f1c4f60311808..89ff45bd6f01ba 100644
+--- a/include/linux/fanotify.h
++++ b/include/linux/fanotify.h
+@@ -36,6 +36,7 @@
+ #define FANOTIFY_ADMIN_INIT_FLAGS (FANOTIFY_PERM_CLASSES | \
+ FAN_REPORT_TID | \
+ FAN_REPORT_PIDFD | \
++ FAN_REPORT_FD_ERROR | \
+ FAN_UNLIMITED_QUEUE | \
+ FAN_UNLIMITED_MARKS)
+
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 121d5b8bc86753..a7d60a1c72a09a 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -359,6 +359,7 @@ struct hid_item {
+ * | @HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP:
+ * | @HID_QUIRK_HAVE_SPECIAL_DRIVER:
+ * | @HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE:
++ * | @HID_QUIRK_IGNORE_SPECIAL_DRIVER:
+ * | @HID_QUIRK_FULLSPEED_INTERVAL:
+ * | @HID_QUIRK_NO_INIT_REPORTS:
+ * | @HID_QUIRK_NO_IGNORE:
+@@ -384,6 +385,7 @@ struct hid_item {
+ #define HID_QUIRK_HAVE_SPECIAL_DRIVER BIT(19)
+ #define HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE BIT(20)
+ #define HID_QUIRK_NOINVERT BIT(21)
++#define HID_QUIRK_IGNORE_SPECIAL_DRIVER BIT(22)
+ #define HID_QUIRK_FULLSPEED_INTERVAL BIT(28)
+ #define HID_QUIRK_NO_INIT_REPORTS BIT(29)
+ #define HID_QUIRK_NO_IGNORE BIT(30)
+diff --git a/include/linux/i3c/master.h b/include/linux/i3c/master.h
+index 2a1ed05d5782a8..6e5328c6c6afd2 100644
+--- a/include/linux/i3c/master.h
++++ b/include/linux/i3c/master.h
+@@ -298,7 +298,8 @@ enum i3c_open_drain_speed {
+ * @I3C_ADDR_SLOT_I2C_DEV: address is assigned to an I2C device
+ * @I3C_ADDR_SLOT_I3C_DEV: address is assigned to an I3C device
+ * @I3C_ADDR_SLOT_STATUS_MASK: address slot mask
+- *
++ * @I3C_ADDR_SLOT_EXT_DESIRED: the bitmask represents addresses that are preferred by some devices,
++ * e.g. addresses requested via the "assigned-address" property in a device tree source.
+ * On an I3C bus, addresses are assigned dynamically, and we need to know which
+ * addresses are free to use and which ones are already assigned.
+ *
+@@ -311,8 +312,12 @@ enum i3c_addr_slot_status {
+ I3C_ADDR_SLOT_I2C_DEV,
+ I3C_ADDR_SLOT_I3C_DEV,
+ I3C_ADDR_SLOT_STATUS_MASK = 3,
++ I3C_ADDR_SLOT_EXT_STATUS_MASK = 7,
++ I3C_ADDR_SLOT_EXT_DESIRED = BIT(2),
+ };
+
++#define I3C_ADDR_SLOT_STATUS_BITS 4
++
+ /**
+ * struct i3c_bus - I3C bus object
+ * @cur_master: I3C master currently driving the bus. Since I3C is multi-master
+@@ -354,7 +359,7 @@ enum i3c_addr_slot_status {
+ struct i3c_bus {
+ struct i3c_dev_desc *cur_master;
+ int id;
+- unsigned long addrslots[((I2C_MAX_ADDR + 1) * 2) / BITS_PER_LONG];
++ unsigned long addrslots[((I2C_MAX_ADDR + 1) * I3C_ADDR_SLOT_STATUS_BITS) / BITS_PER_LONG];
+ enum i3c_bus_mode mode;
+ struct {
+ unsigned long i3c;
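
With four status bits per address, a slot keeps the usual status in its low
two bits and gains room for the desired-address flag. A sketch of how a
lookup decodes a slot under this layout (modeled on the existing accessor,
not copied from the driver):

    static enum i3c_addr_slot_status
    i3c_bus_get_addr_slot_status_ext(const struct i3c_bus *bus, u16 addr)
    {
            unsigned long status;
            int bitpos = addr * I3C_ADDR_SLOT_STATUS_BITS;

            if (addr > I2C_MAX_ADDR)
                    return I3C_ADDR_SLOT_RSVD;

            status = bus->addrslots[bitpos / BITS_PER_LONG];
            status >>= bitpos % BITS_PER_LONG;

            return status & I3C_ADDR_SLOT_EXT_STATUS_MASK;
    }
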
+diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
+index c189d36ad55ea6..968de0cde25d58 100644
+--- a/include/linux/io_uring/cmd.h
++++ b/include/linux/io_uring/cmd.h
+@@ -43,7 +43,7 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
+ * Note: the caller should never hard code @issue_flags and is only allowed
+ * to pass the mask provided by the core io_uring code.
+ */
+-void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret, ssize_t res2,
++void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret, u64 res2,
+ unsigned issue_flags);
+
+ void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
+@@ -67,7 +67,7 @@ static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
+ return -EOPNOTSUPP;
+ }
+ static inline void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret,
+- ssize_t ret2, unsigned issue_flags)
++ u64 ret2, unsigned issue_flags)
+ {
+ }
+ static inline void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
+diff --git a/include/linux/leds.h b/include/linux/leds.h
+index e5968c3ed4ae08..2337f516fa7c2c 100644
+--- a/include/linux/leds.h
++++ b/include/linux/leds.h
+@@ -238,7 +238,7 @@ struct led_classdev {
+ struct kernfs_node *brightness_hw_changed_kn;
+ #endif
+
+- /* Ensures consistent access to the LED Flash Class device */
++ /* Ensures consistent access to the LED class device */
+ struct mutex led_access;
+ };
+
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index f34407cc27888d..eb67d3d5ff5b22 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -35,7 +35,7 @@ struct mmc_csd {
+ unsigned int wp_grp_size;
+ unsigned int read_blkbits;
+ unsigned int write_blkbits;
+- unsigned int capacity;
++ sector_t capacity;
+ unsigned int read_partial:1,
+ read_misalign:1,
+ write_partial:1,
+@@ -294,6 +294,7 @@ struct mmc_card {
+ #define MMC_QUIRK_BROKEN_SD_DISCARD (1<<14) /* Disable broken SD discard support */
+ #define MMC_QUIRK_BROKEN_SD_CACHE (1<<15) /* Disable broken SD cache support */
+ #define MMC_QUIRK_BROKEN_CACHE_FLUSH (1<<16) /* Don't flush cache until the write has occurred */
++#define MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY (1<<17) /* Disable broken SD poweroff notify support */
+
+ bool written_flag; /* Indicates eMMC has been written since power on */
+ bool reenable_cmdq; /* Re-enable Command Queue */
+diff --git a/include/linux/mmc/sd.h b/include/linux/mmc/sd.h
+index 6727576a875559..865cc0ca8543d1 100644
+--- a/include/linux/mmc/sd.h
++++ b/include/linux/mmc/sd.h
+@@ -36,6 +36,7 @@
+ /* OCR bit definitions */
+ #define SD_OCR_S18R (1 << 24) /* 1.8V switching request */
+ #define SD_ROCR_S18A SD_OCR_S18R /* 1.8V switching accepted by card */
++#define SD_OCR_2T (1 << 27) /* HO2T/CO2T - SDUC support */
+ #define SD_OCR_XPC (1 << 28) /* SDXC power control */
+ #define SD_OCR_CCS (1 << 30) /* Card Capacity Status */
+
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index cc839e4365c182..74aa9fbbdae70b 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -306,7 +306,7 @@ static const unsigned long *const_folio_flags(const struct folio *folio,
+ {
+ const struct page *page = &folio->page;
+
+- VM_BUG_ON_PGFLAGS(PageTail(page), page);
++ VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+ VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
+ return &page[n].flags;
+ }
+@@ -315,7 +315,7 @@ static unsigned long *folio_flags(struct folio *folio, unsigned n)
+ {
+ struct page *page = &folio->page;
+
+- VM_BUG_ON_PGFLAGS(PageTail(page), page);
++ VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+ VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
+ return &page[n].flags;
+ }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 573b4c4c2be61f..4e77c4230c0a19 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2609,6 +2609,12 @@ pci_host_bridge_acpi_msi_domain(struct pci_bus *bus) { return NULL; }
+ static inline bool pci_pr3_present(struct pci_dev *pdev) { return false; }
+ #endif
+
++#if defined(CONFIG_X86) && defined(CONFIG_ACPI)
++bool arch_pci_dev_is_removable(struct pci_dev *pdev);
++#else
++static inline bool arch_pci_dev_is_removable(struct pci_dev *pdev) { return false; }
++#endif
++
+ #ifdef CONFIG_EEH
+ static inline struct eeh_dev *pci_dev_to_eeh_dev(struct pci_dev *pdev)
+ {
+diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
+index e61d164622db47..1bad36e3e4ef1f 100644
+--- a/include/linux/scatterlist.h
++++ b/include/linux/scatterlist.h
+@@ -313,7 +313,7 @@ static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
+ }
+
+ /**
+- * sg_unmark_bus_address - Unmark the scatterlist entry as a bus address
++ * sg_dma_unmark_bus_address - Unmark the scatterlist entry as a bus address
+ * @sg: SG entry
+ *
+ * Description:
+diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
+index e9ec32fb97d4a7..2cc21ffcdaf9e4 100644
+--- a/include/linux/stackdepot.h
++++ b/include/linux/stackdepot.h
+@@ -147,7 +147,7 @@ static inline int stack_depot_early_init(void) { return 0; }
+ * If the provided stack trace comes from the interrupt context, only the part
+ * up to the interrupt entry is saved.
+ *
+- * Context: Any context, but setting STACK_DEPOT_FLAG_CAN_ALLOC is required if
++ * Context: Any context, but unsetting STACK_DEPOT_FLAG_CAN_ALLOC is required if
+ * alloc_pages() cannot be used from the current context. Currently
+ * this is the case for contexts where neither %GFP_ATOMIC nor
+ * %GFP_NOWAIT can be used (NMI, raw_spin_lock).
+@@ -156,7 +156,7 @@ static inline int stack_depot_early_init(void) { return 0; }
+ */
+ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ unsigned int nr_entries,
+- gfp_t gfp_flags,
++ gfp_t alloc_flags,
+ depot_flags_t depot_flags);
+
+ /**
+@@ -175,7 +175,7 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ * Return: Handle of the stack trace stored in depot, 0 on failure
+ */
+ depot_stack_handle_t stack_depot_save(unsigned long *entries,
+- unsigned int nr_entries, gfp_t gfp_flags);
++ unsigned int nr_entries, gfp_t alloc_flags);
+
+ /**
+ * __stack_depot_get_stack_record - Get a pointer to a stack_record struct
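
The gfp_flags -> alloc_flags rename makes it clearer that the gfp mask only
matters when allocation is permitted at all. A caller-side sketch for both
contexts (not from the patch):

    unsigned long entries[32];
    depot_stack_handle_t handle;
    unsigned int nr;

    nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);

    /* Sleepable context: let the depot grow its pools on demand. */
    handle = stack_depot_save_flags(entries, nr, GFP_KERNEL,
                                    STACK_DEPOT_FLAG_CAN_ALLOC);

    /* NMI / raw-spinlock context: allocation stays off and the gfp
     * argument is effectively unused. */
    handle = stack_depot_save_flags(entries, nr, 0, 0);
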
+diff --git a/include/linux/timekeeper_internal.h b/include/linux/timekeeper_internal.h
+index 902c20ef495acb..715e0919972e4c 100644
+--- a/include/linux/timekeeper_internal.h
++++ b/include/linux/timekeeper_internal.h
+@@ -68,9 +68,6 @@ struct tk_read_base {
+ * shifted nano seconds.
+ * @ntp_error_shift: Shift conversion between clock shifted nano seconds and
+ * ntp shifted nano seconds.
+- * @last_warning: Warning ratelimiter (DEBUG_TIMEKEEPING)
+- * @underflow_seen: Underflow warning flag (DEBUG_TIMEKEEPING)
+- * @overflow_seen: Overflow warning flag (DEBUG_TIMEKEEPING)
+ *
+ * Note: For timespec(64) based interfaces wall_to_monotonic is what
+ * we need to add to xtime (or xtime corrected for sub jiffy times)
+@@ -124,18 +121,6 @@ struct timekeeper {
+ u32 ntp_err_mult;
+ /* Flag used to avoid updating NTP twice with same second */
+ u32 skip_second_overflow;
+-#ifdef CONFIG_DEBUG_TIMEKEEPING
+- long last_warning;
+- /*
+- * These simple flag variables are managed
+- * without locks, which is racy, but they are
+- * ok since we don't really care about being
+- * super precise about how many events were
+- * seen, just that a problem was observed.
+- */
+- int underflow_seen;
+- int overflow_seen;
+-#endif
+ };
+
+ #ifdef CONFIG_GENERIC_TIME_VSYSCALL
+diff --git a/include/linux/usb/chipidea.h b/include/linux/usb/chipidea.h
+index 5a7f96684ea226..ebdfef124b2bc0 100644
+--- a/include/linux/usb/chipidea.h
++++ b/include/linux/usb/chipidea.h
+@@ -65,6 +65,7 @@ struct ci_hdrc_platform_data {
+ #define CI_HDRC_PHY_VBUS_CONTROL BIT(16)
+ #define CI_HDRC_HAS_PORTSC_PEC_MISSED BIT(17)
+ #define CI_HDRC_FORCE_VBUS_ACTIVE_ALWAYS BIT(18)
++#define CI_HDRC_HAS_SHORT_PKT_LIMIT BIT(19)
+ enum usb_dr_mode dr_mode;
+ #define CI_HDRC_CONTROLLER_RESET_EVENT 0
+ #define CI_HDRC_CONTROLLER_STOPPED_EVENT 1
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index a1864cff616aee..5bb4eaa52e14cf 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -301,6 +301,20 @@ enum {
+ */
+ HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT,
+
++ /*
++ * When this quirk is set, the HCI_OP_LE_EXT_CREATE_CONN command is
++ * disabled. This is required for the Actions Semiconductor ATS2851
++ * based controllers, which erroneously claims to support it.
++ */
++ HCI_QUIRK_BROKEN_EXT_CREATE_CONN,
++
++ /*
++ * When this quirk is set, the command WRITE_AUTH_PAYLOAD_TIMEOUT is
++ * skipped. This is required for the Actions Semiconductor ATS2851
++ * based controllers, due to a race condition in pairing process.
++ */
++ HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT,
++
+ /* When this quirk is set, MSFT extension monitor tracking by
+ * address filter is supported. Since tracking quantity of each
+ * pattern is limited, this feature supports tracking multiple
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 4c185a08c3a3af..c95f7e6ba25514 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1934,8 +1934,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ !test_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &(dev)->quirks))
+
+ /* Use ext create connection if command is supported */
+-#define use_ext_conn(dev) ((dev)->commands[37] & 0x80)
+-
++#define use_ext_conn(dev) (((dev)->commands[37] & 0x80) && \
++ !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &(dev)->quirks))
+ /* Extended advertising support */
+ #define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV))
+
+@@ -1948,8 +1948,10 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ * C24: Mandatory if the LE Controller supports Connection State and either
+ * LE Feature (LL Privacy) or LE Feature (Extended Advertising) is supported
+ */
+-#define use_enhanced_conn_complete(dev) (ll_privacy_capable(dev) || \
+- ext_adv_capable(dev))
++#define use_enhanced_conn_complete(dev) ((ll_privacy_capable(dev) || \
++ ext_adv_capable(dev)) && \
++ !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, \
++ &(dev)->quirks))
+
+ /* Periodic advertising support */
+ #define per_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_PERIODIC_ADV))
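
A controller driver advertises the breakage by setting the quirk bits on its
hci_dev before registering it; the macros above then keep the core away from
the broken opcodes. For the ATS2851 the probe path would do something like
(sketch):

    set_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &hdev->quirks);
    set_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, &hdev->quirks);
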
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index ff27cb2e166207..03b6165756fc5d 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -161,6 +161,7 @@ enum {
+ };
+
+ struct nft_inner_tun_ctx {
++ unsigned long cookie;
+ u16 type;
+ u16 inner_tunoff;
+ u16 inner_lloff;
+diff --git a/include/net/tcp_ao.h b/include/net/tcp_ao.h
+index 1d46460d0fefab..df655ce6987d37 100644
+--- a/include/net/tcp_ao.h
++++ b/include/net/tcp_ao.h
+@@ -183,7 +183,8 @@ int tcp_ao_hash_skb(unsigned short int family,
+ const u8 *tkey, int hash_offset, u32 sne);
+ int tcp_parse_ao(struct sock *sk, int cmd, unsigned short int family,
+ sockptr_t optval, int optlen);
+-struct tcp_ao_key *tcp_ao_established_key(struct tcp_ao_info *ao,
++struct tcp_ao_key *tcp_ao_established_key(const struct sock *sk,
++ struct tcp_ao_info *ao,
+ int sndid, int rcvid);
+ int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
+ struct request_sock *req, struct sk_buff *skb,
+diff --git a/include/sound/soc_sdw_utils.h b/include/sound/soc_sdw_utils.h
+index f68c1f193b3b46..0150b3735b4bd5 100644
+--- a/include/sound/soc_sdw_utils.h
++++ b/include/sound/soc_sdw_utils.h
+@@ -28,6 +28,7 @@
+ * - SOC_SDW_CODEC_SPKR | SOF_SIDECAR_AMPS - Not currently supported
+ */
+ #define SOC_SDW_SIDECAR_AMPS BIT(16)
++#define SOC_SDW_CODEC_MIC BIT(17)
+
+ #define SOC_SDW_UNUSED_DAI_ID -1
+ #define SOC_SDW_JACK_OUT_DAI_ID 0
+@@ -59,6 +60,7 @@ struct asoc_sdw_dai_info {
+ int (*rtd_init)(struct snd_soc_pcm_runtime *rtd, struct snd_soc_dai *dai);
+ bool rtd_init_done; /* Indicate that the rtd_init callback is done */
+ unsigned long quirk;
++ bool quirk_exclude;
+ };
+
+ struct asoc_sdw_codec_info {
+diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
+index 23200aabccacb1..da4bd9fd11625e 100644
+--- a/include/trace/events/damon.h
++++ b/include/trace/events/damon.h
+@@ -15,7 +15,7 @@ TRACE_EVENT_CONDITION(damos_before_apply,
+ unsigned int target_idx, struct damon_region *r,
+ unsigned int nr_regions, bool do_trace),
+
+- TP_ARGS(context_idx, target_idx, scheme_idx, r, nr_regions, do_trace),
++ TP_ARGS(context_idx, scheme_idx, target_idx, r, nr_regions, do_trace),
+
+ TP_CONDITION(do_trace),
+
+diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
+index c2f9cabf154d11..fa0d51cad57a80 100644
+--- a/include/trace/trace_events.h
++++ b/include/trace/trace_events.h
+@@ -244,6 +244,9 @@ static struct trace_event_fields trace_event_fields_##call[] = { \
+ tstruct \
+ {} };
+
++#undef DECLARE_EVENT_SYSCALL_CLASS
++#define DECLARE_EVENT_SYSCALL_CLASS DECLARE_EVENT_CLASS
++
+ #undef DEFINE_EVENT_PRINT
+ #define DEFINE_EVENT_PRINT(template, name, proto, args, print)
+
+@@ -374,11 +377,11 @@ static inline notrace int trace_event_get_offsets_##call( \
+
+ #include "stages/stage6_event_callback.h"
+
+-#undef DECLARE_EVENT_CLASS
+-#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+- \
++
++#undef __DECLARE_EVENT_CLASS
++#define __DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+ static notrace void \
+-trace_event_raw_event_##call(void *__data, proto) \
++do_trace_event_raw_event_##call(void *__data, proto) \
+ { \
+ struct trace_event_file *trace_file = __data; \
+ struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
+@@ -403,6 +406,29 @@ trace_event_raw_event_##call(void *__data, proto) \
+ \
+ trace_event_buffer_commit(&fbuffer); \
+ }
++
++#undef DECLARE_EVENT_CLASS
++#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
++__DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
++ PARAMS(assign), PARAMS(print)) \
++static notrace void \
++trace_event_raw_event_##call(void *__data, proto) \
++{ \
++ do_trace_event_raw_event_##call(__data, args); \
++}
++
++#undef DECLARE_EVENT_SYSCALL_CLASS
++#define DECLARE_EVENT_SYSCALL_CLASS(call, proto, args, tstruct, assign, print) \
++__DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
++ PARAMS(assign), PARAMS(print)) \
++static notrace void \
++trace_event_raw_event_##call(void *__data, proto) \
++{ \
++ preempt_disable_notrace(); \
++ do_trace_event_raw_event_##call(__data, args); \
++ preempt_enable_notrace(); \
++}
++
+ /*
+ * The ftrace_test_probe is compiled out, it is only here as a build time check
+ * to make sure that if the tracepoint handling changes, the ftrace probe will
+@@ -418,6 +444,8 @@ static inline void ftrace_test_probe_##call(void) \
+
+ #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
+
++#undef __DECLARE_EVENT_CLASS
++
+ #include "stages/stage7_class_define.h"
+
+ #undef DECLARE_EVENT_CLASS
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index b6fbe4988f2e9e..c4182e95a61955 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -512,7 +512,9 @@ struct drm_xe_query_gt_list {
+ * containing the following in mask:
+ * ``DSS_COMPUTE ff ff ff ff 00 00 00 00``
+ * means 32 DSS are available for compute.
+- * - %DRM_XE_TOPO_L3_BANK - To query the mask of enabled L3 banks
++ * - %DRM_XE_TOPO_L3_BANK - To query the mask of enabled L3 banks. This type
++ * may be omitted if the driver is unable to query the mask from the
++ * hardware.
+ * - %DRM_XE_TOPO_EU_PER_DSS - To query the mask of Execution Units (EU)
+ * available per Dual Sub Slices (DSS). For example a query response
+ * containing the following in mask:
+diff --git a/include/uapi/linux/fanotify.h b/include/uapi/linux/fanotify.h
+index a37de58ca571ae..34f221d3a1b957 100644
+--- a/include/uapi/linux/fanotify.h
++++ b/include/uapi/linux/fanotify.h
+@@ -60,6 +60,7 @@
+ #define FAN_REPORT_DIR_FID 0x00000400 /* Report unique directory id */
+ #define FAN_REPORT_NAME 0x00000800 /* Report events with name */
+ #define FAN_REPORT_TARGET_FID 0x00001000 /* Report dirent target id */
++#define FAN_REPORT_FD_ERROR 0x00002000 /* event->fd can report error */
+
+ /* Convenience macro - FAN_REPORT_NAME requires FAN_REPORT_DIR_FID */
+ #define FAN_REPORT_DFID_NAME (FAN_REPORT_DIR_FID | FAN_REPORT_NAME)
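
With FAN_REPORT_FD_ERROR passed to fanotify_init(), a failed fd creation is
reported as a negative errno in the event's fd field instead of the catch-all
FAN_NOFD. A userspace sketch (the event-reading loop and the metadata pointer
into the buffer are assumed):

    int fan_fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FD_ERROR,
                               O_RDONLY | O_CLOEXEC);

    /* read(2) events, walk them with FAN_EVENT_OK()/FAN_EVENT_NEXT() */
    if (metadata->fd < 0)
            fprintf(stderr, "no event fd: %s\n", strerror(-metadata->fd));
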
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 3f68ae3e4330dc..8932ec5bd7c029 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -299,6 +299,8 @@ struct ufs_pwr_mode_info {
+ * @max_num_rtt: maximum RTT supported by the host
+ * @init: called when the driver is initialized
+ * @exit: called to cleanup everything done in init
++ * @set_dma_mask: For setting another DMA mask than indicated by the 64AS
++ * capability bit.
+ * @get_ufs_hci_version: called to get UFS HCI version
+ * @clk_scale_notify: notifies that clks are scaled up/down
+ * @setup_clocks: called before touching any of the controller registers
+@@ -308,7 +310,9 @@ struct ufs_pwr_mode_info {
+ * to allow variant specific Uni-Pro initialization.
+ * @pwr_change_notify: called before and after a power mode change
+ *			is carried out to allow vendor specific capabilities
+- * to be set.
++ * to be set. PRE_CHANGE can modify final_params based
++ * on desired_pwr_mode, but POST_CHANGE must not alter
++ * the final_params parameter
+ * @setup_xfer_req: called before any transfer request is issued
+ * to set some things
+ * @setup_task_mgmt: called before any task management request is issued
+@@ -341,6 +345,7 @@ struct ufs_hba_variant_ops {
+ int (*init)(struct ufs_hba *);
+ void (*exit)(struct ufs_hba *);
+ u32 (*get_ufs_hci_version)(struct ufs_hba *);
++ int (*set_dma_mask)(struct ufs_hba *);
+ int (*clk_scale_notify)(struct ufs_hba *, bool,
+ enum ufs_notify_change_status);
+ int (*setup_clocks)(struct ufs_hba *, bool,
+@@ -350,9 +355,9 @@ struct ufs_hba_variant_ops {
+ int (*link_startup_notify)(struct ufs_hba *,
+ enum ufs_notify_change_status);
+ int (*pwr_change_notify)(struct ufs_hba *,
+- enum ufs_notify_change_status status,
+- struct ufs_pa_layer_attr *,
+- struct ufs_pa_layer_attr *);
++ enum ufs_notify_change_status status,
++ struct ufs_pa_layer_attr *desired_pwr_mode,
++ struct ufs_pa_layer_attr *final_params);
+ void (*setup_xfer_req)(struct ufs_hba *hba, int tag,
+ bool is_scsi_cmd);
+ void (*setup_task_mgmt)(struct ufs_hba *, int, u8);
+@@ -623,12 +628,6 @@ enum ufshcd_quirks {
+ */
+ UFSHCD_QUIRK_SKIP_PH_CONFIGURATION = 1 << 16,
+
+- /*
+- * This quirk needs to be enabled if the host controller has
+- * 64-bit addressing supported capability but it doesn't work.
+- */
+- UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS = 1 << 17,
+-
+ /*
+ * This quirk needs to be enabled if the host controller has
+ * auto-hibernate capability but it's FASTAUTO only.
+diff --git a/io_uring/tctx.c b/io_uring/tctx.c
+index c043fe93a3f232..84f6a838572040 100644
+--- a/io_uring/tctx.c
++++ b/io_uring/tctx.c
+@@ -47,8 +47,19 @@ static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
+ void __io_uring_free(struct task_struct *tsk)
+ {
+ struct io_uring_task *tctx = tsk->io_uring;
++ struct io_tctx_node *node;
++ unsigned long index;
+
+- WARN_ON_ONCE(!xa_empty(&tctx->xa));
++ /*
++ * Fault injection forcing allocation errors in the xa_store() path
++ * can lead to xa_empty() returning false, even though no actual
++ * node is stored in the xarray. Until that gets sorted out, attempt
++ * an iteration here and warn if any entries are found.
++ */
++ xa_for_each(&tctx->xa, index, node) {
++ WARN_ON_ONCE(1);
++ break;
++ }
+ WARN_ON_ONCE(tctx->io_wq);
+ WARN_ON_ONCE(tctx->cached_refs);
+
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 39c3c816ec7882..883510a3e8d075 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -147,7 +147,7 @@ static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
+ * Called by consumers of io_uring_cmd, if they originally returned
+ * -EIOCBQUEUED upon receiving the command.
+ */
+-void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
++void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, u64 res2,
+ unsigned issue_flags)
+ {
+ struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 79660e3fca4c1b..6cdbb4c33d31d5 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -947,22 +947,44 @@ static void *prog_fd_array_get_ptr(struct bpf_map *map,
+ struct file *map_file, int fd)
+ {
+ struct bpf_prog *prog = bpf_prog_get(fd);
++ bool is_extended;
+
+ if (IS_ERR(prog))
+ return prog;
+
+- if (!bpf_prog_map_compatible(map, prog)) {
++ if (prog->type == BPF_PROG_TYPE_EXT ||
++ !bpf_prog_map_compatible(map, prog)) {
+ bpf_prog_put(prog);
+ return ERR_PTR(-EINVAL);
+ }
+
++ mutex_lock(&prog->aux->ext_mutex);
++ is_extended = prog->aux->is_extended;
++ if (!is_extended)
++ prog->aux->prog_array_member_cnt++;
++ mutex_unlock(&prog->aux->ext_mutex);
++ if (is_extended) {
++ /* Extended prog can not be tail callee. It's to prevent a
++		/* An extended prog cannot be a tail callee. This prevents a
++ * tail callee prog entry -> tail callee prog subprog ->
++ * freplace prog entry --tailcall-> tail callee prog entry.
++ */
++ bpf_prog_put(prog);
++ return ERR_PTR(-EBUSY);
++ }
++
+ return prog;
+ }
+
+ static void prog_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
++ struct bpf_prog *prog = ptr;
++
++ mutex_lock(&prog->aux->ext_mutex);
++ prog->aux->prog_array_member_cnt--;
++ mutex_unlock(&prog->aux->ext_mutex);
+ /* bpf_prog is freed after one RCU or tasks trace grace period */
+- bpf_prog_put(ptr);
++ bpf_prog_put(prog);
+ }
+
+ static u32 prog_fd_array_sys_lookup_elem(void *ptr)
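
The counter and flag close a loop the verifier cannot see because it spans two
programs. Illustratively (hypothetical BPF skeleton, not part of the patch):
if a program both sits in a prog_array and has a global subprog replaced, the
freplace body could tail-call straight back into the target's entry:

    /* slot 0 of jmp_table holds the target prog */
    SEC("freplace/tgt_subprog")
    int repl(struct xdp_md *ctx)
    {
            bpf_tail_call(ctx, &jmp_table, 0); /* re-enters the target: loop */
            return 0;
    }

With this change, whichever attachment comes second - the prog_array update
or the freplace link - fails with -EBUSY.
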
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 5e77c58e06010e..233ea78f8f1bd9 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -131,6 +131,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
+ INIT_LIST_HEAD_RCU(&fp->aux->ksym_prefix.lnode);
+ #endif
+ mutex_init(&fp->aux->used_maps_mutex);
++ mutex_init(&fp->aux->ext_mutex);
+ mutex_init(&fp->aux->dst_mutex);
+
+ return fp;
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 7878be18e9d264..3aa002a47a9666 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -184,7 +184,7 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
+ static void dev_map_free(struct bpf_map *map)
+ {
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+- int i;
++ u32 i;
+
+ /* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
+ * so the programs (can be more than one that used this map) were
+@@ -821,7 +821,7 @@ static long dev_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ struct bpf_dtab_netdev *old_dev;
+- int k = *(u32 *)key;
++ u32 k = *(u32 *)key;
+
+ if (k >= map->max_entries)
+ return -EINVAL;
+@@ -838,7 +838,7 @@ static long dev_map_hash_delete_elem(struct bpf_map *map, void *key)
+ {
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ struct bpf_dtab_netdev *old_dev;
+- int k = *(u32 *)key;
++ u32 k = *(u32 *)key;
+ unsigned long flags;
+ int ret = -ENOENT;
+
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index b14b87463ee04e..3ec941a0ea41c5 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -896,9 +896,12 @@ static int htab_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
+ static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l)
+ {
+ check_and_free_fields(htab, l);
++
++ migrate_disable();
+ if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH)
+ bpf_mem_cache_free(&htab->pcpu_ma, l->ptr_to_pptr);
+ bpf_mem_cache_free(&htab->ma, l);
++ migrate_enable();
+ }
+
+ static void htab_put_fd_value(struct bpf_htab *htab, struct htab_elem *l)
+@@ -948,7 +951,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
+ if (htab_is_prealloc(htab)) {
+ bpf_map_dec_elem_count(&htab->map);
+ check_and_free_fields(htab, l);
+- __pcpu_freelist_push(&htab->freelist, &l->fnode);
++ pcpu_freelist_push(&htab->freelist, &l->fnode);
+ } else {
+ dec_elem_count(htab);
+ htab_elem_free(htab, l);
+@@ -1018,7 +1021,6 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+ */
+ pl_new = this_cpu_ptr(htab->extra_elems);
+ l_new = *pl_new;
+- htab_put_fd_value(htab, old_elem);
+ *pl_new = old_elem;
+ } else {
+ struct pcpu_freelist_node *l;
+@@ -1105,6 +1107,7 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
+ struct htab_elem *l_new = NULL, *l_old;
+ struct hlist_nulls_head *head;
+ unsigned long flags;
++ void *old_map_ptr;
+ struct bucket *b;
+ u32 key_size, hash;
+ int ret;
+@@ -1183,12 +1186,27 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
+ hlist_nulls_add_head_rcu(&l_new->hash_node, head);
+ if (l_old) {
+ hlist_nulls_del_rcu(&l_old->hash_node);
++
++ /* l_old has already been stashed in htab->extra_elems, free
++ * its special fields before it is available for reuse. Also
++		 * save the old map pointer for a htab of maps before the
++		 * unlock, and release it afterwards.
++ */
++ old_map_ptr = NULL;
++ if (htab_is_prealloc(htab)) {
++ if (map->ops->map_fd_put_ptr)
++ old_map_ptr = fd_htab_map_get_ptr(map, l_old);
++ check_and_free_fields(htab, l_old);
++ }
++ }
++ htab_unlock_bucket(htab, b, hash, flags);
++ if (l_old) {
++ if (old_map_ptr)
++ map->ops->map_fd_put_ptr(map, old_map_ptr, true);
+ if (!htab_is_prealloc(htab))
+ free_htab_elem(htab, l_old);
+- else
+- check_and_free_fields(htab, l_old);
+ }
+- ret = 0;
++ return 0;
+ err:
+ htab_unlock_bucket(htab, b, hash, flags);
+ return ret;
+@@ -1432,15 +1450,15 @@ static long htab_map_delete_elem(struct bpf_map *map, void *key)
+ return ret;
+
+ l = lookup_elem_raw(head, hash, key, key_size);
+-
+- if (l) {
++ if (l)
+ hlist_nulls_del_rcu(&l->hash_node);
+- free_htab_elem(htab, l);
+- } else {
++ else
+ ret = -ENOENT;
+- }
+
+ htab_unlock_bucket(htab, b, hash, flags);
++
++ if (l)
++ free_htab_elem(htab, l);
+ return ret;
+ }
+
+@@ -1853,13 +1871,14 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
+ * may cause deadlock. See comments in function
+ * prealloc_lru_pop(). Let us do bpf_lru_push_free()
+ * after releasing the bucket lock.
++ *
++ * For htab of maps, htab_put_fd_value() in
++ * free_htab_elem() may acquire a spinlock with bucket
++		 * lock being held, which violates lock ordering, so
++ * invoke free_htab_elem() after unlock as well.
+ */
+- if (is_lru_map) {
+- l->batch_flink = node_to_free;
+- node_to_free = l;
+- } else {
+- free_htab_elem(htab, l);
+- }
++ l->batch_flink = node_to_free;
++ node_to_free = l;
+ }
+ dst_key += key_size;
+ dst_val += value_size;
+@@ -1871,7 +1890,10 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
+ while (node_to_free) {
+ l = node_to_free;
+ node_to_free = node_to_free->batch_flink;
+- htab_lru_push_free(htab, l);
++ if (is_lru_map)
++ htab_lru_push_free(htab, l);
++ else
++ free_htab_elem(htab, l);
+ }
+
+ next_batch:
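
All three hunks follow the same shape: unlink the element while the bucket
lock is held, but run the freeing path only after the lock is dropped, since
freeing may itself take locks or re-enter the allocator. In generic form
(sketch, not the patch code):

    htab_lock_bucket(htab, b, hash, &flags);
    l = lookup_elem_raw(head, hash, key, key_size);
    if (l)
            hlist_nulls_del_rcu(&l->hash_node);
    htab_unlock_bucket(htab, b, hash, flags);

    if (l)
            free_htab_elem(htab, l);   /* safe: no bucket lock held */
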
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 9b60eda0f727b3..010e91ac978e62 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -310,12 +310,22 @@ static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie,
+ return node;
+ }
+
++static int trie_check_add_elem(struct lpm_trie *trie, u64 flags)
++{
++ if (flags == BPF_EXIST)
++ return -ENOENT;
++ if (trie->n_entries == trie->map.max_entries)
++ return -ENOSPC;
++ trie->n_entries++;
++ return 0;
++}
++
+ /* Called from syscall or from eBPF program */
+ static long trie_update_elem(struct bpf_map *map,
+ void *_key, void *value, u64 flags)
+ {
+ struct lpm_trie *trie = container_of(map, struct lpm_trie, map);
+- struct lpm_trie_node *node, *im_node = NULL, *new_node = NULL;
++ struct lpm_trie_node *node, *im_node, *new_node = NULL;
+ struct lpm_trie_node *free_node = NULL;
+ struct lpm_trie_node __rcu **slot;
+ struct bpf_lpm_trie_key_u8 *key = _key;
+@@ -333,20 +343,12 @@ static long trie_update_elem(struct bpf_map *map,
+ spin_lock_irqsave(&trie->lock, irq_flags);
+
+ /* Allocate and fill a new node */
+-
+- if (trie->n_entries == trie->map.max_entries) {
+- ret = -ENOSPC;
+- goto out;
+- }
+-
+ new_node = lpm_trie_node_alloc(trie, value);
+ if (!new_node) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+- trie->n_entries++;
+-
+ new_node->prefixlen = key->prefixlen;
+ RCU_INIT_POINTER(new_node->child[0], NULL);
+ RCU_INIT_POINTER(new_node->child[1], NULL);
+@@ -376,6 +378,10 @@ static long trie_update_elem(struct bpf_map *map,
+ * simply assign the @new_node to that slot and be done.
+ */
+ if (!node) {
++ ret = trie_check_add_elem(trie, flags);
++ if (ret)
++ goto out;
++
+ rcu_assign_pointer(*slot, new_node);
+ goto out;
+ }
+@@ -384,18 +390,30 @@ static long trie_update_elem(struct bpf_map *map,
+ * which already has the correct data array set.
+ */
+ if (node->prefixlen == matchlen) {
++ if (!(node->flags & LPM_TREE_NODE_FLAG_IM)) {
++ if (flags == BPF_NOEXIST) {
++ ret = -EEXIST;
++ goto out;
++ }
++ } else {
++ ret = trie_check_add_elem(trie, flags);
++ if (ret)
++ goto out;
++ }
++
+ new_node->child[0] = node->child[0];
+ new_node->child[1] = node->child[1];
+
+- if (!(node->flags & LPM_TREE_NODE_FLAG_IM))
+- trie->n_entries--;
+-
+ rcu_assign_pointer(*slot, new_node);
+ free_node = node;
+
+ goto out;
+ }
+
++ ret = trie_check_add_elem(trie, flags);
++ if (ret)
++ goto out;
++
+ /* If the new node matches the prefix completely, it must be inserted
+ * as an ancestor. Simply insert it between @node and *@slot.
+ */
+@@ -408,6 +426,7 @@ static long trie_update_elem(struct bpf_map *map,
+
+ im_node = lpm_trie_node_alloc(trie, NULL);
+ if (!im_node) {
++ trie->n_entries--;
+ ret = -ENOMEM;
+ goto out;
+ }
+@@ -429,14 +448,8 @@ static long trie_update_elem(struct bpf_map *map,
+ rcu_assign_pointer(*slot, im_node);
+
+ out:
+- if (ret) {
+- if (new_node)
+- trie->n_entries--;
+-
++ if (ret)
+ kfree(new_node);
+- kfree(im_node);
+- }
+-
+ spin_unlock_irqrestore(&trie->lock, irq_flags);
+ kfree_rcu(free_node, rcu);
+
+@@ -633,7 +646,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
+ struct lpm_trie_node **node_stack = NULL;
+ int err = 0, stack_ptr = -1;
+ unsigned int next_bit;
+- size_t matchlen;
++ size_t matchlen = 0;
+
+ /* The get_next_key follows postorder. For the 4 node example in
+ * the top of this file, the trie_get_next_key() returns the following
+@@ -672,7 +685,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
+ next_bit = extract_bit(key->data, node->prefixlen);
+ node = rcu_dereference(node->child[next_bit]);
+ }
+- if (!node || node->prefixlen != key->prefixlen ||
++ if (!node || node->prefixlen != matchlen ||
+ (node->flags & LPM_TREE_NODE_FLAG_IM))
+ goto find_leftmost;
+
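
trie_check_add_elem() centralizes the capacity and flag checks, so BPF_EXIST
and BPF_NOEXIST now behave on LPM tries as on other map types. From userspace
the semantics are (libbpf sketch):

    /* key points at a struct bpf_lpm_trie_key_u8 plus prefix bytes */
    err = bpf_map_update_elem(fd, key, &val, BPF_NOEXIST); /* -EEXIST if present */
    err = bpf_map_update_elem(fd, key, &val, BPF_EXIST);   /* -ENOENT if absent */
    err = bpf_map_update_elem(fd, key, &val, BPF_ANY);     /* insert or update */
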
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index c5aa127ed4cc01..368ae8d231d417 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2976,12 +2976,24 @@ void bpf_link_inc(struct bpf_link *link)
+ atomic64_inc(&link->refcnt);
+ }
+
++static void bpf_link_dealloc(struct bpf_link *link)
++{
++ /* now that we know that bpf_link itself can't be reached, put underlying BPF program */
++ if (link->prog)
++ bpf_prog_put(link->prog);
++
++ /* free bpf_link and its containing memory */
++ if (link->ops->dealloc_deferred)
++ link->ops->dealloc_deferred(link);
++ else
++ link->ops->dealloc(link);
++}
++
+ static void bpf_link_defer_dealloc_rcu_gp(struct rcu_head *rcu)
+ {
+ struct bpf_link *link = container_of(rcu, struct bpf_link, rcu);
+
+- /* free bpf_link and its containing memory */
+- link->ops->dealloc_deferred(link);
++ bpf_link_dealloc(link);
+ }
+
+ static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu)
+@@ -3003,7 +3015,6 @@ static void bpf_link_free(struct bpf_link *link)
+ sleepable = link->prog->sleepable;
+ /* detach BPF program, clean up used resources */
+ ops->release(link);
+- bpf_prog_put(link->prog);
+ }
+ if (ops->dealloc_deferred) {
+ /* schedule BPF link deallocation; if underlying BPF program
+@@ -3014,8 +3025,9 @@ static void bpf_link_free(struct bpf_link *link)
+ call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp);
+ else
+ call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp);
+- } else if (ops->dealloc)
+- ops->dealloc(link);
++ } else if (ops->dealloc) {
++ bpf_link_dealloc(link);
++ }
+ }
+
+ static void bpf_link_put_deferred(struct work_struct *work)
+@@ -3218,7 +3230,8 @@ static void bpf_tracing_link_release(struct bpf_link *link)
+ container_of(link, struct bpf_tracing_link, link.link);
+
+ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link,
+- tr_link->trampoline));
++ tr_link->trampoline,
++ tr_link->tgt_prog));
+
+ bpf_trampoline_put(tr_link->trampoline);
+
+@@ -3358,7 +3371,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
+ * in prog->aux
+ *
+ * - if prog->aux->dst_trampoline is NULL, the program has already been
+- * attached to a target and its initial target was cleared (below)
++ * attached to a target and its initial target was cleared (below)
+ *
+ * - if tgt_prog != NULL, the caller specified tgt_prog_fd +
+ * target_btf_id using the link_create API.
+@@ -3433,7 +3446,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
+ if (err)
+ goto out_unlock;
+
+- err = bpf_trampoline_link_prog(&link->link, tr);
++ err = bpf_trampoline_link_prog(&link->link, tr, tgt_prog);
+ if (err) {
+ bpf_link_cleanup(&link_primer);
+ link = NULL;
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 1166d9dd3e8b5d..ecdd2660561f5b 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -528,7 +528,27 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
+ }
+ }
+
+-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
++{
++ struct bpf_prog_aux *aux = tgt_prog->aux;
++
++ guard(mutex)(&aux->ext_mutex);
++ if (aux->prog_array_member_cnt)
++		/* A program extension cannot extend the target prog once the
++		 * target prog has been added to a prog_array map as a tail
++		 * callee. This prevents a potential infinite loop like:
++ * tgt prog entry -> tgt prog subprog -> freplace prog entry
++ * --tailcall-> tgt prog entry.
++ */
++ return -EBUSY;
++
++ aux->is_extended = true;
++ return 0;
++}
++
++static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ enum bpf_tramp_prog_type kind;
+ struct bpf_tramp_link *link_exiting;
+@@ -549,6 +569,9 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
+ /* Cannot attach extension if fentry/fexit are in use. */
+ if (cnt)
+ return -EBUSY;
++ err = bpf_freplace_check_tgt_prog(tgt_prog);
++ if (err)
++ return err;
+ tr->extension_prog = link->link.prog;
+ return bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, NULL,
+ link->link.prog->bpf_func);
+@@ -575,17 +598,21 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
+ return err;
+ }
+
+-int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ int err;
+
+ mutex_lock(&tr->mutex);
+- err = __bpf_trampoline_link_prog(link, tr);
++ err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
+ mutex_unlock(&tr->mutex);
+ return err;
+ }
+
+-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ enum bpf_tramp_prog_type kind;
+ int err;
+@@ -596,6 +623,8 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_
+ err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
+ tr->extension_prog->bpf_func, NULL);
+ tr->extension_prog = NULL;
++ guard(mutex)(&tgt_prog->aux->ext_mutex);
++ tgt_prog->aux->is_extended = false;
+ return err;
+ }
+ hlist_del_init(&link->tramp_hlist);
+@@ -604,12 +633,14 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_
+ }
+
+ /* bpf_trampoline_unlink_prog() should never fail. */
+-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ int err;
+
+ mutex_lock(&tr->mutex);
+- err = __bpf_trampoline_unlink_prog(link, tr);
++ err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
+ mutex_unlock(&tr->mutex);
+ return err;
+ }
+@@ -624,7 +655,7 @@ static void bpf_shim_tramp_link_release(struct bpf_link *link)
+ if (!shim_link->trampoline)
+ return;
+
+- WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline));
++ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline, NULL));
+ bpf_trampoline_put(shim_link->trampoline);
+ }
+
+@@ -738,7 +769,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
+ goto err;
+ }
+
+- err = __bpf_trampoline_link_prog(&shim_link->link, tr);
++ err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL);
+ if (err)
+ goto err;
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 91317857ea3ee5..b2008076df9c26 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1200,14 +1200,17 @@ static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
+ /* Mark stack slot as STACK_MISC, unless it is already STACK_INVALID, in which
+ * case they are equivalent, or it's STACK_ZERO, in which case we preserve
+ * more precise STACK_ZERO.
+- * Note, in uprivileged mode leaving STACK_INVALID is wrong, so we take
+- * env->allow_ptr_leaks into account and force STACK_MISC, if necessary.
++ * Regardless of allow_ptr_leaks setting (i.e., privileged or unprivileged
++ * mode), we won't promote STACK_INVALID to STACK_MISC. In the privileged case
++ * it is unnecessary, as both are considered equivalent when loading data and
++ * pruning; in the unprivileged case it would be incorrect to allow reads of
++ * invalid slots.
+ */
+ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
+ {
+ if (*stype == STACK_ZERO)
+ return;
+- if (env->allow_ptr_leaks && *stype == STACK_INVALID)
++ if (*stype == STACK_INVALID)
+ return;
+ *stype = STACK_MISC;
+ }
+@@ -4646,6 +4649,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ */
+ if (!env->allow_ptr_leaks &&
+ is_spilled_reg(&state->stack[spi]) &&
++ !is_spilled_scalar_reg(&state->stack[spi]) &&
+ size != BPF_REG_SIZE) {
+ verbose(env, "attempt to corrupt spilled pointer on stack\n");
+ return -EACCES;
+@@ -8021,6 +8025,11 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ const struct btf_type *t;
+ int spi, err, i, nr_slots, btf_id;
+
++ if (reg->type != PTR_TO_STACK) {
++ verbose(env, "arg#%d expected pointer to an iterator on stack\n", regno - 1);
++ return -EINVAL;
++ }
++
+ /* For iter_{new,next,destroy} functions, btf_check_iter_kfuncs()
+ * ensures struct convention, so we wouldn't need to do any BTF
+ * validation here. But given iter state can be passed as a parameter
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index d570535342cb78..f6f0387761d05a 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -1052,9 +1052,13 @@ static void check_unmap(struct dma_debug_entry *ref)
+ }
+
+ hash_bucket_del(entry);
+- dma_entry_free(entry);
+-
+ put_hash_bucket(bucket, flags);
++
++ /*
++ * Free the entry outside of bucket_lock to avoid ABBA deadlocks
++ * between that and radix_lock.
++ */
++ dma_entry_free(entry);
+ }
+
+ static void check_for_stack(struct device *dev,
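
The check_unmap() hunk above is a classic lock-ordering fix: dma_entry_free() internally takes radix_lock, so calling it while still holding the hash bucket lock sets up an ABBA deadlock against paths that take the two locks in the opposite order. A minimal userspace sketch of the pattern, with pthread mutexes standing in for the kernel locks (all names here are illustrative, not kernel API):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t radix_lock  = PTHREAD_MUTEX_INITIALIZER;

    struct entry { struct entry *next; };

    /* Like dma_entry_free(): takes radix_lock internally. */
    static void free_entry(struct entry *e)
    {
            pthread_mutex_lock(&radix_lock);
            /* ... drop the entry from a secondary index ... */
            pthread_mutex_unlock(&radix_lock);
            free(e);
    }

    static void unmap(struct entry **bucket, struct entry *e)
    {
            pthread_mutex_lock(&bucket_lock);
            *bucket = e->next;  /* hash_bucket_del(), assuming e is the head */
            pthread_mutex_unlock(&bucket_lock);

            /*
             * Freeing only after bucket_lock is dropped means this path never
             * holds bucket_lock -> radix_lock, so it cannot deadlock against
             * a path that takes radix_lock -> bucket_lock.
             */
            free_entry(e);
    }
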
+diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
+index 53b21ae30e00ee..b14072071889fb 100644
+--- a/kernel/kcsan/debugfs.c
++++ b/kernel/kcsan/debugfs.c
+@@ -46,14 +46,8 @@ static struct {
+ int used; /* number of elements used */
+ bool sorted; /* if elements are sorted */
+ bool whitelist; /* if list is a blacklist or whitelist */
+-} report_filterlist = {
+- .addrs = NULL,
+- .size = 8, /* small initial size */
+- .used = 0,
+- .sorted = false,
+- .whitelist = false, /* default is blacklist */
+-};
+-static DEFINE_SPINLOCK(report_filterlist_lock);
++} report_filterlist;
++static DEFINE_RAW_SPINLOCK(report_filterlist_lock);
+
+ /*
+ * The microbenchmark allows benchmarking KCSAN core runtime only. To run
+@@ -110,7 +104,7 @@ bool kcsan_skip_report_debugfs(unsigned long func_addr)
+ return false;
+ func_addr -= offset; /* Get function start */
+
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
+ if (report_filterlist.used == 0)
+ goto out;
+
+@@ -127,7 +121,7 @@ bool kcsan_skip_report_debugfs(unsigned long func_addr)
+ ret = !ret;
+
+ out:
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+ return ret;
+ }
+
+@@ -135,9 +129,9 @@ static void set_report_filterlist_whitelist(bool whitelist)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
+ report_filterlist.whitelist = whitelist;
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+ }
+
+ /* Returns 0 on success, error-code otherwise. */
+@@ -145,6 +139,9 @@ static ssize_t insert_report_filterlist(const char *func)
+ {
+ unsigned long flags;
+ unsigned long addr = kallsyms_lookup_name(func);
++ unsigned long *delay_free = NULL;
++ unsigned long *new_addrs = NULL;
++ size_t new_size = 0;
+ ssize_t ret = 0;
+
+ if (!addr) {
+@@ -152,32 +149,33 @@ static ssize_t insert_report_filterlist(const char *func)
+ return -ENOENT;
+ }
+
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++retry_alloc:
++ /*
++ * Check if we need an allocation, and re-validate under the lock. Since
++ * the report_filterlist_lock is a raw spinlock, we cannot allocate under it.
++ */
++ if (data_race(report_filterlist.used == report_filterlist.size)) {
++ new_size = (report_filterlist.size ?: 4) * 2;
++ delay_free = new_addrs = kmalloc_array(new_size, sizeof(unsigned long), GFP_KERNEL);
++ if (!new_addrs)
++ return -ENOMEM;
++ }
+
+- if (report_filterlist.addrs == NULL) {
+- /* initial allocation */
+- report_filterlist.addrs =
+- kmalloc_array(report_filterlist.size,
+- sizeof(unsigned long), GFP_ATOMIC);
+- if (report_filterlist.addrs == NULL) {
+- ret = -ENOMEM;
+- goto out;
+- }
+- } else if (report_filterlist.used == report_filterlist.size) {
+- /* resize filterlist */
+- size_t new_size = report_filterlist.size * 2;
+- unsigned long *new_addrs =
+- krealloc(report_filterlist.addrs,
+- new_size * sizeof(unsigned long), GFP_ATOMIC);
+-
+- if (new_addrs == NULL) {
+- /* leave filterlist itself untouched */
+- ret = -ENOMEM;
+- goto out;
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
++ if (report_filterlist.used == report_filterlist.size) {
++ /* Check we pre-allocated enough, and retry if not. */
++ if (report_filterlist.used >= new_size) {
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ kfree(new_addrs); /* kfree(NULL) is safe */
++ delay_free = new_addrs = NULL;
++ goto retry_alloc;
+ }
+
++ if (report_filterlist.used)
++ memcpy(new_addrs, report_filterlist.addrs, report_filterlist.used * sizeof(unsigned long));
++ delay_free = report_filterlist.addrs; /* free the old list */
++ report_filterlist.addrs = new_addrs; /* switch to the new list */
+ report_filterlist.size = new_size;
+- report_filterlist.addrs = new_addrs;
+ }
+
+ /* Note: deduplicating should be done in userspace. */
+@@ -185,9 +183,9 @@ static ssize_t insert_report_filterlist(const char *func)
+ kallsyms_lookup_name(func);
+ report_filterlist.sorted = false;
+
+-out:
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+
++ kfree(delay_free);
+ return ret;
+ }
+
+@@ -204,13 +202,13 @@ static int show_info(struct seq_file *file, void *v)
+ }
+
+ /* show filter functions, and filter type */
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
+ seq_printf(file, "\n%s functions: %s\n",
+ report_filterlist.whitelist ? "whitelisted" : "blacklisted",
+ report_filterlist.used == 0 ? "none" : "");
+ for (i = 0; i < report_filterlist.used; ++i)
+ seq_printf(file, " %ps\n", (void *)report_filterlist.addrs[i]);
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+
+ return 0;
+ }
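
The insert_report_filterlist() rewrite above shows how to grow a buffer protected by a raw spinlock, which must never be held across an allocation: allocate speculatively before locking, re-validate the size under the lock, and retry if another writer got there first. A stripped-down userspace sketch of the same control flow (a pthread mutex stands in for the raw spinlock; the list type is illustrative):

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    /* The mutex stands in for the kernel's raw spinlock. */
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct { unsigned long *addrs; size_t used, size; } list;

    static int list_insert(unsigned long addr)
    {
            unsigned long *new_addrs = NULL, *delay_free = NULL;
            size_t new_size = 0;

    retry_alloc:
            /* Speculative size check; allocate before taking the lock. */
            if (list.used == list.size) {
                    new_size = (list.size ? list.size : 4) * 2;
                    delay_free = new_addrs =
                            malloc(new_size * sizeof(*new_addrs));
                    if (!new_addrs)
                            return -1;
            }

            pthread_mutex_lock(&list_lock);
            if (list.used == list.size) {
                    if (list.used >= new_size) {
                            /* Lost a race; the buffer is too small. Retry. */
                            pthread_mutex_unlock(&list_lock);
                            free(new_addrs);
                            delay_free = new_addrs = NULL;
                            goto retry_alloc;
                    }
                    if (list.used)
                            memcpy(new_addrs, list.addrs,
                                   list.used * sizeof(*new_addrs));
                    delay_free = list.addrs;  /* old buffer, freed below */
                    list.addrs = new_addrs;
                    list.size = new_size;
            }
            list.addrs[list.used++] = addr;
            pthread_mutex_unlock(&list_lock);

            free(delay_free);  /* never allocate or free under the lock */
            return 0;
    }
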
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 76b27b2a9c56ad..6cc12777bb11ab 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1242,9 +1242,9 @@ static void nohz_csd_func(void *info)
+ WARN_ON(!(flags & NOHZ_KICK_MASK));
+
+ rq->idle_balance = idle_cpu(cpu);
+- if (rq->idle_balance && !need_resched()) {
++ if (rq->idle_balance) {
+ rq->nohz_idle_balance = flags;
+- raise_softirq_irqoff(SCHED_SOFTIRQ);
++ __raise_softirq_irqoff(SCHED_SOFTIRQ);
+ }
+ }
+
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index be1b917dc8ce4c..40a1ad4493b4d9 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2042,6 +2042,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags)
+ } else if (flags & ENQUEUE_REPLENISH) {
+ replenish_dl_entity(dl_se);
+ } else if ((flags & ENQUEUE_RESTORE) &&
++ !is_dl_boosted(dl_se) &&
+ dl_time_before(dl_se->deadline, rq_clock(rq_of_dl_se(dl_se)))) {
+ setup_new_dl_entity(dl_se);
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 16613631543f18..79bb18651cdb8b 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3105,6 +3105,12 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
+
+ *found = false;
+
++
++ /*
++ * This is necessary to protect llc_cpus.
++ */
++ rcu_read_lock();
++
+ /*
+ * If WAKE_SYNC, the waker's local DSQ is empty, and the system is
+ * under utilized, wake up @p to the local DSQ of the waker. Checking
+@@ -3147,9 +3153,12 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
+ if (cpu >= 0)
+ goto cpu_found;
+
++ rcu_read_unlock();
+ return prev_cpu;
+
+ cpu_found:
++ rcu_read_unlock();
++
+ *found = true;
+ return cpu;
+ }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2d16c8545c71ed..782ce70ebd1b08 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3399,10 +3399,16 @@ static void task_numa_work(struct callback_head *work)
+
+ /* Initialise new per-VMA NUMAB state. */
+ if (!vma->numab_state) {
+- vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
+- GFP_KERNEL);
+- if (!vma->numab_state)
++ struct vma_numab_state *ptr;
++
++ ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
++ if (!ptr)
++ continue;
++
++ if (cmpxchg(&vma->numab_state, NULL, ptr)) {
++ kfree(ptr);
+ continue;
++ }
+
+ vma->numab_state->start_scan_seq = mm->numa_scan_seq;
+
+@@ -12574,7 +12580,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
+ * work being done for other CPUs. Next load
+ * balancing owner will pick it up.
+ */
+- if (need_resched()) {
++ if (!idle_cpu(this_cpu) && need_resched()) {
+ if (flags & NOHZ_STATS_KICK)
+ has_blocked_load = true;
+ if (flags & NOHZ_NEXT_KICK)
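
The task_numa_work() hunk above closes an allocation race: two threads scanning the same VMA could both see numab_state == NULL, and a plain store would leak the loser's allocation. cmpxchg() publishes the pointer only if the field is still NULL, and the loser frees its copy. A userspace equivalent using C11 atomics (the struct and names are stand-ins):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct numab_state { int start_scan_seq; };  /* illustrative payload */

    static _Atomic(struct numab_state *) state;

    static struct numab_state *get_state(void)
    {
            struct numab_state *cur = atomic_load(&state);
            struct numab_state *ptr;

            if (cur)
                    return cur;

            ptr = calloc(1, sizeof(*ptr));
            if (!ptr)
                    return NULL;

            /*
             * Publish only if state is still NULL; on failure 'cur' is
             * updated to the winner's pointer and our copy is freed --
             * the moral of cmpxchg(&vma->numab_state, NULL, ptr).
             */
            cur = NULL;
            if (!atomic_compare_exchange_strong(&state, &cur, ptr)) {
                    free(ptr);
                    return cur;
            }
            return ptr;
    }
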
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 24f9f90b6574e5..1784ed1fb3fe5d 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -1238,7 +1238,7 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ bool empty = !cpumask_and(new_mask, new_mask,
+ ctx->user_mask);
+
+- if (WARN_ON_ONCE(empty))
++ if (empty)
+ cpumask_copy(new_mask, cpus_allowed);
+ }
+ __set_cpus_allowed_ptr(p, ctx);
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index d082e7840f8802..8c4524ce65fafe 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -280,17 +280,24 @@ static inline void invoke_softirq(void)
+ wakeup_softirqd();
+ }
+
++#define SCHED_SOFTIRQ_MASK BIT(SCHED_SOFTIRQ)
++
+ /*
+ * flush_smp_call_function_queue() can raise a soft interrupt in a function
+- * call. On RT kernels this is undesired and the only known functionality
+- * in the block layer which does this is disabled on RT. If soft interrupts
+- * get raised which haven't been raised before the flush, warn so it can be
++ * call. On RT kernels this is undesired; the only known users are in the
++ * block layer, which is disabled on RT, and in the scheduler for idle load
++ * balancing. If soft interrupts get raised which haven't been
++ * raised before the flush, warn if it is not a SCHED_SOFTIRQ so it can be
+ * investigated.
+ */
+ void do_softirq_post_smp_call_flush(unsigned int was_pending)
+ {
+- if (WARN_ON_ONCE(was_pending != local_softirq_pending()))
++ unsigned int is_pending = local_softirq_pending();
++
++ if (unlikely(was_pending != is_pending)) {
++ WARN_ON_ONCE(was_pending != (is_pending & ~SCHED_SOFTIRQ_MASK));
+ invoke_softirq();
++ }
+ }
+
+ #else /* CONFIG_PREEMPT_RT */
+diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
+index 8ebb6d5a106bea..b0b97a60aaa6fc 100644
+--- a/kernel/time/Kconfig
++++ b/kernel/time/Kconfig
+@@ -17,11 +17,6 @@ config ARCH_CLOCKSOURCE_DATA
+ config ARCH_CLOCKSOURCE_INIT
+ bool
+
+-# Clocksources require validation of the clocksource against the last
+-# cycle update - x86/TSC misfeature
+-config CLOCKSOURCE_VALIDATE_LAST_CYCLE
+- bool
+-
+ # Timekeeping vsyscall support
+ config GENERIC_TIME_VSYSCALL
+ bool
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 23336eecb4f43b..8a40a616288b81 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -22,7 +22,7 @@
+
+ static noinline u64 cycles_to_nsec_safe(struct clocksource *cs, u64 start, u64 end)
+ {
+- u64 delta = clocksource_delta(end, start, cs->mask);
++ u64 delta = clocksource_delta(end, start, cs->mask, cs->max_raw_delta);
+
+ if (likely(delta < cs->max_cycles))
+ return clocksource_cyc2ns(delta, cs->mult, cs->shift);
+@@ -985,6 +985,15 @@ static inline void clocksource_update_max_deferment(struct clocksource *cs)
+ cs->max_idle_ns = clocks_calc_max_nsecs(cs->mult, cs->shift,
+ cs->maxadj, cs->mask,
+ &cs->max_cycles);
++
++ /*
++ * Threshold for detecting negative motion in clocksource_delta().
++ *
++ * Allow for 0.875 of the counter width so that overly long idle
++ * sleeps, which go slightly over mask/2, do not trigger the
++ * negative motion detection.
++ */
++ cs->max_raw_delta = (cs->mask >> 1) + (cs->mask >> 2) + (cs->mask >> 3);
+ }
+
+ static struct clocksource *clocksource_find_best(bool oneshot, bool skipcur)
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 802b336f4b8c2f..de3547d63aa975 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -804,7 +804,7 @@ int __do_adjtimex(struct __kernel_timex *txc, const struct timespec64 *ts,
+ txc->offset = shift_right(time_offset * NTP_INTERVAL_FREQ,
+ NTP_SCALE_SHIFT);
+ if (!(time_status & STA_NANO))
+- txc->offset = (u32)txc->offset / NSEC_PER_USEC;
++ txc->offset = div_s64(txc->offset, NSEC_PER_USEC);
+ }
+
+ result = time_state; /* mostly `TIME_OK' */
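
The ntp change above is a sign bug fix: txc->offset is a signed 64-bit value, and the old (u32) cast turned small negative offsets into huge positive ones before the division. A self-contained demonstration of the difference (plain C stand-ins for the kernel types; div_s64() on 64-bit is just signed division):

    #include <stdint.h>
    #include <stdio.h>

    #define NSEC_PER_USEC 1000L

    int main(void)
    {
            int64_t offset = -5000;  /* -5 us, expressed in nanoseconds */

            /* Old code: the u32 cast turns -5000 into 4294962296. */
            int64_t broken = (uint32_t)offset / NSEC_PER_USEC;

            /* New code: signed 64-bit division, as div_s64() does. */
            int64_t fixed = offset / NSEC_PER_USEC;

            printf("broken=%lld fixed=%lld\n",
                   (long long)broken, (long long)fixed);  /* 4294962 vs -5 */
            return 0;
    }
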
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 7e6f409bf3114a..96933082431fe0 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -195,97 +195,6 @@ static inline u64 tk_clock_read(const struct tk_read_base *tkr)
+ return clock->read(clock);
+ }
+
+-#ifdef CONFIG_DEBUG_TIMEKEEPING
+-#define WARNING_FREQ (HZ*300) /* 5 minute rate-limiting */
+-
+-static void timekeeping_check_update(struct timekeeper *tk, u64 offset)
+-{
+-
+- u64 max_cycles = tk->tkr_mono.clock->max_cycles;
+- const char *name = tk->tkr_mono.clock->name;
+-
+- if (offset > max_cycles) {
+- printk_deferred("WARNING: timekeeping: Cycle offset (%lld) is larger than allowed by the '%s' clock's max_cycles value (%lld): time overflow danger\n",
+- offset, name, max_cycles);
+- printk_deferred(" timekeeping: Your kernel is sick, but tries to cope by capping time updates\n");
+- } else {
+- if (offset > (max_cycles >> 1)) {
+- printk_deferred("INFO: timekeeping: Cycle offset (%lld) is larger than the '%s' clock's 50%% safety margin (%lld)\n",
+- offset, name, max_cycles >> 1);
+- printk_deferred(" timekeeping: Your kernel is still fine, but is feeling a bit nervous\n");
+- }
+- }
+-
+- if (tk->underflow_seen) {
+- if (jiffies - tk->last_warning > WARNING_FREQ) {
+- printk_deferred("WARNING: Underflow in clocksource '%s' observed, time update ignored.\n", name);
+- printk_deferred(" Please report this, consider using a different clocksource, if possible.\n");
+- printk_deferred(" Your kernel is probably still fine.\n");
+- tk->last_warning = jiffies;
+- }
+- tk->underflow_seen = 0;
+- }
+-
+- if (tk->overflow_seen) {
+- if (jiffies - tk->last_warning > WARNING_FREQ) {
+- printk_deferred("WARNING: Overflow in clocksource '%s' observed, time update capped.\n", name);
+- printk_deferred(" Please report this, consider using a different clocksource, if possible.\n");
+- printk_deferred(" Your kernel is probably still fine.\n");
+- tk->last_warning = jiffies;
+- }
+- tk->overflow_seen = 0;
+- }
+-}
+-
+-static inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 cycles);
+-
+-static inline u64 timekeeping_debug_get_ns(const struct tk_read_base *tkr)
+-{
+- struct timekeeper *tk = &tk_core.timekeeper;
+- u64 now, last, mask, max, delta;
+- unsigned int seq;
+-
+- /*
+- * Since we're called holding a seqcount, the data may shift
+- * under us while we're doing the calculation. This can cause
+- * false positives, since we'd note a problem but throw the
+- * results away. So nest another seqcount here to atomically
+- * grab the points we are checking with.
+- */
+- do {
+- seq = read_seqcount_begin(&tk_core.seq);
+- now = tk_clock_read(tkr);
+- last = tkr->cycle_last;
+- mask = tkr->mask;
+- max = tkr->clock->max_cycles;
+- } while (read_seqcount_retry(&tk_core.seq, seq));
+-
+- delta = clocksource_delta(now, last, mask);
+-
+- /*
+- * Try to catch underflows by checking if we are seeing small
+- * mask-relative negative values.
+- */
+- if (unlikely((~delta & mask) < (mask >> 3)))
+- tk->underflow_seen = 1;
+-
+- /* Check for multiplication overflows */
+- if (unlikely(delta > max))
+- tk->overflow_seen = 1;
+-
+- /* timekeeping_cycles_to_ns() handles both under and overflow */
+- return timekeeping_cycles_to_ns(tkr, now);
+-}
+-#else
+-static inline void timekeeping_check_update(struct timekeeper *tk, u64 offset)
+-{
+-}
+-static inline u64 timekeeping_debug_get_ns(const struct tk_read_base *tkr)
+-{
+- BUG();
+-}
+-#endif
+-
+ /**
+ * tk_setup_internals - Set up internals to use clocksource clock.
+ *
+@@ -390,19 +299,11 @@ static inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 c
+ return ((delta * tkr->mult) + tkr->xtime_nsec) >> tkr->shift;
+ }
+
+-static __always_inline u64 __timekeeping_get_ns(const struct tk_read_base *tkr)
++static __always_inline u64 timekeeping_get_ns(const struct tk_read_base *tkr)
+ {
+ return timekeeping_cycles_to_ns(tkr, tk_clock_read(tkr));
+ }
+
+-static inline u64 timekeeping_get_ns(const struct tk_read_base *tkr)
+-{
+- if (IS_ENABLED(CONFIG_DEBUG_TIMEKEEPING))
+- return timekeeping_debug_get_ns(tkr);
+-
+- return __timekeeping_get_ns(tkr);
+-}
+-
+ /**
+ * update_fast_timekeeper - Update the fast and NMI safe monotonic timekeeper.
+ * @tkr: Timekeeping readout base from which we take the update
+@@ -446,7 +347,7 @@ static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf)
+ seq = raw_read_seqcount_latch(&tkf->seq);
+ tkr = tkf->base + (seq & 0x01);
+ now = ktime_to_ns(tkr->base);
+- now += __timekeeping_get_ns(tkr);
++ now += timekeeping_get_ns(tkr);
+ } while (raw_read_seqcount_latch_retry(&tkf->seq, seq));
+
+ return now;
+@@ -562,7 +463,7 @@ static __always_inline u64 __ktime_get_real_fast(struct tk_fast *tkf, u64 *mono)
+ tkr = tkf->base + (seq & 0x01);
+ basem = ktime_to_ns(tkr->base);
+ baser = ktime_to_ns(tkr->base_real);
+- delta = __timekeeping_get_ns(tkr);
++ delta = timekeeping_get_ns(tkr);
+ } while (raw_read_seqcount_latch_retry(&tkf->seq, seq));
+
+ if (mono)
+@@ -793,7 +694,8 @@ static void timekeeping_forward_now(struct timekeeper *tk)
+ u64 cycle_now, delta;
+
+ cycle_now = tk_clock_read(&tk->tkr_mono);
+- delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask);
++ delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask,
++ tk->tkr_mono.clock->max_raw_delta);
+ tk->tkr_mono.cycle_last = cycle_now;
+ tk->tkr_raw.cycle_last = cycle_now;
+
+@@ -2292,15 +2194,13 @@ static bool timekeeping_advance(enum timekeeping_adv_mode mode)
+ goto out;
+
+ offset = clocksource_delta(tk_clock_read(&tk->tkr_mono),
+- tk->tkr_mono.cycle_last, tk->tkr_mono.mask);
++ tk->tkr_mono.cycle_last, tk->tkr_mono.mask,
++ tk->tkr_mono.clock->max_raw_delta);
+
+ /* Check if there's really nothing to do */
+ if (offset < real_tk->cycle_interval && mode == TK_ADV_TICK)
+ goto out;
+
+- /* Do some additional sanity checking */
+- timekeeping_check_update(tk, offset);
+-
+ /*
+ * With NO_HZ we may have to accumulate many cycle_intervals
+ * (think "ticks") worth of time at once. To do this efficiently,
+diff --git a/kernel/time/timekeeping_internal.h b/kernel/time/timekeeping_internal.h
+index 4ca2787d1642e2..feb366b0142887 100644
+--- a/kernel/time/timekeeping_internal.h
++++ b/kernel/time/timekeeping_internal.h
+@@ -15,23 +15,16 @@ extern void tk_debug_account_sleep_time(const struct timespec64 *t);
+ #define tk_debug_account_sleep_time(x)
+ #endif
+
+-#ifdef CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE
+-static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
++static inline u64 clocksource_delta(u64 now, u64 last, u64 mask, u64 max_delta)
+ {
+ u64 ret = (now - last) & mask;
+
+ /*
+- * Prevent time going backwards by checking the MSB of mask in
+- * the result. If set, return 0.
++ * Prevent time going backwards by checking the result against
++ * @max_delta. If greater, return 0.
+ */
+- return ret & ~(mask >> 1) ? 0 : ret;
++ return ret > max_delta ? 0 : ret;
+ }
+-#else
+-static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
+-{
+- return (now - last) & mask;
+-}
+-#endif
+
+ /* Semi public for serialization of non timekeeper VDSO updates. */
+ extern raw_spinlock_t timekeeper_lock;
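
With CLOCKSOURCE_VALIDATE_LAST_CYCLE gone, the single clocksource_delta() above rejects any masked delta above max_delta as negative motion; max_raw_delta is initialized earlier in this patch to mask/2 + mask/4 + mask/8, i.e. 0.875 of the counter width, so long idle sleeps slightly past mask/2 still count as forward motion. A small sketch of the arithmetic (userspace, illustrative values):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t clocksource_delta(uint64_t now, uint64_t last,
                                      uint64_t mask, uint64_t max_delta)
    {
            uint64_t ret = (now - last) & mask;

            /* Deltas beyond the threshold mean time went backwards. */
            return ret > max_delta ? 0 : ret;
    }

    int main(void)
    {
            uint64_t mask = 0xffffffffULL;  /* a 32-bit counter */
            uint64_t max_raw = (mask >> 1) + (mask >> 2) + (mask >> 3);

            /* A sleep slightly past mask/2 is still a valid delta... */
            printf("%llu\n", (unsigned long long)
                   clocksource_delta(0x90000000, 0x0, mask, max_raw));
            /* ...while a small backward step wraps to ~mask and is rejected. */
            printf("%llu\n", (unsigned long long)
                   clocksource_delta(0x0, 0x10, mask, max_raw));
            return 0;
    }
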
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 5807116bcd0bf7..366eb4c4f28e57 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -482,6 +482,8 @@ struct ring_buffer_per_cpu {
+ unsigned long nr_pages;
+ unsigned int current_context;
+ struct list_head *pages;
++ /* pages generation counter, incremented when the list changes */
++ unsigned long cnt;
+ struct buffer_page *head_page; /* read from head */
+ struct buffer_page *tail_page; /* write to tail */
+ struct buffer_page *commit_page; /* committed pages */
+@@ -1475,40 +1477,87 @@ static void rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+ RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK);
+ }
+
++static bool rb_check_links(struct ring_buffer_per_cpu *cpu_buffer,
++ struct list_head *list)
++{
++ if (RB_WARN_ON(cpu_buffer,
++ rb_list_head(rb_list_head(list->next)->prev) != list))
++ return false;
++
++ if (RB_WARN_ON(cpu_buffer,
++ rb_list_head(rb_list_head(list->prev)->next) != list))
++ return false;
++
++ return true;
++}
++
+ /**
+ * rb_check_pages - integrity check of buffer pages
+ * @cpu_buffer: CPU buffer with pages to test
+ *
+ * As a safety measure we check to make sure the data pages have not
+ * been corrupted.
+- *
+- * Callers of this function need to guarantee that the list of pages doesn't get
+- * modified during the check. In particular, if it's possible that the function
+- * is invoked with concurrent readers which can swap in a new reader page then
+- * the caller should take cpu_buffer->reader_lock.
+ */
+ static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+- struct list_head *head = rb_list_head(cpu_buffer->pages);
+- struct list_head *tmp;
++ struct list_head *head, *tmp;
++ unsigned long buffer_cnt;
++ unsigned long flags;
++ int nr_loops = 0;
+
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(head->next)->prev) != head))
++ /*
++ * Walk the linked list underpinning the ring buffer and validate all
++ * its next and prev links.
++ *
++ * The check acquires the reader_lock to avoid concurrent processing
++ * with code that could be modifying the list. However, the lock cannot
++ * be held for the entire duration of the walk, as this would make the
++ * time when interrupts are disabled non-deterministic, dependent on the
++ * ring buffer size. Therefore, the code releases and re-acquires the
++ * lock after checking each page. The ring_buffer_per_cpu.cnt variable
++ * is then used to detect if the list was modified while the lock was
++ * not held, in which case the check needs to be restarted.
++ *
++ * The code attempts to perform the check at most three times before
++ * giving up. This is acceptable because this is only a self-validation
++ * to detect problems early on. In practice, the list modification
++ * operations are fairly spaced, and so this check typically succeeds at
++ * most on the second try.
++ */
++again:
++ if (++nr_loops > 3)
+ return;
+
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(head->prev)->next) != head))
+- return;
++ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++ head = rb_list_head(cpu_buffer->pages);
++ if (!rb_check_links(cpu_buffer, head))
++ goto out_locked;
++ buffer_cnt = cpu_buffer->cnt;
++ tmp = head;
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+- for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
+- return;
++ while (true) {
++ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
+- return;
++ if (buffer_cnt != cpu_buffer->cnt) {
++ /* The list was updated, try again. */
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++ goto again;
++ }
++
++ tmp = rb_list_head(tmp->next);
++ if (tmp == head)
++ /* The iteration circled back, all is done. */
++ goto out_locked;
++
++ if (!rb_check_links(cpu_buffer, tmp))
++ goto out_locked;
++
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ }
++
++out_locked:
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ }
+
+ /*
+@@ -2532,6 +2581,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+
+ /* make sure pages points to a valid page in the ring buffer */
+ cpu_buffer->pages = next_page;
++ cpu_buffer->cnt++;
+
+ /* update head page */
+ if (head_bit)
+@@ -2638,6 +2688,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ * pointer to point to end of list
+ */
+ head_page->prev = last_page;
++ cpu_buffer->cnt++;
+ success = true;
+ break;
+ }
+@@ -2873,12 +2924,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ */
+ synchronize_rcu();
+ for_each_buffer_cpu(buffer, cpu) {
+- unsigned long flags;
+-
+ cpu_buffer = buffer->buffers[cpu];
+- raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ rb_check_pages(cpu_buffer);
+- raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ }
+ atomic_dec(&buffer->record_disabled);
+ }
+@@ -5296,6 +5343,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+ rb_list_head(reader->list.next)->prev = &cpu_buffer->reader_page->list;
+ rb_inc_page(&cpu_buffer->head_page);
+
++ cpu_buffer->cnt++;
+ local_inc(&cpu_buffer->pages_read);
+
+ /* Finally update the reader page to the new head */
+@@ -5835,12 +5883,9 @@ void
+ ring_buffer_read_finish(struct ring_buffer_iter *iter)
+ {
+ struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
+- unsigned long flags;
+
+ /* Use this opportunity to check the integrity of the ring buffer. */
+- raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ rb_check_pages(cpu_buffer);
+- raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+ atomic_dec(&cpu_buffer->resize_disabled);
+ kfree(iter->event);
+@@ -6757,6 +6802,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ /* Install the new pages, remove the head from the list */
+ cpu_buffer->pages = cpu_buffer->new_pages.next;
+ list_del_init(&cpu_buffer->new_pages);
++ cpu_buffer->cnt++;
+
+ cpu_buffer->head_page
+ = list_entry(cpu_buffer->pages, struct buffer_page, list);
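
The rb_check_pages() rework above trades one long critical section for many short ones: the new cpu_buffer->cnt generation counter is bumped at every list modification (the other hunks in this file), and the checker restarts its walk whenever the counter moved while reader_lock was dropped. The core of that pattern, reduced to a userspace sketch (mutex and list are stand-ins):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct node { struct node *next; };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static struct node *head;
    static unsigned long gen;  /* bumped on every list modification */

    static bool check_list(void)
    {
            struct node *pos;
            unsigned long snap;
            int tries = 0;

    again:
            if (++tries > 3)
                    return true;  /* best effort only: give up quietly */

            pthread_mutex_lock(&lock);
            snap = gen;
            pos = head;
            pthread_mutex_unlock(&lock);

            while (pos) {
                    pthread_mutex_lock(&lock);
                    if (snap != gen) {
                            /* Modified while unlocked: restart the walk. */
                            pthread_mutex_unlock(&lock);
                            goto again;
                    }
                    /* ... validate pos's next/prev links here ... */
                    pos = pos->next;
                    pthread_mutex_unlock(&lock);
            }
            return true;
    }
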
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 6a891e00aa7f46..17d2ffde0bb604 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -988,7 +988,8 @@ static inline void trace_access_lock_init(void)
+ #endif
+
+ #ifdef CONFIG_STACKTRACE
+-static void __ftrace_trace_stack(struct trace_buffer *buffer,
++static void __ftrace_trace_stack(struct trace_array *tr,
++ struct trace_buffer *buffer,
+ unsigned int trace_ctx,
+ int skip, struct pt_regs *regs);
+ static inline void ftrace_trace_stack(struct trace_array *tr,
+@@ -997,7 +998,8 @@ static inline void ftrace_trace_stack(struct trace_array *tr,
+ int skip, struct pt_regs *regs);
+
+ #else
+-static inline void __ftrace_trace_stack(struct trace_buffer *buffer,
++static inline void __ftrace_trace_stack(struct trace_array *tr,
++ struct trace_buffer *buffer,
+ unsigned int trace_ctx,
+ int skip, struct pt_regs *regs)
+ {
+@@ -2947,7 +2949,8 @@ struct ftrace_stacks {
+ static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
+ static DEFINE_PER_CPU(int, ftrace_stack_reserve);
+
+-static void __ftrace_trace_stack(struct trace_buffer *buffer,
++static void __ftrace_trace_stack(struct trace_array *tr,
++ struct trace_buffer *buffer,
+ unsigned int trace_ctx,
+ int skip, struct pt_regs *regs)
+ {
+@@ -2994,6 +2997,20 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
+ nr_entries = stack_trace_save(fstack->calls, size, skip);
+ }
+
++#ifdef CONFIG_DYNAMIC_FTRACE
++ /* Mark entry of stack trace as trampoline code */
++ if (tr->ops && tr->ops->trampoline) {
++ unsigned long tramp_start = tr->ops->trampoline;
++ unsigned long tramp_end = tramp_start + tr->ops->trampoline_size;
++ unsigned long *calls = fstack->calls;
++
++ for (int i = 0; i < nr_entries; i++) {
++ if (calls[i] >= tramp_start && calls[i] < tramp_end)
++ calls[i] = FTRACE_TRAMPOLINE_MARKER;
++ }
++ }
++#endif
++
+ event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
+ struct_size(entry, caller, nr_entries),
+ trace_ctx);
+@@ -3024,7 +3041,7 @@ static inline void ftrace_trace_stack(struct trace_array *tr,
+ if (!(tr->trace_flags & TRACE_ITER_STACKTRACE))
+ return;
+
+- __ftrace_trace_stack(buffer, trace_ctx, skip, regs);
++ __ftrace_trace_stack(tr, buffer, trace_ctx, skip, regs);
+ }
+
+ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
+@@ -3033,7 +3050,7 @@ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
+ struct trace_buffer *buffer = tr->array_buffer.buffer;
+
+ if (rcu_is_watching()) {
+- __ftrace_trace_stack(buffer, trace_ctx, skip, NULL);
++ __ftrace_trace_stack(tr, buffer, trace_ctx, skip, NULL);
+ return;
+ }
+
+@@ -3050,7 +3067,7 @@ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
+ return;
+
+ ct_irq_enter_irqson();
+- __ftrace_trace_stack(buffer, trace_ctx, skip, NULL);
++ __ftrace_trace_stack(tr, buffer, trace_ctx, skip, NULL);
+ ct_irq_exit_irqson();
+ }
+
+@@ -3067,8 +3084,8 @@ void trace_dump_stack(int skip)
+ /* Skip 1 to skip this function. */
+ skip++;
+ #endif
+- __ftrace_trace_stack(printk_trace->array_buffer.buffer,
+- tracing_gen_ctx(), skip, NULL);
++ __ftrace_trace_stack(printk_trace, printk_trace->array_buffer.buffer,
++ tracing_gen_ctx(), skip, NULL);
+ }
+ EXPORT_SYMBOL_GPL(trace_dump_stack);
+
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index c866991b9c78bf..30d6675c78cfe1 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -2176,4 +2176,11 @@ static inline int rv_init_interface(void)
+ }
+ #endif
+
++/*
++ * This is used only to distinguish a function
++ * address from trampoline code, so the value
++ * itself has no meaning.
++ */
++#define FTRACE_TRAMPOLINE_MARKER ((unsigned long) INT_MAX)
++
+ #endif /* _LINUX_KERNEL_TRACE_H */
+diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
+index 4702efb00ff21e..4cb2ebc439be68 100644
+--- a/kernel/trace/trace_clock.c
++++ b/kernel/trace/trace_clock.c
+@@ -154,5 +154,5 @@ static atomic64_t trace_counter;
+ */
+ u64 notrace trace_clock_counter(void)
+ {
+- return atomic64_add_return(1, &trace_counter);
++ return atomic64_inc_return(&trace_counter);
+ }
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index ebda68ee9abff9..be8be0c1aaf0f1 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -963,6 +963,11 @@ static int __trace_eprobe_create(int argc, const char *argv[])
+ goto error;
+ }
+ ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
++ if (ret < 0) {
++ trace_probe_unregister_event_call(&ep->tp);
++ mutex_unlock(&event_mutex);
++ goto error;
++ }
+ mutex_unlock(&event_mutex);
+ return ret;
+ parse_error:
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 868f2f912f2809..c14573e5a90337 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -1246,6 +1246,10 @@ static enum print_line_t trace_stack_print(struct trace_iterator *iter,
+ break;
+
+ trace_seq_puts(s, " => ");
++ if ((*p) == FTRACE_TRAMPOLINE_MARKER) {
++ trace_seq_puts(s, "[FTRACE TRAMPOLINE]\n");
++ continue;
++ }
+ seq_print_ip_sym(s, (*p) + delta, flags);
+ trace_seq_putc(s, '\n');
+ }
+diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
+index 785733245eadf5..f9b21bac9d45e6 100644
+--- a/kernel/trace/trace_syscalls.c
++++ b/kernel/trace/trace_syscalls.c
+@@ -299,6 +299,12 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
+ int syscall_nr;
+ int size;
+
++ /*
++ * Syscall probe called with preemption enabled, but the ring
++ * buffer and per-cpu data require preemption to be disabled.
++ */
++ guard(preempt_notrace)();
++
+ syscall_nr = trace_get_syscall_nr(current, regs);
+ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
+ return;
+@@ -338,6 +344,12 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
+ struct trace_event_buffer fbuffer;
+ int syscall_nr;
+
++ /*
++ * Syscall probe called with preemption enabled, but the ring
++ * buffer and per-cpu data require preemption to be disabled.
++ */
++ guard(preempt_notrace)();
++
+ syscall_nr = trace_get_syscall_nr(current, regs);
+ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
+ return;
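
The guard(preempt_notrace)() lines above use the kernel's cleanup.h scope guards: the matching preempt_enable_notrace() runs automatically on every return path, which is why the early returns that follow need no explicit undo. The same shape can be sketched in plain C with the GCC/Clang cleanup attribute (the macro below is a hypothetical stand-in, not the kernel's guard()):

    #include <stdio.h>

    static void scope_exit(int *unused)
    {
            (void)unused;
            puts("preempt_enable_notrace()");  /* runs on every return path */
    }

    #define guard_preempt() \
            int guard_ __attribute__((cleanup(scope_exit))) = \
                    (puts("preempt_disable_notrace()"), 0)

    static void probe(long nr)
    {
            guard_preempt();

            if (nr < 0)
                    return;  /* the guard re-enables preemption here */

            printf("handling syscall %ld\n", nr);
    }                        /* ... and here on the normal path */

    int main(void)
    {
            probe(-1);
            probe(42);
            return 0;
    }
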
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index 3a56e7c8aa4f67..1921ade45be38b 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -845,15 +845,11 @@ int tracing_map_init(struct tracing_map *map)
+ static int cmp_entries_dup(const void *A, const void *B)
+ {
+ const struct tracing_map_sort_entry *a, *b;
+- int ret = 0;
+
+ a = *(const struct tracing_map_sort_entry **)A;
+ b = *(const struct tracing_map_sort_entry **)B;
+
+- if (memcmp(a->key, b->key, a->elt->map->key_size))
+- ret = 1;
+-
+- return ret;
++ return memcmp(a->key, b->key, a->elt->map->key_size);
+ }
+
+ static int cmp_entries_sum(const void *A, const void *B)
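
Returning the memcmp() result directly, as cmp_entries_dup() now does, hands sort() a proper three-way comparator (negative/zero/positive); the old 0-or-1 return was not a valid ordering and could leave equal keys non-adjacent. A quick qsort() illustration of why the three-way result matters (illustrative key size):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define KEY_SIZE 4

    static int cmp_keys(const void *a, const void *b)
    {
            /* Three-way result: qsort() needs <0, 0, >0, not just 0/1. */
            return memcmp(a, b, KEY_SIZE);
    }

    int main(void)
    {
            char keys[][KEY_SIZE] = { "bbb", "aaa", "bbb", "aaa" };

            qsort(keys, 4, KEY_SIZE, cmp_keys);
            for (int i = 0; i < 4; i++)
                    puts(keys[i]);  /* duplicates now end up adjacent */
            return 0;
    }
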
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 7312ae7c3cc57b..3f9c238bb58ea3 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1328,19 +1328,6 @@ config SCHEDSTATS
+
+ endmenu
+
+-config DEBUG_TIMEKEEPING
+- bool "Enable extra timekeeping sanity checking"
+- help
+- This option will enable additional timekeeping sanity checks
+- which may be helpful when diagnosing issues where timekeeping
+- problems are suspected.
+-
+- This may include checks in the timekeeping hotpaths, so this
+- option may have a (very small) performance impact to some
+- workloads.
+-
+- If unsure, say N.
+-
+ config DEBUG_PREEMPT
+ bool "Debug preemptible kernel"
+ depends on DEBUG_KERNEL && PREEMPTION && TRACE_IRQFLAGS_SUPPORT
+diff --git a/lib/stackdepot.c b/lib/stackdepot.c
+index 5ed34cc963fc38..245d5b41669995 100644
+--- a/lib/stackdepot.c
++++ b/lib/stackdepot.c
+@@ -630,7 +630,15 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ prealloc = page_address(page);
+ }
+
+- raw_spin_lock_irqsave(&pool_lock, flags);
++ if (in_nmi()) {
++ /* We can never allocate in NMI context. */
++ WARN_ON_ONCE(can_alloc);
++ /* Best effort; bail if we fail to take the lock. */
++ if (!raw_spin_trylock_irqsave(&pool_lock, flags))
++ goto exit;
++ } else {
++ raw_spin_lock_irqsave(&pool_lock, flags);
++ }
+ printk_deferred_enter();
+
+ /* Try to find again, to avoid concurrently inserting duplicates. */
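
The stack depot hunk above avoids a self-deadlock: an NMI can fire while the interrupted CPU already holds pool_lock, and spinning on it from NMI context would never succeed, so the code downgrades to a trylock and drops the record on contention. A sketch of the branch (userspace stand-ins; in_nmi() is assumed to be provided by the environment):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for the kernel's in_nmi(). */
    static bool in_nmi(void) { return false; }

    static int save_stack(void)
    {
            if (in_nmi()) {
                    /*
                     * Best effort: the interrupted context may already hold
                     * pool_lock on this CPU, and spinning would never
                     * succeed.
                     */
                    if (pthread_mutex_trylock(&pool_lock) != 0)
                            return -1;  /* bail, drop this record */
            } else {
                    pthread_mutex_lock(&pool_lock);
            }

            /* ... insert the stack trace into the pool ... */

            pthread_mutex_unlock(&pool_lock);
            return 0;
    }
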
+diff --git a/lib/stackinit_kunit.c b/lib/stackinit_kunit.c
+index c14c6f8e6308df..c40818ec9c1801 100644
+--- a/lib/stackinit_kunit.c
++++ b/lib/stackinit_kunit.c
+@@ -212,6 +212,7 @@ static noinline void test_ ## name (struct kunit *test) \
+ static noinline DO_NOTHING_TYPE_ ## which(var_type) \
+ do_nothing_ ## name(var_type *ptr) \
+ { \
++ OPTIMIZER_HIDE_VAR(ptr); \
+ /* Will always be true, but compiler doesn't know. */ \
+ if ((unsigned long)ptr > 0x2) \
+ return DO_NOTHING_RETURN_ ## which(ptr); \
+diff --git a/mm/debug.c b/mm/debug.c
+index aa57d3ffd4edf6..95b6ab809c0ee6 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -124,19 +124,22 @@ static void __dump_page(const struct page *page)
+ {
+ struct folio *foliop, folio;
+ struct page precise;
++ unsigned long head;
+ unsigned long pfn = page_to_pfn(page);
+ unsigned long idx, nr_pages = 1;
+ int loops = 5;
+
+ again:
+ memcpy(&precise, page, sizeof(*page));
+- foliop = page_folio(&precise);
+- if (foliop == (struct folio *)&precise) {
++ head = precise.compound_head;
++ if ((head & 1) == 0) {
++ foliop = (struct folio *)&precise;
+ idx = 0;
+ if (!folio_test_large(foliop))
+ goto dump;
+ foliop = (struct folio *)page;
+ } else {
++ foliop = (struct folio *)(head - 1);
+ idx = folio_page_idx(foliop, page);
+ }
+
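
The __dump_page() change above decodes page->compound_head by hand rather than trusting page_folio() on a stack copy: bit 0 set means the field is a tagged pointer to the head (folio) page at head - 1, bit 0 clear means the page is its own head. A minimal sketch of that tagged-pointer encoding (struct page reduced to the one field):

    #include <stdio.h>

    struct page { unsigned long compound_head; };

    /* Tail pages store (head_page_address | 1) in compound_head. */
    static struct page *page_head(struct page *p)
    {
            unsigned long head = p->compound_head;

            if (head & 1)
                    return (struct page *)(head - 1);  /* tail -> head */
            return p;                                  /* already a head */
    }

    int main(void)
    {
            struct page head_pg = { .compound_head = 0 };
            struct page tail_pg = {
                    .compound_head = (unsigned long)&head_pg | 1,
            };

            printf("head of tail: %p (head_pg at %p)\n",
                   (void *)page_head(&tail_pg), (void *)&head_pg);
            printf("head of head: %p\n", (void *)page_head(&head_pg));
            return 0;
    }
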
+diff --git a/mm/gup.c b/mm/gup.c
+index ad0c8922dac3cb..7053f8114e0127 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -52,7 +52,12 @@ static inline void sanity_check_pinned_pages(struct page **pages,
+ */
+ for (; npages; npages--, pages++) {
+ struct page *page = *pages;
+- struct folio *folio = page_folio(page);
++ struct folio *folio;
++
++ if (!page)
++ continue;
++
++ folio = page_folio(page);
+
+ if (is_zero_page(page) ||
+ !folio_test_anon(folio))
+@@ -409,6 +414,10 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
+
+ sanity_check_pinned_pages(pages, npages);
+ for (i = 0; i < npages; i += nr) {
++ if (!pages[i]) {
++ nr = 1;
++ continue;
++ }
+ folio = gup_folio_next(pages, npages, i, &nr);
+ gup_put_folio(folio, nr, FOLL_PIN);
+ }
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index b48c768acc84d2..c7c0083203cb73 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -200,7 +200,7 @@ static inline void fail_non_kasan_kunit_test(void) { }
+
+ #endif /* CONFIG_KUNIT */
+
+-static DEFINE_SPINLOCK(report_lock);
++static DEFINE_RAW_SPINLOCK(report_lock);
+
+ static void start_report(unsigned long *flags, bool sync)
+ {
+@@ -211,7 +211,7 @@ static void start_report(unsigned long *flags, bool sync)
+ lockdep_off();
+ /* Make sure we don't end up in loop. */
+ report_suppress_start();
+- spin_lock_irqsave(&report_lock, *flags);
++ raw_spin_lock_irqsave(&report_lock, *flags);
+ pr_err("==================================================================\n");
+ }
+
+@@ -221,7 +221,7 @@ static void end_report(unsigned long *flags, const void *addr, bool is_write)
+ trace_error_report_end(ERROR_DETECTOR_KASAN,
+ (unsigned long)addr);
+ pr_err("==================================================================\n");
+- spin_unlock_irqrestore(&report_lock, *flags);
++ raw_spin_unlock_irqrestore(&report_lock, *flags);
+ if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+ check_panic_on_warn("KASAN");
+ switch (kasan_arg_fault) {
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 0389ce5cd281e1..095c18b5c430da 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -735,7 +735,7 @@ int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
+ /**
+ * memblock_validate_numa_coverage - check if amount of memory with
+ * no node ID assigned is less than a threshold
+- * @threshold_bytes: maximal number of pages that can have unassigned node
++ * @threshold_bytes: maximal memory size that can have unassigned node
+ * ID (in bytes).
+ *
+ * A buggy firmware may report memory that does not belong to any node.
+@@ -755,7 +755,7 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
+ nr_pages += end_pfn - start_pfn;
+ }
+
+- if ((nr_pages << PAGE_SHIFT) >= threshold_bytes) {
++ if ((nr_pages << PAGE_SHIFT) > threshold_bytes) {
+ mem_size_mb = memblock_phys_mem_size() >> 20;
+ pr_err("NUMA: no nodes coverage for %luMB of %luMB RAM\n",
+ (nr_pages << PAGE_SHIFT) >> 20, mem_size_mb);
+diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
+index c0672e25bcdb20..6fbc78e0e440ce 100644
+--- a/mm/memcontrol-v1.h
++++ b/mm/memcontrol-v1.h
+@@ -38,7 +38,7 @@ void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n);
+ iter = mem_cgroup_iter(NULL, iter, NULL))
+
+ /* Whether legacy memory+swap accounting is active */
+-static bool do_memsw_account(void)
++static inline bool do_memsw_account(void)
+ {
+ return !cgroup_subsys_on_dfl(memory_cgrp_subsys);
+ }
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index b646fab3e45e10..7b908c4cc7eecb 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1080,6 +1080,10 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest,
+
+ mmap_read_lock(mm);
+ vma = find_vma(mm, 0);
++ if (unlikely(!vma)) {
++ mmap_read_unlock(mm);
++ return 0;
++ }
+
+ /*
+ * This does not migrate the range, but isolates all pages that
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 4f6e566d52faa6..7fb4c1e97175f9 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -901,6 +901,7 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ if (get_area) {
+ addr = get_area(file, addr, len, pgoff, flags);
+ } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
++ && !addr /* no hint */
+ && IS_ALIGNED(len, PMD_SIZE)) {
+ /* Ensures that larger anonymous mappings are THP aligned. */
+ addr = thp_get_unmapped_area_vmflags(file, addr, len,
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 3dc6c7a128dd35..99fdb2b5b56862 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -453,8 +453,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
+ struct file_ra_state *ra, unsigned int new_order)
+ {
+ struct address_space *mapping = ractl->mapping;
+- pgoff_t start = readahead_index(ractl);
+- pgoff_t index = start;
++ pgoff_t index = readahead_index(ractl);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
+ pgoff_t mark = index + ra->size - ra->async_size;
+@@ -517,7 +516,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
+ if (!err)
+ return;
+ fallback:
+- do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
++ do_page_cache_ra(ractl, ra->size, ra->async_size);
+ }
+
+ static unsigned long ractl_max_pages(struct readahead_control *ractl,
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 5480b77f4167d7..0161cb4391e1d1 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -4093,7 +4093,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ /* Zero out spare memory. */
+ if (want_init_on_alloc(flags))
+ memset((void *)p + size, 0, old_size - size);
+-
++ kasan_poison_vmalloc(p + size, old_size - size);
++ kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
+ return (void *)p;
+ }
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 6354cdf9c2b372..e6591f487a5119 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1128,9 +1128,9 @@ void hci_conn_del(struct hci_conn *conn)
+
+ hci_conn_unlink(conn);
+
+- cancel_delayed_work_sync(&conn->disc_work);
+- cancel_delayed_work_sync(&conn->auto_accept_work);
+- cancel_delayed_work_sync(&conn->idle_work);
++ disable_delayed_work_sync(&conn->disc_work);
++ disable_delayed_work_sync(&conn->auto_accept_work);
++ disable_delayed_work_sync(&conn->idle_work);
+
+ if (conn->type == ACL_LINK) {
+ /* Unacked frames */
+@@ -2345,13 +2345,9 @@ struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ conn->iso_qos.bcast.big);
+ if (parent && parent != conn) {
+ link = hci_conn_link(parent, conn);
+- if (!link) {
+- hci_conn_drop(conn);
+- return ERR_PTR(-ENOLINK);
+- }
+-
+- /* Link takes the refcount */
+ hci_conn_drop(conn);
++ if (!link)
++ return ERR_PTR(-ENOLINK);
+ }
+
+ return conn;
+@@ -2441,15 +2437,12 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ }
+
+ link = hci_conn_link(le, cis);
++ hci_conn_drop(cis);
+ if (!link) {
+ hci_conn_drop(le);
+- hci_conn_drop(cis);
+ return ERR_PTR(-ENOLINK);
+ }
+
+- /* Link takes the refcount */
+- hci_conn_drop(cis);
+-
+ cis->state = BT_CONNECT;
+
+ hci_le_create_cis_pending(hdev);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 0ac354db817794..72439764186ed2 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3771,18 +3771,22 @@ static void hci_tx_work(struct work_struct *work)
+ /* ACL data packet */
+ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- struct hci_acl_hdr *hdr = (void *) skb->data;
++ struct hci_acl_hdr *hdr;
+ struct hci_conn *conn;
+ __u16 handle, flags;
+
+- skb_pull(skb, HCI_ACL_HDR_SIZE);
++ hdr = skb_pull_data(skb, sizeof(*hdr));
++ if (!hdr) {
++ bt_dev_err(hdev, "ACL packet too small");
++ goto drop;
++ }
+
+ handle = __le16_to_cpu(hdr->handle);
+ flags = hci_flags(handle);
+ handle = hci_handle(handle);
+
+- BT_DBG("%s len %d handle 0x%4.4x flags 0x%4.4x", hdev->name, skb->len,
+- handle, flags);
++ bt_dev_dbg(hdev, "len %d handle 0x%4.4x flags 0x%4.4x", skb->len,
++ handle, flags);
+
+ hdev->stat.acl_rx++;
+
+@@ -3801,6 +3805,7 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ handle);
+ }
+
++drop:
+ kfree_skb(skb);
+ }
+
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 2e4bd3e961ce09..2b5ba8acd1d84a 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3626,6 +3626,13 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
++ /* We skip the WRITE_AUTH_PAYLOAD_TIMEOUT for ATS2851 based controllers
++ * to avoid unexpected SMP command errors when pairing.
++ */
++ if (test_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT,
++ &hdev->quirks))
++ goto notify;
++
+ /* Set the default Authenticated Payload Timeout after
+ * an LE Link is established. As per Core Spec v5.0, Vol 2, Part B
+ * Section 3.3, the HCI command WRITE_AUTH_PAYLOAD_TIMEOUT should be
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index c0203a2b510756..c86f4e42e69cab 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4842,6 +4842,13 @@ static const struct {
+ HCI_QUIRK_BROKEN(SET_RPA_TIMEOUT,
+ "HCI LE Set Random Private Address Timeout command is "
+ "advertised, but not supported."),
++ HCI_QUIRK_BROKEN(EXT_CREATE_CONN,
++ "HCI LE Extended Create Connection command is "
++ "advertised, but not supported."),
++ HCI_QUIRK_BROKEN(WRITE_AUTH_PAYLOAD_TIMEOUT,
++ "HCI WRITE AUTH PAYLOAD TIMEOUT command leads "
++ "to unexpected SMP errors when pairing "
++ "and will not be used."),
+ HCI_QUIRK_BROKEN(LE_CODED,
+ "HCI LE Coded PHY feature bit is set, "
+ "but its usage is not supported.")
+@@ -6477,7 +6484,7 @@ static int hci_le_create_conn_sync(struct hci_dev *hdev, void *data)
+ &own_addr_type);
+ if (err)
+ goto done;
+-
++ /* Send command LE Extended Create Connection if supported */
+ if (use_ext_conn(hdev)) {
+ err = hci_le_ext_create_conn_sync(hdev, conn, own_addr_type);
+ goto done;
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index ba437c6f6ee591..18e89e764f3b42 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1886,6 +1886,7 @@ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
+ chan = l2cap_chan_create();
+ if (!chan) {
+ sk_free(sk);
++ sock->sk = NULL;
+ return NULL;
+ }
+
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 8af1bf518321fd..40766f8119ed9c 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -274,13 +274,13 @@ static struct sock *rfcomm_sock_alloc(struct net *net, struct socket *sock,
+ struct rfcomm_dlc *d;
+ struct sock *sk;
+
+- sk = bt_sock_alloc(net, sock, &rfcomm_proto, proto, prio, kern);
+- if (!sk)
++ d = rfcomm_dlc_alloc(prio);
++ if (!d)
+ return NULL;
+
+- d = rfcomm_dlc_alloc(prio);
+- if (!d) {
+- sk_free(sk);
++ sk = bt_sock_alloc(net, sock, &rfcomm_proto, proto, prio, kern);
++ if (!sk) {
++ rfcomm_dlc_free(d);
+ return NULL;
+ }
+
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 707576eeeb5823..01f3fbb3b67dc6 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -171,6 +171,7 @@ static int can_create(struct net *net, struct socket *sock, int protocol,
+ /* release sk on errors */
+ sock_orphan(sk);
+ sock_put(sk);
++ sock->sk = NULL;
+ }
+
+ errout:
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 319f47df33300c..95f7a7e65a73fa 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1505,7 +1505,7 @@ static struct j1939_session *j1939_session_new(struct j1939_priv *priv,
+ session->state = J1939_SESSION_NEW;
+
+ skb_queue_head_init(&session->skb_queue);
+- skb_queue_tail(&session->skb_queue, skb);
++ skb_queue_tail(&session->skb_queue, skb_get(skb));
+
+ skcb = j1939_skb_to_cb(skb);
+ memcpy(&session->skcb, skcb, sizeof(session->skcb));
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index ab150641142aa1..1b4d39e3808427 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -45,9 +45,14 @@ static unsigned int default_operstate(const struct net_device *dev)
+ int iflink = dev_get_iflink(dev);
+ struct net_device *peer;
+
+- if (iflink == dev->ifindex)
++ /* If called from netdev_run_todo()/linkwatch_sync_dev(),
++ * dev_net(dev) may already have been freed, and RTNL is not held.
++ */
++ if (dev->reg_state == NETREG_UNREGISTERED ||
++ iflink == dev->ifindex)
+ return IF_OPER_DOWN;
+
++ ASSERT_RTNL();
+ peer = __dev_get_by_index(dev_net(dev), iflink);
+ if (!peer)
+ return IF_OPER_DOWN;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 77b819cd995b25..cc58315a40a79c 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -2876,6 +2876,7 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
+ err = neigh_valid_dump_req(nlh, cb->strict_check, &filter, cb->extack);
+ if (err < 0 && cb->strict_check)
+ return err;
++ err = 0;
+
+ s_t = cb->args[0];
+
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index aa49b92e9194ba..45fb60bc480395 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -626,7 +626,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ goto out;
+ }
+
+- if (!ndev->npinfo) {
++ if (!rcu_access_pointer(ndev->npinfo)) {
+ npinfo = kmalloc(sizeof(*npinfo), GFP_KERNEL);
+ if (!npinfo) {
+ err = -ENOMEM;
+diff --git a/net/dccp/feat.c b/net/dccp/feat.c
+index 54086bb05c42cd..f7554dcdaaba93 100644
+--- a/net/dccp/feat.c
++++ b/net/dccp/feat.c
+@@ -1166,8 +1166,12 @@ static u8 dccp_feat_change_recv(struct list_head *fn, u8 is_mandatory, u8 opt,
+ goto not_valid_or_not_known;
+ }
+
+- return dccp_feat_push_confirm(fn, feat, local, &fval);
++ if (dccp_feat_push_confirm(fn, feat, local, &fval)) {
++ kfree(fval.sp.vec);
++ return DCCP_RESET_CODE_TOO_BUSY;
++ }
+
++ return 0;
+ } else if (entry->state == FEAT_UNSTABLE) { /* 6.6.2 */
+ return 0;
+ }
+diff --git a/net/ethtool/bitset.c b/net/ethtool/bitset.c
+index 0515d6604b3b9d..f0883357d12e52 100644
+--- a/net/ethtool/bitset.c
++++ b/net/ethtool/bitset.c
+@@ -425,12 +425,32 @@ static int ethnl_parse_bit(unsigned int *index, bool *val, unsigned int nbits,
+ return 0;
+ }
+
++/**
++ * ethnl_bitmap32_equal() - Compare two bitmaps
++ * @map1: first bitmap
++ * @map2: second bitmap
++ * @nbits: bit size to compare
++ *
++ * Return: true if first @nbits are equal, false if not
++ */
++static bool ethnl_bitmap32_equal(const u32 *map1, const u32 *map2,
++ unsigned int nbits)
++{
++ if (memcmp(map1, map2, nbits / 32 * sizeof(u32)))
++ return false;
++ if (nbits % 32 == 0)
++ return true;
++ return !((map1[nbits / 32] ^ map2[nbits / 32]) &
++ ethnl_lower_bits(nbits % 32));
++}
++
+ static int
+ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
+ const struct nlattr *attr, struct nlattr **tb,
+ ethnl_string_array_t names,
+ struct netlink_ext_ack *extack, bool *mod)
+ {
++ u32 *saved_bitmap = NULL;
+ struct nlattr *bit_attr;
+ bool no_mask;
+ int rem;
+@@ -448,8 +468,20 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
+ }
+
+ no_mask = tb[ETHTOOL_A_BITSET_NOMASK];
+- if (no_mask)
+- ethnl_bitmap32_clear(bitmap, 0, nbits, mod);
++ if (no_mask) {
++ unsigned int nwords = DIV_ROUND_UP(nbits, 32);
++ unsigned int nbytes = nwords * sizeof(u32);
++ bool dummy;
++
++ /* The bitmap size is only the size of the map part without
++ * its mask part.
++ */
++ saved_bitmap = kcalloc(nwords, sizeof(u32), GFP_KERNEL);
++ if (!saved_bitmap)
++ return -ENOMEM;
++ memcpy(saved_bitmap, bitmap, nbytes);
++ ethnl_bitmap32_clear(bitmap, 0, nbits, &dummy);
++ }
+
+ nla_for_each_nested(bit_attr, tb[ETHTOOL_A_BITSET_BITS], rem) {
+ bool old_val, new_val;
+@@ -458,22 +490,30 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
+ if (nla_type(bit_attr) != ETHTOOL_A_BITSET_BITS_BIT) {
+ NL_SET_ERR_MSG_ATTR(extack, bit_attr,
+ "only ETHTOOL_A_BITSET_BITS_BIT allowed in ETHTOOL_A_BITSET_BITS");
++ kfree(saved_bitmap);
+ return -EINVAL;
+ }
+ ret = ethnl_parse_bit(&idx, &new_val, nbits, bit_attr, no_mask,
+ names, extack);
+- if (ret < 0)
++ if (ret < 0) {
++ kfree(saved_bitmap);
+ return ret;
++ }
+ old_val = bitmap[idx / 32] & ((u32)1 << (idx % 32));
+ if (new_val != old_val) {
+ if (new_val)
+ bitmap[idx / 32] |= ((u32)1 << (idx % 32));
+ else
+ bitmap[idx / 32] &= ~((u32)1 << (idx % 32));
+- *mod = true;
++ if (!no_mask)
++ *mod = true;
+ }
+ }
+
++ if (no_mask && !ethnl_bitmap32_equal(saved_bitmap, bitmap, nbits))
++ *mod = true;
++
++ kfree(saved_bitmap);
+ return 0;
+ }
+
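
ethnl_bitmap32_equal() above compares whole 32-bit words with memcmp() and then masks the trailing partial word, so bits beyond nbits can never cause a false mismatch. The helper in isolation, with a local stand-in for ethnl_lower_bits() (assumed semantics: lowest n bits set):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Lowest n bits set; mirrors what ethnl_lower_bits() provides. */
    static uint32_t lower_bits(unsigned int n)
    {
            return n ? (uint32_t)-1 >> (32 - n) : 0;
    }

    static bool bitmap32_equal(const uint32_t *map1, const uint32_t *map2,
                               unsigned int nbits)
    {
            /* Full 32-bit words first. */
            if (memcmp(map1, map2, nbits / 32 * sizeof(uint32_t)))
                    return false;
            if (nbits % 32 == 0)
                    return true;
            /* Compare only the valid low bits of the final partial word. */
            return !((map1[nbits / 32] ^ map2[nbits / 32]) &
                     lower_bits(nbits % 32));
    }

    int main(void)
    {
            uint32_t a[2] = { 0xffffffff, 0x0000000f };
            uint32_t b[2] = { 0xffffffff, 0xffffff0f };  /* differs past bit 36 */

            /* Only 36 bits are significant, so the maps compare equal. */
            printf("%d\n", bitmap32_equal(a, b, 36));  /* prints 1 */
            return 0;
    }
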
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index f630d6645636dd..44048d7538ddc3 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -246,20 +246,22 @@ static const struct header_ops hsr_header_ops = {
+ .parse = eth_header_parse,
+ };
+
+-static struct sk_buff *hsr_init_skb(struct hsr_port *master)
++static struct sk_buff *hsr_init_skb(struct hsr_port *master, int extra)
+ {
+ struct hsr_priv *hsr = master->hsr;
+ struct sk_buff *skb;
+ int hlen, tlen;
++ int len;
+
+ hlen = LL_RESERVED_SPACE(master->dev);
+ tlen = master->dev->needed_tailroom;
++ len = sizeof(struct hsr_sup_tag) + sizeof(struct hsr_sup_payload);
+ /* skb size is same for PRP/HSR frames, only difference
+ * being, for PRP it is a trailer and for HSR it is a
+- * header
++ * header.
++ * RedBox might use @extra more bytes.
+ */
+- skb = dev_alloc_skb(sizeof(struct hsr_sup_tag) +
+- sizeof(struct hsr_sup_payload) + hlen + tlen);
++ skb = dev_alloc_skb(len + extra + hlen + tlen);
+
+ if (!skb)
+ return skb;
+@@ -295,6 +297,7 @@ static void send_hsr_supervision_frame(struct hsr_port *port,
+ struct hsr_sup_tlv *hsr_stlv;
+ struct hsr_sup_tag *hsr_stag;
+ struct sk_buff *skb;
++ int extra = 0;
+
+ *interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
+ if (hsr->announce_count < 3 && hsr->prot_version == 0) {
+@@ -303,7 +306,11 @@ static void send_hsr_supervision_frame(struct hsr_port *port,
+ hsr->announce_count++;
+ }
+
+- skb = hsr_init_skb(port);
++ if (hsr->redbox)
++ extra = sizeof(struct hsr_sup_tlv) +
++ sizeof(struct hsr_sup_payload);
++
++ skb = hsr_init_skb(port, extra);
+ if (!skb) {
+ netdev_warn_once(port->dev, "HSR: Could not send supervision frame\n");
+ return;
+@@ -362,7 +369,7 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ struct hsr_sup_tag *hsr_stag;
+ struct sk_buff *skb;
+
+- skb = hsr_init_skb(master);
++ skb = hsr_init_skb(master, 0);
+ if (!skb) {
+ netdev_warn_once(master->dev, "PRP: Could not send supervision frame\n");
+ return;
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index b38060246e62e8..40c5fbbd155d66 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -688,6 +688,8 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ frame->is_vlan = true;
+
+ if (frame->is_vlan) {
++ if (skb->mac_len < offsetofend(struct hsr_vlan_ethhdr, vlanhdr))
++ return -EINVAL;
+ vlan_hdr = (struct hsr_vlan_ethhdr *)ethhdr;
+ proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
+ /* FIXME: */
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index 990a83455dcfb5..18d267921bb531 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -1043,19 +1043,21 @@ static int ieee802154_create(struct net *net, struct socket *sock,
+
+ if (sk->sk_prot->hash) {
+ rc = sk->sk_prot->hash(sk);
+- if (rc) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (rc)
++ goto out_sk_release;
+ }
+
+ if (sk->sk_prot->init) {
+ rc = sk->sk_prot->init(sk);
+ if (rc)
+- sk_common_release(sk);
++ goto out_sk_release;
+ }
+ out:
+ return rc;
++out_sk_release:
++ sk_common_release(sk);
++ sock->sk = NULL;
++ goto out;
+ }
+
+ static const struct net_proto_family ieee802154_family_ops = {
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index b24d74616637a0..8095e82de8083d 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -376,32 +376,30 @@ static int inet_create(struct net *net, struct socket *sock, int protocol,
+ inet->inet_sport = htons(inet->inet_num);
+ /* Add to protocol hash chains. */
+ err = sk->sk_prot->hash(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+
+ if (sk->sk_prot->init) {
+ err = sk->sk_prot->init(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+
+ if (!kern) {
+ err = BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+ out:
+ return err;
+ out_rcu_unlock:
+ rcu_read_unlock();
+ goto out;
++out_sk_release:
++ sk_common_release(sk);
++ sock->sk = NULL;
++ goto out;
+ }
+
+
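
The inet_create() rework above collapses three copies of the release-and-fail sequence into one out_sk_release label; clearing sock->sk is the important part, since sk_common_release() frees the sock and the caller must not be left holding a dangling pointer. The control-flow shape, reduced to a sketch with hypothetical names:

    #include <stdlib.h>

    struct sock { int dummy; };
    struct socket { struct sock *sk; };

    static int hash_sk(struct sock *sk) { (void)sk; return 0; }
    static int init_sk(struct sock *sk) { (void)sk; return 0; }
    static void release_sk(struct sock *sk) { free(sk); }

    static int sock_create(struct socket *sock)
    {
            struct sock *sk = calloc(1, sizeof(*sk));
            int err = sk ? 0 : -1;

            if (!sk)
                    goto out;
            sock->sk = sk;

            err = hash_sk(sk);
            if (err)
                    goto out_sk_release;
            err = init_sk(sk);
            if (err)
                    goto out_sk_release;
    out:
            return err;
    out_sk_release:
            release_sk(sk);
            sock->sk = NULL;  /* don't leave a dangling pointer behind */
            goto out;
    }
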
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index e1384e7331d82f..c3ad41573b33ea 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -519,6 +519,9 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ if (!IS_ERR(dst)) {
+ if (rt != rt2)
+ return rt;
++ if (inet_addr_type_dev_table(net, route_lookup_dev,
++ fl4->daddr) == RTN_LOCAL)
++ return rt;
+ } else if (PTR_ERR(dst) == -EPERM) {
+ rt = NULL;
+ } else {
+diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c
+index db6516092daf5b..bbb8d5f0eae7d3 100644
+--- a/net/ipv4/tcp_ao.c
++++ b/net/ipv4/tcp_ao.c
+@@ -109,12 +109,13 @@ bool tcp_ao_ignore_icmp(const struct sock *sk, int family, int type, int code)
+ * it's known that the keys in ao_info are matching peer's
+ * family/address/VRF/etc.
+ */
+-struct tcp_ao_key *tcp_ao_established_key(struct tcp_ao_info *ao,
++struct tcp_ao_key *tcp_ao_established_key(const struct sock *sk,
++ struct tcp_ao_info *ao,
+ int sndid, int rcvid)
+ {
+ struct tcp_ao_key *key;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node) {
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk)) {
+ if ((sndid >= 0 && key->sndid != sndid) ||
+ (rcvid >= 0 && key->rcvid != rcvid))
+ continue;
+@@ -205,7 +206,7 @@ static struct tcp_ao_key *__tcp_ao_do_lookup(const struct sock *sk, int l3index,
+ if (!ao)
+ return NULL;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node) {
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk)) {
+ u8 prefixlen = min(prefix, key->prefixlen);
+
+ if (!tcp_ao_key_cmp(key, l3index, addr, prefixlen,
+@@ -793,7 +794,7 @@ int tcp_ao_prepare_reset(const struct sock *sk, struct sk_buff *skb,
+ if (!ao_info)
+ return -ENOENT;
+
+- *key = tcp_ao_established_key(ao_info, aoh->rnext_keyid, -1);
++ *key = tcp_ao_established_key(sk, ao_info, aoh->rnext_keyid, -1);
+ if (!*key)
+ return -ENOENT;
+ *traffic_key = snd_other_key(*key);
+@@ -979,7 +980,7 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb,
+ */
+ key = READ_ONCE(info->rnext_key);
+ if (key->rcvid != aoh->keyid) {
+- key = tcp_ao_established_key(info, -1, aoh->keyid);
++ key = tcp_ao_established_key(sk, info, -1, aoh->keyid);
+ if (!key)
+ goto key_not_found;
+ }
+@@ -1003,7 +1004,7 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb,
+ aoh->rnext_keyid,
+ tcp_ao_hdr_maclen(aoh));
+ /* If the key is not found we do nothing. */
+- key = tcp_ao_established_key(info, aoh->rnext_keyid, -1);
++ key = tcp_ao_established_key(sk, info, aoh->rnext_keyid, -1);
+ if (key)
+ /* pairs with tcp_ao_del_cmd */
+ WRITE_ONCE(info->current_key, key);
+@@ -1163,7 +1164,7 @@ void tcp_ao_established(struct sock *sk)
+ if (!ao)
+ return;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node)
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+ }
+
+@@ -1180,7 +1181,7 @@ void tcp_ao_finish_connect(struct sock *sk, struct sk_buff *skb)
+ WRITE_ONCE(ao->risn, tcp_hdr(skb)->seq);
+ ao->rcv_sne = 0;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node)
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+ }
+
+@@ -1256,14 +1257,14 @@ int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
+ key_head = rcu_dereference(hlist_first_rcu(&new_ao->head));
+ first_key = hlist_entry_safe(key_head, struct tcp_ao_key, node);
+
+- key = tcp_ao_established_key(new_ao, tcp_rsk(req)->ao_keyid, -1);
++ key = tcp_ao_established_key(req_to_sk(req), new_ao, tcp_rsk(req)->ao_keyid, -1);
+ if (key)
+ new_ao->current_key = key;
+ else
+ new_ao->current_key = first_key;
+
+ /* set rnext_key */
+- key = tcp_ao_established_key(new_ao, -1, tcp_rsk(req)->ao_rcv_next);
++ key = tcp_ao_established_key(req_to_sk(req), new_ao, -1, tcp_rsk(req)->ao_rcv_next);
+ if (key)
+ new_ao->rnext_key = key;
+ else
+@@ -1857,12 +1858,12 @@ static int tcp_ao_del_cmd(struct sock *sk, unsigned short int family,
+ * if there's any.
+ */
+ if (cmd.set_current) {
+- new_current = tcp_ao_established_key(ao_info, cmd.current_key, -1);
++ new_current = tcp_ao_established_key(sk, ao_info, cmd.current_key, -1);
+ if (!new_current)
+ return -ENOENT;
+ }
+ if (cmd.set_rnext) {
+- new_rnext = tcp_ao_established_key(ao_info, -1, cmd.rnext);
++ new_rnext = tcp_ao_established_key(sk, ao_info, -1, cmd.rnext);
+ if (!new_rnext)
+ return -ENOENT;
+ }
+@@ -1902,7 +1903,8 @@ static int tcp_ao_del_cmd(struct sock *sk, unsigned short int family,
+ * "It is presumed that an MKT affecting a particular
+ * connection cannot be destroyed during an active connection"
+ */
+- hlist_for_each_entry_rcu(key, &ao_info->head, node) {
++ hlist_for_each_entry_rcu(key, &ao_info->head, node,
++ lockdep_sock_is_held(sk)) {
+ if (cmd.sndid != key->sndid ||
+ cmd.rcvid != key->rcvid)
+ continue;
+@@ -2000,14 +2002,14 @@ static int tcp_ao_info_cmd(struct sock *sk, unsigned short int family,
+ * if there's any.
+ */
+ if (cmd.set_current) {
+- new_current = tcp_ao_established_key(ao_info, cmd.current_key, -1);
++ new_current = tcp_ao_established_key(sk, ao_info, cmd.current_key, -1);
+ if (!new_current) {
+ err = -ENOENT;
+ goto out;
+ }
+ }
+ if (cmd.set_rnext) {
+- new_rnext = tcp_ao_established_key(ao_info, -1, cmd.rnext);
++ new_rnext = tcp_ao_established_key(sk, ao_info, -1, cmd.rnext);
+ if (!new_rnext) {
+ err = -ENOENT;
+ goto out;
+@@ -2101,7 +2103,8 @@ int tcp_v4_parse_ao(struct sock *sk, int cmd, sockptr_t optval, int optlen)
+ * The layout of the fields in the user and kernel structures is expected to
+ * be the same (including in the 32bit vs 64bit case).
+ */
+-static int tcp_ao_copy_mkts_to_user(struct tcp_ao_info *ao_info,
++static int tcp_ao_copy_mkts_to_user(const struct sock *sk,
++ struct tcp_ao_info *ao_info,
+ sockptr_t optval, sockptr_t optlen)
+ {
+ struct tcp_ao_getsockopt opt_in, opt_out;
+@@ -2229,7 +2232,8 @@ static int tcp_ao_copy_mkts_to_user(struct tcp_ao_info *ao_info,
+ /* May change in RX, while we're dumping, pre-fetch it */
+ current_key = READ_ONCE(ao_info->current_key);
+
+- hlist_for_each_entry_rcu(key, &ao_info->head, node) {
++ hlist_for_each_entry_rcu(key, &ao_info->head, node,
++ lockdep_sock_is_held(sk)) {
+ if (opt_in.get_all)
+ goto match;
+
+@@ -2309,7 +2313,7 @@ int tcp_ao_get_mkts(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+ if (!ao_info)
+ return -ENOENT;
+
+- return tcp_ao_copy_mkts_to_user(ao_info, optval, optlen);
++ return tcp_ao_copy_mkts_to_user(sk, ao_info, optval, optlen);
+ }
+
+ int tcp_ao_get_sock_info(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+@@ -2396,7 +2400,7 @@ int tcp_ao_set_repair(struct sock *sk, sockptr_t optval, unsigned int optlen)
+ WRITE_ONCE(ao->snd_sne, cmd.snd_sne);
+ WRITE_ONCE(ao->rcv_sne, cmd.rcv_sne);
+
+- hlist_for_each_entry_rcu(key, &ao->head, node)
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+
+ return 0;
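
Note: every hlist_for_each_entry_rcu() in this file gains
lockdep_sock_is_held(sk) as the optional fourth argument. These walks
are protected by the socket lock rather than by rcu_read_lock(), and
with CONFIG_PROVE_RCU_LIST the unannotated form triggers false
"suspicious RCU usage" warnings. The idiom in isolation, with
illustrative types:

    struct item {
            struct hlist_node node;
            int id;
    };

    /* the list is written under *my_lock; readers hold it or RCU */
    static struct item *find_item(struct hlist_head *head,
                                  spinlock_t *my_lock, int id)
    {
            struct item *it;

            hlist_for_each_entry_rcu(it, head, node,
                                     lockdep_is_held(my_lock)) {
                    if (it->id == id)
                            return it;
            }
            return NULL;
    }
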
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 370993c03d3136..99cef92e6290cf 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -441,7 +441,6 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ cork = true;
+ psock->cork = NULL;
+ }
+- sk_msg_return(sk, msg, tosend);
+ release_sock(sk);
+
+ origsize = msg->sg.size;
+@@ -453,8 +452,9 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ sock_put(sk_redir);
+
+ lock_sock(sk);
++ sk_mem_uncharge(sk, sent);
+ if (unlikely(ret < 0)) {
+- int free = sk_msg_free_nocharge(sk, msg);
++ int free = sk_msg_free(sk, msg);
+
+ if (!cork)
+ *copied -= free;
+@@ -468,7 +468,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ break;
+ case __SK_DROP:
+ default:
+- sk_msg_free_partial(sk, msg, tosend);
++ sk_msg_free(sk, msg);
+ sk_msg_apply_bytes(psock, tosend);
+ *copied -= (tosend + delta);
+ return -EACCES;
+@@ -484,11 +484,8 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ }
+ if (msg &&
+ msg->sg.data[msg->sg.start].page_link &&
+- msg->sg.data[msg->sg.start].length) {
+- if (eval == __SK_REDIRECT)
+- sk_mem_charge(sk, tosend - sent);
++ msg->sg.data[msg->sg.start].length)
+ goto more_data;
+- }
+ }
+ return ret;
+ }
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 5afe5e57c89b5c..a7cd433a54c9ae 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1053,7 +1053,8 @@ static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ }
+
+ if (aoh)
+- key.ao_key = tcp_ao_established_key(ao_info, aoh->rnext_keyid, -1);
++ key.ao_key = tcp_ao_established_key(sk, ao_info,
++ aoh->rnext_keyid, -1);
+ }
+ }
+ if (key.ao_key) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 2849b273b13107..ff85242720a0a9 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1516,7 +1516,6 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ struct sk_buff_head *list = &sk->sk_receive_queue;
+ int rmem, err = -ENOMEM;
+ spinlock_t *busy = NULL;
+- bool becomes_readable;
+ int size, rcvbuf;
+
+ /* Immediately drop when the receive queue is full.
+@@ -1557,19 +1556,12 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ */
+ sock_skb_set_dropcount(sk, skb);
+
+- becomes_readable = skb_queue_empty(list);
+ __skb_queue_tail(list, skb);
+ spin_unlock(&list->lock);
+
+- if (!sock_flag(sk, SOCK_DEAD)) {
+- if (becomes_readable ||
+- sk->sk_data_ready != sock_def_readable ||
+- READ_ONCE(sk->sk_peek_off) >= 0)
+- INDIRECT_CALL_1(sk->sk_data_ready,
+- sock_def_readable, sk);
+- else
+- sk_wake_async_rcu(sk, SOCK_WAKE_WAITD, POLL_IN);
+- }
++ if (!sock_flag(sk, SOCK_DEAD))
++ INDIRECT_CALL_1(sk->sk_data_ready, sock_def_readable, sk);
++
+ busylock_release(busy);
+ return 0;
+
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 01115e1a34cb66..f7c17388ff6aaf 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4821,7 +4821,7 @@ inet6_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
+ ifm->ifa_prefixlen, extack);
+ }
+
+-static int modify_prefix_route(struct inet6_ifaddr *ifp,
++static int modify_prefix_route(struct net *net, struct inet6_ifaddr *ifp,
+ unsigned long expires, u32 flags,
+ bool modify_peer)
+ {
+@@ -4845,7 +4845,9 @@ static int modify_prefix_route(struct inet6_ifaddr *ifp,
+ ifp->prefix_len,
+ ifp->rt_priority, ifp->idev->dev,
+ expires, flags, GFP_KERNEL);
+- } else {
++ return 0;
++ }
++ if (f6i != net->ipv6.fib6_null_entry) {
+ table = f6i->fib6_table;
+ spin_lock_bh(&table->tb6_lock);
+
+@@ -4858,9 +4860,8 @@ static int modify_prefix_route(struct inet6_ifaddr *ifp,
+ }
+
+ spin_unlock_bh(&table->tb6_lock);
+-
+- fib6_info_release(f6i);
+ }
++ fib6_info_release(f6i);
+
+ return 0;
+ }
+@@ -4939,7 +4940,7 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ int rc = -ENOENT;
+
+ if (had_prefixroute)
+- rc = modify_prefix_route(ifp, expires, flags, false);
++ rc = modify_prefix_route(net, ifp, expires, flags, false);
+
+ /* prefix route could have been deleted; if so restore it */
+ if (rc == -ENOENT) {
+@@ -4949,7 +4950,7 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ }
+
+ if (had_prefixroute && !ipv6_addr_any(&ifp->peer_addr))
+- rc = modify_prefix_route(ifp, expires, flags, true);
++ rc = modify_prefix_route(net, ifp, expires, flags, true);
+
+ if (rc == -ENOENT && !ipv6_addr_any(&ifp->peer_addr)) {
+ addrconf_prefix_route(&ifp->peer_addr, ifp->prefix_len,
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index ba69b86f1c7d5e..f60ec8b0f8ea40 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -252,31 +252,29 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
+ */
+ inet->inet_sport = htons(inet->inet_num);
+ err = sk->sk_prot->hash(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+ if (sk->sk_prot->init) {
+ err = sk->sk_prot->init(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+
+ if (!kern) {
+ err = BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+ out:
+ return err;
+ out_rcu_unlock:
+ rcu_read_unlock();
+ goto out;
++out_sk_release:
++ sk_common_release(sk);
++ sock->sk = NULL;
++ goto out;
+ }
+
+ static int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index cff4fbbc66efb2..8ebfed5d63232e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2780,10 +2780,10 @@ static void ip6_negative_advice(struct sock *sk,
+ if (rt->rt6i_flags & RTF_CACHE) {
+ rcu_read_lock();
+ if (rt6_check_expired(rt)) {
+- /* counteract the dst_release() in sk_dst_reset() */
+- dst_hold(dst);
++ /* rt/dst can not be destroyed yet,
++ * because of rcu_read_lock()
++ */
+ sk_dst_reset(sk);
+-
+ rt6_remove_exception_rt(rt);
+ }
+ rcu_read_unlock();
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index c9de5ef8f26750..59173f58ce9923 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1169,8 +1169,8 @@ static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
+ goto out;
+ if (aoh)
+- key.ao_key = tcp_ao_established_key(ao_info,
+- aoh->rnext_keyid, -1);
++ key.ao_key = tcp_ao_established_key(sk, ao_info,
++ aoh->rnext_keyid, -1);
+ }
+ }
+ if (key.ao_key) {
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index 2d3efb405437d8..02205f7994d752 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -47,7 +47,7 @@ static int subflow_get_info(struct sock *sk, struct sk_buff *skb)
+ flags |= MPTCP_SUBFLOW_FLAG_BKUP_REM;
+ if (sf->request_bkup)
+ flags |= MPTCP_SUBFLOW_FLAG_BKUP_LOC;
+- if (sf->fully_established)
++ if (READ_ONCE(sf->fully_established))
+ flags |= MPTCP_SUBFLOW_FLAG_FULLY_ESTABLISHED;
+ if (sf->conn_finished)
+ flags |= MPTCP_SUBFLOW_FLAG_CONNECTED;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 370c3836b7712f..1603b3702e2207 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -461,7 +461,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ return false;
+
+ /* MPC/MPJ needed only on 3rd ack packet, DATA_FIN and TCP shutdown take precedence */
+- if (subflow->fully_established || snd_data_fin_enable ||
++ if (READ_ONCE(subflow->fully_established) || snd_data_fin_enable ||
+ subflow->snd_isn != TCP_SKB_CB(skb)->seq ||
+ sk->sk_state != TCP_ESTABLISHED)
+ return false;
+@@ -930,7 +930,7 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ /* here we can process OoO, in-window pkts, only in-sequence 4th ack
+ * will make the subflow fully established
+ */
+- if (likely(subflow->fully_established)) {
++ if (likely(READ_ONCE(subflow->fully_established))) {
+ /* on passive sockets, check for 3rd ack retransmission
+ * note that msk is always set by subflow_syn_recv_sock()
+ * for mp_join subflows
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 48d480982b7870..8a8e8fee337f5e 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2728,8 +2728,8 @@ void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout)
+ if (!fail_tout && !inet_csk(sk)->icsk_mtup.probe_timestamp)
+ return;
+
+- close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies +
+- mptcp_close_timeout(sk);
++ close_timeout = (unsigned long)inet_csk(sk)->icsk_mtup.probe_timestamp -
++ tcp_jiffies32 + jiffies + mptcp_close_timeout(sk);
+
+ /* the close timeout takes precedence on the fail one, and here at least one of
+ * them is active
+@@ -3519,7 +3519,7 @@ static void schedule_3rdack_retransmission(struct sock *ssk)
+ struct tcp_sock *tp = tcp_sk(ssk);
+ unsigned long timeout;
+
+- if (mptcp_subflow_ctx(ssk)->fully_established)
++ if (READ_ONCE(mptcp_subflow_ctx(ssk)->fully_established))
+ return;
+
+ /* reschedule with a timeout above RTT, as we must look only for drop */
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 568a72702b080d..a93e661ef5c435 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -513,7 +513,6 @@ struct mptcp_subflow_context {
+ request_bkup : 1,
+ mp_capable : 1, /* remote is MPTCP capable */
+ mp_join : 1, /* remote is JOINing */
+- fully_established : 1, /* path validated */
+ pm_notified : 1, /* PM hook called for established status */
+ conn_finished : 1,
+ map_valid : 1,
+@@ -532,10 +531,11 @@ struct mptcp_subflow_context {
+ is_mptfo : 1, /* subflow is doing TFO */
+ close_event_done : 1, /* has done the post-closed part */
+ mpc_drop : 1, /* the MPC option has been dropped in a rtx */
+- __unused : 8;
++ __unused : 9;
+ bool data_avail;
+ bool scheduled;
+ bool pm_listener; /* a listener managed by the kernel PM? */
++ bool fully_established; /* path validated */
+ u32 remote_nonce;
+ u64 thmac;
+ u32 local_nonce;
+@@ -780,7 +780,7 @@ static inline bool __tcp_can_send(const struct sock *ssk)
+ static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
+ {
+ /* can't send if JOIN hasn't completed yet (i.e. is usable for mptcp) */
+- if (subflow->request_join && !subflow->fully_established)
++ if (subflow->request_join && !READ_ONCE(subflow->fully_established))
+ return false;
+
+ return __tcp_can_send(mptcp_subflow_tcp_sock(subflow));
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 6170f2fff71e4f..860903e0642255 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -800,7 +800,7 @@ void __mptcp_subflow_fully_established(struct mptcp_sock *msk,
+ const struct mptcp_options_received *mp_opt)
+ {
+ subflow_set_remote_key(msk, subflow, mp_opt);
+- subflow->fully_established = 1;
++ WRITE_ONCE(subflow->fully_established, true);
+ WRITE_ONCE(msk->fully_established, true);
+
+ if (subflow->is_mptfo)
+@@ -2062,7 +2062,7 @@ static void subflow_ulp_clone(const struct request_sock *req,
+ } else if (subflow_req->mp_join) {
+ new_ctx->ssn_offset = subflow_req->ssn_offset;
+ new_ctx->mp_join = 1;
+- new_ctx->fully_established = 1;
++ WRITE_ONCE(new_ctx->fully_established, true);
+ new_ctx->remote_key_valid = 1;
+ new_ctx->backup = subflow_req->backup;
+ new_ctx->request_bkup = subflow_req->request_bkup;
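
Note: fully_established moves out of the bitfield because the flag is
read locklessly and READ_ONCE()/WRITE_ONCE() cannot be applied to
bitfield members (a bitfield shares its storage word with its
neighbours, so its address cannot be taken). As a standalone bool it is
its own storage unit and the usual annotations work. A sketch with a
hypothetical struct:

    struct state {
            unsigned int    mp_capable : 1,
                            mp_join : 1;       /* not WRITE_ONCE()-able */
            bool            fully_established; /* own storage unit */
    };

    /* writer, under the owning lock */
    WRITE_ONCE(s->fully_established, true);

    /* lockless reader */
    if (READ_ONCE(s->fully_established))
            take_fast_path();
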
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 61431690cbd5f1..cc20e6d56807c6 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -104,14 +104,19 @@ find_set_type(const char *name, u8 family, u8 revision)
+ static bool
+ load_settype(const char *name)
+ {
++ if (!try_module_get(THIS_MODULE))
++ return false;
++
+ nfnl_unlock(NFNL_SUBSYS_IPSET);
+ pr_debug("try to load ip_set_%s\n", name);
+ if (request_module("ip_set_%s", name) < 0) {
+ pr_warn("Can't find ip_set type %s\n", name);
+ nfnl_lock(NFNL_SUBSYS_IPSET);
++ module_put(THIS_MODULE);
+ return false;
+ }
+ nfnl_lock(NFNL_SUBSYS_IPSET);
++ module_put(THIS_MODULE);
+ return true;
+ }
+
+diff --git a/net/netfilter/ipvs/ip_vs_proto.c b/net/netfilter/ipvs/ip_vs_proto.c
+index f100da4ba3bc3c..a9fd1d3fc2cbfe 100644
+--- a/net/netfilter/ipvs/ip_vs_proto.c
++++ b/net/netfilter/ipvs/ip_vs_proto.c
+@@ -340,7 +340,7 @@ void __net_exit ip_vs_protocol_net_cleanup(struct netns_ipvs *ipvs)
+
+ int __init ip_vs_protocol_init(void)
+ {
+- char protocols[64];
++ char protocols[64] = { 0 };
+ #define REGISTER_PROTOCOL(p) \
+ do { \
+ register_ip_vs_protocol(p); \
+@@ -348,8 +348,6 @@ int __init ip_vs_protocol_init(void)
+ strcat(protocols, (p)->name); \
+ } while (0)
+
+- protocols[0] = '\0';
+- protocols[2] = '\0';
+ #ifdef CONFIG_IP_VS_PROTO_TCP
+ REGISTER_PROTOCOL(&ip_vs_protocol_tcp);
+ #endif
+diff --git a/net/netfilter/nft_inner.c b/net/netfilter/nft_inner.c
+index 928312d01eb1d6..817ab978d24a19 100644
+--- a/net/netfilter/nft_inner.c
++++ b/net/netfilter/nft_inner.c
+@@ -210,35 +210,66 @@ static int nft_inner_parse(const struct nft_inner *priv,
+ struct nft_pktinfo *pkt,
+ struct nft_inner_tun_ctx *tun_ctx)
+ {
+- struct nft_inner_tun_ctx ctx = {};
+ u32 off = pkt->inneroff;
+
+ if (priv->flags & NFT_INNER_HDRSIZE &&
+- nft_inner_parse_tunhdr(priv, pkt, &ctx, &off) < 0)
++ nft_inner_parse_tunhdr(priv, pkt, tun_ctx, &off) < 0)
+ return -1;
+
+ if (priv->flags & (NFT_INNER_LL | NFT_INNER_NH)) {
+- if (nft_inner_parse_l2l3(priv, pkt, &ctx, off) < 0)
++ if (nft_inner_parse_l2l3(priv, pkt, tun_ctx, off) < 0)
+ return -1;
+ } else if (priv->flags & NFT_INNER_TH) {
+- ctx.inner_thoff = off;
+- ctx.flags |= NFT_PAYLOAD_CTX_INNER_TH;
++ tun_ctx->inner_thoff = off;
++ tun_ctx->flags |= NFT_PAYLOAD_CTX_INNER_TH;
+ }
+
+- *tun_ctx = ctx;
+ tun_ctx->type = priv->type;
++ tun_ctx->cookie = (unsigned long)pkt->skb;
+ pkt->flags |= NFT_PKTINFO_INNER_FULL;
+
+ return 0;
+ }
+
++static bool nft_inner_restore_tun_ctx(const struct nft_pktinfo *pkt,
++ struct nft_inner_tun_ctx *tun_ctx)
++{
++ struct nft_inner_tun_ctx *this_cpu_tun_ctx;
++
++ local_bh_disable();
++ this_cpu_tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx);
++ if (this_cpu_tun_ctx->cookie != (unsigned long)pkt->skb) {
++ local_bh_enable();
++ return false;
++ }
++ *tun_ctx = *this_cpu_tun_ctx;
++ local_bh_enable();
++
++ return true;
++}
++
++static void nft_inner_save_tun_ctx(const struct nft_pktinfo *pkt,
++ const struct nft_inner_tun_ctx *tun_ctx)
++{
++ struct nft_inner_tun_ctx *this_cpu_tun_ctx;
++
++ local_bh_disable();
++ this_cpu_tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx);
++ if (this_cpu_tun_ctx->cookie != tun_ctx->cookie)
++ *this_cpu_tun_ctx = *tun_ctx;
++ local_bh_enable();
++}
++
+ static bool nft_inner_parse_needed(const struct nft_inner *priv,
+ const struct nft_pktinfo *pkt,
+- const struct nft_inner_tun_ctx *tun_ctx)
++ struct nft_inner_tun_ctx *tun_ctx)
+ {
+ if (!(pkt->flags & NFT_PKTINFO_INNER_FULL))
+ return true;
+
++ if (!nft_inner_restore_tun_ctx(pkt, tun_ctx))
++ return true;
++
+ if (priv->type != tun_ctx->type)
+ return true;
+
+@@ -248,27 +279,29 @@ static bool nft_inner_parse_needed(const struct nft_inner *priv,
+ static void nft_inner_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
+ {
+- struct nft_inner_tun_ctx *tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx);
+ const struct nft_inner *priv = nft_expr_priv(expr);
++ struct nft_inner_tun_ctx tun_ctx = {};
+
+ if (nft_payload_inner_offset(pkt) < 0)
+ goto err;
+
+- if (nft_inner_parse_needed(priv, pkt, tun_ctx) &&
+- nft_inner_parse(priv, (struct nft_pktinfo *)pkt, tun_ctx) < 0)
++ if (nft_inner_parse_needed(priv, pkt, &tun_ctx) &&
++ nft_inner_parse(priv, (struct nft_pktinfo *)pkt, &tun_ctx) < 0)
+ goto err;
+
+ switch (priv->expr_type) {
+ case NFT_INNER_EXPR_PAYLOAD:
+- nft_payload_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, tun_ctx);
++ nft_payload_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, &tun_ctx);
+ break;
+ case NFT_INNER_EXPR_META:
+- nft_meta_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, tun_ctx);
++ nft_meta_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, &tun_ctx);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ goto err;
+ }
++ nft_inner_save_tun_ctx(pkt, &tun_ctx);
++
+ return;
+ err:
+ regs->verdict.code = NFT_BREAK;
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index daa56dda737ae2..b93f046ac7d1e1 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -24,11 +24,13 @@
+ struct nft_rhash {
+ struct rhashtable ht;
+ struct delayed_work gc_work;
++ u32 wq_gc_seq;
+ };
+
+ struct nft_rhash_elem {
+ struct nft_elem_priv priv;
+ struct rhash_head node;
++ u32 wq_gc_seq;
+ struct nft_set_ext ext;
+ };
+
+@@ -338,6 +340,10 @@ static void nft_rhash_gc(struct work_struct *work)
+ if (!gc)
+ goto done;
+
++ /* Elements never collected use a zero gc worker sequence number. */
++ if (unlikely(++priv->wq_gc_seq == 0))
++ priv->wq_gc_seq++;
++
+ rhashtable_walk_enter(&priv->ht, &hti);
+ rhashtable_walk_start(&hti);
+
+@@ -355,6 +361,14 @@ static void nft_rhash_gc(struct work_struct *work)
+ goto try_later;
+ }
+
++ /* rhashtable walk is unstable, already seen in this gc run?
++ * Then, skip this element. In case of (unlikely) sequence
++ * wraparound and stale element wq_gc_seq, next gc run will
++ * just find this expired element.
++ */
++ if (he->wq_gc_seq == priv->wq_gc_seq)
++ continue;
++
+ if (nft_set_elem_is_dead(&he->ext))
+ goto dead_elem;
+
+@@ -371,6 +385,8 @@ static void nft_rhash_gc(struct work_struct *work)
+ if (!gc)
+ goto try_later;
+
++ /* annotate gc sequence for this attempt. */
++ he->wq_gc_seq = priv->wq_gc_seq;
+ nft_trans_gc_elem_add(gc, he);
+ }
+
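
Note: the wq_gc_seq scheme above reserves 0 to mean "never visited by
the gc worker", so the per-run counter must step over 0 when it wraps;
an element stamped with the current run's number has already been seen
in this walk and is skipped, which tolerates the duplicate visits an
unstable rhashtable walk can produce. The wraparound guard in
isolation, with hypothetical names:

    static u32 gc_run_seq;

    static u32 next_gc_run_seq(void)
    {
            /* 0 is the "never collected" sentinel; skip it on wrap */
            if (unlikely(++gc_run_seq == 0))
                    gc_run_seq++;
            return gc_run_seq;
    }
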
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index f5da0c1775f2e7..35d0409b009501 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -68,7 +68,7 @@ static noinline int nft_socket_cgroup_subtree_level(void)
+
+ cgroup_put(cgrp);
+
+- if (WARN_ON_ONCE(level > 255))
++ if (level > 255)
+ return -ERANGE;
+
+ if (WARN_ON_ONCE(level < 0))
+diff --git a/net/netfilter/xt_LED.c b/net/netfilter/xt_LED.c
+index f7b0286d106ac1..8a80fd76fe45b2 100644
+--- a/net/netfilter/xt_LED.c
++++ b/net/netfilter/xt_LED.c
+@@ -96,7 +96,9 @@ static int led_tg_check(const struct xt_tgchk_param *par)
+ struct xt_led_info_internal *ledinternal;
+ int err;
+
+- if (ledinfo->id[0] == '\0')
++ /* Bail out if empty string or not a string at all. */
++ if (ledinfo->id[0] == '\0' ||
++ !memchr(ledinfo->id, '\0', sizeof(ledinfo->id)))
+ return -EINVAL;
+
+ mutex_lock(&xt_led_mutex);
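
Note: ledinfo->id arrives from userspace as a fixed-size char array,
and the added memchr() test is the standard way to prove it is
NUL-terminated before any string function touches it. A self-contained
userspace equivalent of the check:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* true iff buf holds a non-empty, NUL-terminated string */
    static bool id_is_valid(const char *buf, size_t size)
    {
            return buf[0] != '\0' && memchr(buf, '\0', size) != NULL;
    }
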
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index a705ec21425409..97774bd4b6cb11 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3421,17 +3421,17 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ if (sock->type == SOCK_PACKET)
+ sock->ops = &packet_ops_spkt;
+
++ po = pkt_sk(sk);
++ err = packet_alloc_pending(po);
++ if (err)
++ goto out_sk_free;
++
+ sock_init_data(sock, sk);
+
+- po = pkt_sk(sk);
+ init_completion(&po->skb_completion);
+ sk->sk_family = PF_PACKET;
+ po->num = proto;
+
+- err = packet_alloc_pending(po);
+- if (err)
+- goto out2;
+-
+ packet_cached_dev_reset(po);
+
+ sk->sk_destruct = packet_sock_destruct;
+@@ -3463,7 +3463,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ sock_prot_inuse_add(net, &packet_proto, 1);
+
+ return 0;
+-out2:
++out_sk_free:
+ sk_free(sk);
+ out:
+ return err;
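
Note: the packet_create() reordering makes the only step that can fail
(packet_alloc_pending()) happen before sock_init_data() publishes the
sock; at that point a plain sk_free() is still a complete undo, whereas
failing after publication would leave sock->sk pointing at freed
memory. The rule of thumb as a sketch, with illustrative names:

    /* do all fallible setup while sk is still private ... */
    err = prepare_resources(po);
    if (err) {
            sk_free(sk);    /* plain free is still a full undo */
            goto out;
    }

    /* ... then publish: from here, errors need real teardown */
    sock_init_data(sock, sk);
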
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index e280c27cb9f9af..1008ec8a464c93 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1369,7 +1369,6 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ int err;
+
+ md = (struct erspan_metadata *)&key->enc_opts.data[key->enc_opts.len];
+- memset(md, 0xff, sizeof(*md));
+ md->version = 1;
+
+ if (!depth)
+@@ -1398,9 +1397,9 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ NL_SET_ERR_MSG(extack, "Missing tunnel key erspan option index");
+ return -EINVAL;
+ }
++ memset(&md->u.index, 0xff, sizeof(md->u.index));
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX];
+- memset(&md->u, 0x00, sizeof(md->u));
+ md->u.index = nla_get_be32(nla);
+ }
+ } else if (md->version == 2) {
+@@ -1409,10 +1408,12 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ NL_SET_ERR_MSG(extack, "Missing tunnel key erspan option dir or hwid");
+ return -EINVAL;
+ }
++ md->u.md2.dir = 1;
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_DIR]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_DIR];
+ md->u.md2.dir = nla_get_u8(nla);
+ }
++ set_hwid(&md->u.md2, 0xff);
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_HWID]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_HWID];
+ set_hwid(&md->u.md2, nla_get_u8(nla));
+diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
+index 939425da18955b..8c9a0400c8622c 100644
+--- a/net/sched/sch_cbs.c
++++ b/net/sched/sch_cbs.c
+@@ -310,7 +310,7 @@ static void cbs_set_port_rate(struct net_device *dev, struct cbs_sched_data *q)
+ {
+ struct ethtool_link_ksettings ecmd;
+ int speed = SPEED_10;
+- int port_rate;
++ s64 port_rate;
+ int err;
+
+ err = __ethtool_get_link_ksettings(dev, &ecmd);
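
Note: port_rate widens to s64 because the link rate in bytes per second
no longer fits in 32 bits on fast links: ethtool reports speed in
Mbit/s, and 1 Mbit/s is 125000 bytes/s, so 100 Gbit/s works out to
100000 * 125000 = 12,500,000,000, well past INT_MAX (2,147,483,647);
assigning that to an int silently truncates. In miniature:

    /* speed is in Mbit/s; 1 Mbit/s = 125000 bytes/s */
    s64 port_rate = (s64)speed * 125000;
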
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index f1d09183ae632d..dc26b22d53c734 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -208,7 +208,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
+ struct tbf_sched_data *q = qdisc_priv(sch);
+ struct sk_buff *segs, *nskb;
+ netdev_features_t features = netif_skb_features(skb);
+- unsigned int len = 0, prev_len = qdisc_pkt_len(skb);
++ unsigned int len = 0, prev_len = qdisc_pkt_len(skb), seg_len;
+ int ret, nb;
+
+ segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+@@ -219,21 +219,27 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
+ nb = 0;
+ skb_list_walk_safe(segs, segs, nskb) {
+ skb_mark_not_on_list(segs);
+- qdisc_skb_cb(segs)->pkt_len = segs->len;
+- len += segs->len;
++ seg_len = segs->len;
++ qdisc_skb_cb(segs)->pkt_len = seg_len;
+ ret = qdisc_enqueue(segs, q->qdisc, to_free);
+ if (ret != NET_XMIT_SUCCESS) {
+ if (net_xmit_drop_count(ret))
+ qdisc_qstats_drop(sch);
+ } else {
+ nb++;
++ len += seg_len;
+ }
+ }
+ sch->q.qlen += nb;
+- if (nb > 1)
++ sch->qstats.backlog += len;
++ if (nb > 0) {
+ qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
+- consume_skb(skb);
+- return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
++ consume_skb(skb);
++ return NET_XMIT_SUCCESS;
++ }
++
++ kfree_skb(skb);
++ return NET_XMIT_DROP;
+ }
+
+ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 9d76e902fd770f..9e6c69d18581ce 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -383,6 +383,7 @@ void smc_sk_init(struct net *net, struct sock *sk, int protocol)
+ smc->limit_smc_hs = net->smc.limit_smc_hs;
+ smc->use_fallback = false; /* assume rdma capability first */
+ smc->fallback_rsn = 0;
++ smc_close_init(smc);
+ }
+
+ static struct sock *smc_sock_alloc(struct net *net, struct socket *sock,
+@@ -1299,7 +1300,6 @@ static int smc_connect_rdma(struct smc_sock *smc,
+ goto connect_abort;
+ }
+
+- smc_close_init(smc);
+ smc_rx_init(smc);
+
+ if (ini->first_contact_local) {
+@@ -1435,7 +1435,6 @@ static int smc_connect_ism(struct smc_sock *smc,
+ goto connect_abort;
+ }
+ }
+- smc_close_init(smc);
+ smc_rx_init(smc);
+ smc_tx_init(smc);
+
+@@ -1901,6 +1900,7 @@ static void smc_listen_out(struct smc_sock *new_smc)
+ if (tcp_sk(new_smc->clcsock->sk)->syn_smc)
+ atomic_dec(&lsmc->queued_smc_hs);
+
++ release_sock(newsmcsk); /* lock in smc_listen_work() */
+ if (lsmc->sk.sk_state == SMC_LISTEN) {
+ lock_sock_nested(&lsmc->sk, SINGLE_DEPTH_NESTING);
+ smc_accept_enqueue(&lsmc->sk, newsmcsk);
+@@ -2422,6 +2422,7 @@ static void smc_listen_work(struct work_struct *work)
+ u8 accept_version;
+ int rc = 0;
+
++ lock_sock(&new_smc->sk); /* release in smc_listen_out() */
+ if (new_smc->listen_smc->sk.sk_state != SMC_LISTEN)
+ return smc_listen_out_err(new_smc);
+
+@@ -2479,7 +2480,6 @@ static void smc_listen_work(struct work_struct *work)
+ goto out_decl;
+
+ mutex_lock(&smc_server_lgr_pending);
+- smc_close_init(new_smc);
+ smc_rx_init(new_smc);
+ smc_tx_init(new_smc);
+
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 439f7553997728..b7e25e7e9933b6 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -814,10 +814,10 @@ static void cleanup_bearer(struct work_struct *work)
+ kfree_rcu(rcast, rcu);
+ }
+
+- atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
+ dst_cache_destroy(&ub->rcast.dst_cache);
+ udp_tunnel_sock_release(ub->ubsock);
+ synchronize_net();
++ atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
+ kfree(ub);
+ }
+
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index dfd29160fe11c4..b52b798aa4c292 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -117,12 +117,14 @@
+ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr);
+ static void vsock_sk_destruct(struct sock *sk);
+ static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
++static void vsock_close(struct sock *sk, long timeout);
+
+ /* Protocol family. */
+ struct proto vsock_proto = {
+ .name = "AF_VSOCK",
+ .owner = THIS_MODULE,
+ .obj_size = sizeof(struct vsock_sock),
++ .close = vsock_close,
+ #ifdef CONFIG_BPF_SYSCALL
+ .psock_update_sk_prot = vsock_bpf_update_proto,
+ #endif
+@@ -797,39 +799,37 @@ static bool sock_type_connectible(u16 type)
+
+ static void __vsock_release(struct sock *sk, int level)
+ {
+- if (sk) {
+- struct sock *pending;
+- struct vsock_sock *vsk;
+-
+- vsk = vsock_sk(sk);
+- pending = NULL; /* Compiler warning. */
++ struct vsock_sock *vsk;
++ struct sock *pending;
+
+- /* When "level" is SINGLE_DEPTH_NESTING, use the nested
+- * version to avoid the warning "possible recursive locking
+- * detected". When "level" is 0, lock_sock_nested(sk, level)
+- * is the same as lock_sock(sk).
+- */
+- lock_sock_nested(sk, level);
++ vsk = vsock_sk(sk);
++ pending = NULL; /* Compiler warning. */
+
+- if (vsk->transport)
+- vsk->transport->release(vsk);
+- else if (sock_type_connectible(sk->sk_type))
+- vsock_remove_sock(vsk);
++ /* When "level" is SINGLE_DEPTH_NESTING, use the nested
++ * version to avoid the warning "possible recursive locking
++ * detected". When "level" is 0, lock_sock_nested(sk, level)
++ * is the same as lock_sock(sk).
++ */
++ lock_sock_nested(sk, level);
+
+- sock_orphan(sk);
+- sk->sk_shutdown = SHUTDOWN_MASK;
++ if (vsk->transport)
++ vsk->transport->release(vsk);
++ else if (sock_type_connectible(sk->sk_type))
++ vsock_remove_sock(vsk);
+
+- skb_queue_purge(&sk->sk_receive_queue);
++ sock_orphan(sk);
++ sk->sk_shutdown = SHUTDOWN_MASK;
+
+- /* Clean up any sockets that never were accepted. */
+- while ((pending = vsock_dequeue_accept(sk)) != NULL) {
+- __vsock_release(pending, SINGLE_DEPTH_NESTING);
+- sock_put(pending);
+- }
++ skb_queue_purge(&sk->sk_receive_queue);
+
+- release_sock(sk);
+- sock_put(sk);
++ /* Clean up any sockets that never were accepted. */
++ while ((pending = vsock_dequeue_accept(sk)) != NULL) {
++ __vsock_release(pending, SINGLE_DEPTH_NESTING);
++ sock_put(pending);
+ }
++
++ release_sock(sk);
++ sock_put(sk);
+ }
+
+ static void vsock_sk_destruct(struct sock *sk)
+@@ -901,9 +901,22 @@ void vsock_data_ready(struct sock *sk)
+ }
+ EXPORT_SYMBOL_GPL(vsock_data_ready);
+
++/* Dummy callback required by sockmap.
++ * See unconditional call of saved_close() in sock_map_close().
++ */
++static void vsock_close(struct sock *sk, long timeout)
++{
++}
++
+ static int vsock_release(struct socket *sock)
+ {
+- __vsock_release(sock->sk, 0);
++ struct sock *sk = sock->sk;
++
++ if (!sk)
++ return 0;
++
++ sk->sk_prot->close(sk, 0);
++ __vsock_release(sk, 0);
+ sock->sk = NULL;
+ sock->state = SS_FREE;
+
+@@ -1054,6 +1067,9 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ mask |= EPOLLRDHUP;
+ }
+
++ if (sk_is_readable(sk))
++ mask |= EPOLLIN | EPOLLRDNORM;
++
+ if (sock->type == SOCK_DGRAM) {
+ /* For datagram sockets we can read if there is something in
+ * the queue and write as long as the socket isn't shutdown for
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 521a2938e50a12..0662d34b09ee78 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -387,10 +387,9 @@ void xp_dma_unmap(struct xsk_buff_pool *pool, unsigned long attrs)
+ return;
+ }
+
+- if (!refcount_dec_and_test(&dma_map->users))
+- return;
++ if (refcount_dec_and_test(&dma_map->users))
++ __xp_dma_unmap(dma_map, attrs);
+
+- __xp_dma_unmap(dma_map, attrs);
+ kvfree(pool->dma_pages);
+ pool->dma_pages = NULL;
+ pool->dma_pages_cnt = 0;
+diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
+index e1c526f97ce31f..afa457506274c1 100644
+--- a/net/xdp/xskmap.c
++++ b/net/xdp/xskmap.c
+@@ -224,7 +224,7 @@ static long xsk_map_delete_elem(struct bpf_map *map, void *key)
+ struct xsk_map *m = container_of(map, struct xsk_map, map);
+ struct xdp_sock __rcu **map_entry;
+ struct xdp_sock *old_xs;
+- int k = *(u32 *)key;
++ u32 k = *(u32 *)key;
+
+ if (k >= map->max_entries)
+ return -EINVAL;
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index 032c9089e6862d..e936254531fd0a 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -12,10 +12,10 @@
+ //! do so first instead of bypassing this crate.
+
+ #![no_std]
++#![feature(arbitrary_self_types)]
+ #![feature(coerce_unsized)]
+ #![feature(dispatch_from_dyn)]
+ #![feature(new_uninit)]
+-#![feature(receiver_trait)]
+ #![feature(unsize)]
+
+ // Ensure conditional compilation based on the kernel configuration works;
+diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
+index d801b9dc6291db..3483d8c232c4f1 100644
+--- a/rust/kernel/list/arc.rs
++++ b/rust/kernel/list/arc.rs
+@@ -441,9 +441,6 @@ fn as_ref(&self) -> &Arc<T> {
+ }
+ }
+
+-// This is to allow [`ListArc`] (and variants) to be used as the type of `self`.
+-impl<T, const ID: u64> core::ops::Receiver for ListArc<T, ID> where T: ListArcSafe<ID> + ?Sized {}
+-
+ // This is to allow coercion from `ListArc<T>` to `ListArc<U>` if `T` can be converted to the
+ // dynamically-sized type (DST) `U`.
+ impl<T, U, const ID: u64> core::ops::CoerceUnsized<ListArc<U, ID>> for ListArc<T, ID>
+diff --git a/rust/kernel/sync/arc.rs b/rust/kernel/sync/arc.rs
+index 3021f30fd822f6..28743a7c74a847 100644
+--- a/rust/kernel/sync/arc.rs
++++ b/rust/kernel/sync/arc.rs
+@@ -171,9 +171,6 @@ unsafe fn container_of(ptr: *const T) -> NonNull<ArcInner<T>> {
+ }
+ }
+
+-// This is to allow [`Arc`] (and variants) to be used as the type of `self`.
+-impl<T: ?Sized> core::ops::Receiver for Arc<T> {}
+-
+ // This is to allow coercion from `Arc<T>` to `Arc<U>` if `T` can be converted to the
+ // dynamically-sized type (DST) `U`.
+ impl<T: ?Sized + Unsize<U>, U: ?Sized> core::ops::CoerceUnsized<Arc<U>> for Arc<T> {}
+@@ -480,9 +477,6 @@ pub struct ArcBorrow<'a, T: ?Sized + 'a> {
+ _p: PhantomData<&'a ()>,
+ }
+
+-// This is to allow [`ArcBorrow`] (and variants) to be used as the type of `self`.
+-impl<T: ?Sized> core::ops::Receiver for ArcBorrow<'_, T> {}
+-
+ // This is to allow `ArcBorrow<U>` to be dispatched on when `ArcBorrow<T>` can be coerced into
+ // `ArcBorrow<U>`.
+ impl<T: ?Sized + Unsize<U>, U: ?Sized> core::ops::DispatchFromDyn<ArcBorrow<'_, U>>
+diff --git a/samples/bpf/test_cgrp2_sock.c b/samples/bpf/test_cgrp2_sock.c
+index a0811df888f453..8ca2a445ffa155 100644
+--- a/samples/bpf/test_cgrp2_sock.c
++++ b/samples/bpf/test_cgrp2_sock.c
+@@ -178,8 +178,10 @@ static int show_sockopts(int family)
+ return 1;
+ }
+
+- if (get_bind_to_device(sd, name, sizeof(name)) < 0)
++ if (get_bind_to_device(sd, name, sizeof(name)) < 0) {
++ close(sd);
+ return 1;
++ }
+
+ mark = get_somark(sd);
+ prio = get_priority(sd);
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 8f423a1faf5077..880785b52c04ad 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -248,7 +248,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE
+ # Compile Rust sources (.rs)
+ # ---------------------------------------------------------------------------
+
+-rust_allowed_features := new_uninit
++rust_allowed_features := arbitrary_self_types,new_uninit
+
+ # `--out-dir` is required to avoid temporaries being created by `rustc` in the
+ # current working directory, which may be not accessible in the out-of-tree
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 107393a8c48a59..971eda0c6ba737 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -785,7 +785,7 @@ static void check_section(const char *modname, struct elf_info *elf,
+ ".ltext", ".ltext.*"
+ #define OTHER_TEXT_SECTIONS ".ref.text", ".head.text", ".spinlock.text", \
+ ".fixup", ".entry.text", ".exception.text", \
+- ".coldtext", ".softirqentry.text"
++ ".coldtext", ".softirqentry.text", ".irqentry.text"
+
+ #define ALL_TEXT_SECTIONS ".init.text", ".exit.text", \
+ TEXT_SECTIONS, OTHER_TEXT_SECTIONS
+diff --git a/scripts/setlocalversion b/scripts/setlocalversion
+index 38b96c6797f408..5818465abba984 100755
+--- a/scripts/setlocalversion
++++ b/scripts/setlocalversion
+@@ -30,6 +30,27 @@ if test $# -gt 0 -o ! -d "$srctree"; then
+ usage
+ fi
+
++try_tag() {
++ tag="$1"
++
++ # Is $tag an annotated tag?
++ [ "$(git cat-file -t "$tag" 2> /dev/null)" = tag ] || return 1
++
++ # Is it an ancestor of HEAD, and if so, how many commits are in $tag..HEAD?
++ # shellcheck disable=SC2046 # word splitting is the point here
++ set -- $(git rev-list --count --left-right "$tag"...HEAD 2> /dev/null)
++
++ # $1 is 0 if and only if $tag is an ancestor of HEAD. Use
++ # string comparison, because $1 is empty if the 'git rev-list'
++ # command somehow failed.
++ [ "$1" = 0 ] || return 1
++
++ # $2 is the number of commits in the range $tag..HEAD, possibly 0.
++ count="$2"
++
++ return 0
++}
++
+ scm_version()
+ {
+ local short=false
+@@ -61,33 +82,33 @@ scm_version()
+ # stable kernel: 6.1.7 -> v6.1.7
+ version_tag=v$(echo "${KERNELVERSION}" | sed -E 's/^([0-9]+\.[0-9]+)\.0(.*)$/\1\2/')
+
++ # try_tag initializes count if the tag is usable.
++ count=
++
+ # If a localversion* file exists, and the corresponding
+ # annotated tag exists and is an ancestor of HEAD, use
+ # it. This is the case in linux-next.
+- tag=${file_localversion#-}
+- desc=
+- if [ -n "${tag}" ]; then
+- desc=$(git describe --match=$tag 2>/dev/null)
++ if [ -n "${file_localversion#-}" ] ; then
++ try_tag "${file_localversion#-}"
+ fi
+
+ # Otherwise, if a localversion* file exists, and the tag
+ # obtained by appending it to the tag derived from
+ # KERNELVERSION exists and is an ancestor of HEAD, use
+ # it. This is e.g. the case in linux-rt.
+- if [ -z "${desc}" ] && [ -n "${file_localversion}" ]; then
+- tag="${version_tag}${file_localversion}"
+- desc=$(git describe --match=$tag 2>/dev/null)
++ if [ -z "${count}" ] && [ -n "${file_localversion}" ]; then
++ try_tag "${version_tag}${file_localversion}"
+ fi
+
+ # Otherwise, default to the annotated tag derived from KERNELVERSION.
+- if [ -z "${desc}" ]; then
+- tag="${version_tag}"
+- desc=$(git describe --match=$tag 2>/dev/null)
++ if [ -z "${count}" ]; then
++ try_tag "${version_tag}"
+ fi
+
+- # If we are at the tagged commit, we ignore it because the version is
+- # well-defined.
+- if [ "${tag}" != "${desc}" ]; then
++ # If we are at the tagged commit, we ignore it because the
++ # version is well-defined. If none of the attempted tags exist
++ # or were usable, $count is still empty.
++ if [ -z "${count}" ] || [ "${count}" -gt 0 ]; then
+
+ # If only the short version is requested, don't bother
+ # running further git commands
+@@ -95,14 +116,15 @@ scm_version()
+ echo "+"
+ return
+ fi
++
+ # If we are past the tagged commit, we pretty print it.
+ # (like 6.1.0-14595-g292a089d78d3)
+- if [ -n "${desc}" ]; then
+- echo "${desc}" | awk -F- '{printf("-%05d", $(NF-1))}'
++ if [ -n "${count}" ]; then
++ printf "%s%05d" "-" "${count}"
+ fi
+
+ # Add -g and exactly 12 hex chars.
+- printf '%s%s' -g "$(echo $head | cut -c1-12)"
++ printf '%s%.12s' -g "$head"
+ fi
+
+ if ${no_dirty}; then
+diff --git a/sound/core/seq/seq_ump_client.c b/sound/core/seq/seq_ump_client.c
+index e5d3f4d206bf6a..e956f17f379282 100644
+--- a/sound/core/seq/seq_ump_client.c
++++ b/sound/core/seq/seq_ump_client.c
+@@ -257,12 +257,12 @@ static void update_port_infos(struct seq_ump_client *client)
+ continue;
+
+ old->addr.client = client->seq_client;
+- old->addr.port = i;
++ old->addr.port = ump_group_to_seq_port(i);
+ err = snd_seq_kernel_client_ctl(client->seq_client,
+ SNDRV_SEQ_IOCTL_GET_PORT_INFO,
+ old);
+ if (err < 0)
+- return;
++ continue;
+ fill_port_info(new, client, &client->ump->groups[i]);
+ if (old->capability == new->capability &&
+ !strcmp(old->name, new->name))
+@@ -271,7 +271,7 @@ static void update_port_infos(struct seq_ump_client *client)
+ SNDRV_SEQ_IOCTL_SET_PORT_INFO,
+ new);
+ if (err < 0)
+- return;
++ continue;
+ /* notify to system port */
+ snd_seq_system_client_ev_port_change(client->seq_client, i);
+ }
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 7c6b1fe8dfcce3..8e74be038b0fad 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -956,6 +956,28 @@ void snd_hda_pick_pin_fixup(struct hda_codec *codec,
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_pick_pin_fixup);
+
++/* check whether the given quirk entry matches with vendor/device pair */
++static bool hda_quirk_match(u16 vendor, u16 device, const struct hda_quirk *q)
++{
++ if (q->subvendor != vendor)
++ return false;
++ return !q->subdevice ||
++ (device & q->subdevice_mask) == q->subdevice;
++}
++
++/* look through the quirk list and return the matching entry */
++static const struct hda_quirk *
++hda_quirk_lookup_id(u16 vendor, u16 device, const struct hda_quirk *list)
++{
++ const struct hda_quirk *q;
++
++ for (q = list; q->subvendor || q->subdevice; q++) {
++ if (hda_quirk_match(vendor, device, q))
++ return q;
++ }
++ return NULL;
++}
++
+ /**
+ * snd_hda_pick_fixup - Pick up a fixup matching with PCI/codec SSID or model string
+ * @codec: the HDA codec
+@@ -975,14 +997,16 @@ EXPORT_SYMBOL_GPL(snd_hda_pick_pin_fixup);
+ */
+ void snd_hda_pick_fixup(struct hda_codec *codec,
+ const struct hda_model_fixup *models,
+- const struct snd_pci_quirk *quirk,
++ const struct hda_quirk *quirk,
+ const struct hda_fixup *fixlist)
+ {
+- const struct snd_pci_quirk *q;
++ const struct hda_quirk *q;
+ int id = HDA_FIXUP_ID_NOT_SET;
+ const char *name = NULL;
+ const char *type = NULL;
+ unsigned int vendor, device;
++ u16 pci_vendor, pci_device;
++ u16 codec_vendor, codec_device;
+
+ if (codec->fixup_id != HDA_FIXUP_ID_NOT_SET)
+ return;
+@@ -1013,27 +1037,42 @@ void snd_hda_pick_fixup(struct hda_codec *codec,
+ if (!quirk)
+ return;
+
++ if (codec->bus->pci) {
++ pci_vendor = codec->bus->pci->subsystem_vendor;
++ pci_device = codec->bus->pci->subsystem_device;
++ }
++
++ codec_vendor = codec->core.subsystem_id >> 16;
++ codec_device = codec->core.subsystem_id & 0xffff;
++
+ /* match with the SSID alias given by the model string "XXXX:YYYY" */
+ if (codec->modelname &&
+ sscanf(codec->modelname, "%04x:%04x", &vendor, &device) == 2) {
+- q = snd_pci_quirk_lookup_id(vendor, device, quirk);
++ q = hda_quirk_lookup_id(vendor, device, quirk);
+ if (q) {
+ type = "alias SSID";
+ goto found_device;
+ }
+ }
+
+- /* match with the PCI SSID */
+- q = snd_pci_quirk_lookup(codec->bus->pci, quirk);
+- if (q) {
+- type = "PCI SSID";
+- goto found_device;
++ /* match primarily with the PCI SSID */
++ for (q = quirk; q->subvendor || q->subdevice; q++) {
++ /* if the entry is specific to codec SSID, check with it */
++ if (!codec->bus->pci || q->match_codec_ssid) {
++ if (hda_quirk_match(codec_vendor, codec_device, q)) {
++ type = "codec SSID";
++ goto found_device;
++ }
++ } else {
++ if (hda_quirk_match(pci_vendor, pci_device, q)) {
++ type = "PCI SSID";
++ goto found_device;
++ }
++ }
+ }
+
+ /* match with the codec SSID */
+- q = snd_pci_quirk_lookup_id(codec->core.subsystem_id >> 16,
+- codec->core.subsystem_id & 0xffff,
+- quirk);
++ q = hda_quirk_lookup_id(codec_vendor, codec_device, quirk);
+ if (q) {
+ type = "codec SSID";
+ goto found_device;
+diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h
+index 53a5a62b78fa98..763f79f6f32e70 100644
+--- a/sound/pci/hda/hda_local.h
++++ b/sound/pci/hda/hda_local.h
+@@ -292,6 +292,32 @@ struct hda_fixup {
+ } v;
+ };
+
++/*
++ * extended form of snd_pci_quirk:
++ * for PCI SSID matching, use SND_PCI_QUIRK() like before;
++ * for codec SSID matching, use the new HDA_CODEC_QUIRK() instead
++ */
++struct hda_quirk {
++ unsigned short subvendor; /* PCI subvendor ID */
++ unsigned short subdevice; /* PCI subdevice ID */
++ unsigned short subdevice_mask; /* bitmask to match */
++ bool match_codec_ssid; /* match only with codec SSID */
++ int value; /* value */
++#ifdef CONFIG_SND_DEBUG_VERBOSE
++ const char *name; /* name of the device (optional) */
++#endif
++};
++
++#ifdef CONFIG_SND_DEBUG_VERBOSE
++#define HDA_CODEC_QUIRK(vend, dev, xname, val) \
++ { _SND_PCI_QUIRK_ID(vend, dev), .value = (val), .name = (xname),\
++ .match_codec_ssid = true }
++#else
++#define HDA_CODEC_QUIRK(vend, dev, xname, val) \
++ { _SND_PCI_QUIRK_ID(vend, dev), .value = (val), \
++ .match_codec_ssid = true }
++#endif
++
+ struct snd_hda_pin_quirk {
+ unsigned int codec; /* Codec vendor/device ID */
+ unsigned short subvendor; /* PCI subvendor ID */
+@@ -351,7 +377,7 @@ void snd_hda_apply_fixup(struct hda_codec *codec, int action);
+ void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth);
+ void snd_hda_pick_fixup(struct hda_codec *codec,
+ const struct hda_model_fixup *models,
+- const struct snd_pci_quirk *quirk,
++ const struct hda_quirk *quirk,
+ const struct hda_fixup *fixlist);
+ void snd_hda_pick_pin_fixup(struct hda_codec *codec,
+ const struct snd_hda_pin_quirk *pin_quirk,
+diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c
+index 1e9dadcdc51be2..56354fe060a1aa 100644
+--- a/sound/pci/hda/patch_analog.c
++++ b/sound/pci/hda/patch_analog.c
+@@ -345,7 +345,7 @@ static const struct hda_fixup ad1986a_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk ad1986a_fixup_tbl[] = {
++static const struct hda_quirk ad1986a_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x30af, "HP B2800", AD1986A_FIXUP_LAPTOP_IMIC),
+ SND_PCI_QUIRK(0x1043, 0x1153, "ASUS M9V", AD1986A_FIXUP_LAPTOP_IMIC),
+ SND_PCI_QUIRK(0x1043, 0x1443, "ASUS Z99He", AD1986A_FIXUP_EAPD),
+@@ -588,7 +588,7 @@ static const struct hda_fixup ad1981_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk ad1981_fixup_tbl[] = {
++static const struct hda_quirk ad1981_fixup_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x1014, "Lenovo", AD1981_FIXUP_AMP_OVERRIDE),
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", AD1981_FIXUP_HP_EAPD),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo", AD1981_FIXUP_AMP_OVERRIDE),
+@@ -1061,7 +1061,7 @@ static const struct hda_fixup ad1884_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk ad1884_fixup_tbl[] = {
++static const struct hda_quirk ad1884_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x2a82, "HP Touchsmart", AD1884_FIXUP_HP_TOUCHSMART),
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", AD1884_FIXUP_HP_EAPD),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1884_FIXUP_THINKPAD),
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index 654724559355ef..06e046214a4134 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -385,7 +385,7 @@ static const struct hda_model_fixup cs420x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cs420x_fixup_tbl[] = {
++static const struct hda_quirk cs420x_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10de, 0x0ac0, "MacBookPro 5,3", CS420X_MBP53),
+ SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55),
+ SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55),
+@@ -634,13 +634,13 @@ static const struct hda_model_fixup cs4208_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cs4208_fixup_tbl[] = {
++static const struct hda_quirk cs4208_fixup_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS4208_MAC_AUTO),
+ {} /* terminator */
+ };
+
+ /* codec SSID matching */
+-static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = {
++static const struct hda_quirk cs4208_mac_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11),
+ SND_PCI_QUIRK(0x106b, 0x6c00, "MacMini 7,1", CS4208_MACMINI),
+ SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
+@@ -818,7 +818,7 @@ static const struct hda_model_fixup cs421x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cs421x_fixup_tbl[] = {
++static const struct hda_quirk cs421x_fixup_tbl[] = {
+ /* Test Intel board + CDB2410 */
+ SND_PCI_QUIRK(0x8086, 0x5001, "DP45SG/CDB4210", CS421X_CDB4210),
+ {} /* terminator */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index b2bcdf76da3058..2e9f817b948eb3 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -828,23 +828,6 @@ static const struct hda_pintbl cxt_pincfg_sws_js201d[] = {
+ {}
+ };
+
+-/* pincfg quirk for Tuxedo Sirius;
+- * unfortunately the (PCI) SSID conflicts with System76 Pangolin pang14,
+- * which has incompatible pin setup, so we check the codec SSID (luckily
+- * different one!) and conditionally apply the quirk here
+- */
+-static void cxt_fixup_sirius_top_speaker(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- /* ignore for incorrectly picked-up pang14 */
+- if (codec->core.subsystem_id == 0x278212b3)
+- return;
+- /* set up the top speaker pin */
+- if (action == HDA_FIXUP_ACT_PRE_PROBE)
+- snd_hda_codec_set_pincfg(codec, 0x1d, 0x82170111);
+-}
+-
+ static const struct hda_fixup cxt_fixups[] = {
+ [CXT_PINCFG_LENOVO_X200] = {
+ .type = HDA_FIXUP_PINS,
+@@ -1009,12 +992,15 @@ static const struct hda_fixup cxt_fixups[] = {
+ .v.pins = cxt_pincfg_sws_js201d,
+ },
+ [CXT_PINCFG_TOP_SPEAKER] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = cxt_fixup_sirius_top_speaker,
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1d, 0x82170111 },
++ { }
++ },
+ },
+ };
+
+-static const struct snd_pci_quirk cxt5045_fixups[] = {
++static const struct hda_quirk cxt5045_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x30d5, "HP 530", CXT_FIXUP_HP_530),
+ SND_PCI_QUIRK(0x1179, 0xff31, "Toshiba P105", CXT_FIXUP_TOSHIBA_P105),
+ /* HP, Packard Bell, Fujitsu-Siemens & Lenovo laptops have
+@@ -1034,7 +1020,7 @@ static const struct hda_model_fixup cxt5045_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cxt5047_fixups[] = {
++static const struct hda_quirk cxt5047_fixups[] = {
+ /* HP laptops have really bad sound over 0 dB on NID 0x10.
+ */
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", CXT_FIXUP_CAP_MIX_AMP_5047),
+@@ -1046,7 +1032,7 @@ static const struct hda_model_fixup cxt5047_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cxt5051_fixups[] = {
++static const struct hda_quirk cxt5051_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200),
+ {}
+@@ -1057,7 +1043,7 @@ static const struct hda_model_fixup cxt5051_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cxt5066_fixups[] = {
++static const struct hda_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x054f, "Acer Aspire 4830T", CXT_FIXUP_ASPIRE_DMIC),
+@@ -1109,8 +1095,8 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
+ SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
+- SND_PCI_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
+- SND_PCI_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
++ HDA_CODEC_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
++ HDA_CODEC_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
+ {}
+ };
+
+diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
+index 36b411d1a9609a..759f48038273df 100644
+--- a/sound/pci/hda/patch_cs8409-tables.c
++++ b/sound/pci/hda/patch_cs8409-tables.c
+@@ -473,7 +473,7 @@ struct sub_codec dolphin_cs42l42_1 = {
+ * Arrays Used for all projects using CS8409
+ ******************************************************************************/
+
+-const struct snd_pci_quirk cs8409_fixup_tbl[] = {
++const struct hda_quirk cs8409_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0A11, "Bullseye", CS8409_BULLSEYE),
+ SND_PCI_QUIRK(0x1028, 0x0A12, "Bullseye", CS8409_BULLSEYE),
+ SND_PCI_QUIRK(0x1028, 0x0A23, "Bullseye", CS8409_BULLSEYE),
+diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
+index 937e9387abdc7a..5e48115caf096b 100644
+--- a/sound/pci/hda/patch_cs8409.h
++++ b/sound/pci/hda/patch_cs8409.h
+@@ -355,7 +355,7 @@ int cs42l42_volume_put(struct snd_kcontrol *kctrl, struct snd_ctl_elem_value *uc
+
+ extern const struct hda_pcm_stream cs42l42_48k_pcm_analog_playback;
+ extern const struct hda_pcm_stream cs42l42_48k_pcm_analog_capture;
+-extern const struct snd_pci_quirk cs8409_fixup_tbl[];
++extern const struct hda_quirk cs8409_fixup_tbl[];
+ extern const struct hda_model_fixup cs8409_models[];
+ extern const struct hda_fixup cs8409_fixups[];
+ extern const struct hda_verb cs8409_cs42l42_init_verbs[];
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 18e6779a83be2f..973671e0cdb09d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1567,7 +1567,7 @@ static const struct hda_fixup alc880_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc880_fixup_tbl[] = {
++static const struct hda_quirk alc880_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1019, 0x0f69, "Coeus G610P", ALC880_FIXUP_W810),
+ SND_PCI_QUIRK(0x1043, 0x10c3, "ASUS W5A", ALC880_FIXUP_ASUS_W5A),
+ SND_PCI_QUIRK(0x1043, 0x1964, "ASUS Z71V", ALC880_FIXUP_Z71V),
+@@ -1876,7 +1876,7 @@ static const struct hda_fixup alc260_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc260_fixup_tbl[] = {
++static const struct hda_quirk alc260_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x007b, "Acer C20x", ALC260_FIXUP_GPIO1),
+ SND_PCI_QUIRK(0x1025, 0x007f, "Acer Aspire 9500", ALC260_FIXUP_COEF),
+ SND_PCI_QUIRK(0x1025, 0x008f, "Acer", ALC260_FIXUP_GPIO1),
+@@ -2568,7 +2568,7 @@ static const struct hda_fixup alc882_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc882_fixup_tbl[] = {
++static const struct hda_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x006c, "Acer Aspire 9810", ALC883_FIXUP_ACER_EAPD),
+ SND_PCI_QUIRK(0x1025, 0x0090, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ SND_PCI_QUIRK(0x1025, 0x0107, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+@@ -2912,7 +2912,7 @@ static const struct hda_fixup alc262_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc262_fixup_tbl[] = {
++static const struct hda_quirk alc262_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x170b, "HP Z200", ALC262_FIXUP_HP_Z200),
+ SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu Lifebook S7110", ALC262_FIXUP_FSC_S7110),
+ SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FIXUP_BENQ),
+@@ -3073,7 +3073,7 @@ static const struct hda_model_fixup alc268_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk alc268_fixup_tbl[] = {
++static const struct hda_quirk alc268_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0139, "Acer TravelMate 6293", ALC268_FIXUP_SPDIF),
+ SND_PCI_QUIRK(0x1025, 0x015b, "Acer AOA 150 (ZG5)", ALC268_FIXUP_INV_DMIC),
+ /* below is codec SSID since multiple Toshiba laptops have the
+@@ -7726,8 +7726,6 @@ enum {
+ ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ ALC298_FIXUP_LENOVO_C940_DUET7,
+- ALC287_FIXUP_LENOVO_14IRP8_DUETITL,
+- ALC287_FIXUP_LENOVO_LEGION_7,
+ ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ ALC256_FIXUP_SET_COEF_DEFAULTS,
+ ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+@@ -7772,8 +7770,6 @@ enum {
+ ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1,
+ ALC287_FIXUP_LENOVO_THKPAD_WH_ALC1318,
+ ALC256_FIXUP_CHROME_BOOK,
+- ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7,
+- ALC287_FIXUP_LENOVO_SSID_17AA3820,
+ ALC245_FIXUP_CLEVO_NOISY_MIC,
+ ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE,
+ ALC233_FIXUP_MEDION_MTL_SPK,
+@@ -7796,72 +7792,6 @@ static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec,
+ __snd_hda_apply_fixup(codec, id, action, 0);
+ }
+
+-/* A special fixup for Lenovo Slim/Yoga Pro 9 14IRP8 and Yoga DuetITL 2021;
+- * 14IRP8 PCI SSID will mistakenly be matched with the DuetITL codec SSID,
+- * so we need to apply a different fixup in this case. The only DuetITL codec
+- * SSID reported so far is the 17aa:3802 while the 14IRP8 has the 17aa:38be
+- * and 17aa:38bf. If it weren't for the PCI SSID, the 14IRP8 models would
+- * have matched correctly by their codecs.
+- */
+-static void alc287_fixup_lenovo_14irp8_duetitl(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa3802)
+- id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* DuetITL */
+- else
+- id = ALC287_FIXUP_TAS2781_I2C; /* 14IRP8 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+-/* Similar to above the Lenovo Yoga Pro 7 14ARP8 PCI SSID matches the codec SSID of the
+- Legion Y9000X 2022 IAH7.*/
+-static void alc287_fixup_lenovo_14arp8_legion_iah7(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa386e)
+- id = ALC287_FIXUP_CS35L41_I2C_2; /* Legion Y9000X 2022 IAH7 */
+- else
+- id = ALC285_FIXUP_SPEAKER2_TO_DAC1; /* Yoga Pro 7 14ARP8 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+-/* Another hilarious PCI SSID conflict with Lenovo Legion Pro 7 16ARX8H (with
+- * TAS2781 codec) and Legion 7i 16IAX7 (with CS35L41 codec);
+- * we apply a corresponding fixup depending on the codec SSID instead
+- */
+-static void alc287_fixup_lenovo_legion_7(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa38a8)
+- id = ALC287_FIXUP_TAS2781_I2C; /* Legion Pro 7 16ARX8H */
+- else
+- id = ALC287_FIXUP_CS35L41_I2C_2; /* Legion 7i 16IAX7 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+-/* Yet more conflicting PCI SSID (17aa:3820) on two Lenovo models */
+-static void alc287_fixup_lenovo_ssid_17aa3820(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa3820)
+- id = ALC269_FIXUP_ASPIRE_HEADSET_MIC; /* IdeaPad 330-17IKB 81DM */
+- else /* 0x17aa3802 */
+- id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* "Yoga Duet 7 13ITL6 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+ static const struct hda_fixup alc269_fixups[] = {
+ [ALC269_FIXUP_GPIO2] = {
+ .type = HDA_FIXUP_FUNC,
+@@ -9810,14 +9740,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc298_fixup_lenovo_c940_duet7,
+ },
+- [ALC287_FIXUP_LENOVO_14IRP8_DUETITL] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_14irp8_duetitl,
+- },
+- [ALC287_FIXUP_LENOVO_LEGION_7] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_legion_7,
+- },
+ [ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -10002,10 +9924,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK,
+ },
+- [ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_14arp8_legion_iah7,
+- },
+ [ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc287_fixup_yoga9_14iap7_bass_spk_pin,
+@@ -10140,10 +10058,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC225_FIXUP_HEADSET_JACK
+ },
+- [ALC287_FIXUP_LENOVO_SSID_17AA3820] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_ssid_17aa3820,
+- },
+ [ALC245_FIXUP_CLEVO_NOISY_MIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+@@ -10169,7 +10083,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc269_fixup_tbl[] = {
++static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0283, "Acer TravelMate 8371", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
+@@ -10411,6 +10325,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87d3, "HP Laptop 15-gw0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++ SND_PCI_QUIRK(0x103c, 0x87df, "HP ProBook 430 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87f1, "HP ProBook 630 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -10592,7 +10507,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d91, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d92, "HP ZBook Firefly 16 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e18, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e19, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e1a, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -10746,6 +10667,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
++ SND_PCI_QUIRK(0x144d, 0xca06, "Samsung Galaxy Book3 360 (NP730QFG)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc870, "Samsung Galaxy Book2 Pro (NP950XED)", ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS),
+ SND_PCI_QUIRK(0x144d, 0xc872, "Samsung Galaxy Book2 Pro (NP950XEE)", ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS),
+@@ -10903,11 +10825,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+- SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8 / DuetITL 2021", ALC287_FIXUP_LENOVO_14IRP8_DUETITL),
++ HDA_CODEC_QUIRK(0x17aa, 0x3802, "DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++ SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7),
+ SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+- SND_PCI_QUIRK(0x17aa, 0x3820, "IdeaPad 330 / Yoga Duet 7", ALC287_FIXUP_LENOVO_SSID_17AA3820),
++ HDA_CODEC_QUIRK(0x17aa, 0x3820, "IdeaPad 330-17IKB 81DM", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
++ SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+@@ -10921,8 +10845,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3865, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3866, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+- SND_PCI_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7 / Yoga Pro 7 14ARP8", ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7),
+- SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7/7i", ALC287_FIXUP_LENOVO_LEGION_7),
++ HDA_CODEC_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x17aa, 0x386e, "Yoga Pro 7 14ARP8", ALC285_FIXUP_SPEAKER2_TO_DAC1),
++ HDA_CODEC_QUIRK(0x17aa, 0x386f, "Legion Pro 7 16ARX8H", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7i 16IAX7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3877, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3878, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -11096,7 +11022,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk alc269_fixup_vendor_tbl[] = {
++static const struct hda_quirk alc269_fixup_vendor_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC),
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", ALC269_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK_VENDOR(0x104d, "Sony VAIO", ALC269_FIXUP_SONY_VAIO),
+@@ -12032,7 +11958,7 @@ static const struct hda_fixup alc861_fixups[] = {
+ }
+ };
+
+-static const struct snd_pci_quirk alc861_fixup_tbl[] = {
++static const struct hda_quirk alc861_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1253, "ASUS W7J", ALC660_FIXUP_ASUS_W7J),
+ SND_PCI_QUIRK(0x1043, 0x1263, "ASUS Z35HL", ALC660_FIXUP_ASUS_W7J),
+ SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
+@@ -12136,7 +12062,7 @@ static const struct hda_fixup alc861vd_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc861vd_fixup_tbl[] = {
++static const struct hda_quirk alc861vd_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x30bf, "HP TX1000", ALC861VD_FIX_DALLAS),
+ SND_PCI_QUIRK(0x1043, 0x1339, "ASUS A7-K", ALC660VD_FIX_ASUS_GPIO1),
+ SND_PCI_QUIRK(0x1179, 0xff31, "Toshiba L30-149", ALC861VD_FIX_DALLAS),
+@@ -12937,7 +12863,7 @@ static const struct hda_fixup alc662_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc662_fixup_tbl[] = {
++static const struct hda_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1019, 0x9087, "ECS", ALC662_FIXUP_ASUS_MODE2),
+ SND_PCI_QUIRK(0x1019, 0x9859, "JP-IK LEAP W502", ALC897_FIXUP_HEADSET_MIC_PIN3),
+ SND_PCI_QUIRK(0x1025, 0x022f, "Acer Aspire One", ALC662_FIXUP_INV_DMIC),
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index ae1a34c68c6161..bde6b737385831 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -1462,7 +1462,7 @@ static const struct hda_model_fixup stac9200_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac9200_fixup_tbl[] = {
++static const struct hda_quirk stac9200_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_REF),
+@@ -1683,7 +1683,7 @@ static const struct hda_model_fixup stac925x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac925x_fixup_tbl[] = {
++static const struct hda_quirk stac925x_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668, "DFI LanParty", STAC_REF),
+ SND_PCI_QUIRK(PCI_VENDOR_ID_DFI, 0x3101, "DFI LanParty", STAC_REF),
+@@ -1957,7 +1957,7 @@ static const struct hda_model_fixup stac92hd73xx_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac92hd73xx_fixup_tbl[] = {
++static const struct hda_quirk stac92hd73xx_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_92HD73XX_REF),
+@@ -2753,7 +2753,7 @@ static const struct hda_model_fixup stac92hd83xxx_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac92hd83xxx_fixup_tbl[] = {
++static const struct hda_quirk stac92hd83xxx_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_92HD83XXX_REF),
+@@ -3236,7 +3236,7 @@ static const struct hda_model_fixup stac92hd71bxx_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac92hd71bxx_fixup_tbl[] = {
++static const struct hda_quirk stac92hd71bxx_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_92HD71BXX_REF),
+@@ -3496,7 +3496,7 @@ static const struct hda_pintbl ecs202_pin_configs[] = {
+ };
+
+ /* codec SSIDs for Intel Mac sharing the same PCI SSID 8384:7680 */
+-static const struct snd_pci_quirk stac922x_intel_mac_fixup_tbl[] = {
++static const struct hda_quirk stac922x_intel_mac_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x0000, 0x0100, "Mac Mini", STAC_INTEL_MAC_V3),
+ SND_PCI_QUIRK(0x106b, 0x0800, "Mac", STAC_INTEL_MAC_V1),
+ SND_PCI_QUIRK(0x106b, 0x0600, "Mac", STAC_INTEL_MAC_V2),
+@@ -3640,7 +3640,7 @@ static const struct hda_model_fixup stac922x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac922x_fixup_tbl[] = {
++static const struct hda_quirk stac922x_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_D945_REF),
+@@ -3968,7 +3968,7 @@ static const struct hda_model_fixup stac927x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac927x_fixup_tbl[] = {
++static const struct hda_quirk stac927x_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_D965_REF),
+@@ -4178,7 +4178,7 @@ static const struct hda_model_fixup stac9205_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac9205_fixup_tbl[] = {
++static const struct hda_quirk stac9205_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_9205_REF),
+@@ -4255,7 +4255,7 @@ static const struct hda_fixup stac92hd95_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk stac92hd95_fixup_tbl[] = {
++static const struct hda_quirk stac92hd95_fixup_tbl[] = {
+ SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1911, "HP Spectre 13", STAC_92HD95_HP_BASS),
+ {} /* terminator */
+ };
+@@ -5002,7 +5002,7 @@ static const struct hda_fixup stac9872_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk stac9872_fixup_tbl[] = {
++static const struct hda_quirk stac9872_fixup_tbl[] = {
+ SND_PCI_QUIRK_MASK(0x104d, 0xfff0, 0x81e0,
+ "Sony VAIO F/S", STAC_9872_VAIO),
+ {} /* terminator */
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index a8ef4bb70dd057..d0893059b1b9b7 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -1035,7 +1035,7 @@ static const struct hda_fixup via_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk vt2002p_fixups[] = {
++static const struct hda_quirk vt2002p_fixups[] = {
+ SND_PCI_QUIRK(0x1043, 0x13f7, "Asus B23E", VIA_FIXUP_POWER_SAVE),
+ SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75),
+ SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 5153a68d8c0795..e38c5885dadfbc 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -220,6 +220,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21J6"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21M1"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -416,6 +423,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Xiaomi Book Pro 14 2022"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "TIMI"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Redmi G 2022"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index 74caae52e1273f..d9df29a26f4f21 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -185,84 +185,97 @@ static const struct snd_pcm_chmap_elem hdmi_codec_8ch_chmaps[] = {
+ /*
+ * hdmi_codec_channel_alloc: speaker configuration available for CEA
+ *
+- * This is an ordered list that must match with hdmi_codec_8ch_chmaps struct
++ * This is an ordered list where ca_id must exist in hdmi_codec_8ch_chmaps
+ * The preceding ones have better chances to be selected by
+ * hdmi_codec_get_ch_alloc_table_idx().
+ */
+ static const struct hdmi_codec_cea_spk_alloc hdmi_codec_channel_alloc[] = {
+ { .ca_id = 0x00, .n_ch = 2,
+- .mask = FL | FR},
+- /* 2.1 */
+- { .ca_id = 0x01, .n_ch = 4,
+- .mask = FL | FR | LFE},
+- /* Dolby Surround */
++ .mask = FL | FR },
++ { .ca_id = 0x03, .n_ch = 4,
++ .mask = FL | FR | LFE | FC },
+ { .ca_id = 0x02, .n_ch = 4,
+ .mask = FL | FR | FC },
+- /* surround51 */
++ { .ca_id = 0x01, .n_ch = 4,
++ .mask = FL | FR | LFE },
+ { .ca_id = 0x0b, .n_ch = 6,
+- .mask = FL | FR | LFE | FC | RL | RR},
+- /* surround40 */
+- { .ca_id = 0x08, .n_ch = 6,
+- .mask = FL | FR | RL | RR },
+- /* surround41 */
+- { .ca_id = 0x09, .n_ch = 6,
+- .mask = FL | FR | LFE | RL | RR },
+- /* surround50 */
++ .mask = FL | FR | LFE | FC | RL | RR },
+ { .ca_id = 0x0a, .n_ch = 6,
+ .mask = FL | FR | FC | RL | RR },
+- /* 6.1 */
+- { .ca_id = 0x0f, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | RL | RR | RC },
+- /* surround71 */
++ { .ca_id = 0x09, .n_ch = 6,
++ .mask = FL | FR | LFE | RL | RR },
++ { .ca_id = 0x08, .n_ch = 6,
++ .mask = FL | FR | RL | RR },
++ { .ca_id = 0x07, .n_ch = 6,
++ .mask = FL | FR | LFE | FC | RC },
++ { .ca_id = 0x06, .n_ch = 6,
++ .mask = FL | FR | FC | RC },
++ { .ca_id = 0x05, .n_ch = 6,
++ .mask = FL | FR | LFE | RC },
++ { .ca_id = 0x04, .n_ch = 6,
++ .mask = FL | FR | RC },
+ { .ca_id = 0x13, .n_ch = 8,
+ .mask = FL | FR | LFE | FC | RL | RR | RLC | RRC },
+- /* others */
+- { .ca_id = 0x03, .n_ch = 8,
+- .mask = FL | FR | LFE | FC },
+- { .ca_id = 0x04, .n_ch = 8,
+- .mask = FL | FR | RC},
+- { .ca_id = 0x05, .n_ch = 8,
+- .mask = FL | FR | LFE | RC },
+- { .ca_id = 0x06, .n_ch = 8,
+- .mask = FL | FR | FC | RC },
+- { .ca_id = 0x07, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | RC },
+- { .ca_id = 0x0c, .n_ch = 8,
+- .mask = FL | FR | RC | RL | RR },
+- { .ca_id = 0x0d, .n_ch = 8,
+- .mask = FL | FR | LFE | RL | RR | RC },
+- { .ca_id = 0x0e, .n_ch = 8,
+- .mask = FL | FR | FC | RL | RR | RC },
+- { .ca_id = 0x10, .n_ch = 8,
+- .mask = FL | FR | RL | RR | RLC | RRC },
+- { .ca_id = 0x11, .n_ch = 8,
+- .mask = FL | FR | LFE | RL | RR | RLC | RRC },
++ { .ca_id = 0x1f, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RL | RR | FLC | FRC },
+ { .ca_id = 0x12, .n_ch = 8,
+ .mask = FL | FR | FC | RL | RR | RLC | RRC },
+- { .ca_id = 0x14, .n_ch = 8,
+- .mask = FL | FR | FLC | FRC },
+- { .ca_id = 0x15, .n_ch = 8,
+- .mask = FL | FR | LFE | FLC | FRC },
+- { .ca_id = 0x16, .n_ch = 8,
+- .mask = FL | FR | FC | FLC | FRC },
+- { .ca_id = 0x17, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | FLC | FRC },
+- { .ca_id = 0x18, .n_ch = 8,
+- .mask = FL | FR | RC | FLC | FRC },
+- { .ca_id = 0x19, .n_ch = 8,
+- .mask = FL | FR | LFE | RC | FLC | FRC },
+- { .ca_id = 0x1a, .n_ch = 8,
+- .mask = FL | FR | RC | FC | FLC | FRC },
+- { .ca_id = 0x1b, .n_ch = 8,
+- .mask = FL | FR | LFE | RC | FC | FLC | FRC },
+- { .ca_id = 0x1c, .n_ch = 8,
+- .mask = FL | FR | RL | RR | FLC | FRC },
+- { .ca_id = 0x1d, .n_ch = 8,
+- .mask = FL | FR | LFE | RL | RR | FLC | FRC },
+ { .ca_id = 0x1e, .n_ch = 8,
+ .mask = FL | FR | FC | RL | RR | FLC | FRC },
+- { .ca_id = 0x1f, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | RL | RR | FLC | FRC },
++ { .ca_id = 0x11, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR | RLC | RRC },
++ { .ca_id = 0x1d, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR | FLC | FRC },
++ { .ca_id = 0x10, .n_ch = 8,
++ .mask = FL | FR | RL | RR | RLC | RRC },
++ { .ca_id = 0x1c, .n_ch = 8,
++ .mask = FL | FR | RL | RR | FLC | FRC },
++ { .ca_id = 0x0f, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RL | RR | RC },
++ { .ca_id = 0x1b, .n_ch = 8,
++ .mask = FL | FR | LFE | RC | FC | FLC | FRC },
++ { .ca_id = 0x0e, .n_ch = 8,
++ .mask = FL | FR | FC | RL | RR | RC },
++ { .ca_id = 0x1a, .n_ch = 8,
++ .mask = FL | FR | RC | FC | FLC | FRC },
++ { .ca_id = 0x0d, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR | RC },
++ { .ca_id = 0x19, .n_ch = 8,
++ .mask = FL | FR | LFE | RC | FLC | FRC },
++ { .ca_id = 0x0c, .n_ch = 8,
++ .mask = FL | FR | RC | RL | RR },
++ { .ca_id = 0x18, .n_ch = 8,
++ .mask = FL | FR | RC | FLC | FRC },
++ { .ca_id = 0x17, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | FLC | FRC },
++ { .ca_id = 0x16, .n_ch = 8,
++ .mask = FL | FR | FC | FLC | FRC },
++ { .ca_id = 0x15, .n_ch = 8,
++ .mask = FL | FR | LFE | FLC | FRC },
++ { .ca_id = 0x14, .n_ch = 8,
++ .mask = FL | FR | FLC | FRC },
++ { .ca_id = 0x0b, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RL | RR },
++ { .ca_id = 0x0a, .n_ch = 8,
++ .mask = FL | FR | FC | RL | RR },
++ { .ca_id = 0x09, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR },
++ { .ca_id = 0x08, .n_ch = 8,
++ .mask = FL | FR | RL | RR },
++ { .ca_id = 0x07, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RC },
++ { .ca_id = 0x06, .n_ch = 8,
++ .mask = FL | FR | FC | RC },
++ { .ca_id = 0x05, .n_ch = 8,
++ .mask = FL | FR | LFE | RC },
++ { .ca_id = 0x04, .n_ch = 8,
++ .mask = FL | FR | RC },
++ { .ca_id = 0x03, .n_ch = 8,
++ .mask = FL | FR | LFE | FC },
++ { .ca_id = 0x02, .n_ch = 8,
++ .mask = FL | FR | FC },
++ { .ca_id = 0x01, .n_ch = 8,
++ .mask = FL | FR | LFE },
+ };
+
+ struct hdmi_codec_priv {
+@@ -371,7 +384,8 @@ static int hdmi_codec_chmap_ctl_get(struct snd_kcontrol *kcontrol,
+ struct snd_pcm_chmap *info = snd_kcontrol_chip(kcontrol);
+ struct hdmi_codec_priv *hcp = info->private_data;
+
+- map = info->chmap[hcp->chmap_idx].map;
++ if (hcp->chmap_idx != HDMI_CODEC_CHMAP_IDX_UNKNOWN)
++ map = info->chmap[hcp->chmap_idx].map;
+
+ for (i = 0; i < info->max_channels; i++) {
+ if (hcp->chmap_idx == HDMI_CODEC_CHMAP_IDX_UNKNOWN)
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 4af81158035681..945f9c0a6a5455 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -509,7 +509,7 @@ static int avs_pcm_hw_constraints_init(struct snd_pcm_substream *substream)
+ SNDRV_PCM_HW_PARAM_FORMAT, SNDRV_PCM_HW_PARAM_CHANNELS,
+ SNDRV_PCM_HW_PARAM_RATE, -1);
+
+- return ret;
++ return 0;
+ }
+
+ static int avs_dai_fe_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index bc581fea0e3a16..866589fece7a3d 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -870,6 +870,13 @@ static const struct platform_device_id board_ids[] = {
+ SOF_SSP_PORT_BT_OFFLOAD(2) |
+ SOF_BT_OFFLOAD_PRESENT),
+ },
++ {
++ .name = "mtl_rt5682_c1_h02",
++ .driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
++ SOF_SSP_PORT_CODEC(1) |
++ /* SSP 0 and SSP 2 are used for HDMI IN */
++ SOF_SSP_MASK_HDMI_CAPTURE(0x5)),
++ },
+ {
+ .name = "arl_rt5682_c1_h02",
+ .driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 4a0ab50d1e50dc..a58842a8c8a641 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -580,6 +580,47 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ },
+ .driver_data = (void *)(SOC_SDW_CODEC_SPKR),
+ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3838")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3832")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "380E")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C")
++ },
++ /* Note this quirk excludes the CODEC mic */
++ .driver_data = (void *)(SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
+
+ /* ArrowLake devices */
+ {
+diff --git a/sound/soc/intel/common/soc-acpi-intel-arl-match.c b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+index 072b8486d0727c..24d850df77ca8e 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-arl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+@@ -44,6 +44,31 @@ static const struct snd_soc_acpi_endpoint spk_3_endpoint = {
+ .group_id = 1,
+ };
+
++/*
++ * RT722 is a multi-function codec, three endpoints are created for
++ * its headset, amp and dmic functions.
++ */
++static const struct snd_soc_acpi_endpoint rt722_endpoints[] = {
++ {
++ .num = 0,
++ .aggregated = 0,
++ .group_position = 0,
++ .group_id = 0,
++ },
++ {
++ .num = 1,
++ .aggregated = 0,
++ .group_position = 0,
++ .group_id = 0,
++ },
++ {
++ .num = 2,
++ .aggregated = 0,
++ .group_position = 0,
++ .group_id = 0,
++ },
++};
++
+ static const struct snd_soc_acpi_adr_device cs35l56_2_lr_adr[] = {
+ {
+ .adr = 0x00023001FA355601ull,
+@@ -185,6 +210,24 @@ static const struct snd_soc_acpi_adr_device rt711_sdca_0_adr[] = {
+ }
+ };
+
++static const struct snd_soc_acpi_adr_device rt722_0_single_adr[] = {
++ {
++ .adr = 0x000030025D072201ull,
++ .num_endpoints = ARRAY_SIZE(rt722_endpoints),
++ .endpoints = rt722_endpoints,
++ .name_prefix = "rt722"
++ }
++};
++
++static const struct snd_soc_acpi_adr_device rt1320_2_single_adr[] = {
++ {
++ .adr = 0x000230025D132001ull,
++ .num_endpoints = 1,
++ .endpoints = &single_endpoint,
++ .name_prefix = "rt1320-1"
++ }
++};
++
+ static const struct snd_soc_acpi_link_adr arl_cs42l43_l0[] = {
+ {
+ .mask = BIT(0),
+@@ -287,6 +330,20 @@ static const struct snd_soc_acpi_link_adr arl_sdca_rvp[] = {
+ {}
+ };
+
++static const struct snd_soc_acpi_link_adr arl_rt722_l0_rt1320_l2[] = {
++ {
++ .mask = BIT(0),
++ .num_adr = ARRAY_SIZE(rt722_0_single_adr),
++ .adr_d = rt722_0_single_adr,
++ },
++ {
++ .mask = BIT(2),
++ .num_adr = ARRAY_SIZE(rt1320_2_single_adr),
++ .adr_d = rt1320_2_single_adr,
++ },
++ {}
++};
++
+ static const struct snd_soc_acpi_codecs arl_essx_83x6 = {
+ .num_codecs = 3,
+ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
+@@ -385,6 +442,12 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_arl_sdw_machines[] = {
+ .drv_name = "sof_sdw",
+ .sof_tplg_filename = "sof-arl-rt711-l0.tplg",
+ },
++ {
++ .link_mask = BIT(0) | BIT(2),
++ .links = arl_rt722_l0_rt1320_l2,
++ .drv_name = "sof_sdw",
++ .sof_tplg_filename = "sof-arl-rt722-l0_rt1320-l2.tplg",
++ },
+ {},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_arl_sdw_machines);
+diff --git a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+index d4435a34a3a3f4..fd02c864e25ef9 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+@@ -42,6 +42,13 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_mtl_machines[] = {
+ SND_SOC_ACPI_TPLG_INTEL_SSP_MSB |
+ SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER,
+ },
++ {
++ .comp_ids = &mtl_rt5682_rt5682s_hp,
++ .drv_name = "mtl_rt5682_c1_h02",
++ .machine_quirk = snd_soc_acpi_codec_list,
++ .quirk_data = &mtl_lt6911_hdmi,
++ .sof_tplg_filename = "sof-mtl-rt5682-ssp1-hdmi-ssp02.tplg",
++ },
+ /* place boards for each headphone codec: sof driver will complete the
+ * tplg name and machine driver will detect the amp type
+ */
+diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+index 4eed90d13a5326..62429e8e57b559 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c
++++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+@@ -188,9 +188,7 @@ SND_SOC_DAILINK_DEFS(pcm1,
+ SND_SOC_DAILINK_DEFS(ul_src,
+ DAILINK_COMP_ARRAY(COMP_CPU("UL_SRC")),
+ DAILINK_COMP_ARRAY(COMP_CODEC("mt6359-sound",
+- "mt6359-snd-codec-aif1"),
+- COMP_CODEC("dmic-codec",
+- "dmic-hifi")),
++ "mt6359-snd-codec-aif1")),
+ DAILINK_COMP_ARRAY(COMP_EMPTY()));
+
+ SND_SOC_DAILINK_DEFS(AFE_SOF_DL2,
+diff --git a/sound/soc/sdw_utils/soc_sdw_utils.c b/sound/soc/sdw_utils/soc_sdw_utils.c
+index a6070f822eb9e4..e6ac5c0fd3bec8 100644
+--- a/sound/soc/sdw_utils/soc_sdw_utils.c
++++ b/sound/soc/sdw_utils/soc_sdw_utils.c
+@@ -363,6 +363,8 @@ struct asoc_sdw_codec_info codec_info_list[] = {
+ .num_controls = ARRAY_SIZE(generic_spk_controls),
+ .widgets = generic_spk_widgets,
+ .num_widgets = ARRAY_SIZE(generic_spk_widgets),
++ .quirk = SOC_SDW_CODEC_SPKR,
++ .quirk_exclude = true,
+ },
+ {
+ .direction = {false, true},
+@@ -487,6 +489,8 @@ struct asoc_sdw_codec_info codec_info_list[] = {
+ .rtd_init = asoc_sdw_cs42l43_dmic_rtd_init,
+ .widgets = generic_dmic_widgets,
+ .num_widgets = ARRAY_SIZE(generic_dmic_widgets),
++ .quirk = SOC_SDW_CODEC_MIC,
++ .quirk_exclude = true,
+ },
+ {
+ .direction = {false, true},
+@@ -1112,7 +1116,8 @@ int asoc_sdw_parse_sdw_endpoints(struct snd_soc_card *card,
+ dai_info = &codec_info->dais[adr_end->num];
+ soc_dai = asoc_sdw_find_dailink(soc_dais, adr_end);
+
+- if (dai_info->quirk && !(dai_info->quirk & ctx->mc_quirk))
++ if (dai_info->quirk &&
++ !(dai_info->quirk_exclude ^ !!(dai_info->quirk & ctx->mc_quirk)))
+ continue;
+
+ dev_dbg(dev,
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index be61e377e59e03..e98b53b67d12b9 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -20,6 +20,9 @@
+ /* size of tplg ABI in bytes */
+ #define SOF_IPC3_TPLG_ABI_SIZE 3
+
++/* Base of SOF_DAI_INTEL_ALH, this should be aligned with SOC_SDW_INTEL_BIDIR_PDI_BASE */
++#define INTEL_ALH_DAI_INDEX_BASE 2
++
+ struct sof_widget_data {
+ int ctrl_type;
+ int ipc_cmd;
+@@ -1585,14 +1588,26 @@ static int sof_ipc3_widget_setup_comp_dai(struct snd_sof_widget *swidget)
+ ret = sof_update_ipc_object(scomp, comp_dai, SOF_DAI_TOKENS, swidget->tuples,
+ swidget->num_tuples, sizeof(*comp_dai), 1);
+ if (ret < 0)
+- goto free;
++ goto free_comp;
+
+ /* update comp_tokens */
+ ret = sof_update_ipc_object(scomp, &comp_dai->config, SOF_COMP_TOKENS,
+ swidget->tuples, swidget->num_tuples,
+ sizeof(comp_dai->config), 1);
+ if (ret < 0)
+- goto free;
++ goto free_comp;
++
++ /* Subtract the base to match the FW dai index. */
++ if (comp_dai->type == SOF_DAI_INTEL_ALH) {
++ if (comp_dai->dai_index < INTEL_ALH_DAI_INDEX_BASE) {
++ dev_err(sdev->dev,
++ "Invalid ALH dai index %d, only Pin numbers >= %d can be used\n",
++ comp_dai->dai_index, INTEL_ALH_DAI_INDEX_BASE);
++ ret = -EINVAL;
++ goto free_comp;
++ }
++ comp_dai->dai_index -= INTEL_ALH_DAI_INDEX_BASE;
++ }
+
+ dev_dbg(scomp->dev, "dai %s: type %d index %d\n",
+ swidget->widget->name, comp_dai->type, comp_dai->dai_index);
+@@ -2167,8 +2182,16 @@ static int sof_ipc3_dai_config(struct snd_sof_dev *sdev, struct snd_sof_widget *
+ case SOF_DAI_INTEL_ALH:
+ if (data) {
+ /* save the dai_index during hw_params and reuse it for hw_free */
+- if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS)
+- config->dai_index = data->dai_index;
++ if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS) {
++ /* Subtract the base to match the FW dai index. */
++ if (data->dai_index < INTEL_ALH_DAI_INDEX_BASE) {
++ dev_err(sdev->dev,
++ "Invalid ALH dai index %d, only Pin numbers >= %d can be used\n",
++ config->dai_index, INTEL_ALH_DAI_INDEX_BASE);
++ return -EINVAL;
++ }
++ config->dai_index = data->dai_index - INTEL_ALH_DAI_INDEX_BASE;
++ }
+ config->alh.stream_id = data->dai_data;
+ }
+ break;
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 568099467dbbcc..a29f28eb7d0c64 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -403,10 +403,15 @@ static int prepare_inbound_urb(struct snd_usb_endpoint *ep,
+ static void notify_xrun(struct snd_usb_endpoint *ep)
+ {
+ struct snd_usb_substream *data_subs;
++ struct snd_pcm_substream *psubs;
+
+ data_subs = READ_ONCE(ep->data_subs);
+- if (data_subs && data_subs->pcm_substream)
+- snd_pcm_stop_xrun(data_subs->pcm_substream);
++ if (!data_subs)
++ return;
++ psubs = data_subs->pcm_substream;
++ if (psubs && psubs->runtime &&
++ psubs->runtime->state == SNDRV_PCM_STATE_RUNNING)
++ snd_pcm_stop_xrun(psubs);
+ }
+
+ static struct snd_usb_packet_info *
+@@ -562,7 +567,10 @@ static void snd_complete_urb(struct urb *urb)
+ push_back_to_ready_list(ep, ctx);
+ clear_bit(ctx->index, &ep->active_mask);
+ snd_usb_queue_pending_output_urbs(ep, false);
+- atomic_dec(&ep->submitted_urbs); /* decrement at last */
++ /* decrement at last, and check xrun */
++ if (atomic_dec_and_test(&ep->submitted_urbs) &&
++ !snd_usb_endpoint_implicit_feedback_sink(ep))
++ notify_xrun(ep);
+ return;
+ }
+
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index bd67027c767751..0591da2839269b 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1084,6 +1084,21 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ struct snd_kcontrol *kctl)
+ {
+ struct snd_usb_audio *chip = cval->head.mixer->chip;
++
++ if (chip->quirk_flags & QUIRK_FLAG_MIC_RES_384) {
++ if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
++ usb_audio_info(chip,
++ "set resolution quirk: cval->res = 384\n");
++ cval->res = 384;
++ }
++ } else if (chip->quirk_flags & QUIRK_FLAG_MIC_RES_16) {
++ if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
++ usb_audio_info(chip,
++ "set resolution quirk: cval->res = 16\n");
++ cval->res = 16;
++ }
++ }
++
+ switch (chip->usb_id) {
+ case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */
+ case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C600 */
+@@ -1168,27 +1183,6 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ }
+ break;
+
+- case USB_ID(0x046d, 0x0807): /* Logitech Webcam C500 */
+- case USB_ID(0x046d, 0x0808):
+- case USB_ID(0x046d, 0x0809):
+- case USB_ID(0x046d, 0x0819): /* Logitech Webcam C210 */
+- case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */
+- case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
+- case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
+- case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
+- case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
+- case USB_ID(0x046d, 0x0991):
+- case USB_ID(0x046d, 0x09a2): /* QuickCam Communicate Deluxe/S7500 */
+- /* Most audio usb devices lie about volume resolution.
+- * Most Logitech webcams have res = 384.
+- * Probably there is some logitech magic behind this number --fishor
+- */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 384\n");
+- cval->res = 384;
+- }
+- break;
+ case USB_ID(0x0495, 0x3042): /* ESS Technology Asus USB DAC */
+ if ((strstr(kctl->id.name, "Playback Volume") != NULL) ||
+ strstr(kctl->id.name, "Capture Volume") != NULL) {
+@@ -1197,28 +1191,6 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ cval->res = 1;
+ }
+ break;
+- case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 16\n");
+- cval->res = 16;
+- }
+- break;
+- case USB_ID(0x1bcf, 0x2283): /* NexiGo N930AF FHD Webcam */
+- case USB_ID(0x03f0, 0x654a): /* HP 320 FHD Webcam */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 16\n");
+- cval->res = 16;
+- }
+- break;
+- case USB_ID(0x1bcf, 0x2281): /* HD Webcam */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 16\n");
+- cval->res = 16;
+- }
+- break;
+ }
+ }
+
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 23260aa1919d32..0e9b5431a47f20 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -621,6 +621,16 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .id = USB_ID(0x1b1c, 0x0a42),
+ .map = corsair_virtuoso_map,
+ },
++ {
++ /* Corsair HS80 RGB Wireless (wired mode) */
++ .id = USB_ID(0x1b1c, 0x0a6a),
++ .map = corsair_virtuoso_map,
++ },
++ {
++ /* Corsair HS80 RGB Wireless (wireless mode) */
++ .id = USB_ID(0x1b1c, 0x0a6b),
++ .map = corsair_virtuoso_map,
++ },
+ { /* Gigabyte TRX40 Aorus Master (rear panel + front mic) */
+ .id = USB_ID(0x0414, 0xa001),
+ .map = aorus_master_alc1220vb_map,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 6456e87e2f3974..a95ebcf4e46e76 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -4059,6 +4059,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ err = snd_bbfpro_controls_create(mixer);
+ break;
+ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ case USB_ID(0x2a39, 0x3fa0): /* RME Digiface USB (alternate) */
+ err = snd_rme_digiface_controls_create(mixer);
+ break;
+ case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 199d0603cf8e59..3f8beacca27a17 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3616,176 +3616,181 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+-{
+- /* Only claim interface 0 */
+- .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
+- USB_DEVICE_ID_MATCH_PRODUCT |
+- USB_DEVICE_ID_MATCH_INT_CLASS |
+- USB_DEVICE_ID_MATCH_INT_NUMBER,
+- .idVendor = 0x2a39,
+- .idProduct = 0x3f8c,
+- .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+- .bInterfaceNumber = 0,
+- QUIRK_DRIVER_INFO {
+- QUIRK_DATA_COMPOSITE {
++#define QUIRK_RME_DIGIFACE(pid) \
++{ \
++ /* Only claim interface 0 */ \
++ .match_flags = USB_DEVICE_ID_MATCH_VENDOR | \
++ USB_DEVICE_ID_MATCH_PRODUCT | \
++ USB_DEVICE_ID_MATCH_INT_CLASS | \
++ USB_DEVICE_ID_MATCH_INT_NUMBER, \
++ .idVendor = 0x2a39, \
++ .idProduct = pid, \
++ .bInterfaceClass = USB_CLASS_VENDOR_SPEC, \
++ .bInterfaceNumber = 0, \
++ QUIRK_DRIVER_INFO { \
++ QUIRK_DATA_COMPOSITE { \
+ /*
+ * Three modes depending on sample rate band,
+ * with different channel counts for in/out
+- */
+- { QUIRK_DATA_STANDARD_MIXER(0) },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 34, // outputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x02,
+- .ep_idx = 1,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_32000 |
+- SNDRV_PCM_RATE_44100 |
+- SNDRV_PCM_RATE_48000,
+- .rate_min = 32000,
+- .rate_max = 48000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 32000, 44100, 48000,
+- },
+- .sync_ep = 0x81,
+- .sync_iface = 0,
+- .sync_altsetting = 1,
+- .sync_ep_idx = 0,
+- .implicit_fb = 1,
+- },
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 18, // outputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x02,
+- .ep_idx = 1,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_64000 |
+- SNDRV_PCM_RATE_88200 |
+- SNDRV_PCM_RATE_96000,
+- .rate_min = 64000,
+- .rate_max = 96000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 64000, 88200, 96000,
+- },
+- .sync_ep = 0x81,
+- .sync_iface = 0,
+- .sync_altsetting = 1,
+- .sync_ep_idx = 0,
+- .implicit_fb = 1,
+- },
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 10, // outputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x02,
+- .ep_idx = 1,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_KNOT |
+- SNDRV_PCM_RATE_176400 |
+- SNDRV_PCM_RATE_192000,
+- .rate_min = 128000,
+- .rate_max = 192000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 128000, 176400, 192000,
+- },
+- .sync_ep = 0x81,
+- .sync_iface = 0,
+- .sync_altsetting = 1,
+- .sync_ep_idx = 0,
+- .implicit_fb = 1,
+- },
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 32, // inputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x81,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_32000 |
+- SNDRV_PCM_RATE_44100 |
+- SNDRV_PCM_RATE_48000,
+- .rate_min = 32000,
+- .rate_max = 48000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 32000, 44100, 48000,
+- }
+- }
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 16, // inputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x81,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_64000 |
+- SNDRV_PCM_RATE_88200 |
+- SNDRV_PCM_RATE_96000,
+- .rate_min = 64000,
+- .rate_max = 96000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 64000, 88200, 96000,
+- }
+- }
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 8, // inputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x81,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_KNOT |
+- SNDRV_PCM_RATE_176400 |
+- SNDRV_PCM_RATE_192000,
+- .rate_min = 128000,
+- .rate_max = 192000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 128000, 176400, 192000,
+- }
+- }
+- },
+- QUIRK_COMPOSITE_END
+- }
+- }
+-},
++ */ \
++ { QUIRK_DATA_STANDARD_MIXER(0) }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 34, /* outputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x02, \
++ .ep_idx = 1, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_32000 | \
++ SNDRV_PCM_RATE_44100 | \
++ SNDRV_PCM_RATE_48000, \
++ .rate_min = 32000, \
++ .rate_max = 48000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 32000, 44100, 48000, \
++ }, \
++ .sync_ep = 0x81, \
++ .sync_iface = 0, \
++ .sync_altsetting = 1, \
++ .sync_ep_idx = 0, \
++ .implicit_fb = 1, \
++ }, \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 18, /* outputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x02, \
++ .ep_idx = 1, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_64000 | \
++ SNDRV_PCM_RATE_88200 | \
++ SNDRV_PCM_RATE_96000, \
++ .rate_min = 64000, \
++ .rate_max = 96000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 64000, 88200, 96000, \
++ }, \
++ .sync_ep = 0x81, \
++ .sync_iface = 0, \
++ .sync_altsetting = 1, \
++ .sync_ep_idx = 0, \
++ .implicit_fb = 1, \
++ }, \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 10, /* outputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x02, \
++ .ep_idx = 1, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_KNOT | \
++ SNDRV_PCM_RATE_176400 | \
++ SNDRV_PCM_RATE_192000, \
++ .rate_min = 128000, \
++ .rate_max = 192000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 128000, 176400, 192000, \
++ }, \
++ .sync_ep = 0x81, \
++ .sync_iface = 0, \
++ .sync_altsetting = 1, \
++ .sync_ep_idx = 0, \
++ .implicit_fb = 1, \
++ }, \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 32, /* inputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x81, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_32000 | \
++ SNDRV_PCM_RATE_44100 | \
++ SNDRV_PCM_RATE_48000, \
++ .rate_min = 32000, \
++ .rate_max = 48000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 32000, 44100, 48000, \
++ } \
++ } \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 16, /* inputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x81, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_64000 | \
++ SNDRV_PCM_RATE_88200 | \
++ SNDRV_PCM_RATE_96000, \
++ .rate_min = 64000, \
++ .rate_max = 96000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 64000, 88200, 96000, \
++ } \
++ } \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 8, /* inputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x81, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_KNOT | \
++ SNDRV_PCM_RATE_176400 | \
++ SNDRV_PCM_RATE_192000, \
++ .rate_min = 128000, \
++ .rate_max = 192000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 128000, 176400, 192000, \
++ } \
++ } \
++ }, \
++ QUIRK_COMPOSITE_END \
++ } \
++ } \
++}
++
++QUIRK_RME_DIGIFACE(0x3f8c),
++QUIRK_RME_DIGIFACE(0x3fa0),
++
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 8538fdfce3535b..00101875d9a8d5 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -555,7 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
+ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf)
+ {
+ struct usb_host_config *config = dev->actconfig;
+- struct usb_device_descriptor new_device_descriptor;
++ struct usb_device_descriptor *new_device_descriptor __free(kfree) = NULL;
+ int err;
+
+ if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD ||
+@@ -566,15 +566,19 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac
+ 0x10, 0x43, 0x0001, 0x000a, NULL, 0);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error sending boot message: %d\n", err);
++
++ new_device_descriptor = kmalloc(sizeof(*new_device_descriptor), GFP_KERNEL);
++ if (!new_device_descriptor)
++ return -ENOMEM;
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &new_device_descriptor, sizeof(new_device_descriptor));
++ new_device_descriptor, sizeof(*new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+- if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ if (new_device_descriptor->bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+- new_device_descriptor.bNumConfigurations);
++ new_device_descriptor->bNumConfigurations);
+ else
+- memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
++ memcpy(&dev->descriptor, new_device_descriptor, sizeof(dev->descriptor));
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err);
+@@ -906,7 +910,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev)
+ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
+- struct usb_device_descriptor new_device_descriptor;
++ struct usb_device_descriptor *new_device_descriptor __free(kfree) = NULL;
+ int err;
+ u8 bootresponse[0x12];
+ int fwsize;
+@@ -941,15 +945,19 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+
+ dev_dbg(&dev->dev, "device initialised!\n");
+
++ new_device_descriptor = kmalloc(sizeof(*new_device_descriptor), GFP_KERNEL);
++ if (!new_device_descriptor)
++ return -ENOMEM;
++
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &new_device_descriptor, sizeof(new_device_descriptor));
++ new_device_descriptor, sizeof(*new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+- if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ if (new_device_descriptor->bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+- new_device_descriptor.bNumConfigurations);
++ new_device_descriptor->bNumConfigurations);
+ else
+- memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
++ memcpy(&dev->descriptor, new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1259,7 +1267,7 @@ static void mbox3_setup_defaults(struct usb_device *dev)
+ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
+- struct usb_device_descriptor new_device_descriptor;
++ struct usb_device_descriptor *new_device_descriptor __free(kfree) = NULL;
+ int err;
+ int descriptor_size;
+
+@@ -1272,15 +1280,19 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+
+ dev_dbg(&dev->dev, "MBOX3: device initialised!\n");
+
++ new_device_descriptor = kmalloc(sizeof(*new_device_descriptor), GFP_KERNEL);
++ if (!new_device_descriptor)
++ return -ENOMEM;
++
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &new_device_descriptor, sizeof(new_device_descriptor));
++ new_device_descriptor, sizeof(*new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "MBOX3: error usb_get_descriptor: %d\n", err);
+- if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ if (new_device_descriptor->bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "MBOX3: error too large bNumConfigurations: %d\n",
+- new_device_descriptor.bNumConfigurations);
++ new_device_descriptor->bNumConfigurations);
+ else
+- memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
++ memcpy(&dev->descriptor, new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1653,6 +1665,7 @@ int snd_usb_apply_boot_quirk(struct usb_device *dev,
+ return snd_usb_motu_microbookii_boot_quirk(dev);
+ break;
+ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ case USB_ID(0x2a39, 0x3fa0): /* RME Digiface USB (alternate) */
+ return snd_usb_rme_digiface_boot_quirk(dev);
+ }
+
+@@ -1866,6 +1879,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ mbox3_set_format_quirk(subs, fmt); /* Digidesign Mbox 3 */
+ break;
+ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ case USB_ID(0x2a39, 0x3fa0): /* RME Digiface USB (alternate) */
+ rme_digiface_set_format_quirk(subs);
+ break;
+ }
+@@ -2130,7 +2144,7 @@ struct usb_audio_quirk_flags_table {
+ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ /* Device matches */
+ DEVICE_FLG(0x03f0, 0x654a, /* HP 320 FHD Webcam */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x041e, 0x3000, /* Creative SB Extigy */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x041e, 0x4080, /* Creative Live Cam VF0610 */
+@@ -2138,10 +2152,31 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ DEVICE_FLG(0x045e, 0x083c, /* MS USB Link headset */
+ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY |
+ QUIRK_FLAG_DISABLE_AUTOSUSPEND),
++ DEVICE_FLG(0x046d, 0x0807, /* Logitech Webcam C500 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0808, /* Logitech Webcam C600 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0809,
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0819, /* Logitech Webcam C210 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x081b, /* HD Webcam c310 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x081d, /* HD Webcam c510 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0825, /* HD Webcam c270 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0826, /* HD Webcam c525 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x084c, /* Logitech ConferenceCam Connect */
+ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY_1M),
++ DEVICE_FLG(0x046d, 0x08ca, /* Logitech Quickcam Fusion */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x0991, /* Logitech QuickCam Pro */
+- QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR |
++ QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x09a2, /* QuickCam Communicate Deluxe/S7500 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
+@@ -2209,7 +2244,7 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ DEVICE_FLG(0x0fd9, 0x0008, /* Hauppauge HVR-950Q */
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x1397, 0x0507, /* Behringer UMC202HD */
+@@ -2247,9 +2282,9 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x1bcf, 0x2283, /* NexiGo N930AF FHD Webcam */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x2040, 0x7200, /* Hauppauge HVR-950Q */
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x2040, 0x7201, /* Hauppauge HVR-950Q-MXL */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index b0f042c996087e..158ec053dc44dd 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -194,6 +194,8 @@ extern bool snd_usb_skip_validation;
+ * QUIRK_FLAG_FIXED_RATE
+ * Do not set PCM rate (frequency) when only one rate is available
+ * for the given endpoint.
++ * QUIRK_FLAG_MIC_RES_16 and QUIRK_FLAG_MIC_RES_384
++ * Set the fixed resolution for Mic Capture Volume (mostly for webcams)
+ */
+
+ #define QUIRK_FLAG_GET_SAMPLE_RATE (1U << 0)
+@@ -218,5 +220,7 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_IFACE_SKIP_CLOSE (1U << 19)
+ #define QUIRK_FLAG_FORCE_IFACE_RESET (1U << 20)
+ #define QUIRK_FLAG_FIXED_RATE (1U << 21)
++#define QUIRK_FLAG_MIC_RES_16 (1U << 22)
++#define QUIRK_FLAG_MIC_RES_384 (1U << 23)
+
+ #endif /* __USBAUDIO_H */
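The quirk flags above are plain bit masks: each device entry in
quirk_flags_table ORs together every flag the device needs, and the
driver tests individual bits at runtime. A minimal stand-alone sketch
of that pattern (the flags word and the printf stand in for the
driver's real bookkeeping):

    #include <stdio.h>

    #define QUIRK_FLAG_GET_SAMPLE_RATE (1U << 0)
    #define QUIRK_FLAG_MIC_RES_16      (1U << 22)
    #define QUIRK_FLAG_MIC_RES_384     (1U << 23)

    int main(void)
    {
        /* One word per device, combining all of its quirks. */
        unsigned int flags = QUIRK_FLAG_GET_SAMPLE_RATE |
                             QUIRK_FLAG_MIC_RES_16;

        /* Each quirk is then tested independently. */
        if (flags & QUIRK_FLAG_MIC_RES_16)
            printf("fix Mic Capture Volume resolution to 16\n");
        return 0;
    }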
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 2ff949ea82fa66..e71be67f1d8658 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -822,11 +822,18 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ printf("%s:\n", sym_name);
+ }
+
+- if (disasm_print_insn(img, lens[i], opcodes,
+- name, disasm_opt, btf,
+- prog_linfo, ksyms[i], i,
+- linum))
+- goto exit_free;
++ if (ksyms) {
++ if (disasm_print_insn(img, lens[i], opcodes,
++ name, disasm_opt, btf,
++ prog_linfo, ksyms[i], i,
++ linum))
++ goto exit_free;
++ } else {
++ if (disasm_print_insn(img, lens[i], opcodes,
++ name, disasm_opt, btf,
++ NULL, 0, 0, false))
++ goto exit_free;
++ }
+
+ img += lens[i];
+
+diff --git a/tools/scripts/Makefile.arch b/tools/scripts/Makefile.arch
+index f6a50f06dfc453..eabfe9f411d914 100644
+--- a/tools/scripts/Makefile.arch
++++ b/tools/scripts/Makefile.arch
+@@ -7,8 +7,8 @@ HOSTARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+ -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+ -e s/riscv.*/riscv/ -e s/loongarch.*/loongarch/)
+
+-ifndef ARCH
+-ARCH := $(HOSTARCH)
++ifeq ($(strip $(ARCH)),)
++override ARCH := $(HOSTARCH)
+ endif
+
+ SRCARCH := $(ARCH)
+diff --git a/tools/testing/selftests/arm64/fp/fp-stress.c b/tools/testing/selftests/arm64/fp/fp-stress.c
+index faac24bdefeb94..80f22789504d66 100644
+--- a/tools/testing/selftests/arm64/fp/fp-stress.c
++++ b/tools/testing/selftests/arm64/fp/fp-stress.c
+@@ -79,7 +79,7 @@ static void child_start(struct child_data *child, const char *program)
+ */
+ ret = dup2(pipefd[1], 1);
+ if (ret == -1) {
+- fprintf(stderr, "dup2() %d\n", errno);
++ printf("dup2() %d\n", errno);
+ exit(EXIT_FAILURE);
+ }
+
+@@ -89,7 +89,7 @@ static void child_start(struct child_data *child, const char *program)
+ */
+ ret = dup2(startup_pipe[0], 3);
+ if (ret == -1) {
+- fprintf(stderr, "dup2() %d\n", errno);
++ printf("dup2() %d\n", errno);
+ exit(EXIT_FAILURE);
+ }
+
+@@ -107,16 +107,15 @@ static void child_start(struct child_data *child, const char *program)
+ */
+ ret = read(3, &i, sizeof(i));
+ if (ret < 0)
+- fprintf(stderr, "read(startp pipe) failed: %s (%d)\n",
+- strerror(errno), errno);
++ printf("read(startp pipe) failed: %s (%d)\n",
++ strerror(errno), errno);
+ if (ret > 0)
+- fprintf(stderr, "%d bytes of data on startup pipe\n",
+- ret);
++ printf("%d bytes of data on startup pipe\n", ret);
+ close(3);
+
+ ret = execl(program, program, NULL);
+- fprintf(stderr, "execl(%s) failed: %d (%s)\n",
+- program, errno, strerror(errno));
++ printf("execl(%s) failed: %d (%s)\n",
++ program, errno, strerror(errno));
+
+ exit(EXIT_FAILURE);
+ } else {
+diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
+index b743daa772f55f..5a07b3958fbf29 100644
+--- a/tools/testing/selftests/arm64/pauth/pac.c
++++ b/tools/testing/selftests/arm64/pauth/pac.c
+@@ -182,6 +182,9 @@ int exec_sign_all(struct signatures *signed_vals, size_t val)
+ return -1;
+ }
+
++ close(new_stdin[1]);
++ close(new_stdout[0]);
++
+ return 0;
+ }
+
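The two close() calls added above drop the parent's copies of the pipe
ends that only the child should hold; otherwise the reader never sees
EOF and the descriptors leak across test iterations. A self-contained
sketch of the same hygiene, independent of the selftest:

    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        pid_t pid;
        char c;

        if (pipe(fds) < 0 || (pid = fork()) < 0)
            exit(EXIT_FAILURE);

        if (pid == 0) {                 /* child: writes only */
            close(fds[0]);
            write(fds[1], "x", 1);
            _exit(EXIT_SUCCESS);
        }

        close(fds[1]);                  /* parent: reads only */
        while (read(fds[0], &c, 1) > 0) /* EOF once the child exits */
            ;
        close(fds[0]);
        return 0;
    }

Had the parent kept fds[1] open, the final read() would block forever,
which is exactly the class of hang the hunk above avoids.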
+diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+index 7c881bca9af5c7..a7a6ae6c162fe0 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
++++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+@@ -35,9 +35,9 @@ __description("uninitialized iter in ->next()")
+ __failure __msg("expected an initialized iter_bits as arg #1")
+ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+ {
+- struct bpf_iter_bits *it = NULL;
++ struct bpf_iter_bits it = {};
+
+- bpf_iter_bits_next(it);
++ bpf_iter_bits_next(&it);
+ return 0;
+ }
+
+diff --git a/tools/testing/selftests/damon/Makefile b/tools/testing/selftests/damon/Makefile
+index 5b2a6a5dd1af7f..812f656260fba9 100644
+--- a/tools/testing/selftests/damon/Makefile
++++ b/tools/testing/selftests/damon/Makefile
+@@ -6,7 +6,7 @@ TEST_GEN_FILES += debugfs_target_ids_read_before_terminate_race
+ TEST_GEN_FILES += debugfs_target_ids_pid_leak
+ TEST_GEN_FILES += access_memory access_memory_even
+
+-TEST_FILES = _chk_dependency.sh _debugfs_common.sh
++TEST_FILES = _chk_dependency.sh _debugfs_common.sh _damon_sysfs.py
+
+ # functionality tests
+ TEST_PROGS = debugfs_attrs.sh debugfs_schemes.sh debugfs_target_ids.sh
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+index a16c6a6f6055cf..8f1c58f0c2397f 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+@@ -111,7 +111,7 @@ check_error 'p vfs_read $arg* ^$arg*' # DOUBLE_ARGS
+ if !grep -q 'kernel return probes support:' README; then
+ check_error 'r vfs_read ^$arg*' # NOFENTRY_ARGS
+ fi
+-check_error 'p vfs_read+8 ^$arg*' # NOFENTRY_ARGS
++check_error 'p vfs_read+20 ^$arg*' # NOFENTRY_ARGS
+ check_error 'p vfs_read ^hoge' # NO_BTFARG
+ check_error 'p kfree ^$arg10' # NO_BTFARG (exceed the number of parameters)
+ check_error 'r kfree ^$retval' # NO_RETVAL
+diff --git a/tools/testing/selftests/hid/run-hid-tools-tests.sh b/tools/testing/selftests/hid/run-hid-tools-tests.sh
+index bdae8464da8656..af1682a53c27e1 100755
+--- a/tools/testing/selftests/hid/run-hid-tools-tests.sh
++++ b/tools/testing/selftests/hid/run-hid-tools-tests.sh
+@@ -2,24 +2,26 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Runs tests for the HID subsystem
+
++KSELFTEST_SKIP_TEST=4
++
+ if ! command -v python3 > /dev/null 2>&1; then
+ echo "hid-tools: [SKIP] python3 not installed"
+- exit 77
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ if ! python3 -c "import pytest" > /dev/null 2>&1; then
+- echo "hid: [SKIP/ pytest module not installed"
+- exit 77
++ echo "hid: [SKIP] pytest module not installed"
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ if ! python3 -c "import pytest_tap" > /dev/null 2>&1; then
+- echo "hid: [SKIP/ pytest_tap module not installed"
+- exit 77
++ echo "hid: [SKIP] pytest_tap module not installed"
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ if ! python3 -c "import hidtools" > /dev/null 2>&1; then
+- echo "hid: [SKIP/ hid-tools module not installed"
+- exit 77
++ echo "hid: [SKIP] hid-tools module not installed"
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ TARGET=${TARGET:=.}
+diff --git a/tools/testing/selftests/mm/hugetlb_dio.c b/tools/testing/selftests/mm/hugetlb_dio.c
+index 432d5af15e66b7..db63abe5ee5e85 100644
+--- a/tools/testing/selftests/mm/hugetlb_dio.c
++++ b/tools/testing/selftests/mm/hugetlb_dio.c
+@@ -76,19 +76,15 @@ void run_dio_using_hugetlb(unsigned int start_off, unsigned int end_off)
+ /* Get the free huge pages after unmap*/
+ free_hpage_a = get_free_hugepages();
+
++ ksft_print_msg("No. Free pages before allocation : %d\n", free_hpage_b);
++ ksft_print_msg("No. Free pages after munmap : %d\n", free_hpage_a);
++
+ /*
+ * If the no. of free hugepages before allocation and after unmap does
+ * not match - that means there could still be a page which is pinned.
+ */
+- if (free_hpage_a != free_hpage_b) {
+- ksft_print_msg("No. Free pages before allocation : %d\n", free_hpage_b);
+- ksft_print_msg("No. Free pages after munmap : %d\n", free_hpage_a);
+- ksft_test_result_fail(": Huge pages not freed!\n");
+- } else {
+- ksft_print_msg("No. Free pages before allocation : %d\n", free_hpage_b);
+- ksft_print_msg("No. Free pages after munmap : %d\n", free_hpage_a);
+- ksft_test_result_pass(": Huge pages freed successfully !\n");
+- }
++ ksft_test_result(free_hpage_a == free_hpage_b,
++ "free huge pages from %u-%u\n", start_off, end_off);
+ }
+
+ int main(void)
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index f118f659e89600..e92e4f463f37bb 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -159,7 +159,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
+
+ return -1;
+ }
+- if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
++ if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
+ ksft_perror("Could not get iMC cas count read");
+ fclose(fp);
+
+@@ -177,7 +177,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
+
+ return -1;
+ }
+- if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
++ if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
+ ksft_perror("Could not get iMC cas count write");
+ fclose(fp);
+
+diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
+index 250c320349a785..a53cd1cb6e0c64 100644
+--- a/tools/testing/selftests/resctrl/resctrlfs.c
++++ b/tools/testing/selftests/resctrl/resctrlfs.c
+@@ -182,7 +182,7 @@ int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size
+
+ return -1;
+ }
+- if (fscanf(fp, "%s", cache_str) <= 0) {
++ if (fscanf(fp, "%63s", cache_str) <= 0) {
+ ksft_perror("Could not get cache_size");
+ fclose(fp);
+
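Both resctrl hunks replace an unbounded %s conversion with a field
width one byte smaller than the destination, so fscanf() can never
write past the buffer and still NUL-terminates. A stand-alone sketch
(the input file is arbitrary):

    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        FILE *fp = fopen("/proc/version", "r");

        if (!fp)
            return 1;
        /* "%63s" reads at most sizeof(buf) - 1 bytes and appends the
         * terminating NUL; a bare "%s" could overflow buf. */
        if (fscanf(fp, "%63s", buf) == 1)
            printf("first token: %s\n", buf);
        fclose(fp);
        return 0;
    }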
+diff --git a/tools/testing/selftests/wireguard/qemu/debug.config b/tools/testing/selftests/wireguard/qemu/debug.config
+index 9d172210e2c63f..139fd9aa8b1218 100644
+--- a/tools/testing/selftests/wireguard/qemu/debug.config
++++ b/tools/testing/selftests/wireguard/qemu/debug.config
+@@ -31,7 +31,6 @@ CONFIG_SCHED_DEBUG=y
+ CONFIG_SCHED_INFO=y
+ CONFIG_SCHEDSTATS=y
+ CONFIG_SCHED_STACK_END_CHECK=y
+-CONFIG_DEBUG_TIMEKEEPING=y
+ CONFIG_DEBUG_PREEMPT=y
+ CONFIG_DEBUG_RT_MUTEXES=y
+ CONFIG_DEBUG_SPINLOCK=y
+diff --git a/tools/testing/vsock/vsock_perf.c b/tools/testing/vsock/vsock_perf.c
+index 4e8578f815e08a..8e0a6c0770d372 100644
+--- a/tools/testing/vsock/vsock_perf.c
++++ b/tools/testing/vsock/vsock_perf.c
+@@ -33,7 +33,7 @@
+
+ static unsigned int port = DEFAULT_PORT;
+ static unsigned long buf_size_bytes = DEFAULT_BUF_SIZE_BYTES;
+-static unsigned long vsock_buf_bytes = DEFAULT_VSOCK_BUF_BYTES;
++static unsigned long long vsock_buf_bytes = DEFAULT_VSOCK_BUF_BYTES;
+ static bool zerocopy;
+
+ static void error(const char *s)
+@@ -133,7 +133,7 @@ static float get_gbps(unsigned long bits, time_t ns_delta)
+ ((float)ns_delta / NSEC_PER_SEC);
+ }
+
+-static void run_receiver(unsigned long rcvlowat_bytes)
++static void run_receiver(int rcvlowat_bytes)
+ {
+ unsigned int read_cnt;
+ time_t rx_begin_ns;
+@@ -162,8 +162,8 @@ static void run_receiver(unsigned long rcvlowat_bytes)
+ printf("Run as receiver\n");
+ printf("Listen port %u\n", port);
+ printf("RX buffer %lu bytes\n", buf_size_bytes);
+- printf("vsock buffer %lu bytes\n", vsock_buf_bytes);
+- printf("SO_RCVLOWAT %lu bytes\n", rcvlowat_bytes);
++ printf("vsock buffer %llu bytes\n", vsock_buf_bytes);
++ printf("SO_RCVLOWAT %d bytes\n", rcvlowat_bytes);
+
+ fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+
+@@ -439,7 +439,7 @@ static long strtolx(const char *arg)
+ int main(int argc, char **argv)
+ {
+ unsigned long to_send_bytes = DEFAULT_TO_SEND_BYTES;
+- unsigned long rcvlowat_bytes = DEFAULT_RCVLOWAT_BYTES;
++ int rcvlowat_bytes = DEFAULT_RCVLOWAT_BYTES;
+ int peer_cid = -1;
+ bool sender = false;
+
+diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
+index 8d38dbf8f41f04..0b7f5bf546da56 100644
+--- a/tools/testing/vsock/vsock_test.c
++++ b/tools/testing/vsock/vsock_test.c
+@@ -429,7 +429,7 @@ static void test_seqpacket_msg_bounds_client(const struct test_opts *opts)
+
+ static void test_seqpacket_msg_bounds_server(const struct test_opts *opts)
+ {
+- unsigned long sock_buf_size;
++ unsigned long long sock_buf_size;
+ unsigned long remote_hash;
+ unsigned long curr_hash;
+ int fd;
+@@ -634,7 +634,8 @@ static void test_seqpacket_timeout_server(const struct test_opts *opts)
+
+ static void test_seqpacket_bigmsg_client(const struct test_opts *opts)
+ {
+- unsigned long sock_buf_size;
++ unsigned long long sock_buf_size;
++ size_t buf_size;
+ socklen_t len;
+ void *data;
+ int fd;
+@@ -655,13 +656,20 @@ static void test_seqpacket_bigmsg_client(const struct test_opts *opts)
+
+ sock_buf_size++;
+
+- data = malloc(sock_buf_size);
++ /* size_t can be < unsigned long long */
++ buf_size = (size_t)sock_buf_size;
++ if (buf_size != sock_buf_size) {
++ fprintf(stderr, "Returned BUFFER_SIZE too large\n");
++ exit(EXIT_FAILURE);
++ }
++
++ data = malloc(buf_size);
+ if (!data) {
+ perror("malloc");
+ exit(EXIT_FAILURE);
+ }
+
+- send_buf(fd, data, sock_buf_size, 0, -EMSGSIZE);
++ send_buf(fd, data, buf_size, 0, -EMSGSIZE);
+
+ control_writeln("CLISENT");
+
+@@ -835,7 +843,7 @@ static void test_stream_poll_rcvlowat_server(const struct test_opts *opts)
+
+ static void test_stream_poll_rcvlowat_client(const struct test_opts *opts)
+ {
+- unsigned long lowat_val = RCVLOWAT_BUF_SIZE;
++ int lowat_val = RCVLOWAT_BUF_SIZE;
+ char buf[RCVLOWAT_BUF_SIZE];
+ struct pollfd fds;
+ short poll_flags;
+@@ -1357,9 +1365,10 @@ static void test_stream_rcvlowat_def_cred_upd_client(const struct test_opts *opt
+ static void test_stream_credit_update_test(const struct test_opts *opts,
+ bool low_rx_bytes_test)
+ {
+- size_t recv_buf_size;
++ int recv_buf_size;
+ struct pollfd fds;
+ size_t buf_size;
++ unsigned long long sock_buf_size;
+ void *buf;
+ int fd;
+
+@@ -1371,8 +1380,11 @@ static void test_stream_credit_update_test(const struct test_opts *opts,
+
+ buf_size = RCVLOWAT_CREDIT_UPD_BUF_SIZE;
+
++ /* size_t can be < unsigned long long */
++ sock_buf_size = buf_size;
++
+ if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
+- &buf_size, sizeof(buf_size))) {
++ &sock_buf_size, sizeof(sock_buf_size))) {
+ perror("setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)");
+ exit(EXIT_FAILURE);
+ }
+diff --git a/tools/tracing/rtla/sample/timerlat_load.py b/tools/tracing/rtla/sample/timerlat_load.py
+index 8cc5eb2d2e69e5..52eccb6225f92d 100644
+--- a/tools/tracing/rtla/sample/timerlat_load.py
++++ b/tools/tracing/rtla/sample/timerlat_load.py
+@@ -25,13 +25,12 @@ import sys
+ import os
+
+ parser = argparse.ArgumentParser(description='user-space timerlat thread in Python')
+-parser.add_argument("cpu", help='CPU to run timerlat thread')
+-parser.add_argument("-p", "--prio", help='FIFO priority')
+-
++parser.add_argument("cpu", type=int, help='CPU to run timerlat thread')
++parser.add_argument("-p", "--prio", type=int, help='FIFO priority')
+ args = parser.parse_args()
+
+ try:
+- affinity_mask = { int(args.cpu) }
++ affinity_mask = {args.cpu}
+ except:
+ print("Invalid cpu: " + args.cpu)
+ exit(1)
+@@ -44,7 +43,7 @@ except:
+
+ if (args.prio):
+ try:
+- param = os.sched_param(int(args.prio))
++ param = os.sched_param(args.prio)
+ os.sched_setscheduler(0, os.SCHED_FIFO, param)
+ except:
+ print("Error setting priority")
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 829511a712224f..ae55cd79128336 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -62,9 +62,9 @@ struct timerlat_hist_cpu {
+ int *thread;
+ int *user;
+
+- int irq_count;
+- int thread_count;
+- int user_count;
++ unsigned long long irq_count;
++ unsigned long long thread_count;
++ unsigned long long user_count;
+
+ unsigned long long min_irq;
+ unsigned long long sum_irq;
+@@ -304,15 +304,15 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ continue;
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ data->hist[cpu].irq_count);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ data->hist[cpu].thread_count);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ data->hist[cpu].user_count);
+ }
+ trace_seq_printf(trace->seq, "\n");
+@@ -488,15 +488,15 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "count:");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ sum.irq_count);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ sum.thread_count);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ sum.user_count);
+
+ trace_seq_printf(trace->seq, "\n");
+@@ -778,7 +778,7 @@ static struct timerlat_hist_params
+ /* getopt_long stores the option index here. */
+ int option_index = 0;
+
+- c = getopt_long(argc, argv, "a:c:C::b:d:e:E:DhH:i:knp:P:s:t::T:uU0123456:7:8:9\1\2:\3",
++ c = getopt_long(argc, argv, "a:c:C::b:d:e:E:DhH:i:knp:P:s:t::T:uU0123456:7:8:9\1\2:\3:",
+ long_options, &option_index);
+
+ /* detect the end of the options. */
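Widening the counters only stays correct because every printf-style
format is updated alongside the type: passing an unsigned long long
where the format string still says %d is undefined behavior in
variadic functions. A compact reminder:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long count = 1ULL << 40;

        /* printf("%9d ", count);  -- mismatched, undefined behavior */
        printf("%9llu\n", count);  /* specifier widened with the type */
        return 0;
    }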
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 3b62519a412fc9..ac2ff38a57ee55 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -54,9 +54,9 @@ struct timerlat_top_params {
+ };
+
+ struct timerlat_top_cpu {
+- int irq_count;
+- int thread_count;
+- int user_count;
++ unsigned long long irq_count;
++ unsigned long long thread_count;
++ unsigned long long user_count;
+
+ unsigned long long cur_irq;
+ unsigned long long min_irq;
+@@ -280,7 +280,7 @@ static void timerlat_top_print(struct osnoise_tool *top, int cpu)
+ /*
+ * Unless trace is being lost, IRQ counter is always the max.
+ */
+- trace_seq_printf(s, "%3d #%-9d |", cpu, cpu_data->irq_count);
++ trace_seq_printf(s, "%3d #%-9llu |", cpu, cpu_data->irq_count);
+
+ if (!cpu_data->irq_count) {
+ trace_seq_printf(s, "%s %s %s %s |", no_value, no_value, no_value, no_value);
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index 9ac71a66840c1b..0735fcb827ed76 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -233,7 +233,7 @@ long parse_ns_duration(char *val)
+
+ #define SCHED_DEADLINE 6
+
+-static inline int sched_setattr(pid_t pid, const struct sched_attr *attr,
++static inline int syscall_sched_setattr(pid_t pid, const struct sched_attr *attr,
+ unsigned int flags) {
+ return syscall(__NR_sched_setattr, pid, attr, flags);
+ }
+@@ -243,7 +243,7 @@ int __set_sched_attr(int pid, struct sched_attr *attr)
+ int flags = 0;
+ int retval;
+
+- retval = sched_setattr(pid, attr, flags);
++ retval = syscall_sched_setattr(pid, attr, flags);
+ if (retval < 0) {
+ err_msg("Failed to set sched attributes to the pid %d: %s\n",
+ pid, strerror(errno));
+diff --git a/tools/tracing/rtla/src/utils.h b/tools/tracing/rtla/src/utils.h
+index d44513e6c66a01..99c9cf81bcd02c 100644
+--- a/tools/tracing/rtla/src/utils.h
++++ b/tools/tracing/rtla/src/utils.h
+@@ -46,6 +46,7 @@ update_sum(unsigned long long *a, unsigned long long *b)
+ *a += *b;
+ }
+
++#ifndef SCHED_ATTR_SIZE_VER0
+ struct sched_attr {
+ uint32_t size;
+ uint32_t sched_policy;
+@@ -56,6 +57,7 @@ struct sched_attr {
+ uint64_t sched_deadline;
+ uint64_t sched_period;
+ };
++#endif /* SCHED_ATTR_SIZE_VER0 */
+
+ int parse_prio(char *arg, struct sched_attr *sched_param);
+ int parse_cpu_set(char *cpu_list, cpu_set_t *set);
+diff --git a/tools/verification/dot2/automata.py b/tools/verification/dot2/automata.py
+index baffeb960ff0b3..bdeb98baa8b065 100644
+--- a/tools/verification/dot2/automata.py
++++ b/tools/verification/dot2/automata.py
+@@ -29,11 +29,11 @@ class Automata:
+
+ def __get_model_name(self):
+ basename = ntpath.basename(self.__dot_path)
+- if basename.endswith(".dot") == False:
++ if not basename.endswith(".dot") and not basename.endswith(".gv"):
+ print("not a dot file")
+ raise Exception("not a dot file: %s" % self.__dot_path)
+
+- model_name = basename[0:-4]
++ model_name = ntpath.splitext(basename)[0]
+ if model_name.__len__() == 0:
+ raise Exception("not a dot file: %s" % self.__dot_path)
+
+@@ -68,9 +68,9 @@ class Automata:
+ def __get_cursor_begin_events(self):
+ cursor = 0
+ while self.__dot_lines[cursor].split()[0] != "{node":
+- cursor += 1
++ cursor += 1
+ while self.__dot_lines[cursor].split()[0] == "{node":
+- cursor += 1
++ cursor += 1
+ # skip initial state transition
+ cursor += 1
+ return cursor
+@@ -94,11 +94,11 @@ class Automata:
+ initial_state = state[7:]
+ else:
+ states.append(state)
+- if self.__dot_lines[cursor].__contains__("doublecircle") == True:
++ if "doublecircle" in self.__dot_lines[cursor]:
+ final_states.append(state)
+ has_final_states = True
+
+- if self.__dot_lines[cursor].__contains__("ellipse") == True:
++ if "ellipse" in self.__dot_lines[cursor]:
+ final_states.append(state)
+ has_final_states = True
+
+@@ -110,7 +110,7 @@ class Automata:
+ # Insert the initial state at the beginning of the states
+ states.insert(0, initial_state)
+
+- if has_final_states == False:
++ if not has_final_states:
+ final_states.append(initial_state)
+
+ return states, initial_state, final_states
+@@ -120,7 +120,7 @@ class Automata:
+ cursor = self.__get_cursor_begin_events()
+
+ events = []
+- while self.__dot_lines[cursor][1] == '"':
++ while self.__dot_lines[cursor].lstrip()[0] == '"':
+ # transitions have the format:
+ # "all_fired" -> "both_fired" [ label = "disable_irq" ];
+ # ------------ event is here ------------^^^^^
+@@ -161,7 +161,7 @@ class Automata:
+ # and we are back! Let's fill the matrix
+ cursor = self.__get_cursor_begin_events()
+
+- while self.__dot_lines[cursor][1] == '"':
++ while self.__dot_lines[cursor].lstrip()[0] == '"':
+ if self.__dot_lines[cursor].split()[1] == "->":
+ line = self.__dot_lines[cursor].split()
+ origin_state = line[0].replace('"','').replace(',','_')
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-14 23:59 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-14 23:59 UTC (permalink / raw
To: gentoo-commits
commit: 19cecaf31ceabc39e8291a5e852adf5e8f726445
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 14 23:59:03 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 14 23:59:03 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=19cecaf3
Remove redundant patch
Removed:
2700_drm-display-GCC15.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ----
2700_drm-display-GCC15.patch | 52 --------------------------------------------
2 files changed, 56 deletions(-)
diff --git a/0000_README b/0000_README
index 6429d035..81c02320 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2700_drm-display-GCC15.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
-Desc: drm/display: Fix building with GCC 15
-
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2700_drm-display-GCC15.patch b/2700_drm-display-GCC15.patch
deleted file mode 100644
index 0be775ea..00000000
--- a/2700_drm-display-GCC15.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From a500f3751d3c861be7e4463c933cf467240cca5d Mon Sep 17 00:00:00 2001
-From: Brahmajit Das <brahmajit.xyz@gmail.com>
-Date: Wed, 2 Oct 2024 14:53:11 +0530
-Subject: drm/display: Fix building with GCC 15
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-GCC 15 enables -Werror=unterminated-string-initialization by default.
-This results in the following build error
-
-drivers/gpu/drm/display/drm_dp_dual_mode_helper.c: In function ‘is_hdmi_adaptor’:
-drivers/gpu/drm/display/drm_dp_dual_mode_helper.c:164:17: error: initializer-string for array of
- ‘char’ is too long [-Werror=unterminated-string-initialization]
- 164 | "DP-HDMI ADAPTOR\x04";
- | ^~~~~~~~~~~~~~~~~~~~~
-
-After discussion with Ville, the fix was to increase the size of
-dp_dual_mode_hdmi_id array by one, so that it can accommodate the NULL
-line character. This should let us build the kernel with GCC 15.
-
-Signed-off-by: Brahmajit Das <brahmajit.xyz@gmail.com>
-Reviewed-by: Jani Nikula <jani.nikula@intel.com>
-Link: https://patchwork.freedesktop.org/patch/msgid/20241002092311.942822-1-brahmajit.xyz@gmail.com
-Signed-off-by: Jani Nikula <jani.nikula@intel.com>
----
- drivers/gpu/drm/display/drm_dp_dual_mode_helper.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-(limited to 'drivers/gpu/drm/display/drm_dp_dual_mode_helper.c')
-
-diff --git a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
-index 14a2a8473682b0..c491e3203bf11c 100644
---- a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
-+++ b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
-@@ -160,11 +160,11 @@ EXPORT_SYMBOL(drm_dp_dual_mode_write);
-
- static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
- {
-- static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
-+ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN + 1] =
- "DP-HDMI ADAPTOR\x04";
-
- return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
-- sizeof(dp_dual_mode_hdmi_id)) == 0;
-+ DP_DUAL_MODE_HDMI_ID_LEN) == 0;
- }
-
- static bool is_type1_adaptor(uint8_t adaptor_id)
---
-cgit 1.2.3-korg
-
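For reference, the warning class that the removed (now carried
upstream) patch addressed can be reproduced outside the kernel; a
minimal sketch:

    /* GCC 15 enables -Werror=unterminated-string-initialization by
     * default. This initializer exactly fills the array, leaving no
     * room for the terminating NUL, so the build fails:
     *
     *     static const char bad[16] = "DP-HDMI ADAPTOR\x04";
     *
     * Sizing the array one byte larger keeps the implicit NUL, while
     * memcmp() over the first 16 bytes is unaffected: */
    static const char good[16 + 1] = "DP-HDMI ADAPTOR\x04";

    int main(void) { return good[16] == '\0' ? 0 : 1; }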
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-15 0:02 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-15 0:02 UTC (permalink / raw
To: gentoo-commits
commit: 1142e63b91589e751e9f9e537c19cc52f96c790d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 15 00:02:06 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Dec 15 00:02:06 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1142e63b
Remove redundant patches
Removed
1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 --
...-change-caller-of-update_pkru_in_sigframe.patch | 107 ---------------------
...eys-ensure-updated-pkru-value-is-xrstor-d.patch | 96 ------------------
3 files changed, 211 deletions(-)
diff --git a/0000_README b/0000_README
index 81c02320..a2c9782d 100644
--- a/0000_README
+++ b/0000_README
@@ -75,14 +75,6 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
-Patch: 1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
-From: https://git.kernel.org/
-Desc: x86/pkeys: Change caller of update_pkru_in_sigframe()
-
-Patch: 1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
-From: https://git.kernel.org/
-Desc: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch b/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
deleted file mode 100644
index 3a1fbd82..00000000
--- a/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
+++ /dev/null
@@ -1,107 +0,0 @@
-From 5683d0ce8fb46f36315a2b508f90ec6221cda018 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 19 Nov 2024 17:45:19 +0000
-Subject: x86/pkeys: Change caller of update_pkru_in_sigframe()
-
-From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-
-[ Upstream commit 6a1853bdf17874392476b552398df261f75503e0 ]
-
-update_pkru_in_sigframe() will shortly need some information which
-is only available inside xsave_to_user_sigframe(). Move
-update_pkru_in_sigframe() inside the other function to make it
-easier to provide it that information.
-
-No functional changes.
-
-Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
-Link: https://lore.kernel.org/all/20241119174520.3987538-2-aruna.ramakrishna%40oracle.com
-Stable-dep-of: ae6012d72fa6 ("x86/pkeys: Ensure updated PKRU value is XRSTOR'd")
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/kernel/fpu/signal.c | 20 ++------------------
- arch/x86/kernel/fpu/xstate.h | 15 ++++++++++++++-
- 2 files changed, 16 insertions(+), 19 deletions(-)
-
-diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
-index 1065ab995305c..8f62e0666dea5 100644
---- a/arch/x86/kernel/fpu/signal.c
-+++ b/arch/x86/kernel/fpu/signal.c
-@@ -63,16 +63,6 @@ static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
- return true;
- }
-
--/*
-- * Update the value of PKRU register that was already pushed onto the signal frame.
-- */
--static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
--{
-- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
-- return 0;
-- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
--}
--
- /*
- * Signal frame handlers.
- */
-@@ -168,14 +158,8 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
-
- static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
- {
-- int err = 0;
--
-- if (use_xsave()) {
-- err = xsave_to_user_sigframe(buf);
-- if (!err)
-- err = update_pkru_in_sigframe(buf, pkru);
-- return err;
-- }
-+ if (use_xsave())
-+ return xsave_to_user_sigframe(buf, pkru);
-
- if (use_fxsr())
- return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
-diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
-index 0b86a5002c846..6b2924fbe5b8d 100644
---- a/arch/x86/kernel/fpu/xstate.h
-+++ b/arch/x86/kernel/fpu/xstate.h
-@@ -69,6 +69,16 @@ static inline u64 xfeatures_mask_independent(void)
- return fpu_kernel_cfg.independent_features;
- }
-
-+/*
-+ * Update the value of PKRU register that was already pushed onto the signal frame.
-+ */
-+static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
-+{
-+ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
-+ return 0;
-+ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
-+}
-+
- /* XSAVE/XRSTOR wrapper functions */
-
- #ifdef CONFIG_X86_64
-@@ -256,7 +266,7 @@ static inline u64 xfeatures_need_sigframe_write(void)
- * The caller has to zero buf::header before calling this because XSAVE*
- * does not touch the reserved fields in the header.
- */
--static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
-+static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkru)
- {
- /*
- * Include the features which are not xsaved/rstored by the kernel
-@@ -281,6 +291,9 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
- XSTATE_OP(XSAVE, buf, lmask, hmask, err);
- clac();
-
-+ if (!err)
-+ err = update_pkru_in_sigframe(buf, pkru);
-+
- return err;
- }
-
---
-2.43.0
-
diff --git a/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch b/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
deleted file mode 100644
index 11b1f768..00000000
--- a/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
+++ /dev/null
@@ -1,96 +0,0 @@
-From 24fedf2768fd57e0d767137044c4f7493357b325 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 19 Nov 2024 17:45:20 +0000
-Subject: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
-
-From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-
-[ Upstream commit ae6012d72fa60c9ff92de5bac7a8021a47458e5b ]
-
-When XSTATE_BV[i] is 0, and XRSTOR attempts to restore state component
-'i' it ignores any value in the XSAVE buffer and instead restores the
-state component's init value.
-
-This means that if XSAVE writes XSTATE_BV[PKRU]=0 then XRSTOR will
-ignore the value that update_pkru_in_sigframe() writes to the XSAVE buffer.
-
-XSTATE_BV[PKRU] only gets written as 0 if PKRU is in its init state. On
-Intel CPUs, basically never happens because the kernel usually
-overwrites the init value (aside: this is why we didn't notice this bug
-until now). But on AMD, the init tracker is more aggressive and will
-track PKRU as being in its init state upon any wrpkru(0x0).
-Unfortunately, sig_prepare_pkru() does just that: wrpkru(0x0).
-
-This writes XSTATE_BV[PKRU]=0 which makes XRSTOR ignore the PKRU value
-in the sigframe.
-
-To fix this, always overwrite the sigframe XSTATE_BV with a value that
-has XSTATE_BV[PKRU]==1. This ensures that XRSTOR will not ignore what
-update_pkru_in_sigframe() wrote.
-
-The problematic sequence of events is something like this:
-
-Userspace does:
- * wrpkru(0xffff0000) (or whatever)
- * Hardware sets: XINUSE[PKRU]=1
-Signal happens, kernel is entered:
- * sig_prepare_pkru() => wrpkru(0x00000000)
- * Hardware sets: XINUSE[PKRU]=0 (aggressive AMD init tracker)
- * XSAVE writes most of XSAVE buffer, including
- XSTATE_BV[PKRU]=XINUSE[PKRU]=0
- * update_pkru_in_sigframe() overwrites PKRU in XSAVE buffer
-... signal handling
- * XRSTOR sees XSTATE_BV[PKRU]==0, ignores just-written value
- from update_pkru_in_sigframe()
-
-Fixes: 70044df250d0 ("x86/pkeys: Update PKRU to enable all pkeys before XSAVE")
-Suggested-by: Rudi Horn <rudi.horn@oracle.com>
-Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
-Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
-Link: https://lore.kernel.org/all/20241119174520.3987538-3-aruna.ramakrishna%40oracle.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/kernel/fpu/xstate.h | 16 ++++++++++++++--
- 1 file changed, 14 insertions(+), 2 deletions(-)
-
-diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
-index 6b2924fbe5b8d..aa16f1a1bbcf1 100644
---- a/arch/x86/kernel/fpu/xstate.h
-+++ b/arch/x86/kernel/fpu/xstate.h
-@@ -72,10 +72,22 @@ static inline u64 xfeatures_mask_independent(void)
- /*
- * Update the value of PKRU register that was already pushed onto the signal frame.
- */
--static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
-+static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
- {
-+ u64 xstate_bv;
-+ int err;
-+
- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
- return 0;
-+
-+ /* Mark PKRU as in-use so that it is restored correctly. */
-+ xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
-+
-+ err = __put_user(xstate_bv, &buf->header.xfeatures);
-+ if (err)
-+ return err;
-+
-+ /* Update PKRU value in the userspace xsave buffer. */
- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
- }
-
-@@ -292,7 +304,7 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkr
- clac();
-
- if (!err)
-- err = update_pkru_in_sigframe(buf, pkru);
-+ err = update_pkru_in_sigframe(buf, mask, pkru);
-
- return err;
- }
---
-2.43.0
-
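The XRSTOR behavior the fix depends on can be modeled in a few lines:
the hardware consults XSTATE_BV per state component, so a PKRU value
written into the sigframe buffer is ignored unless its bit is set. A
hedged, purely illustrative model (restore_component() and
load_init_value() stand in for hardware actions, not real APIs):

    #include <stdint.h>

    static void restore_component(int i, const void *buf) { (void)i; (void)buf; }
    static void load_init_value(int i) { (void)i; }

    static void model_xrstor(uint64_t xstate_bv, const void *buf)
    {
        for (int i = 0; i < 64; i++) {
            if (xstate_bv & (1ULL << i))
                restore_component(i, buf);  /* buffer value used */
            else
                load_init_value(i);         /* buffer value ignored */
        }
    }

    int main(void)
    {
        model_xrstor(1ULL << 9, 0);  /* PKRU is component 9 */
        return 0;
    }

This is why the fix ORs XFEATURE_MASK_PKRU into the xfeatures word of
the sigframe header before XRSTOR runs, rather than only writing the
PKRU value itself.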
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-19 18:07 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-19 18:07 UTC (permalink / raw
To: gentoo-commits
commit: 1d0712601fc0cfb16d2abc9bf8d0e34c43b6afe4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 19 18:07:10 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 19 18:07:10 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1d071260
Linux patch 6.12.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-6.12.6.patch | 8402 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8406 insertions(+)
diff --git a/0000_README b/0000_README
index a2c9782d..1bb8df77 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-6.12.5.patch
From: https://www.kernel.org
Desc: Linux 6.12.5
+Patch: 1005_linux-6.12.6.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.6
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1005_linux-6.12.6.patch b/1005_linux-6.12.6.patch
new file mode 100644
index 00000000..e9bbd96e
--- /dev/null
+++ b/1005_linux-6.12.6.patch
@@ -0,0 +1,8402 @@
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index eacf8983e23074..dcbb6f6caf6de3 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -2170,6 +2170,12 @@ nexthop_compat_mode - BOOLEAN
+ understands the new API, this sysctl can be disabled to achieve full
+ performance benefits of the new API by disabling the nexthop expansion
+ and extraneous notifications.
++
++ Note that as a backward-compatible mode, dumping of modern features
++ might be incomplete or wrong. For example, resilient groups will not be
++ shown as such, but rather as just a list of next hops. Also weights that
++ do not fit into 8 bits will show incorrectly.
++
+ Default: true (backward compat mode)
+
+ fib_notify_on_flag_change - INTEGER
+diff --git a/Documentation/power/runtime_pm.rst b/Documentation/power/runtime_pm.rst
+index 53d1996460abfc..12f429359a823e 100644
+--- a/Documentation/power/runtime_pm.rst
++++ b/Documentation/power/runtime_pm.rst
+@@ -347,7 +347,9 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
+
+ `int pm_runtime_resume_and_get(struct device *dev);`
+ - run pm_runtime_resume(dev) and if successful, increment the device's
+- usage counter; return the result of pm_runtime_resume
++ usage counter; returns 0 on success (whether or not the device's
++ runtime PM status was already 'active') or the error code from
++ pm_runtime_resume() on failure.
+
+ `int pm_request_idle(struct device *dev);`
+ - submit a request to execute the subsystem-level idle callback for the
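A short usage sketch of the calling convention just clarified (the
driver function is hypothetical; error handling is minimal):

    #include <linux/pm_runtime.h>

    static int my_driver_do_io(struct device *dev)
    {
        int ret;

        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0)
            return ret;     /* not resumed, no reference held */

        /* ... device is active here, safe to touch hardware ... */

        pm_runtime_put(dev);    /* drop the reference taken above */
        return 0;
    }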
+diff --git a/Makefile b/Makefile
+index f158bfe6407ac9..c10952585c14b0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index ff8c4e1b847ed4..fbed433283c9b9 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1535,6 +1535,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_DF2);
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
++ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
+ break;
+ case SYS_ID_AA64PFR2_EL1:
+ /* We only expose FPMR */
+@@ -1724,6 +1725,13 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+
+ val &= ~ID_AA64PFR0_EL1_AMU_MASK;
+
++ /*
++ * MPAM is disabled by default as KVM also needs a set of PARTID to
++ * program the MPAMVPMx_EL2 PARTID remapping registers with. But some
++ * older kernels let the guest see the ID bit.
++ */
++ val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
++
+ return val;
+ }
+
+@@ -1834,6 +1842,42 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
+ return set_id_reg(vcpu, rd, val);
+ }
+
++static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
++ const struct sys_reg_desc *rd, u64 user_val)
++{
++ u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
++ u64 mpam_mask = ID_AA64PFR0_EL1_MPAM_MASK;
++
++ /*
++ * Commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits
++ * in ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to
++ * guests, but didn't add trap handling. KVM doesn't support MPAM and
++ * always returns an UNDEF for these registers. The guest must see 0
++ * for this field.
++ *
++ * But KVM must also accept values from user-space that were provided
++ * by KVM. On CPUs that support MPAM, permit user-space to write
++ * the sanitised value to ID_AA64PFR0_EL1.MPAM, but ignore this field.
++ */
++ if ((hw_val & mpam_mask) == (user_val & mpam_mask))
++ user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
++
++ return set_id_reg(vcpu, rd, user_val);
++}
++
++static int set_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
++ const struct sys_reg_desc *rd, u64 user_val)
++{
++ u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
++ u64 mpam_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
++
++ /* See set_id_aa64pfr0_el1 for comment about MPAM */
++ if ((hw_val & mpam_mask) == (user_val & mpam_mask))
++ user_val &= ~ID_AA64PFR1_EL1_MPAM_frac_MASK;
++
++ return set_id_reg(vcpu, rd, user_val);
++}
++
+ /*
+ * cpufeature ID register user accessors
+ *
+@@ -2377,7 +2421,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ { SYS_DESC(SYS_ID_AA64PFR0_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+- .set_user = set_id_reg,
++ .set_user = set_id_aa64pfr0_el1,
+ .reset = read_sanitised_id_aa64pfr0_el1,
+ .val = ~(ID_AA64PFR0_EL1_AMU |
+ ID_AA64PFR0_EL1_MPAM |
+@@ -2385,7 +2429,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ ID_AA64PFR0_EL1_RAS |
+ ID_AA64PFR0_EL1_AdvSIMD |
+ ID_AA64PFR0_EL1_FP), },
+- ID_WRITABLE(ID_AA64PFR1_EL1, ~(ID_AA64PFR1_EL1_PFAR |
++ { SYS_DESC(SYS_ID_AA64PFR1_EL1),
++ .access = access_id_reg,
++ .get_user = get_id_reg,
++ .set_user = set_id_aa64pfr1_el1,
++ .reset = kvm_read_sanitised_id_reg,
++ .val = ~(ID_AA64PFR1_EL1_PFAR |
+ ID_AA64PFR1_EL1_DF2 |
+ ID_AA64PFR1_EL1_MTEX |
+ ID_AA64PFR1_EL1_THE |
+@@ -2397,7 +2446,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ ID_AA64PFR1_EL1_RES0 |
+ ID_AA64PFR1_EL1_MPAM_frac |
+ ID_AA64PFR1_EL1_RAS_frac |
+- ID_AA64PFR1_EL1_MTE)),
++ ID_AA64PFR1_EL1_MTE), },
+ ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
+ ID_UNALLOCATED(4,3),
+ ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
+diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
+index 7388edd88986f9..d08bf7fb3aee61 100644
+--- a/arch/riscv/include/asm/kfence.h
++++ b/arch/riscv/include/asm/kfence.h
+@@ -22,7 +22,9 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
+ else
+ set_pte(pte, __pte(pte_val(ptep_get(pte)) | _PAGE_PRESENT));
+
+- flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
++ preempt_disable();
++ local_flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
++ preempt_enable();
+
+ return true;
+ }
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 26c886db4fb3d1..2b3c152d3c91f5 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -227,7 +227,7 @@ static void __init init_resources(void)
+ static void __init parse_dtb(void)
+ {
+ /* Early scan of device tree from init memory */
+- if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) {
++ if (early_init_dt_scan(dtb_early_va, dtb_early_pa)) {
+ const char *name = of_flat_dt_get_machine_name();
+
+ if (name) {
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 0e8c20adcd98df..fc53ce748c8049 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -1566,7 +1566,7 @@ static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
+ pmd_clear(pmd);
+ }
+
+-static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
++static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud, bool is_vmemmap)
+ {
+ struct page *page = pud_page(*pud);
+ struct ptdesc *ptdesc = page_ptdesc(page);
+@@ -1579,7 +1579,8 @@ static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
+ return;
+ }
+
+- pagetable_pmd_dtor(ptdesc);
++ if (!is_vmemmap)
++ pagetable_pmd_dtor(ptdesc);
+ if (PageReserved(page))
+ free_reserved_page(page);
+ else
+@@ -1703,7 +1704,7 @@ static void __meminit remove_pud_mapping(pud_t *pud_base, unsigned long addr, un
+ remove_pmd_mapping(pmd_base, addr, next, is_vmemmap, altmap);
+
+ if (pgtable_l4_enabled)
+- free_pmd_table(pmd_base, pudp);
++ free_pmd_table(pmd_base, pudp, is_vmemmap);
+ }
+ }
+
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index fa5ea65de0d0fa..6188650707ab27 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1468,7 +1468,7 @@ void intel_pmu_pebs_enable(struct perf_event *event)
+ * hence we need to drain when changing said
+ * size.
+ */
+- intel_pmu_drain_large_pebs(cpuc);
++ intel_pmu_drain_pebs_buffer();
+ adaptive_pebs_record_size_update();
+ wrmsrl(MSR_PEBS_DATA_CFG, pebs_data_cfg);
+ cpuc->active_pebs_data_cfg = pebs_data_cfg;
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 4a686f0e5dbf6d..2d776635aa539e 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -212,6 +212,8 @@ static inline unsigned long long l1tf_pfn_limit(void)
+ return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
+ }
+
++void init_cpu_devs(void);
++void get_cpu_vendor(struct cpuinfo_x86 *c);
+ extern void early_cpu_init(void);
+ extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+ extern void print_cpu_info(struct cpuinfo_x86 *);
+diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
+index 125c407e2abe6d..41502bd2afd646 100644
+--- a/arch/x86/include/asm/static_call.h
++++ b/arch/x86/include/asm/static_call.h
+@@ -65,4 +65,19 @@
+
+ extern bool __static_call_fixup(void *tramp, u8 op, void *dest);
+
++extern void __static_call_update_early(void *tramp, void *func);
++
++#define static_call_update_early(name, _func) \
++({ \
++ typeof(&STATIC_CALL_TRAMP(name)) __F = (_func); \
++ if (static_call_initialized) { \
++ __static_call_update(&STATIC_CALL_KEY(name), \
++ STATIC_CALL_TRAMP_ADDR(name), __F);\
++ } else { \
++ WRITE_ONCE(STATIC_CALL_KEY(name).func, _func); \
++ __static_call_update_early(STATIC_CALL_TRAMP_ADDR(name),\
++ __F); \
++ } \
++})
++
+ #endif /* _ASM_STATIC_CALL_H */
+diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
+index ab7382f92aff27..96bda43538ee70 100644
+--- a/arch/x86/include/asm/sync_core.h
++++ b/arch/x86/include/asm/sync_core.h
+@@ -8,7 +8,7 @@
+ #include <asm/special_insns.h>
+
+ #ifdef CONFIG_X86_32
+-static inline void iret_to_self(void)
++static __always_inline void iret_to_self(void)
+ {
+ asm volatile (
+ "pushfl\n\t"
+@@ -19,7 +19,7 @@ static inline void iret_to_self(void)
+ : ASM_CALL_CONSTRAINT : : "memory");
+ }
+ #else
+-static inline void iret_to_self(void)
++static __always_inline void iret_to_self(void)
+ {
+ unsigned int tmp;
+
+@@ -55,7 +55,7 @@ static inline void iret_to_self(void)
+ * Like all of Linux's memory ordering operations, this is a
+ * compiler barrier as well.
+ */
+-static inline void sync_core(void)
++static __always_inline void sync_core(void)
+ {
+ /*
+ * The SERIALIZE instruction is the most straightforward way to
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index a2dd24947eb85a..97771b9d33af30 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -39,9 +39,11 @@
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/pgtable.h>
++#include <linux/instrumentation.h>
+
+ #include <trace/events/xen.h>
+
++#include <asm/alternative.h>
+ #include <asm/page.h>
+ #include <asm/smap.h>
+ #include <asm/nospec-branch.h>
+@@ -86,11 +88,20 @@ struct xen_dm_op_buf;
+ * there aren't more than 5 arguments...)
+ */
+
+-extern struct { char _entry[32]; } hypercall_page[];
++void xen_hypercall_func(void);
++DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func);
+
+-#define __HYPERCALL "call hypercall_page+%c[offset]"
+-#define __HYPERCALL_ENTRY(x) \
+- [offset] "i" (__HYPERVISOR_##x * sizeof(hypercall_page[0]))
++#ifdef MODULE
++#define __ADDRESSABLE_xen_hypercall
++#else
++#define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall)
++#endif
++
++#define __HYPERCALL \
++ __ADDRESSABLE_xen_hypercall \
++ "call __SCT__xen_hypercall"
++
++#define __HYPERCALL_ENTRY(x) "a" (x)
+
+ #ifdef CONFIG_X86_32
+ #define __HYPERCALL_RETREG "eax"
+@@ -148,7 +159,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_0ARG(); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_0PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER0); \
+ (type)__res; \
+ })
+@@ -159,7 +170,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_1ARG(a1); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_1PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER1); \
+ (type)__res; \
+ })
+@@ -170,7 +181,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_2ARG(a1, a2); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_2PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER2); \
+ (type)__res; \
+ })
+@@ -181,7 +192,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_3ARG(a1, a2, a3); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_3PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER3); \
+ (type)__res; \
+ })
+@@ -192,7 +203,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_4ARG(a1, a2, a3, a4); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_4PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER4); \
+ (type)__res; \
+ })
+@@ -206,12 +217,9 @@ xen_single_call(unsigned int call,
+ __HYPERCALL_DECLS;
+ __HYPERCALL_5ARG(a1, a2, a3, a4, a5);
+
+- if (call >= PAGE_SIZE / sizeof(hypercall_page[0]))
+- return -EINVAL;
+-
+- asm volatile(CALL_NOSPEC
++ asm volatile(__HYPERCALL
+ : __HYPERCALL_5PARAM
+- : [thunk_target] "a" (&hypercall_page[call])
++ : __HYPERCALL_ENTRY(call)
+ : __HYPERCALL_CLOBBER5);
+
+ return (long)__res;
+diff --git a/arch/x86/kernel/callthunks.c b/arch/x86/kernel/callthunks.c
+index 4656474567533b..f17d166078823c 100644
+--- a/arch/x86/kernel/callthunks.c
++++ b/arch/x86/kernel/callthunks.c
+@@ -142,11 +142,6 @@ static bool skip_addr(void *dest)
+ if (dest >= (void *)relocate_kernel &&
+ dest < (void*)relocate_kernel + KEXEC_CONTROL_CODE_MAX_SIZE)
+ return true;
+-#endif
+-#ifdef CONFIG_XEN
+- if (dest >= (void *)hypercall_page &&
+- dest < (void*)hypercall_page + PAGE_SIZE)
+- return true;
+ #endif
+ return false;
+ }
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b17bcf9b67eed4..f439763f45ae6f 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -868,7 +868,7 @@ static void cpu_detect_tlb(struct cpuinfo_x86 *c)
+ tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
+ }
+
+-static void get_cpu_vendor(struct cpuinfo_x86 *c)
++void get_cpu_vendor(struct cpuinfo_x86 *c)
+ {
+ char *v = c->x86_vendor_id;
+ int i;
+@@ -1652,15 +1652,11 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ detect_nopl();
+ }
+
+-void __init early_cpu_init(void)
++void __init init_cpu_devs(void)
+ {
+ const struct cpu_dev *const *cdev;
+ int count = 0;
+
+-#ifdef CONFIG_PROCESSOR_SELECT
+- pr_info("KERNEL supported cpus:\n");
+-#endif
+-
+ for (cdev = __x86_cpu_dev_start; cdev < __x86_cpu_dev_end; cdev++) {
+ const struct cpu_dev *cpudev = *cdev;
+
+@@ -1668,20 +1664,30 @@ void __init early_cpu_init(void)
+ break;
+ cpu_devs[count] = cpudev;
+ count++;
++ }
++}
+
++void __init early_cpu_init(void)
++{
+ #ifdef CONFIG_PROCESSOR_SELECT
+- {
+- unsigned int j;
+-
+- for (j = 0; j < 2; j++) {
+- if (!cpudev->c_ident[j])
+- continue;
+- pr_info(" %s %s\n", cpudev->c_vendor,
+- cpudev->c_ident[j]);
+- }
+- }
++ unsigned int i, j;
++
++ pr_info("KERNEL supported cpus:\n");
+ #endif
++
++ init_cpu_devs();
++
++#ifdef CONFIG_PROCESSOR_SELECT
++ for (i = 0; i < X86_VENDOR_NUM && cpu_devs[i]; i++) {
++ for (j = 0; j < 2; j++) {
++ if (!cpu_devs[i]->c_ident[j])
++ continue;
++ pr_info(" %s %s\n", cpu_devs[i]->c_vendor,
++ cpu_devs[i]->c_ident[j]);
++ }
+ }
++#endif
++
+ early_identify_cpu(&boot_cpu_data);
+ }
+
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 4eefaac64c6cba..9eed0c144dad51 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -172,6 +172,15 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
+ }
+ EXPORT_SYMBOL_GPL(arch_static_call_transform);
+
++noinstr void __static_call_update_early(void *tramp, void *func)
++{
++ BUG_ON(system_state != SYSTEM_BOOTING);
++ BUG_ON(!early_boot_irqs_disabled);
++ BUG_ON(static_call_initialized);
++ __text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE);
++ sync_core();
++}
++
+ #ifdef CONFIG_MITIGATION_RETHUNK
+ /*
+ * This is called by apply_returns() to fix up static call trampolines,
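__static_call_update_early() above complements the regular static-call
API that the Xen rework below builds on. A hedged sketch of the normal
(post-boot) pattern, with made-up my_* names (the macros themselves
are the kernel's, from include/linux/static_call.h):

    #include <linux/static_call.h>
    #include <linux/types.h>

    static void my_default(void) { /* safe fallback */ }
    static void my_fast(void)    { /* optimized variant */ }

    /* Emits a trampoline that initially jumps to my_default(). */
    DEFINE_STATIC_CALL(my_op, my_default);

    static void choose_impl(bool fast)
    {
        if (fast)
            static_call_update(my_op, my_fast); /* patch trampoline */

        static_call(my_op)();   /* direct call, no indirect branch */
    }

The _early variant performs the same text patch before the static-call
infrastructure is initialized, which is why it insists on
SYSTEM_BOOTING, disabled early IRQs, and a sync_core() afterwards.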
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 84e5adbd0925cb..b4f3784f27e956 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -2,6 +2,7 @@
+
+ #include <linux/console.h>
+ #include <linux/cpu.h>
++#include <linux/instrumentation.h>
+ #include <linux/kexec.h>
+ #include <linux/memblock.h>
+ #include <linux/slab.h>
+@@ -21,7 +22,8 @@
+
+ #include "xen-ops.h"
+
+-EXPORT_SYMBOL_GPL(hypercall_page);
++DEFINE_STATIC_CALL(xen_hypercall, xen_hypercall_hvm);
++EXPORT_STATIC_CALL_TRAMP(xen_hypercall);
+
+ /*
+ * Pointer to the xen_vcpu_info structure or
+@@ -68,6 +70,66 @@ EXPORT_SYMBOL(xen_start_flags);
+ */
+ struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
++static __ref void xen_get_vendor(void)
++{
++ init_cpu_devs();
++ cpu_detect(&boot_cpu_data);
++ get_cpu_vendor(&boot_cpu_data);
++}
++
++void xen_hypercall_setfunc(void)
++{
++ if (static_call_query(xen_hypercall) != xen_hypercall_hvm)
++ return;
++
++ if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON))
++ static_call_update(xen_hypercall, xen_hypercall_amd);
++ else
++ static_call_update(xen_hypercall, xen_hypercall_intel);
++}
++
++/*
++ * Evaluate processor vendor in order to select the correct hypercall
++ * function for HVM/PVH guests.
++ * Might be called very early in boot before vendor has been set by
++ * early_cpu_init().
++ */
++noinstr void *__xen_hypercall_setfunc(void)
++{
++ void (*func)(void);
++
++ /*
++ * Xen is supported only on CPUs with CPUID, so testing for
++ * X86_FEATURE_CPUID is a test for early_cpu_init() having been
++ * run.
++ *
++ * Note that __xen_hypercall_setfunc() is noinstr only due to a nasty
++ * dependency chain: it is being called via the xen_hypercall static
++ * call when running as a PVH or HVM guest. Hypercalls need to be
++ * noinstr due to PV guests using hypercalls in noinstr code. So we
++ * the PV guest requirement is not of interest here (xen_get_vendor()
++ * calls noinstr functions, and static_call_update_early() might do
++ * so, too).
++ */
++ instrumentation_begin();
++
++ if (!boot_cpu_has(X86_FEATURE_CPUID))
++ xen_get_vendor();
++
++ if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON))
++ func = xen_hypercall_amd;
++ else
++ func = xen_hypercall_intel;
++
++ static_call_update_early(xen_hypercall, func);
++
++ instrumentation_end();
++
++ return func;
++}
++
+ static int xen_cpu_up_online(unsigned int cpu)
+ {
+ xen_init_lock_cpu(cpu);
+diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
+index 24d2957a4726d8..fe57ff85d004ba 100644
+--- a/arch/x86/xen/enlighten_hvm.c
++++ b/arch/x86/xen/enlighten_hvm.c
+@@ -106,15 +106,8 @@ static void __init init_hvm_pv_info(void)
+ /* PVH set up hypercall page in xen_prepare_pvh(). */
+ if (xen_pvh_domain())
+ pv_info.name = "Xen PVH";
+- else {
+- u64 pfn;
+- uint32_t msr;
+-
++ else
+ pv_info.name = "Xen HVM";
+- msr = cpuid_ebx(base + 2);
+- pfn = __pa(hypercall_page);
+- wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
+- }
+
+ xen_setup_features();
+
+@@ -300,6 +293,10 @@ static uint32_t __init xen_platform_hvm(void)
+ if (xen_pv_domain())
+ return 0;
+
++ /* Set correct hypercall function. */
++ if (xen_domain)
++ xen_hypercall_setfunc();
++
+ if (xen_pvh_domain() && nopv) {
+ /* Guest booting via the Xen-PVH boot entry goes here */
+ pr_info("\"nopv\" parameter is ignored in PVH guest\n");
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index d6818c6cafda16..a8eb7e0c473cf6 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1341,6 +1341,9 @@ asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
+
+ xen_domain_type = XEN_PV_DOMAIN;
+ xen_start_flags = xen_start_info->flags;
++ /* Interrupts are guaranteed to be off initially. */
++ early_boot_irqs_disabled = true;
++ static_call_update_early(xen_hypercall, xen_hypercall_pv);
+
+ xen_setup_features();
+
+@@ -1431,7 +1434,6 @@ asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
+ WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_pv, xen_cpu_dead_pv));
+
+ local_irq_disable();
+- early_boot_irqs_disabled = true;
+
+ xen_raw_console_write("mapping kernel into physical memory\n");
+ xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base,
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index bf68c329fc013e..0e3d930bcb89e8 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -129,17 +129,10 @@ static void __init pvh_arch_setup(void)
+
+ void __init xen_pvh_init(struct boot_params *boot_params)
+ {
+- u32 msr;
+- u64 pfn;
+-
+ xen_pvh = 1;
+ xen_domain_type = XEN_HVM_DOMAIN;
+ xen_start_flags = pvh_start_info.flags;
+
+- msr = cpuid_ebx(xen_cpuid_base() + 2);
+- pfn = __pa(hypercall_page);
+- wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
+-
+ x86_init.oem.arch_setup = pvh_arch_setup;
+ x86_init.oem.banner = xen_banner;
+
+diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
+index 83189cf5cdce93..b518f36d1ca2e7 100644
+--- a/arch/x86/xen/xen-asm.S
++++ b/arch/x86/xen/xen-asm.S
+@@ -20,9 +20,32 @@
+
+ #include <linux/init.h>
+ #include <linux/linkage.h>
++#include <linux/objtool.h>
+ #include <../entry/calling.h>
+
+ .pushsection .noinstr.text, "ax"
++/*
++ * PV hypercall interface to the hypervisor.
++ *
++ * Called via inline asm(), so better preserve %rcx and %r11.
++ *
++ * Input:
++ * %eax: hypercall number
++ * %rdi, %rsi, %rdx, %r10, %r8: args 1..5 for the hypercall
++ * Output: %rax
++ */
++SYM_FUNC_START(xen_hypercall_pv)
++ ANNOTATE_NOENDBR
++ push %rcx
++ push %r11
++ UNWIND_HINT_SAVE
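++ /* The syscall instruction traps to Xen and clobbers %rcx and %r11. */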
++ syscall
++ UNWIND_HINT_RESTORE
++ pop %r11
++ pop %rcx
++ RET
++SYM_FUNC_END(xen_hypercall_pv)
++
+ /*
+ * Disabling events is simply a matter of making the event mask
+ * non-zero.
+@@ -176,7 +199,6 @@ SYM_CODE_START(xen_early_idt_handler_array)
+ SYM_CODE_END(xen_early_idt_handler_array)
+ __FINIT
+
+-hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
+ /*
+ * Xen64 iret frame:
+ *
+@@ -186,17 +208,28 @@ hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
+ * cs
+ * rip <-- standard iret frame
+ *
+- * flags
++ * flags <-- xen_iret must push from here on
+ *
+- * rcx }
+- * r11 }<-- pushed by hypercall page
+- * rsp->rax }
++ * rcx
++ * r11
++ * rsp->rax
+ */
++.macro xen_hypercall_iret
++ pushq $0 /* Flags */
++ push %rcx
++ push %r11
++ push %rax
++ mov $__HYPERVISOR_iret, %eax
++ syscall /* Do the IRET. */
++#ifdef CONFIG_MITIGATION_SLS
++ int3
++#endif
++.endm
++
+ SYM_CODE_START(xen_iret)
+ UNWIND_HINT_UNDEFINED
+ ANNOTATE_NOENDBR
+- pushq $0
+- jmp hypercall_iret
++ xen_hypercall_iret
+ SYM_CODE_END(xen_iret)
+
+ /*
+@@ -301,8 +334,7 @@ SYM_CODE_START(xen_entry_SYSENTER_compat)
+ ENDBR
+ lea 16(%rsp), %rsp /* strip %rcx, %r11 */
+ mov $-ENOSYS, %rax
+- pushq $0
+- jmp hypercall_iret
++ xen_hypercall_iret
+ SYM_CODE_END(xen_entry_SYSENTER_compat)
+ SYM_CODE_END(xen_entry_SYSCALL_compat)
+
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 758bcd47b72d32..721a57700a3b05 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -6,9 +6,11 @@
+
+ #include <linux/elfnote.h>
+ #include <linux/init.h>
++#include <linux/instrumentation.h>
+
+ #include <asm/boot.h>
+ #include <asm/asm.h>
++#include <asm/frame.h>
+ #include <asm/msr.h>
+ #include <asm/page_types.h>
+ #include <asm/percpu.h>
+@@ -20,28 +22,6 @@
+ #include <xen/interface/xen-mca.h>
+ #include <asm/xen/interface.h>
+
+-.pushsection .noinstr.text, "ax"
+- .balign PAGE_SIZE
+-SYM_CODE_START(hypercall_page)
+- .rept (PAGE_SIZE / 32)
+- UNWIND_HINT_FUNC
+- ANNOTATE_NOENDBR
+- ANNOTATE_UNRET_SAFE
+- ret
+- /*
+- * Xen will write the hypercall page, and sort out ENDBR.
+- */
+- .skip 31, 0xcc
+- .endr
+-
+-#define HYPERCALL(n) \
+- .equ xen_hypercall_##n, hypercall_page + __HYPERVISOR_##n * 32; \
+- .type xen_hypercall_##n, @function; .size xen_hypercall_##n, 32
+-#include <asm/xen-hypercalls.h>
+-#undef HYPERCALL
+-SYM_CODE_END(hypercall_page)
+-.popsection
+-
+ #ifdef CONFIG_XEN_PV
+ __INIT
+ SYM_CODE_START(startup_xen)
+@@ -87,6 +67,87 @@ SYM_CODE_END(xen_cpu_bringup_again)
+ #endif
+ #endif
+
++ .pushsection .noinstr.text, "ax"
++/*
++ * Xen hypercall interface to the hypervisor.
++ *
++ * Input:
++ * %eax: hypercall number
++ * 32-bit:
++ * %ebx, %ecx, %edx, %esi, %edi: args 1..5 for the hypercall
++ * 64-bit:
++ * %rdi, %rsi, %rdx, %r10, %r8: args 1..5 for the hypercall
++ * Output: %[er]ax
++ */
++SYM_FUNC_START(xen_hypercall_hvm)
++ ENDBR
++ FRAME_BEGIN
++ /* Save all relevant registers (caller save and arguments). */
++#ifdef CONFIG_X86_32
++ push %eax
++ push %ebx
++ push %ecx
++ push %edx
++ push %esi
++ push %edi
++#else
++ push %rax
++ push %rcx
++ push %rdx
++ push %rdi
++ push %rsi
++ push %r11
++ push %r10
++ push %r9
++ push %r8
++#ifdef CONFIG_FRAME_POINTER
++ pushq $0 /* Dummy push for stack alignment. */
++#endif
++#endif
++ /* Set the vendor specific function. */
++ call __xen_hypercall_setfunc
++ /* Set ZF = 1 if AMD; restore the saved registers. */
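++ /*
++ * The cmp below records whether __xen_hypercall_setfunc() returned
++ * xen_hypercall_amd; the pops that follow do not touch the arithmetic
++ * flags, so ZF still holds that result at the final jz.
++ */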
++#ifdef CONFIG_X86_32
++ lea xen_hypercall_amd, %ebx
++ cmp %eax, %ebx
++ pop %edi
++ pop %esi
++ pop %edx
++ pop %ecx
++ pop %ebx
++ pop %eax
++#else
++ lea xen_hypercall_amd(%rip), %rbx
++ cmp %rax, %rbx
++#ifdef CONFIG_FRAME_POINTER
++ pop %rax /* Dummy pop. */
++#endif
++ pop %r8
++ pop %r9
++ pop %r10
++ pop %r11
++ pop %rsi
++ pop %rdi
++ pop %rdx
++ pop %rcx
++ pop %rax
++#endif
++ /* Use correct hypercall function. */
++ jz xen_hypercall_amd
++ jmp xen_hypercall_intel
++SYM_FUNC_END(xen_hypercall_hvm)
++
++SYM_FUNC_START(xen_hypercall_amd)
++ vmmcall
++ RET
++SYM_FUNC_END(xen_hypercall_amd)
++
++SYM_FUNC_START(xen_hypercall_intel)
++ vmcall
++ RET
++SYM_FUNC_END(xen_hypercall_intel)
++ .popsection
++
+ ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS, .asciz "linux")
+ ELFNOTE(Xen, XEN_ELFNOTE_GUEST_VERSION, .asciz "2.6")
+ ELFNOTE(Xen, XEN_ELFNOTE_XEN_VERSION, .asciz "xen-3.0")
+@@ -115,7 +176,6 @@ SYM_CODE_END(xen_cpu_bringup_again)
+ #else
+ # define FEATURES_DOM0 0
+ #endif
+- ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
+ ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES,
+ .long FEATURES_PV | FEATURES_PVH | FEATURES_DOM0)
+ ELFNOTE(Xen, XEN_ELFNOTE_LOADER, .asciz "generic")
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index e1b782e823e6b4..63c13a2ccf556a 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -326,4 +326,13 @@ static inline void xen_smp_intr_free_pv(unsigned int cpu) {}
+ static inline void xen_smp_count_cpus(void) { }
+ #endif /* CONFIG_SMP */
+
++#ifdef CONFIG_XEN_PV
++void xen_hypercall_pv(void);
++#endif
++void xen_hypercall_hvm(void);
++void xen_hypercall_amd(void);
++void xen_hypercall_intel(void);
++void xen_hypercall_setfunc(void);
++void *__xen_hypercall_setfunc(void);
++
+ #endif /* XEN_OPS_H */
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index e68c725cf8d975..45a395862fbc88 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1324,10 +1324,14 @@ void blkcg_unpin_online(struct cgroup_subsys_state *blkcg_css)
+ struct blkcg *blkcg = css_to_blkcg(blkcg_css);
+
+ do {
++ struct blkcg *parent;
++
+ if (!refcount_dec_and_test(&blkcg->online_pin))
+ break;
++
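++ /*
++ * Grab the parent before blkcg_destroy_blkgs(): dropping the last
++ * blkg reference in there may free this blkcg, so it must not be
++ * dereferenced afterwards.
++ */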
++ parent = blkcg_parent(blkcg);
+ blkcg_destroy_blkgs(blkcg);
+- blkcg = blkcg_parent(blkcg);
++ blkcg = parent;
+ } while (blkcg);
+ }
+
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 384aa15e8260bd..a5894ec9696e7e 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1098,7 +1098,14 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
+ inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum,
+ iocg->child_active_sum);
+ } else {
+- inuse = clamp_t(u32, inuse, 1, active);
++ /*
++ * It may be tempting to turn this into a clamp expression with
++ * a lower limit of 1 but active may be 0, which cannot be used
++ * as an upper limit in that situation. This expression allows
++ * active to clamp inuse unless it is 0, in which case inuse
++ * becomes 1.
++ */
++ inuse = min(inuse, active) ?: 1;
+ }
+
+ iocg->last_inuse = iocg->inuse;
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 156e9bb07abf1a..cd5ea6eaa76b09 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -275,15 +275,13 @@ void blk_mq_sysfs_unregister_hctxs(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ unsigned long i;
+
+- mutex_lock(&q->sysfs_dir_lock);
++ lockdep_assert_held(&q->sysfs_dir_lock);
++
+ if (!q->mq_sysfs_init_done)
+- goto unlock;
++ return;
+
+ queue_for_each_hw_ctx(q, hctx, i)
+ blk_mq_unregister_hctx(hctx);
+-
+-unlock:
+- mutex_unlock(&q->sysfs_dir_lock);
+ }
+
+ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+@@ -292,9 +290,10 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ unsigned long i;
+ int ret = 0;
+
+- mutex_lock(&q->sysfs_dir_lock);
++ lockdep_assert_held(&q->sysfs_dir_lock);
++
+ if (!q->mq_sysfs_init_done)
+- goto unlock;
++ return ret;
+
+ queue_for_each_hw_ctx(q, hctx, i) {
+ ret = blk_mq_register_hctx(hctx);
+@@ -302,8 +301,5 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ break;
+ }
+
+-unlock:
+- mutex_unlock(&q->sysfs_dir_lock);
+-
+ return ret;
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index b4fba7b398e5bc..cc1b3202383840 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -43,6 +43,7 @@
+
+ static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+ static DEFINE_PER_CPU(call_single_data_t, blk_cpu_csd);
++static DEFINE_MUTEX(blk_mq_cpuhp_lock);
+
+ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags);
+ static void blk_mq_request_bypass_insert(struct request *rq,
+@@ -3740,13 +3741,91 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
+ return 0;
+ }
+
+-static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
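++/*
++ * An unhashed hlist node doubles as the "not registered" state: the
++ * nodes are re-initialized after removal so that hlist_unhashed()
++ * reliably reports whether a cpuhp callback is currently registered.
++ */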
++static void __blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
+ {
+- if (!(hctx->flags & BLK_MQ_F_STACKING))
++ lockdep_assert_held(&blk_mq_cpuhp_lock);
++
++ if (!(hctx->flags & BLK_MQ_F_STACKING) &&
++ !hlist_unhashed(&hctx->cpuhp_online)) {
+ cpuhp_state_remove_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
+ &hctx->cpuhp_online);
+- cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
+- &hctx->cpuhp_dead);
++ INIT_HLIST_NODE(&hctx->cpuhp_online);
++ }
++
++ if (!hlist_unhashed(&hctx->cpuhp_dead)) {
++ cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
++ &hctx->cpuhp_dead);
++ INIT_HLIST_NODE(&hctx->cpuhp_dead);
++ }
++}
++
++static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
++{
++ mutex_lock(&blk_mq_cpuhp_lock);
++ __blk_mq_remove_cpuhp(hctx);
++ mutex_unlock(&blk_mq_cpuhp_lock);
++}
++
++static void __blk_mq_add_cpuhp(struct blk_mq_hw_ctx *hctx)
++{
++ lockdep_assert_held(&blk_mq_cpuhp_lock);
++
++ if (!(hctx->flags & BLK_MQ_F_STACKING) &&
++ hlist_unhashed(&hctx->cpuhp_online))
++ cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
++ &hctx->cpuhp_online);
++
++ if (hlist_unhashed(&hctx->cpuhp_dead))
++ cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD,
++ &hctx->cpuhp_dead);
++}
++
++static void __blk_mq_remove_cpuhp_list(struct list_head *head)
++{
++ struct blk_mq_hw_ctx *hctx;
++
++ lockdep_assert_held(&blk_mq_cpuhp_lock);
++
++ list_for_each_entry(hctx, head, hctx_list)
++ __blk_mq_remove_cpuhp(hctx);
++}
++
++/*
++ * Unregister cpuhp callbacks from exited hw queues
++ *
++ * Safe to call if this `request_queue` is live
++ */
++static void blk_mq_remove_hw_queues_cpuhp(struct request_queue *q)
++{
++ LIST_HEAD(hctx_list);
++
++ spin_lock(&q->unused_hctx_lock);
++ list_splice_init(&q->unused_hctx_list, &hctx_list);
++ spin_unlock(&q->unused_hctx_lock);
++
++ mutex_lock(&blk_mq_cpuhp_lock);
++ __blk_mq_remove_cpuhp_list(&hctx_list);
++ mutex_unlock(&blk_mq_cpuhp_lock);
++
++ spin_lock(&q->unused_hctx_lock);
++ list_splice(&hctx_list, &q->unused_hctx_list);
++ spin_unlock(&q->unused_hctx_lock);
++}
++
++/*
++ * Register cpuhp callbacks from all hw queues
++ *
++ * Safe to call if this `request_queue` is live
++ */
++static void blk_mq_add_hw_queues_cpuhp(struct request_queue *q)
++{
++ struct blk_mq_hw_ctx *hctx;
++ unsigned long i;
++
++ mutex_lock(&blk_mq_cpuhp_lock);
++ queue_for_each_hw_ctx(q, hctx, i)
++ __blk_mq_add_cpuhp(hctx);
++ mutex_unlock(&blk_mq_cpuhp_lock);
+ }
+
+ /*
+@@ -3797,8 +3876,6 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ if (set->ops->exit_hctx)
+ set->ops->exit_hctx(hctx, hctx_idx);
+
+- blk_mq_remove_cpuhp(hctx);
+-
+ xa_erase(&q->hctx_table, hctx_idx);
+
+ spin_lock(&q->unused_hctx_lock);
+@@ -3815,6 +3892,7 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
+ queue_for_each_hw_ctx(q, hctx, i) {
+ if (i == nr_queue)
+ break;
++ blk_mq_remove_cpuhp(hctx);
+ blk_mq_exit_hctx(q, set, hctx, i);
+ }
+ }
+@@ -3878,6 +3956,8 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
+ INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
+ spin_lock_init(&hctx->lock);
+ INIT_LIST_HEAD(&hctx->dispatch);
++ INIT_HLIST_NODE(&hctx->cpuhp_dead);
++ INIT_HLIST_NODE(&hctx->cpuhp_online);
+ hctx->queue = q;
+ hctx->flags = set->flags & ~BLK_MQ_F_TAG_QUEUE_SHARED;
+
+@@ -4382,7 +4462,8 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ unsigned long i, j;
+
+ /* protect against switching io scheduler */
+- mutex_lock(&q->sysfs_lock);
++ lockdep_assert_held(&q->sysfs_lock);
++
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ int old_node;
+ int node = blk_mq_get_hctx_node(set, i);
+@@ -4415,7 +4496,12 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+
+ xa_for_each_start(&q->hctx_table, j, hctx, j)
+ blk_mq_exit_hctx(q, set, hctx, j);
+- mutex_unlock(&q->sysfs_lock);
++
++ /* unregister cpuhp callbacks for exited hctxs */
++ blk_mq_remove_hw_queues_cpuhp(q);
++
++ /* register cpuhp for new initialized hctxs */
++ blk_mq_add_hw_queues_cpuhp(q);
+ }
+
+ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+@@ -4441,10 +4527,14 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+
+ xa_init(&q->hctx_table);
+
++ mutex_lock(&q->sysfs_lock);
++
+ blk_mq_realloc_hw_ctxs(set, q);
+ if (!q->nr_hw_queues)
+ goto err_hctxs;
+
++ mutex_unlock(&q->sysfs_lock);
++
+ INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
+ blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
+
+@@ -4463,6 +4553,7 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ return 0;
+
+ err_hctxs:
++ mutex_unlock(&q->sysfs_lock);
+ blk_mq_release(q);
+ err_exit:
+ q->mq_ops = NULL;
+@@ -4843,12 +4934,12 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ return false;
+
+ /* q->elevator needs protection from ->sysfs_lock */
+- mutex_lock(&q->sysfs_lock);
++ lockdep_assert_held(&q->sysfs_lock);
+
+ /* the check has to be done with holding sysfs_lock */
+ if (!q->elevator) {
+ kfree(qe);
+- goto unlock;
++ goto out;
+ }
+
+ INIT_LIST_HEAD(&qe->node);
+@@ -4858,9 +4949,7 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ __elevator_get(qe->type);
+ list_add(&qe->node, head);
+ elevator_disable(q);
+-unlock:
+- mutex_unlock(&q->sysfs_lock);
+-
++out:
+ return true;
+ }
+
+@@ -4889,11 +4978,9 @@ static void blk_mq_elv_switch_back(struct list_head *head,
+ list_del(&qe->node);
+ kfree(qe);
+
+- mutex_lock(&q->sysfs_lock);
+ elevator_switch(q, t);
+ /* drop the reference acquired in blk_mq_elv_switch_none */
+ elevator_put(t);
+- mutex_unlock(&q->sysfs_lock);
+ }
+
+ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+@@ -4913,8 +5000,11 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
+ return;
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list)
++ list_for_each_entry(q, &set->tag_list, tag_set_list) {
++ mutex_lock(&q->sysfs_dir_lock);
++ mutex_lock(&q->sysfs_lock);
+ blk_mq_freeze_queue(q);
++ }
+ /*
+ * Switch IO scheduler to 'none', cleaning up the data associated
+ * with the previous scheduler. We will switch back once we are done
+@@ -4970,8 +5060,11 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_elv_switch_back(&head, q);
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list)
++ list_for_each_entry(q, &set->tag_list, tag_set_list) {
+ blk_mq_unfreeze_queue(q);
++ mutex_unlock(&q->sysfs_lock);
++ mutex_unlock(&q->sysfs_dir_lock);
++ }
+
+ /* Free the excess tags when nr_hw_queues shrink. */
+ for (i = set->nr_hw_queues; i < prev_nr_hw_queues; i++)
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 207577145c54f4..42c2cb97d778af 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -690,11 +690,11 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
+ return res;
+ }
+
+- blk_mq_freeze_queue(q);
+ mutex_lock(&q->sysfs_lock);
++ blk_mq_freeze_queue(q);
+ res = entry->store(disk, page, length);
+- mutex_unlock(&q->sysfs_lock);
+ blk_mq_unfreeze_queue(q);
++ mutex_unlock(&q->sysfs_lock);
+ return res;
+ }
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 0b1184176ce77a..767bcbce74facb 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -18,7 +18,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/sched/mm.h>
+ #include <linux/spinlock.h>
+-#include <linux/atomic.h>
++#include <linux/refcount.h>
+ #include <linux/mempool.h>
+
+ #include "blk.h"
+@@ -41,7 +41,6 @@ static const char *const zone_cond_name[] = {
+ /*
+ * Per-zone write plug.
+ * @node: hlist_node structure for managing the plug using a hash table.
+- * @link: To list the plug in the zone write plug error list of the disk.
+ * @ref: Zone write plug reference counter. A zone write plug reference is
+ * always at least 1 when the plug is hashed in the disk plug hash table.
+ * The reference is incremented whenever a new BIO needing plugging is
+@@ -63,8 +62,7 @@ static const char *const zone_cond_name[] = {
+ */
+ struct blk_zone_wplug {
+ struct hlist_node node;
+- struct list_head link;
+- atomic_t ref;
++ refcount_t ref;
+ spinlock_t lock;
+ unsigned int flags;
+ unsigned int zone_no;
+@@ -80,8 +78,8 @@ struct blk_zone_wplug {
+ * - BLK_ZONE_WPLUG_PLUGGED: Indicates that the zone write plug is plugged,
+ * that is, that write BIOs are being throttled due to a write BIO already
+ * being executed or the zone write plug bio list is not empty.
+- * - BLK_ZONE_WPLUG_ERROR: Indicates that a write error happened which will be
+- * recovered with a report zone to update the zone write pointer offset.
++ * - BLK_ZONE_WPLUG_NEED_WP_UPDATE: Indicates that we lost track of a zone
++ * write pointer offset and need to update it.
+ * - BLK_ZONE_WPLUG_UNHASHED: Indicates that the zone write plug was removed
+ * from the disk hash table and that the initial reference to the zone
+ * write plug set when the plug was first added to the hash table has been
+@@ -91,11 +89,9 @@ struct blk_zone_wplug {
+ * freed once all remaining references from BIOs or functions are dropped.
+ */
+ #define BLK_ZONE_WPLUG_PLUGGED (1U << 0)
+-#define BLK_ZONE_WPLUG_ERROR (1U << 1)
++#define BLK_ZONE_WPLUG_NEED_WP_UPDATE (1U << 1)
+ #define BLK_ZONE_WPLUG_UNHASHED (1U << 2)
+
+-#define BLK_ZONE_WPLUG_BUSY (BLK_ZONE_WPLUG_PLUGGED | BLK_ZONE_WPLUG_ERROR)
+-
+ /**
+ * blk_zone_cond_str - Return string XXX in BLK_ZONE_COND_XXX.
+ * @zone_cond: BLK_ZONE_COND_XXX.
+@@ -115,6 +111,30 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond)
+ }
+ EXPORT_SYMBOL_GPL(blk_zone_cond_str);
+
++struct disk_report_zones_cb_args {
++ struct gendisk *disk;
++ report_zones_cb user_cb;
++ void *user_data;
++};
++
++static void disk_zone_wplug_sync_wp_offset(struct gendisk *disk,
++ struct blk_zone *zone);
++
++static int disk_report_zones_cb(struct blk_zone *zone, unsigned int idx,
++ void *data)
++{
++ struct disk_report_zones_cb_args *args = data;
++ struct gendisk *disk = args->disk;
++
++ if (disk->zone_wplugs_hash)
++ disk_zone_wplug_sync_wp_offset(disk, zone);
++
++ if (!args->user_cb)
++ return 0;
++
++ return args->user_cb(zone, idx, args->user_data);
++}
++
+ /**
+ * blkdev_report_zones - Get zones information
+ * @bdev: Target block device
+@@ -139,6 +159,11 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
+ {
+ struct gendisk *disk = bdev->bd_disk;
+ sector_t capacity = get_capacity(disk);
++ struct disk_report_zones_cb_args args = {
++ .disk = disk,
++ .user_cb = cb,
++ .user_data = data,
++ };
+
+ if (!bdev_is_zoned(bdev) || WARN_ON_ONCE(!disk->fops->report_zones))
+ return -EOPNOTSUPP;
+@@ -146,7 +171,8 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
+ if (!nr_zones || sector >= capacity)
+ return 0;
+
+- return disk->fops->report_zones(disk, sector, nr_zones, cb, data);
++ return disk->fops->report_zones(disk, sector, nr_zones,
++ disk_report_zones_cb, &args);
+ }
+ EXPORT_SYMBOL_GPL(blkdev_report_zones);
+
+@@ -417,7 +443,7 @@ static struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
+
+ hlist_for_each_entry_rcu(zwplug, &disk->zone_wplugs_hash[idx], node) {
+ if (zwplug->zone_no == zno &&
+- atomic_inc_not_zero(&zwplug->ref)) {
++ refcount_inc_not_zero(&zwplug->ref)) {
+ rcu_read_unlock();
+ return zwplug;
+ }
+@@ -438,9 +464,9 @@ static void disk_free_zone_wplug_rcu(struct rcu_head *rcu_head)
+
+ static inline void disk_put_zone_wplug(struct blk_zone_wplug *zwplug)
+ {
+- if (atomic_dec_and_test(&zwplug->ref)) {
++ if (refcount_dec_and_test(&zwplug->ref)) {
+ WARN_ON_ONCE(!bio_list_empty(&zwplug->bio_list));
+- WARN_ON_ONCE(!list_empty(&zwplug->link));
++ WARN_ON_ONCE(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED);
+ WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_UNHASHED));
+
+ call_rcu(&zwplug->rcu_head, disk_free_zone_wplug_rcu);
+@@ -454,8 +480,8 @@ static inline bool disk_should_remove_zone_wplug(struct gendisk *disk,
+ if (zwplug->flags & BLK_ZONE_WPLUG_UNHASHED)
+ return false;
+
+- /* If the zone write plug is still busy, it cannot be removed. */
+- if (zwplug->flags & BLK_ZONE_WPLUG_BUSY)
++ /* If the zone write plug is still plugged, it cannot be removed. */
++ if (zwplug->flags & BLK_ZONE_WPLUG_PLUGGED)
+ return false;
+
+ /*
+@@ -469,7 +495,7 @@ static inline bool disk_should_remove_zone_wplug(struct gendisk *disk,
+ * taken when the plug was allocated and another reference taken by the
+ * caller context).
+ */
+- if (atomic_read(&zwplug->ref) > 2)
++ if (refcount_read(&zwplug->ref) > 2)
+ return false;
+
+ /* We can remove zone write plugs for zones that are empty or full. */
+@@ -538,12 +564,11 @@ static struct blk_zone_wplug *disk_get_and_lock_zone_wplug(struct gendisk *disk,
+ return NULL;
+
+ INIT_HLIST_NODE(&zwplug->node);
+- INIT_LIST_HEAD(&zwplug->link);
+- atomic_set(&zwplug->ref, 2);
++ refcount_set(&zwplug->ref, 2);
+ spin_lock_init(&zwplug->lock);
+ zwplug->flags = 0;
+ zwplug->zone_no = zno;
+- zwplug->wp_offset = sector & (disk->queue->limits.chunk_sectors - 1);
++ zwplug->wp_offset = bdev_offset_from_zone_start(disk->part0, sector);
+ bio_list_init(&zwplug->bio_list);
+ INIT_WORK(&zwplug->bio_work, blk_zone_wplug_bio_work);
+ zwplug->disk = disk;
+@@ -587,124 +612,81 @@ static void disk_zone_wplug_abort(struct blk_zone_wplug *zwplug)
+ }
+
+ /*
+- * Abort (fail) all plugged BIOs of a zone write plug that are not aligned
+- * with the assumed write pointer location of the zone when the BIO will
+- * be unplugged.
++ * Set a zone write plug write pointer offset to the specified value.
++ * This aborts all plugged BIOs, which is fine as this function is called for
++ * a zone reset operation, a zone finish operation or if the zone needs a wp
++ * update from a report zone after a write error.
+ */
+-static void disk_zone_wplug_abort_unaligned(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
+-{
+- unsigned int wp_offset = zwplug->wp_offset;
+- struct bio_list bl = BIO_EMPTY_LIST;
+- struct bio *bio;
+-
+- while ((bio = bio_list_pop(&zwplug->bio_list))) {
+- if (disk_zone_is_full(disk, zwplug->zone_no, wp_offset) ||
+- (bio_op(bio) != REQ_OP_ZONE_APPEND &&
+- bio_offset_from_zone_start(bio) != wp_offset)) {
+- blk_zone_wplug_bio_io_error(zwplug, bio);
+- continue;
+- }
+-
+- wp_offset += bio_sectors(bio);
+- bio_list_add(&bl, bio);
+- }
+-
+- bio_list_merge(&zwplug->bio_list, &bl);
+-}
+-
+-static inline void disk_zone_wplug_set_error(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
++static void disk_zone_wplug_set_wp_offset(struct gendisk *disk,
++ struct blk_zone_wplug *zwplug,
++ unsigned int wp_offset)
+ {
+- unsigned long flags;
++ lockdep_assert_held(&zwplug->lock);
+
+- if (zwplug->flags & BLK_ZONE_WPLUG_ERROR)
+- return;
++ /* Update the zone write pointer and abort all plugged BIOs. */
++ zwplug->flags &= ~BLK_ZONE_WPLUG_NEED_WP_UPDATE;
++ zwplug->wp_offset = wp_offset;
++ disk_zone_wplug_abort(zwplug);
+
+ /*
+- * At this point, we already have a reference on the zone write plug.
+- * However, since we are going to add the plug to the disk zone write
+- * plugs work list, increase its reference count. This reference will
+- * be dropped in disk_zone_wplugs_work() once the error state is
+- * handled, or in disk_zone_wplug_clear_error() if the zone is reset or
+- * finished.
++ * The zone write plug now has no BIO plugged: remove it from the
++ * hash table so that it cannot be seen. The plug will be freed
++ * when the last reference is dropped.
+ */
+- zwplug->flags |= BLK_ZONE_WPLUG_ERROR;
+- atomic_inc(&zwplug->ref);
+-
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+- list_add_tail(&zwplug->link, &disk->zone_wplugs_err_list);
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
++ if (disk_should_remove_zone_wplug(disk, zwplug))
++ disk_remove_zone_wplug(disk, zwplug);
+ }
+
+-static inline void disk_zone_wplug_clear_error(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
++static unsigned int blk_zone_wp_offset(struct blk_zone *zone)
+ {
+- unsigned long flags;
+-
+- if (!(zwplug->flags & BLK_ZONE_WPLUG_ERROR))
+- return;
+-
+- /*
+- * We are racing with the error handling work which drops the reference
+- * on the zone write plug after handling the error state. So remove the
+- * plug from the error list and drop its reference count only if the
+- * error handling has not yet started, that is, if the zone write plug
+- * is still listed.
+- */
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+- if (!list_empty(&zwplug->link)) {
+- list_del_init(&zwplug->link);
+- zwplug->flags &= ~BLK_ZONE_WPLUG_ERROR;
+- disk_put_zone_wplug(zwplug);
++ switch (zone->cond) {
++ case BLK_ZONE_COND_IMP_OPEN:
++ case BLK_ZONE_COND_EXP_OPEN:
++ case BLK_ZONE_COND_CLOSED:
++ return zone->wp - zone->start;
++ case BLK_ZONE_COND_FULL:
++ return zone->len;
++ case BLK_ZONE_COND_EMPTY:
++ return 0;
++ case BLK_ZONE_COND_NOT_WP:
++ case BLK_ZONE_COND_OFFLINE:
++ case BLK_ZONE_COND_READONLY:
++ default:
++ /*
++ * Conventional, offline and read-only zones do not have a valid
++ * write pointer.
++ */
++ return UINT_MAX;
+ }
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+ }
+
+-/*
+- * Set a zone write plug write pointer offset to either 0 (zone reset case)
+- * or to the zone size (zone finish case). This aborts all plugged BIOs, which
+- * is fine to do as doing a zone reset or zone finish while writes are in-flight
+- * is a mistake from the user which will most likely cause all plugged BIOs to
+- * fail anyway.
+- */
+-static void disk_zone_wplug_set_wp_offset(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug,
+- unsigned int wp_offset)
++static void disk_zone_wplug_sync_wp_offset(struct gendisk *disk,
++ struct blk_zone *zone)
+ {
++ struct blk_zone_wplug *zwplug;
+ unsigned long flags;
+
+- spin_lock_irqsave(&zwplug->lock, flags);
+-
+- /*
+- * Make sure that a BIO completion or another zone reset or finish
+- * operation has not already removed the plug from the hash table.
+- */
+- if (zwplug->flags & BLK_ZONE_WPLUG_UNHASHED) {
+- spin_unlock_irqrestore(&zwplug->lock, flags);
++ zwplug = disk_get_zone_wplug(disk, zone->start);
++ if (!zwplug)
+ return;
+- }
+
+- /* Update the zone write pointer and abort all plugged BIOs. */
+- zwplug->wp_offset = wp_offset;
+- disk_zone_wplug_abort(zwplug);
++ spin_lock_irqsave(&zwplug->lock, flags);
++ if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE)
++ disk_zone_wplug_set_wp_offset(disk, zwplug,
++ blk_zone_wp_offset(zone));
++ spin_unlock_irqrestore(&zwplug->lock, flags);
+
+- /*
+- * Updating the write pointer offset puts back the zone
+- * in a good state. So clear the error flag and decrement the
+- * error count if we were in error state.
+- */
+- disk_zone_wplug_clear_error(disk, zwplug);
++ disk_put_zone_wplug(zwplug);
++}
+
+- /*
+- * The zone write plug now has no BIO plugged: remove it from the
+- * hash table so that it cannot be seen. The plug will be freed
+- * when the last reference is dropped.
+- */
+- if (disk_should_remove_zone_wplug(disk, zwplug))
+- disk_remove_zone_wplug(disk, zwplug);
++static int disk_zone_sync_wp_offset(struct gendisk *disk, sector_t sector)
++{
++ struct disk_report_zones_cb_args args = {
++ .disk = disk,
++ };
+
+- spin_unlock_irqrestore(&zwplug->lock, flags);
++ return disk->fops->report_zones(disk, sector, 1,
++ disk_report_zones_cb, &args);
+ }
+
+ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+@@ -713,6 +695,7 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+ sector_t sector = bio->bi_iter.bi_sector;
+ struct blk_zone_wplug *zwplug;
++ unsigned long flags;
+
+ /* Conventional zones cannot be reset nor finished. */
+ if (disk_zone_is_conv(disk, sector)) {
+@@ -720,6 +703,15 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+ return true;
+ }
+
++ /*
++ * No-wait reset or finish BIOs do not make much sense as the callers
++ * issue these as blocking operations in most cases. To avoid issues
++ * the BIO execution potentially failing with BLK_STS_AGAIN, warn about
++ * REQ_NOWAIT being set and ignore that flag.
++ */
++ if (WARN_ON_ONCE(bio->bi_opf & REQ_NOWAIT))
++ bio->bi_opf &= ~REQ_NOWAIT;
++
+ /*
+ * If we have a zone write plug, set its write pointer offset to 0
+ * (reset case) or to the zone size (finish case). This will abort all
+@@ -729,7 +721,9 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+ */
+ zwplug = disk_get_zone_wplug(disk, sector);
+ if (zwplug) {
++ spin_lock_irqsave(&zwplug->lock, flags);
+ disk_zone_wplug_set_wp_offset(disk, zwplug, wp_offset);
++ spin_unlock_irqrestore(&zwplug->lock, flags);
+ disk_put_zone_wplug(zwplug);
+ }
+
+@@ -740,6 +734,7 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+ {
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+ struct blk_zone_wplug *zwplug;
++ unsigned long flags;
+ sector_t sector;
+
+ /*
+@@ -751,7 +746,9 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+ sector += disk->queue->limits.chunk_sectors) {
+ zwplug = disk_get_zone_wplug(disk, sector);
+ if (zwplug) {
++ spin_lock_irqsave(&zwplug->lock, flags);
+ disk_zone_wplug_set_wp_offset(disk, zwplug, 0);
++ spin_unlock_irqrestore(&zwplug->lock, flags);
+ disk_put_zone_wplug(zwplug);
+ }
+ }
+@@ -759,9 +756,25 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+ return false;
+ }
+
+-static inline void blk_zone_wplug_add_bio(struct blk_zone_wplug *zwplug,
+- struct bio *bio, unsigned int nr_segs)
++static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
++ struct blk_zone_wplug *zwplug)
+ {
++ /*
++ * Take a reference on the zone write plug and schedule the submission
++ * of the next plugged BIO. blk_zone_wplug_bio_work() will release the
++ * reference we take here.
++ */
++ WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED));
++ refcount_inc(&zwplug->ref);
++ queue_work(disk->zone_wplugs_wq, &zwplug->bio_work);
++}
++
++static inline void disk_zone_wplug_add_bio(struct gendisk *disk,
++ struct blk_zone_wplug *zwplug,
++ struct bio *bio, unsigned int nr_segs)
++{
++ bool schedule_bio_work = false;
++
+ /*
+ * Grab an extra reference on the BIO request queue usage counter.
+ * This reference will be reused to submit a request for the BIO for
+@@ -777,6 +790,16 @@ static inline void blk_zone_wplug_add_bio(struct blk_zone_wplug *zwplug,
+ */
+ bio_clear_polled(bio);
+
++ /*
++ * REQ_NOWAIT BIOs are always handled using the zone write plug BIO
++ * work, which can block. So clear the REQ_NOWAIT flag and schedule the
++ * work if this is the first BIO we are plugging.
++ */
++ if (bio->bi_opf & REQ_NOWAIT) {
++ schedule_bio_work = !(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED);
++ bio->bi_opf &= ~REQ_NOWAIT;
++ }
++
+ /*
+ * Reuse the poll cookie field to store the number of segments when
+ * split to the hardware limits.
+@@ -790,6 +813,11 @@ static inline void blk_zone_wplug_add_bio(struct blk_zone_wplug *zwplug,
+ * at the tail of the list to preserve the sequential write order.
+ */
+ bio_list_add(&zwplug->bio_list, bio);
++
++ zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
++
++ if (schedule_bio_work)
++ disk_zone_wplug_schedule_bio_work(disk, zwplug);
+ }
+
+ /*
+@@ -902,13 +930,23 @@ static bool blk_zone_wplug_prepare_bio(struct blk_zone_wplug *zwplug,
+ {
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+
++ /*
++ * If we lost track of the zone write pointer due to a write error,
++ * the user must either execute a report zones, reset the zone or finish
++ * the zone to recover a reliable write pointer position. Fail BIOs if the
++ * user did not do that as we cannot handle emulated zone append
++ * otherwise.
++ */
++ if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE)
++ return false;
++
+ /*
+ * Check that the user is not attempting to write to a full zone.
+ * We know such BIO will fail, and that would potentially overflow our
+ * write pointer offset beyond the end of the zone.
+ */
+ if (disk_zone_wplug_is_full(disk, zwplug))
+- goto err;
++ return false;
+
+ if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
+ /*
+@@ -927,24 +965,18 @@ static bool blk_zone_wplug_prepare_bio(struct blk_zone_wplug *zwplug,
+ bio_set_flag(bio, BIO_EMULATES_ZONE_APPEND);
+ } else {
+ /*
+- * Check for non-sequential writes early because we avoid a
+- * whole lot of error handling trouble if we don't send it off
+- * to the driver.
++ * Check for non-sequential writes early as we know that BIOs
++ * with a start sector not aligned to the zone write pointer
++ * will fail.
+ */
+ if (bio_offset_from_zone_start(bio) != zwplug->wp_offset)
+- goto err;
++ return false;
+ }
+
+ /* Advance the zone write pointer offset. */
+ zwplug->wp_offset += bio_sectors(bio);
+
+ return true;
+-
+-err:
+- /* We detected an invalid write BIO: schedule error recovery. */
+- disk_zone_wplug_set_error(disk, zwplug);
+- kblockd_schedule_work(&disk->zone_wplugs_work);
+- return false;
+ }
+
+ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+@@ -983,7 +1015,10 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+
+ zwplug = disk_get_and_lock_zone_wplug(disk, sector, gfp_mask, &flags);
+ if (!zwplug) {
+- bio_io_error(bio);
++ if (bio->bi_opf & REQ_NOWAIT)
++ bio_wouldblock_error(bio);
++ else
++ bio_io_error(bio);
+ return true;
+ }
+
+@@ -991,18 +1026,20 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+ bio_set_flag(bio, BIO_ZONE_WRITE_PLUGGING);
+
+ /*
+- * If the zone is already plugged or has a pending error, add the BIO
+- * to the plug BIO list. Otherwise, plug and let the BIO execute.
++ * If the zone is already plugged, add the BIO to the plug BIO list.
++ * Do the same for REQ_NOWAIT BIOs to ensure that we will not see a
++ * BLK_STS_AGAIN failure if we let the BIO execute.
++ * Otherwise, plug and let the BIO execute.
+ */
+- if (zwplug->flags & BLK_ZONE_WPLUG_BUSY)
++ if ((zwplug->flags & BLK_ZONE_WPLUG_PLUGGED) ||
++ (bio->bi_opf & REQ_NOWAIT))
+ goto plug;
+
+- /*
+- * If an error is detected when preparing the BIO, add it to the BIO
+- * list so that error recovery can deal with it.
+- */
+- if (!blk_zone_wplug_prepare_bio(zwplug, bio))
+- goto plug;
++ if (!blk_zone_wplug_prepare_bio(zwplug, bio)) {
++ spin_unlock_irqrestore(&zwplug->lock, flags);
++ bio_io_error(bio);
++ return true;
++ }
+
+ zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
+
+@@ -1011,8 +1048,7 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+ return false;
+
+ plug:
+- zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
+- blk_zone_wplug_add_bio(zwplug, bio, nr_segs);
++ disk_zone_wplug_add_bio(disk, zwplug, bio, nr_segs);
+
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+
+@@ -1096,19 +1132,6 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ }
+ EXPORT_SYMBOL_GPL(blk_zone_plug_bio);
+
+-static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
+-{
+- /*
+- * Take a reference on the zone write plug and schedule the submission
+- * of the next plugged BIO. blk_zone_wplug_bio_work() will release the
+- * reference we take here.
+- */
+- WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED));
+- atomic_inc(&zwplug->ref);
+- queue_work(disk->zone_wplugs_wq, &zwplug->bio_work);
+-}
+-
+ static void disk_zone_wplug_unplug_bio(struct gendisk *disk,
+ struct blk_zone_wplug *zwplug)
+ {
+@@ -1116,16 +1139,6 @@ static void disk_zone_wplug_unplug_bio(struct gendisk *disk,
+
+ spin_lock_irqsave(&zwplug->lock, flags);
+
+- /*
+- * If we had an error, schedule error recovery. The recovery work
+- * will restart submission of plugged BIOs.
+- */
+- if (zwplug->flags & BLK_ZONE_WPLUG_ERROR) {
+- spin_unlock_irqrestore(&zwplug->lock, flags);
+- kblockd_schedule_work(&disk->zone_wplugs_work);
+- return;
+- }
+-
+ /* Schedule submission of the next plugged BIO if we have one. */
+ if (!bio_list_empty(&zwplug->bio_list)) {
+ disk_zone_wplug_schedule_bio_work(disk, zwplug);
+@@ -1168,12 +1181,13 @@ void blk_zone_write_plug_bio_endio(struct bio *bio)
+ }
+
+ /*
+- * If the BIO failed, mark the plug as having an error to trigger
+- * recovery.
++ * If the BIO failed, abort all plugged BIOs and mark the plug as
++ * needing a write pointer update.
+ */
+ if (bio->bi_status != BLK_STS_OK) {
+ spin_lock_irqsave(&zwplug->lock, flags);
+- disk_zone_wplug_set_error(disk, zwplug);
++ disk_zone_wplug_abort(zwplug);
++ zwplug->flags |= BLK_ZONE_WPLUG_NEED_WP_UPDATE;
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+ }
+
+@@ -1229,6 +1243,7 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ */
+ spin_lock_irqsave(&zwplug->lock, flags);
+
++again:
+ bio = bio_list_pop(&zwplug->bio_list);
+ if (!bio) {
+ zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+@@ -1237,10 +1252,8 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ }
+
+ if (!blk_zone_wplug_prepare_bio(zwplug, bio)) {
+- /* Error recovery will decide what to do with the BIO. */
+- bio_list_add_head(&zwplug->bio_list, bio);
+- spin_unlock_irqrestore(&zwplug->lock, flags);
+- goto put_zwplug;
++ blk_zone_wplug_bio_io_error(zwplug, bio);
++ goto again;
+ }
+
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+@@ -1262,120 +1275,6 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ disk_put_zone_wplug(zwplug);
+ }
+
+-static unsigned int blk_zone_wp_offset(struct blk_zone *zone)
+-{
+- switch (zone->cond) {
+- case BLK_ZONE_COND_IMP_OPEN:
+- case BLK_ZONE_COND_EXP_OPEN:
+- case BLK_ZONE_COND_CLOSED:
+- return zone->wp - zone->start;
+- case BLK_ZONE_COND_FULL:
+- return zone->len;
+- case BLK_ZONE_COND_EMPTY:
+- return 0;
+- case BLK_ZONE_COND_NOT_WP:
+- case BLK_ZONE_COND_OFFLINE:
+- case BLK_ZONE_COND_READONLY:
+- default:
+- /*
+- * Conventional, offline and read-only zones do not have a valid
+- * write pointer.
+- */
+- return UINT_MAX;
+- }
+-}
+-
+-static int blk_zone_wplug_report_zone_cb(struct blk_zone *zone,
+- unsigned int idx, void *data)
+-{
+- struct blk_zone *zonep = data;
+-
+- *zonep = *zone;
+- return 0;
+-}
+-
+-static void disk_zone_wplug_handle_error(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
+-{
+- sector_t zone_start_sector =
+- bdev_zone_sectors(disk->part0) * zwplug->zone_no;
+- unsigned int noio_flag;
+- struct blk_zone zone;
+- unsigned long flags;
+- int ret;
+-
+- /* Get the current zone information from the device. */
+- noio_flag = memalloc_noio_save();
+- ret = disk->fops->report_zones(disk, zone_start_sector, 1,
+- blk_zone_wplug_report_zone_cb, &zone);
+- memalloc_noio_restore(noio_flag);
+-
+- spin_lock_irqsave(&zwplug->lock, flags);
+-
+- /*
+- * A zone reset or finish may have cleared the error already. In such
+- * case, do nothing as the report zones may have seen the "old" write
+- * pointer value before the reset/finish operation completed.
+- */
+- if (!(zwplug->flags & BLK_ZONE_WPLUG_ERROR))
+- goto unlock;
+-
+- zwplug->flags &= ~BLK_ZONE_WPLUG_ERROR;
+-
+- if (ret != 1) {
+- /*
+- * We failed to get the zone information, meaning that something
+- * is likely really wrong with the device. Abort all remaining
+- * plugged BIOs as otherwise we could endup waiting forever on
+- * plugged BIOs to complete if there is a queue freeze on-going.
+- */
+- disk_zone_wplug_abort(zwplug);
+- goto unplug;
+- }
+-
+- /* Update the zone write pointer offset. */
+- zwplug->wp_offset = blk_zone_wp_offset(&zone);
+- disk_zone_wplug_abort_unaligned(disk, zwplug);
+-
+- /* Restart BIO submission if we still have any BIO left. */
+- if (!bio_list_empty(&zwplug->bio_list)) {
+- disk_zone_wplug_schedule_bio_work(disk, zwplug);
+- goto unlock;
+- }
+-
+-unplug:
+- zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+- if (disk_should_remove_zone_wplug(disk, zwplug))
+- disk_remove_zone_wplug(disk, zwplug);
+-
+-unlock:
+- spin_unlock_irqrestore(&zwplug->lock, flags);
+-}
+-
+-static void disk_zone_wplugs_work(struct work_struct *work)
+-{
+- struct gendisk *disk =
+- container_of(work, struct gendisk, zone_wplugs_work);
+- struct blk_zone_wplug *zwplug;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+-
+- while (!list_empty(&disk->zone_wplugs_err_list)) {
+- zwplug = list_first_entry(&disk->zone_wplugs_err_list,
+- struct blk_zone_wplug, link);
+- list_del_init(&zwplug->link);
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+-
+- disk_zone_wplug_handle_error(disk, zwplug);
+- disk_put_zone_wplug(zwplug);
+-
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+- }
+-
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+-}
+-
+ static inline unsigned int disk_zone_wplugs_hash_size(struct gendisk *disk)
+ {
+ return 1U << disk->zone_wplugs_hash_bits;
+@@ -1384,8 +1283,6 @@ static inline unsigned int disk_zone_wplugs_hash_size(struct gendisk *disk)
+ void disk_init_zone_resources(struct gendisk *disk)
+ {
+ spin_lock_init(&disk->zone_wplugs_lock);
+- INIT_LIST_HEAD(&disk->zone_wplugs_err_list);
+- INIT_WORK(&disk->zone_wplugs_work, disk_zone_wplugs_work);
+ }
+
+ /*
+@@ -1450,7 +1347,7 @@ static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
+ while (!hlist_empty(&disk->zone_wplugs_hash[i])) {
+ zwplug = hlist_entry(disk->zone_wplugs_hash[i].first,
+ struct blk_zone_wplug, node);
+- atomic_inc(&zwplug->ref);
++ refcount_inc(&zwplug->ref);
+ disk_remove_zone_wplug(disk, zwplug);
+ disk_put_zone_wplug(zwplug);
+ }
+@@ -1484,8 +1381,6 @@ void disk_free_zone_resources(struct gendisk *disk)
+ if (!disk->zone_wplugs_pool)
+ return;
+
+- cancel_work_sync(&disk->zone_wplugs_work);
+-
+ if (disk->zone_wplugs_wq) {
+ destroy_workqueue(disk->zone_wplugs_wq);
+ disk->zone_wplugs_wq = NULL;
+@@ -1682,6 +1577,8 @@ static int blk_revalidate_seq_zone(struct blk_zone *zone, unsigned int idx,
+ if (!disk->zone_wplugs_hash)
+ return 0;
+
++ disk_zone_wplug_sync_wp_offset(disk, zone);
++
+ wp_offset = blk_zone_wp_offset(zone);
+ if (!wp_offset || wp_offset >= zone->capacity)
+ return 0;
+@@ -1818,6 +1715,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ memalloc_noio_restore(noio_flag);
+ return ret;
+ }
++
+ ret = disk->fops->report_zones(disk, 0, UINT_MAX,
+ blk_revalidate_zone_cb, &args);
+ if (!ret) {
+@@ -1854,6 +1752,48 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ }
+ EXPORT_SYMBOL_GPL(blk_revalidate_disk_zones);
+
++/**
++ * blk_zone_issue_zeroout - zero-fill a block range in a zone
++ * @bdev: blockdev to write
++ * @sector: start sector
++ * @nr_sects: number of sectors to write
++ * @gfp_mask: memory allocation flags (for bio_alloc)
++ *
++ * Description:
++ * Zero-fill a block range in a zone (@sector must be equal to the zone write
++ * pointer), handling potential errors due to the (initially unknown) lack of
++ * hardware offload (See blkdev_issue_zeroout()).
++ */
++int blk_zone_issue_zeroout(struct block_device *bdev, sector_t sector,
++ sector_t nr_sects, gfp_t gfp_mask)
++{
++ int ret;
++
++ if (WARN_ON_ONCE(!bdev_is_zoned(bdev)))
++ return -EIO;
++
++ ret = blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
++ BLKDEV_ZERO_NOFALLBACK);
++ if (ret != -EOPNOTSUPP)
++ return ret;
++
++ /*
++ * The failed call to blkdev_issue_zeroout() advanced the zone write
++ * pointer. Undo this using a report zone to update the zone write
++ * pointer to the correct current value.
++ */
++ ret = disk_zone_sync_wp_offset(bdev->bd_disk, sector);
++ if (ret != 1)
++ return ret < 0 ? ret : -EIO;
++
++ /*
++ * Retry without BLKDEV_ZERO_NOFALLBACK to force the fallback to a
++ * regular write with zero-pages.
++ */
++ return blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask, 0);
++}
++EXPORT_SYMBOL_GPL(blk_zone_issue_zeroout);
++
+ #ifdef CONFIG_BLK_DEBUG_FS
+
+ int queue_zone_wplugs_show(void *data, struct seq_file *m)
+@@ -1876,7 +1816,7 @@ int queue_zone_wplugs_show(void *data, struct seq_file *m)
+ spin_lock_irqsave(&zwplug->lock, flags);
+ zwp_zone_no = zwplug->zone_no;
+ zwp_flags = zwplug->flags;
+- zwp_ref = atomic_read(&zwplug->ref);
++ zwp_ref = refcount_read(&zwplug->ref);
+ zwp_wp_offset = zwplug->wp_offset;
+ zwp_bio_list_size = bio_list_size(&zwplug->bio_list);
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+diff --git a/drivers/acpi/acpica/evxfregn.c b/drivers/acpi/acpica/evxfregn.c
+index 95f78383bbdba1..bff2d099f4691e 100644
+--- a/drivers/acpi/acpica/evxfregn.c
++++ b/drivers/acpi/acpica/evxfregn.c
+@@ -232,8 +232,6 @@ acpi_remove_address_space_handler(acpi_handle device,
+
+ /* Now we can delete the handler object */
+
+- acpi_os_release_mutex(handler_obj->address_space.
+- context_mutex);
+ acpi_ut_remove_reference(handler_obj);
+ goto unlock_and_exit;
+ }
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 5429ec9ef06f06..a5d47819b3a4e2 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -454,8 +454,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ if (cmd_rc)
+ *cmd_rc = -EINVAL;
+
+- if (cmd == ND_CMD_CALL)
++ if (cmd == ND_CMD_CALL) {
++ if (!buf || buf_len < sizeof(*call_pkg))
++ return -EINVAL;
++
+ call_pkg = buf;
++ }
++
+ func = cmd_to_func(nfit_mem, cmd, call_pkg, &family);
+ if (func < 0)
+ return func;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 7fe842dae1ec05..821867de43bea3 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -250,6 +250,9 @@ static bool acpi_decode_space(struct resource_win *win,
+ switch (addr->resource_type) {
+ case ACPI_MEMORY_RANGE:
+ acpi_dev_memresource_flags(res, len, wp);
++
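++ /*
++ * addr->info is a union and its "mem" member is only valid for
++ * memory ranges, so the caching attribute has to be checked here
++ * rather than after the switch.
++ */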
++ if (addr->info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
++ res->flags |= IORESOURCE_PREFETCH;
+ break;
+ case ACPI_IO_RANGE:
+ acpi_dev_ioresource_flags(res, len, iodec,
+@@ -265,9 +268,6 @@ static bool acpi_decode_space(struct resource_win *win,
+ if (addr->producer_consumer == ACPI_PRODUCER)
+ res->flags |= IORESOURCE_WINDOW;
+
+- if (addr->info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
+- res->flags |= IORESOURCE_PREFETCH;
+-
+ return !(res->flags & IORESOURCE_DISABLED);
+ }
+
+diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
+index 63ef7bb073ce03..596c6d294da906 100644
+--- a/drivers/ata/sata_highbank.c
++++ b/drivers/ata/sata_highbank.c
+@@ -348,6 +348,7 @@ static int highbank_initialize_phys(struct device *dev, void __iomem *addr)
+ phy_nodes[phy] = phy_data.np;
+ cphy_base[phy] = of_iomap(phy_nodes[phy], 0);
+ if (cphy_base[phy] == NULL) {
++ of_node_put(phy_data.np);
+ return 0;
+ }
+ phy_count += 1;
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 480e4adba9faa6..85e99641eaae02 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -395,6 +395,7 @@ int btmtk_process_coredump(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct btmtk_data *data = hci_get_priv(hdev);
+ int err;
++ bool complete = false;
+
+ if (!IS_ENABLED(CONFIG_DEV_COREDUMP)) {
+ kfree_skb(skb);
+@@ -416,19 +417,22 @@ int btmtk_process_coredump(struct hci_dev *hdev, struct sk_buff *skb)
+ fallthrough;
+ case HCI_DEVCOREDUMP_ACTIVE:
+ default:
++ /* Mediatek coredump data would be more than MTK_COREDUMP_NUM */
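++ /*
++ * Check for the end pattern before hci_devcd_append(): that call
++ * consumes the skb, so its data must not be inspected afterwards.
++ */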
++ if (data->cd_info.cnt >= MTK_COREDUMP_NUM &&
++ skb->len > MTK_COREDUMP_END_LEN)
++ if (!memcmp((char *)&skb->data[skb->len - MTK_COREDUMP_END_LEN],
++ MTK_COREDUMP_END, MTK_COREDUMP_END_LEN - 1))
++ complete = true;
++
+ err = hci_devcd_append(hdev, skb);
+ if (err < 0)
+ break;
+ data->cd_info.cnt++;
+
+- /* Mediatek coredump data would be more than MTK_COREDUMP_NUM */
+- if (data->cd_info.cnt > MTK_COREDUMP_NUM &&
+- skb->len > MTK_COREDUMP_END_LEN)
+- if (!memcmp((char *)&skb->data[skb->len - MTK_COREDUMP_END_LEN],
+- MTK_COREDUMP_END, MTK_COREDUMP_END_LEN - 1)) {
+- bt_dev_info(hdev, "Mediatek coredump end");
+- hci_devcd_complete(hdev);
+- }
++ if (complete) {
++ bt_dev_info(hdev, "Mediatek coredump end");
++ hci_devcd_complete(hdev);
++ }
+
+ break;
+ }
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index bc21b292144926..62a62eaba2aad8 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -92,6 +92,7 @@ static const u32 slic_base[] = { 100000000, 3125000 };
+ static const u32 npu_base[] = { 333000000, 400000000, 500000000 };
+ /* EN7581 */
+ static const u32 emi7581_base[] = { 540000000, 480000000, 400000000, 300000000 };
++static const u32 bus7581_base[] = { 600000000, 540000000 };
+ static const u32 npu7581_base[] = { 800000000, 750000000, 720000000, 600000000 };
+ static const u32 crypto_base[] = { 540000000, 480000000 };
+
+@@ -227,8 +228,8 @@ static const struct en_clk_desc en7581_base_clks[] = {
+ .base_reg = REG_BUS_CLK_DIV_SEL,
+ .base_bits = 1,
+ .base_shift = 8,
+- .base_values = bus_base,
+- .n_base_values = ARRAY_SIZE(bus_base),
++ .base_values = bus7581_base,
++ .n_base_values = ARRAY_SIZE(bus7581_base),
+
+ .div_bits = 3,
+ .div_shift = 0,
+diff --git a/drivers/crypto/hisilicon/debugfs.c b/drivers/crypto/hisilicon/debugfs.c
+index 1b9b7bccdeff08..45e130b901eb5e 100644
+--- a/drivers/crypto/hisilicon/debugfs.c
++++ b/drivers/crypto/hisilicon/debugfs.c
+@@ -192,7 +192,7 @@ static int qm_sqc_dump(struct hisi_qm *qm, char *s, char *name)
+
+ down_read(&qm->qps_lock);
+ if (qm->sqc) {
+- memcpy(&sqc, qm->sqc + qp_id * sizeof(struct qm_sqc), sizeof(struct qm_sqc));
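++ /*
++ * qm->sqc is a struct qm_sqc pointer, so adding qp_id already
++ * scales by sizeof(struct qm_sqc); an extra sizeof() factor
++ * would index far past the intended entry.
++ */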
++ memcpy(&sqc, qm->sqc + qp_id, sizeof(struct qm_sqc));
+ sqc.base_h = cpu_to_le32(QM_XQC_ADDR_MASK);
+ sqc.base_l = cpu_to_le32(QM_XQC_ADDR_MASK);
+ dump_show(qm, &sqc, sizeof(struct qm_sqc), "SOFT SQC");
+@@ -229,7 +229,7 @@ static int qm_cqc_dump(struct hisi_qm *qm, char *s, char *name)
+
+ down_read(&qm->qps_lock);
+ if (qm->cqc) {
+- memcpy(&cqc, qm->cqc + qp_id * sizeof(struct qm_cqc), sizeof(struct qm_cqc));
++ memcpy(&cqc, qm->cqc + qp_id, sizeof(struct qm_cqc));
+ cqc.base_h = cpu_to_le32(QM_XQC_ADDR_MASK);
+ cqc.base_l = cpu_to_le32(QM_XQC_ADDR_MASK);
+ dump_show(qm, &cqc, sizeof(struct qm_cqc), "SOFT CQC");
+diff --git a/drivers/gpio/gpio-graniterapids.c b/drivers/gpio/gpio-graniterapids.c
+index f2e911a3d2ca02..ad6a045fd3d2d2 100644
+--- a/drivers/gpio/gpio-graniterapids.c
++++ b/drivers/gpio/gpio-graniterapids.c
+@@ -32,12 +32,14 @@
+ #define GNR_PINS_PER_REG 32
+ #define GNR_NUM_REGS DIV_ROUND_UP(GNR_NUM_PINS, GNR_PINS_PER_REG)
+
+-#define GNR_CFG_BAR 0x00
++#define GNR_CFG_PADBAR 0x00
+ #define GNR_CFG_LOCK_OFFSET 0x04
+-#define GNR_GPI_STATUS_OFFSET 0x20
++#define GNR_GPI_STATUS_OFFSET 0x14
+ #define GNR_GPI_ENABLE_OFFSET 0x24
+
+-#define GNR_CFG_DW_RX_MASK GENMASK(25, 22)
++#define GNR_CFG_DW_HOSTSW_MODE BIT(27)
++#define GNR_CFG_DW_RX_MASK GENMASK(23, 22)
++#define GNR_CFG_DW_INTSEL_MASK GENMASK(21, 14)
+ #define GNR_CFG_DW_RX_DISABLE FIELD_PREP(GNR_CFG_DW_RX_MASK, 2)
+ #define GNR_CFG_DW_RX_EDGE FIELD_PREP(GNR_CFG_DW_RX_MASK, 1)
+ #define GNR_CFG_DW_RX_LEVEL FIELD_PREP(GNR_CFG_DW_RX_MASK, 0)
+@@ -50,6 +52,7 @@
+ * struct gnr_gpio - Intel Granite Rapids-D vGPIO driver state
+ * @gc: GPIO controller interface
+ * @reg_base: base address of the GPIO registers
++ * @pad_base: base address of the vGPIO pad configuration registers
+ * @ro_bitmap: bitmap of read-only pins
+ * @lock: guard the registers
+ * @pad_backup: backup of the register state for suspend
+@@ -57,6 +60,7 @@
+ struct gnr_gpio {
+ struct gpio_chip gc;
+ void __iomem *reg_base;
++ void __iomem *pad_base;
+ DECLARE_BITMAP(ro_bitmap, GNR_NUM_PINS);
+ raw_spinlock_t lock;
+ u32 pad_backup[];
+@@ -65,7 +69,7 @@ struct gnr_gpio {
+ static void __iomem *gnr_gpio_get_padcfg_addr(const struct gnr_gpio *priv,
+ unsigned int gpio)
+ {
+- return priv->reg_base + gpio * sizeof(u32);
++ return priv->pad_base + gpio * sizeof(u32);
+ }
+
+ static int gnr_gpio_configure_line(struct gpio_chip *gc, unsigned int gpio,
+@@ -88,6 +92,20 @@ static int gnr_gpio_configure_line(struct gpio_chip *gc, unsigned int gpio,
+ return 0;
+ }
+
++static int gnr_gpio_request(struct gpio_chip *gc, unsigned int gpio)
++{
++ struct gnr_gpio *priv = gpiochip_get_data(gc);
++ u32 dw;
++
++ dw = readl(gnr_gpio_get_padcfg_addr(priv, gpio));
++ if (!(dw & GNR_CFG_DW_HOSTSW_MODE)) {
++ dev_warn(gc->parent, "GPIO %u is not owned by host", gpio);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
+ static int gnr_gpio_get(struct gpio_chip *gc, unsigned int gpio)
+ {
+ const struct gnr_gpio *priv = gpiochip_get_data(gc);
+@@ -139,6 +157,7 @@ static int gnr_gpio_direction_output(struct gpio_chip *gc, unsigned int gpio, in
+
+ static const struct gpio_chip gnr_gpio_chip = {
+ .owner = THIS_MODULE,
++ .request = gnr_gpio_request,
+ .get = gnr_gpio_get,
+ .set = gnr_gpio_set,
+ .get_direction = gnr_gpio_get_direction,
+@@ -166,7 +185,7 @@ static void gnr_gpio_irq_ack(struct irq_data *d)
+ guard(raw_spinlock_irqsave)(&priv->lock);
+
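++ /* The GPI status register is write-1-to-clear: set the bit to ack. */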
+ reg = readl(addr);
+- reg &= ~BIT(bit_idx);
++ reg |= BIT(bit_idx);
+ writel(reg, addr);
+ }
+
+@@ -209,10 +228,18 @@ static void gnr_gpio_irq_unmask(struct irq_data *d)
+ static int gnr_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t pin = irqd_to_hwirq(d);
+- u32 mask = GNR_CFG_DW_RX_MASK;
++ struct gnr_gpio *priv = gpiochip_get_data(gc);
++ irq_hw_number_t hwirq = irqd_to_hwirq(d);
++ u32 reg;
+ u32 set;
+
++ /* Allow interrupts only if Interrupt Select field is non-zero */
++ reg = readl(gnr_gpio_get_padcfg_addr(priv, hwirq));
++ if (!(reg & GNR_CFG_DW_INTSEL_MASK)) {
++ dev_dbg(gc->parent, "GPIO %lu cannot be used as IRQ", hwirq);
++ return -EPERM;
++ }
++
+ /* Falling edge and level low triggers not supported by the GPIO controller */
+ switch (type) {
+ case IRQ_TYPE_NONE:
+@@ -230,10 +257,11 @@ static int gnr_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ return -EINVAL;
+ }
+
+- return gnr_gpio_configure_line(gc, pin, mask, set);
++ return gnr_gpio_configure_line(gc, hwirq, GNR_CFG_DW_RX_MASK, set);
+ }
+
+ static const struct irq_chip gnr_gpio_irq_chip = {
++ .name = "gpio-graniterapids",
+ .irq_ack = gnr_gpio_irq_ack,
+ .irq_mask = gnr_gpio_irq_mask,
+ .irq_unmask = gnr_gpio_irq_unmask,
+@@ -291,6 +319,7 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+ struct gnr_gpio *priv;
+ void __iomem *regs;
+ int irq, ret;
++ u32 offset;
+
+ priv = devm_kzalloc(dev, struct_size(priv, pad_backup, num_backup_pins), GFP_KERNEL);
+ if (!priv)
+@@ -302,6 +331,10 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+ if (IS_ERR(regs))
+ return PTR_ERR(regs);
+
++ priv->reg_base = regs;
++ offset = readl(priv->reg_base + GNR_CFG_PADBAR);
++ priv->pad_base = priv->reg_base + offset;
++
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+@@ -311,8 +344,6 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "failed to request interrupt\n");
+
+- priv->reg_base = regs + readl(regs + GNR_CFG_BAR);
+-
+ gnr_gpio_init_pin_ro_bits(dev, priv->reg_base + GNR_CFG_LOCK_OFFSET,
+ priv->ro_bitmap);
+
+@@ -324,7 +355,6 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+
+ girq = &priv->gc.irq;
+ gpio_irq_chip_set_chip(girq, &gnr_gpio_irq_chip);
+- girq->chip->name = dev_name(dev);
+ girq->parent_handler = NULL;
+ girq->num_parents = 0;
+ girq->parents = NULL;
+diff --git a/drivers/gpio/gpio-ljca.c b/drivers/gpio/gpio-ljca.c
+index dfec9fbfc7a9bd..c2a9b425397441 100644
+--- a/drivers/gpio/gpio-ljca.c
++++ b/drivers/gpio/gpio-ljca.c
+@@ -82,9 +82,9 @@ static int ljca_gpio_config(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id,
+ int ret;
+
+ mutex_lock(&ljca_gpio->trans_lock);
++ packet->num = 1;
+ packet->item[0].index = gpio_id;
+ packet->item[0].value = config | ljca_gpio->connect_mode[gpio_id];
+- packet->num = 1;
+
+ ret = ljca_transfer(ljca_gpio->ljca, LJCA_GPIO_CONFIG, (u8 *)packet,
+ struct_size(packet, item, packet->num), NULL, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index d891ab779ca7f5..5df21529b3b13e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1801,13 +1801,18 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+ if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->exec.ticket)
+ return -EINVAL;
+
++	/* Make sure VRAM is allocated contiguously */
+ (*bo)->flags |= AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+- amdgpu_bo_placement_from_domain(*bo, (*bo)->allowed_domains);
+- for (i = 0; i < (*bo)->placement.num_placement; i++)
+- (*bo)->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
+- r = ttm_bo_validate(&(*bo)->tbo, &(*bo)->placement, &ctx);
+- if (r)
+- return r;
++ if ((*bo)->tbo.resource->mem_type == TTM_PL_VRAM &&
++ !((*bo)->tbo.resource->placement & TTM_PL_FLAG_CONTIGUOUS)) {
++
++ amdgpu_bo_placement_from_domain(*bo, (*bo)->allowed_domains);
++ for (i = 0; i < (*bo)->placement.num_placement; i++)
++ (*bo)->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
++ r = ttm_bo_validate(&(*bo)->tbo, &(*bo)->placement, &ctx);
++ if (r)
++ return r;
++ }
+
+ return amdgpu_ttm_alloc_gart(&(*bo)->tbo);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+index 31fd30dcd593ba..65bb26215e867a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+@@ -551,6 +551,8 @@ static void amdgpu_uvd_force_into_uvd_segment(struct amdgpu_bo *abo)
+ for (i = 0; i < abo->placement.num_placement; ++i) {
+ abo->placements[i].fpfn = 0 >> PAGE_SHIFT;
+ abo->placements[i].lpfn = (256 * 1024 * 1024) >> PAGE_SHIFT;
++ if (abo->placements[i].mem_type == TTM_PL_VRAM)
++ abo->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 6005280f5f38f0..8d2562d0f143c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -674,12 +674,8 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ pasid_mapping_needed &= adev->gmc.gmc_funcs->emit_pasid_mapping &&
+ ring->funcs->emit_wreg;
+
+- if (adev->gfx.enable_cleaner_shader &&
+- ring->funcs->emit_cleaner_shader &&
+- job->enforce_isolation)
+- ring->funcs->emit_cleaner_shader(ring);
+-
+- if (!vm_flush_needed && !gds_switch_needed && !need_pipe_sync)
++ if (!vm_flush_needed && !gds_switch_needed && !need_pipe_sync &&
++ !(job->enforce_isolation && !job->vmid))
+ return 0;
+
+ amdgpu_ring_ib_begin(ring);
+@@ -690,6 +686,11 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ if (need_pipe_sync)
+ amdgpu_ring_emit_pipeline_sync(ring);
+
++ if (adev->gfx.enable_cleaner_shader &&
++ ring->funcs->emit_cleaner_shader &&
++ job->enforce_isolation)
++ ring->funcs->emit_cleaner_shader(ring);
++
+ if (vm_flush_needed) {
+ trace_amdgpu_vm_flush(ring, job->vmid, job->vm_pd_addr);
+ amdgpu_ring_emit_vm_flush(ring, job->vmid, job->vm_pd_addr);
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+index 6068b784dc6938..9a30b8c10838c1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+@@ -1289,7 +1289,7 @@ static int uvd_v7_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+ struct amdgpu_job *job,
+ struct amdgpu_ib *ib)
+ {
+- struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
++ struct amdgpu_ring *ring = amdgpu_job_ring(job);
+ unsigned i;
+
+ /* No patching necessary for the first instance */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 8de61cc524c943..d2993594c848ad 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1422,6 +1422,7 @@ int kfd_parse_crat_table(void *crat_image, struct list_head *device_list,
+
+
+ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
++ bool cache_line_size_missing,
+ struct kfd_gpu_cache_info *pcache_info)
+ {
+ struct amdgpu_device *adev = kdev->adev;
+@@ -1436,6 +1437,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.gc_num_tcp_per_wpg / 2;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_tcp_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* Scalar L1 Instruction Cache per SQC */
+@@ -1448,6 +1451,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.gc_num_sqc_per_wgp * 2;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_instruction_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* Scalar L1 Data Cache per SQC */
+@@ -1459,6 +1464,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.gc_num_sqc_per_wgp * 2;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_scalar_data_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 64;
+ i++;
+ }
+ /* GL1 Data Cache per SA */
+@@ -1471,7 +1478,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.max_cu_per_sh;
+- pcache_info[i].cache_line_size = 0;
++ if (cache_line_size_missing)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* L2 Data Cache per GPU (Total Tex Cache) */
+@@ -1483,6 +1491,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.max_cu_per_sh;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_tcc_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* L3 Data Cache per GPU */
+@@ -1493,7 +1503,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.max_cu_per_sh;
+- pcache_info[i].cache_line_size = 0;
++ pcache_info[i].cache_line_size = 64;
+ i++;
+ }
+ return i;
+@@ -1568,6 +1578,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ int kfd_get_gpu_cache_info(struct kfd_node *kdev, struct kfd_gpu_cache_info **pcache_info)
+ {
+ int num_of_cache_types = 0;
++ bool cache_line_size_missing = false;
+
+ switch (kdev->adev->asic_type) {
+ case CHIP_KAVERI:
+@@ -1691,10 +1702,17 @@ int kfd_get_gpu_cache_info(struct kfd_node *kdev, struct kfd_gpu_cache_info **pc
+ case IP_VERSION(11, 5, 0):
+ case IP_VERSION(11, 5, 1):
+ case IP_VERSION(11, 5, 2):
++		/* Cacheline size is not available in IP discovery for gc11,
++		 * so let kfd_fill_gpu_cache_info_from_gfx_config() hard-code it.
++ */
++ cache_line_size_missing = true;
++ fallthrough;
+ case IP_VERSION(12, 0, 0):
+ case IP_VERSION(12, 0, 1):
+ num_of_cache_types =
+- kfd_fill_gpu_cache_info_from_gfx_config(kdev->kfd, *pcache_info);
++ kfd_fill_gpu_cache_info_from_gfx_config(kdev->kfd,
++ cache_line_size_missing,
++ *pcache_info);
+ break;
+ default:
+ *pcache_info = dummy_cache_info;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 648f40091aa395..f5b3ed20e891b3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -205,6 +205,21 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+ if (!down_read_trylock(&adev->reset_domain->sem))
+ return -EIO;
+
++ if (!pdd->proc_ctx_cpu_ptr) {
++ r = amdgpu_amdkfd_alloc_gtt_mem(adev,
++ AMDGPU_MES_PROC_CTX_SIZE,
++ &pdd->proc_ctx_bo,
++ &pdd->proc_ctx_gpu_addr,
++ &pdd->proc_ctx_cpu_ptr,
++ false);
++ if (r) {
++ dev_err(adev->dev,
++ "failed to allocate process context bo\n");
++ return r;
++ }
++ memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
++ }
++
+ memset(&queue_input, 0x0, sizeof(struct mes_add_queue_input));
+ queue_input.process_id = qpd->pqm->process->pasid;
+ queue_input.page_table_base_addr = qpd->page_table_base;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index ff34bb1ac9db79..3139987b82b100 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1076,7 +1076,8 @@ static void kfd_process_destroy_pdds(struct kfd_process *p)
+
+ kfd_free_process_doorbells(pdd->dev->kfd, pdd);
+
+- if (pdd->dev->kfd->shared_resources.enable_mes)
++ if (pdd->dev->kfd->shared_resources.enable_mes &&
++ pdd->proc_ctx_cpu_ptr)
+ amdgpu_amdkfd_free_gtt_mem(pdd->dev->adev,
+ &pdd->proc_ctx_bo);
+ /*
+@@ -1610,7 +1611,6 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ struct kfd_process *p)
+ {
+ struct kfd_process_device *pdd = NULL;
+- int retval = 0;
+
+ if (WARN_ON_ONCE(p->n_pdds >= MAX_GPU_INSTANCE))
+ return NULL;
+@@ -1634,21 +1634,6 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ pdd->user_gpu_id = dev->id;
+ atomic64_set(&pdd->evict_duration_counter, 0);
+
+- if (dev->kfd->shared_resources.enable_mes) {
+- retval = amdgpu_amdkfd_alloc_gtt_mem(dev->adev,
+- AMDGPU_MES_PROC_CTX_SIZE,
+- &pdd->proc_ctx_bo,
+- &pdd->proc_ctx_gpu_addr,
+- &pdd->proc_ctx_cpu_ptr,
+- false);
+- if (retval) {
+- dev_err(dev->adev->dev,
+- "failed to allocate process context bo\n");
+- goto err_free_pdd;
+- }
+- memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
+- }
+-
+ p->pdds[p->n_pdds++] = pdd;
+ if (kfd_dbg_is_per_vmid_supported(pdd->dev))
+ pdd->spi_dbg_override = pdd->dev->kfd2kgd->disable_debug_trap(
+@@ -1660,10 +1645,6 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ idr_init(&pdd->alloc_idr);
+
+ return pdd;
+-
+-err_free_pdd:
+- kfree(pdd);
+- return NULL;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 01b960b152743d..ead4317a21680b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -212,13 +212,17 @@ static void pqm_clean_queue_resource(struct process_queue_manager *pqm,
+ void pqm_uninit(struct process_queue_manager *pqm)
+ {
+ struct process_queue_node *pqn, *next;
+- struct kfd_process_device *pdd;
+
+ list_for_each_entry_safe(pqn, next, &pqm->queues, process_queue_list) {
+ if (pqn->q) {
+- pdd = kfd_get_process_device_data(pqn->q->device, pqm->process);
+- kfd_queue_unref_bo_vas(pdd, &pqn->q->properties);
+- kfd_queue_release_buffers(pdd, &pqn->q->properties);
++ struct kfd_process_device *pdd = kfd_get_process_device_data(pqn->q->device,
++ pqm->process);
++ if (pdd) {
++ kfd_queue_unref_bo_vas(pdd, &pqn->q->properties);
++ kfd_queue_release_buffers(pdd, &pqn->q->properties);
++ } else {
++ WARN_ON(!pdd);
++ }
+ pqm_clean_queue_resource(pqm, pqn);
+ }
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index d0e6d051e9cf9f..1aedfafa507f7e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2717,4 +2717,5 @@ void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
+ smu->workload_map = smu_v13_0_7_workload_map;
+ smu->smc_driver_if_version = SMU13_0_7_DRIVER_IF_VERSION;
+ smu_v13_0_set_smu_mailbox_registers(smu);
++ smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ }
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index 1ef56cb07dfbd2..447740d79d3d2e 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -929,7 +929,6 @@ fn draw_all(&mut self, data: impl Iterator<Item = u8>) {
+ /// * `tmp` must be valid for reading and writing for `tmp_size` bytes.
+ ///
+ /// They must remain valid for the duration of the function call.
+-
+ #[no_mangle]
+ pub unsafe extern "C" fn drm_panic_qr_generate(
+ url: *const i8,
+diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c
+index 5d701f48351b96..ec55cb651d4498 100644
+--- a/drivers/gpu/drm/i915/display/intel_color.c
++++ b/drivers/gpu/drm/i915/display/intel_color.c
+@@ -1333,19 +1333,29 @@ static void ilk_load_lut_8(const struct intel_crtc_state *crtc_state,
+ lut = blob->data;
+
+ /*
+- * DSB fails to correctly load the legacy LUT
+- * unless we either write each entry twice,
+- * or use non-posted writes
++ * DSB fails to correctly load the legacy LUT unless
++ * we either write each entry twice when using posted
++ * writes, or we use non-posted writes.
++ *
++ * If palette anti-collision is active during LUT
++ * register writes:
++ * - posted writes simply get dropped and thus the LUT
++ * contents may not be correctly updated
++ * - non-posted writes are blocked and thus the LUT
++ * contents are always correct, but simultaneous CPU
++ * MMIO access will start to fail
++ *
++ * Choose the lesser of two evils and use posted writes.
++ * Using posted writes is also faster, even when having
++ * to write each register twice.
+ */
+- if (crtc_state->dsb_color_vblank)
+- intel_dsb_nonpost_start(crtc_state->dsb_color_vblank);
+-
+- for (i = 0; i < 256; i++)
++ for (i = 0; i < 256; i++) {
+ ilk_lut_write(crtc_state, LGC_PALETTE(pipe, i),
+ i9xx_lut_8(&lut[i]));
+-
+- if (crtc_state->dsb_color_vblank)
+- intel_dsb_nonpost_end(crtc_state->dsb_color_vblank);
++ if (crtc_state->dsb_color_vblank)
++ ilk_lut_write(crtc_state, LGC_PALETTE(pipe, i),
++ i9xx_lut_8(&lut[i]));
++ }
+ }
+
+ static void ilk_load_lut_10(const struct intel_crtc_state *crtc_state,
+diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
+index 6469b9bcf2ec44..082ac72c757a9f 100644
+--- a/drivers/gpu/drm/i915/i915_gpu_error.c
++++ b/drivers/gpu/drm/i915/i915_gpu_error.c
+@@ -1652,9 +1652,21 @@ capture_engine(struct intel_engine_cs *engine,
+ return NULL;
+
+ intel_engine_get_hung_entity(engine, &ce, &rq);
+- if (rq && !i915_request_started(rq))
+- drm_info(&engine->gt->i915->drm, "Got hung context on %s with active request %lld:%lld [0x%04X] not yet started\n",
+- engine->name, rq->fence.context, rq->fence.seqno, ce->guc_id.id);
++ if (rq && !i915_request_started(rq)) {
++ /*
++ * We want to know also what is the guc_id of the context,
++ * but if we don't have the context reference, then skip
++ * printing it.
++ */
++ if (ce)
++ drm_info(&engine->gt->i915->drm,
++ "Got hung context on %s with active request %lld:%lld [0x%04X] not yet started\n",
++ engine->name, rq->fence.context, rq->fence.seqno, ce->guc_id.id);
++ else
++ drm_info(&engine->gt->i915->drm,
++ "Got hung context on %s with active request %lld:%lld not yet started\n",
++ engine->name, rq->fence.context, rq->fence.seqno);
++ }
+
+ if (rq) {
+ capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL);
+diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
+index 762127dd56c538..70a854557e6ec5 100644
+--- a/drivers/gpu/drm/i915/i915_scheduler.c
++++ b/drivers/gpu/drm/i915/i915_scheduler.c
+@@ -506,6 +506,6 @@ int __init i915_scheduler_module_init(void)
+ return 0;
+
+ err_priorities:
+- kmem_cache_destroy(slab_priorities);
++ kmem_cache_destroy(slab_dependencies);
+ return -ENOMEM;
+ }
+diff --git a/drivers/gpu/drm/xe/tests/xe_migrate.c b/drivers/gpu/drm/xe/tests/xe_migrate.c
+index 1a192a2a941b69..3bbdb362d6f0dc 100644
+--- a/drivers/gpu/drm/xe/tests/xe_migrate.c
++++ b/drivers/gpu/drm/xe/tests/xe_migrate.c
+@@ -224,8 +224,8 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_PINNED);
+ if (IS_ERR(tiny)) {
+- KUNIT_FAIL(test, "Failed to allocate fake pt: %li\n",
+- PTR_ERR(pt));
++ KUNIT_FAIL(test, "Failed to allocate tiny fake pt: %li\n",
++ PTR_ERR(tiny));
+ goto free_pt;
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 9d82ea30f4df23..7e385940df0863 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -65,6 +65,14 @@ invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fe
+ __invalidation_fence_signal(xe, fence);
+ }
+
++void xe_gt_tlb_invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence)
++{
++ if (WARN_ON_ONCE(!fence->gt))
++ return;
++
++ __invalidation_fence_signal(gt_to_xe(fence->gt), fence);
++}
++
+ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ {
+ struct xe_gt *gt = container_of(work, struct xe_gt,
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+index f430d5797af701..00b1c6c01e8d95 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+@@ -28,6 +28,7 @@ int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+ void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
+ struct xe_gt_tlb_invalidation_fence *fence,
+ bool stack);
++void xe_gt_tlb_invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence);
+
+ static inline void
+ xe_gt_tlb_invalidation_fence_wait(struct xe_gt_tlb_invalidation_fence *fence)
+diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
+index f27f579f4d85aa..797576690356f2 100644
+--- a/drivers/gpu/drm/xe/xe_pt.c
++++ b/drivers/gpu/drm/xe/xe_pt.c
+@@ -1333,8 +1333,7 @@ static void invalidation_fence_cb(struct dma_fence *fence,
+ queue_work(system_wq, &ifence->work);
+ } else {
+ ifence->base.base.error = ifence->fence->error;
+- dma_fence_signal(&ifence->base.base);
+- dma_fence_put(&ifence->base.base);
++ xe_gt_tlb_invalidation_fence_signal(&ifence->base);
+ }
+ dma_fence_put(ifence->fence);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
+index 440ac572f6e5ef..52969c0909659d 100644
+--- a/drivers/gpu/drm/xe/xe_reg_sr.c
++++ b/drivers/gpu/drm/xe/xe_reg_sr.c
+@@ -26,46 +26,27 @@
+ #include "xe_reg_whitelist.h"
+ #include "xe_rtp_types.h"
+
+-#define XE_REG_SR_GROW_STEP_DEFAULT 16
+-
+ static void reg_sr_fini(struct drm_device *drm, void *arg)
+ {
+ struct xe_reg_sr *sr = arg;
++ struct xe_reg_sr_entry *entry;
++ unsigned long reg;
++
++ xa_for_each(&sr->xa, reg, entry)
++ kfree(entry);
+
+ xa_destroy(&sr->xa);
+- kfree(sr->pool.arr);
+- memset(&sr->pool, 0, sizeof(sr->pool));
+ }
+
+ int xe_reg_sr_init(struct xe_reg_sr *sr, const char *name, struct xe_device *xe)
+ {
+ xa_init(&sr->xa);
+- memset(&sr->pool, 0, sizeof(sr->pool));
+- sr->pool.grow_step = XE_REG_SR_GROW_STEP_DEFAULT;
+ sr->name = name;
+
+ return drmm_add_action_or_reset(&xe->drm, reg_sr_fini, sr);
+ }
+ EXPORT_SYMBOL_IF_KUNIT(xe_reg_sr_init);
+
+-static struct xe_reg_sr_entry *alloc_entry(struct xe_reg_sr *sr)
+-{
+- if (sr->pool.used == sr->pool.allocated) {
+- struct xe_reg_sr_entry *arr;
+-
+- arr = krealloc_array(sr->pool.arr,
+- ALIGN(sr->pool.allocated + 1, sr->pool.grow_step),
+- sizeof(*arr), GFP_KERNEL);
+- if (!arr)
+- return NULL;
+-
+- sr->pool.arr = arr;
+- sr->pool.allocated += sr->pool.grow_step;
+- }
+-
+- return &sr->pool.arr[sr->pool.used++];
+-}
+-
+ static bool compatible_entries(const struct xe_reg_sr_entry *e1,
+ const struct xe_reg_sr_entry *e2)
+ {
+@@ -111,7 +92,7 @@ int xe_reg_sr_add(struct xe_reg_sr *sr,
+ return 0;
+ }
+
+- pentry = alloc_entry(sr);
++ pentry = kmalloc(sizeof(*pentry), GFP_KERNEL);
+ if (!pentry) {
+ ret = -ENOMEM;
+ goto fail;
+diff --git a/drivers/gpu/drm/xe/xe_reg_sr_types.h b/drivers/gpu/drm/xe/xe_reg_sr_types.h
+index ad48a52b824a18..ebe11f237fa26d 100644
+--- a/drivers/gpu/drm/xe/xe_reg_sr_types.h
++++ b/drivers/gpu/drm/xe/xe_reg_sr_types.h
+@@ -20,12 +20,6 @@ struct xe_reg_sr_entry {
+ };
+
+ struct xe_reg_sr {
+- struct {
+- struct xe_reg_sr_entry *arr;
+- unsigned int used;
+- unsigned int allocated;
+- unsigned int grow_step;
+- } pool;
+ struct xarray xa;
+ const char *name;
+
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index c8ec74f089f3d6..6e41ddaa24d636 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -339,7 +339,7 @@ tegra241_cmdqv_get_cmdq(struct arm_smmu_device *smmu,
+ * one CPU at a time can enter the process, while the others
+ * will be spinning at the same lock.
+ */
+- lidx = smp_processor_id() % cmdqv->num_lvcmdqs_per_vintf;
++ lidx = raw_smp_processor_id() % cmdqv->num_lvcmdqs_per_vintf;
+ vcmdq = vintf->lvcmdqs[lidx];
+ if (!vcmdq || !READ_ONCE(vcmdq->enabled))
+ return NULL;
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index e5b89f728ad3b2..09694cca8752df 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -105,12 +105,35 @@ static void cache_tag_unassign(struct dmar_domain *domain, u16 did,
+ spin_unlock_irqrestore(&domain->cache_lock, flags);
+ }
+
++/* domain->qi_batch will be freed in iommu_free_domain() path. */
++static int domain_qi_batch_alloc(struct dmar_domain *domain)
++{
++ unsigned long flags;
++ int ret = 0;
++
++ spin_lock_irqsave(&domain->cache_lock, flags);
++ if (domain->qi_batch)
++ goto out_unlock;
++
++ domain->qi_batch = kzalloc(sizeof(*domain->qi_batch), GFP_ATOMIC);
++ if (!domain->qi_batch)
++ ret = -ENOMEM;
++out_unlock:
++ spin_unlock_irqrestore(&domain->cache_lock, flags);
++
++ return ret;
++}
++
+ static int __cache_tag_assign_domain(struct dmar_domain *domain, u16 did,
+ struct device *dev, ioasid_t pasid)
+ {
+ struct device_domain_info *info = dev_iommu_priv_get(dev);
+ int ret;
+
++ ret = domain_qi_batch_alloc(domain);
++ if (ret)
++ return ret;
++
+ ret = cache_tag_assign(domain, did, dev, pasid, CACHE_TAG_IOTLB);
+ if (ret || !info->ats_enabled)
+ return ret;
+@@ -139,6 +162,10 @@ static int __cache_tag_assign_parent_domain(struct dmar_domain *domain, u16 did,
+ struct device_domain_info *info = dev_iommu_priv_get(dev);
+ int ret;
+
++ ret = domain_qi_batch_alloc(domain);
++ if (ret)
++ return ret;
++
+ ret = cache_tag_assign(domain, did, dev, pasid, CACHE_TAG_NESTING_IOTLB);
+ if (ret || !info->ats_enabled)
+ return ret;
+@@ -190,13 +217,6 @@ int cache_tag_assign_domain(struct dmar_domain *domain,
+ u16 did = domain_get_id_for_dev(domain, dev);
+ int ret;
+
+- /* domain->qi_bach will be freed in iommu_free_domain() path. */
+- if (!domain->qi_batch) {
+- domain->qi_batch = kzalloc(sizeof(*domain->qi_batch), GFP_KERNEL);
+- if (!domain->qi_batch)
+- return -ENOMEM;
+- }
+-
+ ret = __cache_tag_assign_domain(domain, did, dev, pasid);
+ if (ret || domain->domain.type != IOMMU_DOMAIN_NESTED)
+ return ret;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index a167d59101ae2e..cc23cfcdeb2d59 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3372,6 +3372,9 @@ void device_block_translation(struct device *dev)
+ struct intel_iommu *iommu = info->iommu;
+ unsigned long flags;
+
++ if (info->domain)
++ cache_tag_unassign_domain(info->domain, dev, IOMMU_NO_PASID);
++
+ iommu_disable_pci_caps(info);
+ if (!dev_is_real_dma_subdevice(dev)) {
+ if (sm_supported(iommu))
+@@ -3388,7 +3391,6 @@ void device_block_translation(struct device *dev)
+ list_del(&info->link);
+ spin_unlock_irqrestore(&info->domain->lock, flags);
+
+- cache_tag_unassign_domain(info->domain, dev, IOMMU_NO_PASID);
+ domain_detach_iommu(info->domain, iommu);
+ info->domain = NULL;
+ }
+diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
+index d58db9a27e6cfd..76e2c686854871 100644
+--- a/drivers/md/dm-zoned-reclaim.c
++++ b/drivers/md/dm-zoned-reclaim.c
+@@ -76,9 +76,9 @@ static int dmz_reclaim_align_wp(struct dmz_reclaim *zrc, struct dm_zone *zone,
+ * pointer and the requested position.
+ */
+ nr_blocks = block - wp_block;
+- ret = blkdev_issue_zeroout(dev->bdev,
+- dmz_start_sect(zmd, zone) + dmz_blk2sect(wp_block),
+- dmz_blk2sect(nr_blocks), GFP_NOIO, 0);
++ ret = blk_zone_issue_zeroout(dev->bdev,
++ dmz_start_sect(zmd, zone) + dmz_blk2sect(wp_block),
++ dmz_blk2sect(nr_blocks), GFP_NOIO);
+ if (ret) {
+ dmz_dev_err(dev,
+ "Align zone %u wp %llu to %llu (wp+%u) blocks failed %d",
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 15e0f14d0d49de..4d73abae503d1e 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1520,9 +1520,7 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
+ struct slave *slave;
+
+ mask = features;
+-
+- features &= ~NETIF_F_ONE_FOR_ALL;
+- features |= NETIF_F_ALL_FOR_ALL;
++ features = netdev_base_features(features);
+
+ bond_for_each_slave(bond, slave, iter) {
+ features = netdev_increment_features(features,
+@@ -1536,6 +1534,7 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
+
+ #define BOND_VLAN_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_FRAGLIST | NETIF_F_GSO_SOFTWARE | \
++ NETIF_F_GSO_ENCAP_ALL | \
+ NETIF_F_HIGHDMA | NETIF_F_LRO)
+
+ #define BOND_ENC_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+@@ -1564,8 +1563,9 @@ static void bond_compute_features(struct bonding *bond)
+
+ if (!bond_has_slaves(bond))
+ goto done;
+- vlan_features &= NETIF_F_ALL_FOR_ALL;
+- mpls_features &= NETIF_F_ALL_FOR_ALL;
++
++ vlan_features = netdev_base_features(vlan_features);
++ mpls_features = netdev_base_features(mpls_features);
+
+ bond_for_each_slave(bond, slave, iter) {
+ vlan_features = netdev_increment_features(vlan_features,
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 5290f5ad98f392..bf26cd0abf6dd9 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -1098,10 +1098,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x1030, 0x1030),
+ regmap_reg_range(0x1100, 0x1115),
+ regmap_reg_range(0x111a, 0x111f),
+- regmap_reg_range(0x1122, 0x1127),
+- regmap_reg_range(0x112a, 0x112b),
+- regmap_reg_range(0x1136, 0x1139),
+- regmap_reg_range(0x113e, 0x113f),
++ regmap_reg_range(0x1120, 0x112b),
++ regmap_reg_range(0x1134, 0x113b),
++ regmap_reg_range(0x113c, 0x113f),
+ regmap_reg_range(0x1400, 0x1401),
+ regmap_reg_range(0x1403, 0x1403),
+ regmap_reg_range(0x1410, 0x1417),
+@@ -1128,10 +1127,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x2030, 0x2030),
+ regmap_reg_range(0x2100, 0x2115),
+ regmap_reg_range(0x211a, 0x211f),
+- regmap_reg_range(0x2122, 0x2127),
+- regmap_reg_range(0x212a, 0x212b),
+- regmap_reg_range(0x2136, 0x2139),
+- regmap_reg_range(0x213e, 0x213f),
++ regmap_reg_range(0x2120, 0x212b),
++ regmap_reg_range(0x2134, 0x213b),
++ regmap_reg_range(0x213c, 0x213f),
+ regmap_reg_range(0x2400, 0x2401),
+ regmap_reg_range(0x2403, 0x2403),
+ regmap_reg_range(0x2410, 0x2417),
+@@ -1158,10 +1156,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x3030, 0x3030),
+ regmap_reg_range(0x3100, 0x3115),
+ regmap_reg_range(0x311a, 0x311f),
+- regmap_reg_range(0x3122, 0x3127),
+- regmap_reg_range(0x312a, 0x312b),
+- regmap_reg_range(0x3136, 0x3139),
+- regmap_reg_range(0x313e, 0x313f),
++ regmap_reg_range(0x3120, 0x312b),
++ regmap_reg_range(0x3134, 0x313b),
++ regmap_reg_range(0x313c, 0x313f),
+ regmap_reg_range(0x3400, 0x3401),
+ regmap_reg_range(0x3403, 0x3403),
+ regmap_reg_range(0x3410, 0x3417),
+@@ -1188,10 +1185,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x4030, 0x4030),
+ regmap_reg_range(0x4100, 0x4115),
+ regmap_reg_range(0x411a, 0x411f),
+- regmap_reg_range(0x4122, 0x4127),
+- regmap_reg_range(0x412a, 0x412b),
+- regmap_reg_range(0x4136, 0x4139),
+- regmap_reg_range(0x413e, 0x413f),
++ regmap_reg_range(0x4120, 0x412b),
++ regmap_reg_range(0x4134, 0x413b),
++ regmap_reg_range(0x413c, 0x413f),
+ regmap_reg_range(0x4400, 0x4401),
+ regmap_reg_range(0x4403, 0x4403),
+ regmap_reg_range(0x4410, 0x4417),
+@@ -1218,10 +1214,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x5030, 0x5030),
+ regmap_reg_range(0x5100, 0x5115),
+ regmap_reg_range(0x511a, 0x511f),
+- regmap_reg_range(0x5122, 0x5127),
+- regmap_reg_range(0x512a, 0x512b),
+- regmap_reg_range(0x5136, 0x5139),
+- regmap_reg_range(0x513e, 0x513f),
++ regmap_reg_range(0x5120, 0x512b),
++ regmap_reg_range(0x5134, 0x513b),
++ regmap_reg_range(0x513c, 0x513f),
+ regmap_reg_range(0x5400, 0x5401),
+ regmap_reg_range(0x5403, 0x5403),
+ regmap_reg_range(0x5410, 0x5417),
+@@ -1248,10 +1243,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x6030, 0x6030),
+ regmap_reg_range(0x6100, 0x6115),
+ regmap_reg_range(0x611a, 0x611f),
+- regmap_reg_range(0x6122, 0x6127),
+- regmap_reg_range(0x612a, 0x612b),
+- regmap_reg_range(0x6136, 0x6139),
+- regmap_reg_range(0x613e, 0x613f),
++ regmap_reg_range(0x6120, 0x612b),
++ regmap_reg_range(0x6134, 0x613b),
++ regmap_reg_range(0x613c, 0x613f),
+ regmap_reg_range(0x6300, 0x6301),
+ regmap_reg_range(0x6400, 0x6401),
+ regmap_reg_range(0x6403, 0x6403),
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 0102a82e88cc61..940f1b71226d64 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -24,7 +24,7 @@
+ #define VSC9959_NUM_PORTS 6
+
+ #define VSC9959_TAS_GCL_ENTRY_MAX 63
+-#define VSC9959_TAS_MIN_GATE_LEN_NS 33
++#define VSC9959_TAS_MIN_GATE_LEN_NS 35
+ #define VSC9959_VCAP_POLICER_BASE 63
+ #define VSC9959_VCAP_POLICER_MAX 383
+ #define VSC9959_SWITCH_PCI_BAR 4
+@@ -1056,11 +1056,15 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ mdiobus_free(felix->imdio);
+ }
+
+-/* The switch considers any frame (regardless of size) as eligible for
+- * transmission if the traffic class gate is open for at least 33 ns.
++/* The switch considers any frame (regardless of size) as eligible
++ * for transmission if the traffic class gate is open for at least
++ * VSC9959_TAS_MIN_GATE_LEN_NS.
++ *
+ * Overruns are prevented by cropping an interval at the end of the gate time
+- * slot for which egress scheduling is blocked, but we need to still keep 33 ns
+- * available for one packet to be transmitted, otherwise the port tc will hang.
++ * slot for which egress scheduling is blocked, but we need to still keep
++ * VSC9959_TAS_MIN_GATE_LEN_NS available for one packet to be transmitted,
++ * otherwise the port tc will hang.
++ *
+ * This function returns the size of a gate interval that remains available for
+ * setting the guard band, after reserving the space for one egress frame.
+ */
+@@ -1303,7 +1307,8 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+ * per-tc static guard band lengths, so it reduces the
+ * useful gate interval length. Therefore, be careful
+ * to calculate a guard band (and therefore max_sdu)
+- * that still leaves 33 ns available in the time slot.
++ * that still leaves VSC9959_TAS_MIN_GATE_LEN_NS
++ * available in the time slot.
+ */
+ max_sdu = div_u64(remaining_gate_len_ps, picos_per_byte);
+ /* A TC gate may be completely closed, which is a
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 3d9ee91e1f8be0..dafc5a4039cd2c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1518,7 +1518,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ if (TPA_START_IS_IPV6(tpa_start1))
+ tpa_info->gso_type = SKB_GSO_TCPV6;
+ /* RSS profiles 1 and 3 with extract code 0 for inner 4-tuple */
+- else if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP &&
++ else if (!BNXT_CHIP_P4_PLUS(bp) &&
+ TPA_START_HASH_TYPE(tpa_start) == 3)
+ tpa_info->gso_type = SKB_GSO_TCPV6;
+ tpa_info->rss_hash =
+@@ -2212,15 +2212,13 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ if (cmp_type == CMP_TYPE_RX_L2_V3_CMP) {
+ type = bnxt_rss_ext_op(bp, rxcmp);
+ } else {
+- u32 hash_type = RX_CMP_HASH_TYPE(rxcmp);
++ u32 itypes = RX_CMP_ITYPES(rxcmp);
+
+- /* RSS profiles 1 and 3 with extract code 0 for inner
+- * 4-tuple
+- */
+- if (hash_type != 1 && hash_type != 3)
+- type = PKT_HASH_TYPE_L3;
+- else
++ if (itypes == RX_CMP_FLAGS_ITYPE_TCP ||
++ itypes == RX_CMP_FLAGS_ITYPE_UDP)
+ type = PKT_HASH_TYPE_L4;
++ else
++ type = PKT_HASH_TYPE_L3;
+ }
+ skb_set_hash(skb, le32_to_cpu(rxcmp->rx_cmp_rss_hash), type);
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 69231e85140b2e..9e05704d94450e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -267,6 +267,9 @@ struct rx_cmp {
+ (((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_RSS_HASH_TYPE) >>\
+ RX_CMP_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK)
+
++#define RX_CMP_ITYPES(rxcmp) \
++ (le32_to_cpu((rxcmp)->rx_cmp_len_flags_type) & RX_CMP_FLAGS_ITYPES_MASK)
++
+ #define RX_CMP_V3_HASH_TYPE_LEGACY(rxcmp) \
+ ((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_V3_RSS_EXT_OP_LEGACY) >>\
+ RX_CMP_V3_RSS_EXT_OP_LEGACY_SHIFT)
+@@ -378,7 +381,7 @@ struct rx_agg_cmp {
+ u32 rx_agg_cmp_opaque;
+ __le32 rx_agg_cmp_v;
+ #define RX_AGG_CMP_V (1 << 0)
+- #define RX_AGG_CMP_AGG_ID (0xffff << 16)
++ #define RX_AGG_CMP_AGG_ID (0x0fff << 16)
+ #define RX_AGG_CMP_AGG_ID_SHIFT 16
+ __le32 rx_agg_cmp_unused;
+ };
+@@ -416,7 +419,7 @@ struct rx_tpa_start_cmp {
+ #define RX_TPA_START_CMP_V3_RSS_HASH_TYPE_SHIFT 7
+ #define RX_TPA_START_CMP_AGG_ID (0x7f << 25)
+ #define RX_TPA_START_CMP_AGG_ID_SHIFT 25
+- #define RX_TPA_START_CMP_AGG_ID_P5 (0xffff << 16)
++ #define RX_TPA_START_CMP_AGG_ID_P5 (0x0fff << 16)
+ #define RX_TPA_START_CMP_AGG_ID_SHIFT_P5 16
+ #define RX_TPA_START_CMP_METADATA1 (0xf << 28)
+ #define RX_TPA_START_CMP_METADATA1_SHIFT 28
+@@ -540,7 +543,7 @@ struct rx_tpa_end_cmp {
+ #define RX_TPA_END_CMP_PAYLOAD_OFFSET_SHIFT 16
+ #define RX_TPA_END_CMP_AGG_ID (0x7f << 25)
+ #define RX_TPA_END_CMP_AGG_ID_SHIFT 25
+- #define RX_TPA_END_CMP_AGG_ID_P5 (0xffff << 16)
++ #define RX_TPA_END_CMP_AGG_ID_P5 (0x0fff << 16)
+ #define RX_TPA_END_CMP_AGG_ID_SHIFT_P5 16
+
+ __le32 rx_tpa_end_cmp_tsdelta;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+index bbf7641a0fc799..7e13cd69f68a1f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+@@ -2077,7 +2077,7 @@ void t4_idma_monitor(struct adapter *adapter,
+ struct sge_idma_monitor_state *idma,
+ int hz, int ticks);
+ int t4_set_vf_mac_acl(struct adapter *adapter, unsigned int vf,
+- unsigned int naddr, u8 *addr);
++ u8 start, unsigned int naddr, u8 *addr);
+ void t4_tp_pio_read(struct adapter *adap, u32 *buff, u32 nregs,
+ u32 start_index, bool sleep_ok);
+ void t4_tp_tm_pio_read(struct adapter *adap, u32 *buff, u32 nregs,
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 2418645c882373..fb3933fbb8425e 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -3246,7 +3246,7 @@ static int cxgb4_mgmt_set_vf_mac(struct net_device *dev, int vf, u8 *mac)
+
+ dev_info(pi->adapter->pdev_dev,
+ "Setting MAC %pM on VF %d\n", mac, vf);
+- ret = t4_set_vf_mac_acl(adap, vf + 1, 1, mac);
++ ret = t4_set_vf_mac_acl(adap, vf + 1, pi->lport, 1, mac);
+ if (!ret)
+ ether_addr_copy(adap->vfinfo[vf].vf_mac_addr, mac);
+ return ret;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 76de55306c4d01..175bf9b1305888 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -10215,11 +10215,12 @@ int t4_load_cfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
+ * t4_set_vf_mac_acl - Set MAC address for the specified VF
+ * @adapter: The adapter
+ * @vf: one of the VFs instantiated by the specified PF
++ * @start: The start port id associated with specified VF
+ * @naddr: the number of MAC addresses
+ * @addr: the MAC address(es) to be set to the specified VF
+ */
+ int t4_set_vf_mac_acl(struct adapter *adapter, unsigned int vf,
+- unsigned int naddr, u8 *addr)
++ u8 start, unsigned int naddr, u8 *addr)
+ {
+ struct fw_acl_mac_cmd cmd;
+
+@@ -10234,7 +10235,7 @@ int t4_set_vf_mac_acl(struct adapter *adapter, unsigned int vf,
+ cmd.en_to_len16 = cpu_to_be32((unsigned int)FW_LEN16(cmd));
+ cmd.nmac = naddr;
+
+- switch (adapter->pf) {
++ switch (start) {
+ case 3:
+ memcpy(cmd.macaddr3, addr, sizeof(cmd.macaddr3));
+ break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+index 3d74109f82300e..49f22cad92bfd0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+@@ -297,7 +297,9 @@ dr_domain_add_vport_cap(struct mlx5dr_domain *dmn, u16 vport)
+ if (ret) {
+ mlx5dr_dbg(dmn, "Couldn't insert new vport into xarray (%d)\n", ret);
+ kvfree(vport_caps);
+- return ERR_PTR(ret);
++ if (ret == -EBUSY)
++ return ERR_PTR(-EBUSY);
++ return NULL;
+ }
+
+ return vport_caps;
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+index b64c814eac11e8..0c4c75b3682faa 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+@@ -693,12 +693,11 @@ static int sparx5_start(struct sparx5 *sparx5)
+ err = -ENXIO;
+ if (sparx5->fdma_irq >= 0) {
+ if (GCB_CHIP_ID_REV_ID_GET(sparx5->chip_id) > 0)
+- err = devm_request_threaded_irq(sparx5->dev,
+- sparx5->fdma_irq,
+- NULL,
+- sparx5_fdma_handler,
+- IRQF_ONESHOT,
+- "sparx5-fdma", sparx5);
++ err = devm_request_irq(sparx5->dev,
++ sparx5->fdma_irq,
++ sparx5_fdma_handler,
++ 0,
++ "sparx5-fdma", sparx5);
+ if (!err)
+ err = sparx5_fdma_start(sparx5);
+ if (err)
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
+index 062e486c002cf6..672508efce5c29 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
+@@ -1119,7 +1119,7 @@ int sparx5_port_init(struct sparx5 *sparx5,
+ spx5_inst_rmw(DEV10G_MAC_MAXLEN_CFG_MAX_LEN_SET(ETH_MAXLEN),
+ DEV10G_MAC_MAXLEN_CFG_MAX_LEN,
+ devinst,
+- DEV10G_MAC_ENA_CFG(0));
++ DEV10G_MAC_MAXLEN_CFG(0));
+
+ /* Handle Signal Detect in 10G PCS */
+ spx5_inst_wr(PCS10G_BR_PCS_SD_CFG_SD_POL_SET(sd_pol) |
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index ca4ed58f1206dd..0c2ba2fa88c466 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -1315,7 +1315,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
+ GFP_KERNEL);
+ if (!gc->irq_contexts) {
+ err = -ENOMEM;
+- goto free_irq_vector;
++ goto free_irq_array;
+ }
+
+ for (i = 0; i < nvec; i++) {
+@@ -1372,6 +1372,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
+ gc->max_num_msix = nvec;
+ gc->num_msix_usable = nvec;
+ cpus_read_unlock();
++ kfree(irqs);
+ return 0;
+
+ free_irq:
+@@ -1384,8 +1385,9 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
+ }
+
+ kfree(gc->irq_contexts);
+- kfree(irqs);
+ gc->irq_contexts = NULL;
++free_irq_array:
++ kfree(irqs);
+ free_irq_vector:
+ cpus_read_unlock();
+ pci_free_irq_vectors(pdev);
+diff --git a/drivers/net/ethernet/mscc/ocelot_ptp.c b/drivers/net/ethernet/mscc/ocelot_ptp.c
+index e172638b060102..808ce8e68d3937 100644
+--- a/drivers/net/ethernet/mscc/ocelot_ptp.c
++++ b/drivers/net/ethernet/mscc/ocelot_ptp.c
+@@ -14,6 +14,8 @@
+ #include <soc/mscc/ocelot.h>
+ #include "ocelot.h"
+
++#define OCELOT_PTP_TX_TSTAMP_TIMEOUT (5 * HZ)
++
+ int ocelot_ptp_gettime64(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ {
+ struct ocelot *ocelot = container_of(ptp, struct ocelot, ptp_info);
+@@ -495,6 +497,28 @@ static int ocelot_traps_to_ptp_rx_filter(unsigned int proto)
+ return HWTSTAMP_FILTER_NONE;
+ }
+
++static int ocelot_ptp_tx_type_to_cmd(int tx_type, int *ptp_cmd)
++{
++ switch (tx_type) {
++ case HWTSTAMP_TX_ON:
++ *ptp_cmd = IFH_REW_OP_TWO_STEP_PTP;
++ break;
++ case HWTSTAMP_TX_ONESTEP_SYNC:
++ /* IFH_REW_OP_ONE_STEP_PTP updates the correctionField,
++		 * but what we need to update is the originTimestamp.
++ */
++ *ptp_cmd = IFH_REW_OP_ORIGIN_PTP;
++ break;
++ case HWTSTAMP_TX_OFF:
++ *ptp_cmd = 0;
++ break;
++ default:
++ return -ERANGE;
++ }
++
++ return 0;
++}
++
+ int ocelot_hwstamp_get(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+@@ -521,30 +545,19 @@ EXPORT_SYMBOL(ocelot_hwstamp_get);
+ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
++ int ptp_cmd, old_ptp_cmd = ocelot_port->ptp_cmd;
+ bool l2 = false, l4 = false;
+ struct hwtstamp_config cfg;
++ bool old_l2, old_l4;
+ int err;
+
+ if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
+ return -EFAULT;
+
+ /* Tx type sanity check */
+- switch (cfg.tx_type) {
+- case HWTSTAMP_TX_ON:
+- ocelot_port->ptp_cmd = IFH_REW_OP_TWO_STEP_PTP;
+- break;
+- case HWTSTAMP_TX_ONESTEP_SYNC:
+- /* IFH_REW_OP_ONE_STEP_PTP updates the correctional field, we
+- * need to update the origin time.
+- */
+- ocelot_port->ptp_cmd = IFH_REW_OP_ORIGIN_PTP;
+- break;
+- case HWTSTAMP_TX_OFF:
+- ocelot_port->ptp_cmd = 0;
+- break;
+- default:
+- return -ERANGE;
+- }
++ err = ocelot_ptp_tx_type_to_cmd(cfg.tx_type, &ptp_cmd);
++ if (err)
++ return err;
+
+ switch (cfg.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+@@ -569,13 +582,27 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ return -ERANGE;
+ }
+
++ old_l2 = ocelot_port->trap_proto & OCELOT_PROTO_PTP_L2;
++ old_l4 = ocelot_port->trap_proto & OCELOT_PROTO_PTP_L4;
++
+ err = ocelot_setup_ptp_traps(ocelot, port, l2, l4);
+ if (err)
+ return err;
+
++ ocelot_port->ptp_cmd = ptp_cmd;
++
+ cfg.rx_filter = ocelot_traps_to_ptp_rx_filter(ocelot_port->trap_proto);
+
+- return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
++ if (copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg))) {
++ err = -EFAULT;
++ goto out_restore_ptp_traps;
++ }
++
++ return 0;
++out_restore_ptp_traps:
++ ocelot_setup_ptp_traps(ocelot, port, old_l2, old_l4);
++ ocelot_port->ptp_cmd = old_ptp_cmd;
++ return err;
+ }
+ EXPORT_SYMBOL(ocelot_hwstamp_set);
+
+@@ -603,34 +630,87 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
+ }
+ EXPORT_SYMBOL(ocelot_get_ts_info);
+
+-static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
+- struct sk_buff *clone)
++static struct sk_buff *ocelot_port_dequeue_ptp_tx_skb(struct ocelot *ocelot,
++ int port, u8 ts_id,
++ u32 seqid)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+- unsigned long flags;
++ struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
++ struct ptp_header *hdr;
+
+- spin_lock_irqsave(&ocelot->ts_id_lock, flags);
++ spin_lock(&ocelot->ts_id_lock);
+
+- if (ocelot_port->ptp_skbs_in_flight == OCELOT_MAX_PTP_ID ||
+- ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
+- spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
+- return -EBUSY;
++ skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) {
++ if (OCELOT_SKB_CB(skb)->ts_id != ts_id)
++ continue;
++
++ /* Check that the timestamp ID is for the expected PTP
++ * sequenceId. We don't have to test ptp_parse_header() against
++ * NULL, because we've pre-validated the packet's ptp_class.
++ */
++ hdr = ptp_parse_header(skb, OCELOT_SKB_CB(skb)->ptp_class);
++ if (seqid != ntohs(hdr->sequence_id))
++ continue;
++
++ __skb_unlink(skb, &ocelot_port->tx_skbs);
++ ocelot->ptp_skbs_in_flight--;
++ skb_match = skb;
++ break;
+ }
+
+- skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+- /* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
+- OCELOT_SKB_CB(clone)->ts_id = ocelot_port->ts_id;
++ spin_unlock(&ocelot->ts_id_lock);
+
+- ocelot_port->ts_id++;
+- if (ocelot_port->ts_id == OCELOT_MAX_PTP_ID)
+- ocelot_port->ts_id = 0;
++ return skb_match;
++}
++
++static int ocelot_port_queue_ptp_tx_skb(struct ocelot *ocelot, int port,
++ struct sk_buff *clone)
++{
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
++ DECLARE_BITMAP(ts_id_in_flight, OCELOT_MAX_PTP_ID);
++ struct sk_buff *skb, *skb_tmp;
++ unsigned long n;
++
++ spin_lock(&ocelot->ts_id_lock);
++
++ /* To get a better chance of acquiring a timestamp ID, first flush the
++ * stale packets still waiting in the TX timestamping queue. They are
++ * probably lost.
++ */
++ skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) {
++ if (time_before(OCELOT_SKB_CB(skb)->ptp_tx_time +
++ OCELOT_PTP_TX_TSTAMP_TIMEOUT, jiffies)) {
++ dev_warn_ratelimited(ocelot->dev,
++ "port %d invalidating stale timestamp ID %u which seems lost\n",
++ port, OCELOT_SKB_CB(skb)->ts_id);
++ __skb_unlink(skb, &ocelot_port->tx_skbs);
++ kfree_skb(skb);
++ ocelot->ptp_skbs_in_flight--;
++ } else {
++ __set_bit(OCELOT_SKB_CB(skb)->ts_id, ts_id_in_flight);
++ }
++ }
++
++ if (ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
++ spin_unlock(&ocelot->ts_id_lock);
++ return -EBUSY;
++ }
++
++ n = find_first_zero_bit(ts_id_in_flight, OCELOT_MAX_PTP_ID);
++ if (n == OCELOT_MAX_PTP_ID) {
++ spin_unlock(&ocelot->ts_id_lock);
++ return -EBUSY;
++ }
+
+- ocelot_port->ptp_skbs_in_flight++;
++ /* Found an available timestamp ID, use it */
++ OCELOT_SKB_CB(clone)->ts_id = n;
++ OCELOT_SKB_CB(clone)->ptp_tx_time = jiffies;
+ ocelot->ptp_skbs_in_flight++;
++ __skb_queue_tail(&ocelot_port->tx_skbs, clone);
+
+- skb_queue_tail(&ocelot_port->tx_skbs, clone);
++ spin_unlock(&ocelot->ts_id_lock);
+
+- spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
++ dev_dbg_ratelimited(ocelot->dev, "port %d timestamp id %lu\n", port, n);
+
+ return 0;
+ }
+@@ -687,10 +767,14 @@ int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port,
+ if (!(*clone))
+ return -ENOMEM;
+
+- err = ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
+- if (err)
++ /* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
++ err = ocelot_port_queue_ptp_tx_skb(ocelot, port, *clone);
++ if (err) {
++ kfree_skb(*clone);
+ return err;
++ }
+
++ skb_shinfo(*clone)->tx_flags |= SKBTX_IN_PROGRESS;
+ OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
+ OCELOT_SKB_CB(*clone)->ptp_class = ptp_class;
+ }
+@@ -726,28 +810,15 @@ static void ocelot_get_hwtimestamp(struct ocelot *ocelot,
+ spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
+ }
+
+-static bool ocelot_validate_ptp_skb(struct sk_buff *clone, u16 seqid)
+-{
+- struct ptp_header *hdr;
+-
+- hdr = ptp_parse_header(clone, OCELOT_SKB_CB(clone)->ptp_class);
+- if (WARN_ON(!hdr))
+- return false;
+-
+- return seqid == ntohs(hdr->sequence_id);
+-}
+-
+ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ {
+ int budget = OCELOT_PTP_QUEUE_SZ;
+
+ while (budget--) {
+- struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
+ struct skb_shared_hwtstamps shhwtstamps;
+ u32 val, id, seqid, txport;
+- struct ocelot_port *port;
++ struct sk_buff *skb_match;
+ struct timespec64 ts;
+- unsigned long flags;
+
+ val = ocelot_read(ocelot, SYS_PTP_STATUS);
+
+@@ -762,36 +833,14 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ txport = SYS_PTP_STATUS_PTP_MESS_TXPORT_X(val);
+ seqid = SYS_PTP_STATUS_PTP_MESS_SEQ_ID(val);
+
+- port = ocelot->ports[txport];
+-
+- spin_lock(&ocelot->ts_id_lock);
+- port->ptp_skbs_in_flight--;
+- ocelot->ptp_skbs_in_flight--;
+- spin_unlock(&ocelot->ts_id_lock);
+-
+ /* Retrieve its associated skb */
+-try_again:
+- spin_lock_irqsave(&port->tx_skbs.lock, flags);
+-
+- skb_queue_walk_safe(&port->tx_skbs, skb, skb_tmp) {
+- if (OCELOT_SKB_CB(skb)->ts_id != id)
+- continue;
+- __skb_unlink(skb, &port->tx_skbs);
+- skb_match = skb;
+- break;
+- }
+-
+- spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
+-
+- if (WARN_ON(!skb_match))
+- continue;
+-
+- if (!ocelot_validate_ptp_skb(skb_match, seqid)) {
+- dev_err_ratelimited(ocelot->dev,
+- "port %d received stale TX timestamp for seqid %d, discarding\n",
+- txport, seqid);
+- dev_kfree_skb_any(skb);
+- goto try_again;
++ skb_match = ocelot_port_dequeue_ptp_tx_skb(ocelot, txport, id,
++ seqid);
++ if (!skb_match) {
++ dev_warn_ratelimited(ocelot->dev,
++ "port %d received TX timestamp (seqid %d, ts id %u) for packet previously declared stale\n",
++ txport, seqid, id);
++ goto next_ts;
+ }
+
+ /* Get the h/w timestamp */
+@@ -802,7 +851,7 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+ skb_complete_tx_timestamp(skb_match, &shhwtstamps);
+
+- /* Next ts */
++next_ts:
+ ocelot_write(ocelot, SYS_PTP_NXT_PTP_NXT, SYS_PTP_NXT);
+ }
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 8f7ce6b51a1c9b..6b4b40c6e1fe00 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -53,7 +53,7 @@ MODULE_PARM_DESC(qcaspi_burst_len, "Number of data bytes per burst. Use 1-5000."
+
+ #define QCASPI_PLUGGABLE_MIN 0
+ #define QCASPI_PLUGGABLE_MAX 1
+-static int qcaspi_pluggable = QCASPI_PLUGGABLE_MIN;
++static int qcaspi_pluggable = QCASPI_PLUGGABLE_MAX;
+ module_param(qcaspi_pluggable, int, 0);
+ MODULE_PARM_DESC(qcaspi_pluggable, "Pluggable SPI connection (yes/no).");
+
+@@ -812,7 +812,6 @@ qcaspi_netdev_init(struct net_device *dev)
+
+ dev->mtu = QCAFRM_MAX_MTU;
+ dev->type = ARPHRD_ETHER;
+- qca->clkspeed = qcaspi_clkspeed;
+ qca->burst_len = qcaspi_burst_len;
+ qca->spi_thread = NULL;
+ qca->buffer_size = (QCAFRM_MAX_MTU + VLAN_ETH_HLEN + QCAFRM_HEADER_LEN +
+@@ -903,17 +902,15 @@ qca_spi_probe(struct spi_device *spi)
+ legacy_mode = of_property_read_bool(spi->dev.of_node,
+ "qca,legacy-mode");
+
+- if (qcaspi_clkspeed == 0) {
+- if (spi->max_speed_hz)
+- qcaspi_clkspeed = spi->max_speed_hz;
+- else
+- qcaspi_clkspeed = QCASPI_CLK_SPEED;
+- }
++ if (qcaspi_clkspeed)
++ spi->max_speed_hz = qcaspi_clkspeed;
++ else if (!spi->max_speed_hz)
++ spi->max_speed_hz = QCASPI_CLK_SPEED;
+
+- if ((qcaspi_clkspeed < QCASPI_CLK_SPEED_MIN) ||
+- (qcaspi_clkspeed > QCASPI_CLK_SPEED_MAX)) {
+- dev_err(&spi->dev, "Invalid clkspeed: %d\n",
+- qcaspi_clkspeed);
++ if (spi->max_speed_hz < QCASPI_CLK_SPEED_MIN ||
++ spi->max_speed_hz > QCASPI_CLK_SPEED_MAX) {
++ dev_err(&spi->dev, "Invalid clkspeed: %u\n",
++ spi->max_speed_hz);
+ return -EINVAL;
+ }
+
+@@ -938,14 +935,13 @@ qca_spi_probe(struct spi_device *spi)
+ return -EINVAL;
+ }
+
+- dev_info(&spi->dev, "ver=%s, clkspeed=%d, burst_len=%d, pluggable=%d\n",
++ dev_info(&spi->dev, "ver=%s, clkspeed=%u, burst_len=%d, pluggable=%d\n",
+ QCASPI_DRV_VERSION,
+- qcaspi_clkspeed,
++ spi->max_speed_hz,
+ qcaspi_burst_len,
+ qcaspi_pluggable);
+
+ spi->mode = SPI_MODE_3;
+- spi->max_speed_hz = qcaspi_clkspeed;
+ if (spi_setup(spi) < 0) {
+ dev_err(&spi->dev, "Unable to setup SPI device\n");
+ return -EFAULT;
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
+index 8f4808695e8206..0831cefc58b898 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.h
++++ b/drivers/net/ethernet/qualcomm/qca_spi.h
+@@ -89,7 +89,6 @@ struct qcaspi {
+ #endif
+
+ /* user configurable options */
+- u32 clkspeed;
+ u8 legacy_mode;
+ u16 burst_len;
+ };
+diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
+index b80aa27a7214d4..09117110e3dd2a 100644
+--- a/drivers/net/ethernet/renesas/rswitch.c
++++ b/drivers/net/ethernet/renesas/rswitch.c
+@@ -862,13 +862,10 @@ static void rswitch_tx_free(struct net_device *ndev)
+ struct rswitch_ext_desc *desc;
+ struct sk_buff *skb;
+
+- for (; rswitch_get_num_cur_queues(gq) > 0;
+- gq->dirty = rswitch_next_queue_index(gq, false, 1)) {
+- desc = &gq->tx_ring[gq->dirty];
+- if ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY)
+- break;
+-
++ desc = &gq->tx_ring[gq->dirty];
++ while ((desc->desc.die_dt & DT_MASK) == DT_FEMPTY) {
+ dma_rmb();
++
+ skb = gq->skbs[gq->dirty];
+ if (skb) {
+ rdev->ndev->stats.tx_packets++;
+@@ -879,7 +876,10 @@ static void rswitch_tx_free(struct net_device *ndev)
+ dev_kfree_skb_any(gq->skbs[gq->dirty]);
+ gq->skbs[gq->dirty] = NULL;
+ }
++
+ desc->desc.die_dt = DT_EEMPTY;
++ gq->dirty = rswitch_next_queue_index(gq, false, 1);
++ desc = &gq->tx_ring[gq->dirty];
+ }
+ }
+
+@@ -908,8 +908,10 @@ static int rswitch_poll(struct napi_struct *napi, int budget)
+
+ if (napi_complete_done(napi, budget - quota)) {
+ spin_lock_irqsave(&priv->lock, flags);
+- rswitch_enadis_data_irq(priv, rdev->tx_queue->index, true);
+- rswitch_enadis_data_irq(priv, rdev->rx_queue->index, true);
++ if (test_bit(rdev->port, priv->opened_ports)) {
++ rswitch_enadis_data_irq(priv, rdev->tx_queue->index, true);
++ rswitch_enadis_data_irq(priv, rdev->rx_queue->index, true);
++ }
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+@@ -1114,25 +1116,40 @@ static int rswitch_etha_wait_link_verification(struct rswitch_etha *etha)
+
+ static void rswitch_rmac_setting(struct rswitch_etha *etha, const u8 *mac)
+ {
+- u32 val;
++ u32 pis, lsc;
+
+ rswitch_etha_write_mac_address(etha, mac);
+
++ switch (etha->phy_interface) {
++ case PHY_INTERFACE_MODE_SGMII:
++ pis = MPIC_PIS_GMII;
++ break;
++ case PHY_INTERFACE_MODE_USXGMII:
++ case PHY_INTERFACE_MODE_5GBASER:
++ pis = MPIC_PIS_XGMII;
++ break;
++ default:
++ pis = FIELD_GET(MPIC_PIS, ioread32(etha->addr + MPIC));
++ break;
++ }
++
+ switch (etha->speed) {
+ case 100:
+- val = MPIC_LSC_100M;
++ lsc = MPIC_LSC_100M;
+ break;
+ case 1000:
+- val = MPIC_LSC_1G;
++ lsc = MPIC_LSC_1G;
+ break;
+ case 2500:
+- val = MPIC_LSC_2_5G;
++ lsc = MPIC_LSC_2_5G;
+ break;
+ default:
+- return;
++ lsc = FIELD_GET(MPIC_LSC, ioread32(etha->addr + MPIC));
++ break;
+ }
+
+- iowrite32(MPIC_PIS_GMII | val, etha->addr + MPIC);
++ rswitch_modify(etha->addr, MPIC, MPIC_PIS | MPIC_LSC,
++ FIELD_PREP(MPIC_PIS, pis) | FIELD_PREP(MPIC_LSC, lsc));
+ }
+
+ static void rswitch_etha_enable_mii(struct rswitch_etha *etha)
+@@ -1538,20 +1555,20 @@ static int rswitch_open(struct net_device *ndev)
+ struct rswitch_device *rdev = netdev_priv(ndev);
+ unsigned long flags;
+
+- phy_start(ndev->phydev);
++ if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
++ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
+
+ napi_enable(&rdev->napi);
+- netif_start_queue(ndev);
+
+ spin_lock_irqsave(&rdev->priv->lock, flags);
++ bitmap_set(rdev->priv->opened_ports, rdev->port, 1);
+ rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, true);
+ rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, true);
+ spin_unlock_irqrestore(&rdev->priv->lock, flags);
+
+- if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
+- iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
++ phy_start(ndev->phydev);
+
+- bitmap_set(rdev->priv->opened_ports, rdev->port, 1);
++ netif_start_queue(ndev);
+
+ return 0;
+ };
+@@ -1563,7 +1580,16 @@ static int rswitch_stop(struct net_device *ndev)
+ unsigned long flags;
+
+ netif_tx_stop_all_queues(ndev);
++
++ phy_stop(ndev->phydev);
++
++ spin_lock_irqsave(&rdev->priv->lock, flags);
++ rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false);
++ rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false);
+ bitmap_clear(rdev->priv->opened_ports, rdev->port, 1);
++ spin_unlock_irqrestore(&rdev->priv->lock, flags);
++
++ napi_disable(&rdev->napi);
+
+ if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
+ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
+@@ -1576,14 +1602,6 @@ static int rswitch_stop(struct net_device *ndev)
+ kfree(ts_info);
+ }
+
+- spin_lock_irqsave(&rdev->priv->lock, flags);
+- rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false);
+- rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false);
+- spin_unlock_irqrestore(&rdev->priv->lock, flags);
+-
+- phy_stop(ndev->phydev);
+- napi_disable(&rdev->napi);
+-
+ return 0;
+ };
+
+@@ -1681,8 +1699,11 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd
+ if (dma_mapping_error(ndev->dev.parent, dma_addr_orig))
+ goto err_kfree;
+
+- gq->skbs[gq->cur] = skb;
+- gq->unmap_addrs[gq->cur] = dma_addr_orig;
++	/* Store the skb at the last descriptor to avoid freeing the skb before hardware completes the send */
++ gq->skbs[(gq->cur + nr_desc - 1) % gq->ring_size] = skb;
++ gq->unmap_addrs[(gq->cur + nr_desc - 1) % gq->ring_size] = dma_addr_orig;
++
++ dma_wmb();
+
+ /* DT_FSTART should be set at last. So, this is reverse order. */
+ for (i = nr_desc; i-- > 0; ) {
+@@ -1694,14 +1715,13 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd
+ goto err_unmap;
+ }
+
+- wmb(); /* gq->cur must be incremented after die_dt was set */
+-
+ gq->cur = rswitch_next_queue_index(gq, true, nr_desc);
+ rswitch_modify(rdev->addr, GWTRC(gq->index), 0, BIT(gq->index % 32));
+
+ return ret;
+
+ err_unmap:
++ gq->skbs[(gq->cur + nr_desc - 1) % gq->ring_size] = NULL;
+ dma_unmap_single(ndev->dev.parent, dma_addr_orig, skb->len, DMA_TO_DEVICE);
+
+ err_kfree:
+@@ -1889,7 +1909,6 @@ static int rswitch_device_alloc(struct rswitch_private *priv, unsigned int index
+ rdev->np_port = rswitch_get_port_node(rdev);
+ rdev->disabled = !rdev->np_port;
+ err = of_get_ethdev_address(rdev->np_port, ndev);
+- of_node_put(rdev->np_port);
+ if (err) {
+ if (is_valid_ether_addr(rdev->etha->mac_addr))
+ eth_hw_addr_set(ndev, rdev->etha->mac_addr);
+@@ -1919,6 +1938,7 @@ static int rswitch_device_alloc(struct rswitch_private *priv, unsigned int index
+
+ out_rxdmac:
+ out_get_params:
++ of_node_put(rdev->np_port);
+ netif_napi_del(&rdev->napi);
+ free_netdev(ndev);
+
+@@ -1932,6 +1952,7 @@ static void rswitch_device_free(struct rswitch_private *priv, unsigned int index
+
+ rswitch_txdmac_free(ndev);
+ rswitch_rxdmac_free(ndev);
++ of_node_put(rdev->np_port);
+ netif_napi_del(&rdev->napi);
+ free_netdev(ndev);
+ }
+diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h
+index 72e3ff596d3183..e020800dcc570e 100644
+--- a/drivers/net/ethernet/renesas/rswitch.h
++++ b/drivers/net/ethernet/renesas/rswitch.h
+@@ -724,13 +724,13 @@ enum rswitch_etha_mode {
+
+ #define EAVCC_VEM_SC_TAG (0x3 << 16)
+
+-#define MPIC_PIS_MII 0x00
+-#define MPIC_PIS_GMII 0x02
+-#define MPIC_PIS_XGMII 0x04
+-#define MPIC_LSC_SHIFT 3
+-#define MPIC_LSC_100M (1 << MPIC_LSC_SHIFT)
+-#define MPIC_LSC_1G (2 << MPIC_LSC_SHIFT)
+-#define MPIC_LSC_2_5G (3 << MPIC_LSC_SHIFT)
++#define MPIC_PIS GENMASK(2, 0)
++#define MPIC_PIS_GMII 2
++#define MPIC_PIS_XGMII 4
++#define MPIC_LSC GENMASK(5, 3)
++#define MPIC_LSC_100M 1
++#define MPIC_LSC_1G 2
++#define MPIC_LSC_2_5G 3
+
+ #define MDIO_READ_C45 0x03
+ #define MDIO_WRITE_C45 0x01
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 18191d5a8bd4d3..6ace5a74cddb57 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -983,7 +983,8 @@ static void team_port_disable(struct team *team,
+
+ #define TEAM_VLAN_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_FRAGLIST | NETIF_F_GSO_SOFTWARE | \
+- NETIF_F_HIGHDMA | NETIF_F_LRO)
++ NETIF_F_HIGHDMA | NETIF_F_LRO | \
++ NETIF_F_GSO_ENCAP_ALL)
+
+ #define TEAM_ENC_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_RXCSUM | NETIF_F_GSO_SOFTWARE)
+@@ -991,13 +992,14 @@ static void team_port_disable(struct team *team,
+ static void __team_compute_features(struct team *team)
+ {
+ struct team_port *port;
+- netdev_features_t vlan_features = TEAM_VLAN_FEATURES &
+- NETIF_F_ALL_FOR_ALL;
++ netdev_features_t vlan_features = TEAM_VLAN_FEATURES;
+ netdev_features_t enc_features = TEAM_ENC_FEATURES;
+ unsigned short max_hard_header_len = ETH_HLEN;
+ unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
+ IFF_XMIT_DST_RELEASE_PERM;
+
++ vlan_features = netdev_base_features(vlan_features);
++
+ rcu_read_lock();
+ list_for_each_entry_rcu(port, &team->port_list, list) {
+ vlan_features = netdev_increment_features(vlan_features,
+@@ -2012,8 +2014,7 @@ static netdev_features_t team_fix_features(struct net_device *dev,
+ netdev_features_t mask;
+
+ mask = features;
+- features &= ~NETIF_F_ONE_FOR_ALL;
+- features |= NETIF_F_ALL_FOR_ALL;
++ features = netdev_base_features(features);
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(port, &team->port_list, list) {
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index c897afef0b414c..60027b439021b8 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -502,6 +502,7 @@ struct virtio_net_common_hdr {
+ };
+
+ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
++static void virtnet_sq_free_unused_buf_done(struct virtqueue *vq);
+ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+ struct net_device *dev,
+ unsigned int *xdp_xmit,
+@@ -2898,7 +2899,6 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
+ if (err < 0)
+ goto err_xdp_reg_mem_model;
+
+- netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, qp_index));
+ virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
+ virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
+
+@@ -3166,7 +3166,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
+
+ virtnet_rx_pause(vi, rq);
+
+- err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf);
++ err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf, NULL);
+ if (err)
+ netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
+
+@@ -3229,7 +3229,8 @@ static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
+
+ virtnet_tx_pause(vi, sq);
+
+- err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
++ err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf,
++ virtnet_sq_free_unused_buf_done);
+ if (err)
+ netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
+
+@@ -5997,6 +5998,14 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
+ xdp_return_frame(ptr_to_xdp(buf));
+ }
+
++static void virtnet_sq_free_unused_buf_done(struct virtqueue *vq)
++{
++ struct virtnet_info *vi = vq->vdev->priv;
++ int i = vq2txq(vq);
++
++ netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, i));
++}
++
+ static void free_unused_bufs(struct virtnet_info *vi)
+ {
+ void *buf;
+@@ -6728,11 +6737,20 @@ static int virtnet_probe(struct virtio_device *vdev)
+
+ static void remove_vq_common(struct virtnet_info *vi)
+ {
++ int i;
++
+ virtio_reset_device(vi->vdev);
+
+ /* Free unused buffers in both send and recv, if any. */
+ free_unused_bufs(vi);
+
++ /*
++	 * The rule of thumb is that netdev_tx_reset_queue() should follow
++	 * any skb freeing that is not followed by netdev_tx_completed_queue().
++ */
++ for (i = 0; i < vi->max_queue_pairs; i++)
++ netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, i));
++
+ free_receive_bufs(vi);
+
+ free_receive_page_frags(vi);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index a7a10e716e6517..e96ddaeeeeff52 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -1967,7 +1967,7 @@ void iwl_mvm_channel_switch_error_notif(struct iwl_mvm *mvm,
+ if (csa_err_mask & (CS_ERR_COUNT_ERROR |
+ CS_ERR_LONG_DELAY_AFTER_CS |
+ CS_ERR_TX_BLOCK_TIMER_EXPIRED))
+- ieee80211_channel_switch_disconnect(vif, true);
++ ieee80211_channel_switch_disconnect(vif);
+ rcu_read_unlock();
+ }
+
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 4265c1cd0ff716..63fe51d0e64db3 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -867,7 +867,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ static int xennet_close(struct net_device *dev)
+ {
+ struct netfront_info *np = netdev_priv(dev);
+- unsigned int num_queues = dev->real_num_tx_queues;
++ unsigned int num_queues = np->queues ? dev->real_num_tx_queues : 0;
+ unsigned int i;
+ struct netfront_queue *queue;
+ netif_tx_stop_all_queues(np->netdev);
+@@ -882,6 +882,9 @@ static void xennet_destroy_queues(struct netfront_info *info)
+ {
+ unsigned int i;
+
++ if (!info->queues)
++ return;
++
+ for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
+ struct netfront_queue *queue = &info->queues[i];
+
+diff --git a/drivers/ptp/ptp_kvm_x86.c b/drivers/ptp/ptp_kvm_x86.c
+index 617c8d6706d3d0..6cea4fe39bcfe4 100644
+--- a/drivers/ptp/ptp_kvm_x86.c
++++ b/drivers/ptp/ptp_kvm_x86.c
+@@ -26,7 +26,7 @@ int kvm_arch_ptp_init(void)
+ long ret;
+
+ if (!kvm_para_available())
+- return -ENODEV;
++ return -EOPNOTSUPP;
+
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ p = alloc_page(GFP_KERNEL | __GFP_ZERO);
+@@ -46,14 +46,14 @@ int kvm_arch_ptp_init(void)
+
+ clock_pair_gpa = slow_virt_to_phys(clock_pair);
+ if (!pvclock_get_pvti_cpu0_va()) {
+- ret = -ENODEV;
++ ret = -EOPNOTSUPP;
+ goto err;
+ }
+
+ ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa,
+ KVM_CLOCK_PAIRING_WALLCLOCK);
+ if (ret == -KVM_ENOSYS) {
+- ret = -ENODEV;
++ ret = -EOPNOTSUPP;
+ goto err;
+ }
+
+diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c
+index a8e91d9d028b89..945d2917b91bac 100644
+--- a/drivers/regulator/axp20x-regulator.c
++++ b/drivers/regulator/axp20x-regulator.c
+@@ -371,8 +371,8 @@
+ .ops = &axp20x_ops, \
+ }
+
+-#define AXP_DESC(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
+- _vmask, _ereg, _emask) \
++#define AXP_DESC_DELAY(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
++ _vmask, _ereg, _emask, _ramp_delay) \
+ [_family##_##_id] = { \
+ .name = (_match), \
+ .supply_name = (_supply), \
+@@ -388,9 +388,15 @@
+ .vsel_mask = (_vmask), \
+ .enable_reg = (_ereg), \
+ .enable_mask = (_emask), \
++ .ramp_delay = (_ramp_delay), \
+ .ops = &axp20x_ops, \
+ }
+
++#define AXP_DESC(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
++ _vmask, _ereg, _emask) \
++ AXP_DESC_DELAY(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
++ _vmask, _ereg, _emask, 0)
++
+ #define AXP_DESC_SW(_family, _id, _match, _supply, _ereg, _emask) \
+ [_family##_##_id] = { \
+ .name = (_match), \
+@@ -419,8 +425,8 @@
+ .ops = &axp20x_ops_fixed \
+ }
+
+-#define AXP_DESC_RANGES(_family, _id, _match, _supply, _ranges, _n_voltages, \
+- _vreg, _vmask, _ereg, _emask) \
++#define AXP_DESC_RANGES_DELAY(_family, _id, _match, _supply, _ranges, _n_voltages, \
++ _vreg, _vmask, _ereg, _emask, _ramp_delay) \
+ [_family##_##_id] = { \
+ .name = (_match), \
+ .supply_name = (_supply), \
+@@ -436,9 +442,15 @@
+ .enable_mask = (_emask), \
+ .linear_ranges = (_ranges), \
+ .n_linear_ranges = ARRAY_SIZE(_ranges), \
++ .ramp_delay = (_ramp_delay), \
+ .ops = &axp20x_ops_range, \
+ }
+
++#define AXP_DESC_RANGES(_family, _id, _match, _supply, _ranges, _n_voltages, \
++ _vreg, _vmask, _ereg, _emask) \
++ AXP_DESC_RANGES_DELAY(_family, _id, _match, _supply, _ranges, \
++ _n_voltages, _vreg, _vmask, _ereg, _emask, 0)
++
+ static const int axp209_dcdc2_ldo3_slew_rates[] = {
+ 1600,
+ 800,
+@@ -781,21 +793,21 @@ static const struct linear_range axp717_dcdc3_ranges[] = {
+ };
+
+ static const struct regulator_desc axp717_regulators[] = {
+- AXP_DESC_RANGES(AXP717, DCDC1, "dcdc1", "vin1",
++ AXP_DESC_RANGES_DELAY(AXP717, DCDC1, "dcdc1", "vin1",
+ axp717_dcdc1_ranges, AXP717_DCDC1_NUM_VOLTAGES,
+ AXP717_DCDC1_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(0)),
+- AXP_DESC_RANGES(AXP717, DCDC2, "dcdc2", "vin2",
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(0), 640),
++ AXP_DESC_RANGES_DELAY(AXP717, DCDC2, "dcdc2", "vin2",
+ axp717_dcdc2_ranges, AXP717_DCDC2_NUM_VOLTAGES,
+ AXP717_DCDC2_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(1)),
+- AXP_DESC_RANGES(AXP717, DCDC3, "dcdc3", "vin3",
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(1), 640),
++ AXP_DESC_RANGES_DELAY(AXP717, DCDC3, "dcdc3", "vin3",
+ axp717_dcdc3_ranges, AXP717_DCDC3_NUM_VOLTAGES,
+ AXP717_DCDC3_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(2)),
+- AXP_DESC(AXP717, DCDC4, "dcdc4", "vin4", 1000, 3700, 100,
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(2), 640),
++ AXP_DESC_DELAY(AXP717, DCDC4, "dcdc4", "vin4", 1000, 3700, 100,
+ AXP717_DCDC4_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(3)),
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(3), 6400),
+ AXP_DESC(AXP717, ALDO1, "aldo1", "aldoin", 500, 3500, 100,
+ AXP717_ALDO1_CONTROL, AXP717_LDO_V_OUT_MASK,
+ AXP717_LDO0_OUTPUT_CONTROL, BIT(0)),
+diff --git a/drivers/spi/spi-aspeed-smc.c b/drivers/spi/spi-aspeed-smc.c
+index bbd417c55e7f56..b0e3f307b28353 100644
+--- a/drivers/spi/spi-aspeed-smc.c
++++ b/drivers/spi/spi-aspeed-smc.c
+@@ -239,7 +239,7 @@ static ssize_t aspeed_spi_read_user(struct aspeed_spi_chip *chip,
+
+ ret = aspeed_spi_send_cmd_addr(chip, op->addr.nbytes, offset, op->cmd.opcode);
+ if (ret < 0)
+- return ret;
++ goto stop_user;
+
+ if (op->dummy.buswidth && op->dummy.nbytes) {
+ for (i = 0; i < op->dummy.nbytes / op->dummy.buswidth; i++)
+@@ -249,8 +249,9 @@ static ssize_t aspeed_spi_read_user(struct aspeed_spi_chip *chip,
+ aspeed_spi_set_io_mode(chip, io_mode);
+
+ aspeed_spi_read_from_ahb(buf, chip->ahb_base, len);
++stop_user:
+ aspeed_spi_stop_user(chip);
+- return 0;
++ return ret;
+ }
+
+ static ssize_t aspeed_spi_write_user(struct aspeed_spi_chip *chip,
+@@ -261,10 +262,11 @@ static ssize_t aspeed_spi_write_user(struct aspeed_spi_chip *chip,
+ aspeed_spi_start_user(chip);
+ ret = aspeed_spi_send_cmd_addr(chip, op->addr.nbytes, op->addr.val, op->cmd.opcode);
+ if (ret < 0)
+- return ret;
++ goto stop_user;
+ aspeed_spi_write_to_ahb(chip->ahb_base, op->data.buf.out, op->data.nbytes);
++stop_user:
+ aspeed_spi_stop_user(chip);
+- return 0;
++ return ret;
+ }
+
+ /* support for 1-1-1, 1-1-2 or 1-1-4 */
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 0bb33c43b1b46e..40a64a598a7495 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -241,6 +241,20 @@ static void rockchip_spi_set_cs(struct spi_device *spi, bool enable)
+ struct spi_controller *ctlr = spi->controller;
+ struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ bool cs_asserted = spi->mode & SPI_CS_HIGH ? enable : !enable;
++ bool cs_actual;
++
++ /*
++	 * The SPI subsystem tries to avoid no-op calls that would break the PM
++	 * refcount below. It cannot do so, however, the first time CS is used.
++	 * To detect that case, read the CS state here and bail out early for no-ops.
++ */
++ if (spi_get_csgpiod(spi, 0))
++ cs_actual = !!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SER) & 1);
++ else
++ cs_actual = !!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SER) &
++ BIT(spi_get_chipselect(spi, 0)));
++ if (unlikely(cs_actual == cs_asserted))
++ return;
+
+ if (cs_asserted) {
+ /* Keep things powered as long as CS is asserted */
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index b80e9a528e17ff..bdf17eafd3598d 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -157,6 +157,7 @@ struct sci_port {
+
+ bool has_rtscts;
+ bool autorts;
++ bool tx_occurred;
+ };
+
+ #define SCI_NPORTS CONFIG_SERIAL_SH_SCI_NR_UARTS
+@@ -850,6 +851,7 @@ static void sci_transmit_chars(struct uart_port *port)
+ {
+ struct tty_port *tport = &port->state->port;
+ unsigned int stopped = uart_tx_stopped(port);
++ struct sci_port *s = to_sci_port(port);
+ unsigned short status;
+ unsigned short ctrl;
+ int count;
+@@ -885,6 +887,7 @@ static void sci_transmit_chars(struct uart_port *port)
+ }
+
+ sci_serial_out(port, SCxTDR, c);
++ s->tx_occurred = true;
+
+ port->icount.tx++;
+ } while (--count > 0);
+@@ -1241,6 +1244,8 @@ static void sci_dma_tx_complete(void *arg)
+ if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
++ s->tx_occurred = true;
++
+ if (!kfifo_is_empty(&tport->xmit_fifo)) {
+ s->cookie_tx = 0;
+ schedule_work(&s->work_tx);
+@@ -1731,6 +1736,19 @@ static void sci_flush_buffer(struct uart_port *port)
+ s->cookie_tx = -EINVAL;
+ }
+ }
++
++static void sci_dma_check_tx_occurred(struct sci_port *s)
++{
++ struct dma_tx_state state;
++ enum dma_status status;
++
++ if (!s->chan_tx)
++ return;
++
++ status = dmaengine_tx_status(s->chan_tx, s->cookie_tx, &state);
++ if (status == DMA_COMPLETE || status == DMA_IN_PROGRESS)
++ s->tx_occurred = true;
++}
+ #else /* !CONFIG_SERIAL_SH_SCI_DMA */
+ static inline void sci_request_dma(struct uart_port *port)
+ {
+@@ -1740,6 +1758,10 @@ static inline void sci_free_dma(struct uart_port *port)
+ {
+ }
+
++static void sci_dma_check_tx_occurred(struct sci_port *s)
++{
++}
++
+ #define sci_flush_buffer NULL
+ #endif /* !CONFIG_SERIAL_SH_SCI_DMA */
+
+@@ -2076,6 +2098,12 @@ static unsigned int sci_tx_empty(struct uart_port *port)
+ {
+ unsigned short status = sci_serial_in(port, SCxSR);
+ unsigned short in_tx_fifo = sci_txfill(port);
++ struct sci_port *s = to_sci_port(port);
++
++ sci_dma_check_tx_occurred(s);
++
++ if (!s->tx_occurred)
++ return TIOCSER_TEMT;
+
+ return (status & SCxSR_TEND(port)) && !in_tx_fifo ? TIOCSER_TEMT : 0;
+ }
+@@ -2247,6 +2275,7 @@ static int sci_startup(struct uart_port *port)
+
+ dev_dbg(port->dev, "%s(%d)\n", __func__, port->line);
+
++ s->tx_occurred = false;
+ sci_request_dma(port);
+
+ ret = sci_request_irq(s);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index cfebe4a1af9e84..bc13133efaa508 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -5566,6 +5566,7 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
+
+ lrbp = &hba->lrb[task_tag];
+ lrbp->compl_time_stamp = ktime_get();
++ lrbp->compl_time_stamp_local_clock = local_clock();
+ cmd = lrbp->cmd;
+ if (cmd) {
+ if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 500dc35e64774d..0b2490347b9fe7 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2794,8 +2794,14 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ int retval;
+ struct usb_device *rhdev;
+ struct usb_hcd *shared_hcd;
++ int skip_phy_initialization;
+
+- if (!hcd->skip_phy_initialization) {
++ if (usb_hcd_is_primary_hcd(hcd))
++ skip_phy_initialization = hcd->skip_phy_initialization;
++ else
++ skip_phy_initialization = hcd->primary_hcd->skip_phy_initialization;
++
++ if (!skip_phy_initialization) {
+ if (usb_hcd_is_primary_hcd(hcd)) {
+ hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+ if (IS_ERR(hcd->phy_roothub))
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index cb54390e7de488..8c3941ecaaf5d4 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -3546,11 +3546,9 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ port_status |= USB_PORT_STAT_C_OVERCURRENT << 16;
+ }
+
+- if (!hsotg->flags.b.port_connect_status) {
++ if (dwc2_is_device_mode(hsotg)) {
+ /*
+- * The port is disconnected, which means the core is
+- * either in device mode or it soon will be. Just
+- * return 0's for the remainder of the port status
++ * Just return 0's for the remainder of the port status
+ * since the port register can't be read if the core
+ * is in device mode.
+ */
+@@ -3620,13 +3618,11 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ if (wvalue != USB_PORT_FEAT_TEST && (!windex || windex > 1))
+ goto error;
+
+- if (!hsotg->flags.b.port_connect_status) {
++ if (dwc2_is_device_mode(hsotg)) {
+ /*
+- * The port is disconnected, which means the core is
+- * either in device mode or it soon will be. Just
+- * return without doing anything since the port
+- * register can't be written if the core is in device
+- * mode.
++	 * Just return without doing anything since the port
++	 * register can't be written if the core is in device
++	 * mode.
+ */
+ break;
+ }
+@@ -4349,7 +4345,7 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
+ if (hsotg->bus_suspended)
+ goto skip_power_saving;
+
+- if (hsotg->flags.b.port_connect_status == 0)
++ if (!(dwc2_read_hprt0(hsotg) & HPRT0_CONNSTS))
+ goto skip_power_saving;
+
+ switch (hsotg->params.power_down) {
+@@ -4431,6 +4427,7 @@ static int _dwc2_hcd_resume(struct usb_hcd *hcd)
+ * Power Down mode.
+ */
+ if (hprt0 & HPRT0_CONNSTS) {
++ set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+ hsotg->lx_state = DWC2_L0;
+ goto unlock;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c
+index 64c0cd1995aa06..e99faf014c78a6 100644
+--- a/drivers/usb/dwc3/dwc3-imx8mp.c
++++ b/drivers/usb/dwc3/dwc3-imx8mp.c
+@@ -129,6 +129,16 @@ static void dwc3_imx8mp_wakeup_disable(struct dwc3_imx8mp *dwc3_imx)
+ writel(val, dwc3_imx->hsio_blk_base + USB_WAKEUP_CTRL);
+ }
+
++static const struct property_entry dwc3_imx8mp_properties[] = {
++ PROPERTY_ENTRY_BOOL("xhci-missing-cas-quirk"),
++ PROPERTY_ENTRY_BOOL("xhci-skip-phy-init-quirk"),
++ {},
++};
++
++static const struct software_node dwc3_imx8mp_swnode = {
++ .properties = dwc3_imx8mp_properties,
++};
++
+ static irqreturn_t dwc3_imx8mp_interrupt(int irq, void *_dwc3_imx)
+ {
+ struct dwc3_imx8mp *dwc3_imx = _dwc3_imx;
+@@ -148,17 +158,6 @@ static irqreturn_t dwc3_imx8mp_interrupt(int irq, void *_dwc3_imx)
+ return IRQ_HANDLED;
+ }
+
+-static int dwc3_imx8mp_set_software_node(struct device *dev)
+-{
+- struct property_entry props[3] = { 0 };
+- int prop_idx = 0;
+-
+- props[prop_idx++] = PROPERTY_ENTRY_BOOL("xhci-missing-cas-quirk");
+- props[prop_idx++] = PROPERTY_ENTRY_BOOL("xhci-skip-phy-init-quirk");
+-
+- return device_create_managed_software_node(dev, props, NULL);
+-}
+-
+ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -221,17 +220,17 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ if (err < 0)
+ goto disable_rpm;
+
+- err = dwc3_imx8mp_set_software_node(dev);
++ err = device_add_software_node(dev, &dwc3_imx8mp_swnode);
+ if (err) {
+ err = -ENODEV;
+- dev_err(dev, "failed to create software node\n");
++ dev_err(dev, "failed to add software node\n");
+ goto disable_rpm;
+ }
+
+ err = of_platform_populate(node, NULL, NULL, dev);
+ if (err) {
+ dev_err(&pdev->dev, "failed to create dwc3 core\n");
+- goto disable_rpm;
++ goto remove_swnode;
+ }
+
+ dwc3_imx->dwc3 = of_find_device_by_node(dwc3_np);
+@@ -255,6 +254,8 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+
+ depopulate:
+ of_platform_depopulate(dev);
++remove_swnode:
++ device_remove_software_node(dev);
+ disable_rpm:
+ pm_runtime_disable(dev);
+ pm_runtime_put_noidle(dev);
+@@ -268,6 +269,7 @@ static void dwc3_imx8mp_remove(struct platform_device *pdev)
+
+ pm_runtime_get_sync(dev);
+ of_platform_depopulate(dev);
++ device_remove_software_node(dev);
+
+ pm_runtime_disable(dev);
+ pm_runtime_put_noidle(dev);
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index b5e5be424ce997..96c87dc4757f22 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -121,8 +121,11 @@ static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
+ * in use but the usb3-phy entry is missing from the device tree.
+ * Therefore, skip these operations in this case.
+ */
+- if (!priv_data->usb3_phy)
++ if (!priv_data->usb3_phy) {
++ /* Deselect the PIPE Clock Select bit in FPD PIPE Clock register */
++ writel(PIPE_CLK_DESELECT, priv_data->regs + XLNX_USB_FPD_PIPE_CLK);
+ goto skip_usb3_phy;
++ }
+
+ crst = devm_reset_control_get_exclusive(dev, "usb_crst");
+ if (IS_ERR(crst)) {
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 8285df9ed6fd78..8c9d0074db588b 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -1593,7 +1593,11 @@ static int f_midi2_create_card(struct f_midi2 *midi2)
+ fb->info.midi_ci_version = b->midi_ci_version;
+ fb->info.ui_hint = reverse_dir(b->ui_hint);
+ fb->info.sysex8_streams = b->sysex8_streams;
+- fb->info.flags |= b->is_midi1;
++ if (b->is_midi1 < 2)
++ fb->info.flags |= b->is_midi1;
++ else
++ fb->info.flags |= SNDRV_UMP_BLOCK_IS_MIDI1 |
++ SNDRV_UMP_BLOCK_IS_LOWSPEED;
+ strscpy(fb->info.name, ump_fb_name(b),
+ sizeof(fb->info.name));
+ }
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 0a8c05b2746b4e..53d9fc41acc522 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -579,9 +579,12 @@ static int gs_start_io(struct gs_port *port)
+ * we didn't in gs_start_tx() */
+ tty_wakeup(port->port.tty);
+ } else {
+- gs_free_requests(ep, head, &port->read_allocated);
+- gs_free_requests(port->port_usb->in, &port->write_pool,
+- &port->write_allocated);
++ /* Free reqs only if we are still connected */
++ if (port->port_usb) {
++ gs_free_requests(ep, head, &port->read_allocated);
++ gs_free_requests(port->port_usb->in, &port->write_pool,
++ &port->write_allocated);
++ }
+ status = -EIO;
+ }
+
+diff --git a/drivers/usb/host/ehci-sh.c b/drivers/usb/host/ehci-sh.c
+index d31d9506e41ab0..7c2b2339e674dd 100644
+--- a/drivers/usb/host/ehci-sh.c
++++ b/drivers/usb/host/ehci-sh.c
+@@ -119,8 +119,12 @@ static int ehci_hcd_sh_probe(struct platform_device *pdev)
+ if (IS_ERR(priv->iclk))
+ priv->iclk = NULL;
+
+- clk_enable(priv->fclk);
+- clk_enable(priv->iclk);
++ ret = clk_enable(priv->fclk);
++ if (ret)
++ goto fail_request_resource;
++ ret = clk_enable(priv->iclk);
++ if (ret)
++ goto fail_iclk;
+
+ ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (ret != 0) {
+@@ -136,6 +140,7 @@ static int ehci_hcd_sh_probe(struct platform_device *pdev)
+
+ fail_add_hcd:
+ clk_disable(priv->iclk);
++fail_iclk:
+ clk_disable(priv->fclk);
+
+ fail_request_resource:
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index 9fe4f48b18980c..0881fdd1823e0b 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -779,11 +779,17 @@ max3421_check_unlink(struct usb_hcd *hcd)
+ retval = 1;
+ dev_dbg(&spi->dev, "%s: URB %p unlinked=%d",
+ __func__, urb, urb->unlinked);
+- usb_hcd_unlink_urb_from_ep(hcd, urb);
+- spin_unlock_irqrestore(&max3421_hcd->lock,
+- flags);
+- usb_hcd_giveback_urb(hcd, urb, 0);
+- spin_lock_irqsave(&max3421_hcd->lock, flags);
++ if (urb == max3421_hcd->curr_urb) {
++ max3421_hcd->urb_done = 1;
++ max3421_hcd->hien &= ~(BIT(MAX3421_HI_HXFRDN_BIT) |
++ BIT(MAX3421_HI_RCVDAV_BIT));
++ } else {
++ usb_hcd_unlink_urb_from_ep(hcd, urb);
++ spin_unlock_irqrestore(&max3421_hcd->lock,
++ flags);
++ usb_hcd_giveback_urb(hcd, urb, 0);
++ spin_lock_irqsave(&max3421_hcd->lock, flags);
++ }
+ }
+ }
+ }
+diff --git a/drivers/usb/misc/onboard_usb_dev.c b/drivers/usb/misc/onboard_usb_dev.c
+index 75dfdca04ff1c2..27b0a6e182678b 100644
+--- a/drivers/usb/misc/onboard_usb_dev.c
++++ b/drivers/usb/misc/onboard_usb_dev.c
+@@ -407,8 +407,10 @@ static int onboard_dev_probe(struct platform_device *pdev)
+ }
+
+ if (of_device_is_compatible(pdev->dev.of_node, "usb424,2744") ||
+- of_device_is_compatible(pdev->dev.of_node, "usb424,5744"))
++ of_device_is_compatible(pdev->dev.of_node, "usb424,5744")) {
+ err = onboard_dev_5744_i2c_init(client);
++ onboard_dev->always_powered_in_suspend = true;
++ }
+
+ put_device(&client->dev);
+ if (err < 0)
+diff --git a/drivers/usb/typec/anx7411.c b/drivers/usb/typec/anx7411.c
+index d1e7c487ddfbb5..0ae0a5ee3fae07 100644
+--- a/drivers/usb/typec/anx7411.c
++++ b/drivers/usb/typec/anx7411.c
+@@ -290,6 +290,8 @@ struct anx7411_data {
+ struct power_supply *psy;
+ struct power_supply_desc psy_desc;
+ struct device *dev;
++ struct fwnode_handle *switch_node;
++ struct fwnode_handle *mux_node;
+ };
+
+ static u8 snk_identity[] = {
+@@ -1021,6 +1023,16 @@ static void anx7411_port_unregister_altmodes(struct typec_altmode **adev)
+ }
+ }
+
++static void anx7411_port_unregister(struct typec_params *typecp)
++{
++ fwnode_handle_put(typecp->caps.fwnode);
++ anx7411_port_unregister_altmodes(typecp->port_amode);
++ if (typecp->port)
++ typec_unregister_port(typecp->port);
++ if (typecp->role_sw)
++ usb_role_switch_put(typecp->role_sw);
++}
++
+ static int anx7411_usb_mux_set(struct typec_mux_dev *mux,
+ struct typec_mux_state *state)
+ {
+@@ -1089,6 +1101,7 @@ static void anx7411_unregister_mux(struct anx7411_data *ctx)
+ if (ctx->typec.typec_mux) {
+ typec_mux_unregister(ctx->typec.typec_mux);
+ ctx->typec.typec_mux = NULL;
++ fwnode_handle_put(ctx->mux_node);
+ }
+ }
+
+@@ -1097,6 +1110,7 @@ static void anx7411_unregister_switch(struct anx7411_data *ctx)
+ if (ctx->typec.typec_switch) {
+ typec_switch_unregister(ctx->typec.typec_switch);
+ ctx->typec.typec_switch = NULL;
++ fwnode_handle_put(ctx->switch_node);
+ }
+ }
+
+@@ -1104,28 +1118,29 @@ static int anx7411_typec_switch_probe(struct anx7411_data *ctx,
+ struct device *dev)
+ {
+ int ret;
+- struct device_node *node;
+
+- node = of_get_child_by_name(dev->of_node, "orientation_switch");
+- if (!node)
++ ctx->switch_node = device_get_named_child_node(dev, "orientation_switch");
++ if (!ctx->switch_node)
+ return 0;
+
+- ret = anx7411_register_switch(ctx, dev, &node->fwnode);
++ ret = anx7411_register_switch(ctx, dev, ctx->switch_node);
+ if (ret) {
+ dev_err(dev, "failed register switch");
++ fwnode_handle_put(ctx->switch_node);
+ return ret;
+ }
+
+- node = of_get_child_by_name(dev->of_node, "mode_switch");
+- if (!node) {
++ ctx->mux_node = device_get_named_child_node(dev, "mode_switch");
++ if (!ctx->mux_node) {
+ dev_err(dev, "no typec mux exist");
+ ret = -ENODEV;
+ goto unregister_switch;
+ }
+
+- ret = anx7411_register_mux(ctx, dev, &node->fwnode);
++ ret = anx7411_register_mux(ctx, dev, ctx->mux_node);
+ if (ret) {
+ dev_err(dev, "failed register mode switch");
++ fwnode_handle_put(ctx->mux_node);
+ ret = -ENODEV;
+ goto unregister_switch;
+ }
+@@ -1154,34 +1169,34 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ ret = fwnode_property_read_string(fwnode, "power-role", &buf);
+ if (ret) {
+ dev_err(dev, "power-role not found: %d\n", ret);
+- return ret;
++ goto put_fwnode;
+ }
+
+ ret = typec_find_port_power_role(buf);
+ if (ret < 0)
+- return ret;
++ goto put_fwnode;
+ cap->type = ret;
+
+ ret = fwnode_property_read_string(fwnode, "data-role", &buf);
+ if (ret) {
+ dev_err(dev, "data-role not found: %d\n", ret);
+- return ret;
++ goto put_fwnode;
+ }
+
+ ret = typec_find_port_data_role(buf);
+ if (ret < 0)
+- return ret;
++ goto put_fwnode;
+ cap->data = ret;
+
+ ret = fwnode_property_read_string(fwnode, "try-power-role", &buf);
+ if (ret) {
+ dev_err(dev, "try-power-role not found: %d\n", ret);
+- return ret;
++ goto put_fwnode;
+ }
+
+ ret = typec_find_power_role(buf);
+ if (ret < 0)
+- return ret;
++ goto put_fwnode;
+ cap->prefer_role = ret;
+
+ /* Get source pdos */
+@@ -1193,7 +1208,7 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ typecp->src_pdo_nr);
+ if (ret < 0) {
+ dev_err(dev, "source cap validate failed: %d\n", ret);
+- return -EINVAL;
++ goto put_fwnode;
+ }
+
+ typecp->caps_flags |= HAS_SOURCE_CAP;
+@@ -1207,7 +1222,7 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ typecp->sink_pdo_nr);
+ if (ret < 0) {
+ dev_err(dev, "sink cap validate failed: %d\n", ret);
+- return -EINVAL;
++ goto put_fwnode;
+ }
+
+ for (i = 0; i < typecp->sink_pdo_nr; i++) {
+@@ -1251,13 +1266,21 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ ret = PTR_ERR(ctx->typec.port);
+ ctx->typec.port = NULL;
+ dev_err(dev, "Failed to register type c port %d\n", ret);
+- return ret;
++ goto put_usb_role_switch;
+ }
+
+ typec_port_register_altmodes(ctx->typec.port, NULL, ctx,
+ ctx->typec.port_amode,
+ MAX_ALTMODE);
+ return 0;
++
++put_usb_role_switch:
++ if (ctx->typec.role_sw)
++ usb_role_switch_put(ctx->typec.role_sw);
++put_fwnode:
++ fwnode_handle_put(fwnode);
++
++ return ret;
+ }
+
+ static int anx7411_typec_check_connection(struct anx7411_data *ctx)
+@@ -1523,8 +1546,7 @@ static int anx7411_i2c_probe(struct i2c_client *client)
+ destroy_workqueue(plat->workqueue);
+
+ free_typec_port:
+- typec_unregister_port(plat->typec.port);
+- anx7411_port_unregister_altmodes(plat->typec.port_amode);
++ anx7411_port_unregister(&plat->typec);
+
+ free_typec_switch:
+ anx7411_unregister_switch(plat);
+@@ -1548,17 +1570,11 @@ static void anx7411_i2c_remove(struct i2c_client *client)
+
+ i2c_unregister_device(plat->spi_client);
+
+- if (plat->typec.role_sw)
+- usb_role_switch_put(plat->typec.role_sw);
+-
+ anx7411_unregister_mux(plat);
+
+ anx7411_unregister_switch(plat);
+
+- if (plat->typec.port)
+- typec_unregister_port(plat->typec.port);
+-
+- anx7411_port_unregister_altmodes(plat->typec.port_amode);
++ anx7411_port_unregister(&plat->typec);
+ }
+
+ static const struct i2c_device_id anx7411_id[] = {
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index e0f3925e401b3d..7a3f0f5af38fdb 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -46,11 +46,11 @@ void ucsi_notify_common(struct ucsi *ucsi, u32 cci)
+ ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci));
+
+ if (cci & UCSI_CCI_ACK_COMPLETE &&
+- test_bit(ACK_PENDING, &ucsi->flags))
++ test_and_clear_bit(ACK_PENDING, &ucsi->flags))
+ complete(&ucsi->complete);
+
+ if (cci & UCSI_CCI_COMMAND_COMPLETE &&
+- test_bit(COMMAND_PENDING, &ucsi->flags))
++ test_and_clear_bit(COMMAND_PENDING, &ucsi->flags))
+ complete(&ucsi->complete);
+ }
+ EXPORT_SYMBOL_GPL(ucsi_notify_common);
+@@ -65,6 +65,8 @@ int ucsi_sync_control_common(struct ucsi *ucsi, u64 command)
+ else
+ set_bit(COMMAND_PENDING, &ucsi->flags);
+
++ reinit_completion(&ucsi->complete);
++
+ ret = ucsi->ops->async_control(ucsi, command);
+ if (ret)
+ goto out_clear_bit;
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 98374ed7c57723..0112742e4504b9 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -2716,6 +2716,7 @@ EXPORT_SYMBOL_GPL(vring_create_virtqueue_dma);
+ * @_vq: the struct virtqueue we're talking about.
+ * @num: new ring num
+ * @recycle: callback to recycle unused buffers
++ * @recycle_done: callback to be invoked once all unused buffers have been recycled
+ *
+ * When it is really necessary to create a new vring, it will set the current vq
+ * into the reset state. Then call the passed callback to recycle the buffer
+@@ -2736,7 +2737,8 @@ EXPORT_SYMBOL_GPL(vring_create_virtqueue_dma);
+ *
+ */
+ int virtqueue_resize(struct virtqueue *_vq, u32 num,
+- void (*recycle)(struct virtqueue *vq, void *buf))
++ void (*recycle)(struct virtqueue *vq, void *buf),
++ void (*recycle_done)(struct virtqueue *vq))
+ {
+ struct vring_virtqueue *vq = to_vvq(_vq);
+ int err;
+@@ -2753,6 +2755,8 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
+ err = virtqueue_disable_and_recycle(_vq, recycle);
+ if (err)
+ return err;
++ if (recycle_done)
++ recycle_done(_vq);
+
+ if (vq->packed_ring)
+ err = virtqueue_resize_packed(_vq, num);
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index b35fe1075503e1..fafc07e38663ca 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1925,6 +1925,7 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ goto unlink_out;
+ }
+
++ netfs_wait_for_outstanding_io(inode);
+ cifs_close_deferred_file_under_dentry(tcon, full_path);
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ if (cap_unix(tcon->ses) && (CIFS_UNIX_POSIX_PATH_OPS_CAP &
+@@ -2442,8 +2443,10 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ }
+
+ cifs_close_deferred_file_under_dentry(tcon, from_name);
+- if (d_inode(target_dentry) != NULL)
++ if (d_inode(target_dentry) != NULL) {
++ netfs_wait_for_outstanding_io(d_inode(target_dentry));
+ cifs_close_deferred_file_under_dentry(tcon, to_name);
++ }
+
+ rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry,
+ to_name);
+diff --git a/fs/smb/server/auth.c b/fs/smb/server/auth.c
+index 611716bc8f27c1..8892177e500f19 100644
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -1016,6 +1016,8 @@ static int ksmbd_get_encryption_key(struct ksmbd_work *work, __u64 ses_id,
+
+ ses_enc_key = enc ? sess->smb3encryptionkey :
+ sess->smb3decryptionkey;
++ if (enc)
++ ksmbd_user_session_get(sess);
+ memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
+
+ return 0;
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index ad02fe555fda7e..d960ddcbba1657 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -263,8 +263,10 @@ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+
+ down_read(&conn->session_lock);
+ sess = xa_load(&conn->sessions, id);
+- if (sess)
++ if (sess) {
+ sess->last_active = jiffies;
++ ksmbd_user_session_get(sess);
++ }
+ up_read(&conn->session_lock);
+ return sess;
+ }
+@@ -275,6 +277,8 @@ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
+
+ down_read(&sessions_table_lock);
+ sess = __session_lookup(id);
++ if (sess)
++ ksmbd_user_session_get(sess);
+ up_read(&sessions_table_lock);
+
+ return sess;
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index c8cc6fa6fc3ebb..698af37e988d7b 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -241,14 +241,14 @@ static void __handle_ksmbd_work(struct ksmbd_work *work,
+ if (work->tcon)
+ ksmbd_tree_connect_put(work->tcon);
+ smb3_preauth_hash_rsp(work);
+- if (work->sess)
+- ksmbd_user_session_put(work->sess);
+ if (work->sess && work->sess->enc && work->encrypted &&
+ conn->ops->encrypt_resp) {
+ rc = conn->ops->encrypt_resp(work);
+ if (rc < 0)
+ conn->ops->set_rsp_status(work, STATUS_DATA_ERROR);
+ }
++ if (work->sess)
++ ksmbd_user_session_put(work->sess);
+
+ ksmbd_conn_write(work);
+ }
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index d0836d710f1814..7d01dd313351f7 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -67,8 +67,10 @@ static inline bool check_session_id(struct ksmbd_conn *conn, u64 id)
+ return false;
+
+ sess = ksmbd_session_lookup_all(conn, id);
+- if (sess)
++ if (sess) {
++ ksmbd_user_session_put(sess);
+ return true;
++ }
+ pr_err("Invalid user session id: %llu\n", id);
+ return false;
+ }
+@@ -605,10 +607,8 @@ int smb2_check_user_session(struct ksmbd_work *work)
+
+ /* Check for validity of user session */
+ work->sess = ksmbd_session_lookup_all(conn, sess_id);
+- if (work->sess) {
+- ksmbd_user_session_get(work->sess);
++ if (work->sess)
+ return 1;
+- }
+ ksmbd_debug(SMB, "Invalid user session, Uid %llu\n", sess_id);
+ return -ENOENT;
+ }
+@@ -1701,29 +1701,35 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ if (conn->dialect != sess->dialect) {
+ rc = -EINVAL;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (!(req->hdr.Flags & SMB2_FLAGS_SIGNED)) {
+ rc = -EINVAL;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (strncmp(conn->ClientGUID, sess->ClientGUID,
+ SMB2_CLIENT_GUID_SIZE)) {
+ rc = -ENOENT;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_IN_PROGRESS) {
+ rc = -EACCES;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_EXPIRED) {
+ rc = -EFAULT;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
++ ksmbd_user_session_put(sess);
+
+ if (ksmbd_conn_need_reconnect(conn)) {
+ rc = -EFAULT;
+@@ -1731,7 +1737,8 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ goto out_err;
+ }
+
+- if (ksmbd_session_lookup(conn, sess_id)) {
++ sess = ksmbd_session_lookup(conn, sess_id);
++ if (!sess) {
+ rc = -EACCES;
+ goto out_err;
+ }
+@@ -1742,7 +1749,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ }
+
+ conn->binding = true;
+- ksmbd_user_session_get(sess);
+ } else if ((conn->dialect < SMB30_PROT_ID ||
+ server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) &&
+ (req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+@@ -1769,7 +1775,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ }
+
+ conn->binding = false;
+- ksmbd_user_session_get(sess);
+ }
+ work->sess = sess;
+
+@@ -2195,9 +2200,9 @@ int smb2_tree_disconnect(struct ksmbd_work *work)
+ int smb2_session_logoff(struct ksmbd_work *work)
+ {
+ struct ksmbd_conn *conn = work->conn;
++ struct ksmbd_session *sess = work->sess;
+ struct smb2_logoff_req *req;
+ struct smb2_logoff_rsp *rsp;
+- struct ksmbd_session *sess;
+ u64 sess_id;
+ int err;
+
+@@ -2219,11 +2224,6 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ ksmbd_close_session_fds(work);
+ ksmbd_conn_wait_idle(conn);
+
+- /*
+- * Re-lookup session to validate if session is deleted
+- * while waiting request complete
+- */
+- sess = ksmbd_session_lookup_all(conn, sess_id);
+ if (ksmbd_tree_conn_session_logoff(sess)) {
+ ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId);
+ rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED;
+@@ -8962,6 +8962,7 @@ int smb3_decrypt_req(struct ksmbd_work *work)
+ le64_to_cpu(tr_hdr->SessionId));
+ return -ECONNABORTED;
+ }
++ ksmbd_user_session_put(sess);
+
+ iov[0].iov_base = buf;
+ iov[0].iov_len = sizeof(struct smb2_transform_hdr) + 4;
+diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
+index a5c4af148853f8..134d87b3489aa4 100644
+--- a/fs/xfs/libxfs/xfs_btree.c
++++ b/fs/xfs/libxfs/xfs_btree.c
+@@ -3569,14 +3569,31 @@ xfs_btree_insrec(
+ xfs_btree_log_block(cur, bp, XFS_BB_NUMRECS);
+
+ /*
+- * If we just inserted into a new tree block, we have to
+- * recalculate nkey here because nkey is out of date.
++ * Update btree keys to reflect the newly added record or keyptr.
++ * There are three cases here to be aware of. Normally, all we have to
++ * do is walk towards the root, updating keys as necessary.
+ *
+- * Otherwise we're just updating an existing block (having shoved
+- * some records into the new tree block), so use the regular key
+- * update mechanism.
++ * If the caller had us target a full block for the insertion, we dealt
++ * with that by calling the _make_block_unfull function. If the
++ * "make unfull" function splits the block, it'll hand us back the key
++ * and pointer of the new block. We haven't yet added the new block to
++ * the next level up, so if we decide to add the new record to the new
++ * block (bp->b_bn != old_bn), we have to update the caller's pointer
++ * so that the caller adds the new block with the correct key.
++ *
++	 * However, there is a third possibility: if the selected block is the
++	 * root block of an inode-rooted btree and cannot be expanded further,
++	 * the "make unfull" function moves the root block contents to a new
++	 * block and updates the root block to point to the new block. In that
++	 * case, no block pointer is passed back because the block has already
++	 * been added to the btree, so we need to use the regular key update
++	 * function, just like the first case. This is critical for
++ * overlapping btrees, because the high key must be updated to reflect
++ * the entire tree, not just the subtree accessible through the first
++ * child of the root (which is now two levels down from the root).
+ */
+- if (bp && xfs_buf_daddr(bp) != old_bn) {
++ if (!xfs_btree_ptr_is_null(cur, &nptr) &&
++ bp && xfs_buf_daddr(bp) != old_bn) {
+ xfs_btree_get_keys(cur, block, lkey);
+ } else if (xfs_btree_needs_key_update(cur, optr)) {
+ error = xfs_btree_update_keys(cur, level);
+@@ -5156,7 +5173,7 @@ xfs_btree_count_blocks_helper(
+ int level,
+ void *data)
+ {
+- xfs_extlen_t *blocks = data;
++ xfs_filblks_t *blocks = data;
+ (*blocks)++;
+
+ return 0;
+@@ -5166,7 +5183,7 @@ xfs_btree_count_blocks_helper(
+ int
+ xfs_btree_count_blocks(
+ struct xfs_btree_cur *cur,
+- xfs_extlen_t *blocks)
++ xfs_filblks_t *blocks)
+ {
+ *blocks = 0;
+ return xfs_btree_visit_blocks(cur, xfs_btree_count_blocks_helper,
+diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
+index 10b7ddc3b2b34e..91e0b6dac31ec6 100644
+--- a/fs/xfs/libxfs/xfs_btree.h
++++ b/fs/xfs/libxfs/xfs_btree.h
+@@ -485,7 +485,7 @@ typedef int (*xfs_btree_visit_blocks_fn)(struct xfs_btree_cur *cur, int level,
+ int xfs_btree_visit_blocks(struct xfs_btree_cur *cur,
+ xfs_btree_visit_blocks_fn fn, unsigned int flags, void *data);
+
+-int xfs_btree_count_blocks(struct xfs_btree_cur *cur, xfs_extlen_t *blocks);
++int xfs_btree_count_blocks(struct xfs_btree_cur *cur, xfs_filblks_t *blocks);
+
+ union xfs_btree_rec *xfs_btree_rec_addr(struct xfs_btree_cur *cur, int n,
+ struct xfs_btree_block *block);
+diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
+index 401b42d52af686..6aa43f3fc68e03 100644
+--- a/fs/xfs/libxfs/xfs_ialloc_btree.c
++++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
+@@ -743,6 +743,7 @@ xfs_finobt_count_blocks(
+ {
+ struct xfs_buf *agbp = NULL;
+ struct xfs_btree_cur *cur;
++ xfs_filblks_t blocks;
+ int error;
+
+ error = xfs_ialloc_read_agi(pag, tp, 0, &agbp);
+@@ -750,9 +751,10 @@ xfs_finobt_count_blocks(
+ return error;
+
+ cur = xfs_finobt_init_cursor(pag, tp, agbp);
+- error = xfs_btree_count_blocks(cur, tree_blocks);
++ error = xfs_btree_count_blocks(cur, &blocks);
+ xfs_btree_del_cursor(cur, error);
+ xfs_trans_brelse(tp, agbp);
++ *tree_blocks = blocks;
+
+ return error;
+ }
+diff --git a/fs/xfs/libxfs/xfs_symlink_remote.c b/fs/xfs/libxfs/xfs_symlink_remote.c
+index f228127a88ff26..fb47a76ead18c2 100644
+--- a/fs/xfs/libxfs/xfs_symlink_remote.c
++++ b/fs/xfs/libxfs/xfs_symlink_remote.c
+@@ -92,8 +92,10 @@ xfs_symlink_verify(
+ struct xfs_mount *mp = bp->b_mount;
+ struct xfs_dsymlink_hdr *dsl = bp->b_addr;
+
++ /* no verification of non-crc buffers */
+ if (!xfs_has_crc(mp))
+- return __this_address;
++ return NULL;
++
+ if (!xfs_verify_magic(bp, dsl->sl_magic))
+ return __this_address;
+ if (!uuid_equal(&dsl->sl_uuid, &mp->m_sb.sb_meta_uuid))
+diff --git a/fs/xfs/scrub/agheader.c b/fs/xfs/scrub/agheader.c
+index f8e5b67128d25a..da30f926cbe66d 100644
+--- a/fs/xfs/scrub/agheader.c
++++ b/fs/xfs/scrub/agheader.c
+@@ -434,7 +434,7 @@ xchk_agf_xref_btreeblks(
+ {
+ struct xfs_agf *agf = sc->sa.agf_bp->b_addr;
+ struct xfs_mount *mp = sc->mp;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ xfs_agblock_t btreeblks;
+ int error;
+
+@@ -483,7 +483,7 @@ xchk_agf_xref_refcblks(
+ struct xfs_scrub *sc)
+ {
+ struct xfs_agf *agf = sc->sa.agf_bp->b_addr;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ int error;
+
+ if (!sc->sa.refc_cur)
+@@ -816,7 +816,7 @@ xchk_agi_xref_fiblocks(
+ struct xfs_scrub *sc)
+ {
+ struct xfs_agi *agi = sc->sa.agi_bp->b_addr;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ int error = 0;
+
+ if (!xfs_has_inobtcounts(sc->mp))
+diff --git a/fs/xfs/scrub/agheader_repair.c b/fs/xfs/scrub/agheader_repair.c
+index 2f98d90d7fd66d..69b003259784fe 100644
+--- a/fs/xfs/scrub/agheader_repair.c
++++ b/fs/xfs/scrub/agheader_repair.c
+@@ -256,7 +256,7 @@ xrep_agf_calc_from_btrees(
+ struct xfs_agf *agf = agf_bp->b_addr;
+ struct xfs_mount *mp = sc->mp;
+ xfs_agblock_t btreeblks;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ int error;
+
+ /* Update the AGF counters from the bnobt. */
+@@ -946,7 +946,7 @@ xrep_agi_calc_from_btrees(
+ if (error)
+ goto err;
+ if (xfs_has_inobtcounts(mp)) {
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+
+ error = xfs_btree_count_blocks(cur, &blocks);
+ if (error)
+@@ -959,7 +959,7 @@ xrep_agi_calc_from_btrees(
+ agi->agi_freecount = cpu_to_be32(freecount);
+
+ if (xfs_has_finobt(mp) && xfs_has_inobtcounts(mp)) {
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+
+ cur = xfs_finobt_init_cursor(sc->sa.pag, sc->tp, agi_bp);
+ error = xfs_btree_count_blocks(cur, &blocks);
+diff --git a/fs/xfs/scrub/fscounters.c b/fs/xfs/scrub/fscounters.c
+index 1d3e98346933e1..454f17595c9c9e 100644
+--- a/fs/xfs/scrub/fscounters.c
++++ b/fs/xfs/scrub/fscounters.c
+@@ -261,7 +261,7 @@ xchk_fscount_btreeblks(
+ struct xchk_fscounters *fsc,
+ xfs_agnumber_t agno)
+ {
+- xfs_extlen_t blocks;
++ xfs_filblks_t blocks;
+ int error;
+
+ error = xchk_ag_init_existing(sc, agno, &sc->sa);
+diff --git a/fs/xfs/scrub/ialloc.c b/fs/xfs/scrub/ialloc.c
+index 750d7b0cd25a78..a59c44e5903a45 100644
+--- a/fs/xfs/scrub/ialloc.c
++++ b/fs/xfs/scrub/ialloc.c
+@@ -652,8 +652,8 @@ xchk_iallocbt_xref_rmap_btreeblks(
+ struct xfs_scrub *sc)
+ {
+ xfs_filblks_t blocks;
+- xfs_extlen_t inobt_blocks = 0;
+- xfs_extlen_t finobt_blocks = 0;
++ xfs_filblks_t inobt_blocks = 0;
++ xfs_filblks_t finobt_blocks = 0;
+ int error;
+
+ if (!sc->sa.ino_cur || !sc->sa.rmap_cur ||
+diff --git a/fs/xfs/scrub/refcount.c b/fs/xfs/scrub/refcount.c
+index d0c7d4a29c0feb..cccf39d917a09c 100644
+--- a/fs/xfs/scrub/refcount.c
++++ b/fs/xfs/scrub/refcount.c
+@@ -490,7 +490,7 @@ xchk_refcount_xref_rmap(
+ struct xfs_scrub *sc,
+ xfs_filblks_t cow_blocks)
+ {
+- xfs_extlen_t refcbt_blocks = 0;
++ xfs_filblks_t refcbt_blocks = 0;
+ xfs_filblks_t blocks;
+ int error;
+
+diff --git a/fs/xfs/scrub/symlink_repair.c b/fs/xfs/scrub/symlink_repair.c
+index d015a86ef460fb..953ce7be78dc2f 100644
+--- a/fs/xfs/scrub/symlink_repair.c
++++ b/fs/xfs/scrub/symlink_repair.c
+@@ -36,6 +36,7 @@
+ #include "scrub/tempfile.h"
+ #include "scrub/tempexch.h"
+ #include "scrub/reap.h"
++#include "scrub/health.h"
+
+ /*
+ * Symbolic Link Repair
+@@ -233,7 +234,7 @@ xrep_symlink_salvage(
+ * target zapped flag.
+ */
+ if (buflen == 0) {
+- sc->sick_mask |= XFS_SICK_INO_SYMLINK_ZAPPED;
++ xchk_mark_healthy_if_clean(sc, XFS_SICK_INO_SYMLINK_ZAPPED);
+ sprintf(target_buf, DUMMY_TARGET);
+ }
+
+diff --git a/fs/xfs/scrub/trace.h b/fs/xfs/scrub/trace.h
+index c886d5d0eb021a..da773fee8638af 100644
+--- a/fs/xfs/scrub/trace.h
++++ b/fs/xfs/scrub/trace.h
+@@ -601,7 +601,7 @@ TRACE_EVENT(xchk_ifork_btree_op_error,
+ TP_fast_assign(
+ xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
+ __entry->dev = sc->mp->m_super->s_dev;
+- __entry->ino = sc->ip->i_ino;
++ __entry->ino = cur->bc_ino.ip->i_ino;
+ __entry->whichfork = cur->bc_ino.whichfork;
+ __entry->type = sc->sm->sm_type;
+ __assign_str(name);
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index edaf193dbd5ccc..95f8a09f96ae20 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -111,7 +111,7 @@ xfs_bmap_count_blocks(
+ struct xfs_mount *mp = ip->i_mount;
+ struct xfs_ifork *ifp = xfs_ifork_ptr(ip, whichfork);
+ struct xfs_btree_cur *cur;
+- xfs_extlen_t btblocks = 0;
++ xfs_filblks_t btblocks = 0;
+ int error;
+
+ *nextents = 0;
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index b19916b11fd563..aba54e3c583661 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -1228,6 +1228,14 @@ xfs_file_remap_range(
+ xfs_iunlock2_remapping(src, dest);
+ if (ret)
+ trace_xfs_reflink_remap_range_error(dest, ret, _RET_IP_);
++ /*
++ * If the caller did not set CAN_SHORTEN, then it is not prepared to
++ * handle partial results -- either the whole remap succeeds, or we
++ * must say why it did not. In this case, any error should be returned
++ * to the caller.
++ */
++ if (ret && remapped < len && !(remap_flags & REMAP_FILE_CAN_SHORTEN))
++ return ret;
+ return remapped > 0 ? remapped : ret;
+ }
+
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index 3a2005a1e673dc..8caa55b8167467 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -1295,7 +1295,7 @@ xfs_rtallocate(
+ * For an allocation to an empty file at offset 0, pick an extent that
+ * will space things out in the rt area.
+ */
+- if (bno_hint)
++ if (bno_hint != NULLFSBLOCK)
+ start = xfs_rtb_to_rtx(args.mp, bno_hint);
+ else if (initial_user_data)
+ start = xfs_rtpick_extent(args.mp, tp, maxlen);
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index bdf3704dc30118..30e03342287a94 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -834,13 +834,6 @@ __xfs_trans_commit(
+
+ trace_xfs_trans_commit(tp, _RET_IP_);
+
+- error = xfs_trans_run_precommits(tp);
+- if (error) {
+- if (tp->t_flags & XFS_TRANS_PERM_LOG_RES)
+- xfs_defer_cancel(tp);
+- goto out_unreserve;
+- }
+-
+ /*
+ * Finish deferred items on final commit. Only permanent transactions
+ * should ever have deferred ops.
+@@ -851,13 +844,12 @@ __xfs_trans_commit(
+ error = xfs_defer_finish_noroll(&tp);
+ if (error)
+ goto out_unreserve;
+-
+- /* Run precommits from final tx in defer chain. */
+- error = xfs_trans_run_precommits(tp);
+- if (error)
+- goto out_unreserve;
+ }
+
++ error = xfs_trans_run_precommits(tp);
++ if (error)
++ goto out_unreserve;
++
+ /*
+ * If there is nothing to be logged by the transaction,
+ * then unlock all of the items associated with the
+@@ -1382,5 +1374,8 @@ xfs_trans_alloc_dir(
+
+ out_cancel:
+ xfs_trans_cancel(tp);
++ xfs_iunlock(dp, XFS_ILOCK_EXCL);
++ if (dp != ip)
++ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ return error;
+ }
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 6b4bc85f4999ba..b7f327ce797e5b 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -200,8 +200,6 @@ struct gendisk {
+ spinlock_t zone_wplugs_lock;
+ struct mempool_s *zone_wplugs_pool;
+ struct hlist_head *zone_wplugs_hash;
+- struct list_head zone_wplugs_err_list;
+- struct work_struct zone_wplugs_work;
+ struct workqueue_struct *zone_wplugs_wq;
+ #endif /* CONFIG_BLK_DEV_ZONED */
+
+@@ -1386,6 +1384,9 @@ static inline bool bdev_is_zone_start(struct block_device *bdev,
+ return bdev_offset_from_zone_start(bdev, sector) == 0;
+ }
+
++int blk_zone_issue_zeroout(struct block_device *bdev, sector_t sector,
++ sector_t nr_sects, gfp_t gfp_mask);
++
+ static inline int queue_dma_alignment(const struct request_queue *q)
+ {
+ return q->limits.dma_alignment;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index cbe2350912460b..a7af13f550e0d4 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -2157,26 +2157,25 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
+ * rcu-protected dynamically sized maps.
+ */
+ static __always_inline u32
+-bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
++bpf_prog_run_array_uprobe(const struct bpf_prog_array *array,
+ const void *ctx, bpf_prog_run_fn run_prog)
+ {
+ const struct bpf_prog_array_item *item;
+ const struct bpf_prog *prog;
+- const struct bpf_prog_array *array;
+ struct bpf_run_ctx *old_run_ctx;
+ struct bpf_trace_run_ctx run_ctx;
+ u32 ret = 1;
+
+ might_fault();
++ RCU_LOCKDEP_WARN(!rcu_read_lock_trace_held(), "no rcu lock held");
++
++ if (unlikely(!array))
++ return ret;
+
+- rcu_read_lock_trace();
+ migrate_disable();
+
+ run_ctx.is_uprobe = true;
+
+- array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
+- if (unlikely(!array))
+- goto out;
+ old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+ item = &array->items[0];
+ while ((prog = READ_ONCE(item->prog))) {
+@@ -2191,9 +2190,7 @@ bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
+ rcu_read_unlock();
+ }
+ bpf_reset_run_ctx(old_run_ctx);
+-out:
+ migrate_enable();
+- rcu_read_unlock_trace();
+ return ret;
+ }
+
+@@ -3471,10 +3468,4 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
+ return prog->aux->func_idx != 0;
+ }
+
+-static inline bool bpf_prog_is_raw_tp(const struct bpf_prog *prog)
+-{
+- return prog->type == BPF_PROG_TYPE_TRACING &&
+- prog->expected_attach_type == BPF_TRACE_RAW_TP;
+-}
+-
+ #endif /* _LINUX_BPF_H */
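The bpf_prog_run_array_uprobe() change above moves synchronization out of the helper: the caller must now enter the tasks-trace RCU read section and dereference the array itself (trace_uprobe.c is converted accordingly later in this patch), while the helper merely asserts the lock via RCU_LOCKDEP_WARN(). A stand-in sketch of the general refactor, using a plain mutex in place of rcu_read_lock_trace() since RCU is not available in userspace:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared = 41;

/* After the refactor the helper assumes its caller already holds the
 * read-side lock; it no longer takes or drops it internally. */
static int helper(void)
{
    return shared + 1; /* caller must hold 'lock' here */
}

int main(void)
{
    int v;

    pthread_mutex_lock(&lock);   /* caller-side, like rcu_read_lock_trace() */
    v = helper();
    pthread_mutex_unlock(&lock); /* like rcu_read_unlock_trace() */
    printf("%d\n", v);           /* 42 */
    return 0;
}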
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 4d4e23b6e3e761..2d962dade9faee 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -216,28 +216,43 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+
+ #endif /* __KERNEL__ */
+
++/**
++ * offset_to_ptr - convert a relative memory offset to an absolute pointer
++ * @off: the address of the 32-bit offset value
++ */
++static inline void *offset_to_ptr(const int *off)
++{
++ return (void *)((unsigned long)off + *off);
++}
++
++#endif /* __ASSEMBLY__ */
++
++#ifdef CONFIG_64BIT
++#define ARCH_SEL(a,b) a
++#else
++#define ARCH_SEL(a,b) b
++#endif
++
+ /*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visible to the compiler.
+ */
+-#define ___ADDRESSABLE(sym, __attrs) \
+- static void * __used __attrs \
++#define ___ADDRESSABLE(sym, __attrs) \
++ static void * __used __attrs \
+ __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)(uintptr_t)&sym;
++
+ #define __ADDRESSABLE(sym) \
+ ___ADDRESSABLE(sym, __section(".discard.addressable"))
+
+-/**
+- * offset_to_ptr - convert a relative memory offset to an absolute pointer
+- * @off: the address of the 32-bit offset value
+- */
+-static inline void *offset_to_ptr(const int *off)
+-{
+- return (void *)((unsigned long)off + *off);
+-}
++#define __ADDRESSABLE_ASM(sym) \
++ .pushsection .discard.addressable,"aw"; \
++ .align ARCH_SEL(8,4); \
++ ARCH_SEL(.quad, .long) __stringify(sym); \
++ .popsection;
+
+-#endif /* __ASSEMBLY__ */
++#define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym))
+
+ /* &a[0] degrades to a pointer: a different type from an array */
+ #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
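The compiler.h churn above mostly shuffles offset_to_ptr() and __ADDRESSABLE() so the new __ADDRESSABLE_ASM() assembly variant can live outside the !__ASSEMBLY__ region. offset_to_ptr() itself turns a 32-bit self-relative offset back into an absolute pointer; a small demo (the offset is computed at run time purely for illustration -- the kernel emits such offsets at build time):

#include <stdio.h>
#include <stdint.h>

/* Same body as the kernel helper: 32-bit self-relative offset in,
 * absolute pointer out. */
static inline void *offset_to_ptr(const int *off)
{
    return (void *)((unsigned long)off + *off);
}

static int target = 42;
static int rel; /* will hold the signed distance from &rel to &target */

int main(void)
{
    /* both objects share a data segment, so the distance fits in
     * 32 bits for this demo */
    rel = (int)((intptr_t)&target - (intptr_t)&rel);
    printf("%d\n", *(int *)offset_to_ptr(&rel)); /* 42 */
    return 0;
}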
+diff --git a/include/linux/dsa/ocelot.h b/include/linux/dsa/ocelot.h
+index 6fbfbde68a37c3..620a3260fc0802 100644
+--- a/include/linux/dsa/ocelot.h
++++ b/include/linux/dsa/ocelot.h
+@@ -15,6 +15,7 @@
+ struct ocelot_skb_cb {
+ struct sk_buff *clone;
+ unsigned int ptp_class; /* valid only for clones */
++ unsigned long ptp_tx_time; /* valid only for clones */
+ u32 tstamp_lo;
+ u8 ptp_cmd;
+ u8 ts_id;
+diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
+index 66e7d26b70a4fe..11be70a7929f28 100644
+--- a/include/linux/netdev_features.h
++++ b/include/linux/netdev_features.h
+@@ -253,4 +253,11 @@ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
++static inline netdev_features_t netdev_base_features(netdev_features_t features)
++{
++ features &= ~NETIF_F_ONE_FOR_ALL;
++ features |= NETIF_F_ALL_FOR_ALL;
++ return features;
++}
++
+ #endif /* _LINUX_NETDEV_FEATURES_H */
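netdev_base_features() above resets a feature mask to a sane baseline before feature sets from lower devices are folded in: NETIF_F_ONE_FOR_ALL bits ("on if any lower device has it") are cleared so OR-folding can re-add them, and NETIF_F_ALL_FOR_ALL bits ("on only if every lower device has it") are seeded so AND-folding can only clear them. A toy demo with made-up mask values (the real NETIF_F_* layout differs):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t netdev_features_t;

#define ONE_FOR_ALL 0x0fULL /* on if any lower device has the bit */
#define ALL_FOR_ALL 0xf0ULL /* on only if every lower device has it */

static netdev_features_t base_features(netdev_features_t f)
{
    f &= ~ONE_FOR_ALL; /* pessimistic start: OR-folding re-adds these */
    f |= ALL_FOR_ALL;  /* optimistic start: AND-folding can only clear */
    return f;
}

int main(void)
{
    printf("%#llx\n", (unsigned long long)base_features(0x3c)); /* 0xf0 */
    return 0;
}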
+diff --git a/include/linux/static_call.h b/include/linux/static_call.h
+index 141e6b176a1b30..78a77a4ae0ea87 100644
+--- a/include/linux/static_call.h
++++ b/include/linux/static_call.h
+@@ -160,6 +160,8 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
+
+ #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
++extern int static_call_initialized;
++
+ extern int __init static_call_init(void);
+
+ extern void static_call_force_reinit(void);
+@@ -225,6 +227,8 @@ extern long __static_call_return0(void);
+
+ #elif defined(CONFIG_HAVE_STATIC_CALL)
+
++#define static_call_initialized 0
++
+ static inline int static_call_init(void) { return 0; }
+
+ #define DEFINE_STATIC_CALL(name, _func) \
+@@ -281,6 +285,8 @@ extern long __static_call_return0(void);
+
+ #else /* Generic implementation */
+
++#define static_call_initialized 0
++
+ static inline int static_call_init(void) { return 0; }
+
+ static inline long __static_call_return0(void)
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 306137a15d0753..73c8922e69e095 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -100,7 +100,8 @@ dma_addr_t virtqueue_get_avail_addr(const struct virtqueue *vq);
+ dma_addr_t virtqueue_get_used_addr(const struct virtqueue *vq);
+
+ int virtqueue_resize(struct virtqueue *vq, u32 num,
+- void (*recycle)(struct virtqueue *vq, void *buf));
++ void (*recycle)(struct virtqueue *vq, void *buf),
++ void (*recycle_done)(struct virtqueue *vq));
+ int virtqueue_reset(struct virtqueue *vq,
+ void (*recycle)(struct virtqueue *vq, void *buf));
+
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index f66bc85c6411dd..435250c72d5684 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -123,6 +123,7 @@ struct bt_voice {
+
+ #define BT_VOICE_TRANSPARENT 0x0003
+ #define BT_VOICE_CVSD_16BIT 0x0060
++#define BT_VOICE_TRANSPARENT_16BIT 0x0063
+
+ #define BT_SNDMTU 12
+ #define BT_RCVMTU 13
+@@ -590,15 +591,6 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
+ return skb;
+ }
+
+-static inline int bt_copy_from_sockptr(void *dst, size_t dst_size,
+- sockptr_t src, size_t src_size)
+-{
+- if (dst_size > src_size)
+- return -EINVAL;
+-
+- return copy_from_sockptr(dst, src, dst_size);
+-}
+-
+ int bt_to_errno(u16 code);
+ __u8 bt_status(int err);
+
+diff --git a/include/net/lapb.h b/include/net/lapb.h
+index 124ee122f2c8f8..6c07420644e45a 100644
+--- a/include/net/lapb.h
++++ b/include/net/lapb.h
+@@ -4,7 +4,7 @@
+ #include <linux/lapb.h>
+ #include <linux/refcount.h>
+
+-#define LAPB_HEADER_LEN 20 /* LAPB over Ethernet + a bit more */
++#define LAPB_HEADER_LEN MAX_HEADER /* LAPB over Ethernet + a bit more */
+
+ #define LAPB_ACK_PENDING_CONDITION 0x01
+ #define LAPB_REJECT_CONDITION 0x02
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 333e0fae6796c8..5b712582f9a9ce 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -6770,14 +6770,12 @@ void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success,
+ /**
+ * ieee80211_channel_switch_disconnect - disconnect due to channel switch error
+ * @vif: &struct ieee80211_vif pointer from the add_interface callback.
+- * @block_tx: if %true, do not send deauth frame.
+ *
+ * Instruct mac80211 to disconnect due to a channel switch error. The channel
+ * switch can request to block the tx and so, we need to make sure we do not send
+ * a deauth frame in this case.
+ */
+-void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif,
+- bool block_tx);
++void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif);
+
+ /**
+ * ieee80211_request_smps - request SM PS transition
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index e67b483cc8bbb8..9398c8f4995368 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -80,6 +80,7 @@ struct net {
+ * or to unregister pernet ops
+ * (pernet_ops_rwsem write locked).
+ */
++ struct llist_node defer_free_list;
+ struct llist_node cleanup_list; /* namespaces on death row */
+
+ #ifdef CONFIG_KEYS
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 066a3ea33b12e9..91ae20cb76485b 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1103,7 +1103,6 @@ struct nft_rule_blob {
+ * @name: name of the chain
+ * @udlen: user data length
+ * @udata: user data in the chain
+- * @rcu_head: rcu head for deferred release
+ * @blob_next: rule blob pointer to the next in the chain
+ */
+ struct nft_chain {
+@@ -1121,7 +1120,6 @@ struct nft_chain {
+ char *name;
+ u16 udlen;
+ u8 *udata;
+- struct rcu_head rcu_head;
+
+ /* Only used during control plane commit phase: */
+ struct nft_rule_blob *blob_next;
+@@ -1265,7 +1263,6 @@ static inline void nft_use_inc_restore(u32 *use)
+ * @sets: sets in the table
+ * @objects: stateful objects in the table
+ * @flowtables: flow tables in the table
+- * @net: netnamespace this table belongs to
+ * @hgenerator: handle generator state
+ * @handle: table handle
+ * @use: number of chain references to this table
+@@ -1285,7 +1282,6 @@ struct nft_table {
+ struct list_head sets;
+ struct list_head objects;
+ struct list_head flowtables;
+- possible_net_t net;
+ u64 hgenerator;
+ u64 handle;
+ u32 use;
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 462c653e101746..2db9ae0575b609 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -778,7 +778,6 @@ struct ocelot_port {
+
+ phy_interface_t phy_mode;
+
+- unsigned int ptp_skbs_in_flight;
+ struct sk_buff_head tx_skbs;
+
+ unsigned int trap_proto;
+@@ -786,7 +785,6 @@ struct ocelot_port {
+ u16 mrp_ring_id;
+
+ u8 ptp_cmd;
+- u8 ts_id;
+
+ u8 index;
+
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 346826e3c933da..41d20b7199c4af 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6415,6 +6415,101 @@ int btf_ctx_arg_offset(const struct btf *btf, const struct btf_type *func_proto,
+ return off;
+ }
+
++struct bpf_raw_tp_null_args {
++ const char *func;
++ u64 mask;
++};
++
++static const struct bpf_raw_tp_null_args raw_tp_null_args[] = {
++ /* sched */
++ { "sched_pi_setprio", 0x10 },
++ /* ... from sched_numa_pair_template event class */
++ { "sched_stick_numa", 0x100 },
++ { "sched_swap_numa", 0x100 },
++ /* afs */
++ { "afs_make_fs_call", 0x10 },
++ { "afs_make_fs_calli", 0x10 },
++ { "afs_make_fs_call1", 0x10 },
++ { "afs_make_fs_call2", 0x10 },
++ { "afs_protocol_error", 0x1 },
++ { "afs_flock_ev", 0x10 },
++ /* cachefiles */
++ { "cachefiles_lookup", 0x1 | 0x200 },
++ { "cachefiles_unlink", 0x1 },
++ { "cachefiles_rename", 0x1 },
++ { "cachefiles_prep_read", 0x1 },
++ { "cachefiles_mark_active", 0x1 },
++ { "cachefiles_mark_failed", 0x1 },
++ { "cachefiles_mark_inactive", 0x1 },
++ { "cachefiles_vfs_error", 0x1 },
++ { "cachefiles_io_error", 0x1 },
++ { "cachefiles_ondemand_open", 0x1 },
++ { "cachefiles_ondemand_copen", 0x1 },
++ { "cachefiles_ondemand_close", 0x1 },
++ { "cachefiles_ondemand_read", 0x1 },
++ { "cachefiles_ondemand_cread", 0x1 },
++ { "cachefiles_ondemand_fd_write", 0x1 },
++ { "cachefiles_ondemand_fd_release", 0x1 },
++ /* ext4, from ext4__mballoc event class */
++ { "ext4_mballoc_discard", 0x10 },
++ { "ext4_mballoc_free", 0x10 },
++ /* fib */
++ { "fib_table_lookup", 0x100 },
++ /* filelock */
++ /* ... from filelock_lock event class */
++ { "posix_lock_inode", 0x10 },
++ { "fcntl_setlk", 0x10 },
++ { "locks_remove_posix", 0x10 },
++ { "flock_lock_inode", 0x10 },
++ /* ... from filelock_lease event class */
++ { "break_lease_noblock", 0x10 },
++ { "break_lease_block", 0x10 },
++ { "break_lease_unblock", 0x10 },
++ { "generic_delete_lease", 0x10 },
++ { "time_out_leases", 0x10 },
++ /* host1x */
++ { "host1x_cdma_push_gather", 0x10000 },
++ /* huge_memory */
++ { "mm_khugepaged_scan_pmd", 0x10 },
++ { "mm_collapse_huge_page_isolate", 0x1 },
++ { "mm_khugepaged_scan_file", 0x10 },
++ { "mm_khugepaged_collapse_file", 0x10 },
++ /* kmem */
++ { "mm_page_alloc", 0x1 },
++ { "mm_page_pcpu_drain", 0x1 },
++ /* .. from mm_page event class */
++ { "mm_page_alloc_zone_locked", 0x1 },
++ /* netfs */
++ { "netfs_failure", 0x10 },
++ /* power */
++ { "device_pm_callback_start", 0x10 },
++ /* qdisc */
++ { "qdisc_dequeue", 0x1000 },
++ /* rxrpc */
++ { "rxrpc_recvdata", 0x1 },
++ { "rxrpc_resend", 0x10 },
++ /* sunrpc */
++ { "xs_stream_read_data", 0x1 },
++ /* ... from xprt_cong_event event class */
++ { "xprt_reserve_cong", 0x10 },
++ { "xprt_release_cong", 0x10 },
++ { "xprt_get_cong", 0x10 },
++ { "xprt_put_cong", 0x10 },
++ /* tcp */
++ { "tcp_send_reset", 0x11 },
++ /* tegra_apb_dma */
++ { "tegra_dma_tx_status", 0x100 },
++ /* timer_migration */
++ { "tmigr_update_events", 0x1 },
++ /* writeback, from writeback_folio_template event class */
++ { "writeback_dirty_folio", 0x10 },
++ { "folio_wait_writeback", 0x10 },
++ /* rdma */
++ { "mr_integ_alloc", 0x2000 },
++ /* bpf_testmod */
++ { "bpf_testmod_test_read", 0x0 },
++};
++
+ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+@@ -6425,6 +6520,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ const char *tname = prog->aux->attach_func_name;
+ struct bpf_verifier_log *log = info->log;
+ const struct btf_param *args;
++ bool ptr_err_raw_tp = false;
+ const char *tag_value;
+ u32 nr_args, arg;
+ int i, ret;
+@@ -6519,6 +6615,12 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ return false;
+ }
+
++ if (size != sizeof(u64)) {
++ bpf_log(log, "func '%s' size %d must be 8\n",
++ tname, size);
++ return false;
++ }
++
+ /* check for PTR_TO_RDONLY_BUF_OR_NULL or PTR_TO_RDWR_BUF_OR_NULL */
+ for (i = 0; i < prog->aux->ctx_arg_info_size; i++) {
+ const struct bpf_ctx_arg_aux *ctx_arg_info = &prog->aux->ctx_arg_info[i];
+@@ -6564,12 +6666,42 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ if (prog_args_trusted(prog))
+ info->reg_type |= PTR_TRUSTED;
+
+- /* Raw tracepoint arguments always get marked as maybe NULL */
+- if (bpf_prog_is_raw_tp(prog))
+- info->reg_type |= PTR_MAYBE_NULL;
+- else if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
++ if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
+ info->reg_type |= PTR_MAYBE_NULL;
+
++ if (prog->expected_attach_type == BPF_TRACE_RAW_TP) {
++ struct btf *btf = prog->aux->attach_btf;
++ const struct btf_type *t;
++ const char *tname;
++
++ /* BTF lookups cannot fail, return false on error */
++ t = btf_type_by_id(btf, prog->aux->attach_btf_id);
++ if (!t)
++ return false;
++ tname = btf_name_by_offset(btf, t->name_off);
++ if (!tname)
++ return false;
++ /* Checked by bpf_check_attach_target */
++ tname += sizeof("btf_trace_") - 1;
++ for (i = 0; i < ARRAY_SIZE(raw_tp_null_args); i++) {
++ /* Is this a func with potential NULL args? */
++ if (strcmp(tname, raw_tp_null_args[i].func))
++ continue;
++ if (raw_tp_null_args[i].mask & (0x1 << (arg * 4)))
++ info->reg_type |= PTR_MAYBE_NULL;
++ /* Is the current arg IS_ERR? */
++ if (raw_tp_null_args[i].mask & (0x2 << (arg * 4)))
++ ptr_err_raw_tp = true;
++ break;
++ }
++ /* If we don't know NULL-ness specification and the tracepoint
++ * is coming from a loadable module, be conservative and mark
++ * argument as PTR_MAYBE_NULL.
++ */
++ if (i == ARRAY_SIZE(raw_tp_null_args) && btf_is_module(btf))
++ info->reg_type |= PTR_MAYBE_NULL;
++ }
++
+ if (tgt_prog) {
+ enum bpf_prog_type tgt_type;
+
+@@ -6614,6 +6746,15 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ bpf_log(log, "func '%s' arg%d has btf_id %d type %s '%s'\n",
+ tname, arg, info->btf_id, btf_type_str(t),
+ __btf_name_by_offset(btf, t->name_off));
++
++ /* Perform all checks on the validity of type for this argument, but if
++ * we know it can be IS_ERR at runtime, scrub pointer type and mark as
++ * scalar.
++ */
++ if (ptr_err_raw_tp) {
++ bpf_log(log, "marking pointer arg%d as scalar as it may encode error", arg);
++ info->reg_type = SCALAR_VALUE;
++ }
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(btf_ctx_access);
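In the raw_tp_null_args table above, each tracepoint argument owns one nibble of the 64-bit mask: bit 0 of the nibble means "may be NULL" and bit 1 means "may be an ERR_PTR", matching the 0x1 << (arg * 4) and 0x2 << (arg * 4) tests in btf_ctx_access(). So 0x10 flags arg 1, 0x100 flags arg 2, and tcp_send_reset's 0x11 flags args 0 and 1. A small decoder built on the same encoding:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One nibble per tracepoint argument, as in raw_tp_null_args above. */
static bool arg_may_be_null(uint64_t mask, int arg)
{
    return mask & (0x1ULL << (arg * 4));
}

static bool arg_may_be_err(uint64_t mask, int arg)
{
    return mask & (0x2ULL << (arg * 4));
}

int main(void)
{
    uint64_t tcp_send_reset = 0x11; /* args 0 and 1 may be NULL */

    printf("arg0: null=%d err=%d\n",
           arg_may_be_null(tcp_send_reset, 0),
           arg_may_be_err(tcp_send_reset, 0));                     /* 1 0 */
    printf("arg2: null=%d\n", arg_may_be_null(tcp_send_reset, 2)); /* 0 */
    return 0;
}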
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b2008076df9c26..4c486a0bfcc4d8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -418,25 +418,6 @@ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+ return rec;
+ }
+
+-static bool mask_raw_tp_reg_cond(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) {
+- return reg->type == (PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL) &&
+- bpf_prog_is_raw_tp(env->prog) && !reg->ref_obj_id;
+-}
+-
+-static bool mask_raw_tp_reg(const struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+-{
+- if (!mask_raw_tp_reg_cond(env, reg))
+- return false;
+- reg->type &= ~PTR_MAYBE_NULL;
+- return true;
+-}
+-
+-static void unmask_raw_tp_reg(struct bpf_reg_state *reg, bool result)
+-{
+- if (result)
+- reg->type |= PTR_MAYBE_NULL;
+-}
+-
+ static bool subprog_is_global(const struct bpf_verifier_env *env, int subprog)
+ {
+ struct bpf_func_info_aux *aux = env->prog->aux->func_info_aux;
+@@ -6618,7 +6599,6 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ const char *field_name = NULL;
+ enum bpf_type_flag flag = 0;
+ u32 btf_id = 0;
+- bool mask;
+ int ret;
+
+ if (!env->allow_ptr_leaks) {
+@@ -6690,21 +6670,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+
+ if (ret < 0)
+ return ret;
+- /* For raw_tp progs, we allow dereference of PTR_MAYBE_NULL
+- * trusted PTR_TO_BTF_ID, these are the ones that are possibly
+- * arguments to the raw_tp. Since internal checks in for trusted
+- * reg in check_ptr_to_btf_access would consider PTR_MAYBE_NULL
+- * modifier as problematic, mask it out temporarily for the
+- * check. Don't apply this to pointers with ref_obj_id > 0, as
+- * those won't be raw_tp args.
+- *
+- * We may end up applying this relaxation to other trusted
+- * PTR_TO_BTF_ID with maybe null flag, since we cannot
+- * distinguish PTR_MAYBE_NULL tagged for arguments vs normal
+- * tagging, but that should expand allowed behavior, and not
+- * cause regression for existing behavior.
+- */
+- mask = mask_raw_tp_reg(env, reg);
++
+ if (ret != PTR_TO_BTF_ID) {
+ /* just mark; */
+
+@@ -6765,13 +6731,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ clear_trusted_flags(&flag);
+ }
+
+- if (atype == BPF_READ && value_regno >= 0) {
++ if (atype == BPF_READ && value_regno >= 0)
+ mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
+- /* We've assigned a new type to regno, so don't undo masking. */
+- if (regno == value_regno)
+- mask = false;
+- }
+- unmask_raw_tp_reg(reg, mask);
+
+ return 0;
+ }
+@@ -7146,7 +7107,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown(env, regs, value_regno);
+ } else if (base_type(reg->type) == PTR_TO_BTF_ID &&
+- (mask_raw_tp_reg_cond(env, reg) || !type_may_be_null(reg->type))) {
++ !type_may_be_null(reg->type)) {
+ err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ value_regno);
+ } else if (reg->type == CONST_PTR_TO_MAP) {
+@@ -8844,7 +8805,6 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ enum bpf_reg_type type = reg->type;
+ u32 *arg_btf_id = NULL;
+ int err = 0;
+- bool mask;
+
+ if (arg_type == ARG_DONTCARE)
+ return 0;
+@@ -8885,11 +8845,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
+ arg_btf_id = fn->arg_btf_id[arg];
+
+- mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
++ if (err)
++ return err;
+
+- err = err ?: check_func_arg_reg_off(env, reg, regno, arg_type);
+- unmask_raw_tp_reg(reg, mask);
++ err = check_func_arg_reg_off(env, reg, regno, arg_type);
+ if (err)
+ return err;
+
+@@ -9684,17 +9644,14 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
+ return ret;
+ } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
+ struct bpf_call_arg_meta meta;
+- bool mask;
+ int err;
+
+ if (register_is_null(reg) && type_may_be_null(arg->arg_type))
+ continue;
+
+ memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
+- mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
+- unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+ } else {
+@@ -12009,7 +11966,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 regno = i + 1, ref_id, type_size;
+ bool is_ret_buf_sz = false;
+- bool mask = false;
+ int kf_arg_type;
+
+ t = btf_type_skip_modifiers(btf, args[i].type, NULL);
+@@ -12068,15 +12024,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ return -EINVAL;
+ }
+
+- mask = mask_raw_tp_reg(env, reg);
+ if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
+ (register_is_null(reg) || type_may_be_null(reg->type)) &&
+ !is_kfunc_arg_nullable(meta->btf, &args[i])) {
+ verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
+- unmask_raw_tp_reg(reg, mask);
+ return -EACCES;
+ }
+- unmask_raw_tp_reg(reg, mask);
+
+ if (reg->ref_obj_id) {
+ if (is_kfunc_release(meta) && meta->ref_obj_id) {
+@@ -12134,24 +12087,16 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
+ break;
+
+- /* Allow passing maybe NULL raw_tp arguments to
+- * kfuncs for compatibility. Don't apply this to
+- * arguments with ref_obj_id > 0.
+- */
+- mask = mask_raw_tp_reg(env, reg);
+ if (!is_trusted_reg(reg)) {
+ if (!is_kfunc_rcu(meta)) {
+ verbose(env, "R%d must be referenced or trusted\n", regno);
+- unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ if (!is_rcu_reg(reg)) {
+ verbose(env, "R%d must be a rcu pointer\n", regno);
+- unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ }
+- unmask_raw_tp_reg(reg, mask);
+ fallthrough;
+ case KF_ARG_PTR_TO_CTX:
+ case KF_ARG_PTR_TO_DYNPTR:
+@@ -12174,9 +12119,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+
+ if (is_kfunc_release(meta) && reg->ref_obj_id)
+ arg_type |= OBJ_RELEASE;
+- mask = mask_raw_tp_reg(env, reg);
+ ret = check_func_arg_reg_off(env, reg, regno, arg_type);
+- unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+
+@@ -12353,7 +12296,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ ref_tname = btf_name_by_offset(btf, ref_t->name_off);
+ fallthrough;
+ case KF_ARG_PTR_TO_BTF_ID:
+- mask = mask_raw_tp_reg(env, reg);
+ /* Only base_type is checked, further checks are done here */
+ if ((base_type(reg->type) != PTR_TO_BTF_ID ||
+ (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
+@@ -12362,11 +12304,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ verbose(env, "expected %s or socket\n",
+ reg_type_str(env, base_type(reg->type) |
+ (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
+- unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i);
+- unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+ break;
+@@ -13336,7 +13276,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
+ */
+ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_insn *insn,
+- struct bpf_reg_state *ptr_reg,
++ const struct bpf_reg_state *ptr_reg,
+ const struct bpf_reg_state *off_reg)
+ {
+ struct bpf_verifier_state *vstate = env->cur_state;
+@@ -13350,7 +13290,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_sanitize_info info = {};
+ u8 opcode = BPF_OP(insn->code);
+ u32 dst = insn->dst_reg;
+- bool mask;
+ int ret;
+
+ dst_reg = &regs[dst];
+@@ -13377,14 +13316,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ return -EACCES;
+ }
+
+- mask = mask_raw_tp_reg(env, ptr_reg);
+ if (ptr_reg->type & PTR_MAYBE_NULL) {
+ verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
+ dst, reg_type_str(env, ptr_reg->type));
+- unmask_raw_tp_reg(ptr_reg, mask);
+ return -EACCES;
+ }
+- unmask_raw_tp_reg(ptr_reg, mask);
+
+ switch (base_type(ptr_reg->type)) {
+ case PTR_TO_CTX:
+@@ -19934,7 +19870,6 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ * for this case.
+ */
+ case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
+- case PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL:
+ if (type == BPF_READ) {
+ if (BPF_MODE(insn->code) == BPF_MEM)
+ insn->code = BPF_LDX | BPF_PROBE_MEM |
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 40a1ad4493b4d9..fc6f41ac33eb13 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -781,7 +781,7 @@ static inline void replenish_dl_new_period(struct sched_dl_entity *dl_se,
+ * If it is a deferred reservation, and the server
+ * is not handling an starvation case, defer it.
+ */
+- if (dl_se->dl_defer & !dl_se->dl_defer_running) {
++ if (dl_se->dl_defer && !dl_se->dl_defer_running) {
+ dl_se->dl_throttled = 1;
+ dl_se->dl_defer_armed = 1;
+ }
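The deadline.c fix above swaps a bitwise AND for the logical AND that was meant. With the current one-bit dl_defer bitfield the two happen to agree, which is how this slipped through; the bitwise form breaks as soon as the left operand can hold a value other than 0 or 1, as this demo shows:

#include <stdio.h>

int main(void)
{
    unsigned int dl_defer = 0x2; /* any "true" value other than 1 */
    unsigned int dl_defer_running = 0;

    /* bitwise: 0x2 & 1 == 0, the condition silently fails */
    printf("bitwise: %u\n", dl_defer & !dl_defer_running);
    /* logical: nonzero && nonzero == 1, the intended test */
    printf("logical: %d\n", dl_defer && !dl_defer_running);
    return 0;
}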
+diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
+index 5259cda486d058..bb7d066a7c3979 100644
+--- a/kernel/static_call_inline.c
++++ b/kernel/static_call_inline.c
+@@ -15,7 +15,7 @@ extern struct static_call_site __start_static_call_sites[],
+ extern struct static_call_tramp_key __start_static_call_tramp_key[],
+ __stop_static_call_tramp_key[];
+
+-static int static_call_initialized;
++int static_call_initialized;
+
+ /*
+ * Must be called before early_initcall() to be effective.
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 792dc35414a3c3..50881898e758d8 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2215,6 +2215,9 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
+ goto unlock;
+
+ old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
++ if (!old_array)
++ goto put;
++
+ ret = bpf_prog_array_copy(old_array, event->prog, NULL, 0, &new_array);
+ if (ret < 0) {
+ bpf_prog_array_delete_safe(old_array, event->prog);
+@@ -2223,6 +2226,14 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
+ bpf_prog_array_free_sleepable(old_array);
+ }
+
++put:
++ /*
++ * It could be that the bpf_prog is not sleepable (and will be freed
++ * via normal RCU), but is called from a point that supports sleepable
++ * programs and uses tasks-trace-RCU.
++ */
++ synchronize_rcu_tasks_trace();
++
+ bpf_prog_put(event->prog);
+ event->prog = NULL;
+
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index b30fc8fcd0956a..b085a8a164ea03 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1400,9 +1400,13 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
+
+ #ifdef CONFIG_BPF_EVENTS
+ if (bpf_prog_array_valid(call)) {
++ const struct bpf_prog_array *array;
+ u32 ret;
+
+- ret = bpf_prog_run_array_uprobe(call->prog_array, regs, bpf_prog_run);
++ rcu_read_lock_trace();
++ array = rcu_dereference_check(call->prog_array, rcu_read_lock_trace_held());
++ ret = bpf_prog_run_array_uprobe(array, regs, bpf_prog_run);
++ rcu_read_unlock_trace();
+ if (!ret)
+ return;
+ }
+diff --git a/mm/slub.c b/mm/slub.c
+index 15ba89fef89a1f..b9447a955f6112 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -2199,9 +2199,24 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
+
+ folio = virt_to_folio(p);
+ if (!folio_test_slab(folio)) {
+- return folio_memcg_kmem(folio) ||
+- (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
+- folio_order(folio)) == 0);
++ int size;
++
++ if (folio_memcg_kmem(folio))
++ return true;
++
++ if (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
++ folio_order(folio)))
++ return false;
++
++ /*
++ * This folio has already been accounted in the global stats but
++ * not in the memcg stats. So, subtract from the global and use
++ * the interface which adds to both global and memcg stats.
++ */
++ size = folio_size(folio);
++ node_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, -size);
++ lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, size);
++ return true;
+ }
+
+ slab = folio_slab(folio);
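The memcg_slab_post_charge() comment above explains the accounting move: a large-kmalloc folio was already counted in the global NR_SLAB_UNRECLAIMABLE_B at allocation time, so after a late memcg charge the patch subtracts it from the global counter and re-adds it through the lruvec interface that updates global and memcg stats together. The net effect, with toy counters:

#include <stdio.h>

/* toy counters for node-wide and memcg NR_SLAB_UNRECLAIMABLE_B */
static long node_stat, memcg_stat;

static void node_stat_mod(long delta)
{
    node_stat += delta;
}

static void lruvec_stat_mod(long delta) /* feeds both counters */
{
    node_stat += delta;
    memcg_stat += delta;
}

int main(void)
{
    long size = 8192;

    node_stat_mod(size);   /* allocation path counted globally only */
    node_stat_mod(-size);  /* post-charge fixup: take it back out... */
    lruvec_stat_mod(size); /* ...and re-add through the dual interface */
    printf("node=%ld memcg=%ld\n", node_stat, memcg_stat); /* 8192 8192 */
    return 0;
}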
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 2243cec18ecc86..53dea8ae96e477 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -990,16 +990,25 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ int tt_diff_len, tt_change_len = 0;
+ int tt_diff_entries_num = 0;
+ int tt_diff_entries_count = 0;
++ bool drop_changes = false;
++ size_t tt_extra_len = 0;
+ u16 tvlv_len;
+
+ tt_diff_entries_num = atomic_read(&bat_priv->tt.local_changes);
+ tt_diff_len = batadv_tt_len(tt_diff_entries_num);
+
+ /* if we have too many changes for one packet don't send any
+- * and wait for the tt table request which will be fragmented
++ * and wait for the tt table request so we can reply with the full
++ * (fragmented) table.
++ *
++ * The local change history should still be cleaned up so the next
++ * TT round can start again with a clean state.
+ */
+- if (tt_diff_len > bat_priv->soft_iface->mtu)
++ if (tt_diff_len > bat_priv->soft_iface->mtu) {
+ tt_diff_len = 0;
++ tt_diff_entries_num = 0;
++ drop_changes = true;
++ }
+
+ tvlv_len = batadv_tt_prepare_tvlv_local_data(bat_priv, &tt_data,
+ &tt_change, &tt_diff_len);
+@@ -1008,7 +1017,7 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+
+ tt_data->flags = BATADV_TT_OGM_DIFF;
+
+- if (tt_diff_len == 0)
++ if (!drop_changes && tt_diff_len == 0)
+ goto container_register;
+
+ spin_lock_bh(&bat_priv->tt.changes_list_lock);
+@@ -1027,6 +1036,9 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ }
+ spin_unlock_bh(&bat_priv->tt.changes_list_lock);
+
++ tt_extra_len = batadv_tt_len(tt_diff_entries_num -
++ tt_diff_entries_count);
++
+ /* Keep the buffer for possible tt_request */
+ spin_lock_bh(&bat_priv->tt.last_changeset_lock);
+ kfree(bat_priv->tt.last_changeset);
+@@ -1035,6 +1047,7 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ tt_change_len = batadv_tt_len(tt_diff_entries_count);
+ /* check whether this new OGM has no changes due to size problems */
+ if (tt_diff_entries_count > 0) {
++ tt_diff_len -= tt_extra_len;
+ /* if kmalloc() fails we will reply with the full table
+ * instead of providing the diff
+ */
+@@ -1047,6 +1060,8 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ }
+ spin_unlock_bh(&bat_priv->tt.last_changeset_lock);
+
++ /* Remove extra packet space for OGM */
++ tvlv_len -= tt_extra_len;
+ container_register:
+ batadv_tvlv_container_register(bat_priv, BATADV_TVLV_TT, 1, tt_data,
+ tvlv_len);
+@@ -2747,14 +2762,16 @@ static bool batadv_tt_global_valid(const void *entry_ptr,
+ *
+ * Fills the tvlv buff with the tt entries from the specified hash. If valid_cb
+ * is not provided then this becomes a no-op.
++ *
++ * Return: Remaining unused length in tvlv_buff.
+ */
+-static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+- struct batadv_hashtable *hash,
+- void *tvlv_buff, u16 tt_len,
+- bool (*valid_cb)(const void *,
+- const void *,
+- u8 *flags),
+- void *cb_data)
++static u16 batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
++ struct batadv_hashtable *hash,
++ void *tvlv_buff, u16 tt_len,
++ bool (*valid_cb)(const void *,
++ const void *,
++ u8 *flags),
++ void *cb_data)
+ {
+ struct batadv_tt_common_entry *tt_common_entry;
+ struct batadv_tvlv_tt_change *tt_change;
+@@ -2768,7 +2785,7 @@ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+ tt_change = tvlv_buff;
+
+ if (!valid_cb)
+- return;
++ return tt_len;
+
+ rcu_read_lock();
+ for (i = 0; i < hash->size; i++) {
+@@ -2794,6 +2811,8 @@ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+ }
+ }
+ rcu_read_unlock();
++
++ return batadv_tt_len(tt_tot - tt_num_entries);
+ }
+
+ /**
+@@ -3069,10 +3088,11 @@ static bool batadv_send_other_tt_response(struct batadv_priv *bat_priv,
+ goto out;
+
+ /* fill the rest of the tvlv with the real TT entries */
+- batadv_tt_tvlv_generate(bat_priv, bat_priv->tt.global_hash,
+- tt_change, tt_len,
+- batadv_tt_global_valid,
+- req_dst_orig_node);
++ tvlv_len -= batadv_tt_tvlv_generate(bat_priv,
++ bat_priv->tt.global_hash,
++ tt_change, tt_len,
++ batadv_tt_global_valid,
++ req_dst_orig_node);
+ }
+
+ /* Don't send the response, if larger than fragmented packet. */
+@@ -3196,9 +3216,11 @@ static bool batadv_send_my_tt_response(struct batadv_priv *bat_priv,
+ goto out;
+
+ /* fill the rest of the tvlv with the real TT entries */
+- batadv_tt_tvlv_generate(bat_priv, bat_priv->tt.local_hash,
+- tt_change, tt_len,
+- batadv_tt_local_valid, NULL);
++ tvlv_len -= batadv_tt_tvlv_generate(bat_priv,
++ bat_priv->tt.local_hash,
++ tt_change, tt_len,
++ batadv_tt_local_valid,
++ NULL);
+ }
+
+ tvlv_tt_data->flags = BATADV_TT_RESPONSE;
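The batman-adv changes above make the TT machinery account for entries that were reserved but never packed: batadv_tt_tvlv_generate() now returns the unused tail of the buffer, both response builders shrink tvlv_len by it, and the OGM path computes the analogous tt_extra_len. The arithmetic, with an assumed per-entry record size:

#include <stdio.h>

#define ENTRY_LEN 12U /* per-change record size assumed for the demo */

static unsigned int tt_len(unsigned int changes)
{
    return changes * ENTRY_LEN;
}

int main(void)
{
    unsigned int planned = 10, packed = 7;
    unsigned int tvlv_len = 100 + tt_len(planned); /* header + reserved */

    /* trim the space reserved for entries that were never written */
    tvlv_len -= tt_len(planned - packed);
    printf("tvlv_len=%u\n", tvlv_len); /* 100 + 7*12 = 184 */
    return 0;
}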
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 2b5ba8acd1d84a..388d46c6a043d4 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6872,38 +6872,27 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
+ return;
+
+ hci_dev_lock(hdev);
+- rcu_read_lock();
+
+ /* Connect all BISes that are bound to the BIG */
+- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+- if (bacmp(&conn->dst, BDADDR_ANY) ||
+- conn->type != ISO_LINK ||
+- conn->iso_qos.bcast.big != ev->handle)
++ while ((conn = hci_conn_hash_lookup_big_state(hdev, ev->handle,
++ BT_BOUND))) {
++ if (ev->status) {
++ hci_connect_cfm(conn, ev->status);
++ hci_conn_del(conn);
+ continue;
++ }
+
+ if (hci_conn_set_handle(conn,
+ __le16_to_cpu(ev->bis_handle[i++])))
+ continue;
+
+- if (!ev->status) {
+- conn->state = BT_CONNECTED;
+- set_bit(HCI_CONN_BIG_CREATED, &conn->flags);
+- rcu_read_unlock();
+- hci_debugfs_create_conn(conn);
+- hci_conn_add_sysfs(conn);
+- hci_iso_setup_path(conn);
+- rcu_read_lock();
+- continue;
+- }
+-
+- hci_connect_cfm(conn, ev->status);
+- rcu_read_unlock();
+- hci_conn_del(conn);
+- rcu_read_lock();
++ conn->state = BT_CONNECTED;
++ set_bit(HCI_CONN_BIG_CREATED, &conn->flags);
++ hci_debugfs_create_conn(conn);
++ hci_conn_add_sysfs(conn);
++ hci_iso_setup_path(conn);
+ }
+
+- rcu_read_unlock();
+-
+ if (!ev->status && !i)
+ /* If no BISes have been connected for the BIG,
+ * terminate. This is in case all bound connections
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 2272e1849ebd89..022b86797acdc5 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -1926,7 +1926,7 @@ static int hci_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ }
+
+ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+- sockptr_t optval, unsigned int len)
++ sockptr_t optval, unsigned int optlen)
+ {
+ struct hci_ufilter uf = { .opcode = 0 };
+ struct sock *sk = sock->sk;
+@@ -1943,7 +1943,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+
+ switch (optname) {
+ case HCI_DATA_DIR:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1954,7 +1954,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+ break;
+
+ case HCI_TIME_STAMP:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1974,7 +1974,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+ uf.event_mask[1] = *((u32 *) f->event_mask + 1);
+ }
+
+- err = bt_copy_from_sockptr(&uf, sizeof(uf), optval, len);
++ err = copy_safe_from_sockptr(&uf, sizeof(uf), optval, optlen);
+ if (err)
+ break;
+
+@@ -2005,7 +2005,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+ }
+
+ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+- sockptr_t optval, unsigned int len)
++ sockptr_t optval, unsigned int optlen)
+ {
+ struct sock *sk = sock->sk;
+ int err = 0;
+@@ -2015,7 +2015,7 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ if (level == SOL_HCI)
+ return hci_sock_setsockopt_old(sock, level, optname, optval,
+- len);
++ optlen);
+
+ if (level != SOL_BLUETOOTH)
+ return -ENOPROTOOPT;
+@@ -2035,7 +2035,7 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+ goto done;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 5e2d9758bd3c1c..644b606743e212 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -1129,6 +1129,7 @@ static int iso_listen_bis(struct sock *sk)
+ return -EHOSTUNREACH;
+
+ hci_dev_lock(hdev);
++ lock_sock(sk);
+
+ /* Fail if user set invalid QoS */
+ if (iso_pi(sk)->qos_user_set && !check_bcast_qos(&iso_pi(sk)->qos)) {
+@@ -1158,10 +1159,10 @@ static int iso_listen_bis(struct sock *sk)
+ goto unlock;
+ }
+
+- hci_dev_put(hdev);
+-
+ unlock:
++ release_sock(sk);
+ hci_dev_unlock(hdev);
++ hci_dev_put(hdev);
+ return err;
+ }
+
+@@ -1188,6 +1189,7 @@ static int iso_sock_listen(struct socket *sock, int backlog)
+
+ BT_DBG("sk %p backlog %d", sk, backlog);
+
++ sock_hold(sk);
+ lock_sock(sk);
+
+ if (sk->sk_state != BT_BOUND) {
+@@ -1200,10 +1202,16 @@ static int iso_sock_listen(struct socket *sock, int backlog)
+ goto done;
+ }
+
+- if (!bacmp(&iso_pi(sk)->dst, BDADDR_ANY))
++ if (!bacmp(&iso_pi(sk)->dst, BDADDR_ANY)) {
+ err = iso_listen_cis(sk);
+- else
++ } else {
++ /* Drop sock lock to avoid potential
++ * deadlock with the hdev lock.
++ */
++ release_sock(sk);
+ err = iso_listen_bis(sk);
++ lock_sock(sk);
++ }
+
+ if (err)
+ goto done;
+@@ -1215,6 +1223,7 @@ static int iso_sock_listen(struct socket *sock, int backlog)
+
+ done:
+ release_sock(sk);
++ sock_put(sk);
+ return err;
+ }
+
+@@ -1226,7 +1235,11 @@ static int iso_sock_accept(struct socket *sock, struct socket *newsock,
+ long timeo;
+ int err = 0;
+
+- lock_sock(sk);
++ /* Use explicit nested locking to avoid lockdep warnings generated
++ * because the parent socket and the child socket are locked on the
++ * same thread.
++ */
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+
+ timeo = sock_rcvtimeo(sk, arg->flags & O_NONBLOCK);
+
+@@ -1257,7 +1270,7 @@ static int iso_sock_accept(struct socket *sock, struct socket *newsock,
+ release_sock(sk);
+
+ timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+- lock_sock(sk);
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ }
+ remove_wait_queue(sk_sleep(sk), &wait);
+
+@@ -1398,6 +1411,7 @@ static void iso_conn_big_sync(struct sock *sk)
+ * change.
+ */
+ hci_dev_lock(hdev);
++ lock_sock(sk);
+
+ if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+ err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+@@ -1410,6 +1424,7 @@ static void iso_conn_big_sync(struct sock *sk)
+ err);
+ }
+
++ release_sock(sk);
+ hci_dev_unlock(hdev);
+ }
+
+@@ -1418,39 +1433,57 @@ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ {
+ struct sock *sk = sock->sk;
+ struct iso_pinfo *pi = iso_pi(sk);
++ bool early_ret = false;
++ int err = 0;
+
+ BT_DBG("sk %p", sk);
+
+ if (test_and_clear_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
++ sock_hold(sk);
+ lock_sock(sk);
++
+ switch (sk->sk_state) {
+ case BT_CONNECT2:
+ if (test_bit(BT_SK_PA_SYNC, &pi->flags)) {
++ release_sock(sk);
+ iso_conn_big_sync(sk);
++ lock_sock(sk);
++
+ sk->sk_state = BT_LISTEN;
+ } else {
+ iso_conn_defer_accept(pi->conn->hcon);
+ sk->sk_state = BT_CONFIG;
+ }
+- release_sock(sk);
+- return 0;
++
++ early_ret = true;
++ break;
+ case BT_CONNECTED:
+ if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
++ release_sock(sk);
+ iso_conn_big_sync(sk);
++ lock_sock(sk);
++
+ sk->sk_state = BT_LISTEN;
+- release_sock(sk);
+- return 0;
++ early_ret = true;
+ }
+
+- release_sock(sk);
+ break;
+ case BT_CONNECT:
+ release_sock(sk);
+- return iso_connect_cis(sk);
++ err = iso_connect_cis(sk);
++ lock_sock(sk);
++
++ early_ret = true;
++ break;
+ default:
+- release_sock(sk);
+ break;
+ }
++
++ release_sock(sk);
++ sock_put(sk);
++
++ if (early_ret)
++ return err;
+ }
+
+ return bt_sock_recvmsg(sock, msg, len, flags);
+@@ -1566,7 +1599,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1577,7 +1610,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case BT_PKT_STATUS:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1596,7 +1629,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&qos, sizeof(qos), optval, optlen);
++ err = copy_safe_from_sockptr(&qos, sizeof(qos), optval, optlen);
+ if (err)
+ break;
+
+@@ -1617,8 +1650,8 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(iso_pi(sk)->base, optlen, optval,
+- optlen);
++ err = copy_safe_from_sockptr(iso_pi(sk)->base, optlen, optval,
++ optlen);
+ if (err)
+ break;
+
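The iso.c changes above are lock-ordering fixes: iso_sock_listen() drops the socket lock before iso_listen_bis() takes hdev's lock (a sock_hold() keeps the socket alive across the window), iso_listen_bis() and iso_conn_big_sync() then take hdev first and the socket second, and the accept path uses lock_sock_nested() so lockdep tolerates parent and child sockets locked on one thread. A single-threaded sketch of the drop-and-reorder shape, with plain mutexes standing in for the socket and hdev locks:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t hdev_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;

/* Shape of the fixed iso_listen_bis() call: never wait for hdev while
 * holding the socket; drop it, take both in hdev-then-socket order,
 * and hand the socket back locked. */
static void listen_bis_path(void)
{
    pthread_mutex_unlock(&sk_lock); /* caller held it; drop first */
    pthread_mutex_lock(&hdev_lock);
    pthread_mutex_lock(&sk_lock);   /* fixed order: hdev, then sk */
    /* ... BIS listen work under both locks ... */
    pthread_mutex_unlock(&sk_lock);
    pthread_mutex_unlock(&hdev_lock);
    pthread_mutex_lock(&sk_lock);   /* return with sk locked again */
}

int main(void)
{
    pthread_mutex_lock(&sk_lock);
    listen_bis_path();
    pthread_mutex_unlock(&sk_lock);
    puts("no deadlock");
    return 0;
}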
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 18e89e764f3b42..3d2553dcdb1b3c 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -755,7 +755,8 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
+ opts.max_tx = chan->max_tx;
+ opts.txwin_size = chan->tx_win;
+
+- err = bt_copy_from_sockptr(&opts, sizeof(opts), optval, optlen);
++ err = copy_safe_from_sockptr(&opts, sizeof(opts), optval,
++ optlen);
+ if (err)
+ break;
+
+@@ -800,7 +801,7 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
+ break;
+
+ case L2CAP_LM:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -909,7 +910,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ sec.level = BT_SECURITY_LOW;
+
+- err = bt_copy_from_sockptr(&sec, sizeof(sec), optval, optlen);
++ err = copy_safe_from_sockptr(&sec, sizeof(sec), optval, optlen);
+ if (err)
+ break;
+
+@@ -956,7 +957,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -970,7 +971,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case BT_FLUSHABLE:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1004,7 +1005,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ pwr.force_active = BT_POWER_FORCE_ACTIVE_ON;
+
+- err = bt_copy_from_sockptr(&pwr, sizeof(pwr), optval, optlen);
++ err = copy_safe_from_sockptr(&pwr, sizeof(pwr), optval, optlen);
+ if (err)
+ break;
+
+@@ -1015,7 +1016,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case BT_CHANNEL_POLICY:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1046,7 +1047,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
++ err = copy_safe_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
+ if (err)
+ break;
+
+@@ -1076,7 +1077,8 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&mode, sizeof(mode), optval, optlen);
++ err = copy_safe_from_sockptr(&mode, sizeof(mode), optval,
++ optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 40766f8119ed9c..913402806fa0d4 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -629,10 +629,9 @@ static int rfcomm_sock_setsockopt_old(struct socket *sock, int optname,
+
+ switch (optname) {
+ case RFCOMM_LM:
+- if (bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen)) {
+- err = -EFAULT;
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ if (err)
+ break;
+- }
+
+ if (opt & RFCOMM_LM_FIPS) {
+ err = -EINVAL;
+@@ -685,7 +684,7 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ sec.level = BT_SECURITY_LOW;
+
+- err = bt_copy_from_sockptr(&sec, sizeof(sec), optval, optlen);
++ err = copy_safe_from_sockptr(&sec, sizeof(sec), optval, optlen);
+ if (err)
+ break;
+
+@@ -703,7 +702,7 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 1c7252a3686694..b872a2ca3ff38b 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -267,10 +267,13 @@ static int sco_connect(struct sock *sk)
+ else
+ type = SCO_LINK;
+
+- if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
+- (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+- err = -EOPNOTSUPP;
+- goto unlock;
++ switch (sco_pi(sk)->setting & SCO_AIRMODE_MASK) {
++ case SCO_AIRMODE_TRANSP:
++ if (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)) {
++ err = -EOPNOTSUPP;
++ goto unlock;
++ }
++ break;
+ }
+
+ hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
+@@ -853,7 +856,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -872,18 +875,11 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ voice.setting = sco_pi(sk)->setting;
+
+- err = bt_copy_from_sockptr(&voice, sizeof(voice), optval,
+- optlen);
++ err = copy_safe_from_sockptr(&voice, sizeof(voice), optval,
++ optlen);
+ if (err)
+ break;
+
+- /* Explicitly check for these values */
+- if (voice.setting != BT_VOICE_TRANSPARENT &&
+- voice.setting != BT_VOICE_CVSD_16BIT) {
+- err = -EINVAL;
+- break;
+- }
+-
+ sco_pi(sk)->setting = voice.setting;
+ hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src,
+ BDADDR_BREDR);
+@@ -891,14 +887,19 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ err = -EBADFD;
+ break;
+ }
+- if (enhanced_sync_conn_capable(hdev) &&
+- voice.setting == BT_VOICE_TRANSPARENT)
+- sco_pi(sk)->codec.id = BT_CODEC_TRANSPARENT;
++
++ switch (sco_pi(sk)->setting & SCO_AIRMODE_MASK) {
++ case SCO_AIRMODE_TRANSP:
++ if (enhanced_sync_conn_capable(hdev))
++ sco_pi(sk)->codec.id = BT_CODEC_TRANSPARENT;
++ break;
++ }
++
+ hci_dev_put(hdev);
+ break;
+
+ case BT_PKT_STATUS:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -941,7 +942,8 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(buffer, optlen, optval, optlen);
++ err = copy_struct_from_sockptr(buffer, sizeof(buffer), optval,
++ optlen);
+ if (err) {
+ hci_dev_put(hdev);
+ break;
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index e39479f1c9a486..70fea7c1a4b0a4 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -443,6 +443,21 @@ static struct net *net_alloc(void)
+ goto out;
+ }
+
++static LLIST_HEAD(defer_free_list);
++
++static void net_complete_free(void)
++{
++ struct llist_node *kill_list;
++ struct net *net, *next;
++
++ /* Get the list of namespaces to free from last round. */
++ kill_list = llist_del_all(&defer_free_list);
++
++ llist_for_each_entry_safe(net, next, kill_list, defer_free_list)
++ kmem_cache_free(net_cachep, net);
++
++}
++
+ static void net_free(struct net *net)
+ {
+ if (refcount_dec_and_test(&net->passive)) {
+@@ -451,7 +466,8 @@ static void net_free(struct net *net)
+ /* There should not be any trackers left there. */
+ ref_tracker_dir_exit(&net->notrefcnt_tracker);
+
+- kmem_cache_free(net_cachep, net);
++ /* Wait for an extra rcu_barrier() before final free. */
++ llist_add(&net->defer_free_list, &defer_free_list);
+ }
+ }
+
+@@ -636,6 +652,8 @@ static void cleanup_net(struct work_struct *work)
+ */
+ rcu_barrier();
+
++ net_complete_free();
++
+ /* Finally it is safe to free my network namespace structure */
+ list_for_each_entry_safe(net, tmp, &net_exit_list, exit_list) {
+ list_del_init(&net->exit_list);
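The net_namespace.c change above defers the final kmem_cache_free() of a struct net by one cleanup round: net_free() only queues the namespace on a lock-free llist, and cleanup_net() frees the previous round's batch after its rcu_barrier(), guaranteeing a full extra grace period. A single-threaded sketch of the same queue-then-drain shape (a plain singly linked list stands in for the llist primitives):

#include <stdio.h>
#include <stdlib.h>

struct deferred {
    struct deferred *next;
    void *obj;
};

static struct deferred *defer_list; /* stand-in for defer_free_list */

static void defer_free(void *obj)
{
    struct deferred *d = malloc(sizeof(*d));

    if (!d)
        return;
    d->obj = obj;
    d->next = defer_list; /* like llist_add() */
    defer_list = d;
}

static void complete_free(void)
{
    struct deferred *d = defer_list;

    defer_list = NULL; /* like llist_del_all() */
    while (d) {
        struct deferred *next = d->next;

        free(d->obj);
        free(d);
        d = next;
    }
}

int main(void)
{
    defer_free(malloc(32));
    /* cleanup_net()'s rcu_barrier() would sit here */
    complete_free();
    puts("freed after the barrier");
    return 0;
}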
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 78347d7d25ef31..f1b9b3958792cd 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -159,6 +159,7 @@ static void sock_map_del_link(struct sock *sk,
+ verdict_stop = true;
+ list_del(&link->list);
+ sk_psock_free_link(link);
++ break;
+ }
+ }
+ spin_unlock_bh(&psock->link_lock);
+@@ -411,12 +412,11 @@ static void *sock_map_lookup_sys(struct bpf_map *map, void *key)
+ static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
+ struct sock **psk)
+ {
+- struct sock *sk;
++ struct sock *sk = NULL;
+ int err = 0;
+
+ spin_lock_bh(&stab->lock);
+- sk = *psk;
+- if (!sk_test || sk_test == sk)
++ if (!sk_test || sk_test == *psk)
+ sk = xchg(psk, NULL);
+
+ if (likely(sk))
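Both sock_map.c fixes above guard against stale state: sock_map_del_link() must stop iterating once the matching link is freed, and __sock_map_delete() must not read *psk into sk ahead of the locked test, since sk is only meaningful when the xchg() actually ran. A sketch of a single-match delete that stops as soon as the node is freed (in the kernel loop the iterator lives inside the freed object, which makes the break mandatory there):

#include <stdio.h>
#include <stdlib.h>

struct link {
    int id;
    struct link *next;
};

/* Delete the first link with a matching id, then stop the walk. */
static void del_link(struct link **head, int id)
{
    for (struct link **pp = head; *pp; pp = &(*pp)->next) {
        if ((*pp)->id == id) {
            struct link *dead = *pp;

            *pp = dead->next;
            free(dead);
            break; /* single match handled; in the kernel version,
                    * continuing would walk through freed memory */
        }
    }
}

int main(void)
{
    struct link *b = malloc(sizeof(*b));
    struct link *a = malloc(sizeof(*a));

    b->id = 2; b->next = NULL;
    a->id = 1; a->next = b;
    del_link(&a, 2);
    printf("head id=%d next=%p\n", a->id, (void *)a->next); /* 1 (nil) */
    return 0;
}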
+diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c
+index 8e8b1bef6af69d..11ea8cfd62661c 100644
+--- a/net/dsa/tag_ocelot_8021q.c
++++ b/net/dsa/tag_ocelot_8021q.c
+@@ -79,7 +79,7 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
+ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
+ struct net_device *netdev)
+ {
+- int src_port, switch_id;
++ int src_port = -1, switch_id = -1;
+
+ dsa_8021q_rcv(skb, &src_port, &switch_id, NULL, NULL);
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 68804fd01dafc4..8efc58716ce969 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -883,8 +883,10 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
+ unsigned int size;
+
+ if (mptcp_syn_options(sk, skb, &size, &opts->mptcp)) {
+- opts->options |= OPTION_MPTCP;
+- remaining -= size;
++ if (remaining >= size) {
++ opts->options |= OPTION_MPTCP;
++ remaining -= size;
++ }
+ }
+ }
+
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 6dfc61a9acd4a5..1b1bf044378d48 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1061,13 +1061,13 @@ ieee80211_copy_mbssid_beacon(u8 *pos, struct cfg80211_mbssid_elems *dst,
+ {
+ int i, offset = 0;
+
++ dst->cnt = src->cnt;
+ for (i = 0; i < src->cnt; i++) {
+ memcpy(pos + offset, src->elem[i].data, src->elem[i].len);
+ dst->elem[i].len = src->elem[i].len;
+ dst->elem[i].data = pos + offset;
+ offset += dst->elem[i].len;
+ }
+- dst->cnt = src->cnt;
+
+ return offset;
+ }
+@@ -1911,6 +1911,8 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ params->eht_capa_len,
+ link_sta);
+
++ ieee80211_sta_init_nss(link_sta);
++
+ if (params->opmode_notif_used) {
+ /* returned value is only needed for rc update, but the
+ * rc isn't initialized here yet, so ignore it
+@@ -1920,8 +1922,6 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ sband->band);
+ }
+
+- ieee80211_sta_init_nss(link_sta);
+-
+ return 0;
+ }
+
+@@ -3674,13 +3674,12 @@ void ieee80211_csa_finish(struct ieee80211_vif *vif, unsigned int link_id)
+ }
+ EXPORT_SYMBOL(ieee80211_csa_finish);
+
+-void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif, bool block_tx)
++void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif)
+ {
+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
+ struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ struct ieee80211_local *local = sdata->local;
+
+- sdata->csa_blocked_queues = block_tx;
+ sdata_info(sdata, "channel switch failed, disconnecting\n");
+ wiphy_work_queue(local->hw.wiphy, &ifmgd->csa_connection_drop_work);
+ }
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 3d3c9139ff5e45..7a0242e937d364 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1106,8 +1106,6 @@ struct ieee80211_sub_if_data {
+
+ unsigned long state;
+
+- bool csa_blocked_queues;
+-
+ char name[IFNAMSIZ];
+
+ struct ieee80211_fragment_cache frags;
+@@ -2411,17 +2409,13 @@ void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata);
+ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_hdr *hdr, bool ack, u16 tx_time);
+-
++unsigned int
++ieee80211_get_vif_queues(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata);
+ void ieee80211_wake_queues_by_reason(struct ieee80211_hw *hw,
+ unsigned long queues,
+ enum queue_stop_reason reason,
+ bool refcounted);
+-void ieee80211_stop_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason);
+-void ieee80211_wake_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason);
+ void ieee80211_stop_queues_by_reason(struct ieee80211_hw *hw,
+ unsigned long queues,
+ enum queue_stop_reason reason,
+@@ -2432,6 +2426,43 @@ void ieee80211_wake_queue_by_reason(struct ieee80211_hw *hw, int queue,
+ void ieee80211_stop_queue_by_reason(struct ieee80211_hw *hw, int queue,
+ enum queue_stop_reason reason,
+ bool refcounted);
++static inline void
++ieee80211_stop_vif_queues(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_stop_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, true);
++}
++
++static inline void
++ieee80211_wake_vif_queues(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_wake_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, true);
++}
++static inline void
++ieee80211_stop_vif_queues_norefcount(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_stop_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, false);
++}
++static inline void
++ieee80211_wake_vif_queues_norefcount(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_wake_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, false);
++}
+ void ieee80211_add_pending_skb(struct ieee80211_local *local,
+ struct sk_buff *skb);
+ void ieee80211_add_pending_skbs(struct ieee80211_local *local,
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 6ef0990d3d296a..af9055252e6dfa 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -2364,18 +2364,14 @@ void ieee80211_vif_block_queues_csa(struct ieee80211_sub_if_data *sdata)
+ if (ieee80211_hw_check(&local->hw, HANDLES_QUIET_CSA))
+ return;
+
+- ieee80211_stop_vif_queues(local, sdata,
+- IEEE80211_QUEUE_STOP_REASON_CSA);
+- sdata->csa_blocked_queues = true;
++ ieee80211_stop_vif_queues_norefcount(local, sdata,
++ IEEE80211_QUEUE_STOP_REASON_CSA);
+ }
+
+ void ieee80211_vif_unblock_queues_csa(struct ieee80211_sub_if_data *sdata)
+ {
+ struct ieee80211_local *local = sdata->local;
+
+- if (sdata->csa_blocked_queues) {
+- ieee80211_wake_vif_queues(local, sdata,
+- IEEE80211_QUEUE_STOP_REASON_CSA);
+- sdata->csa_blocked_queues = false;
+- }
++ ieee80211_wake_vif_queues_norefcount(local, sdata,
++ IEEE80211_QUEUE_STOP_REASON_CSA);
+ }
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 0303972c23e4cb..111066928b963c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2636,8 +2636,6 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
+ */
+ link->conf->csa_active = true;
+ link->u.mgd.csa.blocked_tx = csa_ie.mode;
+- sdata->csa_blocked_queues =
+- csa_ie.mode && !ieee80211_hw_check(&local->hw, HANDLES_QUIET_CSA);
+
+ wiphy_work_queue(sdata->local->hw.wiphy,
+ &ifmgd->csa_connection_drop_work);
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index f94faa86ba8a35..b4814e97cf7422 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -657,7 +657,7 @@ void ieee80211_wake_queues(struct ieee80211_hw *hw)
+ }
+ EXPORT_SYMBOL(ieee80211_wake_queues);
+
+-static unsigned int
++unsigned int
+ ieee80211_get_vif_queues(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata)
+ {
+@@ -669,7 +669,8 @@ ieee80211_get_vif_queues(struct ieee80211_local *local,
+ queues = 0;
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++)
+- queues |= BIT(sdata->vif.hw_queue[ac]);
++ if (sdata->vif.hw_queue[ac] != IEEE80211_INVAL_HW_QUEUE)
++ queues |= BIT(sdata->vif.hw_queue[ac]);
+ if (sdata->vif.cab_queue != IEEE80211_INVAL_HW_QUEUE)
+ queues |= BIT(sdata->vif.cab_queue);
+ } else {
+@@ -724,24 +725,6 @@ void ieee80211_flush_queues(struct ieee80211_local *local,
+ __ieee80211_flush_queues(local, sdata, 0, drop);
+ }
+
+-void ieee80211_stop_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason)
+-{
+- ieee80211_stop_queues_by_reason(&local->hw,
+- ieee80211_get_vif_queues(local, sdata),
+- reason, true);
+-}
+-
+-void ieee80211_wake_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason)
+-{
+- ieee80211_wake_queues_by_reason(&local->hw,
+- ieee80211_get_vif_queues(local, sdata),
+- reason, true);
+-}
+-
+ static void __iterate_interfaces(struct ieee80211_local *local,
+ u32 iter_flags,
+ void (*iterator)(void *data, u8 *mac,
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4a137afaf0b87e..0c5ff4afc37022 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1495,7 +1495,6 @@ static int nf_tables_newtable(struct sk_buff *skb, const struct nfnl_info *info,
+ INIT_LIST_HEAD(&table->sets);
+ INIT_LIST_HEAD(&table->objects);
+ INIT_LIST_HEAD(&table->flowtables);
+- write_pnet(&table->net, net);
+ table->family = family;
+ table->flags = flags;
+ table->handle = ++nft_net->table_handle;
+@@ -3884,8 +3883,11 @@ void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ kfree(rule);
+ }
+
++/* can only be used if rule is no longer visible to dumps */
+ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
++ lockdep_commit_lock_is_held(ctx->net);
++
+ nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ nf_tables_rule_destroy(ctx, rule);
+ }
+@@ -5650,6 +5652,8 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ struct nft_set_binding *binding,
+ enum nft_trans_phase phase)
+ {
++ lockdep_commit_lock_is_held(ctx->net);
++
+ switch (phase) {
+ case NFT_TRANS_PREPARE_ERROR:
+ nft_set_trans_unbind(ctx, set);
+@@ -11456,19 +11460,6 @@ static void __nft_release_basechain_now(struct nft_ctx *ctx)
+ nf_tables_chain_destroy(ctx->chain);
+ }
+
+-static void nft_release_basechain_rcu(struct rcu_head *head)
+-{
+- struct nft_chain *chain = container_of(head, struct nft_chain, rcu_head);
+- struct nft_ctx ctx = {
+- .family = chain->table->family,
+- .chain = chain,
+- .net = read_pnet(&chain->table->net),
+- };
+-
+- __nft_release_basechain_now(&ctx);
+- put_net(ctx.net);
+-}
+-
+ int __nft_release_basechain(struct nft_ctx *ctx)
+ {
+ struct nft_rule *rule;
+@@ -11483,11 +11474,18 @@ int __nft_release_basechain(struct nft_ctx *ctx)
+ nft_chain_del(ctx->chain);
+ nft_use_dec(&ctx->table->use);
+
+- if (maybe_get_net(ctx->net))
+- call_rcu(&ctx->chain->rcu_head, nft_release_basechain_rcu);
+- else
++ if (!maybe_get_net(ctx->net)) {
+ __nft_release_basechain_now(ctx);
++ return 0;
++ }
++
++ /* wait for ruleset dumps to complete. Owning chain is no longer in
++ * lists, so new dumps can't find any of these rules anymore.
++ */
++ synchronize_rcu();
+
++ __nft_release_basechain_now(ctx);
++ put_net(ctx->net);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(__nft_release_basechain);
+diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
+index f8b25b6f5da736..9869ef3c2ab378 100644
+--- a/net/netfilter/xt_IDLETIMER.c
++++ b/net/netfilter/xt_IDLETIMER.c
+@@ -409,21 +409,23 @@ static void idletimer_tg_destroy(const struct xt_tgdtor_param *par)
+
+ mutex_lock(&list_mutex);
+
+- if (--info->timer->refcnt == 0) {
+- pr_debug("deleting timer %s\n", info->label);
+-
+- list_del(&info->timer->entry);
+- timer_shutdown_sync(&info->timer->timer);
+- cancel_work_sync(&info->timer->work);
+- sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
+- kfree(info->timer->attr.attr.name);
+- kfree(info->timer);
+- } else {
++ if (--info->timer->refcnt > 0) {
+ pr_debug("decreased refcnt of timer %s to %u\n",
+ info->label, info->timer->refcnt);
++ mutex_unlock(&list_mutex);
++ return;
+ }
+
++ pr_debug("deleting timer %s\n", info->label);
++
++ list_del(&info->timer->entry);
+ mutex_unlock(&list_mutex);
++
++ timer_shutdown_sync(&info->timer->timer);
++ cancel_work_sync(&info->timer->work);
++ sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
++ kfree(info->timer->attr.attr.name);
++ kfree(info->timer);
+ }
+
+ static void idletimer_tg_destroy_v1(const struct xt_tgdtor_param *par)
+@@ -434,25 +436,27 @@ static void idletimer_tg_destroy_v1(const struct xt_tgdtor_param *par)
+
+ mutex_lock(&list_mutex);
+
+- if (--info->timer->refcnt == 0) {
+- pr_debug("deleting timer %s\n", info->label);
+-
+- list_del(&info->timer->entry);
+- if (info->timer->timer_type & XT_IDLETIMER_ALARM) {
+- alarm_cancel(&info->timer->alarm);
+- } else {
+- timer_shutdown_sync(&info->timer->timer);
+- }
+- cancel_work_sync(&info->timer->work);
+- sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
+- kfree(info->timer->attr.attr.name);
+- kfree(info->timer);
+- } else {
++ if (--info->timer->refcnt > 0) {
+ pr_debug("decreased refcnt of timer %s to %u\n",
+ info->label, info->timer->refcnt);
++ mutex_unlock(&list_mutex);
++ return;
+ }
+
++ pr_debug("deleting timer %s\n", info->label);
++
++ list_del(&info->timer->entry);
+ mutex_unlock(&list_mutex);
++
++ if (info->timer->timer_type & XT_IDLETIMER_ALARM) {
++ alarm_cancel(&info->timer->alarm);
++ } else {
++ timer_shutdown_sync(&info->timer->timer);
++ }
++ cancel_work_sync(&info->timer->work);
++ sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
++ kfree(info->timer->attr.attr.name);
++ kfree(info->timer);
+ }
+
+
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 39382ee1e33108..3b519adc01259f 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -78,6 +78,8 @@ struct netem_sched_data {
+ struct sk_buff *t_head;
+ struct sk_buff *t_tail;
+
++ u32 t_len;
++
+ /* optional qdisc for classful handling (NULL at netem init) */
+ struct Qdisc *qdisc;
+
+@@ -382,6 +384,7 @@ static void tfifo_reset(struct Qdisc *sch)
+ rtnl_kfree_skbs(q->t_head, q->t_tail);
+ q->t_head = NULL;
+ q->t_tail = NULL;
++ q->t_len = 0;
+ }
+
+ static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
+@@ -411,6 +414,7 @@ static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
+ rb_link_node(&nskb->rbnode, parent, p);
+ rb_insert_color(&nskb->rbnode, &q->t_root);
+ }
++ q->t_len++;
+ sch->q.qlen++;
+ }
+
+@@ -517,7 +521,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 1<<get_random_u32_below(8);
+ }
+
+- if (unlikely(sch->q.qlen >= sch->limit)) {
++ if (unlikely(q->t_len >= sch->limit)) {
+ /* re-link segs, so that qdisc_drop_all() frees them all */
+ skb->next = segs;
+ qdisc_drop_all(skb, sch, to_free);
+@@ -701,8 +705,8 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ tfifo_dequeue:
+ skb = __qdisc_dequeue_head(&sch->q);
+ if (skb) {
+- qdisc_qstats_backlog_dec(sch, skb);
+ deliver:
++ qdisc_qstats_backlog_dec(sch, skb);
+ qdisc_bstats_update(sch, skb);
+ return skb;
+ }
+@@ -718,8 +722,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+
+ if (time_to_send <= now && q->slot.slot_next <= now) {
+ netem_erase_head(q, skb);
+- sch->q.qlen--;
+- qdisc_qstats_backlog_dec(sch, skb);
++ q->t_len--;
+ skb->next = NULL;
+ skb->prev = NULL;
+ /* skb->dev shares skb->rbnode area,
+@@ -746,16 +749,21 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ if (net_xmit_drop_count(err))
+ qdisc_qstats_drop(sch);
+ qdisc_tree_reduce_backlog(sch, 1, pkt_len);
++ sch->qstats.backlog -= pkt_len;
++ sch->q.qlen--;
+ }
+ goto tfifo_dequeue;
+ }
++ sch->q.qlen--;
+ goto deliver;
+ }
+
+ if (q->qdisc) {
+ skb = q->qdisc->ops->dequeue(q->qdisc);
+- if (skb)
++ if (skb) {
++ sch->q.qlen--;
+ goto deliver;
++ }
+ }
+
+ qdisc_watchdog_schedule_ns(&q->watchdog,
+@@ -765,8 +773,10 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+
+ if (q->qdisc) {
+ skb = q->qdisc->ops->dequeue(q->qdisc);
+- if (skb)
++ if (skb) {
++ sch->q.qlen--;
+ goto deliver;
++ }
+ }
+ return NULL;
+ }
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index b7e25e7e9933b6..108a4cc2e00107 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -807,6 +807,7 @@ static void cleanup_bearer(struct work_struct *work)
+ {
+ struct udp_bearer *ub = container_of(work, struct udp_bearer, work);
+ struct udp_replicast *rcast, *tmp;
++ struct tipc_net *tn;
+
+ list_for_each_entry_safe(rcast, tmp, &ub->rcast.list, list) {
+ dst_cache_destroy(&rcast->dst_cache);
+@@ -814,10 +815,14 @@ static void cleanup_bearer(struct work_struct *work)
+ kfree_rcu(rcast, rcu);
+ }
+
++ tn = tipc_net(sock_net(ub->ubsock->sk));
++
+ dst_cache_destroy(&ub->rcast.dst_cache);
+ udp_tunnel_sock_release(ub->ubsock);
++
++ /* Note: could use a call_rcu() to avoid another synchronize_net() */
+ synchronize_net();
+- atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
++ atomic_dec(&tn->wq_count);
+ kfree(ub);
+ }
+
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 001ccc55ef0f93..6b176230044397 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2313,6 +2313,7 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
+ fds_sent = true;
+
+ if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES)) {
++ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ err = skb_splice_from_iter(skb, &msg->msg_iter, size,
+ sk->sk_allocation);
+ if (err < 0) {
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 9b1b9dc5a7eb2a..1e78f575fb5630 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -814,7 +814,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [NL80211_ATTR_MLO_LINKS] =
+ NLA_POLICY_NESTED_ARRAY(nl80211_policy),
+ [NL80211_ATTR_MLO_LINK_ID] =
+- NLA_POLICY_RANGE(NLA_U8, 0, IEEE80211_MLD_MAX_NUM_LINKS),
++ NLA_POLICY_RANGE(NLA_U8, 0, IEEE80211_MLD_MAX_NUM_LINKS - 1),
+ [NL80211_ATTR_MLD_ADDR] = NLA_POLICY_EXACT_LEN(ETH_ALEN),
+ [NL80211_ATTR_MLO_SUPPORT] = { .type = NLA_FLAG },
+ [NL80211_ATTR_MAX_NUM_AKM_SUITES] = { .type = NLA_REJECT },
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 431da30817a6f6..26817160008766 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -83,6 +83,7 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev)
+ if (!request)
+ return -ENOMEM;
+
++ request->n_channels = n_channels;
+ if (wdev->conn->params.channel) {
+ enum nl80211_band band = wdev->conn->params.channel->band;
+ struct ieee80211_supported_band *sband =
+diff --git a/rust/Makefile b/rust/Makefile
+index b5e0a73b78f3e5..9f59baacaf7730 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -267,9 +267,22 @@ endif
+
+ bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__
+
++# Each `bindgen` release may upgrade the list of Rust target versions. By
++# default, the highest stable release in their list is used. Thus we need to set
++# a `--rust-target` to avoid future `bindgen` releases emitting code that
++# `rustc` may not understand. On top of that, `bindgen` does not support passing
++# an unknown Rust target version.
++#
++# Therefore, the Rust target for `bindgen` can be only as high as the minimum
++# Rust version the kernel supports and only as high as the greatest stable Rust
++# target supported by the minimum `bindgen` version the kernel supports (that
++# is, if we do not test the actual `rustc`/`bindgen` versions running).
++#
++# Starting with `bindgen` 0.71.0, we will be able to set any future Rust version
++# instead, i.e. we will be able to set here our minimum supported Rust version.
+ quiet_cmd_bindgen = BINDGEN $@
+ cmd_bindgen = \
+- $(BINDGEN) $< $(bindgen_target_flags) \
++ $(BINDGEN) $< $(bindgen_target_flags) --rust-target 1.68 \
+ --use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \
+ --no-debug '.*' --enable-function-attribute-detection \
+ -o $@ -- $(bindgen_c_flags_final) -DMODULE \
+diff --git a/sound/core/control_led.c b/sound/core/control_led.c
+index 65a1ebe877768f..e33dfcf863cf13 100644
+--- a/sound/core/control_led.c
++++ b/sound/core/control_led.c
+@@ -668,10 +668,16 @@ static void snd_ctl_led_sysfs_add(struct snd_card *card)
+ goto cerr;
+ led->cards[card->number] = led_card;
+ snprintf(link_name, sizeof(link_name), "led-%s", led->name);
+- WARN(sysfs_create_link(&card->ctl_dev->kobj, &led_card->dev.kobj, link_name),
+- "can't create symlink to controlC%i device\n", card->number);
+- WARN(sysfs_create_link(&led_card->dev.kobj, &card->card_dev.kobj, "card"),
+- "can't create symlink to card%i\n", card->number);
++ if (sysfs_create_link(&card->ctl_dev->kobj, &led_card->dev.kobj,
++ link_name))
++ dev_err(card->dev,
++ "%s: can't create symlink to controlC%i device\n",
++ __func__, card->number);
++ if (sysfs_create_link(&led_card->dev.kobj, &card->card_dev.kobj,
++ "card"))
++ dev_err(card->dev,
++ "%s: can't create symlink to card%i\n",
++ __func__, card->number);
+
+ continue;
+ cerr:
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 973671e0cdb09d..192fc75b51e6db 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10127,6 +10127,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1025, 0x159c, "Acer Nitro 5 AN515-58", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x169a, "Acer Swift SFG16", ALC256_FIXUP_ACER_SFG16_MICMUTE_LED),
+ SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index e38c5885dadfbc..ecf57a6cb7c37d 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -578,14 +578,19 @@ static int acp6x_probe(struct platform_device *pdev)
+
+ handle = ACPI_HANDLE(pdev->dev.parent);
+ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
+- if (!ACPI_FAILURE(ret))
++ if (!ACPI_FAILURE(ret)) {
+ wov_en = dmic_status;
++ if (!wov_en)
++ return -ENODEV;
++ } else {
++		/* In case of ACPI method read failure, jump to check_dmi_entry */
++ goto check_dmi_entry;
++ }
+
+- if (is_dmic_enable && wov_en)
++ if (is_dmic_enable)
+ platform_set_drvdata(pdev, &acp6x_card);
+- else
+- return 0;
+
++check_dmi_entry:
+ /* check for any DMI overrides */
+ dmi_id = dmi_first_match(yc_acp_quirk_table);
+ if (dmi_id)
+diff --git a/sound/soc/codecs/tas2781-i2c.c b/sound/soc/codecs/tas2781-i2c.c
+index 12d093437ba9b6..1b2f55030c3961 100644
+--- a/sound/soc/codecs/tas2781-i2c.c
++++ b/sound/soc/codecs/tas2781-i2c.c
+@@ -370,7 +370,7 @@ static void sngl_calib_start(struct tasdevice_priv *tas_priv, int i,
+ tasdevice_dev_read(tas_priv, i, p[j].reg,
+ (int *)&p[j].val[0]);
+ } else {
+- switch (p[j].reg) {
++ switch (tas2781_cali_start_reg[j].reg) {
+ case 0: {
+ if (!reg[0])
+ continue;
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index b6ff04f7138a2c..ee946e0d3f4969 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -1204,7 +1204,7 @@ static struct snd_kcontrol_new fsl_spdif_ctrls[] = {
+ },
+ /* DPLL lock info get controller */
+ {
+- .iface = SNDRV_CTL_ELEM_IFACE_PCM,
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = RX_SAMPLE_RATE_KCONTROL,
+ .access = SNDRV_CTL_ELEM_ACCESS_READ |
+ SNDRV_CTL_ELEM_ACCESS_VOLATILE,
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index beede7344efd63..4341269eb97780 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -169,7 +169,7 @@ static int fsl_xcvr_capds_put(struct snd_kcontrol *kcontrol,
+ }
+
+ static struct snd_kcontrol_new fsl_xcvr_earc_capds_kctl = {
+- .iface = SNDRV_CTL_ELEM_IFACE_PCM,
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Capabilities Data Structure",
+ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
+ .info = fsl_xcvr_type_capds_bytes_info,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index a58842a8c8a641..db57292c00ca1e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -1003,8 +1003,12 @@ static int sof_card_dai_links_create(struct snd_soc_card *card)
+ return ret;
+ }
+
+- /* One per DAI link, worst case is a DAI link for every endpoint */
+- sof_dais = kcalloc(num_ends, sizeof(*sof_dais), GFP_KERNEL);
++ /*
++	 * One per DAI link; worst case is a DAI link for every endpoint. Also
++	 * add one additional entry to act as a terminator so that code can
++	 * iterate until it hits an uninitialised DAI.
++ */
++ sof_dais = kcalloc(num_ends + 1, sizeof(*sof_dais), GFP_KERNEL);
+ if (!sof_dais)
+ return -ENOMEM;
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 00101875d9a8d5..a0767de7f1b7ed 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2179,6 +2179,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
++ DEVICE_FLG(0x0499, 0x1506, /* Yamaha THR5 */
++ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
+ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x0499, 0x3108, /* Yamaha YIT-W12TX */
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index c6d67fc9e57ef0..83c43dc13313cc 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -47,6 +47,20 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
+ */
+ perf_cpu_map__put(evsel->cpus);
+ evsel->cpus = perf_cpu_map__intersect(evlist->user_requested_cpus, evsel->own_cpus);
++
++ /*
++ * Empty cpu lists would eventually get opened as "any" so remove
++ * genuinely empty ones before they're opened in the wrong place.
++ */
++ if (perf_cpu_map__is_empty(evsel->cpus)) {
++ struct perf_evsel *next = perf_evlist__next(evlist, evsel);
++
++ perf_evlist__remove(evlist, evsel);
++ /* Keep idx contiguous */
++ if (next)
++ list_for_each_entry_from(next, &evlist->entries, node)
++ next->idx--;
++ }
+ } else if (!evsel->own_cpus || evlist->has_user_cpus ||
+ (!evsel->requires_cpu && perf_cpu_map__has_any_cpu(evlist->user_requested_cpus))) {
+ /*
+@@ -80,11 +94,11 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
+
+ static void perf_evlist__propagate_maps(struct perf_evlist *evlist)
+ {
+- struct perf_evsel *evsel;
++ struct perf_evsel *evsel, *n;
+
+ evlist->needs_map_propagation = true;
+
+- perf_evlist__for_each_evsel(evlist, evsel)
++ list_for_each_entry_safe(evsel, n, &evlist->entries, node)
+ __perf_evlist__propagate_maps(evlist, evsel);
+ }
+
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 6604f5d038aadf..f0d8796b984a80 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -3820,9 +3820,12 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ break;
+
+ case INSN_CONTEXT_SWITCH:
+- if (func && (!next_insn || !next_insn->hint)) {
+- WARN_INSN(insn, "unsupported instruction in callable function");
+- return 1;
++ if (func) {
++ if (!next_insn || !next_insn->hint) {
++ WARN_INSN(insn, "unsupported instruction in callable function");
++ return 1;
++ }
++ break;
+ }
+ return 0;
+
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index 272d3c70810e7d..a56cf8b0a7d405 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -1151,8 +1151,9 @@ static int cmp_profile_data(const void *a, const void *b)
+
+ if (v1 > v2)
+ return -1;
+- else
++ if (v1 < v2)
+ return 1;
++ return 0;
+ }
+
+ static void print_profile_result(struct perf_ftrace *ftrace)
+diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
+index 8982f68e7230cd..e763e8d99a4367 100644
+--- a/tools/perf/util/build-id.c
++++ b/tools/perf/util/build-id.c
+@@ -277,7 +277,7 @@ static int write_buildid(const char *name, size_t name_len, struct build_id *bid
+ struct perf_record_header_build_id b;
+ size_t len;
+
+- len = sizeof(b) + name_len + 1;
++ len = name_len + 1;
+ len = PERF_ALIGN(len, sizeof(u64));
+
+ memset(&b, 0, sizeof(b));
+@@ -286,7 +286,7 @@ static int write_buildid(const char *name, size_t name_len, struct build_id *bid
+ misc |= PERF_RECORD_MISC_BUILD_ID_SIZE;
+ b.pid = pid;
+ b.header.misc = misc;
+- b.header.size = len;
++ b.header.size = sizeof(b) + len;
+
+ err = do_write(fd, &b, sizeof(b));
+ if (err < 0)
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 4f0ac998b0ccfd..27d5345d2b307a 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -134,6 +134,8 @@ struct machine *machine__new_host(void)
+
+ if (machine__create_kernel_maps(machine) < 0)
+ goto out_delete;
++
++ machine->env = &perf_env;
+ }
+
+ return machine;
+diff --git a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S
+index df3230fdac3958..66ab2e0bae5fd0 100644
+--- a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S
++++ b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S
+@@ -81,32 +81,31 @@ do_syscall:
+ stp x27, x28, [sp, #96]
+
+ // Set SVCR if we're doing SME
+- cbz x1, 1f
++ cbz x1, load_gpr
+ adrp x2, svcr_in
+ ldr x2, [x2, :lo12:svcr_in]
+ msr S3_3_C4_C2_2, x2
+-1:
+
+ // Load ZA and ZT0 if enabled - uses x12 as scratch due to SME LDR
+- tbz x2, #SVCR_ZA_SHIFT, 1f
++ tbz x2, #SVCR_ZA_SHIFT, load_gpr
+ mov w12, #0
+ ldr x2, =za_in
+-2: _ldr_za 12, 2
++1: _ldr_za 12, 2
+ add x2, x2, x1
+ add x12, x12, #1
+ cmp x1, x12
+- bne 2b
++ bne 1b
+
+ // ZT0
+ mrs x2, S3_0_C0_C4_5 // ID_AA64SMFR0_EL1
+ ubfx x2, x2, #ID_AA64SMFR0_EL1_SMEver_SHIFT, \
+ #ID_AA64SMFR0_EL1_SMEver_WIDTH
+- cbz x2, 1f
++ cbz x2, load_gpr
+ adrp x2, zt_in
+ add x2, x2, :lo12:zt_in
+ _ldr_zt 2
+-1:
+
++load_gpr:
+ // Load GPRs x8-x28, and save our SP/FP for later comparison
+ ldr x2, =gpr_in
+ add x2, x2, #64
+@@ -125,9 +124,9 @@ do_syscall:
+ str x30, [x2], #8 // LR
+
+ // Load FPRs if we're not doing neither SVE nor streaming SVE
+- cbnz x0, 1f
++ cbnz x0, check_sve_in
+ ldr x2, =svcr_in
+- tbnz x2, #SVCR_SM_SHIFT, 1f
++ tbnz x2, #SVCR_SM_SHIFT, check_sve_in
+
+ ldr x2, =fpr_in
+ ldp q0, q1, [x2]
+@@ -148,8 +147,8 @@ do_syscall:
+ ldp q30, q31, [x2, #16 * 30]
+
+ b 2f
+-1:
+
++check_sve_in:
+ // Load the SVE registers if we're doing SVE/SME
+
+ ldr x2, =z_in
+@@ -256,32 +255,31 @@ do_syscall:
+ stp q30, q31, [x2, #16 * 30]
+
+ // Save SVCR if we're doing SME
+- cbz x1, 1f
++ cbz x1, check_sve_out
+ mrs x2, S3_3_C4_C2_2
+ adrp x3, svcr_out
+ str x2, [x3, :lo12:svcr_out]
+-1:
+
+ // Save ZA if it's enabled - uses x12 as scratch due to SME STR
+- tbz x2, #SVCR_ZA_SHIFT, 1f
++ tbz x2, #SVCR_ZA_SHIFT, check_sve_out
+ mov w12, #0
+ ldr x2, =za_out
+-2: _str_za 12, 2
++1: _str_za 12, 2
+ add x2, x2, x1
+ add x12, x12, #1
+ cmp x1, x12
+- bne 2b
++ bne 1b
+
+ // ZT0
+ mrs x2, S3_0_C0_C4_5 // ID_AA64SMFR0_EL1
+ ubfx x2, x2, #ID_AA64SMFR0_EL1_SMEver_SHIFT, \
+ #ID_AA64SMFR0_EL1_SMEver_WIDTH
+- cbz x2, 1f
++ cbz x2, check_sve_out
+ adrp x2, zt_out
+ add x2, x2, :lo12:zt_out
+ _str_zt 2
+-1:
+
++check_sve_out:
+ // Save the SVE state if we have some
+ cbz x0, 1f
+
+diff --git a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+index 5aaf2b065f86c2..bba3e37f749b86 100644
+--- a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
++++ b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+@@ -7,11 +7,7 @@
+ #include "bpf_misc.h"
+
+ SEC("tp_btf/bpf_testmod_test_nullable_bare")
+-/* This used to be a failure test, but raw_tp nullable arguments can now
+- * directly be dereferenced, whether they have nullable annotation or not,
+- * and don't need to be explicitly checked.
+- */
+-__success
++__failure __msg("R1 invalid mem access 'trusted_ptr_or_null_'")
+ int BPF_PROG(handle_tp_btf_nullable_bare1, struct bpf_testmod_test_read_ctx *nullable_ctx)
+ {
+ return nullable_ctx->len;
+diff --git a/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
+index a570e48b917acc..bfc3bf18fed4fe 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
++++ b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
+@@ -11,7 +11,7 @@ __success __retval(0)
+ __naked void btf_ctx_access_accept(void)
+ {
+ asm volatile (" \
+- r2 = *(u32*)(r1 + 8); /* load 2nd argument value (int pointer) */\
++ r2 = *(u64 *)(r1 + 8); /* load 2nd argument value (int pointer) */\
+ r0 = 0; \
+ exit; \
+ " ::: __clobber_all);
+@@ -23,7 +23,7 @@ __success __retval(0)
+ __naked void ctx_access_u32_pointer_accept(void)
+ {
+ asm volatile (" \
+- r2 = *(u32*)(r1 + 0); /* load 1nd argument value (u32 pointer) */\
++ r2 = *(u64 *)(r1 + 0); /* load 1nd argument value (u32 pointer) */\
+ r0 = 0; \
+ exit; \
+ " ::: __clobber_all);
+diff --git a/tools/testing/selftests/bpf/progs/verifier_d_path.c b/tools/testing/selftests/bpf/progs/verifier_d_path.c
+index ec79cbcfde91ef..87e51a215558fd 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_d_path.c
++++ b/tools/testing/selftests/bpf/progs/verifier_d_path.c
+@@ -11,7 +11,7 @@ __success __retval(0)
+ __naked void d_path_accept(void)
+ {
+ asm volatile (" \
+- r1 = *(u32*)(r1 + 0); \
++ r1 = *(u64 *)(r1 + 0); \
+ r2 = r10; \
+ r2 += -8; \
+ r6 = 0; \
+@@ -31,7 +31,7 @@ __failure __msg("helper call is not allowed in probe")
+ __naked void d_path_reject(void)
+ {
+ asm volatile (" \
+- r1 = *(u32*)(r1 + 0); \
++ r1 = *(u64 *)(r1 + 0); \
+ r2 = r10; \
+ r2 += -8; \
+ r6 = 0; \
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh
+index 0c47faff9274b1..c068e6c2a580ea 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh
+@@ -22,20 +22,34 @@ SB_ITC=0
+ h1_create()
+ {
+ simple_if_init $h1 192.0.1.1/24
++ tc qdisc add dev $h1 clsact
++
++ # Add egress filter on $h1 that will guarantee that the packet sent,
++ # will be the only packet being passed to the device.
++ tc filter add dev $h1 egress pref 2 handle 102 matchall action drop
+ }
+
+ h1_destroy()
+ {
++ tc filter del dev $h1 egress pref 2 handle 102 matchall action drop
++ tc qdisc del dev $h1 clsact
+ simple_if_fini $h1 192.0.1.1/24
+ }
+
+ h2_create()
+ {
+ simple_if_init $h2 192.0.1.2/24
++ tc qdisc add dev $h2 clsact
++
++ # Add egress filter on $h2 that will guarantee that the packet sent,
++ # will be the only packet being passed to the device.
++ tc filter add dev $h2 egress pref 1 handle 101 matchall action drop
+ }
+
+ h2_destroy()
+ {
++ tc filter del dev $h2 egress pref 1 handle 101 matchall action drop
++ tc qdisc del dev $h2 clsact
+ simple_if_fini $h2 192.0.1.2/24
+ }
+
+@@ -101,6 +115,11 @@ port_pool_test()
+ local exp_max_occ=$(devlink_cell_size_get)
+ local max_occ
+
++ tc filter add dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
++
+ devlink sb occupancy clearmax $DEVLINK_DEV
+
+ $MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
+@@ -108,11 +127,6 @@ port_pool_test()
+
+ devlink sb occupancy snapshot $DEVLINK_DEV
+
+- RET=0
+- max_occ=$(sb_occ_pool_check $dl_port1 $SB_POOL_ING $exp_max_occ)
+- check_err $? "Expected iPool($SB_POOL_ING) max occupancy to be $exp_max_occ, but got $max_occ"
+- log_test "physical port's($h1) ingress pool"
+-
+ RET=0
+ max_occ=$(sb_occ_pool_check $dl_port2 $SB_POOL_ING $exp_max_occ)
+ check_err $? "Expected iPool($SB_POOL_ING) max occupancy to be $exp_max_occ, but got $max_occ"
+@@ -122,6 +136,11 @@ port_pool_test()
+ max_occ=$(sb_occ_pool_check $cpu_dl_port $SB_POOL_EGR_CPU $exp_max_occ)
+ check_err $? "Expected ePool($SB_POOL_EGR_CPU) max occupancy to be $exp_max_occ, but got $max_occ"
+ log_test "CPU port's egress pool"
++
++ tc filter del dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
+ }
+
+ port_tc_ip_test()
+@@ -129,6 +148,11 @@ port_tc_ip_test()
+ local exp_max_occ=$(devlink_cell_size_get)
+ local max_occ
+
++ tc filter add dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
++
+ devlink sb occupancy clearmax $DEVLINK_DEV
+
+ $MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
+@@ -136,11 +160,6 @@ port_tc_ip_test()
+
+ devlink sb occupancy snapshot $DEVLINK_DEV
+
+- RET=0
+- max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+- check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+- log_test "physical port's($h1) ingress TC - IP packet"
+-
+ RET=0
+ max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+ check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+@@ -150,6 +169,11 @@ port_tc_ip_test()
+ max_occ=$(sb_occ_etc_check $cpu_dl_port $SB_ITC_CPU_IP $exp_max_occ)
+ check_err $? "Expected egress TC($SB_ITC_CPU_IP) max occupancy to be $exp_max_occ, but got $max_occ"
+ log_test "CPU port's egress TC - IP packet"
++
++ tc filter del dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
+ }
+
+ port_tc_arp_test()
+@@ -157,17 +181,15 @@ port_tc_arp_test()
+ local exp_max_occ=$(devlink_cell_size_get)
+ local max_occ
+
++ tc filter add dev $h1 egress protocol arp pref 1 handle 101 flower \
++ src_mac $h1mac action pass
++
+ devlink sb occupancy clearmax $DEVLINK_DEV
+
+ $MZ $h1 -c 1 -p 10 -a $h1mac -A 192.0.1.1 -t arp -q
+
+ devlink sb occupancy snapshot $DEVLINK_DEV
+
+- RET=0
+- max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+- check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+- log_test "physical port's($h1) ingress TC - ARP packet"
+-
+ RET=0
+ max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+ check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+@@ -177,6 +199,9 @@ port_tc_arp_test()
+ max_occ=$(sb_occ_etc_check $cpu_dl_port $SB_ITC_CPU_ARP $exp_max_occ)
+ check_err $? "Expected egress TC($SB_ITC_IP2ME) max occupancy to be $exp_max_occ, but got $max_occ"
+ log_test "CPU port's egress TC - ARP packet"
++
++ tc filter del dev $h1 egress protocol arp pref 1 handle 101 flower \
++ src_mac $h1mac action pass
+ }
+
+ setup_prepare()
+diff --git a/tools/testing/selftests/net/netfilter/rpath.sh b/tools/testing/selftests/net/netfilter/rpath.sh
+index 4485fd7675ed7e..86ec4e68594dc3 100755
+--- a/tools/testing/selftests/net/netfilter/rpath.sh
++++ b/tools/testing/selftests/net/netfilter/rpath.sh
+@@ -61,9 +61,20 @@ ip -net "$ns2" a a 192.168.42.1/24 dev d0
+ ip -net "$ns1" a a fec0:42::2/64 dev v0 nodad
+ ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
+
++# avoid neighbor lookups and enable martian IPv6 pings
++ns2_hwaddr=$(ip -net "$ns2" link show dev v0 | \
++ sed -n 's, *link/ether \([^ ]*\) .*,\1,p')
++ns1_hwaddr=$(ip -net "$ns1" link show dev v0 | \
++ sed -n 's, *link/ether \([^ ]*\) .*,\1,p')
++ip -net "$ns1" neigh add fec0:42::1 lladdr "$ns2_hwaddr" nud permanent dev v0
++ip -net "$ns1" neigh add fec0:23::1 lladdr "$ns2_hwaddr" nud permanent dev v0
++ip -net "$ns2" neigh add fec0:42::2 lladdr "$ns1_hwaddr" nud permanent dev d0
++ip -net "$ns2" neigh add fec0:23::2 lladdr "$ns1_hwaddr" nud permanent dev v0
++
+ # firewall matches to test
+ [ -n "$iptables" ] && {
+ common='-t raw -A PREROUTING -s 192.168.0.0/16'
++ common+=' -p icmp --icmp-type echo-request'
+ if ! ip netns exec "$ns2" "$iptables" $common -m rpfilter;then
+ echo "Cannot add rpfilter rule"
+ exit $ksft_skip
+@@ -72,6 +83,7 @@ ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
+ }
+ [ -n "$ip6tables" ] && {
+ common='-t raw -A PREROUTING -s fec0::/16'
++ common+=' -p icmpv6 --icmpv6-type echo-request'
+ if ! ip netns exec "$ns2" "$ip6tables" $common -m rpfilter;then
+ echo "Cannot add rpfilter rule"
+ exit $ksft_skip
+@@ -82,8 +94,10 @@ ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
+ table inet t {
+ chain c {
+ type filter hook prerouting priority raw;
+- ip saddr 192.168.0.0/16 fib saddr . iif oif exists counter
+- ip6 saddr fec0::/16 fib saddr . iif oif exists counter
++ ip saddr 192.168.0.0/16 icmp type echo-request \
++ fib saddr . iif oif exists counter
++ ip6 saddr fec0::/16 icmpv6 type echo-request \
++ fib saddr . iif oif exists counter
+ }
+ }
+ EOF
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-27 14:08 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2024-12-27 14:08 UTC (permalink / raw
To: gentoo-commits
commit: 671fd61e3eafab207b759f2bca79a6eda9cf710a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 27 14:07:47 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 27 14:07:47 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=671fd61e
Linux patch 6.12.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-6.12.7.patch | 6443 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6447 insertions(+)
diff --git a/0000_README b/0000_README
index 1bb8df77..6961ab2e 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-6.12.6.patch
From: https://www.kernel.org
Desc: Linux 6.12.6
+Patch: 1006_linux-6.12.7.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.7
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1006_linux-6.12.7.patch b/1006_linux-6.12.7.patch
new file mode 100644
index 00000000..17157109
--- /dev/null
+++ b/1006_linux-6.12.7.patch
@@ -0,0 +1,6443 @@
+diff --git a/Makefile b/Makefile
+index c10952585c14b0..685a57f6c8d279 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index fbed433283c9b9..42791971f75887 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -2503,7 +2503,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ ID_WRITABLE(ID_AA64MMFR0_EL1, ~(ID_AA64MMFR0_EL1_RES0 |
+ ID_AA64MMFR0_EL1_TGRAN4_2 |
+ ID_AA64MMFR0_EL1_TGRAN64_2 |
+- ID_AA64MMFR0_EL1_TGRAN16_2)),
++ ID_AA64MMFR0_EL1_TGRAN16_2 |
++ ID_AA64MMFR0_EL1_ASIDBITS)),
+ ID_WRITABLE(ID_AA64MMFR1_EL1, ~(ID_AA64MMFR1_EL1_RES0 |
+ ID_AA64MMFR1_EL1_HCX |
+ ID_AA64MMFR1_EL1_TWED |
+diff --git a/arch/hexagon/Makefile b/arch/hexagon/Makefile
+index 92d005958dfb23..ff172cbe5881a0 100644
+--- a/arch/hexagon/Makefile
++++ b/arch/hexagon/Makefile
+@@ -32,3 +32,9 @@ KBUILD_LDFLAGS += $(ldflags-y)
+ TIR_NAME := r19
+ KBUILD_CFLAGS += -ffixed-$(TIR_NAME) -DTHREADINFO_REG=$(TIR_NAME) -D__linux__
+ KBUILD_AFLAGS += -DTHREADINFO_REG=$(TIR_NAME)
++
++# Disable HexagonConstExtenders pass for LLVM versions prior to 19.1.0
++# https://github.com/llvm/llvm-project/issues/99714
++ifneq ($(call clang-min-version, 190100),y)
++KBUILD_CFLAGS += -mllvm -hexagon-cext=false
++endif
+diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
+index 2967d305c44278..9f3b527596ded8 100644
+--- a/arch/riscv/kvm/aia.c
++++ b/arch/riscv/kvm/aia.c
+@@ -552,7 +552,7 @@ void kvm_riscv_aia_enable(void)
+ csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
+ /* Enable IRQ filtering for overflow interrupt only if sscofpmf is present */
+ if (__riscv_isa_extension_available(NULL, RISCV_ISA_EXT_SSCOFPMF))
+- csr_write(CSR_HVIEN, BIT(IRQ_PMU_OVF));
++ csr_set(CSR_HVIEN, BIT(IRQ_PMU_OVF));
+ }
+
+ void kvm_riscv_aia_disable(void)
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index c8f149ad77e584..c2ee0745f59edc 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -231,6 +231,8 @@ static unsigned long get_vmem_size(unsigned long identity_size,
+ vsize = round_up(SZ_2G + max_mappable, rte_size) +
+ round_up(vmemmap_size, rte_size) +
+ FIXMAP_SIZE + MODULES_LEN + KASLR_LEN;
++ if (IS_ENABLED(CONFIG_KMSAN))
++ vsize += MODULES_LEN * 2;
+ return size_add(vsize, vmalloc_size);
+ }
+
+diff --git a/arch/s390/boot/vmem.c b/arch/s390/boot/vmem.c
+index 145035f84a0e3e..3fa28db2fe59f4 100644
+--- a/arch/s390/boot/vmem.c
++++ b/arch/s390/boot/vmem.c
+@@ -306,7 +306,7 @@ static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long e
+ pages++;
+ }
+ }
+- if (mode == POPULATE_DIRECT)
++ if (mode == POPULATE_IDENTITY)
+ update_page_count(PG_DIRECT_MAP_4K, pages);
+ }
+
+@@ -339,7 +339,7 @@ static void pgtable_pmd_populate(pud_t *pud, unsigned long addr, unsigned long e
+ }
+ pgtable_pte_populate(pmd, addr, next, mode);
+ }
+- if (mode == POPULATE_DIRECT)
++ if (mode == POPULATE_IDENTITY)
+ update_page_count(PG_DIRECT_MAP_1M, pages);
+ }
+
+@@ -372,7 +372,7 @@ static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long e
+ }
+ pgtable_pmd_populate(pud, addr, next, mode);
+ }
+- if (mode == POPULATE_DIRECT)
++ if (mode == POPULATE_IDENTITY)
+ update_page_count(PG_DIRECT_MAP_2G, pages);
+ }
+
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index f17bb7bf939242..5fa203f4bc6b80 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -270,7 +270,7 @@ static ssize_t sys_##_prefix##_##_name##_store(struct kobject *kobj, \
+ if (len >= sizeof(_value)) \
+ return -E2BIG; \
+ len = strscpy(_value, buf, sizeof(_value)); \
+- if (len < 0) \
++ if ((ssize_t)len < 0) \
+ return len; \
+ strim(_value); \
+ return len; \
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index d18078834dedac..dc12fe5ef3caa9 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -223,6 +223,63 @@ static void hv_machine_crash_shutdown(struct pt_regs *regs)
+ hyperv_cleanup();
+ }
+ #endif /* CONFIG_CRASH_DUMP */
++
++static u64 hv_ref_counter_at_suspend;
++static void (*old_save_sched_clock_state)(void);
++static void (*old_restore_sched_clock_state)(void);
++
++/*
++ * Hyper-V clock counter resets during hibernation. Save and restore clock
++ * offset during suspend/resume, while also considering the time passed
++ * before suspend. This is to make sure that sched_clock using hv tsc page
++ * based clocksource, proceeds from where it left off during suspend and
++ * it shows correct time for the timestamps of kernel messages after resume.
++ */
++static void save_hv_clock_tsc_state(void)
++{
++ hv_ref_counter_at_suspend = hv_read_reference_counter();
++}
++
++static void restore_hv_clock_tsc_state(void)
++{
++ /*
++ * Adjust the offsets used by hv tsc clocksource to
++ * account for the time spent before hibernation.
++ * adjusted value = reference counter (time) at suspend
++ * - reference counter (time) now.
++ */
++ hv_adj_sched_clock_offset(hv_ref_counter_at_suspend - hv_read_reference_counter());
++}
++
++/*
++ * Functions to override save_sched_clock_state and restore_sched_clock_state
++ * functions of x86_platform. The Hyper-V clock counter is reset during
++ * suspend-resume and the offset used to measure time needs to be
++ * corrected, post resume.
++ */
++static void hv_save_sched_clock_state(void)
++{
++ old_save_sched_clock_state();
++ save_hv_clock_tsc_state();
++}
++
++static void hv_restore_sched_clock_state(void)
++{
++ restore_hv_clock_tsc_state();
++ old_restore_sched_clock_state();
++}
++
++static void __init x86_setup_ops_for_tsc_pg_clock(void)
++{
++ if (!(ms_hyperv.features & HV_MSR_REFERENCE_TSC_AVAILABLE))
++ return;
++
++ old_save_sched_clock_state = x86_platform.save_sched_clock_state;
++ x86_platform.save_sched_clock_state = hv_save_sched_clock_state;
++
++ old_restore_sched_clock_state = x86_platform.restore_sched_clock_state;
++ x86_platform.restore_sched_clock_state = hv_restore_sched_clock_state;
++}
+ #endif /* CONFIG_HYPERV */
+
+ static uint32_t __init ms_hyperv_platform(void)
+@@ -579,6 +636,7 @@ static void __init ms_hyperv_init_platform(void)
+
+ /* Register Hyper-V specific clocksource */
+ hv_init_clocksource();
++ x86_setup_ops_for_tsc_pg_clock();
+ hv_vtl_init_platform();
+ #endif
+ /*
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 41786b834b1635..83bfecd1a6e40c 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -36,6 +36,26 @@
+ u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
+ EXPORT_SYMBOL_GPL(kvm_cpu_caps);
+
++struct cpuid_xstate_sizes {
++ u32 eax;
++ u32 ebx;
++ u32 ecx;
++};
++
++static struct cpuid_xstate_sizes xstate_sizes[XFEATURE_MAX] __ro_after_init;
++
++void __init kvm_init_xstate_sizes(void)
++{
++ u32 ign;
++ int i;
++
++ for (i = XFEATURE_YMM; i < ARRAY_SIZE(xstate_sizes); i++) {
++ struct cpuid_xstate_sizes *xs = &xstate_sizes[i];
++
++ cpuid_count(0xD, i, &xs->eax, &xs->ebx, &xs->ecx, &ign);
++ }
++}
++
+ u32 xstate_required_size(u64 xstate_bv, bool compacted)
+ {
+ int feature_bit = 0;
+@@ -44,14 +64,15 @@ u32 xstate_required_size(u64 xstate_bv, bool compacted)
+ xstate_bv &= XFEATURE_MASK_EXTEND;
+ while (xstate_bv) {
+ if (xstate_bv & 0x1) {
+- u32 eax, ebx, ecx, edx, offset;
+- cpuid_count(0xD, feature_bit, &eax, &ebx, &ecx, &edx);
++ struct cpuid_xstate_sizes *xs = &xstate_sizes[feature_bit];
++ u32 offset;
++
+ /* ECX[1]: 64B alignment in compacted form */
+ if (compacted)
+- offset = (ecx & 0x2) ? ALIGN(ret, 64) : ret;
++ offset = (xs->ecx & 0x2) ? ALIGN(ret, 64) : ret;
+ else
+- offset = ebx;
+- ret = max(ret, offset + eax);
++ offset = xs->ebx;
++ ret = max(ret, offset + xs->eax);
+ }
+
+ xstate_bv >>= 1;
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index 41697cca354e6b..ad479cfb91bc7b 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -32,6 +32,7 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
+ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
+ u32 *ecx, u32 *edx, bool exact_only);
+
++void __init kvm_init_xstate_sizes(void);
+ u32 xstate_required_size(u64 xstate_bv, bool compacted);
+
+ int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 9df3e1e5ae81a1..4543dd6bcab2cb 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3199,15 +3199,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ if (data & ~supported_de_cfg)
+ return 1;
+
+- /*
+- * Don't let the guest change the host-programmed value. The
+- * MSR is very model specific, i.e. contains multiple bits that
+- * are completely unknown to KVM, and the one bit known to KVM
+- * is simply a reflection of hardware capabilities.
+- */
+- if (!msr->host_initiated && data != svm->msr_decfg)
+- return 1;
+-
+ svm->msr_decfg = data;
+ break;
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 83fe0a78146fc1..b49e2eb4893080 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9991,7 +9991,7 @@ static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
+ {
+ u64 ret = vcpu->run->hypercall.ret;
+
+- if (!is_64_bit_mode(vcpu))
++ if (!is_64_bit_hypercall(vcpu))
+ ret = (u32)ret;
+ kvm_rax_write(vcpu, ret);
+ ++vcpu->stat.hypercalls;
+@@ -14010,6 +14010,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_rmp_fault);
+
+ static int __init kvm_x86_init(void)
+ {
++ kvm_init_xstate_sizes();
++
+ kvm_mmu_x86_module_init();
+ mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
+ return 0;
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index cd5ea6eaa76b09..156e9bb07abf1a 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -275,13 +275,15 @@ void blk_mq_sysfs_unregister_hctxs(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ unsigned long i;
+
+- lockdep_assert_held(&q->sysfs_dir_lock);
+-
++ mutex_lock(&q->sysfs_dir_lock);
+ if (!q->mq_sysfs_init_done)
+- return;
++ goto unlock;
+
+ queue_for_each_hw_ctx(q, hctx, i)
+ blk_mq_unregister_hctx(hctx);
++
++unlock:
++ mutex_unlock(&q->sysfs_dir_lock);
+ }
+
+ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+@@ -290,10 +292,9 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ unsigned long i;
+ int ret = 0;
+
+- lockdep_assert_held(&q->sysfs_dir_lock);
+-
++ mutex_lock(&q->sysfs_dir_lock);
+ if (!q->mq_sysfs_init_done)
+- return ret;
++ goto unlock;
+
+ queue_for_each_hw_ctx(q, hctx, i) {
+ ret = blk_mq_register_hctx(hctx);
+@@ -301,5 +302,8 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ break;
+ }
+
++unlock:
++ mutex_unlock(&q->sysfs_dir_lock);
++
+ return ret;
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cc1b3202383840..d5995021815ddf 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4421,6 +4421,15 @@ struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
+ }
+ EXPORT_SYMBOL(blk_mq_alloc_disk_for_queue);
+
++/*
++ * Only hctx removed from cpuhp list can be reused
++ */
++static bool blk_mq_hctx_is_reusable(struct blk_mq_hw_ctx *hctx)
++{
++ return hlist_unhashed(&hctx->cpuhp_online) &&
++ hlist_unhashed(&hctx->cpuhp_dead);
++}
++
+ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ struct blk_mq_tag_set *set, struct request_queue *q,
+ int hctx_idx, int node)
+@@ -4430,7 +4439,7 @@ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ /* reuse dead hctx first */
+ spin_lock(&q->unused_hctx_lock);
+ list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) {
+- if (tmp->numa_node == node) {
++ if (tmp->numa_node == node && blk_mq_hctx_is_reusable(tmp)) {
+ hctx = tmp;
+ break;
+ }
+@@ -4462,8 +4471,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ unsigned long i, j;
+
+ /* protect against switching io scheduler */
+- lockdep_assert_held(&q->sysfs_lock);
+-
++ mutex_lock(&q->sysfs_lock);
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ int old_node;
+ int node = blk_mq_get_hctx_node(set, i);
+@@ -4496,6 +4504,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+
+ xa_for_each_start(&q->hctx_table, j, hctx, j)
+ blk_mq_exit_hctx(q, set, hctx, j);
++ mutex_unlock(&q->sysfs_lock);
+
+ /* unregister cpuhp callbacks for exited hctxs */
+ blk_mq_remove_hw_queues_cpuhp(q);
+@@ -4527,14 +4536,10 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+
+ xa_init(&q->hctx_table);
+
+- mutex_lock(&q->sysfs_lock);
+-
+ blk_mq_realloc_hw_ctxs(set, q);
+ if (!q->nr_hw_queues)
+ goto err_hctxs;
+
+- mutex_unlock(&q->sysfs_lock);
+-
+ INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
+ blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
+
+@@ -4553,7 +4558,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ return 0;
+
+ err_hctxs:
+- mutex_unlock(&q->sysfs_lock);
+ blk_mq_release(q);
+ err_exit:
+ q->mq_ops = NULL;
+@@ -4934,12 +4938,12 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ return false;
+
+ /* q->elevator needs protection from ->sysfs_lock */
+- lockdep_assert_held(&q->sysfs_lock);
++ mutex_lock(&q->sysfs_lock);
+
+ /* the check has to be done with holding sysfs_lock */
+ if (!q->elevator) {
+ kfree(qe);
+- goto out;
++ goto unlock;
+ }
+
+ INIT_LIST_HEAD(&qe->node);
+@@ -4949,7 +4953,9 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ __elevator_get(qe->type);
+ list_add(&qe->node, head);
+ elevator_disable(q);
+-out:
++unlock:
++ mutex_unlock(&q->sysfs_lock);
++
+ return true;
+ }
+
+@@ -4978,9 +4984,11 @@ static void blk_mq_elv_switch_back(struct list_head *head,
+ list_del(&qe->node);
+ kfree(qe);
+
++ mutex_lock(&q->sysfs_lock);
+ elevator_switch(q, t);
+ /* drop the reference acquired in blk_mq_elv_switch_none */
+ elevator_put(t);
++ mutex_unlock(&q->sysfs_lock);
+ }
+
+ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+@@ -5000,11 +5008,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
+ return;
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list) {
+- mutex_lock(&q->sysfs_dir_lock);
+- mutex_lock(&q->sysfs_lock);
++ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_freeze_queue(q);
+- }
+ /*
+ * Switch IO scheduler to 'none', cleaning up the data associated
+ * with the previous scheduler. We will switch back once we are done
+@@ -5060,11 +5065,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_elv_switch_back(&head, q);
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list) {
++ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_unfreeze_queue(q);
+- mutex_unlock(&q->sysfs_lock);
+- mutex_unlock(&q->sysfs_dir_lock);
+- }
+
+ /* Free the excess tags when nr_hw_queues shrink. */
+ for (i = set->nr_hw_queues; i < prev_nr_hw_queues; i++)
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 42c2cb97d778af..207577145c54f4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -690,11 +690,11 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
+ return res;
+ }
+
+- mutex_lock(&q->sysfs_lock);
+ blk_mq_freeze_queue(q);
++ mutex_lock(&q->sysfs_lock);
+ res = entry->store(disk, page, length);
+- blk_mq_unfreeze_queue(q);
+ mutex_unlock(&q->sysfs_lock);
++ blk_mq_unfreeze_queue(q);
+ return res;
+ }
+
+diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
+index 1b409dbd332d80..c8daffd90f3001 100644
+--- a/drivers/accel/ivpu/ivpu_gem.c
++++ b/drivers/accel/ivpu/ivpu_gem.c
+@@ -406,7 +406,7 @@ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p)
+ mutex_lock(&bo->lock);
+
+ drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u",
+- bo, bo->ctx->id, bo->vpu_addr, bo->base.base.size,
++ bo, bo->ctx ? bo->ctx->id : 0, bo->vpu_addr, bo->base.base.size,
+ bo->flags, kref_read(&bo->base.base.refcount));
+
+ if (bo->base.pages)
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index 59d3170f5e3541..10b7ae0f866c98 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -364,6 +364,7 @@ void ivpu_pm_init(struct ivpu_device *vdev)
+
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_autosuspend_delay(dev, delay);
++ pm_runtime_set_active(dev);
+
+ ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay);
+ }
+@@ -378,7 +379,6 @@ void ivpu_pm_enable(struct ivpu_device *vdev)
+ {
+ struct device *dev = vdev->drm.dev;
+
+- pm_runtime_set_active(dev);
+ pm_runtime_allow(dev);
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index d0432b1707ceb6..bf83a104086cce 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -524,6 +524,12 @@ static ssize_t backing_dev_store(struct device *dev,
+ }
+
+ nr_pages = i_size_read(inode) >> PAGE_SHIFT;
++ /* Refuse to use zero sized device (also prevents self reference) */
++ if (!nr_pages) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long);
+ bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
+ if (!bitmap) {
+@@ -1319,12 +1325,16 @@ static void zram_meta_free(struct zram *zram, u64 disksize)
+ size_t num_pages = disksize >> PAGE_SHIFT;
+ size_t index;
+
++ if (!zram->table)
++ return;
++
+ /* Free all pages that are still in this zram device */
+ for (index = 0; index < num_pages; index++)
+ zram_free_page(zram, index);
+
+ zs_destroy_pool(zram->mem_pool);
+ vfree(zram->table);
++ zram->table = NULL;
+ }
+
+ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
+@@ -2165,11 +2175,6 @@ static void zram_reset_device(struct zram *zram)
+
+ zram->limit_pages = 0;
+
+- if (!init_done(zram)) {
+- up_write(&zram->init_lock);
+- return;
+- }
+-
+ set_capacity_and_notify(zram->disk, 0);
+ part_stat_set_all(zram->disk->part0, 0);
+
+diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
+index 99177835cadec4..b39dee7b93af04 100644
+--- a/drivers/clocksource/hyperv_timer.c
++++ b/drivers/clocksource/hyperv_timer.c
+@@ -27,7 +27,8 @@
+ #include <asm/mshyperv.h>
+
+ static struct clock_event_device __percpu *hv_clock_event;
+-static u64 hv_sched_clock_offset __ro_after_init;
++/* Note: offset can hold negative values after hibernation. */
++static u64 hv_sched_clock_offset __read_mostly;
+
+ /*
+ * If false, we're using the old mechanism for stimer0 interrupts
+@@ -470,6 +471,17 @@ static void resume_hv_clock_tsc(struct clocksource *arg)
+ hv_set_msr(HV_MSR_REFERENCE_TSC, tsc_msr.as_uint64);
+ }
+
++/*
++ * Called during resume from hibernation, from overridden
++ * x86_platform.restore_sched_clock_state routine. This is to adjust offsets
++ * used to calculate time for hv tsc page based sched_clock, to account for
++ * time spent before hibernation.
++ */
++void hv_adj_sched_clock_offset(u64 offset)
++{
++ hv_sched_clock_offset -= offset;
++}
++
+ #ifdef HAVE_VDSO_CLOCKMODE_HVCLOCK
+ static int hv_cs_enable(struct clocksource *cs)
+ {
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index dff618c708dc68..a0d6e8d7f42c8a 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -1295,6 +1295,7 @@ static int cxl_port_setup_targets(struct cxl_port *port,
+ struct cxl_region_params *p = &cxlr->params;
+ struct cxl_decoder *cxld = cxl_rr->decoder;
+ struct cxl_switch_decoder *cxlsd;
++ struct cxl_port *iter = port;
+ u16 eig, peig;
+ u8 eiw, peiw;
+
+@@ -1311,16 +1312,26 @@ static int cxl_port_setup_targets(struct cxl_port *port,
+
+ cxlsd = to_cxl_switch_decoder(&cxld->dev);
+ if (cxl_rr->nr_targets_set) {
+- int i, distance;
++ int i, distance = 1;
++ struct cxl_region_ref *cxl_rr_iter;
+
+ /*
+- * Passthrough decoders impose no distance requirements between
+- * peers
++ * The "distance" between peer downstream ports represents which
++ * endpoint positions in the region interleave a given port can
++ * host.
++ *
++ * For example, at the root of a hierarchy the distance is
++ * always 1 as every index targets a different host-bridge. At
++ * each subsequent switch level those ports map every Nth region
++ * position where N is the width of the switch == distance.
+ */
+- if (cxl_rr->nr_targets == 1)
+- distance = 0;
+- else
+- distance = p->nr_targets / cxl_rr->nr_targets;
++ do {
++ cxl_rr_iter = cxl_rr_load(iter, cxlr);
++ distance *= cxl_rr_iter->nr_targets;
++ iter = to_cxl_port(iter->dev.parent);
++ } while (!is_cxl_root(iter));
++ distance *= cxlrd->cxlsd.cxld.interleave_ways;
++
+ for (i = 0; i < cxl_rr->nr_targets_set; i++)
+ if (ep->dport == cxlsd->target[i]) {
+ rc = check_last_peer(cxled, ep, cxl_rr,
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 188412d45e0d26..6e553b5752b1dd 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -942,8 +942,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (rc)
+ return rc;
+
+- rc = cxl_pci_ras_unmask(pdev);
+- if (rc)
++ if (cxl_pci_ras_unmask(pdev))
+ dev_dbg(&pdev->dev, "No RAS reporting unmasked\n");
+
+ pci_save_state(pdev);
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 8892bc701a662d..afb8c1c5010735 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -60,7 +60,7 @@ static void __dma_buf_debugfs_list_add(struct dma_buf *dmabuf)
+ {
+ }
+
+-static void __dma_buf_debugfs_list_del(struct file *file)
++static void __dma_buf_debugfs_list_del(struct dma_buf *dmabuf)
+ {
+ }
+ #endif
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index a3638ccc15f571..5e836e4e5b449a 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -256,15 +256,12 @@ static const struct dma_buf_ops udmabuf_ops = {
+ };
+
+ #define SEALS_WANTED (F_SEAL_SHRINK)
+-#define SEALS_DENIED (F_SEAL_WRITE)
++#define SEALS_DENIED (F_SEAL_WRITE|F_SEAL_FUTURE_WRITE)
+
+ static int check_memfd_seals(struct file *memfd)
+ {
+ int seals;
+
+- if (!memfd)
+- return -EBADFD;
+-
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EBADFD;
+
+@@ -279,12 +276,10 @@ static int check_memfd_seals(struct file *memfd)
+ return 0;
+ }
+
+-static int export_udmabuf(struct udmabuf *ubuf,
+- struct miscdevice *device,
+- u32 flags)
++static struct dma_buf *export_udmabuf(struct udmabuf *ubuf,
++ struct miscdevice *device)
+ {
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+- struct dma_buf *buf;
+
+ ubuf->device = device;
+ exp_info.ops = &udmabuf_ops;
+@@ -292,24 +287,72 @@ static int export_udmabuf(struct udmabuf *ubuf,
+ exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
+
+- buf = dma_buf_export(&exp_info);
+- if (IS_ERR(buf))
+- return PTR_ERR(buf);
++ return dma_buf_export(&exp_info);
++}
++
++static long udmabuf_pin_folios(struct udmabuf *ubuf, struct file *memfd,
++ loff_t start, loff_t size)
++{
++ pgoff_t pgoff, pgcnt, upgcnt = ubuf->pagecount;
++ struct folio **folios = NULL;
++ u32 cur_folio, cur_pgcnt;
++ long nr_folios;
++ long ret = 0;
++ loff_t end;
++
++ pgcnt = size >> PAGE_SHIFT;
++ folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
++ if (!folios)
++ return -ENOMEM;
++
++ end = start + (pgcnt << PAGE_SHIFT) - 1;
++ nr_folios = memfd_pin_folios(memfd, start, end, folios, pgcnt, &pgoff);
++ if (nr_folios <= 0) {
++ ret = nr_folios ? nr_folios : -EINVAL;
++ goto end;
++ }
+
+- return dma_buf_fd(buf, flags);
++ cur_pgcnt = 0;
++ for (cur_folio = 0; cur_folio < nr_folios; ++cur_folio) {
++ pgoff_t subpgoff = pgoff;
++ size_t fsize = folio_size(folios[cur_folio]);
++
++ ret = add_to_unpin_list(&ubuf->unpin_list, folios[cur_folio]);
++ if (ret < 0)
++ goto end;
++
++ for (; subpgoff < fsize; subpgoff += PAGE_SIZE) {
++ ubuf->folios[upgcnt] = folios[cur_folio];
++ ubuf->offsets[upgcnt] = subpgoff;
++ ++upgcnt;
++
++ if (++cur_pgcnt >= pgcnt)
++ goto end;
++ }
++
++ /*
++ * Within a given range, only the first subpage of the first folio
++ * has an offset, which is returned by memfd_pin_folios().
++ * The first subpages of the other folios in the range have an
++ * offset of 0.
++ */
++ pgoff = 0;
++ }
++end:
++ ubuf->pagecount = upgcnt;
++ kvfree(folios);
++ return ret;
+ }
+
+ static long udmabuf_create(struct miscdevice *device,
+ struct udmabuf_create_list *head,
+ struct udmabuf_create_item *list)
+ {
+- pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+- long nr_folios, ret = -EINVAL;
+- struct file *memfd = NULL;
+- struct folio **folios;
++ pgoff_t pgcnt = 0, pglimit;
+ struct udmabuf *ubuf;
+- u32 i, j, k, flags;
+- loff_t end;
++ struct dma_buf *dmabuf;
++ long ret = -EINVAL;
++ u32 i, flags;
+
+ ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
+ if (!ubuf)
+@@ -318,93 +361,76 @@ static long udmabuf_create(struct miscdevice *device,
+ INIT_LIST_HEAD(&ubuf->unpin_list);
+ pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
+ for (i = 0; i < head->count; i++) {
+- if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
++ if (!PAGE_ALIGNED(list[i].offset))
+ goto err;
+- if (!IS_ALIGNED(list[i].size, PAGE_SIZE))
++ if (!PAGE_ALIGNED(list[i].size))
+ goto err;
+- ubuf->pagecount += list[i].size >> PAGE_SHIFT;
+- if (ubuf->pagecount > pglimit)
++
++ pgcnt += list[i].size >> PAGE_SHIFT;
++ if (pgcnt > pglimit)
+ goto err;
+ }
+
+- if (!ubuf->pagecount)
++ if (!pgcnt)
+ goto err;
+
+- ubuf->folios = kvmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
+- GFP_KERNEL);
++ ubuf->folios = kvmalloc_array(pgcnt, sizeof(*ubuf->folios), GFP_KERNEL);
+ if (!ubuf->folios) {
+ ret = -ENOMEM;
+ goto err;
+ }
+- ubuf->offsets = kvcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+- GFP_KERNEL);
++
++ ubuf->offsets = kvcalloc(pgcnt, sizeof(*ubuf->offsets), GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+- pgbuf = 0;
+ for (i = 0; i < head->count; i++) {
+- memfd = fget(list[i].memfd);
+- ret = check_memfd_seals(memfd);
+- if (ret < 0)
+- goto err;
+-
+- pgcnt = list[i].size >> PAGE_SHIFT;
+- folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+- if (!folios) {
+- ret = -ENOMEM;
+- goto err;
+- }
++ struct file *memfd = fget(list[i].memfd);
+
+- end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+- ret = memfd_pin_folios(memfd, list[i].offset, end,
+- folios, pgcnt, &pgoff);
+- if (ret <= 0) {
+- kvfree(folios);
+- if (!ret)
+- ret = -EINVAL;
++ if (!memfd) {
++ ret = -EBADFD;
+ goto err;
+ }
+
+- nr_folios = ret;
+- pgoff >>= PAGE_SHIFT;
+- for (j = 0, k = 0; j < pgcnt; j++) {
+- ubuf->folios[pgbuf] = folios[k];
+- ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+-
+- if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+- ret = add_to_unpin_list(&ubuf->unpin_list,
+- folios[k]);
+- if (ret < 0) {
+- kfree(folios);
+- goto err;
+- }
+- }
+-
+- pgbuf++;
+- if (++pgoff == folio_nr_pages(folios[k])) {
+- pgoff = 0;
+- if (++k == nr_folios)
+- break;
+- }
+- }
++ /*
++ * Take the inode lock to protect against concurrent
++ * memfd_add_seals(), which takes this lock in write mode.
++ */
++ inode_lock_shared(file_inode(memfd));
++ ret = check_memfd_seals(memfd);
++ if (ret)
++ goto out_unlock;
+
+- kvfree(folios);
++ ret = udmabuf_pin_folios(ubuf, memfd, list[i].offset,
++ list[i].size);
++out_unlock:
++ inode_unlock_shared(file_inode(memfd));
+ fput(memfd);
+- memfd = NULL;
++ if (ret)
++ goto err;
+ }
+
+ flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
+- ret = export_udmabuf(ubuf, device, flags);
+- if (ret < 0)
++ dmabuf = export_udmabuf(ubuf, device);
++ if (IS_ERR(dmabuf)) {
++ ret = PTR_ERR(dmabuf);
+ goto err;
++ }
++ /*
++ * Ownership of ubuf is held by the dmabuf from here.
++ * If the following dma_buf_fd() fails, dma_buf_put() cleans up both the
++ * dmabuf and the ubuf (through udmabuf_ops.release).
++ */
++
++ ret = dma_buf_fd(dmabuf, flags);
++ if (ret < 0)
++ dma_buf_put(dmabuf);
+
+ return ret;
+
+ err:
+- if (memfd)
+- fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
+ kvfree(ubuf->offsets);
+ kvfree(ubuf->folios);
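
The error-handling shape introduced above, manual frees before export and
put() afterwards, is a common refcount hand-off pattern. A minimal userspace
sketch with made-up names (this is not the dma-buf API):

#include <stdio.h>
#include <stdlib.h>

struct buf {
        int refs;
        void *payload;
};

static void buf_release(struct buf *b)
{
        free(b->payload);       /* the release callback owns the payload */
        free(b);
}

static void buf_put(struct buf *b)
{
        if (--b->refs == 0)
                buf_release(b);
}

static struct buf *buf_export(void *payload)
{
        struct buf *b = malloc(sizeof(*b));

        if (!b)
                return NULL;
        b->refs = 1;
        b->payload = payload;   /* ownership transfers here */
        return b;
}

int main(void)
{
        void *payload = malloc(64);
        struct buf *b = buf_export(payload);

        if (!b) {
                free(payload);  /* before export: free by hand */
                return 1;
        }
        /* after export: any failure must use buf_put(), never free(payload) */
        buf_put(b);
        return 0;
}
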
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index ddfbdb66b794d7..5d356b7c45897c 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -3362,36 +3362,24 @@ static bool dct_ecc_enabled(struct amd64_pvt *pvt)
+
+ static bool umc_ecc_enabled(struct amd64_pvt *pvt)
+ {
+- u8 umc_en_mask = 0, ecc_en_mask = 0;
+- u16 nid = pvt->mc_node_id;
+ struct amd64_umc *umc;
+- u8 ecc_en = 0, i;
++ bool ecc_en = false;
++ int i;
+
++ /* Check whether at least one UMC is enabled: */
+ for_each_umc(i) {
+ umc = &pvt->umc[i];
+
+- /* Only check enabled UMCs. */
+- if (!(umc->sdp_ctrl & UMC_SDP_INIT))
+- continue;
+-
+- umc_en_mask |= BIT(i);
+-
+- if (umc->umc_cap_hi & UMC_ECC_ENABLED)
+- ecc_en_mask |= BIT(i);
++ if (umc->sdp_ctrl & UMC_SDP_INIT &&
++ umc->umc_cap_hi & UMC_ECC_ENABLED) {
++ ecc_en = true;
++ break;
++ }
+ }
+
+- /* Check whether at least one UMC is enabled: */
+- if (umc_en_mask)
+- ecc_en = umc_en_mask == ecc_en_mask;
+- else
+- edac_dbg(0, "Node %d: No enabled UMCs.\n", nid);
+-
+- edac_dbg(3, "Node %d: DRAM ECC %s.\n", nid, (ecc_en ? "enabled" : "disabled"));
++ edac_dbg(3, "Node %d: DRAM ECC %s.\n", pvt->mc_node_id, (ecc_en ? "enabled" : "disabled"));
+
+- if (!ecc_en)
+- return false;
+- else
+- return true;
++ return ecc_en;
+ }
+
+ static inline void
+diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c
+index eb17d03b66fec9..dfda5ffc14db72 100644
+--- a/drivers/firmware/arm_ffa/bus.c
++++ b/drivers/firmware/arm_ffa/bus.c
+@@ -187,13 +187,18 @@ bool ffa_device_is_valid(struct ffa_device *ffa_dev)
+ return valid;
+ }
+
+-struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+- const struct ffa_ops *ops)
++struct ffa_device *
++ffa_device_register(const struct ffa_partition_info *part_info,
++ const struct ffa_ops *ops)
+ {
+ int id, ret;
++ uuid_t uuid;
+ struct device *dev;
+ struct ffa_device *ffa_dev;
+
++ if (!part_info)
++ return NULL;
++
+ id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL);
+ if (id < 0)
+ return NULL;
+@@ -210,9 +215,11 @@ struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+ dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
+
+ ffa_dev->id = id;
+- ffa_dev->vm_id = vm_id;
++ ffa_dev->vm_id = part_info->id;
++ ffa_dev->properties = part_info->properties;
+ ffa_dev->ops = ops;
+- uuid_copy(&ffa_dev->uuid, uuid);
++ import_uuid(&uuid, (u8 *)part_info->uuid);
++ uuid_copy(&ffa_dev->uuid, &uuid);
+
+ ret = device_register(&ffa_dev->dev);
+ if (ret) {
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index b14cbdae94e82b..2c2ec3c35f1561 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -1387,7 +1387,6 @@ static struct notifier_block ffa_bus_nb = {
+ static int ffa_setup_partitions(void)
+ {
+ int count, idx, ret;
+- uuid_t uuid;
+ struct ffa_device *ffa_dev;
+ struct ffa_dev_part_info *info;
+ struct ffa_partition_info *pbuf, *tpbuf;
+@@ -1406,23 +1405,19 @@ static int ffa_setup_partitions(void)
+
+ xa_init(&drv_info->partition_info);
+ for (idx = 0, tpbuf = pbuf; idx < count; idx++, tpbuf++) {
+- import_uuid(&uuid, (u8 *)tpbuf->uuid);
+-
+ /* Note that if the UUID will be uuid_null, that will require
+ * ffa_bus_notifier() to find the UUID of this partition id
+ * with help of ffa_device_match_uuid(). FF-A v1.1 and above
+ * provides UUID here for each partition as part of the
+ * discovery API and the same is passed.
+ */
+- ffa_dev = ffa_device_register(&uuid, tpbuf->id, &ffa_drv_ops);
++ ffa_dev = ffa_device_register(tpbuf, &ffa_drv_ops);
+ if (!ffa_dev) {
+ pr_err("%s: failed to register partition ID 0x%x\n",
+ __func__, tpbuf->id);
+ continue;
+ }
+
+- ffa_dev->properties = tpbuf->properties;
+-
+ if (drv_info->version > FFA_VERSION_1_0 &&
+ !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC))
+ ffa_mode_32bit_set(ffa_dev);
+diff --git a/drivers/firmware/arm_scmi/vendors/imx/Kconfig b/drivers/firmware/arm_scmi/vendors/imx/Kconfig
+index 2883ed24a84d65..a01bf5e47301d2 100644
+--- a/drivers/firmware/arm_scmi/vendors/imx/Kconfig
++++ b/drivers/firmware/arm_scmi/vendors/imx/Kconfig
+@@ -15,6 +15,7 @@ config IMX_SCMI_BBM_EXT
+ config IMX_SCMI_MISC_EXT
+ tristate "i.MX SCMI MISC EXTENSION"
+ depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
++ depends on IMX_SCMI_MISC_DRV
+ default y if ARCH_MXC
+ help
+ This enables i.MX System MISC control logic such as gpio expander
+diff --git a/drivers/firmware/imx/Kconfig b/drivers/firmware/imx/Kconfig
+index 477d3f32d99a6b..907cd149c40a8b 100644
+--- a/drivers/firmware/imx/Kconfig
++++ b/drivers/firmware/imx/Kconfig
+@@ -25,7 +25,6 @@ config IMX_SCU
+
+ config IMX_SCMI_MISC_DRV
+ tristate "IMX SCMI MISC Protocol driver"
+- depends on IMX_SCMI_MISC_EXT || COMPILE_TEST
+ default y if ARCH_MXC
+ help
+ The System Controller Management Interface firmware (SCMI FW) is
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
+index 5ac59b62020cf2..18b3b1aaa1d3b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
+@@ -345,11 +345,10 @@ void amdgpu_coredump(struct amdgpu_device *adev, bool skip_vram_check,
+ coredump->skip_vram_check = skip_vram_check;
+ coredump->reset_vram_lost = vram_lost;
+
+- if (job && job->vm) {
+- struct amdgpu_vm *vm = job->vm;
++ if (job && job->pasid) {
+ struct amdgpu_task_info *ti;
+
+- ti = amdgpu_vm_get_task_info_vm(vm);
++ ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid);
+ if (ti) {
+ coredump->reset_task_info = *ti;
+ amdgpu_vm_put_task_info(ti);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 16f2605ac50b99..1ce20a19be8ba9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -253,7 +253,6 @@ void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
+
+ void amdgpu_job_free_resources(struct amdgpu_job *job)
+ {
+- struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
+ struct dma_fence *f;
+ unsigned i;
+
+@@ -266,7 +265,7 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
+ f = NULL;
+
+ for (i = 0; i < job->num_ibs; ++i)
+- amdgpu_ib_free(ring->adev, &job->ibs[i], f);
++ amdgpu_ib_free(NULL, &job->ibs[i], f);
+ }
+
+ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 8d2562d0f143c7..73e02141a6e215 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -1260,10 +1260,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
+ * next command submission.
+ */
+ if (amdgpu_vm_is_bo_always_valid(vm, bo)) {
+- uint32_t mem_type = bo->tbo.resource->mem_type;
+-
+- if (!(bo->preferred_domains &
+- amdgpu_mem_type_to_domain(mem_type)))
++ if (bo->tbo.resource &&
++ !(bo->preferred_domains &
++ amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type)))
+ amdgpu_vm_bo_evicted(&bo_va->base);
+ else
+ amdgpu_vm_bo_idle(&bo_va->base);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 47b47d21f46447..6c19626ec59e9d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -4105,7 +4105,7 @@ static int gfx_v12_0_set_clockgating_state(void *handle,
+ if (amdgpu_sriov_vf(adev))
+ return 0;
+
+- switch (adev->ip_versions[GC_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+ case IP_VERSION(12, 0, 0):
+ case IP_VERSION(12, 0, 1):
+ gfx_v12_0_update_gfx_clock_gating(adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+index 0fbc3be81f140f..f2ab5001b49249 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+@@ -108,7 +108,7 @@ mmhub_v4_1_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
+ dev_err(adev->dev,
+ "MMVM_L2_PROTECTION_FAULT_STATUS_LO32:0x%08X\n",
+ status);
+- switch (adev->ip_versions[MMHUB_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
+ case IP_VERSION(4, 1, 0):
+ mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw];
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
+index b1b57dcc5a7370..d1032e9992b49c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
+@@ -271,8 +271,19 @@ const struct nbio_hdp_flush_reg nbio_v7_0_hdp_flush_reg = {
+ .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__SDMA1_MASK,
+ };
+
++#define regRCC_DEV0_EPF6_STRAP4 0xd304
++#define regRCC_DEV0_EPF6_STRAP4_BASE_IDX 5
++
+ static void nbio_v7_0_init_registers(struct amdgpu_device *adev)
+ {
++ uint32_t data;
++
++ switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
++ case IP_VERSION(2, 5, 0):
++ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4) & ~BIT(23);
++ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4, data);
++ break;
++ }
+ }
+
+ #define MMIO_REG_HOLE_OFFSET (0x80000 - PAGE_SIZE)
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+index 814ab59fdd4a3a..41421da63a0846 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+@@ -275,7 +275,7 @@ static void nbio_v7_11_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF_BIF256_CI256_RC3X4_USB4_PCIE_MST_CTRL_3, data);
+
+- switch (adev->ip_versions[NBIO_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
+ case IP_VERSION(7, 11, 0):
+ case IP_VERSION(7, 11, 1):
+ case IP_VERSION(7, 11, 2):
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+index 1ac730328516ff..3fb6d2aa7e3b39 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+@@ -247,7 +247,7 @@ static void nbio_v7_7_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF0_PCIE_MST_CTRL_3, data);
+
+- switch (adev->ip_versions[NBIO_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
+ case IP_VERSION(7, 7, 0):
+ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23);
+ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index b22fb7eafcd3f2..9ec53431f2c32d 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -2108,7 +2108,7 @@ static int smu_v14_0_2_enable_gfx_features(struct smu_context *smu)
+ {
+ struct amdgpu_device *adev = smu->adev;
+
+- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(14, 0, 2))
++ if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 2))
+ return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_EnableAllSmuFeatures,
+ FEATURE_PWR_GFX, NULL);
+ else
+diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
+index 48b2df120086c9..90fe07a89260e2 100644
+--- a/drivers/gpu/drm/display/drm_dp_tunnel.c
++++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
+@@ -1896,8 +1896,8 @@ static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
+ *
+ * Creates a DP tunnel manager for @dev.
+ *
+- * Returns a pointer to the tunnel manager if created successfully or NULL in
+- * case of an error.
++ * Returns a pointer to the tunnel manager if created successfully or error
++ * pointer in case of failure.
+ */
+ struct drm_dp_tunnel_mgr *
+ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+@@ -1907,7 +1907,7 @@ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+
+ mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+ if (!mgr)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ mgr->dev = dev;
+ init_waitqueue_head(&mgr->bw_req_queue);
+@@ -1916,7 +1916,7 @@ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+ if (!mgr->groups) {
+ kfree(mgr);
+
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ #ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL_STATE_DEBUG
+@@ -1927,7 +1927,7 @@ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+ if (!init_group(mgr, &mgr->groups[i])) {
+ destroy_mgr(mgr);
+
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ mgr->group_count++;
+diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
+index 6ba167a3346134..71573b85d9242e 100644
+--- a/drivers/gpu/drm/drm_modes.c
++++ b/drivers/gpu/drm/drm_modes.c
+@@ -1287,14 +1287,11 @@ EXPORT_SYMBOL(drm_mode_set_name);
+ */
+ int drm_mode_vrefresh(const struct drm_display_mode *mode)
+ {
+- unsigned int num, den;
++ unsigned int num = 1, den = 1;
+
+ if (mode->htotal == 0 || mode->vtotal == 0)
+ return 0;
+
+- num = mode->clock;
+- den = mode->htotal * mode->vtotal;
+-
+ if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ num *= 2;
+ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+@@ -1302,6 +1299,12 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode)
+ if (mode->vscan > 1)
+ den *= mode->vscan;
+
++ if (check_mul_overflow(mode->clock, num, &num))
++ return 0;
++
++ if (check_mul_overflow(mode->htotal * mode->vtotal, den, &den))
++ return 0;
++
+ return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(num, 1000), den);
+ }
+ EXPORT_SYMBOL(drm_mode_vrefresh);
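
The overflow guard can be exercised on its own. A userspace sketch built on
the compiler builtin that, to our understanding, backs the kernel's
check_mul_overflow(); the values and the simplified flag handling are
illustrative:

#include <stdint.h>
#include <stdio.h>

/* Returns 0 on u32 overflow, like the patched drm_mode_vrefresh(). */
static unsigned int vrefresh(uint32_t clock_khz, uint32_t htotal,
                             uint32_t vtotal, int interlace)
{
        uint32_t num = 1, den = 1;

        if (interlace)
                num *= 2;

        if (__builtin_mul_overflow(clock_khz, num, &num))
                return 0;
        if (__builtin_mul_overflow(htotal * vtotal, den, &den))
                return 0;

        /* round-to-closest division, as DIV_ROUND_CLOSEST_ULL does */
        return (uint32_t)(((uint64_t)num * 1000 + den / 2) / den);
}

int main(void)
{
        printf("%u\n", vrefresh(148500, 2200, 1125, 0));        /* 60 */
        printf("%u\n", vrefresh(4000000000u, 100, 100, 1));     /* 0: overflow */
        return 0;
}
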
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+index ba55c059063dbb..fe1f85e5dda330 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+@@ -343,6 +343,11 @@ struct intel_engine_guc_stats {
+ * @start_gt_clk: GT clock time of last idle to active transition.
+ */
+ u64 start_gt_clk;
++
++ /**
++ * @total: The last value of total returned
++ */
++ u64 total;
+ };
+
+ union intel_engine_tlb_inv_reg {
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index ed979847187f53..ee12ee0ed41871 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -1243,6 +1243,21 @@ static void __get_engine_usage_record(struct intel_engine_cs *engine,
+ } while (++i < 6);
+ }
+
++static void __set_engine_usage_record(struct intel_engine_cs *engine,
++ u32 last_in, u32 id, u32 total)
++{
++ struct iosys_map rec_map = intel_guc_engine_usage_record_map(engine);
++
++#define record_write(map_, field_, val_) \
++ iosys_map_wr_field(map_, 0, struct guc_engine_usage_record, field_, val_)
++
++ record_write(&rec_map, last_switch_in_stamp, last_in);
++ record_write(&rec_map, current_context_index, id);
++ record_write(&rec_map, total_runtime, total);
++
++#undef record_write
++}
++
+ static void guc_update_engine_gt_clks(struct intel_engine_cs *engine)
+ {
+ struct intel_engine_guc_stats *stats = &engine->stats.guc;
+@@ -1363,9 +1378,12 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now)
+ total += intel_gt_clock_interval_to_ns(gt, clk);
+ }
+
++ if (total > stats->total)
++ stats->total = total;
++
+ spin_unlock_irqrestore(&guc->timestamp.lock, flags);
+
+- return ns_to_ktime(total);
++ return ns_to_ktime(stats->total);
+ }
+
+ static void guc_enable_busyness_worker(struct intel_guc *guc)
+@@ -1431,8 +1449,21 @@ static void __reset_guc_busyness_stats(struct intel_guc *guc)
+
+ guc_update_pm_timestamp(guc, &unused);
+ for_each_engine(engine, gt, id) {
++ struct intel_engine_guc_stats *stats = &engine->stats.guc;
++
+ guc_update_engine_gt_clks(engine);
+- engine->stats.guc.prev_total = 0;
++
++ /*
++ * If resetting a running context, accumulate the active
++ * time as well since there will be no context switch.
++ */
++ if (stats->running) {
++ u64 clk = guc->timestamp.gt_stamp - stats->start_gt_clk;
++
++ stats->total_gt_clks += clk;
++ }
++ stats->prev_total = 0;
++ stats->running = 0;
+ }
+
+ spin_unlock_irqrestore(&guc->timestamp.lock, flags);
+@@ -1543,6 +1574,9 @@ static void guc_timestamp_ping(struct work_struct *wrk)
+
+ static int guc_action_enable_usage_stats(struct intel_guc *guc)
+ {
++ struct intel_gt *gt = guc_to_gt(guc);
++ struct intel_engine_cs *engine;
++ enum intel_engine_id id;
+ u32 offset = intel_guc_engine_usage_offset(guc);
+ u32 action[] = {
+ INTEL_GUC_ACTION_SET_ENG_UTIL_BUFF,
+@@ -1550,6 +1584,9 @@ static int guc_action_enable_usage_stats(struct intel_guc *guc)
+ 0,
+ };
+
++ for_each_engine(engine, gt, id)
++ __set_engine_usage_record(engine, 0, 0xffffffff, 0);
++
+ return intel_guc_send(guc, action, ARRAY_SIZE(action));
+ }
+
+diff --git a/drivers/gpu/drm/panel/panel-himax-hx83102.c b/drivers/gpu/drm/panel/panel-himax-hx83102.c
+index 8b48bba181316c..3644a7544b935d 100644
+--- a/drivers/gpu/drm/panel/panel-himax-hx83102.c
++++ b/drivers/gpu/drm/panel/panel-himax-hx83102.c
+@@ -565,6 +565,8 @@ static int hx83102_get_modes(struct drm_panel *panel,
+ struct drm_display_mode *mode;
+
+ mode = drm_mode_duplicate(connector->dev, m);
++ if (!mode)
++ return -ENOMEM;
+
+ mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+ drm_mode_set_name(mode);
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35950.c b/drivers/gpu/drm/panel/panel-novatek-nt35950.c
+index b036208f93560e..08b22b592ab045 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35950.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35950.c
+@@ -481,9 +481,9 @@ static int nt35950_probe(struct mipi_dsi_device *dsi)
+ return dev_err_probe(dev, -EPROBE_DEFER, "Cannot get secondary DSI host\n");
+
+ nt->dsi[1] = mipi_dsi_device_register_full(dsi_r_host, info);
+- if (!nt->dsi[1]) {
++ if (IS_ERR(nt->dsi[1])) {
+ dev_err(dev, "Cannot get secondary DSI node\n");
+- return -ENODEV;
++ return PTR_ERR(nt->dsi[1]);
+ }
+ num_dsis++;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+index eef03d04e0cd2d..1f72ef7ca74c93 100644
+--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
++++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+@@ -1177,6 +1177,7 @@ static int st7701_probe(struct device *dev, int connector_type)
+ return dev_err_probe(dev, ret, "Failed to get orientation\n");
+
+ drm_panel_init(&st7701->panel, dev, &st7701_funcs, connector_type);
++ st7701->panel.prepare_prev_first = true;
+
+ /**
+ * Once sleep out has been issued, ST7701 IC required to wait 120ms
+diff --git a/drivers/gpu/drm/panel/panel-synaptics-r63353.c b/drivers/gpu/drm/panel/panel-synaptics-r63353.c
+index 169c629746c714..17349825543fe6 100644
+--- a/drivers/gpu/drm/panel/panel-synaptics-r63353.c
++++ b/drivers/gpu/drm/panel/panel-synaptics-r63353.c
+@@ -325,7 +325,7 @@ static void r63353_panel_shutdown(struct mipi_dsi_device *dsi)
+ {
+ struct r63353_panel *rpanel = mipi_dsi_get_drvdata(dsi);
+
+- r63353_panel_unprepare(&rpanel->base);
++ drm_panel_unprepare(&rpanel->base);
+ }
+
+ static const struct r63353_desc sharp_ls068b3sx02_data = {
+diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
+index d35b60c0611486..77017d9518267c 100644
+--- a/drivers/hv/hv_kvp.c
++++ b/drivers/hv/hv_kvp.c
+@@ -767,6 +767,12 @@ hv_kvp_init(struct hv_util_service *srv)
+ */
+ kvp_transaction.state = HVUTIL_DEVICE_INIT;
+
++ return 0;
++}
++
++int
++hv_kvp_init_transport(void)
++{
+ hvt = hvutil_transport_init(kvp_devname, CN_KVP_IDX, CN_KVP_VAL,
+ kvp_on_msg, kvp_on_reset);
+ if (!hvt)
+diff --git a/drivers/hv/hv_snapshot.c b/drivers/hv/hv_snapshot.c
+index 0d2184be169125..397f4c8fa46c31 100644
+--- a/drivers/hv/hv_snapshot.c
++++ b/drivers/hv/hv_snapshot.c
+@@ -388,6 +388,12 @@ hv_vss_init(struct hv_util_service *srv)
+ */
+ vss_transaction.state = HVUTIL_DEVICE_INIT;
+
++ return 0;
++}
++
++int
++hv_vss_init_transport(void)
++{
+ hvt = hvutil_transport_init(vss_devname, CN_VSS_IDX, CN_VSS_VAL,
+ vss_on_msg, vss_on_reset);
+ if (!hvt) {
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index c4f525325790fa..3d9360fd909acc 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -141,6 +141,7 @@ static struct hv_util_service util_heartbeat = {
+ static struct hv_util_service util_kvp = {
+ .util_cb = hv_kvp_onchannelcallback,
+ .util_init = hv_kvp_init,
++ .util_init_transport = hv_kvp_init_transport,
+ .util_pre_suspend = hv_kvp_pre_suspend,
+ .util_pre_resume = hv_kvp_pre_resume,
+ .util_deinit = hv_kvp_deinit,
+@@ -149,6 +150,7 @@ static struct hv_util_service util_kvp = {
+ static struct hv_util_service util_vss = {
+ .util_cb = hv_vss_onchannelcallback,
+ .util_init = hv_vss_init,
++ .util_init_transport = hv_vss_init_transport,
+ .util_pre_suspend = hv_vss_pre_suspend,
+ .util_pre_resume = hv_vss_pre_resume,
+ .util_deinit = hv_vss_deinit,
+@@ -613,6 +615,13 @@ static int util_probe(struct hv_device *dev,
+ if (ret)
+ goto error;
+
++ if (srv->util_init_transport) {
++ ret = srv->util_init_transport();
++ if (ret) {
++ vmbus_close(dev->channel);
++ goto error;
++ }
++ }
+ return 0;
+
+ error:
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index d2856023d53c9a..52cb744b4d7fde 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -370,12 +370,14 @@ void vmbus_on_event(unsigned long data);
+ void vmbus_on_msg_dpc(unsigned long data);
+
+ int hv_kvp_init(struct hv_util_service *srv);
++int hv_kvp_init_transport(void);
+ void hv_kvp_deinit(void);
+ int hv_kvp_pre_suspend(void);
+ int hv_kvp_pre_resume(void);
+ void hv_kvp_onchannelcallback(void *context);
+
+ int hv_vss_init(struct hv_util_service *srv);
++int hv_vss_init_transport(void);
+ void hv_vss_deinit(void);
+ int hv_vss_pre_suspend(void);
+ int hv_vss_pre_resume(void);
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index 926d28cd3fab55..1c2cb12071b808 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -182,7 +182,7 @@ struct tmp51x_data {
+ struct regmap *regmap;
+ };
+
+-// Set the shift based on the gain 8=4, 4=3, 2=2, 1=1
++// Set the shift based on the gain: 8 -> 1, 4 -> 2, 2 -> 3, 1 -> 4
+ static inline u8 tmp51x_get_pga_shift(struct tmp51x_data *data)
+ {
+ return 5 - ffs(data->pga_gain);
+@@ -204,7 +204,9 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ * 2's complement number shifted by one to four depending
+ * on the pga gain setting. 1lsb = 10uV
+ */
+- *val = sign_extend32(regval, 17 - tmp51x_get_pga_shift(data));
++ *val = sign_extend32(regval,
++ reg == TMP51X_SHUNT_CURRENT_RESULT ?
++ 16 - tmp51x_get_pga_shift(data) : 15);
+ *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms);
+ break;
+ case TMP51X_BUS_VOLTAGE_RESULT:
+@@ -220,7 +222,7 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ break;
+ case TMP51X_BUS_CURRENT_RESULT:
+ // Current = (ShuntVoltage * CalibrationRegister) / 4096
+- *val = sign_extend32(regval, 16) * data->curr_lsb_ua;
++ *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua;
+ *val = DIV_ROUND_CLOSEST(*val, MILLI);
+ break;
+ case TMP51X_LOCAL_TEMP_RESULT:
+@@ -232,7 +234,7 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ case TMP51X_REMOTE_TEMP_LIMIT_2:
+ case TMP513_REMOTE_TEMP_LIMIT_3:
+ // 1lsb = 0.0625 degrees centigrade
+- *val = sign_extend32(regval, 16) >> TMP51X_TEMP_SHIFT;
++ *val = sign_extend32(regval, 15) >> TMP51X_TEMP_SHIFT;
+ *val = DIV_ROUND_CLOSEST(*val * 625, 10);
+ break;
+ case TMP51X_N_FACTOR_AND_HYST_1:
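
The fix here is to the sign-bit index: a 16-bit register value has its sign
in bit 15, not bit 16. A userspace reimplementation of the helper (the
kernel's sign_extend32() behaves the same way) makes the off-by-one visible:

#include <stdint.h>
#include <stdio.h>

/* Sign-extend 'value', treating bit 'index' as the sign bit. */
static int32_t sign_extend32(uint32_t value, int index)
{
        uint8_t shift = 31 - index;

        return (int32_t)(value << shift) >> shift;
}

int main(void)
{
        uint32_t regval = 0x8000;       /* 16-bit two's complement minimum */

        printf("%d\n", sign_extend32(regval, 15)); /* -32768: correct */
        printf("%d\n", sign_extend32(regval, 16)); /* 32768: sign bit missed */
        return 0;
}
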
+diff --git a/drivers/i2c/busses/i2c-pnx.c b/drivers/i2c/busses/i2c-pnx.c
+index 1dafadda73af3a..135300f3b53428 100644
+--- a/drivers/i2c/busses/i2c-pnx.c
++++ b/drivers/i2c/busses/i2c-pnx.c
+@@ -95,7 +95,7 @@ enum {
+
+ static inline int wait_timeout(struct i2c_pnx_algo_data *data)
+ {
+- long timeout = data->timeout;
++ long timeout = jiffies_to_msecs(data->timeout);
+ while (timeout > 0 &&
+ (ioread32(I2C_REG_STS(data)) & mstatus_active)) {
+ mdelay(1);
+@@ -106,7 +106,7 @@ static inline int wait_timeout(struct i2c_pnx_algo_data *data)
+
+ static inline int wait_reset(struct i2c_pnx_algo_data *data)
+ {
+- long timeout = data->timeout;
++ long timeout = jiffies_to_msecs(data->timeout);
+ while (timeout > 0 &&
+ (ioread32(I2C_REG_CTL(data)) & mcntrl_reset)) {
+ mdelay(1);
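
The bug being fixed is a unit mismatch: data->timeout is kept in jiffies,
while the loop decrements once per mdelay(1), i.e. once per millisecond. A
tiny sketch, with an assumed HZ of 100, of how far off the budget was:

#include <stdio.h>

#define HZ 100  /* assumed tick rate: 1 jiffy = 10 ms */

static unsigned int jiffies_to_msecs(unsigned long j)
{
        return j * (1000 / HZ);
}

int main(void)
{
        unsigned long timeout_jiffies = 50;     /* meant as a 500 ms budget */

        /* Without the conversion, the ms-granular wait loop gives up
         * after only 50 iterations, a tenth of the intended time. */
        printf("budget: %u ms, not %lu ms\n",
               jiffies_to_msecs(timeout_jiffies), timeout_jiffies);
        return 0;
}
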
+diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c
+index c7f3a4c0247023..2c982199782f9b 100644
+--- a/drivers/i2c/busses/i2c-riic.c
++++ b/drivers/i2c/busses/i2c-riic.c
+@@ -352,7 +352,7 @@ static int riic_init_hw(struct riic_dev *riic)
+ if (brl <= (0x1F + 3))
+ break;
+
+- total_ticks /= 2;
++ total_ticks = DIV_ROUND_UP(total_ticks, 2);
+ rate /= 2;
+ }
+
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 8b6159f4cdafa4..b0bfb61539c202 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -161,7 +161,22 @@ static bool cpus_have_group0 __ro_after_init;
+
+ static void __init gic_prio_init(void)
+ {
+- cpus_have_security_disabled = gic_dist_security_disabled();
++ bool ds;
++
++ ds = gic_dist_security_disabled();
++ if (!ds) {
++ u32 val;
++
++ val = readl_relaxed(gic_data.dist_base + GICD_CTLR);
++ val |= GICD_CTLR_DS;
++ writel_relaxed(val, gic_data.dist_base + GICD_CTLR);
++
++ ds = gic_dist_security_disabled();
++ if (ds)
++ pr_warn("Broken GIC integration, security disabled");
++ }
++
++ cpus_have_security_disabled = ds;
+ cpus_have_group0 = gic_has_group0();
+
+ /*
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 813bc20cfb5a6c..6e62415de2e5ec 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2924,6 +2924,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ msdc_gate_clock(host);
+ platform_set_drvdata(pdev, NULL);
+ release_mem:
++ device_init_wakeup(&pdev->dev, false);
+ if (host->dma.gpd)
+ dma_free_coherent(&pdev->dev,
+ 2 * sizeof(struct mt_gpdma_desc),
+@@ -2957,6 +2958,7 @@ static void msdc_drv_remove(struct platform_device *pdev)
+ host->dma.gpd, host->dma.gpd_addr);
+ dma_free_coherent(&pdev->dev, MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+ host->dma.bd, host->dma.bd_addr);
++ device_init_wakeup(&pdev->dev, false);
+ }
+
+ static void msdc_save_reg(struct msdc_host *host)
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 1ad0a6b3a2eb77..7b6b82bec8556c 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -1525,7 +1525,6 @@ static const struct sdhci_pltfm_data sdhci_tegra186_pdata = {
+ .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
+ SDHCI_QUIRK_SINGLE_POWER_WRITE |
+ SDHCI_QUIRK_NO_HISPD_BIT |
+- SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+ SDHCI_QUIRK2_ISSUE_CMD_DAT_RESET_TOGETHER,
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 533bcb77c9f934..97cd8bbf2e32a9 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1220,20 +1220,32 @@ static void m_can_coalescing_update(struct m_can_classdev *cdev, u32 ir)
+ static int m_can_interrupt_handler(struct m_can_classdev *cdev)
+ {
+ struct net_device *dev = cdev->net;
+- u32 ir;
++ u32 ir = 0, ir_read;
+ int ret;
+
+ if (pm_runtime_suspended(cdev->dev))
+ return IRQ_NONE;
+
+- ir = m_can_read(cdev, M_CAN_IR);
++ /* The m_can controller signals its interrupt status as a level, but
++ * depending on the integration the CPU may interpret the signal as
++ * edge-triggered (for example with m_can_pci). For these
++ * edge-triggered integrations, we must observe that IR is 0 at least
++ * once to be sure that the next interrupt will generate an edge.
++ */
++ while ((ir_read = m_can_read(cdev, M_CAN_IR)) != 0) {
++ ir |= ir_read;
++
++ /* ACK all irqs */
++ m_can_write(cdev, M_CAN_IR, ir);
++
++ if (!cdev->irq_edge_triggered)
++ break;
++ }
++
+ m_can_coalescing_update(cdev, ir);
+ if (!ir)
+ return IRQ_NONE;
+
+- /* ACK all irqs */
+- m_can_write(cdev, M_CAN_IR, ir);
+-
+ if (cdev->ops->clear_interrupts)
+ cdev->ops->clear_interrupts(cdev);
+
+@@ -1695,6 +1707,14 @@ static int m_can_dev_setup(struct m_can_classdev *cdev)
+ return -EINVAL;
+ }
+
++ /* Write the INIT bit, in case no hardware reset has happened before
++ * the probe (for example, it was observed that the Intel Elkhart Lake
++ * SoCs do not properly reset the CAN controllers on reboot)
++ */
++ err = m_can_cccr_update_bits(cdev, CCCR_INIT, CCCR_INIT);
++ if (err)
++ return err;
++
+ if (!cdev->is_peripheral)
+ netif_napi_add(dev, &cdev->napi, m_can_poll);
+
+@@ -1746,11 +1766,7 @@ static int m_can_dev_setup(struct m_can_classdev *cdev)
+ return -EINVAL;
+ }
+
+- /* Forcing standby mode should be redundant, as the chip should be in
+- * standby after a reset. Write the INIT bit anyways, should the chip
+- * be configured by previous stage.
+- */
+- return m_can_cccr_update_bits(cdev, CCCR_INIT, CCCR_INIT);
++ return 0;
+ }
+
+ static void m_can_stop(struct net_device *dev)
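
The read-ack-reread loop added above is the usual way to drive a level-type
status register from an edge-triggered interrupt line: the handler must
observe the register as 0 once, or a still-pending bit never produces a new
edge. A userspace sketch of the pattern against a simulated latch (all names
invented):

#include <stdint.h>
#include <stdio.h>

static uint32_t hw_status;      /* simulated IR register */

static uint32_t read_status(void)
{
        return hw_status;
}

static void ack_status(uint32_t bits)
{
        hw_status &= ~bits;     /* roughly write-1-to-clear */
}

/* Drain until the register reads 0 so the next event makes a fresh edge. */
static uint32_t handle_irq_edge(void)
{
        uint32_t ir = 0, ir_read;

        while ((ir_read = read_status()) != 0) {
                ir |= ir_read;
                ack_status(ir_read);
                /* a new bit may latch between read and ack; loop again */
        }
        return ir;
}

int main(void)
{
        hw_status = 0x5;
        printf("handled 0x%x, status now 0x%x\n",
               handle_irq_edge(), hw_status);
        return 0;
}
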
+diff --git a/drivers/net/can/m_can/m_can.h b/drivers/net/can/m_can/m_can.h
+index 92b2bd8628e6b3..ef39e8e527ab67 100644
+--- a/drivers/net/can/m_can/m_can.h
++++ b/drivers/net/can/m_can/m_can.h
+@@ -99,6 +99,7 @@ struct m_can_classdev {
+ int pm_clock_support;
+ int pm_wake_source;
+ int is_peripheral;
++ bool irq_edge_triggered;
+
+ // Cached M_CAN_IE register content
+ u32 active_interrupts;
+diff --git a/drivers/net/can/m_can/m_can_pci.c b/drivers/net/can/m_can/m_can_pci.c
+index d72fe771dfc7aa..9ad7419f88f830 100644
+--- a/drivers/net/can/m_can/m_can_pci.c
++++ b/drivers/net/can/m_can/m_can_pci.c
+@@ -127,6 +127,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ mcan_class->pm_clock_support = 1;
+ mcan_class->pm_wake_source = 0;
+ mcan_class->can.clock.freq = id->driver_data;
++ mcan_class->irq_edge_triggered = true;
+ mcan_class->ops = &m_can_pci_ops;
+
+ pci_set_drvdata(pci, mcan_class);
+diff --git a/drivers/net/ethernet/broadcom/bgmac-platform.c b/drivers/net/ethernet/broadcom/bgmac-platform.c
+index 77425c7a32dbf8..78f7862ca00669 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-platform.c
++++ b/drivers/net/ethernet/broadcom/bgmac-platform.c
+@@ -171,6 +171,7 @@ static int platform_phy_connect(struct bgmac *bgmac)
+ static int bgmac_probe(struct platform_device *pdev)
+ {
+ struct device_node *np = pdev->dev.of_node;
++ struct device_node *phy_node;
+ struct bgmac *bgmac;
+ struct resource *regs;
+ int ret;
+@@ -236,7 +237,9 @@ static int bgmac_probe(struct platform_device *pdev)
+ bgmac->cco_ctl_maskset = platform_bgmac_cco_ctl_maskset;
+ bgmac->get_bus_clock = platform_bgmac_get_bus_clock;
+ bgmac->cmn_maskset32 = platform_bgmac_cmn_maskset32;
+- if (of_parse_phandle(np, "phy-handle", 0)) {
++ phy_node = of_parse_phandle(np, "phy-handle", 0);
++ if (phy_node) {
++ of_node_put(phy_node);
+ bgmac->phy_connect = platform_phy_connect;
+ } else {
+ bgmac->phy_connect = bgmac_phy_connect_direct;
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
+index 455a54708be440..a83e7d3c2485bd 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
+@@ -346,8 +346,9 @@ static struct sk_buff *copy_gl_to_skb_pkt(const struct pkt_gl *gl,
+ * driver. Once driver synthesizes cpl_pass_accpet_req the skb will go
+ * through the regular cpl_pass_accept_req processing in TOM.
+ */
+- skb = alloc_skb(gl->tot_len + sizeof(struct cpl_pass_accept_req)
+- - pktshift, GFP_ATOMIC);
++ skb = alloc_skb(size_add(gl->tot_len,
++ sizeof(struct cpl_pass_accept_req)) -
++ pktshift, GFP_ATOMIC);
+ if (unlikely(!skb))
+ return NULL;
+ __skb_put(skb, gl->tot_len + sizeof(struct cpl_pass_accept_req)
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index 890f213da8d180..ae1f523d6841b5 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -172,6 +172,7 @@ static int create_txqs(struct hinic_dev *nic_dev)
+ hinic_sq_dbgfs_uninit(nic_dev);
+
+ devm_kfree(&netdev->dev, nic_dev->txqs);
++ nic_dev->txqs = NULL;
+ return err;
+ }
+
+@@ -268,6 +269,7 @@ static int create_rxqs(struct hinic_dev *nic_dev)
+ hinic_rq_dbgfs_uninit(nic_dev);
+
+ devm_kfree(&netdev->dev, nic_dev->rxqs);
++ nic_dev->rxqs = NULL;
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 3d72aa7b130503..ef93df52088710 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1432,7 +1432,7 @@ void ocelot_ifh_set_basic(void *ifh, struct ocelot *ocelot, int port,
+
+ memset(ifh, 0, OCELOT_TAG_LEN);
+ ocelot_ifh_set_bypass(ifh, 1);
+- ocelot_ifh_set_src(ifh, BIT_ULL(ocelot->num_phys_ports));
++ ocelot_ifh_set_src(ifh, ocelot->num_phys_ports);
+ ocelot_ifh_set_dest(ifh, BIT_ULL(port));
+ ocelot_ifh_set_qos_class(ifh, qos_class);
+ ocelot_ifh_set_tag_type(ifh, tag_type);
+diff --git a/drivers/net/ethernet/oa_tc6.c b/drivers/net/ethernet/oa_tc6.c
+index f9c0dcd965c2e7..db200e4ec284d7 100644
+--- a/drivers/net/ethernet/oa_tc6.c
++++ b/drivers/net/ethernet/oa_tc6.c
+@@ -113,6 +113,7 @@ struct oa_tc6 {
+ struct mii_bus *mdiobus;
+ struct spi_device *spi;
+ struct mutex spi_ctrl_lock; /* Protects spi control transfer */
++ spinlock_t tx_skb_lock; /* Protects tx skb handling */
+ void *spi_ctrl_tx_buf;
+ void *spi_ctrl_rx_buf;
+ void *spi_data_tx_buf;
+@@ -1004,8 +1005,10 @@ static u16 oa_tc6_prepare_spi_tx_buf_for_tx_skbs(struct oa_tc6 *tc6)
+ for (used_tx_credits = 0; used_tx_credits < tc6->tx_credits;
+ used_tx_credits++) {
+ if (!tc6->ongoing_tx_skb) {
++ spin_lock_bh(&tc6->tx_skb_lock);
+ tc6->ongoing_tx_skb = tc6->waiting_tx_skb;
+ tc6->waiting_tx_skb = NULL;
++ spin_unlock_bh(&tc6->tx_skb_lock);
+ }
+ if (!tc6->ongoing_tx_skb)
+ break;
+@@ -1111,8 +1114,9 @@ static int oa_tc6_spi_thread_handler(void *data)
+ /* This kthread will be waken up if there is a tx skb or mac-phy
+ * interrupt to perform spi transfer with tx chunks.
+ */
+- wait_event_interruptible(tc6->spi_wq, tc6->waiting_tx_skb ||
+- tc6->int_flag ||
++ wait_event_interruptible(tc6->spi_wq, tc6->int_flag ||
++ (tc6->waiting_tx_skb &&
++ tc6->tx_credits) ||
+ kthread_should_stop());
+
+ if (kthread_should_stop())
+@@ -1209,7 +1213,9 @@ netdev_tx_t oa_tc6_start_xmit(struct oa_tc6 *tc6, struct sk_buff *skb)
+ return NETDEV_TX_OK;
+ }
+
++ spin_lock_bh(&tc6->tx_skb_lock);
+ tc6->waiting_tx_skb = skb;
++ spin_unlock_bh(&tc6->tx_skb_lock);
+
+ /* Wake spi kthread to perform spi transfer */
+ wake_up_interruptible(&tc6->spi_wq);
+@@ -1239,6 +1245,7 @@ struct oa_tc6 *oa_tc6_init(struct spi_device *spi, struct net_device *netdev)
+ tc6->netdev = netdev;
+ SET_NETDEV_DEV(netdev, &spi->dev);
+ mutex_init(&tc6->spi_ctrl_lock);
++ spin_lock_init(&tc6->tx_skb_lock);
+
+ /* Set the SPI controller to pump at realtime priority */
+ tc6->spi->rt = true;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+index 9e42d599840ded..57edcde9e6f8c6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+@@ -277,7 +277,10 @@ void ionic_dev_teardown(struct ionic *ionic)
+ idev->phy_cmb_pages = 0;
+ idev->cmb_npages = 0;
+
+- destroy_workqueue(ionic->wq);
++ if (ionic->wq) {
++ destroy_workqueue(ionic->wq);
++ ionic->wq = NULL;
++ }
+ mutex_destroy(&idev->cmb_inuse_lock);
+ }
+
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+index dda22fa4448cff..9b7f78b6cdb1e3 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+@@ -961,8 +961,8 @@ static int ionic_get_module_eeprom(struct net_device *netdev,
+ len = min_t(u32, sizeof(xcvr->sprom), ee->len);
+
+ do {
+- memcpy(data, xcvr->sprom, len);
+- memcpy(tbuf, xcvr->sprom, len);
++ memcpy(data, &xcvr->sprom[ee->offset], len);
++ memcpy(tbuf, &xcvr->sprom[ee->offset], len);
+
+ /* Let's make sure we got a consistent copy */
+ if (!memcmp(data, tbuf, len))
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 40496587b2b318..3d3f936779f7d9 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -3869,8 +3869,8 @@ int ionic_lif_register(struct ionic_lif *lif)
+ /* only register LIF0 for now */
+ err = register_netdev(lif->netdev);
+ if (err) {
+- dev_err(lif->ionic->dev, "Cannot register net device, aborting\n");
+- ionic_lif_unregister_phc(lif);
++ dev_err(lif->ionic->dev, "Cannot register net device: %d, aborting\n", err);
++ ionic_lif_unregister(lif);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
+index 09117110e3dd2a..f86fcecb91a8bd 100644
+--- a/drivers/net/ethernet/renesas/rswitch.c
++++ b/drivers/net/ethernet/renesas/rswitch.c
+@@ -547,7 +547,6 @@ static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
+ desc = &gq->ts_ring[gq->ring_size];
+ desc->desc.die_dt = DT_LINKFIX;
+ rswitch_desc_set_dptr(&desc->desc, gq->ring_dma);
+- INIT_LIST_HEAD(&priv->gwca.ts_info_list);
+
+ return 0;
+ }
+@@ -1003,9 +1002,10 @@ static int rswitch_gwca_request_irqs(struct rswitch_private *priv)
+ static void rswitch_ts(struct rswitch_private *priv)
+ {
+ struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+- struct rswitch_gwca_ts_info *ts_info, *ts_info2;
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct rswitch_ts_desc *desc;
++ struct rswitch_device *rdev;
++ struct sk_buff *ts_skb;
+ struct timespec64 ts;
+ unsigned int num;
+ u32 tag, port;
+@@ -1015,23 +1015,28 @@ static void rswitch_ts(struct rswitch_private *priv)
+ dma_rmb();
+
+ port = TS_DESC_DPN(__le32_to_cpu(desc->desc.dptrl));
+- tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl));
+-
+- list_for_each_entry_safe(ts_info, ts_info2, &priv->gwca.ts_info_list, list) {
+- if (!(ts_info->port == port && ts_info->tag == tag))
+- continue;
+-
+- memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+- ts.tv_sec = __le32_to_cpu(desc->ts_sec);
+- ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff));
+- shhwtstamps.hwtstamp = timespec64_to_ktime(ts);
+- skb_tstamp_tx(ts_info->skb, &shhwtstamps);
+- dev_consume_skb_irq(ts_info->skb);
+- list_del(&ts_info->list);
+- kfree(ts_info);
+- break;
+- }
++ if (unlikely(port >= RSWITCH_NUM_PORTS))
++ goto next;
++ rdev = priv->rdev[port];
+
++ tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl));
++ if (unlikely(tag >= TS_TAGS_PER_PORT))
++ goto next;
++ ts_skb = xchg(&rdev->ts_skb[tag], NULL);
++ smp_mb(); /* order rdev->ts_skb[] read before bitmap update */
++ clear_bit(tag, rdev->ts_skb_used);
++
++ if (unlikely(!ts_skb))
++ goto next;
++
++ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
++ ts.tv_sec = __le32_to_cpu(desc->ts_sec);
++ ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff));
++ shhwtstamps.hwtstamp = timespec64_to_ktime(ts);
++ skb_tstamp_tx(ts_skb, &shhwtstamps);
++ dev_consume_skb_irq(ts_skb);
++
++next:
+ gq->cur = rswitch_next_queue_index(gq, true, 1);
+ desc = &gq->ts_ring[gq->cur];
+ }
+@@ -1576,8 +1581,9 @@ static int rswitch_open(struct net_device *ndev)
+ static int rswitch_stop(struct net_device *ndev)
+ {
+ struct rswitch_device *rdev = netdev_priv(ndev);
+- struct rswitch_gwca_ts_info *ts_info, *ts_info2;
++ struct sk_buff *ts_skb;
+ unsigned long flags;
++ unsigned int tag;
+
+ netif_tx_stop_all_queues(ndev);
+
+@@ -1594,12 +1600,13 @@ static int rswitch_stop(struct net_device *ndev)
+ if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
+ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
+
+- list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) {
+- if (ts_info->port != rdev->port)
+- continue;
+- dev_kfree_skb_irq(ts_info->skb);
+- list_del(&ts_info->list);
+- kfree(ts_info);
++ for (tag = find_first_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT);
++ tag < TS_TAGS_PER_PORT;
++ tag = find_next_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT, tag + 1)) {
++ ts_skb = xchg(&rdev->ts_skb[tag], NULL);
++ clear_bit(tag, rdev->ts_skb_used);
++ if (ts_skb)
++ dev_kfree_skb(ts_skb);
+ }
+
+ return 0;
+@@ -1612,20 +1619,17 @@ static bool rswitch_ext_desc_set_info1(struct rswitch_device *rdev,
+ desc->info1 = cpu_to_le64(INFO1_DV(BIT(rdev->etha->index)) |
+ INFO1_IPV(GWCA_IPV_NUM) | INFO1_FMT);
+ if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
+- struct rswitch_gwca_ts_info *ts_info;
++ unsigned int tag;
+
+- ts_info = kzalloc(sizeof(*ts_info), GFP_ATOMIC);
+- if (!ts_info)
++ tag = find_first_zero_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT);
++ if (tag == TS_TAGS_PER_PORT)
+ return false;
++ smp_mb(); /* order bitmap read before rdev->ts_skb[] write */
++ rdev->ts_skb[tag] = skb_get(skb);
++ set_bit(tag, rdev->ts_skb_used);
+
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+- rdev->ts_tag++;
+- desc->info1 |= cpu_to_le64(INFO1_TSUN(rdev->ts_tag) | INFO1_TXC);
+-
+- ts_info->skb = skb_get(skb);
+- ts_info->port = rdev->port;
+- ts_info->tag = rdev->ts_tag;
+- list_add_tail(&ts_info->list, &rdev->priv->gwca.ts_info_list);
++ desc->info1 |= cpu_to_le64(INFO1_TSUN(tag) | INFO1_TXC);
+
+ skb_tx_timestamp(skb);
+ }
+diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h
+index e020800dcc570e..d8d4ed7d7f8b6a 100644
+--- a/drivers/net/ethernet/renesas/rswitch.h
++++ b/drivers/net/ethernet/renesas/rswitch.h
+@@ -972,14 +972,6 @@ struct rswitch_gwca_queue {
+ };
+ };
+
+-struct rswitch_gwca_ts_info {
+- struct sk_buff *skb;
+- struct list_head list;
+-
+- int port;
+- u8 tag;
+-};
+-
+ #define RSWITCH_NUM_IRQ_REGS (RSWITCH_MAX_NUM_QUEUES / BITS_PER_TYPE(u32))
+ struct rswitch_gwca {
+ unsigned int index;
+@@ -989,7 +981,6 @@ struct rswitch_gwca {
+ struct rswitch_gwca_queue *queues;
+ int num_queues;
+ struct rswitch_gwca_queue ts_queue;
+- struct list_head ts_info_list;
+ DECLARE_BITMAP(used, RSWITCH_MAX_NUM_QUEUES);
+ u32 tx_irq_bits[RSWITCH_NUM_IRQ_REGS];
+ u32 rx_irq_bits[RSWITCH_NUM_IRQ_REGS];
+@@ -997,6 +988,7 @@ struct rswitch_gwca {
+ };
+
+ #define NUM_QUEUES_PER_NDEV 2
++#define TS_TAGS_PER_PORT 256
+ struct rswitch_device {
+ struct rswitch_private *priv;
+ struct net_device *ndev;
+@@ -1004,7 +996,8 @@ struct rswitch_device {
+ void __iomem *addr;
+ struct rswitch_gwca_queue *tx_queue;
+ struct rswitch_gwca_queue *rx_queue;
+- u8 ts_tag;
++ struct sk_buff *ts_skb[TS_TAGS_PER_PORT];
++ DECLARE_BITMAP(ts_skb_used, TS_TAGS_PER_PORT);
+ bool disabled;
+
+ int port;
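
Replacing the list with a fixed tag table plus bitmap makes allocation a
find-first-zero-bit operation and completion a slot exchange plus clear_bit().
A simplified userspace sketch of the same allocator, without the SMP ordering
the driver needs:

#include <stdio.h>

#define NTAGS 8

static unsigned long used;      /* one bit per tag */
static void *slots[NTAGS];      /* plays the role of rdev->ts_skb[] */

static int tag_alloc(void *item)
{
        int tag;

        for (tag = 0; tag < NTAGS; tag++) {
                if (!(used & (1UL << tag))) {
                        slots[tag] = item;
                        used |= 1UL << tag;
                        return tag;
                }
        }
        return -1;      /* all tags busy, like tag == TS_TAGS_PER_PORT */
}

static void *tag_complete(int tag)
{
        void *item = slots[tag];

        slots[tag] = NULL;
        used &= ~(1UL << tag);
        return item;
}

int main(void)
{
        int data = 42;
        int tag = tag_alloc(&data);

        printf("tag %d -> %d\n", tag, *(int *)tag_complete(tag));
        return 0;
}
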
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 766213ee82c16e..cf7b59b8cc64b3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4220,8 +4220,8 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ struct stmmac_txq_stats *txq_stats;
+ struct stmmac_tx_queue *tx_q;
+ u32 pay_len, mss, queue;
++ dma_addr_t tso_des, des;
+ u8 proto_hdr_len, hdr;
+- dma_addr_t des;
+ bool set_ic;
+ int i;
+
+@@ -4317,14 +4317,15 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ /* If needed take extra descriptors to fill the remaining payload */
+ tmp_pay_len = pay_len - TSO_MAX_BUFF_SIZE;
++ tso_des = des;
+ } else {
+ stmmac_set_desc_addr(priv, first, des);
+ tmp_pay_len = pay_len;
+- des += proto_hdr_len;
++ tso_des = des + proto_hdr_len;
+ pay_len = 0;
+ }
+
+- stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
++ stmmac_tso_allocator(priv, tso_des, tmp_pay_len, (nfrags == 0), queue);
+
+ /* In case two or more DMA transmit descriptors are allocated for this
+ * non-paged SKB data, the DMA buffer address should be saved to
+diff --git a/drivers/net/mdio/fwnode_mdio.c b/drivers/net/mdio/fwnode_mdio.c
+index b156493d708415..aea0f03575689a 100644
+--- a/drivers/net/mdio/fwnode_mdio.c
++++ b/drivers/net/mdio/fwnode_mdio.c
+@@ -40,6 +40,7 @@ fwnode_find_pse_control(struct fwnode_handle *fwnode)
+ static struct mii_timestamper *
+ fwnode_find_mii_timestamper(struct fwnode_handle *fwnode)
+ {
++ struct mii_timestamper *mii_ts;
+ struct of_phandle_args arg;
+ int err;
+
+@@ -53,10 +54,16 @@ fwnode_find_mii_timestamper(struct fwnode_handle *fwnode)
+ else if (err)
+ return ERR_PTR(err);
+
+- if (arg.args_count != 1)
+- return ERR_PTR(-EINVAL);
++ if (arg.args_count != 1) {
++ mii_ts = ERR_PTR(-EINVAL);
++ goto put_node;
++ }
++
++ mii_ts = register_mii_timestamper(arg.np, arg.args[0]);
+
+- return register_mii_timestamper(arg.np, arg.args[0]);
++put_node:
++ of_node_put(arg.np);
++ return mii_ts;
+ }
+
+ int fwnode_mdiobus_phy_device_register(struct mii_bus *mdio,
+diff --git a/drivers/net/netdevsim/health.c b/drivers/net/netdevsim/health.c
+index 70e8bdf34be900..688f05316b5e10 100644
+--- a/drivers/net/netdevsim/health.c
++++ b/drivers/net/netdevsim/health.c
+@@ -149,6 +149,8 @@ static ssize_t nsim_dev_health_break_write(struct file *file,
+ char *break_msg;
+ int err;
+
++ if (count == 0 || count > PAGE_SIZE)
++ return -EINVAL;
+ break_msg = memdup_user_nul(data, count);
+ if (IS_ERR(break_msg))
+ return PTR_ERR(break_msg);
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 017a6102be0a22..1b29d1d794a201 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -596,10 +596,10 @@ nsim_pp_hold_write(struct file *file, const char __user *data,
+ page_pool_put_full_page(ns->page->pp, ns->page, false);
+ ns->page = NULL;
+ }
+- rtnl_unlock();
+
+ exit:
+- return count;
++ rtnl_unlock();
++ return ret;
+ }
+
+ static const struct file_operations nsim_pp_hold_fops = {
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 6ace5a74cddb57..1c85dda83825d8 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -998,9 +998,13 @@ static void __team_compute_features(struct team *team)
+ unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
+ IFF_XMIT_DST_RELEASE_PERM;
+
++ rcu_read_lock();
++ if (list_empty(&team->port_list))
++ goto done;
++
+ vlan_features = netdev_base_features(vlan_features);
++ enc_features = netdev_base_features(enc_features);
+
+- rcu_read_lock();
+ list_for_each_entry_rcu(port, &team->port_list, list) {
+ vlan_features = netdev_increment_features(vlan_features,
+ port->dev->vlan_features,
+@@ -1010,11 +1014,11 @@ static void __team_compute_features(struct team *team)
+ port->dev->hw_enc_features,
+ TEAM_ENC_FEATURES);
+
+-
+ dst_release_flag &= port->dev->priv_flags;
+ if (port->dev->hard_header_len > max_hard_header_len)
+ max_hard_header_len = port->dev->hard_header_len;
+ }
++done:
+ rcu_read_unlock();
+
+ team->dev->vlan_features = vlan_features;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 9a0f6eb3201661..03fe9e3ee7af15 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1481,7 +1481,7 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
+ skb->truesize += skb->data_len;
+
+ for (i = 1; i < it->nr_segs; i++) {
+- const struct iovec *iov = iter_iov(it);
++ const struct iovec *iov = iter_iov(it) + i;
+ size_t fragsz = iov->iov_len;
+ struct page *page;
+ void *frag;
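
The one-character fix above addresses an indexing bug: iter_iov() yields the
base of the iovec array, so each loop iteration was re-reading segment 0. The
same mistake in plain C:

#include <stdio.h>
#include <sys/uio.h>

int main(void)
{
        char a[10], b[20], c[30];
        const struct iovec iov[] = {
                { a, sizeof(a) }, { b, sizeof(b) }, { c, sizeof(c) },
        };
        const struct iovec *base = iov; /* what iter_iov() hands back */
        int i;

        for (i = 1; i < 3; i++) {
                /* buggy: base->iov_len is always 10 (segment 0)      */
                /* fixed: (base + i)->iov_len is 20 then 30, as meant */
                printf("seg %d: buggy=%zu fixed=%zu\n",
                       i, base->iov_len, (base + i)->iov_len);
        }
        return 0;
}
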
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 286f0c161e332f..a565b8c91da593 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -455,7 +455,8 @@ static int of_translate_one(struct device_node *parent, struct of_bus *bus,
+ }
+ if (ranges == NULL || rlen == 0) {
+ offset = of_read_number(addr, na);
+- memset(addr, 0, pna * 4);
++ /* set address to zero, pass flags through */
++ memset(addr + pbus->flag_cells, 0, (pna - pbus->flag_cells) * 4);
+ pr_debug("empty ranges; 1:1 translation\n");
+ goto finish;
+ }
+@@ -615,7 +616,7 @@ struct device_node *__of_get_dma_parent(const struct device_node *np)
+ if (ret < 0)
+ return of_get_parent(np);
+
+- return of_node_get(args.np);
++ return args.np;
+ }
+ #endif
+
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 20603d3c9931b8..63161d0f72b4e8 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -1455,8 +1455,10 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ map_len--;
+
+ /* Check if not found */
+- if (!new)
++ if (!new) {
++ ret = -EINVAL;
+ goto put;
++ }
+
+ if (!of_device_is_available(new))
+ match = 0;
+@@ -1466,17 +1468,20 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ goto put;
+
+ /* Check for malformed properties */
+- if (WARN_ON(new_size > MAX_PHANDLE_ARGS))
+- goto put;
+- if (map_len < new_size)
++ if (WARN_ON(new_size > MAX_PHANDLE_ARGS) ||
++ map_len < new_size) {
++ ret = -EINVAL;
+ goto put;
++ }
+
+ /* Move forward by new node's #<list>-cells amount */
+ map += new_size;
+ map_len -= new_size;
+ }
+- if (!match)
++ if (!match) {
++ ret = -ENOENT;
+ goto put;
++ }
+
+ /* Get the <list>-map-pass-thru property (optional) */
+ pass = of_get_property(cur, pass_name, NULL);
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index a494f56a0d0ee4..1fb329c0a55b8c 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -111,6 +111,7 @@ const __be32 *of_irq_parse_imap_parent(const __be32 *imap, int len, struct of_ph
+ else
+ np = of_find_node_by_phandle(be32_to_cpup(imap));
+ imap++;
++ len--;
+
+ /* Check if not found */
+ if (!np) {
+@@ -354,6 +355,7 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ return of_irq_parse_oldworld(device, index, out_irq);
+
+ /* Get the reg property (if any) */
++ addr_len = 0;
+ addr = of_get_property(device, "reg", &addr_len);
+
+ /* Prevent out-of-bounds read in case of longer interrupt parent address size */
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 11b922fde7af16..7bd8390f2fba5e 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1213,7 +1213,6 @@ DEFINE_SIMPLE_PROP(iommus, "iommus", "#iommu-cells")
+ DEFINE_SIMPLE_PROP(mboxes, "mboxes", "#mbox-cells")
+ DEFINE_SIMPLE_PROP(io_channels, "io-channels", "#io-channel-cells")
+ DEFINE_SIMPLE_PROP(io_backends, "io-backends", "#io-backend-cells")
+-DEFINE_SIMPLE_PROP(interrupt_parent, "interrupt-parent", NULL)
+ DEFINE_SIMPLE_PROP(dmas, "dmas", "#dma-cells")
+ DEFINE_SIMPLE_PROP(power_domains, "power-domains", "#power-domain-cells")
+ DEFINE_SIMPLE_PROP(hwlocks, "hwlocks", "#hwlock-cells")
+@@ -1359,7 +1358,6 @@ static const struct supplier_bindings of_supplier_bindings[] = {
+ { .parse_prop = parse_mboxes, },
+ { .parse_prop = parse_io_channels, },
+ { .parse_prop = parse_io_backends, },
+- { .parse_prop = parse_interrupt_parent, },
+ { .parse_prop = parse_dmas, .optional = true, },
+ { .parse_prop = parse_power_domains, },
+ { .parse_prop = parse_hwlocks, },
+diff --git a/drivers/platform/x86/p2sb.c b/drivers/platform/x86/p2sb.c
+index 31f38309b389ab..c56650b9ff9628 100644
+--- a/drivers/platform/x86/p2sb.c
++++ b/drivers/platform/x86/p2sb.c
+@@ -42,6 +42,7 @@ struct p2sb_res_cache {
+ };
+
+ static struct p2sb_res_cache p2sb_resources[NR_P2SB_RES_CACHE];
++static bool p2sb_hidden_by_bios;
+
+ static void p2sb_get_devfn(unsigned int *devfn)
+ {
+@@ -96,6 +97,12 @@ static void p2sb_scan_and_cache_devfn(struct pci_bus *bus, unsigned int devfn)
+
+ static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
+ {
++ /*
++ * The BIOS prevents the P2SB device from being enumerated by the PCI
++	 * subsystem, so we need to unhide and re-hide it to look up the BAR.
++ */
++ pci_bus_write_config_dword(bus, devfn, P2SBC, 0);
++
+ /* Scan the P2SB device and cache its BAR0 */
+ p2sb_scan_and_cache_devfn(bus, devfn);
+
+@@ -103,6 +110,8 @@ static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
+ if (devfn == P2SB_DEVFN_GOLDMONT)
+ p2sb_scan_and_cache_devfn(bus, SPI_DEVFN_GOLDMONT);
+
++ pci_bus_write_config_dword(bus, devfn, P2SBC, P2SBC_HIDE);
++
+ if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res))
+ return -ENOENT;
+
+@@ -128,7 +137,7 @@ static int p2sb_cache_resources(void)
+ u32 value = P2SBC_HIDE;
+ struct pci_bus *bus;
+ u16 class;
+- int ret;
++ int ret = 0;
+
+ /* Get devfn for P2SB device itself */
+ p2sb_get_devfn(&devfn_p2sb);
+@@ -151,22 +160,53 @@ static int p2sb_cache_resources(void)
+ */
+ pci_lock_rescan_remove();
+
++ pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
++ p2sb_hidden_by_bios = value & P2SBC_HIDE;
++
+ /*
+- * The BIOS prevents the P2SB device from being enumerated by the PCI
+- * subsystem, so we need to unhide and hide it back to lookup the BAR.
+- * Unhide the P2SB device here, if needed.
++ * If the BIOS does not hide the P2SB device then its resources
++	 * are accessible. Cache them only if the P2SB device is hidden.
+ */
+- pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
+- if (value & P2SBC_HIDE)
+- pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0);
++ if (p2sb_hidden_by_bios)
++ ret = p2sb_scan_and_cache(bus, devfn_p2sb);
+
+- ret = p2sb_scan_and_cache(bus, devfn_p2sb);
++ pci_unlock_rescan_remove();
+
+- /* Hide the P2SB device, if it was hidden */
+- if (value & P2SBC_HIDE)
+- pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE);
++ return ret;
++}
+
+- pci_unlock_rescan_remove();
++static int p2sb_read_from_cache(struct pci_bus *bus, unsigned int devfn,
++ struct resource *mem)
++{
++ struct p2sb_res_cache *cache = &p2sb_resources[PCI_FUNC(devfn)];
++
++ if (cache->bus_dev_id != bus->dev.id)
++ return -ENODEV;
++
++ if (!p2sb_valid_resource(&cache->res))
++ return -ENOENT;
++
++ memcpy(mem, &cache->res, sizeof(*mem));
++
++ return 0;
++}
++
++static int p2sb_read_from_dev(struct pci_bus *bus, unsigned int devfn,
++ struct resource *mem)
++{
++ struct pci_dev *pdev;
++ int ret = 0;
++
++ pdev = pci_get_slot(bus, devfn);
++ if (!pdev)
++ return -ENODEV;
++
++ if (p2sb_valid_resource(pci_resource_n(pdev, 0)))
++ p2sb_read_bar0(pdev, mem);
++ else
++ ret = -ENOENT;
++
++ pci_dev_put(pdev);
+
+ return ret;
+ }
+@@ -187,8 +227,6 @@ static int p2sb_cache_resources(void)
+ */
+ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ {
+- struct p2sb_res_cache *cache;
+-
+ bus = p2sb_get_bus(bus);
+ if (!bus)
+ return -ENODEV;
+@@ -196,15 +234,10 @@ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ if (!devfn)
+ p2sb_get_devfn(&devfn);
+
+- cache = &p2sb_resources[PCI_FUNC(devfn)];
+- if (cache->bus_dev_id != bus->dev.id)
+- return -ENODEV;
++ if (p2sb_hidden_by_bios)
++ return p2sb_read_from_cache(bus, devfn, mem);
+
+- if (!p2sb_valid_resource(&cache->res))
+- return -ENOENT;
+-
+- memcpy(mem, &cache->res, sizeof(*mem));
+- return 0;
++ return p2sb_read_from_dev(bus, devfn, mem);
+ }
+ EXPORT_SYMBOL_GPL(p2sb_bar);
+
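Taken together, the p2sb changes above latch whether the BIOS hides the device once at init and then split the lookup: serve BAR0 from the boot-time cache only when the device is hidden, otherwise read it live from the enumerable device. A toy sketch of that dispatch (all names invented; values are placeholders):

#include <stdbool.h>
#include <stdio.h>

static bool hidden_by_bios;	/* latched once at init time, as in the patch */

static int read_from_cache(unsigned int *bar) { *bar = 0xfd000000u; return 0; }
static int read_from_dev(unsigned int *bar)   { *bar = 0xfe000000u; return 0; }

static int get_bar(unsigned int *bar)
{
	/* The cached copy is only needed (and trusted) when hidden. */
	return hidden_by_bios ? read_from_cache(bar) : read_from_dev(bar);
}

int main(void)
{
	unsigned int bar;

	hidden_by_bios = true;
	if (!get_bar(&bar))
		printf("BAR0 = %#x\n", bar);
	return 0;
}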
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index 7af2642b97cb81..7c42e303740af2 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -1520,6 +1520,14 @@ static struct pci_device_id nhi_ids[] = {
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
+
+diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
+index 7a07c7c1a9c2c6..16744f25a9a069 100644
+--- a/drivers/thunderbolt/nhi.h
++++ b/drivers/thunderbolt/nhi.h
+@@ -92,6 +92,10 @@ extern const struct tb_nhi_ops icl_nhi_ops;
+ #define PCI_DEVICE_ID_INTEL_RPL_NHI1 0xa76d
+ #define PCI_DEVICE_ID_INTEL_LNL_NHI0 0xa833
+ #define PCI_DEVICE_ID_INTEL_LNL_NHI1 0xa834
++#define PCI_DEVICE_ID_INTEL_PTL_M_NHI0 0xe333
++#define PCI_DEVICE_ID_INTEL_PTL_M_NHI1 0xe334
++#define PCI_DEVICE_ID_INTEL_PTL_P_NHI0 0xe433
++#define PCI_DEVICE_ID_INTEL_PTL_P_NHI1 0xe434
+
+ #define PCI_CLASS_SERIAL_USB_USB4 0x0c0340
+
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 89d2919d0193e8..eeb64433ebbca0 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -103,6 +103,7 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
+
+ err_nvm:
+ dev_dbg(&rt->dev, "NVM upgrade disabled\n");
++ rt->no_nvm_upgrade = true;
+ if (!IS_ERR(nvm))
+ tb_nvm_free(nvm);
+
+@@ -182,8 +183,6 @@ static ssize_t nvm_authenticate_show(struct device *dev,
+
+ if (!rt->nvm)
+ ret = -EAGAIN;
+- else if (rt->no_nvm_upgrade)
+- ret = -EOPNOTSUPP;
+ else
+ ret = sysfs_emit(buf, "%#x\n", rt->auth_status);
+
+@@ -323,8 +322,6 @@ static ssize_t nvm_version_show(struct device *dev,
+
+ if (!rt->nvm)
+ ret = -EAGAIN;
+- else if (rt->no_nvm_upgrade)
+- ret = -EOPNOTSUPP;
+ else
+ ret = sysfs_emit(buf, "%x.%x\n", rt->nvm->major, rt->nvm->minor);
+
+@@ -342,6 +339,19 @@ static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_RO(vendor);
+
++static umode_t retimer_is_visible(struct kobject *kobj, struct attribute *attr,
++ int n)
++{
++ struct device *dev = kobj_to_dev(kobj);
++ struct tb_retimer *rt = tb_to_retimer(dev);
++
++ if (attr == &dev_attr_nvm_authenticate.attr ||
++ attr == &dev_attr_nvm_version.attr)
++ return rt->no_nvm_upgrade ? 0 : attr->mode;
++
++ return attr->mode;
++}
++
+ static struct attribute *retimer_attrs[] = {
+ &dev_attr_device.attr,
+ &dev_attr_nvm_authenticate.attr,
+@@ -351,6 +361,7 @@ static struct attribute *retimer_attrs[] = {
+ };
+
+ static const struct attribute_group retimer_group = {
++ .is_visible = retimer_is_visible,
+ .attrs = retimer_attrs,
+ };
+
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 4f777788e9179c..a7c6919fbf9788 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -2059,6 +2059,37 @@ static void tb_exit_redrive(struct tb_port *port)
+ }
+ }
+
++static void tb_switch_enter_redrive(struct tb_switch *sw)
++{
++ struct tb_port *port;
++
++ tb_switch_for_each_port(sw, port)
++ tb_enter_redrive(port);
++}
++
++/*
++ * Called during system and runtime suspend to forcefully exit redrive
++ * mode without querying whether the resource is available.
++ */
++static void tb_switch_exit_redrive(struct tb_switch *sw)
++{
++ struct tb_port *port;
++
++ if (!(sw->quirks & QUIRK_KEEP_POWER_IN_DP_REDRIVE))
++ return;
++
++ tb_switch_for_each_port(sw, port) {
++ if (!tb_port_is_dpin(port))
++ continue;
++
++ if (port->redrive) {
++ port->redrive = false;
++ pm_runtime_put(&sw->dev);
++ tb_port_dbg(port, "exit redrive mode\n");
++ }
++ }
++}
++
+ static void tb_dp_resource_unavailable(struct tb *tb, struct tb_port *port)
+ {
+ struct tb_port *in, *out;
+@@ -2909,6 +2940,7 @@ static int tb_start(struct tb *tb, bool reset)
+ tb_create_usb3_tunnels(tb->root_switch);
+ /* Add DP IN resources for the root switch */
+ tb_add_dp_resources(tb->root_switch);
++ tb_switch_enter_redrive(tb->root_switch);
+ /* Make the discovered switches available to the userspace */
+ device_for_each_child(&tb->root_switch->dev, NULL,
+ tb_scan_finalize_switch);
+@@ -2924,6 +2956,7 @@ static int tb_suspend_noirq(struct tb *tb)
+
+ tb_dbg(tb, "suspending...\n");
+ tb_disconnect_and_release_dp(tb);
++ tb_switch_exit_redrive(tb->root_switch);
+ tb_switch_suspend(tb->root_switch, false);
+ tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
+ tb_dbg(tb, "suspend finished\n");
+@@ -3016,6 +3049,7 @@ static int tb_resume_noirq(struct tb *tb)
+ tb_dbg(tb, "tunnels restarted, sleeping for 100ms\n");
+ msleep(100);
+ }
++ tb_switch_enter_redrive(tb->root_switch);
+ /* Allow tb_handle_hotplug to progress events */
+ tcm->hotplug_active = true;
+ tb_dbg(tb, "resume finished\n");
+@@ -3079,6 +3113,12 @@ static int tb_runtime_suspend(struct tb *tb)
+ struct tb_cm *tcm = tb_priv(tb);
+
+ mutex_lock(&tb->lock);
++ /*
++	 * The call below only releases DP resources to allow exiting and
++ * re-entering redrive mode.
++ */
++ tb_disconnect_and_release_dp(tb);
++ tb_switch_exit_redrive(tb->root_switch);
+ tb_switch_suspend(tb->root_switch, true);
+ tcm->hotplug_active = false;
+ mutex_unlock(&tb->lock);
+@@ -3110,6 +3150,7 @@ static int tb_runtime_resume(struct tb *tb)
+ tb_restore_children(tb->root_switch);
+ list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
+ tb_tunnel_restart(tunnel);
++ tb_switch_enter_redrive(tb->root_switch);
+ tcm->hotplug_active = true;
+ mutex_unlock(&tb->lock);
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index f318864732f2db..b267dae14d3904 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1192,8 +1192,6 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ * Keep retrying until the EP starts and stops again, on
+ * chips where this is known to help. Wait for 100ms.
+ */
+- if (!(xhci->quirks & XHCI_NEC_HOST))
+- break;
+ if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
+ break;
+ fallthrough;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 9ba5584061c8c4..64317b390d2285 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -625,6 +625,8 @@ static void option_instat_callback(struct urb *urb);
+ #define MEIGSMART_PRODUCT_SRM825L 0x4d22
+ /* MeiG Smart SLM320 based on UNISOC UIS8910 */
+ #define MEIGSMART_PRODUCT_SLM320 0x4d41
++/* MeiG Smart SLM770A based on ASR1803 */
++#define MEIGSMART_PRODUCT_SLM770A 0x4d57
+
+ /* Device flags */
+
+@@ -1395,6 +1397,12 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */
+ .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */
++ .driver_info = RSVD(0) | NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */
++ .driver_info = RSVD(0) | NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */
++ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -2247,6 +2255,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(2) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7127, 0xff, 0x00, 0x00),
+ .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) },
++ { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7129, 0xff, 0x00, 0x00), /* MediaTek T7XX */
++ .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) },
+ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
+ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200),
+ .driver_info = RSVD(1) | RSVD(4) },
+@@ -2375,6 +2385,18 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for Golbal EDU */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0x00, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+@@ -2382,9 +2404,14 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
++ .driver_info = NCTRL(1) },
++ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */
++ .driver_info = NCTRL(3) },
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index 7e0f9600b80c43..3d2376caedfa68 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -649,8 +649,14 @@ static u64 btrfs_append_map_length(struct btrfs_bio *bbio, u64 map_length)
+ map_length = min(map_length, bbio->fs_info->max_zone_append_size);
+ sector_offset = bio_split_rw_at(&bbio->bio, &bbio->fs_info->limits,
+ &nr_segs, map_length);
+- if (sector_offset)
+- return sector_offset << SECTOR_SHIFT;
++ if (sector_offset) {
++ /*
++ * bio_split_rw_at() could split at a size smaller than our
++ * sectorsize and thus cause unaligned I/Os. Fix that by
++ * always rounding down to the nearest boundary.
++ */
++ return ALIGN_DOWN(sector_offset << SECTOR_SHIFT, bbio->fs_info->sectorsize);
++ }
+ return map_length;
+ }
+
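The btrfs_append_map_length() fix above rounds the split point down to the filesystem sector size, since bio_split_rw_at() may return an offset that is 512-byte aligned but not sectorsize-aligned. A one-liner worth seeing with numbers (the macro here assumes a power-of-two alignment, matching the kernel's ALIGN_DOWN for that case):

#include <stdio.h>

/* Power-of-two round-down. */
#define ALIGN_DOWN(x, a) ((x) & ~((unsigned long long)(a) - 1))

int main(void)
{
	unsigned long long sectorsize = 4096;	/* typical btrfs sectorsize */
	unsigned long long split = 9ULL << 9;	/* 9 sectors of 512B = 4608 */

	/* 4608 is 512B-aligned but not 4K-aligned; round down to 4096. */
	printf("%llu\n", ALIGN_DOWN(split, sectorsize));
	return 0;
}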
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 317a3712270fc0..2034d371083331 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -370,6 +370,25 @@ static inline void btrfs_set_root_last_trans(struct btrfs_root *root, u64 transi
+ WRITE_ONCE(root->last_trans, transid);
+ }
+
++/*
++ * Return the generation this root started with.
++ *
++ * Every normal root is created with root->root_key.offset set to its
++ * originating generation.  If it is a snapshot it is the generation when the
++ * snapshot was created.
++ *
++ * However, for TREE_RELOC roots root_key.offset is the objectid of the owning
++ * tree root.  Thankfully we copy the root item of the owning tree root, which
++ * has its last_snapshot set to what we would have root_key.offset set to, so
++ * return that if this is a TREE_RELOC root.
++ */
++static inline u64 btrfs_root_origin_generation(const struct btrfs_root *root)
++{
++ if (btrfs_root_id(root) == BTRFS_TREE_RELOC_OBJECTID)
++ return btrfs_root_last_snapshot(&root->root_item);
++ return root->root_key.offset;
++}
++
+ /*
+ * Structure that conveys information about an extent that is going to replace
+ * all the extents in a file range.
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index b43a8611aca5c6..f3e93ba7ec97fa 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5308,7 +5308,7 @@ static bool visit_node_for_delete(struct btrfs_root *root, struct walk_control *
+ * reference to it.
+ */
+ generation = btrfs_node_ptr_generation(eb, slot);
+- if (!wc->update_ref || generation <= root->root_key.offset)
++ if (!wc->update_ref || generation <= btrfs_root_origin_generation(root))
+ return false;
+
+ /*
+@@ -5363,7 +5363,7 @@ static noinline void reada_walk_down(struct btrfs_trans_handle *trans,
+ goto reada;
+
+ if (wc->stage == UPDATE_BACKREF &&
+- generation <= root->root_key.offset)
++ generation <= btrfs_root_origin_generation(root))
+ continue;
+
+ /* We don't lock the tree block, it's OK to be racy here */
+@@ -5706,7 +5706,7 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ * for the subtree
+ */
+ if (wc->stage == UPDATE_BACKREF &&
+- generation <= root->root_key.offset) {
++ generation <= btrfs_root_origin_generation(root)) {
+ wc->lookup_info = 1;
+ return 1;
+ }
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 7b50263723bc1a..ffa5b83d3a4a3a 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1527,6 +1527,11 @@ static int check_extent_item(struct extent_buffer *leaf,
+ dref_offset, fs_info->sectorsize);
+ return -EUCLEAN;
+ }
++ if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid data ref count, should have non-zero value");
++ return -EUCLEAN;
++ }
+ inline_refs += btrfs_extent_data_ref_count(leaf, dref);
+ break;
+ /* Contains parent bytenr and ref count */
+@@ -1539,6 +1544,11 @@ static int check_extent_item(struct extent_buffer *leaf,
+ inline_offset, fs_info->sectorsize);
+ return -EUCLEAN;
+ }
++ if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid shared data ref count, should have non-zero value");
++ return -EUCLEAN;
++ }
+ inline_refs += btrfs_shared_data_ref_count(leaf, sref);
+ break;
+ case BTRFS_EXTENT_OWNER_REF_KEY:
+@@ -1611,8 +1621,18 @@ static int check_simple_keyed_refs(struct extent_buffer *leaf,
+ {
+ u32 expect_item_size = 0;
+
+- if (key->type == BTRFS_SHARED_DATA_REF_KEY)
++ if (key->type == BTRFS_SHARED_DATA_REF_KEY) {
++ struct btrfs_shared_data_ref *sref;
++
++ sref = btrfs_item_ptr(leaf, slot, struct btrfs_shared_data_ref);
++ if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid shared data backref count, should have non-zero value");
++ return -EUCLEAN;
++ }
++
+ expect_item_size = sizeof(struct btrfs_shared_data_ref);
++ }
+
+ if (unlikely(btrfs_item_size(leaf, slot) != expect_item_size)) {
+ generic_err(leaf, slot,
+@@ -1689,6 +1709,11 @@ static int check_extent_data_ref(struct extent_buffer *leaf,
+ offset, leaf->fs_info->sectorsize);
+ return -EUCLEAN;
+ }
++ if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid extent data backref count, should have non-zero value");
++ return -EUCLEAN;
++ }
+ }
+ return 0;
+ }
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 4b8d59ebda0092..67468d88f13908 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1066,7 +1066,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ if (ceph_inode_is_shutdown(inode))
+ return -EIO;
+
+- if (!len)
++ if (!len || !i_size)
+ return 0;
+ /*
+ * flush any page cache pages in this range. this
+@@ -1086,7 +1086,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ int num_pages;
+ size_t page_off;
+ bool more;
+- int idx;
++ int idx = 0;
+ size_t left;
+ struct ceph_osd_req_op *op;
+ u64 read_off = off;
+@@ -1116,6 +1116,16 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ len = read_off + read_len - off;
+ more = len < iov_iter_count(to);
+
++ op = &req->r_ops[0];
++ if (sparse) {
++ extent_cnt = __ceph_sparse_read_ext_count(inode, read_len);
++ ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
++ if (ret) {
++ ceph_osdc_put_request(req);
++ break;
++ }
++ }
++
+ num_pages = calc_pages_for(read_off, read_len);
+ page_off = offset_in_page(off);
+ pages = ceph_alloc_page_vector(num_pages, GFP_KERNEL);
+@@ -1127,17 +1137,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+
+ osd_req_op_extent_osd_data_pages(req, 0, pages, read_len,
+ offset_in_page(read_off),
+- false, false);
+-
+- op = &req->r_ops[0];
+- if (sparse) {
+- extent_cnt = __ceph_sparse_read_ext_count(inode, read_len);
+- ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
+- if (ret) {
+- ceph_osdc_put_request(req);
+- break;
+- }
+- }
++ false, true);
+
+ ceph_osdc_start_request(osdc, req);
+ ret = ceph_osdc_wait_request(osdc, req);
+@@ -1160,7 +1160,14 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ else if (ret == -ENOENT)
+ ret = 0;
+
+- if (ret > 0 && IS_ENCRYPTED(inode)) {
++ if (ret < 0) {
++ ceph_osdc_put_request(req);
++ if (ret == -EBLOCKLISTED)
++ fsc->blocklisted = true;
++ break;
++ }
++
++ if (IS_ENCRYPTED(inode)) {
+ int fret;
+
+ fret = ceph_fscrypt_decrypt_extents(inode, pages,
+@@ -1186,10 +1193,8 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ ret = min_t(ssize_t, fret, len);
+ }
+
+- ceph_osdc_put_request(req);
+-
+ /* Short read but not EOF? Zero out the remainder. */
+- if (ret >= 0 && ret < len && (off + ret < i_size)) {
++ if (ret < len && (off + ret < i_size)) {
+ int zlen = min(len - ret, i_size - off - ret);
+ int zoff = page_off + ret;
+
+@@ -1199,13 +1204,11 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ ret += zlen;
+ }
+
+- idx = 0;
+- if (ret <= 0)
+- left = 0;
+- else if (off + ret > i_size)
+- left = i_size - off;
++ if (off + ret > i_size)
++ left = (i_size > off) ? i_size - off : 0;
+ else
+ left = ret;
++
+ while (left > 0) {
+ size_t plen, copied;
+
+@@ -1221,13 +1224,8 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ break;
+ }
+ }
+- ceph_release_page_vector(pages, num_pages);
+
+- if (ret < 0) {
+- if (ret == -EBLOCKLISTED)
+- fsc->blocklisted = true;
+- break;
+- }
++ ceph_osdc_put_request(req);
+
+ if (off >= i_size || !more)
+ break;
+@@ -1553,6 +1551,16 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ break;
+ }
+
++ op = &req->r_ops[0];
++ if (sparse) {
++ extent_cnt = __ceph_sparse_read_ext_count(inode, size);
++ ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
++ if (ret) {
++ ceph_osdc_put_request(req);
++ break;
++ }
++ }
++
+ len = iter_get_bvecs_alloc(iter, size, &bvecs, &num_pages);
+ if (len < 0) {
+ ceph_osdc_put_request(req);
+@@ -1562,6 +1570,8 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ if (len != size)
+ osd_req_op_extent_update(req, 0, len);
+
++ osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len);
++
+ /*
+ * To simplify error handling, allow AIO when IO within i_size
+ * or IO can be satisfied by single OSD request.
+@@ -1593,17 +1603,6 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ req->r_mtime = mtime;
+ }
+
+- osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len);
+- op = &req->r_ops[0];
+- if (sparse) {
+- extent_cnt = __ceph_sparse_read_ext_count(inode, size);
+- ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
+- if (ret) {
+- ceph_osdc_put_request(req);
+- break;
+- }
+- }
+-
+ if (aio_req) {
+ aio_req->total_len += len;
+ aio_req->num_reqs++;
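A theme in the __ceph_sync_read()/ceph_direct_read_write() reordering above: allocate the fallible sparse-extent map before buffers whose ownership passes to the request (note the own_pages argument flipped to true), so error paths neither leak nor double-free. A toy model of that ownership handoff (invented struct and helpers, not the Ceph API):

#include <stdio.h>
#include <stdlib.h>

struct toy_req {
	void *buf;
	int owns_buf;		/* mirrors the own_pages flag above */
};

static void attach_buf(struct toy_req *r, void *buf, int own)
{
	r->buf = buf;
	r->owns_buf = own;
}

static void put_req(struct toy_req *r)
{
	if (r->owns_buf)
		free(r->buf);	/* freed exactly once, by the request */
}

int main(void)
{
	struct toy_req r;

	/* Hand ownership over: the caller must NOT also free the buffer. */
	attach_buf(&r, malloc(4096), 1);
	put_req(&r);
	printf("done\n");
	return 0;
}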
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index cf92b75745e2a5..f48242262b2177 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -2808,12 +2808,11 @@ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+
+ if (pos < 0) {
+ /*
+- * A rename didn't occur, but somehow we didn't end up where
+- * we thought we would. Throw a warning and try again.
++ * The path is longer than PATH_MAX and this function
++ * cannot ever succeed. Creating paths that long is
++ * possible with Ceph, but Linux cannot use them.
+ */
+- pr_warn_client(cl, "did not end path lookup where expected (pos = %d)\n",
+- pos);
+- goto retry;
++ return ERR_PTR(-ENAMETOOLONG);
+ }
+
+ *pbase = base;
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 86480e5a215e51..c235f9a60394c2 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -431,6 +431,8 @@ static int ceph_parse_mount_param(struct fs_context *fc,
+
+ switch (token) {
+ case Opt_snapdirname:
++ if (strlen(param->string) > NAME_MAX)
++ return invalfc(fc, "snapdirname too long");
+ kfree(fsopt->snapdir_name);
+ fsopt->snapdir_name = param->string;
+ param->string = NULL;
+diff --git a/fs/efivarfs/inode.c b/fs/efivarfs/inode.c
+index 586446e02ef72d..ec23da8405ff8e 100644
+--- a/fs/efivarfs/inode.c
++++ b/fs/efivarfs/inode.c
+@@ -51,7 +51,7 @@ struct inode *efivarfs_get_inode(struct super_block *sb,
+ *
+ * VariableName-12345678-1234-1234-1234-1234567891bc
+ */
+-bool efivarfs_valid_name(const char *str, int len)
++static bool efivarfs_valid_name(const char *str, int len)
+ {
+ const char *s = str + len - EFI_VARIABLE_GUID_LEN;
+
+diff --git a/fs/efivarfs/internal.h b/fs/efivarfs/internal.h
+index d71d2e08422f09..74f0602a9e016c 100644
+--- a/fs/efivarfs/internal.h
++++ b/fs/efivarfs/internal.h
+@@ -60,7 +60,6 @@ bool efivar_variable_is_removable(efi_guid_t vendor, const char *name,
+
+ extern const struct file_operations efivarfs_file_operations;
+ extern const struct inode_operations efivarfs_dir_inode_operations;
+-extern bool efivarfs_valid_name(const char *str, int len);
+ extern struct inode *efivarfs_get_inode(struct super_block *sb,
+ const struct inode *dir, int mode, dev_t dev,
+ bool is_removable);
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index a929f1b613be84..beba15673be8d3 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -144,9 +144,6 @@ static int efivarfs_d_hash(const struct dentry *dentry, struct qstr *qstr)
+ const unsigned char *s = qstr->name;
+ unsigned int len = qstr->len;
+
+- if (!efivarfs_valid_name(s, len))
+- return -EINVAL;
+-
+ while (len-- > EFI_VARIABLE_GUID_LEN)
+ hash = partial_name_hash(*s++, hash);
+
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index fa51437e1d99d9..722151d3fee8b4 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -63,10 +63,10 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+
+ buf->file = NULL;
+ if (erofs_is_fileio_mode(sbi)) {
+- buf->file = sbi->fdev; /* some fs like FUSE needs it */
++ buf->file = sbi->dif0.file; /* some fs like FUSE needs it */
+ buf->mapping = buf->file->f_mapping;
+ } else if (erofs_is_fscache_mode(sb))
+- buf->mapping = sbi->s_fscache->inode->i_mapping;
++ buf->mapping = sbi->dif0.fscache->inode->i_mapping;
+ else
+ buf->mapping = sb->s_bdev->bd_mapping;
+ }
+@@ -186,19 +186,13 @@ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+ }
+
+ static void erofs_fill_from_devinfo(struct erofs_map_dev *map,
+- struct erofs_device_info *dif)
++ struct super_block *sb, struct erofs_device_info *dif)
+ {
++ map->m_sb = sb;
++ map->m_dif = dif;
+ map->m_bdev = NULL;
+- map->m_fp = NULL;
+- if (dif->file) {
+- if (S_ISBLK(file_inode(dif->file)->i_mode))
+- map->m_bdev = file_bdev(dif->file);
+- else
+- map->m_fp = dif->file;
+- }
+- map->m_daxdev = dif->dax_dev;
+- map->m_dax_part_off = dif->dax_part_off;
+- map->m_fscache = dif->fscache;
++ if (dif->file && S_ISBLK(file_inode(dif->file)->i_mode))
++ map->m_bdev = file_bdev(dif->file);
+ }
+
+ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+@@ -208,12 +202,8 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ erofs_off_t startoff, length;
+ int id;
+
+- map->m_bdev = sb->s_bdev;
+- map->m_daxdev = EROFS_SB(sb)->dax_dev;
+- map->m_dax_part_off = EROFS_SB(sb)->dax_part_off;
+- map->m_fscache = EROFS_SB(sb)->s_fscache;
+- map->m_fp = EROFS_SB(sb)->fdev;
+-
++ erofs_fill_from_devinfo(map, sb, &EROFS_SB(sb)->dif0);
++ map->m_bdev = sb->s_bdev; /* use s_bdev for the primary device */
+ if (map->m_deviceid) {
+ down_read(&devs->rwsem);
+ dif = idr_find(&devs->tree, map->m_deviceid - 1);
+@@ -226,7 +216,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ up_read(&devs->rwsem);
+ return 0;
+ }
+- erofs_fill_from_devinfo(map, dif);
++ erofs_fill_from_devinfo(map, sb, dif);
+ up_read(&devs->rwsem);
+ } else if (devs->extra_devices && !devs->flatdev) {
+ down_read(&devs->rwsem);
+@@ -239,7 +229,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ if (map->m_pa >= startoff &&
+ map->m_pa < startoff + length) {
+ map->m_pa -= startoff;
+- erofs_fill_from_devinfo(map, dif);
++ erofs_fill_from_devinfo(map, sb, dif);
+ break;
+ }
+ }
+@@ -309,7 +299,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+
+ iomap->offset = map.m_la;
+ if (flags & IOMAP_DAX)
+- iomap->dax_dev = mdev.m_daxdev;
++ iomap->dax_dev = mdev.m_dif->dax_dev;
+ else
+ iomap->bdev = mdev.m_bdev;
+ iomap->length = map.m_llen;
+@@ -338,7 +328,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ iomap->type = IOMAP_MAPPED;
+ iomap->addr = mdev.m_pa;
+ if (flags & IOMAP_DAX)
+- iomap->addr += mdev.m_dax_part_off;
++ iomap->addr += mdev.m_dif->dax_part_off;
+ }
+ return 0;
+ }
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 3af96b1e2c2aa8..33f8539dda4aeb 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -9,6 +9,7 @@ struct erofs_fileio_rq {
+ struct bio_vec bvecs[BIO_MAX_VECS];
+ struct bio bio;
+ struct kiocb iocb;
++ struct super_block *sb;
+ };
+
+ struct erofs_fileio {
+@@ -52,8 +53,9 @@ static void erofs_fileio_rq_submit(struct erofs_fileio_rq *rq)
+ rq->iocb.ki_pos = rq->bio.bi_iter.bi_sector << SECTOR_SHIFT;
+ rq->iocb.ki_ioprio = get_current_ioprio();
+ rq->iocb.ki_complete = erofs_fileio_ki_complete;
+- rq->iocb.ki_flags = (rq->iocb.ki_filp->f_mode & FMODE_CAN_ODIRECT) ?
+- IOCB_DIRECT : 0;
++ if (test_opt(&EROFS_SB(rq->sb)->opt, DIRECT_IO) &&
++ rq->iocb.ki_filp->f_mode & FMODE_CAN_ODIRECT)
++ rq->iocb.ki_flags = IOCB_DIRECT;
+ iov_iter_bvec(&iter, ITER_DEST, rq->bvecs, rq->bio.bi_vcnt,
+ rq->bio.bi_iter.bi_size);
+ ret = vfs_iocb_iter_read(rq->iocb.ki_filp, &rq->iocb, &iter);
+@@ -67,7 +69,8 @@ static struct erofs_fileio_rq *erofs_fileio_rq_alloc(struct erofs_map_dev *mdev)
+ GFP_KERNEL | __GFP_NOFAIL);
+
+ bio_init(&rq->bio, NULL, rq->bvecs, BIO_MAX_VECS, REQ_OP_READ);
+- rq->iocb.ki_filp = mdev->m_fp;
++ rq->iocb.ki_filp = mdev->m_dif->file;
++ rq->sb = mdev->m_sb;
+ return rq;
+ }
+
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index fda16eedafb578..ce3d8737df85d4 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -198,7 +198,7 @@ struct bio *erofs_fscache_bio_alloc(struct erofs_map_dev *mdev)
+
+ io = kmalloc(sizeof(*io), GFP_KERNEL | __GFP_NOFAIL);
+ bio_init(&io->bio, NULL, io->bvecs, BIO_MAX_VECS, REQ_OP_READ);
+- io->io.private = mdev->m_fscache->cookie;
++ io->io.private = mdev->m_dif->fscache->cookie;
+ io->io.end_io = erofs_fscache_bio_endio;
+ refcount_set(&io->io.ref, 1);
+ return &io->bio;
+@@ -316,7 +316,7 @@ static int erofs_fscache_data_read_slice(struct erofs_fscache_rq *req)
+ if (!io)
+ return -ENOMEM;
+ iov_iter_xarray(&io->iter, ITER_DEST, &mapping->i_pages, pos, count);
+- ret = erofs_fscache_read_io_async(mdev.m_fscache->cookie,
++ ret = erofs_fscache_read_io_async(mdev.m_dif->fscache->cookie,
+ mdev.m_pa + (pos - map.m_la), io);
+ erofs_fscache_req_io_put(io);
+
+@@ -657,7 +657,7 @@ int erofs_fscache_register_fs(struct super_block *sb)
+ if (IS_ERR(fscache))
+ return PTR_ERR(fscache);
+
+- sbi->s_fscache = fscache;
++ sbi->dif0.fscache = fscache;
+ return 0;
+ }
+
+@@ -665,14 +665,14 @@ void erofs_fscache_unregister_fs(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- erofs_fscache_unregister_cookie(sbi->s_fscache);
++ erofs_fscache_unregister_cookie(sbi->dif0.fscache);
+
+ if (sbi->domain)
+ erofs_fscache_domain_put(sbi->domain);
+ else
+ fscache_relinquish_volume(sbi->volume, NULL, false);
+
+- sbi->s_fscache = NULL;
++ sbi->dif0.fscache = NULL;
+ sbi->volume = NULL;
+ sbi->domain = NULL;
+ }
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 9b03c8f323a762..77e785a6dfa7ff 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -113,6 +113,7 @@ struct erofs_xattr_prefix_item {
+ };
+
+ struct erofs_sb_info {
++ struct erofs_device_info dif0;
+ struct erofs_mount_opts opt; /* options */
+ #ifdef CONFIG_EROFS_FS_ZIP
+ /* list for all registered superblocks, mainly for shrinker */
+@@ -130,13 +131,9 @@ struct erofs_sb_info {
+
+ struct erofs_sb_lz4_info lz4;
+ #endif /* CONFIG_EROFS_FS_ZIP */
+- struct file *fdev;
+ struct inode *packed_inode;
+ struct erofs_dev_context *devs;
+- struct dax_device *dax_dev;
+- u64 dax_part_off;
+ u64 total_blocks;
+- u32 primarydevice_blocks;
+
+ u32 meta_blkaddr;
+ #ifdef CONFIG_EROFS_FS_XATTR
+@@ -172,7 +169,6 @@ struct erofs_sb_info {
+
+ /* fscache support */
+ struct fscache_volume *volume;
+- struct erofs_fscache *s_fscache;
+ struct erofs_domain *domain;
+ char *fsid;
+ char *domain_id;
+@@ -186,6 +182,7 @@ struct erofs_sb_info {
+ #define EROFS_MOUNT_POSIX_ACL 0x00000020
+ #define EROFS_MOUNT_DAX_ALWAYS 0x00000040
+ #define EROFS_MOUNT_DAX_NEVER 0x00000080
++#define EROFS_MOUNT_DIRECT_IO 0x00000100
+
+ #define clear_opt(opt, option) ((opt)->mount_opt &= ~EROFS_MOUNT_##option)
+ #define set_opt(opt, option) ((opt)->mount_opt |= EROFS_MOUNT_##option)
+@@ -193,7 +190,7 @@ struct erofs_sb_info {
+
+ static inline bool erofs_is_fileio_mode(struct erofs_sb_info *sbi)
+ {
+- return IS_ENABLED(CONFIG_EROFS_FS_BACKED_BY_FILE) && sbi->fdev;
++ return IS_ENABLED(CONFIG_EROFS_FS_BACKED_BY_FILE) && sbi->dif0.file;
+ }
+
+ static inline bool erofs_is_fscache_mode(struct super_block *sb)
+@@ -370,11 +367,9 @@ enum {
+ };
+
+ struct erofs_map_dev {
+- struct erofs_fscache *m_fscache;
++ struct super_block *m_sb;
++ struct erofs_device_info *m_dif;
+ struct block_device *m_bdev;
+- struct dax_device *m_daxdev;
+- struct file *m_fp;
+- u64 m_dax_part_off;
+
+ erofs_off_t m_pa;
+ unsigned int m_deviceid;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 2dd7d819572f40..5b279977c9d5d6 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -218,7 +218,7 @@ static int erofs_scan_devices(struct super_block *sb,
+ struct erofs_device_info *dif;
+ int id, err = 0;
+
+- sbi->total_blocks = sbi->primarydevice_blocks;
++ sbi->total_blocks = sbi->dif0.blocks;
+ if (!erofs_sb_has_device_table(sbi))
+ ondisk_extradevs = 0;
+ else
+@@ -322,7 +322,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ sbi->sb_size);
+ goto out;
+ }
+- sbi->primarydevice_blocks = le32_to_cpu(dsb->blocks);
++ sbi->dif0.blocks = le32_to_cpu(dsb->blocks);
+ sbi->meta_blkaddr = le32_to_cpu(dsb->meta_blkaddr);
+ #ifdef CONFIG_EROFS_FS_XATTR
+ sbi->xattr_blkaddr = le32_to_cpu(dsb->xattr_blkaddr);
+@@ -379,14 +379,8 @@ static void erofs_default_options(struct erofs_sb_info *sbi)
+ }
+
+ enum {
+- Opt_user_xattr,
+- Opt_acl,
+- Opt_cache_strategy,
+- Opt_dax,
+- Opt_dax_enum,
+- Opt_device,
+- Opt_fsid,
+- Opt_domain_id,
++ Opt_user_xattr, Opt_acl, Opt_cache_strategy, Opt_dax, Opt_dax_enum,
++ Opt_device, Opt_fsid, Opt_domain_id, Opt_directio,
+ Opt_err
+ };
+
+@@ -413,6 +407,7 @@ static const struct fs_parameter_spec erofs_fs_parameters[] = {
+ fsparam_string("device", Opt_device),
+ fsparam_string("fsid", Opt_fsid),
+ fsparam_string("domain_id", Opt_domain_id),
++ fsparam_flag_no("directio", Opt_directio),
+ {}
+ };
+
+@@ -526,6 +521,16 @@ static int erofs_fc_parse_param(struct fs_context *fc,
+ errorfc(fc, "%s option not supported", erofs_fs_parameters[opt].name);
+ break;
+ #endif
++ case Opt_directio:
++#ifdef CONFIG_EROFS_FS_BACKED_BY_FILE
++ if (result.boolean)
++ set_opt(&sbi->opt, DIRECT_IO);
++ else
++ clear_opt(&sbi->opt, DIRECT_IO);
++#else
++ errorfc(fc, "%s option not supported", erofs_fs_parameters[opt].name);
++#endif
++ break;
+ default:
+ return -ENOPARAM;
+ }
+@@ -617,9 +622,8 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ return -EINVAL;
+ }
+
+- sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev,
+- &sbi->dax_part_off,
+- NULL, NULL);
++ sbi->dif0.dax_dev = fs_dax_get_by_bdev(sb->s_bdev,
++ &sbi->dif0.dax_part_off, NULL, NULL);
+ }
+
+ err = erofs_read_superblock(sb);
+@@ -642,7 +646,7 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ }
+
+ if (test_opt(&sbi->opt, DAX_ALWAYS)) {
+- if (!sbi->dax_dev) {
++ if (!sbi->dif0.dax_dev) {
+ errorfc(fc, "DAX unsupported by block device. Turning off DAX.");
+ clear_opt(&sbi->opt, DAX_ALWAYS);
+ } else if (sbi->blkszbits != PAGE_SHIFT) {
+@@ -718,16 +722,18 @@ static int erofs_fc_get_tree(struct fs_context *fc)
+ GET_TREE_BDEV_QUIET_LOOKUP : 0);
+ #ifdef CONFIG_EROFS_FS_BACKED_BY_FILE
+ if (ret == -ENOTBLK) {
++ struct file *file;
++
+ if (!fc->source)
+ return invalf(fc, "No source specified");
+- sbi->fdev = filp_open(fc->source, O_RDONLY | O_LARGEFILE, 0);
+- if (IS_ERR(sbi->fdev))
+- return PTR_ERR(sbi->fdev);
++ file = filp_open(fc->source, O_RDONLY | O_LARGEFILE, 0);
++ if (IS_ERR(file))
++ return PTR_ERR(file);
++ sbi->dif0.file = file;
+
+- if (S_ISREG(file_inode(sbi->fdev)->i_mode) &&
+- sbi->fdev->f_mapping->a_ops->read_folio)
++ if (S_ISREG(file_inode(sbi->dif0.file)->i_mode) &&
++ sbi->dif0.file->f_mapping->a_ops->read_folio)
+ return get_tree_nodev(fc, erofs_fc_fill_super);
+- fput(sbi->fdev);
+ }
+ #endif
+ return ret;
+@@ -778,19 +784,24 @@ static void erofs_free_dev_context(struct erofs_dev_context *devs)
+ kfree(devs);
+ }
+
+-static void erofs_fc_free(struct fs_context *fc)
++static void erofs_sb_free(struct erofs_sb_info *sbi)
+ {
+- struct erofs_sb_info *sbi = fc->s_fs_info;
+-
+- if (!sbi)
+- return;
+-
+ erofs_free_dev_context(sbi->devs);
+ kfree(sbi->fsid);
+ kfree(sbi->domain_id);
++ if (sbi->dif0.file)
++ fput(sbi->dif0.file);
+ kfree(sbi);
+ }
+
++static void erofs_fc_free(struct fs_context *fc)
++{
++ struct erofs_sb_info *sbi = fc->s_fs_info;
++
++ if (sbi) /* free here if an error occurs before transferring to sb */
++ erofs_sb_free(sbi);
++}
++
+ static const struct fs_context_operations erofs_context_ops = {
+ .parse_param = erofs_fc_parse_param,
+ .get_tree = erofs_fc_get_tree,
+@@ -824,19 +835,14 @@ static void erofs_kill_sb(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- if ((IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) || sbi->fdev)
++ if ((IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) ||
++ sbi->dif0.file)
+ kill_anon_super(sb);
+ else
+ kill_block_super(sb);
+-
+- erofs_free_dev_context(sbi->devs);
+- fs_put_dax(sbi->dax_dev, NULL);
++ fs_put_dax(sbi->dif0.dax_dev, NULL);
+ erofs_fscache_unregister_fs(sb);
+- kfree(sbi->fsid);
+- kfree(sbi->domain_id);
+- if (sbi->fdev)
+- fput(sbi->fdev);
+- kfree(sbi);
++ erofs_sb_free(sbi);
+ sb->s_fs_info = NULL;
+ }
+
+@@ -962,6 +968,8 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
+ seq_puts(seq, ",dax=always");
+ if (test_opt(opt, DAX_NEVER))
+ seq_puts(seq, ",dax=never");
++ if (erofs_is_fileio_mode(sbi) && test_opt(opt, DIRECT_IO))
++ seq_puts(seq, ",directio");
+ #ifdef CONFIG_EROFS_FS_ONDEMAND
+ if (sbi->fsid)
+ seq_printf(seq, ",fsid=%s", sbi->fsid);
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index a569ff9dfd0442..1a00f061798a3c 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1679,9 +1679,9 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ erofs_fscache_submit_bio(bio);
+ else
+ submit_bio(bio);
+- if (memstall)
+- psi_memstall_leave(&pflags);
+ }
++ if (memstall)
++ psi_memstall_leave(&pflags);
+
+ /*
+ * although background is preferred, no one is pending for submission.
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 90fbab6b6f0363..1a06e462b6efba 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1373,7 +1373,10 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
+ break;
+ }
+ }
+- wake_up(&ep->wq);
++ if (sync)
++ wake_up_sync(&ep->wq);
++ else
++ wake_up(&ep->wq);
+ }
+ if (waitqueue_active(&ep->poll_wait))
+ pwake++;
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 5cf327337e2276..c0856585bb6386 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -893,7 +893,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+ error = PTR_ERR(folio);
+ goto out;
+ }
+- folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
++ folio_zero_user(folio, addr);
+ __folio_mark_uptodate(folio);
+ error = hugetlb_add_to_page_cache(folio, mapping, index);
+ if (unlikely(error)) {
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 0d16b383a45262..5f582713bf05eb 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1308,7 +1308,7 @@ pnfs_prepare_layoutreturn(struct pnfs_layout_hdr *lo,
+ enum pnfs_iomode *iomode)
+ {
+ /* Serialise LAYOUTGET/LAYOUTRETURN */
+- if (atomic_read(&lo->plh_outstanding) != 0)
++ if (atomic_read(&lo->plh_outstanding) != 0 && lo->plh_return_seq == 0)
+ return false;
+ if (test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags))
+ return false;
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index 501ad7be5174cb..54a3fa0cf67edb 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -35,6 +35,7 @@ void nilfs_init_btnc_inode(struct inode *btnc_inode)
+ ii->i_flags = 0;
+ memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap));
+ mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS);
++ btnc_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops;
+ }
+
+ void nilfs_btnode_cache_clear(struct address_space *btnc)
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index ace22253fed0f2..2dbb15767df16e 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -163,7 +163,7 @@ int nilfs_init_gcinode(struct inode *inode)
+
+ inode->i_mode = S_IFREG;
+ mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+- inode->i_mapping->a_ops = &empty_aops;
++ inode->i_mapping->a_ops = &nilfs_buffer_cache_aops;
+
+ ii->i_flags = 0;
+ nilfs_bmap_init_gc(ii->i_bmap);
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index be6acf6e2bfc59..aaca34ec678f26 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -307,6 +307,10 @@ const struct address_space_operations nilfs_aops = {
+ .is_partially_uptodate = block_is_partially_uptodate,
+ };
+
++const struct address_space_operations nilfs_buffer_cache_aops = {
++ .invalidate_folio = block_invalidate_folio,
++};
++
+ static int nilfs_insert_inode_locked(struct inode *inode,
+ struct nilfs_root *root,
+ unsigned long ino)
+@@ -575,8 +579,14 @@ struct inode *nilfs_iget(struct super_block *sb, struct nilfs_root *root,
+ inode = nilfs_iget_locked(sb, root, ino);
+ if (unlikely(!inode))
+ return ERR_PTR(-ENOMEM);
+- if (!(inode->i_state & I_NEW))
++
++ if (!(inode->i_state & I_NEW)) {
++ if (!inode->i_nlink) {
++ iput(inode);
++ return ERR_PTR(-ESTALE);
++ }
+ return inode;
++ }
+
+ err = __nilfs_read_inode(sb, root, ino, inode);
+ if (unlikely(err)) {
+@@ -706,6 +716,7 @@ struct inode *nilfs_iget_for_shadow(struct inode *inode)
+ NILFS_I(s_inode)->i_flags = 0;
+ memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap));
+ mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS);
++ s_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops;
+
+ err = nilfs_attach_btree_node_cache(s_inode);
+ if (unlikely(err)) {
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index 9b108052d9f71f..1d836a5540f3b1 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -67,6 +67,11 @@ nilfs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
+ inode = NULL;
+ } else {
+ inode = nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino);
++ if (inode == ERR_PTR(-ESTALE)) {
++ nilfs_error(dir->i_sb,
++ "deleted inode referenced: %lu", ino);
++ return ERR_PTR(-EIO);
++ }
+ }
+
+ return d_splice_alias(inode, dentry);
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index 45d03826eaf157..dff241c53fc583 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -401,6 +401,7 @@ extern const struct file_operations nilfs_dir_operations;
+ extern const struct inode_operations nilfs_file_inode_operations;
+ extern const struct file_operations nilfs_file_operations;
+ extern const struct address_space_operations nilfs_aops;
++extern const struct address_space_operations nilfs_buffer_cache_aops;
+ extern const struct inode_operations nilfs_dir_inode_operations;
+ extern const struct inode_operations nilfs_special_inode_operations;
+ extern const struct inode_operations nilfs_symlink_inode_operations;
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 5df34561c551c6..d1aa04a5af1b1c 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -971,9 +971,9 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ start = count = 0;
+ left = le32_to_cpu(alloc->id1.bitmap1.i_total);
+
+- while ((bit_off = ocfs2_find_next_zero_bit(bitmap, left, start)) <
+- left) {
+- if (bit_off == start) {
++ while (1) {
++ bit_off = ocfs2_find_next_zero_bit(bitmap, left, start);
++ if ((bit_off < left) && (bit_off == start)) {
+ count++;
+ start++;
+ continue;
+@@ -998,6 +998,8 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ }
+ }
+
++ if (bit_off >= left)
++ break;
+ count = 1;
+ start = bit_off + 1;
+ }
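The restructured ocfs2 loop above evaluates ocfs2_find_next_zero_bit() once per iteration and, crucially, still flushes the final run of zero bits before breaking out. A userspace re-creation of that scan over a toy bitmap (find_next_zero_bit here is a byte-per-bit stand-in, not the kernel helper):

#include <stdio.h>

/* Toy stand-in: returns 'size' when no zero bit remains. */
static int find_next_zero_bit(const char *bm, int size, int off)
{
	while (off < size && bm[off])
		off++;
	return off;
}

int main(void)
{
	const char bm[] = { 1, 0, 0, 1, 0 };	/* zero runs: bits 1-2 and 4 */
	int left = 5, start = 0, count = 0, bit_off;

	while (1) {
		bit_off = find_next_zero_bit(bm, left, start);
		if (bit_off < left && bit_off == start) {
			count++;
			start++;
			continue;
		}
		if (count)
			printf("run of %d zero bit(s) ending at %d\n",
			       count, start - 1);
		if (bit_off >= left)	/* final run flushed; now stop */
			break;
		count = 1;
		start = bit_off + 1;
	}
	return 0;
}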
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index feff3324d39c6d..fe40152b915d82 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -987,9 +987,13 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ msleep(125);
+ if (cifs_rdma_enabled(server))
+ smbd_destroy(server);
++
+ if (server->ssocket) {
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
++
++ /* Release netns reference for the socket. */
++ put_net(cifs_net_ns(server));
+ }
+
+ if (!list_empty(&server->pending_mid_q)) {
+@@ -1037,6 +1041,7 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ */
+ }
+
++ /* Release netns reference for this server. */
+ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
+ kfree(server);
+@@ -1713,6 +1718,8 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+
+ tcp_ses->ops = ctx->ops;
+ tcp_ses->vals = ctx->vals;
++
++ /* Grab netns reference for this server. */
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+
+ tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId);
+@@ -1844,6 +1851,7 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ out_err_crypto_release:
+ cifs_crypto_secmech_release(tcp_ses);
+
++ /* Release netns reference for this server. */
+ put_net(cifs_net_ns(tcp_ses));
+
+ out_err:
+@@ -1852,8 +1860,10 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ cifs_put_tcp_session(tcp_ses->primary_server, false);
+ kfree(tcp_ses->hostname);
+ kfree(tcp_ses->leaf_fullpath);
+- if (tcp_ses->ssocket)
++ if (tcp_ses->ssocket) {
+ sock_release(tcp_ses->ssocket);
++ put_net(cifs_net_ns(tcp_ses));
++ }
+ kfree(tcp_ses);
+ }
+ return ERR_PTR(rc);
+@@ -3111,20 +3121,20 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ socket = server->ssocket;
+ } else {
+ struct net *net = cifs_net_ns(server);
+- struct sock *sk;
+
+- rc = __sock_create(net, sfamily, SOCK_STREAM,
+- IPPROTO_TCP, &server->ssocket, 1);
++ rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket);
+ if (rc < 0) {
+ cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ return rc;
+ }
+
+- sk = server->ssocket->sk;
+- __netns_tracker_free(net, &sk->ns_tracker, false);
+- sk->sk_net_refcnt = 1;
+- get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
+- sock_inuse_add(net, 1);
++ /*
++ * Grab netns reference for the socket.
++ *
++		 * It'll be released here on error, or in clean_demultiplex_info()
++		 * upon server teardown.
++ */
++ get_net(net);
+
+ /* BB other socket options to set KEEPALIVE, NODELAY? */
+ cifs_dbg(FYI, "Socket created\n");
+@@ -3138,8 +3148,10 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ }
+
+ rc = bind_socket(server);
+- if (rc < 0)
++ if (rc < 0) {
++ put_net(cifs_net_ns(server));
+ return rc;
++ }
+
+ /*
+ * Eventually check for other socket options to change from
+@@ -3176,6 +3188,7 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (rc < 0) {
+ cifs_dbg(FYI, "Error %d connecting to server\n", rc);
+ trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc);
++ put_net(cifs_net_ns(server));
+ sock_release(socket);
+ server->ssocket = NULL;
+ return rc;
+@@ -3184,6 +3197,9 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (sport == htons(RFC1001_PORT))
+ rc = ip_rfc1001_connect(server);
+
++ if (rc < 0)
++ put_net(cifs_net_ns(server));
++
+ return rc;
+ }
+
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index e6a72f75ab94ba..bf45822db5d589 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -70,7 +70,6 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ atomic_set(&conn->req_running, 0);
+ atomic_set(&conn->r_count, 0);
+ atomic_set(&conn->refcnt, 1);
+- atomic_set(&conn->mux_smb_requests, 0);
+ conn->total_credits = 1;
+ conn->outstanding_credits = 0;
+
+@@ -120,8 +119,8 @@ void ksmbd_conn_enqueue_request(struct ksmbd_work *work)
+ if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE)
+ requests_queue = &conn->requests;
+
++ atomic_inc(&conn->req_running);
+ if (requests_queue) {
+- atomic_inc(&conn->req_running);
+ spin_lock(&conn->request_lock);
+ list_add_tail(&work->request_entry, requests_queue);
+ spin_unlock(&conn->request_lock);
+@@ -132,11 +131,14 @@ void ksmbd_conn_try_dequeue_request(struct ksmbd_work *work)
+ {
+ struct ksmbd_conn *conn = work->conn;
+
++ atomic_dec(&conn->req_running);
++ if (waitqueue_active(&conn->req_running_q))
++ wake_up(&conn->req_running_q);
++
+ if (list_empty(&work->request_entry) &&
+ list_empty(&work->async_request_entry))
+ return;
+
+- atomic_dec(&conn->req_running);
+ spin_lock(&conn->request_lock);
+ list_del_init(&work->request_entry);
+ spin_unlock(&conn->request_lock);
+@@ -308,7 +310,7 @@ int ksmbd_conn_handler_loop(void *p)
+ {
+ struct ksmbd_conn *conn = (struct ksmbd_conn *)p;
+ struct ksmbd_transport *t = conn->transport;
+- unsigned int pdu_size, max_allowed_pdu_size;
++ unsigned int pdu_size, max_allowed_pdu_size, max_req;
+ char hdr_buf[4] = {0,};
+ int size;
+
+@@ -318,6 +320,7 @@ int ksmbd_conn_handler_loop(void *p)
+ if (t->ops->prepare && t->ops->prepare(t))
+ goto out;
+
++ max_req = server_conf.max_inflight_req;
+ conn->last_active = jiffies;
+ set_freezable();
+ while (ksmbd_conn_alive(conn)) {
+@@ -327,6 +330,13 @@ int ksmbd_conn_handler_loop(void *p)
+ kvfree(conn->request_buf);
+ conn->request_buf = NULL;
+
++recheck:
++ if (atomic_read(&conn->req_running) + 1 > max_req) {
++ wait_event_interruptible(conn->req_running_q,
++ atomic_read(&conn->req_running) < max_req);
++ goto recheck;
++ }
++
+ size = t->ops->read(t, hdr_buf, sizeof(hdr_buf), -1);
+ if (size != sizeof(hdr_buf))
+ break;
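The ksmbd hunks above replace the per-work credit check with a gate in the receive loop: req_running is bumped for every request, and the loop sleeps on req_running_q while the in-flight count would exceed max_inflight_req, re-checking after each wakeup. A userspace analogue using a condition variable (invented names; semantics only):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  room = PTHREAD_COND_INITIALIZER;
static int inflight, max_req = 2;

static void begin_request(void)
{
	pthread_mutex_lock(&lock);
	while (inflight + 1 > max_req)	/* re-check after every wakeup */
		pthread_cond_wait(&room, &lock);
	inflight++;
	pthread_mutex_unlock(&lock);
}

static void end_request(void)
{
	pthread_mutex_lock(&lock);
	inflight--;
	pthread_cond_signal(&room);	/* like waking req_running_q */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	begin_request();
	begin_request();		/* window is now full */
	end_request();			/* makes room for the next reader */
	end_request();
	printf("inflight=%d\n", inflight);
	return 0;
}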
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 8ddd5a3c7bafb6..b379ae4fdcdffa 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -107,7 +107,6 @@ struct ksmbd_conn {
+ __le16 signing_algorithm;
+ bool binding;
+ atomic_t refcnt;
+- atomic_t mux_smb_requests;
+ };
+
+ struct ksmbd_conn_ops {
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index 698af37e988d7b..d146b0e7c3a9dd 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -270,7 +270,6 @@ static void handle_ksmbd_work(struct work_struct *wk)
+
+ ksmbd_conn_try_dequeue_request(work);
+ ksmbd_free_work_struct(work);
+- atomic_dec(&conn->mux_smb_requests);
+ /*
+ * Checking waitqueue to dropping pending requests on
+ * disconnection. waitqueue_active is safe because it
+@@ -300,11 +299,6 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn)
+ if (err)
+ return 0;
+
+- if (atomic_inc_return(&conn->mux_smb_requests) >= conn->vals->max_credits) {
+- atomic_dec_return(&conn->mux_smb_requests);
+- return -ENOSPC;
+- }
+-
+ work = ksmbd_alloc_work_struct();
+ if (!work) {
+ pr_err("allocation for work failed\n");
+@@ -367,6 +361,7 @@ static int server_conf_init(void)
+ server_conf.auth_mechs |= KSMBD_AUTH_KRB5 |
+ KSMBD_AUTH_MSKRB5;
+ #endif
++ server_conf.max_inflight_req = SMB2_MAX_CREDITS;
+ return 0;
+ }
+
+diff --git a/fs/smb/server/server.h b/fs/smb/server/server.h
+index 4fc529335271f7..94187628ff089f 100644
+--- a/fs/smb/server/server.h
++++ b/fs/smb/server/server.h
+@@ -42,6 +42,7 @@ struct ksmbd_server_config {
+ struct smb_sid domain_sid;
+ unsigned int auth_mechs;
+ unsigned int max_connections;
++ unsigned int max_inflight_req;
+
+ char *conf[SERVER_CONF_WORK_GROUP + 1];
+ struct task_struct *dh_task;
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 2f27afb695f62e..6de351cc2b60e0 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -319,8 +319,11 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ init_smb2_max_write_size(req->smb2_max_write);
+ if (req->smb2_max_trans)
+ init_smb2_max_trans_size(req->smb2_max_trans);
+- if (req->smb2_max_credits)
++ if (req->smb2_max_credits) {
+ init_smb2_max_credits(req->smb2_max_credits);
++ server_conf.max_inflight_req =
++ req->smb2_max_credits;
++ }
+ if (req->smbd_max_io_size)
+ init_smbd_max_io_size(req->smbd_max_io_size);
+
+diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
+index 271855227514cb..6258527315f28b 100644
+--- a/fs/xfs/libxfs/xfs_ialloc.c
++++ b/fs/xfs/libxfs/xfs_ialloc.c
+@@ -855,7 +855,8 @@ xfs_ialloc_ag_alloc(
+ * the end of the AG.
+ */
+ args.min_agbno = args.mp->m_sb.sb_inoalignmt;
+- args.max_agbno = round_down(args.mp->m_sb.sb_agblocks,
++ args.max_agbno = round_down(xfs_ag_block_count(args.mp,
++ pag->pag_agno),
+ args.mp->m_sb.sb_inoalignmt) -
+ igeo->ialloc_blks;
+
+@@ -2332,9 +2333,9 @@ xfs_difree(
+ return -EINVAL;
+ }
+ agbno = XFS_AGINO_TO_AGBNO(mp, agino);
+- if (agbno >= mp->m_sb.sb_agblocks) {
+- xfs_warn(mp, "%s: agbno >= mp->m_sb.sb_agblocks (%d >= %d).",
+- __func__, agbno, mp->m_sb.sb_agblocks);
++ if (agbno >= xfs_ag_block_count(mp, pag->pag_agno)) {
++ xfs_warn(mp, "%s: agbno >= xfs_ag_block_count (%d >= %d).",
++ __func__, agbno, xfs_ag_block_count(mp, pag->pag_agno));
+ ASSERT(0);
+ return -EINVAL;
+ }
+@@ -2457,7 +2458,7 @@ xfs_imap(
+ */
+ agino = XFS_INO_TO_AGINO(mp, ino);
+ agbno = XFS_AGINO_TO_AGBNO(mp, agino);
+- if (agbno >= mp->m_sb.sb_agblocks ||
++ if (agbno >= xfs_ag_block_count(mp, pag->pag_agno) ||
+ ino != XFS_AGINO_TO_INO(mp, pag->pag_agno, agino)) {
+ error = -EINVAL;
+ #ifdef DEBUG
+@@ -2467,11 +2468,12 @@ xfs_imap(
+ */
+ if (flags & XFS_IGET_UNTRUSTED)
+ return error;
+- if (agbno >= mp->m_sb.sb_agblocks) {
++ if (agbno >= xfs_ag_block_count(mp, pag->pag_agno)) {
+ xfs_alert(mp,
+ "%s: agbno (0x%llx) >= mp->m_sb.sb_agblocks (0x%lx)",
+ __func__, (unsigned long long)agbno,
+- (unsigned long)mp->m_sb.sb_agblocks);
++ (unsigned long)xfs_ag_block_count(mp,
++ pag->pag_agno));
+ }
+ if (ino != XFS_AGINO_TO_INO(mp, pag->pag_agno, agino)) {
+ xfs_alert(mp,
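
These hunks stop validating inode block numbers against sb_agblocks, the size of a full AG: the last AG of a filesystem is frequently shorter, so an agbno could pass the old check yet lie beyond the end of its AG. A sketch of the underlying arithmetic, assuming the usual superblock geometry fields (illustrative, not the kernel's xfs_ag_block_count()):

    #include <stdint.h>

    typedef uint32_t xfs_agblock_t;
    typedef uint32_t xfs_agnumber_t;
    typedef uint64_t xfs_rfsblock_t;

    /* Blocks in AG 'agno': full AGs are 'agblocks' long; the last AG
     * holds whatever remains of 'dblocks'. */
    static xfs_agblock_t
    ag_block_count(xfs_agnumber_t agcount, xfs_agblock_t agblocks,
                   xfs_rfsblock_t dblocks, xfs_agnumber_t agno)
    {
        if (agno < agcount - 1)
            return agblocks;
        return (xfs_agblock_t)(dblocks - (xfs_rfsblock_t)agno * agblocks);
    }

Checking agbno against this per-AG count rather than sb_agblocks rejects inode numbers that point into the truncated tail of the last AG.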
+diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
+index 02ebcbc4882f5b..e27b63281d012a 100644
+--- a/fs/xfs/libxfs/xfs_sb.c
++++ b/fs/xfs/libxfs/xfs_sb.c
+@@ -391,6 +391,21 @@ xfs_validate_sb_common(
+ sbp->sb_inoalignmt, align);
+ return -EINVAL;
+ }
++
++ if (sbp->sb_spino_align &&
++ (sbp->sb_spino_align > sbp->sb_inoalignmt ||
++ (sbp->sb_inoalignmt % sbp->sb_spino_align) != 0)) {
++ xfs_warn(mp,
++"Sparse inode alignment (%u) is invalid, must be integer factor of (%u).",
++ sbp->sb_spino_align,
++ sbp->sb_inoalignmt);
++ return -EINVAL;
++ }
++ } else if (sbp->sb_spino_align) {
++ xfs_warn(mp,
++ "Sparse inode alignment (%u) should be zero.",
++ sbp->sb_spino_align);
++ return -EINVAL;
+ }
+ } else if (sbp->sb_qflags & (XFS_PQUOTA_ENFD | XFS_GQUOTA_ENFD |
+ XFS_PQUOTA_CHKD | XFS_GQUOTA_CHKD)) {
+diff --git a/fs/xfs/scrub/agheader.c b/fs/xfs/scrub/agheader.c
+index da30f926cbe66d..0f2f1852d58fe7 100644
+--- a/fs/xfs/scrub/agheader.c
++++ b/fs/xfs/scrub/agheader.c
+@@ -59,6 +59,30 @@ xchk_superblock_xref(
+ /* scrub teardown will take care of sc->sa for us */
+ }
+
++/*
++ * Calculate the ondisk superblock size in bytes given the feature set of the
++ * mounted filesystem (aka the primary sb). This is subtly different from
++ * the logic in xfs_repair, which computes the size of a secondary sb given the
++ * featureset listed in the secondary sb.
++ */
++STATIC size_t
++xchk_superblock_ondisk_size(
++ struct xfs_mount *mp)
++{
++ if (xfs_has_metauuid(mp))
++ return offsetofend(struct xfs_dsb, sb_meta_uuid);
++ if (xfs_has_crc(mp))
++ return offsetofend(struct xfs_dsb, sb_lsn);
++ if (xfs_sb_version_hasmorebits(&mp->m_sb))
++ return offsetofend(struct xfs_dsb, sb_bad_features2);
++ if (xfs_has_logv2(mp))
++ return offsetofend(struct xfs_dsb, sb_logsunit);
++ if (xfs_has_sector(mp))
++ return offsetofend(struct xfs_dsb, sb_logsectsize);
++ /* only support dirv2 or more recent */
++ return offsetofend(struct xfs_dsb, sb_dirblklog);
++}
++
+ /*
+ * Scrub the filesystem superblock.
+ *
+@@ -75,6 +99,7 @@ xchk_superblock(
+ struct xfs_buf *bp;
+ struct xfs_dsb *sb;
+ struct xfs_perag *pag;
++ size_t sblen;
+ xfs_agnumber_t agno;
+ uint32_t v2_ok;
+ __be32 features_mask;
+@@ -350,8 +375,8 @@ xchk_superblock(
+ }
+
+ /* Everything else must be zero. */
+- if (memchr_inv(sb + 1, 0,
+- BBTOB(bp->b_length) - sizeof(struct xfs_dsb)))
++ sblen = xchk_superblock_ondisk_size(mp);
++ if (memchr_inv((char *)sb + sblen, 0, BBTOB(bp->b_length) - sblen))
+ xchk_block_set_corrupt(sc, bp);
+
+ xchk_superblock_xref(sc, bp);
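
xchk_superblock_ondisk_size() computes how many bytes of struct xfs_dsb are meaningful for the mounted feature set, so the scrubber only insists that bytes past that point are zero instead of assuming every superblock uses the full structure. The trick is offsetofend(): the offset of a member plus its size. A self-contained illustration with a made-up struct (xfs_dsb itself is not reproduced here):

    #include <stddef.h>
    #include <stdio.h>

    #define offsetofend(TYPE, MEMBER) \
        (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

    struct demo_sb {                  /* stand-in for struct xfs_dsb */
        unsigned int magic;
        unsigned int blocksize;
        unsigned long long lsn;       /* only valid with a CRC-style feature */
    };

    int main(void)
    {
        /* Without the feature, only the first two fields are "ondisk". */
        size_t len_base = offsetofend(struct demo_sb, blocksize);
        size_t len_crc  = offsetofend(struct demo_sb, lsn);

        printf("base=%zu crc=%zu\n", len_base, len_crc);
        return 0;
    }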
+diff --git a/fs/xfs/xfs_fsmap.c b/fs/xfs/xfs_fsmap.c
+index ae18ab86e608b5..8712b891defbc7 100644
+--- a/fs/xfs/xfs_fsmap.c
++++ b/fs/xfs/xfs_fsmap.c
+@@ -162,7 +162,8 @@ struct xfs_getfsmap_info {
+ xfs_daddr_t next_daddr; /* next daddr we expect */
+ /* daddr of low fsmap key when we're using the rtbitmap */
+ xfs_daddr_t low_daddr;
+- xfs_daddr_t end_daddr; /* daddr of high fsmap key */
++ /* daddr of high fsmap key, or the last daddr on the device */
++ xfs_daddr_t end_daddr;
+ u64 missing_owner; /* owner of holes */
+ u32 dev; /* device id */
+ /*
+@@ -306,7 +307,7 @@ xfs_getfsmap_helper(
+ * Note that if the btree query found a mapping, there won't be a gap.
+ */
+ if (info->last && info->end_daddr != XFS_BUF_DADDR_NULL)
+- rec_daddr = info->end_daddr;
++ rec_daddr = info->end_daddr + 1;
+
+ /* Are we just counting mappings? */
+ if (info->head->fmh_count == 0) {
+@@ -898,7 +899,10 @@ xfs_getfsmap(
+ struct xfs_trans *tp = NULL;
+ struct xfs_fsmap dkeys[2]; /* per-dev keys */
+ struct xfs_getfsmap_dev handlers[XFS_GETFSMAP_DEVS];
+- struct xfs_getfsmap_info info = { NULL };
++ struct xfs_getfsmap_info info = {
++ .fsmap_recs = fsmap_recs,
++ .head = head,
++ };
+ bool use_rmap;
+ int i;
+ int error = 0;
+@@ -963,9 +967,6 @@ xfs_getfsmap(
+
+ info.next_daddr = head->fmh_keys[0].fmr_physical +
+ head->fmh_keys[0].fmr_length;
+- info.end_daddr = XFS_BUF_DADDR_NULL;
+- info.fsmap_recs = fsmap_recs;
+- info.head = head;
+
+ /* For each device we support... */
+ for (i = 0; i < XFS_GETFSMAP_DEVS; i++) {
+@@ -978,17 +979,23 @@ xfs_getfsmap(
+ break;
+
+ /*
+- * If this device number matches the high key, we have
+- * to pass the high key to the handler to limit the
+- * query results. If the device number exceeds the
+- * low key, zero out the low key so that we get
+- * everything from the beginning.
++ * If this device number matches the high key, we have to pass
++ * the high key to the handler to limit the query results, and
++ * set the end_daddr so that we can synthesize records at the
++ * end of the query range or device.
+ */
+ if (handlers[i].dev == head->fmh_keys[1].fmr_device) {
+ dkeys[1] = head->fmh_keys[1];
+ info.end_daddr = min(handlers[i].nr_sectors - 1,
+ dkeys[1].fmr_physical);
++ } else {
++ info.end_daddr = handlers[i].nr_sectors - 1;
+ }
++
++ /*
++ * If the device number exceeds the low key, zero out the low
++ * key so that we get everything from the beginning.
++ */
+ if (handlers[i].dev > head->fmh_keys[0].fmr_device)
+ memset(&dkeys[0], 0, sizeof(struct xfs_fsmap));
+
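
Two things change in xfs_getfsmap(): info is now built with a designated initializer, and end_daddr is assigned for every device — either to the high key or to the device's last sector — which lets the helper synthesize a final "hole" record out to the end of the device. The +1 in xfs_getfsmap_helper() exists because end_daddr is the last valid address, while the synthesized record must start one past it. A toy version of that tail synthesis (names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Emit a "missing owner" record covering [next_daddr, end_daddr] if
     * the query ran past the last real mapping, as the patched helper does. */
    static void synthesize_tail(uint64_t next_daddr, uint64_t end_daddr)
    {
        uint64_t rec_daddr = end_daddr + 1;   /* one past the last sector */

        if (next_daddr < rec_daddr)
            printf("hole: [%llu, %llu)\n",
                   (unsigned long long)next_daddr,
                   (unsigned long long)rec_daddr);
    }

    int main(void)
    {
        synthesize_tail(100, 1023);   /* device of 1024 sectors */
        return 0;
    }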
+diff --git a/include/clocksource/hyperv_timer.h b/include/clocksource/hyperv_timer.h
+index 6cdc873ac907f5..aa5233b1eba970 100644
+--- a/include/clocksource/hyperv_timer.h
++++ b/include/clocksource/hyperv_timer.h
+@@ -38,6 +38,8 @@ extern void hv_remap_tsc_clocksource(void);
+ extern unsigned long hv_get_tsc_pfn(void);
+ extern struct ms_hyperv_tsc_page *hv_get_tsc_page(void);
+
++extern void hv_adj_sched_clock_offset(u64 offset);
++
+ static __always_inline bool
+ hv_read_tsc_page_tsc(const struct ms_hyperv_tsc_page *tsc_pg,
+ u64 *cur_tsc, u64 *time)
+diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
+index 941deffc590dfd..6073a8f13c413c 100644
+--- a/include/linux/alloc_tag.h
++++ b/include/linux/alloc_tag.h
+@@ -48,7 +48,12 @@ static inline void set_codetag_empty(union codetag_ref *ref)
+ #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
+
+ static inline bool is_codetag_empty(union codetag_ref *ref) { return false; }
+-static inline void set_codetag_empty(union codetag_ref *ref) {}
++
++static inline void set_codetag_empty(union codetag_ref *ref)
++{
++ if (ref)
++ ref->ct = NULL;
++}
+
+ #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
+
+diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h
+index a28e2a6a13d05a..74169dd0f65948 100644
+--- a/include/linux/arm_ffa.h
++++ b/include/linux/arm_ffa.h
+@@ -166,9 +166,12 @@ static inline void *ffa_dev_get_drvdata(struct ffa_device *fdev)
+ return dev_get_drvdata(&fdev->dev);
+ }
+
++struct ffa_partition_info;
++
+ #if IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)
+-struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+- const struct ffa_ops *ops);
++struct ffa_device *
++ffa_device_register(const struct ffa_partition_info *part_info,
++ const struct ffa_ops *ops);
+ void ffa_device_unregister(struct ffa_device *ffa_dev);
+ int ffa_driver_register(struct ffa_driver *driver, struct module *owner,
+ const char *mod_name);
+@@ -176,9 +179,9 @@ void ffa_driver_unregister(struct ffa_driver *driver);
+ bool ffa_device_is_valid(struct ffa_device *ffa_dev);
+
+ #else
+-static inline
+-struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+- const struct ffa_ops *ops)
++static inline struct ffa_device *
++ffa_device_register(const struct ffa_partition_info *part_info,
++ const struct ffa_ops *ops)
+ {
+ return NULL;
+ }
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 22c22fb9104214..02a226bcf0edc9 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1559,6 +1559,7 @@ struct hv_util_service {
+ void *channel;
+ void (*util_cb)(void *);
+ int (*util_init)(struct hv_util_service *);
++ int (*util_init_transport)(void);
+ void (*util_deinit)(void);
+ int (*util_pre_suspend)(void);
+ int (*util_pre_resume)(void);
+diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
+index e123d5e17b5261..85fe4e6b275c7d 100644
+--- a/include/linux/io_uring.h
++++ b/include/linux/io_uring.h
+@@ -15,10 +15,8 @@ bool io_is_uring_fops(struct file *file);
+
+ static inline void io_uring_files_cancel(void)
+ {
+- if (current->io_uring) {
+- io_uring_unreg_ringfd();
++ if (current->io_uring)
+ __io_uring_cancel(false);
+- }
+ }
+ static inline void io_uring_task_cancel(void)
+ {
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index 74aa9fbbdae70b..48c66b84668281 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -860,18 +860,10 @@ static inline void ClearPageCompound(struct page *page)
+ ClearPageHead(page);
+ }
+ FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
+-FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+-/*
+- * PG_partially_mapped is protected by deferred_split split_queue_lock,
+- * so its safe to use non-atomic set/clear.
+- */
+-__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+-__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
++FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+ #else
+ FOLIO_FLAG_FALSE(large_rmappable)
+-FOLIO_TEST_FLAG_FALSE(partially_mapped)
+-__FOLIO_SET_FLAG_NOOP(partially_mapped)
+-__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
++FOLIO_FLAG_FALSE(partially_mapped)
+ #endif
+
+ #define PG_head_mask ((1UL << PG_head))
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index bb343136ddd05d..c14446c6164d72 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -656,6 +656,12 @@ struct sched_dl_entity {
+ * @dl_defer_armed tells if the deferrable server is waiting
+ * for the replenishment timer to activate it.
+ *
++ * @dl_server_active tells if the dlserver is active (started).
++ * dlserver is started on the first cfs enqueue on an idle runqueue
++ * and is stopped when a dequeue results in 0 cfs tasks on the
++ * runqueue. In other words, dlserver is active only when the CPU's
++ * runqueue has at least one cfs task.
++ *
+ * @dl_defer_running tells if the deferrable server is actually
+ * running, skipping the defer phase.
+ */
+@@ -664,6 +670,7 @@ struct sched_dl_entity {
+ unsigned int dl_non_contending : 1;
+ unsigned int dl_overrun : 1;
+ unsigned int dl_server : 1;
++ unsigned int dl_server_active : 1;
+ unsigned int dl_defer : 1;
+ unsigned int dl_defer_armed : 1;
+ unsigned int dl_defer_running : 1;
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 42bedcddd5113e..4df2ff81d3dea5 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -285,7 +285,8 @@ struct trace_event_fields {
+ const char *name;
+ const int size;
+ const int align;
+- const int is_signed;
++ const unsigned int is_signed:1;
++ unsigned int needs_test:1;
+ const int filter_type;
+ const int len;
+ };
+@@ -337,6 +338,7 @@ enum {
+ TRACE_EVENT_FL_EPROBE_BIT,
+ TRACE_EVENT_FL_FPROBE_BIT,
+ TRACE_EVENT_FL_CUSTOM_BIT,
++ TRACE_EVENT_FL_TEST_STR_BIT,
+ };
+
+ /*
+@@ -354,6 +356,7 @@ enum {
+ * CUSTOM - Event is a custom event (to be attached to an existing tracepoint)
+ * This is set when the custom event has not been attached
+ * to a tracepoint yet, then it is cleared when it is.
++ * TEST_STR - The event has a "%s" that points to a string outside the event
+ */
+ enum {
+ TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
+@@ -367,6 +370,7 @@ enum {
+ TRACE_EVENT_FL_EPROBE = (1 << TRACE_EVENT_FL_EPROBE_BIT),
+ TRACE_EVENT_FL_FPROBE = (1 << TRACE_EVENT_FL_FPROBE_BIT),
+ TRACE_EVENT_FL_CUSTOM = (1 << TRACE_EVENT_FL_CUSTOM_BIT),
++ TRACE_EVENT_FL_TEST_STR = (1 << TRACE_EVENT_FL_TEST_STR_BIT),
+ };
+
+ #define TRACE_EVENT_FL_UKPROBE (TRACE_EVENT_FL_KPROBE | TRACE_EVENT_FL_UPROBE)
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index 8aa3372f21a080..2b322a9b88a2bd 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -221,6 +221,7 @@ void __wake_up_pollfree(struct wait_queue_head *wq_head);
+ #define wake_up_all(x) __wake_up(x, TASK_NORMAL, 0, NULL)
+ #define wake_up_locked(x) __wake_up_locked((x), TASK_NORMAL, 1)
+ #define wake_up_all_locked(x) __wake_up_locked((x), TASK_NORMAL, 0)
++#define wake_up_sync(x) __wake_up_sync(x, TASK_NORMAL)
+
+ #define wake_up_interruptible(x) __wake_up(x, TASK_INTERRUPTIBLE, 1, NULL)
+ #define wake_up_interruptible_nr(x, nr) __wake_up(x, TASK_INTERRUPTIBLE, nr, NULL)
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index b2736e3491b862..9849da128364af 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -515,7 +515,11 @@ static void io_queue_iowq(struct io_kiocb *req)
+ struct io_uring_task *tctx = req->task->io_uring;
+
+ BUG_ON(!tctx);
+- BUG_ON(!tctx->io_wq);
++
++ if ((current->flags & PF_KTHREAD) || !tctx->io_wq) {
++ io_req_task_queue_fail(req, -ECANCELED);
++ return;
++ }
+
+ /* init ->work of the whole link before punting */
+ io_prep_async_link(req);
+@@ -3230,6 +3234,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
+
+ void __io_uring_cancel(bool cancel_all)
+ {
++ io_uring_unreg_ringfd();
+ io_uring_cancel_generic(cancel_all, NULL);
+ }
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 6cc12777bb11ab..d07dc87787dff3 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1300,7 +1300,7 @@ bool sched_can_stop_tick(struct rq *rq)
+ if (scx_enabled() && !scx_can_stop_tick(rq))
+ return false;
+
+- if (rq->cfs.nr_running > 1)
++ if (rq->cfs.h_nr_running > 1)
+ return false;
+
+ /*
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index fc6f41ac33eb13..a17c23b53049cc 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
+ if (!dl_se->dl_runtime)
+ return;
+
++ dl_se->dl_server_active = 1;
+ enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
+ if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
+ resched_curr(dl_se->rq);
+@@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
+ hrtimer_try_to_cancel(&dl_se->dl_timer);
+ dl_se->dl_defer_armed = 0;
+ dl_se->dl_throttled = 0;
++ dl_se->dl_server_active = 0;
+ }
+
+ void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
+@@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
+ if (dl_server(dl_se)) {
+ p = dl_se->server_pick_task(dl_se);
+ if (!p) {
+- dl_se->dl_yielded = 1;
+- update_curr_dl_se(rq, dl_se, 0);
++ if (dl_server_active(dl_se)) {
++ dl_se->dl_yielded = 1;
++ update_curr_dl_se(rq, dl_se, 0);
++ }
+ goto again;
+ }
+ rq->dl_server = dl_se;
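
dl_server_active is a plain state bit set in dl_server_start() and cleared in dl_server_stop(); __pick_task_dl() then only charges yield time to a server that is actually started, and update_curr() (in the fair.c hunks below) only accounts fair-server runtime while the server is active. The pattern — guard every update path behind one lifecycle flag — is small enough to sketch in isolation, with illustrative types rather than the scheduler's:

    #include <stdbool.h>

    struct dl_server {
        bool active;                          /* like dl_se->dl_server_active */
        unsigned long long runtime_consumed;
    };

    static void server_start(struct dl_server *s) { s->active = true;  }
    static void server_stop(struct dl_server *s)  { s->active = false; }

    /* Accounting sites check the flag instead of inferring state from
     * other fields, which is what the old p->dl_server test did. */
    static void server_update(struct dl_server *s, unsigned long long delta)
    {
        if (!s->active)
            return;     /* stopped servers accrue nothing */
        s->runtime_consumed += delta;
    }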
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index f4035c7a0fa1df..82b165bf48c423 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -844,6 +844,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
+ SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
+ SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
+ SEQ_printf(m, " .%-30s: %d\n", "h_nr_running", cfs_rq->h_nr_running);
++ SEQ_printf(m, " .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
+ SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running",
+ cfs_rq->idle_nr_running);
+ SEQ_printf(m, " .%-30s: %d\n", "idle_h_nr_running",
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 782ce70ebd1b08..1ca96c99872f08 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct task_struct *p, s64 delta_exec)
+ trace_sched_stat_runtime(p, delta_exec);
+ account_group_exec_runtime(p, delta_exec);
+ cgroup_account_cputime(p, delta_exec);
+- if (p->dl_server)
+- dl_server_update(p->dl_server, delta_exec);
+ }
+
+ static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+@@ -1237,11 +1235,16 @@ static void update_curr(struct cfs_rq *cfs_rq)
+ update_curr_task(p, delta_exec);
+
+ /*
+- * Any fair task that runs outside of fair_server should
+- * account against fair_server such that it can account for
+- * this time and possibly avoid running this period.
++ * If the fair_server is active, we need to account for the
++ * fair_server time, whether or not the task is running on
++ * behalf of the fair_server:
++ * - If the task is running on behalf of fair_server, we need
++ * to limit its time based on the assigned runtime.
++ * - A fair task that runs outside of the fair_server should account
++ * against fair_server such that it can account for this time
++ * and possibly avoid running this period.
+ */
+- if (p->dl_server != &rq->fair_server)
++ if (dl_server_active(&rq->fair_server))
+ dl_server_update(&rq->fair_server, delta_exec);
+ }
+
+@@ -5471,9 +5474,33 @@ static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
+
+ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
+
+-static inline void finish_delayed_dequeue_entity(struct sched_entity *se)
++static void set_delayed(struct sched_entity *se)
++{
++ se->sched_delayed = 1;
++ for_each_sched_entity(se) {
++ struct cfs_rq *cfs_rq = cfs_rq_of(se);
++
++ cfs_rq->h_nr_delayed++;
++ if (cfs_rq_throttled(cfs_rq))
++ break;
++ }
++}
++
++static void clear_delayed(struct sched_entity *se)
+ {
+ se->sched_delayed = 0;
++ for_each_sched_entity(se) {
++ struct cfs_rq *cfs_rq = cfs_rq_of(se);
++
++ cfs_rq->h_nr_delayed--;
++ if (cfs_rq_throttled(cfs_rq))
++ break;
++ }
++}
++
++static inline void finish_delayed_dequeue_entity(struct sched_entity *se)
++{
++ clear_delayed(se);
+ if (sched_feat(DELAY_ZERO) && se->vlag > 0)
+ se->vlag = 0;
+ }
+@@ -5484,6 +5511,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ bool sleep = flags & DEQUEUE_SLEEP;
+
+ update_curr(cfs_rq);
++ clear_buddies(cfs_rq, se);
+
+ if (flags & DEQUEUE_DELAYED) {
+ SCHED_WARN_ON(!se->sched_delayed);
+@@ -5500,10 +5528,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+
+ if (sched_feat(DELAY_DEQUEUE) && delay &&
+ !entity_eligible(cfs_rq, se)) {
+- if (cfs_rq->next == se)
+- cfs_rq->next = NULL;
+ update_load_avg(cfs_rq, se, 0);
+- se->sched_delayed = 1;
++ set_delayed(se);
+ return false;
+ }
+ }
+@@ -5526,8 +5552,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+
+ update_stats_dequeue_fair(cfs_rq, se, flags);
+
+- clear_buddies(cfs_rq, se);
+-
+ update_entity_lag(cfs_rq, se);
+ if (sched_feat(PLACE_REL_DEADLINE) && !sleep) {
+ se->deadline -= se->vruntime;
+@@ -5923,7 +5947,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+ struct rq *rq = rq_of(cfs_rq);
+ struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
+ struct sched_entity *se;
+- long task_delta, idle_task_delta, dequeue = 1;
++ long task_delta, idle_task_delta, delayed_delta, dequeue = 1;
+ long rq_h_nr_running = rq->cfs.h_nr_running;
+
+ raw_spin_lock(&cfs_b->lock);
+@@ -5956,6 +5980,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ task_delta = cfs_rq->h_nr_running;
+ idle_task_delta = cfs_rq->idle_h_nr_running;
++ delayed_delta = cfs_rq->h_nr_delayed;
+ for_each_sched_entity(se) {
+ struct cfs_rq *qcfs_rq = cfs_rq_of(se);
+ int flags;
+@@ -5979,6 +6004,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running -= task_delta;
+ qcfs_rq->idle_h_nr_running -= idle_task_delta;
++ qcfs_rq->h_nr_delayed -= delayed_delta;
+
+ if (qcfs_rq->load.weight) {
+ /* Avoid re-evaluating load for this entity: */
+@@ -6001,6 +6027,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running -= task_delta;
+ qcfs_rq->idle_h_nr_running -= idle_task_delta;
++ qcfs_rq->h_nr_delayed -= delayed_delta;
+ }
+
+ /* At this point se is NULL and we are at root level*/
+@@ -6026,7 +6053,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+ struct rq *rq = rq_of(cfs_rq);
+ struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
+ struct sched_entity *se;
+- long task_delta, idle_task_delta;
++ long task_delta, idle_task_delta, delayed_delta;
+ long rq_h_nr_running = rq->cfs.h_nr_running;
+
+ se = cfs_rq->tg->se[cpu_of(rq)];
+@@ -6062,6 +6089,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ task_delta = cfs_rq->h_nr_running;
+ idle_task_delta = cfs_rq->idle_h_nr_running;
++ delayed_delta = cfs_rq->h_nr_delayed;
+ for_each_sched_entity(se) {
+ struct cfs_rq *qcfs_rq = cfs_rq_of(se);
+
+@@ -6079,6 +6107,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running += task_delta;
+ qcfs_rq->idle_h_nr_running += idle_task_delta;
++ qcfs_rq->h_nr_delayed += delayed_delta;
+
+ /* end evaluation on encountering a throttled cfs_rq */
+ if (cfs_rq_throttled(qcfs_rq))
+@@ -6096,6 +6125,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running += task_delta;
+ qcfs_rq->idle_h_nr_running += idle_task_delta;
++ qcfs_rq->h_nr_delayed += delayed_delta;
+
+ /* end evaluation on encountering a throttled cfs_rq */
+ if (cfs_rq_throttled(qcfs_rq))
+@@ -6949,7 +6979,7 @@ requeue_delayed_entity(struct sched_entity *se)
+ }
+
+ update_load_avg(cfs_rq, se, 0);
+- se->sched_delayed = 0;
++ clear_delayed(se);
+ }
+
+ /*
+@@ -6963,6 +6993,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ struct cfs_rq *cfs_rq;
+ struct sched_entity *se = &p->se;
+ int idle_h_nr_running = task_has_idle_policy(p);
++ int h_nr_delayed = 0;
+ int task_new = !(flags & ENQUEUE_WAKEUP);
+ int rq_h_nr_running = rq->cfs.h_nr_running;
+ u64 slice = 0;
+@@ -6989,6 +7020,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ if (p->in_iowait)
+ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
+
++ if (task_new)
++ h_nr_delayed = !!se->sched_delayed;
++
+ for_each_sched_entity(se) {
+ if (se->on_rq) {
+ if (se->sched_delayed)
+@@ -7011,6 +7045,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+
+ cfs_rq->h_nr_running++;
+ cfs_rq->idle_h_nr_running += idle_h_nr_running;
++ cfs_rq->h_nr_delayed += h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = 1;
+@@ -7034,6 +7069,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+
+ cfs_rq->h_nr_running++;
+ cfs_rq->idle_h_nr_running += idle_h_nr_running;
++ cfs_rq->h_nr_delayed += h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = 1;
+@@ -7096,6 +7132,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ struct task_struct *p = NULL;
+ int idle_h_nr_running = 0;
+ int h_nr_running = 0;
++ int h_nr_delayed = 0;
+ struct cfs_rq *cfs_rq;
+ u64 slice = 0;
+
+@@ -7103,6 +7140,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ p = task_of(se);
+ h_nr_running = 1;
+ idle_h_nr_running = task_has_idle_policy(p);
++ if (!task_sleep && !task_delayed)
++ h_nr_delayed = !!se->sched_delayed;
+ } else {
+ cfs_rq = group_cfs_rq(se);
+ slice = cfs_rq_min_slice(cfs_rq);
+@@ -7120,6 +7159,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+
+ cfs_rq->h_nr_running -= h_nr_running;
+ cfs_rq->idle_h_nr_running -= idle_h_nr_running;
++ cfs_rq->h_nr_delayed -= h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = h_nr_running;
+@@ -7158,6 +7198,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+
+ cfs_rq->h_nr_running -= h_nr_running;
+ cfs_rq->idle_h_nr_running -= idle_h_nr_running;
++ cfs_rq->h_nr_delayed -= h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = h_nr_running;
+@@ -8786,7 +8827,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ if (unlikely(throttled_hierarchy(cfs_rq_of(pse))))
+ return;
+
+- if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK)) {
++ if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK) && !pse->sched_delayed) {
+ set_next_buddy(pse);
+ }
+
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index a9c65d97b3cac6..171a802420a10a 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
+ {
+ if (___update_load_sum(now, &cfs_rq->avg,
+ scale_load_down(cfs_rq->load.weight),
+- cfs_rq->h_nr_running,
++ cfs_rq->h_nr_running - cfs_rq->h_nr_delayed,
+ cfs_rq->curr != NULL)) {
+
+ ___update_load_avg(&cfs_rq->avg, 1);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c03b3d7b320e9c..f2ef520513c4a2 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -398,6 +398,11 @@ extern void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq
+ extern int dl_server_apply_params(struct sched_dl_entity *dl_se,
+ u64 runtime, u64 period, bool init);
+
++static inline bool dl_server_active(struct sched_dl_entity *dl_se)
++{
++ return dl_se->dl_server_active;
++}
++
+ #ifdef CONFIG_CGROUP_SCHED
+
+ extern struct list_head task_groups;
+@@ -649,6 +654,7 @@ struct cfs_rq {
+ unsigned int h_nr_running; /* SCHED_{NORMAL,BATCH,IDLE} */
+ unsigned int idle_nr_running; /* SCHED_IDLE */
+ unsigned int idle_h_nr_running; /* SCHED_IDLE */
++ unsigned int h_nr_delayed;
+
+ s64 avg_vruntime;
+ u64 avg_load;
+@@ -898,8 +904,11 @@ struct dl_rq {
+
+ static inline void se_update_runnable(struct sched_entity *se)
+ {
+- if (!entity_is_task(se))
+- se->runnable_weight = se->my_q->h_nr_running;
++ if (!entity_is_task(se)) {
++ struct cfs_rq *cfs_rq = se->my_q;
++
++ se->runnable_weight = cfs_rq->h_nr_running - cfs_rq->h_nr_delayed;
++ }
+ }
+
+ static inline long se_runnable(struct sched_entity *se)
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 69e226a48daa92..72bcbfad53db04 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1160,7 +1160,7 @@ void fgraph_update_pid_func(void)
+ static int start_graph_tracing(void)
+ {
+ unsigned long **ret_stack_list;
+- int ret;
++ int ret, cpu;
+
+ ret_stack_list = kcalloc(FTRACE_RETSTACK_ALLOC_SIZE,
+ sizeof(*ret_stack_list), GFP_KERNEL);
+@@ -1168,6 +1168,12 @@ static int start_graph_tracing(void)
+ if (!ret_stack_list)
+ return -ENOMEM;
+
++ /* The cpu_boot init_task->ret_stack will never be freed */
++ for_each_online_cpu(cpu) {
++ if (!idle_task(cpu)->ret_stack)
++ ftrace_graph_init_idle_task(idle_task(cpu), cpu);
++ }
++
+ do {
+ ret = alloc_retstack_tasklist(ret_stack_list);
+ } while (ret == -EAGAIN);
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 366eb4c4f28e57..703978b2d557d7 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -7019,7 +7019,11 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+ nr_subbufs = cpu_buffer->nr_pages + 1; /* + reader-subbuf */
+- nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff; /* + meta-page */
++ nr_pages = ((nr_subbufs + 1) << subbuf_order); /* + meta-page */
++ if (nr_pages <= pgoff)
++ return -EINVAL;
++
++ nr_pages -= pgoff;
+
+ nr_vma_pages = vma_pages(vma);
+ if (!nr_vma_pages || nr_vma_pages > nr_pages)
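
The old expression subtracted pgoff before checking it, so a user-supplied mmap offset larger than the buffer made the subtraction go negative (or wrap, depending on the type) and slip past the later bounds test. The fix computes the total first, rejects an out-of-range pgoff, and only then subtracts — the standard way to keep this arithmetic from underflowing:

    #include <stddef.h>

    /* Returns pages available at 'pgoff', or -1 if the offset is past
     * the end — check before subtracting, as the ring-buffer fix does. */
    static long pages_from_offset(size_t total_pages, size_t pgoff)
    {
        if (total_pages <= pgoff)
            return -1;          /* would underflow below */
        return (long)(total_pages - pgoff);
    }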
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 17d2ffde0bb604..35515192aa0fda 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3635,17 +3635,12 @@ char *trace_iter_expand_format(struct trace_iterator *iter)
+ }
+
+ /* Returns true if the string is safe to dereference from an event */
+-static bool trace_safe_str(struct trace_iterator *iter, const char *str,
+- bool star, int len)
++static bool trace_safe_str(struct trace_iterator *iter, const char *str)
+ {
+ unsigned long addr = (unsigned long)str;
+ struct trace_event *trace_event;
+ struct trace_event_call *event;
+
+- /* Ignore strings with no length */
+- if (star && !len)
+- return true;
+-
+ /* OK if part of the event data */
+ if ((addr >= (unsigned long)iter->ent) &&
+ (addr < (unsigned long)iter->ent + iter->ent_size))
+@@ -3685,181 +3680,69 @@ static bool trace_safe_str(struct trace_iterator *iter, const char *str,
+ return false;
+ }
+
+-static DEFINE_STATIC_KEY_FALSE(trace_no_verify);
+-
+-static int test_can_verify_check(const char *fmt, ...)
+-{
+- char buf[16];
+- va_list ap;
+- int ret;
+-
+- /*
+- * The verifier is dependent on vsnprintf() modifies the va_list
+- * passed to it, where it is sent as a reference. Some architectures
+- * (like x86_32) passes it by value, which means that vsnprintf()
+- * does not modify the va_list passed to it, and the verifier
+- * would then need to be able to understand all the values that
+- * vsnprintf can use. If it is passed by value, then the verifier
+- * is disabled.
+- */
+- va_start(ap, fmt);
+- vsnprintf(buf, 16, "%d", ap);
+- ret = va_arg(ap, int);
+- va_end(ap);
+-
+- return ret;
+-}
+-
+-static void test_can_verify(void)
+-{
+- if (!test_can_verify_check("%d %d", 0, 1)) {
+- pr_info("trace event string verifier disabled\n");
+- static_branch_inc(&trace_no_verify);
+- }
+-}
+-
+ /**
+- * trace_check_vprintf - Check dereferenced strings while writing to the seq buffer
++ * ignore_event - Check dereferenced fields while writing to the seq buffer
+ * @iter: The iterator that holds the seq buffer and the event being printed
+- * @fmt: The format used to print the event
+- * @ap: The va_list holding the data to print from @fmt.
+ *
+- * This writes the data into the @iter->seq buffer using the data from
+- * @fmt and @ap. If the format has a %s, then the source of the string
+- * is examined to make sure it is safe to print, otherwise it will
+- * warn and print "[UNSAFE MEMORY]" in place of the dereferenced string
+- * pointer.
++ * At boot up, test_event_printk() will flag any event that dereferences
++ * a string with "%s" that does exist in the ring buffer. It may still
++ * be valid, as the string may point to a static string in the kernel
++ * rodata that never gets freed. But if the string pointer is pointing
++ * to something that was allocated, there's a chance that it can be freed
++ * by the time the user reads the trace. This would cause a bad memory
++ * access by the kernel and possibly crash the system.
++ *
++ * This function will check if the event has any fields flagged as needing
++ * to be checked at runtime and perform those checks.
++ *
++ * If it is found that a field is unsafe, it will write into the @iter->seq
++ * a message stating what was found to be unsafe.
++ *
++ * @return: true if the event is unsafe and should be ignored,
++ * false otherwise.
+ */
+-void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+- va_list ap)
++bool ignore_event(struct trace_iterator *iter)
+ {
+- long text_delta = 0;
+- long data_delta = 0;
+- const char *p = fmt;
+- const char *str;
+- bool good;
+- int i, j;
++ struct ftrace_event_field *field;
++ struct trace_event *trace_event;
++ struct trace_event_call *event;
++ struct list_head *head;
++ struct trace_seq *seq;
++ const void *ptr;
+
+- if (WARN_ON_ONCE(!fmt))
+- return;
++ trace_event = ftrace_find_event(iter->ent->type);
+
+- if (static_branch_unlikely(&trace_no_verify))
+- goto print;
++ seq = &iter->seq;
+
+- /*
+- * When the kernel is booted with the tp_printk command line
+- * parameter, trace events go directly through to printk().
+- * It also is checked by this function, but it does not
+- * have an associated trace_array (tr) for it.
+- */
+- if (iter->tr) {
+- text_delta = iter->tr->text_delta;
+- data_delta = iter->tr->data_delta;
++ if (!trace_event) {
++ trace_seq_printf(seq, "EVENT ID %d NOT FOUND?\n", iter->ent->type);
++ return true;
+ }
+
+- /* Don't bother checking when doing a ftrace_dump() */
+- if (iter->fmt == static_fmt_buf)
+- goto print;
+-
+- while (*p) {
+- bool star = false;
+- int len = 0;
+-
+- j = 0;
+-
+- /*
+- * We only care about %s and variants
+- * as well as %p[sS] if delta is non-zero
+- */
+- for (i = 0; p[i]; i++) {
+- if (i + 1 >= iter->fmt_size) {
+- /*
+- * If we can't expand the copy buffer,
+- * just print it.
+- */
+- if (!trace_iter_expand_format(iter))
+- goto print;
+- }
+-
+- if (p[i] == '\\' && p[i+1]) {
+- i++;
+- continue;
+- }
+- if (p[i] == '%') {
+- /* Need to test cases like %08.*s */
+- for (j = 1; p[i+j]; j++) {
+- if (isdigit(p[i+j]) ||
+- p[i+j] == '.')
+- continue;
+- if (p[i+j] == '*') {
+- star = true;
+- continue;
+- }
+- break;
+- }
+- if (p[i+j] == 's')
+- break;
+-
+- if (text_delta && p[i+1] == 'p' &&
+- ((p[i+2] == 's' || p[i+2] == 'S')))
+- break;
+-
+- star = false;
+- }
+- j = 0;
+- }
+- /* If no %s found then just print normally */
+- if (!p[i])
+- break;
+-
+- /* Copy up to the %s, and print that */
+- strncpy(iter->fmt, p, i);
+- iter->fmt[i] = '\0';
+- trace_seq_vprintf(&iter->seq, iter->fmt, ap);
++ event = container_of(trace_event, struct trace_event_call, event);
++ if (!(event->flags & TRACE_EVENT_FL_TEST_STR))
++ return false;
+
+- /* Add delta to %pS pointers */
+- if (p[i+1] == 'p') {
+- unsigned long addr;
+- char fmt[4];
++ head = trace_get_fields(event);
++ if (!head) {
++ trace_seq_printf(seq, "FIELDS FOR EVENT '%s' NOT FOUND?\n",
++ trace_event_name(event));
++ return true;
++ }
+
+- fmt[0] = '%';
+- fmt[1] = 'p';
+- fmt[2] = p[i+2]; /* Either %ps or %pS */
+- fmt[3] = '\0';
++ /* Offsets are from the iter->ent that points to the raw event */
++ ptr = iter->ent;
+
+- addr = va_arg(ap, unsigned long);
+- addr += text_delta;
+- trace_seq_printf(&iter->seq, fmt, (void *)addr);
++ list_for_each_entry(field, head, link) {
++ const char *str;
++ bool good;
+
+- p += i + 3;
++ if (!field->needs_test)
+ continue;
+- }
+
+- /*
+- * If iter->seq is full, the above call no longer guarantees
+- * that ap is in sync with fmt processing, and further calls
+- * to va_arg() can return wrong positional arguments.
+- *
+- * Ensure that ap is no longer used in this case.
+- */
+- if (iter->seq.full) {
+- p = "";
+- break;
+- }
++ str = *(const char **)(ptr + field->offset);
+
+- if (star)
+- len = va_arg(ap, int);
+-
+- /* The ap now points to the string data of the %s */
+- str = va_arg(ap, const char *);
+-
+- good = trace_safe_str(iter, str, star, len);
+-
+- /* Could be from the last boot */
+- if (data_delta && !good) {
+- str += data_delta;
+- good = trace_safe_str(iter, str, star, len);
+- }
++ good = trace_safe_str(iter, str);
+
+ /*
+ * If you hit this warning, it is likely that the
+@@ -3870,44 +3753,14 @@ void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+ * instead. See samples/trace_events/trace-events-sample.h
+ * for reference.
+ */
+- if (WARN_ONCE(!good, "fmt: '%s' current_buffer: '%s'",
+- fmt, seq_buf_str(&iter->seq.seq))) {
+- int ret;
+-
+- /* Try to safely read the string */
+- if (star) {
+- if (len + 1 > iter->fmt_size)
+- len = iter->fmt_size - 1;
+- if (len < 0)
+- len = 0;
+- ret = copy_from_kernel_nofault(iter->fmt, str, len);
+- iter->fmt[len] = 0;
+- star = false;
+- } else {
+- ret = strncpy_from_kernel_nofault(iter->fmt, str,
+- iter->fmt_size);
+- }
+- if (ret < 0)
+- trace_seq_printf(&iter->seq, "(0x%px)", str);
+- else
+- trace_seq_printf(&iter->seq, "(0x%px:%s)",
+- str, iter->fmt);
+- str = "[UNSAFE-MEMORY]";
+- strcpy(iter->fmt, "%s");
+- } else {
+- strncpy(iter->fmt, p + i, j + 1);
+- iter->fmt[j+1] = '\0';
++ if (WARN_ONCE(!good, "event '%s' has unsafe pointer field '%s'",
++ trace_event_name(event), field->name)) {
++ trace_seq_printf(seq, "EVENT %s: HAS UNSAFE POINTER FIELD '%s'\n",
++ trace_event_name(event), field->name);
++ return true;
+ }
+- if (star)
+- trace_seq_printf(&iter->seq, iter->fmt, len, str);
+- else
+- trace_seq_printf(&iter->seq, iter->fmt, str);
+-
+- p += i + j + 1;
+ }
+- print:
+- if (*p)
+- trace_seq_vprintf(&iter->seq, p, ap);
++ return false;
+ }
+
+ const char *trace_event_format(struct trace_iterator *iter, const char *fmt)
+@@ -4377,6 +4230,15 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
+ if (event) {
+ if (tr->trace_flags & TRACE_ITER_FIELDS)
+ return print_event_fields(iter, event);
++ /*
++ * For TRACE_EVENT() events, the print_fmt is not
++ * safe to use if the array has delta offsets
++ * Force printing via the fields.
++ */
++ if ((tr->text_delta || tr->data_delta) &&
++ event->type > __TRACE_LAST_TYPE)
++ return print_event_fields(iter, event);
++
+ return event->funcs->trace(iter, sym_flags, event);
+ }
+
+@@ -10794,8 +10656,6 @@ __init static int tracer_alloc_buffers(void)
+
+ register_snapshot_cmd();
+
+- test_can_verify();
+-
+ return 0;
+
+ out_free_pipe_cpumask:
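
The replacement scheme splits the work: at boot, test_event_printk() marks suspicious "%s" fields with needs_test and sets TRACE_EVENT_FL_TEST_STR on the event; at print time, ignore_event() walks only the flagged fields, pulls each string pointer out of the raw record by offset, and drops the event if the pointer is not provably safe. This avoids re-parsing the format string (and walking a va_list) on every print, which is what the removed trace_check_vprintf() did. A schematic of the read-by-offset-and-validate loop, with stand-in types:

    #include <stdbool.h>
    #include <stddef.h>

    struct toy_field {
        size_t offset;        /* offset of the char * inside the record */
        bool needs_test;      /* flagged at boot, like field->needs_test */
    };

    /* Pretend validator: the real one checks the pointer against the
     * event payload, kernel rodata, etc. (see trace_safe_str()). */
    static bool str_is_safe(const char *str) { return str != NULL; }

    /* Return true if the event should be ignored, like ignore_event(). */
    static bool toy_ignore_event(const void *record,
                                 const struct toy_field *fields, int nfields)
    {
        for (int i = 0; i < nfields; i++) {
            const char *str;

            if (!fields[i].needs_test)
                continue;
            str = *(const char * const *)((const char *)record +
                                          fields[i].offset);
            if (!str_is_safe(str))
                return true;  /* unsafe pointer: drop the event */
        }
        return false;
    }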
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 30d6675c78cfe1..04ea327198ba80 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -664,9 +664,8 @@ void trace_buffer_unlock_commit_nostack(struct trace_buffer *buffer,
+
+ bool trace_is_tracepoint_string(const char *str);
+ const char *trace_event_format(struct trace_iterator *iter, const char *fmt);
+-void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+- va_list ap) __printf(2, 0);
+ char *trace_iter_expand_format(struct trace_iterator *iter);
++bool ignore_event(struct trace_iterator *iter);
+
+ int trace_empty(struct trace_iterator *iter);
+
+@@ -1402,7 +1401,8 @@ struct ftrace_event_field {
+ int filter_type;
+ int offset;
+ int size;
+- int is_signed;
++ unsigned int is_signed:1;
++ unsigned int needs_test:1;
+ int len;
+ };
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 7266ec2a4eea00..7149cd6fd4795e 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -82,7 +82,7 @@ static int system_refcount_dec(struct event_subsystem *system)
+ }
+
+ static struct ftrace_event_field *
+-__find_event_field(struct list_head *head, char *name)
++__find_event_field(struct list_head *head, const char *name)
+ {
+ struct ftrace_event_field *field;
+
+@@ -114,7 +114,8 @@ trace_find_event_field(struct trace_event_call *call, char *name)
+
+ static int __trace_define_field(struct list_head *head, const char *type,
+ const char *name, int offset, int size,
+- int is_signed, int filter_type, int len)
++ int is_signed, int filter_type, int len,
++ int need_test)
+ {
+ struct ftrace_event_field *field;
+
+@@ -133,6 +134,7 @@ static int __trace_define_field(struct list_head *head, const char *type,
+ field->offset = offset;
+ field->size = size;
+ field->is_signed = is_signed;
++ field->needs_test = need_test;
+ field->len = len;
+
+ list_add(&field->link, head);
+@@ -151,13 +153,13 @@ int trace_define_field(struct trace_event_call *call, const char *type,
+
+ head = trace_get_fields(call);
+ return __trace_define_field(head, type, name, offset, size,
+- is_signed, filter_type, 0);
++ is_signed, filter_type, 0, 0);
+ }
+ EXPORT_SYMBOL_GPL(trace_define_field);
+
+ static int trace_define_field_ext(struct trace_event_call *call, const char *type,
+ const char *name, int offset, int size, int is_signed,
+- int filter_type, int len)
++ int filter_type, int len, int need_test)
+ {
+ struct list_head *head;
+
+@@ -166,13 +168,13 @@ static int trace_define_field_ext(struct trace_event_call *call, const char *typ
+
+ head = trace_get_fields(call);
+ return __trace_define_field(head, type, name, offset, size,
+- is_signed, filter_type, len);
++ is_signed, filter_type, len, need_test);
+ }
+
+ #define __generic_field(type, item, filter_type) \
+ ret = __trace_define_field(&ftrace_generic_fields, #type, \
+ #item, 0, 0, is_signed_type(type), \
+- filter_type, 0); \
++ filter_type, 0, 0); \
+ if (ret) \
+ return ret;
+
+@@ -181,7 +183,8 @@ static int trace_define_field_ext(struct trace_event_call *call, const char *typ
+ "common_" #item, \
+ offsetof(typeof(ent), item), \
+ sizeof(ent.item), \
+- is_signed_type(type), FILTER_OTHER, 0); \
++ is_signed_type(type), FILTER_OTHER, \
++ 0, 0); \
+ if (ret) \
+ return ret;
+
+@@ -244,19 +247,16 @@ int trace_event_get_offsets(struct trace_event_call *call)
+ return tail->offset + tail->size;
+ }
+
+-/*
+- * Check if the referenced field is an array and return true,
+- * as arrays are OK to dereference.
+- */
+-static bool test_field(const char *fmt, struct trace_event_call *call)
++
++static struct trace_event_fields *find_event_field(const char *fmt,
++ struct trace_event_call *call)
+ {
+ struct trace_event_fields *field = call->class->fields_array;
+- const char *array_descriptor;
+ const char *p = fmt;
+ int len;
+
+ if (!(len = str_has_prefix(fmt, "REC->")))
+- return false;
++ return NULL;
+ fmt += len;
+ for (p = fmt; *p; p++) {
+ if (!isalnum(*p) && *p != '_')
+@@ -265,16 +265,129 @@ static bool test_field(const char *fmt, struct trace_event_call *call)
+ len = p - fmt;
+
+ for (; field->type; field++) {
+- if (strncmp(field->name, fmt, len) ||
+- field->name[len])
++ if (strncmp(field->name, fmt, len) || field->name[len])
+ continue;
+- array_descriptor = strchr(field->type, '[');
+- /* This is an array and is OK to dereference. */
+- return array_descriptor != NULL;
++
++ return field;
++ }
++ return NULL;
++}
++
++/*
++ * Check if the referenced field is an array and return true,
++ * as arrays are OK to dereference.
++ */
++static bool test_field(const char *fmt, struct trace_event_call *call)
++{
++ struct trace_event_fields *field;
++
++ field = find_event_field(fmt, call);
++ if (!field)
++ return false;
++
++ /* This is an array and is OK to dereference. */
++ return strchr(field->type, '[') != NULL;
++}
++
++/* Look for a string within an argument */
++static bool find_print_string(const char *arg, const char *str, const char *end)
++{
++ const char *r;
++
++ r = strstr(arg, str);
++ return r && r < end;
++}
++
++/* Return true if the argument pointer is safe */
++static bool process_pointer(const char *fmt, int len, struct trace_event_call *call)
++{
++ const char *r, *e, *a;
++
++ e = fmt + len;
++
++ /* Find the REC-> in the argument */
++ r = strstr(fmt, "REC->");
++ if (r && r < e) {
++ /*
++ * Addresses of events on the buffer, or an array on the buffer, are
++ * OK to dereference. There are ways to fool this, but
++ * this is to catch common mistakes, not malicious code.
++ */
++ a = strchr(fmt, '&');
++ if ((a && (a < r)) || test_field(r, call))
++ return true;
++ } else if (find_print_string(fmt, "__get_dynamic_array(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_rel_dynamic_array(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_dynamic_array_len(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_rel_dynamic_array_len(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_sockaddr(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_rel_sockaddr(", e)) {
++ return true;
+ }
+ return false;
+ }
+
++/* Return true if the string is safe */
++static bool process_string(const char *fmt, int len, struct trace_event_call *call)
++{
++ struct trace_event_fields *field;
++ const char *r, *e, *s;
++
++ e = fmt + len;
++
++ /*
++ * There are several helper functions that return strings.
++ * If the argument contains a function, then assume its field is valid.
++ * The argument is considered to contain a function call if an
++ * alphanumeric character or '_' appears immediately before a parenthesis.
++ */
++ s = fmt;
++ do {
++ r = strstr(s, "(");
++ if (!r || r >= e)
++ break;
++ for (int i = 1; r - i >= s; i++) {
++ char ch = *(r - i);
++ if (isspace(ch))
++ continue;
++ if (isalnum(ch) || ch == '_')
++ return true;
++ /* Anything else, this isn't a function */
++ break;
++ }
++ /* A function could be wrapped in parentheses, try the next one */
++ s = r + 1;
++ } while (s < e);
++
++ /*
++ * If there are any strings in the argument, consider this arg OK, as it
++ * could be: REC->field ? "foo" : "bar", and we don't want to get into
++ * verifying that logic here.
++ */
++ if (find_print_string(fmt, "\"", e))
++ return true;
++
++ /* Dereferenced strings are also valid like any other pointer */
++ if (process_pointer(fmt, len, call))
++ return true;
++
++ /* Make sure the field is found */
++ field = find_event_field(fmt, call);
++ if (!field)
++ return false;
++
++ /* Test this field's string before printing the event */
++ call->flags |= TRACE_EVENT_FL_TEST_STR;
++ field->needs_test = 1;
++
++ return true;
++}
++
+ /*
+ * Examine the print fmt of the event looking for unsafe dereference
+ * pointers using %p* that could be recorded in the trace event and
+@@ -284,13 +397,14 @@ static bool test_field(const char *fmt, struct trace_event_call *call)
+ static void test_event_printk(struct trace_event_call *call)
+ {
+ u64 dereference_flags = 0;
++ u64 string_flags = 0;
+ bool first = true;
+- const char *fmt, *c, *r, *a;
++ const char *fmt;
+ int parens = 0;
+ char in_quote = 0;
+ int start_arg = 0;
+ int arg = 0;
+- int i;
++ int i, e;
+
+ fmt = call->print_fmt;
+
+@@ -374,8 +488,16 @@ static void test_event_printk(struct trace_event_call *call)
+ star = true;
+ continue;
+ }
+- if ((fmt[i + j] == 's') && star)
+- arg++;
++ if ((fmt[i + j] == 's')) {
++ if (star)
++ arg++;
++ if (WARN_ONCE(arg == 63,
++ "Too many args for event: %s",
++ trace_event_name(call)))
++ return;
++ dereference_flags |= 1ULL << arg;
++ string_flags |= 1ULL << arg;
++ }
+ break;
+ }
+ break;
+@@ -403,42 +525,47 @@ static void test_event_printk(struct trace_event_call *call)
+ case ',':
+ if (in_quote || parens)
+ continue;
++ e = i;
+ i++;
+ while (isspace(fmt[i]))
+ i++;
+- start_arg = i;
+- if (!(dereference_flags & (1ULL << arg)))
+- goto next_arg;
+
+- /* Find the REC-> in the argument */
+- c = strchr(fmt + i, ',');
+- r = strstr(fmt + i, "REC->");
+- if (r && (!c || r < c)) {
+- /*
+- * Addresses of events on the buffer,
+- * or an array on the buffer is
+- * OK to dereference.
+- * There's ways to fool this, but
+- * this is to catch common mistakes,
+- * not malicious code.
+- */
+- a = strchr(fmt + i, '&');
+- if ((a && (a < r)) || test_field(r, call))
++ /*
++ * If start_arg is zero, then this is the start of the
++ * first argument. The processing of the argument happens
++ * when the end of the argument is found, as it needs to
++ * handle parentheses and such.
++ */
++ if (!start_arg) {
++ start_arg = i;
++ /* Balance out the i++ in the for loop */
++ i--;
++ continue;
++ }
++
++ if (dereference_flags & (1ULL << arg)) {
++ if (string_flags & (1ULL << arg)) {
++ if (process_string(fmt + start_arg, e - start_arg, call))
++ dereference_flags &= ~(1ULL << arg);
++ } else if (process_pointer(fmt + start_arg, e - start_arg, call))
+ dereference_flags &= ~(1ULL << arg);
+- } else if ((r = strstr(fmt + i, "__get_dynamic_array(")) &&
+- (!c || r < c)) {
+- dereference_flags &= ~(1ULL << arg);
+- } else if ((r = strstr(fmt + i, "__get_sockaddr(")) &&
+- (!c || r < c)) {
+- dereference_flags &= ~(1ULL << arg);
+ }
+
+- next_arg:
+- i--;
++ start_arg = i;
+ arg++;
++ /* Balance out the i++ in the for loop */
++ i--;
+ }
+ }
+
++ if (dereference_flags & (1ULL << arg)) {
++ if (string_flags & (1ULL << arg)) {
++ if (process_string(fmt + start_arg, i - start_arg, call))
++ dereference_flags &= ~(1ULL << arg);
++ } else if (process_pointer(fmt + start_arg, i - start_arg, call))
++ dereference_flags &= ~(1ULL << arg);
++ }
++
+ /*
+ * If you triggered the below warning, the trace event reported
+ * uses an unsafe dereference pointer %p*. As the data stored
+@@ -2471,7 +2598,7 @@ event_define_fields(struct trace_event_call *call)
+ ret = trace_define_field_ext(call, field->type, field->name,
+ offset, field->size,
+ field->is_signed, field->filter_type,
+- field->len);
++ field->len, field->needs_test);
+ if (WARN_ON_ONCE(ret)) {
+ pr_err("error code is %d\n", ret);
+ break;
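
test_event_printk() tracks up to 64 printf arguments with two u64 bitmasks: dereference_flags marks arguments that dereference a pointer, and the new string_flags marks which of those are "%s". When an argument is later proven safe its bit is cleared; any bit still set at the end triggers the warning, and the WARN_ONCE at arg == 63 caps the index so the shifts stay in range. The bookkeeping in miniature:

    #include <stdint.h>
    #include <stdbool.h>

    struct arg_flags {
        uint64_t deref;       /* like dereference_flags */
        uint64_t string;      /* like string_flags */
    };

    static bool mark_arg(struct arg_flags *f, int arg, bool is_string)
    {
        if (arg >= 63)        /* mirror the WARN_ONCE overflow guard */
            return false;
        f->deref |= 1ULL << arg;
        if (is_string)
            f->string |= 1ULL << arg;
        return true;
    }

    /* An argument proven safe clears its pending-deref bit. */
    static void arg_is_safe(struct arg_flags *f, int arg)
    {
        f->deref &= ~(1ULL << arg);
    }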
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index c14573e5a90337..6e7090e8bf3097 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -317,10 +317,14 @@ EXPORT_SYMBOL(trace_raw_output_prep);
+
+ void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...)
+ {
++ struct trace_seq *s = &iter->seq;
+ va_list ap;
+
++ if (ignore_event(iter))
++ return;
++
+ va_start(ap, fmt);
+- trace_check_vprintf(iter, trace_event_format(iter, fmt), ap);
++ trace_seq_vprintf(s, trace_event_format(iter, fmt), ap);
+ va_end(ap);
+ }
+ EXPORT_SYMBOL(trace_event_printf);
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 5734d5d5060f32..7e0f72cd9fd4a0 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -3503,7 +3503,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+ !list_empty(&folio->_deferred_list)) {
+ ds_queue->split_queue_len--;
+ if (folio_test_partially_mapped(folio)) {
+- __folio_clear_partially_mapped(folio);
++ folio_clear_partially_mapped(folio);
+ mod_mthp_stat(folio_order(folio),
+ MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
+ }
+@@ -3615,7 +3615,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
+ if (!list_empty(&folio->_deferred_list)) {
+ ds_queue->split_queue_len--;
+ if (folio_test_partially_mapped(folio)) {
+- __folio_clear_partially_mapped(folio);
++ folio_clear_partially_mapped(folio);
+ mod_mthp_stat(folio_order(folio),
+ MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
+ }
+@@ -3659,7 +3659,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ if (partially_mapped) {
+ if (!folio_test_partially_mapped(folio)) {
+- __folio_set_partially_mapped(folio);
++ folio_set_partially_mapped(folio);
+ if (folio_test_pmd_mappable(folio))
+ count_vm_event(THP_DEFERRED_SPLIT_PAGE);
+ count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
+@@ -3752,7 +3752,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
+ } else {
+ /* We lost race with folio_put() */
+ if (folio_test_partially_mapped(folio)) {
+- __folio_clear_partially_mapped(folio);
++ folio_clear_partially_mapped(folio);
+ mod_mthp_stat(folio_order(folio),
+ MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 190fa05635f4a9..5dc57b74a8fe9a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5333,7 +5333,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
+ break;
+ }
+ ret = copy_user_large_folio(new_folio, pte_folio,
+- ALIGN_DOWN(addr, sz), dst_vma);
++ addr, dst_vma);
+ folio_put(pte_folio);
+ if (ret) {
+ folio_put(new_folio);
+@@ -6632,8 +6632,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
+ *foliop = NULL;
+ goto out;
+ }
+- ret = copy_user_large_folio(folio, *foliop,
+- ALIGN_DOWN(dst_addr, size), dst_vma);
++ ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
+ folio_put(*foliop);
+ *foliop = NULL;
+ if (ret) {
+diff --git a/mm/memory.c b/mm/memory.c
+index bdf77a3ec47bc2..d322ddfe679167 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -6780,9 +6780,10 @@ static inline int process_huge_page(
+ return 0;
+ }
+
+-static void clear_gigantic_page(struct folio *folio, unsigned long addr,
++static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
+ unsigned int nr_pages)
+ {
++ unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
+ int i;
+
+ might_sleep();
+@@ -6816,13 +6817,14 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
+ }
+
+ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
+- unsigned long addr,
++ unsigned long addr_hint,
+ struct vm_area_struct *vma,
+ unsigned int nr_pages)
+ {
+- int i;
++ unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
+ struct page *dst_page;
+ struct page *src_page;
++ int i;
+
+ for (i = 0; i < nr_pages; i++) {
+ dst_page = folio_page(dst, i);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index b6958333054d06..de65e8b4f75f21 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1238,13 +1238,15 @@ static void split_large_buddy(struct zone *zone, struct page *page,
+ if (order > pageblock_order)
+ order = pageblock_order;
+
+- while (pfn != end) {
++ do {
+ int mt = get_pfnblock_migratetype(page, pfn);
+
+ __free_one_page(page, pfn, zone, order, mt, fpi);
+ pfn += 1 << order;
++ if (pfn == end)
++ break;
+ page = pfn_to_page(pfn);
+- }
++ } while (1);
+ }
+
+ static void free_one_page(struct zone *zone, struct page *page,
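
The while loop called pfn_to_page(pfn) one last time when pfn reached end; end can sit one past the zone (or past a valid memmap section), so that lookup touched an out-of-range page. Converting to a do/while that breaks before the lookup keeps the same iteration but never translates the one-past-the-end pfn. The shape of the fix, demonstrated over a plain array:

    #include <stdio.h>

    /* Process [start, end) in 'step'-sized chunks without ever looking
     * up the element at 'end', mirroring the split_large_buddy() rework
     * (pfn_to_page() on an out-of-range pfn is not a valid lookup). */
    static void walk_chunks(const int *base, unsigned long start,
                            unsigned long end, unsigned long step)
    {
        unsigned long pfn = start;
        const int *page = base + pfn;

        do {
            printf("chunk at %lu: %d\n", pfn, *page);
            pfn += step;
            if (pfn == end)
                break;          /* stop before looking up base + end */
            page = base + pfn;
        } while (1);
    }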
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 568bb290bdce3e..b03ced0c3d4858 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -779,6 +779,14 @@ static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+ }
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
++static void shmem_update_stats(struct folio *folio, int nr_pages)
++{
++ if (folio_test_pmd_mappable(folio))
++ __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
++ __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
++ __lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
++}
++
+ /*
+ * Somewhat like filemap_add_folio, but error if expected item has gone.
+ */
+@@ -813,10 +821,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
+ xas_store(&xas, folio);
+ if (xas_error(&xas))
+ goto unlock;
+- if (folio_test_pmd_mappable(folio))
+- __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
+- __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+- __lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
++ shmem_update_stats(folio, nr);
+ mapping->nrpages += nr;
+ unlock:
+ xas_unlock_irq(&xas);
+@@ -844,8 +849,7 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
+ error = shmem_replace_entry(mapping, folio->index, folio, radswap);
+ folio->mapping = NULL;
+ mapping->nrpages -= nr;
+- __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
+- __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
++ shmem_update_stats(folio, -nr);
+ xa_unlock_irq(&mapping->i_pages);
+ folio_put_refs(folio, nr);
+ BUG_ON(error);
+@@ -1944,10 +1948,8 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
+ }
+ if (!error) {
+ mem_cgroup_replace_folio(old, new);
+- __lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages);
+- __lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages);
+- __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages);
+- __lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages);
++ shmem_update_stats(new, nr_pages);
++ shmem_update_stats(old, -nr_pages);
+ }
+ xa_unlock_irq(&swap_mapping->i_pages);
+
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 0161cb4391e1d1..3f9255dfacb0c1 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3369,7 +3369,8 @@ void vfree(const void *addr)
+ struct page *page = vm->pages[i];
+
+ BUG_ON(!page);
+- mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
++ if (!(vm->flags & VM_MAP_PUT_PAGES))
++ mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+ /*
+ * High-order allocs for huge vmallocs are split, so
+ * can be freed as an array of order-0 allocations
+@@ -3377,7 +3378,8 @@ void vfree(const void *addr)
+ __free_page(page);
+ cond_resched();
+ }
+- atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
++ if (!(vm->flags & VM_MAP_PUT_PAGES))
++ atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+ kvfree(vm->pages);
+ kfree(vm);
+ }
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index d2baa1af9df09e..7ce22f40db5b04 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -359,10 +359,10 @@ static int
+ netdev_nl_queue_fill(struct sk_buff *rsp, struct net_device *netdev, u32 q_idx,
+ u32 q_type, const struct genl_info *info)
+ {
+- int err = 0;
++ int err;
+
+ if (!(netdev->flags & IFF_UP))
+- return err;
++ return -ENOENT;
+
+ err = netdev_nl_queue_validate(netdev, q_idx, q_type);
+ if (err)
+@@ -417,24 +417,21 @@ netdev_nl_queue_dump_one(struct net_device *netdev, struct sk_buff *rsp,
+ struct netdev_nl_dump_ctx *ctx)
+ {
+ int err = 0;
+- int i;
+
+ if (!(netdev->flags & IFF_UP))
+ return err;
+
+- for (i = ctx->rxq_idx; i < netdev->real_num_rx_queues;) {
+- err = netdev_nl_queue_fill_one(rsp, netdev, i,
++ for (; ctx->rxq_idx < netdev->real_num_rx_queues; ctx->rxq_idx++) {
++ err = netdev_nl_queue_fill_one(rsp, netdev, ctx->rxq_idx,
+ NETDEV_QUEUE_TYPE_RX, info);
+ if (err)
+ return err;
+- ctx->rxq_idx = i++;
+ }
+- for (i = ctx->txq_idx; i < netdev->real_num_tx_queues;) {
+- err = netdev_nl_queue_fill_one(rsp, netdev, i,
++ for (; ctx->txq_idx < netdev->real_num_tx_queues; ctx->txq_idx++) {
++ err = netdev_nl_queue_fill_one(rsp, netdev, ctx->txq_idx,
+ NETDEV_QUEUE_TYPE_TX, info);
+ if (err)
+ return err;
+- ctx->txq_idx = i++;
+ }
+
+ return err;
+@@ -600,7 +597,7 @@ netdev_nl_stats_by_queue(struct net_device *netdev, struct sk_buff *rsp,
+ i, info);
+ if (err)
+ return err;
+- ctx->rxq_idx = i++;
++ ctx->rxq_idx = ++i;
+ }
+ i = ctx->txq_idx;
+ while (ops->get_queue_stats_tx && i < netdev->real_num_tx_queues) {
+@@ -608,7 +605,7 @@ netdev_nl_stats_by_queue(struct net_device *netdev, struct sk_buff *rsp,
+ i, info);
+ if (err)
+ return err;
+- ctx->txq_idx = i++;
++ ctx->txq_idx = ++i;
+ }
+
+ ctx->rxq_idx = 0;
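
Both netdev-genl hunks above fix the same post-increment slip: `ctx->rxq_idx = i++` saves the index of the queue that was just dumped, not the next one, so an interrupted dump re-emits that queue when it resumes. A standalone sketch of the difference (plain C, not kernel code):

    #include <stdio.h>

    int main(void)
    {
        int i = 3, resume;

        /* Buggy: post-increment evaluates to the OLD value, so the
         * resume point stays on the queue already dumped. */
        resume = i++;
        printf("buggy resume index: %d (duplicate)\n", resume); /* 3 */

        /* Fixed: pre-increment evaluates to the NEW value, so the next
         * dump pass starts at the first queue not yet emitted. */
        i = 3;
        resume = ++i;
        printf("fixed resume index: %d\n", resume);             /* 4 */
        return 0;
    }
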
+diff --git a/net/dsa/tag.h b/net/dsa/tag.h
+index d5707870906bc9..5d80ddad4ff6b1 100644
+--- a/net/dsa/tag.h
++++ b/net/dsa/tag.h
+@@ -138,9 +138,10 @@ static inline void dsa_software_untag_vlan_unaware_bridge(struct sk_buff *skb,
+ * dsa_software_vlan_untag: Software VLAN untagging in DSA receive path
+ * @skb: Pointer to socket buffer (packet)
+ *
+- * Receive path method for switches which cannot avoid tagging all packets
+- * towards the CPU port. Called when ds->untag_bridge_pvid (legacy) or
+- * ds->untag_vlan_aware_bridge_pvid is set to true.
++ * Receive path method for switches which send some packets as VLAN-tagged
++ * towards the CPU port (generally from VLAN-aware bridge ports) even when the
++ * packet was not tagged on the wire. Called when ds->untag_bridge_pvid
++ * (legacy) or ds->untag_vlan_aware_bridge_pvid is set to true.
+ *
+ * As a side effect of this method, any VLAN tag from the skb head is moved
+ * to hwaccel.
+@@ -149,14 +150,19 @@ static inline struct sk_buff *dsa_software_vlan_untag(struct sk_buff *skb)
+ {
+ struct dsa_port *dp = dsa_user_to_port(skb->dev);
+ struct net_device *br = dsa_port_bridge_dev_get(dp);
+- u16 vid;
++ u16 vid, proto;
++ int err;
+
+ /* software untagging for standalone ports not yet necessary */
+ if (!br)
+ return skb;
+
++ err = br_vlan_get_proto(br, &proto);
++ if (err)
++ return skb;
++
+ /* Move VLAN tag from data to hwaccel */
+- if (!skb_vlan_tag_present(skb)) {
++ if (!skb_vlan_tag_present(skb) && skb->protocol == htons(proto)) {
+ skb = skb_vlan_untag(skb);
+ if (!skb)
+ return NULL;
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 597e9cf5aa6444..3f2bd65ff5e3c9 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -374,8 +374,13 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ msk = NULL;
+ rc = -EINVAL;
+
+- /* we may be receiving a locally-routed packet; drop source sk
+- * accounting
++ /* We may be receiving a locally-routed packet; drop source sk
++ * accounting.
++ *
++ * From here, we will either queue the skb - either to a frag_queue, or
++ * to a receiving socket. When that succeeds, we clear the skb pointer;
++ * a non-NULL skb on exit will be otherwise unowned, and hence
++ * kfree_skb()-ed.
+ */
+ skb_orphan(skb);
+
+@@ -434,7 +439,9 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ * pending key.
+ */
+ if (flags & MCTP_HDR_FLAG_EOM) {
+- sock_queue_rcv_skb(&msk->sk, skb);
++ rc = sock_queue_rcv_skb(&msk->sk, skb);
++ if (!rc)
++ skb = NULL;
+ if (key) {
+ /* we've hit a pending reassembly; not much we
+ * can do but drop it
+@@ -443,7 +450,6 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ MCTP_TRACE_KEY_REPLIED);
+ key = NULL;
+ }
+- rc = 0;
+ goto out_unlock;
+ }
+
+@@ -470,8 +476,10 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ * this function.
+ */
+ rc = mctp_key_add(key, msk);
+- if (!rc)
++ if (!rc) {
+ trace_mctp_key_acquire(key);
++ skb = NULL;
++ }
+
+ /* we don't need to release key->lock on exit, so
+ * clean up here and suppress the unlock via
+@@ -489,6 +497,8 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ key = NULL;
+ } else {
+ rc = mctp_frag_queue(key, skb);
++ if (!rc)
++ skb = NULL;
+ }
+ }
+
+@@ -503,12 +513,19 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ else
+ rc = mctp_frag_queue(key, skb);
+
++ if (rc)
++ goto out_unlock;
++
++ /* we've queued; the queue owns the skb now */
++ skb = NULL;
++
+ /* end of message? deliver to socket, and we're done with
+ * the reassembly/response key
+ */
+- if (!rc && flags & MCTP_HDR_FLAG_EOM) {
+- sock_queue_rcv_skb(key->sk, key->reasm_head);
+- key->reasm_head = NULL;
++ if (flags & MCTP_HDR_FLAG_EOM) {
++ rc = sock_queue_rcv_skb(key->sk, key->reasm_head);
++ if (!rc)
++ key->reasm_head = NULL;
+ __mctp_key_done_in(key, net, f, MCTP_TRACE_KEY_REPLIED);
+ key = NULL;
+ }
+@@ -527,8 +544,7 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ if (any_key)
+ mctp_key_unref(any_key);
+ out:
+- if (rc)
+- kfree_skb(skb);
++ kfree_skb(skb);
+ return rc;
+ }
+
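
The mctp_route_input() rework above converges on one ownership rule: every successful hand-off (to a receiving socket or a frag_queue) clears the local skb pointer, and the single kfree_skb() at the out label then frees only buffers nobody accepted. A userspace sketch of that consume-on-success pattern, with hypothetical names:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for a queue that consumes the buffer on success. */
    static int try_queue(char *buf, int should_fail)
    {
        if (should_fail)
            return -1;      /* delivery failed: caller keeps ownership */
        free(buf);          /* queued: the queue owns (and frees) it */
        return 0;
    }

    static int route_input(char *buf, int fail)
    {
        int rc = try_queue(buf, fail);

        if (!rc)
            buf = NULL;     /* ownership moved: forget the pointer */

        free(buf);          /* no-op when NULL, like kfree_skb(NULL) */
        return rc;
    }

    int main(void)
    {
        printf("delivered: %d\n", route_input(malloc(16), 0));
        printf("dropped:   %d\n", route_input(malloc(16), 1));
        return 0;
    }
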
+diff --git a/net/mctp/test/route-test.c b/net/mctp/test/route-test.c
+index 8551dab1d1e698..17165b86ce22d4 100644
+--- a/net/mctp/test/route-test.c
++++ b/net/mctp/test/route-test.c
+@@ -837,6 +837,90 @@ static void mctp_test_route_input_multiple_nets_key(struct kunit *test)
+ mctp_test_route_input_multiple_nets_key_fini(test, &t2);
+ }
+
++/* Input route to socket, using a single-packet message, where sock delivery
++ * fails. Ensure we're handling the failure appropriately.
++ */
++static void mctp_test_route_input_sk_fail_single(struct kunit *test)
++{
++ const struct mctp_hdr hdr = RX_HDR(1, 10, 8, FL_S | FL_E | FL_TO);
++ struct mctp_test_route *rt;
++ struct mctp_test_dev *dev;
++ struct socket *sock;
++ struct sk_buff *skb;
++ int rc;
++
++ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY);
++
++ /* No rcvbuf space, so delivery should fail. __sock_set_rcvbuf will
++ * clamp the minimum to SOCK_MIN_RCVBUF, so we open-code this.
++ */
++ lock_sock(sock->sk);
++ WRITE_ONCE(sock->sk->sk_rcvbuf, 0);
++ release_sock(sock->sk);
++
++ skb = mctp_test_create_skb(&hdr, 10);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, skb);
++ skb_get(skb);
++
++ mctp_test_skb_set_dev(skb, dev);
++
++ /* do route input, which should fail */
++ rc = mctp_route_input(&rt->rt, skb);
++ KUNIT_EXPECT_NE(test, rc, 0);
++
++ /* we should hold the only reference to skb */
++ KUNIT_EXPECT_EQ(test, refcount_read(&skb->users), 1);
++ kfree_skb(skb);
++
++ __mctp_route_test_fini(test, dev, rt, sock);
++}
++
++/* Input route to socket, using a fragmented message, where sock delivery fails.
++ */
++static void mctp_test_route_input_sk_fail_frag(struct kunit *test)
++{
++ const struct mctp_hdr hdrs[2] = { RX_FRAG(FL_S, 0), RX_FRAG(FL_E, 1) };
++ struct mctp_test_route *rt;
++ struct mctp_test_dev *dev;
++ struct sk_buff *skbs[2];
++ struct socket *sock;
++ unsigned int i;
++ int rc;
++
++ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY);
++
++ lock_sock(sock->sk);
++ WRITE_ONCE(sock->sk->sk_rcvbuf, 0);
++ release_sock(sock->sk);
++
++ for (i = 0; i < ARRAY_SIZE(skbs); i++) {
++ skbs[i] = mctp_test_create_skb(&hdrs[i], 10);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, skbs[i]);
++ skb_get(skbs[i]);
++
++ mctp_test_skb_set_dev(skbs[i], dev);
++ }
++
++ /* first route input should succeed, we're only queueing to the
++ * frag list
++ */
++ rc = mctp_route_input(&rt->rt, skbs[0]);
++ KUNIT_EXPECT_EQ(test, rc, 0);
++
++ /* final route input should fail to deliver to the socket */
++ rc = mctp_route_input(&rt->rt, skbs[1]);
++ KUNIT_EXPECT_NE(test, rc, 0);
++
++ /* we should hold the only reference to both skbs */
++ KUNIT_EXPECT_EQ(test, refcount_read(&skbs[0]->users), 1);
++ kfree_skb(skbs[0]);
++
++ KUNIT_EXPECT_EQ(test, refcount_read(&skbs[1]->users), 1);
++ kfree_skb(skbs[1]);
++
++ __mctp_route_test_fini(test, dev, rt, sock);
++}
++
+ #if IS_ENABLED(CONFIG_MCTP_FLOWS)
+
+ static void mctp_test_flow_init(struct kunit *test,
+@@ -1053,6 +1137,8 @@ static struct kunit_case mctp_test_cases[] = {
+ mctp_route_input_sk_reasm_gen_params),
+ KUNIT_CASE_PARAM(mctp_test_route_input_sk_keys,
+ mctp_route_input_sk_keys_gen_params),
++ KUNIT_CASE(mctp_test_route_input_sk_fail_single),
++ KUNIT_CASE(mctp_test_route_input_sk_fail_frag),
+ KUNIT_CASE(mctp_test_route_input_multiple_nets_bind),
+ KUNIT_CASE(mctp_test_route_input_multiple_nets_key),
+ KUNIT_CASE(mctp_test_packet_flow),
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index bfae7066936bb9..db794fe1300e69 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -611,6 +611,8 @@ init_list_set(struct net *net, struct ip_set *set, u32 size)
+ return true;
+ }
+
++static struct lock_class_key list_set_lockdep_key;
++
+ static int
+ list_set_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ u32 flags)
+@@ -627,6 +629,7 @@ list_set_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ if (size < IP_SET_LIST_MIN_SIZE)
+ size = IP_SET_LIST_MIN_SIZE;
+
++ lockdep_set_class(&set->lock, &list_set_lockdep_key);
+ set->variant = &set_variant;
+ set->dsize = ip_set_elem_len(set, tb, sizeof(struct set_elem),
+ __alignof__(struct set_elem));
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index 98d7dbe3d78760..c0289f83f96df8 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1495,8 +1495,8 @@ int __init ip_vs_conn_init(void)
+ max_avail -= 2; /* ~4 in hash row */
+ max_avail -= 1; /* IPVS up to 1/2 of mem */
+ max_avail -= order_base_2(sizeof(struct ip_vs_conn));
+- max = clamp(max, min, max_avail);
+- ip_vs_conn_tab_bits = clamp_val(ip_vs_conn_tab_bits, min, max);
++ max = clamp(max_avail, min, max);
++ ip_vs_conn_tab_bits = clamp(ip_vs_conn_tab_bits, min, max);
+ ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits;
+ ip_vs_conn_tab_mask = ip_vs_conn_tab_size - 1;
+
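
The ip_vs_conn_init() one-liner is easy to misread: clamp(val, lo, hi) bounds its first argument, and on a low-memory system max_avail can drop below min, so the old clamp(max, min, max_avail) ran with lo > hi (which the kernel's clamp() forbids) and could yield a table size below the minimum. A quick illustration with a naive clamp:

    #include <stdio.h>

    /* Naive clamp(); the kernel's version also asserts lo <= hi. */
    #define clamp(val, lo, hi) \
        ((val) < (lo) ? (lo) : (val) > (hi) ? (hi) : (val))

    int main(void)
    {
        int min = 8, max = 20;
        int max_avail = 7;    /* e.g. a low-memory box */

        /* Buggy order: lo > hi, and the naive expansion yields 7,
         * below the allowed minimum. */
        printf("buggy: %d\n", clamp(max, min, max_avail));

        /* Fixed order: the value is what memory allows, the range is
         * fixed, so lo <= hi always holds. Prints 8. */
        printf("fixed: %d\n", clamp(max_avail, min, max));
        return 0;
    }
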
+diff --git a/net/psample/psample.c b/net/psample/psample.c
+index a0ddae8a65f917..25f92ba0840c67 100644
+--- a/net/psample/psample.c
++++ b/net/psample/psample.c
+@@ -393,7 +393,9 @@ void psample_sample_packet(struct psample_group *group,
+ nla_total_size_64bit(sizeof(u64)) + /* timestamp */
+ nla_total_size(sizeof(u16)) + /* protocol */
+ (md->user_cookie_len ?
+- nla_total_size(md->user_cookie_len) : 0); /* user cookie */
++ nla_total_size(md->user_cookie_len) : 0) + /* user cookie */
++ (md->rate_as_probability ?
++ nla_total_size(0) : 0); /* rate as probability */
+
+ #ifdef CONFIG_INET
+ tun_info = skb_tunnel_info(skb);
+@@ -498,8 +500,9 @@ void psample_sample_packet(struct psample_group *group,
+ md->user_cookie))
+ goto error;
+
+- if (md->rate_as_probability)
+- nla_put_flag(nl_skb, PSAMPLE_ATTR_SAMPLE_PROBABILITY);
++ if (md->rate_as_probability &&
++ nla_put_flag(nl_skb, PSAMPLE_ATTR_SAMPLE_PROBABILITY))
++ goto error;
+
+ genlmsg_end(nl_skb, data);
+ genlmsg_multicast_netns(&psample_nl_family, group->net, nl_skb, 0,
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index f2f9b75008bb05..8d8b2db4653c0c 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -1525,7 +1525,6 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
+ b->backlogs[idx] -= len;
+ b->tin_backlog -= len;
+ sch->qstats.backlog -= len;
+- qdisc_tree_reduce_backlog(sch, 1, len);
+
+ flow->dropped++;
+ b->tin_dropped++;
+@@ -1536,6 +1535,7 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
+
+ __qdisc_drop(skb, to_free);
+ sch->q.qlen--;
++ qdisc_tree_reduce_backlog(sch, 1, len);
+
+ cake_heapify(q, 0);
+
+diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
+index 91072010923d18..757b89292e7e6f 100644
+--- a/net/sched/sch_choke.c
++++ b/net/sched/sch_choke.c
+@@ -123,10 +123,10 @@ static void choke_drop_by_idx(struct Qdisc *sch, unsigned int idx,
+ if (idx == q->tail)
+ choke_zap_tail_holes(q);
+
++ --sch->q.qlen;
+ qdisc_qstats_backlog_dec(sch, skb);
+ qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(skb));
+ qdisc_drop(skb, sch, to_free);
+- --sch->q.qlen;
+ }
+
+ struct choke_skb_cb {
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 9e6c69d18581ce..6cc7b846cff1bb 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2032,6 +2032,8 @@ static int smc_listen_prfx_check(struct smc_sock *new_smc,
+ if (pclc->hdr.typev1 == SMC_TYPE_N)
+ return 0;
+ pclc_prfx = smc_clc_proposal_get_prefix(pclc);
++ if (!pclc_prfx)
++ return -EPROTO;
+ if (smc_clc_prfx_match(newclcsock, pclc_prfx))
+ return SMC_CLC_DECL_DIFFPREFIX;
+
+@@ -2145,6 +2147,8 @@ static void smc_find_ism_v2_device_serv(struct smc_sock *new_smc,
+ pclc_smcd = smc_get_clc_msg_smcd(pclc);
+ smc_v2_ext = smc_get_clc_v2_ext(pclc);
+ smcd_v2_ext = smc_get_clc_smcd_v2_ext(smc_v2_ext);
++ if (!pclc_smcd || !smc_v2_ext || !smcd_v2_ext)
++ goto not_found;
+
+ mutex_lock(&smcd_dev_list.mutex);
+ if (pclc_smcd->ism.chid) {
+@@ -2221,7 +2225,9 @@ static void smc_find_ism_v1_device_serv(struct smc_sock *new_smc,
+ int rc = 0;
+
+ /* check if ISM V1 is available */
+- if (!(ini->smcd_version & SMC_V1) || !smcd_indicated(ini->smc_type_v1))
++ if (!(ini->smcd_version & SMC_V1) ||
++ !smcd_indicated(ini->smc_type_v1) ||
++ !pclc_smcd)
+ goto not_found;
+ ini->is_smcd = true; /* prepare ISM check */
+ ini->ism_peer_gid[0].gid = ntohll(pclc_smcd->ism.gid);
+@@ -2272,7 +2278,8 @@ static void smc_find_rdma_v2_device_serv(struct smc_sock *new_smc,
+ goto not_found;
+
+ smc_v2_ext = smc_get_clc_v2_ext(pclc);
+- if (!smc_clc_match_eid(ini->negotiated_eid, smc_v2_ext, NULL, NULL))
++ if (!smc_v2_ext ||
++ !smc_clc_match_eid(ini->negotiated_eid, smc_v2_ext, NULL, NULL))
+ goto not_found;
+
+ /* prepare RDMA check */
+@@ -2881,6 +2888,13 @@ __poll_t smc_poll(struct file *file, struct socket *sock,
+ } else {
+ sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
++
++ if (sk->sk_state != SMC_INIT) {
++ /* Race breaker the same way as tcp_poll(). */
++ smp_mb__after_atomic();
++ if (atomic_read(&smc->conn.sndbuf_space))
++ mask |= EPOLLOUT | EPOLLWRNORM;
++ }
+ }
+ if (atomic_read(&smc->conn.bytes_to_rcv))
+ mask |= EPOLLIN | EPOLLRDNORM;
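
The smc_poll() hunk borrows tcp_poll()'s race breaker: after publishing SOCK_NOSPACE, a barrier plus a re-check of sndbuf_space catches a writer that freed space between the first check and the flag set; without the re-check, that readiness is missed until the next event. The shape of the pattern, reduced to a userspace sketch (C11 atomics, hypothetical names):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_int sndbuf_space;
    static atomic_bool nospace;

    static bool poll_writable(void)
    {
        if (atomic_load(&sndbuf_space) > 0)
            return true;

        /* Publish "poller will sleep" before the final re-check ... */
        atomic_store(&nospace, true);
        atomic_thread_fence(memory_order_seq_cst);
        /* ... so space freed concurrently right here is still seen. */
        return atomic_load(&sndbuf_space) > 0;
    }

    int main(void)
    {
        atomic_store(&sndbuf_space, 1);
        printf("writable: %d\n", poll_writable());
        return 0;
    }
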
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index 33fa787c28ebb2..521f5df80e10ca 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -352,8 +352,11 @@ static bool smc_clc_msg_prop_valid(struct smc_clc_msg_proposal *pclc)
+ struct smc_clc_msg_hdr *hdr = &pclc->hdr;
+ struct smc_clc_v2_extension *v2_ext;
+
+- v2_ext = smc_get_clc_v2_ext(pclc);
+ pclc_prfx = smc_clc_proposal_get_prefix(pclc);
++ if (!pclc_prfx ||
++ pclc_prfx->ipv6_prefixes_cnt > SMC_CLC_MAX_V6_PREFIX)
++ return false;
++
+ if (hdr->version == SMC_V1) {
+ if (hdr->typev1 == SMC_TYPE_N)
+ return false;
+@@ -365,6 +368,13 @@ static bool smc_clc_msg_prop_valid(struct smc_clc_msg_proposal *pclc)
+ sizeof(struct smc_clc_msg_trail))
+ return false;
+ } else {
++ v2_ext = smc_get_clc_v2_ext(pclc);
++ if ((hdr->typev2 != SMC_TYPE_N &&
++ (!v2_ext || v2_ext->hdr.eid_cnt > SMC_CLC_MAX_UEID)) ||
++ (smcd_indicated(hdr->typev2) &&
++ v2_ext->hdr.ism_gid_cnt > SMCD_CLC_MAX_V2_GID_ENTRIES))
++ return false;
++
+ if (ntohs(hdr->length) !=
+ sizeof(*pclc) +
+ sizeof(struct smc_clc_msg_smcd) +
+@@ -764,6 +774,11 @@ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
+ SMC_CLC_RECV_BUF_LEN : datlen;
+ iov_iter_kvec(&msg.msg_iter, ITER_DEST, &vec, 1, recvlen);
+ len = sock_recvmsg(smc->clcsock, &msg, krflags);
++ if (len < recvlen) {
++ smc->sk.sk_err = EPROTO;
++ reason_code = -EPROTO;
++ goto out;
++ }
+ datlen -= len;
+ }
+ if (clcm->type == SMC_CLC_DECLINE) {
+diff --git a/net/smc/smc_clc.h b/net/smc/smc_clc.h
+index 5625fda2960b03..1a7676227f16c5 100644
+--- a/net/smc/smc_clc.h
++++ b/net/smc/smc_clc.h
+@@ -336,8 +336,12 @@ struct smc_clc_msg_decline_v2 { /* clc decline message */
+ static inline struct smc_clc_msg_proposal_prefix *
+ smc_clc_proposal_get_prefix(struct smc_clc_msg_proposal *pclc)
+ {
++ u16 offset = ntohs(pclc->iparea_offset);
++
++ if (offset > sizeof(struct smc_clc_msg_smcd))
++ return NULL;
+ return (struct smc_clc_msg_proposal_prefix *)
+- ((u8 *)pclc + sizeof(*pclc) + ntohs(pclc->iparea_offset));
++ ((u8 *)pclc + sizeof(*pclc) + offset);
+ }
+
+ static inline bool smcr_indicated(int smc_type)
+@@ -376,8 +380,14 @@ static inline struct smc_clc_v2_extension *
+ smc_get_clc_v2_ext(struct smc_clc_msg_proposal *prop)
+ {
+ struct smc_clc_msg_smcd *prop_smcd = smc_get_clc_msg_smcd(prop);
++ u16 max_offset;
+
+- if (!prop_smcd || !ntohs(prop_smcd->v2_ext_offset))
++ max_offset = offsetof(struct smc_clc_msg_proposal_area, pclc_v2_ext) -
++ offsetof(struct smc_clc_msg_proposal_area, pclc_smcd) -
++ offsetofend(struct smc_clc_msg_smcd, v2_ext_offset);
++
++ if (!prop_smcd || !ntohs(prop_smcd->v2_ext_offset) ||
++ ntohs(prop_smcd->v2_ext_offset) > max_offset)
+ return NULL;
+
+ return (struct smc_clc_v2_extension *)
+@@ -390,9 +400,15 @@ smc_get_clc_v2_ext(struct smc_clc_msg_proposal *prop)
+ static inline struct smc_clc_smcd_v2_extension *
+ smc_get_clc_smcd_v2_ext(struct smc_clc_v2_extension *prop_v2ext)
+ {
++ u16 max_offset = offsetof(struct smc_clc_msg_proposal_area, pclc_smcd_v2_ext) -
++ offsetof(struct smc_clc_msg_proposal_area, pclc_v2_ext) -
++ offsetof(struct smc_clc_v2_extension, hdr) -
++ offsetofend(struct smc_clnt_opts_area_hdr, smcd_v2_ext_offset);
++
+ if (!prop_v2ext)
+ return NULL;
+- if (!ntohs(prop_v2ext->hdr.smcd_v2_ext_offset))
++ if (!ntohs(prop_v2ext->hdr.smcd_v2_ext_offset) ||
++ ntohs(prop_v2ext->hdr.smcd_v2_ext_offset) > max_offset)
+ return NULL;
+
+ return (struct smc_clc_smcd_v2_extension *)
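
Both smc_clc.h helpers now bound an attacker-controlled offset before doing pointer arithmetic on the proposal message. The same check, reduced to a standalone sketch over a hypothetical wire layout:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct msg {
        uint16_t trailer_off;     /* untrusted, comes off the wire */
        unsigned char body[32];
    };

    /* Return a trailer pointer only if the offset stays in bounds. */
    static const unsigned char *get_trailer(const struct msg *m)
    {
        const uint16_t max_off = sizeof(m->body) - 1;

        if (m->trailer_off > max_off)
            return NULL;          /* reject before any pointer math */
        return m->body + m->trailer_off;
    }

    int main(void)
    {
        struct msg m;

        memset(&m, 0, sizeof(m));
        m.trailer_off = 4;
        printf("in bounds: %p\n", (void *)get_trailer(&m));
        m.trailer_off = 400;
        printf("rejected:  %p\n", (void *)get_trailer(&m));
        return 0;
    }
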
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 4e694860ece4ac..68515a41d776c4 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1818,7 +1818,9 @@ void smcr_link_down_cond_sched(struct smc_link *lnk)
+ {
+ if (smc_link_downing(&lnk->state)) {
+ trace_smcr_link_down(lnk, __builtin_return_address(0));
+- schedule_work(&lnk->link_down_wrk);
++ smcr_link_hold(lnk); /* smcr_link_put in link_down_wrk */
++ if (!schedule_work(&lnk->link_down_wrk))
++ smcr_link_put(lnk);
+ }
+ }
+
+@@ -1850,11 +1852,14 @@ static void smc_link_down_work(struct work_struct *work)
+ struct smc_link_group *lgr = link->lgr;
+
+ if (list_empty(&lgr->list))
+- return;
++ goto out;
+ wake_up_all(&lgr->llc_msg_waiter);
+ down_write(&lgr->llc_conf_mutex);
+ smcr_link_down(link);
+ up_write(&lgr->llc_conf_mutex);
++
++out:
++ smcr_link_put(link); /* smcr_link_hold by schedulers of link_down_work */
+ }
+
+ static int smc_vlan_by_tcpsk_walk(struct net_device *lower_dev,
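
The smcr_link_down_cond_sched() fix pins the link for the lifetime of the queued work: take a reference before schedule_work(), and drop it immediately when schedule_work() reports the work was already pending (it returns false in that case), so exactly one put pairs with each get. A refcount-only sketch of that discipline:

    #include <stdbool.h>
    #include <stdio.h>

    static int refs = 1;          /* the link's base reference */
    static bool queued;

    static void get(void) { refs++; }
    static void put(void) { refs--; }

    /* Like schedule_work(): false means already pending, no new run. */
    static bool schedule_once(void)
    {
        if (queued)
            return false;
        queued = true;
        return true;
    }

    static void link_down(void)
    {
        get();                    /* ref owned by the work item */
        if (!schedule_once())
            put();                /* not queued again: give it back */
    }

    int main(void)
    {
        link_down();
        link_down();              /* second call must not leak a ref */
        queued = false;
        put();                    /* the single work execution's put */
        printf("refs=%d\n", refs);/* back to 1 */
        return 0;
    }
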
+diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
+index e283751abfefe8..678540b7828059 100644
+--- a/sound/soc/fsl/Kconfig
++++ b/sound/soc/fsl/Kconfig
+@@ -29,6 +29,7 @@ config SND_SOC_FSL_SAI
+ config SND_SOC_FSL_MQS
+ tristate "Medium Quality Sound (MQS) module support"
+ depends on SND_SOC_FSL_SAI
++ depends on IMX_SCMI_MISC_DRV || !IMX_SCMI_MISC_DRV
+ select REGMAP_MMIO
+ help
+ Say Y if you want to add Medium Quality Sound (MQS)
+diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
+index 7a00f3066a9807..12743d7f164f0d 100644
+--- a/tools/hv/hv_fcopy_uio_daemon.c
++++ b/tools/hv/hv_fcopy_uio_daemon.c
+@@ -35,8 +35,6 @@
+ #define WIN8_SRV_MINOR 1
+ #define WIN8_SRV_VERSION (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
+
+-#define MAX_FOLDER_NAME 15
+-#define MAX_PATH_LEN 15
+ #define FCOPY_UIO "/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio"
+
+ #define FCOPY_VER_COUNT 1
+@@ -51,7 +49,7 @@ static const int fw_versions[] = {
+
+ #define HV_RING_SIZE 0x4000 /* 16KB ring buffer size */
+
+-unsigned char desc[HV_RING_SIZE];
++static unsigned char desc[HV_RING_SIZE];
+
+ static int target_fd;
+ static char target_fname[PATH_MAX];
+@@ -409,8 +407,8 @@ int main(int argc, char *argv[])
+ struct vmbus_br txbr, rxbr;
+ void *ring;
+ uint32_t len = HV_RING_SIZE;
+- char uio_name[MAX_FOLDER_NAME] = {0};
+- char uio_dev_path[MAX_PATH_LEN] = {0};
++ char uio_name[NAME_MAX] = {0};
++ char uio_dev_path[PATH_MAX] = {0};
+
+ static struct option long_options[] = {
+ {"help", no_argument, 0, 'h' },
+diff --git a/tools/hv/hv_set_ifconfig.sh b/tools/hv/hv_set_ifconfig.sh
+index 440a91b35823bf..2f8baed2b8f796 100755
+--- a/tools/hv/hv_set_ifconfig.sh
++++ b/tools/hv/hv_set_ifconfig.sh
+@@ -81,7 +81,7 @@ echo "ONBOOT=yes" >> $1
+
+ cp $1 /etc/sysconfig/network-scripts/
+
+-chmod 600 $2
++umask 0177
+ interface=$(echo $2 | awk -F - '{ print $2 }')
+ filename="${2##*/}"
+
+diff --git a/tools/net/ynl/lib/ynl.py b/tools/net/ynl/lib/ynl.py
+index c22c22bf2cb7d1..a3f741fed0a343 100644
+--- a/tools/net/ynl/lib/ynl.py
++++ b/tools/net/ynl/lib/ynl.py
+@@ -553,10 +553,10 @@ class YnlFamily(SpecFamily):
+ if attr["type"] == 'nest':
+ nl_type |= Netlink.NLA_F_NESTED
+ attr_payload = b''
+- sub_attrs = SpaceAttrs(self.attr_sets[space], value, search_attrs)
++ sub_space = attr['nested-attributes']
++ sub_attrs = SpaceAttrs(self.attr_sets[sub_space], value, search_attrs)
+ for subname, subvalue in value.items():
+- attr_payload += self._add_attr(attr['nested-attributes'],
+- subname, subvalue, sub_attrs)
++ attr_payload += self._add_attr(sub_space, subname, subvalue, sub_attrs)
+ elif attr["type"] == 'flag':
+ if not value:
+ # If value is absent or false then skip attribute creation.
+diff --git a/tools/testing/selftests/bpf/sdt.h b/tools/testing/selftests/bpf/sdt.h
+index ca0162b4dc5752..1fcfa5160231de 100644
+--- a/tools/testing/selftests/bpf/sdt.h
++++ b/tools/testing/selftests/bpf/sdt.h
+@@ -102,6 +102,8 @@
+ # define STAP_SDT_ARG_CONSTRAINT nZr
+ # elif defined __arm__
+ # define STAP_SDT_ARG_CONSTRAINT g
++# elif defined __loongarch__
++# define STAP_SDT_ARG_CONSTRAINT nmr
+ # else
+ # define STAP_SDT_ARG_CONSTRAINT nor
+ # endif
+diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
+index 95af2d78fd318c..0a0b5551602808 100644
+--- a/tools/testing/selftests/memfd/memfd_test.c
++++ b/tools/testing/selftests/memfd/memfd_test.c
+@@ -9,6 +9,7 @@
+ #include <fcntl.h>
+ #include <linux/memfd.h>
+ #include <sched.h>
++#include <stdbool.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <signal.h>
+@@ -1557,6 +1558,11 @@ static void test_share_fork(char *banner, char *b_suffix)
+ close(fd);
+ }
+
++static bool pid_ns_supported(void)
++{
++ return access("/proc/self/ns/pid", F_OK) == 0;
++}
++
+ int main(int argc, char **argv)
+ {
+ pid_t pid;
+@@ -1591,8 +1597,12 @@ int main(int argc, char **argv)
+ test_seal_grow();
+ test_seal_resize();
+
+- test_sysctl_simple();
+- test_sysctl_nested();
++ if (pid_ns_supported()) {
++ test_sysctl_simple();
++ test_sysctl_nested();
++ } else {
++ printf("PID namespaces are not supported; skipping sysctl tests\n");
++ }
+
+ test_share_dup("SHARE-DUP", "");
+ test_share_mmap("SHARE-MMAP", "");
+diff --git a/tools/testing/selftests/net/openvswitch/openvswitch.sh b/tools/testing/selftests/net/openvswitch/openvswitch.sh
+index cc0bfae2bafa1b..960e1ab4dd04b1 100755
+--- a/tools/testing/selftests/net/openvswitch/openvswitch.sh
++++ b/tools/testing/selftests/net/openvswitch/openvswitch.sh
+@@ -171,8 +171,10 @@ ovs_add_netns_and_veths () {
+ ovs_add_if "$1" "$2" "$4" -u || return 1
+ fi
+
+- [ $TRACING -eq 1 ] && ovs_netns_spawn_daemon "$1" "$ns" \
+- tcpdump -i any -s 65535
++ if [ $TRACING -eq 1 ]; then
++ ovs_netns_spawn_daemon "$1" "$3" tcpdump -l -i any -s 6553
++ ovs_wait grep -q "listening on any" ${ovs_dir}/stderr
++ fi
+
+ return 0
+ }
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-02 12:31 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-01-02 12:31 UTC (permalink / raw
To: gentoo-commits
commit: dd372ef01fb36fbfcde98200d805cbd936e6afc8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 2 12:26:21 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 2 12:26:21 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dd372ef0
Linux patch 6.12.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-6.12.8.patch | 4586 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4590 insertions(+)
diff --git a/0000_README b/0000_README
index 6961ab2e..483a9fde 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-6.12.7.patch
From: https://www.kernel.org
Desc: Linux 6.12.7
+Patch: 1007_linux-6.12.8.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.8
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1007_linux-6.12.8.patch b/1007_linux-6.12.8.patch
new file mode 100644
index 00000000..6c1c3893
--- /dev/null
+++ b/1007_linux-6.12.8.patch
@@ -0,0 +1,4586 @@
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 77db10e944f039..b42fea07c5cec8 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -255,8 +255,9 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Hisilicon | Hip08 SMMU PMCG | #162001800 | N/A |
+ +----------------+-----------------+-----------------+-----------------------------+
+-| Hisilicon | Hip{08,09,10,10C| #162001900 | N/A |
+-| | ,11} SMMU PMCG | | |
++| Hisilicon | Hip{08,09,09A,10| #162001900 | N/A |
++| | ,10C,11} | | |
++| | SMMU PMCG | | |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Hisilicon | Hip09 | #162100801 | HISILICON_ERRATUM_162100801 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml b/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
+index 13f09f1bc8003a..0a698798c22be2 100644
+--- a/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
++++ b/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
+@@ -51,7 +51,7 @@ properties:
+ description: Power supply for AVDD, providing 1.8V.
+
+ cpvdd-supply:
+- description: Power supply for CPVDD, providing 3.5V.
++ description: Power supply for CPVDD, providing 1.8V.
+
+ hp-detect-gpios:
+ description:
+diff --git a/Makefile b/Makefile
+index 685a57f6c8d279..8a10105c2539cf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/broadcom/bcm2712.dtsi b/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
+index 6e5a984c1d4ea1..26a29e5e5078d5 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
+@@ -67,7 +67,7 @@ cpu0: cpu@0 {
+ l2_cache_l0: l2-cache-l0 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+@@ -91,7 +91,7 @@ cpu1: cpu@1 {
+ l2_cache_l1: l2-cache-l1 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+@@ -115,7 +115,7 @@ cpu2: cpu@2 {
+ l2_cache_l2: l2-cache-l2 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+@@ -139,7 +139,7 @@ cpu3: cpu@3 {
+ l2_cache_l3: l2-cache-l3 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
+index 944482063f14e3..3089785ca97e78 100644
+--- a/arch/loongarch/include/asm/inst.h
++++ b/arch/loongarch/include/asm/inst.h
+@@ -683,7 +683,17 @@ DEF_EMIT_REG2I16_FORMAT(blt, blt_op)
+ DEF_EMIT_REG2I16_FORMAT(bge, bge_op)
+ DEF_EMIT_REG2I16_FORMAT(bltu, bltu_op)
+ DEF_EMIT_REG2I16_FORMAT(bgeu, bgeu_op)
+-DEF_EMIT_REG2I16_FORMAT(jirl, jirl_op)
++
++static inline void emit_jirl(union loongarch_instruction *insn,
++ enum loongarch_gpr rd,
++ enum loongarch_gpr rj,
++ int offset)
++{
++ insn->reg2i16_format.opcode = jirl_op;
++ insn->reg2i16_format.immediate = offset;
++ insn->reg2i16_format.rd = rd;
++ insn->reg2i16_format.rj = rj;
++}
+
+ #define DEF_EMIT_REG2BSTRD_FORMAT(NAME, OP) \
+ static inline void emit_##NAME(union loongarch_instruction *insn, \
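
The inst.h change above swaps a macro-generated emitter for an explicit function because callers were passing (rd, rj) to a body that stored the operands the other way around; spelling the parameters out makes the operand order part of the signature instead of an encoding detail. A toy illustration of the same move, not the real encoder:

    #include <stdio.h>

    struct insn { int rd, rj, imm; };

    /* Explicit emitter: named parameters pin the operand order that
     * the macro-generated version left implicit. */
    static void emit_jirl(struct insn *i, int rd, int rj, int offset)
    {
        i->rd = rd;       /* destination: receives the link address */
        i->rj = rj;       /* source: base register of the jump target */
        i->imm = offset;
    }

    int main(void)
    {
        struct insn i;

        /* "jirl $zero, $ra, 0", a plain return: no link register
         * written, jump to the address held in $ra (GPR 1). */
        emit_jirl(&i, /*rd=*/0, /*rj=*/1, 0);
        printf("rd=%d rj=%d imm=%d\n", i.rd, i.rj, i.imm);
        return 0;
    }
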
+diff --git a/arch/loongarch/kernel/efi.c b/arch/loongarch/kernel/efi.c
+index 2bf86aeda874c7..de21e72759eebc 100644
+--- a/arch/loongarch/kernel/efi.c
++++ b/arch/loongarch/kernel/efi.c
+@@ -95,7 +95,7 @@ static void __init init_screen_info(void)
+ memset(si, 0, sizeof(*si));
+ early_memunmap(si, sizeof(*si));
+
+- memblock_reserve(screen_info.lfb_base, screen_info.lfb_size);
++ memblock_reserve(__screen_info_lfb_base(&screen_info), screen_info.lfb_size);
+ }
+
+ void __init efi_init(void)
+diff --git a/arch/loongarch/kernel/inst.c b/arch/loongarch/kernel/inst.c
+index 3050329556d118..14d7d700bcb98f 100644
+--- a/arch/loongarch/kernel/inst.c
++++ b/arch/loongarch/kernel/inst.c
+@@ -332,7 +332,7 @@ u32 larch_insn_gen_jirl(enum loongarch_gpr rd, enum loongarch_gpr rj, int imm)
+ return INSN_BREAK;
+ }
+
+- emit_jirl(&insn, rj, rd, imm >> 2);
++ emit_jirl(&insn, rd, rj, imm >> 2);
+
+ return insn.word;
+ }
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index dd350cba1252f9..ea357a3edc0943 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -181,13 +181,13 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
+ /* Set return value */
+ emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0);
+ /* Return to the caller */
+- emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
++ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_RA, 0);
+ } else {
+ /*
+ * Call the next bpf prog and skip the first instruction
+ * of TCC initialization.
+ */
+- emit_insn(ctx, jirl, LOONGARCH_GPR_T3, LOONGARCH_GPR_ZERO, 1);
++ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T3, 1);
+ }
+ }
+
+@@ -904,7 +904,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ return ret;
+
+ move_addr(ctx, t1, func_addr);
+- emit_insn(ctx, jirl, t1, LOONGARCH_GPR_RA, 0);
++ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, t1, 0);
+ move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
+ break;
+
+diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
+index f381b177ea06ad..0b6365d85d1171 100644
+--- a/arch/powerpc/platforms/book3s/vas-api.c
++++ b/arch/powerpc/platforms/book3s/vas-api.c
+@@ -464,7 +464,43 @@ static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
+ return VM_FAULT_SIGBUS;
+ }
+
++/*
++ * During mmap() paste address, mapping VMA is saved in VAS window
++ * struct which is used to unmap during migration if the window is
++ * still open. But the user space can remove this mapping with
++ * munmap() before closing the window and the VMA address will
++ * be invalid. Set VAS window VMA to NULL in this function which
++ * is called before VMA free.
++ */
++static void vas_mmap_close(struct vm_area_struct *vma)
++{
++ struct file *fp = vma->vm_file;
++ struct coproc_instance *cp_inst = fp->private_data;
++ struct vas_window *txwin;
++
++ /* Should not happen */
++ if (!cp_inst || !cp_inst->txwin) {
++ pr_err("No attached VAS window for the paste address mmap\n");
++ return;
++ }
++
++ txwin = cp_inst->txwin;
++ /*
++ * task_ref.vma is set in coproc_mmap() during mmap paste
++ * address. So it has to be the same VMA that is getting freed.
++ */
++ if (WARN_ON(txwin->task_ref.vma != vma)) {
++ pr_err("Invalid paste address mmaping\n");
++ return;
++ }
++
++ mutex_lock(&txwin->task_ref.mmap_mutex);
++ txwin->task_ref.vma = NULL;
++ mutex_unlock(&txwin->task_ref.mmap_mutex);
++}
++
+ static const struct vm_operations_struct vas_vm_ops = {
++ .close = vas_mmap_close,
+ .fault = vas_mmap_fault,
+ };
+
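
The new vas_mmap_close() is an instance of back-pointer hygiene: the cached VMA pointer is cleared under the same mutex its readers take, so after an early munmap() a later user sees NULL instead of a dangling mapping. A minimal userspace sketch of the idea, with hypothetical names:

    #include <pthread.h>
    #include <stdio.h>

    struct window {
        pthread_mutex_t lock;
        void *vma;                /* cached mapping, may go away first */
    };

    static void mapping_close(struct window *w)
    {
        pthread_mutex_lock(&w->lock);
        w->vma = NULL;            /* invalidate before the VMA is freed */
        pthread_mutex_unlock(&w->lock);
    }

    static void use_mapping(struct window *w)
    {
        pthread_mutex_lock(&w->lock);
        if (w->vma)
            printf("unmap %p\n", w->vma);
        else
            printf("mapping already gone\n");
        pthread_mutex_unlock(&w->lock);
    }

    int main(void)
    {
        struct window w = { PTHREAD_MUTEX_INITIALIZER, (void *)&w };

        mapping_close(&w);
        use_mapping(&w);
        return 0;
    }
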
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index d879478db3f572..28b4312f25631c 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -429,6 +429,16 @@ static struct event_constraint intel_lnc_event_constraints[] = {
+ EVENT_CONSTRAINT_END
+ };
+
++static struct extra_reg intel_lnc_extra_regs[] __read_mostly = {
++ INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0xfffffffffffull, RSP_0),
++ INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0xfffffffffffull, RSP_1),
++ INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
++ INTEL_UEVENT_EXTRA_REG(0x02c6, MSR_PEBS_FRONTEND, 0x9, FE),
++ INTEL_UEVENT_EXTRA_REG(0x03c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE),
++ INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0xf, FE),
++ INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
++ EVENT_EXTRA_END
++};
+
+ EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3");
+ EVENT_ATTR_STR(mem-loads, mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3");
+@@ -6344,7 +6354,7 @@ static __always_inline void intel_pmu_init_lnc(struct pmu *pmu)
+ intel_pmu_init_glc(pmu);
+ hybrid(pmu, event_constraints) = intel_lnc_event_constraints;
+ hybrid(pmu, pebs_constraints) = intel_lnc_pebs_event_constraints;
+- hybrid(pmu, extra_regs) = intel_rwc_extra_regs;
++ hybrid(pmu, extra_regs) = intel_lnc_extra_regs;
+ }
+
+ static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 6188650707ab27..19a9fd974e3e1d 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2496,6 +2496,7 @@ void __init intel_ds_init(void)
+ x86_pmu.large_pebs_flags |= PERF_SAMPLE_TIME;
+ break;
+
++ case 6:
+ case 5:
+ x86_pmu.pebs_ept = 1;
+ fallthrough;
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index d98fac56768469..e7aba7349231d1 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -1910,6 +1910,7 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
+ X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &adl_uncore_init),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &gnr_uncore_init),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &gnr_uncore_init),
++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, &gnr_uncore_init),
+ {},
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_uncore_match);
+diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c
+index d2c732a34e5d90..303bf74d175b30 100644
+--- a/arch/x86/kernel/cet.c
++++ b/arch/x86/kernel/cet.c
+@@ -81,6 +81,34 @@ static void do_user_cp_fault(struct pt_regs *regs, unsigned long error_code)
+
+ static __ro_after_init bool ibt_fatal = true;
+
++/*
++ * By definition, all missing-ENDBRANCH #CPs are a result of WFE && !ENDBR.
++ *
++ * For the kernel IBT no ENDBR selftest where #CPs are deliberately triggered,
++ * the WFE state of the interrupted context needs to be cleared to let execution
++ * continue. Otherwise when the CPU resumes from the instruction that just
++ * caused the previous #CP, another missing-ENDBRANCH #CP is raised and the CPU
++ * enters a dead loop.
++ *
++ * This is not a problem with IDT because it doesn't preserve WFE and IRET doesn't
++ * set WFE. But FRED provides space on the entry stack (in an expanded CS area)
++ * to save and restore the WFE state, thus the WFE state is no longer clobbered,
++ * so software must clear it.
++ */
++static void ibt_clear_fred_wfe(struct pt_regs *regs)
++{
++ /*
++ * No need to do any FRED checks.
++ *
++ * For IDT event delivery, the high-order 48 bits of CS are pushed
++ * as 0s into the stack, and later IRET ignores these bits.
++ *
++ * For FRED, a test to check if fred_cs.wfe is set would be dropped
++ * by compilers.
++ */
++ regs->fred_cs.wfe = 0;
++}
++
+ static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
+ {
+ if ((error_code & CP_EC) != CP_ENDBR) {
+@@ -90,6 +118,7 @@ static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
+
+ if (unlikely(regs->ip == (unsigned long)&ibt_selftest_noendbr)) {
+ regs->ax = 0;
++ ibt_clear_fred_wfe(regs);
+ return;
+ }
+
+@@ -97,6 +126,7 @@ static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
+ if (!ibt_fatal) {
+ printk(KERN_DEFAULT CUT_HERE);
+ __warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL);
++ ibt_clear_fred_wfe(regs);
+ return;
+ }
+ BUG();
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index d5995021815ddf..4e76651e786d19 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3903,16 +3903,11 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ {
+ hctx->queue_num = hctx_idx;
+
+- if (!(hctx->flags & BLK_MQ_F_STACKING))
+- cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
+- &hctx->cpuhp_online);
+- cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
+-
+ hctx->tags = set->tags[hctx_idx];
+
+ if (set->ops->init_hctx &&
+ set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
+- goto unregister_cpu_notifier;
++ goto fail;
+
+ if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx,
+ hctx->numa_node))
+@@ -3921,6 +3916,11 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ if (xa_insert(&q->hctx_table, hctx_idx, hctx, GFP_KERNEL))
+ goto exit_flush_rq;
+
++ if (!(hctx->flags & BLK_MQ_F_STACKING))
++ cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
++ &hctx->cpuhp_online);
++ cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
++
+ return 0;
+
+ exit_flush_rq:
+@@ -3929,8 +3929,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ exit_hctx:
+ if (set->ops->exit_hctx)
+ set->ops->exit_hctx(hctx, hctx_idx);
+- unregister_cpu_notifier:
+- blk_mq_remove_cpuhp(hctx);
++ fail:
+ return -1;
+ }
+
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 4c745a26226b27..bf3be532e0895d 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1703,6 +1703,8 @@ static struct acpi_platform_list pmcg_plat_info[] __initdata = {
+ /* HiSilicon Hip09 Platform */
+ {"HISI ", "HIP09 ", 0, ACPI_SIG_IORT, greater_than_or_equal,
+ "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
++ {"HISI ", "HIP09A ", 0, ACPI_SIG_IORT, greater_than_or_equal,
++ "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
+ /* HiSilicon Hip10/11 Platform uses the same SMMU IP with Hip09 */
+ {"HISI ", "HIP10 ", 0, ACPI_SIG_IORT, greater_than_or_equal,
+ "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index e3e2afc2c83c6b..5962ea1230a17e 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1063,13 +1063,13 @@ struct regmap *__regmap_init(struct device *dev,
+
+ /* Sanity check */
+ if (range_cfg->range_max < range_cfg->range_min) {
+- dev_err(map->dev, "Invalid range %d: %d < %d\n", i,
++ dev_err(map->dev, "Invalid range %d: %u < %u\n", i,
+ range_cfg->range_max, range_cfg->range_min);
+ goto err_range;
+ }
+
+ if (range_cfg->range_max > map->max_register) {
+- dev_err(map->dev, "Invalid range %d: %d > %d\n", i,
++ dev_err(map->dev, "Invalid range %d: %u > %u\n", i,
+ range_cfg->range_max, map->max_register);
+ goto err_range;
+ }
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 90bc605ff6c299..458ac54e7b201e 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1599,6 +1599,21 @@ static void ublk_unquiesce_dev(struct ublk_device *ub)
+ blk_mq_kick_requeue_list(ub->ub_disk->queue);
+ }
+
++static struct gendisk *ublk_detach_disk(struct ublk_device *ub)
++{
++ struct gendisk *disk;
++
++ /* Sync with ublk_abort_queue() by holding the lock */
++ spin_lock(&ub->lock);
++ disk = ub->ub_disk;
++ ub->dev_info.state = UBLK_S_DEV_DEAD;
++ ub->dev_info.ublksrv_pid = -1;
++ ub->ub_disk = NULL;
++ spin_unlock(&ub->lock);
++
++ return disk;
++}
++
+ static void ublk_stop_dev(struct ublk_device *ub)
+ {
+ struct gendisk *disk;
+@@ -1612,14 +1627,7 @@ static void ublk_stop_dev(struct ublk_device *ub)
+ ublk_unquiesce_dev(ub);
+ }
+ del_gendisk(ub->ub_disk);
+-
+- /* Sync with ublk_abort_queue() by holding the lock */
+- spin_lock(&ub->lock);
+- disk = ub->ub_disk;
+- ub->dev_info.state = UBLK_S_DEV_DEAD;
+- ub->dev_info.ublksrv_pid = -1;
+- ub->ub_disk = NULL;
+- spin_unlock(&ub->lock);
++ disk = ublk_detach_disk(ub);
+ put_disk(disk);
+ unlock:
+ mutex_unlock(&ub->mutex);
+@@ -2295,7 +2303,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
+
+ out_put_cdev:
+ if (ret) {
+- ub->dev_info.state = UBLK_S_DEV_DEAD;
++ ublk_detach_disk(ub);
+ ublk_put_device(ub);
+ }
+ if (ret)
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 43c96b73a7118f..0e50b65e1dbf5a 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -1587,9 +1587,12 @@ static void virtblk_remove(struct virtio_device *vdev)
+ static int virtblk_freeze(struct virtio_device *vdev)
+ {
+ struct virtio_blk *vblk = vdev->priv;
++ struct request_queue *q = vblk->disk->queue;
+
+ /* Ensure no requests in virtqueues before deleting vqs. */
+- blk_mq_freeze_queue(vblk->disk->queue);
++ blk_mq_freeze_queue(q);
++ blk_mq_quiesce_queue_nowait(q);
++ blk_mq_unfreeze_queue(q);
+
+ /* Ensure we don't receive any more interrupts */
+ virtio_reset_device(vdev);
+@@ -1613,8 +1616,8 @@ static int virtblk_restore(struct virtio_device *vdev)
+ return ret;
+
+ virtio_device_ready(vdev);
++ blk_mq_unquiesce_queue(vblk->disk->queue);
+
+- blk_mq_unfreeze_queue(vblk->disk->queue);
+ return 0;
+ }
+ #endif
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 11755cb1eb1635..0c85c981a8334a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -870,6 +870,7 @@ struct btusb_data {
+
+ int (*suspend)(struct hci_dev *hdev);
+ int (*resume)(struct hci_dev *hdev);
++ int (*disconnect)(struct hci_dev *hdev);
+
+ int oob_wake_irq; /* irq for out-of-band wake-on-bt */
+ unsigned cmd_timeout_cnt;
+@@ -2643,11 +2644,11 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ init_usb_anchor(&btmtk_data->isopkt_anchor);
+ }
+
+-static void btusb_mtk_release_iso_intf(struct btusb_data *data)
++static void btusb_mtk_release_iso_intf(struct hci_dev *hdev)
+ {
+- struct btmtk_data *btmtk_data = hci_get_priv(data->hdev);
++ struct btmtk_data *btmtk_data = hci_get_priv(hdev);
+
+- if (btmtk_data->isopkt_intf) {
++ if (test_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags)) {
+ usb_kill_anchored_urbs(&btmtk_data->isopkt_anchor);
+ clear_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags);
+
+@@ -2661,6 +2662,16 @@ static void btusb_mtk_release_iso_intf(struct btusb_data *data)
+ clear_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags);
+ }
+
++static int btusb_mtk_disconnect(struct hci_dev *hdev)
++{
++ /* This function describes the specific additional steps taken by MediaTek
++ * when Bluetooth usb driver's resume function is called.
++ */
++ btusb_mtk_release_iso_intf(hdev);
++
++ return 0;
++}
++
+ static int btusb_mtk_reset(struct hci_dev *hdev, void *rst_data)
+ {
+ struct btusb_data *data = hci_get_drvdata(hdev);
+@@ -2677,8 +2688,8 @@ static int btusb_mtk_reset(struct hci_dev *hdev, void *rst_data)
+ if (err < 0)
+ return err;
+
+- if (test_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags))
+- btusb_mtk_release_iso_intf(data);
++ /* Release MediaTek ISO data interface */
++ btusb_mtk_release_iso_intf(hdev);
+
+ btusb_stop_traffic(data);
+ usb_kill_anchored_urbs(&data->tx_anchor);
+@@ -2723,22 +2734,24 @@ static int btusb_mtk_setup(struct hci_dev *hdev)
+ btmtk_data->reset_sync = btusb_mtk_reset;
+
+ /* Claim ISO data interface and endpoint */
+- btmtk_data->isopkt_intf = usb_ifnum_to_if(data->udev, MTK_ISO_IFNUM);
+- if (btmtk_data->isopkt_intf)
++ if (!test_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags)) {
++ btmtk_data->isopkt_intf = usb_ifnum_to_if(data->udev, MTK_ISO_IFNUM);
+ btusb_mtk_claim_iso_intf(data);
++ }
+
+ return btmtk_usb_setup(hdev);
+ }
+
+ static int btusb_mtk_shutdown(struct hci_dev *hdev)
+ {
+- struct btusb_data *data = hci_get_drvdata(hdev);
+- struct btmtk_data *btmtk_data = hci_get_priv(hdev);
++ int ret;
+
+- if (test_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags))
+- btusb_mtk_release_iso_intf(data);
++ ret = btmtk_usb_shutdown(hdev);
+
+- return btmtk_usb_shutdown(hdev);
++ /* Release MediaTek iso interface after shutdown */
++ btusb_mtk_release_iso_intf(hdev);
++
++ return ret;
+ }
+
+ #ifdef CONFIG_PM
+@@ -3850,6 +3863,7 @@ static int btusb_probe(struct usb_interface *intf,
+ data->recv_acl = btmtk_usb_recv_acl;
+ data->suspend = btmtk_usb_suspend;
+ data->resume = btmtk_usb_resume;
++ data->disconnect = btusb_mtk_disconnect;
+ }
+
+ if (id->driver_info & BTUSB_SWAVE) {
+@@ -4040,6 +4054,9 @@ static void btusb_disconnect(struct usb_interface *intf)
+ if (data->diag)
+ usb_set_intfdata(data->diag, NULL);
+
++ if (data->disconnect)
++ data->disconnect(hdev);
++
+ hci_unregister_dev(hdev);
+
+ if (intf == data->intf) {
+diff --git a/drivers/dma/amd/qdma/qdma.c b/drivers/dma/amd/qdma/qdma.c
+index b0a1f3ad851b1e..4761fa25501561 100644
+--- a/drivers/dma/amd/qdma/qdma.c
++++ b/drivers/dma/amd/qdma/qdma.c
+@@ -7,9 +7,9 @@
+ #include <linux/bitfield.h>
+ #include <linux/bitops.h>
+ #include <linux/dmaengine.h>
++#include <linux/dma-mapping.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+-#include <linux/dma-map-ops.h>
+ #include <linux/platform_device.h>
+ #include <linux/platform_data/amd_qdma.h>
+ #include <linux/regmap.h>
+@@ -492,18 +492,9 @@ static int qdma_device_verify(struct qdma_device *qdev)
+
+ static int qdma_device_setup(struct qdma_device *qdev)
+ {
+- struct device *dev = &qdev->pdev->dev;
+ u32 ring_sz = QDMA_DEFAULT_RING_SIZE;
+ int ret = 0;
+
+- while (dev && get_dma_ops(dev))
+- dev = dev->parent;
+- if (!dev) {
+- qdma_err(qdev, "dma device not found");
+- return -EINVAL;
+- }
+- set_dma_ops(&qdev->pdev->dev, get_dma_ops(dev));
+-
+ ret = qdma_setup_fmap_context(qdev);
+ if (ret) {
+ qdma_err(qdev, "Failed setup fmap context");
+@@ -548,11 +539,12 @@ static void qdma_free_queue_resources(struct dma_chan *chan)
+ {
+ struct qdma_queue *queue = to_qdma_queue(chan);
+ struct qdma_device *qdev = queue->qdev;
+- struct device *dev = qdev->dma_dev.dev;
++ struct qdma_platdata *pdata;
+
+ qdma_clear_queue_context(queue);
+ vchan_free_chan_resources(&queue->vchan);
+- dma_free_coherent(dev, queue->ring_size * QDMA_MM_DESC_SIZE,
++ pdata = dev_get_platdata(&qdev->pdev->dev);
++ dma_free_coherent(pdata->dma_dev, queue->ring_size * QDMA_MM_DESC_SIZE,
+ queue->desc_base, queue->dma_desc_base);
+ }
+
+@@ -565,6 +557,7 @@ static int qdma_alloc_queue_resources(struct dma_chan *chan)
+ struct qdma_queue *queue = to_qdma_queue(chan);
+ struct qdma_device *qdev = queue->qdev;
+ struct qdma_ctxt_sw_desc desc;
++ struct qdma_platdata *pdata;
+ size_t size;
+ int ret;
+
+@@ -572,8 +565,9 @@ static int qdma_alloc_queue_resources(struct dma_chan *chan)
+ if (ret)
+ return ret;
+
++ pdata = dev_get_platdata(&qdev->pdev->dev);
+ size = queue->ring_size * QDMA_MM_DESC_SIZE;
+- queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
++ queue->desc_base = dma_alloc_coherent(pdata->dma_dev, size,
+ &queue->dma_desc_base,
+ GFP_KERNEL);
+ if (!queue->desc_base) {
+@@ -588,7 +582,7 @@ static int qdma_alloc_queue_resources(struct dma_chan *chan)
+ if (ret) {
+ qdma_err(qdev, "Failed to setup SW desc ctxt for %s",
+ chan->name);
+- dma_free_coherent(qdev->dma_dev.dev, size, queue->desc_base,
++ dma_free_coherent(pdata->dma_dev, size, queue->desc_base,
+ queue->dma_desc_base);
+ return ret;
+ }
+@@ -948,8 +942,9 @@ static int qdma_init_error_irq(struct qdma_device *qdev)
+
+ static int qdmam_alloc_qintr_rings(struct qdma_device *qdev)
+ {
+- u32 ctxt[QDMA_CTXT_REGMAP_LEN];
++ struct qdma_platdata *pdata = dev_get_platdata(&qdev->pdev->dev);
+ struct device *dev = &qdev->pdev->dev;
++ u32 ctxt[QDMA_CTXT_REGMAP_LEN];
+ struct qdma_intr_ring *ring;
+ struct qdma_ctxt_intr intr_ctxt;
+ u32 vector;
+@@ -969,7 +964,8 @@ static int qdmam_alloc_qintr_rings(struct qdma_device *qdev)
+ ring->msix_id = qdev->err_irq_idx + i + 1;
+ ring->ridx = i;
+ ring->color = 1;
+- ring->base = dmam_alloc_coherent(dev, QDMA_INTR_RING_SIZE,
++ ring->base = dmam_alloc_coherent(pdata->dma_dev,
++ QDMA_INTR_RING_SIZE,
+ &ring->dev_base, GFP_KERNEL);
+ if (!ring->base) {
+ qdma_err(qdev, "Failed to alloc intr ring %d", i);
+diff --git a/drivers/dma/apple-admac.c b/drivers/dma/apple-admac.c
+index 9588773dd2eb67..037ec38730cf98 100644
+--- a/drivers/dma/apple-admac.c
++++ b/drivers/dma/apple-admac.c
+@@ -153,6 +153,8 @@ static int admac_alloc_sram_carveout(struct admac_data *ad,
+ {
+ struct admac_sram *sram;
+ int i, ret = 0, nblocks;
++ ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
++ ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
+
+ if (dir == DMA_MEM_TO_DEV)
+ sram = &ad->txcache;
+@@ -912,12 +914,7 @@ static int admac_probe(struct platform_device *pdev)
+ goto free_irq;
+ }
+
+- ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
+- ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
+-
+ dev_info(&pdev->dev, "Audio DMA Controller\n");
+- dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n",
+- readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size);
+
+ return 0;
+
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 299396121e6dc5..e847ad66dc0b49 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1363,6 +1363,8 @@ at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
+ return NULL;
+
+ desc = at_xdmac_memset_create_desc(chan, atchan, dest, len, value);
++ if (!desc)
++ return NULL;
+ list_add_tail(&desc->desc_node, &desc->descs_list);
+
+ desc->tx_dma_desc.cookie = -EBUSY;
+diff --git a/drivers/dma/dw/acpi.c b/drivers/dma/dw/acpi.c
+index c510c109d2c3ad..b6452fffa657ad 100644
+--- a/drivers/dma/dw/acpi.c
++++ b/drivers/dma/dw/acpi.c
+@@ -8,13 +8,15 @@
+
+ static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param)
+ {
++ struct dw_dma *dw = to_dw_dma(chan->device);
++ struct dw_dma_chip_pdata *data = dev_get_drvdata(dw->dma.dev);
+ struct acpi_dma_spec *dma_spec = param;
+ struct dw_dma_slave slave = {
+ .dma_dev = dma_spec->dev,
+ .src_id = dma_spec->slave_id,
+ .dst_id = dma_spec->slave_id,
+- .m_master = 0,
+- .p_master = 1,
++ .m_master = data->m_master,
++ .p_master = data->p_master,
+ };
+
+ return dw_dma_filter(chan, &slave);
+diff --git a/drivers/dma/dw/internal.h b/drivers/dma/dw/internal.h
+index 563ce73488db32..f1bd06a20cd611 100644
+--- a/drivers/dma/dw/internal.h
++++ b/drivers/dma/dw/internal.h
+@@ -51,11 +51,15 @@ struct dw_dma_chip_pdata {
+ int (*probe)(struct dw_dma_chip *chip);
+ int (*remove)(struct dw_dma_chip *chip);
+ struct dw_dma_chip *chip;
++ u8 m_master;
++ u8 p_master;
+ };
+
+ static __maybe_unused const struct dw_dma_chip_pdata dw_dma_chip_pdata = {
+ .probe = dw_dma_probe,
+ .remove = dw_dma_remove,
++ .m_master = 0,
++ .p_master = 1,
+ };
+
+ static const struct dw_dma_platform_data idma32_pdata = {
+@@ -72,6 +76,8 @@ static __maybe_unused const struct dw_dma_chip_pdata idma32_chip_pdata = {
+ .pdata = &idma32_pdata,
+ .probe = idma32_dma_probe,
+ .remove = idma32_dma_remove,
++ .m_master = 0,
++ .p_master = 0,
+ };
+
+ static const struct dw_dma_platform_data xbar_pdata = {
+@@ -88,6 +94,8 @@ static __maybe_unused const struct dw_dma_chip_pdata xbar_chip_pdata = {
+ .pdata = &xbar_pdata,
+ .probe = idma32_dma_probe,
+ .remove = idma32_dma_remove,
++ .m_master = 0,
++ .p_master = 0,
+ };
+
+ #endif /* _DMA_DW_INTERNAL_H */
+diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
+index ad2d4d012cf729..e8a0eb81726a56 100644
+--- a/drivers/dma/dw/pci.c
++++ b/drivers/dma/dw/pci.c
+@@ -56,10 +56,10 @@ static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
+ if (ret)
+ return ret;
+
+- dw_dma_acpi_controller_register(chip->dw);
+-
+ pci_set_drvdata(pdev, data);
+
++ dw_dma_acpi_controller_register(chip->dw);
++
+ return 0;
+ }
+
+diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
+index ce37e1ee9c462d..fe8f103d4a6378 100644
+--- a/drivers/dma/fsl-edma-common.h
++++ b/drivers/dma/fsl-edma-common.h
+@@ -166,6 +166,7 @@ struct fsl_edma_chan {
+ struct work_struct issue_worker;
+ struct platform_device *pdev;
+ struct device *pd_dev;
++ struct device_link *pd_dev_link;
+ u32 srcid;
+ struct clk *clk;
+ int priority;
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index f9f1eda792546e..70cb7fda757a94 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -417,10 +417,33 @@ static const struct of_device_id fsl_edma_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids);
+
++static void fsl_edma3_detach_pd(struct fsl_edma_engine *fsl_edma)
++{
++ struct fsl_edma_chan *fsl_chan;
++ int i;
++
++ for (i = 0; i < fsl_edma->n_chans; i++) {
++ if (fsl_edma->chan_masked & BIT(i))
++ continue;
++ fsl_chan = &fsl_edma->chans[i];
++ if (fsl_chan->pd_dev_link)
++ device_link_del(fsl_chan->pd_dev_link);
++ if (fsl_chan->pd_dev) {
++ dev_pm_domain_detach(fsl_chan->pd_dev, false);
++ pm_runtime_dont_use_autosuspend(fsl_chan->pd_dev);
++ pm_runtime_set_suspended(fsl_chan->pd_dev);
++ }
++ }
++}
++
++static void devm_fsl_edma3_detach_pd(void *data)
++{
++ fsl_edma3_detach_pd(data);
++}
++
+ static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
+ {
+ struct fsl_edma_chan *fsl_chan;
+- struct device_link *link;
+ struct device *pd_chan;
+ struct device *dev;
+ int i;
+@@ -436,15 +459,16 @@ static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_eng
+ pd_chan = dev_pm_domain_attach_by_id(dev, i);
+ if (IS_ERR_OR_NULL(pd_chan)) {
+ dev_err(dev, "Failed attach pd %d\n", i);
+- return -EINVAL;
++ goto detach;
+ }
+
+- link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS |
++ fsl_chan->pd_dev_link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS |
+ DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+- if (!link) {
++ if (!fsl_chan->pd_dev_link) {
+ dev_err(dev, "Failed to add device_link to %d\n", i);
+- return -EINVAL;
++ dev_pm_domain_detach(pd_chan, false);
++ goto detach;
+ }
+
+ fsl_chan->pd_dev = pd_chan;
+@@ -455,6 +479,10 @@ static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_eng
+ }
+
+ return 0;
++
++detach:
++ fsl_edma3_detach_pd(fsl_edma);
++ return -EINVAL;
+ }
+
+ static int fsl_edma_probe(struct platform_device *pdev)
+@@ -544,6 +572,9 @@ static int fsl_edma_probe(struct platform_device *pdev)
+ ret = fsl_edma3_attach_pd(pdev, fsl_edma);
+ if (ret)
+ return ret;
++ ret = devm_add_action_or_reset(&pdev->dev, devm_fsl_edma3_detach_pd, fsl_edma);
++ if (ret)
++ return ret;
+ }
+
+ if (drvdata->flags & FSL_EDMA_DRV_TCD64)
+diff --git a/drivers/dma/ls2x-apb-dma.c b/drivers/dma/ls2x-apb-dma.c
+index 9652e86667224b..b4f18be6294574 100644
+--- a/drivers/dma/ls2x-apb-dma.c
++++ b/drivers/dma/ls2x-apb-dma.c
+@@ -31,7 +31,7 @@
+ #define LDMA_ASK_VALID BIT(2)
+ #define LDMA_START BIT(3) /* DMA start operation */
+ #define LDMA_STOP BIT(4) /* DMA stop operation */
+-#define LDMA_CONFIG_MASK GENMASK(4, 0) /* DMA controller config bits mask */
++#define LDMA_CONFIG_MASK GENMASK_ULL(4, 0) /* DMA controller config bits mask */
+
+ /* Bitfields in ndesc_addr field of HW descriptor */
+ #define LDMA_DESC_EN BIT(0) /*1: The next descriptor is valid */
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index 43efce77bb5770..40b76b40bc30c2 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -1388,6 +1388,7 @@ static int mv_xor_probe(struct platform_device *pdev)
+ irq = irq_of_parse_and_map(np, 0);
+ if (!irq) {
+ ret = -ENODEV;
++ of_node_put(np);
+ goto err_channel_add;
+ }
+
+@@ -1396,6 +1397,7 @@ static int mv_xor_probe(struct platform_device *pdev)
+ if (IS_ERR(chan)) {
+ ret = PTR_ERR(chan);
+ irq_dispose_mapping(irq);
++ of_node_put(np);
+ goto err_channel_add;
+ }
+
+diff --git a/drivers/dma/tegra186-gpc-dma.c b/drivers/dma/tegra186-gpc-dma.c
+index 3642508e88bb22..adca05ee98c922 100644
+--- a/drivers/dma/tegra186-gpc-dma.c
++++ b/drivers/dma/tegra186-gpc-dma.c
+@@ -231,6 +231,7 @@ struct tegra_dma_channel {
+ bool config_init;
+ char name[30];
+ enum dma_transfer_direction sid_dir;
++ enum dma_status status;
+ int id;
+ int irq;
+ int slave_id;
+@@ -393,6 +394,8 @@ static int tegra_dma_pause(struct tegra_dma_channel *tdc)
+ tegra_dma_dump_chan_regs(tdc);
+ }
+
++ tdc->status = DMA_PAUSED;
++
+ return ret;
+ }
+
+@@ -419,6 +422,8 @@ static void tegra_dma_resume(struct tegra_dma_channel *tdc)
+ val = tdc_read(tdc, TEGRA_GPCDMA_CHAN_CSRE);
+ val &= ~TEGRA_GPCDMA_CHAN_CSRE_PAUSE;
+ tdc_write(tdc, TEGRA_GPCDMA_CHAN_CSRE, val);
++
++ tdc->status = DMA_IN_PROGRESS;
+ }
+
+ static int tegra_dma_device_resume(struct dma_chan *dc)
+@@ -544,6 +549,7 @@ static void tegra_dma_xfer_complete(struct tegra_dma_channel *tdc)
+
+ tegra_dma_sid_free(tdc);
+ tdc->dma_desc = NULL;
++ tdc->status = DMA_COMPLETE;
+ }
+
+ static void tegra_dma_chan_decode_error(struct tegra_dma_channel *tdc,
+@@ -716,6 +722,7 @@ static int tegra_dma_terminate_all(struct dma_chan *dc)
+ tdc->dma_desc = NULL;
+ }
+
++ tdc->status = DMA_COMPLETE;
+ tegra_dma_sid_free(tdc);
+ vchan_get_all_descriptors(&tdc->vc, &head);
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+@@ -769,6 +776,9 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
+ if (ret == DMA_COMPLETE)
+ return ret;
+
++ if (tdc->status == DMA_PAUSED)
++ ret = DMA_PAUSED;
++
+ spin_lock_irqsave(&tdc->vc.lock, flags);
+ vd = vchan_find_desc(&tdc->vc, cookie);
+ if (vd) {
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index bcf3a33123be1c..f0c6d50d8c3345 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4108,9 +4108,10 @@ static void drm_dp_mst_up_req_work(struct work_struct *work)
+ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ {
+ struct drm_dp_pending_up_req *up_req;
++ struct drm_dp_mst_branch *mst_primary;
+
+ if (!drm_dp_get_one_sb_msg(mgr, true, NULL))
+- goto out;
++ goto out_clear_reply;
+
+ if (!mgr->up_req_recv.have_eomt)
+ return 0;
+@@ -4128,10 +4129,19 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ drm_dbg_kms(mgr->dev, "Received unknown up req type, ignoring: %x\n",
+ up_req->msg.req_type);
+ kfree(up_req);
+- goto out;
++ goto out_clear_reply;
++ }
++
++ mutex_lock(&mgr->lock);
++ mst_primary = mgr->mst_primary;
++ if (!mst_primary || !drm_dp_mst_topology_try_get_mstb(mst_primary)) {
++ mutex_unlock(&mgr->lock);
++ kfree(up_req);
++ goto out_clear_reply;
+ }
++ mutex_unlock(&mgr->lock);
+
+- drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, up_req->msg.req_type,
++ drm_dp_send_up_ack_reply(mgr, mst_primary, up_req->msg.req_type,
+ false);
+
+ if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
+@@ -4148,13 +4158,13 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ conn_stat->peer_device_type);
+
+ mutex_lock(&mgr->probe_lock);
+- handle_csn = mgr->mst_primary->link_address_sent;
++ handle_csn = mst_primary->link_address_sent;
+ mutex_unlock(&mgr->probe_lock);
+
+ if (!handle_csn) {
+ drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it.");
+ kfree(up_req);
+- goto out;
++ goto out_put_primary;
+ }
+ } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
+ const struct drm_dp_resource_status_notify *res_stat =
+@@ -4171,7 +4181,9 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ mutex_unlock(&mgr->up_req_lock);
+ queue_work(system_long_wq, &mgr->up_req_work);
+
+-out:
++out_put_primary:
++ drm_dp_mst_topology_put_mstb(mst_primary);
++out_clear_reply:
+ memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
+ return 0;
+ }
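The MST fix above is an instance of the snapshot-under-lock pattern: read the shared mgr->mst_primary pointer while holding mgr->lock, take a reference that keeps the branch alive, then operate on the private snapshot instead of re-reading the shared field. Reduced to its kref shape (struct obj and grab_current() are illustrative):

    #include <linux/kref.h>
    #include <linux/mutex.h>

    struct obj {
        struct kref ref;
        /* ... */
    };

    static struct obj *grab_current(struct mutex *lock, struct obj **slot)
    {
        struct obj *snap;

        mutex_lock(lock);
        snap = *slot;
        /* kref_get_unless_zero() fails if teardown already dropped the
         * last reference, like drm_dp_mst_topology_try_get_mstb() above. */
        if (snap && !kref_get_unless_zero(&snap->ref))
            snap = NULL;
        mutex_unlock(lock);
        return snap;    /* caller must put the reference when done */
    }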
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index 5221ee3f12149b..c18e463092afa5 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -20,6 +20,7 @@
+ #include "xe_guc_ct.h"
+ #include "xe_guc_submit.h"
+ #include "xe_hw_engine.h"
++#include "xe_pm.h"
+ #include "xe_sched_job.h"
+ #include "xe_vm.h"
+
+@@ -143,31 +144,6 @@ static void xe_devcoredump_snapshot_free(struct xe_devcoredump_snapshot *ss)
+ ss->vm = NULL;
+ }
+
+-static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
+-{
+- struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
+- struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
+- unsigned int fw_ref;
+-
+- /* keep going if fw fails as we still want to save the memory and SW data */
+- fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
+- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+- xe_vm_snapshot_capture_delayed(ss->vm);
+- xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+- xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+-
+- /* Calculate devcoredump size */
+- ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
+-
+- ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
+- if (!ss->read.buffer)
+- return;
+-
+- __xe_devcoredump_read(ss->read.buffer, ss->read.size, coredump);
+- xe_devcoredump_snapshot_free(ss);
+-}
+-
+ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
+ size_t count, void *data, size_t datalen)
+ {
+@@ -216,6 +192,45 @@ static void xe_devcoredump_free(void *data)
+ "Xe device coredump has been deleted.\n");
+ }
+
++static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
++{
++ struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
++ struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
++ struct xe_device *xe = coredump_to_xe(coredump);
++ unsigned int fw_ref;
++
++ /*
++ * NB: Despite passing a GFP_ flags parameter here, more allocations are done
++ * internally using GFP_KERNEL explicitly. Hence this call must be in the worker
++ * thread and not in the initial capture call.
++ */
++ dev_coredumpm_timeout(gt_to_xe(ss->gt)->drm.dev, THIS_MODULE, coredump, 0, GFP_KERNEL,
++ xe_devcoredump_read, xe_devcoredump_free,
++ XE_COREDUMP_TIMEOUT_JIFFIES);
++
++ xe_pm_runtime_get(xe);
++
++ /* keep going if fw fails as we still want to save the memory and SW data */
++ fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
++ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
++ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
++ xe_vm_snapshot_capture_delayed(ss->vm);
++ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
++ xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
++
++ xe_pm_runtime_put(xe);
++
++ /* Calculate devcoredump size */
++ ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
++
++ ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
++ if (!ss->read.buffer)
++ return;
++
++ __xe_devcoredump_read(ss->read.buffer, ss->read.size, coredump);
++ xe_devcoredump_snapshot_free(ss);
++}
++
+ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
+ struct xe_sched_job *job)
+ {
+@@ -299,10 +314,6 @@ void xe_devcoredump(struct xe_sched_job *job)
+ drm_info(&xe->drm, "Xe device coredump has been created\n");
+ drm_info(&xe->drm, "Check your /sys/class/drm/card%d/device/devcoredump/data\n",
+ xe->drm.primary->index);
+-
+- dev_coredumpm_timeout(xe->drm.dev, THIS_MODULE, coredump, 0, GFP_KERNEL,
+- xe_devcoredump_read, xe_devcoredump_free,
+- XE_COREDUMP_TIMEOUT_JIFFIES);
+ }
+
+ static void xe_driver_devcoredump_fini(void *arg)
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 98539313cbc970..c5224d43eea45e 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -282,6 +282,7 @@ static const struct of_device_id i2c_imx_dt_ids[] = {
+ { .compatible = "fsl,imx6sll-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx6sx-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx6ul-i2c", .data = &imx6_i2c_hwdata, },
++ { .compatible = "fsl,imx7d-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx7s-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx8mm-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx8mn-i2c", .data = &imx6_i2c_hwdata, },
+diff --git a/drivers/i2c/busses/i2c-microchip-corei2c.c b/drivers/i2c/busses/i2c-microchip-corei2c.c
+index 0b0a1c4d17caef..b0a51695138ad0 100644
+--- a/drivers/i2c/busses/i2c-microchip-corei2c.c
++++ b/drivers/i2c/busses/i2c-microchip-corei2c.c
+@@ -93,27 +93,35 @@
+ * @base: pointer to register struct
+ * @dev: device reference
+ * @i2c_clk: clock reference for i2c input clock
++ * @msg_queue: pointer to the messages requiring sending
+ * @buf: pointer to msg buffer for easier use
+ * @msg_complete: xfer completion object
+ * @adapter: core i2c abstraction
+ * @msg_err: error code for completed message
+ * @bus_clk_rate: current i2c bus clock rate
+ * @isr_status: cached copy of local ISR status
++ * @total_num: total number of messages to be sent/received
++ * @current_num: index of the current message being sent/received
+ * @msg_len: number of bytes transferred in msg
+ * @addr: address of the current slave
++ * @restart_needed: whether or not a repeated start is required after current message
+ */
+ struct mchp_corei2c_dev {
+ void __iomem *base;
+ struct device *dev;
+ struct clk *i2c_clk;
++ struct i2c_msg *msg_queue;
+ u8 *buf;
+ struct completion msg_complete;
+ struct i2c_adapter adapter;
+ int msg_err;
++ int total_num;
++ int current_num;
+ u32 bus_clk_rate;
+ u32 isr_status;
+ u16 msg_len;
+ u8 addr;
++ bool restart_needed;
+ };
+
+ static void mchp_corei2c_core_disable(struct mchp_corei2c_dev *idev)
+@@ -222,6 +230,47 @@ static int mchp_corei2c_fill_tx(struct mchp_corei2c_dev *idev)
+ return 0;
+ }
+
++static void mchp_corei2c_next_msg(struct mchp_corei2c_dev *idev)
++{
++ struct i2c_msg *this_msg;
++ u8 ctrl;
++
++ if (idev->current_num >= idev->total_num) {
++ complete(&idev->msg_complete);
++ return;
++ }
++
++ /*
++ * If there's been an error, the isr needs to return control
++ * to the "main" part of the driver, so as not to keep sending
++ * messages once it completes and clears the SI bit.
++ */
++ if (idev->msg_err) {
++ complete(&idev->msg_complete);
++ return;
++ }
++
++ this_msg = idev->msg_queue++;
++
++ if (idev->current_num < (idev->total_num - 1)) {
++ struct i2c_msg *next_msg = idev->msg_queue;
++
++ idev->restart_needed = next_msg->flags & I2C_M_RD;
++ } else {
++ idev->restart_needed = false;
++ }
++
++ idev->addr = i2c_8bit_addr_from_msg(this_msg);
++ idev->msg_len = this_msg->len;
++ idev->buf = this_msg->buf;
++
++ ctrl = readb(idev->base + CORE_I2C_CTRL);
++ ctrl |= CTRL_STA;
++ writeb(ctrl, idev->base + CORE_I2C_CTRL);
++
++ idev->current_num++;
++}
++
+ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ {
+ u32 status = idev->isr_status;
+@@ -238,8 +287,6 @@ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ ctrl &= ~CTRL_STA;
+ writeb(idev->addr, idev->base + CORE_I2C_DATA);
+ writeb(ctrl, idev->base + CORE_I2C_CTRL);
+- if (idev->msg_len == 0)
+- finished = true;
+ break;
+ case STATUS_M_ARB_LOST:
+ idev->msg_err = -EAGAIN;
+@@ -247,10 +294,14 @@ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ break;
+ case STATUS_M_SLAW_ACK:
+ case STATUS_M_TX_DATA_ACK:
+- if (idev->msg_len > 0)
++ if (idev->msg_len > 0) {
+ mchp_corei2c_fill_tx(idev);
+- else
+- last_byte = true;
++ } else {
++ if (idev->restart_needed)
++ finished = true;
++ else
++ last_byte = true;
++ }
+ break;
+ case STATUS_M_TX_DATA_NACK:
+ case STATUS_M_SLAR_NACK:
+@@ -287,7 +338,7 @@ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ mchp_corei2c_stop(idev);
+
+ if (last_byte || finished)
+- complete(&idev->msg_complete);
++ mchp_corei2c_next_msg(idev);
+
+ return IRQ_HANDLED;
+ }
+@@ -311,21 +362,48 @@ static irqreturn_t mchp_corei2c_isr(int irq, void *_dev)
+ return ret;
+ }
+
+-static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev,
+- struct i2c_msg *msg)
++static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
++ int num)
+ {
+- u8 ctrl;
++ struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
++ struct i2c_msg *this_msg = msgs;
+ unsigned long time_left;
++ u8 ctrl;
++
++ mchp_corei2c_core_enable(idev);
++
++ /*
++ * The isr controls the flow of a transfer; the queue information
++ * needs to be saved somewhere the isr can access it.
++ */
++ idev->restart_needed = false;
++ idev->msg_queue = msgs;
++ idev->total_num = num;
++ idev->current_num = 0;
+
+- idev->addr = i2c_8bit_addr_from_msg(msg);
+- idev->msg_len = msg->len;
+- idev->buf = msg->buf;
++ /*
++ * But the first entry to the isr is triggered by the start in this
++ * function, so the first message needs to be "dequeued".
++ */
++ idev->addr = i2c_8bit_addr_from_msg(this_msg);
++ idev->msg_len = this_msg->len;
++ idev->buf = this_msg->buf;
+ idev->msg_err = 0;
+
+- reinit_completion(&idev->msg_complete);
++ if (idev->total_num > 1) {
++ struct i2c_msg *next_msg = msgs + 1;
+
+- mchp_corei2c_core_enable(idev);
++ idev->restart_needed = next_msg->flags & I2C_M_RD;
++ }
+
++ idev->current_num++;
++ idev->msg_queue++;
++
++ reinit_completion(&idev->msg_complete);
++
++ /*
++ * Send the first start to pass control to the isr
++ */
+ ctrl = readb(idev->base + CORE_I2C_CTRL);
+ ctrl |= CTRL_STA;
+ writeb(ctrl, idev->base + CORE_I2C_CTRL);
+@@ -335,20 +413,8 @@ static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev,
+ if (!time_left)
+ return -ETIMEDOUT;
+
+- return idev->msg_err;
+-}
+-
+-static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+- int num)
+-{
+- struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
+- int i, ret;
+-
+- for (i = 0; i < num; i++) {
+- ret = mchp_corei2c_xfer_msg(idev, msgs++);
+- if (ret)
+- return ret;
+- }
++ if (idev->msg_err)
++ return idev->msg_err;
+
+ return num;
+ }
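The corei2c rework above inverts who drives a multi-message transfer: the xfer routine publishes the whole msgs[] array, issues one START, and sleeps on a single completion, while the interrupt handler dequeues each following message itself and only completes when the queue is drained or an error is latched. In outline (struct ctx, load_msg() and the start helpers are illustrative, not the driver's names):

    /* process context */
    static int xfer(struct ctx *c, struct i2c_msg *msgs, int num)
    {
        c->queue = msgs;
        c->total = num;
        c->cur = 0;
        c->err = 0;
        load_msg(c, &c->queue[c->cur++]);    /* "dequeue" msg 0 by hand */
        reinit_completion(&c->done);
        send_start(c);                       /* the isr owns it from here */
        if (!wait_for_completion_timeout(&c->done, HZ))
            return -ETIMEDOUT;
        return c->err ? c->err : num;
    }

    /* interrupt context, at the end of each message */
    static void next_msg(struct ctx *c)
    {
        if (c->err || c->cur >= c->total) {
            complete(&c->done);              /* hand control back */
            return;
        }
        load_msg(c, &c->queue[c->cur++]);
        send_repeated_start(c);
    }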
+diff --git a/drivers/media/dvb-frontends/dib3000mb.c b/drivers/media/dvb-frontends/dib3000mb.c
+index c598b2a6332565..7c452ddd9e40fa 100644
+--- a/drivers/media/dvb-frontends/dib3000mb.c
++++ b/drivers/media/dvb-frontends/dib3000mb.c
+@@ -51,7 +51,7 @@ MODULE_PARM_DESC(debug, "set debugging level (1=info,2=xfer,4=setfe,8=getfe (|-a
+ static int dib3000_read_reg(struct dib3000_state *state, u16 reg)
+ {
+ u8 wb[] = { ((reg >> 8) | 0x80) & 0xff, reg & 0xff };
+- u8 rb[2];
++ u8 rb[2] = {};
+ struct i2c_msg msg[] = {
+ { .addr = state->config.demod_address, .flags = 0, .buf = wb, .len = 2 },
+ { .addr = state->config.demod_address, .flags = I2C_M_RD, .buf = rb, .len = 2 },
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 5436ec4a8fde42..a52a9f5a75e021 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -1409,8 +1409,8 @@ static int anfc_parse_cs(struct arasan_nfc *nfc)
+ * case, the "not" chosen CS is assigned to nfc->spare_cs and selected
+ * whenever a GPIO CS must be asserted.
+ */
+- if (nfc->cs_array && nfc->ncs > 2) {
+- if (!nfc->cs_array[0] && !nfc->cs_array[1]) {
++ if (nfc->cs_array) {
++ if (nfc->ncs > 2 && !nfc->cs_array[0] && !nfc->cs_array[1]) {
+ dev_err(nfc->dev,
+ "Assign a single native CS when using GPIOs\n");
+ return -EINVAL;
+@@ -1478,8 +1478,15 @@ static int anfc_probe(struct platform_device *pdev)
+
+ static void anfc_remove(struct platform_device *pdev)
+ {
++ int i;
+ struct arasan_nfc *nfc = platform_get_drvdata(pdev);
+
++ for (i = 0; i < nfc->ncs; i++) {
++ if (nfc->cs_array[i]) {
++ gpiod_put(nfc->cs_array[i]);
++ }
++ }
++
+ anfc_chips_cleanup(nfc);
+ }
+
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index a22aab4ed4e8ab..3c7dee1be21df1 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -380,10 +380,8 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ user->delta = user->dmu + req->ecc.strength + 1;
+
+ gf_tables = atmel_pmecc_get_gf_tables(req);
+- if (IS_ERR(gf_tables)) {
+- kfree(user);
++ if (IS_ERR(gf_tables))
+ return ERR_CAST(gf_tables);
+- }
+
+ user->gf_tables = gf_tables;
+
+diff --git a/drivers/mtd/nand/raw/diskonchip.c b/drivers/mtd/nand/raw/diskonchip.c
+index 8db7fc42457111..70d6c2250f32c8 100644
+--- a/drivers/mtd/nand/raw/diskonchip.c
++++ b/drivers/mtd/nand/raw/diskonchip.c
+@@ -1098,7 +1098,7 @@ static inline int __init inftl_partscan(struct mtd_info *mtd, struct mtd_partiti
+ (i == 0) && (ip->firstUnit > 0)) {
+ parts[0].name = " DiskOnChip IPL / Media Header partition";
+ parts[0].offset = 0;
+- parts[0].size = mtd->erasesize * ip->firstUnit;
++ parts[0].size = (uint64_t)mtd->erasesize * ip->firstUnit;
+ numparts = 1;
+ }
+
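The diskonchip one-liner above is the classic overflow-before-widening fix: both operands are 32-bit, so the multiplication happens in 32 bits and wraps before the result is ever stored in the 64-bit size field; casting one operand first makes the multiply itself 64-bit. A self-contained demonstration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t erasesize = 1u << 20;   /* 1 MiB erase block */
        uint32_t first_unit = 8192;      /* product needs 33 bits */
        uint64_t wrong = erasesize * first_unit;             /* wraps to 0 */
        uint64_t right = (uint64_t)erasesize * first_unit;   /* 2^33 */

        printf("wrong=%llu right=%llu\n",
               (unsigned long long)wrong, (unsigned long long)right);
        return 0;
    }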
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+index e95ffe3035473c..c70da7281551a2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+@@ -1074,12 +1074,13 @@ int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs,
+ void iwl_trans_debugfs_cleanup(struct iwl_trans *trans);
+ #endif
+
+-#define iwl_trans_read_mem_bytes(trans, addr, buf, bufsize) \
+- do { \
+- if (__builtin_constant_p(bufsize)) \
+- BUILD_BUG_ON((bufsize) % sizeof(u32)); \
+- iwl_trans_read_mem(trans, addr, buf, (bufsize) / sizeof(u32));\
+- } while (0)
++#define iwl_trans_read_mem_bytes(trans, addr, buf, bufsize) \
++ ({ \
++ if (__builtin_constant_p(bufsize)) \
++ BUILD_BUG_ON((bufsize) % sizeof(u32)); \
++ iwl_trans_read_mem(trans, addr, buf, \
++ (bufsize) / sizeof(u32)); \
++ })
+
+ int iwl_trans_write_imr_mem(struct iwl_trans *trans, u32 dst_addr,
+ u64 src_addr, u32 byte_cnt);
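The macro above switches from a do { } while (0) block, which is a statement with no value, to a GNU statement expression ({ ... }), whose value is that of its last statement; that is what lets the d3.c hunk further down check the read's return code. A toy example of the difference (read_mem() is a stand-in):

    #include <stdio.h>

    static int read_mem(int addr) { return addr < 0 ? -1 : 0; }

    /* do/while form: usable only as a statement */
    #define READ_V1(a) do { read_mem(a); } while (0)

    /* statement expression (GNU C, used throughout the kernel):
     * evaluates to the result of its last statement */
    #define READ_V2(a) ({ read_mem(a); })

    int main(void)
    {
        int ret = READ_V2(-5);   /* ret == -1; READ_V1 could not do this */

        printf("ret=%d\n", ret);
        return 0;
    }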
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 244ca8cab9d1a2..1a814eb6743e80 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -3032,13 +3032,18 @@ static bool iwl_mvm_rt_status(struct iwl_trans *trans, u32 base, u32 *err_id)
+ /* cf. struct iwl_error_event_table */
+ u32 valid;
+ __le32 err_id;
+- } err_info;
++ } err_info = {};
++ int ret;
+
+ if (!base)
+ return false;
+
+- iwl_trans_read_mem_bytes(trans, base,
+- &err_info, sizeof(err_info));
++ ret = iwl_trans_read_mem_bytes(trans, base,
++ &err_info, sizeof(err_info));
++
++ if (ret)
++ return true;
++
+ if (err_info.valid && err_id)
+ *err_id = le32_to_cpu(err_info.err_id);
+
+@@ -3635,22 +3640,31 @@ int iwl_mvm_fast_resume(struct iwl_mvm *mvm)
+ iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt);
+
+ if (iwl_mvm_check_rt_status(mvm, NULL)) {
++ IWL_ERR(mvm,
++ "iwl_mvm_check_rt_status failed, device is gone during suspend\n");
+ set_bit(STATUS_FW_ERROR, &mvm->trans->status);
+ iwl_mvm_dump_nic_error_log(mvm);
+ iwl_dbg_tlv_time_point(&mvm->fwrt,
+ IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL);
+ iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert,
+ false, 0);
+- return -ENODEV;
++ mvm->trans->state = IWL_TRANS_NO_FW;
++ ret = -ENODEV;
++
++ goto out;
+ }
+ ret = iwl_mvm_d3_notif_wait(mvm, &d3_data);
++
++ if (ret) {
++ IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret);
++ mvm->trans->state = IWL_TRANS_NO_FW;
++ }
++
++out:
+ clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
+ mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
+ mvm->fast_resume = false;
+
+- if (ret)
+- IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret);
+-
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 3b9943eb69341e..d19b3bd0866bda 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1643,6 +1643,8 @@ int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ out:
+ if (*status == IWL_D3_STATUS_ALIVE)
+ ret = iwl_pcie_d3_handshake(trans, false);
++ else
++ trans->state = IWL_TRANS_NO_FW;
+
+ return ret;
+ }
+diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
+index 569125726b3e19..d7ba8795d60f81 100644
+--- a/drivers/pci/msi/irqdomain.c
++++ b/drivers/pci/msi/irqdomain.c
+@@ -350,8 +350,11 @@ bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
+
+ domain = dev_get_msi_domain(&pdev->dev);
+
+- if (!domain || !irq_domain_is_hierarchy(domain))
+- return mode == ALLOW_LEGACY;
++ if (!domain || !irq_domain_is_hierarchy(domain)) {
++ if (IS_ENABLED(CONFIG_PCI_MSI_ARCH_FALLBACKS))
++ return mode == ALLOW_LEGACY;
++ return false;
++ }
+
+ if (!irq_domain_is_msi_parent(domain)) {
+ /*
+diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c
+index 3a45879d85db96..2f647cac4cae34 100644
+--- a/drivers/pci/msi/msi.c
++++ b/drivers/pci/msi/msi.c
+@@ -433,6 +433,10 @@ int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ if (WARN_ON_ONCE(dev->msi_enabled))
+ return -EINVAL;
+
++ /* Test for the availability of MSI support */
++ if (!pci_msi_domain_supports(dev, 0, ALLOW_LEGACY))
++ return -ENOTSUPP;
++
+ nvec = pci_msi_vec_count(dev);
+ if (nvec < 0)
+ return nvec;
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c b/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
+index 950b7ae1d1a838..dc452610934add 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
++++ b/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
+@@ -325,6 +325,12 @@ static void usb_init_common_7216(struct brcm_usb_init_params *params)
+ void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
+
+ USB_CTRL_UNSET(ctrl, USB_PM, XHC_S2_CLK_SWITCH_EN);
++
++ /*
++ * The PHY might be in a bad state if it is already powered
++ * up. Toggle the power just in case.
++ */
++ USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
+ USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN);
+
+ /* 1 millisecond - for USB clocks to settle down */
+diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c
+index f053b525ccffab..413f76e2d1744d 100644
+--- a/drivers/phy/phy-core.c
++++ b/drivers/phy/phy-core.c
+@@ -145,8 +145,10 @@ static struct phy_provider *of_phy_provider_lookup(struct device_node *node)
+ return phy_provider;
+
+ for_each_child_of_node(phy_provider->children, child)
+- if (child == node)
++ if (child == node) {
++ of_node_put(child);
+ return phy_provider;
++ }
+ }
+
+ return ERR_PTR(-EPROBE_DEFER);
+@@ -629,8 +631,10 @@ static struct phy *_of_phy_get(struct device_node *np, int index)
+ return ERR_PTR(-ENODEV);
+
+ /* This phy type handled by the usb-phy subsystem for now */
+- if (of_device_is_compatible(args.np, "usb-nop-xceiv"))
+- return ERR_PTR(-ENODEV);
++ if (of_device_is_compatible(args.np, "usb-nop-xceiv")) {
++ phy = ERR_PTR(-ENODEV);
++ goto out_put_node;
++ }
+
+ mutex_lock(&phy_provider_mutex);
+ phy_provider = of_phy_provider_lookup(args.np);
+@@ -652,6 +656,7 @@ static struct phy *_of_phy_get(struct device_node *np, int index)
+
+ out_unlock:
+ mutex_unlock(&phy_provider_mutex);
++out_put_node:
+ of_node_put(args.np);
+
+ return phy;
+@@ -737,7 +742,7 @@ void devm_phy_put(struct device *dev, struct phy *phy)
+ if (!phy)
+ return;
+
+- r = devres_destroy(dev, devm_phy_release, devm_phy_match, phy);
++ r = devres_release(dev, devm_phy_release, devm_phy_match, phy);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_phy_put);
+@@ -1121,7 +1126,7 @@ void devm_phy_destroy(struct device *dev, struct phy *phy)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_phy_consume, devm_phy_match, phy);
++ r = devres_release(dev, devm_phy_consume, devm_phy_match, phy);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_phy_destroy);
+@@ -1259,12 +1264,12 @@ EXPORT_SYMBOL_GPL(of_phy_provider_unregister);
+ * of_phy_provider_unregister to unregister the phy provider.
+ */
+ void devm_of_phy_provider_unregister(struct device *dev,
+- struct phy_provider *phy_provider)
++ struct phy_provider *phy_provider)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_phy_provider_release, devm_phy_match,
+- phy_provider);
++ r = devres_release(dev, devm_phy_provider_release, devm_phy_match,
++ phy_provider);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY provider device resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_of_phy_provider_unregister);
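The devres substitutions above hinge on a subtle distinction: devres_destroy() unregisters a managed-resource entry without ever calling its release function, while devres_release() calls the release function and then frees the entry. For a manual "put" helper the callback is the whole point, so only the latter actually releases the resource. The general shape of such a helper (struct thing, thing_put() and devm_thing_match() are illustrative):

    static void devm_thing_release(struct device *dev, void *res)
    {
        thing_put(*(struct thing **)res);    /* the real cleanup */
    }

    void devm_thing_put(struct device *dev, struct thing *t)
    {
        int r;

        /* devres_release(), unlike devres_destroy(), invokes
         * devm_thing_release() before dropping the entry. */
        r = devres_release(dev, devm_thing_release, devm_thing_match, t);
        dev_WARN_ONCE(dev, r, "couldn't find resource\n");
    }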
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index 1246d3bc8b92f8..8e2cd2c178d6b2 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -1008,7 +1008,7 @@ static const struct qmp_phy_init_tbl sc8280xp_usb3_uniphy_rx_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_FO_GAIN, 0x2f),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_LOW, 0xff),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_HIGH, 0x0f),
+- QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_SO_GAIN, 0x0a),
++ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FO_GAIN, 0x0a),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL1, 0x54),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL2, 0x0f),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_RX_EQU_ADAPTOR_CNTRL2, 0x0f),
+diff --git a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+index 0a9989e41237f1..2eb3329ca23f67 100644
+--- a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
++++ b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+@@ -309,7 +309,7 @@ static int rockchip_combphy_parse_dt(struct device *dev, struct rockchip_combphy
+
+ priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
+
+- priv->phy_rst = devm_reset_control_array_get_exclusive(dev);
++ priv->phy_rst = devm_reset_control_get(dev, "phy");
+ if (IS_ERR(priv->phy_rst))
+ return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
+
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 9f084697dd05ce..69c3ec0938f74f 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -1116,6 +1116,8 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+ return dev_err_probe(dev, PTR_ERR(hdptx->grf),
+ "Could not get GRF syscon\n");
+
++ platform_set_drvdata(pdev, hdptx);
++
+ ret = devm_pm_runtime_enable(dev);
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to enable runtime PM\n");
+@@ -1125,7 +1127,6 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+ return dev_err_probe(dev, PTR_ERR(hdptx->phy),
+ "Failed to create HDMI PHY\n");
+
+- platform_set_drvdata(pdev, hdptx);
+ phy_set_drvdata(hdptx->phy, hdptx);
+ phy_set_bus_width(hdptx->phy, 8);
+
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index c784119ab5dc0c..626e2635e3da70 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -707,7 +707,7 @@ static const struct dmi_system_id cros_ec_lpc_dmi_table[] __initconst = {
+ /* Framework Laptop (12th Gen Intel Core) */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "12th Gen Intel Core"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"),
+ },
+ .driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
+ },
+@@ -715,7 +715,7 @@ static const struct dmi_system_id cros_ec_lpc_dmi_table[] __initconst = {
+ /* Framework Laptop (13th Gen Intel Core) */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "13th Gen Intel Core"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"),
+ },
+ .driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
+ },
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index ef04d396f61c77..a5933980ade3d6 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -623,6 +623,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ { KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ { KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ { KE_IGNORE, 0xC6, }, /* Ambient Light Sensor notification */
++ { KE_IGNORE, 0xCF, }, /* AC mode */
+ { KE_KEY, 0xFA, { KEY_PROG2 } }, /* Lid flip action */
+ { KE_KEY, 0xBD, { KEY_PROG2 } }, /* Lid flip action on ROG xflow laptops */
+ { KE_END, 0},
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 2b393eb5c2820e..c47f32f152e602 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -567,6 +567,7 @@ static int bq24190_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
+
+ static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
+ {
++ union power_supply_propval val = { .intval = bdi->charge_type };
+ int ret;
+
+ ret = pm_runtime_resume_and_get(bdi->dev);
+@@ -587,13 +588,18 @@ static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
+
+ ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
+ BQ24296_REG_POC_OTG_CONFIG_MASK,
+- BQ24296_REG_POC_CHG_CONFIG_SHIFT,
++ BQ24296_REG_POC_OTG_CONFIG_SHIFT,
+ BQ24296_REG_POC_OTG_CONFIG_OTG);
+- } else
++ } else {
+ ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
+ BQ24296_REG_POC_OTG_CONFIG_MASK,
+- BQ24296_REG_POC_CHG_CONFIG_SHIFT,
++ BQ24296_REG_POC_OTG_CONFIG_SHIFT,
+ BQ24296_REG_POC_OTG_CONFIG_DISABLE);
++ if (ret < 0)
++ goto out;
++
++ ret = bq24190_charger_set_charge_type(bdi, &val);
++ }
+
+ out:
+ pm_runtime_mark_last_busy(bdi->dev);
+diff --git a/drivers/power/supply/cros_charge-control.c b/drivers/power/supply/cros_charge-control.c
+index 17c53591ce197d..9b0a7500296b4d 100644
+--- a/drivers/power/supply/cros_charge-control.c
++++ b/drivers/power/supply/cros_charge-control.c
+@@ -7,8 +7,10 @@
+ #include <acpi/battery.h>
+ #include <linux/container_of.h>
+ #include <linux/dmi.h>
++#include <linux/lockdep.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
+ #include <linux/platform_device.h>
+@@ -49,6 +51,7 @@ struct cros_chctl_priv {
+ struct attribute *attributes[_CROS_CHCTL_ATTR_COUNT];
+ struct attribute_group group;
+
++ struct mutex lock; /* protects fields below and cros_ec */
+ enum power_supply_charge_behaviour current_behaviour;
+ u8 current_start_threshold, current_end_threshold;
+ };
+@@ -85,6 +88,8 @@ static int cros_chctl_configure_ec(struct cros_chctl_priv *priv)
+ {
+ struct ec_params_charge_control req = {};
+
++ lockdep_assert_held(&priv->lock);
++
+ req.cmd = EC_CHARGE_CONTROL_CMD_SET;
+
+ switch (priv->current_behaviour) {
+@@ -134,11 +139,15 @@ static ssize_t cros_chctl_store_threshold(struct device *dev, struct cros_chctl_
+ return -EINVAL;
+
+ if (is_end_threshold) {
+- if (val <= priv->current_start_threshold)
++ /* Start threshold is not exposed, use fixed value */
++ if (priv->cmd_version == 2)
++ priv->current_start_threshold = val == 100 ? 0 : val;
++
++ if (val < priv->current_start_threshold)
+ return -EINVAL;
+ priv->current_end_threshold = val;
+ } else {
+- if (val >= priv->current_end_threshold)
++ if (val > priv->current_end_threshold)
+ return -EINVAL;
+ priv->current_start_threshold = val;
+ }
+@@ -159,6 +168,7 @@ static ssize_t charge_control_start_threshold_show(struct device *dev,
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_START_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_start_threshold);
+ }
+
+@@ -169,6 +179,7 @@ static ssize_t charge_control_start_threshold_store(struct device *dev,
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_START_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return cros_chctl_store_threshold(dev, priv, 0, buf, count);
+ }
+
+@@ -178,6 +189,7 @@ static ssize_t charge_control_end_threshold_show(struct device *dev, struct devi
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_END_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_end_threshold);
+ }
+
+@@ -187,6 +199,7 @@ static ssize_t charge_control_end_threshold_store(struct device *dev, struct dev
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_END_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return cros_chctl_store_threshold(dev, priv, 1, buf, count);
+ }
+
+@@ -195,6 +208,7 @@ static ssize_t charge_behaviour_show(struct device *dev, struct device_attribute
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_CHARGE_BEHAVIOUR);
+
++ guard(mutex)(&priv->lock);
+ return power_supply_charge_behaviour_show(dev, EC_CHARGE_CONTROL_BEHAVIOURS,
+ priv->current_behaviour, buf);
+ }
+@@ -210,6 +224,7 @@ static ssize_t charge_behaviour_store(struct device *dev, struct device_attribut
+ if (ret < 0)
+ return ret;
+
++ guard(mutex)(&priv->lock);
+ priv->current_behaviour = ret;
+
+ ret = cros_chctl_configure_ec(priv);
+@@ -223,12 +238,10 @@ static umode_t cros_chtl_attr_is_visible(struct kobject *kobj, struct attribute
+ {
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(attr, n);
+
+- if (priv->cmd_version < 2) {
+- if (n == CROS_CHCTL_ATTR_START_THRESHOLD)
+- return 0;
+- if (n == CROS_CHCTL_ATTR_END_THRESHOLD)
+- return 0;
+- }
++ if (n == CROS_CHCTL_ATTR_START_THRESHOLD && priv->cmd_version < 3)
++ return 0;
++ else if (n == CROS_CHCTL_ATTR_END_THRESHOLD && priv->cmd_version < 2)
++ return 0;
+
+ return attr->mode;
+ }
+@@ -290,6 +303,10 @@ static int cros_chctl_probe(struct platform_device *pdev)
+ if (!priv)
+ return -ENOMEM;
+
++ ret = devm_mutex_init(dev, &priv->lock);
++ if (ret)
++ return ret;
++
+ ret = cros_ec_get_cmd_versions(cros_ec, EC_CMD_CHARGE_CONTROL);
+ if (ret < 0)
+ return ret;
+@@ -327,7 +344,8 @@ static int cros_chctl_probe(struct platform_device *pdev)
+ priv->current_end_threshold = 100;
+
+ /* Bring EC into well-known state */
+- ret = cros_chctl_configure_ec(priv);
++ scoped_guard(mutex, &priv->lock)
++ ret = cros_chctl_configure_ec(priv);
+ if (ret < 0)
+ return ret;
+
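The locking added above uses the scope-based helpers from <linux/cleanup.h>: guard(mutex)(&m) takes the mutex and releases it automatically when the enclosing scope ends, and scoped_guard(mutex, &m) does the same for only the statement that follows, which is why none of the touched functions gains an explicit mutex_unlock(). In miniature (state_lock/state_value are illustrative):

    #include <linux/cleanup.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(state_lock);
    static int state_value;

    static int state_read(void)
    {
        guard(mutex)(&state_lock);    /* unlocked on every return path */
        return state_value;
    }

    static void state_write(int v)
    {
        scoped_guard(mutex, &state_lock)    /* held for this block only */
            state_value = v;
    }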
+diff --git a/drivers/power/supply/gpio-charger.c b/drivers/power/supply/gpio-charger.c
+index 68212b39785bea..6139f736ecbe4f 100644
+--- a/drivers/power/supply/gpio-charger.c
++++ b/drivers/power/supply/gpio-charger.c
+@@ -67,6 +67,14 @@ static int set_charge_current_limit(struct gpio_charger *gpio_charger, int val)
+ if (gpio_charger->current_limit_map[i].limit_ua <= val)
+ break;
+ }
++
++ /*
++ * If a valid charge current limit isn't found, default to the
++ * smallest current limit for safety reasons.
++ */
++ if (i >= gpio_charger->current_limit_map_size)
++ i = gpio_charger->current_limit_map_size - 1;
++
+ mapping = gpio_charger->current_limit_map[i];
+
+ for (i = 0; i < ndescs; i++) {
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 8e75e2e279a40a..50f1dcb6d58460 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -8907,8 +8907,11 @@ megasas_aen_polling(struct work_struct *work)
+ (ld_target_id / MEGASAS_MAX_DEV_PER_CHANNEL),
+ (ld_target_id % MEGASAS_MAX_DEV_PER_CHANNEL),
+ 0);
+- if (sdev1)
++ if (sdev1) {
++ mutex_unlock(&instance->reset_mutex);
+ megasas_remove_scsi_device(sdev1);
++ mutex_lock(&instance->reset_mutex);
++ }
+
+ event_type = SCAN_VD_CHANNEL;
+ break;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index 81bb408ce56d8f..1e715fd65a7d4b 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -134,8 +134,6 @@ extern atomic64_t event_counter;
+
+ #define MPI3MR_WATCHDOG_INTERVAL 1000 /* in milli seconds */
+
+-#define MPI3MR_DEFAULT_CFG_PAGE_SZ 1024 /* in bytes */
+-
+ #define MPI3MR_RESET_TOPOLOGY_SETTLE_TIME 10
+
+ #define MPI3MR_SCMD_TIMEOUT (60 * HZ)
+@@ -1133,9 +1131,6 @@ struct scmd_priv {
+ * @io_throttle_low: I/O size to stop throttle in 512b blocks
+ * @num_io_throttle_group: Maximum number of throttle groups
+ * @throttle_groups: Pointer to throttle group info structures
+- * @cfg_page: Default memory for configuration pages
+- * @cfg_page_dma: Configuration page DMA address
+- * @cfg_page_sz: Default configuration page memory size
+ * @sas_transport_enabled: SAS transport enabled or not
+ * @scsi_device_channel: Channel ID for SCSI devices
+ * @transport_cmds: Command tracker for SAS transport commands
+@@ -1332,10 +1327,6 @@ struct mpi3mr_ioc {
+ u16 num_io_throttle_group;
+ struct mpi3mr_throttle_group_info *throttle_groups;
+
+- void *cfg_page;
+- dma_addr_t cfg_page_dma;
+- u16 cfg_page_sz;
+-
+ u8 sas_transport_enabled;
+ u8 scsi_device_channel;
+ struct mpi3mr_drv_cmd transport_cmds;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 01f035f9330e4b..10b8e4dc64f8b0 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -2329,6 +2329,15 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ if (!mrioc)
+ return -ENODEV;
+
++ if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex))
++ return -ERESTARTSYS;
++
++ if (mrioc->bsg_cmds.state & MPI3MR_CMD_PENDING) {
++ dprint_bsg_err(mrioc, "%s: command is in use\n", __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
++ return -EAGAIN;
++ }
++
+ if (!mrioc->ioctl_sges_allocated) {
+ dprint_bsg_err(mrioc, "%s: DMA memory was not allocated\n",
+ __func__);
+@@ -2339,13 +2348,16 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ karg->timeout = MPI3MR_APP_DEFAULT_TIMEOUT;
+
+ mpi_req = kzalloc(MPI3MR_ADMIN_REQ_FRAME_SZ, GFP_KERNEL);
+- if (!mpi_req)
++ if (!mpi_req) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ return -ENOMEM;
++ }
+ mpi_header = (struct mpi3_request_header *)mpi_req;
+
+ bufcnt = karg->buf_entry_list.num_of_entries;
+ drv_bufs = kzalloc((sizeof(*drv_bufs) * bufcnt), GFP_KERNEL);
+ if (!drv_bufs) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -ENOMEM;
+ goto out;
+ }
+@@ -2353,6 +2365,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dout_buf = kzalloc(job->request_payload.payload_len,
+ GFP_KERNEL);
+ if (!dout_buf) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -ENOMEM;
+ goto out;
+ }
+@@ -2360,6 +2373,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ din_buf = kzalloc(job->reply_payload.payload_len,
+ GFP_KERNEL);
+ if (!din_buf) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -ENOMEM;
+ goto out;
+ }
+@@ -2435,6 +2449,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ (mpi_msg_size > MPI3MR_ADMIN_REQ_FRAME_SZ)) {
+ dprint_bsg_err(mrioc, "%s: invalid MPI message size\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2447,6 +2462,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ if (invalid_be) {
+ dprint_bsg_err(mrioc, "%s: invalid buffer entries passed\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2454,12 +2470,14 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ if (sgl_dout_iter > (dout_buf + job->request_payload.payload_len)) {
+ dprint_bsg_err(mrioc, "%s: data_out buffer length mismatch\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+ if (sgl_din_iter > (din_buf + job->reply_payload.payload_len)) {
+ dprint_bsg_err(mrioc, "%s: data_in buffer length mismatch\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2472,6 +2490,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc, "%s:%d: invalid data transfer size passed for function 0x%x din_size = %d, dout_size = %d\n",
+ __func__, __LINE__, mpi_header->function, din_size,
+ dout_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2480,6 +2499,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc,
+ "%s:%d: invalid data transfer size passed for function 0x%x din_size=%d\n",
+ __func__, __LINE__, mpi_header->function, din_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2487,6 +2507,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc,
+ "%s:%d: invalid data transfer size passed for function 0x%x dout_size = %d\n",
+ __func__, __LINE__, mpi_header->function, dout_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2497,6 +2518,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc, "%s:%d: invalid message size passed:%d:%d:%d:%d\n",
+ __func__, __LINE__, din_cnt, dout_cnt, din_size,
+ dout_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2544,6 +2566,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ continue;
+ if (mpi3mr_map_data_buffer_dma(mrioc, drv_buf_iter, desc_count)) {
+ rval = -ENOMEM;
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ dprint_bsg_err(mrioc, "%s:%d: mapping data buffers failed\n",
+ __func__, __LINE__);
+ goto out;
+@@ -2556,20 +2579,11 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ sense_buff_k = kzalloc(erbsz, GFP_KERNEL);
+ if (!sense_buff_k) {
+ rval = -ENOMEM;
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ goto out;
+ }
+ }
+
+- if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex)) {
+- rval = -ERESTARTSYS;
+- goto out;
+- }
+- if (mrioc->bsg_cmds.state & MPI3MR_CMD_PENDING) {
+- rval = -EAGAIN;
+- dprint_bsg_err(mrioc, "%s: command is in use\n", __func__);
+- mutex_unlock(&mrioc->bsg_cmds.mutex);
+- goto out;
+- }
+ if (mrioc->unrecoverable) {
+ dprint_bsg_err(mrioc, "%s: unrecoverable controller\n",
+ __func__);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index f1ab76351bd81e..5ed31fe57474a3 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -1035,6 +1035,36 @@ static const char *mpi3mr_reset_type_name(u16 reset_type)
+ return name;
+ }
+
++/**
++ * mpi3mr_is_fault_recoverable - Read fault code and decide
++ * whether the controller is recoverable
++ * @mrioc: Adapter instance reference
++ * Return: true if fault is recoverable, false otherwise.
++ */
++static inline bool mpi3mr_is_fault_recoverable(struct mpi3mr_ioc *mrioc)
++{
++ u32 fault;
++
++ fault = (readl(&mrioc->sysif_regs->fault) &
++ MPI3_SYSIF_FAULT_CODE_MASK);
++
++ switch (fault) {
++ case MPI3_SYSIF_FAULT_CODE_COMPLETE_RESET_NEEDED:
++ case MPI3_SYSIF_FAULT_CODE_POWER_CYCLE_REQUIRED:
++ ioc_warn(mrioc,
++ "controller requires system power cycle, marking controller as unrecoverable\n");
++ return false;
++ case MPI3_SYSIF_FAULT_CODE_INSUFFICIENT_PCI_SLOT_POWER:
++ ioc_warn(mrioc,
++ "controller faulted due to insufficient power,\n"
++ " try by connecting it to a different slot\n");
++ return false;
++ default:
++ break;
++ }
++ return true;
++}
++
+ /**
+ * mpi3mr_print_fault_info - Display fault information
+ * @mrioc: Adapter instance reference
+@@ -1373,6 +1403,11 @@ static int mpi3mr_bring_ioc_ready(struct mpi3mr_ioc *mrioc)
+ ioc_info(mrioc, "ioc_status(0x%08x), ioc_config(0x%08x), ioc_info(0x%016llx) at the bringup\n",
+ ioc_status, ioc_config, base_info);
+
++ if (!mpi3mr_is_fault_recoverable(mrioc)) {
++ mrioc->unrecoverable = 1;
++ goto out_device_not_present;
++ }
++
+ /* The timeout value is in 2 second units, convert it to seconds */
+ mrioc->ready_timeout =
+ ((base_info & MPI3_SYSIF_IOC_INFO_LOW_TIMEOUT_MASK) >>
+@@ -2734,6 +2769,11 @@ static void mpi3mr_watchdog_work(struct work_struct *work)
+ mpi3mr_print_fault_info(mrioc);
+ mrioc->diagsave_timeout = 0;
+
++ if (!mpi3mr_is_fault_recoverable(mrioc)) {
++ mrioc->unrecoverable = 1;
++ goto schedule_work;
++ }
++
+ switch (trigger_data.fault) {
+ case MPI3_SYSIF_FAULT_CODE_COMPLETE_RESET_NEEDED:
+ case MPI3_SYSIF_FAULT_CODE_POWER_CYCLE_REQUIRED:
+@@ -4186,17 +4226,6 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
+ mpi3mr_read_tsu_interval(mrioc);
+ mpi3mr_print_ioc_info(mrioc);
+
+- if (!mrioc->cfg_page) {
+- dprint_init(mrioc, "allocating config page buffers\n");
+- mrioc->cfg_page_sz = MPI3MR_DEFAULT_CFG_PAGE_SZ;
+- mrioc->cfg_page = dma_alloc_coherent(&mrioc->pdev->dev,
+- mrioc->cfg_page_sz, &mrioc->cfg_page_dma, GFP_KERNEL);
+- if (!mrioc->cfg_page) {
+- retval = -1;
+- goto out_failed_noretry;
+- }
+- }
+-
+ dprint_init(mrioc, "allocating host diag buffers\n");
+ mpi3mr_alloc_diag_bufs(mrioc);
+
+@@ -4768,11 +4797,7 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)
+ mrioc->admin_req_base, mrioc->admin_req_dma);
+ mrioc->admin_req_base = NULL;
+ }
+- if (mrioc->cfg_page) {
+- dma_free_coherent(&mrioc->pdev->dev, mrioc->cfg_page_sz,
+- mrioc->cfg_page, mrioc->cfg_page_dma);
+- mrioc->cfg_page = NULL;
+- }
++
+ if (mrioc->pel_seqnum_virt) {
+ dma_free_coherent(&mrioc->pdev->dev, mrioc->pel_seqnum_sz,
+ mrioc->pel_seqnum_virt, mrioc->pel_seqnum_dma);
+@@ -5392,55 +5417,6 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ return retval;
+ }
+
+-
+-/**
+- * mpi3mr_free_config_dma_memory - free memory for config page
+- * @mrioc: Adapter instance reference
+- * @mem_desc: memory descriptor structure
+- *
+- * Check whether the size of the buffer specified by the memory
+- * descriptor is greater than the default page size if so then
+- * free the memory pointed by the descriptor.
+- *
+- * Return: Nothing.
+- */
+-static void mpi3mr_free_config_dma_memory(struct mpi3mr_ioc *mrioc,
+- struct dma_memory_desc *mem_desc)
+-{
+- if ((mem_desc->size > mrioc->cfg_page_sz) && mem_desc->addr) {
+- dma_free_coherent(&mrioc->pdev->dev, mem_desc->size,
+- mem_desc->addr, mem_desc->dma_addr);
+- mem_desc->addr = NULL;
+- }
+-}
+-
+-/**
+- * mpi3mr_alloc_config_dma_memory - Alloc memory for config page
+- * @mrioc: Adapter instance reference
+- * @mem_desc: Memory descriptor to hold dma memory info
+- *
+- * This function allocates new dmaable memory or provides the
+- * default config page dmaable memory based on the memory size
+- * described by the descriptor.
+- *
+- * Return: 0 on success, non-zero on failure.
+- */
+-static int mpi3mr_alloc_config_dma_memory(struct mpi3mr_ioc *mrioc,
+- struct dma_memory_desc *mem_desc)
+-{
+- if (mem_desc->size > mrioc->cfg_page_sz) {
+- mem_desc->addr = dma_alloc_coherent(&mrioc->pdev->dev,
+- mem_desc->size, &mem_desc->dma_addr, GFP_KERNEL);
+- if (!mem_desc->addr)
+- return -ENOMEM;
+- } else {
+- mem_desc->addr = mrioc->cfg_page;
+- mem_desc->dma_addr = mrioc->cfg_page_dma;
+- memset(mem_desc->addr, 0, mrioc->cfg_page_sz);
+- }
+- return 0;
+-}
+-
+ /**
+ * mpi3mr_post_cfg_req - Issue config requests and wait
+ * @mrioc: Adapter instance reference
+@@ -5596,8 +5572,12 @@ static int mpi3mr_process_cfg_req(struct mpi3mr_ioc *mrioc,
+ cfg_req->page_length = cfg_hdr->page_length;
+ cfg_req->page_version = cfg_hdr->page_version;
+ }
+- if (mpi3mr_alloc_config_dma_memory(mrioc, &mem_desc))
+- goto out;
++
++ mem_desc.addr = dma_alloc_coherent(&mrioc->pdev->dev,
++ mem_desc.size, &mem_desc.dma_addr, GFP_KERNEL);
++
++ if (!mem_desc.addr)
++ return retval;
+
+ mpi3mr_add_sg_single(&cfg_req->sgl, sgl_flags, mem_desc.size,
+ mem_desc.dma_addr);
+@@ -5626,7 +5606,12 @@ static int mpi3mr_process_cfg_req(struct mpi3mr_ioc *mrioc,
+ }
+
+ out:
+- mpi3mr_free_config_dma_memory(mrioc, &mem_desc);
++ if (mem_desc.addr) {
++ dma_free_coherent(&mrioc->pdev->dev, mem_desc.size,
++ mem_desc.addr, mem_desc.dma_addr);
++ mem_desc.addr = NULL;
++ }
++
+ return retval;
+ }
+
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 5f2f67acf8bf31..1bef88130d0c06 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -5215,7 +5215,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ }
+
+ mrioc = shost_priv(shost);
+- retval = ida_alloc_range(&mrioc_ida, 1, U8_MAX, GFP_KERNEL);
++ retval = ida_alloc_range(&mrioc_ida, 0, U8_MAX, GFP_KERNEL);
+ if (retval < 0)
+ goto id_alloc_failed;
+ mrioc->id = (u8)retval;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index ed5046593fdab6..16ac2267c71e19 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -7041,11 +7041,12 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ int i;
+ u8 failed;
+ __le32 *mfp;
++ int ret_val;
+
+ /* make sure doorbell is not in use */
+ if ((ioc->base_readl_ext_retry(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
+ ioc_err(ioc, "doorbell is in use (line=%d)\n", __LINE__);
+- return -EFAULT;
++ goto doorbell_diag_reset;
+ }
+
+ /* clear pending doorbell interrupts from previous state changes */
+@@ -7135,6 +7136,10 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ le32_to_cpu(mfp[i]));
+ }
+ return 0;
++
++doorbell_diag_reset:
++ ret_val = _base_diag_reset(ioc);
++ return ret_val;
+ }
+
+ /**
+diff --git a/drivers/scsi/qla1280.h b/drivers/scsi/qla1280.h
+index d309e2ca14deb3..dea2290b37d4d7 100644
+--- a/drivers/scsi/qla1280.h
++++ b/drivers/scsi/qla1280.h
+@@ -116,12 +116,12 @@ struct device_reg {
+ uint16_t id_h; /* ID high */
+ uint16_t cfg_0; /* Configuration 0 */
+ #define ISP_CFG0_HWMSK 0x000f /* Hardware revision mask */
+-#define ISP_CFG0_1020 BIT_0 /* ISP1020 */
+-#define ISP_CFG0_1020A BIT_1 /* ISP1020A */
+-#define ISP_CFG0_1040 BIT_2 /* ISP1040 */
+-#define ISP_CFG0_1040A BIT_3 /* ISP1040A */
+-#define ISP_CFG0_1040B BIT_4 /* ISP1040B */
+-#define ISP_CFG0_1040C BIT_5 /* ISP1040C */
++#define ISP_CFG0_1020 1 /* ISP1020 */
++#define ISP_CFG0_1020A 2 /* ISP1020A */
++#define ISP_CFG0_1040 3 /* ISP1040 */
++#define ISP_CFG0_1040A 4 /* ISP1040A */
++#define ISP_CFG0_1040B 5 /* ISP1040B */
++#define ISP_CFG0_1040C 6 /* ISP1040C */
+ uint16_t cfg_1; /* Configuration 1 */
+ #define ISP_CFG1_F128 BIT_6 /* 128-byte FIFO threshold */
+ #define ISP_CFG1_F64 BIT_4|BIT_5 /* 128-byte FIFO threshold */
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 7ceb982040a5df..d0b55c1fa908a5 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -149,6 +149,8 @@ struct hv_fc_wwn_packet {
+ */
+ static int vmstor_proto_version;
+
++static bool hv_dev_is_fc(struct hv_device *hv_dev);
++
+ #define STORVSC_LOGGING_NONE 0
+ #define STORVSC_LOGGING_ERROR 1
+ #define STORVSC_LOGGING_WARN 2
+@@ -1138,6 +1140,7 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
+ * not correctly handle:
+ * INQUIRY command with page code parameter set to 0x80
+ * MODE_SENSE command with cmd[2] == 0x1c
++ * MAINTENANCE_IN is not supported by HyperV FC passthrough
+ *
+ * Setup srb and scsi status so this won't be fatal.
+ * We do this so we can distinguish truly fatal failures
+@@ -1145,7 +1148,9 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
+ */
+
+ if ((stor_pkt->vm_srb.cdb[0] == INQUIRY) ||
+- (stor_pkt->vm_srb.cdb[0] == MODE_SENSE)) {
++ (stor_pkt->vm_srb.cdb[0] == MODE_SENSE) ||
++ (stor_pkt->vm_srb.cdb[0] == MAINTENANCE_IN &&
++ hv_dev_is_fc(device))) {
+ vstor_packet->vm_srb.scsi_status = 0;
+ vstor_packet->vm_srb.srb_status = SRB_STATUS_SUCCESS;
+ }
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index 4337ca51d7aa21..5c0dec90eec1df 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -86,6 +86,8 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0xa324), (unsigned long)&cnl_info },
+ { PCI_VDEVICE(INTEL, 0xa3a4), (unsigned long)&cnl_info },
+ { PCI_VDEVICE(INTEL, 0xa823), (unsigned long)&cnl_info },
++ { PCI_VDEVICE(INTEL, 0xe323), (unsigned long)&cnl_info },
++ { PCI_VDEVICE(INTEL, 0xe423), (unsigned long)&cnl_info },
+ { },
+ };
+ MODULE_DEVICE_TABLE(pci, intel_spi_pci_ids);
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 2c043817c66a88..4a2f84c4d22e5f 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -1561,10 +1561,10 @@ static int omap2_mcspi_probe(struct platform_device *pdev)
+ }
+
+ mcspi->ref_clk = devm_clk_get_optional_enabled(&pdev->dev, NULL);
+- if (mcspi->ref_clk)
+- mcspi->ref_clk_hz = clk_get_rate(mcspi->ref_clk);
+- else
++ if (IS_ERR(mcspi->ref_clk))
+ mcspi->ref_clk_hz = OMAP2_MCSPI_MAX_FREQ;
++ else
++ mcspi->ref_clk_hz = clk_get_rate(mcspi->ref_clk);
+ ctlr->max_speed_hz = mcspi->ref_clk_hz;
+ ctlr->min_speed_hz = mcspi->ref_clk_hz >> 15;
+
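The inverted test above follows the contract of the *_optional clock getters: they return NULL, a valid no-op clock, when the clock is simply not described, and an ERR_PTR() only on real failure; since an error pointer is just as truthy as a real handle, a plain if (clk) check cannot tell them apart, but IS_ERR() can. A common way to consume such a clock (DEFAULT_RATE_HZ is an illustrative fallback, not the driver's):

    struct clk *clk;
    unsigned long rate;

    clk = devm_clk_get_optional_enabled(dev, NULL);
    if (IS_ERR(clk))
        return PTR_ERR(clk);    /* real failure, e.g. -EPROBE_DEFER */

    /* clk may legitimately be NULL here; the clk API accepts NULL as
     * a no-op clock and clk_get_rate(NULL) returns 0. */
    rate = clk ? clk_get_rate(clk) : DEFAULT_RATE_HZ;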
+diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
+index d7db6c824e13de..224e7dde9cdee8 100644
+--- a/drivers/virt/coco/tdx-guest/tdx-guest.c
++++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
+@@ -124,10 +124,8 @@ static void *alloc_quote_buf(void)
+ if (!addr)
+ return NULL;
+
+- if (set_memory_decrypted((unsigned long)addr, count)) {
+- free_pages_exact(addr, len);
++ if (set_memory_decrypted((unsigned long)addr, count))
+ return NULL;
+- }
+
+ return addr;
+ }
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 94c96bcfefe347..0b59c669c26d35 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -549,6 +549,7 @@ config S3C2410_WATCHDOG
+ tristate "S3C6410/S5Pv210/Exynos Watchdog"
+ depends on ARCH_S3C64XX || ARCH_S5PV210 || ARCH_EXYNOS || COMPILE_TEST
+ select WATCHDOG_CORE
++ select MFD_SYSCON if ARCH_EXYNOS
+ help
+ Watchdog timer block in the Samsung S3C64xx, S5Pv210 and Exynos
+ SoCs. This will reboot the system when the timer expires with
+diff --git a/drivers/watchdog/it87_wdt.c b/drivers/watchdog/it87_wdt.c
+index 3e8c15138eddad..1a5a0a2c3f2e37 100644
+--- a/drivers/watchdog/it87_wdt.c
++++ b/drivers/watchdog/it87_wdt.c
+@@ -20,6 +20,8 @@
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#include <linux/bits.h>
++#include <linux/dmi.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+@@ -40,6 +42,7 @@
+ #define VAL 0x2f
+
+ /* Logical device Numbers LDN */
++#define EC 0x04
+ #define GPIO 0x07
+
+ /* Configuration Registers and Functions */
+@@ -73,6 +76,12 @@
+ #define IT8784_ID 0x8784
+ #define IT8786_ID 0x8786
+
++/* Environment Controller Configuration Registers LDN=0x04 */
++#define SCR1 0xfa
++
++/* Environment Controller Bits SCR1 */
++#define WDT_PWRGD 0x20
++
+ /* GPIO Configuration Registers LDN=0x07 */
+ #define WDTCTRL 0x71
+ #define WDTCFG 0x72
+@@ -240,6 +249,21 @@ static int wdt_set_timeout(struct watchdog_device *wdd, unsigned int t)
+ return ret;
+ }
+
++enum {
++ IT87_WDT_OUTPUT_THROUGH_PWRGD = BIT(0),
++};
++
++static const struct dmi_system_id it87_quirks[] = {
++ {
++ /* Qotom Q30900P (IT8786) */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_BOARD_NAME, "QCML04"),
++ },
++ .driver_data = (void *)IT87_WDT_OUTPUT_THROUGH_PWRGD,
++ },
++ {}
++};
++
+ static const struct watchdog_info ident = {
+ .options = WDIOF_SETTIMEOUT | WDIOF_MAGICCLOSE | WDIOF_KEEPALIVEPING,
+ .firmware_version = 1,
+@@ -261,8 +285,10 @@ static struct watchdog_device wdt_dev = {
+
+ static int __init it87_wdt_init(void)
+ {
++ const struct dmi_system_id *dmi_id;
+ u8 chip_rev;
+ u8 ctrl;
++ int quirks = 0;
+ int rc;
+
+ rc = superio_enter();
+@@ -273,6 +299,10 @@ static int __init it87_wdt_init(void)
+ chip_rev = superio_inb(CHIPREV) & 0x0f;
+ superio_exit();
+
++ dmi_id = dmi_first_match(it87_quirks);
++ if (dmi_id)
++ quirks = (long)dmi_id->driver_data;
++
+ switch (chip_type) {
+ case IT8702_ID:
+ max_units = 255;
+@@ -333,6 +363,15 @@ static int __init it87_wdt_init(void)
+ superio_outb(0x00, WDTCTRL);
+ }
+
++ if (quirks & IT87_WDT_OUTPUT_THROUGH_PWRGD) {
++ superio_select(EC);
++ ctrl = superio_inb(SCR1);
++ if (!(ctrl & WDT_PWRGD)) {
++ ctrl |= WDT_PWRGD;
++ superio_outb(ctrl, SCR1);
++ }
++ }
++
+ superio_exit();
+
+ if (timeout < 1 || timeout > max_units * 60) {
+diff --git a/drivers/watchdog/mtk_wdt.c b/drivers/watchdog/mtk_wdt.c
+index e2d7a57d6ea2e7..91d110646e16f7 100644
+--- a/drivers/watchdog/mtk_wdt.c
++++ b/drivers/watchdog/mtk_wdt.c
+@@ -10,6 +10,7 @@
+ */
+
+ #include <dt-bindings/reset/mt2712-resets.h>
++#include <dt-bindings/reset/mediatek,mt6735-wdt.h>
+ #include <dt-bindings/reset/mediatek,mt6795-resets.h>
+ #include <dt-bindings/reset/mt7986-resets.h>
+ #include <dt-bindings/reset/mt8183-resets.h>
+@@ -87,6 +88,10 @@ static const struct mtk_wdt_data mt2712_data = {
+ .toprgu_sw_rst_num = MT2712_TOPRGU_SW_RST_NUM,
+ };
+
++static const struct mtk_wdt_data mt6735_data = {
++ .toprgu_sw_rst_num = MT6735_TOPRGU_RST_NUM,
++};
++
+ static const struct mtk_wdt_data mt6795_data = {
+ .toprgu_sw_rst_num = MT6795_TOPRGU_SW_RST_NUM,
+ };
+@@ -489,6 +494,7 @@ static int mtk_wdt_resume(struct device *dev)
+ static const struct of_device_id mtk_wdt_dt_ids[] = {
+ { .compatible = "mediatek,mt2712-wdt", .data = &mt2712_data },
+ { .compatible = "mediatek,mt6589-wdt" },
++ { .compatible = "mediatek,mt6735-wdt", .data = &mt6735_data },
+ { .compatible = "mediatek,mt6795-wdt", .data = &mt6795_data },
+ { .compatible = "mediatek,mt7986-wdt", .data = &mt7986_data },
+ { .compatible = "mediatek,mt7988-wdt", .data = &mt7988_data },
+diff --git a/drivers/watchdog/rzg2l_wdt.c b/drivers/watchdog/rzg2l_wdt.c
+index 2a35f890a2883a..11bbe48160ec9c 100644
+--- a/drivers/watchdog/rzg2l_wdt.c
++++ b/drivers/watchdog/rzg2l_wdt.c
+@@ -12,6 +12,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <linux/units.h>
+@@ -166,8 +167,22 @@ static int rzg2l_wdt_restart(struct watchdog_device *wdev,
+ struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
+ int ret;
+
+- clk_prepare_enable(priv->pclk);
+- clk_prepare_enable(priv->osc_clk);
++ /*
++ * In case of RZ/G3S the watchdog device may be part of an IRQ safe power
++ * domain that is currently powered off. In this case we need to power
++ * it on before accessing registers. Along with this the clocks will be
++ * enabled. We don't undo the pm_runtime_resume_and_get() as the device
++ * needs to be on for the reboot to happen.
++ *
++ * For the rest of the SoCs, which do not register an IRQ safe power
++ * domain for the watchdog, it is safe to call pm_runtime_resume_and_get():
++ * the irq_safe_dev_in_sleep_domain() call in genpd_runtime_resume()
++ * returns a non-zero value and genpd_lock() is avoided, so there will
++ * be no invalid wait context reported by lockdep.
++ */
++ ret = pm_runtime_resume_and_get(wdev->parent);
++ if (ret)
++ return ret;
+
+ if (priv->devtype == WDT_RZG2L) {
+ ret = reset_control_deassert(priv->rstc);
+@@ -275,6 +290,7 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+
+ priv->devtype = (uintptr_t)of_device_get_match_data(dev);
+
++ pm_runtime_irq_safe(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+
+ priv->wdev.info = &rzg2l_wdt_ident;
+diff --git a/drivers/watchdog/s3c2410_wdt.c b/drivers/watchdog/s3c2410_wdt.c
+index 686cf544d0ae7a..349d30462c8c0c 100644
+--- a/drivers/watchdog/s3c2410_wdt.c
++++ b/drivers/watchdog/s3c2410_wdt.c
+@@ -24,9 +24,9 @@
+ #include <linux/slab.h>
+ #include <linux/err.h>
+ #include <linux/of.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
+ #include <linux/delay.h>
+-#include <linux/soc/samsung/exynos-pmu.h>
+
+ #define S3C2410_WTCON 0x00
+ #define S3C2410_WTDAT 0x04
+@@ -699,11 +699,11 @@ static int s3c2410wdt_probe(struct platform_device *pdev)
+ return ret;
+
+ if (wdt->drv_data->quirks & QUIRKS_HAVE_PMUREG) {
+- wdt->pmureg = exynos_get_pmu_regmap_by_phandle(dev->of_node,
+- "samsung,syscon-phandle");
++ wdt->pmureg = syscon_regmap_lookup_by_phandle(dev->of_node,
++ "samsung,syscon-phandle");
+ if (IS_ERR(wdt->pmureg))
+ return dev_err_probe(dev, PTR_ERR(wdt->pmureg),
+- "PMU regmap lookup failed.\n");
++ "syscon regmap lookup failed.\n");
+ }
+
+ wdt_irq = platform_get_irq(pdev, 0);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 9c05cab473f577..29c16459740112 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -654,6 +654,8 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans,
+ goto error_unlock_cow;
+ }
+ }
++
++ trace_btrfs_cow_block(root, buf, cow);
+ if (unlock_orig)
+ btrfs_tree_unlock(buf);
+ free_extent_buffer_stale(buf);
+@@ -710,7 +712,6 @@ int btrfs_cow_block(struct btrfs_trans_handle *trans,
+ {
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ u64 search_start;
+- int ret;
+
+ if (unlikely(test_bit(BTRFS_ROOT_DELETING, &root->state))) {
+ btrfs_abort_transaction(trans, -EUCLEAN);
+@@ -751,12 +752,8 @@ int btrfs_cow_block(struct btrfs_trans_handle *trans,
+ * Also We don't care about the error, as it's handled internally.
+ */
+ btrfs_qgroup_trace_subtree_after_cow(trans, root, buf);
+- ret = btrfs_force_cow_block(trans, root, buf, parent, parent_slot,
+- cow_ret, search_start, 0, nest);
+-
+- trace_btrfs_cow_block(root, buf, *cow_ret);
+-
+- return ret;
++ return btrfs_force_cow_block(trans, root, buf, parent, parent_slot,
++ cow_ret, search_start, 0, nest);
+ }
+ ALLOW_ERROR_INJECTION(btrfs_cow_block, ERRNO);
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 58ffe78132d9d6..4b3e256e0d0b88 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7117,6 +7117,8 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ ret = -EAGAIN;
+ goto out;
+ }
++
++ cond_resched();
+ }
+
+ if (file_extent)
+@@ -9780,15 +9782,25 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+ struct extent_state *cached_state = NULL;
+- struct extent_map *em = NULL;
+ struct btrfs_chunk_map *map = NULL;
+ struct btrfs_device *device = NULL;
+ struct btrfs_swap_info bsi = {
+ .lowest_ppage = (sector_t)-1ULL,
+ };
++ struct btrfs_backref_share_check_ctx *backref_ctx = NULL;
++ struct btrfs_path *path = NULL;
+ int ret = 0;
+ u64 isize;
+- u64 start;
++ u64 prev_extent_end = 0;
++
++ /*
++ * Acquire the inode's mmap lock to prevent races with memory mapped
++ * writes, as they could happen after we flush delalloc below and before
++ * we lock the extent range further below. The inode itself was already
++ * locked higher up in the call chain.
++ */
++ btrfs_assert_inode_locked(BTRFS_I(inode));
++ down_write(&BTRFS_I(inode)->i_mmap_lock);
+
+ /*
+ * If the swap file was just created, make sure delalloc is done. If the
+@@ -9797,22 +9809,32 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ */
+ ret = btrfs_wait_ordered_range(BTRFS_I(inode), 0, (u64)-1);
+ if (ret)
+- return ret;
++ goto out_unlock_mmap;
+
+ /*
+ * The inode is locked, so these flags won't change after we check them.
+ */
+ if (BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS) {
+ btrfs_warn(fs_info, "swapfile must not be compressed");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
+ }
+ if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)) {
+ btrfs_warn(fs_info, "swapfile must not be copy-on-write");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
+ }
+ if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) {
+ btrfs_warn(fs_info, "swapfile must not be checksummed");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
++ }
++
++ path = btrfs_alloc_path();
++ backref_ctx = btrfs_alloc_backref_share_check_ctx();
++ if (!path || !backref_ctx) {
++ ret = -ENOMEM;
++ goto out_unlock_mmap;
+ }
+
+ /*
+@@ -9827,7 +9849,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_SWAP_ACTIVATE)) {
+ btrfs_warn(fs_info,
+ "cannot activate swapfile while exclusive operation is running");
+- return -EBUSY;
++ ret = -EBUSY;
++ goto out_unlock_mmap;
+ }
+
+ /*
+@@ -9841,7 +9864,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ btrfs_exclop_finish(fs_info);
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because snapshot creation is in progress");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
+ }
+ /*
+ * Snapshots can create extents which require COW even if NODATACOW is
+@@ -9862,7 +9886,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because subvolume %llu is being deleted",
+ btrfs_root_id(root));
+- return -EPERM;
++ ret = -EPERM;
++ goto out_unlock_mmap;
+ }
+ atomic_inc(&root->nr_swapfiles);
+ spin_unlock(&root->root_item_lock);
+@@ -9870,24 +9895,39 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
+
+ lock_extent(io_tree, 0, isize - 1, &cached_state);
+- start = 0;
+- while (start < isize) {
+- u64 logical_block_start, physical_block_start;
++ while (prev_extent_end < isize) {
++ struct btrfs_key key;
++ struct extent_buffer *leaf;
++ struct btrfs_file_extent_item *ei;
+ struct btrfs_block_group *bg;
+- u64 len = isize - start;
++ u64 logical_block_start;
++ u64 physical_block_start;
++ u64 extent_gen;
++ u64 disk_bytenr;
++ u64 len;
+
+- em = btrfs_get_extent(BTRFS_I(inode), NULL, start, len);
+- if (IS_ERR(em)) {
+- ret = PTR_ERR(em);
++ key.objectid = btrfs_ino(BTRFS_I(inode));
++ key.type = BTRFS_EXTENT_DATA_KEY;
++ key.offset = prev_extent_end;
++
++ ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
++ if (ret < 0)
+ goto out;
+- }
+
+- if (em->disk_bytenr == EXTENT_MAP_HOLE) {
++ /*
++ * If the key is not found, it means we have an implicit hole (the
++ * NO_HOLES feature is enabled).
++ */
++ if (ret > 0) {
+ btrfs_warn(fs_info, "swapfile must not have holes");
+ ret = -EINVAL;
+ goto out;
+ }
+- if (em->disk_bytenr == EXTENT_MAP_INLINE) {
++
++ leaf = path->nodes[0];
++ ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item);
++
++ if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_INLINE) {
+ /*
+ * It's unlikely we'll ever actually find ourselves
+ * here, as a file small enough to fit inline won't be
+@@ -9899,23 +9939,45 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ ret = -EINVAL;
+ goto out;
+ }
+- if (extent_map_is_compressed(em)) {
++
++ if (btrfs_file_extent_compression(leaf, ei) != BTRFS_COMPRESS_NONE) {
+ btrfs_warn(fs_info, "swapfile must not be compressed");
+ ret = -EINVAL;
+ goto out;
+ }
+
+- logical_block_start = extent_map_block_start(em) + (start - em->start);
+- len = min(len, em->len - (start - em->start));
+- free_extent_map(em);
+- em = NULL;
++ disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, ei);
++ if (disk_bytenr == 0) {
++ btrfs_warn(fs_info, "swapfile must not have holes");
++ ret = -EINVAL;
++ goto out;
++ }
++
++ logical_block_start = disk_bytenr + btrfs_file_extent_offset(leaf, ei);
++ extent_gen = btrfs_file_extent_generation(leaf, ei);
++ prev_extent_end = btrfs_file_extent_end(path);
++
++ if (prev_extent_end > isize)
++ len = isize - key.offset;
++ else
++ len = btrfs_file_extent_num_bytes(leaf, ei);
++
++ backref_ctx->curr_leaf_bytenr = leaf->start;
++
++ /*
++ * Don't need the path anymore, release to avoid deadlocks when
++ * calling btrfs_is_data_extent_shared() because when joining a
++ * transaction it can block waiting for the current one's commit
++ * which in turn may be trying to lock the same leaf to flush
++ * delayed items for example.
++ */
++ btrfs_release_path(path);
+
+- ret = can_nocow_extent(inode, start, &len, NULL, false, true);
++ ret = btrfs_is_data_extent_shared(BTRFS_I(inode), disk_bytenr,
++ extent_gen, backref_ctx);
+ if (ret < 0) {
+ goto out;
+- } else if (ret) {
+- ret = 0;
+- } else {
++ } else if (ret > 0) {
+ btrfs_warn(fs_info,
+ "swapfile must not be copy-on-write");
+ ret = -EINVAL;
+@@ -9950,7 +10012,6 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+
+ physical_block_start = (map->stripes[0].physical +
+ (logical_block_start - map->start));
+- len = min(len, map->chunk_len - (logical_block_start - map->start));
+ btrfs_free_chunk_map(map);
+ map = NULL;
+
+@@ -9991,20 +10052,16 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (ret)
+ goto out;
+ }
+- bsi.start = start;
++ bsi.start = key.offset;
+ bsi.block_start = physical_block_start;
+ bsi.block_len = len;
+ }
+-
+- start += len;
+ }
+
+ if (bsi.block_len)
+ ret = btrfs_add_swap_extent(sis, &bsi);
+
+ out:
+- if (!IS_ERR_OR_NULL(em))
+- free_extent_map(em);
+ if (!IS_ERR_OR_NULL(map))
+ btrfs_free_chunk_map(map);
+
+@@ -10017,6 +10074,10 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+
+ btrfs_exclop_finish(fs_info);
+
++out_unlock_mmap:
++ up_write(&BTRFS_I(inode)->i_mmap_lock);
++ btrfs_free_backref_share_ctx(backref_ctx);
++ btrfs_free_path(path);
+ if (ret)
+ return ret;
+
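
The swap-activate rework above deserves a summary: instead of walking cached
extent maps, which cannot tell whether an extent is shared with a snapshot,
it iterates the file extent items directly, treats a missing key as an
implicit hole under NO_HOLES, and rejects any extent that
btrfs_is_data_extent_shared() reports as shared. The path is released before
the backref walk so that joining a transaction cannot deadlock on the same
leaf. A compressed sketch of that control flow (error handling elided;
reject() is a hypothetical stand-in for the warn-and-fail pattern):

    while (prev_extent_end < isize) {
    	key.objectid = btrfs_ino(BTRFS_I(inode));
    	key.type = BTRFS_EXTENT_DATA_KEY;
    	key.offset = prev_extent_end;

    	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
    	if (ret > 0)
    		return reject("swapfile must not have holes");

    	/* read the item fields, then drop the leaf before the backref walk */
    	btrfs_release_path(path);

    	if (btrfs_is_data_extent_shared(BTRFS_I(inode), disk_bytenr,
    					extent_gen, backref_ctx) > 0)
    		return reject("swapfile must not be copy-on-write");
    }
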
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index a0e8deca87a7a6..e70ed857fc743b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1122,6 +1122,7 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info,
+ fs_info->qgroup_flags = BTRFS_QGROUP_STATUS_FLAG_ON;
+ if (simple) {
+ fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE;
++ btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA);
+ btrfs_set_qgroup_status_enable_gen(leaf, ptr, trans->transid);
+ } else {
+ fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+@@ -1255,8 +1256,6 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info,
+ spin_lock(&fs_info->qgroup_lock);
+ fs_info->quota_root = quota_root;
+ set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+- if (simple)
+- btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA);
+ spin_unlock(&fs_info->qgroup_lock);
+
+ /* Skip rescan for simple qgroups. */
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index f3834f8d26b456..adcbdc970f9ea4 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -2902,6 +2902,7 @@ static int relocate_one_folio(struct reloc_control *rc,
+ const bool use_rst = btrfs_need_stripe_tree_update(fs_info, rc->block_group->flags);
+
+ ASSERT(index <= last_index);
++again:
+ folio = filemap_lock_folio(inode->i_mapping, index);
+ if (IS_ERR(folio)) {
+
+@@ -2937,6 +2938,11 @@ static int relocate_one_folio(struct reloc_control *rc,
+ ret = -EIO;
+ goto release_folio;
+ }
++ if (folio->mapping != inode->i_mapping) {
++ folio_unlock(folio);
++ folio_put(folio);
++ goto again;
++ }
+ }
+
+ /*
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 0cb11dcd10cd4b..b1015f383f75ef 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5291,6 +5291,7 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
+ unsigned cur_len = min_t(unsigned, len,
+ PAGE_SIZE - pg_offset);
+
++again:
+ folio = filemap_lock_folio(mapping, index);
+ if (IS_ERR(folio)) {
+ page_cache_sync_readahead(mapping,
+@@ -5323,6 +5324,11 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
+ ret = -EIO;
+ break;
+ }
++ if (folio->mapping != mapping) {
++ folio_unlock(folio);
++ folio_put(folio);
++ goto again;
++ }
+ }
+
+ memcpy_from_folio(sctx->send_buf + sctx->send_size, folio,
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 03926ad467c919..5912d505776660 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1118,7 +1118,7 @@ static ssize_t btrfs_nodesize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return sysfs_emit(buf, "%u\n", fs_info->super_copy->nodesize);
++ return sysfs_emit(buf, "%u\n", fs_info->nodesize);
+ }
+
+ BTRFS_ATTR(, nodesize, btrfs_nodesize_show);
+@@ -1128,7 +1128,7 @@ static ssize_t btrfs_sectorsize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize);
++ return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
+ }
+
+ BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show);
+@@ -1180,7 +1180,7 @@ static ssize_t btrfs_clone_alignment_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize);
++ return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
+ }
+
+ BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 67468d88f13908..851d70200c6b8f 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1552,7 +1552,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ }
+
+ op = &req->r_ops[0];
+- if (sparse) {
++ if (!write && sparse) {
+ extent_cnt = __ceph_sparse_read_ext_count(inode, size);
+ ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
+ if (ret) {
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 6d0455973d641e..49aede376d8668 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -40,24 +40,15 @@
+ #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)
+ #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
+
+-static void expkey_put_work(struct work_struct *work)
++static void expkey_put(struct kref *ref)
+ {
+- struct svc_expkey *key =
+- container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
++ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
+
+ if (test_bit(CACHE_VALID, &key->h.flags) &&
+ !test_bit(CACHE_NEGATIVE, &key->h.flags))
+ path_put(&key->ek_path);
+ auth_domain_put(key->ek_client);
+- kfree(key);
+-}
+-
+-static void expkey_put(struct kref *ref)
+-{
+- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
+-
+- INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work);
+- queue_rcu_work(system_wq, &key->ek_rcu_work);
++ kfree_rcu(key, ek_rcu);
+ }
+
+ static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)
+@@ -364,26 +355,16 @@ static void export_stats_destroy(struct export_stats *stats)
+ EXP_STATS_COUNTERS_NUM);
+ }
+
+-static void svc_export_put_work(struct work_struct *work)
++static void svc_export_put(struct kref *ref)
+ {
+- struct svc_export *exp =
+- container_of(to_rcu_work(work), struct svc_export, ex_rcu_work);
+-
++ struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+ path_put(&exp->ex_path);
+ auth_domain_put(exp->ex_client);
+ nfsd4_fslocs_free(&exp->ex_fslocs);
+ export_stats_destroy(exp->ex_stats);
+ kfree(exp->ex_stats);
+ kfree(exp->ex_uuid);
+- kfree(exp);
+-}
+-
+-static void svc_export_put(struct kref *ref)
+-{
+- struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+-
+- INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work);
+- queue_rcu_work(system_wq, &exp->ex_rcu_work);
++ kfree_rcu(exp, ex_rcu);
+ }
+
+ static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index 081afb68681e14..3794ae253a7016 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -75,7 +75,7 @@ struct svc_export {
+ u32 ex_layout_types;
+ struct nfsd4_deviceid_map *ex_devid_map;
+ struct cache_detail *cd;
+- struct rcu_work ex_rcu_work;
++ struct rcu_head ex_rcu;
+ unsigned long ex_xprtsec_modes;
+ struct export_stats *ex_stats;
+ };
+@@ -92,7 +92,7 @@ struct svc_expkey {
+ u32 ek_fsid[6];
+
+ struct path ek_path;
+- struct rcu_work ek_rcu_work;
++ struct rcu_head ek_rcu;
+ };
+
+ #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index b8cbb15560040f..de076365254978 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1058,7 +1058,7 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ args.authflavor = clp->cl_cred.cr_flavor;
+ clp->cl_cb_ident = conn->cb_ident;
+ } else {
+- if (!conn->cb_xprt)
++ if (!conn->cb_xprt || !ses)
+ return -EINVAL;
+ clp->cl_cb_session = ses;
+ args.bc_xprt = conn->cb_xprt;
+@@ -1461,8 +1461,6 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ ses = c->cn_session;
+ }
+ spin_unlock(&clp->cl_lock);
+- if (!c)
+- return;
+
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+diff --git a/fs/smb/client/Kconfig b/fs/smb/client/Kconfig
+index 2aff6d1395ce39..9f05f94e265a6d 100644
+--- a/fs/smb/client/Kconfig
++++ b/fs/smb/client/Kconfig
+@@ -2,7 +2,6 @@
+ config CIFS
+ tristate "SMB3 and CIFS support (advanced network filesystem)"
+ depends on INET
+- select NETFS_SUPPORT
+ select NLS
+ select NLS_UCS2_UTILS
+ select CRYPTO
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index d1bd69cbfe09a5..4750505465ae63 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4855,6 +4855,8 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ if (written > wdata->subreq.len)
+ written &= 0xFFFF;
+
++ cifs_stats_bytes_written(tcon, written);
++
+ if (written < wdata->subreq.len)
+ wdata->result = -ENOSPC;
+ else
+@@ -5171,6 +5173,7 @@ SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms,
+ cifs_dbg(VFS, "Send error in write = %d\n", rc);
+ } else {
+ *nbytes = le32_to_cpu(rsp->DataLength);
++ cifs_stats_bytes_written(io_parms->tcon, *nbytes);
+ trace_smb3_write_done(0, 0, xid,
+ req->PersistentFileId,
+ io_parms->tcon->tid,
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 75b4eb856d32f7..af8e24163bf261 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -18,8 +18,8 @@
+ #include "mgmt/share_config.h"
+
+ /*for shortname implementation */
+-static const char basechars[43] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_-!@#$%";
+-#define MANGLE_BASE (sizeof(basechars) / sizeof(char) - 1)
++static const char *basechars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_-!@#$%";
++#define MANGLE_BASE (strlen(basechars) - 1)
+ #define MAGIC_CHAR '~'
+ #define PERIOD '.'
+ #define mangle(V) ((char)(basechars[(V) % MANGLE_BASE]))
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 78a603129dd583..2cb49b6b07168a 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -517,7 +517,11 @@ static int udf_rmdir(struct inode *dir, struct dentry *dentry)
+ inode->i_nlink);
+ clear_nlink(inode);
+ inode->i_size = 0;
+- inode_dec_link_count(dir);
++ if (dir->i_nlink >= 3)
++ inode_dec_link_count(dir);
++ else
++ udf_warn(inode->i_sb, "parent dir link count too low (%u)\n",
++ dir->i_nlink);
+ udf_add_fid_counter(dir->i_sb, true, -1);
+ inode_set_mtime_to_ts(dir,
+ inode_set_ctime_to_ts(dir, inode_set_ctime_current(inode)));
+@@ -787,8 +791,18 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ retval = -ENOTEMPTY;
+ if (!empty_dir(new_inode))
+ goto out_oiter;
++ retval = -EFSCORRUPTED;
++ if (new_inode->i_nlink != 2)
++ goto out_oiter;
+ }
++ retval = -EFSCORRUPTED;
++ if (old_dir->i_nlink < 3)
++ goto out_oiter;
+ is_dir = true;
++ } else if (new_inode) {
++ retval = -EFSCORRUPTED;
++ if (new_inode->i_nlink < 1)
++ goto out_oiter;
+ }
+ if (is_dir && old_dir != new_dir) {
+ retval = udf_fiiter_find_entry(old_inode, &dotdot_name,
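
The udf hunks above encode the classic Unix directory link-count invariants:
an empty directory has i_nlink == 2 (its own "." plus the entry in its
parent), a directory containing at least one subdirectory has i_nlink >= 3
(its ".", its entry in its own parent, and each child's ".."), and any inode
still reachable by name has i_nlink >= 1. A tiny hypothetical helper stating
the checks udf_rmdir() and udf_rename() now make before decrementing:

    /* Sketch only: not from the patch. */
    static bool dir_link_counts_sane(const struct inode *dir,
    				 const struct inode *empty_subdir)
    {
    	return empty_subdir->i_nlink == 2 &&	/* "." + entry in parent */
    	       dir->i_nlink >= 3;		/* ".", own entry, child ".." */
    }
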
+diff --git a/include/linux/platform_data/amd_qdma.h b/include/linux/platform_data/amd_qdma.h
+index 576d952f97edd4..967a6ef31cf982 100644
+--- a/include/linux/platform_data/amd_qdma.h
++++ b/include/linux/platform_data/amd_qdma.h
+@@ -26,11 +26,13 @@ struct dma_slave_map;
+ * @max_mm_channels: Maximum number of MM DMA channels in each direction
+ * @device_map: DMA slave map
+ * @irq_index: The index of first IRQ
++ * @dma_dev: The device pointer for dma operations
+ */
+ struct qdma_platdata {
+ u32 max_mm_channels;
+ u32 irq_index;
+ struct dma_slave_map *device_map;
++ struct device *dma_dev;
+ };
+
+ #endif /* _PLATDATA_AMD_QDMA_H */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index c14446c6164d72..02eaf84c8626f4 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1633,8 +1633,9 @@ static inline unsigned int __task_state_index(unsigned int tsk_state,
+ * We're lying here, but rather than expose a completely new task state
+ * to userspace, we can make this appear as if the task has gone through
+ * a regular rt_mutex_lock() call.
++ * Report frozen tasks as uninterruptible.
+ */
+- if (tsk_state & TASK_RTLOCK_WAIT)
++ if ((tsk_state & TASK_RTLOCK_WAIT) || (tsk_state & TASK_FROZEN))
+ state = TASK_UNINTERRUPTIBLE;
+
+ return fls(state);
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index d9b03e0746e7a4..2cbe0c22a32f3c 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -317,17 +317,22 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
+ kfree_skb(skb);
+ }
+
+-static inline void sk_psock_queue_msg(struct sk_psock *psock,
++static inline bool sk_psock_queue_msg(struct sk_psock *psock,
+ struct sk_msg *msg)
+ {
++ bool ret;
++
+ spin_lock_bh(&psock->ingress_lock);
+- if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
++ if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+ list_add_tail(&msg->list, &psock->ingress_msg);
+- else {
++ ret = true;
++ } else {
+ sk_msg_free(psock->sk, msg);
+ kfree(msg);
++ ret = false;
+ }
+ spin_unlock_bh(&psock->ingress_lock);
++ return ret;
+ }
+
+ static inline struct sk_msg *sk_psock_dequeue_msg(struct sk_psock *psock)
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 4df2ff81d3dea5..77769ff5054441 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -379,7 +379,7 @@ struct trace_event_call {
+ struct list_head list;
+ struct trace_event_class *class;
+ union {
+- char *name;
++ const char *name;
+ /* Set TRACE_EVENT_FL_TRACEPOINT flag when using "tp" */
+ struct tracepoint *tp;
+ };
+diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
+index d2761bf8ff32c9..9f3a04345b8606 100644
+--- a/include/linux/vmstat.h
++++ b/include/linux/vmstat.h
+@@ -515,7 +515,7 @@ static inline const char *node_stat_name(enum node_stat_item item)
+
+ static inline const char *lru_list_name(enum lru_list lru)
+ {
+- return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
++ return node_stat_name(NR_LRU_BASE + (enum node_stat_item)lru) + 3; // skip "nr_"
+ }
+
+ #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG)
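
The cast added to lru_list_name() above looks redundant, but it keeps the
addition within a single enumeration type: NR_LRU_BASE is an enum
node_stat_item while lru is an enum lru_list, and newer compilers warn about
arithmetic that mixes two different enum types (e.g. clang's
-Wenum-enum-conversion). A sketch of the difference:

    enum node_stat_item item;

    item = NR_LRU_BASE + lru;				/* mixed-enum arithmetic: may warn */
    item = NR_LRU_BASE + (enum node_stat_item)lru;	/* convert first: no warning */
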
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f29c1444893875..fa055cf1785efd 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1521,7 +1521,7 @@ static inline bool sk_wmem_schedule(struct sock *sk, int size)
+ }
+
+ static inline bool
+-sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
++__sk_rmem_schedule(struct sock *sk, int size, bool pfmemalloc)
+ {
+ int delta;
+
+@@ -1529,7 +1529,13 @@ sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
+ return true;
+ delta = size - sk->sk_forward_alloc;
+ return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) ||
+- skb_pfmemalloc(skb);
++ pfmemalloc;
++}
++
++static inline bool
++sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
++{
++ return __sk_rmem_schedule(sk, size, skb_pfmemalloc(skb));
+ }
+
+ static inline int sk_unused_reserved_mem(const struct sock *sk)
+diff --git a/include/uapi/linux/stddef.h b/include/uapi/linux/stddef.h
+index 58154117d9b090..a6fce46aeb37c9 100644
+--- a/include/uapi/linux/stddef.h
++++ b/include/uapi/linux/stddef.h
+@@ -8,6 +8,13 @@
+ #define __always_inline inline
+ #endif
+
++/* Not all C++ standards support type declarations inside an anonymous union */
++#ifndef __cplusplus
++#define __struct_group_tag(TAG) TAG
++#else
++#define __struct_group_tag(TAG)
++#endif
++
+ /**
+ * __struct_group() - Create a mirrored named and anonymous struct
+ *
+@@ -20,13 +27,13 @@
+ * and size: one anonymous and one named. The former's members can be used
+ * normally without sub-struct naming, and the latter can be used to
+ * reason about the start, end, and size of the group of struct members.
+- * The named struct can also be explicitly tagged for layer reuse, as well
+- * as both having struct attributes appended.
++ * The named struct can also be explicitly tagged for layer reuse (C only),
++ * as well as both having struct attributes appended.
+ */
+ #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
+ union { \
+ struct { MEMBERS } ATTRS; \
+- struct TAG { MEMBERS } ATTRS NAME; \
++ struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
+ } ATTRS
+
+ #ifdef __cplusplus
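
To see why the tag has to disappear under C++, it helps to look at what
__struct_group() expands to: in C the inner tagged struct definition is legal
inside the anonymous union, while in C++ it is not, so __struct_group_tag()
drops the tag there and only the anonymous/named pair of views remains. A
sketch of the C expansion for a hypothetical packet header:

    /* struct pkt { __struct_group(hdr_tag, hdr, , __u8 type; __u8 len;); __u8 data[8]; };
     * expands (in C, with empty ATTRS) to:
     */
    struct pkt {
    	union {
    		struct { unsigned char type; unsigned char len; };		/* pkt.type */
    		struct hdr_tag { unsigned char type; unsigned char len; } hdr;	/* pkt.hdr */
    	};
    	unsigned char data[8];
    };
    /* Both views have identical layout; struct hdr_tag can be reused elsewhere
     * in C, which is exactly the declaration C++ rejects inside the union. */
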
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index a26593979887f3..1cfcc735b8e38e 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -412,6 +412,7 @@ void io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ struct io_uring_params *p)
+ {
++ struct task_struct *task_to_put = NULL;
+ int ret;
+
+ /* Retain compatibility with failing for an invalid attach attempt */
+@@ -492,6 +493,7 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ }
+
+ sqd->thread = tsk;
++ task_to_put = get_task_struct(tsk);
+ ret = io_uring_alloc_task_context(tsk, ctx);
+ wake_up_new_task(tsk);
+ if (ret)
+@@ -502,11 +504,15 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ goto err;
+ }
+
++ if (task_to_put)
++ put_task_struct(task_to_put);
+ return 0;
+ err_sqpoll:
+ complete(&ctx->sq_data->exited);
+ err:
+ io_sq_thread_finish(ctx);
++ if (task_to_put)
++ put_task_struct(task_to_put);
+ return ret;
+ }
+
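
The sqpoll fix above is the keep-a-local-reference idiom: once published and
woken, the SQPOLL thread owns its own lifetime and may exit at any moment, so
the creator takes an extra reference before waking it and drops that
reference only after it is done touching the task, on both the success and
the error path. Condensed (names from the hunk):

    sqd->thread = tsk;			/* publish the task pointer */
    task_to_put = get_task_struct(tsk);	/* +1 ref: tsk stays valid for us */
    wake_up_new_task(tsk);			/* thread may now run and even exit */
    /* ... remaining setup may still dereference tsk safely ... */
    put_task_struct(task_to_put);		/* drop our ref when finished */
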
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4c486a0bfcc4d8..767f1cb8c27e17 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7868,7 +7868,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
+ verbose(env,
+ "arg#%d expected pointer to stack or const struct bpf_dynptr\n",
+- regno);
++ regno - 1);
+ return -EINVAL;
+ }
+
+@@ -7922,7 +7922,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ if (!is_dynptr_reg_valid_init(env, reg)) {
+ verbose(env,
+ "Expected an initialized dynptr as arg #%d\n",
+- regno);
++ regno - 1);
+ return -EINVAL;
+ }
+
+@@ -7930,7 +7930,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
+ verbose(env,
+ "Expected a dynptr of type %s as arg #%d\n",
+- dynptr_type_str(arg_to_dynptr_type(arg_type)), regno);
++ dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
+ return -EINVAL;
+ }
+
+@@ -7999,7 +7999,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ */
+ btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
+ if (btf_id < 0) {
+- verbose(env, "expected valid iter pointer as arg #%d\n", regno);
++ verbose(env, "expected valid iter pointer as arg #%d\n", regno - 1);
+ return -EINVAL;
+ }
+ t = btf_type_by_id(meta->btf, btf_id);
+@@ -8009,7 +8009,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ /* bpf_iter_<type>_new() expects pointer to uninit iter state */
+ if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
+ verbose(env, "expected uninitialized iter_%s as arg #%d\n",
+- iter_type_str(meta->btf, btf_id), regno);
++ iter_type_str(meta->btf, btf_id), regno - 1);
+ return -EINVAL;
+ }
+
+@@ -8033,7 +8033,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ break;
+ case -EINVAL:
+ verbose(env, "expected an initialized iter_%s as arg #%d\n",
+- iter_type_str(meta->btf, btf_id), regno);
++ iter_type_str(meta->btf, btf_id), regno - 1);
+ return err;
+ case -EPROTO:
+ verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
+@@ -21085,11 +21085,15 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ * changed in some incompatible and hard to support
+ * way, it's fine to back out this inlining logic
+ */
++#ifdef CONFIG_SMP
+ insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
+ insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+ insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
+ cnt = 3;
+-
++#else
++ insn_buf[0] = BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_0);
++ cnt = 1;
++#endif
+ new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+ if (!new_prog)
+ return -ENOMEM;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index ce8be55e5e04b3..e192bdbc9adebb 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -640,11 +640,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ LIST_HEAD(uf);
+ VMA_ITERATOR(vmi, mm, 0);
+
+- uprobe_start_dup_mmap();
+- if (mmap_write_lock_killable(oldmm)) {
+- retval = -EINTR;
+- goto fail_uprobe_end;
+- }
++ if (mmap_write_lock_killable(oldmm))
++ return -EINTR;
+ flush_cache_dup_mm(oldmm);
+ uprobe_dup_mmap(oldmm, mm);
+ /*
+@@ -783,8 +780,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ dup_userfaultfd_complete(&uf);
+ else
+ dup_userfaultfd_fail(&uf);
+-fail_uprobe_end:
+- uprobe_end_dup_mmap();
+ return retval;
+
+ fail_nomem_anon_vma_fork:
+@@ -1692,9 +1687,11 @@ static struct mm_struct *dup_mm(struct task_struct *tsk,
+ if (!mm_init(mm, tsk, mm->user_ns))
+ goto fail_nomem;
+
++ uprobe_start_dup_mmap();
+ err = dup_mmap(mm, oldmm);
+ if (err)
+ goto free_pt;
++ uprobe_end_dup_mmap();
+
+ mm->hiwater_rss = get_mm_rss(mm);
+ mm->hiwater_vm = mm->total_vm;
+@@ -1709,6 +1706,8 @@ static struct mm_struct *dup_mm(struct task_struct *tsk,
+ mm->binfmt = NULL;
+ mm_init_owner(mm, NULL);
+ mmput(mm);
++ if (err)
++ uprobe_end_dup_mmap();
+
+ fail_nomem:
+ return NULL;
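
The fork.c movement above pulls the uprobe_start_dup_mmap()/
uprobe_end_dup_mmap() pair out of dup_mmap() and up into dup_mm(), so the
pair no longer brackets mmap_write_lock_killable() from inside the function
that takes the lock, and the error path keeps the pairing balanced.
Schematically:

    uprobe_start_dup_mmap();		/* now taken outside dup_mmap() */
    err = dup_mmap(mm, oldmm);		/* takes and drops oldmm's mmap lock */
    if (err)
    	goto free_pt;
    uprobe_end_dup_mmap();		/* success: end immediately */
    return mm;

    free_pt:
    	mmput(mm);
    	if (err)			/* only balance if start was reached */
    		uprobe_end_dup_mmap();
    	return NULL;
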
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 35515192aa0fda..b04990385a6a87 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5111,6 +5111,9 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
+ cpumask_var_t tracing_cpumask_new;
+ int err;
+
++ if (count == 0 || count > KMALLOC_MAX_SIZE)
++ return -EINVAL;
++
+ if (!zalloc_cpumask_var(&tracing_cpumask_new, GFP_KERNEL))
+ return -ENOMEM;
+
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 263fac44d3ca32..935a886af40c90 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -725,7 +725,7 @@ static int trace_kprobe_module_callback(struct notifier_block *nb,
+
+ static struct notifier_block trace_kprobe_module_nb = {
+ .notifier_call = trace_kprobe_module_callback,
+- .priority = 1 /* Invoked after kprobe module callback */
++ .priority = 2 /* Invoked after kprobe and jump_label module callback */
+ };
+ static int trace_kprobe_register_module_notifier(void)
+ {
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 9d078b37fe0b9b..abac770bc0b4c7 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -1173,6 +1173,8 @@ EXPORT_SYMBOL(ceph_osdc_new_request);
+
+ int __ceph_alloc_sparse_ext_map(struct ceph_osd_req_op *op, int cnt)
+ {
++ WARN_ON(op->op != CEPH_OSD_OP_SPARSE_READ);
++
+ op->extent.sparse_ext_cnt = cnt;
+ op->extent.sparse_ext = kmalloc_array(cnt,
+ sizeof(*op->extent.sparse_ext),
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9a459213d283f1..55495063621d6c 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3751,13 +3751,22 @@ static const struct bpf_func_proto bpf_skb_adjust_room_proto = {
+
+ static u32 __bpf_skb_min_len(const struct sk_buff *skb)
+ {
+- u32 min_len = skb_network_offset(skb);
++ int offset = skb_network_offset(skb);
++ u32 min_len = 0;
+
+- if (skb_transport_header_was_set(skb))
+- min_len = skb_transport_offset(skb);
+- if (skb->ip_summed == CHECKSUM_PARTIAL)
+- min_len = skb_checksum_start_offset(skb) +
+- skb->csum_offset + sizeof(__sum16);
++ if (offset > 0)
++ min_len = offset;
++ if (skb_transport_header_was_set(skb)) {
++ offset = skb_transport_offset(skb);
++ if (offset > 0)
++ min_len = offset;
++ }
++ if (skb->ip_summed == CHECKSUM_PARTIAL) {
++ offset = skb_checksum_start_offset(skb) +
++ skb->csum_offset + sizeof(__sum16);
++ if (offset > 0)
++ min_len = offset;
++ }
+ return min_len;
+ }
+
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index e90fbab703b2db..8ad7e6755fd642 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -445,8 +445,10 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ if (likely(!peek)) {
+ sge->offset += copy;
+ sge->length -= copy;
+- if (!msg_rx->skb)
++ if (!msg_rx->skb) {
+ sk_mem_uncharge(sk, copy);
++ atomic_sub(copy, &sk->sk_rmem_alloc);
++ }
+ msg_rx->sg.size -= copy;
+
+ if (!sge->length) {
+@@ -772,6 +774,8 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
+
+ list_for_each_entry_safe(msg, tmp, &psock->ingress_msg, list) {
+ list_del(&msg->list);
++ if (!msg->skb)
++ atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc);
+ sk_msg_free(psock->sk, msg);
+ kfree(msg);
+ }
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 99cef92e6290cf..392678ae80f4ed 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -49,13 +49,14 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+ sge = sk_msg_elem(msg, i);
+ size = (apply && apply_bytes < sge->length) ?
+ apply_bytes : sge->length;
+- if (!sk_wmem_schedule(sk, size)) {
++ if (!__sk_rmem_schedule(sk, size, false)) {
+ if (!copied)
+ ret = -ENOMEM;
+ break;
+ }
+
+ sk_mem_charge(sk, size);
++ atomic_add(size, &sk->sk_rmem_alloc);
+ sk_msg_xfer(tmp, msg, i, size);
+ copied += size;
+ if (sge->length)
+@@ -74,7 +75,8 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+
+ if (!ret) {
+ msg->sg.start = i;
+- sk_psock_queue_msg(psock, tmp);
++ if (!sk_psock_queue_msg(psock, tmp))
++ atomic_sub(copied, &sk->sk_rmem_alloc);
+ sk_psock_data_ready(sk, psock);
+ } else {
+ sk_msg_free(sk, tmp);
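
The skmsg.h, sock.h, and tcp_bpf.c hunks above act as one change: ingress
sk_msg data is now charged to sk->sk_rmem_alloc up front, __sk_rmem_schedule()
lets a caller that has no skb ask for receive budget, and
sk_psock_queue_msg() reports failure so the caller can undo the charge when
the psock is shutting down. The resulting accounting pattern, condensed:

    if (!__sk_rmem_schedule(sk, size, false))	/* rmem budget, no pfmemalloc skb */
    	return -ENOMEM;
    sk_mem_charge(sk, size);
    atomic_add(size, &sk->sk_rmem_alloc);	/* charge before queueing */

    if (!sk_psock_queue_msg(psock, msg))	/* queueing failed: psock gone */
    	atomic_sub(size, &sk->sk_rmem_alloc);	/* undo the rmem charge */
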
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 13b71069ae1874..b3853583d2ae1c 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -505,7 +505,7 @@ static void *snd_dma_wc_alloc(struct snd_dma_buffer *dmab, size_t size)
+ if (!p)
+ return NULL;
+ dmab->addr = dma_map_single(dmab->dev.dev, p, size, DMA_BIDIRECTIONAL);
+- if (dmab->addr == DMA_MAPPING_ERROR) {
++ if (dma_mapping_error(dmab->dev.dev, dmab->addr)) {
+ do_free_pages(dmab->area, size, true);
+ return NULL;
+ }
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 8d37f237f83b2e..bd26bb2210cbd4 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -37,6 +37,7 @@ static int process_legacy_output(struct snd_ump_endpoint *ump,
+ u32 *buffer, int count);
+ static void process_legacy_input(struct snd_ump_endpoint *ump, const u32 *src,
+ int words);
++static void update_legacy_names(struct snd_ump_endpoint *ump);
+ #else
+ static inline int process_legacy_output(struct snd_ump_endpoint *ump,
+ u32 *buffer, int count)
+@@ -47,6 +48,9 @@ static inline void process_legacy_input(struct snd_ump_endpoint *ump,
+ const u32 *src, int words)
+ {
+ }
++static inline void update_legacy_names(struct snd_ump_endpoint *ump)
++{
++}
+ #endif
+
+ static const struct snd_rawmidi_global_ops snd_ump_rawmidi_ops = {
+@@ -861,6 +865,7 @@ static int ump_handle_fb_info_msg(struct snd_ump_endpoint *ump,
+ fill_fb_info(ump, &fb->info, buf);
+ if (ump->parsed) {
+ snd_ump_update_group_attrs(ump);
++ update_legacy_names(ump);
+ seq_notify_fb_change(ump, fb);
+ }
+ }
+@@ -893,6 +898,7 @@ static int ump_handle_fb_name_msg(struct snd_ump_endpoint *ump,
+ /* notify the FB name update to sequencer, too */
+ if (ret > 0 && ump->parsed) {
+ snd_ump_update_group_attrs(ump);
++ update_legacy_names(ump);
+ seq_notify_fb_change(ump, fb);
+ }
+ return ret;
+@@ -1087,6 +1093,8 @@ static int snd_ump_legacy_open(struct snd_rawmidi_substream *substream)
+ guard(mutex)(&ump->open_mutex);
+ if (ump->legacy_substreams[dir][group])
+ return -EBUSY;
++ if (!ump->groups[group].active)
++ return -ENODEV;
+ if (dir == SNDRV_RAWMIDI_STREAM_OUTPUT) {
+ if (!ump->legacy_out_opens) {
+ err = snd_rawmidi_kernel_open(&ump->core, 0,
+@@ -1254,11 +1262,20 @@ static void fill_substream_names(struct snd_ump_endpoint *ump,
+ name = ump->groups[idx].name;
+ if (!*name)
+ name = ump->info.name;
+- snprintf(s->name, sizeof(s->name), "Group %d (%.16s)",
+- idx + 1, name);
++ scnprintf(s->name, sizeof(s->name), "Group %d (%.16s)%s",
++ idx + 1, name,
++ ump->groups[idx].active ? "" : " [Inactive]");
+ }
+ }
+
++static void update_legacy_names(struct snd_ump_endpoint *ump)
++{
++ struct snd_rawmidi *rmidi = ump->legacy_rmidi;
++
++ fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_INPUT);
++ fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT);
++}
++
+ int snd_ump_attach_legacy_rawmidi(struct snd_ump_endpoint *ump,
+ char *id, int device)
+ {
+@@ -1295,10 +1312,7 @@ int snd_ump_attach_legacy_rawmidi(struct snd_ump_endpoint *ump,
+ rmidi->ops = &snd_ump_legacy_ops;
+ rmidi->private_data = ump;
+ ump->legacy_rmidi = rmidi;
+- if (input)
+- fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_INPUT);
+- if (output)
+- fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT);
++ update_legacy_names(ump);
+
+ ump_dbg(ump, "Created a legacy rawmidi #%d (%s)\n", device, id);
+ return 0;
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 2e9f817b948eb3..538c37a78a56f7 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -307,6 +307,7 @@ enum {
+ CXT_FIXUP_HP_MIC_NO_PRESENCE,
+ CXT_PINCFG_SWS_JS201D,
+ CXT_PINCFG_TOP_SPEAKER,
++ CXT_FIXUP_HP_A_U,
+ };
+
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -774,6 +775,18 @@ static void cxt_setup_mute_led(struct hda_codec *codec,
+ }
+ }
+
++static void cxt_setup_gpio_unmute(struct hda_codec *codec,
++ unsigned int gpio_mute_mask)
++{
++ if (gpio_mute_mask) {
++ // set gpio data to 0.
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA, 0);
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_MASK, gpio_mute_mask);
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DIRECTION, gpio_mute_mask);
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_STICKY_MASK, 0);
++ }
++}
++
+ static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -788,6 +801,15 @@ static void cxt_fixup_hp_zbook_mute_led(struct hda_codec *codec,
+ cxt_setup_mute_led(codec, 0x10, 0x20);
+ }
+
++static void cxt_fixup_hp_a_u(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ // Some BIOS versions mute the speaker/headphone by driving the GPIO high to
++ // avoid pop noise, so unmute once by clearing the GPIO data at system init.
++ if (action == HDA_FIXUP_ACT_INIT)
++ cxt_setup_gpio_unmute(codec, 0x2);
++}
++
+ /* ThinkPad X200 & co with cxt5051 */
+ static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
+ { 0x16, 0x042140ff }, /* HP (seq# overridden) */
+@@ -998,6 +1020,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ { }
+ },
+ },
++ [CXT_FIXUP_HP_A_U] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cxt_fixup_hp_a_u,
++ },
+ };
+
+ static const struct hda_quirk cxt5045_fixups[] = {
+@@ -1072,6 +1098,7 @@ static const struct hda_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
++ SND_PCI_QUIRK(0x14f1, 0x0252, "MBX-Z60MR100", CXT_FIXUP_HP_A_U),
+ SND_PCI_QUIRK(0x14f1, 0x0265, "SWS JS201D", CXT_PINCFG_SWS_JS201D),
+ SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+@@ -1117,6 +1144,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ { .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
+ { .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
+ { .id = CXT_PINCFG_TOP_SPEAKER, .name = "sirius-top-speaker" },
++ { .id = CXT_FIXUP_HP_A_U, .name = "HP-U-support" },
+ {}
+ };
+
+diff --git a/sound/sh/sh_dac_audio.c b/sound/sh/sh_dac_audio.c
+index e7b6ce7bd086bd..1c1c14708f0181 100644
+--- a/sound/sh/sh_dac_audio.c
++++ b/sound/sh/sh_dac_audio.c
+@@ -163,7 +163,7 @@ static int snd_sh_dac_pcm_copy(struct snd_pcm_substream *substream,
+ /* channel is not used (interleaved data) */
+ struct snd_sh_dac *chip = snd_pcm_substream_chip(substream);
+
+- if (copy_from_iter_toio(chip->data_buffer + pos, src, count))
++ if (copy_from_iter(chip->data_buffer + pos, count, src) != count)
+ return -EFAULT;
+ chip->buffer_end = chip->data_buffer + pos + count;
+
+@@ -182,7 +182,7 @@ static int snd_sh_dac_pcm_silence(struct snd_pcm_substream *substream,
+ /* channel is not used (interleaved data) */
+ struct snd_sh_dac *chip = snd_pcm_substream_chip(substream);
+
+- memset_io(chip->data_buffer + pos, 0, count);
++ memset(chip->data_buffer + pos, 0, count);
+ chip->buffer_end = chip->data_buffer + pos + count;
+
+ if (chip->empty) {
+@@ -211,7 +211,6 @@ static const struct snd_pcm_ops snd_sh_dac_pcm_ops = {
+ .pointer = snd_sh_dac_pcm_pointer,
+ .copy = snd_sh_dac_pcm_copy,
+ .fill_silence = snd_sh_dac_pcm_silence,
+- .mmap = snd_pcm_lib_mmap_iomem,
+ };
+
+ static int snd_sh_dac_pcm(struct snd_sh_dac *chip, int device)
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index c72d666d51bdf4..5c4a0be7a78892 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -375,11 +375,18 @@ static int get_acp63_device_config(struct pci_dev *pci, struct acp63_dev_data *a
+ {
+ struct acpi_device *pdm_dev;
+ const union acpi_object *obj;
++ acpi_handle handle;
++ acpi_integer dmic_status;
+ u32 config;
+ bool is_dmic_dev = false;
+ bool is_sdw_dev = false;
++ bool wov_en, dmic_en;
+ int ret;
+
++ /* If no WOV entry is found, enable the DMIC based on the acp-audio-device-type entry */
++ wov_en = true;
++ dmic_en = false;
++
+ config = readl(acp_data->acp63_base + ACP_PIN_CONFIG);
+ switch (config) {
+ case ACP_CONFIG_4:
+@@ -412,10 +419,18 @@ static int get_acp63_device_config(struct pci_dev *pci, struct acp63_dev_data *a
+ if (!acpi_dev_get_property(pdm_dev, "acp-audio-device-type",
+ ACPI_TYPE_INTEGER, &obj) &&
+ obj->integer.value == ACP_DMIC_DEV)
+- is_dmic_dev = true;
++ dmic_en = true;
+ }
++
++ handle = ACPI_HANDLE(&pci->dev);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ wov_en = dmic_status;
+ }
+
++ if (dmic_en && wov_en)
++ is_dmic_dev = true;
++
+ if (acp_data->is_sdw_config) {
+ ret = acp_scan_sdw_devices(&pci->dev, ACP63_SDW_ADDR);
+ if (!ret && acp_data->info.link_mask)
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index db57292c00ca1e..41042259f2b26e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -608,7 +608,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C")
++ DMI_MATCH(DMI_PRODUCT_NAME, "21QB")
+ },
+ /* Note this quirk excludes the CODEC mic */
+ .driver_data = (void *)(SOC_SDW_CODEC_MIC),
+@@ -617,9 +617,26 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B")
++ DMI_MATCH(DMI_PRODUCT_NAME, "21QA")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ /* Note this quirk excludes the CODEC mic */
++ .driver_data = (void *)(SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21Q6")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21Q7")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
+ },
+
+ /* ArrowLake devices */
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index ac505c7ad34295..82f46ecd94301e 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -103,8 +103,10 @@ hda_dai_get_ops(struct snd_pcm_substream *substream, struct snd_soc_dai *cpu_dai
+ return sdai->platform_private;
+ }
+
+-int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream,
+- struct snd_soc_dai *cpu_dai)
++static int
++hda_link_dma_cleanup(struct snd_pcm_substream *substream,
++ struct hdac_ext_stream *hext_stream,
++ struct snd_soc_dai *cpu_dai, bool release)
+ {
+ const struct hda_dai_widget_dma_ops *ops = hda_dai_get_ops(substream, cpu_dai);
+ struct sof_intel_hda_stream *hda_stream;
+@@ -128,6 +130,17 @@ int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_st
+ snd_hdac_ext_bus_link_clear_stream_id(hlink, stream_tag);
+ }
+
++ if (!release) {
++ /*
++ * Force stream reconfiguration without releasing the channel on
++ * subsequent stream restart (without free), including LinkDMA
++ * reset.
++ * The stream is released via hda_dai_hw_free()
++ */
++ hext_stream->link_prepared = 0;
++ return 0;
++ }
++
+ if (ops->release_hext_stream)
+ ops->release_hext_stream(sdev, cpu_dai, substream);
+
+@@ -211,7 +224,7 @@ static int __maybe_unused hda_dai_hw_free(struct snd_pcm_substream *substream,
+ if (!hext_stream)
+ return 0;
+
+- return hda_link_dma_cleanup(substream, hext_stream, cpu_dai);
++ return hda_link_dma_cleanup(substream, hext_stream, cpu_dai, true);
+ }
+
+ static int __maybe_unused hda_dai_hw_params_data(struct snd_pcm_substream *substream,
+@@ -304,7 +317,8 @@ static int __maybe_unused hda_dai_trigger(struct snd_pcm_substream *substream, i
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+- ret = hda_link_dma_cleanup(substream, hext_stream, dai);
++ ret = hda_link_dma_cleanup(substream, hext_stream, dai,
++ cmd == SNDRV_PCM_TRIGGER_STOP ? false : true);
+ if (ret < 0) {
+ dev_err(sdev->dev, "%s: failed to clean up link DMA\n", __func__);
+ return ret;
+@@ -656,8 +670,7 @@ static int hda_dai_suspend(struct hdac_bus *bus)
+ }
+
+ ret = hda_link_dma_cleanup(hext_stream->link_substream,
+- hext_stream,
+- cpu_dai);
++ hext_stream, cpu_dai, true);
+ if (ret < 0)
+ return ret;
+ }
+diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h
+index b74a472435b5d2..4a4a0b55f0bc60 100644
+--- a/sound/soc/sof/intel/hda.h
++++ b/sound/soc/sof/intel/hda.h
+@@ -1028,8 +1028,6 @@ const struct hda_dai_widget_dma_ops *
+ hda_select_dai_widget_ops(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget);
+ int hda_dai_config(struct snd_soc_dapm_widget *w, unsigned int flags,
+ struct snd_sof_dai_config_data *data);
+-int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream,
+- struct snd_soc_dai *cpu_dai);
+
+ static inline struct snd_sof_dev *widget_to_sdev(struct snd_soc_dapm_widget *w)
+ {
+diff --git a/tools/include/uapi/linux/stddef.h b/tools/include/uapi/linux/stddef.h
+index bb6ea517efb511..c53cde425406b7 100644
+--- a/tools/include/uapi/linux/stddef.h
++++ b/tools/include/uapi/linux/stddef.h
+@@ -8,6 +8,13 @@
+ #define __always_inline __inline__
+ #endif
+
++/* Not all C++ standards support type declarations inside an anonymous union */
++#ifndef __cplusplus
++#define __struct_group_tag(TAG) TAG
++#else
++#define __struct_group_tag(TAG)
++#endif
++
+ /**
+ * __struct_group() - Create a mirrored named and anonymous struct
+ *
+@@ -20,14 +27,14 @@
+ * and size: one anonymous and one named. The former's members can be used
+ * normally without sub-struct naming, and the latter can be used to
+ * reason about the start, end, and size of the group of struct members.
+- * The named struct can also be explicitly tagged for layer reuse, as well
+- * as both having struct attributes appended.
++ * The named struct can also be explicitly tagged for layer reuse (C only),
++ * as well as both having struct attributes appended.
+ */
+ #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
+ union { \
+ struct { MEMBERS } ATTRS; \
+- struct TAG { MEMBERS } ATTRS NAME; \
+- }
++ struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
++ } ATTRS
+
+ /**
+ * __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
+diff --git a/tools/objtool/noreturns.h b/tools/objtool/noreturns.h
+index e7da92489167e9..f98dc0e1c99c4a 100644
+--- a/tools/objtool/noreturns.h
++++ b/tools/objtool/noreturns.h
+@@ -20,6 +20,7 @@ NORETURN(__x64_sys_exit_group)
+ NORETURN(arch_cpu_idle_dead)
+ NORETURN(bch2_trans_in_restart_error)
+ NORETURN(bch2_trans_restart_error)
++NORETURN(bch2_trans_unlocked_error)
+ NORETURN(cpu_bringup_and_idle)
+ NORETURN(cpu_startup_entry)
+ NORETURN(do_exit)
+diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
+index 8f36c9de759152..dfd817d0348c47 100644
+--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
++++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
+@@ -149,7 +149,7 @@ int ringbuf_release_uninit_dynptr(void *ctx)
+
+ /* A dynptr can't be used after it has been invalidated */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int use_after_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -428,7 +428,7 @@ int invalid_helper2(void *ctx)
+
+ /* A bpf_dynptr is invalidated if it's been written into */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int invalid_write1(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -1407,7 +1407,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
+
+ /* bpf_dynptr_adjust can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_adjust_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1420,7 +1420,7 @@ int dynptr_adjust_invalid(void *ctx)
+
+ /* bpf_dynptr_is_null can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_is_null_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1433,7 +1433,7 @@ int dynptr_is_null_invalid(void *ctx)
+
+ /* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_is_rdonly_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1446,7 +1446,7 @@ int dynptr_is_rdonly_invalid(void *ctx)
+
+ /* bpf_dynptr_size can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_size_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1459,7 +1459,7 @@ int dynptr_size_invalid(void *ctx)
+
+ /* Only initialized dynptrs can be cloned */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int clone_invalid1(void *ctx)
+ {
+ struct bpf_dynptr ptr1 = {};
+@@ -1493,7 +1493,7 @@ int clone_invalid2(struct xdp_md *xdp)
+
+ /* Invalidating a dynptr should invalidate its clones */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int clone_invalidate1(void *ctx)
+ {
+ struct bpf_dynptr clone;
+@@ -1514,7 +1514,7 @@ int clone_invalidate1(void *ctx)
+
+ /* Invalidating a dynptr should invalidate its parent */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int clone_invalidate2(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -1535,7 +1535,7 @@ int clone_invalidate2(void *ctx)
+
+ /* Invalidating a dynptr should invalidate its siblings */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int clone_invalidate3(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -1723,7 +1723,7 @@ __noinline long global_call_bpf_dynptr(const struct bpf_dynptr *dynptr)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("arg#1 expected pointer to stack or const struct bpf_dynptr")
++__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+ int test_dynptr_reg_type(void *ctx)
+ {
+ struct task_struct *current = NULL;
+diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c
+index d47e59aba6de35..f41257eadbb258 100644
+--- a/tools/testing/selftests/bpf/progs/iters_state_safety.c
++++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c
+@@ -73,7 +73,7 @@ int create_and_forget_to_destroy_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int destroy_without_creating_fail(void *ctx)
+ {
+ /* init with zeros to stop verifier complaining about uninit stack */
+@@ -91,7 +91,7 @@ int destroy_without_creating_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int compromise_iter_w_direct_write_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -143,7 +143,7 @@ int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int compromise_iter_w_helper_write_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -230,7 +230,7 @@ int valid_stack_reuse(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected uninitialized iter_num as arg #1")
++__failure __msg("expected uninitialized iter_num as arg #0")
+ int double_create_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -258,7 +258,7 @@ int double_create_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int double_destroy_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -284,7 +284,7 @@ int double_destroy_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int next_without_new_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -305,7 +305,7 @@ int next_without_new_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int next_after_destroy_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+index 4a176e6aede897..6543d5b6e0a976 100644
+--- a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
++++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+@@ -79,7 +79,7 @@ int testmod_seq_truncated(const void *ctx)
+
+ SEC("?raw_tp")
+ __failure
+-__msg("expected an initialized iter_testmod_seq as arg #2")
++__msg("expected an initialized iter_testmod_seq as arg #1")
+ int testmod_seq_getter_before_bad(const void *ctx)
+ {
+ struct bpf_iter_testmod_seq it;
+@@ -89,7 +89,7 @@ int testmod_seq_getter_before_bad(const void *ctx)
+
+ SEC("?raw_tp")
+ __failure
+-__msg("expected an initialized iter_testmod_seq as arg #2")
++__msg("expected an initialized iter_testmod_seq as arg #1")
+ int testmod_seq_getter_after_bad(const void *ctx)
+ {
+ struct bpf_iter_testmod_seq it;
+diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+index e68667aec6a652..cd4d752bd089ca 100644
+--- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
++++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+@@ -45,7 +45,7 @@ int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size)
+ }
+
+ SEC("?lsm.s/bpf")
+-__failure __msg("arg#1 expected pointer to stack or const struct bpf_dynptr")
++__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+ int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size)
+ {
+ unsigned long val = 0;
+diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+index a7a6ae6c162fe0..8bcddadfc4daed 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
++++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+@@ -32,7 +32,7 @@ int BPF_PROG(no_destroy, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+
+ SEC("iter/cgroup")
+ __description("uninitialized iter in ->next()")
+-__failure __msg("expected an initialized iter_bits as arg #1")
++__failure __msg("expected an initialized iter_bits as arg #0")
+ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+ {
+ struct bpf_iter_bits it = {};
+@@ -43,7 +43,7 @@ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+
+ SEC("iter/cgroup")
+ __description("uninitialized iter in ->destroy()")
+-__failure __msg("expected an initialized iter_bits as arg #1")
++__failure __msg("expected an initialized iter_bits as arg #0")
+ int BPF_PROG(destroy_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+ {
+ struct bpf_iter_bits it = {};
+diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
+index 2d742fdac6b977..81943c6254e6bc 100644
+--- a/tools/testing/selftests/bpf/trace_helpers.c
++++ b/tools/testing/selftests/bpf/trace_helpers.c
+@@ -293,6 +293,10 @@ static int procmap_query(int fd, const void *addr, __u32 query_flags, size_t *st
+ return 0;
+ }
+ #else
++# ifndef PROCMAP_QUERY_VMA_EXECUTABLE
++# define PROCMAP_QUERY_VMA_EXECUTABLE 0x04
++# endif
++
+ static int procmap_query(int fd, const void *addr, __u32 query_flags, size_t *start, size_t *offset, int *flags)
+ {
+ return -EOPNOTSUPP;
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index ae55cd79128336..2cc3ffcbc983d3 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -280,6 +280,21 @@ static void timerlat_hist_header(struct osnoise_tool *tool)
+ trace_seq_reset(s);
+ }
+
++/*
++ * format_summary_value - format a line of summary value (min, max or avg)
++ * of hist data
++ */
++static void format_summary_value(struct trace_seq *seq,
++ int count,
++ unsigned long long val,
++ bool avg)
++{
++ if (count)
++ trace_seq_printf(seq, "%9llu ", avg ? val / count : val);
++ else
++ trace_seq_printf(seq, "%9c ", '-');
++}
++
+ /*
+ * timerlat_print_summary - print the summary of the hist data to the output
+ */
+@@ -327,29 +342,23 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
+ continue;
+
+- if (!params->no_irq) {
+- if (data->hist[cpu].irq_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].min_irq);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_irq)
++ format_summary_value(trace->seq,
++ data->hist[cpu].irq_count,
++ data->hist[cpu].min_irq,
++ false);
+
+- if (!params->no_thread) {
+- if (data->hist[cpu].thread_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].min_thread);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_thread)
++ format_summary_value(trace->seq,
++ data->hist[cpu].thread_count,
++ data->hist[cpu].min_thread,
++ false);
+
+- if (params->user_hist) {
+- if (data->hist[cpu].user_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].min_user);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (params->user_hist)
++ format_summary_value(trace->seq,
++ data->hist[cpu].user_count,
++ data->hist[cpu].min_user,
++ false);
+ }
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -363,29 +372,23 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
+ continue;
+
+- if (!params->no_irq) {
+- if (data->hist[cpu].irq_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].sum_irq / data->hist[cpu].irq_count);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_irq)
++ format_summary_value(trace->seq,
++ data->hist[cpu].irq_count,
++ data->hist[cpu].sum_irq,
++ true);
+
+- if (!params->no_thread) {
+- if (data->hist[cpu].thread_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].sum_thread / data->hist[cpu].thread_count);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_thread)
++ format_summary_value(trace->seq,
++ data->hist[cpu].thread_count,
++ data->hist[cpu].sum_thread,
++ true);
+
+- if (params->user_hist) {
+- if (data->hist[cpu].user_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].sum_user / data->hist[cpu].user_count);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (params->user_hist)
++ format_summary_value(trace->seq,
++ data->hist[cpu].user_count,
++ data->hist[cpu].sum_user,
++ true);
+ }
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -399,29 +402,23 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
+ continue;
+
+- if (!params->no_irq) {
+- if (data->hist[cpu].irq_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].max_irq);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_irq)
++ format_summary_value(trace->seq,
++ data->hist[cpu].irq_count,
++ data->hist[cpu].max_irq,
++ false);
+
+- if (!params->no_thread) {
+- if (data->hist[cpu].thread_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].max_thread);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_thread)
++ format_summary_value(trace->seq,
++ data->hist[cpu].thread_count,
++ data->hist[cpu].max_thread,
++ false);
+
+- if (params->user_hist) {
+- if (data->hist[cpu].user_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].max_user);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (params->user_hist)
++ format_summary_value(trace->seq,
++ data->hist[cpu].user_count,
++ data->hist[cpu].max_user,
++ false);
+ }
+ trace_seq_printf(trace->seq, "\n");
+ trace_seq_do_printf(trace->seq);
+@@ -505,16 +502,22 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "min: ");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.min_irq);
++ format_summary_value(trace->seq,
++ sum.irq_count,
++ sum.min_irq,
++ false);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.min_thread);
++ format_summary_value(trace->seq,
++ sum.thread_count,
++ sum.min_thread,
++ false);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.min_user);
++ format_summary_value(trace->seq,
++ sum.user_count,
++ sum.min_user,
++ false);
+
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -522,16 +525,22 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "avg: ");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.sum_irq / sum.irq_count);
++ format_summary_value(trace->seq,
++ sum.irq_count,
++ sum.sum_irq,
++ true);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.sum_thread / sum.thread_count);
++ format_summary_value(trace->seq,
++ sum.thread_count,
++ sum.sum_thread,
++ true);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.sum_user / sum.user_count);
++ format_summary_value(trace->seq,
++ sum.user_count,
++ sum.sum_user,
++ true);
+
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -539,16 +548,22 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "max: ");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.max_irq);
++ format_summary_value(trace->seq,
++ sum.irq_count,
++ sum.max_irq,
++ false);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.max_thread);
++ format_summary_value(trace->seq,
++ sum.thread_count,
++ sum.max_thread,
++ false);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.max_user);
++ format_summary_value(trace->seq,
++ sum.user_count,
++ sum.max_user,
++ false);
+
+ trace_seq_printf(trace->seq, "\n");
+ trace_seq_do_printf(trace->seq);
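Note on the rtla hunks above: the refactor folds nine near-identical print blocks into one format_summary_value() helper and, as a side effect, hardens the avg rows, since the old code divided sum by count unconditionally while the helper checks the count first and prints a '-' placeholder. A standalone userspace sketch of the helper's contract (not the rtla build itself):

	#include <stdbool.h>
	#include <stdio.h>

	/* Print a 9-wide summary cell: '-' when no samples were recorded,
	 * otherwise the value, divided by count when an average is wanted. */
	static void format_summary_value(FILE *out, int count,
					 unsigned long long val, bool avg)
	{
		if (count)
			fprintf(out, "%9llu ", avg ? val / count : val);
		else
			fprintf(out, "%9c ", '-');
	}

	int main(void)
	{
		format_summary_value(stdout, 0, 0, true);	/* "        - " */
		format_summary_value(stdout, 4, 100, true);	/* "       25 " */
		format_summary_value(stdout, 4, 100, false);	/* "      100 " */
		putchar('\n');
		return 0;
	}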
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-09 13:51 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-01-09 13:51 UTC (permalink / raw
To: gentoo-commits
commit: dce11bba7397f8cff2d315b9195b222824bbeed4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 9 13:51:24 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 9 13:51:24 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dce11bba
Linux patch 6.12.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-6.12.9.patch | 6461 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6465 insertions(+)
diff --git a/0000_README b/0000_README
index 483a9fde..29d9187b 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-6.12.8.patch
From: https://www.kernel.org
Desc: Linux 6.12.8
+Patch: 1008_linux-6.12.9.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.9
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1008_linux-6.12.9.patch b/1008_linux-6.12.9.patch
new file mode 100644
index 00000000..9db1b6b3
--- /dev/null
+++ b/1008_linux-6.12.9.patch
@@ -0,0 +1,6461 @@
+diff --git a/Documentation/admin-guide/laptops/thinkpad-acpi.rst b/Documentation/admin-guide/laptops/thinkpad-acpi.rst
+index 7f674a6cfa8a7b..4ab0fef7d440d1 100644
+--- a/Documentation/admin-guide/laptops/thinkpad-acpi.rst
++++ b/Documentation/admin-guide/laptops/thinkpad-acpi.rst
+@@ -445,8 +445,10 @@ event code Key Notes
+ 0x1008 0x07 FN+F8 IBM: toggle screen expand
+ Lenovo: configure UltraNav,
+ or toggle screen expand.
+- On newer platforms (2024+)
+- replaced by 0x131f (see below)
++ On 2024 platforms replaced by
++ 0x131f (see below) and on newer
++ platforms (2025 +) keycode is
++ replaced by 0x1401 (see below).
+
+ 0x1009 0x08 FN+F9 -
+
+@@ -506,9 +508,11 @@ event code Key Notes
+
+ 0x1019 0x18 unknown
+
+-0x131f ... FN+F8 Platform Mode change.
++0x131f ... FN+F8 Platform Mode change (2024 systems).
+ Implemented in driver.
+
++0x1401 ... FN+F8 Platform Mode change (2025 + systems).
++ Implemented in driver.
+ ... ... ...
+
+ 0x1020 0x1F unknown
+diff --git a/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml b/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
+index df20a3c9c74479..ec89115c74e4d3 100644
+--- a/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
++++ b/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
+@@ -90,7 +90,7 @@ properties:
+ adi,dsi-lanes:
+ description: Number of DSI data lanes connected to the DSI host.
+ $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [ 1, 2, 3, 4 ]
++ enum: [ 2, 3, 4 ]
+
+ "#sound-dai-cells":
+ const: 0
+diff --git a/Makefile b/Makefile
+index 8a10105c2539cf..80151f53d8ee0f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
+index 5b248814204147..69c6e71fa1e6ba 100644
+--- a/arch/arc/Kconfig
++++ b/arch/arc/Kconfig
+@@ -297,7 +297,6 @@ config ARC_PAGE_SIZE_16K
+ config ARC_PAGE_SIZE_4K
+ bool "4KB"
+ select HAVE_PAGE_SIZE_4KB
+- depends on ARC_MMU_V3 || ARC_MMU_V4
+
+ endchoice
+
+@@ -474,7 +473,8 @@ config HIGHMEM
+
+ config ARC_HAS_PAE40
+ bool "Support for the 40-bit Physical Address Extension"
+- depends on ISA_ARCV2
++ depends on MMU_V4
++ depends on !ARC_PAGE_SIZE_4K
+ select HIGHMEM
+ select PHYS_ADDR_T_64BIT
+ help
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index 2390dd042e3636..fb98478ed1ab09 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -6,7 +6,7 @@
+ KBUILD_DEFCONFIG := haps_hs_smp_defconfig
+
+ ifeq ($(CROSS_COMPILE),)
+-CROSS_COMPILE := $(call cc-cross-prefix, arc-linux- arceb-linux-)
++CROSS_COMPILE := $(call cc-cross-prefix, arc-linux- arceb-linux- arc-linux-gnu-)
+ endif
+
+ cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
+diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
+index 58045c89834045..76f43db0890fcd 100644
+--- a/arch/arc/include/asm/cmpxchg.h
++++ b/arch/arc/include/asm/cmpxchg.h
+@@ -48,7 +48,7 @@
+ \
+ switch(sizeof((_p_))) { \
+ case 1: \
+- _prev_ = (__typeof__(*(ptr)))cmpxchg_emu_u8((volatile u8 *)_p_, (uintptr_t)_o_, (uintptr_t)_n_); \
++ _prev_ = (__typeof__(*(ptr)))cmpxchg_emu_u8((volatile u8 *__force)_p_, (uintptr_t)_o_, (uintptr_t)_n_); \
+ break; \
+ case 4: \
+ _prev_ = __cmpxchg(_p_, _o_, _n_); \
+diff --git a/arch/arc/net/bpf_jit_arcv2.c b/arch/arc/net/bpf_jit_arcv2.c
+index 4458e409ca0a84..6d989b6d88c69b 100644
+--- a/arch/arc/net/bpf_jit_arcv2.c
++++ b/arch/arc/net/bpf_jit_arcv2.c
+@@ -2916,7 +2916,7 @@ bool check_jmp_32(u32 curr_off, u32 targ_off, u8 cond)
+ addendum = (cond == ARC_CC_AL) ? 0 : INSN_len_normal;
+ disp = get_displacement(curr_off + addendum, targ_off);
+
+- if (ARC_CC_AL)
++ if (cond == ARC_CC_AL)
+ return is_valid_far_disp(disp);
+ else
+ return is_valid_near_disp(disp);
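+
Note on the ARC JIT hunk above: it fixes a classic always-true conditional. ARC_CC_AL is a constant, so `if (ARC_CC_AL)` selected the far-displacement check for every condition code and the intended comparison against the runtime cond never happened. A distilled userspace illustration of the bug class (constant value assumed for the sketch):

	#include <stdbool.h>
	#include <stdio.h>

	#define ARC_CC_AL 0x10	/* "always" condition code; assumed nonzero here */

	static bool is_far(int cond)
	{
		(void)cond;
		if (ARC_CC_AL)			/* BUG: tests the constant, always true */
			return true;
		return false;
	}

	static bool is_far_fixed(int cond)
	{
		return cond == ARC_CC_AL;	/* intended: test the incoming cond */
	}

	int main(void)
	{
		/* cond 0 is not "always", yet the buggy version says far */
		printf("buggy=%d fixed=%d\n", is_far(0), is_far_fixed(0));
		return 0;
	}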
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 28b4312f25631c..f558be868a50b6 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -7067,6 +7067,7 @@ __init int intel_pmu_init(void)
+
+ case INTEL_METEORLAKE:
+ case INTEL_METEORLAKE_L:
++ case INTEL_ARROWLAKE_U:
+ intel_pmu_init_hybrid(hybrid_big_small);
+
+ x86_pmu.pebs_latency_data = cmt_latency_data;
+diff --git a/block/blk.h b/block/blk.h
+index 88fab6a81701ed..1426f9c281973e 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -469,11 +469,6 @@ static inline bool bio_zone_write_plugging(struct bio *bio)
+ {
+ return bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING);
+ }
+-static inline bool bio_is_zone_append(struct bio *bio)
+-{
+- return bio_op(bio) == REQ_OP_ZONE_APPEND ||
+- bio_flagged(bio, BIO_EMULATES_ZONE_APPEND);
+-}
+ void blk_zone_write_plug_bio_merged(struct bio *bio);
+ void blk_zone_write_plug_init_request(struct request *rq);
+ static inline void blk_zone_update_request_bio(struct request *rq,
+@@ -522,10 +517,6 @@ static inline bool bio_zone_write_plugging(struct bio *bio)
+ {
+ return false;
+ }
+-static inline bool bio_is_zone_append(struct bio *bio)
+-{
+- return false;
+-}
+ static inline void blk_zone_write_plug_bio_merged(struct bio *bio)
+ {
+ }
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index b2cb157703c57f..c409fc7e061869 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -278,7 +278,8 @@ static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
+
+ #else /* !CONFIG_RESET_CONTROLLER */
+
+-static int clk_imx8mp_audiomix_reset_controller_register(struct clk_imx8mp_audiomix_priv *priv)
++static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
++ struct clk_imx8mp_audiomix_priv *priv)
+ {
+ return 0;
+ }
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 17e32ae08720cb..1015fab9525157 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -779,6 +779,13 @@ static struct ccu_div dpu1_clk = {
+ },
+ };
+
++static CLK_FIXED_FACTOR_HW(emmc_sdio_ref_clk, "emmc-sdio-ref",
++ &video_pll_clk.common.hw, 4, 1, 0);
++
++static const struct clk_parent_data emmc_sdio_ref_clk_pd[] = {
++ { .hw = &emmc_sdio_ref_clk.hw },
++};
++
+ static CCU_GATE(CLK_BROM, brom_clk, "brom", ahb2_cpusys_hclk_pd, 0x100, BIT(4), 0);
+ static CCU_GATE(CLK_BMU, bmu_clk, "bmu", axi4_cpusys2_aclk_pd, 0x100, BIT(5), 0);
+ static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_aclk_pd,
+@@ -798,7 +805,7 @@ static CCU_GATE(CLK_PERISYS_APB4_HCLK, perisys_apb4_hclk, "perisys-apb4-hclk", p
+ 0x150, BIT(12), 0);
+ static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0);
+ static CCU_GATE(CLK_CPU2VP, cpu2vp_clk, "cpu2vp", axi_aclk_pd, 0x1e0, BIT(13), 0);
+-static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", video_pll_clk_pd, 0x204, BIT(30), 0);
++static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", emmc_sdio_ref_clk_pd, 0x204, BIT(30), 0);
+ static CCU_GATE(CLK_GMAC1, gmac1_clk, "gmac1", gmac_pll_clk_pd, 0x204, BIT(26), 0);
+ static CCU_GATE(CLK_PADCTRL1, padctrl1_clk, "padctrl1", perisys_apb_pclk_pd, 0x204, BIT(24), 0);
+ static CCU_GATE(CLK_DSMART, dsmart_clk, "dsmart", perisys_apb_pclk_pd, 0x204, BIT(23), 0);
+@@ -1059,6 +1066,10 @@ static int th1520_clk_probe(struct platform_device *pdev)
+ return ret;
+ priv->hws[CLK_PLL_GMAC_100M] = &gmac_pll_clk_100m.hw;
+
++ ret = devm_clk_hw_register(dev, &emmc_sdio_ref_clk.hw);
++ if (ret)
++ return ret;
++
+ ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, priv);
+ if (ret)
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 51904906545e59..45e28726e148e9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3721,8 +3721,12 @@ static int amdgpu_device_ip_resume_phase3(struct amdgpu_device *adev)
+ continue;
+ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_DCE) {
+ r = adev->ip_blocks[i].version->funcs->resume(adev);
+- if (r)
++ if (r) {
++ DRM_ERROR("resume of IP block <%s> failed %d\n",
++ adev->ip_blocks[i].version->funcs->name, r);
+ return r;
++ }
++ adev->ip_blocks[i].status.hw = true;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index c100845409f794..ffdb966c4127ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -45,6 +45,8 @@ MODULE_FIRMWARE("amdgpu/gc_9_4_3_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_9_4_4_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_9_4_3_rlc.bin");
+ MODULE_FIRMWARE("amdgpu/gc_9_4_4_rlc.bin");
++MODULE_FIRMWARE("amdgpu/gc_9_4_3_sjt_mec.bin");
++MODULE_FIRMWARE("amdgpu/gc_9_4_4_sjt_mec.bin");
+
+ #define GFX9_MEC_HPD_SIZE 4096
+ #define RLCG_UCODE_LOADING_START_ADDRESS 0x00002000L
+@@ -574,8 +576,12 @@ static int gfx_v9_4_3_init_cp_compute_microcode(struct amdgpu_device *adev,
+ {
+ int err;
+
+- err = amdgpu_ucode_request(adev, &adev->gfx.mec_fw,
+- "amdgpu/%s_mec.bin", chip_name);
++ if (amdgpu_sriov_vf(adev))
++ err = amdgpu_ucode_request(adev, &adev->gfx.mec_fw,
++ "amdgpu/%s_sjt_mec.bin", chip_name);
++ else
++ err = amdgpu_ucode_request(adev, &adev->gfx.mec_fw,
++ "amdgpu/%s_mec.bin", chip_name);
+ if (err)
+ goto out;
+ amdgpu_gfx_cp_init_microcode(adev, AMDGPU_UCODE_ID_CP_MEC1);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+index 8ee3d07ffbdfa2..f31e9fbf634a0f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+@@ -306,7 +306,7 @@ svm_migrate_copy_to_vram(struct kfd_node *node, struct svm_range *prange,
+ spage = migrate_pfn_to_page(migrate->src[i]);
+ if (spage && !is_zone_device_page(spage)) {
+ src[i] = dma_map_page(dev, spage, 0, PAGE_SIZE,
+- DMA_TO_DEVICE);
++ DMA_BIDIRECTIONAL);
+ r = dma_mapping_error(dev, src[i]);
+ if (r) {
+ dev_err(dev, "%s: fail %d dma_map_page\n",
+@@ -630,7 +630,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
+ goto out_oom;
+ }
+
+- dst[i] = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_FROM_DEVICE);
++ dst[i] = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
+ r = dma_mapping_error(dev, dst[i]);
+ if (r) {
+ dev_err(adev->dev, "%s: fail %d dma_map_page\n", __func__, r);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+index 61f4a38e7d2bf6..8f786592143b6c 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+@@ -153,7 +153,16 @@ static int adv7511_hdmi_hw_params(struct device *dev, void *data,
+ ADV7511_AUDIO_CFG3_LEN_MASK, len);
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_I2C_FREQ_ID_CFG,
+ ADV7511_I2C_FREQ_ID_CFG_RATE_MASK, rate << 4);
+- regmap_write(adv7511->regmap, 0x73, 0x1);
++
++ /* send current Audio infoframe values while updating */
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
++ BIT(5), BIT(5));
++
++ regmap_write(adv7511->regmap, ADV7511_REG_AUDIO_INFOFRAME(0), 0x1);
++
++ /* use Audio infoframe updated info */
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
++ BIT(5), 0);
+
+ return 0;
+ }
+@@ -184,8 +193,9 @@ static int audio_startup(struct device *dev, void *data)
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(0),
+ BIT(7) | BIT(6), BIT(7));
+ /* use Audio infoframe updated info */
+- regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(1),
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
+ BIT(5), 0);
++
+ /* enable SPDIF receiver */
+ if (adv7511->audio_source == ADV7511_AUDIO_SOURCE_SPDIF)
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_AUDIO_CONFIG,
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index eb5919b382635e..a13b3d8ab6ac60 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1241,8 +1241,10 @@ static int adv7511_probe(struct i2c_client *i2c)
+ return ret;
+
+ ret = adv7511_init_regulators(adv7511);
+- if (ret)
+- return dev_err_probe(dev, ret, "failed to init regulators\n");
++ if (ret) {
++ dev_err_probe(dev, ret, "failed to init regulators\n");
++ goto err_of_node_put;
++ }
+
+ /*
+ * The power down GPIO is optional. If present, toggle it from active to
+@@ -1363,6 +1365,8 @@ static int adv7511_probe(struct i2c_client *i2c)
+ i2c_unregister_device(adv7511->i2c_edid);
+ uninit_regulators:
+ adv7511_uninit_regulators(adv7511);
++err_of_node_put:
++ of_node_put(adv7511->host_node);
+
+ return ret;
+ }
+@@ -1371,6 +1375,8 @@ static void adv7511_remove(struct i2c_client *i2c)
+ {
+ struct adv7511 *adv7511 = i2c_get_clientdata(i2c);
+
++ of_node_put(adv7511->host_node);
++
+ adv7511_uninit_regulators(adv7511);
+
+ drm_bridge_remove(&adv7511->bridge);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7533.c b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+index 4481489aaf5ebf..122ad91e8a3293 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7533.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+@@ -172,7 +172,7 @@ int adv7533_parse_dt(struct device_node *np, struct adv7511 *adv)
+
+ of_property_read_u32(np, "adi,dsi-lanes", &num_lanes);
+
+- if (num_lanes < 1 || num_lanes > 4)
++ if (num_lanes < 2 || num_lanes > 4)
+ return -EINVAL;
+
+ adv->num_dsi_lanes = num_lanes;
+@@ -181,8 +181,6 @@ int adv7533_parse_dt(struct device_node *np, struct adv7511 *adv)
+ if (!adv->host_node)
+ return -ENODEV;
+
+- of_node_put(adv->host_node);
+-
+ adv->use_timing_gen = !of_property_read_bool(np,
+ "adi,disable-timing-generator");
+
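Note on the adv7511/adv7533 hunks above: they fix an of_node reference lifetime. parse_dt took a reference to the DSI host node and dropped it immediately, leaving adv->host_node pointing at a node whose refcount it no longer held; the fix keeps the reference for as long as the pointer is stored and releases it in remove() and on the new probe error path. A minimal sketch of that discipline, with hypothetical names:

	#include <linux/errno.h>
	#include <linux/of.h>
	#include <linux/of_graph.h>

	struct demo_bridge {
		struct device_node *host_node;
	};

	static int demo_parse_dt(struct device_node *np, struct demo_bridge *b)
	{
		b->host_node = of_graph_get_remote_node(np, 0, 0);
		if (!b->host_node)
			return -ENODEV;
		/* no of_node_put() here: the stored pointer keeps using the ref */
		return 0;
	}

	static void demo_teardown(struct demo_bridge *b)
	{
		of_node_put(b->host_node);	/* balance the get on every exit path */
		b->host_node = NULL;
	}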
+diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.c b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
+index 4a6c3040ca15ef..f11309efff3398 100644
+--- a/drivers/gpu/drm/i915/display/intel_cx0_phy.c
++++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
+@@ -2084,14 +2084,6 @@ static void intel_c10_pll_program(struct drm_i915_private *i915,
+ 0, C10_VDR_CTRL_MSGBUS_ACCESS,
+ MB_WRITE_COMMITTED);
+
+- /* Custom width needs to be programmed to 0 for both the phy lanes */
+- intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH,
+- C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10,
+- MB_WRITE_COMMITTED);
+- intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1),
+- 0, C10_VDR_CTRL_UPDATE_CFG,
+- MB_WRITE_COMMITTED);
+-
+ /* Program the pll values only for the master lane */
+ for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++)
+ intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i),
+@@ -2101,6 +2093,10 @@ static void intel_c10_pll_program(struct drm_i915_private *i915,
+ intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED);
+ intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED);
+
++ /* Custom width needs to be programmed to 0 for both the phy lanes */
++ intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH,
++ C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10,
++ MB_WRITE_COMMITTED);
+ intel_cx0_rmw(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1),
+ 0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG,
+ MB_WRITE_COMMITTED);
+diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
+index c864d101faf941..9378d5901c4939 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
++++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
+@@ -133,7 +133,7 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
+ GEN9_MEDIA_PG_ENABLE |
+ GEN11_MEDIA_SAMPLER_PG_ENABLE;
+
+- if (GRAPHICS_VER(gt->i915) >= 12) {
++ if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) {
+ for (i = 0; i < I915_MAX_VCS; i++)
+ if (HAS_ENGINE(gt, _VCS(i)))
+ pg_enable |= (VDN_HCP_POWERGATE_ENABLE(i) |
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 2a093540354e89..84e327b569252f 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -722,7 +722,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ new_mem->mem_type == XE_PL_SYSTEM) {
+ long timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
+ DMA_RESV_USAGE_BOOKKEEP,
+- true,
++ false,
+ MAX_SCHEDULE_TIMEOUT);
+ if (timeout < 0) {
+ ret = timeout;
+@@ -846,8 +846,16 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+
+ out:
+ if ((!ttm_bo->resource || ttm_bo->resource->mem_type == XE_PL_SYSTEM) &&
+- ttm_bo->ttm)
++ ttm_bo->ttm) {
++ long timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
++ DMA_RESV_USAGE_KERNEL,
++ false,
++ MAX_SCHEDULE_TIMEOUT);
++ if (timeout < 0)
++ ret = timeout;
++
+ xe_tt_unmap_sg(ttm_bo->ttm);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index c18e463092afa5..85aa3ab0da3b87 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -104,7 +104,11 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+ drm_puts(&p, "\n**** GuC CT ****\n");
+ xe_guc_ct_snapshot_print(ss->ct, &p);
+
+- drm_puts(&p, "\n**** Contexts ****\n");
++ /*
++ * Don't add a new section header here because the mesa debug decoder
++ * tool expects the context information to be in the 'GuC CT' section.
++ */
++ /* drm_puts(&p, "\n**** Contexts ****\n"); */
+ xe_guc_exec_queue_snapshot_print(ss->ge, &p);
+
+ drm_puts(&p, "\n**** Job ****\n");
+@@ -358,6 +362,15 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+ char buff[ASCII85_BUFSZ], *line_buff;
+ size_t line_pos = 0;
+
++ /*
++ * Splitting blobs across multiple lines is not compatible with the mesa
++ * debug decoder tool. Note that even dropping the explicit '\n' below
++ * doesn't help because the GuC log is so big some underlying implementation
++ * still splits the lines at 512K characters. So just bail completely for
++ * the moment.
++ */
++ return;
++
+ #define DMESG_MAX_LINE_LEN 800
+ #define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
+
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index fd0f3b3c9101d4..268cd3123be9d9 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -8,6 +8,7 @@
+ #include <linux/nospec.h>
+
+ #include <drm/drm_device.h>
++#include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
+ #include <uapi/drm/xe_drm.h>
+
+@@ -762,9 +763,11 @@ bool xe_exec_queue_is_idle(struct xe_exec_queue *q)
+ */
+ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
+ {
++ struct xe_device *xe = gt_to_xe(q->gt);
+ struct xe_file *xef;
+ struct xe_lrc *lrc;
+ u32 old_ts, new_ts;
++ int idx;
+
+ /*
+ * Jobs that are run during driver load may use an exec_queue, but are
+@@ -774,6 +777,10 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
+ if (!q->vm || !q->vm->xef)
+ return;
+
++ /* Synchronize with unbind while holding the xe file open */
++ if (!drm_dev_enter(&xe->drm, &idx))
++ return;
++
+ xef = q->vm->xef;
+
+ /*
+@@ -787,6 +794,8 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
+ lrc = q->lrc[0];
+ new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
+ xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;
++
++ drm_dev_exit(idx);
+ }
+
+ /**
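+
Note on the xe_exec_queue hunk above: it brackets the run-ticks update with drm_dev_enter()/drm_dev_exit(), the DRM idiom for code that may race with device unbind. The enter call fails once the device is unplugged, so the body is skipped instead of touching freed state. A sketch of the idiom (hypothetical function, same helpers):

	#include <drm/drm_drv.h>

	static void demo_update_stats(struct drm_device *drm)
	{
		int idx;

		/* Returns false once the device has been unplugged, so we
		 * never touch state that unbind may already have freed. */
		if (!drm_dev_enter(drm, &idx))
			return;

		/* ... read/update device-backed counters here ... */

		drm_dev_exit(idx);
	}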
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index afdb477ecf833d..c9ed996b9cb0c3 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -2038,7 +2038,7 @@ static int pf_validate_vf_config(struct xe_gt *gt, unsigned int vfid)
+ valid_any = valid_any || (valid_ggtt && is_primary);
+
+ if (IS_DGFX(xe)) {
+- bool valid_lmem = pf_get_vf_config_ggtt(primary_gt, vfid);
++ bool valid_lmem = pf_get_vf_config_lmem(primary_gt, vfid);
+
+ valid_any = valid_any || (valid_lmem && is_primary);
+ valid_all = valid_all && valid_lmem;
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 64ace0b968f07f..91db10515d7472 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -690,6 +690,7 @@ cma_validate_port(struct ib_device *device, u32 port,
+ int bound_if_index = dev_addr->bound_dev_if;
+ int dev_type = dev_addr->dev_type;
+ struct net_device *ndev = NULL;
++ struct net_device *pdev = NULL;
+
+ if (!rdma_dev_access_netns(device, id_priv->id.route.addr.dev_addr.net))
+ goto out;
+@@ -714,6 +715,21 @@ cma_validate_port(struct ib_device *device, u32 port,
+
+ rcu_read_lock();
+ ndev = rcu_dereference(sgid_attr->ndev);
++ if (ndev->ifindex != bound_if_index) {
++ pdev = dev_get_by_index_rcu(dev_addr->net, bound_if_index);
++ if (pdev) {
++ if (is_vlan_dev(pdev)) {
++ pdev = vlan_dev_real_dev(pdev);
++ if (ndev->ifindex == pdev->ifindex)
++ bound_if_index = pdev->ifindex;
++ }
++ if (is_vlan_dev(ndev)) {
++ pdev = vlan_dev_real_dev(ndev);
++ if (bound_if_index == pdev->ifindex)
++ bound_if_index = ndev->ifindex;
++ }
++ }
++ }
+ if (!net_eq(dev_net(ndev), dev_addr->net) ||
+ ndev->ifindex != bound_if_index) {
+ rdma_put_gid_attr(sgid_attr);
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index 7dc8e2ec62cc8b..f121899863034a 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -2802,8 +2802,8 @@ int rdma_nl_notify_event(struct ib_device *device, u32 port_num,
+ enum rdma_nl_notify_event_type type)
+ {
+ struct sk_buff *skb;
++ int ret = -EMSGSIZE;
+ struct net *net;
+- int ret = 0;
+ void *nlh;
+
+ net = read_pnet(&device->coredev.rdma_net);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index a4cce360df2178..edef79daed3fa8 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -161,7 +161,7 @@ static const void __user *uverbs_request_next_ptr(struct uverbs_req_iter *iter,
+ {
+ const void __user *res = iter->cur;
+
+- if (iter->cur + len > iter->end)
++ if (len > iter->end - iter->cur)
+ return (void __force __user *)ERR_PTR(-ENOSPC);
+ iter->cur += len;
+ return res;
+@@ -2010,11 +2010,13 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs)
+ ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd));
+ if (ret)
+ return ret;
+- wqes = uverbs_request_next_ptr(&iter, cmd.wqe_size * cmd.wr_count);
++ wqes = uverbs_request_next_ptr(&iter, size_mul(cmd.wqe_size,
++ cmd.wr_count));
+ if (IS_ERR(wqes))
+ return PTR_ERR(wqes);
+- sgls = uverbs_request_next_ptr(
+- &iter, cmd.sge_count * sizeof(struct ib_uverbs_sge));
++ sgls = uverbs_request_next_ptr(&iter,
++ size_mul(cmd.sge_count,
++ sizeof(struct ib_uverbs_sge)));
+ if (IS_ERR(sgls))
+ return PTR_ERR(sgls);
+ ret = uverbs_request_finish(&iter);
+@@ -2200,11 +2202,11 @@ ib_uverbs_unmarshall_recv(struct uverbs_req_iter *iter, u32 wr_count,
+ if (wqe_size < sizeof(struct ib_uverbs_recv_wr))
+ return ERR_PTR(-EINVAL);
+
+- wqes = uverbs_request_next_ptr(iter, wqe_size * wr_count);
++ wqes = uverbs_request_next_ptr(iter, size_mul(wqe_size, wr_count));
+ if (IS_ERR(wqes))
+ return ERR_CAST(wqes);
+- sgls = uverbs_request_next_ptr(
+- iter, sge_count * sizeof(struct ib_uverbs_sge));
++ sgls = uverbs_request_next_ptr(iter, size_mul(sge_count,
++ sizeof(struct ib_uverbs_sge)));
+ if (IS_ERR(sgls))
+ return ERR_CAST(sgls);
+ ret = uverbs_request_finish(iter);
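+
Note on the uverbs hunks above: they close an integer-overflow window in user-controlled request parsing. With a large enough len, `iter->cur + len` wraps and the old `> iter->end` bound passes; likewise `wqe_size * wr_count` could overflow before the check, which is why the fix subtracts instead of adds and wraps the products in size_mul(), the kernel helper that saturates on overflow. A userspace model of why the rewritten comparison is safe:

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	static bool old_rejects(uintptr_t cur, uintptr_t end, size_t len)
	{
		return cur + len > end;		/* may wrap to a small value */
	}

	static bool new_rejects(uintptr_t cur, uintptr_t end, size_t len)
	{
		return len > end - cur;		/* cur <= end holds by construction */
	}

	int main(void)
	{
		uintptr_t cur = 0x1000, end = 0x2000;
		size_t len = SIZE_MAX;		/* e.g. wqe_size * wr_count overflowed */

		/* old: cur + SIZE_MAX wraps to cur - 1, not > end, accepted */
		printf("old rejects: %d, new rejects: %d\n",
		       old_rejects(cur, end, len), new_rejects(cur, end, len));
		return 0;
	}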
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 160096792224b1..b20cffcc3e7d2d 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -156,7 +156,7 @@ int bnxt_re_query_device(struct ib_device *ibdev,
+
+ ib_attr->vendor_id = rdev->en_dev->pdev->vendor;
+ ib_attr->vendor_part_id = rdev->en_dev->pdev->device;
+- ib_attr->hw_ver = rdev->en_dev->pdev->subsystem_device;
++ ib_attr->hw_ver = rdev->en_dev->pdev->revision;
+ ib_attr->max_qp = dev_attr->max_qp;
+ ib_attr->max_qp_wr = dev_attr->max_qp_wqes;
+ ib_attr->device_cap_flags =
+@@ -2107,18 +2107,20 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ }
+ }
+
+- if (qp_attr_mask & IB_QP_PATH_MTU) {
+- qp->qplib_qp.modify_flags |=
+- CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+- qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu);
+- qp->qplib_qp.mtu = ib_mtu_enum_to_int(qp_attr->path_mtu);
+- } else if (qp_attr->qp_state == IB_QPS_RTR) {
+- qp->qplib_qp.modify_flags |=
+- CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+- qp->qplib_qp.path_mtu =
+- __from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
+- qp->qplib_qp.mtu =
+- ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
++ if (qp_attr->qp_state == IB_QPS_RTR) {
++ enum ib_mtu qpmtu;
++
++ qpmtu = iboe_get_mtu(rdev->netdev->mtu);
++ if (qp_attr_mask & IB_QP_PATH_MTU) {
++ if (ib_mtu_enum_to_int(qp_attr->path_mtu) >
++ ib_mtu_enum_to_int(qpmtu))
++ return -EINVAL;
++ qpmtu = qp_attr->path_mtu;
++ }
++
++ qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
++ qp->qplib_qp.path_mtu = __from_ib_mtu(qpmtu);
++ qp->qplib_qp.mtu = ib_mtu_enum_to_int(qpmtu);
+ }
+
+ if (qp_attr_mask & IB_QP_TIMEOUT) {
+@@ -2763,7 +2765,8 @@ static int bnxt_re_post_send_shadow_qp(struct bnxt_re_dev *rdev,
+ wr = wr->next;
+ }
+ bnxt_qplib_post_send_db(&qp->qplib_qp);
+- bnxt_ud_qp_hw_stall_workaround(qp);
++ if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
++ bnxt_ud_qp_hw_stall_workaround(qp);
+ spin_unlock_irqrestore(&qp->sq_lock, flags);
+ return rc;
+ }
+@@ -2875,7 +2878,8 @@ int bnxt_re_post_send(struct ib_qp *ib_qp, const struct ib_send_wr *wr,
+ wr = wr->next;
+ }
+ bnxt_qplib_post_send_db(&qp->qplib_qp);
+- bnxt_ud_qp_hw_stall_workaround(qp);
++ if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
++ bnxt_ud_qp_hw_stall_workaround(qp);
+ spin_unlock_irqrestore(&qp->sq_lock, flags);
+
+ return rc;
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 2ac8ddbed576f5..8abd1b723f8ff5 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -1435,11 +1435,8 @@ static bool bnxt_re_is_qp1_or_shadow_qp(struct bnxt_re_dev *rdev,
+
+ static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev)
+ {
+- int mask = IB_QP_STATE;
+- struct ib_qp_attr qp_attr;
+ struct bnxt_re_qp *qp;
+
+- qp_attr.qp_state = IB_QPS_ERR;
+ mutex_lock(&rdev->qp_lock);
+ list_for_each_entry(qp, &rdev->qp_list, list) {
+ /* Modify the state of all QPs except QP1/Shadow QP */
+@@ -1447,12 +1444,9 @@ static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev)
+ if (qp->qplib_qp.state !=
+ CMDQ_MODIFY_QP_NEW_STATE_RESET &&
+ qp->qplib_qp.state !=
+- CMDQ_MODIFY_QP_NEW_STATE_ERR) {
++ CMDQ_MODIFY_QP_NEW_STATE_ERR)
+ bnxt_re_dispatch_event(&rdev->ibdev, &qp->ib_qp,
+ 1, IB_EVENT_QP_FATAL);
+- bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, mask,
+- NULL);
+- }
+ }
+ }
+ mutex_unlock(&rdev->qp_lock);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 7ad83566ab0f41..828e2f9808012b 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -658,13 +658,6 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ rc = bnxt_qplib_alloc_init_hwq(&srq->hwq, &hwq_attr);
+ if (rc)
+ return rc;
+-
+- srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq),
+- GFP_KERNEL);
+- if (!srq->swq) {
+- rc = -ENOMEM;
+- goto fail;
+- }
+ srq->dbinfo.flags = 0;
+ bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+ CMDQ_BASE_OPCODE_CREATE_SRQ,
+@@ -693,9 +686,17 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ spin_lock_init(&srq->lock);
+ srq->start_idx = 0;
+ srq->last_idx = srq->hwq.max_elements - 1;
+- for (idx = 0; idx < srq->hwq.max_elements; idx++)
+- srq->swq[idx].next_idx = idx + 1;
+- srq->swq[srq->last_idx].next_idx = -1;
++ if (!srq->hwq.is_user) {
++ srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq),
++ GFP_KERNEL);
++ if (!srq->swq) {
++ rc = -ENOMEM;
++ goto fail;
++ }
++ for (idx = 0; idx < srq->hwq.max_elements; idx++)
++ srq->swq[idx].next_idx = idx + 1;
++ srq->swq[srq->last_idx].next_idx = -1;
++ }
+
+ srq->id = le32_to_cpu(resp.xid);
+ srq->dbinfo.hwq = &srq->hwq;
+@@ -999,9 +1000,7 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ u32 tbl_indx;
+ u16 nsge;
+
+- if (res->dattr)
+- qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
+-
++ qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
+ sq->dbinfo.flags = 0;
+ bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+ CMDQ_BASE_OPCODE_CREATE_QP,
+@@ -1033,7 +1032,12 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ : 0;
+ /* Update msn tbl size */
+ if (qp->is_host_msn_tbl && psn_sz) {
+- hwq_attr.aux_depth = roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
++ if (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC)
++ hwq_attr.aux_depth =
++ roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
++ else
++ hwq_attr.aux_depth =
++ roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)) / 2;
+ qp->msn_tbl_sz = hwq_attr.aux_depth;
+ qp->msn = 0;
+ }
+@@ -1043,13 +1047,14 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ if (rc)
+ return rc;
+
+- rc = bnxt_qplib_alloc_init_swq(sq);
+- if (rc)
+- goto fail_sq;
+-
+- if (psn_sz)
+- bnxt_qplib_init_psn_ptr(qp, psn_sz);
++ if (!sq->hwq.is_user) {
++ rc = bnxt_qplib_alloc_init_swq(sq);
++ if (rc)
++ goto fail_sq;
+
++ if (psn_sz)
++ bnxt_qplib_init_psn_ptr(qp, psn_sz);
++ }
+ req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
+ pbl = &sq->hwq.pbl[PBL_LVL_0];
+ req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+@@ -1075,9 +1080,11 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr);
+ if (rc)
+ goto sq_swq;
+- rc = bnxt_qplib_alloc_init_swq(rq);
+- if (rc)
+- goto fail_rq;
++ if (!rq->hwq.is_user) {
++ rc = bnxt_qplib_alloc_init_swq(rq);
++ if (rc)
++ goto fail_rq;
++ }
+
+ req.rq_size = cpu_to_le32(rq->max_wqe);
+ pbl = &rq->hwq.pbl[PBL_LVL_0];
+@@ -1173,9 +1180,11 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ rq->dbinfo.db = qp->dpi->dbr;
+ rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size);
+ }
++ spin_lock_bh(&rcfw->tbl_lock);
+ tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
+ rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
+ rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp;
++ spin_unlock_bh(&rcfw->tbl_lock);
+
+ return 0;
+ fail:
+@@ -2596,10 +2605,12 @@ static int bnxt_qplib_cq_process_req(struct bnxt_qplib_cq *cq,
+ bnxt_qplib_add_flush_qp(qp);
+ } else {
+ /* Before we complete, do WA 9060 */
+- if (do_wa9060(qp, cq, cq_cons, sq->swq_last,
+- cqe_sq_cons)) {
+- *lib_qp = qp;
+- goto out;
++ if (!bnxt_qplib_is_chip_gen_p5_p7(qp->cctx)) {
++ if (do_wa9060(qp, cq, cq_cons, sq->swq_last,
++ cqe_sq_cons)) {
++ *lib_qp = qp;
++ goto out;
++ }
+ }
+ if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) {
+ cqe->status = CQ_REQ_STATUS_OK;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index f55958e5fddb4a..d8c71c024613bf 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -114,7 +114,6 @@ struct bnxt_qplib_sge {
+ u32 size;
+ };
+
+-#define BNXT_QPLIB_QP_MAX_SGL 6
+ struct bnxt_qplib_swq {
+ u64 wr_id;
+ int next_idx;
+@@ -154,7 +153,7 @@ struct bnxt_qplib_swqe {
+ #define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE BIT(2)
+ #define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT BIT(3)
+ #define BNXT_QPLIB_SWQE_FLAGS_INLINE BIT(4)
+- struct bnxt_qplib_sge sg_list[BNXT_QPLIB_QP_MAX_SGL];
++ struct bnxt_qplib_sge sg_list[BNXT_VAR_MAX_SGE];
+ int num_sge;
+ /* Max inline data is 96 bytes */
+ u32 inline_len;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index e82bd37158ad6c..7a099580ca8bff 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -424,7 +424,8 @@ static int __send_message_basic_sanity(struct bnxt_qplib_rcfw *rcfw,
+
+ /* Prevent posting if f/w is not in a state to process */
+ if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags))
+- return bnxt_qplib_map_rc(opcode);
++ return -ENXIO;
++
+ if (test_bit(FIRMWARE_STALL_DETECTED, &cmdq->flags))
+ return -ETIMEDOUT;
+
+@@ -493,7 +494,7 @@ static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+
+ rc = __send_message_basic_sanity(rcfw, msg, opcode);
+ if (rc)
+- return rc;
++ return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc;
+
+ rc = __send_message(rcfw, msg, opcode);
+ if (rc)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index e29fbbdab9fd68..3cca7b1395f6a7 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -129,12 +129,18 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+ attr->max_qp_init_rd_atom =
+ sb->max_qp_init_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ?
+ BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_init_rd_atom;
+- attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr);
+- /*
+- * 128 WQEs needs to be reserved for the HW (8916). Prevent
+- * reporting the max number
+- */
+- attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1;
++ attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr) - 1;
++ if (!bnxt_qplib_is_chip_gen_p5_p7(rcfw->res->cctx)) {
++ /*
++ * 128 WQEs needs to be reserved for the HW (8916). Prevent
++ * reporting the max number on legacy devices
++ */
++ attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1;
++ }
++
++ /* Adjust for max_qp_wqes for variable wqe */
++ if (cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE)
++ attr->max_qp_wqes = BNXT_VAR_MAX_WQE - 1;
+
+ attr->max_qp_sges = cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE ?
+ min_t(u32, sb->max_sge_var_wqe, BNXT_VAR_MAX_SGE) : 6;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index f84521be3bea4a..605562122ecce2 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -931,6 +931,7 @@ struct hns_roce_hem_item {
+ size_t count; /* max ba numbers */
+ int start; /* start buf offset in this hem */
+ int end; /* end buf offset in this hem */
++ bool exist_bt;
+ };
+
+ /* All HEM items are linked in a tree structure */
+@@ -959,6 +960,7 @@ hem_list_alloc_item(struct hns_roce_dev *hr_dev, int start, int end, int count,
+ }
+ }
+
++ hem->exist_bt = exist_bt;
+ hem->count = count;
+ hem->start = start;
+ hem->end = end;
+@@ -969,22 +971,22 @@ hem_list_alloc_item(struct hns_roce_dev *hr_dev, int start, int end, int count,
+ }
+
+ static void hem_list_free_item(struct hns_roce_dev *hr_dev,
+- struct hns_roce_hem_item *hem, bool exist_bt)
++ struct hns_roce_hem_item *hem)
+ {
+- if (exist_bt)
++ if (hem->exist_bt)
+ dma_free_coherent(hr_dev->dev, hem->count * BA_BYTE_LEN,
+ hem->addr, hem->dma_addr);
+ kfree(hem);
+ }
+
+ static void hem_list_free_all(struct hns_roce_dev *hr_dev,
+- struct list_head *head, bool exist_bt)
++ struct list_head *head)
+ {
+ struct hns_roce_hem_item *hem, *temp_hem;
+
+ list_for_each_entry_safe(hem, temp_hem, head, list) {
+ list_del(&hem->list);
+- hem_list_free_item(hr_dev, hem, exist_bt);
++ hem_list_free_item(hr_dev, hem);
+ }
+ }
+
+@@ -1084,6 +1086,10 @@ int hns_roce_hem_list_calc_root_ba(const struct hns_roce_buf_region *regions,
+
+ for (i = 0; i < region_cnt; i++) {
+ r = (struct hns_roce_buf_region *)®ions[i];
++ /* when r->hopnum = 0, the region should not occupy root_ba. */
++ if (!r->hopnum)
++ continue;
++
+ if (r->hopnum > 1) {
+ step = hem_list_calc_ba_range(r->hopnum, 1, unit);
+ if (step > 0)
+@@ -1177,7 +1183,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+
+ err_exit:
+ for (level = 1; level < hopnum; level++)
+- hem_list_free_all(hr_dev, &temp_list[level], true);
++ hem_list_free_all(hr_dev, &temp_list[level]);
+
+ return ret;
+ }
+@@ -1218,16 +1224,26 @@ static int alloc_fake_root_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
+ {
+ struct hns_roce_hem_item *hem;
+
++ /* This is on the has_mtt branch, if r->hopnum
++ * is 0, there is no root_ba to reuse for the
++ * region's fake hem, so a dma_alloc request is
++ * necessary here.
++ */
+ hem = hem_list_alloc_item(hr_dev, r->offset, r->offset + r->count - 1,
+- r->count, false);
++ r->count, !r->hopnum);
+ if (!hem)
+ return -ENOMEM;
+
+- hem_list_assign_bt(hem, cpu_base, phy_base);
++ /* The root_ba can be reused only when r->hopnum > 0. */
++ if (r->hopnum)
++ hem_list_assign_bt(hem, cpu_base, phy_base);
+ list_add(&hem->list, branch_head);
+ list_add(&hem->sibling, leaf_head);
+
+- return r->count;
++ /* If r->hopnum == 0, 0 is returned,
++ * so that the root_bt entry is not occupied.
++ */
++ return r->hopnum ? r->count : 0;
+ }
+
+ static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
+@@ -1271,7 +1287,7 @@ setup_root_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem_list *hem_list,
+ return -ENOMEM;
+
+ total = 0;
+- for (i = 0; i < region_cnt && total < max_ba_num; i++) {
++ for (i = 0; i < region_cnt && total <= max_ba_num; i++) {
+ r = ®ions[i];
+ if (!r->count)
+ continue;
+@@ -1337,9 +1353,9 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
+ region_cnt);
+ if (ret) {
+ for (i = 0; i < region_cnt; i++)
+- hem_list_free_all(hr_dev, &head.branch[i], false);
++ hem_list_free_all(hr_dev, &head.branch[i]);
+
+- hem_list_free_all(hr_dev, &head.root, true);
++ hem_list_free_all(hr_dev, &head.root);
+ }
+
+ return ret;
+@@ -1402,10 +1418,9 @@ void hns_roce_hem_list_release(struct hns_roce_dev *hr_dev,
+
+ for (i = 0; i < HNS_ROCE_MAX_BT_REGION; i++)
+ for (j = 0; j < HNS_ROCE_MAX_BT_LEVEL; j++)
+- hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j],
+- j != 0);
++ hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j]);
+
+- hem_list_free_all(hr_dev, &hem_list->root_bt, true);
++ hem_list_free_all(hr_dev, &hem_list->root_bt);
+ INIT_LIST_HEAD(&hem_list->btm_bt);
+ hem_list->root_ba = 0;
+ }
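+
Note on the hns hem hunks above: they move buffer ownership into the item itself. Callers used to pass an exist_bt flag to the free path, and a mismatch between the alloc-time and free-time flags could leak or double-free the DMA buffer; recording the flag in the struct at allocation lets hem_list_free_item() decide on its own. A small userspace sketch of the pattern, with hypothetical names:

	#include <stdbool.h>
	#include <stdlib.h>

	struct item {
		void *buf;
		bool owns_buf;	/* set once at alloc, consulted at free */
	};

	static struct item *item_alloc(size_t n, bool own)
	{
		struct item *it = calloc(1, sizeof(*it));

		if (!it)
			return NULL;
		if (own) {
			it->buf = malloc(n);
			if (!it->buf) {
				free(it);
				return NULL;
			}
		}
		it->owns_buf = own;
		return it;
	}

	static void item_free(struct item *it)
	{
		if (it->owns_buf)	/* no flag argument for callers to get wrong */
			free(it->buf);
		free(it);
	}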
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 697b17cca02e71..0144e7210d05a1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -468,7 +468,7 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
+ valid_num_sge = calc_wr_sge_num(wr, &msg_len);
+
+ ret = set_ud_opcode(ud_sq_wqe, wr);
+- if (WARN_ON(ret))
++ if (WARN_ON_ONCE(ret))
+ return ret;
+
+ ud_sq_wqe->msg_len = cpu_to_le32(msg_len);
+@@ -572,7 +572,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
+ rc_sq_wqe->msg_len = cpu_to_le32(msg_len);
+
+ ret = set_rc_opcode(hr_dev, rc_sq_wqe, wr);
+- if (WARN_ON(ret))
++ if (WARN_ON_ONCE(ret))
+ return ret;
+
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
+@@ -670,6 +670,10 @@ static void write_dwqe(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
+ #define HNS_ROCE_SL_SHIFT 2
+ struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe;
+
++ if (unlikely(qp->state == IB_QPS_ERR)) {
++ flush_cqe(hr_dev, qp);
++ return;
++ }
+ /* All kinds of DirectWQE have the same header field layout */
+ hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_FLAG);
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_L, qp->sl);
+@@ -5619,6 +5623,9 @@ static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev,
+ {
+ struct hns_roce_dip *hr_dip = hr_qp->dip;
+
++ if (!hr_dip)
++ return;
++
+ xa_lock(&hr_dev->qp_table.dip_xa);
+
+ hr_dip->qp_cnt--;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index bf30b3a65a9ba9..55b9283bfc6f03 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -814,11 +814,6 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ for (i = 0, mapped_cnt = 0; i < mtr->hem_cfg.region_count &&
+ mapped_cnt < page_cnt; i++) {
+ r = &mtr->hem_cfg.region[i];
+- /* if hopnum is 0, no need to map pages in this region */
+- if (!r->hopnum) {
+- mapped_cnt += r->count;
+- continue;
+- }
+
+ if (r->offset + r->count > page_cnt) {
+ ret = -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index ac20ab3bbabf47..8c47cb4edd0a0a 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2831,7 +2831,7 @@ static int mlx5_ib_get_plane_num(struct mlx5_core_dev *mdev, u8 *num_plane)
+ int err;
+
+ *num_plane = 0;
+- if (!MLX5_CAP_GEN(mdev, ib_virt))
++ if (!MLX5_CAP_GEN(mdev, ib_virt) || !MLX5_CAP_GEN_2(mdev, multiplane))
+ return 0;
+
+ err = mlx5_query_hca_vport_context(mdev, 0, 1, 0, &vport_ctx);
+@@ -3631,7 +3631,8 @@ static int mlx5_ib_init_multiport_master(struct mlx5_ib_dev *dev)
+ list_for_each_entry(mpi, &mlx5_ib_unaffiliated_port_list,
+ list) {
+ if (dev->sys_image_guid == mpi->sys_image_guid &&
+- (mlx5_core_native_port_num(mpi->mdev) - 1) == i) {
++ (mlx5_core_native_port_num(mpi->mdev) - 1) == i &&
++ mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) {
+ bound = mlx5_ib_bind_slave_port(dev, mpi);
+ }
+
+@@ -4776,7 +4777,8 @@ static int mlx5r_mp_probe(struct auxiliary_device *adev,
+
+ mutex_lock(&mlx5_ib_multiport_mutex);
+ list_for_each_entry(dev, &mlx5_ib_dev_list, ib_dev_list) {
+- if (dev->sys_image_guid == mpi->sys_image_guid)
++ if (dev->sys_image_guid == mpi->sys_image_guid &&
++ mlx5_core_same_coredev_type(dev->mdev, mpi->mdev))
+ bound = mlx5_ib_bind_slave_port(dev, mpi);
+
+ if (bound) {
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 255677bc12b2ab..1ba4a0c8726aed 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -40,6 +40,8 @@ void rxe_dealloc(struct ib_device *ib_dev)
+ /* initialize rxe device parameters */
+ static void rxe_init_device_param(struct rxe_dev *rxe)
+ {
++ struct net_device *ndev;
++
+ rxe->max_inline_data = RXE_MAX_INLINE_DATA;
+
+ rxe->attr.vendor_id = RXE_VENDOR_ID;
+@@ -71,8 +73,15 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
+ rxe->attr.max_fast_reg_page_list_len = RXE_MAX_FMR_PAGE_LIST_LEN;
+ rxe->attr.max_pkeys = RXE_MAX_PKEYS;
+ rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return;
++
+ addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid,
+- rxe->ndev->dev_addr);
++ ndev->dev_addr);
++
++ dev_put(ndev);
+
+ rxe->max_ucontext = RXE_MAX_UCONTEXT;
+ }
+@@ -109,10 +118,15 @@ static void rxe_init_port_param(struct rxe_port *port)
+ static void rxe_init_ports(struct rxe_dev *rxe)
+ {
+ struct rxe_port *port = &rxe->port;
++ struct net_device *ndev;
+
+ rxe_init_port_param(port);
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return;
+ addrconf_addr_eui48((unsigned char *)&port->port_guid,
+- rxe->ndev->dev_addr);
++ ndev->dev_addr);
++ dev_put(ndev);
+ spin_lock_init(&port->port_lock);
+ }
+
+@@ -167,12 +181,13 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
+ /* called by ifc layer to create new rxe device.
+ * The caller should allocate memory for rxe by calling ib_alloc_device.
+ */
+-int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name)
++int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
++ struct net_device *ndev)
+ {
+ rxe_init(rxe);
+ rxe_set_mtu(rxe, mtu);
+
+- return rxe_register_device(rxe, ibdev_name);
++ return rxe_register_device(rxe, ibdev_name, ndev);
+ }
+
+ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
+index d8fb2c7af30a7e..fe7f9706673255 100644
+--- a/drivers/infiniband/sw/rxe/rxe.h
++++ b/drivers/infiniband/sw/rxe/rxe.h
+@@ -139,7 +139,8 @@ enum resp_states {
+
+ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu);
+
+-int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name);
++int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
++ struct net_device *ndev);
+
+ void rxe_rcv(struct sk_buff *skb);
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
+index 86cc2e18a7fdaf..07ff47bae31df9 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
++++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
+@@ -31,10 +31,19 @@
+ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+ {
+ unsigned char ll_addr[ETH_ALEN];
++ struct net_device *ndev;
++ int ret;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return -ENODEV;
+
+ ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+- return dev_mc_add(rxe->ndev, ll_addr);
++ ret = dev_mc_add(ndev, ll_addr);
++ dev_put(ndev);
++
++ return ret;
+ }
+
+ /**
+@@ -47,10 +56,19 @@ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+ static int rxe_mcast_del(struct rxe_dev *rxe, union ib_gid *mgid)
+ {
+ unsigned char ll_addr[ETH_ALEN];
++ struct net_device *ndev;
++ int ret;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return -ENODEV;
+
+ ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+- return dev_mc_del(rxe->ndev, ll_addr);
++ ret = dev_mc_del(ndev, ll_addr);
++ dev_put(ndev);
++
++ return ret;
+ }
+
+ /**
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 75d1407db52d4d..8cc64ceeb3569b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -524,7 +524,16 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
+ */
+ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num)
+ {
+- return rxe->ndev->name;
++ struct net_device *ndev;
++ char *ndev_name;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return NULL;
++ ndev_name = ndev->name;
++ dev_put(ndev);
++
++ return ndev_name;
+ }
+
+ int rxe_net_add(const char *ibdev_name, struct net_device *ndev)
+@@ -536,10 +545,9 @@ int rxe_net_add(const char *ibdev_name, struct net_device *ndev)
+ if (!rxe)
+ return -ENOMEM;
+
+- rxe->ndev = ndev;
+ ib_mark_name_assigned_by_user(&rxe->ib_dev);
+
+- err = rxe_add(rxe, ndev->mtu, ibdev_name);
++ err = rxe_add(rxe, ndev->mtu, ibdev_name, ndev);
+ if (err) {
+ ib_dealloc_device(&rxe->ib_dev);
+ return err;
+@@ -587,10 +595,18 @@ void rxe_port_down(struct rxe_dev *rxe)
+
+ void rxe_set_port_state(struct rxe_dev *rxe)
+ {
+- if (netif_running(rxe->ndev) && netif_carrier_ok(rxe->ndev))
++ struct net_device *ndev;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return;
++
++ if (netif_running(ndev) && netif_carrier_ok(ndev))
+ rxe_port_up(rxe);
+ else
+ rxe_port_down(rxe);
++
++ dev_put(ndev);
+ }
+
+ static int rxe_notify(struct notifier_block *not_blk,
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 5c18f7e342f294..8a5fc20fd18692 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -41,6 +41,7 @@ static int rxe_query_port(struct ib_device *ibdev,
+ u32 port_num, struct ib_port_attr *attr)
+ {
+ struct rxe_dev *rxe = to_rdev(ibdev);
++ struct net_device *ndev;
+ int err, ret;
+
+ if (port_num != 1) {
+@@ -49,6 +50,12 @@ static int rxe_query_port(struct ib_device *ibdev,
+ goto err_out;
+ }
+
++ ndev = rxe_ib_device_get_netdev(ibdev);
++ if (!ndev) {
++ err = -ENODEV;
++ goto err_out;
++ }
++
+ memcpy(attr, &rxe->port.attr, sizeof(*attr));
+
+ mutex_lock(&rxe->usdev_lock);
+@@ -57,13 +64,14 @@ static int rxe_query_port(struct ib_device *ibdev,
+
+ if (attr->state == IB_PORT_ACTIVE)
+ attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP;
+- else if (dev_get_flags(rxe->ndev) & IFF_UP)
++ else if (dev_get_flags(ndev) & IFF_UP)
+ attr->phys_state = IB_PORT_PHYS_STATE_POLLING;
+ else
+ attr->phys_state = IB_PORT_PHYS_STATE_DISABLED;
+
+ mutex_unlock(&rxe->usdev_lock);
+
++ dev_put(ndev);
+ return ret;
+
+ err_out:
+@@ -1425,9 +1433,16 @@ static const struct attribute_group rxe_attr_group = {
+ static int rxe_enable_driver(struct ib_device *ib_dev)
+ {
+ struct rxe_dev *rxe = container_of(ib_dev, struct rxe_dev, ib_dev);
++ struct net_device *ndev;
++
++ ndev = rxe_ib_device_get_netdev(ib_dev);
++ if (!ndev)
++ return -ENODEV;
+
+ rxe_set_port_state(rxe);
+- dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(rxe->ndev));
++ dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(ndev));
++
++ dev_put(ndev);
+ return 0;
+ }
+
+@@ -1495,7 +1510,8 @@ static const struct ib_device_ops rxe_dev_ops = {
+ INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw),
+ };
+
+-int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
++int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name,
++ struct net_device *ndev)
+ {
+ int err;
+ struct ib_device *dev = &rxe->ib_dev;
+@@ -1507,13 +1523,13 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
+ dev->num_comp_vectors = num_possible_cpus();
+ dev->local_dma_lkey = 0;
+ addrconf_addr_eui48((unsigned char *)&dev->node_guid,
+- rxe->ndev->dev_addr);
++ ndev->dev_addr);
+
+ dev->uverbs_cmd_mask |= BIT_ULL(IB_USER_VERBS_CMD_POST_SEND) |
+ BIT_ULL(IB_USER_VERBS_CMD_REQ_NOTIFY_CQ);
+
+ ib_set_device_ops(dev, &rxe_dev_ops);
+- err = ib_device_set_netdev(&rxe->ib_dev, rxe->ndev, 1);
++ err = ib_device_set_netdev(&rxe->ib_dev, ndev, 1);
+ if (err)
+ return err;
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 3c1354f82283e6..6573ceec0ef583 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -370,6 +370,7 @@ struct rxe_port {
+ u32 qp_gsi_index;
+ };
+
++#define RXE_PORT 1
+ struct rxe_dev {
+ struct ib_device ib_dev;
+ struct ib_device_attr attr;
+@@ -377,8 +378,6 @@ struct rxe_dev {
+ int max_inline_data;
+ struct mutex usdev_lock;
+
+- struct net_device *ndev;
+-
+ struct rxe_pool uc_pool;
+ struct rxe_pool pd_pool;
+ struct rxe_pool ah_pool;
+@@ -406,6 +405,11 @@ struct rxe_dev {
+ struct crypto_shash *tfm;
+ };
+
++static inline struct net_device *rxe_ib_device_get_netdev(struct ib_device *dev)
++{
++ return ib_device_get_netdev(dev, RXE_PORT);
++}
++
+ static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index)
+ {
+ atomic64_inc(&rxe->stats_counters[index]);
+@@ -471,6 +475,7 @@ static inline struct rxe_pd *rxe_mw_pd(struct rxe_mw *mw)
+ return to_rpd(mw->ibmw.pd);
+ }
+
+-int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
++int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name,
++ struct net_device *ndev);
+
+ #endif /* RXE_VERBS_H */
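The rxe hunks above all apply one pattern: the cached rxe->ndev pointer is removed, and each user instead takes a refcounted reference on the netdev for exactly as long as it is accessed. A minimal sketch of that pattern, assuming only the in-tree ib_device_get_netdev()/dev_put() API (the surrounding function is hypothetical):

static int example_read_mtu(struct ib_device *ibdev, unsigned int *mtu)
{
	struct net_device *ndev;

	ndev = ib_device_get_netdev(ibdev, 1);	/* takes a reference */
	if (!ndev)
		return -ENODEV;	/* the netdev can disappear at any time */

	*mtu = READ_ONCE(ndev->mtu);	/* touch it only while the ref is held */

	dev_put(ndev);	/* drop the reference when done */
	return 0;
}

The NULL check is what the old cached pointer could not provide: a stale rxe->ndev could outlive the underlying device, whereas the lookup either returns a live, referenced netdev or fails cleanly with -ENODEV.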
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index 86d4d6a2170e17..ea5eee50dc39d0 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -46,6 +46,9 @@
+ */
+ #define SIW_IRQ_MAXBURST_SQ_ACTIVE 4
+
++/* There is always only one port, port 1, per siw device */
++#define SIW_PORT 1
++
+ struct siw_dev_cap {
+ int max_qp;
+ int max_qp_wr;
+@@ -69,16 +72,12 @@ struct siw_pd {
+
+ struct siw_device {
+ struct ib_device base_dev;
+- struct net_device *netdev;
+ struct siw_dev_cap attrs;
+
+ u32 vendor_part_id;
+ int numa_node;
+ char raw_gid[ETH_ALEN];
+
+- /* physical port state (only one port per device) */
+- enum ib_port_state state;
+-
+ spinlock_t lock;
+
+ struct xarray qp_xa;
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index 86323918a570eb..708b13993fdfd3 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -1759,6 +1759,7 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ {
+ struct socket *s;
+ struct siw_cep *cep = NULL;
++ struct net_device *ndev = NULL;
+ struct siw_device *sdev = to_siw_dev(id->device);
+ int addr_family = id->local_addr.ss_family;
+ int rv = 0;
+@@ -1779,9 +1780,15 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ struct sockaddr_in *laddr = &to_sockaddr_in(id->local_addr);
+
+ /* For wildcard addr, limit binding to current device only */
+- if (ipv4_is_zeronet(laddr->sin_addr.s_addr))
+- s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
+-
++ if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) {
++ ndev = ib_device_get_netdev(id->device, SIW_PORT);
++ if (ndev) {
++ s->sk->sk_bound_dev_if = ndev->ifindex;
++ } else {
++ rv = -ENODEV;
++ goto error;
++ }
++ }
+ rv = s->ops->bind(s, (struct sockaddr *)laddr,
+ sizeof(struct sockaddr_in));
+ } else {
+@@ -1797,9 +1804,15 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ }
+
+ /* For wildcard addr, limit binding to current device only */
+- if (ipv6_addr_any(&laddr->sin6_addr))
+- s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
+-
++ if (ipv6_addr_any(&laddr->sin6_addr)) {
++ ndev = ib_device_get_netdev(id->device, SIW_PORT);
++ if (ndev) {
++ s->sk->sk_bound_dev_if = ndev->ifindex;
++ } else {
++ rv = -ENODEV;
++ goto error;
++ }
++ }
+ rv = s->ops->bind(s, (struct sockaddr *)laddr,
+ sizeof(struct sockaddr_in6));
+ }
+@@ -1860,6 +1873,7 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ }
+ list_add_tail(&cep->listenq, (struct list_head *)id->provider_data);
+ cep->state = SIW_EPSTATE_LISTENING;
++ dev_put(ndev);
+
+ siw_dbg(id->device, "Listen at laddr %pISp\n", &id->local_addr);
+
+@@ -1879,6 +1893,7 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ siw_cep_set_free_and_put(cep);
+ }
+ sock_release(s);
++ dev_put(ndev);
+
+ return rv;
+ }
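The listen path above follows the same lookup-on-demand discipline for the wildcard-address case, binding the listening socket to the device's only port netdev. Reduced to its essentials (a sketch with a hypothetical wrapper, under the same single-port SIW_PORT convention):

static int example_bind_wildcard(struct ib_device *ibdev, struct sock *sk)
{
	struct net_device *ndev = ib_device_get_netdev(ibdev, 1);

	if (!ndev)
		return -ENODEV;

	sk->sk_bound_dev_if = ndev->ifindex;	/* restrict to this interface */
	dev_put(ndev);
	return 0;
}

Note that the shared exit paths in the hunk release the socket and drop the netdev reference together; dev_put(NULL) is a no-op, so ndev can stay NULL on the non-wildcard paths without extra branching.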
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 17abef48abcd22..14d3103aee6f8a 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -287,7 +287,6 @@ static struct siw_device *siw_device_create(struct net_device *netdev)
+ return NULL;
+
+ base_dev = &sdev->base_dev;
+- sdev->netdev = netdev;
+
+ if (netdev->addr_len) {
+ memcpy(sdev->raw_gid, netdev->dev_addr,
+@@ -381,12 +380,10 @@ static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
+
+ switch (event) {
+ case NETDEV_UP:
+- sdev->state = IB_PORT_ACTIVE;
+ siw_port_event(sdev, 1, IB_EVENT_PORT_ACTIVE);
+ break;
+
+ case NETDEV_DOWN:
+- sdev->state = IB_PORT_DOWN;
+ siw_port_event(sdev, 1, IB_EVENT_PORT_ERR);
+ break;
+
+@@ -407,12 +404,8 @@ static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
+ siw_port_event(sdev, 1, IB_EVENT_LID_CHANGE);
+ break;
+ /*
+- * Todo: Below netdev events are currently not handled.
++	 * No other netdev events are handled
+ */
+- case NETDEV_CHANGEMTU:
+- case NETDEV_CHANGE:
+- break;
+-
+ default:
+ break;
+ }
+@@ -442,12 +435,6 @@ static int siw_newlink(const char *basedev_name, struct net_device *netdev)
+ sdev = siw_device_create(netdev);
+ if (sdev) {
+ dev_dbg(&netdev->dev, "siw: new device\n");
+-
+- if (netif_running(netdev) && netif_carrier_ok(netdev))
+- sdev->state = IB_PORT_ACTIVE;
+- else
+- sdev->state = IB_PORT_DOWN;
+-
+ ib_mark_name_assigned_by_user(&sdev->base_dev);
+ rv = siw_device_register(sdev, basedev_name);
+ if (rv)
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 986666c19378a1..7ca0297d68a4a7 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -171,21 +171,29 @@ int siw_query_device(struct ib_device *base_dev, struct ib_device_attr *attr,
+ int siw_query_port(struct ib_device *base_dev, u32 port,
+ struct ib_port_attr *attr)
+ {
+- struct siw_device *sdev = to_siw_dev(base_dev);
++ struct net_device *ndev;
+ int rv;
+
+ memset(attr, 0, sizeof(*attr));
+
+ rv = ib_get_eth_speed(base_dev, port, &attr->active_speed,
+ &attr->active_width);
++ if (rv)
++ return rv;
++
++ ndev = ib_device_get_netdev(base_dev, SIW_PORT);
++ if (!ndev)
++ return -ENODEV;
++
+ attr->gid_tbl_len = 1;
+ attr->max_msg_sz = -1;
+- attr->max_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
+- attr->active_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
+- attr->phys_state = sdev->state == IB_PORT_ACTIVE ?
++ attr->max_mtu = ib_mtu_int_to_enum(ndev->max_mtu);
++ attr->active_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu));
++ attr->phys_state = (netif_running(ndev) && netif_carrier_ok(ndev)) ?
+ IB_PORT_PHYS_STATE_LINK_UP : IB_PORT_PHYS_STATE_DISABLED;
++ attr->state = attr->phys_state == IB_PORT_PHYS_STATE_LINK_UP ?
++ IB_PORT_ACTIVE : IB_PORT_DOWN;
+ attr->port_cap_flags = IB_PORT_CM_SUP | IB_PORT_DEVICE_MGMT_SUP;
+- attr->state = sdev->state;
+ /*
+ * All zero
+ *
+@@ -199,6 +207,7 @@ int siw_query_port(struct ib_device *base_dev, u32 port,
+ * attr->subnet_timeout = 0;
+ * attr->init_type_repy = 0;
+ */
++ dev_put(ndev);
+ return rv;
+ }
+
+@@ -505,21 +514,24 @@ int siw_query_qp(struct ib_qp *base_qp, struct ib_qp_attr *qp_attr,
+ int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+ {
+ struct siw_qp *qp;
+- struct siw_device *sdev;
++ struct net_device *ndev;
+
+- if (base_qp && qp_attr && qp_init_attr) {
++ if (base_qp && qp_attr && qp_init_attr)
+ qp = to_siw_qp(base_qp);
+- sdev = to_siw_dev(base_qp->device);
+- } else {
++ else
+ return -EINVAL;
+- }
++
++ ndev = ib_device_get_netdev(base_qp->device, SIW_PORT);
++ if (!ndev)
++ return -ENODEV;
++
+ qp_attr->qp_state = siw_qp_state_to_ib_qp_state[qp->attrs.state];
+ qp_attr->cap.max_inline_data = SIW_MAX_INLINE;
+ qp_attr->cap.max_send_wr = qp->attrs.sq_size;
+ qp_attr->cap.max_send_sge = qp->attrs.sq_max_sges;
+ qp_attr->cap.max_recv_wr = qp->attrs.rq_size;
+ qp_attr->cap.max_recv_sge = qp->attrs.rq_max_sges;
+- qp_attr->path_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
++ qp_attr->path_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu));
+ qp_attr->max_rd_atomic = qp->attrs.irq_size;
+ qp_attr->max_dest_rd_atomic = qp->attrs.orq_size;
+
+@@ -534,6 +546,7 @@ int siw_query_qp(struct ib_qp *base_qp, struct ib_qp_attr *qp_attr,
+
+ qp_init_attr->cap = qp_attr->cap;
+
++ dev_put(ndev);
+ return 0;
+ }
+
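Besides dropping the cached netdev, the siw hunks also drop the cached port state: as siw_query_port() now shows, both state and phys_state are derived from the live carrier. The derivation in isolation (a sketch under the same one-port assumption; the function name is hypothetical):

static void example_port_state(struct net_device *ndev,
			       enum ib_port_state *state, u8 *phys_state)
{
	bool up = netif_running(ndev) && netif_carrier_ok(ndev);

	*phys_state = up ? IB_PORT_PHYS_STATE_LINK_UP
			 : IB_PORT_PHYS_STATE_DISABLED;
	*state = up ? IB_PORT_ACTIVE : IB_PORT_DOWN;
}

This is why the NETDEV_UP/NETDEV_DOWN notifier cases in siw_main.c no longer write sdev->state: the field is gone, and the answer is computed fresh on each query.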
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index e83d956478521d..ef4abdea3c2d2e 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -349,6 +349,7 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ struct rtrs_srv_mr *srv_mr;
+ bool need_inval = false;
+ enum ib_send_flags flags;
++ struct ib_sge list;
+ u32 imm;
+ int err;
+
+@@ -401,7 +402,6 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval);
+ imm_wr.wr.next = NULL;
+ if (always_invalidate) {
+- struct ib_sge list;
+ struct rtrs_msg_rkey_rsp *msg;
+
+ srv_mr = &srv_path->mrs[id->msg_id];
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index 3be7bd8cd8cdeb..32abc2916b40ff 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -64,7 +64,7 @@ static void gic_check_cpu_features(void)
+
+ union gic_base {
+ void __iomem *common_base;
+- void __percpu * __iomem *percpu_base;
++ void __iomem * __percpu *percpu_base;
+ };
+
+ struct gic_chip_data {
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index e113b99a3eab59..8716004fcf6c90 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1867,20 +1867,20 @@ static int sdhci_msm_program_key(struct cqhci_host *cq_host,
+ struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+ union cqhci_crypto_cap_entry cap;
+
++ if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
++ return qcom_ice_evict_key(msm_host->ice, slot);
++
+ /* Only AES-256-XTS has been tested so far. */
+ cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
+ if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
+ cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
+ return -EINVAL;
+
+- if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)
+- return qcom_ice_program_key(msm_host->ice,
+- QCOM_ICE_CRYPTO_ALG_AES_XTS,
+- QCOM_ICE_CRYPTO_KEY_SIZE_256,
+- cfg->crypto_key,
+- cfg->data_unit_size, slot);
+- else
+- return qcom_ice_evict_key(msm_host->ice, slot);
++ return qcom_ice_program_key(msm_host->ice,
++ QCOM_ICE_CRYPTO_ALG_AES_XTS,
++ QCOM_ICE_CRYPTO_KEY_SIZE_256,
++ cfg->crypto_key,
++ cfg->data_unit_size, slot);
+ }
+
+ #else /* CONFIG_MMC_CRYPTO */
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index 0ba658a72d8fea..22556d339d6ea5 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -2,7 +2,7 @@
+ /*
+ * Microchip KSZ9477 switch driver main logic
+ *
+- * Copyright (C) 2017-2019 Microchip Technology Inc.
++ * Copyright (C) 2017-2024 Microchip Technology Inc.
+ */
+
+ #include <linux/kernel.h>
+@@ -983,26 +983,51 @@ void ksz9477_get_caps(struct ksz_device *dev, int port,
+ int ksz9477_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
+ {
+ u32 secs = msecs / 1000;
+- u8 value;
+- u8 data;
++ u8 data, mult, value;
++ u32 max_val;
+ int ret;
+
+- value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs);
++#define MAX_TIMER_VAL ((1 << 8) - 1)
+
+- ret = ksz_write8(dev, REG_SW_LUE_CTRL_3, value);
+- if (ret < 0)
+- return ret;
++	/* The aging timer comprises a 3-bit multiplier and an 8-bit second
++	 * value. Neither of them can be zero. The maximum timer is then
++	 * 7 * 255 = 1785 seconds.
++ */
++ if (!secs)
++ secs = 1;
+
+- data = FIELD_GET(SW_AGE_PERIOD_10_8_M, secs);
++ /* Return error if too large. */
++ else if (secs > 7 * MAX_TIMER_VAL)
++ return -EINVAL;
+
+ ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value);
+ if (ret < 0)
+ return ret;
+
+- value &= ~SW_AGE_CNT_M;
+- value |= FIELD_PREP(SW_AGE_CNT_M, data);
++	/* Check whether the multiplier needs to be updated. */
++ mult = FIELD_GET(SW_AGE_CNT_M, value);
++ max_val = MAX_TIMER_VAL;
++ if (mult > 0) {
++		/* Try to reuse the multiplier already in the register, as the
++		 * hardware default expresses 300 seconds as multiplier 4
++		 * times 75 seconds.
++ */
++ max_val = DIV_ROUND_UP(secs, mult);
++ if (max_val > MAX_TIMER_VAL || max_val * mult != secs)
++ max_val = MAX_TIMER_VAL;
++ }
++
++ data = DIV_ROUND_UP(secs, max_val);
++ if (mult != data) {
++ value &= ~SW_AGE_CNT_M;
++ value |= FIELD_PREP(SW_AGE_CNT_M, data);
++ ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
++ if (ret < 0)
++ return ret;
++ }
+
+- return ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
++ value = DIV_ROUND_UP(secs, data);
++ return ksz_write8(dev, REG_SW_LUE_CTRL_3, value);
+ }
+
+ void ksz9477_port_queue_split(struct ksz_device *dev, int port)
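The arithmetic in the new ksz9477_set_ageing_time() (mirrored with a 20-bit field in the lan937x hunk further down) factors the requested time into the 3-bit multiplier and the period register so that multiplier * period covers the request, preferring whatever multiplier is already programmed. A standalone sketch of that factoring, with hypothetical names and the 8-bit limit of this switch:

#include <linux/kernel.h>

#define EXAMPLE_MAX_PERIOD	((1 << 8) - 1)	/* 8-bit period field */

static int example_split_age_time(u32 secs, u8 cur_mult, u8 *mult, u32 *period)
{
	u32 max_val = EXAMPLE_MAX_PERIOD;

	if (!secs)
		secs = 1;			/* neither field may be zero */
	else if (secs > 7 * EXAMPLE_MAX_PERIOD)
		return -EINVAL;			/* beyond 7 * 255 = 1785 s */

	if (cur_mult > 0) {
		/* Keep the programmed multiplier when it divides secs exactly. */
		max_val = DIV_ROUND_UP(secs, cur_mult);
		if (max_val > EXAMPLE_MAX_PERIOD || max_val * cur_mult != secs)
			max_val = EXAMPLE_MAX_PERIOD;
	}

	*mult = DIV_ROUND_UP(secs, max_val);	/* 1..7 */
	*period = DIV_ROUND_UP(secs, *mult);	/* 1..255 */
	return 0;
}

For the hardware default of 300 seconds with multiplier 4 already set, this yields 4 * 75 and leaves the multiplier untouched, exactly as the comment in the hunk describes.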
+diff --git a/drivers/net/dsa/microchip/ksz9477_reg.h b/drivers/net/dsa/microchip/ksz9477_reg.h
+index 04235c22bf40e4..ff579920078ee3 100644
+--- a/drivers/net/dsa/microchip/ksz9477_reg.h
++++ b/drivers/net/dsa/microchip/ksz9477_reg.h
+@@ -2,7 +2,7 @@
+ /*
+ * Microchip KSZ9477 register definitions
+ *
+- * Copyright (C) 2017-2018 Microchip Technology Inc.
++ * Copyright (C) 2017-2024 Microchip Technology Inc.
+ */
+
+ #ifndef __KSZ9477_REGS_H
+@@ -165,8 +165,6 @@
+ #define SW_VLAN_ENABLE BIT(7)
+ #define SW_DROP_INVALID_VID BIT(6)
+ #define SW_AGE_CNT_M GENMASK(5, 3)
+-#define SW_AGE_CNT_S 3
+-#define SW_AGE_PERIOD_10_8_M GENMASK(10, 8)
+ #define SW_RESV_MCAST_ENABLE BIT(2)
+ #define SW_HASH_OPTION_M 0x03
+ #define SW_HASH_OPTION_CRC 1
+diff --git a/drivers/net/dsa/microchip/lan937x_main.c b/drivers/net/dsa/microchip/lan937x_main.c
+index 824d9309a3d35e..7fe127a075de31 100644
+--- a/drivers/net/dsa/microchip/lan937x_main.c
++++ b/drivers/net/dsa/microchip/lan937x_main.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Microchip LAN937X switch driver main logic
+- * Copyright (C) 2019-2022 Microchip Technology Inc.
++ * Copyright (C) 2019-2024 Microchip Technology Inc.
+ */
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+@@ -260,10 +260,66 @@ int lan937x_change_mtu(struct ksz_device *dev, int port, int new_mtu)
+
+ int lan937x_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
+ {
+- u32 secs = msecs / 1000;
+- u32 value;
++ u8 data, mult, value8;
++ bool in_msec = false;
++ u32 max_val, value;
++ u32 secs = msecs;
+ int ret;
+
++#define MAX_TIMER_VAL ((1 << 20) - 1)
++
++	/* The aging timer comprises a 3-bit multiplier and a 20-bit second
++	 * value. Neither of them can be zero. The maximum timer is then
++	 * 7 * 1048575 = 7340025 seconds. As that value is too large for
++	 * practical use, it can instead be interpreted as microseconds,
++	 * making the maximum timer 7340 seconds with finer control. This
++	 * allows for a maximum of 122 minutes, compared to 29 minutes in
++	 * the KSZ9477 switch.
++ */
++ if (msecs % 1000)
++ in_msec = true;
++ else
++ secs /= 1000;
++ if (!secs)
++ secs = 1;
++
++ /* Return error if too large. */
++ else if (secs > 7 * MAX_TIMER_VAL)
++ return -EINVAL;
++
++ /* Configure how to interpret the number value. */
++ ret = ksz_rmw8(dev, REG_SW_LUE_CTRL_2, SW_AGE_CNT_IN_MICROSEC,
++ in_msec ? SW_AGE_CNT_IN_MICROSEC : 0);
++ if (ret < 0)
++ return ret;
++
++ ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value8);
++ if (ret < 0)
++ return ret;
++
++	/* Check whether the multiplier needs to be updated. */
++ mult = FIELD_GET(SW_AGE_CNT_M, value8);
++ max_val = MAX_TIMER_VAL;
++ if (mult > 0) {
++		/* Try to reuse the multiplier already in the register, as the
++		 * hardware default expresses 300 seconds as multiplier 4
++		 * times 75 seconds.
++ */
++ max_val = DIV_ROUND_UP(secs, mult);
++ if (max_val > MAX_TIMER_VAL || max_val * mult != secs)
++ max_val = MAX_TIMER_VAL;
++ }
++
++ data = DIV_ROUND_UP(secs, max_val);
++ if (mult != data) {
++ value8 &= ~SW_AGE_CNT_M;
++ value8 |= FIELD_PREP(SW_AGE_CNT_M, data);
++ ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value8);
++ if (ret < 0)
++ return ret;
++ }
++
++ secs = DIV_ROUND_UP(secs, data);
++
+ value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs);
+
+ ret = ksz_write8(dev, REG_SW_AGE_PERIOD__1, value);
+diff --git a/drivers/net/dsa/microchip/lan937x_reg.h b/drivers/net/dsa/microchip/lan937x_reg.h
+index 2f22a9d01de36b..35269f74a314b4 100644
+--- a/drivers/net/dsa/microchip/lan937x_reg.h
++++ b/drivers/net/dsa/microchip/lan937x_reg.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /* Microchip LAN937X switch register definitions
+- * Copyright (C) 2019-2021 Microchip Technology Inc.
++ * Copyright (C) 2019-2024 Microchip Technology Inc.
+ */
+ #ifndef __LAN937X_REG_H
+ #define __LAN937X_REG_H
+@@ -52,8 +52,7 @@
+
+ #define SW_VLAN_ENABLE BIT(7)
+ #define SW_DROP_INVALID_VID BIT(6)
+-#define SW_AGE_CNT_M 0x7
+-#define SW_AGE_CNT_S 3
++#define SW_AGE_CNT_M GENMASK(5, 3)
+ #define SW_RESV_MCAST_ENABLE BIT(2)
+
+ #define REG_SW_LUE_CTRL_1 0x0311
+@@ -66,6 +65,10 @@
+ #define SW_FAST_AGING BIT(1)
+ #define SW_LINK_AUTO_AGING BIT(0)
+
++#define REG_SW_LUE_CTRL_2 0x0312
++
++#define SW_AGE_CNT_IN_MICROSEC BIT(7)
++
+ #define REG_SW_AGE_PERIOD__1 0x0313
+ #define SW_AGE_PERIOD_7_0_M GENMASK(7, 0)
+
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 0a68b526e4a821..2b784ced06451f 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1967,7 +1967,11 @@ static int bcm_sysport_open(struct net_device *dev)
+ unsigned int i;
+ int ret;
+
+- clk_prepare_enable(priv->clk);
++ ret = clk_prepare_enable(priv->clk);
++ if (ret) {
++ netdev_err(dev, "could not enable priv clock\n");
++ return ret;
++ }
+
+ /* Reset UniMAC */
+ umac_reset(priv);
+@@ -2625,7 +2629,11 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ goto err_deregister_notifier;
+ }
+
+- clk_prepare_enable(priv->clk);
++ ret = clk_prepare_enable(priv->clk);
++ if (ret) {
++ dev_err(&pdev->dev, "could not enable priv clock\n");
++ goto err_deregister_netdev;
++ }
+
+ priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK;
+ dev_info(&pdev->dev,
+@@ -2639,6 +2647,8 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+
+ return 0;
+
++err_deregister_netdev:
++ unregister_netdev(dev);
+ err_deregister_notifier:
+ unregister_netdevice_notifier(&priv->netdev_notifier);
+ err_deregister_fixed_link:
+@@ -2808,7 +2818,12 @@ static int __maybe_unused bcm_sysport_resume(struct device *d)
+ if (!netif_running(dev))
+ return 0;
+
+- clk_prepare_enable(priv->clk);
++ ret = clk_prepare_enable(priv->clk);
++ if (ret) {
++ netdev_err(dev, "could not enable priv clock\n");
++ return ret;
++ }
++
+ if (priv->wolopts)
+ clk_disable_unprepare(priv->wol_clk);
+
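All three bcmsysport hunks fix the same omission: clk_prepare_enable() can fail, and its return value was previously discarded. The corrected shape, as a minimal sketch with a hypothetical wrapper:

#include <linux/clk.h>
#include <linux/device.h>

static int example_enable_clk(struct device *dev, struct clk *clk)
{
	int ret;

	ret = clk_prepare_enable(clk);	/* may fail; never ignore it */
	if (ret) {
		dev_err(dev, "could not enable clock: %d\n", ret);
		return ret;
	}
	return 0;
}

The probe-path hunk also adds the matching unwind label, since a failure after register_netdev() must now unregister the netdev on the way out.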
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 301fa1ea4f5167..95471cfcff420a 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -1134,6 +1134,7 @@ int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
+ void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
+ bool gve_tx_poll(struct gve_notify_block *block, int budget);
+ bool gve_xdp_poll(struct gve_notify_block *block, int budget);
++int gve_xsk_tx_poll(struct gve_notify_block *block, int budget);
+ int gve_tx_alloc_rings_gqi(struct gve_priv *priv,
+ struct gve_tx_alloc_rings_cfg *cfg);
+ void gve_tx_free_rings_gqi(struct gve_priv *priv,
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 661566db68c860..d404819ebc9b3f 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -333,6 +333,14 @@ int gve_napi_poll(struct napi_struct *napi, int budget)
+
+ if (block->rx) {
+ work_done = gve_rx_poll(block, budget);
++
++ /* Poll XSK TX as part of RX NAPI. Setup re-poll based on max of
++		/* Poll XSK TX as part of RX NAPI. Set up the re-poll based on
++		 * the max of TX and RX work done.
++ if (priv->xdp_prog)
++ work_done = max_t(int, work_done,
++ gve_xsk_tx_poll(block, budget));
++
+ reschedule |= work_done == budget;
+ }
+
+@@ -922,11 +930,13 @@ static void gve_init_sync_stats(struct gve_priv *priv)
+ static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv,
+ struct gve_tx_alloc_rings_cfg *cfg)
+ {
++ int num_xdp_queues = priv->xdp_prog ? priv->rx_cfg.num_queues : 0;
++
+ cfg->qcfg = &priv->tx_cfg;
+ cfg->raw_addressing = !gve_is_qpl(priv);
+ cfg->ring_size = priv->tx_desc_cnt;
+ cfg->start_idx = 0;
+- cfg->num_rings = gve_num_tx_queues(priv);
++ cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues;
+ cfg->tx = priv->tx;
+ }
+
+@@ -1623,8 +1633,8 @@ static int gve_xsk_pool_enable(struct net_device *dev,
+ if (err)
+ return err;
+
+- /* If XDP prog is not installed, return */
+- if (!priv->xdp_prog)
++ /* If XDP prog is not installed or interface is down, return. */
++ if (!priv->xdp_prog || !netif_running(dev))
+ return 0;
+
+ rx = &priv->rx[qid];
+@@ -1669,21 +1679,16 @@ static int gve_xsk_pool_disable(struct net_device *dev,
+ if (qid >= priv->rx_cfg.num_queues)
+ return -EINVAL;
+
+- /* If XDP prog is not installed, unmap DMA and return */
+- if (!priv->xdp_prog)
+- goto done;
+-
+- tx_qid = gve_xdp_tx_queue_id(priv, qid);
+- if (!netif_running(dev)) {
+- priv->rx[qid].xsk_pool = NULL;
+- xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
+- priv->tx[tx_qid].xsk_pool = NULL;
++ /* If XDP prog is not installed or interface is down, unmap DMA and
++ * return.
++ */
++ if (!priv->xdp_prog || !netif_running(dev))
+ goto done;
+- }
+
+ napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi;
+ napi_disable(napi_rx); /* make sure current rx poll is done */
+
++ tx_qid = gve_xdp_tx_queue_id(priv, qid);
+ napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi;
+ napi_disable(napi_tx); /* make sure current tx poll is done */
+
+@@ -1709,24 +1714,20 @@ static int gve_xsk_pool_disable(struct net_device *dev,
+ static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
+ {
+ struct gve_priv *priv = netdev_priv(dev);
+- int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id);
++ struct napi_struct *napi;
++
++ if (!gve_get_napi_enabled(priv))
++ return -ENETDOWN;
+
+ if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog)
+ return -EINVAL;
+
+- if (flags & XDP_WAKEUP_TX) {
+- struct gve_tx_ring *tx = &priv->tx[tx_queue_id];
+- struct napi_struct *napi =
+- &priv->ntfy_blocks[tx->ntfy_id].napi;
+-
+- if (!napi_if_scheduled_mark_missed(napi)) {
+- /* Call local_bh_enable to trigger SoftIRQ processing */
+- local_bh_disable();
+- napi_schedule(napi);
+- local_bh_enable();
+- }
+-
+- tx->xdp_xsk_wakeup++;
++ napi = &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_id)].napi;
++ if (!napi_if_scheduled_mark_missed(napi)) {
++ /* Call local_bh_enable to trigger SoftIRQ processing */
++ local_bh_disable();
++ napi_schedule(napi);
++ local_bh_enable();
+ }
+
+ return 0;
+@@ -1837,6 +1838,7 @@ int gve_adjust_queues(struct gve_priv *priv,
+ {
+ struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0};
+ struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0};
++ int num_xdp_queues;
+ int err;
+
+ gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg);
+@@ -1847,6 +1849,10 @@ int gve_adjust_queues(struct gve_priv *priv,
+ rx_alloc_cfg.qcfg = &new_rx_config;
+ tx_alloc_cfg.num_rings = new_tx_config.num_queues;
+
++ /* Add dedicated XDP TX queues if enabled. */
++ num_xdp_queues = priv->xdp_prog ? new_rx_config.num_queues : 0;
++ tx_alloc_cfg.num_rings += num_xdp_queues;
++
+ if (netif_running(priv->dev)) {
+ err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
+ return err;
+@@ -1891,6 +1897,9 @@ static void gve_turndown(struct gve_priv *priv)
+
+ gve_clear_napi_enabled(priv);
+ gve_clear_report_stats(priv);
++
++ /* Make sure that all traffic is finished processing. */
++ synchronize_net();
+ }
+
+ static void gve_turnup(struct gve_priv *priv)
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index e7fb7d6d283df1..4350ebd9c2bd9e 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -206,7 +206,10 @@ void gve_tx_stop_ring_gqi(struct gve_priv *priv, int idx)
+ return;
+
+ gve_remove_napi(priv, ntfy_idx);
+- gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
++ if (tx->q_num < priv->tx_cfg.num_queues)
++ gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
++ else
++ gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
+ netdev_tx_reset_queue(tx->netdev_txq);
+ gve_tx_remove_from_block(priv, idx);
+ }
+@@ -834,9 +837,12 @@ int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ struct gve_tx_ring *tx;
+ int i, err = 0, qid;
+
+- if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
++ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK) || !priv->xdp_prog)
+ return -EINVAL;
+
++ if (!gve_get_napi_enabled(priv))
++ return -ENETDOWN;
++
+ qid = gve_xdp_tx_queue_id(priv,
+ smp_processor_id() % priv->num_xdp_queues);
+
+@@ -975,33 +981,41 @@ static int gve_xsk_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
+ return sent;
+ }
+
++int gve_xsk_tx_poll(struct gve_notify_block *rx_block, int budget)
++{
++ struct gve_rx_ring *rx = rx_block->rx;
++ struct gve_priv *priv = rx->gve;
++ struct gve_tx_ring *tx;
++ int sent = 0;
++
++ tx = &priv->tx[gve_xdp_tx_queue_id(priv, rx->q_num)];
++ if (tx->xsk_pool) {
++ sent = gve_xsk_tx(priv, tx, budget);
++
++ u64_stats_update_begin(&tx->statss);
++ tx->xdp_xsk_sent += sent;
++ u64_stats_update_end(&tx->statss);
++ if (xsk_uses_need_wakeup(tx->xsk_pool))
++ xsk_set_tx_need_wakeup(tx->xsk_pool);
++ }
++
++ return sent;
++}
++
+ bool gve_xdp_poll(struct gve_notify_block *block, int budget)
+ {
+ struct gve_priv *priv = block->priv;
+ struct gve_tx_ring *tx = block->tx;
+ u32 nic_done;
+- bool repoll;
+ u32 to_do;
+
+ /* Find out how much work there is to be done */
+ nic_done = gve_tx_load_event_counter(priv, tx);
+ to_do = min_t(u32, (nic_done - tx->done), budget);
+ gve_clean_xdp_done(priv, tx, to_do);
+- repoll = nic_done != tx->done;
+-
+- if (tx->xsk_pool) {
+- int sent = gve_xsk_tx(priv, tx, budget);
+-
+- u64_stats_update_begin(&tx->statss);
+- tx->xdp_xsk_sent += sent;
+- u64_stats_update_end(&tx->statss);
+- repoll |= (sent == budget);
+- if (xsk_uses_need_wakeup(tx->xsk_pool))
+- xsk_set_tx_need_wakeup(tx->xsk_pool);
+- }
+
+ /* If we still have work we want to repoll */
+- return repoll;
++ return nic_done != tx->done;
+ }
+
+ bool gve_tx_poll(struct gve_notify_block *block, int budget)
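The gve change consolidates AF_XDP zero-copy TX into the RX NAPI context: gve_xdp_poll() loses its XSK branch, and gve_napi_poll() (see the gve_main.c hunk) re-polls on the larger of the RX and XSK TX work counts. Schematically (the helpers are the driver's own; the wrapper is hypothetical):

static int example_rx_side_poll(struct gve_notify_block *block, int budget)
{
	int work_done = gve_rx_poll(block, budget);

	/* Drive XSK TX from the same notify block as RX. */
	if (block->priv->xdp_prog)
		work_done = max_t(int, work_done,
				  gve_xsk_tx_poll(block, budget));

	return work_done;	/* caller reschedules when work_done == budget */
}

Taking the max means neither direction can end the poll early while the other still has a full budget of pending work.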
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index 9e80899546d996..83b9905666e24f 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2708,9 +2708,15 @@ static struct platform_device *port_platdev[3];
+
+ static void mv643xx_eth_shared_of_remove(void)
+ {
++ struct mv643xx_eth_platform_data *pd;
+ int n;
+
+ for (n = 0; n < 3; n++) {
++ if (!port_platdev[n])
++ continue;
++ pd = dev_get_platdata(&port_platdev[n]->dev);
++ if (pd)
++ of_node_put(pd->phy_node);
+ platform_device_del(port_platdev[n]);
+ port_platdev[n] = NULL;
+ }
+@@ -2773,8 +2779,10 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+ }
+
+ ppdev = platform_device_alloc(MV643XX_ETH_NAME, dev_num);
+- if (!ppdev)
+- return -ENOMEM;
++ if (!ppdev) {
++ ret = -ENOMEM;
++ goto put_err;
++ }
+ ppdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+ ppdev->dev.of_node = pnp;
+
+@@ -2796,6 +2804,8 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+
+ port_err:
+ platform_device_put(ppdev);
++put_err:
++ of_node_put(ppd.phy_node);
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index a7a16eac189134..52d99908d0e9d3 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -130,6 +130,7 @@ static const struct pci_device_id sky2_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436C) }, /* 88E8072 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436D) }, /* 88E8055 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4370) }, /* 88E8075 */
++ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4373) }, /* 88E8075 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4380) }, /* 88E8057 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4381) }, /* 88E8059 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4382) }, /* 88E8079 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+index cc9bcc42003242..6ab02f3fc29123 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+@@ -339,9 +339,13 @@ static int mlx5e_macsec_init_sa_fs(struct macsec_context *ctx,
+ {
+ struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
+ struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs;
++ const struct macsec_tx_sc *tx_sc = &ctx->secy->tx_sc;
+ struct mlx5_macsec_rule_attrs rule_attrs;
+ union mlx5_macsec_rule *macsec_rule;
+
++ if (is_tx && tx_sc->encoding_sa != sa->assoc_num)
++ return 0;
++
+ rule_attrs.macsec_obj_id = sa->macsec_obj_id;
+ rule_attrs.sci = sa->sci;
+ rule_attrs.assoc_num = sa->assoc_num;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index c14bef83d84d0f..62b8a7c1c6b54a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -6510,8 +6510,23 @@ static void _mlx5e_remove(struct auxiliary_device *adev)
+
+ mlx5_core_uplink_netdev_set(mdev, NULL);
+ mlx5e_dcbnl_delete_app(priv);
+- unregister_netdev(priv->netdev);
+- _mlx5e_suspend(adev, false);
++	/* When unloading the driver, the netdev is still in the registered
++	 * state if it came from legacy mode. If it came from switchdev mode,
++	 * it was already unregistered before the change to the NIC profile.
++	 */
++ if (priv->netdev->reg_state == NETREG_REGISTERED) {
++ unregister_netdev(priv->netdev);
++ _mlx5e_suspend(adev, false);
++ } else {
++ struct mlx5_core_dev *pos;
++ int i;
++
++ if (test_bit(MLX5E_STATE_DESTROYING, &priv->state))
++ mlx5_sd_for_each_dev(i, mdev, pos)
++ mlx5e_destroy_mdev_resources(pos);
++ else
++ _mlx5e_suspend(adev, true);
++ }
+ /* Avoid cleanup if profile rollback failed. */
+ if (priv->profile)
+ priv->profile->cleanup(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 92094bf60d5986..0657d107653577 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1508,6 +1508,21 @@ mlx5e_vport_uplink_rep_unload(struct mlx5e_rep_priv *rpriv)
+
+ priv = netdev_priv(netdev);
+
++	/* This bit is set when devlink is used to change the eswitch mode
++	 * from switchdev to legacy. Since the uplink netdev ifindex must be
++	 * kept, we detach the uplink representor profile and attach only
++	 * the NIC profile. In that case the netdev is unregistered later,
++	 * when the NIC auxiliary driver is unloaded.
++	 * We explicitly block a devlink eswitch mode change if any IPSec
++	 * rules are offloaded, but cannot block other cases, such as driver
++	 * unload and devlink reload; for those the netdev has to be
++	 * unregistered before the profile change. Otherwise the offloaded
++	 * rules would leak, since they get no chance to be unoffloaded
++	 * before the cleanup triggered by detaching the uplink representor
++	 * profile.
++	 */
++ if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY))
++ unregister_netdev(netdev);
++
+ mlx5e_netdev_attach_nic_profile(priv);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+index 5a0047bdcb5105..ed977ae75fab89 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+@@ -150,11 +150,11 @@ void mlx5_esw_ipsec_restore_dest_uplink(struct mlx5_core_dev *mdev)
+ unsigned long i;
+ int err;
+
+- xa_for_each(&esw->offloads.vport_reps, i, rep) {
+- rpriv = rep->rep_data[REP_ETH].priv;
+- if (!rpriv || !rpriv->netdev)
++ mlx5_esw_for_each_rep(esw, i, rep) {
++ if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
+ continue;
+
++ rpriv = rep->rep_data[REP_ETH].priv;
+ rhashtable_walk_enter(&rpriv->tc_ht, &iter);
+ rhashtable_walk_start(&iter);
+ while ((flow = rhashtable_walk_next(&iter)) != NULL) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index f44b4c7ebcfd73..48fd0400ffd4ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -716,6 +716,9 @@ void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
+ MLX5_CAP_GEN_2((esw->dev), ec_vf_vport_base) +\
+ (last) - 1)
+
++#define mlx5_esw_for_each_rep(esw, i, rep) \
++ xa_for_each(&((esw)->offloads.vport_reps), i, rep)
++
+ struct mlx5_eswitch *__must_check
+ mlx5_devlink_eswitch_get(struct devlink *devlink);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 8cf61ae8b89d24..3950b1d4b3d8e5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -53,9 +53,6 @@
+ #include "lag/lag.h"
+ #include "en/tc/post_meter.h"
+
+-#define mlx5_esw_for_each_rep(esw, i, rep) \
+- xa_for_each(&((esw)->offloads.vport_reps), i, rep)
+-
+ /* There are two match-all miss flows, one for unicast dst mac and
+ * one for multicast.
+ */
+@@ -3762,6 +3759,8 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
+ esw->eswitch_operation_in_progress = true;
+ up_write(&esw->mode_lock);
+
++ if (mode == DEVLINK_ESWITCH_MODE_LEGACY)
++ esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY;
+ mlx5_eswitch_disable_locked(esw);
+ if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) {
+ if (mlx5_devlink_trap_get_num_active(esw->dev)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 6fa06ba2d34653..f57c84e5128bc7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -1067,7 +1067,6 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ int inlen, err, eqn;
+ void *cqc, *in;
+ __be64 *pas;
+- int vector;
+ u32 i;
+
+ cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+@@ -1096,8 +1095,7 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ if (!in)
+ goto err_cqwq;
+
+- vector = raw_smp_processor_id() % mlx5_comp_vectors_max(mdev);
+- err = mlx5_comp_eqn_get(mdev, vector, &eqn);
++ err = mlx5_comp_eqn_get(mdev, 0, &eqn);
+ if (err) {
+ kvfree(in);
+ goto err_cqwq;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index 4b5fd71c897ddb..32d2e61f2b8238 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -423,8 +423,7 @@ mlxsw_sp_span_gretap4_route(const struct net_device *to_dev,
+
+ parms = mlxsw_sp_ipip_netdev_parms4(to_dev);
+ ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp,
+- 0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0,
+- 0);
++ 0, 0, tun->net, parms.link, tun->fwmark, 0, 0);
+
+ rt = ip_route_output_key(tun->net, &fl4);
+ if (IS_ERR(rt))
+diff --git a/drivers/net/ethernet/sfc/tc_conntrack.c b/drivers/net/ethernet/sfc/tc_conntrack.c
+index d90206f27161e4..c0603f54cec3ad 100644
+--- a/drivers/net/ethernet/sfc/tc_conntrack.c
++++ b/drivers/net/ethernet/sfc/tc_conntrack.c
+@@ -16,7 +16,7 @@ static int efx_tc_flow_block(enum tc_setup_type type, void *type_data,
+ void *cb_priv);
+
+ static const struct rhashtable_params efx_tc_ct_zone_ht_params = {
+- .key_len = offsetof(struct efx_tc_ct_zone, linkage),
++ .key_len = sizeof_field(struct efx_tc_ct_zone, zone),
+ .key_offset = 0,
+ .head_offset = offsetof(struct efx_tc_ct_zone, linkage),
+ };
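The sfc one-liner is subtle: .key_len tells rhashtable how many bytes to hash and compare, and offsetof(struct efx_tc_ct_zone, linkage) silently covered every field laid out before the linkage member, not just the intended zone key. A reduced illustration with a hypothetical entry type:

#include <linux/rhashtable.h>
#include <linux/stddef.h>
#include <linux/types.h>

struct example_entry {
	u16 zone;			/* the intended lookup key */
	void *private_data;		/* not part of the key */
	struct rhash_head linkage;
};

/* Wrong: hashes zone, padding, and private_data alike. */
/* .key_len = offsetof(struct example_entry, linkage) */

/* Right: hashes exactly the key field. */
/* .key_len = sizeof_field(struct example_entry, zone) */

With the oversized key, lookups depend on padding bytes and unrelated pointers, so logically equal keys can fail to match.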
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index ad868e8d195d59..aaf008bdbbcd46 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -405,22 +405,6 @@ static int stmmac_of_get_mac_mode(struct device_node *np)
+ return -ENODEV;
+ }
+
+-/**
+- * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
+- * @pdev: platform_device structure
+- * @plat: driver data platform structure
+- *
+- * Release resources claimed by stmmac_probe_config_dt().
+- */
+-static void stmmac_remove_config_dt(struct platform_device *pdev,
+- struct plat_stmmacenet_data *plat)
+-{
+- clk_disable_unprepare(plat->stmmac_clk);
+- clk_disable_unprepare(plat->pclk);
+- of_node_put(plat->phy_node);
+- of_node_put(plat->mdio_node);
+-}
+-
+ /**
+ * stmmac_probe_config_dt - parse device-tree driver parameters
+ * @pdev: platform_device structure
+@@ -490,8 +474,10 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n");
+
+ rc = stmmac_mdio_setup(plat, np, &pdev->dev);
+- if (rc)
+- return ERR_PTR(rc);
++ if (rc) {
++ ret = ERR_PTR(rc);
++ goto error_put_phy;
++ }
+
+ of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size);
+
+@@ -580,8 +566,8 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg),
+ GFP_KERNEL);
+ if (!dma_cfg) {
+- stmmac_remove_config_dt(pdev, plat);
+- return ERR_PTR(-ENOMEM);
++ ret = ERR_PTR(-ENOMEM);
++ goto error_put_mdio;
+ }
+ plat->dma_cfg = dma_cfg;
+
+@@ -609,8 +595,8 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+
+ rc = stmmac_mtl_setup(pdev, plat);
+ if (rc) {
+- stmmac_remove_config_dt(pdev, plat);
+- return ERR_PTR(rc);
++ ret = ERR_PTR(rc);
++ goto error_put_mdio;
+ }
+
+ /* clock setup */
+@@ -662,6 +648,10 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ clk_disable_unprepare(plat->pclk);
+ error_pclk_get:
+ clk_disable_unprepare(plat->stmmac_clk);
++error_put_mdio:
++ of_node_put(plat->mdio_node);
++error_put_phy:
++ of_node_put(plat->phy_node);
+
+ return ret;
+ }
+@@ -670,16 +660,17 @@ static void devm_stmmac_remove_config_dt(void *data)
+ {
+ struct plat_stmmacenet_data *plat = data;
+
+- /* Platform data argument is unused */
+- stmmac_remove_config_dt(NULL, plat);
++ clk_disable_unprepare(plat->stmmac_clk);
++ clk_disable_unprepare(plat->pclk);
++ of_node_put(plat->mdio_node);
++ of_node_put(plat->phy_node);
+ }
+
+ /**
+ * devm_stmmac_probe_config_dt
+ * @pdev: platform_device structure
+ * @mac: MAC address to use
+- * Description: Devres variant of stmmac_probe_config_dt(). Does not require
+- * the user to call stmmac_remove_config_dt() at driver detach.
++ * Description: Devres variant of stmmac_probe_config_dt().
+ */
+ struct plat_stmmacenet_data *
+ devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
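The stmmac rework folds teardown into the devres release callback (devm_stmmac_remove_config_dt() above) and an explicit error ladder in the probe path, removing the old stmmac_remove_config_dt() helper entirely. The devres half of that pattern in general form, a sketch assuming a hypothetical plat structure:

#include <linux/device.h>
#include <linux/of.h>

struct example_plat {
	struct device_node *phy_node;
	struct device_node *mdio_node;
};

static void example_release(void *data)
{
	struct example_plat *plat = data;

	of_node_put(plat->mdio_node);	/* reverse order of acquisition */
	of_node_put(plat->phy_node);
}

static int example_register_cleanup(struct device *dev,
				    struct example_plat *plat)
{
	/* Runs example_release() automatically on unbind or on failure. */
	return devm_add_action_or_reset(dev, example_release, plat);
}

The error ladder in stmmac_probe_config_dt() mirrors the same order, so each label undoes exactly the steps completed before the failure.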
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index ba6db61dd227c4..dfca13b82bdce2 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -3525,7 +3525,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ init_completion(&common->tdown_complete);
+ common->tx_ch_num = AM65_CPSW_DEFAULT_TX_CHNS;
+ common->rx_ch_num_flows = AM65_CPSW_DEFAULT_RX_CHN_FLOWS;
+- common->pf_p0_rx_ptype_rrobin = false;
++ common->pf_p0_rx_ptype_rrobin = true;
+ common->default_vlan = 1;
+
+ common->ports = devm_kcalloc(dev, common->port_num,
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index 5d6d1cf78e93f2..768578c0d9587d 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -215,6 +215,9 @@ static void icss_iep_enable_shadow_mode(struct icss_iep *iep)
+ for (cmp = IEP_MIN_CMP; cmp < IEP_MAX_CMP; cmp++) {
+ regmap_update_bits(iep->map, ICSS_IEP_CMP_STAT_REG,
+ IEP_CMP_STATUS(cmp), IEP_CMP_STATUS(cmp));
++
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(cmp), 0);
+ }
+
+ /* enable reset counter on CMP0 event */
+@@ -780,6 +783,11 @@ int icss_iep_exit(struct icss_iep *iep)
+ }
+ icss_iep_disable(iep);
+
++ if (iep->pps_enabled)
++ icss_iep_pps_enable(iep, false);
++ else if (iep->perout_enabled)
++ icss_iep_perout_enable(iep, NULL, false);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(icss_iep_exit);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
+index fdebeb2f84e00c..74f0f200a89d4f 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_common.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
+@@ -855,31 +855,6 @@ irqreturn_t prueth_rx_irq(int irq, void *dev_id)
+ }
+ EXPORT_SYMBOL_GPL(prueth_rx_irq);
+
+-void prueth_emac_stop(struct prueth_emac *emac)
+-{
+- struct prueth *prueth = emac->prueth;
+- int slice;
+-
+- switch (emac->port_id) {
+- case PRUETH_PORT_MII0:
+- slice = ICSS_SLICE0;
+- break;
+- case PRUETH_PORT_MII1:
+- slice = ICSS_SLICE1;
+- break;
+- default:
+- netdev_err(emac->ndev, "invalid port\n");
+- return;
+- }
+-
+- emac->fw_running = 0;
+- if (!emac->is_sr1)
+- rproc_shutdown(prueth->txpru[slice]);
+- rproc_shutdown(prueth->rtu[slice]);
+- rproc_shutdown(prueth->pru[slice]);
+-}
+-EXPORT_SYMBOL_GPL(prueth_emac_stop);
+-
+ void prueth_cleanup_tx_ts(struct prueth_emac *emac)
+ {
+ int i;
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.c b/drivers/net/ethernet/ti/icssg/icssg_config.c
+index 5d2491c2943a8b..ddfd1c02a88544 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.c
+@@ -397,7 +397,7 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)
+ return 0;
+ }
+
+-static void icssg_init_emac_mode(struct prueth *prueth)
++void icssg_init_emac_mode(struct prueth *prueth)
+ {
+ /* When the device is configured as a bridge and it is being brought
+ * back to the emac mode, the host mac address has to be set as 0.
+@@ -406,9 +406,6 @@ static void icssg_init_emac_mode(struct prueth *prueth)
+ int i;
+ u8 mac[ETH_ALEN] = { 0 };
+
+- if (prueth->emacs_initialized)
+- return;
+-
+ /* Set VLAN TABLE address base */
+ regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
+ addr << SMEM_VLAN_OFFSET);
+@@ -423,15 +420,13 @@ static void icssg_init_emac_mode(struct prueth *prueth)
+ /* Clear host MAC address */
+ icssg_class_set_host_mac_addr(prueth->miig_rt, mac);
+ }
++EXPORT_SYMBOL_GPL(icssg_init_emac_mode);
+
+-static void icssg_init_fw_offload_mode(struct prueth *prueth)
++void icssg_init_fw_offload_mode(struct prueth *prueth)
+ {
+ u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET;
+ int i;
+
+- if (prueth->emacs_initialized)
+- return;
+-
+ /* Set VLAN TABLE address base */
+ regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
+ addr << SMEM_VLAN_OFFSET);
+@@ -448,6 +443,7 @@ static void icssg_init_fw_offload_mode(struct prueth *prueth)
+ icssg_class_set_host_mac_addr(prueth->miig_rt, prueth->hw_bridge_dev->dev_addr);
+ icssg_set_pvid(prueth, prueth->default_vlan, PRUETH_PORT_HOST);
+ }
++EXPORT_SYMBOL_GPL(icssg_init_fw_offload_mode);
+
+ int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
+ {
+@@ -455,11 +451,6 @@ int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
+ struct icssg_flow_cfg __iomem *flow_cfg;
+ int ret;
+
+- if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
+- icssg_init_fw_offload_mode(prueth);
+- else
+- icssg_init_emac_mode(prueth);
+-
+ memset_io(config, 0, TAS_GATE_MASK_LIST0);
+ icssg_miig_queues_init(prueth, slice);
+
+@@ -786,3 +777,27 @@ void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port)
+ writel(pvid, prueth->shram.va + EMAC_ICSSG_SWITCH_PORT0_DEFAULT_VLAN_OFFSET);
+ }
+ EXPORT_SYMBOL_GPL(icssg_set_pvid);
++
++int emac_fdb_flow_id_updated(struct prueth_emac *emac)
++{
++ struct mgmt_cmd_rsp fdb_cmd_rsp = { 0 };
++ int slice = prueth_emac_slice(emac);
++ struct mgmt_cmd fdb_cmd = { 0 };
++ int ret;
++
++ fdb_cmd.header = ICSSG_FW_MGMT_CMD_HEADER;
++ fdb_cmd.type = ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW;
++ fdb_cmd.seqnum = ++(emac->prueth->icssg_hwcmdseq);
++ fdb_cmd.param = 0;
++
++ fdb_cmd.param |= (slice << 4);
++ fdb_cmd.cmd_args[0] = 0;
++
++ ret = icssg_send_fdb_msg(emac, &fdb_cmd, &fdb_cmd_rsp);
++ if (ret)
++ return ret;
++
++ WARN_ON(fdb_cmd.seqnum != fdb_cmd_rsp.seqnum);
++ return fdb_cmd_rsp.status == 1 ? 0 : -EINVAL;
++}
++EXPORT_SYMBOL_GPL(emac_fdb_flow_id_updated);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.h b/drivers/net/ethernet/ti/icssg/icssg_config.h
+index 92c2deaa306835..c884e9fa099e6f 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.h
+@@ -55,6 +55,7 @@ struct icssg_rxq_ctx {
+ #define ICSSG_FW_MGMT_FDB_CMD_TYPE 0x03
+ #define ICSSG_FW_MGMT_CMD_TYPE 0x04
+ #define ICSSG_FW_MGMT_PKT 0x80000000
++#define ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW 0x05
+
+ struct icssg_r30_cmd {
+ u32 cmd[4];
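The prueth rework that follows replaces per-port firmware control with a common start/stop spanning both slices; its boot loop uses the standard partial-unwind idiom, shutting down only what was already booted when a later step fails. In isolation (a sketch with a hypothetical array-of-cores shape):

#include <linux/remoteproc.h>

static int example_start_all(struct rproc *cores[], const char *fw[], int n)
{
	int i, ret;

	for (i = 0; i < n; i++) {
		ret = rproc_set_firmware(cores[i], fw[i]);
		if (!ret)
			ret = rproc_boot(cores[i]);
		if (ret)
			goto unwind;
	}
	return 0;

unwind:
	while (--i >= 0)		/* only the cores that came up */
		rproc_shutdown(cores[i]);
	return ret;
}

This also fixes a latent bug visible in the removed code, where the return value of rproc_set_firmware() was immediately overwritten by rproc_boot() without ever being checked.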
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index fe2fd1bfc904db..cb11635a8d1209 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -164,11 +164,26 @@ static struct icssg_firmwares icssg_emac_firmwares[] = {
+ }
+ };
+
+-static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
++static int prueth_start(struct rproc *rproc, const char *fw_name)
++{
++ int ret;
++
++ ret = rproc_set_firmware(rproc, fw_name);
++ if (ret)
++ return ret;
++ return rproc_boot(rproc);
++}
++
++static void prueth_shutdown(struct rproc *rproc)
++{
++ rproc_shutdown(rproc);
++}
++
++static int prueth_emac_start(struct prueth *prueth)
+ {
+ struct icssg_firmwares *firmwares;
+ struct device *dev = prueth->dev;
+- int slice, ret;
++ int ret, slice;
+
+ if (prueth->is_switch_mode)
+ firmwares = icssg_switch_firmwares;
+@@ -177,49 +192,126 @@ static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
+ else
+ firmwares = icssg_emac_firmwares;
+
+- slice = prueth_emac_slice(emac);
+- if (slice < 0) {
+- netdev_err(emac->ndev, "invalid port\n");
+- return -EINVAL;
++ for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
++ ret = prueth_start(prueth->pru[slice], firmwares[slice].pru);
++ if (ret) {
++ dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
++ goto unwind_slices;
++ }
++
++ ret = prueth_start(prueth->rtu[slice], firmwares[slice].rtu);
++ if (ret) {
++ dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
++ rproc_shutdown(prueth->pru[slice]);
++ goto unwind_slices;
++ }
++
++ ret = prueth_start(prueth->txpru[slice], firmwares[slice].txpru);
++ if (ret) {
++ dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
++ rproc_shutdown(prueth->rtu[slice]);
++ rproc_shutdown(prueth->pru[slice]);
++ goto unwind_slices;
++ }
+ }
+
+- ret = icssg_config(prueth, emac, slice);
+- if (ret)
+- return ret;
++ return 0;
+
+- ret = rproc_set_firmware(prueth->pru[slice], firmwares[slice].pru);
+- ret = rproc_boot(prueth->pru[slice]);
+- if (ret) {
+- dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
+- return -EINVAL;
++unwind_slices:
++ while (--slice >= 0) {
++ prueth_shutdown(prueth->txpru[slice]);
++ prueth_shutdown(prueth->rtu[slice]);
++ prueth_shutdown(prueth->pru[slice]);
+ }
+
+- ret = rproc_set_firmware(prueth->rtu[slice], firmwares[slice].rtu);
+- ret = rproc_boot(prueth->rtu[slice]);
+- if (ret) {
+- dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
+- goto halt_pru;
++ return ret;
++}
++
++static void prueth_emac_stop(struct prueth *prueth)
++{
++ int slice;
++
++ for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
++ prueth_shutdown(prueth->txpru[slice]);
++ prueth_shutdown(prueth->rtu[slice]);
++ prueth_shutdown(prueth->pru[slice]);
+ }
++}
++
++static int prueth_emac_common_start(struct prueth *prueth)
++{
++ struct prueth_emac *emac;
++ int ret = 0;
++ int slice;
++
++ if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
++ return -EINVAL;
++
++ /* clear SMEM and MSMC settings for all slices */
++ memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
++ memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
++
++ icssg_class_default(prueth->miig_rt, ICSS_SLICE0, 0, false);
++ icssg_class_default(prueth->miig_rt, ICSS_SLICE1, 0, false);
++
++ if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
++ icssg_init_fw_offload_mode(prueth);
++ else
++ icssg_init_emac_mode(prueth);
++
++ for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
++ emac = prueth->emac[slice];
++ if (!emac)
++ continue;
++ ret = icssg_config(prueth, emac, slice);
++ if (ret)
++ goto disable_class;
++ }
++
++ ret = prueth_emac_start(prueth);
++ if (ret)
++ goto disable_class;
+
+- ret = rproc_set_firmware(prueth->txpru[slice], firmwares[slice].txpru);
+- ret = rproc_boot(prueth->txpru[slice]);
++ emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
++ prueth->emac[ICSS_SLICE1];
++ ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
++ emac, IEP_DEFAULT_CYCLE_TIME_NS);
+ if (ret) {
+- dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
+- goto halt_rtu;
++ dev_err(prueth->dev, "Failed to initialize IEP module\n");
++ goto stop_pruss;
+ }
+
+- emac->fw_running = 1;
+ return 0;
+
+-halt_rtu:
+- rproc_shutdown(prueth->rtu[slice]);
++stop_pruss:
++ prueth_emac_stop(prueth);
+
+-halt_pru:
+- rproc_shutdown(prueth->pru[slice]);
++disable_class:
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
+
+ return ret;
+ }
+
++static int prueth_emac_common_stop(struct prueth *prueth)
++{
++ struct prueth_emac *emac;
++
++ if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
++ return -EINVAL;
++
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
++
++ prueth_emac_stop(prueth);
++
++ emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
++ prueth->emac[ICSS_SLICE1];
++ icss_iep_exit(emac->iep);
++
++ return 0;
++}
++
+ /* called back by PHY layer if there is change in link state of hw port*/
+ static void emac_adjust_link(struct net_device *ndev)
+ {
+@@ -374,9 +466,6 @@ static void prueth_iep_settime(void *clockops_data, u64 ns)
+ u32 cycletime;
+ int timeout;
+
+- if (!emac->fw_running)
+- return;
+-
+ sc_descp = emac->prueth->shram.va + TIMESYNC_FW_WC_SETCLOCK_DESC_OFFSET;
+
+ cycletime = IEP_DEFAULT_CYCLE_TIME_NS;
+@@ -543,23 +632,17 @@ static int emac_ndo_open(struct net_device *ndev)
+ {
+ struct prueth_emac *emac = netdev_priv(ndev);
+ int ret, i, num_data_chn = emac->tx_ch_num;
++ struct icssg_flow_cfg __iomem *flow_cfg;
+ struct prueth *prueth = emac->prueth;
+ int slice = prueth_emac_slice(emac);
+ struct device *dev = prueth->dev;
+ int max_rx_flows;
+ int rx_flow;
+
+- /* clear SMEM and MSMC settings for all slices */
+- if (!prueth->emacs_initialized) {
+- memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
+- memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
+- }
+-
+ /* set h/w MAC as user might have re-configured */
+ ether_addr_copy(emac->mac_addr, ndev->dev_addr);
+
+ icssg_class_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
+- icssg_class_default(prueth->miig_rt, slice, 0, false);
+ icssg_ft1_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
+
+ /* Notify the stack of the actual queue counts. */
+@@ -597,18 +680,23 @@ static int emac_ndo_open(struct net_device *ndev)
+ goto cleanup_napi;
+ }
+
+- /* reset and start PRU firmware */
+- ret = prueth_emac_start(prueth, emac);
+- if (ret)
+- goto free_rx_irq;
++ if (!prueth->emacs_initialized) {
++ ret = prueth_emac_common_start(prueth);
++ if (ret)
++ goto free_rx_irq;
++ }
+
+- icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu);
++ flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;
++ writew(emac->rx_flow_id_base, &flow_cfg->rx_base_flow);
++ ret = emac_fdb_flow_id_updated(emac);
+
+- if (!prueth->emacs_initialized) {
+- ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
+- emac, IEP_DEFAULT_CYCLE_TIME_NS);
++ if (ret) {
++ netdev_err(ndev, "Failed to update Rx Flow ID %d", ret);
++ goto stop;
+ }
+
++ icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu);
++
+ ret = request_threaded_irq(emac->tx_ts_irq, NULL, prueth_tx_ts_irq,
+ IRQF_ONESHOT, dev_name(dev), emac);
+ if (ret)
+@@ -653,7 +741,8 @@ static int emac_ndo_open(struct net_device *ndev)
+ free_tx_ts_irq:
+ free_irq(emac->tx_ts_irq, emac);
+ stop:
+- prueth_emac_stop(emac);
++ if (!prueth->emacs_initialized)
++ prueth_emac_common_stop(prueth);
+ free_rx_irq:
+ free_irq(emac->rx_chns.irq[rx_flow], emac);
+ cleanup_napi:
+@@ -689,8 +778,6 @@ static int emac_ndo_stop(struct net_device *ndev)
+ if (ndev->phydev)
+ phy_stop(ndev->phydev);
+
+- icssg_class_disable(prueth->miig_rt, prueth_emac_slice(emac));
+-
+ if (emac->prueth->is_hsr_offload_mode)
+ __dev_mc_unsync(ndev, icssg_prueth_hsr_del_mcast);
+ else
+@@ -728,11 +815,9 @@ static int emac_ndo_stop(struct net_device *ndev)
+ /* Destroying the queued work in ndo_stop() */
+ cancel_delayed_work_sync(&emac->stats_work);
+
+- if (prueth->emacs_initialized == 1)
+- icss_iep_exit(emac->iep);
+-
+ /* stop PRUs */
+- prueth_emac_stop(emac);
++ if (prueth->emacs_initialized == 1)
++ prueth_emac_common_stop(prueth);
+
+ free_irq(emac->tx_ts_irq, emac);
+
+@@ -1010,10 +1095,11 @@ static void prueth_offload_fwd_mark_update(struct prueth *prueth)
+ }
+ }
+
+-static void prueth_emac_restart(struct prueth *prueth)
++static int prueth_emac_restart(struct prueth *prueth)
+ {
+ struct prueth_emac *emac0 = prueth->emac[PRUETH_MAC0];
+ struct prueth_emac *emac1 = prueth->emac[PRUETH_MAC1];
++ int ret;
+
+ /* Detach the net_device for both PRUeth ports */
+ if (netif_running(emac0->ndev))
+@@ -1022,36 +1108,46 @@ static void prueth_emac_restart(struct prueth *prueth)
+ netif_device_detach(emac1->ndev);
+
+ /* Disable both PRUeth ports */
+- icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
+- icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
++ ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
++ ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
++ if (ret)
++ return ret;
+
+ /* Stop both pru cores for both PRUeth ports */
+- prueth_emac_stop(emac0);
+- prueth->emacs_initialized--;
+- prueth_emac_stop(emac1);
+- prueth->emacs_initialized--;
++ ret = prueth_emac_common_stop(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to stop the firmwares");
++ return ret;
++ }
+
+ /* Start both pru cores for both PRUeth ports */
+- prueth_emac_start(prueth, emac0);
+- prueth->emacs_initialized++;
+- prueth_emac_start(prueth, emac1);
+- prueth->emacs_initialized++;
++ ret = prueth_emac_common_start(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to start the firmwares");
++ return ret;
++ }
+
+ /* Enable forwarding for both PRUeth ports */
+- icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
+- icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
++ ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
++ ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
+
+ /* Attach the net_device for both PRUeth ports */
+ netif_device_attach(emac0->ndev);
+ netif_device_attach(emac1->ndev);
++
++ return ret;
+ }
+
+ static void icssg_change_mode(struct prueth *prueth)
+ {
+ struct prueth_emac *emac;
+- int mac;
++ int mac, ret;
+
+- prueth_emac_restart(prueth);
++ ret = prueth_emac_restart(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
++ return;
++ }
+
+ for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
+ emac = prueth->emac[mac];
+@@ -1130,13 +1226,18 @@ static void prueth_netdevice_port_unlink(struct net_device *ndev)
+ {
+ struct prueth_emac *emac = netdev_priv(ndev);
+ struct prueth *prueth = emac->prueth;
++ int ret;
+
+ prueth->br_members &= ~BIT(emac->port_id);
+
+ if (prueth->is_switch_mode) {
+ prueth->is_switch_mode = false;
+ emac->port_vlan = 0;
+- prueth_emac_restart(prueth);
++ ret = prueth_emac_restart(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
++ return;
++ }
+ }
+
+ prueth_offload_fwd_mark_update(prueth);
+@@ -1185,6 +1286,7 @@ static void prueth_hsr_port_unlink(struct net_device *ndev)
+ struct prueth *prueth = emac->prueth;
+ struct prueth_emac *emac0;
+ struct prueth_emac *emac1;
++ int ret;
+
+ emac0 = prueth->emac[PRUETH_MAC0];
+ emac1 = prueth->emac[PRUETH_MAC1];
+@@ -1195,7 +1297,11 @@ static void prueth_hsr_port_unlink(struct net_device *ndev)
+ emac0->port_vlan = 0;
+ emac1->port_vlan = 0;
+ prueth->hsr_dev = NULL;
+- prueth_emac_restart(prueth);
++ ret = prueth_emac_restart(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
++ return;
++ }
+ netdev_dbg(ndev, "Disabling HSR Offload mode\n");
+ }
+ }
+@@ -1370,13 +1476,10 @@ static int prueth_probe(struct platform_device *pdev)
+ prueth->pa_stats = NULL;
+ }
+
+- if (eth0_node) {
++ if (eth0_node || eth1_node) {
+ ret = prueth_get_cores(prueth, ICSS_SLICE0, false);
+ if (ret)
+ goto put_cores;
+- }
+-
+- if (eth1_node) {
+ ret = prueth_get_cores(prueth, ICSS_SLICE1, false);
+ if (ret)
+ goto put_cores;
+@@ -1575,14 +1678,12 @@ static int prueth_probe(struct platform_device *pdev)
+ pruss_put(prueth->pruss);
+
+ put_cores:
+- if (eth1_node) {
+- prueth_put_cores(prueth, ICSS_SLICE1);
+- of_node_put(eth1_node);
+- }
+-
+- if (eth0_node) {
++ if (eth0_node || eth1_node) {
+ prueth_put_cores(prueth, ICSS_SLICE0);
+ of_node_put(eth0_node);
++
++ prueth_put_cores(prueth, ICSS_SLICE1);
++ of_node_put(eth1_node);
+ }
+
+ return ret;
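
The reworked start path above boots the PRU, RTU and TX_PRU cores slice by slice and, on failure, unwinds only the slices that fully started. A minimal userspace sketch of that partial-unwind idiom (boot_slice/shutdown_slice are illustrative stand-ins, not driver functions):

    #include <stdio.h>

    #define NUM_SLICES 2

    /* Stand-ins for the per-slice boot/shutdown steps. */
    static int boot_slice(int slice)
    {
        printf("boot %d\n", slice);
        return slice == 1 ? -1 : 0;     /* simulate failure on slice 1 */
    }

    static void shutdown_slice(int slice)
    {
        printf("shutdown %d\n", slice);
    }

    static int start_all(void)
    {
        int slice, ret = 0;

        for (slice = 0; slice < NUM_SLICES; slice++) {
            ret = boot_slice(slice);
            if (ret)
                goto unwind;            /* leave the failed slice untouched */
        }
        return 0;

    unwind:
        while (--slice >= 0)            /* only slices that fully started */
            shutdown_slice(slice);
        return ret;
    }

    int main(void)
    {
        return start_all() ? 1 : 0;
    }
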
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+index f5c1d473e9f991..5473315ea20406 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+@@ -140,7 +140,6 @@ struct prueth_rx_chn {
+ /* data for each emac port */
+ struct prueth_emac {
+ bool is_sr1;
+- bool fw_running;
+ struct prueth *prueth;
+ struct net_device *ndev;
+ u8 mac_addr[6];
+@@ -361,6 +360,8 @@ int icssg_set_port_state(struct prueth_emac *emac,
+ enum icssg_port_state_cmd state);
+ void icssg_config_set_speed(struct prueth_emac *emac);
+ void icssg_config_half_duplex(struct prueth_emac *emac);
++void icssg_init_emac_mode(struct prueth *prueth);
++void icssg_init_fw_offload_mode(struct prueth *prueth);
+
+ /* Buffer queue helpers */
+ int icssg_queue_pop(struct prueth *prueth, u8 queue);
+@@ -377,6 +378,7 @@ void icssg_vtbl_modify(struct prueth_emac *emac, u8 vid, u8 port_mask,
+ u8 untag_mask, bool add);
+ u16 icssg_get_pvid(struct prueth_emac *emac);
+ void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port);
++int emac_fdb_flow_id_updated(struct prueth_emac *emac);
+ #define prueth_napi_to_tx_chn(pnapi) \
+ container_of(pnapi, struct prueth_tx_chn, napi_tx)
+
+@@ -407,7 +409,6 @@ void emac_rx_timestamp(struct prueth_emac *emac,
+ struct sk_buff *skb, u32 *psdata);
+ enum netdev_tx icssg_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev);
+ irqreturn_t prueth_rx_irq(int irq, void *dev_id);
+-void prueth_emac_stop(struct prueth_emac *emac);
+ void prueth_cleanup_tx_ts(struct prueth_emac *emac);
+ int icssg_napi_rx_poll(struct napi_struct *napi_rx, int budget);
+ int prueth_prepare_rx_chan(struct prueth_emac *emac,
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c b/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
+index 292f04d29f4f7b..f88cdc8f012f12 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
+@@ -440,7 +440,6 @@ static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
+ goto halt_pru;
+ }
+
+- emac->fw_running = 1;
+ return 0;
+
+ halt_pru:
+@@ -449,6 +448,29 @@ static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
+ return ret;
+ }
+
++static void prueth_emac_stop(struct prueth_emac *emac)
++{
++ struct prueth *prueth = emac->prueth;
++ int slice;
++
++ switch (emac->port_id) {
++ case PRUETH_PORT_MII0:
++ slice = ICSS_SLICE0;
++ break;
++ case PRUETH_PORT_MII1:
++ slice = ICSS_SLICE1;
++ break;
++ default:
++ netdev_err(emac->ndev, "invalid port\n");
++ return;
++ }
++
++ if (!emac->is_sr1)
++ rproc_shutdown(prueth->txpru[slice]);
++ rproc_shutdown(prueth->rtu[slice]);
++ rproc_shutdown(prueth->pru[slice]);
++}
++
+ /**
+ * emac_ndo_open - EMAC device open
+ * @ndev: network adapter device
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 65b0a3115e14cd..64926240b0071d 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -432,10 +432,12 @@ struct kszphy_ptp_priv {
+ struct kszphy_priv {
+ struct kszphy_ptp_priv ptp_priv;
+ const struct kszphy_type *type;
++ struct clk *clk;
+ int led_mode;
+ u16 vct_ctrl1000;
+ bool rmii_ref_clk_sel;
+ bool rmii_ref_clk_sel_val;
++ bool clk_enable;
+ u64 stats[ARRAY_SIZE(kszphy_hw_stats)];
+ };
+
+@@ -2052,6 +2054,46 @@ static void kszphy_get_stats(struct phy_device *phydev,
+ data[i] = kszphy_get_stat(phydev, i);
+ }
+
++static void kszphy_enable_clk(struct phy_device *phydev)
++{
++ struct kszphy_priv *priv = phydev->priv;
++
++ if (!priv->clk_enable && priv->clk) {
++ clk_prepare_enable(priv->clk);
++ priv->clk_enable = true;
++ }
++}
++
++static void kszphy_disable_clk(struct phy_device *phydev)
++{
++ struct kszphy_priv *priv = phydev->priv;
++
++ if (priv->clk_enable && priv->clk) {
++ clk_disable_unprepare(priv->clk);
++ priv->clk_enable = false;
++ }
++}
++
++static int kszphy_generic_resume(struct phy_device *phydev)
++{
++ kszphy_enable_clk(phydev);
++
++ return genphy_resume(phydev);
++}
++
++static int kszphy_generic_suspend(struct phy_device *phydev)
++{
++ int ret;
++
++ ret = genphy_suspend(phydev);
++ if (ret)
++ return ret;
++
++ kszphy_disable_clk(phydev);
++
++ return 0;
++}
++
+ static int kszphy_suspend(struct phy_device *phydev)
+ {
+ /* Disable PHY Interrupts */
+@@ -2061,7 +2103,7 @@ static int kszphy_suspend(struct phy_device *phydev)
+ phydev->drv->config_intr(phydev);
+ }
+
+- return genphy_suspend(phydev);
++ return kszphy_generic_suspend(phydev);
+ }
+
+ static void kszphy_parse_led_mode(struct phy_device *phydev)
+@@ -2092,7 +2134,9 @@ static int kszphy_resume(struct phy_device *phydev)
+ {
+ int ret;
+
+- genphy_resume(phydev);
++ ret = kszphy_generic_resume(phydev);
++ if (ret)
++ return ret;
+
+ /* After switching from power-down to normal mode, an internal global
+ * reset is automatically generated. Wait a minimum of 1 ms before
+@@ -2114,6 +2158,24 @@ static int kszphy_resume(struct phy_device *phydev)
+ return 0;
+ }
+
++/* Because of errata DS80000700A, a receiver error can follow a software
++ * power down. The suspend and resume callbacks therefore only disable and
++ * enable the external RMII reference clock.
++ */
++static int ksz8041_resume(struct phy_device *phydev)
++{
++ kszphy_enable_clk(phydev);
++
++ return 0;
++}
++
++static int ksz8041_suspend(struct phy_device *phydev)
++{
++ kszphy_disable_clk(phydev);
++
++ return 0;
++}
++
+ static int ksz9477_resume(struct phy_device *phydev)
+ {
+ int ret;
+@@ -2161,7 +2223,10 @@ static int ksz8061_resume(struct phy_device *phydev)
+ if (!(ret & BMCR_PDOWN))
+ return 0;
+
+- genphy_resume(phydev);
++ ret = kszphy_generic_resume(phydev);
++ if (ret)
++ return ret;
++
+ usleep_range(1000, 2000);
+
+ /* Re-program the value after chip is reset. */
+@@ -2179,6 +2244,11 @@ static int ksz8061_resume(struct phy_device *phydev)
+ return 0;
+ }
+
++static int ksz8061_suspend(struct phy_device *phydev)
++{
++ return kszphy_suspend(phydev);
++}
++
+ static int kszphy_probe(struct phy_device *phydev)
+ {
+ const struct kszphy_type *type = phydev->drv->driver_data;
+@@ -2219,10 +2289,14 @@ static int kszphy_probe(struct phy_device *phydev)
+ } else if (!clk) {
+ /* unnamed clock from the generic ethernet-phy binding */
+ clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL);
+- if (IS_ERR(clk))
+- return PTR_ERR(clk);
+ }
+
++ if (IS_ERR(clk))
++ return PTR_ERR(clk);
++
++ clk_disable_unprepare(clk);
++ priv->clk = clk;
++
+ if (ksz8041_fiber_mode(phydev))
+ phydev->port = PORT_FIBRE;
+
+@@ -5292,6 +5366,21 @@ static int lan8841_probe(struct phy_device *phydev)
+ return 0;
+ }
+
++static int lan8804_resume(struct phy_device *phydev)
++{
++ return kszphy_resume(phydev);
++}
++
++static int lan8804_suspend(struct phy_device *phydev)
++{
++ return kszphy_generic_suspend(phydev);
++}
++
++static int lan8841_resume(struct phy_device *phydev)
++{
++ return kszphy_generic_resume(phydev);
++}
++
+ static int lan8841_suspend(struct phy_device *phydev)
+ {
+ struct kszphy_priv *priv = phydev->priv;
+@@ -5300,7 +5389,7 @@ static int lan8841_suspend(struct phy_device *phydev)
+ if (ptp_priv->ptp_clock)
+ ptp_cancel_worker_sync(ptp_priv->ptp_clock);
+
+- return genphy_suspend(phydev);
++ return kszphy_generic_suspend(phydev);
+ }
+
+ static struct phy_driver ksphy_driver[] = {
+@@ -5360,9 +5449,8 @@ static struct phy_driver ksphy_driver[] = {
+ .get_sset_count = kszphy_get_sset_count,
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+- /* No suspend/resume callbacks because of errata DS80000700A,
+- * receiver error following software power down.
+- */
++ .suspend = ksz8041_suspend,
++ .resume = ksz8041_resume,
+ }, {
+ .phy_id = PHY_ID_KSZ8041RNLI,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
+@@ -5438,7 +5526,7 @@ static struct phy_driver ksphy_driver[] = {
+ .soft_reset = genphy_soft_reset,
+ .config_intr = kszphy_config_intr,
+ .handle_interrupt = kszphy_handle_interrupt,
+- .suspend = kszphy_suspend,
++ .suspend = ksz8061_suspend,
+ .resume = ksz8061_resume,
+ }, {
+ .phy_id = PHY_ID_KSZ9021,
+@@ -5509,8 +5597,8 @@ static struct phy_driver ksphy_driver[] = {
+ .get_sset_count = kszphy_get_sset_count,
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+- .suspend = genphy_suspend,
+- .resume = kszphy_resume,
++ .suspend = lan8804_suspend,
++ .resume = lan8804_resume,
+ .config_intr = lan8804_config_intr,
+ .handle_interrupt = lan8804_handle_interrupt,
+ }, {
+@@ -5528,7 +5616,7 @@ static struct phy_driver ksphy_driver[] = {
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+ .suspend = lan8841_suspend,
+- .resume = genphy_resume,
++ .resume = lan8841_resume,
+ .cable_test_start = lan8814_cable_test_start,
+ .cable_test_get_status = ksz886x_cable_test_get_status,
+ }, {
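
The micrel changes above wrap clk_prepare_enable()/clk_disable_unprepare() behind a clk_enable flag so repeated suspend or resume calls cannot unbalance the clock's reference count. A small sketch of the same idempotent-toggle idea, assuming a simple refcounted gate (all names illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    struct clk_gate {
        int refcount;   /* models the clock framework's enable count */
        bool enabled;   /* driver-side "have we enabled it" flag */
    };

    static void gate_enable(struct clk_gate *g)
    {
        if (!g->enabled) {          /* idempotent: a second call is a no-op */
            g->refcount++;
            g->enabled = true;
        }
    }

    static void gate_disable(struct clk_gate *g)
    {
        if (g->enabled) {
            g->refcount--;
            g->enabled = false;
        }
    }

    int main(void)
    {
        struct clk_gate g = { 0, false };

        gate_enable(&g);
        gate_enable(&g);    /* e.g. resume seen twice */
        gate_disable(&g);
        printf("refcount=%d\n", g.refcount);    /* stays balanced at 0 */
        return 0;
    }
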
+diff --git a/drivers/net/pse-pd/tps23881.c b/drivers/net/pse-pd/tps23881.c
+index 5c4e88be46ee33..8797ca1a8a219c 100644
+--- a/drivers/net/pse-pd/tps23881.c
++++ b/drivers/net/pse-pd/tps23881.c
+@@ -64,15 +64,11 @@ static int tps23881_pi_enable(struct pse_controller_dev *pcdev, int id)
+ if (id >= TPS23881_MAX_CHANS)
+ return -ERANGE;
+
+- ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
+- if (ret < 0)
+- return ret;
+-
+ chan = priv->port[id].chan[0];
+ if (chan < 4)
+- val = (u16)(ret | BIT(chan));
++ val = BIT(chan);
+ else
+- val = (u16)(ret | BIT(chan + 4));
++ val = BIT(chan + 4);
+
+ if (priv->port[id].is_4p) {
+ chan = priv->port[id].chan[1];
+@@ -100,15 +96,11 @@ static int tps23881_pi_disable(struct pse_controller_dev *pcdev, int id)
+ if (id >= TPS23881_MAX_CHANS)
+ return -ERANGE;
+
+- ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
+- if (ret < 0)
+- return ret;
+-
+ chan = priv->port[id].chan[0];
+ if (chan < 4)
+- val = (u16)(ret | BIT(chan + 4));
++ val = BIT(chan + 4);
+ else
+- val = (u16)(ret | BIT(chan + 8));
++ val = BIT(chan + 8);
+
+ if (priv->port[id].is_4p) {
+ chan = priv->port[id].chan[1];
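
The tps23881 fix stops OR-ing the live power-status readback into the one-shot enable/disable value and writes only the bit for the requested channel; with the old read-modify-write, stale status bits could re-trigger other ports. A toy register model showing the difference (values illustrative):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t pw_status = 0x0003;     /* ports 0 and 1 currently on */

    /* Old form: echoes every set status bit back into the one-shot
     * enable register, re-triggering unrelated ports. */
    static uint16_t enable_val_rmw(int chan)
    {
        return pw_status | (uint16_t)(1u << chan);
    }

    /* Fixed form: write only the bit for the channel being changed. */
    static uint16_t enable_val(int chan)
    {
        return (uint16_t)(1u << chan);
    }

    int main(void)
    {
        printf("rmw:   0x%04x\n", enable_val_rmw(2));  /* 0x0007: touches 0,1,2 */
        printf("fixed: 0x%04x\n", enable_val(2));      /* 0x0004: touches only 2 */
        return 0;
    }
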
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 0c011d8f5d4db2..9fe7f704a2f7b8 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1365,6 +1365,9 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c0, 0)}, /* Telit FE910C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c4, 0)}, /* Telit FE910C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c8, 0)}, /* Telit FE910C04 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+index fa1be8c54d3c1a..c18c6e933f478e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+@@ -161,6 +161,7 @@ const struct iwl_cfg_trans_params iwl_gl_trans_cfg = {
+
+ const char iwl_bz_name[] = "Intel(R) TBD Bz device";
+ const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz";
++const char iwl_wh_name[] = "Intel(R) Wi-Fi 7 BE211 320MHz";
+ const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz";
+ const char iwl_mtp_name[] = "Intel(R) Wi-Fi 7 BE202 160MHz";
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 34c91deca57b1b..17721bb47e2511 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -545,6 +545,7 @@ extern const char iwl_ax231_name[];
+ extern const char iwl_ax411_name[];
+ extern const char iwl_bz_name[];
+ extern const char iwl_fm_name[];
++extern const char iwl_wh_name[];
+ extern const char iwl_gl_name[];
+ extern const char iwl_mtp_name[];
+ extern const char iwl_sc_name[];
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 1a814eb6743e80..6a4300c01d41d1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2871,6 +2871,7 @@ static void iwl_mvm_query_set_freqs(struct iwl_mvm *mvm,
+ int idx)
+ {
+ int i;
++ int n_channels = 0;
+
+ if (fw_has_api(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_API_SCAN_OFFLOAD_CHANS)) {
+@@ -2879,7 +2880,7 @@ static void iwl_mvm_query_set_freqs(struct iwl_mvm *mvm,
+
+ for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN * 8; i++)
+ if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
+- match->channels[match->n_channels++] =
++ match->channels[n_channels++] =
+ mvm->nd_channels[i]->center_freq;
+ } else {
+ struct iwl_scan_offload_profile_match_v1 *matches =
+@@ -2887,9 +2888,11 @@ static void iwl_mvm_query_set_freqs(struct iwl_mvm *mvm,
+
+ for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN_V1 * 8; i++)
+ if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
+- match->channels[match->n_channels++] =
++ match->channels[n_channels++] =
+ mvm->nd_channels[i]->center_freq;
+ }
++ /* We may have ended up with fewer channels than we allocated. */
++ match->n_channels = n_channels;
+ }
+
+ /**
+@@ -2970,6 +2973,8 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+ GFP_KERNEL);
+ if (!net_detect || !n_matches)
+ goto out_report_nd;
++ net_detect->n_matches = n_matches;
++ n_matches = 0;
+
+ for_each_set_bit(i, &matched_profiles, mvm->n_nd_match_sets) {
+ struct cfg80211_wowlan_nd_match *match;
+@@ -2983,8 +2988,9 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+ GFP_KERNEL);
+ if (!match)
+ goto out_report_nd;
++ match->n_channels = n_channels;
+
+- net_detect->matches[net_detect->n_matches++] = match;
++ net_detect->matches[n_matches++] = match;
+
+ /* We inverted the order of the SSIDs in the scan
+ * request, so invert the index here.
+@@ -2999,6 +3005,8 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+
+ iwl_mvm_query_set_freqs(mvm, d3_data->nd_results, match, i);
+ }
++ /* We may have fewer matches than we allocated. */
++ net_detect->n_matches = n_matches;
+
+ out_report_nd:
+ wakeup.net_detect = net_detect;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 805fb249a0c6a2..8fb2aa28224212 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1106,19 +1106,54 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),
+
+ /* Bz */
+-/* FIXME: need to change the naming according to the actual CRF */
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax201_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax211_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ iwl_cfg_bz, iwl_fm_name),
+
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_wh_name),
++
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax201_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax211_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ iwl_cfg_bz, iwl_fm_name),
+
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_wh_name),
++
+ /* Ga (Gl) */
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_mmio.c b/drivers/net/wwan/iosm/iosm_ipc_mmio.c
+index 63eb08c43c0517..6764c13530b9bd 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_mmio.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_mmio.c
+@@ -104,7 +104,7 @@ struct iosm_mmio *ipc_mmio_init(void __iomem *mmio, struct device *dev)
+ break;
+
+ msleep(20);
+- } while (retries-- > 0);
++ } while (--retries > 0);
+
+ if (!retries) {
+ dev_err(ipc_mmio->dev, "invalid exec stage %X", stage);
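
The iosm change swaps a post-decrement for a pre-decrement in the retry loop so the later `if (!retries)` test can actually see zero on timeout; with `retries-- > 0` the counter leaves the loop at -1 and the error path is skipped. A compact demonstration of both variants (standalone sketch, not driver code):

    #include <stdio.h>

    #define MAX_RETRIES 3

    static int poll_loop(int succeed_on, int pre_decrement)
    {
        int retries = MAX_RETRIES;
        int attempt = 0;

        do {
            if (++attempt == succeed_on)
                break;                  /* device reached the stage */
        } while (pre_decrement ? --retries > 0 : retries-- > 0);

        return retries;                 /* caller tests "!retries" */
    }

    int main(void)
    {
        /* Never succeeds. Post-decrement exits with retries == -1, so a
         * "!retries" timeout check is silently skipped. */
        printf("post-decrement: retries=%d\n", poll_loop(100, 0));
        /* Pre-decrement exits with retries == 0, as the check expects. */
        printf("pre-decrement:  retries=%d\n", poll_loop(100, 1));
        return 0;
    }
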
+diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+index 3931c7a13f5ab2..cbdbb91e8381fc 100644
+--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
++++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+@@ -104,14 +104,21 @@ void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state)
+ fsm_state_notify(ctl->md, state);
+ }
+
++static void fsm_release_command(struct kref *ref)
++{
++ struct t7xx_fsm_command *cmd = container_of(ref, typeof(*cmd), refcnt);
++
++ kfree(cmd);
++}
++
+ static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result)
+ {
+ if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+- *cmd->ret = result;
+- complete_all(cmd->done);
++ cmd->result = result;
++ complete_all(&cmd->done);
+ }
+
+- kfree(cmd);
++ kref_put(&cmd->refcnt, fsm_release_command);
+ }
+
+ static void fsm_del_kf_event(struct t7xx_fsm_event *event)
+@@ -475,7 +482,6 @@ static int fsm_main_thread(void *data)
+
+ int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
+ {
+- DECLARE_COMPLETION_ONSTACK(done);
+ struct t7xx_fsm_command *cmd;
+ unsigned long flags;
+ int ret;
+@@ -487,11 +493,13 @@ int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id
+ INIT_LIST_HEAD(&cmd->entry);
+ cmd->cmd_id = cmd_id;
+ cmd->flag = flag;
++ kref_init(&cmd->refcnt);
+ if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+- cmd->done = &done;
+- cmd->ret = &ret;
++ init_completion(&cmd->done);
++ kref_get(&cmd->refcnt);
+ }
+
++ kref_get(&cmd->refcnt);
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ list_add_tail(&cmd->entry, &ctl->command_queue);
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+@@ -501,11 +509,11 @@ int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id
+ if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+ unsigned long wait_ret;
+
+- wait_ret = wait_for_completion_timeout(&done,
++ wait_ret = wait_for_completion_timeout(&cmd->done,
+ msecs_to_jiffies(FSM_CMD_TIMEOUT_MS));
+- if (!wait_ret)
+- return -ETIMEDOUT;
+
++ ret = wait_ret ? cmd->result : -ETIMEDOUT;
++ kref_put(&cmd->refcnt, fsm_release_command);
+ return ret;
+ }
+
+diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+index 7b0a9baf488c18..6e0601bb752e51 100644
+--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.h
++++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+@@ -110,8 +110,9 @@ struct t7xx_fsm_command {
+ struct list_head entry;
+ enum t7xx_fsm_cmd_state cmd_id;
+ unsigned int flag;
+- struct completion *done;
+- int *ret;
++ struct completion done;
++ int result;
++ struct kref refcnt;
+ };
+
+ struct t7xx_fsm_notifier {
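
With the embedded completion, result field and kref, the t7xx command now has one reference held by the queue/worker and one by the waiting submitter, so whichever side finishes last frees it, and a timed-out waiter can no longer race with fsm_finish_command() writing through freed memory. A userspace analog of that handoff using C11 atomics (struct and function names are illustrative):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct cmd {
        atomic_int refcnt;
        int result;
    };

    static struct cmd *cmd_alloc(void)
    {
        struct cmd *c = calloc(1, sizeof(*c));

        if (c)
            atomic_init(&c->refcnt, 1);   /* submitter's reference */
        return c;
    }

    static void cmd_get(struct cmd *c)
    {
        atomic_fetch_add(&c->refcnt, 1);
    }

    static void cmd_put(struct cmd *c)
    {
        if (atomic_fetch_sub(&c->refcnt, 1) == 1) {
            printf("last ref dropped, result=%d\n", c->result);
            free(c);                      /* only the last reference frees */
        }
    }

    int main(void)
    {
        struct cmd *c = cmd_alloc();

        if (!c)
            return 1;
        cmd_get(c);         /* queue/worker takes its own reference */
        c->result = 42;     /* worker publishes the result */
        cmd_put(c);         /* worker side done */
        cmd_put(c);         /* waiter side done (even after a timeout) */
        return 0;
    }
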
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 093cb423f536be..61bba5513de05a 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -173,6 +173,11 @@ enum nvme_quirks {
+ * MSI (but not MSI-X) interrupts are broken and never fire.
+ */
+ NVME_QUIRK_BROKEN_MSI = (1 << 21),
++
++ /*
++ * Align dma pool segment size to 512 bytes
++ */
++ NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22),
+ };
+
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 55af3dfbc2607b..76b3f7b396c86b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2690,15 +2690,20 @@ static int nvme_disable_prepare_reset(struct nvme_dev *dev, bool shutdown)
+
+ static int nvme_setup_prp_pools(struct nvme_dev *dev)
+ {
++ size_t small_align = 256;
++
+ dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
+ NVME_CTRL_PAGE_SIZE,
+ NVME_CTRL_PAGE_SIZE, 0);
+ if (!dev->prp_page_pool)
+ return -ENOMEM;
+
++ if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
++ small_align = 512;
++
+ /* Optimisation for I/Os between 4k and 128k */
+ dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
+- 256, 256, 0);
++ 256, small_align, 0);
+ if (!dev->prp_small_pool) {
+ dma_pool_destroy(dev->prp_page_pool);
+ return -ENOMEM;
+@@ -3446,7 +3451,7 @@ static const struct pci_device_id nvme_id_table[] = {
+ { PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
+- .driver_data = NVME_QUIRK_QDEPTH_ONE },
++ .driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, },
+ { PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ NVME_QUIRK_BOGUS_NID, },
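
The NVMe quirk above only widens the alignment of the small PRP pool from 256 to 512 bytes; the segment size itself stays 256. A userspace sketch of picking the allocation alignment from such a quirk flag (posix_memalign standing in for dma_pool_create):

    #define _POSIX_C_SOURCE 200112L
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Pick the small-pool alignment: 256 by default, 512 under the quirk. */
    static void *alloc_prp_seg(size_t size, int quirk_align_512)
    {
        size_t align = quirk_align_512 ? 512 : 256;
        void *p = NULL;

        if (posix_memalign(&p, align, size))
            return NULL;
        return p;
    }

    int main(void)
    {
        void *a = alloc_prp_seg(256, 0);
        void *b = alloc_prp_seg(256, 1);

        printf("default: %p (mod 256 = %lu)\n", a,
               (unsigned long)((uintptr_t)a % 256));
        printf("quirk:   %p (mod 512 = %lu)\n", b,
               (unsigned long)((uintptr_t)b % 512));
        free(a);
        free(b);
        return 0;
    }
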
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 685e89b35d330d..cfbab198693b03 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -2227,12 +2227,17 @@ static ssize_t nvmet_root_discovery_nqn_store(struct config_item *item,
+ const char *page, size_t count)
+ {
+ struct list_head *entry;
++ char *old_nqn, *new_nqn;
+ size_t len;
+
+ len = strcspn(page, "\n");
+ if (!len || len > NVMF_NQN_FIELD_LEN - 1)
+ return -EINVAL;
+
++ new_nqn = kstrndup(page, len, GFP_KERNEL);
++ if (!new_nqn)
++ return -ENOMEM;
++
+ down_write(&nvmet_config_sem);
+ list_for_each(entry, &nvmet_subsystems_group.cg_children) {
+ struct config_item *item =
+@@ -2241,13 +2246,15 @@ static ssize_t nvmet_root_discovery_nqn_store(struct config_item *item,
+ if (!strncmp(config_item_name(item), page, len)) {
+ pr_err("duplicate NQN %s\n", config_item_name(item));
+ up_write(&nvmet_config_sem);
++ kfree(new_nqn);
+ return -EINVAL;
+ }
+ }
+- memset(nvmet_disc_subsys->subsysnqn, 0, NVMF_NQN_FIELD_LEN);
+- memcpy(nvmet_disc_subsys->subsysnqn, page, len);
++ old_nqn = nvmet_disc_subsys->subsysnqn;
++ nvmet_disc_subsys->subsysnqn = new_nqn;
+ up_write(&nvmet_config_sem);
+
++ kfree(old_nqn);
+ return len;
+ }
+
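
The nvmet configfs fix allocates the replacement NQN with kstrndup() before taking nvmet_config_sem, swaps the pointer under the lock, and frees the old buffer after dropping it, instead of memset/memcpy on the live buffer. A userspace analog of that allocate-swap-free pattern (pthread mutex standing in for the rwsem):

    #define _POSIX_C_SOURCE 200809L
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;
    static char *subsysnqn;

    static int set_nqn(const char *page, size_t len)
    {
        /* Allocate outside the lock: this may fail (or, in kernel, sleep). */
        char *new_nqn = strndup(page, len);
        char *old_nqn;

        if (!new_nqn)
            return -1;

        pthread_mutex_lock(&cfg_lock);
        old_nqn = subsysnqn;
        subsysnqn = new_nqn;    /* readers always see a complete string */
        pthread_mutex_unlock(&cfg_lock);

        free(old_nqn);          /* free the old buffer outside the lock */
        return 0;
    }

    int main(void)
    {
        const char *nqn = "nqn.2014-08.org.nvmexpress.discovery";

        set_nqn(nqn, strlen(nqn));
        printf("%s\n", subsysnqn);
        free(subsysnqn);
        return 0;
    }
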
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 737d0ae3d0b662..f384c72d955452 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -86,6 +86,7 @@ const struct regmap_config mcp23x08_regmap = {
+ .num_reg_defaults = ARRAY_SIZE(mcp23x08_defaults),
+ .cache_type = REGCACHE_FLAT,
+ .max_register = MCP_OLAT,
++ .disable_locking = true, /* mcp->lock protects the regmap */
+ };
+ EXPORT_SYMBOL_GPL(mcp23x08_regmap);
+
+@@ -132,6 +133,7 @@ const struct regmap_config mcp23x17_regmap = {
+ .num_reg_defaults = ARRAY_SIZE(mcp23x17_defaults),
+ .cache_type = REGCACHE_FLAT,
+ .val_format_endian = REGMAP_ENDIAN_LITTLE,
++ .disable_locking = true, /* mcp->lock protects the regmap */
+ };
+ EXPORT_SYMBOL_GPL(mcp23x17_regmap);
+
+@@ -228,7 +230,9 @@ static int mcp_pinconf_get(struct pinctrl_dev *pctldev, unsigned int pin,
+
+ switch (param) {
+ case PIN_CONFIG_BIAS_PULL_UP:
++ mutex_lock(&mcp->lock);
+ ret = mcp_read(mcp, MCP_GPPU, &data);
++ mutex_unlock(&mcp->lock);
+ if (ret < 0)
+ return ret;
+ status = (data & BIT(pin)) ? 1 : 0;
+@@ -257,7 +261,9 @@ static int mcp_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+
+ switch (param) {
+ case PIN_CONFIG_BIAS_PULL_UP:
++ mutex_lock(&mcp->lock);
+ ret = mcp_set_bit(mcp, MCP_GPPU, pin, arg);
++ mutex_unlock(&mcp->lock);
+ break;
+ default:
+ dev_dbg(mcp->dev, "Invalid config param %04x\n", param);
+diff --git a/drivers/platform/x86/hp/hp-wmi.c b/drivers/platform/x86/hp/hp-wmi.c
+index 8c05e0dd2a218e..3ba9c43d5516ae 100644
+--- a/drivers/platform/x86/hp/hp-wmi.c
++++ b/drivers/platform/x86/hp/hp-wmi.c
+@@ -64,7 +64,7 @@ static const char * const omen_thermal_profile_boards[] = {
+ "874A", "8603", "8604", "8748", "886B", "886C", "878A", "878B", "878C",
+ "88C8", "88CB", "8786", "8787", "8788", "88D1", "88D2", "88F4", "88FD",
+ "88F5", "88F6", "88F7", "88FE", "88FF", "8900", "8901", "8902", "8912",
+- "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42"
++ "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42", "8A15"
+ };
+
+ /* DMI Board names of Omen laptops that are specifically set to be thermal
+@@ -80,7 +80,7 @@ static const char * const omen_thermal_profile_force_v0_boards[] = {
+ * "balanced" when reaching zero.
+ */
+ static const char * const omen_timed_thermal_profile_boards[] = {
+- "8BAD", "8A42"
++ "8BAD", "8A42", "8A15"
+ };
+
+ /* DMI Board names of Victus laptops */
+diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c
+index 9d70146fd7420a..1a09f2dfb7bca0 100644
+--- a/drivers/platform/x86/mlx-platform.c
++++ b/drivers/platform/x86/mlx-platform.c
+@@ -6237,6 +6237,7 @@ mlxplat_pci_fpga_device_init(unsigned int device, const char *res_name, struct p
+ fail_pci_request_regions:
+ pci_disable_device(pci_dev);
+ fail_pci_enable_device:
++ pci_dev_put(pci_dev);
+ return err;
+ }
+
+@@ -6247,6 +6248,7 @@ mlxplat_pci_fpga_device_exit(struct pci_dev *pci_bridge,
+ iounmap(pci_bridge_addr);
+ pci_release_regions(pci_bridge);
+ pci_disable_device(pci_bridge);
++ pci_dev_put(pci_bridge);
+ }
+
+ static int
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 6371a9f765c139..2cfb2ac3f465aa 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -184,7 +184,8 @@ enum tpacpi_hkey_event_t {
+ */
+ TP_HKEY_EV_AMT_TOGGLE = 0x131a, /* Toggle AMT on/off */
+ TP_HKEY_EV_DOUBLETAP_TOGGLE = 0x131c, /* Toggle trackpoint doubletap on/off */
+- TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile */
++ TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile in 2024 systems */
++ TP_HKEY_EV_PROFILE_TOGGLE2 = 0x1401, /* Toggle platform profile in 2025 + systems */
+
+ /* Reasons for waking up from S3/S4 */
+ TP_HKEY_EV_WKUP_S3_UNDOCK = 0x2304, /* undock requested, S3 */
+@@ -11200,6 +11201,7 @@ static bool tpacpi_driver_event(const unsigned int hkey_event)
+ tp_features.trackpoint_doubletap = !tp_features.trackpoint_doubletap;
+ return true;
+ case TP_HKEY_EV_PROFILE_TOGGLE:
++ case TP_HKEY_EV_PROFILE_TOGGLE2:
+ platform_profile_cycle();
+ return true;
+ }
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 778ff187ac59e6..88819659df83a2 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -2141,6 +2141,11 @@ static int genpd_set_default_power_state(struct generic_pm_domain *genpd)
+ return 0;
+ }
+
++static void genpd_provider_release(struct device *dev)
++{
++ /* nothing to be done here */
++}
++
+ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+ {
+ struct genpd_governor_data *gd = NULL;
+@@ -2172,6 +2177,7 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+
+ genpd->gd = gd;
+ device_initialize(&genpd->dev);
++ genpd->dev.release = genpd_provider_release;
+
+ if (!genpd_is_dev_name_fw(genpd)) {
+ dev_set_name(&genpd->dev, "%s", genpd->name);
+diff --git a/drivers/pmdomain/imx/gpcv2.c b/drivers/pmdomain/imx/gpcv2.c
+index 3f0e6960f47fc2..e03c2cb39a6936 100644
+--- a/drivers/pmdomain/imx/gpcv2.c
++++ b/drivers/pmdomain/imx/gpcv2.c
+@@ -1458,12 +1458,12 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
+ .max_register = SZ_4K,
+ };
+ struct device *dev = &pdev->dev;
+- struct device_node *pgc_np;
++ struct device_node *pgc_np __free(device_node) =
++ of_get_child_by_name(dev->of_node, "pgc");
+ struct regmap *regmap;
+ void __iomem *base;
+ int ret;
+
+- pgc_np = of_get_child_by_name(dev->of_node, "pgc");
+ if (!pgc_np) {
+ dev_err(dev, "No power domains specified in DT\n");
+ return -EINVAL;
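
The gpcv2 conversion uses `__free(device_node)` so of_node_put() runs automatically on every exit path, and ties the declaration to its initializer so the cleanup can never see an uninitialized pointer. Underneath this is the GCC/Clang cleanup attribute; a plain-C sketch of the mechanism (the kernel's real macros live in linux/cleanup.h):

    #include <stdio.h>
    #include <stdlib.h>

    /* GCC/Clang extension: run a function when the variable leaves scope. */
    #define __free(fn) __attribute__((cleanup(fn)))

    static void free_charp(char **p)
    {
        free(*p);
        printf("cleanup ran\n");
    }

    static int use_node(int fail_early)
    {
        char *node __free(free_charp) = malloc(16);

        if (!node)
            return -1;
        if (fail_early)
            return -2;      /* no explicit free needed on this path */
        return 0;           /* nor on the success path */
    }

    int main(void)
    {
        use_node(1);
        use_node(0);
        return 0;
    }
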
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 1755ca026f08ff..73b1edd0531b43 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -43,6 +43,7 @@ static_assert(CQSPI_MAX_CHIPSELECT <= SPI_CS_CNT_MAX);
+ #define CQSPI_SLOW_SRAM BIT(4)
+ #define CQSPI_NEEDS_APB_AHB_HAZARD_WAR BIT(5)
+ #define CQSPI_RD_NO_IRQ BIT(6)
++#define CQSPI_DISABLE_STIG_MODE BIT(7)
+
+ /* Capabilities */
+ #define CQSPI_SUPPORTS_OCTAL BIT(0)
+@@ -103,6 +104,7 @@ struct cqspi_st {
+ bool apb_ahb_hazard;
+
+ bool is_jh7110; /* Flag for StarFive JH7110 SoC */
++ bool disable_stig_mode;
+
+ const struct cqspi_driver_platdata *ddata;
+ };
+@@ -1416,7 +1418,8 @@ static int cqspi_mem_process(struct spi_mem *mem, const struct spi_mem_op *op)
+ * reads, prefer STIG mode for such small reads.
+ */
+ if (!op->addr.nbytes ||
+- op->data.nbytes <= CQSPI_STIG_DATA_LEN_MAX)
++ (op->data.nbytes <= CQSPI_STIG_DATA_LEN_MAX &&
++ !cqspi->disable_stig_mode))
+ return cqspi_command_read(f_pdata, op);
+
+ return cqspi_read(f_pdata, op);
+@@ -1880,6 +1883,8 @@ static int cqspi_probe(struct platform_device *pdev)
+ if (ret)
+ goto probe_reset_failed;
+ }
++ if (ddata->quirks & CQSPI_DISABLE_STIG_MODE)
++ cqspi->disable_stig_mode = true;
+
+ if (of_device_is_compatible(pdev->dev.of_node,
+ "xlnx,versal-ospi-1.0")) {
+@@ -2043,7 +2048,8 @@ static const struct cqspi_driver_platdata intel_lgm_qspi = {
+ static const struct cqspi_driver_platdata socfpga_qspi = {
+ .quirks = CQSPI_DISABLE_DAC_MODE
+ | CQSPI_NO_SUPPORT_WR_COMPLETION
+- | CQSPI_SLOW_SRAM,
++ | CQSPI_SLOW_SRAM
++ | CQSPI_DISABLE_STIG_MODE,
+ };
+
+ static const struct cqspi_driver_platdata versal_ospi = {
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index 3d2376caedfa68..5fd0b39d8c703b 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -81,6 +81,9 @@ static struct btrfs_bio *btrfs_split_bio(struct btrfs_fs_info *fs_info,
+
+ bio = bio_split(&orig_bbio->bio, map_length >> SECTOR_SHIFT, GFP_NOFS,
+ &btrfs_clone_bioset);
++ if (IS_ERR(bio))
++ return ERR_CAST(bio);
++
+ bbio = btrfs_bio(bio);
+ btrfs_bio_init(bbio, fs_info, NULL, orig_bbio);
+ bbio->inode = orig_bbio->inode;
+@@ -355,7 +358,7 @@ static void btrfs_simple_end_io(struct bio *bio)
+ INIT_WORK(&bbio->end_io_work, btrfs_end_bio_work);
+ queue_work(btrfs_end_io_wq(fs_info, bio), &bbio->end_io_work);
+ } else {
+- if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
++ if (bio_is_zone_append(bio) && !bio->bi_status)
+ btrfs_record_physical_zoned(bbio);
+ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+ }
+@@ -398,7 +401,7 @@ static void btrfs_orig_write_end_io(struct bio *bio)
+ else
+ bio->bi_status = BLK_STS_OK;
+
+- if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
++ if (bio_is_zone_append(bio) && !bio->bi_status)
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+
+ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+@@ -412,7 +415,7 @@ static void btrfs_clone_write_end_io(struct bio *bio)
+ if (bio->bi_status) {
+ atomic_inc(&stripe->bioc->error);
+ btrfs_log_dev_io_error(bio, stripe->dev);
+- } else if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
++ } else if (bio_is_zone_append(bio)) {
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+ }
+
+@@ -684,7 +687,8 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ &bioc, &smap, &mirror_num);
+ if (error) {
+ ret = errno_to_blk_status(error);
+- goto fail;
++ btrfs_bio_counter_dec(fs_info);
++ goto end_bbio;
+ }
+
+ map_length = min(map_length, length);
+@@ -692,7 +696,15 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ map_length = btrfs_append_map_length(bbio, map_length);
+
+ if (map_length < length) {
+- bbio = btrfs_split_bio(fs_info, bbio, map_length);
++ struct btrfs_bio *split;
++
++ split = btrfs_split_bio(fs_info, bbio, map_length);
++ if (IS_ERR(split)) {
++ ret = errno_to_blk_status(PTR_ERR(split));
++ btrfs_bio_counter_dec(fs_info);
++ goto end_bbio;
++ }
++ bbio = split;
+ bio = &bbio->bio;
+ }
+
+@@ -766,6 +778,7 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+
+ btrfs_bio_end_io(remaining, ret);
+ }
++end_bbio:
+ btrfs_bio_end_io(bbio, ret);
+ /* Do not submit another chunk */
+ return true;
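
Since bio_split() can now hand back an error pointer, the btrfs caller must test IS_ERR() before touching the result and undo the bio counter on that path. A self-contained sketch of the ERR_PTR encoding this relies on (simplified from linux/err.h):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_ERRNO 4095

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
        /* Errors live in the top MAX_ERRNO values of the address space. */
        return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
    }

    static void *split_buf(int fail)
    {
        static char buf[32];

        return fail ? ERR_PTR(-ENOMEM) : (void *)buf;
    }

    int main(void)
    {
        void *p = split_buf(1);

        if (IS_ERR(p)) {
            printf("split failed: %ld\n", PTR_ERR(p));  /* -12 */
            return 1;   /* unwind here, e.g. drop the bio counter */
        }
        return 0;
    }
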
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 43b7b331b2da36..563f106774e592 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4264,6 +4264,15 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ * already the cleaner, but below we run all pending delayed iputs.
+ */
+ btrfs_flush_workqueue(fs_info->fixup_workers);
++ /*
++ * Similar case here, we have to wait for delalloc workers before we
++ * proceed below and stop the cleaner kthread, otherwise we trigger a
++ * use-after-free on the cleaner kthread task_struct when a delalloc
++ * worker running submit_compressed_extents() adds a delayed iput, which
++ * does a wake up on the cleaner kthread, which was already freed below
++ * when we call kthread_stop().
++ */
++ btrfs_flush_workqueue(fs_info->delalloc_workers);
+
+ /*
+ * After we parked the cleaner kthread, ordered extents may have
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4b3e256e0d0b88..b5cfb85af937fc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -10056,6 +10056,11 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ bsi.block_start = physical_block_start;
+ bsi.block_len = len;
+ }
++
++ if (fatal_signal_pending(current)) {
++ ret = -EINTR;
++ goto out;
++ }
+ }
+
+ if (bsi.block_len)
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index 2b0daced98ebb4..3404e7a30c330c 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -893,7 +893,7 @@ static int ocfs2_get_next_id(struct super_block *sb, struct kqid *qid)
+ int status = 0;
+
+ trace_ocfs2_get_next_id(from_kqid(&init_user_ns, *qid), type);
+- if (!sb_has_quota_loaded(sb, type)) {
++ if (!sb_has_quota_active(sb, type)) {
+ status = -ESRCH;
+ goto out;
+ }
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 73d3367c533b8a..2956d888c13145 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -867,6 +867,7 @@ static int ocfs2_local_free_info(struct super_block *sb, int type)
+ brelse(oinfo->dqi_libh);
+ brelse(oinfo->dqi_lqi_bh);
+ kfree(oinfo);
++ info->dqi_priv = NULL;
+ return status;
+ }
+
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 7eb010de39fe26..536b7dc4538182 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1810,7 +1810,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ }
+
+ for (; addr != end; addr += PAGE_SIZE, idx++) {
+- unsigned long cur_flags = flags;
++ u64 cur_flags = flags;
+ pagemap_entry_t pme;
+
+ if (folio && (flags & PM_PRESENT) &&
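
The pagemap fix matters on 32-bit kernels, where `unsigned long` is 32 bits and copying the u64 flags into it silently drops high-order PM_* bits such as PM_SOFT_DIRTY (bit 55). A two-line demonstration of the truncation:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t flags = 1ULL << 55;        /* PM_SOFT_DIRTY lives at bit 55 */
        uint32_t narrow = (uint32_t)flags;  /* what a 32-bit ulong keeps */

        printf("u64: %#llx\n", (unsigned long long)flags);
        printf("u32: %#x\n", narrow);       /* 0: the flag is silently lost */
        return 0;
    }
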
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index bf909c2f6b963b..0ceebde38f9fe0 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -2018,6 +2018,7 @@ exit_cifs(void)
+ destroy_workqueue(decrypt_wq);
+ destroy_workqueue(fileinfo_put_wq);
+ destroy_workqueue(serverclose_wq);
++ destroy_workqueue(cfid_put_wq);
+ destroy_workqueue(cifsiod_wq);
+ cifs_proc_clean();
+ }
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 7d01dd313351f7..04ffc5b158c3bf 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -4224,6 +4224,7 @@ static bool __query_dir(struct dir_context *ctx, const char *name, int namlen,
+ /* dot and dotdot entries are already reserved */
+ if (!strcmp(".", name) || !strcmp("..", name))
+ return true;
++ d_info->num_scan++;
+ if (ksmbd_share_veto_filename(priv->work->tcon->share_conf, name))
+ return true;
+ if (!match_pattern(name, namlen, priv->search_pattern))
+@@ -4384,8 +4385,17 @@ int smb2_query_dir(struct ksmbd_work *work)
+ query_dir_private.info_level = req->FileInformationClass;
+ dir_fp->readdir_data.private = &query_dir_private;
+ set_ctx_actor(&dir_fp->readdir_data.ctx, __query_dir);
+-
++again:
++ d_info.num_scan = 0;
+ rc = iterate_dir(dir_fp->filp, &dir_fp->readdir_data.ctx);
++ /*
++ * num_entry can be 0 if the directory iteration stops before reaching
++ * the end of the directory and no file matched the search
++ * pattern.
++ */
++ if (rc >= 0 && !d_info.num_entry && d_info.num_scan &&
++ d_info.out_buf_len > 0)
++ goto again;
+ /*
+ * req->OutputBufferLength is too small to contain even one entry.
+ * In this case, it immediately returns OutputBufferLength 0 to client.
+@@ -6006,15 +6016,13 @@ static int set_file_basic_info(struct ksmbd_file *fp,
+ attrs.ia_valid |= (ATTR_ATIME | ATTR_ATIME_SET);
+ }
+
+- attrs.ia_valid |= ATTR_CTIME;
+ if (file_info->ChangeTime)
+- attrs.ia_ctime = ksmbd_NTtimeToUnix(file_info->ChangeTime);
+- else
+- attrs.ia_ctime = inode_get_ctime(inode);
++ inode_set_ctime_to_ts(inode,
++ ksmbd_NTtimeToUnix(file_info->ChangeTime));
+
+ if (file_info->LastWriteTime) {
+ attrs.ia_mtime = ksmbd_NTtimeToUnix(file_info->LastWriteTime);
+- attrs.ia_valid |= (ATTR_MTIME | ATTR_MTIME_SET);
++ attrs.ia_valid |= (ATTR_MTIME | ATTR_MTIME_SET | ATTR_CTIME);
+ }
+
+ if (file_info->Attributes) {
+@@ -6056,8 +6064,6 @@ static int set_file_basic_info(struct ksmbd_file *fp,
+ return -EACCES;
+
+ inode_lock(inode);
+- inode_set_ctime_to_ts(inode, attrs.ia_ctime);
+- attrs.ia_valid &= ~ATTR_CTIME;
+ rc = notify_change(idmap, dentry, &attrs, NULL);
+ inode_unlock(inode);
+ }
+diff --git a/fs/smb/server/vfs.h b/fs/smb/server/vfs.h
+index cb76f4b5bafe8c..06903024a2d88b 100644
+--- a/fs/smb/server/vfs.h
++++ b/fs/smb/server/vfs.h
+@@ -43,6 +43,7 @@ struct ksmbd_dir_info {
+ char *rptr;
+ int name_len;
+ int out_buf_len;
++ int num_scan;
+ int num_entry;
+ int data_count;
+ int last_entry_offset;
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index faceadb040f9ac..66b7620a1b5333 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -677,6 +677,23 @@ static inline void bio_clear_polled(struct bio *bio)
+ bio->bi_opf &= ~REQ_POLLED;
+ }
+
++/**
++ * bio_is_zone_append - is this a zone append bio?
++ * @bio: bio to check
++ *
++ * Check if @bio is a zone append operation. Core block layer code and end_io
++ * handlers must use this instead of an open coded REQ_OP_ZONE_APPEND check
++ * because the block layer can rewrite REQ_OP_ZONE_APPEND to REQ_OP_WRITE if
++ * it is not natively supported.
++ */
++static inline bool bio_is_zone_append(struct bio *bio)
++{
++ if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
++ return false;
++ return bio_op(bio) == REQ_OP_ZONE_APPEND ||
++ bio_flagged(bio, BIO_EMULATES_ZONE_APPEND);
++}
++
+ struct bio *blk_next_bio(struct bio *bio, struct block_device *bdev,
+ unsigned int nr_pages, blk_opf_t opf, gfp_t gfp);
+ struct bio *bio_chain_and_submit(struct bio *prev, struct bio *new);
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 7d7578a8eac10b..5118caf8aa1c70 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -1121,7 +1121,7 @@ bool bpf_jit_supports_arena(void);
+ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena);
+ u64 bpf_arch_uaddress_limit(void);
+ void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie);
+-bool bpf_helper_changes_pkt_data(void *func);
++bool bpf_helper_changes_pkt_data(enum bpf_func_id func_id);
+
+ static inline bool bpf_dump_raw_ok(const struct cred *cred)
+ {
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index c1645c86eed969..d65b5d71b93bf8 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -585,13 +585,16 @@ static inline int vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci)
+ * vlan_get_protocol - get protocol EtherType.
+ * @skb: skbuff to query
+ * @type: first vlan protocol
++ * @mac_offset: offset of the MAC header within the skb data
+ * @depth: buffer to store length of eth and vlan tags in bytes
+ *
+ * Returns the EtherType of the packet, regardless of whether it is
+ * vlan encapsulated (normal or hardware accelerated) or not.
+ */
+-static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+- int *depth)
++static inline __be16 __vlan_get_protocol_offset(const struct sk_buff *skb,
++ __be16 type,
++ int mac_offset,
++ int *depth)
+ {
+ unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH;
+
+@@ -610,7 +613,8 @@ static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+ do {
+ struct vlan_hdr vhdr, *vh;
+
+- vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr);
++ vh = skb_header_pointer(skb, mac_offset + vlan_depth,
++ sizeof(vhdr), &vhdr);
+ if (unlikely(!vh || !--parse_depth))
+ return 0;
+
+@@ -625,6 +629,12 @@ static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+ return type;
+ }
+
++static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
++ int *depth)
++{
++ return __vlan_get_protocol_offset(skb, type, 0, depth);
++}
++
+ /**
+ * vlan_get_protocol - get protocol EtherType.
+ * @skb: skbuff to query
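
__vlan_get_protocol_offset() walks nested VLAN tags starting from an explicit MAC offset instead of assuming the header sits at the start of the linear data. A userspace sketch of the same walk over a raw buffer, assuming the standard 802.1Q layout (helper names and constants are local to the example):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ETH_ALEN       6
    #define VLAN_HLEN      4
    #define ETH_P_8021Q    0x8100
    #define VLAN_MAX_DEPTH 8

    /* Walk VLAN tags starting at mac_offset; return the inner EtherType. */
    static uint16_t vlan_proto_at(const uint8_t *pkt, size_t len,
                                  size_t mac_offset)
    {
        size_t off = mac_offset + 2 * ETH_ALEN;   /* skip dst + src MAC */
        int depth = VLAN_MAX_DEPTH;
        uint16_t type;

        if (off + 2 > len)
            return 0;
        memcpy(&type, pkt + off, 2);
        type = ntohs(type);
        off += 2;

        while (type == ETH_P_8021Q && depth--) {
            if (off + VLAN_HLEN > len)
                return 0;
            memcpy(&type, pkt + off + 2, 2);      /* encapsulated proto */
            type = ntohs(type);
            off += VLAN_HLEN;
        }
        return type;
    }

    int main(void)
    {
        /* dst MAC, src MAC, 0x8100 TPID, TCI, inner type 0x0800 (IPv4) */
        uint8_t pkt[] = { 0,0,0,0,0,0, 1,1,1,1,1,1,
                          0x81, 0x00, 0x00, 0x05, 0x08, 0x00 };

        printf("inner proto: %#06x\n", vlan_proto_at(pkt, sizeof(pkt), 0));
        return 0;
    }
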
+diff --git a/include/linux/memfd.h b/include/linux/memfd.h
+index 3f2cf339ceafd9..d437e30708502e 100644
+--- a/include/linux/memfd.h
++++ b/include/linux/memfd.h
+@@ -7,6 +7,7 @@
+ #ifdef CONFIG_MEMFD_CREATE
+ extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned int arg);
+ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx);
++unsigned int *memfd_file_seals_ptr(struct file *file);
+ #else
+ static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a)
+ {
+@@ -16,6 +17,19 @@ static inline struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
+ {
+ return ERR_PTR(-EINVAL);
+ }
++
++static inline unsigned int *memfd_file_seals_ptr(struct file *file)
++{
++ return NULL;
++}
+ #endif
+
++/* Retrieve memfd seals associated with the file, if any. */
++static inline unsigned int memfd_file_seals(struct file *file)
++{
++ unsigned int *sealsp = memfd_file_seals_ptr(file);
++
++ return sealsp ? *sealsp : 0;
++}
++
+ #endif /* __LINUX_MEMFD_H */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index e23c692a34c702..82c7056e27599e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -555,6 +555,7 @@ enum {
+ * creation/deletion on drivers rescan. Unset during device attach.
+ */
+ MLX5_PRIV_FLAGS_DETACH = 1 << 2,
++ MLX5_PRIV_FLAGS_SWITCH_LEGACY = 1 << 3,
+ };
+
+ struct mlx5_adev {
+@@ -1233,6 +1234,12 @@ static inline bool mlx5_core_is_vf(const struct mlx5_core_dev *dev)
+ return dev->coredev_type == MLX5_COREDEV_VF;
+ }
+
++static inline bool mlx5_core_same_coredev_type(const struct mlx5_core_dev *dev1,
++ const struct mlx5_core_dev *dev2)
++{
++ return dev1->coredev_type == dev2->coredev_type;
++}
++
+ static inline bool mlx5_core_is_ecpf(const struct mlx5_core_dev *dev)
+ {
+ return dev->caps.embedded_cpu;
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 96d369112bfa03..512e25c416ae29 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -2113,7 +2113,9 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
+ u8 migration_in_chunks[0x1];
+ u8 reserved_at_d1[0x1];
+ u8 sf_eq_usage[0x1];
+- u8 reserved_at_d3[0xd];
++ u8 reserved_at_d3[0x5];
++ u8 multiplane[0x1];
++ u8 reserved_at_d9[0x7];
+
+ u8 cross_vhca_object_to_object_supported[0x20];
+
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 61fff5d34ed532..8617adc6becd1f 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -3100,6 +3100,7 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc)
+ if (!pmd_ptlock_init(ptdesc))
+ return false;
+ __folio_set_pgtable(folio);
++ ptdesc_pmd_pts_init(ptdesc);
+ lruvec_stat_add_folio(folio, NR_PAGETABLE);
+ return true;
+ }
+@@ -4079,6 +4080,37 @@ void mem_dump_obj(void *object);
+ static inline void mem_dump_obj(void *object) {}
+ #endif
+
++static inline bool is_write_sealed(int seals)
++{
++ return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE);
++}
++
++/**
++ * is_readonly_sealed - Checks whether write-sealed but mapped read-only,
++ *                      in which case writes should be disallowed going
++ *                      forwards.
++ * @seals: the seals to check
++ * @vm_flags: the VMA flags to check
++ *
++ * Returns whether readonly sealed, in which case writes should be disallowed
++ * going forward.
++ */
++static inline bool is_readonly_sealed(int seals, vm_flags_t vm_flags)
++{
++ /*
++ * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
++ * MAP_SHARED and read-only, take care to not allow mprotect to
++ * revert protections on such mappings. Do this only for shared
++ * mappings. For private mappings, don't need to mask
++ * VM_MAYWRITE as we still want them to be COW-writable.
++ */
++ if (is_write_sealed(seals) &&
++ ((vm_flags & (VM_SHARED | VM_WRITE)) == VM_SHARED))
++ return true;
++
++ return false;
++}
++
+ /**
+ * seal_check_write - Check for F_SEAL_WRITE or F_SEAL_FUTURE_WRITE flags and
+ * handle them.
+@@ -4090,24 +4122,15 @@ static inline void mem_dump_obj(void *object) {}
+ */
+ static inline int seal_check_write(int seals, struct vm_area_struct *vma)
+ {
+- if (seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
+- /*
+- * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
+- * write seals are active.
+- */
+- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
+- return -EPERM;
+-
+- /*
+- * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
+- * MAP_SHARED and read-only, take care to not allow mprotect to
+- * revert protections on such mappings. Do this only for shared
+- * mappings. For private mappings, don't need to mask
+- * VM_MAYWRITE as we still want them to be COW-writable.
+- */
+- if (vma->vm_flags & VM_SHARED)
+- vm_flags_clear(vma, VM_MAYWRITE);
+- }
++ if (!is_write_sealed(seals))
++ return 0;
++
++ /*
++ * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
++ * write seals are active.
++ */
++ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
++ return -EPERM;
+
+ return 0;
+ }
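
Together, memfd_file_seals() and is_readonly_sealed() let mmap strip VM_MAYWRITE from read-only shared mappings of write-sealed memfds instead of rejecting them outright. The behavior is observable from userspace; a small demo, assuming a kernel with this fix applied (otherwise the read-only map may fail):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = memfd_create("demo", MFD_ALLOW_SEALING);

        if (fd < 0)
            return 1;
        if (ftruncate(fd, 4096) || fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE))
            return 1;

        /* A writable shared mapping of a write-sealed memfd must fail... */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        printf("rw map: %s\n", p == MAP_FAILED ? "rejected (expected)" : "ok");

        /* ...while a read-only shared mapping is still allowed. */
        p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        printf("ro map: %s\n", p == MAP_FAILED ? "rejected" : "ok (expected)");
        return 0;
    }
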
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 6e3bdf8e38bcae..6894de506b364f 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -445,6 +445,7 @@ FOLIO_MATCH(compound_head, _head_2a);
+ * @pt_index: Used for s390 gmap.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc only.
++ * @pt_share_count: Used for HugeTLB PMD page table share count.
+ * @_pt_pad_2: Padding to ensure proper alignment.
+ * @ptl: Lock for the page table.
+ * @__page_type: Same as page->page_type. Unused for page tables.
+@@ -471,6 +472,9 @@ struct ptdesc {
+ pgoff_t pt_index;
+ struct mm_struct *pt_mm;
+ atomic_t pt_frag_refcount;
++#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
++ atomic_t pt_share_count;
++#endif
+ };
+
+ union {
+@@ -516,6 +520,32 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+ const struct page *: (const struct ptdesc *)(p), \
+ struct page *: (struct ptdesc *)(p)))
+
++#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
++static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
++{
++ atomic_set(&ptdesc->pt_share_count, 0);
++}
++
++static inline void ptdesc_pmd_pts_inc(struct ptdesc *ptdesc)
++{
++ atomic_inc(&ptdesc->pt_share_count);
++}
++
++static inline void ptdesc_pmd_pts_dec(struct ptdesc *ptdesc)
++{
++ atomic_dec(&ptdesc->pt_share_count);
++}
++
++static inline int ptdesc_pmd_pts_count(struct ptdesc *ptdesc)
++{
++ return atomic_read(&ptdesc->pt_share_count);
++}
++#else
++static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
++{
++}
++#endif
++
+ /*
+ * Used for sizing the vmemmap region on some architectures
+ */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c95f7e6ba25514..ba7b52584770d7 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -804,7 +804,6 @@ struct hci_conn_params {
+ extern struct list_head hci_dev_list;
+ extern struct list_head hci_cb_list;
+ extern rwlock_t hci_dev_list_lock;
+-extern struct mutex hci_cb_list_lock;
+
+ #define hci_dev_set_flag(hdev, nr) set_bit((nr), (hdev)->dev_flags)
+ #define hci_dev_clear_flag(hdev, nr) clear_bit((nr), (hdev)->dev_flags)
+@@ -2007,24 +2006,47 @@ struct hci_cb {
+
+ char *name;
+
++ bool (*match) (struct hci_conn *conn);
+ void (*connect_cfm) (struct hci_conn *conn, __u8 status);
+ void (*disconn_cfm) (struct hci_conn *conn, __u8 status);
+ void (*security_cfm) (struct hci_conn *conn, __u8 status,
+- __u8 encrypt);
++ __u8 encrypt);
+ void (*key_change_cfm) (struct hci_conn *conn, __u8 status);
+ void (*role_switch_cfm) (struct hci_conn *conn, __u8 status, __u8 role);
+ };
+
++static inline void hci_cb_lookup(struct hci_conn *conn, struct list_head *list)
++{
++ struct hci_cb *cb, *cpy;
++
++ rcu_read_lock();
++ list_for_each_entry_rcu(cb, &hci_cb_list, list) {
++ if (cb->match && cb->match(conn)) {
++ cpy = kmalloc(sizeof(*cpy), GFP_ATOMIC);
++ if (!cpy)
++ break;
++
++ *cpy = *cb;
++ INIT_LIST_HEAD(&cpy->list);
++ list_add_rcu(&cpy->list, list);
++ }
++ }
++ rcu_read_unlock();
++}
++
+ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->connect_cfm)
+ cb->connect_cfm(conn, status);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->connect_cfm_cb)
+ conn->connect_cfm_cb(conn, status);
+@@ -2032,43 +2054,55 @@ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status)
+
+ static inline void hci_disconn_cfm(struct hci_conn *conn, __u8 reason)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->disconn_cfm)
+ cb->disconn_cfm(conn, reason);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->disconn_cfm_cb)
+ conn->disconn_cfm_cb(conn, reason);
+ }
+
+-static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status)
++static inline void hci_security_cfm(struct hci_conn *conn, __u8 status,
++ __u8 encrypt)
+ {
+- struct hci_cb *cb;
+- __u8 encrypt;
+-
+- if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
+- return;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
+
+- encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 0x01 : 0x00;
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->security_cfm)
+ cb->security_cfm(conn, status, encrypt);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->security_cfm_cb)
+ conn->security_cfm_cb(conn, status);
+ }
+
++static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status)
++{
++ __u8 encrypt;
++
++ if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
++ return;
++
++ encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 0x01 : 0x00;
++
++ hci_security_cfm(conn, status, encrypt);
++}
++
+ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct hci_cb *cb;
+ __u8 encrypt;
+
+ if (conn->state == BT_CONFIG) {
+@@ -2095,40 +2129,38 @@ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ conn->sec_level = conn->pending_sec_level;
+ }
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
+- if (cb->security_cfm)
+- cb->security_cfm(conn, status, encrypt);
+- }
+- mutex_unlock(&hci_cb_list_lock);
+-
+- if (conn->security_cfm_cb)
+- conn->security_cfm_cb(conn, status);
++ hci_security_cfm(conn, status, encrypt);
+ }
+
+ static inline void hci_key_change_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->key_change_cfm)
+ cb->key_change_cfm(conn, status);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+ }
+
+ static inline void hci_role_switch_cfm(struct hci_conn *conn, __u8 status,
+ __u8 role)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->role_switch_cfm)
+ cb->role_switch_cfm(conn, status, role);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+ }
+
+ static inline bool hci_bdaddr_is_rpa(bdaddr_t *bdaddr, u8 addr_type)
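
The hci_cb conversion above drops the global hci_cb_list_lock: the list is
walked under RCU, and hci_cb_lookup() copies the matching callbacks into a
private list so the *_cfm() helpers can invoke them with no lock held. A rough
userspace sketch of that snapshot-then-invoke pattern, using a plain mutex in
place of RCU and hypothetical names (not the kernel API):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct cb {
            void (*fn)(int event);
            struct cb *next;
    };

    static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct cb *cb_list;

    /* Copy the registered callbacks out under the lock... */
    static struct cb *snapshot_cbs(void)
    {
            struct cb *copy = NULL, *c, *n;

            pthread_mutex_lock(&cb_lock);
            for (c = cb_list; c; c = c->next) {
                    n = malloc(sizeof(*n));
                    if (!n)
                            break;
                    *n = *c;
                    n->next = copy;
                    copy = n;
            }
            pthread_mutex_unlock(&cb_lock);
            return copy;
    }

    /* ...then run them lock-free, so a callback may sleep or even
     * unregister itself without deadlocking on the list lock. */
    static void notify(int event)
    {
            struct cb *c = snapshot_cbs(), *tmp;

            while (c) {
                    c->fn(event);
                    tmp = c->next;
                    free(c);
                    c = tmp;
            }
    }

    static void print_cb(int event) { printf("event %d\n", event); }

    int main(void)
    {
            struct cb one = { .fn = print_cb, .next = NULL };

            cb_list = &one;
            notify(42);
            return 0;
    }
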
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 91ae20cb76485b..471c353d32a4a5 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -733,15 +733,18 @@ struct nft_set_ext_tmpl {
+ /**
+ * struct nft_set_ext - set extensions
+ *
+- * @genmask: generation mask
++ * @genmask: generation mask, but also flags (see NFT_SET_ELEM_DEAD_BIT)
+ * @offset: offsets of individual extension types
+ * @data: beginning of extension data
++ *
++ * This structure must be aligned to word size, otherwise atomic bitops
++ * on genmask field can cause alignment failure on some archs.
+ */
+ struct nft_set_ext {
+ u8 genmask;
+ u8 offset[NFT_SET_EXT_NUM];
+ char data[];
+-};
++} __aligned(BITS_PER_LONG / 8);
+
+ static inline void nft_set_ext_prepare(struct nft_set_ext_tmpl *tmpl)
+ {
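
The __aligned(BITS_PER_LONG / 8) annotation forces struct nft_set_ext onto a
word boundary because the genmask byte is manipulated with word-sized atomic
bitops. A small illustration of the same constraint, assuming GCC/Clang
attribute syntax and noting that BITS_PER_LONG / 8 is simply sizeof(long):

    #include <assert.h>
    #include <stdalign.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Without the aligned attribute this struct only needs 1-byte
     * alignment, so an instance embedded in other data could start at an
     * odd address, and a long-sized atomic bitop on it would fault on
     * strict-alignment architectures. */
    struct set_ext {
            uint8_t genmask;
            uint8_t offset[4];
            char data[];
    } __attribute__((aligned(sizeof(long))));

    static_assert(alignof(struct set_ext) == alignof(long),
                  "set_ext must be word aligned for atomic bitops");

    int main(void)
    {
            printf("alignof(struct set_ext) = %zu\n", alignof(struct set_ext));
            return 0;
    }
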
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index 94e8185c4795fe..3dc7a1551ac350 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -271,12 +271,6 @@ struct cs35l56_base {
+ struct gpio_desc *reset_gpio;
+ };
+
+-/* Temporary to avoid a build break with the HDA driver */
+-static inline int cs35l56_force_sync_asp1_registers_from_cache(struct cs35l56_base *cs35l56_base)
+-{
+- return 0;
+-}
+-
+ static inline bool cs35l56_is_otp_register(unsigned int reg)
+ {
+ return (reg >> 16) == 3;
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index d407576ddfb782..eec5eb7de8430e 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -139,6 +139,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ struct io_uring_buf_ring *br = bl->buf_ring;
+ __u16 tail, head = bl->head;
+ struct io_uring_buf *buf;
++ void __user *ret;
+
+ tail = smp_load_acquire(&br->tail);
+ if (unlikely(tail == head))
+@@ -153,6 +154,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ req->flags |= REQ_F_BUFFER_RING | REQ_F_BUFFERS_COMMIT;
+ req->buf_list = bl;
+ req->buf_index = buf->bid;
++ ret = u64_to_user_ptr(buf->addr);
+
+ if (issue_flags & IO_URING_F_UNLOCKED || !io_file_can_poll(req)) {
+ /*
+@@ -168,7 +170,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ io_kbuf_commit(req, bl, *len, 1);
+ req->buf_list = NULL;
+ }
+- return u64_to_user_ptr(buf->addr);
++ return ret;
+ }
+
+ void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 18507658a921d7..7f549be9abd1e6 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -748,6 +748,7 @@ static int io_recvmsg_prep_setup(struct io_kiocb *req)
+ if (req->opcode == IORING_OP_RECV) {
+ kmsg->msg.msg_name = NULL;
+ kmsg->msg.msg_namelen = 0;
++ kmsg->msg.msg_inq = 0;
+ kmsg->msg.msg_control = NULL;
+ kmsg->msg.msg_get_inq = 1;
+ kmsg->msg.msg_controllen = 0;
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 155938f1009313..39ad25d16ed404 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -979,6 +979,8 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ io_kbuf_recycle(req, issue_flags);
+ if (ret < 0)
+ req_set_fail(req);
++ } else if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
++ cflags = io_put_kbuf(req, ret, issue_flags);
+ } else {
+ /*
+ * Any successful return value will keep the multishot read
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 233ea78f8f1bd9..2b9c8c168a0ba3 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -539,6 +539,8 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
+
+ int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
+ {
++ int err;
++
+ /* Branch offsets can't overflow when program is shrinking, no need
+ * to call bpf_adj_branches(..., true) here
+ */
+@@ -546,7 +548,9 @@ int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
+ sizeof(struct bpf_insn) * (prog->len - off - cnt));
+ prog->len -= cnt;
+
+- return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));
++ err = bpf_adj_branches(prog, off, off + cnt, off, false);
++ WARN_ON_ONCE(err);
++ return err;
+ }
+
+ static void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
+@@ -2936,7 +2940,7 @@ void __weak bpf_jit_compile(struct bpf_prog *prog)
+ {
+ }
+
+-bool __weak bpf_helper_changes_pkt_data(void *func)
++bool __weak bpf_helper_changes_pkt_data(enum bpf_func_id func_id)
+ {
+ return false;
+ }
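
The bpf_remove_insns() change is worth pausing on: WARN_ON_ONCE() evaluates to
the truth value of its condition, so "return WARN_ON_ONCE(err)" hands callers
0 or 1 instead of the real error code. A tiny sketch of the difference, with a
hypothetical userspace stand-in for the kernel macro:

    #include <stdio.h>

    /* Rough stand-in: warn once, evaluate to !!(cond). */
    #define WARN_ON_ONCE(cond) ({                                           \
            static int warned;                                              \
            int __c = !!(cond);                                             \
            if (__c && !warned) {                                           \
                    warned = 1;                                             \
                    fprintf(stderr, "warn at %s:%d\n", __FILE__, __LINE__); \
            }                                                               \
            __c;                                                            \
    })

    static int do_adjust(void) { return -22; /* pretend -EINVAL */ }

    /* Old shape: the caller sees 1, not -22. */
    static int remove_bad(void) { return WARN_ON_ONCE(do_adjust()); }

    /* New shape: still warn, but propagate the real error. */
    static int remove_good(void)
    {
            int err = do_adjust();

            WARN_ON_ONCE(err);
            return err;
    }

    int main(void)
    {
            printf("bad=%d good=%d\n", remove_bad(), remove_good());
            return 0;
    }
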
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 767f1cb8c27e17..a0cab0d0252fab 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -10476,7 +10476,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ }
+
+ /* With LD_ABS/IND some JITs save/restore skb from r1. */
+- changes_data = bpf_helper_changes_pkt_data(fn->func);
++ changes_data = bpf_helper_changes_pkt_data(func_id);
+ if (changes_data && fn->arg1_type != ARG_PTR_TO_CTX) {
+ verbose(env, "kernel subsystem misconfigured func %s#%d: r1 != ctx\n",
+ func_id_name(func_id), func_id);
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index 28a6be6e64fdd7..187ba1b80bda16 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -166,7 +166,7 @@ static void kcov_remote_area_put(struct kcov_remote_area *area,
+ * Unlike in_serving_softirq(), this function returns false when called during
+ * a hardirq or an NMI that happened in the softirq context.
+ */
+-static inline bool in_softirq_really(void)
++static __always_inline bool in_softirq_really(void)
+ {
+ return in_serving_softirq() && !in_hardirq() && !in_nmi();
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 79bb18651cdb8b..40f915f893e2ed 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4367,7 +4367,7 @@ static void scx_ops_bypass(bool bypass)
+ * sees scx_rq_bypassing() before moving tasks to SCX.
+ */
+ if (!scx_enabled()) {
+- rq_unlock_irqrestore(rq, &rf);
++ rq_unlock(rq, &rf);
+ continue;
+ }
+
+@@ -6637,7 +6637,7 @@ __bpf_kfunc int bpf_iter_scx_dsq_new(struct bpf_iter_scx_dsq *it, u64 dsq_id,
+ return -ENOENT;
+
+ INIT_LIST_HEAD(&kit->cursor.node);
+- kit->cursor.flags |= SCX_DSQ_LNODE_ITER_CURSOR | flags;
++ kit->cursor.flags = SCX_DSQ_LNODE_ITER_CURSOR | flags;
+ kit->cursor.priv = READ_ONCE(kit->dsq->seq);
+
+ return 0;
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 72bcbfad53db04..c12335499ec91e 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -802,7 +802,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
+ #endif
+ {
+ for_each_set_bit(i, &bitmap, sizeof(bitmap) * BITS_PER_BYTE) {
+- struct fgraph_ops *gops = fgraph_array[i];
++ struct fgraph_ops *gops = READ_ONCE(fgraph_array[i]);
+
+ if (gops == &fgraph_stub)
+ continue;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 3dd3b97d8049ae..cd9dbfb3038330 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -883,16 +883,13 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
+ }
+
+ static struct fgraph_ops fprofiler_ops = {
+- .ops = {
+- .flags = FTRACE_OPS_FL_INITIALIZED,
+- INIT_OPS_HASH(fprofiler_ops.ops)
+- },
+ .entryfunc = &profile_graph_entry,
+ .retfunc = &profile_graph_return,
+ };
+
+ static int register_ftrace_profiler(void)
+ {
++ ftrace_ops_set_global_filter(&fprofiler_ops.ops);
+ return register_ftrace_graph(&fprofiler_ops);
+ }
+
+@@ -903,12 +900,11 @@ static void unregister_ftrace_profiler(void)
+ #else
+ static struct ftrace_ops ftrace_profile_ops __read_mostly = {
+ .func = function_profile_call,
+- .flags = FTRACE_OPS_FL_INITIALIZED,
+- INIT_OPS_HASH(ftrace_profile_ops)
+ };
+
+ static int register_ftrace_profiler(void)
+ {
++ ftrace_ops_set_global_filter(&ftrace_profile_ops);
+ return register_ftrace_function(&ftrace_profile_ops);
+ }
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 7149cd6fd4795e..ea9b44847ce6b7 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -364,6 +364,18 @@ static bool process_string(const char *fmt, int len, struct trace_event_call *ca
+ s = r + 1;
+ } while (s < e);
+
++ /*
++ * Check for arrays. If the argument has: foo[REC->val]
++ * then it is very likely that foo is an array of strings
++ * that are safe to use.
++ */
++ r = strstr(s, "[");
++ if (r && r < e) {
++ r = strstr(r, "REC->");
++ if (r && r < e)
++ return true;
++ }
++
+ /*
+ * If there's any strings in the argument consider this arg OK as it
+ * could be: REC->field ? "foo" : "bar" and we don't want to get into
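
The added array heuristic can be exercised on its own: within the argument
span [s, e), a "[" followed by "REC->" is taken to mean an array indexed by a
record field, e.g. foo[REC->val]. A standalone sketch of that check (the
function name here is made up, not the kernel's):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool looks_like_string_array(const char *s, const char *e)
    {
            const char *r = strstr(s, "[");

            if (r && r < e) {
                    r = strstr(r, "REC->");
                    if (r && r < e)
                            return true;
            }
            return false;
    }

    int main(void)
    {
            const char *arg = "type_names[REC->type]";

            printf("%s -> %d\n", arg,
                   looks_like_string_array(arg, arg + strlen(arg)));
            return 0;
    }
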
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 9949ffad8df09d..cee65cb4310816 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3680,23 +3680,27 @@ void workqueue_softirq_dead(unsigned int cpu)
+ * check_flush_dependency - check for flush dependency sanity
+ * @target_wq: workqueue being flushed
+ * @target_work: work item being flushed (NULL for workqueue flushes)
++ * @from_cancel: are we called from the work cancel path
+ *
+ * %current is trying to flush the whole @target_wq or @target_work on it.
+- * If @target_wq doesn't have %WQ_MEM_RECLAIM, verify that %current is not
+- * reclaiming memory or running on a workqueue which doesn't have
+- * %WQ_MEM_RECLAIM as that can break forward-progress guarantee leading to
+- * a deadlock.
++ * If this is not the cancel path (which implies the work being flushed is
++ * either already running, or will not run at all) and @target_wq doesn't
++ * have %WQ_MEM_RECLAIM, verify that %current is not reclaiming memory or
++ * running on a workqueue which doesn't have %WQ_MEM_RECLAIM, as that can
++ * break the forward-progress guarantee, leading to a deadlock.
+ */
+ static void check_flush_dependency(struct workqueue_struct *target_wq,
+- struct work_struct *target_work)
++ struct work_struct *target_work,
++ bool from_cancel)
+ {
+- work_func_t target_func = target_work ? target_work->func : NULL;
++ work_func_t target_func;
+ struct worker *worker;
+
+- if (target_wq->flags & WQ_MEM_RECLAIM)
++ if (from_cancel || target_wq->flags & WQ_MEM_RECLAIM)
+ return;
+
+ worker = current_wq_worker();
++ target_func = target_work ? target_work->func : NULL;
+
+ WARN_ONCE(current->flags & PF_MEMALLOC,
+ "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps",
+@@ -3966,7 +3970,7 @@ void __flush_workqueue(struct workqueue_struct *wq)
+ list_add_tail(&this_flusher.list, &wq->flusher_overflow);
+ }
+
+- check_flush_dependency(wq, NULL);
++ check_flush_dependency(wq, NULL, false);
+
+ mutex_unlock(&wq->mutex);
+
+@@ -4141,7 +4145,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
+ }
+
+ wq = pwq->wq;
+- check_flush_dependency(wq, work);
++ check_flush_dependency(wq, work, from_cancel);
+
+ insert_wq_barrier(pwq, barr, work, worker);
+ raw_spin_unlock_irq(&pool->lock);
+@@ -5627,6 +5631,7 @@ static void wq_adjust_max_active(struct workqueue_struct *wq)
+ } while (activated);
+ }
+
++__printf(1, 0)
+ static struct workqueue_struct *__alloc_workqueue(const char *fmt,
+ unsigned int flags,
+ int max_active, va_list args)
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 8d83e217271967..0cbe913634be4b 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -4367,6 +4367,7 @@ int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
+ ret = 1;
+ }
+ if (ret < 0 && range_lo > min) {
++ mas_reset(mas);
+ ret = mas_empty_area(mas, min, range_hi, 1);
+ if (ret == 0)
+ ret = 1;
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 511c3f61ab44c4..54f4dd8d549f06 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -868,6 +868,11 @@ static int damon_commit_schemes(struct damon_ctx *dst, struct damon_ctx *src)
+ NUMA_NO_NODE);
+ if (!new_scheme)
+ return -ENOMEM;
++ err = damos_commit(new_scheme, src_scheme);
++ if (err) {
++ damon_destroy_scheme(new_scheme);
++ return err;
++ }
+ damon_add_scheme(dst, new_scheme);
+ }
+ return 0;
+@@ -961,8 +966,11 @@ static int damon_commit_targets(
+ return -ENOMEM;
+ err = damon_commit_target(new_target, false,
+ src_target, damon_target_has_pid(src));
+- if (err)
++ if (err) {
++ damon_destroy_target(new_target);
+ return err;
++ }
++ damon_add_target(dst, new_target);
+ }
+ return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 5dc57b74a8fe9a..2fa87b9ecec6c7 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -7200,7 +7200,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ spte = hugetlb_walk(svma, saddr,
+ vma_mmu_pagesize(svma));
+ if (spte) {
+- get_page(virt_to_page(spte));
++ ptdesc_pmd_pts_inc(virt_to_ptdesc(spte));
+ break;
+ }
+ }
+@@ -7215,7 +7215,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ (pmd_t *)((unsigned long)spte & PAGE_MASK));
+ mm_inc_nr_pmds(mm);
+ } else {
+- put_page(virt_to_page(spte));
++ ptdesc_pmd_pts_dec(virt_to_ptdesc(spte));
+ }
+ spin_unlock(&mm->page_table_lock);
+ out:
+@@ -7227,10 +7227,6 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ /*
+ * unmap huge page backed by shared pte.
+ *
+- * Hugetlb pte page is ref counted at the time of mapping. If pte is shared
+- * indicated by page_count > 1, unmap is achieved by clearing pud and
+- * decrementing the ref count. If count == 1, the pte page is not shared.
+- *
+ * Called with page table lock held.
+ *
+ * returns: 1 successfully unmapped a shared pte page
+@@ -7239,18 +7235,20 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
++ unsigned long sz = huge_page_size(hstate_vma(vma));
+ pgd_t *pgd = pgd_offset(mm, addr);
+ p4d_t *p4d = p4d_offset(pgd, addr);
+ pud_t *pud = pud_offset(p4d, addr);
+
+ i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+ hugetlb_vma_assert_locked(vma);
+- BUG_ON(page_count(virt_to_page(ptep)) == 0);
+- if (page_count(virt_to_page(ptep)) == 1)
++ if (sz != PMD_SIZE)
++ return 0;
++ if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep)))
+ return 0;
+
+ pud_clear(pud);
+- put_page(virt_to_page(ptep));
++ ptdesc_pmd_pts_dec(virt_to_ptdesc(ptep));
+ mm_dec_nr_pmds(mm);
+ return 1;
+ }
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 0400f5e8ac60de..74f5f4c51ab8c8 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -373,7 +373,7 @@ static void print_unreferenced(struct seq_file *seq,
+
+ for (i = 0; i < nr_entries; i++) {
+ void *ptr = (void *)entries[i];
+- warn_or_seq_printf(seq, " [<%pK>] %pS\n", ptr, ptr);
++ warn_or_seq_printf(seq, " %pS\n", ptr);
+ }
+ }
+
+diff --git a/mm/memfd.c b/mm/memfd.c
+index c17c3ea701a17e..35a370d75c9ad7 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -170,7 +170,7 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ return error;
+ }
+
+-static unsigned int *memfd_file_seals_ptr(struct file *file)
++unsigned int *memfd_file_seals_ptr(struct file *file)
+ {
+ if (shmem_file(file))
+ return &SHMEM_I(file_inode(file))->seals;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 7fb4c1e97175f9..6183805f6f9e6e 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -47,6 +47,7 @@
+ #include <linux/oom.h>
+ #include <linux/sched/mm.h>
+ #include <linux/ksm.h>
++#include <linux/memfd.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/cacheflush.h>
+@@ -368,6 +369,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+
+ if (file) {
+ struct inode *inode = file_inode(file);
++ unsigned int seals = memfd_file_seals(file);
+ unsigned long flags_mask;
+
+ if (!file_mmap_ok(file, inode, pgoff, len))
+@@ -408,6 +410,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ vm_flags |= VM_SHARED | VM_MAYSHARE;
+ if (!(file->f_mode & FMODE_WRITE))
+ vm_flags &= ~(VM_MAYWRITE | VM_SHARED);
++ else if (is_readonly_sealed(seals, vm_flags))
++ vm_flags &= ~VM_MAYWRITE;
+ fallthrough;
+ case MAP_PRIVATE:
+ if (!(file->f_mode & FMODE_READ))
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 99fdb2b5b56862..bf79275060f3be 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -641,7 +641,11 @@ void page_cache_async_ra(struct readahead_control *ractl,
+ 1UL << order);
+ if (index == expected) {
+ ra->start += ra->size;
+- ra->size = get_next_ra_size(ra, max_pages);
++ /*
++ * In the case of MADV_HUGEPAGE, the actual size might exceed
++ * the readahead window.
++ */
++ ra->size = max(ra->size, get_next_ra_size(ra, max_pages));
+ ra->async_size = ra->size;
+ goto readit;
+ }
+diff --git a/mm/shmem.c b/mm/shmem.c
+index b03ced0c3d4858..dd4eb11c84b59e 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1527,7 +1527,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
+ !shmem_falloc->waitq &&
+ index >= shmem_falloc->start &&
+ index < shmem_falloc->next)
+- shmem_falloc->nr_unswapped++;
++ shmem_falloc->nr_unswapped += nr_pages;
+ else
+ shmem_falloc = NULL;
+ spin_unlock(&inode->i_lock);
+@@ -1664,6 +1664,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ unsigned long mask = READ_ONCE(huge_shmem_orders_always);
+ unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
+ unsigned long vm_flags = vma ? vma->vm_flags : 0;
++ pgoff_t aligned_index;
+ bool global_huge;
+ loff_t i_size;
+ int order;
+@@ -1698,9 +1699,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ /* Allow mTHP that will be fully within i_size. */
+ order = highest_order(within_size_orders);
+ while (within_size_orders) {
+- index = round_up(index + 1, order);
++ aligned_index = round_up(index + 1, 1 << order);
+ i_size = round_up(i_size_read(inode), PAGE_SIZE);
+- if (i_size >> PAGE_SHIFT >= index) {
++ if (i_size >> PAGE_SHIFT >= aligned_index) {
+ mask |= within_size_orders;
+ break;
+ }
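
The shmem fix above corrects a classic power-of-two mix-up: round_up() takes
the alignment itself, and the old code passed the order (the log2 exponent)
instead of 1 << order. A quick demonstration with a local copy of the kernel's
round_up() macro (power-of-two alignment assumed):

    #include <stdio.h>

    #define __round_mask(x, y) ((__typeof__(x))((y) - 1))
    #define round_up(x, y) ((((x) - 1) | __round_mask(x, y)) + 1)

    int main(void)
    {
            unsigned long index = 100;
            int order = 4;          /* a 16-page mTHP */

            /* Buggy: aligns to 4 (the exponent), not to 16 pages. */
            printf("order:      %lu\n", round_up(index + 1, order));
            /* Fixed: aligns to 1 << order == 16 pages. */
            printf("1 << order: %lu\n", round_up(index + 1, 1UL << order));
            return 0;
    }
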
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 28ba2b06fc7dc2..67a680e4b484d7 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -374,7 +374,14 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
+ if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
+ nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
+ zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
+-
++ /*
++ * If there are no reclaimable file-backed or anonymous pages,
++ * ensure zones with sufficient free pages are not skipped.
++ * This prevents zones like DMA32 from being ignored in reclaim
++ * scenarios where they can still help alleviate memory pressure.
++ */
++ if (nr == 0)
++ nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
+ return nr;
+ }
+
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 72439764186ed2..b5553c08e73162 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -57,7 +57,6 @@ DEFINE_RWLOCK(hci_dev_list_lock);
+
+ /* HCI callback list */
+ LIST_HEAD(hci_cb_list);
+-DEFINE_MUTEX(hci_cb_list_lock);
+
+ /* HCI ID Numbering */
+ static DEFINE_IDA(hci_index_ida);
+@@ -2993,9 +2992,7 @@ int hci_register_cb(struct hci_cb *cb)
+ {
+ BT_DBG("%p name %s", cb, cb->name);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_add_tail(&cb->list, &hci_cb_list);
+- mutex_unlock(&hci_cb_list_lock);
++ list_add_tail_rcu(&cb->list, &hci_cb_list);
+
+ return 0;
+ }
+@@ -3005,9 +3002,8 @@ int hci_unregister_cb(struct hci_cb *cb)
+ {
+ BT_DBG("%p name %s", cb, cb->name);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_del(&cb->list);
+- mutex_unlock(&hci_cb_list_lock);
++ list_del_rcu(&cb->list);
++ synchronize_rcu();
+
+ return 0;
+ }
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 644b606743e212..bda2f2da7d7311 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -2137,6 +2137,11 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ return HCI_LM_ACCEPT;
+ }
+
++static bool iso_match(struct hci_conn *hcon)
++{
++ return hcon->type == ISO_LINK || hcon->type == LE_LINK;
++}
++
+ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
+ if (hcon->type != ISO_LINK) {
+@@ -2318,6 +2323,7 @@ void iso_recv(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+
+ static struct hci_cb iso_cb = {
+ .name = "ISO",
++ .match = iso_match,
+ .connect_cfm = iso_connect_cfm,
+ .disconn_cfm = iso_disconn_cfm,
+ };
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 6544c1ed714344..27b4c4a2ba1fdd 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -7217,6 +7217,11 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
+ return NULL;
+ }
+
++static bool l2cap_match(struct hci_conn *hcon)
++{
++ return hcon->type == ACL_LINK || hcon->type == LE_LINK;
++}
++
+ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ {
+ struct hci_dev *hdev = hcon->hdev;
+@@ -7224,9 +7229,6 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ struct l2cap_chan *pchan;
+ u8 dst_type;
+
+- if (hcon->type != ACL_LINK && hcon->type != LE_LINK)
+- return;
+-
+ BT_DBG("hcon %p bdaddr %pMR status %d", hcon, &hcon->dst, status);
+
+ if (status) {
+@@ -7291,9 +7293,6 @@ int l2cap_disconn_ind(struct hci_conn *hcon)
+
+ static void l2cap_disconn_cfm(struct hci_conn *hcon, u8 reason)
+ {
+- if (hcon->type != ACL_LINK && hcon->type != LE_LINK)
+- return;
+-
+ BT_DBG("hcon %p reason %d", hcon, reason);
+
+ l2cap_conn_del(hcon, bt_to_errno(reason));
+@@ -7572,6 +7571,7 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+
+ static struct hci_cb l2cap_cb = {
+ .name = "L2CAP",
++ .match = l2cap_match,
+ .connect_cfm = l2cap_connect_cfm,
+ .disconn_cfm = l2cap_disconn_cfm,
+ .security_cfm = l2cap_security_cfm,
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index ad5177e3a69b77..4c56ca5a216c6f 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -2134,6 +2134,11 @@ static int rfcomm_run(void *unused)
+ return 0;
+ }
+
++static bool rfcomm_match(struct hci_conn *hcon)
++{
++ return hcon->type == ACL_LINK;
++}
++
+ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt)
+ {
+ struct rfcomm_session *s;
+@@ -2180,6 +2185,7 @@ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt)
+
+ static struct hci_cb rfcomm_cb = {
+ .name = "RFCOMM",
++ .match = rfcomm_match,
+ .security_cfm = rfcomm_security_cfm
+ };
+
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index b872a2ca3ff38b..071c404c790af9 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -1355,11 +1355,13 @@ int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ return lm;
+ }
+
+-static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
++static bool sco_match(struct hci_conn *hcon)
+ {
+- if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK)
+- return;
++ return hcon->type == SCO_LINK || hcon->type == ESCO_LINK;
++}
+
++static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
++{
+ BT_DBG("hcon %p bdaddr %pMR status %u", hcon, &hcon->dst, status);
+
+ if (!status) {
+@@ -1374,9 +1376,6 @@ static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
+
+ static void sco_disconn_cfm(struct hci_conn *hcon, __u8 reason)
+ {
+- if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK)
+- return;
+-
+ BT_DBG("hcon %p reason %d", hcon, reason);
+
+ sco_conn_del(hcon, bt_to_errno(reason));
+@@ -1402,6 +1401,7 @@ void sco_recv_scodata(struct hci_conn *hcon, struct sk_buff *skb)
+
+ static struct hci_cb sco_cb = {
+ .name = "SCO",
++ .match = sco_match,
+ .connect_cfm = sco_connect_cfm,
+ .disconn_cfm = sco_disconn_cfm,
+ };
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 8453e14d301b63..f3fa8353d262b0 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3640,8 +3640,10 @@ int skb_csum_hwoffload_help(struct sk_buff *skb,
+
+ if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
+ if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) &&
+- skb_network_header_len(skb) != sizeof(struct ipv6hdr))
++ skb_network_header_len(skb) != sizeof(struct ipv6hdr) &&
++ !ipv6_has_hopopt_jumbo(skb))
+ goto sw_checksum;
++
+ switch (skb->csum_offset) {
+ case offsetof(struct tcphdr, check):
+ case offsetof(struct udphdr, check):
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 55495063621d6c..54a53fae9e98f5 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -7918,42 +7918,37 @@ static const struct bpf_func_proto bpf_tcp_raw_check_syncookie_ipv6_proto = {
+
+ #endif /* CONFIG_INET */
+
+-bool bpf_helper_changes_pkt_data(void *func)
+-{
+- if (func == bpf_skb_vlan_push ||
+- func == bpf_skb_vlan_pop ||
+- func == bpf_skb_store_bytes ||
+- func == bpf_skb_change_proto ||
+- func == bpf_skb_change_head ||
+- func == sk_skb_change_head ||
+- func == bpf_skb_change_tail ||
+- func == sk_skb_change_tail ||
+- func == bpf_skb_adjust_room ||
+- func == sk_skb_adjust_room ||
+- func == bpf_skb_pull_data ||
+- func == sk_skb_pull_data ||
+- func == bpf_clone_redirect ||
+- func == bpf_l3_csum_replace ||
+- func == bpf_l4_csum_replace ||
+- func == bpf_xdp_adjust_head ||
+- func == bpf_xdp_adjust_meta ||
+- func == bpf_msg_pull_data ||
+- func == bpf_msg_push_data ||
+- func == bpf_msg_pop_data ||
+- func == bpf_xdp_adjust_tail ||
+-#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
+- func == bpf_lwt_seg6_store_bytes ||
+- func == bpf_lwt_seg6_adjust_srh ||
+- func == bpf_lwt_seg6_action ||
+-#endif
+-#ifdef CONFIG_INET
+- func == bpf_sock_ops_store_hdr_opt ||
+-#endif
+- func == bpf_lwt_in_push_encap ||
+- func == bpf_lwt_xmit_push_encap)
++bool bpf_helper_changes_pkt_data(enum bpf_func_id func_id)
++{
++ switch (func_id) {
++ case BPF_FUNC_clone_redirect:
++ case BPF_FUNC_l3_csum_replace:
++ case BPF_FUNC_l4_csum_replace:
++ case BPF_FUNC_lwt_push_encap:
++ case BPF_FUNC_lwt_seg6_action:
++ case BPF_FUNC_lwt_seg6_adjust_srh:
++ case BPF_FUNC_lwt_seg6_store_bytes:
++ case BPF_FUNC_msg_pop_data:
++ case BPF_FUNC_msg_pull_data:
++ case BPF_FUNC_msg_push_data:
++ case BPF_FUNC_skb_adjust_room:
++ case BPF_FUNC_skb_change_head:
++ case BPF_FUNC_skb_change_proto:
++ case BPF_FUNC_skb_change_tail:
++ case BPF_FUNC_skb_pull_data:
++ case BPF_FUNC_skb_store_bytes:
++ case BPF_FUNC_skb_vlan_pop:
++ case BPF_FUNC_skb_vlan_push:
++ case BPF_FUNC_store_hdr_opt:
++ case BPF_FUNC_xdp_adjust_head:
++ case BPF_FUNC_xdp_adjust_meta:
++ case BPF_FUNC_xdp_adjust_tail:
++ /* tail-called program could call any of the above */
++ case BPF_FUNC_tail_call:
+ return true;
+-
+- return false;
++ default:
++ return false;
++ }
+ }
+
+ const struct bpf_func_proto bpf_event_output_data_proto __weak;
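
Rewriting bpf_helper_changes_pkt_data() as a switch over BPF_FUNC_* IDs
removes a long chain of function-pointer comparisons and lets one ID cover
helpers with several implementations (and the new BPF_FUNC_tail_call case
covers anything a tail-called program might do). The shape of the refactor,
with hypothetical IDs:

    #include <stdbool.h>
    #include <stdio.h>

    enum func_id { FUNC_READ, FUNC_ADJUST_HEAD, FUNC_ADJUST_TAIL, FUNC_TAIL_CALL };

    /* Classify by stable enum ID, not by comparing code addresses. */
    static bool changes_pkt_data(enum func_id id)
    {
            switch (id) {
            case FUNC_ADJUST_HEAD:
            case FUNC_ADJUST_TAIL:
            /* a tail-called program could call any of the above */
            case FUNC_TAIL_CALL:
                    return true;
            default:
                    return false;
            }
    }

    int main(void)
    {
            printf("read=%d tail_call=%d\n",
                   changes_pkt_data(FUNC_READ), changes_pkt_data(FUNC_TAIL_CALL));
            return 0;
    }
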
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 7ce22f40db5b04..d58270b48cb2cf 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -228,8 +228,12 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ rcu_read_unlock();
+ rtnl_unlock();
+
+- if (err)
++ if (err) {
++ goto err_free_msg;
++ } else if (!rsp->len) {
++ err = -ENOENT;
+ goto err_free_msg;
++ }
+
+ return genlmsg_reply(rsp, info);
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index da50df485090ff..a83f64a1d96a29 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1300,7 +1300,10 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
+ sk->sk_reuse = (valbool ? SK_CAN_REUSE : SK_NO_REUSE);
+ break;
+ case SO_REUSEPORT:
+- sk->sk_reuseport = valbool;
++ if (valbool && !sk_is_inet(sk))
++ ret = -EOPNOTSUPP;
++ else
++ sk->sk_reuseport = valbool;
+ break;
+ case SO_DONTROUTE:
+ sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 25505f9b724c33..09b73acf037ae2 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -294,7 +294,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+
+ ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr,
+ iph->saddr, tunnel->parms.o_key,
+- iph->tos & INET_DSCP_MASK, dev_net(dev),
++ iph->tos & INET_DSCP_MASK, tunnel->net,
+ tunnel->parms.link, tunnel->fwmark, 0, 0);
+ rt = ip_route_output_key(tunnel->net, &fl4);
+
+@@ -611,7 +611,7 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ }
+ ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src,
+ tunnel_id_to_key32(key->tun_id),
+- tos & INET_DSCP_MASK, dev_net(dev), 0, skb->mark,
++ tos & INET_DSCP_MASK, tunnel->net, 0, skb->mark,
+ skb_get_hash(skb), key->flow_flags);
+
+ if (!tunnel_hlen)
+@@ -774,7 +774,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+
+ ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr,
+ tunnel->parms.o_key, tos & INET_DSCP_MASK,
+- dev_net(dev), READ_ONCE(tunnel->parms.link),
++ tunnel->net, READ_ONCE(tunnel->parms.link),
+ tunnel->fwmark, skb_get_hash(skb), 0);
+
+ if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 2d844e1f867f0a..2d43b29da15e20 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -7328,6 +7328,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req,
+ req->timeout))) {
+ reqsk_free(req);
++ dst_release(dst);
+ return 0;
+ }
+
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 534a4498e280d7..fff09f5a796a75 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -200,6 +200,8 @@ static const struct nf_hook_ops ila_nf_hook_ops[] = {
+ },
+ };
+
++static DEFINE_MUTEX(ila_mutex);
++
+ static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp)
+ {
+ struct ila_net *ilan = net_generic(net, ila_net_id);
+@@ -207,16 +209,20 @@ static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp)
+ spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match);
+ int err = 0, order;
+
+- if (!ilan->xlat.hooks_registered) {
++ if (!READ_ONCE(ilan->xlat.hooks_registered)) {
+ /* We defer registering net hooks in the namespace until the
+ * first mapping is added.
+ */
+- err = nf_register_net_hooks(net, ila_nf_hook_ops,
+- ARRAY_SIZE(ila_nf_hook_ops));
++ mutex_lock(&ila_mutex);
++ if (!ilan->xlat.hooks_registered) {
++ err = nf_register_net_hooks(net, ila_nf_hook_ops,
++ ARRAY_SIZE(ila_nf_hook_ops));
++ if (!err)
++ WRITE_ONCE(ilan->xlat.hooks_registered, true);
++ }
++ mutex_unlock(&ila_mutex);
+ if (err)
+ return err;
+-
+- ilan->xlat.hooks_registered = true;
+ }
+
+ ila = kzalloc(sizeof(*ila), GFP_KERNEL);
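
The ila hunk is a textbook double-checked registration: an unlocked
READ_ONCE() fast path, then a re-check under ila_mutex so two concurrent
ila_add_mapping() calls cannot both register the netfilter hooks. A rough
pthreads sketch of the same shape (volatile here is only a stand-in for
READ_ONCE/WRITE_ONCE, not an exact equivalent):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile bool hooks_registered;

    static int register_hooks(void) { puts("registering"); return 0; }

    static int ensure_registered(void)
    {
            int err = 0;

            if (!hooks_registered) {                /* unlocked fast path */
                    pthread_mutex_lock(&reg_lock);
                    if (!hooks_registered) {        /* re-check under lock */
                            err = register_hooks();
                            if (!err)
                                    hooks_registered = true;
                    }
                    pthread_mutex_unlock(&reg_lock);
            }
            return err;
    }

    int main(void)
    {
            ensure_registered();
            ensure_registered();    /* no-op: already registered */
            return 0;
    }
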
+diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
+index 51bccfb00a9cd9..61b0159b2fbee6 100644
+--- a/net/llc/llc_input.c
++++ b/net/llc/llc_input.c
+@@ -124,8 +124,8 @@ static inline int llc_fixup_skb(struct sk_buff *skb)
+ if (unlikely(!pskb_may_pull(skb, llc_len)))
+ return 0;
+
+- skb->transport_header += llc_len;
+ skb_pull(skb, llc_len);
++ skb_reset_transport_header(skb);
+ if (skb->protocol == htons(ETH_P_802_2)) {
+ __be16 pdulen;
+ s32 data_size;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 1b1bf044378d48..f11fd360b422dd 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -4992,10 +4992,16 @@ static void ieee80211_del_intf_link(struct wiphy *wiphy,
+ unsigned int link_id)
+ {
+ struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
++ u16 new_links = wdev->valid_links & ~BIT(link_id);
+
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
+- ieee80211_vif_set_links(sdata, wdev->valid_links, 0);
++ /* During the link teardown process, certain functions require the
++ * link_id to remain in the valid_links bitmap. Therefore, instead
++ * of removing the link_id from the bitmap, pass a masked value so it
++ * appears as if the link_id no longer exists.
++ */
++ ieee80211_vif_set_links(sdata, new_links, 0);
+ }
+
+ static int
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 640239f4425b16..50eb1d8cd43deb 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -1157,14 +1157,14 @@ void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata,
+ u64 changed)
+ {
+ struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+- unsigned long bits = changed;
++ unsigned long bits[] = { BITMAP_FROM_U64(changed) };
+ u32 bit;
+
+- if (!bits)
++ if (!changed)
+ return;
+
+ /* if we race with running work, worst case this work becomes a noop */
+- for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE)
++ for_each_set_bit(bit, bits, sizeof(changed) * BITS_PER_BYTE)
+ set_bit(bit, ifmsh->mbss_changed);
+ set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags);
+ wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index b4814e97cf7422..38c30e4ddda98c 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1825,6 +1825,9 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ WARN(1, "Hardware became unavailable upon resume. This could be a software issue prior to suspend or a hardware issue.\n");
+ else
+ WARN(1, "Hardware became unavailable during restart.\n");
++ ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
++ IEEE80211_QUEUE_STOP_REASON_SUSPEND,
++ false);
+ ieee80211_handle_reconfig_failure(local);
+ return res;
+ }
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 1603b3702e2207..a62bc874bf1e17 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -667,8 +667,15 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ &echo, &drop_other_suboptions))
+ return false;
+
++ /*
++ * Later on, mptcp_write_options() will enforce mutual exclusion with
++ * DSS; bail out if such an option is set and we can't drop it.
++ */
+ if (drop_other_suboptions)
+ remaining += opt_size;
++ else if (opts->suboptions & OPTION_MPTCP_DSS)
++ return false;
++
+ len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port);
+ if (remaining < len)
+ return false;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 8a8e8fee337f5e..4b9d850ce85a25 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -528,13 +528,13 @@ static void mptcp_send_ack(struct mptcp_sock *msk)
+ mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow));
+ }
+
+-static void mptcp_subflow_cleanup_rbuf(struct sock *ssk)
++static void mptcp_subflow_cleanup_rbuf(struct sock *ssk, int copied)
+ {
+ bool slow;
+
+ slow = lock_sock_fast(ssk);
+ if (tcp_can_send_ack(ssk))
+- tcp_cleanup_rbuf(ssk, 1);
++ tcp_cleanup_rbuf(ssk, copied);
+ unlock_sock_fast(ssk, slow);
+ }
+
+@@ -551,7 +551,7 @@ static bool mptcp_subflow_could_cleanup(const struct sock *ssk, bool rx_empty)
+ (ICSK_ACK_PUSHED2 | ICSK_ACK_PUSHED)));
+ }
+
+-static void mptcp_cleanup_rbuf(struct mptcp_sock *msk)
++static void mptcp_cleanup_rbuf(struct mptcp_sock *msk, int copied)
+ {
+ int old_space = READ_ONCE(msk->old_wspace);
+ struct mptcp_subflow_context *subflow;
+@@ -559,14 +559,14 @@ static void mptcp_cleanup_rbuf(struct mptcp_sock *msk)
+ int space = __mptcp_space(sk);
+ bool cleanup, rx_empty;
+
+- cleanup = (space > 0) && (space >= (old_space << 1));
+- rx_empty = !__mptcp_rmem(sk);
++ cleanup = (space > 0) && (space >= (old_space << 1)) && copied;
++ rx_empty = !__mptcp_rmem(sk) && copied;
+
+ mptcp_for_each_subflow(msk, subflow) {
+ struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+ if (cleanup || mptcp_subflow_could_cleanup(ssk, rx_empty))
+- mptcp_subflow_cleanup_rbuf(ssk);
++ mptcp_subflow_cleanup_rbuf(ssk, copied);
+ }
+ }
+
+@@ -1939,6 +1939,8 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ goto out;
+ }
+
++static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied);
++
+ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
+ struct msghdr *msg,
+ size_t len, int flags,
+@@ -1992,6 +1994,7 @@ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
+ break;
+ }
+
++ mptcp_rcv_space_adjust(msk, copied);
+ return copied;
+ }
+
+@@ -2217,9 +2220,6 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+
+ copied += bytes_read;
+
+- /* be sure to advertise window change */
+- mptcp_cleanup_rbuf(msk);
+-
+ if (skb_queue_empty(&msk->receive_queue) && __mptcp_move_skbs(msk))
+ continue;
+
+@@ -2268,7 +2268,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ }
+
+ pr_debug("block timeout %ld\n", timeo);
+- mptcp_rcv_space_adjust(msk, copied);
++ mptcp_cleanup_rbuf(msk, copied);
+ err = sk_wait_data(sk, &timeo, NULL);
+ if (err < 0) {
+ err = copied ? : err;
+@@ -2276,7 +2276,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ }
+ }
+
+- mptcp_rcv_space_adjust(msk, copied);
++ mptcp_cleanup_rbuf(msk, copied);
+
+ out_err:
+ if (cmsg_flags && copied >= 0) {
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index 2b5e246b8d9a7a..b94cb2ffbaf8fa 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -754,6 +754,12 @@ int nr_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ int ret;
+ struct sk_buff *skbn;
+
++ /*
++ * Reject malformed packets early. Check that the packet contains at
++ * least 2 addresses and 1 more byte for the Time-To-Live field.
++ */
++ if (skb->len < 2 * sizeof(ax25_address) + 1)
++ return 0;
+
+ nr_src = (ax25_address *)(skb->data + 0);
+ nr_dest = (ax25_address *)(skb->data + 7);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 97774bd4b6cb11..f3cecb3e4bcb18 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -538,10 +538,8 @@ static void *packet_current_frame(struct packet_sock *po,
+ return packet_lookup_frame(po, rb, rb->head, status);
+ }
+
+-static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
++static u16 vlan_get_tci(const struct sk_buff *skb, struct net_device *dev)
+ {
+- u8 *skb_orig_data = skb->data;
+- int skb_orig_len = skb->len;
+ struct vlan_hdr vhdr, *vh;
+ unsigned int header_len;
+
+@@ -562,33 +560,21 @@ static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
+ else
+ return 0;
+
+- skb_push(skb, skb->data - skb_mac_header(skb));
+- vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr);
+- if (skb_orig_data != skb->data) {
+- skb->data = skb_orig_data;
+- skb->len = skb_orig_len;
+- }
++ vh = skb_header_pointer(skb, skb_mac_offset(skb) + header_len,
++ sizeof(vhdr), &vhdr);
+ if (unlikely(!vh))
+ return 0;
+
+ return ntohs(vh->h_vlan_TCI);
+ }
+
+-static __be16 vlan_get_protocol_dgram(struct sk_buff *skb)
++static __be16 vlan_get_protocol_dgram(const struct sk_buff *skb)
+ {
+ __be16 proto = skb->protocol;
+
+- if (unlikely(eth_type_vlan(proto))) {
+- u8 *skb_orig_data = skb->data;
+- int skb_orig_len = skb->len;
+-
+- skb_push(skb, skb->data - skb_mac_header(skb));
+- proto = __vlan_get_protocol(skb, proto, NULL);
+- if (skb_orig_data != skb->data) {
+- skb->data = skb_orig_data;
+- skb->len = skb_orig_len;
+- }
+- }
++ if (unlikely(eth_type_vlan(proto)))
++ proto = __vlan_get_protocol_offset(skb, proto,
++ skb_mac_offset(skb), NULL);
+
+ return proto;
+ }
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index c45c192b787873..0b0794f164cf2e 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -137,7 +137,8 @@ static struct sctp_association *sctp_association_init(
+ = 5 * asoc->rto_max;
+
+ asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay;
+- asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ;
++ asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] =
++ (unsigned long)sp->autoclose * HZ;
+
+ /* Initializes the timers */
+ for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
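
The sctp cast matters because sp->autoclose and HZ are both 32-bit: the
multiply happens in unsigned int and wraps before the result is widened into
the unsigned long timeout slot. A two-line demonstration (LP64 assumed, so
unsigned long is 64-bit):

    #include <stdio.h>

    #define HZ 1000

    int main(void)
    {
            unsigned int autoclose = 4294968;       /* > UINT_MAX / HZ */

            unsigned long wrapped = autoclose * HZ;                 /* 32-bit multiply wraps */
            unsigned long correct = (unsigned long)autoclose * HZ;  /* widen first */

            printf("wrapped=%lu correct=%lu\n", wrapped, correct);
            return 0;
    }
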
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index f49b55724f8341..18585b1416c662 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -2843,10 +2843,9 @@ void cfg80211_remove_link(struct wireless_dev *wdev, unsigned int link_id)
+ break;
+ }
+
+- wdev->valid_links &= ~BIT(link_id);
+-
+ rdev_del_intf_link(rdev, wdev, link_id);
+
++ wdev->valid_links &= ~BIT(link_id);
+ eth_zero_addr(wdev->links[link_id].addr);
+ }
+
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index c12723a0465562..3accbdb269ac70 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -26,7 +26,7 @@
+ # (do not forget a space before each pattern)
+
+ # local symbols for ARM, MIPS, etc.
+-/ \\$/d
++/ \$/d
+
+ # local labels, .LBB, .Ltmpxxx, .L__unnamed_xx, .LASANPC, etc.
+ / \.L/d
+@@ -39,7 +39,7 @@
+ / __pi_\.L/d
+
+ # arm64 local symbols in non-VHE KVM namespace
+-/ __kvm_nvhe_\\$/d
++/ __kvm_nvhe_\$/d
+ / __kvm_nvhe_\.L/d
+
+ # lld arm/aarch64/mips thunks
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index 634e40748287c0..721e0e9f17cada 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -742,7 +742,7 @@ static void do_input(char *alias,
+
+ for (i = min / BITS_PER_LONG; i < max / BITS_PER_LONG + 1; i++)
+ arr[i] = TO_NATIVE(arr[i]);
+- for (i = min; i < max; i++)
++ for (i = min; i <= max; i++)
+ if (arr[i / BITS_PER_LONG] & (1ULL << (i%BITS_PER_LONG)))
+ sprintf(alias + strlen(alias), "%X,*", i);
+ }
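
The file2alias change fixes an off-by-one: min and max are inclusive bit
indices, so iterating with i < max silently drops the highest input code. The
bug in miniature:

    #include <stdio.h>

    int main(void)
    {
            unsigned long bits = 1UL << 7;  /* only the top bit of the range */
            int min = 0, max = 7, i, seen;

            for (seen = 0, i = min; i < max; i++)   /* buggy: misses bit 7 */
                    if (bits & (1UL << i))
                            seen++;
            printf("i < max  sees %d\n", seen);

            for (seen = 0, i = min; i <= max; i++)  /* fixed */
                    if (bits & (1UL << i))
                            seen++;
            printf("i <= max sees %d\n", seen);
            return 0;
    }
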
+diff --git a/scripts/package/PKGBUILD b/scripts/package/PKGBUILD
+index f83493838cf96a..dca706617adc76 100644
+--- a/scripts/package/PKGBUILD
++++ b/scripts/package/PKGBUILD
+@@ -103,7 +103,7 @@ _package-headers() {
+
+ _package-api-headers() {
+ pkgdesc="Kernel headers sanitized for use in userspace"
+- provides=(linux-api-headers)
++ provides=(linux-api-headers="${pkgver}")
+ conflicts=(linux-api-headers)
+
+ _prologue
+diff --git a/scripts/sorttable.h b/scripts/sorttable.h
+index 7bd0184380d3b9..a7c5445baf0027 100644
+--- a/scripts/sorttable.h
++++ b/scripts/sorttable.h
+@@ -110,7 +110,7 @@ static inline unsigned long orc_ip(const int *ip)
+
+ static int orc_sort_cmp(const void *_a, const void *_b)
+ {
+- struct orc_entry *orc_a;
++ struct orc_entry *orc_a, *orc_b;
+ const int *a = g_orc_ip_table + *(int *)_a;
+ const int *b = g_orc_ip_table + *(int *)_b;
+ unsigned long a_val = orc_ip(a);
+@@ -128,6 +128,9 @@ static int orc_sort_cmp(const void *_a, const void *_b)
+ * whitelisted .o files which didn't get objtool generation.
+ */
+ orc_a = g_orc_table + (a - g_orc_ip_table);
++ orc_b = g_orc_table + (b - g_orc_ip_table);
++ if (orc_a->type == ORC_TYPE_UNDEFINED && orc_b->type == ORC_TYPE_UNDEFINED)
++ return 0;
+ return orc_a->type == ORC_TYPE_UNDEFINED ? -1 : 1;
+ }
+
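
The sorttable change repairs the comparator's ordering contract: when both ORC
entries are ORC_TYPE_UNDEFINED, the old code claimed a < b regardless of the
argument order, violating the antisymmetry qsort() relies on. A minimal
illustration of the fixed shape:

    #include <stdio.h>
    #include <stdlib.h>

    struct entry { int ip; int undefined; };

    static int cmp(const void *pa, const void *pb)
    {
            const struct entry *a = pa, *b = pb;

            if (a->ip != b->ip)
                    return a->ip < b->ip ? -1 : 1;
            if (a->undefined && b->undefined)
                    return 0;       /* the added case: equal terminators */
            return a->undefined ? -1 : 1;
    }

    int main(void)
    {
            struct entry e[] = { { 3, 1 }, { 1, 0 }, { 3, 1 }, { 2, 0 } };

            qsort(e, 4, sizeof(e[0]), cmp);
            for (int i = 0; i < 4; i++)
                    printf("%d%s ", e[i].ip, e[i].undefined ? "*" : "");
            printf("\n");
            return 0;
    }
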
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index a9830fbfc5c66c..88850405ded929 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -955,7 +955,10 @@ void services_compute_xperms_decision(struct extended_perms_decision *xpermd,
+ xpermd->driver))
+ return;
+ } else {
+- BUG();
++ pr_warn_once(
++ "SELinux: unknown extended permission (%u) will be ignored\n",
++ node->datum.u.xperms->specified);
++ return;
+ }
+
+ if (node->key.specified == AVTAB_XPERMS_ALLOWED) {
+@@ -992,7 +995,8 @@ void services_compute_xperms_decision(struct extended_perms_decision *xpermd,
+ node->datum.u.xperms->perms.p[i];
+ }
+ } else {
+- BUG();
++ pr_warn_once("SELinux: unknown specified key (%u)\n",
++ node->key.specified);
+ }
+ }
+
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index e3394919daa09a..51ee4c00a84310 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -66,6 +66,7 @@ static struct seq_oss_synth midi_synth_dev = {
+ };
+
+ static DEFINE_SPINLOCK(register_lock);
++static DEFINE_MUTEX(sysex_mutex);
+
+ /*
+ * prototypes
+@@ -497,6 +498,7 @@ snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf,
+ if (!info)
+ return -ENXIO;
+
++ guard(mutex)(&sysex_mutex);
+ sysex = info->sysex;
+ if (sysex == NULL) {
+ sysex = kzalloc(sizeof(*sysex), GFP_KERNEL);
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 3930e2f9082f42..77b6ac9b5c11bc 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1275,10 +1275,16 @@ static int snd_seq_ioctl_set_client_info(struct snd_seq_client *client,
+ if (client->type != client_info->type)
+ return -EINVAL;
+
+- /* check validity of midi_version field */
+- if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3) &&
+- client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0)
+- return -EINVAL;
++ if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3)) {
++ /* check validity of midi_version field */
++ if (client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0)
++ return -EINVAL;
++
++ /* check if UMP is supported in kernel */
++ if (!IS_ENABLED(CONFIG_SND_SEQ_UMP) &&
++ client_info->midi_version > 0)
++ return -EINVAL;
++ }
+
+ /* fill the info fields */
+ if (client_info->name[0])
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index bd26bb2210cbd4..abc537d54b7312 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -1244,7 +1244,7 @@ static int fill_legacy_mapping(struct snd_ump_endpoint *ump)
+
+ num = 0;
+ for (i = 0; i < SNDRV_UMP_MAX_GROUPS; i++)
+- if ((group_maps & (1U << i)) && ump->groups[i].valid)
++ if (group_maps & (1U << i))
+ ump->legacy_mapping[num++] = i;
+
+ return num;
+diff --git a/sound/pci/hda/cs35l56_hda.c b/sound/pci/hda/cs35l56_hda.c
+index e3ac0e23ae3211..7baf3b506eefec 100644
+--- a/sound/pci/hda/cs35l56_hda.c
++++ b/sound/pci/hda/cs35l56_hda.c
+@@ -151,10 +151,6 @@ static int cs35l56_hda_runtime_resume(struct device *dev)
+ }
+ }
+
+- ret = cs35l56_force_sync_asp1_registers_from_cache(&cs35l56->base);
+- if (ret)
+- goto err;
+-
+ return 0;
+
+ err:
+@@ -1059,9 +1055,6 @@ int cs35l56_hda_common_probe(struct cs35l56_hda *cs35l56, int hid, int id)
+
+ regmap_multi_reg_write(cs35l56->base.regmap, cs35l56_hda_dai_config,
+ ARRAY_SIZE(cs35l56_hda_dai_config));
+- ret = cs35l56_force_sync_asp1_registers_from_cache(&cs35l56->base);
+- if (ret)
+- goto dsp_err;
+
+ /*
+ * By default only enable one ASP1TXn, where n=amplifier index,
+@@ -1087,7 +1080,6 @@ int cs35l56_hda_common_probe(struct cs35l56_hda *cs35l56, int hid, int id)
+
+ pm_err:
+ pm_runtime_disable(cs35l56->base.dev);
+-dsp_err:
+ cs_dsp_remove(&cs35l56->cs_dsp);
+ err:
+ gpiod_set_value_cansleep(cs35l56->base.reset_gpio, 0);
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index e4673a71551a3b..d40197fb5fbd58 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1134,7 +1134,6 @@ struct ca0132_spec {
+
+ struct hda_codec *codec;
+ struct delayed_work unsol_hp_work;
+- int quirk;
+
+ #ifdef ENABLE_TUNING_CONTROLS
+ long cur_ctl_vals[TUNING_CTLS_COUNT];
+@@ -1166,7 +1165,6 @@ struct ca0132_spec {
+ * CA0132 quirks table
+ */
+ enum {
+- QUIRK_NONE,
+ QUIRK_ALIENWARE,
+ QUIRK_ALIENWARE_M17XR4,
+ QUIRK_SBZ,
+@@ -1176,10 +1174,11 @@ enum {
+ QUIRK_R3D,
+ QUIRK_AE5,
+ QUIRK_AE7,
++ QUIRK_NONE = HDA_FIXUP_ID_NOT_SET,
+ };
+
+ #ifdef CONFIG_PCI
+-#define ca0132_quirk(spec) ((spec)->quirk)
++#define ca0132_quirk(spec) ((spec)->codec->fixup_id)
+ #define ca0132_use_pci_mmio(spec) ((spec)->use_pci_mmio)
+ #define ca0132_use_alt_functions(spec) ((spec)->use_alt_functions)
+ #define ca0132_use_alt_controls(spec) ((spec)->use_alt_controls)
+@@ -1293,7 +1292,7 @@ static const struct hda_pintbl ae7_pincfgs[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk ca0132_quirks[] = {
++static const struct hda_quirk ca0132_quirks[] = {
+ SND_PCI_QUIRK(0x1028, 0x057b, "Alienware M17x R4", QUIRK_ALIENWARE_M17XR4),
+ SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE),
+ SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE),
+@@ -1316,6 +1315,19 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ {}
+ };
+
++static const struct hda_model_fixup ca0132_quirk_models[] = {
++ { .id = QUIRK_ALIENWARE, .name = "alienware" },
++ { .id = QUIRK_ALIENWARE_M17XR4, .name = "alienware-m17xr4" },
++ { .id = QUIRK_SBZ, .name = "sbz" },
++ { .id = QUIRK_ZXR, .name = "zxr" },
++ { .id = QUIRK_ZXR_DBPRO, .name = "zxr-dbpro" },
++ { .id = QUIRK_R3DI, .name = "r3di" },
++ { .id = QUIRK_R3D, .name = "r3d" },
++ { .id = QUIRK_AE5, .name = "ae5" },
++ { .id = QUIRK_AE7, .name = "ae7" },
++ {}
++};
++
+ /* Output selection quirk info structures. */
+ #define MAX_QUIRK_MMIO_GPIO_SET_VALS 3
+ #define MAX_QUIRK_SCP_SET_VALS 2
+@@ -9957,17 +9969,15 @@ static int ca0132_prepare_verbs(struct hda_codec *codec)
+ */
+ static void sbz_detect_quirk(struct hda_codec *codec)
+ {
+- struct ca0132_spec *spec = codec->spec;
+-
+ switch (codec->core.subsystem_id) {
+ case 0x11020033:
+- spec->quirk = QUIRK_ZXR;
++ codec->fixup_id = QUIRK_ZXR;
+ break;
+ case 0x1102003f:
+- spec->quirk = QUIRK_ZXR_DBPRO;
++ codec->fixup_id = QUIRK_ZXR_DBPRO;
+ break;
+ default:
+- spec->quirk = QUIRK_SBZ;
++ codec->fixup_id = QUIRK_SBZ;
+ break;
+ }
+ }
+@@ -9976,7 +9986,6 @@ static int patch_ca0132(struct hda_codec *codec)
+ {
+ struct ca0132_spec *spec;
+ int err;
+- const struct snd_pci_quirk *quirk;
+
+ codec_dbg(codec, "patch_ca0132\n");
+
+@@ -9987,11 +9996,7 @@ static int patch_ca0132(struct hda_codec *codec)
+ spec->codec = codec;
+
+ /* Detect codec quirk */
+- quirk = snd_pci_quirk_lookup(codec->bus->pci, ca0132_quirks);
+- if (quirk)
+- spec->quirk = quirk->value;
+- else
+- spec->quirk = QUIRK_NONE;
++ snd_hda_pick_fixup(codec, ca0132_quirk_models, ca0132_quirks, NULL);
+ if (ca0132_quirk(spec) == QUIRK_SBZ)
+ sbz_detect_quirk(codec);
+
+@@ -10068,7 +10073,7 @@ static int patch_ca0132(struct hda_codec *codec)
+ spec->mem_base = pci_iomap(codec->bus->pci, 2, 0xC20);
+ if (spec->mem_base == NULL) {
+ codec_warn(codec, "pci_iomap failed! Setting quirk to QUIRK_NONE.");
+- spec->quirk = QUIRK_NONE;
++ codec->fixup_id = QUIRK_NONE;
+ }
+ }
+ #endif
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 192fc75b51e6db..3ed82f98e2de9e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7704,6 +7704,7 @@ enum {
+ ALC274_FIXUP_HP_MIC,
+ ALC274_FIXUP_HP_HEADSET_MIC,
+ ALC274_FIXUP_HP_ENVY_GPIO,
++ ALC274_FIXUP_ASUS_ZEN_AIO_27,
+ ALC256_FIXUP_ASUS_HPE,
+ ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ ALC287_FIXUP_HP_GPIO_LED,
+@@ -9505,6 +9506,26 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc274_fixup_hp_envy_gpio,
+ },
++ [ALC274_FIXUP_ASUS_ZEN_AIO_27] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x10 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xc420 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x40 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x8800 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x49 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0249 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x4a },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x202b },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x62 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xa007 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x6b },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x5060 },
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC2XX_FIXUP_HEADSET_MIC,
++ },
+ [ALC256_FIXUP_ASUS_HPE] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -10615,6 +10636,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1f62, "ASUS UX7602ZM", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
++ SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27),
+ SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+@@ -10971,6 +10993,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+@@ -11162,6 +11185,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+ {.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"},
+ {.id = ALC236_FIXUP_LENOVO_INV_DMIC, .name = "alc236-fixup-lenovo-inv-mic"},
++ {.id = ALC2XX_FIXUP_HEADSET_MIC, .name = "alc2xx-fixup-headset-mic"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/generic/audio-graph-card2.c b/sound/soc/generic/audio-graph-card2.c
+index 93eee40cec760c..63837e25965956 100644
+--- a/sound/soc/generic/audio-graph-card2.c
++++ b/sound/soc/generic/audio-graph-card2.c
+@@ -779,7 +779,7 @@ static void graph_link_init(struct simple_util_priv *priv,
+ of_node_get(port_codec);
+ if (graph_lnk_is_multi(port_codec)) {
+ ep_codec = graph_get_next_multi_ep(&port_codec);
+- of_node_put(port_cpu);
++ of_node_put(port_codec);
+ port_codec = ep_to_port(ep_codec);
+ } else {
+ ep_codec = port_to_endpoint(port_codec);
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 0cbf1d4fbe6edd..6049d957694ca6 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -60,6 +60,8 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
+ pcm_formats |= SNDRV_PCM_FMTBIT_SPECIAL;
+ /* flag potentially raw DSD capable altsettings */
+ fp->dsd_raw = true;
++ /* clear special format bit to avoid "unsupported format" msg below */
++ format &= ~UAC2_FORMAT_TYPE_I_RAW_DATA;
+ }
+
+ format <<= 1;
+@@ -71,8 +73,11 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
+ sample_width = as->bBitResolution;
+ sample_bytes = as->bSubslotSize;
+
+- if (format & UAC3_FORMAT_TYPE_I_RAW_DATA)
++ if (format & UAC3_FORMAT_TYPE_I_RAW_DATA) {
+ pcm_formats |= SNDRV_PCM_FMTBIT_SPECIAL;
++ /* clear special format bit to avoid "unsupported format" msg below */
++ format &= ~UAC3_FORMAT_TYPE_I_RAW_DATA;
++ }
+
+ format <<= 1;
+ break;
+diff --git a/sound/usb/mixer_us16x08.c b/sound/usb/mixer_us16x08.c
+index 6eb7d93b358d99..20ac32635f1f50 100644
+--- a/sound/usb/mixer_us16x08.c
++++ b/sound/usb/mixer_us16x08.c
+@@ -687,7 +687,7 @@ static int snd_us16x08_meter_get(struct snd_kcontrol *kcontrol,
+ struct usb_mixer_elem_info *elem = kcontrol->private_data;
+ struct snd_usb_audio *chip = elem->head.mixer->chip;
+ struct snd_us16x08_meter_store *store = elem->private_data;
+- u8 meter_urb[64];
++ u8 meter_urb[64] = {0};
+
+ switch (kcontrol->private_value) {
+ case 0: {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index a0767de7f1b7ed..8ba0aff8be2ec2 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2325,6 +2325,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_DSD_RAW),
+ DEVICE_FLG(0x2522, 0x0007, /* LH Labs Geek Out HD Audio 1V5 */
+ QUIRK_FLAG_SET_IFACE_FIRST),
++ DEVICE_FLG(0x262a, 0x9302, /* ddHiFi TC44C */
++ QUIRK_FLAG_DSD_RAW),
+ DEVICE_FLG(0x2708, 0x0002, /* Audient iD14 */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+diff --git a/tools/sched_ext/scx_central.c b/tools/sched_ext/scx_central.c
+index 21deea320bd785..e938156ed0a0d0 100644
+--- a/tools/sched_ext/scx_central.c
++++ b/tools/sched_ext/scx_central.c
+@@ -97,7 +97,7 @@ int main(int argc, char **argv)
+ SCX_BUG_ON(!cpuset, "Failed to allocate cpuset");
+ CPU_ZERO(cpuset);
+ CPU_SET(skel->rodata->central_cpu, cpuset);
+- SCX_BUG_ON(sched_setaffinity(0, sizeof(cpuset), cpuset),
++ SCX_BUG_ON(sched_setaffinity(0, sizeof(*cpuset), cpuset),
+ "Failed to affinitize to central CPU %d (max %d)",
+ skel->rodata->central_cpu, skel->rodata->nr_cpu_ids - 1);
+ CPU_FREE(cpuset);
+diff --git a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+index 8a0632c37839a3..79f5087dade224 100644
+--- a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
++++ b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+@@ -10,6 +10,8 @@ int subprog(struct __sk_buff *skb)
+ int ret = 1;
+
+ __sink(ret);
++ /* let verifier know that 'subprog_tc' can change pointers to skb->data */
++ bpf_skb_change_proto(skb, 0, 0);
+ return ret;
+ }
+
+diff --git a/tools/testing/selftests/net/forwarding/local_termination.sh b/tools/testing/selftests/net/forwarding/local_termination.sh
+index c35548767756d0..ecd34f364125cb 100755
+--- a/tools/testing/selftests/net/forwarding/local_termination.sh
++++ b/tools/testing/selftests/net/forwarding/local_termination.sh
+@@ -7,7 +7,6 @@ ALL_TESTS="standalone vlan_unaware_bridge vlan_aware_bridge test_vlan \
+ NUM_NETIFS=2
+ PING_COUNT=1
+ REQUIRE_MTOOLS=yes
+-REQUIRE_MZ=no
+
+ source lib.sh
+
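For context on the scx_central.c hunk above: the bug was passing sizeof(cpuset), which is the size of a pointer (8 bytes on 64-bit), so the kernel only saw a truncated affinity mask. Below is a minimal standalone sketch of the pitfall, assuming glibc's CPU_ALLOC API; the CPU count of 128 is illustrative, not taken from the tool.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t *cpuset = CPU_ALLOC(128);	/* illustrative CPU count */
	size_t alloc_size = CPU_ALLOC_SIZE(128);

	if (!cpuset)
		return 1;

	CPU_ZERO_S(alloc_size, cpuset);
	CPU_SET_S(0, alloc_size, cpuset);

	/* sizeof(cpuset) == sizeof(void *): at most the first 64 CPUs ever
	 * reach the kernel. sizeof(*cpuset), as in the fix, is the full
	 * cpu_set_t; alloc_size is the strictly correct length for a set
	 * obtained from CPU_ALLOC(). */
	printf("sizeof(cpuset)=%zu sizeof(*cpuset)=%zu alloc=%zu\n",
	       sizeof(cpuset), sizeof(*cpuset), alloc_size);

	if (sched_setaffinity(0, alloc_size, cpuset))
		perror("sched_setaffinity");

	CPU_FREE(cpuset);
	return 0;
}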
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-17 13:18 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-01-17 13:18 UTC (permalink / raw
To: gentoo-commits
commit: bbd5b42d3ff847b17f85f7ce29fa19f28f88b798
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 13 17:18:55 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jan 13 17:18:55 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bbd5b42d
BMQ(BitMap Queue) Scheduler r1 version bump
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 2 +-
...=> 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch | 252 +++++++++++++++------
2 files changed, 184 insertions(+), 70 deletions(-)
diff --git a/0000_README b/0000_README
index 29d9187b..06b9cb3f 100644
--- a/0000_README
+++ b/0000_README
@@ -127,7 +127,7 @@ Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
+Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
From: https://gitlab.com/alfredchen/projectc
Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
similarity index 98%
rename from 5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
index 9eb3139f..532813fd 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
@@ -158,7 +158,7 @@ index 8874f681b056..59eb72bf7d5f 100644
[RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
}
diff --git a/include/linux/sched.h b/include/linux/sched.h
-index bb343136ddd0..212d9204e9aa 100644
+index bb343136ddd0..6adfea989b7b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -804,9 +804,13 @@ struct task_struct {
@@ -212,7 +212,34 @@ index bb343136ddd0..212d9204e9aa 100644
#ifdef CONFIG_CGROUP_SCHED
struct task_group *sched_task_group;
-@@ -1609,6 +1628,15 @@ struct task_struct {
+@@ -878,11 +897,15 @@ struct task_struct {
+ const cpumask_t *cpus_ptr;
+ cpumask_t *user_cpus_ptr;
+ cpumask_t cpus_mask;
++#ifndef CONFIG_SCHED_ALT
+ void *migration_pending;
++#endif
+ #ifdef CONFIG_SMP
+ unsigned short migration_disabled;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ unsigned short migration_flags;
++#endif
+
+ #ifdef CONFIG_PREEMPT_RCU
+ int rcu_read_lock_nesting;
+@@ -914,8 +937,10 @@ struct task_struct {
+
+ struct list_head tasks;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct plist_node pushable_tasks;
+ struct rb_node pushable_dl_tasks;
++#endif
+ #endif
+
+ struct mm_struct *mm;
+@@ -1609,6 +1634,15 @@ struct task_struct {
*/
};
@@ -228,7 +255,7 @@ index bb343136ddd0..212d9204e9aa 100644
#define TASK_REPORT_IDLE (TASK_REPORT + 1)
#define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
-@@ -2135,7 +2163,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+@@ -2135,7 +2169,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
static inline bool task_is_runnable(struct task_struct *p)
{
@@ -341,7 +368,7 @@ index 4237daa5ac7a..3cebd93c49c8 100644
#else
static inline void rebuild_sched_domains_energy(void)
diff --git a/init/Kconfig b/init/Kconfig
-index c521e1421ad4..131a599fcde2 100644
+index c521e1421ad4..4a397b48a453 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -652,6 +652,7 @@ config TASK_IO_ACCOUNTING
@@ -352,15 +379,7 @@ index c521e1421ad4..131a599fcde2 100644
select KERNFS
help
Collect metrics that indicate how overcommitted the CPU, memory,
-@@ -817,6 +818,7 @@ menu "Scheduler features"
- config UCLAMP_TASK
- bool "Enable utilization clamping for RT/FAIR tasks"
- depends on CPU_FREQ_GOV_SCHEDUTIL
-+ depends on !SCHED_ALT
- help
- This feature enables the scheduler to track the clamped utilization
- of each CPU based on RUNNABLE tasks scheduled on that CPU.
-@@ -863,6 +865,35 @@ config UCLAMP_BUCKETS_COUNT
+@@ -863,6 +864,35 @@ config UCLAMP_BUCKETS_COUNT
If in doubt, use the default value.
@@ -396,7 +415,7 @@ index c521e1421ad4..131a599fcde2 100644
endmenu
#
-@@ -928,6 +959,7 @@ config NUMA_BALANCING
+@@ -928,6 +958,7 @@ config NUMA_BALANCING
depends on ARCH_SUPPORTS_NUMA_BALANCING
depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
@@ -404,23 +423,7 @@ index c521e1421ad4..131a599fcde2 100644
help
This option adds support for automatic NUMA aware memory/task placement.
The mechanism is quite primitive and is based on migrating memory when
-@@ -1036,6 +1068,7 @@ menuconfig CGROUP_SCHED
- tasks.
-
- if CGROUP_SCHED
-+if !SCHED_ALT
- config GROUP_SCHED_WEIGHT
- def_bool n
-
-@@ -1073,6 +1106,7 @@ config EXT_GROUP_SCHED
- select GROUP_SCHED_WEIGHT
- default y
-
-+endif #!SCHED_ALT
- endif #CGROUP_SCHED
-
- config SCHED_MM_CID
-@@ -1334,6 +1368,7 @@ config CHECKPOINT_RESTORE
+@@ -1334,6 +1365,7 @@ config CHECKPOINT_RESTORE
config SCHED_AUTOGROUP
bool "Automatic process group scheduling"
@@ -429,7 +432,7 @@ index c521e1421ad4..131a599fcde2 100644
select CGROUP_SCHED
select FAIR_GROUP_SCHED
diff --git a/init/init_task.c b/init/init_task.c
-index 136a8231355a..03770079619a 100644
+index 136a8231355a..12c01ab8e718 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -71,9 +71,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
@@ -466,14 +469,20 @@ index 136a8231355a..03770079619a 100644
.se = {
.group_node = LIST_HEAD_INIT(init_task.se.group_node),
},
-@@ -93,6 +110,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+@@ -93,10 +110,13 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
.run_list = LIST_HEAD_INIT(init_task.rt.run_list),
.time_slice = RR_TIMESLICE,
},
+#endif
.tasks = LIST_HEAD_INIT(init_task.tasks),
++#ifndef CONFIG_SCHED_ALT
#ifdef CONFIG_SMP
.pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+ #endif
++#endif
+ #ifdef CONFIG_CGROUP_SCHED
+ .sched_task_group = &root_task_group,
+ #endif
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index fe782cd77388..d27d2154d71a 100644
--- a/kernel/Kconfig.preempt
@@ -700,10 +709,10 @@ index 976092b7bd45..31d587c16ec1 100644
obj-y += build_utility.o
diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
new file mode 100644
-index 000000000000..c59691742340
+index 000000000000..0a08bc0176ac
--- /dev/null
+++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7458 @@
+@@ -0,0 +1,7515 @@
+/*
+ * kernel/sched/alt_core.c
+ *
@@ -782,7 +791,7 @@ index 000000000000..c59691742340
+#define sched_feat(x) (0)
+#endif /* CONFIG_SCHED_DEBUG */
+
-+#define ALT_SCHED_VERSION "v6.12-r0"
++#define ALT_SCHED_VERSION "v6.12-r1"
+
+#define STOP_PRIO (MAX_RT_PRIO - 1)
+
@@ -2144,8 +2153,6 @@ index 000000000000..c59691742340
+ __set_task_cpu(p, new_cpu);
+}
+
-+#define MDF_FORCE_ENABLED 0x80
-+
+static void
+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
+{
@@ -2186,8 +2193,6 @@ index 000000000000..c59691742340
+ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
+ cpu_rq(cpu)->nr_pinned++;
+ p->migration_disabled = 1;
-+ p->migration_flags &= ~MDF_FORCE_ENABLED;
-+
+ /*
+ * Violates locking rules! see comment in __do_set_cpus_ptr().
+ */
@@ -2237,6 +2242,15 @@ index 000000000000..c59691742340
+}
+EXPORT_SYMBOL_GPL(migrate_enable);
+
++static void __migrate_force_enable(struct task_struct *p, struct rq *rq)
++{
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++}
++
+static inline bool rq_has_pinned_tasks(struct rq *rq)
+{
+ return rq->nr_pinned;
@@ -2417,6 +2431,9 @@ index 000000000000..c59691742340
+
+ __do_set_cpus_allowed(p, &ac);
+
++ if (is_migration_disabled(p) && !cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
++ __migrate_force_enable(p, task_rq(p));
++
+ /*
+ * Because this is called with p->pi_lock held, it is not possible
+ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
@@ -2712,14 +2729,8 @@ index 000000000000..c59691742340
+{
+ /* Can the task run on the task's current CPU? If so, we're done */
+ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-+ if (p->migration_disabled) {
-+ if (likely(p->cpus_ptr != &p->cpus_mask))
-+ __do_set_cpus_ptr(p, &p->cpus_mask);
-+ p->migration_disabled = 0;
-+ p->migration_flags |= MDF_FORCE_ENABLED;
-+ /* When p is migrate_disabled, rq->lock should be held */
-+ rq->nr_pinned--;
-+ }
++ if (is_migration_disabled(p))
++ __migrate_force_enable(p, rq);
+
+ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
+ struct migration_arg arg = { p, dest_cpu };
@@ -7178,9 +7189,6 @@ index 000000000000..c59691742340
+ if (preempt_count() > 0)
+ return;
+
-+ if (current->migration_flags & MDF_FORCE_ENABLED)
-+ return;
-+
+ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
+ return;
+ prev_jiffy = jiffies;
@@ -7374,6 +7382,43 @@ index 000000000000..c59691742340
+{
+}
+
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++ return 0;
++}
++
++static int sched_group_set_idle(struct task_group *tg, long idle)
++{
++ return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 shareval)
++{
++ return sched_group_set_shares(css_tg(css), shareval);
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 idle)
++{
++ return sched_group_set_idle(css_tg(css), idle);
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
+static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
@@ -7419,7 +7464,9 @@ index 000000000000..c59691742340
+{
+ return 0;
+}
++#endif
+
++#ifdef CONFIG_RT_GROUP_SCHED
+static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, s64 val)
+{
@@ -7443,7 +7490,9 @@ index 000000000000..c59691742340
+{
+ return 0;
+}
++#endif
+
++#ifdef CONFIG_UCLAMP_TASK_GROUP
+static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
+{
+ return 0;
@@ -7467,8 +7516,22 @@ index 000000000000..c59691742340
+{
+ return nbytes;
+}
++#endif
+
+static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++ {
++ .name = "shares",
++ .read_u64 = cpu_shares_read_u64,
++ .write_u64 = cpu_shares_write_u64,
++ },
++ {
++ .name = "idle",
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ .name = "cfs_quota_us",
+ .read_s64 = cpu_cfs_quota_read_s64,
@@ -7492,6 +7555,8 @@ index 000000000000..c59691742340
+ .name = "stat.local",
+ .seq_show = cpu_cfs_local_stat_show,
+ },
++#endif
++#ifdef CONFIG_RT_GROUP_SCHED
+ {
+ .name = "rt_runtime_us",
+ .read_s64 = cpu_rt_runtime_read,
@@ -7502,6 +7567,8 @@ index 000000000000..c59691742340
+ .read_u64 = cpu_rt_period_read_uint,
+ .write_u64 = cpu_rt_period_write_uint,
+ },
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
+ {
+ .name = "uclamp.min",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7514,9 +7581,11 @@ index 000000000000..c59691742340
+ .seq_show = cpu_uclamp_max_show,
+ .write = cpu_uclamp_max_write,
+ },
++#endif
+ { } /* Terminate */
+};
+
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
@@ -7540,19 +7609,9 @@ index 000000000000..c59691742340
+{
+ return 0;
+}
++#endif
+
-+static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft, s64 idle)
-+{
-+ return 0;
-+}
-+
++#ifdef CONFIG_CFS_BANDWIDTH
+static int cpu_max_show(struct seq_file *sf, void *v)
+{
+ return 0;
@@ -7563,8 +7622,10 @@ index 000000000000..c59691742340
+{
+ return nbytes;
+}
++#endif
+
+static struct cftype cpu_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
+ {
+ .name = "weight",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7583,6 +7644,8 @@ index 000000000000..c59691742340
+ .read_s64 = cpu_idle_read_s64,
+ .write_s64 = cpu_idle_write_s64,
+ },
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ .name = "max",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7595,6 +7658,8 @@ index 000000000000..c59691742340
+ .read_u64 = cpu_cfs_burst_read_u64,
+ .write_u64 = cpu_cfs_burst_write_u64,
+ },
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
+ {
+ .name = "uclamp.min",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7607,6 +7672,7 @@ index 000000000000..c59691742340
+ .seq_show = cpu_uclamp_max_show,
+ .write = cpu_uclamp_max_write,
+ },
++#endif
+ { } /* terminate */
+};
+
@@ -8421,10 +8487,10 @@ index 000000000000..1dbd7eb6a434
+{}
diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
new file mode 100644
-index 000000000000..09c9e9f80bf4
+index 000000000000..7fb3433c5c41
--- /dev/null
+++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,971 @@
+@@ -0,0 +1,997 @@
+#ifndef _KERNEL_SCHED_ALT_SCHED_H
+#define _KERNEL_SCHED_ALT_SCHED_H
+
@@ -9120,15 +9186,41 @@ index 000000000000..09c9e9f80bf4
+
+static inline void nohz_run_idle_balance(int cpu) { }
+
-+static inline
-+unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-+ struct task_struct *p)
++static inline unsigned long
++uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id)
+{
-+ return util;
++ if (clamp_id == UCLAMP_MIN)
++ return 0;
++
++ return SCHED_CAPACITY_SCALE;
+}
+
+static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
+
++static inline bool uclamp_is_used(void)
++{
++ return false;
++}
++
++static inline unsigned long
++uclamp_rq_get(struct rq *rq, enum uclamp_id clamp_id)
++{
++ if (clamp_id == UCLAMP_MIN)
++ return 0;
++
++ return SCHED_CAPACITY_SCALE;
++}
++
++static inline void
++uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id, unsigned int value)
++{
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++ return false;
++}
++
+#ifdef CONFIG_SCHED_MM_CID
+
+#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
@@ -11109,6 +11201,28 @@ index 6bcee4704059..cf88205fd4a2 100644
return false;
}
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index a50ed23bee77..be0477666049 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1665,6 +1665,9 @@ static void osnoise_sleep(bool skip_period)
+ */
+ static inline int osnoise_migration_pending(void)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else
+ if (!current->migration_pending)
+ return 0;
+
+@@ -1686,6 +1689,7 @@ static inline int osnoise_migration_pending(void)
+ mutex_unlock(&interface_lock);
+
+ return 1;
++#endif
+ }
+
+ /*
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index 1469dd8075fa..803527a0e48a 100644
--- a/kernel/trace/trace_selftest.c
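A pattern recurs through the scheduler hunks above (the uclamp helpers added to alt_sched.h, the CONFIG_SCHED_ALT branch in osnoise_migration_pending()): when a config option compiles a subsystem out, static inline stubs with fixed return values keep every call site building unchanged, and the constant results let the compiler drop the guarded paths entirely. A generic sketch of the idiom with made-up names; CONFIG_FEATURE_X stands in for the real option:

#include <stdbool.h>

#ifdef CONFIG_FEATURE_X
bool feature_x_is_used(void);		/* real implementations elsewhere */
unsigned long feature_x_value(int clamp_id);
#else
static inline bool feature_x_is_used(void)
{
	return false;		/* branches guarded by this fold away */
}

static inline unsigned long feature_x_value(int clamp_id)
{
	/* mirrors the shape of the uclamp stubs above: min clamps to 0,
	 * max to full scale; 0 and 1024 are illustrative values */
	return clamp_id == 0 ? 0 : 1024;
}
#endif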
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-17 13:18 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-01-17 13:18 UTC (permalink / raw
To: gentoo-commits
commit: 0ac150344b6ac5bfe1c306054599ae5cb28d7d74
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 17 13:18:02 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 17 13:18:02 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ac15034
Linux patch 6.12.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-6.12.10.patch | 6476 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6480 insertions(+)
diff --git a/0000_README b/0000_README
index 06b9cb3f..20574d29 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-6.12.9.patch
From: https://www.kernel.org
Desc: Linux 6.12.9
+Patch: 1009_linux-6.12.10.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.10
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1009_linux-6.12.10.patch b/1009_linux-6.12.10.patch
new file mode 100644
index 00000000..af53d00b
--- /dev/null
+++ b/1009_linux-6.12.10.patch
@@ -0,0 +1,6476 @@
+diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
+index 6d02168d78bed6..2cb58daf3089ba 100644
+--- a/Documentation/admin-guide/cgroup-v2.rst
++++ b/Documentation/admin-guide/cgroup-v2.rst
+@@ -2954,7 +2954,7 @@ following two functions.
+ a queue (device) has been associated with the bio and
+ before submission.
+
+- wbc_account_cgroup_owner(@wbc, @page, @bytes)
++ wbc_account_cgroup_owner(@wbc, @folio, @bytes)
+ Should be called for each data segment being written out.
+ While this function doesn't care exactly when it's called
+ during the writeback session, it's the easiest and most
+diff --git a/Makefile b/Makefile
+index 80151f53d8ee0f..233e9e88e402e7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi b/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
+index dd714d235d5f6a..b0bad0d1ba36f4 100644
+--- a/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
+@@ -87,7 +87,7 @@ usdhc1: mmc@402c0000 {
+ reg = <0x402c0000 0x4000>;
+ interrupts = <110>;
+ clocks = <&clks IMXRT1050_CLK_IPG_PDOF>,
+- <&clks IMXRT1050_CLK_OSC>,
++ <&clks IMXRT1050_CLK_AHB_PODF>,
+ <&clks IMXRT1050_CLK_USDHC1>;
+ clock-names = "ipg", "ahb", "per";
+ bus-width = <4>;
+diff --git a/arch/arm64/boot/dts/freescale/imx95.dtsi b/arch/arm64/boot/dts/freescale/imx95.dtsi
+index 03661e76550f4d..40cbb071f265cf 100644
+--- a/arch/arm64/boot/dts/freescale/imx95.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx95.dtsi
+@@ -1609,7 +1609,7 @@ pcie1_ep: pcie-ep@4c380000 {
+
+ netcmix_blk_ctrl: syscon@4c810000 {
+ compatible = "nxp,imx95-netcmix-blk-ctrl", "syscon";
+- reg = <0x0 0x4c810000 0x0 0x10000>;
++ reg = <0x0 0x4c810000 0x0 0x8>;
+ #clock-cells = <1>;
+ clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
+ assigned-clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index e8dbc8d820a64f..8a21448c0fa845 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -1940,6 +1940,7 @@ tpdm@4003000 {
+
+ qcom,cmb-element-bits = <32>;
+ qcom,cmb-msrs-num = <32>;
++ status = "disabled";
+
+ out-ports {
+ port {
+@@ -5587,7 +5588,7 @@ pcie0_ep: pcie-ep@1c00000 {
+ <0x0 0x40000000 0x0 0xf20>,
+ <0x0 0x40000f20 0x0 0xa8>,
+ <0x0 0x40001000 0x0 0x4000>,
+- <0x0 0x40200000 0x0 0x100000>,
++ <0x0 0x40200000 0x0 0x1fe00000>,
+ <0x0 0x01c03000 0x0 0x1000>,
+ <0x0 0x40005000 0x0 0x2000>;
+ reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+@@ -5744,7 +5745,7 @@ pcie1_ep: pcie-ep@1c10000 {
+ <0x0 0x60000000 0x0 0xf20>,
+ <0x0 0x60000f20 0x0 0xa8>,
+ <0x0 0x60001000 0x0 0x4000>,
+- <0x0 0x60200000 0x0 0x100000>,
++ <0x0 0x60200000 0x0 0x1fe00000>,
+ <0x0 0x01c13000 0x0 0x1000>,
+ <0x0 0x60005000 0x0 0x2000>;
+ reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 914f9cb3aca215..a97ceff939d882 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -2925,7 +2925,7 @@ pcie6a: pci@1bf8000 {
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges = <0x01000000 0x0 0x00000000 0x0 0x70200000 0x0 0x100000>,
+- <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x1d00000>;
++ <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x3d00000>;
+ bus-range = <0x00 0xff>;
+
+ dma-coherent;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index c01a4cad48f30e..d16a13d6442f88 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -333,6 +333,7 @@ power: power-controller {
+
+ power-domain@RK3328_PD_HEVC {
+ reg = <RK3328_PD_HEVC>;
++ clocks = <&cru SCLK_VENC_CORE>;
+ #power-domain-cells = <0>;
+ };
+ power-domain@RK3328_PD_VIDEO {
+diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
+index 32d308a3355fd4..febf820d505837 100644
+--- a/arch/riscv/include/asm/page.h
++++ b/arch/riscv/include/asm/page.h
+@@ -124,6 +124,7 @@ struct kernel_mapping {
+
+ extern struct kernel_mapping kernel_map;
+ extern phys_addr_t phys_ram_base;
++extern unsigned long vmemmap_start_pfn;
+
+ #define is_kernel_mapping(x) \
+ ((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index e79f15293492d5..c0866ada5bbc49 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -87,7 +87,7 @@
+ * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
+ * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
+ */
+-#define vmemmap ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))
++#define vmemmap ((struct page *)VMEMMAP_START - vmemmap_start_pfn)
+
+ #define PCI_IO_SIZE SZ_16M
+ #define PCI_IO_END VMEMMAP_START
+diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
+index 98f631b051dba8..9be38b05f4adff 100644
+--- a/arch/riscv/include/asm/sbi.h
++++ b/arch/riscv/include/asm/sbi.h
+@@ -158,6 +158,7 @@ struct riscv_pmu_snapshot_data {
+ };
+
+ #define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0)
++#define RISCV_PMU_PLAT_FW_EVENT_MASK GENMASK_ULL(61, 0)
+ #define RISCV_PMU_RAW_EVENT_IDX 0x20000
+ #define RISCV_PLAT_FW_EVENT 0xFFFF
+
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index c200d329d4bdbe..33a5a9f2a0d4e1 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -23,21 +23,21 @@
+ REG_S a0, TASK_TI_A0(tp)
+ csrr a0, CSR_CAUSE
+ /* Exclude IRQs */
+- blt a0, zero, _new_vmalloc_restore_context_a0
++ blt a0, zero, .Lnew_vmalloc_restore_context_a0
+
+ REG_S a1, TASK_TI_A1(tp)
+ /* Only check new_vmalloc if we are in page/protection fault */
+ li a1, EXC_LOAD_PAGE_FAULT
+- beq a0, a1, _new_vmalloc_kernel_address
++ beq a0, a1, .Lnew_vmalloc_kernel_address
+ li a1, EXC_STORE_PAGE_FAULT
+- beq a0, a1, _new_vmalloc_kernel_address
++ beq a0, a1, .Lnew_vmalloc_kernel_address
+ li a1, EXC_INST_PAGE_FAULT
+- bne a0, a1, _new_vmalloc_restore_context_a1
++ bne a0, a1, .Lnew_vmalloc_restore_context_a1
+
+-_new_vmalloc_kernel_address:
++.Lnew_vmalloc_kernel_address:
+ /* Is it a kernel address? */
+ csrr a0, CSR_TVAL
+- bge a0, zero, _new_vmalloc_restore_context_a1
++ bge a0, zero, .Lnew_vmalloc_restore_context_a1
+
+ /* Check if a new vmalloc mapping appeared that could explain the trap */
+ REG_S a2, TASK_TI_A2(tp)
+@@ -69,7 +69,7 @@ _new_vmalloc_kernel_address:
+ /* Check the value of new_vmalloc for this cpu */
+ REG_L a2, 0(a0)
+ and a2, a2, a1
+- beq a2, zero, _new_vmalloc_restore_context
++ beq a2, zero, .Lnew_vmalloc_restore_context
+
+ /* Atomically reset the current cpu bit in new_vmalloc */
+ amoxor.d a0, a1, (a0)
+@@ -83,11 +83,11 @@ _new_vmalloc_kernel_address:
+ csrw CSR_SCRATCH, x0
+ sret
+
+-_new_vmalloc_restore_context:
++.Lnew_vmalloc_restore_context:
+ REG_L a2, TASK_TI_A2(tp)
+-_new_vmalloc_restore_context_a1:
++.Lnew_vmalloc_restore_context_a1:
+ REG_L a1, TASK_TI_A1(tp)
+-_new_vmalloc_restore_context_a0:
++.Lnew_vmalloc_restore_context_a0:
+ REG_L a0, TASK_TI_A0(tp)
+ .endm
+
+@@ -278,6 +278,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
+ #else
+ sret
+ #endif
++SYM_INNER_LABEL(ret_from_exception_end, SYM_L_GLOBAL)
+ SYM_CODE_END(ret_from_exception)
+ ASM_NOKPROBE(ret_from_exception)
+
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 1cd461f3d8726d..47d0ebeec93c23 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -23,7 +23,7 @@ struct used_bucket {
+
+ struct relocation_head {
+ struct hlist_node node;
+- struct list_head *rel_entry;
++ struct list_head rel_entry;
+ void *location;
+ };
+
+@@ -634,7 +634,7 @@ process_accumulated_relocations(struct module *me,
+ location = rel_head_iter->location;
+ list_for_each_entry_safe(rel_entry_iter,
+ rel_entry_iter_tmp,
+- rel_head_iter->rel_entry,
++ &rel_head_iter->rel_entry,
+ head) {
+ curr_type = rel_entry_iter->type;
+ reloc_handlers[curr_type].reloc_handler(
+@@ -704,16 +704,7 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+ return -ENOMEM;
+ }
+
+- rel_head->rel_entry =
+- kmalloc(sizeof(struct list_head), GFP_KERNEL);
+-
+- if (!rel_head->rel_entry) {
+- kfree(entry);
+- kfree(rel_head);
+- return -ENOMEM;
+- }
+-
+- INIT_LIST_HEAD(rel_head->rel_entry);
++ INIT_LIST_HEAD(&rel_head->rel_entry);
+ rel_head->location = location;
+ INIT_HLIST_NODE(&rel_head->node);
+ if (!current_head->first) {
+@@ -722,7 +713,6 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+
+ if (!bucket) {
+ kfree(entry);
+- kfree(rel_head->rel_entry);
+ kfree(rel_head);
+ return -ENOMEM;
+ }
+@@ -735,7 +725,7 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+ }
+
+ /* Add relocation to head of discovered rel_head */
+- list_add_tail(&entry->head, rel_head->rel_entry);
++ list_add_tail(&entry->head, &rel_head->rel_entry);
+
+ return 0;
+ }
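The module.c change above is a clean simplification: embedding the struct list_head in struct relocation_head, instead of kmalloc'ing it separately, removes an allocation, an error-unwind branch, and a matching kfree. A condensed kernel-style sketch of the resulting shape (demo names, not the exact RISC-V code):

#include <linux/list.h>
#include <linux/slab.h>

struct rel_head_demo {
	struct hlist_node node;
	struct list_head rel_entry;	/* embedded: lifetime tied to owner */
	void *location;
};

static struct rel_head_demo *rel_head_alloc(void *location)
{
	struct rel_head_demo *head = kmalloc(sizeof(*head), GFP_KERNEL);

	if (!head)
		return NULL;		/* one failure path instead of two */

	INIT_LIST_HEAD(&head->rel_entry);
	head->location = location;
	return head;
}

Callers then pass &head->rel_entry to list_add_tail(), and a single kfree(head) releases everything.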
+diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
+index 474a6521365783..d2dacea1aedd9e 100644
+--- a/arch/riscv/kernel/probes/kprobes.c
++++ b/arch/riscv/kernel/probes/kprobes.c
+@@ -30,7 +30,7 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
+ p->ainsn.api.restore = (unsigned long)p->addr + len;
+
+ patch_text_nosync(p->ainsn.api.insn, &p->opcode, len);
+- patch_text_nosync(p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
++ patch_text_nosync((void *)p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
+ }
+
+ static void __kprobes arch_prepare_simulate(struct kprobe *p)
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 153a2db4c5fa14..d4355c770c36ac 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -17,6 +17,7 @@
+ #ifdef CONFIG_FRAME_POINTER
+
+ extern asmlinkage void handle_exception(void);
++extern unsigned long ret_from_exception_end;
+
+ static inline int fp_is_valid(unsigned long fp, unsigned long sp)
+ {
+@@ -71,7 +72,8 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ fp = frame->fp;
+ pc = ftrace_graph_ret_addr(current, &graph_idx, frame->ra,
+ &frame->ra);
+- if (pc == (unsigned long)handle_exception) {
++ if (pc >= (unsigned long)handle_exception &&
++ pc < (unsigned long)&ret_from_exception_end) {
+ if (unlikely(!__kernel_text_address(pc) || !fn(arg, pc)))
+ break;
+
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 51ebfd23e00764..8ff8e8b36524b7 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -35,7 +35,7 @@
+
+ int show_unhandled_signals = 1;
+
+-static DEFINE_SPINLOCK(die_lock);
++static DEFINE_RAW_SPINLOCK(die_lock);
+
+ static int copy_code(struct pt_regs *regs, u16 *val, const u16 *insns)
+ {
+@@ -81,7 +81,7 @@ void die(struct pt_regs *regs, const char *str)
+
+ oops_enter();
+
+- spin_lock_irqsave(&die_lock, flags);
++ raw_spin_lock_irqsave(&die_lock, flags);
+ console_verbose();
+ bust_spinlocks(1);
+
+@@ -100,7 +100,7 @@ void die(struct pt_regs *regs, const char *str)
+
+ bust_spinlocks(0);
+ add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+- spin_unlock_irqrestore(&die_lock, flags);
++ raw_spin_unlock_irqrestore(&die_lock, flags);
+ oops_exit();
+
+ if (in_interrupt())
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index fc53ce748c8049..8d167e09f1fea5 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -33,6 +33,7 @@
+ #include <asm/pgtable.h>
+ #include <asm/sections.h>
+ #include <asm/soc.h>
++#include <asm/sparsemem.h>
+ #include <asm/tlbflush.h>
+
+ #include "../kernel/head.h"
+@@ -62,6 +63,13 @@ EXPORT_SYMBOL(pgtable_l5_enabled);
+ phys_addr_t phys_ram_base __ro_after_init;
+ EXPORT_SYMBOL(phys_ram_base);
+
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++#define VMEMMAP_ADDR_ALIGN (1ULL << SECTION_SIZE_BITS)
++
++unsigned long vmemmap_start_pfn __ro_after_init;
++EXPORT_SYMBOL(vmemmap_start_pfn);
++#endif
++
+ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
+ __page_aligned_bss;
+ EXPORT_SYMBOL(empty_zero_page);
+@@ -240,8 +248,12 @@ static void __init setup_bootmem(void)
+ * Make sure we align the start of the memory on a PMD boundary so that
+ * at worst, we map the linear mapping with PMD mappings.
+ */
+- if (!IS_ENABLED(CONFIG_XIP_KERNEL))
++ if (!IS_ENABLED(CONFIG_XIP_KERNEL)) {
+ phys_ram_base = memblock_start_of_DRAM() & PMD_MASK;
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++ vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
++#endif
++ }
+
+ /*
+ * In 64-bit, any use of __va/__pa before this point is wrong as we
+@@ -1101,6 +1113,9 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
+
+ phys_ram_base = CONFIG_PHYS_RAM_BASE;
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++ vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
++#endif
+ kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
+ kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start);
+
+diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
+index 6bc1eb2a21bd92..887b0b8e21e364 100644
+--- a/arch/x86/kernel/fpu/regset.c
++++ b/arch/x86/kernel/fpu/regset.c
+@@ -190,7 +190,8 @@ int ssp_get(struct task_struct *target, const struct user_regset *regset,
+ struct fpu *fpu = &target->thread.fpu;
+ struct cet_user_state *cetregs;
+
+- if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
++ if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) ||
++ !ssp_active(target, regset))
+ return -ENODEV;
+
+ sync_fpstate(fpu);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 95dd7b79593565..cad16c163611b5 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6844,16 +6844,24 @@ static struct bfq_queue *bfq_waker_bfqq(struct bfq_queue *bfqq)
+ if (new_bfqq == waker_bfqq) {
+ /*
+ * If waker_bfqq is in the merge chain, and current
+- * is the only procress.
++ * is the only process, waker_bfqq can be freed.
+ */
+ if (bfqq_process_refs(waker_bfqq) == 1)
+ return NULL;
+- break;
++
++ return waker_bfqq;
+ }
+
+ new_bfqq = new_bfqq->new_bfqq;
+ }
+
++ /*
++ * If waker_bfqq is not in the merge chain, and it's procress reference
++ * is 0, waker_bfqq can be freed.
++ */
++ if (bfqq_process_refs(waker_bfqq) == 0)
++ return NULL;
++
+ return waker_bfqq;
+ }
+
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 821867de43bea3..d27a3bf96f80d8 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ },
+ },
++ {
++ /* Asus Vivobook X1504VAP */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_BOARD_NAME, "X1504VAP"),
++ },
++ },
+ {
+ /* Asus Vivobook X1704VAP */
+ .matches = {
+@@ -646,6 +653,17 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
+ },
+ },
++ {
++ /*
++ * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the
++ * board-name is changed, so check OEM strings instead. Note
++ * OEM string matches are always exact matches.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=219614
++ */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_OEM_STRING, "GM5HG0A"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/base/topology.c b/drivers/base/topology.c
+index 89f98be5c5b991..d293cbd253e4f9 100644
+--- a/drivers/base/topology.c
++++ b/drivers/base/topology.c
+@@ -27,9 +27,17 @@ static ssize_t name##_read(struct file *file, struct kobject *kobj, \
+ loff_t off, size_t count) \
+ { \
+ struct device *dev = kobj_to_dev(kobj); \
++ cpumask_var_t mask; \
++ ssize_t n; \
+ \
+- return cpumap_print_bitmask_to_buf(buf, topology_##mask(dev->id), \
+- off, count); \
++ if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \
++ return -ENOMEM; \
++ \
++ cpumask_copy(mask, topology_##mask(dev->id)); \
++ n = cpumap_print_bitmask_to_buf(buf, mask, off, count); \
++ free_cpumask_var(mask); \
++ \
++ return n; \
+ } \
+ \
+ static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
+@@ -37,9 +45,17 @@ static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
+ loff_t off, size_t count) \
+ { \
+ struct device *dev = kobj_to_dev(kobj); \
++ cpumask_var_t mask; \
++ ssize_t n; \
++ \
++ if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \
++ return -ENOMEM; \
++ \
++ cpumask_copy(mask, topology_##mask(dev->id)); \
++ n = cpumap_print_list_to_buf(buf, mask, off, count); \
++ free_cpumask_var(mask); \
+ \
+- return cpumap_print_list_to_buf(buf, topology_##mask(dev->id), \
+- off, count); \
++ return n; \
+ }
+
+ define_id_show_func(physical_package_id, "%d");
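De-macroed, the topology.c fix above is a snapshot-before-format pattern: the live topology mask can change (for example across CPU hotplug) between the chunked reads that cpumap_print_bitmask_to_buf() serves, so the attribute now copies it into a private cpumask first. A rough expansion of one reader in kernel C; cpu_online_mask stands in here for topology_##mask(dev->id):

#include <linux/cpumask.h>
#include <linux/gfp.h>

static ssize_t demo_mask_read(char *buf, loff_t off, size_t count)
{
	cpumask_var_t mask;
	ssize_t n;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	/* stable private snapshot of the (possibly changing) source mask */
	cpumask_copy(mask, cpu_online_mask);
	n = cpumap_print_bitmask_to_buf(buf, mask, off, count);
	free_cpumask_var(mask);

	return n;
}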
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 85e99641eaae02..af487abe9932aa 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -1472,10 +1472,15 @@ EXPORT_SYMBOL_GPL(btmtk_usb_setup);
+
+ int btmtk_usb_shutdown(struct hci_dev *hdev)
+ {
++ struct btmtk_data *data = hci_get_priv(hdev);
+ struct btmtk_hci_wmt_params wmt_params;
+ u8 param = 0;
+ int err;
+
++ err = usb_autopm_get_interface(data->intf);
++ if (err < 0)
++ return err;
++
+ /* Disable the device */
+ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ wmt_params.flag = 0;
+@@ -1486,9 +1491,11 @@ int btmtk_usb_shutdown(struct hci_dev *hdev)
+ err = btmtk_usb_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
++ usb_autopm_put_interface(data->intf);
+ return err;
+ }
+
++ usb_autopm_put_interface(data->intf);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(btmtk_usb_shutdown);
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index 5ea0d23e88c02b..a028984f27829c 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1336,6 +1336,7 @@ static void btnxpuart_tx_work(struct work_struct *work)
+
+ while ((skb = nxp_dequeue(nxpdev))) {
+ len = serdev_device_write_buf(serdev, skb->data, skb->len);
++ serdev_device_wait_until_sent(serdev, 0);
+ hdev->stat.byte_tx += len;
+
+ skb_pull(skb, len);
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index d228b4d18d5600..77a1fc668ae27b 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -500,12 +500,12 @@ static int sbi_cpuidle_probe(struct platform_device *pdev)
+ int cpu, ret;
+ struct cpuidle_driver *drv;
+ struct cpuidle_device *dev;
+- struct device_node *np, *pds_node;
++ struct device_node *pds_node;
+
+ /* Detect OSI support based on CPU DT nodes */
+ sbi_cpuidle_use_osi = true;
+ for_each_possible_cpu(cpu) {
+- np = of_cpu_device_node_get(cpu);
++ struct device_node *np __free(device_node) = of_cpu_device_node_get(cpu);
+ if (np &&
+ of_property_present(np, "power-domains") &&
+ of_property_present(np, "power-domain-names")) {
+diff --git a/drivers/gpio/gpio-loongson-64bit.c b/drivers/gpio/gpio-loongson-64bit.c
+index 6749d4dd6d6496..7f4d78fd800e7e 100644
+--- a/drivers/gpio/gpio-loongson-64bit.c
++++ b/drivers/gpio/gpio-loongson-64bit.c
+@@ -237,9 +237,9 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
+ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = {
+ .label = "ls2k2000_gpio",
+ .mode = BIT_CTRL_MODE,
+- .conf_offset = 0x84,
+- .in_offset = 0x88,
+- .out_offset = 0x80,
++ .conf_offset = 0x4,
++ .in_offset = 0x8,
++ .out_offset = 0x0,
+ };
+
+ static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {
+diff --git a/drivers/gpio/gpio-virtuser.c b/drivers/gpio/gpio-virtuser.c
+index 91b6352c957cf9..d6244f0d3bc752 100644
+--- a/drivers/gpio/gpio-virtuser.c
++++ b/drivers/gpio/gpio-virtuser.c
+@@ -1410,7 +1410,7 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
+ size_t num_entries = gpio_virtuser_get_lookup_count(dev);
+ struct gpio_virtuser_lookup_entry *entry;
+ struct gpio_virtuser_lookup *lookup;
+- unsigned int i = 0;
++ unsigned int i = 0, idx;
+
+ lockdep_assert_held(&dev->lock);
+
+@@ -1424,12 +1424,12 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
+ return -ENOMEM;
+
+ list_for_each_entry(lookup, &dev->lookup_list, siblings) {
++ idx = 0;
+ list_for_each_entry(entry, &lookup->entry_list, siblings) {
+- table->table[i] =
++ table->table[i++] =
+ GPIO_LOOKUP_IDX(entry->key,
+ entry->offset < 0 ? U16_MAX : entry->offset,
+- lookup->con_id, i, entry->flags);
+- i++;
++ lookup->con_id, idx++, entry->flags);
+ }
+ }
+
+@@ -1439,6 +1439,15 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
+ return 0;
+ }
+
++static void
++gpio_virtuser_remove_lookup_table(struct gpio_virtuser_device *dev)
++{
++ gpiod_remove_lookup_table(dev->lookup_table);
++ kfree(dev->lookup_table->dev_id);
++ kfree(dev->lookup_table);
++ dev->lookup_table = NULL;
++}
++
+ static struct fwnode_handle *
+ gpio_virtuser_make_device_swnode(struct gpio_virtuser_device *dev)
+ {
+@@ -1487,10 +1496,8 @@ gpio_virtuser_device_activate(struct gpio_virtuser_device *dev)
+ pdevinfo.fwnode = swnode;
+
+ ret = gpio_virtuser_make_lookup_table(dev);
+- if (ret) {
+- fwnode_remove_software_node(swnode);
+- return ret;
+- }
++ if (ret)
++ goto err_remove_swnode;
+
+ reinit_completion(&dev->probe_completion);
+ dev->driver_bound = false;
+@@ -1498,23 +1505,31 @@ gpio_virtuser_device_activate(struct gpio_virtuser_device *dev)
+
+ pdev = platform_device_register_full(&pdevinfo);
+ if (IS_ERR(pdev)) {
++ ret = PTR_ERR(pdev);
+ bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier);
+- fwnode_remove_software_node(swnode);
+- return PTR_ERR(pdev);
++ goto err_remove_lookup_table;
+ }
+
+ wait_for_completion(&dev->probe_completion);
+ bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier);
+
+ if (!dev->driver_bound) {
+- platform_device_unregister(pdev);
+- fwnode_remove_software_node(swnode);
+- return -ENXIO;
++ ret = -ENXIO;
++ goto err_unregister_pdev;
+ }
+
+ dev->pdev = pdev;
+
+ return 0;
++
++err_unregister_pdev:
++ platform_device_unregister(pdev);
++err_remove_lookup_table:
++ gpio_virtuser_remove_lookup_table(dev);
++err_remove_swnode:
++ fwnode_remove_software_node(swnode);
++
++ return ret;
+ }
+
+ static void
+@@ -1526,10 +1541,9 @@ gpio_virtuser_device_deactivate(struct gpio_virtuser_device *dev)
+
+ swnode = dev_fwnode(&dev->pdev->dev);
+ platform_device_unregister(dev->pdev);
++ gpio_virtuser_remove_lookup_table(dev);
+ fwnode_remove_software_node(swnode);
+ dev->pdev = NULL;
+- gpiod_remove_lookup_table(dev->lookup_table);
+- kfree(dev->lookup_table);
+ }
+
+ static ssize_t
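The gpio-virtuser rework above is a standard goto-unwind conversion: each setup step that can fail jumps to a label that releases only what was already acquired, so all failures funnel through one cleanup ladder in reverse order instead of duplicating teardown at every exit. A compilable toy version of the shape (every name here is hypothetical):

#include <stdio.h>

struct demo_dev { int dummy; };

static int make_swnode(struct demo_dev *d)          { (void)d; return 0; }
static int make_lookup_table(struct demo_dev *d)    { (void)d; return 0; }
static int register_pdev(struct demo_dev *d)        { (void)d; return -1; } /* forced failure */
static void remove_lookup_table(struct demo_dev *d) { (void)d; puts("undo lookup table"); }
static void remove_swnode(struct demo_dev *d)       { (void)d; puts("undo swnode"); }

static int demo_activate(struct demo_dev *d)
{
	int ret;

	ret = make_swnode(d);
	if (ret)
		return ret;		/* nothing to unwind yet */

	ret = make_lookup_table(d);
	if (ret)
		goto err_remove_swnode;

	ret = register_pdev(d);
	if (ret)
		goto err_remove_lookup_table;

	return 0;

err_remove_lookup_table:		/* unwind strictly in reverse order */
	remove_lookup_table(d);
err_remove_swnode:
	remove_swnode(d);
	return ret;
}

int main(void)
{
	struct demo_dev d = { 0 };

	return demo_activate(&d) ? 1 : 0;
}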
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 7d26a962f811cf..ff5e52025266cd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -567,7 +567,6 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ else
+ remaining_size -= size;
+ }
+- mutex_unlock(&mgr->lock);
+
+ if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
+ struct drm_buddy_block *dcc_block;
+@@ -584,6 +583,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ (u64)vres->base.size,
+ &vres->blocks);
+ }
++ mutex_unlock(&mgr->lock);
+
+ vres->base.start = 0;
+ size = max_t(u64, amdgpu_vram_mgr_blocks_size(&vres->blocks),
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+index 312dfa84f29f84..a8abc309180137 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+@@ -350,10 +350,27 @@ int kfd_dbg_set_mes_debug_mode(struct kfd_process_device *pdd, bool sq_trap_en)
+ {
+ uint32_t spi_dbg_cntl = pdd->spi_dbg_override | pdd->spi_dbg_launch_mode;
+ uint32_t flags = pdd->process->dbg_flags;
++ struct amdgpu_device *adev = pdd->dev->adev;
++ int r;
+
+ if (!kfd_dbg_is_per_vmid_supported(pdd->dev))
+ return 0;
+
++ if (!pdd->proc_ctx_cpu_ptr) {
++ r = amdgpu_amdkfd_alloc_gtt_mem(adev,
++ AMDGPU_MES_PROC_CTX_SIZE,
++ &pdd->proc_ctx_bo,
++ &pdd->proc_ctx_gpu_addr,
++ &pdd->proc_ctx_cpu_ptr,
++ false);
++ if (r) {
++ dev_err(adev->dev,
++ "failed to allocate process context bo\n");
++ return r;
++ }
++ memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
++ }
++
+ return amdgpu_mes_set_shader_debugger(pdd->dev->adev, pdd->proc_ctx_gpu_addr, spi_dbg_cntl,
+ pdd->watch_points, flags, sq_trap_en);
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 3139987b82b100..264bd764f6f27d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1160,7 +1160,8 @@ static void kfd_process_wq_release(struct work_struct *work)
+ */
+ synchronize_rcu();
+ ef = rcu_access_pointer(p->ef);
+- dma_fence_signal(ef);
++ if (ef)
++ dma_fence_signal(ef);
+
+ kfd_process_remove_sysfs(p);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ad3a3aa72b51f3..ea403fece8392c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8393,16 +8393,6 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+ struct amdgpu_crtc *acrtc,
+ struct dm_crtc_state *acrtc_state)
+ {
+- /*
+- * We have no guarantee that the frontend index maps to the same
+- * backend index - some even map to more than one.
+- *
+- * TODO: Use a different interrupt or check DC itself for the mapping.
+- */
+- int irq_type =
+- amdgpu_display_crtc_idx_to_irq_type(
+- adev,
+- acrtc->crtc_id);
+ struct drm_vblank_crtc_config config = {0};
+ struct dc_crtc_timing *timing;
+ int offdelay;
+@@ -8428,28 +8418,7 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+
+ drm_crtc_vblank_on_config(&acrtc->base,
+ &config);
+-
+- amdgpu_irq_get(
+- adev,
+- &adev->pageflip_irq,
+- irq_type);
+-#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+- amdgpu_irq_get(
+- adev,
+- &adev->vline0_irq,
+- irq_type);
+-#endif
+ } else {
+-#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+- amdgpu_irq_put(
+- adev,
+- &adev->vline0_irq,
+- irq_type);
+-#endif
+- amdgpu_irq_put(
+- adev,
+- &adev->pageflip_irq,
+- irq_type);
+ drm_crtc_vblank_off(&acrtc->base);
+ }
+ }
+@@ -11146,8 +11115,8 @@ dm_get_plane_scale(struct drm_plane_state *plane_state,
+ int plane_src_w, plane_src_h;
+
+ dm_get_oriented_plane_size(plane_state, &plane_src_w, &plane_src_h);
+- *out_plane_scale_w = plane_state->crtc_w * 1000 / plane_src_w;
+- *out_plane_scale_h = plane_state->crtc_h * 1000 / plane_src_h;
++ *out_plane_scale_w = plane_src_w ? plane_state->crtc_w * 1000 / plane_src_w : 0;
++ *out_plane_scale_h = plane_src_h ? plane_state->crtc_h * 1000 / plane_src_h : 0;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 9f570d447c2099..6d4ee8fe615c38 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -4421,7 +4421,7 @@ static bool commit_minimal_transition_based_on_current_context(struct dc *dc,
+ struct pipe_split_policy_backup policy;
+ struct dc_state *intermediate_context;
+ struct dc_state *old_current_state = dc->current_state;
+- struct dc_surface_update srf_updates[MAX_SURFACE_NUM] = {0};
++ struct dc_surface_update srf_updates[MAX_SURFACES] = {0};
+ int surface_count;
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+index e006f816ff2f74..1b2cce127981d9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+@@ -483,9 +483,9 @@ bool dc_state_add_plane(
+ if (stream_status == NULL) {
+ dm_error("Existing stream not found; failed to attach surface!\n");
+ goto out;
+- } else if (stream_status->plane_count == MAX_SURFACE_NUM) {
++ } else if (stream_status->plane_count == MAX_SURFACES) {
+ dm_error("Surface: can not attach plane_state %p! Maximum is: %d\n",
+- plane_state, MAX_SURFACE_NUM);
++ plane_state, MAX_SURFACES);
+ goto out;
+ } else if (!otg_master_pipe) {
+ goto out;
+@@ -600,7 +600,7 @@ bool dc_state_rem_all_planes_for_stream(
+ {
+ int i, old_plane_count;
+ struct dc_stream_status *stream_status = NULL;
+- struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
++ struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
+
+ for (i = 0; i < state->stream_count; i++)
+ if (state->streams[i] == stream) {
+@@ -875,7 +875,7 @@ bool dc_state_rem_all_phantom_planes_for_stream(
+ {
+ int i, old_plane_count;
+ struct dc_stream_status *stream_status = NULL;
+- struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
++ struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
+
+ for (i = 0; i < state->stream_count; i++)
+ if (state->streams[i] == phantom_stream) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 7c163aa7e8bd2d..a4f6ff7155c2a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -57,7 +57,7 @@ struct dmub_notification;
+
+ #define DC_VER "3.2.301"
+
+-#define MAX_SURFACES 3
++#define MAX_SURFACES 4
+ #define MAX_PLANES 6
+ #define MAX_STREAMS 6
+ #define MIN_VIEWPORT_SIZE 12
+@@ -1390,7 +1390,7 @@ struct dc_scratch_space {
+ * store current value in plane states so we can still recover
+ * a valid current state during dc update.
+ */
+- struct dc_plane_state plane_states[MAX_SURFACE_NUM];
++ struct dc_plane_state plane_states[MAX_SURFACES];
+
+ struct dc_stream_state stream_state;
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index 14ea47eda0c873..8b9af1a6a03162 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -56,7 +56,7 @@ struct dc_stream_status {
+ int plane_count;
+ int audio_inst;
+ struct timing_sync_info timing_sync_info;
+- struct dc_plane_state *plane_states[MAX_SURFACE_NUM];
++ struct dc_plane_state *plane_states[MAX_SURFACES];
+ bool is_abm_supported;
+ struct mall_stream_config mall_stream_config;
+ bool fpo_in_use;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 6d7989b751e2ce..c8bdbbba44ef9d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -76,7 +76,6 @@ struct dc_perf_trace {
+ unsigned long last_entry_write;
+ };
+
+-#define MAX_SURFACE_NUM 6
+ #define NUM_PIXEL_FORMATS 10
+
+ enum tiling_mode {
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
+index 072bd053960594..6b2ab4ec2b5ffe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
+@@ -66,11 +66,15 @@ static inline double dml_max5(double a, double b, double c, double d, double e)
+
+ static inline double dml_ceil(double a, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_ceil2(a, granularity);
+ }
+
+ static inline double dml_floor(double a, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_floor2(a, granularity);
+ }
+
+@@ -114,11 +118,15 @@ static inline double dml_ceil_2(double f)
+
+ static inline double dml_ceil_ex(double x, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_ceil2(x, granularity);
+ }
+
+ static inline double dml_floor_ex(double x, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_floor2(x, granularity);
+ }
+
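The dml_inline_defs.h guards above matter because dcn_bw_ceil2()/dcn_bw_floor2() round a value to a multiple of the granularity by dividing by it, so a zero granularity previously produced a NaN or infinity that then propagated through the bandwidth math. Roughly, assuming the usual round-to-multiple definition (a sketch, not DML's exact code):

#include <math.h>

/* round a up to a multiple of g; returning 0 for g == 0 mirrors the guard */
static inline double demo_ceil2(double a, double g)
{
	if (g == 0)
		return 0;
	return ceil(a / g) * g;
}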
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
+index 3d29169dd6bbf0..6b3b8803e0aee2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
+@@ -813,7 +813,7 @@ static bool remove_all_phantom_planes_for_stream(struct dml2_context *ctx, struc
+ {
+ int i, old_plane_count;
+ struct dc_stream_status *stream_status = NULL;
+- struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
++ struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
+
+ for (i = 0; i < context->stream_count; i++)
+ if (context->streams[i] == stream) {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+index e58220a7ee2f70..30178dde6d49fc 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+@@ -302,5 +302,7 @@ int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
+ int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu,
+ enum smu_clk_type clk_type,
+ uint32_t *value);
++
++void smu_v13_0_interrupt_work(struct smu_context *smu);
+ #endif
+ #endif
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index e17466cc19522d..2024a85fa11bd5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -1320,11 +1320,11 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
+ return 0;
+ }
+
+-static int smu_v13_0_ack_ac_dc_interrupt(struct smu_context *smu)
++void smu_v13_0_interrupt_work(struct smu_context *smu)
+ {
+- return smu_cmn_send_smc_msg(smu,
+- SMU_MSG_ReenableAcDcInterrupt,
+- NULL);
++ smu_cmn_send_smc_msg(smu,
++ SMU_MSG_ReenableAcDcInterrupt,
++ NULL);
+ }
+
+ #define THM_11_0__SRCID__THM_DIG_THERM_L2H 0 /* ASIC_TEMP > CG_THERMAL_INT.DIG_THERM_INTH */
+@@ -1377,12 +1377,12 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
+ switch (ctxid) {
+ case SMU_IH_INTERRUPT_CONTEXT_ID_AC:
+ dev_dbg(adev->dev, "Switched to AC mode!\n");
+- smu_v13_0_ack_ac_dc_interrupt(smu);
++ schedule_work(&smu->interrupt_work);
+ adev->pm.ac_power = true;
+ break;
+ case SMU_IH_INTERRUPT_CONTEXT_ID_DC:
+ dev_dbg(adev->dev, "Switched to DC mode!\n");
+- smu_v13_0_ack_ac_dc_interrupt(smu);
++ schedule_work(&smu->interrupt_work);
+ adev->pm.ac_power = false;
+ break;
+ case SMU_IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING:
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index a9373968807164..cd2cf0ffc0f5cb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -3126,6 +3126,7 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
+ .is_asic_wbrf_supported = smu_v13_0_0_wbrf_support_check,
+ .enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
+ .set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
++ .interrupt_work = smu_v13_0_interrupt_work,
+ };
+
+ void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 1aedfafa507f7e..7c753d795287d9 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2704,6 +2704,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
+ .is_asic_wbrf_supported = smu_v13_0_7_wbrf_support_check,
+ .enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
+ .set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
++ .interrupt_work = smu_v13_0_interrupt_work,
+ };
+
+ void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/mediatek/Kconfig b/drivers/gpu/drm/mediatek/Kconfig
+index 417ac8c9af4194..a749c01199d40e 100644
+--- a/drivers/gpu/drm/mediatek/Kconfig
++++ b/drivers/gpu/drm/mediatek/Kconfig
+@@ -13,9 +13,6 @@ config DRM_MEDIATEK
+ select DRM_BRIDGE_CONNECTOR
+ select DRM_MIPI_DSI
+ select DRM_PANEL
+- select MEMORY
+- select MTK_SMI
+- select PHY_MTK_MIPI_DSI
+ select VIDEOMODE_HELPERS
+ help
+ Choose this option if you have a Mediatek SoCs.
+@@ -26,7 +23,6 @@ config DRM_MEDIATEK
+ config DRM_MEDIATEK_DP
+ tristate "DRM DPTX Support for MediaTek SoCs"
+ depends on DRM_MEDIATEK
+- select PHY_MTK_DP
+ select DRM_DISPLAY_HELPER
+ select DRM_DISPLAY_DP_HELPER
+ select DRM_DISPLAY_DP_AUX_BUS
+@@ -37,6 +33,5 @@ config DRM_MEDIATEK_HDMI
+ tristate "DRM HDMI Support for Mediatek SoCs"
+ depends on DRM_MEDIATEK
+ select SND_SOC_HDMI_CODEC if SND_SOC
+- select PHY_MTK_HDMI
+ help
+ DRM/KMS HDMI driver for Mediatek SoCs
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index eb0e1233ad0435..5674f5707cca83 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -112,6 +112,11 @@ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
+
+ drm_crtc_handle_vblank(&mtk_crtc->base);
+
++#if IS_REACHABLE(CONFIG_MTK_CMDQ)
++ if (mtk_crtc->cmdq_client.chan)
++ return;
++#endif
++
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) {
+ mtk_crtc_finish_page_flip(mtk_crtc);
+@@ -284,10 +289,8 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ state = to_mtk_crtc_state(mtk_crtc->base.state);
+
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+- if (mtk_crtc->config_updating) {
+- spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++ if (mtk_crtc->config_updating)
+ goto ddp_cmdq_cb_out;
+- }
+
+ state->pending_config = false;
+
+@@ -315,10 +318,15 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ mtk_crtc->pending_async_planes = false;
+ }
+
+- spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+-
+ ddp_cmdq_cb_out:
+
++ if (mtk_crtc->pending_needs_vblank) {
++ mtk_crtc_finish_page_flip(mtk_crtc);
++ mtk_crtc->pending_needs_vblank = false;
++ }
++
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ mtk_crtc->cmdq_vblank_cnt = 0;
+ wake_up(&mtk_crtc->cb_blocking_queue);
+ }
+@@ -606,13 +614,18 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+ */
+ mtk_crtc->cmdq_vblank_cnt = 3;
+
++ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
++ mtk_crtc->config_updating = false;
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
+ mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+ }
+-#endif
++#else
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ mtk_crtc->config_updating = false;
+ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++#endif
+
+ mutex_unlock(&mtk_crtc->hw_lock);
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index e0c0bb01f65ae0..19b0d508398198 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -460,6 +460,29 @@ static unsigned int mtk_ovl_fmt_convert(struct mtk_disp_ovl *ovl,
+ }
+ }
+
++static void mtk_ovl_afbc_layer_config(struct mtk_disp_ovl *ovl,
++ unsigned int idx,
++ struct mtk_plane_pending_state *pending,
++ struct cmdq_pkt *cmdq_pkt)
++{
++ unsigned int pitch_msb = pending->pitch >> 16;
++ unsigned int hdr_pitch = pending->hdr_pitch;
++ unsigned int hdr_addr = pending->hdr_addr;
++
++ if (pending->modifier != DRM_FORMAT_MOD_LINEAR) {
++ mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
++ DISP_REG_OVL_HDR_ADDR(ovl, idx));
++ mtk_ddp_write_relaxed(cmdq_pkt,
++ OVL_PITCH_MSB_2ND_SUBBUF | pitch_msb,
++ &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
++ mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs,
++ DISP_REG_OVL_HDR_PITCH(ovl, idx));
++ } else {
++ mtk_ddp_write_relaxed(cmdq_pkt, pitch_msb,
++ &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
++ }
++}
++
+ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ struct mtk_plane_state *state,
+ struct cmdq_pkt *cmdq_pkt)
+@@ -467,25 +490,14 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+ struct mtk_plane_pending_state *pending = &state->pending;
+ unsigned int addr = pending->addr;
+- unsigned int hdr_addr = pending->hdr_addr;
+- unsigned int pitch = pending->pitch;
+- unsigned int hdr_pitch = pending->hdr_pitch;
++ unsigned int pitch_lsb = pending->pitch & GENMASK(15, 0);
+ unsigned int fmt = pending->format;
++ unsigned int rotation = pending->rotation;
+ unsigned int offset = (pending->y << 16) | pending->x;
+ unsigned int src_size = (pending->height << 16) | pending->width;
+ unsigned int blend_mode = state->base.pixel_blend_mode;
+ unsigned int ignore_pixel_alpha = 0;
+ unsigned int con;
+- bool is_afbc = pending->modifier != DRM_FORMAT_MOD_LINEAR;
+- union overlay_pitch {
+- struct split_pitch {
+- u16 lsb;
+- u16 msb;
+- } split_pitch;
+- u32 pitch;
+- } overlay_pitch;
+-
+- overlay_pitch.pitch = pitch;
+
+ if (!pending->enable) {
+ mtk_ovl_layer_off(dev, idx, cmdq_pkt);
+@@ -513,22 +525,30 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ ignore_pixel_alpha = OVL_CONST_BLEND;
+ }
+
+- if (pending->rotation & DRM_MODE_REFLECT_Y) {
++ /*
++ * Treat rotate 180 as flip x + flip y, and XOR the original rotation value
++ * with flip x + flip y so that both can be supported at the same time.
++ */
++ if (rotation & DRM_MODE_ROTATE_180)
++ rotation ^= DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;
++
++ if (rotation & DRM_MODE_REFLECT_Y) {
+ con |= OVL_CON_VIRT_FLIP;
+ addr += (pending->height - 1) * pending->pitch;
+ }
+
+- if (pending->rotation & DRM_MODE_REFLECT_X) {
++ if (rotation & DRM_MODE_REFLECT_X) {
+ con |= OVL_CON_HORZ_FLIP;
+ addr += pending->pitch - 1;
+ }
+
+ if (ovl->data->supports_afbc)
+- mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, is_afbc);
++ mtk_ovl_set_afbc(ovl, cmdq_pkt, idx,
++ pending->modifier != DRM_FORMAT_MOD_LINEAR);
+
+ mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs,
+ DISP_REG_OVL_CON(idx));
+- mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb | ignore_pixel_alpha,
++ mtk_ddp_write_relaxed(cmdq_pkt, pitch_lsb | ignore_pixel_alpha,
+ &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx));
+ mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs,
+ DISP_REG_OVL_SRC_SIZE(idx));
+@@ -537,19 +557,8 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs,
+ DISP_REG_OVL_ADDR(ovl, idx));
+
+- if (is_afbc) {
+- mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
+- DISP_REG_OVL_HDR_ADDR(ovl, idx));
+- mtk_ddp_write_relaxed(cmdq_pkt,
+- OVL_PITCH_MSB_2ND_SUBBUF | overlay_pitch.split_pitch.msb,
+- &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
+- mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs,
+- DISP_REG_OVL_HDR_PITCH(ovl, idx));
+- } else {
+- mtk_ddp_write_relaxed(cmdq_pkt,
+- overlay_pitch.split_pitch.msb,
+- &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
+- }
++ if (ovl->data->supports_afbc)
++ mtk_ovl_afbc_layer_config(ovl, idx, pending, cmdq_pkt);
+
+ mtk_ovl_set_bit_depth(dev, idx, fmt, cmdq_pkt);
+ mtk_ovl_layer_on(dev, idx, cmdq_pkt);
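
The rotation hunk above leans on the identity that a 180-degree rotation equals a horizontal flip plus a vertical flip, so XOR-ing the two reflect bits folds DRM_MODE_ROTATE_180 into the existing flip paths. An illustrative fragment (not part of the patch) showing how a combined value collapses:

    #include <drm/drm_blend.h>

    unsigned int rotation = DRM_MODE_ROTATE_180 | DRM_MODE_REFLECT_X;

    if (rotation & DRM_MODE_ROTATE_180)
            rotation ^= DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;

    /* REFLECT_X was already set, so the XOR clears it and sets REFLECT_Y:
     * 180 degrees plus an X flip reduces to a single Y flip. */
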
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index f2bee617f063a7..cad65ea851edc7 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -543,18 +543,16 @@ static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp,
+ enum dp_pixelformat color_format)
+ {
+ u32 val;
+-
+- /* update MISC0 */
+- mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034,
+- color_format << DP_TEST_COLOR_FORMAT_SHIFT,
+- DP_TEST_COLOR_FORMAT_MASK);
++ u32 misc0_color;
+
+ switch (color_format) {
+ case DP_PIXELFORMAT_YUV422:
+ val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422;
++ misc0_color = DP_COLOR_FORMAT_YCbCr422;
+ break;
+ case DP_PIXELFORMAT_RGB:
+ val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB;
++ misc0_color = DP_COLOR_FORMAT_RGB;
+ break;
+ default:
+ drm_warn(mtk_dp->drm_dev, "Unsupported color format: %d\n",
+@@ -562,6 +560,11 @@ static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp,
+ return -EINVAL;
+ }
+
++ /* update MISC0 */
++ mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034,
++ misc0_color,
++ DP_TEST_COLOR_FORMAT_MASK);
++
+ mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C,
+ val, PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK);
+ return 0;
+@@ -2100,7 +2103,6 @@ static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge)
+ struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge);
+ enum drm_connector_status ret = connector_status_disconnected;
+ bool enabled = mtk_dp->enabled;
+- u8 sink_count = 0;
+
+ if (!mtk_dp->train_info.cable_plugged_in)
+ return ret;
+@@ -2115,8 +2117,8 @@ static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge)
+ * function, we just need to check the HPD connection to check
+ * whether we connect to a sink device.
+ */
+- drm_dp_dpcd_readb(&mtk_dp->aux, DP_SINK_COUNT, &sink_count);
+- if (DP_GET_SINK_COUNT(sink_count))
++
++ if (drm_dp_read_sink_count(&mtk_dp->aux) > 0)
+ ret = connector_status_connected;
+
+ if (!enabled)
+@@ -2408,12 +2410,19 @@ mtk_dp_bridge_mode_valid(struct drm_bridge *bridge,
+ {
+ struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge);
+ u32 bpp = info->color_formats & DRM_COLOR_FORMAT_YCBCR422 ? 16 : 24;
+- u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) *
+- drm_dp_max_lane_count(mtk_dp->rx_cap),
+- drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) *
+- mtk_dp->max_lanes);
++ u32 lane_count_min = mtk_dp->train_info.lane_count;
++ u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) *
++ lane_count_min;
+
+- if (rate < mode->clock * bpp / 8)
++ /*
++ * FEC overhead is approximately 2.4% from DP 1.4a spec 2.2.1.4.2.
++ * The down-spread amplitude shall either be disabled (0.0%) or up
++ * to 0.5% from 1.4a 3.5.2.6. These add up to approximately 3% total overhead.
++ *
++ * Because rate is already divided by 10,
++ * mode->clock does not need to be multiplied by 10.
++ */
++ if ((rate * 97 / 100) < (mode->clock * bpp / 8))
+ return MODE_CLOCK_HIGH;
+
+ return MODE_OK;
+@@ -2454,10 +2463,9 @@ static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
+ struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ struct drm_display_info *display_info =
+ &conn_state->connector->display_info;
+- u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) *
+- drm_dp_max_lane_count(mtk_dp->rx_cap),
+- drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) *
+- mtk_dp->max_lanes);
++ u32 lane_count_min = mtk_dp->train_info.lane_count;
++ u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) *
++ lane_count_min;
+
+ *num_input_fmts = 0;
+
+@@ -2466,8 +2474,8 @@ static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
+ * datarate of YUV422 and sink device supports YUV422, we output YUV422
+ * format. Use this condition, we can support more resolution.
+ */
+- if ((rate < (mode->clock * 24 / 8)) &&
+- (rate > (mode->clock * 16 / 8)) &&
++ if (((rate * 97 / 100) < (mode->clock * 24 / 8)) &&
++ ((rate * 97 / 100) > (mode->clock * 16 / 8)) &&
+ (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) {
+ input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL);
+ if (!input_fmts)
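
With the two hunks above, mode filtering uses the trained link's throughput derated by the roughly 3% FEC/down-spread overhead. A worked example, assuming a hypothetical 2-lane HBR2 (5.4 Gbps) link for which drm_dp_bw_code_to_link_rate() returns 540000 per lane (the "already divided by 10" units the comment refers to):

    /* rate   = 540000 * 2 (lanes)     = 1080000
     * usable = rate * 97 / 100        = 1047600
     * 4K@30 RGB: 297000 * 24 / 8      =  891000  ->  MODE_OK
     * 4K@60 RGB: 594000 * 24 / 8      = 1782000  ->  MODE_CLOCK_HIGH
     */
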
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 2c1cb335d8623f..4e93fd075e03cc 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -673,6 +673,8 @@ static int mtk_drm_bind(struct device *dev)
+ err_free:
+ private->drm = NULL;
+ drm_dev_put(drm);
++ for (i = 0; i < private->data->mmsys_dev_num; i++)
++ private->all_drm_private[i]->drm = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index eeec641cab60db..b9b7fd08b7d7e9 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -139,11 +139,11 @@
+ #define CLK_HS_POST GENMASK(15, 8)
+ #define CLK_HS_EXIT GENMASK(23, 16)
+
+-#define DSI_VM_CMD_CON 0x130
++/* DSI_VM_CMD_CON */
+ #define VM_CMD_EN BIT(0)
+ #define TS_VFP_EN BIT(5)
+
+-#define DSI_SHADOW_DEBUG 0x190U
++/* DSI_SHADOW_DEBUG */
+ #define FORCE_COMMIT BIT(0)
+ #define BYPASS_SHADOW BIT(1)
+
+@@ -187,6 +187,8 @@ struct phy;
+
+ struct mtk_dsi_driver_data {
+ const u32 reg_cmdq_off;
++ const u32 reg_vm_cmd_off;
++ const u32 reg_shadow_dbg_off;
+ bool has_shadow_ctl;
+ bool has_size_ctl;
+ bool cmdq_long_packet_ctl;
+@@ -246,23 +248,22 @@ static void mtk_dsi_phy_timconfig(struct mtk_dsi *dsi)
+ u32 data_rate_mhz = DIV_ROUND_UP(dsi->data_rate, HZ_PER_MHZ);
+ struct mtk_phy_timing *timing = &dsi->phy_timing;
+
+- timing->lpx = (80 * data_rate_mhz / (8 * 1000)) + 1;
+- timing->da_hs_prepare = (59 * data_rate_mhz + 4 * 1000) / 8000 + 1;
+- timing->da_hs_zero = (163 * data_rate_mhz + 11 * 1000) / 8000 + 1 -
++ timing->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1;
++ timing->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000;
++ timing->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 -
+ timing->da_hs_prepare;
+- timing->da_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1;
++ timing->da_hs_trail = timing->da_hs_prepare + 1;
+
+- timing->ta_go = 4 * timing->lpx;
+- timing->ta_sure = 3 * timing->lpx / 2;
+- timing->ta_get = 5 * timing->lpx;
+- timing->da_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1;
++ timing->ta_go = 4 * timing->lpx - 2;
++ timing->ta_sure = timing->lpx + 2;
++ timing->ta_get = 4 * timing->lpx;
++ timing->da_hs_exit = 2 * timing->lpx + 1;
+
+- timing->clk_hs_prepare = (57 * data_rate_mhz / (8 * 1000)) + 1;
+- timing->clk_hs_post = (65 * data_rate_mhz + 53 * 1000) / 8000 + 1;
+- timing->clk_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1;
+- timing->clk_hs_zero = (330 * data_rate_mhz / (8 * 1000)) + 1 -
+- timing->clk_hs_prepare;
+- timing->clk_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1;
++ timing->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000);
++ timing->clk_hs_post = timing->clk_hs_prepare + 8;
++ timing->clk_hs_trail = timing->clk_hs_prepare;
++ timing->clk_hs_zero = timing->clk_hs_trail * 4;
++ timing->clk_hs_exit = 2 * timing->clk_hs_trail;
+
+ timcon0 = FIELD_PREP(LPX, timing->lpx) |
+ FIELD_PREP(HS_PREP, timing->da_hs_prepare) |
+@@ -367,8 +368,8 @@ static void mtk_dsi_set_mode(struct mtk_dsi *dsi)
+
+ static void mtk_dsi_set_vm_cmd(struct mtk_dsi *dsi)
+ {
+- mtk_dsi_mask(dsi, DSI_VM_CMD_CON, VM_CMD_EN, VM_CMD_EN);
+- mtk_dsi_mask(dsi, DSI_VM_CMD_CON, TS_VFP_EN, TS_VFP_EN);
++ mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, VM_CMD_EN, VM_CMD_EN);
++ mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, TS_VFP_EN, TS_VFP_EN);
+ }
+
+ static void mtk_dsi_rxtx_control(struct mtk_dsi *dsi)
+@@ -714,7 +715,7 @@ static int mtk_dsi_poweron(struct mtk_dsi *dsi)
+
+ if (dsi->driver_data->has_shadow_ctl)
+ writel(FORCE_COMMIT | BYPASS_SHADOW,
+- dsi->regs + DSI_SHADOW_DEBUG);
++ dsi->regs + dsi->driver_data->reg_shadow_dbg_off);
+
+ mtk_dsi_reset_engine(dsi);
+ mtk_dsi_phy_timconfig(dsi);
+@@ -1255,26 +1256,36 @@ static void mtk_dsi_remove(struct platform_device *pdev)
+
+ static const struct mtk_dsi_driver_data mt8173_dsi_driver_data = {
+ .reg_cmdq_off = 0x200,
++ .reg_vm_cmd_off = 0x130,
++ .reg_shadow_dbg_off = 0x190
+ };
+
+ static const struct mtk_dsi_driver_data mt2701_dsi_driver_data = {
+ .reg_cmdq_off = 0x180,
++ .reg_vm_cmd_off = 0x130,
++ .reg_shadow_dbg_off = 0x190
+ };
+
+ static const struct mtk_dsi_driver_data mt8183_dsi_driver_data = {
+ .reg_cmdq_off = 0x200,
++ .reg_vm_cmd_off = 0x130,
++ .reg_shadow_dbg_off = 0x190,
+ .has_shadow_ctl = true,
+ .has_size_ctl = true,
+ };
+
+ static const struct mtk_dsi_driver_data mt8186_dsi_driver_data = {
+ .reg_cmdq_off = 0xd00,
++ .reg_vm_cmd_off = 0x200,
++ .reg_shadow_dbg_off = 0xc00,
+ .has_shadow_ctl = true,
+ .has_size_ctl = true,
+ };
+
+ static const struct mtk_dsi_driver_data mt8188_dsi_driver_data = {
+ .reg_cmdq_off = 0xd00,
++ .reg_vm_cmd_off = 0x200,
++ .reg_shadow_dbg_off = 0xc00,
+ .has_shadow_ctl = true,
+ .has_size_ctl = true,
+ .cmdq_long_packet_ctl = true,
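
The reworked mtk_dsi_phy_timconfig() above is pure integer arithmetic on the per-lane rate in MHz, so the new timings are easy to sanity-check by hand. As a hypothetical worked example at data_rate_mhz = 960, with all divisions truncating as in C:

    /* lpx            = 60*960/8000 + 1                  = 8
     * da_hs_prepare  = (80*960 + 4000)/8000             = 10
     * da_hs_zero     = (170*960 + 10000)/8000 + 1 - 10  = 12
     * da_hs_trail    = da_hs_prepare + 1                = 11
     * ta_go = 30, ta_sure = 10, ta_get = 32, da_hs_exit = 17
     * clk_hs_prepare = 70*960/8000                      = 8
     * clk_hs_post = 16, clk_hs_trail = 8, clk_hs_zero = 32, clk_hs_exit = 16
     */
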
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index d5fd6a089b7ccc..b940688c361356 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -386,6 +386,10 @@ int xe_gt_init_early(struct xe_gt *gt)
+ xe_force_wake_init_gt(gt, gt_to_fw(gt));
+ spin_lock_init(>->global_invl_lock);
+
++ err = xe_gt_tlb_invalidation_init_early(gt);
++ if (err)
++ return err;
++
+ return 0;
+ }
+
+@@ -585,10 +589,6 @@ int xe_gt_init(struct xe_gt *gt)
+ xe_hw_fence_irq_init(>->fence_irq[i]);
+ }
+
+- err = xe_gt_tlb_invalidation_init(gt);
+- if (err)
+- return err;
+-
+ err = xe_gt_pagefault_init(gt);
+ if (err)
+ return err;
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 7e385940df0863..ace1fe831a7b72 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -106,7 +106,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ }
+
+ /**
+- * xe_gt_tlb_invalidation_init - Initialize GT TLB invalidation state
++ * xe_gt_tlb_invalidation_init_early - Initialize GT TLB invalidation state
+ * @gt: graphics tile
+ *
+ * Initialize GT TLB invalidation state, purely software initialization, should
+@@ -114,7 +114,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ *
+ * Return: 0 on success, negative error code on error.
+ */
+-int xe_gt_tlb_invalidation_init(struct xe_gt *gt)
++int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
+ {
+ gt->tlb_invalidation.seqno = 1;
+ INIT_LIST_HEAD(>->tlb_invalidation.pending_fences);
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+index 00b1c6c01e8d95..672acfcdf0d70d 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+@@ -14,7 +14,8 @@ struct xe_gt;
+ struct xe_guc;
+ struct xe_vma;
+
+-int xe_gt_tlb_invalidation_init(struct xe_gt *gt);
++int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
++
+ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
+ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
+ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 6bdd21aa005ab8..2a4ec55ddb47ed 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -165,6 +165,7 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
+ {
+ u8 scsi_cmd[MAX_COMMAND_SIZE];
+ enum req_op op;
++ int err;
+
+ memset(scsi_cmd, 0, sizeof(scsi_cmd));
+ scsi_cmd[0] = ATA_16;
+@@ -192,8 +193,11 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
+ scsi_cmd[12] = lba_high;
+ scsi_cmd[14] = ata_command;
+
+- return scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
+- ATA_SECT_SIZE, HZ, 5, NULL);
++ err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
++ ATA_SECT_SIZE, HZ, 5, NULL);
++ if (err > 0)
++ err = -EIO;
++ return err;
+ }
+
+ static int drivetemp_ata_command(struct drivetemp_data *st, u8 feature,
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index b79c48d46cccf8..8d94bc2b1cac35 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -917,6 +917,9 @@ static int ad7124_setup(struct ad7124_state *st)
+ * set all channels to this default value.
+ */
+ ad7124_set_channel_odr(st, i, 10);
++
++ /* Disable all channels to prevent unintended conversions. */
++ ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, 0);
+ }
+
+ ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control);
+diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c
+index 0702ec71aa2933..5a65be00dd190f 100644
+--- a/drivers/iio/adc/ad7173.c
++++ b/drivers/iio/adc/ad7173.c
+@@ -198,6 +198,7 @@ struct ad7173_channel {
+
+ struct ad7173_state {
+ struct ad_sigma_delta sd;
++ struct ad_sigma_delta_info sigma_delta_info;
+ const struct ad7173_device_info *info;
+ struct ad7173_channel *channels;
+ struct regulator_bulk_data regulators[3];
+@@ -733,7 +734,7 @@ static int ad7173_disable_one(struct ad_sigma_delta *sd, unsigned int chan)
+ return ad_sd_write_reg(sd, AD7173_REG_CH(chan), 2, 0);
+ }
+
+-static struct ad_sigma_delta_info ad7173_sigma_delta_info = {
++static const struct ad_sigma_delta_info ad7173_sigma_delta_info = {
+ .set_channel = ad7173_set_channel,
+ .append_status = ad7173_append_status,
+ .disable_all = ad7173_disable_all,
+@@ -1371,7 +1372,7 @@ static int ad7173_fw_parse_device_config(struct iio_dev *indio_dev)
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Interrupt 'rdy' is required\n");
+
+- ad7173_sigma_delta_info.irq_line = ret;
++ st->sigma_delta_info.irq_line = ret;
+
+ return ad7173_fw_parse_channel_config(indio_dev);
+ }
+@@ -1404,8 +1405,9 @@ static int ad7173_probe(struct spi_device *spi)
+ spi->mode = SPI_MODE_3;
+ spi_setup(spi);
+
+- ad7173_sigma_delta_info.num_slots = st->info->num_configs;
+- ret = ad_sd_init(&st->sd, indio_dev, spi, &ad7173_sigma_delta_info);
++ st->sigma_delta_info = ad7173_sigma_delta_info;
++ st->sigma_delta_info.num_slots = st->info->num_configs;
++ ret = ad_sd_init(&st->sd, indio_dev, spi, &st->sigma_delta_info);
+ if (ret)
+ return ret;
+
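
The ad7173 hunks above fix a shared-state bug: ad7173_sigma_delta_info used to be a single writable static that each probe mutated (irq_line, num_slots), so two instances could clobber each other; the template is now const and copied into per-device state before customization. A standalone sketch of that copy-then-customize pattern, with hypothetical names:

    struct tmpl {
            int irq_line;
            int num_slots;
    };

    static const struct tmpl shared_tmpl = { .irq_line = -1, .num_slots = 1 };

    struct dev_state {
            struct tmpl tmpl;
    };

    static void probe_one(struct dev_state *st, int irq, int slots)
    {
            st->tmpl = shared_tmpl;         /* per-device copy of the template */
            st->tmpl.irq_line = irq;        /* safe: no cross-device aliasing */
            st->tmpl.num_slots = slots;
    }
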
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 9c39acff17e658..6ab0e88f6895af 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -979,7 +979,7 @@ static int at91_ts_register(struct iio_dev *idev,
+ return ret;
+
+ err:
+- input_free_device(st->ts_input);
++ input_free_device(input);
+ return ret;
+ }
+
+diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
+index 240cfa391674e7..dfd47a6e1f4a1b 100644
+--- a/drivers/iio/adc/rockchip_saradc.c
++++ b/drivers/iio/adc/rockchip_saradc.c
+@@ -368,6 +368,8 @@ static irqreturn_t rockchip_saradc_trigger_handler(int irq, void *p)
+ int ret;
+ int i, j = 0;
+
++ memset(&data, 0, sizeof(data));
++
+ mutex_lock(&info->lock);
+
+ iio_for_each_active_channel(i_dev, i) {
+diff --git a/drivers/iio/adc/ti-ads1119.c b/drivers/iio/adc/ti-ads1119.c
+index 1c760637514928..6637cb6a6dda4a 100644
+--- a/drivers/iio/adc/ti-ads1119.c
++++ b/drivers/iio/adc/ti-ads1119.c
+@@ -500,12 +500,14 @@ static irqreturn_t ads1119_trigger_handler(int irq, void *private)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct ads1119_state *st = iio_priv(indio_dev);
+ struct {
+- unsigned int sample;
++ s16 sample;
+ s64 timestamp __aligned(8);
+ } scan;
+ unsigned int index;
+ int ret;
+
++ memset(&scan, 0, sizeof(scan));
++
+ if (!iio_trigger_using_own(indio_dev)) {
+ index = find_first_bit(indio_dev->active_scan_mask,
+ iio_get_masklength(indio_dev));
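
This fix and the neighbouring IIO ones (rockchip_saradc, kmx61, bh1745, vcnl4035, zpa2326, ads8688, the dummy driver) all close the same hole: the scan object pushed to the buffer contains padding bytes, here six of them between the s16 sample and the 8-byte-aligned timestamp, which would otherwise carry uninitialized stack or heap data to userspace. The recurring shape of the fix, sketched with a hypothetical read_hw_sample() helper:

    struct {
            s16 sample;                     /* 2 bytes + 6 bytes of padding */
            s64 timestamp __aligned(8);
    } scan;

    memset(&scan, 0, sizeof(scan));         /* clears the padding too */
    scan.sample = read_hw_sample();         /* hypothetical helper */
    iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
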
+diff --git a/drivers/iio/adc/ti-ads124s08.c b/drivers/iio/adc/ti-ads124s08.c
+index 425b48d8986f52..f452f57f11c956 100644
+--- a/drivers/iio/adc/ti-ads124s08.c
++++ b/drivers/iio/adc/ti-ads124s08.c
+@@ -183,9 +183,9 @@ static int ads124s_reset(struct iio_dev *indio_dev)
+ struct ads124s_private *priv = iio_priv(indio_dev);
+
+ if (priv->reset_gpio) {
+- gpiod_set_value(priv->reset_gpio, 0);
++ gpiod_set_value_cansleep(priv->reset_gpio, 0);
+ udelay(200);
+- gpiod_set_value(priv->reset_gpio, 1);
++ gpiod_set_value_cansleep(priv->reset_gpio, 1);
+ } else {
+ return ads124s_write_cmd(indio_dev, ADS124S08_CMD_RESET);
+ }
+diff --git a/drivers/iio/adc/ti-ads1298.c b/drivers/iio/adc/ti-ads1298.c
+index 0f9f75baaebbf7..d00cd169e8dfd5 100644
+--- a/drivers/iio/adc/ti-ads1298.c
++++ b/drivers/iio/adc/ti-ads1298.c
+@@ -613,6 +613,8 @@ static int ads1298_init(struct iio_dev *indio_dev)
+ }
+ indio_dev->name = devm_kasprintf(dev, GFP_KERNEL, "ads129%u%s",
+ indio_dev->num_channels, suffix);
++ if (!indio_dev->name)
++ return -ENOMEM;
+
+ /* Enable internal test signal, double amplitude, double frequency */
+ ret = regmap_write(priv->regmap, ADS1298_REG_CONFIG2,
+diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
+index 9b1814f1965a37..a31658b760a4ae 100644
+--- a/drivers/iio/adc/ti-ads8688.c
++++ b/drivers/iio/adc/ti-ads8688.c
+@@ -381,7 +381,7 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ /* Ensure naturally aligned timestamp */
+- u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8);
++ u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8) = { };
+ int i, j = 0;
+
+ iio_for_each_active_channel(indio_dev, i) {
+diff --git a/drivers/iio/dummy/iio_simple_dummy_buffer.c b/drivers/iio/dummy/iio_simple_dummy_buffer.c
+index 4ca3f1aaff9996..288880346707a2 100644
+--- a/drivers/iio/dummy/iio_simple_dummy_buffer.c
++++ b/drivers/iio/dummy/iio_simple_dummy_buffer.c
+@@ -48,7 +48,7 @@ static irqreturn_t iio_simple_dummy_trigger_h(int irq, void *p)
+ int i = 0, j;
+ u16 *data;
+
+- data = kmalloc(indio_dev->scan_bytes, GFP_KERNEL);
++ data = kzalloc(indio_dev->scan_bytes, GFP_KERNEL);
+ if (!data)
+ goto done;
+
+diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
+index c28d17ca6f5ee0..aabc5e2d788d15 100644
+--- a/drivers/iio/gyro/fxas21002c_core.c
++++ b/drivers/iio/gyro/fxas21002c_core.c
+@@ -730,14 +730,21 @@ static irqreturn_t fxas21002c_trigger_handler(int irq, void *p)
+ int ret;
+
+ mutex_lock(&data->lock);
++ ret = fxas21002c_pm_get(data);
++ if (ret < 0)
++ goto out_unlock;
++
+ ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB,
+ data->buffer, CHANNEL_SCAN_MAX * sizeof(s16));
+ if (ret < 0)
+- goto out_unlock;
++ goto out_pm_put;
+
+ iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
+ data->timestamp);
+
++out_pm_put:
++ fxas21002c_pm_put(data);
++
+ out_unlock:
+ mutex_unlock(&data->lock);
+
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+index 3a07e43e4cf154..18787a43477b89 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+@@ -403,6 +403,7 @@ struct inv_icm42600_sensor_state {
+ typedef int (*inv_icm42600_bus_setup)(struct inv_icm42600_state *);
+
+ extern const struct regmap_config inv_icm42600_regmap_config;
++extern const struct regmap_config inv_icm42600_spi_regmap_config;
+ extern const struct dev_pm_ops inv_icm42600_pm_ops;
+
+ const struct iio_mount_matrix *
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+index c3924cc6190ee4..a0bed49c3ba674 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+@@ -87,6 +87,21 @@ const struct regmap_config inv_icm42600_regmap_config = {
+ };
+ EXPORT_SYMBOL_NS_GPL(inv_icm42600_regmap_config, IIO_ICM42600);
+
++/* define specific regmap for SPI not supporting burst write */
++const struct regmap_config inv_icm42600_spi_regmap_config = {
++ .name = "inv_icm42600",
++ .reg_bits = 8,
++ .val_bits = 8,
++ .max_register = 0x4FFF,
++ .ranges = inv_icm42600_regmap_ranges,
++ .num_ranges = ARRAY_SIZE(inv_icm42600_regmap_ranges),
++ .volatile_table = inv_icm42600_regmap_volatile_accesses,
++ .rd_noinc_table = inv_icm42600_regmap_rd_noinc_accesses,
++ .cache_type = REGCACHE_RBTREE,
++ .use_single_write = true,
++};
++EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, IIO_ICM42600);
++
+ struct inv_icm42600_hw {
+ uint8_t whoami;
+ const char *name;
+@@ -822,6 +837,8 @@ static int inv_icm42600_suspend(struct device *dev)
+ static int inv_icm42600_resume(struct device *dev)
+ {
+ struct inv_icm42600_state *st = dev_get_drvdata(dev);
++ struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro);
++ struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel);
+ int ret;
+
+ mutex_lock(&st->lock);
+@@ -842,9 +859,12 @@ static int inv_icm42600_resume(struct device *dev)
+ goto out_unlock;
+
+ /* restore FIFO data streaming */
+- if (st->fifo.on)
++ if (st->fifo.on) {
++ inv_sensors_timestamp_reset(&gyro_st->ts);
++ inv_sensors_timestamp_reset(&accel_st->ts);
+ ret = regmap_write(st->map, INV_ICM42600_REG_FIFO_CONFIG,
+ INV_ICM42600_FIFO_CONFIG_STREAM);
++ }
+
+ out_unlock:
+ mutex_unlock(&st->lock);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
+index eae5ff7a3cc102..36fe8d94ec1cc6 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
+@@ -59,7 +59,8 @@ static int inv_icm42600_probe(struct spi_device *spi)
+ return -EINVAL;
+ chip = (uintptr_t)match;
+
+- regmap = devm_regmap_init_spi(spi, &inv_icm42600_regmap_config);
++ /* use SPI specific regmap */
++ regmap = devm_regmap_init_spi(spi, &inv_icm42600_spi_regmap_config);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+diff --git a/drivers/iio/imu/kmx61.c b/drivers/iio/imu/kmx61.c
+index c61c012e25bbaa..53773418610f75 100644
+--- a/drivers/iio/imu/kmx61.c
++++ b/drivers/iio/imu/kmx61.c
+@@ -1192,7 +1192,7 @@ static irqreturn_t kmx61_trigger_handler(int irq, void *p)
+ struct kmx61_data *data = kmx61_get_data(indio_dev);
+ int bit, ret, i = 0;
+ u8 base;
+- s16 buffer[8];
++ s16 buffer[8] = { };
+
+ if (indio_dev == data->acc_indio_dev)
+ base = KMX61_ACC_XOUT_L;
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 3305ebbdbc0787..1155487f7aeac8 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -499,7 +499,7 @@ struct iio_channel *iio_channel_get_all(struct device *dev)
+ return_ptr(chans);
+
+ error_free_chans:
+- for (i = 0; i < nummaps; i++)
++ for (i = 0; i < mapind; i++)
+ iio_device_put(chans[i].indio_dev);
+ return ERR_PTR(ret);
+ }
+diff --git a/drivers/iio/light/bh1745.c b/drivers/iio/light/bh1745.c
+index 2e458e9d5d8530..a025e279df0747 100644
+--- a/drivers/iio/light/bh1745.c
++++ b/drivers/iio/light/bh1745.c
+@@ -750,6 +750,8 @@ static irqreturn_t bh1745_trigger_handler(int interrupt, void *p)
+ int i;
+ int j = 0;
+
++ memset(&scan, 0, sizeof(scan));
++
+ iio_for_each_active_channel(indio_dev, i) {
+ ret = regmap_bulk_read(data->regmap, BH1745_RED_LSB + 2 * i,
+ &value, 2);
+diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
+index 337a1332c2c64a..67c94be0201897 100644
+--- a/drivers/iio/light/vcnl4035.c
++++ b/drivers/iio/light/vcnl4035.c
+@@ -105,7 +105,7 @@ static irqreturn_t vcnl4035_trigger_consumer_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct vcnl4035_data *data = iio_priv(indio_dev);
+ /* Ensure naturally aligned timestamp */
+- u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8);
++ u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8) = { };
+ int ret;
+
+ ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
+diff --git a/drivers/iio/pressure/zpa2326.c b/drivers/iio/pressure/zpa2326.c
+index 950f8dee2b26b7..b4c6c7c4725694 100644
+--- a/drivers/iio/pressure/zpa2326.c
++++ b/drivers/iio/pressure/zpa2326.c
+@@ -586,6 +586,8 @@ static int zpa2326_fill_sample_buffer(struct iio_dev *indio_dev,
+ } sample;
+ int err;
+
++ memset(&sample, 0, sizeof(sample));
++
+ if (test_bit(0, indio_dev->active_scan_mask)) {
+ /* Get current pressure from hardware FIFO. */
+ err = zpa2326_dequeue_pressure(indio_dev, &sample.pressure);
+diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
+index ec5db1478b2fce..18ae45dcbfb28b 100644
+--- a/drivers/md/dm-ebs-target.c
++++ b/drivers/md/dm-ebs-target.c
+@@ -442,7 +442,7 @@ static int ebs_iterate_devices(struct dm_target *ti,
+ static struct target_type ebs_target = {
+ .name = "ebs",
+ .version = {1, 0, 1},
+- .features = DM_TARGET_PASSES_INTEGRITY,
++ .features = 0,
+ .module = THIS_MODULE,
+ .ctr = ebs_ctr,
+ .dtr = ebs_dtr,
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index c9f47d0cccf9bb..872bb59f547055 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2332,10 +2332,9 @@ static struct thin_c *get_first_thin(struct pool *pool)
+ struct thin_c *tc = NULL;
+
+ rcu_read_lock();
+- if (!list_empty(&pool->active_thins)) {
+- tc = list_entry_rcu(pool->active_thins.next, struct thin_c, list);
++ tc = list_first_or_null_rcu(&pool->active_thins, struct thin_c, list);
++ if (tc)
+ thin_get(tc);
+- }
+ rcu_read_unlock();
+
+ return tc;
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 62b1a44b8dd2e7..6bd9848518d477 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -60,15 +60,19 @@ static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ * to the data block. Caller is responsible for releasing buf.
+ */
+ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
+- unsigned int *offset, struct dm_buffer **buf,
+- unsigned short ioprio)
++ unsigned int *offset, unsigned int par_buf_offset,
++ struct dm_buffer **buf, unsigned short ioprio)
+ {
+ u64 position, block, rem;
+ u8 *res;
+
++ /* Part of the parity bytes has already been read; skip to the next block */
++ if (par_buf_offset)
++ index++;
++
+ position = (index + rsb) * v->fec->roots;
+ block = div64_u64_rem(position, v->fec->io_size, &rem);
+- *offset = (unsigned int)rem;
++ *offset = par_buf_offset ? 0 : (unsigned int)rem;
+
+ res = dm_bufio_read_with_ioprio(v->fec->bufio, block, buf, ioprio);
+ if (IS_ERR(res)) {
+@@ -128,11 +132,12 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ {
+ int r, corrected = 0, res;
+ struct dm_buffer *buf;
+- unsigned int n, i, offset;
+- u8 *par, *block;
++ unsigned int n, i, offset, par_buf_offset = 0;
++ u8 *par, *block, par_buf[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN];
+ struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+
+- par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
++ par = fec_read_parity(v, rsb, block_offset, &offset,
++ par_buf_offset, &buf, bio_prio(bio));
+ if (IS_ERR(par))
+ return PTR_ERR(par);
+
+@@ -142,7 +147,8 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ */
+ fec_for_each_buffer_rs_block(fio, n, i) {
+ block = fec_buffer_rs_block(v, fio, n, i);
+- res = fec_decode_rs8(v, fio, block, &par[offset], neras);
++ memcpy(&par_buf[par_buf_offset], &par[offset], v->fec->roots - par_buf_offset);
++ res = fec_decode_rs8(v, fio, block, par_buf, neras);
+ if (res < 0) {
+ r = res;
+ goto error;
+@@ -155,12 +161,21 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ if (block_offset >= 1 << v->data_dev_block_bits)
+ goto done;
+
+- /* read the next block when we run out of parity bytes */
+- offset += v->fec->roots;
++ /* Read the next block when we run out of parity bytes */
++ offset += (v->fec->roots - par_buf_offset);
++ /* Check if parity bytes are split between blocks */
++ if (offset < v->fec->io_size && (offset + v->fec->roots) > v->fec->io_size) {
++ par_buf_offset = v->fec->io_size - offset;
++ memcpy(par_buf, &par[offset], par_buf_offset);
++ offset += par_buf_offset;
++ } else
++ par_buf_offset = 0;
++
+ if (offset >= v->fec->io_size) {
+ dm_bufio_release(buf);
+
+- par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
++ par = fec_read_parity(v, rsb, block_offset, &offset,
++ par_buf_offset, &buf, bio_prio(bio));
+ if (IS_ERR(par))
+ return PTR_ERR(par);
+ }
+@@ -724,10 +739,7 @@ int verity_fec_ctr(struct dm_verity *v)
+ return -E2BIG;
+ }
+
+- if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1))
+- f->io_size = 1 << v->data_dev_block_bits;
+- else
+- f->io_size = v->fec->roots << SECTOR_SHIFT;
++ f->io_size = 1 << v->data_dev_block_bits;
+
+ f->bufio = dm_bufio_client_create(f->dev->bdev,
+ f->io_size,
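
The fec_decode_bufs() rework above copes with RS parity records that straddle an io_size block boundary (now that io_size is always the data block size) by staging the head of a split record in par_buf and finishing it from the next block. A self-contained sketch of that staging step, with hypothetical names:

    #include <string.h>

    /* Copy as much of a 'roots'-byte record as the current block holds;
     * returns the new staged count (== roots once the record is whole,
     * in which case the caller decodes it; otherwise the caller reads
     * the next block and calls again with *offset reset to 0). */
    static size_t stage_record(unsigned char *rec, size_t staged, size_t roots,
                               const unsigned char *blk, size_t *offset,
                               size_t io_size)
    {
            size_t avail = io_size - *offset;
            size_t take = (roots - staged < avail) ? roots - staged : avail;

            memcpy(rec + staged, blk + *offset, take);
            *offset += take;
            return staged + take;
    }
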
+diff --git a/drivers/md/persistent-data/dm-array.c b/drivers/md/persistent-data/dm-array.c
+index 157c9bd2fed741..8f8792e5580639 100644
+--- a/drivers/md/persistent-data/dm-array.c
++++ b/drivers/md/persistent-data/dm-array.c
+@@ -917,23 +917,27 @@ static int load_ablock(struct dm_array_cursor *c)
+ if (c->block)
+ unlock_ablock(c->info, c->block);
+
+- c->block = NULL;
+- c->ab = NULL;
+ c->index = 0;
+
+ r = dm_btree_cursor_get_value(&c->cursor, &key, &value_le);
+ if (r) {
+ DMERR("dm_btree_cursor_get_value failed");
+- dm_btree_cursor_end(&c->cursor);
++ goto out;
+
+ } else {
+ r = get_ablock(c->info, le64_to_cpu(value_le), &c->block, &c->ab);
+ if (r) {
+ DMERR("get_ablock failed");
+- dm_btree_cursor_end(&c->cursor);
++ goto out;
+ }
+ }
+
++ return 0;
++
++out:
++ dm_btree_cursor_end(&c->cursor);
++ c->block = NULL;
++ c->ab = NULL;
+ return r;
+ }
+
+@@ -956,10 +960,10 @@ EXPORT_SYMBOL_GPL(dm_array_cursor_begin);
+
+ void dm_array_cursor_end(struct dm_array_cursor *c)
+ {
+- if (c->block) {
++ if (c->block)
+ unlock_ablock(c->info, c->block);
+- dm_btree_cursor_end(&c->cursor);
+- }
++
++ dm_btree_cursor_end(&c->cursor);
+ }
+ EXPORT_SYMBOL_GPL(dm_array_cursor_end);
+
+@@ -999,6 +1003,7 @@ int dm_array_cursor_skip(struct dm_array_cursor *c, uint32_t count)
+ }
+
+ count -= remaining;
++ c->index += (remaining - 1);
+ r = dm_array_cursor_next(c);
+
+ } while (!r);
+diff --git a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
+index e616e3ec2b42fd..3c1359d8d4e692 100644
+--- a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
++++ b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
+@@ -148,7 +148,7 @@ static int pci1xxxx_gpio_set_config(struct gpio_chip *gpio, unsigned int offset,
+ pci1xxx_assign_bit(priv->reg_base, OPENDRAIN_OFFSET(offset), (offset % 32), true);
+ break;
+ default:
+- ret = -EOPNOTSUPP;
++ ret = -ENOTSUPP;
+ break;
+ }
+ spin_unlock_irqrestore(&priv->lock, flags);
+@@ -277,7 +277,7 @@ static irqreturn_t pci1xxxx_gpio_irq_handler(int irq, void *dev_id)
+ writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank));
+ spin_unlock_irqrestore(&priv->lock, flags);
+ irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32)));
+- generic_handle_irq(irq);
++ handle_nested_irq(irq);
+ }
+ }
+ spin_lock_irqsave(&priv->lock, flags);
+diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
+index 2681889162a25e..44971e71991ff5 100644
+--- a/drivers/net/ethernet/amd/pds_core/devlink.c
++++ b/drivers/net/ethernet/amd/pds_core/devlink.c
+@@ -118,7 +118,7 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ if (err && err != -EIO)
+ return err;
+
+- listlen = fw_list.num_fw_slots;
++ listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
+ for (i = 0; i < listlen; i++) {
+ if (i < ARRAY_SIZE(fw_slotnames))
+ strscpy(buf, fw_slotnames[i], sizeof(buf));
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index dafc5a4039cd2c..c255445e97f3c5 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2826,6 +2826,13 @@ static int bnxt_hwrm_handler(struct bnxt *bp, struct tx_cmp *txcmp)
+ return 0;
+ }
+
++static bool bnxt_vnic_is_active(struct bnxt *bp)
++{
++ struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
++
++ return vnic->fw_vnic_id != INVALID_HW_RING_ID && vnic->mru > 0;
++}
++
+ static irqreturn_t bnxt_msix(int irq, void *dev_instance)
+ {
+ struct bnxt_napi *bnapi = dev_instance;
+@@ -3093,7 +3100,7 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
+ break;
+ }
+ }
+- if (bp->flags & BNXT_FLAG_DIM) {
++ if ((bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
+ struct dim_sample dim_sample = {};
+
+ dim_update_sample(cpr->event_ctr,
+@@ -3224,7 +3231,7 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget)
+ poll_done:
+ cpr_rx = &cpr->cp_ring_arr[0];
+ if (cpr_rx->cp_ring_type == BNXT_NQ_HDL_TYPE_RX &&
+- (bp->flags & BNXT_FLAG_DIM)) {
++ (bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
+ struct dim_sample dim_sample = {};
+
+ dim_update_sample(cpr->event_ctr,
+@@ -7116,6 +7123,26 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
+ return rc;
+ }
+
++static void bnxt_cancel_dim(struct bnxt *bp)
++{
++ int i;
++
++ /* DIM work is initialized in bnxt_enable_napi(). Proceed only
++ * if NAPI is enabled.
++ */
++ if (!bp->bnapi || test_bit(BNXT_STATE_NAPI_DISABLED, &bp->state))
++ return;
++
++ /* Make sure NAPI sees that the VNIC is disabled */
++ synchronize_net();
++ for (i = 0; i < bp->rx_nr_rings; i++) {
++ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
++ struct bnxt_napi *bnapi = rxr->bnapi;
++
++ cancel_work_sync(&bnapi->cp_ring.dim.work);
++ }
++}
++
+ static int hwrm_ring_free_send_msg(struct bnxt *bp,
+ struct bnxt_ring_struct *ring,
+ u32 ring_type, int cmpl_ring_id)
+@@ -7216,6 +7243,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
+ }
+ }
+
++ bnxt_cancel_dim(bp);
+ for (i = 0; i < bp->rx_nr_rings; i++) {
+ bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path);
+ bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path);
+@@ -11012,8 +11040,6 @@ static void bnxt_disable_napi(struct bnxt *bp)
+ if (bnapi->in_reset)
+ cpr->sw_stats->rx.rx_resets++;
+ napi_disable(&bnapi->napi);
+- if (bnapi->rx_ring)
+- cancel_work_sync(&cpr->dim.work);
+ }
+ }
+
+@@ -15269,8 +15295,10 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+ bnxt_hwrm_vnic_update(bp, vnic,
+ VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+ }
+-
++ /* Make sure NAPI sees that the VNIC is disabled */
++ synchronize_net();
+ rxr = &bp->rx_ring[idx];
++ cancel_work_sync(&rxr->bnapi->cp_ring.dim.work);
+ bnxt_hwrm_rx_ring_free(bp, rxr, false);
+ bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
+ rxr->rx_next_cons = 0;
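
bnxt_cancel_dim() and the queue_stop hunk above encode the same three-step quiesce: make the VNIC look inactive so the NAPI poll loops stop queuing DIM work, wait out polls already in flight, and only then reap the work item. A hedged sketch of that ordering:

    static void demo_quiesce_dim(struct bnxt *bp, struct bnxt_napi *bnapi)
    {
            /* 1. The VNIC has been torn down / its mru cleared, so
             *    bnxt_vnic_is_active() is now false in the poll loops. */

            /* 2. Wait for NAPI polls that were already running. */
            synchronize_net();

            /* 3. No new DIM work can be queued; reap anything pending. */
            cancel_work_sync(&bnapi->cp_ring.dim.work);
    }
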
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index fdd6356f21efb3..546d9a3d7efea7 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -208,7 +208,7 @@ int bnxt_send_msg(struct bnxt_en_dev *edev,
+
+ rc = hwrm_req_replace(bp, req, fw_msg->msg, fw_msg->msg_len);
+ if (rc)
+- return rc;
++ goto drop_req;
+
+ hwrm_req_timeout(bp, req, fw_msg->timeout);
+ resp = hwrm_req_hold(bp, req);
+@@ -220,6 +220,7 @@ int bnxt_send_msg(struct bnxt_en_dev *edev,
+
+ memcpy(fw_msg->resp, resp, resp_len);
+ }
++drop_req:
+ hwrm_req_drop(bp, req);
+ return rc;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index fb3933fbb8425e..757c6484f53515 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -1799,7 +1799,10 @@ void cxgb4_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid,
+ struct adapter *adap = container_of(t, struct adapter, tids);
+ struct sk_buff *skb;
+
+- WARN_ON(tid_out_of_range(&adap->tids, tid));
++ if (tid_out_of_range(&adap->tids, tid)) {
++ dev_err(adap->pdev_dev, "tid %d out of range\n", tid);
++ return;
++ }
+
+ if (t->tid_tab[tid - adap->tids.tid_base]) {
+ t->tid_tab[tid - adap->tids.tid_base] = NULL;
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index d404819ebc9b3f..f985a3cf2b11fa 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -2224,14 +2224,18 @@ static void gve_service_task(struct work_struct *work)
+
+ static void gve_set_netdev_xdp_features(struct gve_priv *priv)
+ {
++ xdp_features_t xdp_features;
++
+ if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
+- priv->dev->xdp_features = NETDEV_XDP_ACT_BASIC;
+- priv->dev->xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+- priv->dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
+- priv->dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
++ xdp_features = NETDEV_XDP_ACT_BASIC;
++ xdp_features |= NETDEV_XDP_ACT_REDIRECT;
++ xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
++ xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ } else {
+- priv->dev->xdp_features = 0;
++ xdp_features = 0;
+ }
++
++ xdp_set_features_flag(priv->dev, xdp_features);
+ }
+
+ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 27dbe367f3d355..d873523e84f271 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -916,9 +916,6 @@ struct hnae3_handle {
+
+ u8 netdev_flags;
+ struct dentry *hnae3_dbgfs;
+- /* protects concurrent contention between debugfs commands */
+- struct mutex dbgfs_lock;
+- char **dbgfs_buf;
+
+ /* Network interface message level enabled bits */
+ u32 msg_enable;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index 807eb3bbb11c04..9bbece25552b17 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -1260,69 +1260,55 @@ static int hns3_dbg_read_cmd(struct hns3_dbg_data *dbg_data,
+ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
+ size_t count, loff_t *ppos)
+ {
+- struct hns3_dbg_data *dbg_data = filp->private_data;
++ char *buf = filp->private_data;
++
++ return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
++}
++
++static int hns3_dbg_open(struct inode *inode, struct file *filp)
++{
++ struct hns3_dbg_data *dbg_data = inode->i_private;
+ struct hnae3_handle *handle = dbg_data->handle;
+ struct hns3_nic_priv *priv = handle->priv;
+- ssize_t size = 0;
+- char **save_buf;
+- char *read_buf;
+ u32 index;
++ char *buf;
+ int ret;
+
++ if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
++ test_bit(HNS3_NIC_STATE_RESETTING, &priv->state))
++ return -EBUSY;
++
+ ret = hns3_dbg_get_cmd_index(dbg_data, &index);
+ if (ret)
+ return ret;
+
+- mutex_lock(&handle->dbgfs_lock);
+- save_buf = &handle->dbgfs_buf[index];
+-
+- if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
+- test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
+- ret = -EBUSY;
+- goto out;
+- }
+-
+- if (*save_buf) {
+- read_buf = *save_buf;
+- } else {
+- read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
+- if (!read_buf) {
+- ret = -ENOMEM;
+- goto out;
+- }
+-
+- /* save the buffer addr until the last read operation */
+- *save_buf = read_buf;
+-
+- /* get data ready for the first time to read */
+- ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
+- read_buf, hns3_dbg_cmd[index].buf_len);
+- if (ret)
+- goto out;
+- }
++ buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
+
+- size = simple_read_from_buffer(buffer, count, ppos, read_buf,
+- strlen(read_buf));
+- if (size > 0) {
+- mutex_unlock(&handle->dbgfs_lock);
+- return size;
++ ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
++ buf, hns3_dbg_cmd[index].buf_len);
++ if (ret) {
++ kvfree(buf);
++ return ret;
+ }
+
+-out:
+- /* free the buffer for the last read operation */
+- if (*save_buf) {
+- kvfree(*save_buf);
+- *save_buf = NULL;
+- }
++ filp->private_data = buf;
++ return 0;
++}
+
+- mutex_unlock(&handle->dbgfs_lock);
+- return ret;
++static int hns3_dbg_release(struct inode *inode, struct file *filp)
++{
++ kvfree(filp->private_data);
++ filp->private_data = NULL;
++ return 0;
+ }
+
+ static const struct file_operations hns3_dbg_fops = {
+ .owner = THIS_MODULE,
+- .open = simple_open,
++ .open = hns3_dbg_open,
+ .read = hns3_dbg_read,
++ .release = hns3_dbg_release,
+ };
+
+ static int hns3_dbg_bd_file_init(struct hnae3_handle *handle, u32 cmd)
+@@ -1379,13 +1365,6 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ int ret;
+ u32 i;
+
+- handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
+- ARRAY_SIZE(hns3_dbg_cmd),
+- sizeof(*handle->dbgfs_buf),
+- GFP_KERNEL);
+- if (!handle->dbgfs_buf)
+- return -ENOMEM;
+-
+ hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
+ debugfs_create_dir(name, hns3_dbgfs_root);
+ handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
+@@ -1395,8 +1374,6 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ debugfs_create_dir(hns3_dbg_dentry[i].name,
+ handle->hnae3_dbgfs);
+
+- mutex_init(&handle->dbgfs_lock);
+-
+ for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
+ if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
+ ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
+@@ -1425,24 +1402,13 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ out:
+ debugfs_remove_recursive(handle->hnae3_dbgfs);
+ handle->hnae3_dbgfs = NULL;
+- mutex_destroy(&handle->dbgfs_lock);
+ return ret;
+ }
+
+ void hns3_dbg_uninit(struct hnae3_handle *handle)
+ {
+- u32 i;
+-
+ debugfs_remove_recursive(handle->hnae3_dbgfs);
+ handle->hnae3_dbgfs = NULL;
+-
+- for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
+- if (handle->dbgfs_buf[i]) {
+- kvfree(handle->dbgfs_buf[i]);
+- handle->dbgfs_buf[i] = NULL;
+- }
+-
+- mutex_destroy(&handle->dbgfs_lock);
+ }
+
+ void hns3_dbg_register_debugfs(const char *debugfs_dir_name)
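
The hns3 debugfs rework above drops the shared dbgfs_buf cache and its dbgfs_lock in favour of the stock snapshot-at-open pattern: open allocates and fills a private buffer, read is a plain simple_read_from_buffer(), and release frees it, so concurrent readers never share state. A minimal sketch with hypothetical demo_* names (DEMO_BUF_LEN is assumed):

    static int demo_open(struct inode *inode, struct file *filp)
    {
            char *buf = kvzalloc(DEMO_BUF_LEN, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;

            /* fill buf exactly once, at open time */
            filp->private_data = buf;
            return 0;
    }

    static ssize_t demo_read(struct file *filp, char __user *to,
                             size_t count, loff_t *ppos)
    {
            char *buf = filp->private_data;

            return simple_read_from_buffer(to, count, ppos, buf, strlen(buf));
    }

    static int demo_release(struct inode *inode, struct file *filp)
    {
            kvfree(filp->private_data);
            return 0;
    }

    static const struct file_operations demo_fops = {
            .owner   = THIS_MODULE,
            .open    = demo_open,
            .read    = demo_read,
            .release = demo_release,
    };
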
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 4cbc4d069a1f36..73825b6bd485d1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -2452,7 +2452,6 @@ static int hns3_nic_set_features(struct net_device *netdev,
+ return ret;
+ }
+
+- netdev->features = features;
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index bd86efd92a5a7d..9a67fe0554a52b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -6,6 +6,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/netdevice.h>
+@@ -3584,6 +3585,17 @@ static int hclge_set_vf_link_state(struct hnae3_handle *handle, int vf,
+ return ret;
+ }
+
++static void hclge_set_reset_pending(struct hclge_dev *hdev,
++ enum hnae3_reset_type reset_type)
++{
++ /* When an incorrect reset type is executed, the get_reset_level
++ * function generates the HNAE3_NONE_RESET flag. As a result, this
++ * type does not need to be marked pending.
++ */
++ if (reset_type != HNAE3_NONE_RESET)
++ set_bit(reset_type, &hdev->reset_pending);
++}
++
+ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ {
+ u32 cmdq_src_reg, msix_src_reg, hw_err_src_reg;
+@@ -3604,7 +3616,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ */
+ if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) {
+ dev_info(&hdev->pdev->dev, "IMP reset interrupt\n");
+- set_bit(HNAE3_IMP_RESET, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, HNAE3_IMP_RESET);
+ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
+ *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B);
+ hdev->rst_stats.imp_rst_cnt++;
+@@ -3614,7 +3626,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) {
+ dev_info(&hdev->pdev->dev, "global reset interrupt\n");
+ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
+- set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, HNAE3_GLOBAL_RESET);
+ *clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B);
+ hdev->rst_stats.global_rst_cnt++;
+ return HCLGE_VECTOR0_EVENT_RST;
+@@ -3769,7 +3781,7 @@ static int hclge_misc_irq_init(struct hclge_dev *hdev)
+ snprintf(hdev->misc_vector.name, HNAE3_INT_NAME_LEN, "%s-misc-%s",
+ HCLGE_NAME, pci_name(hdev->pdev));
+ ret = request_irq(hdev->misc_vector.vector_irq, hclge_misc_irq_handle,
+- 0, hdev->misc_vector.name, hdev);
++ IRQF_NO_AUTOEN, hdev->misc_vector.name, hdev);
+ if (ret) {
+ hclge_free_vector(hdev, 0);
+ dev_err(&hdev->pdev->dev, "request misc irq(%d) fail\n",
+@@ -4062,7 +4074,7 @@ static void hclge_do_reset(struct hclge_dev *hdev)
+ case HNAE3_FUNC_RESET:
+ dev_info(&pdev->dev, "PF reset requested\n");
+ /* schedule again to check later */
+- set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, HNAE3_FUNC_RESET);
+ hclge_reset_task_schedule(hdev);
+ break;
+ default:
+@@ -4096,6 +4108,8 @@ static enum hnae3_reset_type hclge_get_reset_level(struct hnae3_ae_dev *ae_dev,
+ clear_bit(HNAE3_FLR_RESET, addr);
+ }
+
++ clear_bit(HNAE3_NONE_RESET, addr);
++
+ if (hdev->reset_type != HNAE3_NONE_RESET &&
+ rst_level < hdev->reset_type)
+ return HNAE3_NONE_RESET;
+@@ -4237,7 +4251,7 @@ static bool hclge_reset_err_handle(struct hclge_dev *hdev)
+ return false;
+ } else if (hdev->rst_stats.reset_fail_cnt < MAX_RESET_FAIL_CNT) {
+ hdev->rst_stats.reset_fail_cnt++;
+- set_bit(hdev->reset_type, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, hdev->reset_type);
+ dev_info(&hdev->pdev->dev,
+ "re-schedule reset task(%u)\n",
+ hdev->rst_stats.reset_fail_cnt);
+@@ -4480,8 +4494,20 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
+ static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
+ enum hnae3_reset_type rst_type)
+ {
++#define HCLGE_SUPPORT_RESET_TYPE \
++ (BIT(HNAE3_FLR_RESET) | BIT(HNAE3_FUNC_RESET) | \
++ BIT(HNAE3_GLOBAL_RESET) | BIT(HNAE3_IMP_RESET))
++
+ struct hclge_dev *hdev = ae_dev->priv;
+
++ if (!(BIT(rst_type) & HCLGE_SUPPORT_RESET_TYPE)) {
++ /* To prevent a reset from being triggered by hclge_reset_event */
++ set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
++ dev_warn(&hdev->pdev->dev, "unsupported reset type %d\n",
++ rst_type);
++ return;
++ }
++
+ set_bit(rst_type, &hdev->default_reset_request);
+ }
+
+@@ -11891,9 +11917,6 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+
+ hclge_init_rxd_adv_layout(hdev);
+
+- /* Enable MISC vector(vector0) */
+- hclge_enable_vector(&hdev->misc_vector, true);
+-
+ ret = hclge_init_wol(hdev);
+ if (ret)
+ dev_warn(&pdev->dev,
+@@ -11906,6 +11929,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+ hclge_state_init(hdev);
+ hdev->last_reset_time = jiffies;
+
++ /* Enable MISC vector(vector0) */
++ enable_irq(hdev->misc_vector.vector_irq);
++ hclge_enable_vector(&hdev->misc_vector, true);
++
+ dev_info(&hdev->pdev->dev, "%s driver initialization finished.\n",
+ HCLGE_DRIVER_NAME);
+
+@@ -12311,7 +12338,7 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+
+ /* Disable MISC vector(vector0) */
+ hclge_enable_vector(&hdev->misc_vector, false);
+- synchronize_irq(hdev->misc_vector.vector_irq);
++ disable_irq(hdev->misc_vector.vector_irq);
+
+ /* Disable all hw interrupts */
+ hclge_config_mac_tnl_int(hdev, false);
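
The misc-vector hunks above request the IRQ with IRQF_NO_AUTOEN so it stays off until the driver state it touches is fully initialized, then bracket the device lifetime with enable_irq()/disable_irq() (disable_irq() also waits for a running handler to finish, replacing the old synchronize_irq()). The ordering, sketched with a hypothetical demo_handler:

    ret = request_irq(irq, demo_handler, IRQF_NO_AUTOEN, name, hdev);
    if (ret)
            return ret;

    /* ... finish initializing everything demo_handler touches ... */

    enable_irq(irq);        /* the handler can run only from this point on */

    /* teardown mirrors it: */
    disable_irq(irq);       /* also waits for an in-flight handler */
    free_irq(irq, hdev);
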
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index 5505caea88e981..bab16c2191b2f0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -58,6 +58,9 @@ bool hclge_ptp_set_tx_info(struct hnae3_handle *handle, struct sk_buff *skb)
+ struct hclge_dev *hdev = vport->back;
+ struct hclge_ptp *ptp = hdev->ptp;
+
++ if (!ptp)
++ return false;
++
+ if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) ||
+ test_and_set_bit(HCLGE_STATE_PTP_TX_HANDLING, &hdev->state)) {
+ ptp->tx_skipped++;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
+index 43c1c18fa81f8d..8c057192aae6e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
+@@ -510,9 +510,9 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
+ struct hnae3_knic_private_info *kinfo)
+ {
+-#define HCLGE_RING_REG_OFFSET 0x200
+ #define HCLGE_RING_INT_REG_OFFSET 0x4
+
++ struct hnae3_queue *tqp;
+ int i, j, reg_num;
+ int data_num_sum;
+ u32 *reg = data;
+@@ -533,10 +533,11 @@ static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
+ reg_num = ARRAY_SIZE(ring_reg_addr_list);
+ for (j = 0; j < kinfo->num_tqps; j++) {
+ reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg);
++ tqp = kinfo->tqp[j];
+ for (i = 0; i < reg_num; i++)
+- *reg++ = hclge_read_dev(&hdev->hw,
+- ring_reg_addr_list[i] +
+- HCLGE_RING_REG_OFFSET * j);
++ *reg++ = readl_relaxed(tqp->io_base -
++ HCLGE_TQP_REG_OFFSET +
++ ring_reg_addr_list[i]);
+ }
+ data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps;
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 094a7c7b55921f..d47bd8d6145f97 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1395,6 +1395,17 @@ static int hclgevf_notify_roce_client(struct hclgevf_dev *hdev,
+ return ret;
+ }
+
++static void hclgevf_set_reset_pending(struct hclgevf_dev *hdev,
++ enum hnae3_reset_type reset_type)
++{
++ /* When an incorrect reset type is executed, the get_reset_level
++ * function generates the HNAE3_NONE_RESET flag. As a result, this
++	 * type does not need to be set pending.
++ */
++ if (reset_type != HNAE3_NONE_RESET)
++ set_bit(reset_type, &hdev->reset_pending);
++}
++
+ static int hclgevf_reset_wait(struct hclgevf_dev *hdev)
+ {
+ #define HCLGEVF_RESET_WAIT_US 20000
+@@ -1544,7 +1555,7 @@ static void hclgevf_reset_err_handle(struct hclgevf_dev *hdev)
+ hdev->rst_stats.rst_fail_cnt);
+
+ if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT)
+- set_bit(hdev->reset_type, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, hdev->reset_type);
+
+ if (hclgevf_is_reset_pending(hdev)) {
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+@@ -1664,6 +1675,8 @@ static enum hnae3_reset_type hclgevf_get_reset_level(unsigned long *addr)
+ clear_bit(HNAE3_FLR_RESET, addr);
+ }
+
++ clear_bit(HNAE3_NONE_RESET, addr);
++
+ return rst_level;
+ }
+
+@@ -1673,14 +1686,15 @@ static void hclgevf_reset_event(struct pci_dev *pdev,
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+ struct hclgevf_dev *hdev = ae_dev->priv;
+
+- dev_info(&hdev->pdev->dev, "received reset request from VF enet\n");
+-
+ if (hdev->default_reset_request)
+ hdev->reset_level =
+ hclgevf_get_reset_level(&hdev->default_reset_request);
+ else
+ hdev->reset_level = HNAE3_VF_FUNC_RESET;
+
++ dev_info(&hdev->pdev->dev, "received reset request from VF enet, reset level is %d\n",
++ hdev->reset_level);
++
+ /* reset of this VF requested */
+ set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state);
+ hclgevf_reset_task_schedule(hdev);
+@@ -1691,8 +1705,20 @@ static void hclgevf_reset_event(struct pci_dev *pdev,
+ static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
+ enum hnae3_reset_type rst_type)
+ {
++#define HCLGEVF_SUPPORT_RESET_TYPE \
++ (BIT(HNAE3_VF_RESET) | BIT(HNAE3_VF_FUNC_RESET) | \
++ BIT(HNAE3_VF_PF_FUNC_RESET) | BIT(HNAE3_VF_FULL_RESET) | \
++ BIT(HNAE3_FLR_RESET) | BIT(HNAE3_VF_EXP_RESET))
++
+ struct hclgevf_dev *hdev = ae_dev->priv;
+
++ if (!(BIT(rst_type) & HCLGEVF_SUPPORT_RESET_TYPE)) {
++		/* To prevent reset triggered by hclgevf_reset_event */
++ set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
++ dev_info(&hdev->pdev->dev, "unsupported reset type %d\n",
++ rst_type);
++ return;
++ }
+ set_bit(rst_type, &hdev->default_reset_request);
+ }
+
+@@ -1849,14 +1875,14 @@ static void hclgevf_reset_service_task(struct hclgevf_dev *hdev)
+ */
+ if (hdev->reset_attempts > HCLGEVF_MAX_RESET_ATTEMPTS_CNT) {
+ /* prepare for full reset of stack + pcie interface */
+- set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, HNAE3_VF_FULL_RESET);
+
+ /* "defer" schedule the reset task again */
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ } else {
+ hdev->reset_attempts++;
+
+- set_bit(hdev->reset_level, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, hdev->reset_level);
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ }
+ hclgevf_reset_task_schedule(hdev);
+@@ -1979,7 +2005,7 @@ static enum hclgevf_evt_cause hclgevf_check_evt_cause(struct hclgevf_dev *hdev,
+ rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING);
+ dev_info(&hdev->pdev->dev,
+ "receive reset interrupt 0x%x!\n", rst_ing_reg);
+- set_bit(HNAE3_VF_RESET, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, HNAE3_VF_RESET);
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
+ *clearval = ~(1U << HCLGEVF_VECTOR0_RST_INT_B);
+@@ -2289,6 +2315,8 @@ static void hclgevf_state_init(struct hclgevf_dev *hdev)
+ clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state);
+
+ INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task);
++ /* timer needs to be initialized before misc irq */
++ timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+
+ mutex_init(&hdev->mbx_resp.mbx_mutex);
+ sema_init(&hdev->reset_sem, 1);
+@@ -2988,7 +3016,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ HCLGEVF_DRIVER_NAME);
+
+ hclgevf_task_schedule(hdev, round_jiffies_relative(HZ));
+- timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+
+ return 0;
+
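For context, hclgevf_set_reset_pending() above exists so that the
HNAE3_NONE_RESET sentinel can never be queued into the pending bitmap,
where the service task would otherwise spin on a no-op reset. A minimal
standalone sketch of that guard pattern (the enum and names below are
hypothetical, not the driver's):

    #include <stdio.h>

    enum reset_type { RESET_NONE = 0, RESET_FUNC, RESET_GLOBAL };

    static unsigned long reset_pending;

    /* Record only real reset types; the NONE sentinel is a "nothing
     * to do" marker and must never reach the pending bitmap. */
    static void set_reset_pending(enum reset_type t)
    {
            if (t != RESET_NONE)
                    reset_pending |= 1UL << t;
    }

    int main(void)
    {
            set_reset_pending(RESET_NONE); /* ignored */
            set_reset_pending(RESET_FUNC); /* recorded */
            printf("pending=%#lx\n", reset_pending); /* pending=0x2 */
            return 0;
    }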
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
+index 6db415d8b9176c..7d9d9dbc75603a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
+@@ -123,10 +123,10 @@ int hclgevf_get_regs_len(struct hnae3_handle *handle)
+ void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
+ void *data)
+ {
+-#define HCLGEVF_RING_REG_OFFSET 0x200
+ #define HCLGEVF_RING_INT_REG_OFFSET 0x4
+
+ struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++ struct hnae3_queue *tqp;
+ int i, j, reg_um;
+ u32 *reg = data;
+
+@@ -147,10 +147,11 @@ void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
+ reg_um = ARRAY_SIZE(ring_reg_addr_list);
+ for (j = 0; j < hdev->num_tqps; j++) {
+ reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg);
++ tqp = &hdev->htqp[j].q;
+ for (i = 0; i < reg_um; i++)
+- *reg++ = hclgevf_read_dev(&hdev->hw,
+- ring_reg_addr_list[i] +
+- HCLGEVF_RING_REG_OFFSET * j);
++ *reg++ = readl_relaxed(tqp->io_base -
++ HCLGEVF_TQP_REG_OFFSET +
++ ring_reg_addr_list[i]);
+ }
+
+ reg_um = ARRAY_SIZE(tqp_intr_reg_addr_list);
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 0be1a98d7cc1b5..79a6edd0be0ec4 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -2238,6 +2238,8 @@ struct ice_aqc_get_pkg_info_resp {
+ struct ice_aqc_get_pkg_info pkg_info[];
+ };
+
++#define ICE_AQC_GET_CGU_MAX_PHASE_ADJ GENMASK(30, 0)
++
+ /* Get CGU abilities command response data structure (indirect 0x0C61) */
+ struct ice_aqc_get_cgu_abilities {
+ u8 num_inputs;
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
+index d5ad6d84007c21..38e151c7ea2362 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
+@@ -2064,6 +2064,18 @@ static int ice_dpll_init_worker(struct ice_pf *pf)
+ return 0;
+ }
+
++/**
++ * ice_dpll_phase_range_set - initialize phase adjust range helper
++ * @range: pointer to phase adjust range struct to be initialized
++ * @phase_adj: value used as the negative (min) and positive (max) boundary
++ */
++static void ice_dpll_phase_range_set(struct dpll_pin_phase_adjust_range *range,
++ u32 phase_adj)
++{
++ range->min = -phase_adj;
++ range->max = phase_adj;
++}
++
+ /**
+ * ice_dpll_init_info_pins_generic - initializes generic pins info
+ * @pf: board private structure
+@@ -2105,8 +2117,8 @@ static int ice_dpll_init_info_pins_generic(struct ice_pf *pf, bool input)
+ for (i = 0; i < pin_num; i++) {
+ pins[i].idx = i;
+ pins[i].prop.board_label = labels[i];
+- pins[i].prop.phase_range.min = phase_adj_max;
+- pins[i].prop.phase_range.max = -phase_adj_max;
++ ice_dpll_phase_range_set(&pins[i].prop.phase_range,
++ phase_adj_max);
+ pins[i].prop.capabilities = cap;
+ pins[i].pf = pf;
+ ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
+@@ -2152,6 +2164,7 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ struct ice_hw *hw = &pf->hw;
+ struct ice_dpll_pin *pins;
+ unsigned long caps;
++ u32 phase_adj_max;
+ u8 freq_supp_num;
+ bool input;
+
+@@ -2159,11 +2172,13 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ pins = pf->dplls.inputs;
+ num_pins = pf->dplls.num_inputs;
++ phase_adj_max = pf->dplls.input_phase_adj_max;
+ input = true;
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ pins = pf->dplls.outputs;
+ num_pins = pf->dplls.num_outputs;
++ phase_adj_max = pf->dplls.output_phase_adj_max;
+ input = false;
+ break;
+ default:
+@@ -2188,19 +2203,13 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ return ret;
+ caps |= (DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE |
+ DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE);
+- pins[i].prop.phase_range.min =
+- pf->dplls.input_phase_adj_max;
+- pins[i].prop.phase_range.max =
+- -pf->dplls.input_phase_adj_max;
+ } else {
+- pins[i].prop.phase_range.min =
+- pf->dplls.output_phase_adj_max;
+- pins[i].prop.phase_range.max =
+- -pf->dplls.output_phase_adj_max;
+ ret = ice_cgu_get_output_pin_state_caps(hw, i, &caps);
+ if (ret)
+ return ret;
+ }
++ ice_dpll_phase_range_set(&pins[i].prop.phase_range,
++ phase_adj_max);
+ pins[i].prop.capabilities = caps;
+ ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
+ if (ret)
+@@ -2308,8 +2317,10 @@ static int ice_dpll_init_info(struct ice_pf *pf, bool cgu)
+ dp->dpll_idx = abilities.pps_dpll_idx;
+ d->num_inputs = abilities.num_inputs;
+ d->num_outputs = abilities.num_outputs;
+- d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj);
+- d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj);
++ d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj) &
++ ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
++ d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj) &
++ ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
+
+ alloc_size = sizeof(*d->inputs) * d->num_inputs;
+ d->inputs = kzalloc(alloc_size, GFP_KERNEL);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+index e6980b94a6c1d6..3005dd252a1026 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+@@ -761,9 +761,9 @@ const struct ice_vernier_info_e82x e822_vernier[NUM_ICE_PTP_LNK_SPD] = {
+ /* rx_desk_rsgb_par */
+ 644531250, /* 644.53125 MHz Reed Solomon gearbox */
+ /* tx_desk_rsgb_pcs */
+- 644531250, /* 644.53125 MHz Reed Solomon gearbox */
++ 390625000, /* 390.625 MHz Reed Solomon gearbox */
+ /* rx_desk_rsgb_pcs */
+- 644531250, /* 644.53125 MHz Reed Solomon gearbox */
++ 390625000, /* 390.625 MHz Reed Solomon gearbox */
+ /* tx_fixed_delay */
+ 1620,
+ /* pmd_adj_divisor */
+diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c
+index 9fae8bdec2a7c8..1613b562d17c52 100644
+--- a/drivers/net/ethernet/intel/igc/igc_base.c
++++ b/drivers/net/ethernet/intel/igc/igc_base.c
+@@ -68,6 +68,10 @@ static s32 igc_init_nvm_params_base(struct igc_hw *hw)
+ u32 eecd = rd32(IGC_EECD);
+ u16 size;
+
++ /* failed to read reg and got all F's */
++ if (!(~eecd))
++ return -ENXIO;
++
+ size = FIELD_GET(IGC_EECD_SIZE_EX_MASK, eecd);
+
+ /* Added to a constant, "size" becomes the left-shift value
+@@ -221,6 +225,8 @@ static s32 igc_get_invariants_base(struct igc_hw *hw)
+
+ /* NVM initialization */
+ ret_val = igc_init_nvm_params_base(hw);
++ if (ret_val)
++ goto out;
+ switch (hw->mac.type) {
+ case igc_i225:
+ ret_val = igc_init_nvm_params_i225(hw);
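The igc hunk above keys off the fact that MMIO reads from a PCI device
that has dropped off the bus return all 1s; "!(~eecd)" is a cheap,
width-independent test for that. The check in isolation (illustrative
only, not driver code):

    #include <stdint.h>
    #include <stdio.h>

    /* ~val == 0 iff every bit of val is set, i.e. the all-F's pattern
     * a dead PCI device returns for any register read. */
    static int reg_read_valid(uint32_t val)
    {
            return ~val != 0;
    }

    int main(void)
    {
            printf("%d %d\n", reg_read_valid(0xffffffffu),
                   reg_read_valid(0x1234u)); /* prints: 0 1 */
            return 0;
    }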
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 6bd8a18e3af3a1..e733b81e18a21a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1013,6 +1013,7 @@ static void cmd_work_handler(struct work_struct *work)
+ complete(&ent->done);
+ }
+ up(&cmd->vars.sem);
++ complete(&ent->slotted);
+ return;
+ }
+ } else {
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+index 1bfe5ef40c522d..14ffd45e9a25a7 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+@@ -1827,7 +1827,7 @@ static int rtase_alloc_msix(struct pci_dev *pdev, struct rtase_private *tp)
+
+ for (i = 0; i < tp->int_nums; i++) {
+ irq = pci_irq_vector(pdev, i);
+- if (!irq) {
++ if (irq < 0) {
+ pci_disable_msix(pdev);
+ return irq;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+index 6fdd94c8919ec2..2996bcdea9a28e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
++#include <linux/iommu.h>
+ #include <linux/platform_device.h>
+ #include <linux/of.h>
+ #include <linux/module.h>
+@@ -19,6 +20,8 @@ struct tegra_mgbe {
+ struct reset_control *rst_mac;
+ struct reset_control *rst_pcs;
+
++ u32 iommu_sid;
++
+ void __iomem *hv;
+ void __iomem *regs;
+ void __iomem *xpcs;
+@@ -50,7 +53,6 @@ struct tegra_mgbe {
+ #define MGBE_WRAP_COMMON_INTR_ENABLE 0x8704
+ #define MAC_SBD_INTR BIT(2)
+ #define MGBE_WRAP_AXI_ASID0_CTRL 0x8400
+-#define MGBE_SID 0x6
+
+ static int __maybe_unused tegra_mgbe_suspend(struct device *dev)
+ {
+@@ -84,7 +86,7 @@ static int __maybe_unused tegra_mgbe_resume(struct device *dev)
+ writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE);
+
+ /* Program SID */
+- writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
++ writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
+
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_STATUS);
+ if ((value & XPCS_WRAP_UPHY_STATUS_TX_P_UP) == 0) {
+@@ -241,6 +243,12 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
+ if (IS_ERR(mgbe->xpcs))
+ return PTR_ERR(mgbe->xpcs);
+
++ /* get controller's stream id from iommu property in device tree */
++ if (!tegra_dev_iommu_get_stream_id(mgbe->dev, &mgbe->iommu_sid)) {
++ dev_err(mgbe->dev, "failed to get iommu stream id\n");
++ return -EINVAL;
++ }
++
+ res.addr = mgbe->regs;
+ res.irq = irq;
+
+@@ -346,7 +354,7 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
+ writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE);
+
+ /* Program SID */
+- writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
++ writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
+
+ plat->flags |= STMMAC_FLAG_SERDES_UP_AFTER_PHY_LINKUP;
+
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+index 1bf9c38e412562..deaf670c160ebf 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+@@ -334,27 +334,25 @@ int wx_host_interface_command(struct wx *wx, u32 *buffer,
+ status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000,
+ timeout * 1000, false, wx, WX_MNG_MBOX_CTL);
+
++ buf[0] = rd32(wx, WX_MNG_MBOX);
++ if ((buf[0] & 0xff0000) >> 16 == 0x80) {
++ wx_err(wx, "Unknown FW command: 0x%x\n", buffer[0] & 0xff);
++ status = -EINVAL;
++ goto rel_out;
++ }
++
+ /* Check command completion */
+ if (status) {
+- wx_dbg(wx, "Command has failed with no status valid.\n");
+-
+- buf[0] = rd32(wx, WX_MNG_MBOX);
+- if ((buffer[0] & 0xff) != (~buf[0] >> 24)) {
+- status = -EINVAL;
+- goto rel_out;
+- }
+- if ((buf[0] & 0xff0000) >> 16 == 0x80) {
+- wx_dbg(wx, "It's unknown cmd.\n");
+- status = -EINVAL;
+- goto rel_out;
+- }
+-
++ wx_err(wx, "Command has failed with no status valid.\n");
+ wx_dbg(wx, "write value:\n");
+ for (i = 0; i < dword_len; i++)
+ wx_dbg(wx, "%x ", buffer[i]);
+ wx_dbg(wx, "read value:\n");
+ for (i = 0; i < dword_len; i++)
+ wx_dbg(wx, "%x ", buf[i]);
++ wx_dbg(wx, "\ncheck: %x %x\n", buffer[0] & 0xff, ~buf[0] >> 24);
++
++ goto rel_out;
+ }
+
+ if (!return_data)
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index e685a7f946f0f8..753215ebc67c70 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -3072,7 +3072,11 @@ static int ca8210_probe(struct spi_device *spi_device)
+ spi_set_drvdata(priv->spi, priv);
+ if (IS_ENABLED(CONFIG_IEEE802154_CA8210_DEBUGFS)) {
+ cascoda_api_upstream = ca8210_test_int_driver_write;
+- ca8210_test_interface_init(priv);
++ ret = ca8210_test_interface_init(priv);
++ if (ret) {
++ dev_crit(&spi_device->dev, "ca8210_test_interface_init failed\n");
++ goto error;
++ }
+ } else {
+ cascoda_api_upstream = NULL;
+ }
+diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c
+index 1bc87a0626860f..ee9d562f0817cf 100644
+--- a/drivers/net/mctp/mctp-i3c.c
++++ b/drivers/net/mctp/mctp-i3c.c
+@@ -125,6 +125,8 @@ static int mctp_i3c_read(struct mctp_i3c_device *mi)
+
+ xfer.data.in = skb_put(skb, mi->mrl);
+
++	/* Make sure netif_rx() sees packets in the same order as the I3C reads. */
++ mutex_lock(&mi->lock);
+ rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1);
+ if (rc < 0)
+ goto err;
+@@ -166,8 +168,10 @@ static int mctp_i3c_read(struct mctp_i3c_device *mi)
+ stats->rx_dropped++;
+ }
+
++ mutex_unlock(&mi->lock);
+ return 0;
+ err:
++ mutex_unlock(&mi->lock);
+ kfree_skb(skb);
+ return rc;
+ }
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 1aa303f76cc7af..da3651d329069c 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -507,8 +507,7 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ {
+ u32 type = event->attr.type;
+ u64 config = event->attr.config;
+- u64 raw_config_val;
+- int ret;
++ int ret = -ENOENT;
+
+ /*
+ * Ensure we are finished checking standard hardware events for
+@@ -528,21 +527,20 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ case PERF_TYPE_RAW:
+ /*
+ * As per SBI specification, the upper 16 bits must be unused
+- * for a raw event.
++ * for a hardware raw event.
+ * Bits 63:62 are used to distinguish between raw events
+ * 00 - Hardware raw event
+ * 10 - SBI firmware events
+ * 11 - Risc-V platform specific firmware event
+ */
+- raw_config_val = config & RISCV_PMU_RAW_EVENT_MASK;
++
+ switch (config >> 62) {
+ case 0:
+ ret = RISCV_PMU_RAW_EVENT_IDX;
+- *econfig = raw_config_val;
++ *econfig = config & RISCV_PMU_RAW_EVENT_MASK;
+ break;
+ case 2:
+- ret = (raw_config_val & 0xFFFF) |
+- (SBI_PMU_EVENT_TYPE_FW << 16);
++ ret = (config & 0xFFFF) | (SBI_PMU_EVENT_TYPE_FW << 16);
+ break;
+ case 3:
+ /*
+@@ -551,12 +549,13 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ * Event data - raw event encoding
+ */
+ ret = SBI_PMU_EVENT_TYPE_FW << 16 | RISCV_PLAT_FW_EVENT;
+- *econfig = raw_config_val;
++ *econfig = config & RISCV_PMU_PLAT_FW_EVENT_MASK;
++ break;
++ default:
+ break;
+ }
+ break;
+ default:
+- ret = -ENOENT;
+ break;
+ }
+
+diff --git a/drivers/platform/x86/amd/pmc/pmc.c b/drivers/platform/x86/amd/pmc/pmc.c
+index 5669f94c3d06bf..4d3acfe849bf4e 100644
+--- a/drivers/platform/x86/amd/pmc/pmc.c
++++ b/drivers/platform/x86/amd/pmc/pmc.c
+@@ -947,6 +947,10 @@ static int amd_pmc_suspend_handler(struct device *dev)
+ {
+ struct amd_pmc_dev *pdev = dev_get_drvdata(dev);
+
++	/*
++	 * Must be called from the same dev_pm_ops phase in which
++	 * i8042_pm_suspend() runs: currently only .suspend.
++	 */
+ if (pdev->disable_8042_wakeup && !disable_workarounds) {
+ int rc = amd_pmc_wa_irq1(pdev);
+
+@@ -959,7 +963,9 @@ static int amd_pmc_suspend_handler(struct device *dev)
+ return 0;
+ }
+
+-static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmc_pm, amd_pmc_suspend_handler, NULL);
++static const struct dev_pm_ops amd_pmc_pm = {
++ .suspend = amd_pmc_suspend_handler,
++};
+
+ static const struct pci_device_id pmc_pci_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PS) },
+diff --git a/drivers/platform/x86/intel/pmc/core_ssram.c b/drivers/platform/x86/intel/pmc/core_ssram.c
+index 8504154b649f47..927f58dc73e324 100644
+--- a/drivers/platform/x86/intel/pmc/core_ssram.c
++++ b/drivers/platform/x86/intel/pmc/core_ssram.c
+@@ -269,8 +269,12 @@ pmc_core_ssram_get_pmc(struct pmc_dev *pmcdev, int pmc_idx, u32 offset)
+ /*
+ * The secondary PMC BARS (which are behind hidden PCI devices)
+ * are read from fixed offsets in MMIO of the primary PMC BAR.
++ * If a device is not present, the value will be 0.
+ */
+ ssram_base = get_base(tmp_ssram, offset);
++ if (!ssram_base)
++ return 0;
++
+ ssram = ioremap(ssram_base, SSRAM_HDR_SIZE);
+ if (!ssram)
+ return -ENOMEM;
+diff --git a/drivers/staging/iio/frequency/ad9832.c b/drivers/staging/iio/frequency/ad9832.c
+index 492612e8f8bad5..140ee4f9c137f5 100644
+--- a/drivers/staging/iio/frequency/ad9832.c
++++ b/drivers/staging/iio/frequency/ad9832.c
+@@ -158,7 +158,7 @@ static int ad9832_write_frequency(struct ad9832_state *st,
+ static int ad9832_write_phase(struct ad9832_state *st,
+ unsigned long addr, unsigned long phase)
+ {
+- if (phase > BIT(AD9832_PHASE_BITS))
++ if (phase >= BIT(AD9832_PHASE_BITS))
+ return -EINVAL;
+
+ st->phase_data[0] = cpu_to_be16((AD9832_CMD_PHA8BITSW << CMD_SHIFT) |
+diff --git a/drivers/staging/iio/frequency/ad9834.c b/drivers/staging/iio/frequency/ad9834.c
+index 47e7d7e6d92089..6e99e008c5f432 100644
+--- a/drivers/staging/iio/frequency/ad9834.c
++++ b/drivers/staging/iio/frequency/ad9834.c
+@@ -131,7 +131,7 @@ static int ad9834_write_frequency(struct ad9834_state *st,
+ static int ad9834_write_phase(struct ad9834_state *st,
+ unsigned long addr, unsigned long phase)
+ {
+- if (phase > BIT(AD9834_PHASE_BITS))
++ if (phase >= BIT(AD9834_PHASE_BITS))
+ return -EINVAL;
+ st->data = cpu_to_be16(addr | phase);
+
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 07e09897165f34..5d3d8ce672cd51 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -176,6 +176,7 @@ static struct device_node *of_thermal_zone_find(struct device_node *sensor, int
+ goto out;
+ }
+
++ of_node_put(sensor_specs.np);
+ if ((sensor == sensor_specs.np) && id == (sensor_specs.args_count ?
+ sensor_specs.args[0] : 0)) {
+ pr_debug("sensor %pOFn id=%d belongs to %pOFn\n", sensor, id, child);
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 5f9f06911795cc..68baf75bdadc42 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -812,6 +812,9 @@ int serial8250_register_8250_port(const struct uart_8250_port *up)
+ uart->dl_write = up->dl_write;
+
+ if (uart->port.type != PORT_8250_CIR) {
++ if (uart_console_registered(&uart->port))
++ pm_runtime_get_sync(uart->port.dev);
++
+ if (serial8250_isa_config != NULL)
+ serial8250_isa_config(0, &uart->port,
+ &uart->capabilities);
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index e1e7bc04c57920..f5199fdecff278 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -1051,14 +1051,14 @@ static void stm32_usart_break_ctl(struct uart_port *port, int break_state)
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ unsigned long flags;
+
+- spin_lock_irqsave(&port->lock, flags);
++ uart_port_lock_irqsave(port, &flags);
+
+ if (break_state)
+ stm32_usart_set_bits(port, ofs->rqr, USART_RQR_SBKRQ);
+ else
+ stm32_usart_clr_bits(port, ofs->rqr, USART_RQR_SBKRQ);
+
+- spin_unlock_irqrestore(&port->lock, flags);
++ uart_port_unlock_irqrestore(port, flags);
+ }
+
+ static int stm32_usart_startup(struct uart_port *port)
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index 9ffd94ddf8c7ce..786f20ef22386b 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -237,12 +237,6 @@ static inline void ufshcd_vops_config_scaling_param(struct ufs_hba *hba,
+ hba->vops->config_scaling_param(hba, p, data);
+ }
+
+-static inline void ufshcd_vops_reinit_notify(struct ufs_hba *hba)
+-{
+- if (hba->vops && hba->vops->reinit_notify)
+- hba->vops->reinit_notify(hba);
+-}
+-
+ static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba)
+ {
+ if (hba->vops && hba->vops->mcq_config_resource)
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index bc13133efaa508..05b936ad353be7 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8881,7 +8881,6 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params)
+ ufshcd_device_reset(hba);
+ ufs_put_device_desc(hba);
+ ufshcd_hba_stop(hba);
+- ufshcd_vops_reinit_notify(hba);
+ ret = ufshcd_hba_enable(hba);
+ if (ret) {
+ dev_err(hba->dev, "Host controller enable failed\n");
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 91127fb171864f..989692fb91083f 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -368,6 +368,11 @@ static int ufs_qcom_power_up_sequence(struct ufs_hba *hba)
+ if (ret)
+ return ret;
+
++ if (phy->power_count) {
++ phy_power_off(phy);
++ phy_exit(phy);
++ }
++
+ /* phy initialization - calibrate the phy */
+ ret = phy_init(phy);
+ if (ret) {
+@@ -1562,13 +1567,6 @@ static void ufs_qcom_config_scaling_param(struct ufs_hba *hba,
+ }
+ #endif
+
+-static void ufs_qcom_reinit_notify(struct ufs_hba *hba)
+-{
+- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+-
+- phy_power_off(host->generic_phy);
+-}
+-
+ /* Resources */
+ static const struct ufshcd_res_info ufs_res_info[RES_MAX] = {
+ {.name = "ufs_mem",},
+@@ -1807,7 +1805,6 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
+ .device_reset = ufs_qcom_device_reset,
+ .config_scaling_param = ufs_qcom_config_scaling_param,
+ .program_key = ufs_qcom_ice_program_key,
+- .reinit_notify = ufs_qcom_reinit_notify,
+ .mcq_config_resource = ufs_qcom_mcq_config_resource,
+ .get_hba_mac = ufs_qcom_get_hba_mac,
+ .op_runtime_config = ufs_qcom_op_runtime_config,
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 17b3ac2ac8a1e8..46d1a4428b9a98 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -370,25 +370,29 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ data->pinctrl = devm_pinctrl_get(dev);
+ if (PTR_ERR(data->pinctrl) == -ENODEV)
+ data->pinctrl = NULL;
+- else if (IS_ERR(data->pinctrl))
+- return dev_err_probe(dev, PTR_ERR(data->pinctrl),
++ else if (IS_ERR(data->pinctrl)) {
++ ret = dev_err_probe(dev, PTR_ERR(data->pinctrl),
+ "pinctrl get failed\n");
++ goto err_put;
++ }
+
+ data->hsic_pad_regulator =
+ devm_regulator_get_optional(dev, "hsic");
+ if (PTR_ERR(data->hsic_pad_regulator) == -ENODEV) {
+ /* no pad regulator is needed */
+ data->hsic_pad_regulator = NULL;
+- } else if (IS_ERR(data->hsic_pad_regulator))
+- return dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator),
++ } else if (IS_ERR(data->hsic_pad_regulator)) {
++ ret = dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator),
+ "Get HSIC pad regulator error\n");
++ goto err_put;
++ }
+
+ if (data->hsic_pad_regulator) {
+ ret = regulator_enable(data->hsic_pad_regulator);
+ if (ret) {
+ dev_err(dev,
+ "Failed to enable HSIC pad regulator\n");
+- return ret;
++ goto err_put;
+ }
+ }
+ }
+@@ -402,13 +406,14 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ dev_err(dev,
+ "pinctrl_hsic_idle lookup failed, err=%ld\n",
+ PTR_ERR(pinctrl_hsic_idle));
+- return PTR_ERR(pinctrl_hsic_idle);
++ ret = PTR_ERR(pinctrl_hsic_idle);
++ goto err_put;
+ }
+
+ ret = pinctrl_select_state(data->pinctrl, pinctrl_hsic_idle);
+ if (ret) {
+ dev_err(dev, "hsic_idle select failed, err=%d\n", ret);
+- return ret;
++ goto err_put;
+ }
+
+ data->pinctrl_hsic_active = pinctrl_lookup_state(data->pinctrl,
+@@ -417,7 +422,8 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ dev_err(dev,
+ "pinctrl_hsic_active lookup failed, err=%ld\n",
+ PTR_ERR(data->pinctrl_hsic_active));
+- return PTR_ERR(data->pinctrl_hsic_active);
++ ret = PTR_ERR(data->pinctrl_hsic_active);
++ goto err_put;
+ }
+ }
+
+@@ -527,6 +533,8 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ if (pdata.flags & CI_HDRC_PMQOS)
+ cpu_latency_qos_remove_request(&data->pm_qos_req);
+ data->ci_pdev = NULL;
++err_put:
++ put_device(data->usbmisc_data->dev);
+ return ret;
+ }
+
+@@ -551,6 +559,7 @@ static void ci_hdrc_imx_remove(struct platform_device *pdev)
+ if (data->hsic_pad_regulator)
+ regulator_disable(data->hsic_pad_regulator);
+ }
++ put_device(data->usbmisc_data->dev);
+ }
+
+ static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 5a2e43331064eb..ff1a941fd2ede4 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -1337,11 +1337,12 @@ static int usblp_set_protocol(struct usblp *usblp, int protocol)
+ if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL)
+ return -EINVAL;
+
++ alts = usblp->protocol[protocol].alt_setting;
++ if (alts < 0)
++ return -EINVAL;
++
+ /* Don't unnecessarily set the interface if there's a single alt. */
+ if (usblp->intf->num_altsetting > 1) {
+- alts = usblp->protocol[protocol].alt_setting;
+- if (alts < 0)
+- return -EINVAL;
+ r = usb_set_interface(usblp->dev, usblp->ifnum, alts);
+ if (r < 0) {
+ printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 4b93c0bd1d4bcc..21ac9b464696f5 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2663,13 +2663,13 @@ int usb_new_device(struct usb_device *udev)
+ err = sysfs_create_link(&udev->dev.kobj,
+ &port_dev->dev.kobj, "port");
+ if (err)
+- goto fail;
++ goto out_del_dev;
+
+ err = sysfs_create_link(&port_dev->dev.kobj,
+ &udev->dev.kobj, "device");
+ if (err) {
+ sysfs_remove_link(&udev->dev.kobj, "port");
+- goto fail;
++ goto out_del_dev;
+ }
+
+ if (!test_and_set_bit(port1, hub->child_usage_bits))
+@@ -2683,6 +2683,8 @@ int usb_new_device(struct usb_device *udev)
+ pm_runtime_put_sync_autosuspend(&udev->dev);
+ return err;
+
++out_del_dev:
++ device_del(&udev->dev);
+ fail:
+ usb_set_device_state(udev, USB_STATE_NOTATTACHED);
+ pm_runtime_disable(&udev->dev);
+diff --git a/drivers/usb/core/port.c b/drivers/usb/core/port.c
+index e7da2fca11a48c..c92fb648a1c4c0 100644
+--- a/drivers/usb/core/port.c
++++ b/drivers/usb/core/port.c
+@@ -452,10 +452,11 @@ static int usb_port_runtime_suspend(struct device *dev)
+ static void usb_port_shutdown(struct device *dev)
+ {
+ struct usb_port *port_dev = to_usb_port(dev);
++ struct usb_device *udev = port_dev->child;
+
+- if (port_dev->child) {
+- usb_disable_usb2_hardware_lpm(port_dev->child);
+- usb_unlocked_disable_lpm(port_dev->child);
++ if (udev && !udev->port_is_suspended) {
++ usb_disable_usb2_hardware_lpm(udev);
++ usb_unlocked_disable_lpm(udev);
+ }
+ }
+
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 0b9ba338b2654c..0e91a227507fff 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -464,6 +464,7 @@
+ #define DWC3_DCTL_TRGTULST_SS_INACT (DWC3_DCTL_TRGTULST(6))
+
+ /* These apply for core versions 1.94a and later */
++#define DWC3_DCTL_NYET_THRES_MASK (0xf << 20)
+ #define DWC3_DCTL_NYET_THRES(n) (((n) & 0xf) << 20)
+
+ #define DWC3_DCTL_KEEP_CONNECT BIT(19)
+diff --git a/drivers/usb/dwc3/dwc3-am62.c b/drivers/usb/dwc3/dwc3-am62.c
+index fad151e78fd669..538185a4d1b4fb 100644
+--- a/drivers/usb/dwc3/dwc3-am62.c
++++ b/drivers/usb/dwc3/dwc3-am62.c
+@@ -309,6 +309,7 @@ static void dwc3_ti_remove(struct platform_device *pdev)
+
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_dont_use_autosuspend(dev);
+ pm_runtime_set_suspended(dev);
+ }
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 56744b11e67cb9..a5d75d7d0a8707 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -4208,8 +4208,10 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+ WARN_ONCE(DWC3_VER_IS_PRIOR(DWC3, 240A) && dwc->has_lpm_erratum,
+ "LPM Erratum not available on dwc3 revisions < 2.40a\n");
+
+- if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A))
++ if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A)) {
++ reg &= ~DWC3_DCTL_NYET_THRES_MASK;
+ reg |= DWC3_DCTL_NYET_THRES(dwc->lpm_nyet_threshold);
++ }
+
+ dwc3_gadget_dctl_write_safe(dwc, reg);
+ } else {
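The dwc3 fix above inserts a mask-clear before the OR, since a bare
read-modify-write OR would merge stale NYET-threshold bits with the new
value. The pattern, reduced to a sketch (the field layout below mirrors
DWC3_DCTL_NYET_THRES but is used purely illustratively):

    #include <stdint.h>
    #include <stdio.h>

    #define NYET_THRES_MASK (0xfu << 20)
    #define NYET_THRES(n)   (((n) & 0xfu) << 20)

    /* Clear the field first; otherwise old 0xf OR new 0x3 stays 0xf. */
    static uint32_t set_nyet_thres(uint32_t reg, unsigned int n)
    {
            reg &= ~NYET_THRES_MASK;
            reg |= NYET_THRES(n);
            return reg;
    }

    int main(void)
    {
            uint32_t reg = NYET_THRES(0xf);
            printf("%#x\n", set_nyet_thres(reg, 0x3)); /* 0x300000 */
            return 0;
    }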
+diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
+index 566ff0b1282a82..76521555e3c14c 100644
+--- a/drivers/usb/gadget/Kconfig
++++ b/drivers/usb/gadget/Kconfig
+@@ -211,6 +211,8 @@ config USB_F_MIDI
+
+ config USB_F_MIDI2
+ tristate
++ select SND_UMP
++ select SND_UMP_LEGACY_RAWMIDI
+
+ config USB_F_HID
+ tristate
+@@ -445,8 +447,6 @@ config USB_CONFIGFS_F_MIDI2
+ depends on USB_CONFIGFS
+ depends on SND
+ select USB_LIBCOMPOSITE
+- select SND_UMP
+- select SND_UMP_LEGACY_RAWMIDI
+ select USB_F_MIDI2
+ help
+ The MIDI 2.0 function driver provides the generic emulated
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index c82a6a0fba93dd..29390d573e2346 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -827,11 +827,15 @@ static ssize_t gadget_string_s_store(struct config_item *item, const char *page,
+ {
+ struct gadget_string *string = to_gadget_string(item);
+ int size = min(sizeof(string->string), len + 1);
++ ssize_t cpy_len;
+
+ if (len > USB_MAX_STRING_LEN)
+ return -EINVAL;
+
+- return strscpy(string->string, page, size);
++ cpy_len = strscpy(string->string, page, size);
++ if (cpy_len > 0 && string->string[cpy_len - 1] == '\n')
++ string->string[cpy_len - 1] = 0;
++ return len;
+ }
+ CONFIGFS_ATTR(gadget_string_, s);
+
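The configfs change above strips the newline that "echo foo > s" leaves
behind and, importantly, still returns the full input length so user
space does not retry what it sees as a short write. A hedged userspace
approximation (strscpy() is kernel-only; strnlen()/memcpy() stand in
for it here):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    static ssize_t store_string(char *dst, size_t dstsz,
                                const char *page, size_t len)
    {
            size_t n = strnlen(page, dstsz - 1);

            memcpy(dst, page, n);
            dst[n] = '\0';
            /* trim a single trailing newline from shell writes */
            if (n > 0 && dst[n - 1] == '\n')
                    dst[n - 1] = '\0';
            return len; /* consume everything, like the fixed store */
    }

    int main(void)
    {
            char buf[32];
            ssize_t r = store_string(buf, sizeof(buf), "hello\n", 6);
            printf("[%s] %zd\n", buf, r); /* [hello] 6 */
            return 0;
    }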
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 2920f8000bbd83..92c883440e02cd 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -2285,7 +2285,7 @@ static int functionfs_bind(struct ffs_data *ffs, struct usb_composite_dev *cdev)
+ struct usb_gadget_strings **lang;
+ int first_id;
+
+- if (WARN_ON(ffs->state != FFS_ACTIVE
++ if ((ffs->state != FFS_ACTIVE
+ || test_and_set_bit(FFS_FL_BOUND, &ffs->flags)))
+ return -EBADFD;
+
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index ce5b77f8919026..9b324821c93bd0 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -1185,6 +1185,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ uac2->as_in_alt = 0;
+ }
+
++ std_ac_if_desc.bNumEndpoints = 0;
+ if (FUOUT_EN(uac2_opts) || FUIN_EN(uac2_opts)) {
+ uac2->int_ep = usb_ep_autoconfig(gadget, &fs_ep_int_desc);
+ if (!uac2->int_ep) {
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 53d9fc41acc522..bc143a86c2ddf0 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1420,6 +1420,10 @@ void gserial_disconnect(struct gserial *gser)
+ /* REVISIT as above: how best to track this? */
+ port->port_line_coding = gser->port_line_coding;
+
++ /* disable endpoints, aborting down any active I/O */
++ usb_ep_disable(gser->out);
++ usb_ep_disable(gser->in);
++
+ port->port_usb = NULL;
+ gser->ioport = NULL;
+ if (port->port.count > 0) {
+@@ -1431,10 +1435,6 @@ void gserial_disconnect(struct gserial *gser)
+ spin_unlock(&port->port_lock);
+ spin_unlock_irqrestore(&serial_port_lock, flags);
+
+- /* disable endpoints, aborting down any active I/O */
+- usb_ep_disable(gser->out);
+- usb_ep_disable(gser->in);
+-
+ /* finally, free any unused/unusable I/O buffers */
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (port->port.count == 0)
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index ecaa75718e5926..e6660472501e4d 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -290,7 +290,8 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+
+ hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node);
+
+- if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT))
++ if ((priv && (priv->quirks & XHCI_SKIP_PHY_INIT)) ||
++ (xhci->quirks & XHCI_SKIP_PHY_INIT))
+ hcd->skip_phy_initialization = 1;
+
+ if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c24101f0a07ad1..9960ac2b10b719 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -223,6 +223,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
++ { USB_DEVICE(0x1B93, 0x1013) }, /* Phoenix Contact UPS Device */
+ { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */
+ { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
+ { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 64317b390d2285..1e2ae0c6c41c79 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -621,7 +621,7 @@ static void option_instat_callback(struct urb *urb);
+
+ /* MeiG Smart Technology products */
+ #define MEIGSMART_VENDOR_ID 0x2dee
+-/* MeiG Smart SRM825L based on Qualcomm 315 */
++/* MeiG Smart SRM815/SRM825L based on Qualcomm 315 */
+ #define MEIGSMART_PRODUCT_SRM825L 0x4d22
+ /* MeiG Smart SLM320 based on UNISOC UIS8910 */
+ #define MEIGSMART_PRODUCT_SLM320 0x4d41
+@@ -2405,6 +2405,7 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
+@@ -2412,6 +2413,7 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(1) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */
+ .driver_info = NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(0x2949, 0x8700, 0xff) }, /* Neoway N723-EA */
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index e5ad23d86833d5..54f0b1c83317cd 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -255,6 +255,13 @@ UNUSUAL_DEV( 0x0421, 0x06aa, 0x1110, 0x1110,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_MAX_SECTORS_64 ),
+
++/* Added by Lubomir Rintel <lkundrak@v3.sk>, a very fine chap */
++UNUSUAL_DEV( 0x0421, 0x06c2, 0x0000, 0x0406,
++ "Nokia",
++ "Nokia 208",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_MAX_SECTORS_64 ),
++
+ #ifdef NO_SDDR09
+ UNUSUAL_DEV( 0x0436, 0x0005, 0x0100, 0x0100,
+ "Microtech",
+diff --git a/drivers/usb/typec/tcpm/maxim_contaminant.c b/drivers/usb/typec/tcpm/maxim_contaminant.c
+index 22163d8f9eb07e..0cdda06592fd3c 100644
+--- a/drivers/usb/typec/tcpm/maxim_contaminant.c
++++ b/drivers/usb/typec/tcpm/maxim_contaminant.c
+@@ -135,7 +135,7 @@ static int max_contaminant_read_resistance_kohm(struct max_tcpci_chip *chip,
+
+ mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true);
+ if (mv < 0)
+- return ret;
++ return mv;
+
+ /* OVP enable */
+ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCOVPDIS, 0);
+@@ -157,7 +157,7 @@ static int max_contaminant_read_resistance_kohm(struct max_tcpci_chip *chip,
+
+ mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true);
+ if (mv < 0)
+- return ret;
++ return mv;
+ /* Disable current source */
+ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, SBURPCTRL, 0);
+ if (ret < 0)
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index ed32583829bec2..24a6a4354df8ba 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -700,7 +700,7 @@ static int tcpci_init(struct tcpc_dev *tcpc)
+
+ tcpci->alert_mask = reg;
+
+- return tcpci_write16(tcpci, TCPC_ALERT_MASK, reg);
++ return 0;
+ }
+
+ irqreturn_t tcpci_irq(struct tcpci *tcpci)
+@@ -923,22 +923,27 @@ static int tcpci_probe(struct i2c_client *client)
+
+ chip->data.set_orientation = err;
+
++ chip->tcpci = tcpci_register_port(&client->dev, &chip->data);
++ if (IS_ERR(chip->tcpci))
++ return PTR_ERR(chip->tcpci);
++
+ err = devm_request_threaded_irq(&client->dev, client->irq, NULL,
+ _tcpci_irq,
+ IRQF_SHARED | IRQF_ONESHOT,
+ dev_name(&client->dev), chip);
+ if (err < 0)
+- return err;
++ goto unregister_port;
+
+- /*
+- * Disable irq while registering port. If irq is configured as an edge
+- * irq this allow to keep track and process the irq as soon as it is enabled.
+- */
+- disable_irq(client->irq);
+- chip->tcpci = tcpci_register_port(&client->dev, &chip->data);
+- enable_irq(client->irq);
++	/* Enable chip interrupts last */
++ err = tcpci_write16(chip->tcpci, TCPC_ALERT_MASK, chip->tcpci->alert_mask);
++ if (err < 0)
++ goto unregister_port;
+
+- return PTR_ERR_OR_ZERO(chip->tcpci);
++ return 0;
++
++unregister_port:
++ tcpci_unregister_port(chip->tcpci);
++ return err;
+ }
+
+ static void tcpci_remove(struct i2c_client *client)
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index fcb8e61136cfd7..740171f24ef9fa 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -646,7 +646,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ UCSI_CMD_CONNECTOR_MASK;
+ if (con_index == 0) {
+ ret = -EINVAL;
+- goto unlock;
++ goto err_put;
+ }
+ con = &uc->ucsi->connector[con_index - 1];
+ ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
+@@ -654,8 +654,8 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+
+ ret = ucsi_sync_control_common(ucsi, command);
+
++err_put:
+ pm_runtime_put_sync(uc->dev);
+-unlock:
+ mutex_unlock(&uc->lock);
+
+ return ret;
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 1ab58da9f38a6e..1a4ed5a357d360 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -1661,14 +1661,15 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
+ unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
+
+- if (order && (vmf->address & ((PAGE_SIZE << order) - 1) ||
++ pfn = vma_to_pfn(vma) + pgoff;
++
++ if (order && (pfn & ((1 << order) - 1) ||
++ vmf->address & ((PAGE_SIZE << order) - 1) ||
+ vmf->address + (PAGE_SIZE << order) > vma->vm_end)) {
+ ret = VM_FAULT_FALLBACK;
+ goto out;
+ }
+
+- pfn = vma_to_pfn(vma);
+-
+ down_read(&vdev->memory_lock);
+
+ if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
+@@ -1676,18 +1677,18 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
+
+ switch (order) {
+ case 0:
+- ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
++ ret = vmf_insert_pfn(vma, vmf->address, pfn);
+ break;
+ #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+ case PMD_ORDER:
+- ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn + pgoff,
+- PFN_DEV), false);
++ ret = vmf_insert_pfn_pmd(vmf,
++ __pfn_to_pfn_t(pfn, PFN_DEV), false);
+ break;
+ #endif
+ #ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+ case PUD_ORDER:
+- ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn + pgoff,
+- PFN_DEV), false);
++ ret = vmf_insert_pfn_pud(vmf,
++ __pfn_to_pfn_t(pfn, PFN_DEV), false);
+ break;
+ #endif
+ default:
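The vfio hunk above now folds pgoff into the pfn before testing
alignment: a PMD- or PUD-order insertion is only legal when the target
pfn itself is aligned to that order, not just the faulting virtual
address. The check in isolation:

    #include <stdio.h>

    /* An order-N huge mapping needs the low N bits of the pfn clear. */
    static int pfn_aligned(unsigned long pfn, unsigned int order)
    {
            return (pfn & ((1UL << order) - 1)) == 0;
    }

    int main(void)
    {
            /* PMD order on x86-64 is 9 (2 MiB huge page / 4 KiB pages) */
            printf("%d %d\n", pfn_aligned(512, 9), pfn_aligned(513, 9));
            /* prints: 1 0 */
            return 0;
    }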
+diff --git a/fs/afs/afs.h b/fs/afs/afs.h
+index b488072aee87ae..ec3db00bd0813c 100644
+--- a/fs/afs/afs.h
++++ b/fs/afs/afs.h
+@@ -10,7 +10,7 @@
+
+ #include <linux/in.h>
+
+-#define AFS_MAXCELLNAME 256 /* Maximum length of a cell name */
++#define AFS_MAXCELLNAME 253 /* Maximum length of a cell name (DNS limited) */
+ #define AFS_MAXVOLNAME 64 /* Maximum length of a volume name */
+ #define AFS_MAXNSERVERS 8 /* Maximum servers in a basic volume record */
+ #define AFS_NMAXNSERVERS 13 /* Maximum servers in a N/U-class volume record */
+diff --git a/fs/afs/afs_vl.h b/fs/afs/afs_vl.h
+index a06296c8827d42..b835e25a2c02d3 100644
+--- a/fs/afs/afs_vl.h
++++ b/fs/afs/afs_vl.h
+@@ -13,6 +13,7 @@
+ #define AFS_VL_PORT 7003 /* volume location service port */
+ #define VL_SERVICE 52 /* RxRPC service ID for the Volume Location service */
+ #define YFS_VL_SERVICE 2503 /* Service ID for AuriStor upgraded VL service */
++#define YFS_VL_MAXCELLNAME 256 /* Maximum length of a cell name in YFS protocol */
+
+ enum AFSVL_Operations {
+ VLGETENTRYBYID = 503, /* AFS Get VLDB entry by ID */
+diff --git a/fs/afs/vl_alias.c b/fs/afs/vl_alias.c
+index 9f36e14f1c2d24..f9e76b604f31b9 100644
+--- a/fs/afs/vl_alias.c
++++ b/fs/afs/vl_alias.c
+@@ -253,6 +253,7 @@ static char *afs_vl_get_cell_name(struct afs_cell *cell, struct key *key)
+ static int yfs_check_canonical_cell_name(struct afs_cell *cell, struct key *key)
+ {
+ struct afs_cell *master;
++ size_t name_len;
+ char *cell_name;
+
+ cell_name = afs_vl_get_cell_name(cell, key);
+@@ -264,8 +265,11 @@ static int yfs_check_canonical_cell_name(struct afs_cell *cell, struct key *key)
+ return 0;
+ }
+
+- master = afs_lookup_cell(cell->net, cell_name, strlen(cell_name),
+- NULL, false);
++ name_len = strlen(cell_name);
++ if (!name_len || name_len > AFS_MAXCELLNAME)
++ master = ERR_PTR(-EOPNOTSUPP);
++ else
++ master = afs_lookup_cell(cell->net, cell_name, name_len, NULL, false);
+ kfree(cell_name);
+ if (IS_ERR(master))
+ return PTR_ERR(master);
+diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
+index cac75f89b64ad1..55dd0fc5aad7bf 100644
+--- a/fs/afs/vlclient.c
++++ b/fs/afs/vlclient.c
+@@ -697,7 +697,7 @@ static int afs_deliver_yfsvl_get_cell_name(struct afs_call *call)
+ return ret;
+
+ namesz = ntohl(call->tmp);
+- if (namesz > AFS_MAXCELLNAME)
++ if (namesz > YFS_VL_MAXCELLNAME)
+ return afs_protocol_error(call, afs_eproto_cellname_len);
+ paddedsz = (namesz + 3) & ~3;
+ call->count = namesz;
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 872cca54cc6ce4..42c9899d9241c9 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -786,7 +786,7 @@ static void submit_extent_folio(struct btrfs_bio_ctrl *bio_ctrl,
+ }
+
+ if (bio_ctrl->wbc)
+- wbc_account_cgroup_owner(bio_ctrl->wbc, &folio->page,
++ wbc_account_cgroup_owner(bio_ctrl->wbc, folio,
+ len);
+
+ size -= len;
+@@ -1708,7 +1708,7 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
+ ret = bio_add_folio(&bbio->bio, folio, eb->len,
+ eb->start - folio_pos(folio));
+ ASSERT(ret);
+- wbc_account_cgroup_owner(wbc, folio_page(folio, 0), eb->len);
++ wbc_account_cgroup_owner(wbc, folio, eb->len);
+ folio_unlock(folio);
+ } else {
+ int num_folios = num_extent_folios(eb);
+@@ -1722,8 +1722,7 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
+ folio_start_writeback(folio);
+ ret = bio_add_folio(&bbio->bio, folio, eb->folio_size, 0);
+ ASSERT(ret);
+- wbc_account_cgroup_owner(wbc, folio_page(folio, 0),
+- eb->folio_size);
++ wbc_account_cgroup_owner(wbc, folio, eb->folio_size);
+ wbc->nr_to_write -= folio_nr_pages(folio);
+ folio_unlock(folio);
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b5cfb85af937fc..a3c861b2a6d25d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1729,7 +1729,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
+ * need full accuracy. Just account the whole thing
+ * against the first page.
+ */
+- wbc_account_cgroup_owner(wbc, &locked_folio->page,
++ wbc_account_cgroup_owner(wbc, locked_folio,
+ cur_end - start);
+ async_chunk[i].locked_folio = locked_folio;
+ locked_folio = NULL;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 3a34274280746c..c73a41b1ad5607 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -1541,6 +1541,10 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg,
+ u64 extent_gen;
+ int ret;
+
++ if (unlikely(!extent_root)) {
++ btrfs_err(fs_info, "no valid extent root for scrub");
++ return -EUCLEAN;
++ }
+ memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
+ stripe->nr_sectors);
+ scrub_stripe_reset_bitmaps(stripe);
+diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
+index 100abc00b794ca..03958d1a53b0eb 100644
+--- a/fs/btrfs/zlib.c
++++ b/fs/btrfs/zlib.c
+@@ -174,10 +174,10 @@ int zlib_compress_folios(struct list_head *ws, struct address_space *mapping,
+ copy_page(workspace->buf + i * PAGE_SIZE,
+ data_in);
+ start += PAGE_SIZE;
+- workspace->strm.avail_in =
+- (in_buf_folios << PAGE_SHIFT);
+ }
+ workspace->strm.next_in = workspace->buf;
++ workspace->strm.avail_in = min(bytes_left,
++ in_buf_folios << PAGE_SHIFT);
+ } else {
+ unsigned int pg_off;
+ unsigned int cur_len;
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 1fc9a50def0b51..32bd0f4c422360 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -2803,7 +2803,7 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
+ bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
+ bio->bi_write_hint = write_hint;
+
+- __bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
++ bio_add_folio_nofail(bio, bh->b_folio, bh->b_size, bh_offset(bh));
+
+ bio->bi_end_io = end_bio_bh_io_sync;
+ bio->bi_private = bh;
+@@ -2813,7 +2813,7 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
+
+ if (wbc) {
+ wbc_init_bio(wbc, bio);
+- wbc_account_cgroup_owner(wbc, bh->b_page, bh->b_size);
++ wbc_account_cgroup_owner(wbc, bh->b_folio, bh->b_size);
+ }
+
+ submit_bio(bio);
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 7446bf09a04a8f..9d8848872fe8ac 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -125,7 +125,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ type = exfat_get_entry_type(ep);
+ if (type == TYPE_UNUSED) {
+ brelse(bh);
+- break;
++ goto out;
+ }
+
+ if (type != TYPE_FILE && type != TYPE_DIR) {
+@@ -189,6 +189,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ }
+ }
+
++out:
+ dir_entry->namebuf.lfn[0] = '\0';
+ *cpos = EXFAT_DEN_TO_B(dentry);
+ return 0;
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 773c320d68f3f2..9e5492ac409b07 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -216,6 +216,16 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
+
+ if (err)
+ goto dec_used_clus;
++
++ if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) {
++ /*
++ * The cluster chain includes a loop, scan the
++ * bitmap to get the number of used clusters.
++ */
++ exfat_count_used_clusters(sb, &sbi->used_clusters);
++
++ return 0;
++ }
+ } while (clu != EXFAT_EOF_CLUSTER);
+ }
+
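The exfat hunk above bounds the chain walk by the total cluster count:
once more clusters have been freed than can exist, the chain provably
contains a loop and the code falls back to a bitmap scan for the used
count. A small standalone model of that bound (array-backed FAT,
hypothetical names):

    #include <stdio.h>

    #define EOF_CLUSTER 0xffffffffu

    /* Visiting more clusters than exist proves a cycle in the FAT. */
    static int chain_has_loop(const unsigned int *fat,
                              unsigned int nclus, unsigned int clu)
    {
            unsigned int steps = 0;

            while (clu != EOF_CLUSTER) {
                    if (++steps > nclus)
                            return 1;
                    clu = fat[clu];
            }
            return 0;
    }

    int main(void)
    {
            unsigned int fat[4] = { 1, 2, 0, EOF_CLUSTER }; /* 0->1->2->0 */
            printf("%d\n", chain_has_loop(fat, 4, 0)); /* prints: 1 */
            return 0;
    }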
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index fb38769c3e39d1..05b51e7217838f 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -545,6 +545,7 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
+ while (pos < new_valid_size) {
+ u32 len;
+ struct folio *folio;
++ unsigned long off;
+
+ len = PAGE_SIZE - (pos & (PAGE_SIZE - 1));
+ if (pos + len > new_valid_size)
+@@ -554,6 +555,9 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
+ if (err)
+ goto out;
+
++ off = offset_in_folio(folio, pos);
++ folio_zero_new_buffers(folio, off, off + len);
++
+ err = ops->write_end(file, mapping, pos, len, len, folio, NULL);
+ if (err < 0)
+ goto out;
+@@ -563,6 +567,8 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
+ cond_resched();
+ }
+
++ return 0;
++
+ out:
+ return err;
+ }
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index ad5543866d2152..b7b9261fec3b50 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -421,7 +421,7 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
+ io_submit_init_bio(io, bh);
+ if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
+ goto submit_and_retry;
+- wbc_account_cgroup_owner(io->io_wbc, &folio->page, bh->b_size);
++ wbc_account_cgroup_owner(io->io_wbc, folio, bh->b_size);
+ io->io_next_block++;
+ }
+
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index da0960d496ae09..1b0050b8421d88 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -711,7 +711,8 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc && !is_read_io(fio->op))
+- wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
++ PAGE_SIZE);
+
+ inc_page_count(fio->sbi, is_read_io(fio->op) ?
+ __read_io_type(page) : WB_DATA_TYPE(fio->page, false));
+@@ -911,7 +912,8 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc)
+- wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
++ PAGE_SIZE);
+
+ inc_page_count(fio->sbi, WB_DATA_TYPE(page, false));
+
+@@ -1011,7 +1013,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc)
+- wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
++ PAGE_SIZE);
+
+ io->last_block_in_bio = fio->new_blkaddr;
+
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index d8bec3c1bb1fa7..2391b09f4cedec 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -890,17 +890,16 @@ EXPORT_SYMBOL_GPL(wbc_detach_inode);
+ /**
+ * wbc_account_cgroup_owner - account writeback to update inode cgroup ownership
+ * @wbc: writeback_control of the writeback in progress
+- * @page: page being written out
++ * @folio: folio being written out
+ * @bytes: number of bytes being written out
+ *
+- * @bytes from @page are about to written out during the writeback
++ * @bytes from @folio are about to be written out during the writeback
+ * controlled by @wbc. Keep the book for foreign inode detection. See
+ * wbc_detach_inode().
+ */
+-void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
++void wbc_account_cgroup_owner(struct writeback_control *wbc, struct folio *folio,
+ size_t bytes)
+ {
+- struct folio *folio;
+ struct cgroup_subsys_state *css;
+ int id;
+
+@@ -913,7 +912,6 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
+ if (!wbc->wb || wbc->no_cgroup_owner)
+ return;
+
+- folio = page_folio(page);
+ css = mem_cgroup_css_from_folio(folio);
+ /* dead cgroups shouldn't contribute to inode ownership arbitration */
+ if (!(css->flags & CSS_ONLINE))
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 54104dd48af7c9..2e62e62c07f836 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1680,6 +1680,8 @@ static int fuse_dir_open(struct inode *inode, struct file *file)
+ */
+ if (ff->open_flags & (FOPEN_STREAM | FOPEN_NONSEEKABLE))
+ nonseekable_open(inode, file);
++ if (!(ff->open_flags & FOPEN_KEEP_CACHE))
++ invalidate_inode_pages2(inode->i_mapping);
+ }
+
+ return err;
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index ef0b68bccbb612..25d1ede6bb0eb0 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1764,7 +1764,8 @@ static bool iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t pos)
+ */
+ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+ struct writeback_control *wbc, struct folio *folio,
+- struct inode *inode, loff_t pos, unsigned len)
++ struct inode *inode, loff_t pos, loff_t end_pos,
++ unsigned len)
+ {
+ struct iomap_folio_state *ifs = folio->private;
+ size_t poff = offset_in_folio(folio, pos);
+@@ -1783,15 +1784,60 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+
+ if (ifs)
+ atomic_add(len, &ifs->write_bytes_pending);
++
++ /*
++ * Clamp io_offset and io_size to the incore EOF so that ondisk
++ * file size updates in the ioend completion are byte-accurate.
++ * This avoids recovering files with zeroed tail regions when
++ * writeback races with appending writes:
++ *
++ * Thread 1: Thread 2:
++ * ------------ -----------
++ * write [A, A+B]
++ * update inode size to A+B
++ * submit I/O [A, A+BS]
++ * write [A+B, A+B+C]
++ * update inode size to A+B+C
++ * <I/O completes, updates disk size to min(A+B+C, A+BS)>
++ * <power failure>
++ *
++ * After reboot:
++ * 1) with A+B+C < A+BS, the file has zero padding in range
++ * [A+B, A+B+C]
++ *
++ * |< Block Size (BS) >|
++ * |DDDDDDDDDDDD0000000000000|
++ * ^ ^ ^
++ * A A+B A+B+C
++ * (EOF)
++ *
++ * 2) with A+B+C > A+BS, the file has zero padding in range
++ * [A+B, A+BS]
++ *
++ * |< Block Size (BS) >|< Block Size (BS) >|
++ * |DDDDDDDDDDDD0000000000000|00000000000000000000000000|
++ * ^ ^ ^ ^
++ * A A+B A+BS A+B+C
++ * (EOF)
++ *
++ * D = Valid Data
++ * 0 = Zero Padding
++ *
++ * Note that this defeats the ability to chain the ioends of
++ * appending writes.
++ */
+ wpc->ioend->io_size += len;
+- wbc_account_cgroup_owner(wbc, &folio->page, len);
++ if (wpc->ioend->io_offset + wpc->ioend->io_size > end_pos)
++ wpc->ioend->io_size = end_pos - wpc->ioend->io_offset;
++
++ wbc_account_cgroup_owner(wbc, folio, len);
+ return 0;
+ }
+
+ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
+ struct writeback_control *wbc, struct folio *folio,
+- struct inode *inode, u64 pos, unsigned dirty_len,
+- unsigned *count)
++ struct inode *inode, u64 pos, u64 end_pos,
++ unsigned dirty_len, unsigned *count)
+ {
+ int error;
+
+@@ -1816,7 +1862,7 @@ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
+ break;
+ default:
+ error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos,
+- map_len);
++ end_pos, map_len);
+ if (!error)
+ (*count)++;
+ break;
+@@ -1887,11 +1933,11 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
+ * remaining memory is zeroed when mapped, and writes to that
+ * region are not written out to the file.
+ *
+- * Also adjust the writeback range to skip all blocks entirely
+- * beyond i_size.
++ * Also adjust the end_pos to the end of file and skip writeback
++ * for all blocks entirely beyond i_size.
+ */
+ folio_zero_segment(folio, poff, folio_size(folio));
+- *end_pos = round_up(isize, i_blocksize(inode));
++ *end_pos = isize;
+ }
+
+ return true;
+@@ -1904,6 +1950,7 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ struct inode *inode = folio->mapping->host;
+ u64 pos = folio_pos(folio);
+ u64 end_pos = pos + folio_size(folio);
++ u64 end_aligned = 0;
+ unsigned count = 0;
+ int error = 0;
+ u32 rlen;
+@@ -1945,9 +1992,10 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ /*
+ * Walk through the folio to find dirty areas to write back.
+ */
+- while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) {
++ end_aligned = round_up(end_pos, i_blocksize(inode));
++ while ((rlen = iomap_find_dirty_range(folio, &pos, end_aligned))) {
+ error = iomap_writepage_map_blocks(wpc, wbc, folio, inode,
+- pos, rlen, &count);
++ pos, end_pos, rlen, &count);
+ if (error)
+ break;
+ pos += rlen;
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 4305a1ac808a60..f95cf272a1b500 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -776,9 +776,9 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ /*
+ * If the journal is not located on the file system device,
+ * then we must flush the file system device before we issue
+- * the commit record
++ * the commit record and update the journal tail sequence.
+ */
+- if (commit_transaction->t_need_data_flush &&
++ if ((commit_transaction->t_need_data_flush || update_tail) &&
+ (journal->j_fs_dev != journal->j_dev) &&
+ (journal->j_flags & JBD2_BARRIER))
+ blkdev_issue_flush(journal->j_fs_dev);
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index 4556e468902449..ce63d5fde9c3a8 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -654,7 +654,7 @@ static void flush_descriptor(journal_t *journal,
+ set_buffer_jwrite(descriptor);
+ BUFFER_TRACE(descriptor, "write");
+ set_buffer_dirty(descriptor);
+- write_dirty_buffer(descriptor, REQ_SYNC);
++ write_dirty_buffer(descriptor, JBD2_JOURNAL_REQ_FLAGS);
+ }
+ #endif
+
+diff --git a/fs/mount.h b/fs/mount.h
+index 185fc56afc1333..179f690a0c7223 100644
+--- a/fs/mount.h
++++ b/fs/mount.h
+@@ -38,6 +38,7 @@ struct mount {
+ struct dentry *mnt_mountpoint;
+ struct vfsmount mnt;
+ union {
++ struct rb_node mnt_node; /* node in the ns->mounts rbtree */
+ struct rcu_head mnt_rcu;
+ struct llist_node mnt_llist;
+ };
+@@ -51,10 +52,7 @@ struct mount {
+ struct list_head mnt_child; /* and going through their mnt_child */
+ struct list_head mnt_instance; /* mount instance on sb->s_mounts */
+ const char *mnt_devname; /* Name of device e.g. /dev/dsk/hda1 */
+- union {
+- struct rb_node mnt_node; /* Under ns->mounts */
+- struct list_head mnt_list;
+- };
++ struct list_head mnt_list;
+ struct list_head mnt_expire; /* link in fs-specific expiry list */
+ struct list_head mnt_share; /* circular list of shared mounts */
+ struct list_head mnt_slave_list;/* list of slave mounts */
+@@ -145,11 +143,16 @@ static inline bool is_anon_ns(struct mnt_namespace *ns)
+ return ns->seq == 0;
+ }
+
++static inline bool mnt_ns_attached(const struct mount *mnt)
++{
++ return !RB_EMPTY_NODE(&mnt->mnt_node);
++}
++
+ static inline void move_from_ns(struct mount *mnt, struct list_head *dt_list)
+ {
+- WARN_ON(!(mnt->mnt.mnt_flags & MNT_ONRB));
+- mnt->mnt.mnt_flags &= ~MNT_ONRB;
++ WARN_ON(!mnt_ns_attached(mnt));
+ rb_erase(&mnt->mnt_node, &mnt->mnt_ns->mounts);
++ RB_CLEAR_NODE(&mnt->mnt_node);
+ list_add_tail(&mnt->mnt_list, dt_list);
+ }
+
+diff --git a/fs/mpage.c b/fs/mpage.c
+index b5b5ddf9d513d4..82aecf37274379 100644
+--- a/fs/mpage.c
++++ b/fs/mpage.c
+@@ -606,7 +606,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
+ * the confused fail path above (OOM) will be very confused when
+ * it finds all bh marked clean (i.e. it will not write anything)
+ */
+- wbc_account_cgroup_owner(wbc, &folio->page, folio_size(folio));
++ wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
+ length = first_unmapped << blkbits;
+ if (!bio_add_folio(bio, folio, length, 0)) {
+ bio = mpage_bio_submit_write(bio);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index d26f5e6d2ca35f..5ea644b679add5 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -344,6 +344,7 @@ static struct mount *alloc_vfsmnt(const char *name)
+ INIT_HLIST_NODE(&mnt->mnt_mp_list);
+ INIT_LIST_HEAD(&mnt->mnt_umounting);
+ INIT_HLIST_HEAD(&mnt->mnt_stuck_children);
++ RB_CLEAR_NODE(&mnt->mnt_node);
+ mnt->mnt.mnt_idmap = &nop_mnt_idmap;
+ }
+ return mnt;
+@@ -1124,7 +1125,7 @@ static void mnt_add_to_ns(struct mnt_namespace *ns, struct mount *mnt)
+ struct rb_node **link = &ns->mounts.rb_node;
+ struct rb_node *parent = NULL;
+
+- WARN_ON(mnt->mnt.mnt_flags & MNT_ONRB);
++ WARN_ON(mnt_ns_attached(mnt));
+ mnt->mnt_ns = ns;
+ while (*link) {
+ parent = *link;
+@@ -1135,7 +1136,6 @@ static void mnt_add_to_ns(struct mnt_namespace *ns, struct mount *mnt)
+ }
+ rb_link_node(&mnt->mnt_node, parent, link);
+ rb_insert_color(&mnt->mnt_node, &ns->mounts);
+- mnt->mnt.mnt_flags |= MNT_ONRB;
+ }
+
+ /*
+@@ -1305,7 +1305,7 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
+ }
+
+ mnt->mnt.mnt_flags = old->mnt.mnt_flags;
+- mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL|MNT_ONRB);
++ mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL);
+
+ atomic_inc(&sb->s_active);
+ mnt->mnt.mnt_idmap = mnt_idmap_get(mnt_idmap(&old->mnt));
+@@ -1763,7 +1763,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ /* Gather the mounts to umount */
+ for (p = mnt; p; p = next_mnt(p, mnt)) {
+ p->mnt.mnt_flags |= MNT_UMOUNT;
+- if (p->mnt.mnt_flags & MNT_ONRB)
++ if (mnt_ns_attached(p))
+ move_from_ns(p, &tmp_list);
+ else
+ list_move(&p->mnt_list, &tmp_list);
+@@ -1912,16 +1912,14 @@ static int do_umount(struct mount *mnt, int flags)
+
+ event++;
+ if (flags & MNT_DETACH) {
+- if (mnt->mnt.mnt_flags & MNT_ONRB ||
+- !list_empty(&mnt->mnt_list))
++ if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list))
+ umount_tree(mnt, UMOUNT_PROPAGATE);
+ retval = 0;
+ } else {
+ shrink_submounts(mnt);
+ retval = -EBUSY;
+ if (!propagate_mount_busy(mnt, 2)) {
+- if (mnt->mnt.mnt_flags & MNT_ONRB ||
+- !list_empty(&mnt->mnt_list))
++ if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list))
+ umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ retval = 0;
+ }
+@@ -2055,9 +2053,15 @@ SYSCALL_DEFINE1(oldumount, char __user *, name)
+
+ static bool is_mnt_ns_file(struct dentry *dentry)
+ {
++ struct ns_common *ns;
++
+ /* Is this a proxy for a mount namespace? */
+- return dentry->d_op == &ns_dentry_operations &&
+- dentry->d_fsdata == &mntns_operations;
++ if (dentry->d_op != &ns_dentry_operations)
++ return false;
++
++ ns = d_inode(dentry)->i_private;
++
++ return ns->ops == &mntns_operations;
+ }
+
+ struct ns_common *from_mnt_ns(struct mnt_namespace *mnt)
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index af46a598f4d7c7..2dd2260352dbf8 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -275,22 +275,14 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ netfs_stat(&netfs_n_rh_download);
+ if (rreq->netfs_ops->prepare_read) {
+ ret = rreq->netfs_ops->prepare_read(subreq);
+- if (ret < 0) {
+- atomic_dec(&rreq->nr_outstanding);
+- netfs_put_subrequest(subreq, false,
+- netfs_sreq_trace_put_cancel);
+- break;
+- }
++ if (ret < 0)
++ goto prep_failed;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
+ }
+
+ slice = netfs_prepare_read_iterator(subreq);
+- if (slice < 0) {
+- atomic_dec(&rreq->nr_outstanding);
+- netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
+- ret = slice;
+- break;
+- }
++ if (slice < 0)
++ goto prep_iter_failed;
+
+ rreq->netfs_ops->issue_read(subreq);
+ goto done;
+@@ -302,6 +294,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ netfs_stat(&netfs_n_rh_zero);
+ slice = netfs_prepare_read_iterator(subreq);
++ if (slice < 0)
++ goto prep_iter_failed;
+ __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ netfs_read_subreq_terminated(subreq, 0, false);
+ goto done;
+@@ -310,6 +304,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ if (source == NETFS_READ_FROM_CACHE) {
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ slice = netfs_prepare_read_iterator(subreq);
++ if (slice < 0)
++ goto prep_iter_failed;
+ netfs_read_cache_to_pagecache(rreq, subreq);
+ goto done;
+ }
+@@ -318,6 +314,14 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ WARN_ON_ONCE(1);
+ break;
+
++ prep_iter_failed:
++ ret = slice;
++ prep_failed:
++ subreq->error = ret;
++ atomic_dec(&rreq->nr_outstanding);
++ netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++ break;
++
+ done:
+ size -= slice;
+ start += slice;
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index 88f2adfab75e92..26cf9c94deebb3 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -67,7 +67,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ * allocate a sufficiently large bvec array and may shorten the
+ * request.
+ */
+- if (async || user_backed_iter(iter)) {
++ if (user_backed_iter(iter)) {
+ n = netfs_extract_user_iter(iter, len, &wreq->iter, 0);
+ if (n < 0) {
+ ret = n;
+@@ -77,6 +77,11 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ wreq->direct_bv_count = n;
+ wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
+ } else {
++ /* If this is a kernel-generated async DIO request,
++ * assume that any resources the iterator points to
++ * (eg. a bio_vec array) will persist till the end of
++ * the op.
++ */
+ wreq->iter = *iter;
+ }
+
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index 3cbb289535a85a..e70eb4ea21c038 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -62,10 +62,14 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
+ } else {
+ trace_netfs_folio(folio, netfs_folio_trace_read_done);
+ }
++
++ folioq_clear(folioq, slot);
+ } else {
+ // TODO: Use of PG_private_2 is deprecated.
+ if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
+ netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot);
++ else
++ folioq_clear(folioq, slot);
+ }
+
+ if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {
+@@ -77,8 +81,6 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
+ folio_unlock(folio);
+ }
+ }
+-
+- folioq_clear(folioq, slot);
+ }
+
+ /*
+@@ -378,8 +380,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq)
+ task_io_account_read(rreq->transferred);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
+- clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+- wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
++ clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_done);
+ netfs_clear_subrequests(rreq, false);
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index ba5af89d37fae5..54d5004fec1826 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -170,6 +170,10 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq)
+
+ trace_netfs_write(wreq, netfs_write_trace_copy_to_cache);
+ netfs_stat(&netfs_n_wh_copy_to_cache);
++ if (!wreq->io_streams[1].avail) {
++ netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++ goto couldnt_start;
++ }
+
+ for (;;) {
+ error = netfs_pgpriv2_copy_folio(wreq, folio);
+diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
+index 0350592ea8047d..48fb0303f7eee0 100644
+--- a/fs/netfs/read_retry.c
++++ b/fs/netfs/read_retry.c
+@@ -49,7 +49,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+ * up to the first permanently failed one.
+ */
+ if (!rreq->netfs_ops->prepare_read &&
+- !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) {
++ !rreq->cache_resources.ops) {
+ struct netfs_io_subrequest *subreq;
+
+ list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
+@@ -149,7 +149,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+ BUG_ON(!len);
+
+ /* Renegotiate max_len (rsize) */
+- if (rreq->netfs_ops->prepare_read(subreq) < 0) {
++ if (rreq->netfs_ops->prepare_read &&
++ rreq->netfs_ops->prepare_read(subreq) < 0) {
+ trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
+ __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ }
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 1d438be2e1b4b8..82290c92ba7a29 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -501,8 +501,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ goto need_retry;
+ if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
+ trace_netfs_rreq(wreq, netfs_rreq_trace_unpause);
+- clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags);
+- wake_up_bit(&wreq->flags, NETFS_RREQ_PAUSE);
++ clear_and_wake_up_bit(NETFS_RREQ_PAUSE, &wreq->flags);
+ }
+
+ if (notes & NEED_REASSESS) {
+@@ -605,8 +604,7 @@ void netfs_write_collection_worker(struct work_struct *work)
+
+ _debug("finished");
+ trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
+- clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
+- wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS);
++ clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
+
+ if (wreq->iocb) {
+ size_t written = min(wreq->transferred, wreq->len);
+@@ -714,8 +712,7 @@ void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+
+ trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
+
+- clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+- wake_up_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS);
++ clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+
+ /* If we are at the head of the queue, wake up the collector,
+ * transferring a ref to it if we were the ones to do so.
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index 810269ee0a50e6..d49e4ce279994f 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -263,6 +263,12 @@ int nfs_netfs_readahead(struct readahead_control *ractl)
+ static atomic_t nfs_netfs_debug_id;
+ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file)
+ {
++ if (!file) {
++ if (WARN_ON_ONCE(rreq->origin != NETFS_PGPRIV2_COPY_TO_CACHE))
++ return -EIO;
++ return 0;
++ }
++
+ rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file));
+ rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id);
+ /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */
+@@ -274,7 +280,8 @@ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *fi
+
+ static void nfs_netfs_free_request(struct netfs_io_request *rreq)
+ {
+- put_nfs_open_context(rreq->netfs_priv);
++ if (rreq->netfs_priv)
++ put_nfs_open_context(rreq->netfs_priv);
+ }
+
+ static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq)
+diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
+index dec553034027e0..e933f9c65d904a 100644
+--- a/fs/notify/fdinfo.c
++++ b/fs/notify/fdinfo.c
+@@ -47,10 +47,8 @@ static void show_mark_fhandle(struct seq_file *m, struct inode *inode)
+ size = f->handle_bytes >> 2;
+
+ ret = exportfs_encode_fid(inode, (struct fid *)f->f_handle, &size);
+- if ((ret == FILEID_INVALID) || (ret < 0)) {
+- WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret);
++ if ((ret == FILEID_INVALID) || (ret < 0))
+ return;
+- }
+
+ f->handle_type = ret;
+ f->handle_bytes = size * sizeof(u32);
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 2ed6ad641a2069..b2c78621da44a4 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -416,13 +416,13 @@ int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upperdentry,
+ return err;
+ }
+
+-struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
++struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode,
+ bool is_upper)
+ {
+ struct ovl_fh *fh;
+ int fh_type, dwords;
+ int buflen = MAX_HANDLE_SZ;
+- uuid_t *uuid = &real->d_sb->s_uuid;
++ uuid_t *uuid = &realinode->i_sb->s_uuid;
+ int err;
+
+ /* Make sure the real fid stays 32bit aligned */
+@@ -439,13 +439,13 @@ struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
+ * the price or reconnecting the dentry.
+ */
+ dwords = buflen >> 2;
+- fh_type = exportfs_encode_fh(real, (void *)fh->fb.fid, &dwords, 0);
++ fh_type = exportfs_encode_inode_fh(realinode, (void *)fh->fb.fid,
++ &dwords, NULL, 0);
+ buflen = (dwords << 2);
+
+ err = -EIO;
+- if (WARN_ON(fh_type < 0) ||
+- WARN_ON(buflen > MAX_HANDLE_SZ) ||
+- WARN_ON(fh_type == FILEID_INVALID))
++ if (fh_type < 0 || fh_type == FILEID_INVALID ||
++ WARN_ON(buflen > MAX_HANDLE_SZ))
+ goto out_err;
+
+ fh->fb.version = OVL_FH_VERSION;
+@@ -481,7 +481,7 @@ struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin)
+ if (!ovl_can_decode_fh(origin->d_sb))
+ return NULL;
+
+- return ovl_encode_real_fh(ofs, origin, false);
++ return ovl_encode_real_fh(ofs, d_inode(origin), false);
+ }
+
+ int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
+@@ -506,7 +506,7 @@ static int ovl_set_upper_fh(struct ovl_fs *ofs, struct dentry *upper,
+ const struct ovl_fh *fh;
+ int err;
+
+- fh = ovl_encode_real_fh(ofs, upper, true);
++ fh = ovl_encode_real_fh(ofs, d_inode(upper), true);
+ if (IS_ERR(fh))
+ return PTR_ERR(fh);
+
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index 5868cb2229552f..444aeeccb6daf9 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -176,35 +176,37 @@ static int ovl_connect_layer(struct dentry *dentry)
+ *
+ * Return 0 for upper file handle, > 0 for lower file handle or < 0 on error.
+ */
+-static int ovl_check_encode_origin(struct dentry *dentry)
++static int ovl_check_encode_origin(struct inode *inode)
+ {
+- struct ovl_fs *ofs = OVL_FS(dentry->d_sb);
++ struct ovl_fs *ofs = OVL_FS(inode->i_sb);
+ bool decodable = ofs->config.nfs_export;
++ struct dentry *dentry;
++ int err;
+
+ /* No upper layer? */
+ if (!ovl_upper_mnt(ofs))
+ return 1;
+
+ /* Lower file handle for non-upper non-decodable */
+- if (!ovl_dentry_upper(dentry) && !decodable)
++ if (!ovl_inode_upper(inode) && !decodable)
+ return 1;
+
+ /* Upper file handle for pure upper */
+- if (!ovl_dentry_lower(dentry))
++ if (!ovl_inode_lower(inode))
+ return 0;
+
+ /*
+ * Root is never indexed, so if there's an upper layer, encode upper for
+ * root.
+ */
+- if (dentry == dentry->d_sb->s_root)
++ if (inode == d_inode(inode->i_sb->s_root))
+ return 0;
+
+ /*
+ * Upper decodable file handle for non-indexed upper.
+ */
+- if (ovl_dentry_upper(dentry) && decodable &&
+- !ovl_test_flag(OVL_INDEX, d_inode(dentry)))
++ if (ovl_inode_upper(inode) && decodable &&
++ !ovl_test_flag(OVL_INDEX, inode))
+ return 0;
+
+ /*
+@@ -213,14 +215,23 @@ static int ovl_check_encode_origin(struct dentry *dentry)
+ * ovl_connect_layer() will try to make origin's layer "connected" by
+ * copying up a "connectable" ancestor.
+ */
+- if (d_is_dir(dentry) && decodable)
+- return ovl_connect_layer(dentry);
++ if (!decodable || !S_ISDIR(inode->i_mode))
++ return 1;
++
++ dentry = d_find_any_alias(inode);
++ if (!dentry)
++ return -ENOENT;
++
++ err = ovl_connect_layer(dentry);
++ dput(dentry);
++ if (err < 0)
++ return err;
+
+ /* Lower file handle for indexed and non-upper dir/non-dir */
+ return 1;
+ }
+
+-static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
++static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct inode *inode,
+ u32 *fid, int buflen)
+ {
+ struct ovl_fh *fh = NULL;
+@@ -231,13 +242,13 @@ static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
+ * Check if we should encode a lower or upper file handle and maybe
+ * copy up an ancestor to make lower file handle connectable.
+ */
+- err = enc_lower = ovl_check_encode_origin(dentry);
++ err = enc_lower = ovl_check_encode_origin(inode);
+ if (enc_lower < 0)
+ goto fail;
+
+ /* Encode an upper or lower file handle */
+- fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_dentry_lower(dentry) :
+- ovl_dentry_upper(dentry), !enc_lower);
++ fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_inode_lower(inode) :
++ ovl_inode_upper(inode), !enc_lower);
+ if (IS_ERR(fh))
+ return PTR_ERR(fh);
+
+@@ -251,8 +262,8 @@ static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
+ return err;
+
+ fail:
+- pr_warn_ratelimited("failed to encode file handle (%pd2, err=%i)\n",
+- dentry, err);
++ pr_warn_ratelimited("failed to encode file handle (ino=%lu, err=%i)\n",
++ inode->i_ino, err);
+ goto out;
+ }
+
+@@ -260,19 +271,13 @@ static int ovl_encode_fh(struct inode *inode, u32 *fid, int *max_len,
+ struct inode *parent)
+ {
+ struct ovl_fs *ofs = OVL_FS(inode->i_sb);
+- struct dentry *dentry;
+ int bytes, buflen = *max_len << 2;
+
+ /* TODO: encode connectable file handles */
+ if (parent)
+ return FILEID_INVALID;
+
+- dentry = d_find_any_alias(inode);
+- if (!dentry)
+- return FILEID_INVALID;
+-
+- bytes = ovl_dentry_to_fid(ofs, dentry, fid, buflen);
+- dput(dentry);
++ bytes = ovl_dentry_to_fid(ofs, inode, fid, buflen);
+ if (bytes <= 0)
+ return FILEID_INVALID;
+
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index 5764f91d283e70..42b73ae5ba01be 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -542,7 +542,7 @@ int ovl_verify_origin_xattr(struct ovl_fs *ofs, struct dentry *dentry,
+ struct ovl_fh *fh;
+ int err;
+
+- fh = ovl_encode_real_fh(ofs, real, is_upper);
++ fh = ovl_encode_real_fh(ofs, d_inode(real), is_upper);
+ err = PTR_ERR(fh);
+ if (IS_ERR(fh)) {
+ fh = NULL;
+@@ -738,7 +738,7 @@ int ovl_get_index_name(struct ovl_fs *ofs, struct dentry *origin,
+ struct ovl_fh *fh;
+ int err;
+
+- fh = ovl_encode_real_fh(ofs, origin, false);
++ fh = ovl_encode_real_fh(ofs, d_inode(origin), false);
+ if (IS_ERR(fh))
+ return PTR_ERR(fh);
+
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 0bfe35da4b7b7a..844874b4a91a94 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -869,7 +869,7 @@ int ovl_copy_up_with_data(struct dentry *dentry);
+ int ovl_maybe_copy_up(struct dentry *dentry, int flags);
+ int ovl_copy_xattr(struct super_block *sb, const struct path *path, struct dentry *new);
+ int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upper, struct kstat *stat);
+-struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
++struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode,
+ bool is_upper);
+ struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin);
+ int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
+diff --git a/fs/smb/client/namespace.c b/fs/smb/client/namespace.c
+index 0f788031b7405f..e3f9213131c467 100644
+--- a/fs/smb/client/namespace.c
++++ b/fs/smb/client/namespace.c
+@@ -196,11 +196,28 @@ static struct vfsmount *cifs_do_automount(struct path *path)
+ struct smb3_fs_context tmp;
+ char *full_path;
+ struct vfsmount *mnt;
++ struct cifs_sb_info *mntpt_sb;
++ struct cifs_ses *ses;
+
+ if (IS_ROOT(mntpt))
+ return ERR_PTR(-ESTALE);
+
+- cur_ctx = CIFS_SB(mntpt->d_sb)->ctx;
++ mntpt_sb = CIFS_SB(mntpt->d_sb);
++ ses = cifs_sb_master_tcon(mntpt_sb)->ses;
++ cur_ctx = mntpt_sb->ctx;
++
++ /*
++ * At this point, the root session should be in the mntpt sb. We should
++ * bring the sb context passwords in sync with the root session's
++ * passwords. This would help prevent unnecessary retries and password
++ * swaps for automounts.
++ */
++ mutex_lock(&ses->session_mutex);
++ rc = smb3_sync_session_ctx_passwords(mntpt_sb, ses);
++ mutex_unlock(&ses->session_mutex);
++
++ if (rc)
++ return ERR_PTR(rc);
+
+ fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, mntpt);
+ if (IS_ERR(fc))
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 04ffc5b158c3bf..c763a2f7df6640 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -695,6 +695,9 @@ void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status)
+ struct smb2_hdr *rsp_hdr;
+ struct ksmbd_work *in_work = ksmbd_alloc_work_struct();
+
++ if (!in_work)
++ return;
++
+ if (allocate_interim_rsp_buf(in_work)) {
+ pr_err("smb_allocate_rsp_buf failed!\n");
+ ksmbd_free_work_struct(in_work);
+@@ -3985,6 +3988,26 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
+ posix_info->DeviceId = cpu_to_le32(ksmbd_kstat->kstat->rdev);
+ posix_info->HardLinks = cpu_to_le32(ksmbd_kstat->kstat->nlink);
+ posix_info->Mode = cpu_to_le32(ksmbd_kstat->kstat->mode & 0777);
++ switch (ksmbd_kstat->kstat->mode & S_IFMT) {
++ case S_IFDIR:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFLNK:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFCHR:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFBLK:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFIFO:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFSOCK:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT);
++ }
++
+ posix_info->Inode = cpu_to_le64(ksmbd_kstat->kstat->ino);
+ posix_info->DosAttributes =
+ S_ISDIR(ksmbd_kstat->kstat->mode) ?
+@@ -5173,6 +5196,26 @@ static int find_file_posix_info(struct smb2_query_info_rsp *rsp,
+ file_info->AllocationSize = cpu_to_le64(stat.blocks << 9);
+ file_info->HardLinks = cpu_to_le32(stat.nlink);
+ file_info->Mode = cpu_to_le32(stat.mode & 0777);
++ switch (stat.mode & S_IFMT) {
++ case S_IFDIR:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFLNK:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFCHR:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFBLK:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFIFO:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFSOCK:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT);
++ }
++
+ file_info->DeviceId = cpu_to_le32(stat.rdev);
+
+ /*
+diff --git a/fs/smb/server/smb2pdu.h b/fs/smb/server/smb2pdu.h
+index 649dacf7e8c493..17a0b18a8406b3 100644
+--- a/fs/smb/server/smb2pdu.h
++++ b/fs/smb/server/smb2pdu.h
+@@ -502,4 +502,14 @@ static inline void *smb2_get_msg(void *buf)
+ return buf + 4;
+ }
+
++#define POSIX_TYPE_FILE 0
++#define POSIX_TYPE_DIR 1
++#define POSIX_TYPE_SYMLINK 2
++#define POSIX_TYPE_CHARDEV 3
++#define POSIX_TYPE_BLKDEV 4
++#define POSIX_TYPE_FIFO 5
++#define POSIX_TYPE_SOCKET 6
++
++#define POSIX_FILETYPE_SHIFT 12
++
+ #endif /* _SMB2PDU_H */
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 7cbd580120d129..ee825971abd9ab 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -1264,6 +1264,8 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ filepath,
+ flags,
+ path);
++ if (!is_last)
++ next[0] = '/';
+ if (err)
+ goto out2;
+ else if (is_last)
+@@ -1271,7 +1273,6 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ path_put(parent_path);
+ *parent_path = *path;
+
+- next[0] = '/';
+ remain_len -= filename_len + 1;
+ }
+
+diff --git a/include/linux/bus/stm32_firewall_device.h b/include/linux/bus/stm32_firewall_device.h
+index 18e0a2fc3816ac..5178b72bc92098 100644
+--- a/include/linux/bus/stm32_firewall_device.h
++++ b/include/linux/bus/stm32_firewall_device.h
+@@ -115,7 +115,7 @@ void stm32_firewall_release_access_by_id(struct stm32_firewall *firewall, u32 su
+ #else /* CONFIG_STM32_FIREWALL */
+
+ int stm32_firewall_get_firewall(struct device_node *np, struct stm32_firewall *firewall,
+- unsigned int nb_firewall);
++ unsigned int nb_firewall)
+ {
+ return -ENODEV;
+ }
+diff --git a/include/linux/iomap.h b/include/linux/iomap.h
+index f61407e3b12192..d204dcd35063d7 100644
+--- a/include/linux/iomap.h
++++ b/include/linux/iomap.h
+@@ -330,7 +330,7 @@ struct iomap_ioend {
+ u16 io_type;
+ u16 io_flags; /* IOMAP_F_* */
+ struct inode *io_inode; /* file being written to */
+- size_t io_size; /* size of the extent */
++ size_t io_size; /* size of data within eof */
+ loff_t io_offset; /* offset in the file */
+ sector_t io_sector; /* start sector of ioend */
+ struct bio io_bio; /* MUST BE LAST! */
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index c34c18b4e8f36f..04213d8ef8376d 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -50,7 +50,7 @@ struct path;
+ #define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME )
+
+ #define MNT_INTERNAL_FLAGS (MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | \
+- MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED | MNT_ONRB)
++ MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED)
+
+ #define MNT_INTERNAL 0x4000
+
+@@ -64,7 +64,6 @@ struct path;
+ #define MNT_SYNC_UMOUNT 0x2000000
+ #define MNT_MARKED 0x4000000
+ #define MNT_UMOUNT 0x8000000
+-#define MNT_ONRB 0x10000000
+
+ struct vfsmount {
+ struct dentry *mnt_root; /* root of the mounted tree */
+diff --git a/include/linux/netfs.h b/include/linux/netfs.h
+index 5eaceef41e6cac..474481ee8b7c29 100644
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -269,7 +269,6 @@ struct netfs_io_request {
+ size_t prev_donated; /* Fallback for subreq->prev_donated */
+ refcount_t ref;
+ unsigned long flags;
+-#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
+ #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */
+ #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
+ #define NETFS_RREQ_FAILED 4 /* The request failed */
+diff --git a/include/linux/writeback.h b/include/linux/writeback.h
+index d6db822e4bb30c..641a057e041329 100644
+--- a/include/linux/writeback.h
++++ b/include/linux/writeback.h
+@@ -217,7 +217,7 @@ void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
+ struct inode *inode)
+ __releases(&inode->i_lock);
+ void wbc_detach_inode(struct writeback_control *wbc);
+-void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
++void wbc_account_cgroup_owner(struct writeback_control *wbc, struct folio *folio,
+ size_t bytes);
+ int cgroup_writeback_by_id(u64 bdi_id, int memcg_id,
+ enum wb_reason reason, struct wb_completion *done);
+@@ -324,7 +324,7 @@ static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
+ }
+
+ static inline void wbc_account_cgroup_owner(struct writeback_control *wbc,
+- struct page *page, size_t bytes)
++ struct folio *folio, size_t bytes)
+ {
+ }
+
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index c0deaafebfdc0b..4bd93571e6c1b5 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -281,7 +281,7 @@ static inline int inet_csk_reqsk_queue_len(const struct sock *sk)
+
+ static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
+ {
+- return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog);
++ return inet_csk_reqsk_queue_len(sk) > READ_ONCE(sk->sk_max_ack_backlog);
+ }
+
+ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 8932ec5bd7c029..20c5374e922ef5 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -329,7 +329,6 @@ struct ufs_pwr_mode_info {
+ * @program_key: program or evict an inline encryption key
+ * @fill_crypto_prdt: initialize crypto-related fields in the PRDT
+ * @event_notify: called to notify important events
+- * @reinit_notify: called to notify reinit of UFSHCD during max gear switch
+ * @mcq_config_resource: called to configure MCQ platform resources
+ * @get_hba_mac: reports maximum number of outstanding commands supported by
+ * the controller. Should be implemented for UFSHCI 4.0 or later
+@@ -381,7 +380,6 @@ struct ufs_hba_variant_ops {
+ void *prdt, unsigned int num_segments);
+ void (*event_notify)(struct ufs_hba *hba,
+ enum ufs_event_type evt, void *data);
+- void (*reinit_notify)(struct ufs_hba *);
+ int (*mcq_config_resource)(struct ufs_hba *hba);
+ int (*get_hba_mac)(struct ufs_hba *hba);
+ int (*op_runtime_config)(struct ufs_hba *hba);
+diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
+index e37fddd5d9ce8e..ffc4bd17d0786c 100644
+--- a/io_uring/eventfd.c
++++ b/io_uring/eventfd.c
+@@ -38,7 +38,7 @@ static void io_eventfd_do_signal(struct rcu_head *rcu)
+ eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE);
+
+ if (refcount_dec_and_test(&ev_fd->refs))
+- io_eventfd_free(rcu);
++ call_rcu(&ev_fd->rcu, io_eventfd_free);
+ }
+
+ void io_eventfd_signal(struct io_ring_ctx *ctx)
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 9849da128364af..21f1bcba2f52b5 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1244,10 +1244,7 @@ static void io_req_normal_work_add(struct io_kiocb *req)
+
+ /* SQPOLL doesn't need the task_work added, it'll run it itself */
+ if (ctx->flags & IORING_SETUP_SQPOLL) {
+- struct io_sq_data *sqd = ctx->sq_data;
+-
+- if (sqd->thread)
+- __set_notify_signal(sqd->thread);
++ __set_notify_signal(req->task);
+ return;
+ }
+
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 1cfcc735b8e38e..5bc54c6df20fd6 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -275,8 +275,12 @@ static int io_sq_thread(void *data)
+ DEFINE_WAIT(wait);
+
+ /* offload context creation failed, just exit */
+- if (!current->io_uring)
++ if (!current->io_uring) {
++ mutex_lock(&sqd->lock);
++ sqd->thread = NULL;
++ mutex_unlock(&sqd->lock);
+ goto err_out;
++ }
+
+ snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid);
+ set_task_comm(current, buf);
+diff --git a/io_uring/timeout.c b/io_uring/timeout.c
+index 9973876d91b0ef..21c4bfea79f1c9 100644
+--- a/io_uring/timeout.c
++++ b/io_uring/timeout.c
+@@ -409,10 +409,12 @@ static int io_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
+
+ timeout->off = 0; /* noseq */
+ data = req->async_data;
++ data->ts = *ts;
++
+ list_add_tail(&timeout->list, &ctx->timeout_list);
+ hrtimer_init(&data->timer, io_timeout_get_clock(data), mode);
+ data->timer.function = io_timeout_fn;
+- hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode);
++ hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), mode);
+ return 0;
+ }
+
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index a4dd285cdf39b7..24ece85fd3b126 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -862,7 +862,15 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ */
+ if (cgrpv2) {
+ for (i = 0; i < ndoms; i++) {
+- cpumask_copy(doms[i], csa[i]->effective_cpus);
++ /*
++ * The top cpuset may contain some boot time isolated
++ * CPUs that need to be excluded from the sched domain.
++ */
++ if (csa[i] == &top_cpuset)
++ cpumask_and(doms[i], csa[i]->effective_cpus,
++ housekeeping_cpumask(HK_TYPE_DOMAIN));
++ else
++ cpumask_copy(doms[i], csa[i]->effective_cpus);
+ if (dattr)
+ dattr[i] = SD_ATTR_INIT;
+ }
+@@ -3102,29 +3110,6 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ int retval = -ENODEV;
+
+ buf = strstrip(buf);
+-
+- /*
+- * CPU or memory hotunplug may leave @cs w/o any execution
+- * resources, in which case the hotplug code asynchronously updates
+- * configuration and transfers all tasks to the nearest ancestor
+- * which can execute.
+- *
+- * As writes to "cpus" or "mems" may restore @cs's execution
+- * resources, wait for the previously scheduled operations before
+- * proceeding, so that we don't end up keep removing tasks added
+- * after execution capability is restored.
+- *
+- * cpuset_handle_hotplug may call back into cgroup core asynchronously
+- * via cgroup_transfer_tasks() and waiting for it from a cgroupfs
+- * operation like this one can lead to a deadlock through kernfs
+- * active_ref protection. Let's break the protection. Losing the
+- * protection is okay as we check whether @cs is online after
+- * grabbing cpuset_mutex anyway. This only happens on the legacy
+- * hierarchies.
+- */
+- css_get(&cs->css);
+- kernfs_break_active_protection(of->kn);
+-
+ cpus_read_lock();
+ mutex_lock(&cpuset_mutex);
+ if (!is_cpuset_online(cs))
+@@ -3155,8 +3140,6 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ out_unlock:
+ mutex_unlock(&cpuset_mutex);
+ cpus_read_unlock();
+- kernfs_unbreak_active_protection(of->kn);
+- css_put(&cs->css);
+ flush_workqueue(cpuset_migrate_mm_wq);
+ return retval ?: nbytes;
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 40f915f893e2ed..f928a67a07d29a 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2917,7 +2917,7 @@ static void put_prev_task_scx(struct rq *rq, struct task_struct *p,
+ */
+ if (p->scx.slice && !scx_rq_bypassing(rq)) {
+ dispatch_enqueue(&rq->scx.local_dsq, p, SCX_ENQ_HEAD);
+- return;
++ goto switch_class;
+ }
+
+ /*
+@@ -2934,6 +2934,7 @@ static void put_prev_task_scx(struct rq *rq, struct task_struct *p,
+ }
+ }
+
++switch_class:
+ if (next && next->sched_class != &ext_sched_class)
+ switch_class(rq, next);
+ }
+@@ -3239,16 +3240,8 @@ static void reset_idle_masks(void)
+ cpumask_copy(idle_masks.smt, cpu_online_mask);
+ }
+
+-void __scx_update_idle(struct rq *rq, bool idle)
++static void update_builtin_idle(int cpu, bool idle)
+ {
+- int cpu = cpu_of(rq);
+-
+- if (SCX_HAS_OP(update_idle) && !scx_rq_bypassing(rq)) {
+- SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
+- if (!static_branch_unlikely(&scx_builtin_idle_enabled))
+- return;
+- }
+-
+ if (idle)
+ cpumask_set_cpu(cpu, idle_masks.cpu);
+ else
+@@ -3275,6 +3268,57 @@ void __scx_update_idle(struct rq *rq, bool idle)
+ #endif
+ }
+
++/*
++ * Update the idle state of a CPU to @idle.
++ *
++ * If @do_notify is true, ops.update_idle() is invoked to notify the scx
++ * scheduler of an actual idle state transition (idle to busy or vice
++ * versa). If @do_notify is false, only the idle state in the idle masks is
++ * refreshed without invoking ops.update_idle().
++ *
++ * This distinction is necessary, because an idle CPU can be "reserved" and
++ * awakened via scx_bpf_pick_idle_cpu() + scx_bpf_kick_cpu(), marking it as
++ * busy even if no tasks are dispatched. In this case, the CPU may return
++ * to idle without a true state transition. Refreshing the idle masks
++ * without invoking ops.update_idle() ensures accurate idle state tracking
++ * while avoiding unnecessary updates and maintaining balanced state
++ * transitions.
++ */
++void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
++{
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_rq_held(rq);
++
++ /*
++ * Trigger ops.update_idle() only when transitioning from a task to
++ * the idle thread and vice versa.
++ *
++ * Idle transitions are indicated by do_notify being set to true,
++ * managed by put_prev_task_idle()/set_next_task_idle().
++ */
++ if (SCX_HAS_OP(update_idle) && do_notify && !scx_rq_bypassing(rq))
++ SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
++
++ /*
++ * Update the idle masks:
++ * - for real idle transitions (do_notify == true)
++ * - for idle-to-idle transitions (indicated by the previous task
++ * being the idle thread, managed by pick_task_idle())
++ *
++ * Skip updating idle masks if the previous task is not the idle
++ * thread, since set_next_task_idle() has already handled it when
++ * transitioning from a task to the idle thread (calling this
++ * function with do_notify == true).
++ *
++ * In this way we can avoid updating the idle masks twice,
++ * unnecessarily.
++ */
++ if (static_branch_likely(&scx_builtin_idle_enabled))
++ if (do_notify || is_idle_task(rq->curr))
++ update_builtin_idle(cpu, idle);
++}
++
+ static void handle_hotplug(struct rq *rq, bool online)
+ {
+ int cpu = cpu_of(rq);
+@@ -4348,10 +4392,9 @@ static void scx_ops_bypass(bool bypass)
+ */
+ for_each_possible_cpu(cpu) {
+ struct rq *rq = cpu_rq(cpu);
+- struct rq_flags rf;
+ struct task_struct *p, *n;
+
+- rq_lock(rq, &rf);
++ raw_spin_rq_lock(rq);
+
+ if (bypass) {
+ WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING);
+@@ -4367,7 +4410,7 @@ static void scx_ops_bypass(bool bypass)
+ * sees scx_rq_bypassing() before moving tasks to SCX.
+ */
+ if (!scx_enabled()) {
+- rq_unlock(rq, &rf);
++ raw_spin_rq_unlock(rq);
+ continue;
+ }
+
+@@ -4387,10 +4430,11 @@ static void scx_ops_bypass(bool bypass)
+ sched_enq_and_set_task(&ctx);
+ }
+
+- rq_unlock(rq, &rf);
+-
+ /* resched to restore ticks and idle state */
+- resched_cpu(cpu);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(rq);
++
++ raw_spin_rq_unlock(rq);
+ }
+ unlock:
+ raw_spin_unlock_irqrestore(&__scx_ops_bypass_lock, flags);
+diff --git a/kernel/sched/ext.h b/kernel/sched/ext.h
+index b1675bb59fc461..4d022d17ac7dd6 100644
+--- a/kernel/sched/ext.h
++++ b/kernel/sched/ext.h
+@@ -57,15 +57,15 @@ static inline void init_sched_ext_class(void) {}
+ #endif /* CONFIG_SCHED_CLASS_EXT */
+
+ #if defined(CONFIG_SCHED_CLASS_EXT) && defined(CONFIG_SMP)
+-void __scx_update_idle(struct rq *rq, bool idle);
++void __scx_update_idle(struct rq *rq, bool idle, bool do_notify);
+
+-static inline void scx_update_idle(struct rq *rq, bool idle)
++static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify)
+ {
+ if (scx_enabled())
+- __scx_update_idle(rq, idle);
++ __scx_update_idle(rq, idle, do_notify);
+ }
+ #else
+-static inline void scx_update_idle(struct rq *rq, bool idle) {}
++static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {}
+ #endif
+
+ #ifdef CONFIG_CGROUP_SCHED
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index d2f096bb274c3f..53bb9193c537a8 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -453,19 +453,20 @@ static void wakeup_preempt_idle(struct rq *rq, struct task_struct *p, int flags)
+ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct task_struct *next)
+ {
+ dl_server_update_idle_time(rq, prev);
+- scx_update_idle(rq, false);
++ scx_update_idle(rq, false, true);
+ }
+
+ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
+ {
+ update_idle_core(rq);
+- scx_update_idle(rq, true);
++ scx_update_idle(rq, true, true);
+ schedstat_inc(rq->sched_goidle);
+ next->se.exec_start = rq_clock_task(rq);
+ }
+
+ struct task_struct *pick_task_idle(struct rq *rq)
+ {
++ scx_update_idle(rq, true, false);
+ return rq->idle;
+ }
+
+diff --git a/net/802/psnap.c b/net/802/psnap.c
+index fca9d454905fe3..389df460c8c4b9 100644
+--- a/net/802/psnap.c
++++ b/net/802/psnap.c
+@@ -55,11 +55,11 @@ static int snap_rcv(struct sk_buff *skb, struct net_device *dev,
+ goto drop;
+
+ rcu_read_lock();
+- proto = find_snap_client(skb_transport_header(skb));
++ proto = find_snap_client(skb->data);
+ if (proto) {
+ /* Pass the frame on. */
+- skb->transport_header += 5;
+ skb_pull_rcsum(skb, 5);
++ skb_reset_transport_header(skb);
+ rc = proto->rcvfunc(skb, dev, &snap_packet_type, orig_dev);
+ }
+ rcu_read_unlock();
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index c86f4e42e69cab..7b2b04d6b85630 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1031,9 +1031,9 @@ static bool adv_use_rpa(struct hci_dev *hdev, uint32_t flags)
+
+ static int hci_set_random_addr_sync(struct hci_dev *hdev, bdaddr_t *rpa)
+ {
+- /* If we're advertising or initiating an LE connection we can't
+- * go ahead and change the random address at this time. This is
+- * because the eventual initiator address used for the
++ /* If a random_addr has been set we're advertising or initiating an LE
++ * connection we can't go ahead and change the random address at this
++ * time. This is because the eventual initiator address used for the
+ * subsequently created connection will be undefined (some
+ * controllers use the new address and others the one we had
+ * when the operation started).
+@@ -1041,8 +1041,9 @@ static int hci_set_random_addr_sync(struct hci_dev *hdev, bdaddr_t *rpa)
+ * In this kind of scenario skip the update and let the random
+ * address be updated at the next cycle.
+ */
+- if (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
+- hci_lookup_le_connect(hdev)) {
++ if (bacmp(&hdev->random_addr, BDADDR_ANY) &&
++ (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
++ hci_lookup_le_connect(hdev))) {
+ bt_dev_dbg(hdev, "Deferring random address update");
+ hci_dev_set_flag(hdev, HCI_RPA_EXPIRED);
+ return 0;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 2343e15f8938ec..7dc315c1658e7d 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7596,6 +7596,24 @@ static void device_added(struct sock *sk, struct hci_dev *hdev,
+ mgmt_event(MGMT_EV_DEVICE_ADDED, hdev, &ev, sizeof(ev), sk);
+ }
+
++static void add_device_complete(struct hci_dev *hdev, void *data, int err)
++{
++ struct mgmt_pending_cmd *cmd = data;
++ struct mgmt_cp_add_device *cp = cmd->param;
++
++ if (!err) {
++ device_added(cmd->sk, hdev, &cp->addr.bdaddr, cp->addr.type,
++ cp->action);
++ device_flags_changed(NULL, hdev, &cp->addr.bdaddr,
++ cp->addr.type, hdev->conn_flags,
++ PTR_UINT(cmd->user_data));
++ }
++
++ mgmt_cmd_complete(cmd->sk, hdev->id, MGMT_OP_ADD_DEVICE,
++ mgmt_status(err), &cp->addr, sizeof(cp->addr));
++ mgmt_pending_free(cmd);
++}
++
+ static int add_device_sync(struct hci_dev *hdev, void *data)
+ {
+ return hci_update_passive_scan_sync(hdev);
+@@ -7604,6 +7622,7 @@ static int add_device_sync(struct hci_dev *hdev, void *data)
+ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ void *data, u16 len)
+ {
++ struct mgmt_pending_cmd *cmd;
+ struct mgmt_cp_add_device *cp = data;
+ u8 auto_conn, addr_type;
+ struct hci_conn_params *params;
+@@ -7684,9 +7703,24 @@ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ current_flags = params->flags;
+ }
+
+- err = hci_cmd_sync_queue(hdev, add_device_sync, NULL, NULL);
+- if (err < 0)
++ cmd = mgmt_pending_new(sk, MGMT_OP_ADD_DEVICE, hdev, data, len);
++ if (!cmd) {
++ err = -ENOMEM;
+ goto unlock;
++ }
++
++ cmd->user_data = UINT_PTR(current_flags);
++
++ err = hci_cmd_sync_queue(hdev, add_device_sync, cmd,
++ add_device_complete);
++ if (err < 0) {
++ err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_ADD_DEVICE,
++ MGMT_STATUS_FAILED, &cp->addr,
++ sizeof(cp->addr));
++ mgmt_pending_free(cmd);
++ }
++
++ goto unlock;
+
+ added:
+ device_added(sk, hdev, &cp->addr.bdaddr, cp->addr.type, cp->action);
+diff --git a/net/bluetooth/rfcomm/tty.c b/net/bluetooth/rfcomm/tty.c
+index af80d599c33715..21a5b5535ebceb 100644
+--- a/net/bluetooth/rfcomm/tty.c
++++ b/net/bluetooth/rfcomm/tty.c
+@@ -201,14 +201,14 @@ static ssize_t address_show(struct device *tty_dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct rfcomm_dev *dev = dev_get_drvdata(tty_dev);
+- return sprintf(buf, "%pMR\n", &dev->dst);
++ return sysfs_emit(buf, "%pMR\n", &dev->dst);
+ }
+
+ static ssize_t channel_show(struct device *tty_dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct rfcomm_dev *dev = dev_get_drvdata(tty_dev);
+- return sprintf(buf, "%d\n", dev->channel);
++ return sysfs_emit(buf, "%d\n", dev->channel);
+ }
+
+ static DEVICE_ATTR_RO(address);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index f3fa8353d262b0..1867a6a8d76da9 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -753,6 +753,36 @@ int dev_fill_forward_path(const struct net_device *dev, const u8 *daddr,
+ }
+ EXPORT_SYMBOL_GPL(dev_fill_forward_path);
+
++/* must be called under rcu_read_lock(), as we don't take a reference */
++static struct napi_struct *napi_by_id(unsigned int napi_id)
++{
++ unsigned int hash = napi_id % HASH_SIZE(napi_hash);
++ struct napi_struct *napi;
++
++ hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
++ if (napi->napi_id == napi_id)
++ return napi;
++
++ return NULL;
++}
++
++/* must be called under rcu_read_lock(), as we don't take a reference */
++struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id)
++{
++ struct napi_struct *napi;
++
++ napi = napi_by_id(napi_id);
++ if (!napi)
++ return NULL;
++
++ if (WARN_ON_ONCE(!napi->dev))
++ return NULL;
++ if (!net_eq(net, dev_net(napi->dev)))
++ return NULL;
++
++ return napi;
++}
++
+ /**
+ * __dev_get_by_name - find a device by its name
+ * @net: the applicable net namespace
+@@ -6291,19 +6321,6 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
+ }
+ EXPORT_SYMBOL(napi_complete_done);
+
+-/* must be called under rcu_read_lock(), as we dont take a reference */
+-struct napi_struct *napi_by_id(unsigned int napi_id)
+-{
+- unsigned int hash = napi_id % HASH_SIZE(napi_hash);
+- struct napi_struct *napi;
+-
+- hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
+- if (napi->napi_id == napi_id)
+- return napi;
+-
+- return NULL;
+-}
+-
+ static void skb_defer_free_flush(struct softnet_data *sd)
+ {
+ struct sk_buff *skb, *next;
+diff --git a/net/core/dev.h b/net/core/dev.h
+index 5654325c5b710c..2e3bb7669984a6 100644
+--- a/net/core/dev.h
++++ b/net/core/dev.h
+@@ -22,6 +22,8 @@ struct sd_flow_limit {
+
+ extern int netdev_flow_limit_table_len;
+
++struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id);
++
+ #ifdef CONFIG_PROC_FS
+ int __init dev_proc_init(void);
+ #else
+@@ -146,7 +148,6 @@ void xdp_do_check_flushed(struct napi_struct *napi);
+ static inline void xdp_do_check_flushed(struct napi_struct *napi) { }
+ #endif
+
+-struct napi_struct *napi_by_id(unsigned int napi_id);
+ void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu);
+
+ #define XMIT_RECURSION_LIMIT 8
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index 1b4d39e3808427..cb04ef2b9807c9 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -42,14 +42,18 @@ static unsigned int default_operstate(const struct net_device *dev)
+ * first check whether lower is indeed the source of its down state.
+ */
+ if (!netif_carrier_ok(dev)) {
+- int iflink = dev_get_iflink(dev);
+ struct net_device *peer;
++ int iflink;
+
+ /* If called from netdev_run_todo()/linkwatch_sync_dev(),
+ * dev_net(dev) can be already freed, and RTNL is not held.
+ */
+- if (dev->reg_state == NETREG_UNREGISTERED ||
+- iflink == dev->ifindex)
++ if (dev->reg_state <= NETREG_REGISTERED)
++ iflink = dev_get_iflink(dev);
++ else
++ iflink = dev->ifindex;
++
++ if (iflink == dev->ifindex)
+ return IF_OPER_DOWN;
+
+ ASSERT_RTNL();
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index d58270b48cb2cf..ad426b3a03b526 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -164,8 +164,6 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
+ void *hdr;
+ pid_t pid;
+
+- if (WARN_ON_ONCE(!napi->dev))
+- return -EINVAL;
+ if (!(napi->dev->flags & IFF_UP))
+ return 0;
+
+@@ -173,8 +171,7 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
+ if (!hdr)
+ return -EMSGSIZE;
+
+- if (napi->napi_id >= MIN_NAPI_ID &&
+- nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id))
++ if (nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id))
+ goto nla_put_failure;
+
+ if (nla_put_u32(rsp, NETDEV_A_NAPI_IFINDEX, napi->dev->ifindex))
+@@ -217,7 +214,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ rtnl_lock();
+ rcu_read_lock();
+
+- napi = napi_by_id(napi_id);
++ napi = netdev_napi_by_id(genl_info_net(info), napi_id);
+ if (napi) {
+ err = netdev_nl_napi_fill_one(rsp, napi, info);
+ } else {
+@@ -254,6 +251,8 @@ netdev_nl_napi_dump_one(struct net_device *netdev, struct sk_buff *rsp,
+ return err;
+
+ list_for_each_entry(napi, &netdev->napi_list, dev_list) {
++ if (napi->napi_id < MIN_NAPI_ID)
++ continue;
+ if (ctx->napi_id && napi->napi_id >= ctx->napi_id)
+ continue;
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a7cd433a54c9ae..bcc2f1e090c7db 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -896,7 +896,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb,
+ sock_net_set(ctl_sk, net);
+ if (sk) {
+ ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ?
+- inet_twsk(sk)->tw_mark : sk->sk_mark;
++ inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark);
+ ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ?
+ inet_twsk(sk)->tw_priority : READ_ONCE(sk->sk_priority);
+ transmit_time = tcp_transmit_time(sk);
+diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c
+index 38d8121331d4a9..b0dd008e2114bc 100644
+--- a/net/mptcp/ctrl.c
++++ b/net/mptcp/ctrl.c
+@@ -102,16 +102,15 @@ static void mptcp_pernet_set_defaults(struct mptcp_pernet *pernet)
+ }
+
+ #ifdef CONFIG_SYSCTL
+-static int mptcp_set_scheduler(const struct net *net, const char *name)
++static int mptcp_set_scheduler(char *scheduler, const char *name)
+ {
+- struct mptcp_pernet *pernet = mptcp_get_pernet(net);
+ struct mptcp_sched_ops *sched;
+ int ret = 0;
+
+ rcu_read_lock();
+ sched = mptcp_sched_find(name);
+ if (sched)
+- strscpy(pernet->scheduler, name, MPTCP_SCHED_NAME_MAX);
++ strscpy(scheduler, name, MPTCP_SCHED_NAME_MAX);
+ else
+ ret = -ENOENT;
+ rcu_read_unlock();
+@@ -122,7 +121,7 @@ static int mptcp_set_scheduler(const struct net *net, const char *name)
+ static int proc_scheduler(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- const struct net *net = current->nsproxy->net_ns;
++ char (*scheduler)[MPTCP_SCHED_NAME_MAX] = ctl->data;
+ char val[MPTCP_SCHED_NAME_MAX];
+ struct ctl_table tbl = {
+ .data = val,
+@@ -130,11 +129,11 @@ static int proc_scheduler(const struct ctl_table *ctl, int write,
+ };
+ int ret;
+
+- strscpy(val, mptcp_get_scheduler(net), MPTCP_SCHED_NAME_MAX);
++ strscpy(val, *scheduler, MPTCP_SCHED_NAME_MAX);
+
+ ret = proc_dostring(&tbl, write, buffer, lenp, ppos);
+ if (write && ret == 0)
+- ret = mptcp_set_scheduler(net, val);
++ ret = mptcp_set_scheduler(*scheduler, val);
+
+ return ret;
+ }
+@@ -161,7 +160,9 @@ static int proc_blackhole_detect_timeout(const struct ctl_table *table,
+ int write, void *buffer, size_t *lenp,
+ loff_t *ppos)
+ {
+- struct mptcp_pernet *pernet = mptcp_get_pernet(current->nsproxy->net_ns);
++ struct mptcp_pernet *pernet = container_of(table->data,
++ struct mptcp_pernet,
++ blackhole_timeout);
+ int ret;
+
+ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+@@ -228,7 +229,7 @@ static struct ctl_table mptcp_sysctl_table[] = {
+ {
+ .procname = "available_schedulers",
+ .maxlen = MPTCP_SCHED_BUF_MAX,
+- .mode = 0644,
++ .mode = 0444,
+ .proc_handler = proc_available_schedulers,
+ },
+ {
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 9db3e2b0b1c347..456446d7af200e 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2517,12 +2517,15 @@ void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
+ struct hlist_nulls_head *hash;
+ unsigned int nr_slots, i;
+
+- if (*sizep > (UINT_MAX / sizeof(struct hlist_nulls_head)))
++ if (*sizep > (INT_MAX / sizeof(struct hlist_nulls_head)))
+ return NULL;
+
+ BUILD_BUG_ON(sizeof(struct hlist_nulls_head) != sizeof(struct hlist_head));
+ nr_slots = *sizep = roundup(*sizep, PAGE_SIZE / sizeof(struct hlist_nulls_head));
+
++ if (nr_slots > (INT_MAX / sizeof(struct hlist_nulls_head)))
++ return NULL;
++
+ hash = kvcalloc(nr_slots, sizeof(struct hlist_nulls_head), GFP_KERNEL);
+
+ if (hash && nulls)
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0c5ff4afc37022..42dc8cc721ff7b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -8565,6 +8565,7 @@ static void nft_unregister_flowtable_hook(struct net *net,
+ }
+
+ static void __nft_unregister_flowtable_net_hooks(struct net *net,
++ struct nft_flowtable *flowtable,
+ struct list_head *hook_list,
+ bool release_netdev)
+ {
+@@ -8572,6 +8573,8 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+
+ list_for_each_entry_safe(hook, next, hook_list, list) {
+ nf_unregister_net_hook(net, &hook->ops);
++ flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
++ FLOW_BLOCK_UNBIND);
+ if (release_netdev) {
+ list_del(&hook->list);
+ kfree_rcu(hook, rcu);
+@@ -8580,9 +8583,10 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+ }
+
+ static void nft_unregister_flowtable_net_hooks(struct net *net,
++ struct nft_flowtable *flowtable,
+ struct list_head *hook_list)
+ {
+- __nft_unregister_flowtable_net_hooks(net, hook_list, false);
++ __nft_unregister_flowtable_net_hooks(net, flowtable, hook_list, false);
+ }
+
+ static int nft_register_flowtable_net_hooks(struct net *net,
+@@ -9223,8 +9227,6 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+
+ flowtable->data.type->free(&flowtable->data);
+ list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+- flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
+- FLOW_BLOCK_UNBIND);
+ list_del_rcu(&hook->list);
+ kfree_rcu(hook, rcu);
+ }
+@@ -10622,6 +10624,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ &nft_trans_flowtable_hooks(trans),
+ trans->msg_type);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans));
+ } else {
+ list_del_rcu(&nft_trans_flowtable(trans)->list);
+@@ -10630,6 +10633,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ NULL,
+ trans->msg_type);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable(trans)->hook_list);
+ }
+ break;
+@@ -10901,11 +10905,13 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ case NFT_MSG_NEWFLOWTABLE:
+ if (nft_trans_flowtable_update(trans)) {
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans));
+ } else {
+ nft_use_dec_restore(&table->use);
+ list_del_rcu(&nft_trans_flowtable(trans)->list);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable(trans)->hook_list);
+ }
+ break;
+@@ -11498,7 +11504,8 @@ static void __nft_release_hook(struct net *net, struct nft_table *table)
+ list_for_each_entry(chain, &table->chains, list)
+ __nf_tables_unregister_hook(net, table, chain, true);
+ list_for_each_entry(flowtable, &table->flowtables, list)
+- __nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
++ __nft_unregister_flowtable_net_hooks(net, flowtable,
++ &flowtable->hook_list,
+ true);
+ }
+
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 351ac1747224a3..0581c53e651704 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -61,8 +61,10 @@ static atomic_t rds_tcp_unloading = ATOMIC_INIT(0);
+
+ static struct kmem_cache *rds_tcp_conn_slab;
+
+-static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write,
+- void *buffer, size_t *lenp, loff_t *fpos);
++static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos);
++static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos);
+
+ static int rds_tcp_min_sndbuf = SOCK_MIN_SNDBUF;
+ static int rds_tcp_min_rcvbuf = SOCK_MIN_RCVBUF;
+@@ -74,7 +76,7 @@ static struct ctl_table rds_tcp_sysctl_table[] = {
+ /* data is per-net pointer */
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = rds_tcp_skbuf_handler,
++ .proc_handler = rds_tcp_sndbuf_handler,
+ .extra1 = &rds_tcp_min_sndbuf,
+ },
+ #define RDS_TCP_RCVBUF 1
+@@ -83,7 +85,7 @@ static struct ctl_table rds_tcp_sysctl_table[] = {
+ /* data is per-net pointer */
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = rds_tcp_skbuf_handler,
++ .proc_handler = rds_tcp_rcvbuf_handler,
+ .extra1 = &rds_tcp_min_rcvbuf,
+ },
+ };
+@@ -682,10 +684,10 @@ static void rds_tcp_sysctl_reset(struct net *net)
+ spin_unlock_irq(&rds_tcp_conn_lock);
+ }
+
+-static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write,
++static int rds_tcp_skbuf_handler(struct rds_tcp_net *rtn,
++ const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *fpos)
+ {
+- struct net *net = current->nsproxy->net_ns;
+ int err;
+
+ err = proc_dointvec_minmax(ctl, write, buffer, lenp, fpos);
+@@ -694,11 +696,34 @@ static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write,
+ *(int *)(ctl->extra1));
+ return err;
+ }
+- if (write)
++
++ if (write && rtn->rds_tcp_listen_sock && rtn->rds_tcp_listen_sock->sk) {
++ struct net *net = sock_net(rtn->rds_tcp_listen_sock->sk);
++
+ rds_tcp_sysctl_reset(net);
++ }
++
+ return 0;
+ }
+
++static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos)
++{
++ struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net,
++ sndbuf_size);
++
++ return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos);
++}
++
++static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos)
++{
++ struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net,
++ rcvbuf_size);
++
++ return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos);
++}
++
+ static void rds_tcp_exit(void)
+ {
+ rds_tcp_set_unloading();
+diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
+index 5502998aace741..5c2580a07530e4 100644
+--- a/net/sched/cls_flow.c
++++ b/net/sched/cls_flow.c
+@@ -356,7 +356,8 @@ static const struct nla_policy flow_policy[TCA_FLOW_MAX + 1] = {
+ [TCA_FLOW_KEYS] = { .type = NLA_U32 },
+ [TCA_FLOW_MODE] = { .type = NLA_U32 },
+ [TCA_FLOW_BASECLASS] = { .type = NLA_U32 },
+- [TCA_FLOW_RSHIFT] = { .type = NLA_U32 },
++ [TCA_FLOW_RSHIFT] = NLA_POLICY_MAX(NLA_U32,
++ 31 /* BITS_PER_U32 - 1 */),
+ [TCA_FLOW_ADDEND] = { .type = NLA_U32 },
+ [TCA_FLOW_MASK] = { .type = NLA_U32 },
+ [TCA_FLOW_XOR] = { .type = NLA_U32 },
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 8d8b2db4653c0c..2c2e2a67f3b244 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -627,6 +627,63 @@ static bool cake_ddst(int flow_mode)
+ return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST;
+ }
+
++static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_dsrc(flow_mode) &&
++ q->hosts[flow->srchost].srchost_bulk_flow_count))
++ q->hosts[flow->srchost].srchost_bulk_flow_count--;
++}
++
++static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_dsrc(flow_mode) &&
++ q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES))
++ q->hosts[flow->srchost].srchost_bulk_flow_count++;
++}
++
++static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_ddst(flow_mode) &&
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count))
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count--;
++}
++
++static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_ddst(flow_mode) &&
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES))
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count++;
++}
++
++static u16 cake_get_flow_quantum(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ u16 host_load = 1;
++
++ if (cake_dsrc(flow_mode))
++ host_load = max(host_load,
++ q->hosts[flow->srchost].srchost_bulk_flow_count);
++
++ if (cake_ddst(flow_mode))
++ host_load = max(host_load,
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count);
++
++ /* The get_random_u16() is a way to apply dithering to avoid
++ * accumulating roundoff errors
++ */
++ return (q->flow_quantum * quantum_div[host_load] +
++ get_random_u16()) >> 16;
++}
++
+ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ int flow_mode, u16 flow_override, u16 host_override)
+ {
+@@ -773,10 +830,8 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ allocate_dst = cake_ddst(flow_mode);
+
+ if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+- if (allocate_src)
+- q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+- if (allocate_dst)
+- q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
++ cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
++ cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
+ }
+ found:
+ /* reserve queue for future packets in same flow */
+@@ -801,9 +856,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ q->hosts[outer_hash + k].srchost_tag = srchost_hash;
+ found_src:
+ srchost_idx = outer_hash + k;
+- if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+- q->hosts[srchost_idx].srchost_bulk_flow_count++;
+ q->flows[reduced_hash].srchost = srchost_idx;
++
++ if (q->flows[reduced_hash].set == CAKE_SET_BULK)
++ cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
+ }
+
+ if (allocate_dst) {
+@@ -824,9 +880,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
+ found_dst:
+ dsthost_idx = outer_hash + k;
+- if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+- q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
+ q->flows[reduced_hash].dsthost = dsthost_idx;
++
++ if (q->flows[reduced_hash].set == CAKE_SET_BULK)
++ cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
+ }
+ }
+
+@@ -1839,10 +1896,6 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+
+ /* flowchain */
+ if (!flow->set || flow->set == CAKE_SET_DECAYING) {
+- struct cake_host *srchost = &b->hosts[flow->srchost];
+- struct cake_host *dsthost = &b->hosts[flow->dsthost];
+- u16 host_load = 1;
+-
+ if (!flow->set) {
+ list_add_tail(&flow->flowchain, &b->new_flows);
+ } else {
+@@ -1852,18 +1905,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ flow->set = CAKE_SET_SPARSE;
+ b->sparse_flow_count++;
+
+- if (cake_dsrc(q->flow_mode))
+- host_load = max(host_load, srchost->srchost_bulk_flow_count);
+-
+- if (cake_ddst(q->flow_mode))
+- host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
+-
+- flow->deficit = (b->flow_quantum *
+- quantum_div[host_load]) >> 16;
++ flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode);
+ } else if (flow->set == CAKE_SET_SPARSE_WAIT) {
+- struct cake_host *srchost = &b->hosts[flow->srchost];
+- struct cake_host *dsthost = &b->hosts[flow->dsthost];
+-
+ /* this flow was empty, accounted as a sparse flow, but actually
+ * in the bulk rotation.
+ */
+@@ -1871,12 +1914,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ b->sparse_flow_count--;
+ b->bulk_flow_count++;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count++;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count++;
+-
++ cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+ }
+
+ if (q->buffer_used > q->buffer_max_used)
+@@ -1933,13 +1972,11 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ {
+ struct cake_sched_data *q = qdisc_priv(sch);
+ struct cake_tin_data *b = &q->tins[q->cur_tin];
+- struct cake_host *srchost, *dsthost;
+ ktime_t now = ktime_get();
+ struct cake_flow *flow;
+ struct list_head *head;
+ bool first_flow = true;
+ struct sk_buff *skb;
+- u16 host_load;
+ u64 delay;
+ u32 len;
+
+@@ -2039,11 +2076,6 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ q->cur_flow = flow - b->flows;
+ first_flow = false;
+
+- /* triple isolation (modified DRR++) */
+- srchost = &b->hosts[flow->srchost];
+- dsthost = &b->hosts[flow->dsthost];
+- host_load = 1;
+-
+ /* flow isolation (DRR++) */
+ if (flow->deficit <= 0) {
+ /* Keep all flows with deficits out of the sparse and decaying
+@@ -2055,11 +2087,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ b->sparse_flow_count--;
+ b->bulk_flow_count++;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count++;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count++;
++ cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+
+ flow->set = CAKE_SET_BULK;
+ } else {
+@@ -2071,19 +2100,7 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ }
+ }
+
+- if (cake_dsrc(q->flow_mode))
+- host_load = max(host_load, srchost->srchost_bulk_flow_count);
+-
+- if (cake_ddst(q->flow_mode))
+- host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
+-
+- WARN_ON(host_load > CAKE_QUEUES);
+-
+- /* The get_random_u16() is a way to apply dithering to avoid
+- * accumulating roundoff errors
+- */
+- flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+- get_random_u16()) >> 16;
++ flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode);
+ list_move_tail(&flow->flowchain, &b->old_flows);
+
+ goto retry;
+@@ -2107,11 +2124,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ if (flow->set == CAKE_SET_BULK) {
+ b->bulk_flow_count--;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count--;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count--;
++ cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+
+ b->decaying_flow_count++;
+ } else if (flow->set == CAKE_SET_SPARSE ||
+@@ -2129,12 +2143,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ else if (flow->set == CAKE_SET_BULK) {
+ b->bulk_flow_count--;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count--;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count--;
+-
++ cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+ } else
+ b->decaying_flow_count--;
+
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index e5a5af343c4c98..8e1e97be4df79f 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -387,7 +387,8 @@ static struct ctl_table sctp_net_table[] = {
+ static int proc_sctp_do_hmac_alg(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net,
++ sctp.sctp_hmac_alg);
+ struct ctl_table tbl;
+ bool changed = false;
+ char *none = "none";
+@@ -432,7 +433,7 @@ static int proc_sctp_do_hmac_alg(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_rto_min(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.rto_min);
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ struct ctl_table tbl;
+@@ -460,7 +461,7 @@ static int proc_sctp_do_rto_min(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_rto_max(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.rto_max);
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ struct ctl_table tbl;
+@@ -498,7 +499,7 @@ static int proc_sctp_do_alpha_beta(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_auth(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.auth_enable);
+ struct ctl_table tbl;
+ int new_value, ret;
+
+@@ -527,7 +528,7 @@ static int proc_sctp_do_auth(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.udp_port);
+ unsigned int min = *(unsigned int *)ctl->extra1;
+ unsigned int max = *(unsigned int *)ctl->extra2;
+ struct ctl_table tbl;
+@@ -568,7 +569,8 @@ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_probe_interval(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net,
++ sctp.probe_interval);
+ struct ctl_table tbl;
+ int ret, new_value;
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index bbf26cc4f6ee26..7bcc9b4408a2c7 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -458,7 +458,7 @@ int tls_tx_records(struct sock *sk, int flags)
+
+ tx_err:
+ if (rc < 0 && rc != -EAGAIN)
+- tls_err_abort(sk, -EBADMSG);
++ tls_err_abort(sk, rc);
+
+ return rc;
+ }
+diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c
+index f9f7512ca36087..9a0747c4bdeac0 100644
+--- a/sound/soc/codecs/rt722-sdca.c
++++ b/sound/soc/codecs/rt722-sdca.c
+@@ -1467,13 +1467,18 @@ static void rt722_sdca_jack_preset(struct rt722_sdca_priv *rt722)
+ 0x008d);
+ /* check HP calibration FSM status */
+ for (loop_check = 0; loop_check < chk_cnt; loop_check++) {
++ usleep_range(10000, 11000);
+ ret = rt722_sdca_index_read(rt722, RT722_VENDOR_CALI,
+ RT722_DAC_DC_CALI_CTL3, &calib_status);
+- if (ret < 0 || loop_check == chk_cnt)
++ if (ret < 0)
+ dev_dbg(&rt722->slave->dev, "calibration failed!, ret=%d\n", ret);
+ if ((calib_status & 0x0040) == 0x0)
+ break;
+ }
++
++ if (loop_check == chk_cnt)
++ dev_dbg(&rt722->slave->dev, "%s, calibration time-out!\n", __func__);
++
+ /* Set ADC09 power entity floating control */
+ rt722_sdca_index_write(rt722, RT722_VENDOR_HDA_CTL, RT722_ADC0A_08_PDE_FLOAT_CTL,
+ 0x2a12);
+diff --git a/sound/soc/mediatek/common/mtk-afe-platform-driver.c b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+index 9b72b2a7ae917e..6b633058394140 100644
+--- a/sound/soc/mediatek/common/mtk-afe-platform-driver.c
++++ b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+@@ -120,8 +120,8 @@ int mtk_afe_pcm_new(struct snd_soc_component *component,
+ struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component);
+
+ size = afe->mtk_afe_hardware->buffer_bytes_max;
+- snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
+- afe->dev, size, size);
++ snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(mtk_afe_pcm_new);
+diff --git a/tools/testing/selftests/alsa/Makefile b/tools/testing/selftests/alsa/Makefile
+index 944279160fed26..8dab90ad22bb27 100644
+--- a/tools/testing/selftests/alsa/Makefile
++++ b/tools/testing/selftests/alsa/Makefile
+@@ -27,5 +27,5 @@ include ../lib.mk
+ $(OUTPUT)/libatest.so: conf.c alsa-local.h
+ $(CC) $(CFLAGS) -shared -fPIC $< $(LDLIBS) -o $@
+
+-$(OUTPUT)/%: %.c $(TEST_GEN_PROGS_EXTENDED) alsa-local.h
++$(OUTPUT)/%: %.c $(OUTPUT)/libatest.so alsa-local.h
+ $(CC) $(CFLAGS) $< $(LDLIBS) -latest -o $@
+diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+index 03c1bdaed2c3c5..400a696a0d212e 100755
+--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
++++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+@@ -86,15 +86,15 @@ echo "" > test/cpuset.cpus
+
+ #
+ # If isolated CPUs have been reserved at boot time (as shown in
+-# cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-7
++# cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-8
+ # that will be used by this script for testing purpose. If not, some of
+-# the tests may fail incorrectly. These isolated CPUs will also be removed
+-# before being compared with the expected results.
++# the tests may fail incorrectly. These pre-isolated CPUs should stay in
++# an isolated state throughout the testing process for now.
+ #
+ BOOT_ISOLCPUS=$(cat $CGROUP2/cpuset.cpus.isolated)
+ if [[ -n "$BOOT_ISOLCPUS" ]]
+ then
+- [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 7 ]] &&
++ [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 8 ]] &&
+ skip_test "Pre-isolated CPUs ($BOOT_ISOLCPUS) overlap CPUs to be tested"
+ echo "Pre-isolated CPUs: $BOOT_ISOLCPUS"
+ fi
+@@ -683,15 +683,19 @@ check_isolcpus()
+ EXPECT_VAL2=$EXPECT_VAL
+ fi
+
++ #
++ # Appending pre-isolated CPUs
++ # Even though CPU #8 isn't used for testing, it can't be pre-isolated
++ # to make appending those CPUs easier.
++ #
++ [[ -n "$BOOT_ISOLCPUS" ]] && {
++ EXPECT_VAL=${EXPECT_VAL:+${EXPECT_VAL},}${BOOT_ISOLCPUS}
++ EXPECT_VAL2=${EXPECT_VAL2:+${EXPECT_VAL2},}${BOOT_ISOLCPUS}
++ }
++
+ #
+ # Check cpuset.cpus.isolated cpumask
+ #
+- if [[ -z "$BOOT_ISOLCPUS" ]]
+- then
+- ISOLCPUS=$(cat $ISCPUS)
+- else
+- ISOLCPUS=$(cat $ISCPUS | sed -e "s/,*$BOOT_ISOLCPUS//")
+- fi
+ [[ "$EXPECT_VAL2" != "$ISOLCPUS" ]] && {
+ # Take a 50ms pause and try again
+ pause 0.05
+@@ -731,8 +735,6 @@ check_isolcpus()
+ fi
+ done
+ [[ "$ISOLCPUS" = *- ]] && ISOLCPUS=${ISOLCPUS}$LASTISOLCPU
+- [[ -n "BOOT_ISOLCPUS" ]] &&
+- ISOLCPUS=$(echo $ISOLCPUS | sed -e "s/,*$BOOT_ISOLCPUS//")
+
+ [[ "$EXPECT_VAL" = "$ISOLCPUS" ]]
+ }
+@@ -836,8 +838,11 @@ run_state_test()
+ # if available
+ [[ -n "$ICPUS" ]] && {
+ check_isolcpus $ICPUS
+- [[ $? -ne 0 ]] && test_fail $I "isolated CPU" \
+- "Expect $ICPUS, get $ISOLCPUS instead"
++ [[ $? -ne 0 ]] && {
++ [[ -n "$BOOT_ISOLCPUS" ]] && ICPUS=${ICPUS},${BOOT_ISOLCPUS}
++ test_fail $I "isolated CPU" \
++ "Expect $ICPUS, get $ISOLCPUS instead"
++ }
+ }
+ reset_cgroup_states
+ #
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-23 17:02 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-01-23 17:02 UTC (permalink / raw
To: gentoo-commits
commit: be3664f69c599632b14b89fe20fa9dd3418eff74
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 23 17:02:05 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 23 17:02:05 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=be3664f6
Linux patch 6.12.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-6.12.11.patch | 5351 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5355 insertions(+)
diff --git a/0000_README b/0000_README
index 20574d29..9c94906b 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-6.12.10.patch
From: https://www.kernel.org
Desc: Linux 6.12.10
+Patch: 1010_linux-6.12.11.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.11
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1010_linux-6.12.11.patch b/1010_linux-6.12.11.patch
new file mode 100644
index 00000000..b7226d9f
--- /dev/null
+++ b/1010_linux-6.12.11.patch
@@ -0,0 +1,5351 @@
+diff --git a/Makefile b/Makefile
+index 233e9e88e402e7..7cf8f11975f89c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index aec6e2d3aa1d52..98bfc097389c4e 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -217,7 +217,7 @@ static inline int write_user_shstk_64(u64 __user *addr, u64 val)
+
+ #define nop() asm volatile ("nop")
+
+-static inline void serialize(void)
++static __always_inline void serialize(void)
+ {
+ /* Instruction opcode for SERIALIZE; supported in binutils >= 2.35. */
+ asm volatile(".byte 0xf, 0x1, 0xe8" ::: "memory");
+diff --git a/arch/x86/kernel/fred.c b/arch/x86/kernel/fred.c
+index 8d32c3f48abc0c..5e2cd10049804e 100644
+--- a/arch/x86/kernel/fred.c
++++ b/arch/x86/kernel/fred.c
+@@ -50,7 +50,13 @@ void cpu_init_fred_exceptions(void)
+ FRED_CONFIG_ENTRYPOINT(asm_fred_entrypoint_user));
+
+ wrmsrl(MSR_IA32_FRED_STKLVLS, 0);
+- wrmsrl(MSR_IA32_FRED_RSP0, 0);
++
++ /*
++ * After a CPU offline/online cycle, the FRED RSP0 MSR should be
++ * resynchronized with its per-CPU cache.
++ */
++ wrmsrl(MSR_IA32_FRED_RSP0, __this_cpu_read(fred_rsp0));
++
+ wrmsrl(MSR_IA32_FRED_RSP1, 0);
+ wrmsrl(MSR_IA32_FRED_RSP2, 0);
+ wrmsrl(MSR_IA32_FRED_RSP3, 0);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index d27a3bf96f80d8..90aaec923889cf 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -689,11 +689,11 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+ for (i = 0; i < ARRAY_SIZE(override_table); i++) {
+ const struct irq_override_cmp *entry = &override_table[i];
+
+- if (dmi_check_system(entry->system) &&
+- entry->irq == gsi &&
++ if (entry->irq == gsi &&
+ entry->triggering == triggering &&
+ entry->polarity == polarity &&
+- entry->shareable == shareable)
++ entry->shareable == shareable &&
++ dmi_check_system(entry->system))
+ return entry->override;
+ }
+
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index bf83a104086cce..76b326ddd75c47 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1349,6 +1349,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
+ zram->mem_pool = zs_create_pool(zram->disk->disk_name);
+ if (!zram->mem_pool) {
+ vfree(zram->table);
++ zram->table = NULL;
+ return false;
+ }
+
+diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
+index 2561b215432a82..588ab1cc6d557c 100644
+--- a/drivers/cpufreq/Kconfig
++++ b/drivers/cpufreq/Kconfig
+@@ -311,8 +311,6 @@ config QORIQ_CPUFREQ
+ This adds the CPUFreq driver support for Freescale QorIQ SoCs
+ which are capable of changing the CPU's frequency dynamically.
+
+-endif
+-
+ config ACPI_CPPC_CPUFREQ
+ tristate "CPUFreq driver based on the ACPI CPPC spec"
+ depends on ACPI_PROCESSOR
+@@ -341,4 +339,6 @@ config ACPI_CPPC_CPUFREQ_FIE
+
+ If in doubt, say N.
+
++endif
++
+ endmenu
+diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
+index f2992f92d8db86..173ddcac540ade 100644
+--- a/drivers/cpuidle/governors/teo.c
++++ b/drivers/cpuidle/governors/teo.c
+@@ -10,25 +10,27 @@
+ * DOC: teo-description
+ *
+ * The idea of this governor is based on the observation that on many systems
+- * timer events are two or more orders of magnitude more frequent than any
+- * other interrupts, so they are likely to be the most significant cause of CPU
+- * wakeups from idle states. Moreover, information about what happened in the
+- * (relatively recent) past can be used to estimate whether or not the deepest
+- * idle state with target residency within the (known) time till the closest
+- * timer event, referred to as the sleep length, is likely to be suitable for
+- * the upcoming CPU idle period and, if not, then which of the shallower idle
+- * states to choose instead of it.
++ * timer interrupts are two or more orders of magnitude more frequent than any
++ * other interrupt types, so they are likely to dominate CPU wakeup patterns.
++ * Moreover, in principle, the time when the next timer event is going to occur
++ * can be determined at the idle state selection time, although doing that may
++ * be costly, so it can be regarded as the most reliable source of information
++ * for idle state selection.
+ *
+- * Of course, non-timer wakeup sources are more important in some use cases
+- * which can be covered by taking a few most recent idle time intervals of the
+- * CPU into account. However, even in that context it is not necessary to
+- * consider idle duration values greater than the sleep length, because the
+- * closest timer will ultimately wake up the CPU anyway unless it is woken up
+- * earlier.
++ * Of course, non-timer wakeup sources are more important in some use cases,
++ * but even then it is generally unnecessary to consider idle duration values
++ * greater than the time till the next timer event, referred to as the sleep
++ * length in what follows, because the closest timer will ultimately wake up the
++ * CPU anyway unless it is woken up earlier.
+ *
+- * Thus this governor estimates whether or not the prospective idle duration of
+- * a CPU is likely to be significantly shorter than the sleep length and selects
+- * an idle state for it accordingly.
++ * However, since obtaining the sleep length may be costly, the governor first
++ * checks if it can select a shallow idle state using wakeup pattern information
++ * from recent times, in which case it can do without knowing the sleep length
++ * at all. For this purpose, it counts CPU wakeup events and looks for an idle
++ * state whose target residency has not exceeded the idle duration (measured
++ * after wakeup) in the majority of relevant recent cases. If the target
++ * residency of that state is small enough, it may be used right away and the
++ * sleep length need not be determined.
+ *
+ * The computations carried out by this governor are based on using bins whose
+ * boundaries are aligned with the target residency parameter values of the CPU
+@@ -39,7 +41,11 @@
+ * idle state 2, the third bin spans from the target residency of idle state 2
+ * up to, but not including, the target residency of idle state 3 and so on.
+ * The last bin spans from the target residency of the deepest idle state
+- * supplied by the driver to infinity.
++ * supplied by the driver to the scheduler tick period length or to infinity if
++ * the tick period length is less than the target residency of that state. In
++ * the latter case, the governor also counts events with the measured idle
++ * duration between the tick period length and the target residency of the
++ * deepest idle state.
+ *
+ * Two metrics called "hits" and "intercepts" are associated with each bin.
+ * They are updated every time before selecting an idle state for the given CPU
+@@ -49,47 +55,46 @@
+ * sleep length and the idle duration measured after CPU wakeup fall into the
+ * same bin (that is, the CPU appears to wake up "on time" relative to the sleep
+ * length). In turn, the "intercepts" metric reflects the relative frequency of
+- * situations in which the measured idle duration is so much shorter than the
+- * sleep length that the bin it falls into corresponds to an idle state
+- * shallower than the one whose bin is fallen into by the sleep length (these
+- * situations are referred to as "intercepts" below).
++ * non-timer wakeup events for which the measured idle duration falls into a bin
++ * that corresponds to an idle state shallower than the one whose bin is fallen
++ * into by the sleep length (these events are also referred to as "intercepts"
++ * below).
+ *
+ * In order to select an idle state for a CPU, the governor takes the following
+ * steps (modulo the possible latency constraint that must be taken into account
+ * too):
+ *
+- * 1. Find the deepest CPU idle state whose target residency does not exceed
+- * the current sleep length (the candidate idle state) and compute 2 sums as
+- * follows:
++ * 1. Find the deepest enabled CPU idle state (the candidate idle state) and
++ * compute 2 sums as follows:
+ *
+- * - The sum of the "hits" and "intercepts" metrics for the candidate state
+- * and all of the deeper idle states (it represents the cases in which the
+- * CPU was idle long enough to avoid being intercepted if the sleep length
+- * had been equal to the current one).
++ * - The sum of the "hits" metric for all of the idle states shallower than
++ * the candidate one (it represents the cases in which the CPU was likely
++ * woken up by a timer).
+ *
+- * - The sum of the "intercepts" metrics for all of the idle states shallower
+- * than the candidate one (it represents the cases in which the CPU was not
+- * idle long enough to avoid being intercepted if the sleep length had been
+- * equal to the current one).
++ * - The sum of the "intercepts" metric for all of the idle states shallower
++ * than the candidate one (it represents the cases in which the CPU was
++ * likely woken up by a non-timer wakeup source).
+ *
+- * 2. If the second sum is greater than the first one the CPU is likely to wake
+- * up early, so look for an alternative idle state to select.
++ * 2. If the second sum computed in step 1 is greater than a half of the sum of
++ * both metrics for the candidate state bin and all subsequent bins (if any),
++ * a shallower idle state is likely to be more suitable, so look for it.
+ *
+- * - Traverse the idle states shallower than the candidate one in the
++ * - Traverse the enabled idle states shallower than the candidate one in the
+ * descending order.
+ *
+ * - For each of them compute the sum of the "intercepts" metrics over all
+ * of the idle states between it and the candidate one (including the
+ * former and excluding the latter).
+ *
+- * - If each of these sums that needs to be taken into account (because the
+- * check related to it has indicated that the CPU is likely to wake up
+- * early) is greater than a half of the corresponding sum computed in step
+- * 1 (which means that the target residency of the state in question had
+- * not exceeded the idle duration in over a half of the relevant cases),
+- * select the given idle state instead of the candidate one.
++ * - If this sum is greater than a half of the second sum computed in step 1,
++ * use the given idle state as the new candidate one.
+ *
+- * 3. By default, select the candidate state.
++ * 3. If the current candidate state is state 0 or its target residency is short
++ * enough, return it and prevent the scheduler tick from being stopped.
++ *
++ * 4. Obtain the sleep length value and check if it is below the target
++ * residency of the current candidate state, in which case a new shallower
++ * candidate state needs to be found, so look for it.
+ */
+
+ #include <linux/cpuidle.h>
+diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
+index 72f2537d90cafd..f45c70154a9302 100644
+--- a/drivers/firmware/efi/Kconfig
++++ b/drivers/firmware/efi/Kconfig
+@@ -76,10 +76,6 @@ config EFI_ZBOOT
+ bool "Enable the generic EFI decompressor"
+ depends on EFI_GENERIC_STUB && !ARM
+ select HAVE_KERNEL_GZIP
+- select HAVE_KERNEL_LZ4
+- select HAVE_KERNEL_LZMA
+- select HAVE_KERNEL_LZO
+- select HAVE_KERNEL_XZ
+ select HAVE_KERNEL_ZSTD
+ help
+ Create the bootable image as an EFI application that carries the
+diff --git a/drivers/firmware/efi/libstub/Makefile.zboot b/drivers/firmware/efi/libstub/Makefile.zboot
+index 65ffd0b760b2fb..48842b5c106b83 100644
+--- a/drivers/firmware/efi/libstub/Makefile.zboot
++++ b/drivers/firmware/efi/libstub/Makefile.zboot
+@@ -12,22 +12,16 @@ quiet_cmd_copy_and_pad = PAD $@
+ $(obj)/vmlinux.bin: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
+ $(call if_changed,copy_and_pad)
+
+-comp-type-$(CONFIG_KERNEL_GZIP) := gzip
+-comp-type-$(CONFIG_KERNEL_LZ4) := lz4
+-comp-type-$(CONFIG_KERNEL_LZMA) := lzma
+-comp-type-$(CONFIG_KERNEL_LZO) := lzo
+-comp-type-$(CONFIG_KERNEL_XZ) := xzkern
+-comp-type-$(CONFIG_KERNEL_ZSTD) := zstd22
+-
+ # in GZIP, the appended le32 carrying the uncompressed size is part of the
+ # format, but in other cases, we just append it at the end for convenience,
+ # causing the original tools to complain when checking image integrity.
+-# So disregard it when calculating the payload size in the zimage header.
+-zboot-method-y := $(comp-type-y)_with_size
+-zboot-size-len-y := 4
++comp-type-y := gzip
++zboot-method-y := gzip
++zboot-size-len-y := 0
+
+-zboot-method-$(CONFIG_KERNEL_GZIP) := gzip
+-zboot-size-len-$(CONFIG_KERNEL_GZIP) := 0
++comp-type-$(CONFIG_KERNEL_ZSTD) := zstd
++zboot-method-$(CONFIG_KERNEL_ZSTD) := zstd22_with_size
++zboot-size-len-$(CONFIG_KERNEL_ZSTD) := 4
+
+ $(obj)/vmlinuz: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,$(zboot-method-y))
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index dcca1d7f173e5f..deedacdeb23952 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -1030,6 +1030,30 @@ static void gpio_sim_device_deactivate(struct gpio_sim_device *dev)
+ dev->pdev = NULL;
+ }
+
++static void
++gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
++{
++ struct configfs_subsystem *subsys = dev->group.cg_subsys;
++ struct gpio_sim_bank *bank;
++ struct gpio_sim_line *line;
++
++ /*
++ * The device only needs to depend on leaf line entries. This is
++ * sufficient to lock up all the configfs entries that the
++ * instantiated, alive device depends on.
++ */
++ list_for_each_entry(bank, &dev->bank_list, siblings) {
++ list_for_each_entry(line, &bank->line_list, siblings) {
++ if (lock)
++ WARN_ON(configfs_depend_item_unlocked(
++ subsys, &line->group.cg_item));
++ else
++ configfs_undepend_item_unlocked(
++ &line->group.cg_item);
++ }
++ }
++}
++
+ static ssize_t
+ gpio_sim_device_config_live_store(struct config_item *item,
+ const char *page, size_t count)
+@@ -1042,14 +1066,24 @@ gpio_sim_device_config_live_store(struct config_item *item,
+ if (ret)
+ return ret;
+
+- guard(mutex)(&dev->lock);
++ if (live)
++ gpio_sim_device_lockup_configfs(dev, true);
+
+- if (live == gpio_sim_device_is_live(dev))
+- ret = -EPERM;
+- else if (live)
+- ret = gpio_sim_device_activate(dev);
+- else
+- gpio_sim_device_deactivate(dev);
++ scoped_guard(mutex, &dev->lock) {
++ if (live == gpio_sim_device_is_live(dev))
++ ret = -EPERM;
++ else if (live)
++ ret = gpio_sim_device_activate(dev);
++ else
++ gpio_sim_device_deactivate(dev);
++ }
++
++ /*
++ * Undepend is required only if device disablement (live == 0)
++ * succeeds or if device enablement (live == 1) fails.
++ */
++ if (live == !!ret)
++ gpio_sim_device_lockup_configfs(dev, false);
+
+ return ret ?: count;
+ }
+diff --git a/drivers/gpio/gpio-virtuser.c b/drivers/gpio/gpio-virtuser.c
+index d6244f0d3bc752..e89f299f214009 100644
+--- a/drivers/gpio/gpio-virtuser.c
++++ b/drivers/gpio/gpio-virtuser.c
+@@ -1546,6 +1546,30 @@ gpio_virtuser_device_deactivate(struct gpio_virtuser_device *dev)
+ dev->pdev = NULL;
+ }
+
++static void
++gpio_virtuser_device_lockup_configfs(struct gpio_virtuser_device *dev, bool lock)
++{
++ struct configfs_subsystem *subsys = dev->group.cg_subsys;
++ struct gpio_virtuser_lookup_entry *entry;
++ struct gpio_virtuser_lookup *lookup;
++
++ /*
++ * The device only needs to depend on leaf lookup entries. This is
++ * sufficient to lock up all the configfs entries that the
++ * instantiated, alive device depends on.
++ */
++ list_for_each_entry(lookup, &dev->lookup_list, siblings) {
++ list_for_each_entry(entry, &lookup->entry_list, siblings) {
++ if (lock)
++ WARN_ON(configfs_depend_item_unlocked(
++ subsys, &entry->group.cg_item));
++ else
++ configfs_undepend_item_unlocked(
++ &entry->group.cg_item);
++ }
++ }
++}
++
+ static ssize_t
+ gpio_virtuser_device_config_live_store(struct config_item *item,
+ const char *page, size_t count)
+@@ -1558,15 +1582,24 @@ gpio_virtuser_device_config_live_store(struct config_item *item,
+ if (ret)
+ return ret;
+
+- guard(mutex)(&dev->lock);
++ if (live)
++ gpio_virtuser_device_lockup_configfs(dev, true);
+
+- if (live == gpio_virtuser_device_is_live(dev))
+- return -EPERM;
++ scoped_guard(mutex, &dev->lock) {
++ if (live == gpio_virtuser_device_is_live(dev))
++ ret = -EPERM;
++ else if (live)
++ ret = gpio_virtuser_device_activate(dev);
++ else
++ gpio_virtuser_device_deactivate(dev);
++ }
+
+- if (live)
+- ret = gpio_virtuser_device_activate(dev);
+- else
+- gpio_virtuser_device_deactivate(dev);
++ /*
++ * Undepend is required only if device disablement (live == 0)
++ * succeeds or if device enablement (live == 1) fails.
++ */
++ if (live == !!ret)
++ gpio_virtuser_device_lockup_configfs(dev, false);
+
+ return ret ?: count;
+ }
+diff --git a/drivers/gpio/gpio-xilinx.c b/drivers/gpio/gpio-xilinx.c
+index afcf432a1573ed..2ea8ccfbdccdd4 100644
+--- a/drivers/gpio/gpio-xilinx.c
++++ b/drivers/gpio/gpio-xilinx.c
+@@ -65,7 +65,7 @@ struct xgpio_instance {
+ DECLARE_BITMAP(state, 64);
+ DECLARE_BITMAP(last_irq_read, 64);
+ DECLARE_BITMAP(dir, 64);
+- spinlock_t gpio_lock; /* For serializing operations */
++ raw_spinlock_t gpio_lock; /* For serializing operations */
+ int irq;
+ DECLARE_BITMAP(enable, 64);
+ DECLARE_BITMAP(rising_edge, 64);
+@@ -179,14 +179,14 @@ static void xgpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
+ struct xgpio_instance *chip = gpiochip_get_data(gc);
+ int bit = xgpio_to_bit(chip, gpio);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ /* Write to GPIO signal and set its direction to output */
+ __assign_bit(bit, chip->state, val);
+
+ xgpio_write_ch(chip, XGPIO_DATA_OFFSET, bit, chip->state);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+ }
+
+ /**
+@@ -210,7 +210,7 @@ static void xgpio_set_multiple(struct gpio_chip *gc, unsigned long *mask,
+ bitmap_remap(hw_mask, mask, chip->sw_map, chip->hw_map, 64);
+ bitmap_remap(hw_bits, bits, chip->sw_map, chip->hw_map, 64);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ bitmap_replace(state, chip->state, hw_bits, hw_mask, 64);
+
+@@ -218,7 +218,7 @@ static void xgpio_set_multiple(struct gpio_chip *gc, unsigned long *mask,
+
+ bitmap_copy(chip->state, state, 64);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+ }
+
+ /**
+@@ -236,13 +236,13 @@ static int xgpio_dir_in(struct gpio_chip *gc, unsigned int gpio)
+ struct xgpio_instance *chip = gpiochip_get_data(gc);
+ int bit = xgpio_to_bit(chip, gpio);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ /* Set the GPIO bit in shadow register and set direction as input */
+ __set_bit(bit, chip->dir);
+ xgpio_write_ch(chip, XGPIO_TRI_OFFSET, bit, chip->dir);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ return 0;
+ }
+@@ -265,7 +265,7 @@ static int xgpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
+ struct xgpio_instance *chip = gpiochip_get_data(gc);
+ int bit = xgpio_to_bit(chip, gpio);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ /* Write state of GPIO signal */
+ __assign_bit(bit, chip->state, val);
+@@ -275,7 +275,7 @@ static int xgpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
+ __clear_bit(bit, chip->dir);
+ xgpio_write_ch(chip, XGPIO_TRI_OFFSET, bit, chip->dir);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ return 0;
+ }
+@@ -398,7 +398,7 @@ static void xgpio_irq_mask(struct irq_data *irq_data)
+ int bit = xgpio_to_bit(chip, irq_offset);
+ u32 mask = BIT(bit / 32), temp;
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ __clear_bit(bit, chip->enable);
+
+@@ -408,7 +408,7 @@ static void xgpio_irq_mask(struct irq_data *irq_data)
+ temp &= ~mask;
+ xgpio_writereg(chip->regs + XGPIO_IPIER_OFFSET, temp);
+ }
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ gpiochip_disable_irq(&chip->gc, irq_offset);
+ }
+@@ -428,7 +428,7 @@ static void xgpio_irq_unmask(struct irq_data *irq_data)
+
+ gpiochip_enable_irq(&chip->gc, irq_offset);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ __set_bit(bit, chip->enable);
+
+@@ -447,7 +447,7 @@ static void xgpio_irq_unmask(struct irq_data *irq_data)
+ xgpio_writereg(chip->regs + XGPIO_IPIER_OFFSET, val);
+ }
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+ }
+
+ /**
+@@ -512,7 +512,7 @@ static void xgpio_irqhandler(struct irq_desc *desc)
+
+ chained_irq_enter(irqchip, desc);
+
+- spin_lock(&chip->gpio_lock);
++ raw_spin_lock(&chip->gpio_lock);
+
+ xgpio_read_ch_all(chip, XGPIO_DATA_OFFSET, all);
+
+@@ -529,7 +529,7 @@ static void xgpio_irqhandler(struct irq_desc *desc)
+ bitmap_copy(chip->last_irq_read, all, 64);
+ bitmap_or(all, rising, falling, 64);
+
+- spin_unlock(&chip->gpio_lock);
++ raw_spin_unlock(&chip->gpio_lock);
+
+ dev_dbg(gc->parent, "IRQ rising %*pb falling %*pb\n", 64, rising, 64, falling);
+
+@@ -620,7 +620,7 @@ static int xgpio_probe(struct platform_device *pdev)
+ bitmap_set(chip->hw_map, 0, width[0]);
+ bitmap_set(chip->hw_map, 32, width[1]);
+
+- spin_lock_init(&chip->gpio_lock);
++ raw_spin_lock_init(&chip->gpio_lock);
+
+ chip->gc.base = -1;
+ chip->gc.ngpio = bitmap_weight(chip->hw_map, 64);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index e41318bfbf4575..84e5364d1f67d0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -715,8 +715,9 @@ int amdgpu_amdkfd_submit_ib(struct amdgpu_device *adev,
+ void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
+ {
+ enum amd_powergating_state state = idle ? AMD_PG_STATE_GATE : AMD_PG_STATE_UNGATE;
+- if (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 &&
+- ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) {
++ if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 &&
++ ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) ||
++ (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 12)) {
+ pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
+ amdgpu_gfx_off_ctrl(adev, idle);
+ } else if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 9) &&
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
+index 2d4b67175b55be..328a1b9635481c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
+@@ -122,6 +122,10 @@ static int amdgpu_is_fw_attestation_supported(struct amdgpu_device *adev)
+ if (adev->flags & AMD_IS_APU)
+ return 0;
+
++ if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(14, 0, 2) ||
++ amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(14, 0, 3))
++ return 0;
++
+ if (adev->asic_type >= CHIP_SIENNA_CICHLID)
+ return 1;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 8b512dc28df838..071f187f5e282f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -193,8 +193,8 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
+ need_ctx_switch = ring->current_ctx != fence_ctx;
+ if (ring->funcs->emit_pipeline_sync && job &&
+ ((tmp = amdgpu_sync_get_fence(&job->explicit_sync)) ||
+- (amdgpu_sriov_vf(adev) && need_ctx_switch) ||
+- amdgpu_vm_need_pipeline_sync(ring, job))) {
++ need_ctx_switch || amdgpu_vm_need_pipeline_sync(ring, job))) {
++
+ need_pipe_sync = true;
+
+ if (tmp)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ea403fece8392c..08c58d0315de7f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8889,6 +8889,7 @@ static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
+ struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
+ struct amdgpu_dm_connector *aconn =
+ (struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
++ bool vrr_active = amdgpu_dm_crtc_vrr_active(acrtc_state);
+
+ if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
+ if (pr->config.replay_supported && !pr->replay_feature_enabled)
+@@ -8915,14 +8916,15 @@ static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
+ * adequate number of fast atomic commits to notify KMD
+ * of update events. See `vblank_control_worker()`.
+ */
+- if (acrtc_attach->dm_irq_params.allow_sr_entry &&
++ if (!vrr_active &&
++ acrtc_attach->dm_irq_params.allow_sr_entry &&
+ #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+ !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
+ #endif
+ (current_ts - psr->psr_dirty_rects_change_timestamp_ns) > 500000000) {
+ if (pr->replay_feature_enabled && !pr->replay_allow_active)
+ amdgpu_dm_replay_enable(acrtc_state->stream, true);
+- if (psr->psr_version >= DC_PSR_VERSION_SU_1 &&
++ if (psr->psr_version == DC_PSR_VERSION_SU_1 &&
+ !psr->psr_allow_active && !aconn->disallow_edp_enter_psr)
+ amdgpu_dm_psr_enable(acrtc_state->stream);
+ }
+@@ -9093,7 +9095,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ acrtc_state->stream->link->psr_settings.psr_dirty_rects_change_timestamp_ns =
+ timestamp_ns;
+ if (acrtc_state->stream->link->psr_settings.psr_allow_active)
+- amdgpu_dm_psr_disable(acrtc_state->stream);
++ amdgpu_dm_psr_disable(acrtc_state->stream, true);
+ mutex_unlock(&dm->dc_lock);
+ }
+ }
+@@ -9259,11 +9261,11 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ bundle->stream_update.abm_level = &acrtc_state->abm_level;
+
+ mutex_lock(&dm->dc_lock);
+- if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
++ if ((acrtc_state->update_type > UPDATE_TYPE_FAST) || vrr_active) {
+ if (acrtc_state->stream->link->replay_settings.replay_allow_active)
+ amdgpu_dm_replay_disable(acrtc_state->stream);
+ if (acrtc_state->stream->link->psr_settings.psr_allow_active)
+- amdgpu_dm_psr_disable(acrtc_state->stream);
++ amdgpu_dm_psr_disable(acrtc_state->stream, true);
+ }
+ mutex_unlock(&dm->dc_lock);
+
+@@ -11370,6 +11372,25 @@ static int dm_crtc_get_cursor_mode(struct amdgpu_device *adev,
+ return 0;
+ }
+
++static bool amdgpu_dm_crtc_mem_type_changed(struct drm_device *dev,
++ struct drm_atomic_state *state,
++ struct drm_crtc_state *crtc_state)
++{
++ struct drm_plane *plane;
++ struct drm_plane_state *new_plane_state, *old_plane_state;
++
++ drm_for_each_plane_mask(plane, dev, crtc_state->plane_mask) {
++ new_plane_state = drm_atomic_get_plane_state(state, plane);
++ old_plane_state = drm_atomic_get_plane_state(state, plane);
++
++ if (old_plane_state->fb && new_plane_state->fb &&
++ get_mem_type(old_plane_state->fb) != get_mem_type(new_plane_state->fb))
++ return true;
++ }
++
++ return false;
++}
++
+ /**
+ * amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
+ *
+@@ -11567,10 +11588,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+
+ /* Remove exiting planes if they are modified */
+ for_each_oldnew_plane_in_descending_zpos(state, plane, old_plane_state, new_plane_state) {
+- if (old_plane_state->fb && new_plane_state->fb &&
+- get_mem_type(old_plane_state->fb) !=
+- get_mem_type(new_plane_state->fb))
+- lock_and_validation_needed = true;
+
+ ret = dm_update_plane_state(dc, state, plane,
+ old_plane_state,
+@@ -11865,9 +11882,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+
+ /*
+ * Only allow async flips for fast updates that don't change
+- * the FB pitch, the DCC state, rotation, etc.
++ * the FB pitch, the DCC state, rotation, mem_type, etc.
+ */
+- if (new_crtc_state->async_flip && lock_and_validation_needed) {
++ if (new_crtc_state->async_flip &&
++ (lock_and_validation_needed ||
++ amdgpu_dm_crtc_mem_type_changed(dev, state, new_crtc_state))) {
+ drm_dbg_atomic(crtc->dev,
+ "[CRTC:%d:%s] async flips are only supported for fast updates\n",
+ crtc->base.id, crtc->name);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index f936a35fa9ebb7..0f6ba7b1575d08 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -30,6 +30,7 @@
+ #include "amdgpu_dm.h"
+ #include "dc.h"
+ #include "amdgpu_securedisplay.h"
++#include "amdgpu_dm_psr.h"
+
+ static const char *const pipe_crc_sources[] = {
+ "none",
+@@ -224,6 +225,10 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
+
+ mutex_lock(&adev->dm.dc_lock);
+
++ /* For PSR1, check that the panel has exited PSR */
++ if (stream_state->link->psr_settings.psr_version < DC_PSR_VERSION_SU_1)
++ amdgpu_dm_psr_wait_disable(stream_state);
++
+ /* Enable or disable CRTC CRC generation */
+ if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) {
+ if (!dc_stream_configure_crc(stream_state->ctx->dc,
+@@ -357,6 +362,17 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+
+ }
+
++ /*
++ * Reading the CRC requires the vblank interrupt handler to be
++ * enabled. Keep a reference until CRC capture stops.
++ */
++ enabled = amdgpu_dm_is_valid_crc_source(cur_crc_src);
++ if (!enabled && enable) {
++ ret = drm_crtc_vblank_get(crtc);
++ if (ret)
++ goto cleanup;
++ }
++
+ #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+ /* Reset secure_display when we change crc source from debugfs */
+ amdgpu_dm_set_crc_window_default(crtc, crtc_state->stream);
+@@ -367,16 +383,7 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ goto cleanup;
+ }
+
+- /*
+- * Reading the CRC requires the vblank interrupt handler to be
+- * enabled. Keep a reference until CRC capture stops.
+- */
+- enabled = amdgpu_dm_is_valid_crc_source(cur_crc_src);
+ if (!enabled && enable) {
+- ret = drm_crtc_vblank_get(crtc);
+- if (ret)
+- goto cleanup;
+-
+ if (dm_is_crc_source_dprx(source)) {
+ if (drm_dp_start_crc(aux, crtc)) {
+ DRM_DEBUG_DRIVER("dp start crc failed\n");
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 9be87b53251739..70fcfae8e4c552 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -93,7 +93,7 @@ int amdgpu_dm_crtc_set_vupdate_irq(struct drm_crtc *crtc, bool enable)
+ return rc;
+ }
+
+-bool amdgpu_dm_crtc_vrr_active(struct dm_crtc_state *dm_state)
++bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state)
+ {
+ return dm_state->freesync_config.state == VRR_STATE_ACTIVE_VARIABLE ||
+ dm_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED;
+@@ -142,7 +142,7 @@ static void amdgpu_dm_crtc_set_panel_sr_feature(
+ amdgpu_dm_replay_enable(vblank_work->stream, true);
+ } else if (vblank_enabled) {
+ if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
+- amdgpu_dm_psr_disable(vblank_work->stream);
++ amdgpu_dm_psr_disable(vblank_work->stream, false);
+ } else if (link->psr_settings.psr_feature_enabled &&
+ allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
+index 17e948753f59bd..c1212947a77b83 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
+@@ -37,7 +37,7 @@ int amdgpu_dm_crtc_set_vupdate_irq(struct drm_crtc *crtc, bool enable);
+
+ bool amdgpu_dm_crtc_vrr_active_irq(struct amdgpu_crtc *acrtc);
+
+-bool amdgpu_dm_crtc_vrr_active(struct dm_crtc_state *dm_state);
++bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state);
+
+ int amdgpu_dm_crtc_enable_vblank(struct drm_crtc *crtc);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index db56b0aa545454..98e88903d07d52 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -3638,7 +3638,7 @@ static int crc_win_update_set(void *data, u64 val)
+ /* PSR may write to OTG CRC window control register,
+ * so close it before starting secure_display.
+ */
+- amdgpu_dm_psr_disable(acrtc->dm_irq_params.stream);
++ amdgpu_dm_psr_disable(acrtc->dm_irq_params.stream, true);
+
+ spin_lock_irq(&adev_to_drm(adev)->event_lock);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 32b025c92c63cf..3d624ae6d9bdfe 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1831,11 +1831,15 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ if (immediate_upstream_port) {
+ virtual_channel_bw_in_kbps = kbps_from_pbn(immediate_upstream_port->full_pbn);
+ virtual_channel_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
+- if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
+- DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link."
+- "Max dsc compression can't fit into MST available bw\n");
+- return DC_FAIL_BANDWIDTH_VALIDATE;
+- }
++ } else {
++ /* For topology LCT 1 case - only one mstb*/
++ virtual_channel_bw_in_kbps = root_link_bw_in_kbps;
++ }
++
++ if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
++ DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link. "
++ "Max dsc compression can't fit into MST available bw\n");
++ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index f40240aafe988e..45858bf1523d8f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -201,14 +201,13 @@ void amdgpu_dm_psr_enable(struct dc_stream_state *stream)
+ *
+ * Return: true if success
+ */
+-bool amdgpu_dm_psr_disable(struct dc_stream_state *stream)
++bool amdgpu_dm_psr_disable(struct dc_stream_state *stream, bool wait)
+ {
+- unsigned int power_opt = 0;
+ bool psr_enable = false;
+
+ DRM_DEBUG_DRIVER("Disabling psr...\n");
+
+- return dc_link_set_psr_allow_active(stream->link, &psr_enable, true, false, &power_opt);
++ return dc_link_set_psr_allow_active(stream->link, &psr_enable, wait, false, NULL);
+ }
+
+ /*
+@@ -251,3 +250,33 @@ bool amdgpu_dm_psr_is_active_allowed(struct amdgpu_display_manager *dm)
+
+ return allow_active;
+ }
++
++/**
++ * amdgpu_dm_psr_wait_disable() - Wait for eDP panel to exit PSR
++ * @stream: stream state attached to the eDP link
++ *
++ * Waits for a max of 500ms for the eDP panel to exit PSR.
++ *
++ * Return: true if panel exited PSR, false otherwise.
++ */
++bool amdgpu_dm_psr_wait_disable(struct dc_stream_state *stream)
++{
++ enum dc_psr_state psr_state = PSR_STATE0;
++ struct dc_link *link = stream->link;
++ int retry_count;
++
++ if (link == NULL)
++ return false;
++
++ for (retry_count = 0; retry_count <= 1000; retry_count++) {
++ dc_link_get_psr_state(link, &psr_state);
++ if (psr_state == PSR_STATE0)
++ break;
++ udelay(500);
++ }
++
++ if (psr_state != PSR_STATE0)
++ return false;
++
++ return true;
++}
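
The wait loop above polls the PSR state up to 1000 times with a 500 us delay, i.e. a roughly 500 ms budget. A classic pitfall in such loops is inferring timeout from the loop counter, which is off by one when the last iteration breaks out; judging the final sampled state directly is unambiguous. A minimal user-space sketch of the pattern (read_state() is an illustrative stand-in for dc_link_get_psr_state(), not driver API):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define TARGET_STATE 0

    /* Stand-in for the hardware query: leaves PSR after a few polls. */
    static int read_state(void)
    {
        static int polls;

        return polls++ < 3 ? 1 : TARGET_STATE;
    }

    /* Poll up to ~500 ms (1000 * 500 us) for the target state. Success
     * is decided from the last sampled state, not the loop counter, so
     * there is no off-by-one at the timeout boundary. */
    static bool wait_for_state(void)
    {
        int state = -1;
        int retry;

        for (retry = 0; retry < 1000; retry++) {
            state = read_state();
            if (state == TARGET_STATE)
                break;
            usleep(500);
        }

        return state == TARGET_STATE;
    }

    int main(void)
    {
        printf("exited: %s\n", wait_for_state() ? "yes" : "no");
        return 0;
    }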
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
+index cd2d45c2b5ef01..e2366321a3c1bd 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
+@@ -34,8 +34,9 @@
+ void amdgpu_dm_set_psr_caps(struct dc_link *link);
+ void amdgpu_dm_psr_enable(struct dc_stream_state *stream);
+ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream);
+-bool amdgpu_dm_psr_disable(struct dc_stream_state *stream);
++bool amdgpu_dm_psr_disable(struct dc_stream_state *stream, bool wait);
+ bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
+ bool amdgpu_dm_psr_is_active_allowed(struct amdgpu_display_manager *dm);
++bool amdgpu_dm_psr_wait_disable(struct dc_stream_state *stream);
+
+ #endif /* AMDGPU_DM_AMDGPU_DM_PSR_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+index beed7adbbd43e0..47d785204f29cb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+@@ -195,9 +195,9 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
+ .dcn_downspread_percent = 0.5,
+ .gpuvm_min_page_size_bytes = 4096,
+ .hostvm_min_page_size_bytes = 4096,
+- .do_urgent_latency_adjustment = 1,
++ .do_urgent_latency_adjustment = 0,
+ .urgent_latency_adjustment_fabric_clock_component_us = 0,
+- .urgent_latency_adjustment_fabric_clock_reference_mhz = 3000,
++ .urgent_latency_adjustment_fabric_clock_reference_mhz = 0,
+ };
+
+ void dcn35_build_wm_range_table_fpu(struct clk_mgr *clk_mgr)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index cd2cf0ffc0f5cb..5a0a10144a73fe 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2549,11 +2549,12 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ &backend_workload_mask);
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+- if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+- ((smu->adev->pm.fw_version == 0x004e6601) ||
+- (smu->adev->pm.fw_version >= 0x004e7300))) ||
+- (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
+- smu->adev->pm.fw_version >= 0x00504500)) {
++ if ((workload_mask & (1 << PP_SMC_POWER_PROFILE_COMPUTE)) &&
++ ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
++ ((smu->adev->pm.fw_version == 0x004e6601) ||
++ (smu->adev->pm.fw_version >= 0x004e7300))) ||
++ (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
++ smu->adev->pm.fw_version >= 0x00504500))) {
+ workload_type = smu_cmn_to_asic_specific_index(smu,
+ CMN2ASIC_MAPPING_WORKLOAD,
+ PP_SMC_POWER_PROFILE_POWERSAVING);
+diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c
+index 35557d98d7a700..3d16e9406dc6b1 100644
+--- a/drivers/gpu/drm/i915/display/intel_fb.c
++++ b/drivers/gpu/drm/i915/display/intel_fb.c
+@@ -1613,7 +1613,7 @@ int intel_fill_fb_info(struct drm_i915_private *i915, struct intel_framebuffer *
+ * arithmetic related to alignment and offset calculation.
+ */
+ if (is_gen12_ccs_cc_plane(&fb->base, i)) {
+- if (IS_ALIGNED(fb->base.offsets[i], PAGE_SIZE))
++ if (IS_ALIGNED(fb->base.offsets[i], 64))
+ continue;
+ else
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
+index 09686d038d6053..7cc84472cecec2 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
+@@ -387,11 +387,13 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan,
+ if (f) {
+ struct nouveau_channel *prev;
+ bool must_wait = true;
++ bool local;
+
+ rcu_read_lock();
+ prev = rcu_dereference(f->channel);
+- if (prev && (prev == chan ||
+- fctx->sync(f, prev, chan) == 0))
++ local = prev && prev->cli->drm == chan->cli->drm;
++ if (local && (prev == chan ||
++ fctx->sync(f, prev, chan) == 0))
+ must_wait = false;
+ rcu_read_unlock();
+ if (!must_wait)
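
The nouveau change gates the fast path on the two channels belonging to the same DRM client: only then may a previous fence be short-circuited through the channel-level fctx->sync() callback, otherwise the code falls back to a full fence wait. A compilable sketch of that decision, with illustrative types standing in for nouveau_channel and nouveau_cli:

    #include <stdbool.h>
    #include <stddef.h>

    struct client { int id; };
    struct channel { struct client *cli; };

    /* Stand-in for fctx->sync(): fast same-device semaphore sync. */
    static int hw_sync(struct channel *from, struct channel *to)
    {
        (void)from; (void)to;
        return 0;
    }

    /* A full CPU-side wait is needed unless the previous channel is
     * "local" (same client) and can be synced on the device, mirroring
     * the check added to nouveau_fence_sync(). */
    static bool must_wait(struct channel *prev, struct channel *chan)
    {
        bool local = prev && prev->cli == chan->cli;

        return !(local && (prev == chan || hw_sync(prev, chan) == 0));
    }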
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
+index 841e3b69fcaf3e..5a0c9b8a79f3ec 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
+@@ -31,6 +31,7 @@ mcp77_sor = {
+ .state = g94_sor_state,
+ .power = nv50_sor_power,
+ .clock = nv50_sor_clock,
++ .bl = &nv50_sor_bl,
+ .hdmi = &g84_sor_hdmi,
+ .dp = &g94_sor_dp,
+ };
+diff --git a/drivers/gpu/drm/tests/drm_kunit_helpers.c b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+index 04a6b8cc62ac67..3c0b7824c0be37 100644
+--- a/drivers/gpu/drm/tests/drm_kunit_helpers.c
++++ b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+@@ -320,8 +320,7 @@ static void kunit_action_drm_mode_destroy(void *ptr)
+ }
+
+ /**
+- * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC
+- for a KUnit test
++ * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test
+ * @test: The test context object
+ * @dev: DRM device
+ * @video_code: CEA VIC of the mode
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 20bf33702c3c4f..da203045df9bec 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -108,6 +108,7 @@ v3d_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->bin_job->base, V3D_BIN);
+ trace_v3d_bcl_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->bin_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
+@@ -118,6 +119,7 @@ v3d_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->render_job->base, V3D_RENDER);
+ trace_v3d_rcl_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->render_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
+@@ -128,6 +130,7 @@ v3d_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->csd_job->base, V3D_CSD);
+ trace_v3d_csd_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->csd_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
+@@ -165,6 +168,7 @@ v3d_hub_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->tfu_job->base, V3D_TFU);
+ trace_v3d_tfu_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->tfu_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
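
Each v3d hunk clears the per-queue job pointer right after the job's fence is signalled, so no later code path can treat a completed job as still in flight. The shape of the pattern, reduced to a compilable sketch with illustrative names:

    #include <stdbool.h>
    #include <stddef.h>

    struct job { bool done; };

    struct queue {
        struct job *active; /* job currently on the hardware, if any */
    };

    /* Stand-in for the stats update plus dma_fence_signal(). */
    static void complete_job(struct job *job)
    {
        job->done = true;
    }

    /* IRQ-handler shaped routine: once the job is signalled, drop the
     * queue's reference to it (mirrors v3d->bin_job = NULL above). */
    static void handle_done_irq(struct queue *q)
    {
        if (!q->active)
            return;

        complete_job(q->active);
        q->active = NULL;
    }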
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index a0e433fbcba67c..183cda50094cb7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -443,7 +443,8 @@ static int vmw_bo_init(struct vmw_private *dev_priv,
+
+ if (params->pin)
+ ttm_bo_pin(&vmw_bo->tbo);
+- ttm_bo_unreserve(&vmw_bo->tbo);
++ if (!params->keep_resv)
++ ttm_bo_unreserve(&vmw_bo->tbo);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+index 43b5439ec9f760..c21ba7ff773682 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+@@ -56,8 +56,9 @@ struct vmw_bo_params {
+ u32 domain;
+ u32 busy_domain;
+ enum ttm_bo_type bo_type;
+- size_t size;
+ bool pin;
++ bool keep_resv;
++ size_t size;
+ struct dma_resv *resv;
+ struct sg_table *sg;
+ };
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 2825dd3149ed5c..2e84e1029732d3 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -401,7 +401,8 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv)
+ .busy_domain = VMW_BO_DOMAIN_SYS,
+ .bo_type = ttm_bo_type_kernel,
+ .size = PAGE_SIZE,
+- .pin = true
++ .pin = true,
++ .keep_resv = true,
+ };
+
+ /*
+@@ -413,10 +414,6 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv)
+ if (unlikely(ret != 0))
+ return ret;
+
+- ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL);
+- BUG_ON(ret != 0);
+- vmw_bo_pin_reserved(vbo, true);
+-
+ ret = ttm_bo_kmap(&vbo->tbo, 0, 1, &map);
+ if (likely(ret == 0)) {
+ result = ttm_kmap_obj_virtual(&map, &dummy);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+index b9857f37ca1ac6..ed5015ced3920a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+@@ -206,6 +206,7 @@ struct drm_gem_object *vmw_prime_import_sg_table(struct drm_device *dev,
+ .bo_type = ttm_bo_type_sg,
+ .size = attach->dmabuf->size,
+ .pin = false,
++ .keep_resv = true,
+ .resv = attach->dmabuf->resv,
+ .sg = table,
+
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 10d596cb4b4029..5f99f7437ae614 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -750,6 +750,7 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ struct vmw_plane_state *old_vps = vmw_plane_state_to_vps(old_state);
+ struct vmw_bo *old_bo = NULL;
+ struct vmw_bo *new_bo = NULL;
++ struct ww_acquire_ctx ctx;
+ s32 hotspot_x, hotspot_y;
+ int ret;
+
+@@ -769,9 +770,11 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ if (du->cursor_surface)
+ du->cursor_age = du->cursor_surface->snooper.age;
+
++ ww_acquire_init(&ctx, &reservation_ww_class);
++
+ if (!vmw_user_object_is_null(&old_vps->uo)) {
+ old_bo = vmw_user_object_buffer(&old_vps->uo);
+- ret = ttm_bo_reserve(&old_bo->tbo, false, false, NULL);
++ ret = ttm_bo_reserve(&old_bo->tbo, false, false, &ctx);
+ if (ret != 0)
+ return;
+ }
+@@ -779,9 +782,14 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ if (!vmw_user_object_is_null(&vps->uo)) {
+ new_bo = vmw_user_object_buffer(&vps->uo);
+ if (old_bo != new_bo) {
+- ret = ttm_bo_reserve(&new_bo->tbo, false, false, NULL);
+- if (ret != 0)
++ ret = ttm_bo_reserve(&new_bo->tbo, false, false, &ctx);
++ if (ret != 0) {
++ if (old_bo) {
++ ttm_bo_unreserve(&old_bo->tbo);
++ ww_acquire_fini(&ctx);
++ }
+ return;
++ }
+ } else {
+ new_bo = NULL;
+ }
+@@ -803,10 +811,12 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ hotspot_x, hotspot_y);
+ }
+
+- if (old_bo)
+- ttm_bo_unreserve(&old_bo->tbo);
+ if (new_bo)
+ ttm_bo_unreserve(&new_bo->tbo);
++ if (old_bo)
++ ttm_bo_unreserve(&old_bo->tbo);
++
++ ww_acquire_fini(&ctx);
+
+ du->cursor_x = new_state->crtc_x + du->set_gui_x;
+ du->cursor_y = new_state->crtc_y + du->set_gui_y;
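
The cursor-plane fix reserves the old and new buffers under one shared acquisition context and, if the second reservation fails, releases the first reservation and the context before bailing out. A stub sketch of that acquire/unwind ordering; acquire_init(), reserve() and friends are stand-ins for ww_acquire_init()/ttm_bo_reserve(), not real API:

    #include <stdbool.h>
    #include <stddef.h>

    struct ctx { int unused; };
    struct bo { int unused; };

    static void acquire_init(struct ctx *c) { (void)c; }
    static void acquire_fini(struct ctx *c) { (void)c; }
    static int reserve(struct bo *b, struct ctx *c) { (void)b; (void)c; return 0; }
    static void unreserve(struct bo *b) { (void)b; }

    /* Reserve two buffers under one context; on failure, unwind in
     * reverse order of acquisition. */
    static bool reserve_pair(struct bo *old_bo, struct bo *new_bo,
                             struct ctx *c)
    {
        acquire_init(c);

        if (old_bo && reserve(old_bo, c))
            goto fail_ctx;

        if (new_bo && new_bo != old_bo && reserve(new_bo, c)) {
            if (old_bo)
                unreserve(old_bo);
            goto fail_ctx;
        }

        return true;

    fail_ctx:
        acquire_fini(c);
        return false;
    }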
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+index a01ca3226d0af8..7fb1c88bcc475f 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+@@ -896,7 +896,8 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ .busy_domain = VMW_BO_DOMAIN_SYS,
+ .bo_type = ttm_bo_type_device,
+ .size = size,
+- .pin = true
++ .pin = true,
++ .keep_resv = true,
+ };
+
+ if (!vmw_shader_id_ok(user_key, shader_type))
+@@ -906,10 +907,6 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ if (unlikely(ret != 0))
+ goto out;
+
+- ret = ttm_bo_reserve(&buf->tbo, false, true, NULL);
+- if (unlikely(ret != 0))
+- goto no_reserve;
+-
+ /* Map and copy shader bytecode. */
+ ret = ttm_bo_kmap(&buf->tbo, 0, PFN_UP(size), &map);
+ if (unlikely(ret != 0)) {
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+index 621d98b376bbbc..5553892d7c3e0d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+@@ -572,15 +572,14 @@ int vmw_bo_create_and_populate(struct vmw_private *dev_priv,
+ .busy_domain = domain,
+ .bo_type = ttm_bo_type_kernel,
+ .size = bo_size,
+- .pin = true
++ .pin = true,
++ .keep_resv = true,
+ };
+
+ ret = vmw_bo_create(dev_priv, &bo_params, &vbo);
+ if (unlikely(ret != 0))
+ return ret;
+
+- ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL);
+- BUG_ON(ret != 0);
+ ret = vmw_ttm_populate(vbo->tbo.bdev, vbo->tbo.ttm, &ctx);
+ if (likely(ret == 0)) {
+ struct vmw_ttm_tt *vmw_tt =
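
Across the vmwgfx call sites in this series, the new keep_resv flag lets the buffer come back from creation still reserved, replacing the old create-then-re-reserve dance (and its BUG_ON) at each caller; the caller populates the buffer and drops the single reservation itself. Reduced to a sketch, assuming a simple reserved flag:

    #include <stdbool.h>

    struct obj { bool reserved; };

    struct create_params {
        bool pin;
        bool keep_resv; /* leave the object reserved for the caller */
    };

    /* Objects are born reserved; release only when the caller did not
     * ask to keep the reservation (mirrors vmw_bo_init()). */
    static void obj_init(struct obj *o, const struct create_params *p)
    {
        o->reserved = true;
        if (!p->keep_resv)
            o->reserved = false;
    }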
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
+index 547919e8ce9e45..b11bc0f00dfda1 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine.c
+@@ -417,7 +417,7 @@ hw_engine_setup_default_state(struct xe_hw_engine *hwe)
+ * Bspec: 72161
+ */
+ const u8 mocs_write_idx = gt->mocs.uc_index;
+- const u8 mocs_read_idx = hwe->class == XE_ENGINE_CLASS_COMPUTE &&
++ const u8 mocs_read_idx = hwe->class == XE_ENGINE_CLASS_COMPUTE && IS_DGFX(xe) &&
+ (GRAPHICS_VER(xe) >= 20 || xe->info.platform == XE_PVC) ?
+ gt->mocs.wb_index : gt->mocs.uc_index;
+ u32 ring_cmd_cctl_val = REG_FIELD_PREP(CMD_CCTL_WRITE_OVERRIDE_MASK, mocs_write_idx) |
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 78823f53d2905d..6fc00d63b2857f 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -1980,6 +1980,7 @@ static const struct xe_mmio_range xe2_oa_mux_regs[] = {
+ { .start = 0x5194, .end = 0x5194 }, /* SYS_MEM_LAT_MEASURE_MERTF_GRP_3D */
+ { .start = 0x8704, .end = 0x8704 }, /* LMEM_LAT_MEASURE_MCFG_GRP */
+ { .start = 0xB1BC, .end = 0xB1BC }, /* L3_BANK_LAT_MEASURE_LBCF_GFX */
++ { .start = 0xD0E0, .end = 0xD0F4 }, /* VISACTL */
+ { .start = 0xE18C, .end = 0xE18C }, /* SAMPLER_MODE */
+ { .start = 0xE590, .end = 0xE590 }, /* TDL_LSC_LAT_MEASURE_TDL_GFX */
+ { .start = 0x13000, .end = 0x137FC }, /* PES_0_PESL0 - PES_63_UPPER_PESL3 */
+diff --git a/drivers/hwmon/ltc2991.c b/drivers/hwmon/ltc2991.c
+index 7ca139e4b6aff0..6d5d4cb846daf3 100644
+--- a/drivers/hwmon/ltc2991.c
++++ b/drivers/hwmon/ltc2991.c
+@@ -125,7 +125,7 @@ static int ltc2991_get_curr(struct ltc2991_state *st, u32 reg, int channel,
+
+ /* Vx-Vy, 19.075uV/LSB */
+ *val = DIV_ROUND_CLOSEST(sign_extend32(reg_val, 14) * 19075,
+- st->r_sense_uohm[channel]);
++ (s32)st->r_sense_uohm[channel]);
+
+ return 0;
+ }
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index 1c2cb12071b808..5acbfd7d088dd5 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -207,7 +207,8 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ *val = sign_extend32(regval,
+ reg == TMP51X_SHUNT_CURRENT_RESULT ?
+ 16 - tmp51x_get_pga_shift(data) : 15);
+- *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms);
++ *val = DIV_ROUND_CLOSEST(*val * 10 * (long)MILLI, (long)data->shunt_uohms);
++
+ break;
+ case TMP51X_BUS_VOLTAGE_RESULT:
+ case TMP51X_BUS_VOLTAGE_H_LIMIT:
+@@ -223,7 +224,7 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ case TMP51X_BUS_CURRENT_RESULT:
+ // Current = (ShuntVoltage * CalibrationRegister) / 4096
+ *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua;
+- *val = DIV_ROUND_CLOSEST(*val, MILLI);
++ *val = DIV_ROUND_CLOSEST(*val, (long)MILLI);
+ break;
+ case TMP51X_LOCAL_TEMP_RESULT:
+ case TMP51X_REMOTE_TEMP_RESULT_1:
+@@ -263,7 +264,7 @@ static int tmp51x_set_value(struct tmp51x_data *data, u8 reg, long val)
+ * The user enter current value and we convert it to
+ * voltage. 1lsb = 10uV
+ */
+- val = DIV_ROUND_CLOSEST(val * data->shunt_uohms, 10 * MILLI);
++ val = DIV_ROUND_CLOSEST(val * (long)data->shunt_uohms, 10 * (long)MILLI);
+ max_val = U16_MAX >> tmp51x_get_pga_shift(data);
+ regval = clamp_val(val, -max_val, max_val);
+ break;
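
Both hwmon fixes above address the same C pitfall: DIV_ROUND_CLOSEST() selects its rounding branch from the operand types, so an unsigned divisor (the u32 shunt resistance) forces the "both positive" branch and drags a negative dividend through the usual arithmetic conversions into a huge unsigned value. A compilable demonstration with a simplified copy of the kernel macro:

    #include <stdio.h>

    /* Simplified DIV_ROUND_CLOSEST(): the rounding direction is chosen
     * from the types, so an unsigned divisor always takes the first
     * branch, even for negative x. */
    #define DIV_ROUND_CLOSEST(x, divisor) ({        \
        typeof(x) __x = (x);                        \
        typeof(divisor) __d = (divisor);            \
        (((typeof(x))-1) > 0 ||                     \
         ((typeof(divisor))-1) > 0 ||               \
         ((__x) > 0) == ((__d) > 0)) ?              \
            ((__x) + ((__d) / 2)) / (__d) :         \
            ((__x) - ((__d) / 2)) / (__d);          \
    })

    int main(void)
    {
        int val = -19075;            /* sign-extended ADC reading */
        unsigned int r_sense = 1000; /* u32 divisor, as in the drivers */

        /* Negative dividend is converted to unsigned before dividing: */
        printf("u32 divisor: %u\n", DIV_ROUND_CLOSEST(val, r_sense));

        /* Casting the divisor to a signed type keeps the math signed: */
        printf("s32 divisor: %d\n", DIV_ROUND_CLOSEST(val, (int)r_sense));

        return 0;
    }

With the cast, as in the ltc2991 and tmp513 hunks, the second line prints -19 where the first produces the bogus 4294948.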
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 9267df38c2d0a1..3991224148214a 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -130,6 +130,8 @@
+ #define ID_P_PM_BLOCKED BIT(31)
+ #define ID_P_MASK GENMASK(31, 27)
+
++#define ID_SLAVE_NACK BIT(0)
++
+ enum rcar_i2c_type {
+ I2C_RCAR_GEN1,
+ I2C_RCAR_GEN2,
+@@ -166,6 +168,7 @@ struct rcar_i2c_priv {
+ int irq;
+
+ struct i2c_client *host_notify_client;
++ u8 slave_flags;
+ };
+
+ #define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent)
+@@ -655,6 +658,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ {
+ u32 ssr_raw, ssr_filtered;
+ u8 value;
++ int ret;
+
+ ssr_raw = rcar_i2c_read(priv, ICSSR) & 0xff;
+ ssr_filtered = ssr_raw & rcar_i2c_read(priv, ICSIER);
+@@ -670,7 +674,10 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ rcar_i2c_write(priv, ICRXTX, value);
+ rcar_i2c_write(priv, ICSIER, SDE | SSR | SAR);
+ } else {
+- i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value);
++ ret = i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value);
++ if (ret)
++ priv->slave_flags |= ID_SLAVE_NACK;
++
+ rcar_i2c_read(priv, ICRXTX); /* dummy read */
+ rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR);
+ }
+@@ -683,18 +690,21 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ if (ssr_filtered & SSR) {
+ i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
+ rcar_i2c_write(priv, ICSCR, SIE | SDBS); /* clear our NACK */
++ priv->slave_flags &= ~ID_SLAVE_NACK;
+ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
+ }
+
+ /* master wants to write to us */
+ if (ssr_filtered & SDR) {
+- int ret;
+-
+ value = rcar_i2c_read(priv, ICRXTX);
+ ret = i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_RECEIVED, &value);
+- /* Send NACK in case of error */
+- rcar_i2c_write(priv, ICSCR, SIE | SDBS | (ret < 0 ? FNA : 0));
++ if (ret)
++ priv->slave_flags |= ID_SLAVE_NACK;
++
++ /* Send NACK in case of error, but it will come 1 byte late :( */
++ rcar_i2c_write(priv, ICSCR, SIE | SDBS |
++ (priv->slave_flags & ID_SLAVE_NACK ? FNA : 0));
+ rcar_i2c_write(priv, ICSSR, ~SDR & 0xff);
+ }
+
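
As the comment in the hunk notes, this controller can only NACK the byte after the offending one, so the driver now latches a NACK flag on the first backend error and keeps NACKing every following byte until the STOP condition resets it. The state machine, reduced to a compilable sketch:

    #include <stdbool.h>

    struct slave {
        bool nack; /* latched until the next STOP */
    };

    /* Byte from the master; ret != 0 means the backend refused it.
     * Once set, the flag makes the controller NACK all later bytes. */
    static void on_byte(struct slave *s, int ret)
    {
        if (ret)
            s->nack = true;
    }

    /* STOP seen on the bus: the transfer is over, start clean. */
    static void on_stop(struct slave *s)
    {
        s->nack = false;
    }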
+diff --git a/drivers/i2c/i2c-atr.c b/drivers/i2c/i2c-atr.c
+index f21475ae592183..0d54d0b5e32731 100644
+--- a/drivers/i2c/i2c-atr.c
++++ b/drivers/i2c/i2c-atr.c
+@@ -412,7 +412,7 @@ static int i2c_atr_bus_notifier_call(struct notifier_block *nb,
+ dev_name(dev), ret);
+ break;
+
+- case BUS_NOTIFY_DEL_DEVICE:
++ case BUS_NOTIFY_REMOVED_DEVICE:
+ i2c_atr_detach_client(client->adapter, client);
+ break;
+
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 7c810893bfa332..75d30861ffe21a 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -1562,6 +1562,7 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+ res = device_add(&adap->dev);
+ if (res) {
+ pr_err("adapter '%s': can't register device (%d)\n", adap->name, res);
++ put_device(&adap->dev);
+ goto out_list;
+ }
+
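
The i2c-core fix follows the driver-core ownership rule: after device_initialize(), the embedded struct device holds a reference that a failing device_add() does not drop, so the error path must call put_device(), which in turn invokes the release() callback to free the object. A kernel-style sketch of that shape (not compilable stand-alone, but the three calls are the real driver-core API):

    #include <linux/device.h>

    static int register_embedded_dev(struct device *dev)
    {
        int res;

        device_initialize(dev);

        res = device_add(dev);
        if (res) {
            /* Balances device_initialize(); frees via release(). */
            put_device(dev);
            return res;
        }

        return 0;
    }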
+diff --git a/drivers/i2c/i2c-slave-testunit.c b/drivers/i2c/i2c-slave-testunit.c
+index 9fe3150378e863..7ae0c7902f670b 100644
+--- a/drivers/i2c/i2c-slave-testunit.c
++++ b/drivers/i2c/i2c-slave-testunit.c
+@@ -38,6 +38,7 @@ enum testunit_regs {
+
+ enum testunit_flags {
+ TU_FLAG_IN_PROCESS,
++ TU_FLAG_NACK,
+ };
+
+ struct testunit_data {
+@@ -90,8 +91,10 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+
+ switch (event) {
+ case I2C_SLAVE_WRITE_REQUESTED:
+- if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags))
+- return -EBUSY;
++ if (tu->flags & (BIT(TU_FLAG_IN_PROCESS) | BIT(TU_FLAG_NACK))) {
++ ret = -EBUSY;
++ break;
++ }
+
+ memset(tu->regs, 0, TU_NUM_REGS);
+ tu->reg_idx = 0;
+@@ -99,8 +102,10 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ break;
+
+ case I2C_SLAVE_WRITE_RECEIVED:
+- if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags))
+- return -EBUSY;
++ if (tu->flags & (BIT(TU_FLAG_IN_PROCESS) | BIT(TU_FLAG_NACK))) {
++ ret = -EBUSY;
++ break;
++ }
+
+ if (tu->reg_idx < TU_NUM_REGS)
+ tu->regs[tu->reg_idx] = *val;
+@@ -129,6 +134,8 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ * here because we still need them in the workqueue!
+ */
+ tu->reg_idx = 0;
++
++ clear_bit(TU_FLAG_NACK, &tu->flags);
+ break;
+
+ case I2C_SLAVE_READ_PROCESSED:
+@@ -151,6 +158,10 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ break;
+ }
+
++ /* If an error occurred at any point, NACK everything until the next STOP */
++ if (ret)
++ set_bit(TU_FLAG_NACK, &tu->flags);
++
+ return ret;
+ }
+
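
Note that test_bit() takes a bit number, not a mask: TU_FLAG_IN_PROCESS | TU_FLAG_NACK is 0 | 1 == 1, which would test only TU_FLAG_NACK, so the checks above compare the flags word against a BIT() mask instead. A compilable illustration of the pitfall:

    #include <stdio.h>

    /* Bit numbers, as in enum testunit_flags */
    enum { FLAG_IN_PROCESS, FLAG_NACK };

    /* Minimal test_bit(): nr is a bit number, not a mask. */
    static int test_bit(int nr, const unsigned long *addr)
    {
        return (*addr >> nr) & 1;
    }

    int main(void)
    {
        unsigned long flags = 1UL << FLAG_IN_PROCESS;

        /* 0 | 1 == 1: only FLAG_NACK is tested, IN_PROCESS is missed. */
        printf("or'ed bit numbers: %d\n",
               test_bit(FLAG_IN_PROCESS | FLAG_NACK, &flags));

        /* Testing each bit (or masking the word) sees both flags. */
        printf("separate tests:    %d\n",
               test_bit(FLAG_IN_PROCESS, &flags) ||
               test_bit(FLAG_NACK, &flags));

        return 0;
    }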
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 7e2686b606c04d..cec7f3447e1935 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -261,7 +261,9 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ pm_runtime_no_callbacks(&pdev->dev);
+
+ /* switch to first parent as active master */
+- i2c_demux_activate_master(priv, 0);
++ err = i2c_demux_activate_master(priv, 0);
++ if (err)
++ goto err_rollback;
+
+ err = device_create_file(&pdev->dev, &dev_attr_available_masters);
+ if (err)
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index b20cffcc3e7d2d..14e434ff51edea 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -2269,6 +2269,7 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ qp_attr->retry_cnt = qplib_qp->retry_cnt;
+ qp_attr->rnr_retry = qplib_qp->rnr_retry;
+ qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer;
++ qp_attr->port_num = __to_ib_port_num(qplib_qp->port_id);
+ qp_attr->rq_psn = qplib_qp->rq.psn;
+ qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic;
+ qp_attr->sq_psn = qplib_qp->sq.psn;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+index b789e47ec97a85..9cd8f770d1b27e 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+@@ -264,6 +264,10 @@ void bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
+ int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+ void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
+
++static inline u32 __to_ib_port_num(u16 port_id)
++{
++ return (u32)port_id + 1;
++}
+
+ unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp);
+ void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 828e2f9808012b..613b5fc70e13ea 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -1479,6 +1479,7 @@ int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ qp->dest_qpn = le32_to_cpu(sb->dest_qp_id);
+ memcpy(qp->smac, sb->src_mac, 6);
+ qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id);
++ qp->port_id = le16_to_cpu(sb->port_id);
+ bail:
+ dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+ sbuf.sb, sbuf.dma_addr);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index d8c71c024613bf..6f02954eb1429f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -298,6 +298,7 @@ struct bnxt_qplib_qp {
+ u32 dest_qpn;
+ u8 smac[6];
+ u16 vlan_id;
++ u16 port_id;
+ u8 nw_type;
+ struct bnxt_qplib_ah ah;
+
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index d9b6ec844cdda0..0d3a889b1905c7 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1961,7 +1961,7 @@ static int its_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
+ if (!is_v4(its_dev->its))
+ return -EINVAL;
+
+- guard(raw_spinlock_irq)(&its_dev->event_map.vlpi_lock);
++ guard(raw_spinlock)(&its_dev->event_map.vlpi_lock);
+
+ /* Unmap request? */
+ if (!info)
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index b0bfb61539c202..8fdee511bc0f2c 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -1522,7 +1522,7 @@ static int gic_retrigger(struct irq_data *data)
+ static int gic_cpu_pm_notifier(struct notifier_block *self,
+ unsigned long cmd, void *v)
+ {
+- if (cmd == CPU_PM_EXIT) {
++ if (cmd == CPU_PM_EXIT || cmd == CPU_PM_ENTER_FAILED) {
+ if (gic_dist_security_disabled())
+ gic_enable_redist(true);
+ gic_cpu_sys_reg_enable();
+diff --git a/drivers/irqchip/irqchip.c b/drivers/irqchip/irqchip.c
+index 1eeb0d0156ce9e..0ee7b6b71f5fa5 100644
+--- a/drivers/irqchip/irqchip.c
++++ b/drivers/irqchip/irqchip.c
+@@ -35,11 +35,10 @@ void __init irqchip_init(void)
+ int platform_irqchip_probe(struct platform_device *pdev)
+ {
+ struct device_node *np = pdev->dev.of_node;
+- struct device_node *par_np = of_irq_find_parent(np);
++ struct device_node *par_np __free(device_node) = of_irq_find_parent(np);
+ of_irq_init_cb_t irq_init_cb = of_device_get_match_data(&pdev->dev);
+
+ if (!irq_init_cb) {
+- of_node_put(par_np);
+ return -EINVAL;
+ }
+
+@@ -55,7 +54,6 @@ int platform_irqchip_probe(struct platform_device *pdev)
+ * interrupt controller can check for specific domains as necessary.
+ */
+ if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY)) {
+- of_node_put(par_np);
+ return -EPROBE_DEFER;
+ }
+
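
The irqchip conversion relies on scope-based cleanup: __free(device_node) ties an of_node_put() to the variable's lifetime, so every early return drops the reference without an explicit call. The underlying compiler feature can be shown in plain user-space C; the kernel's version lives in linux/cleanup.h and is more elaborate:

    #include <stdio.h>
    #include <stdlib.h>

    static void free_buf(char **p)
    {
        free(*p);
        puts("freed");
    }

    /* Same mechanism the kernel macro builds on. */
    #define cleanup_with(fn) __attribute__((cleanup(fn)))

    static int demo(int fail)
    {
        char *buf cleanup_with(free_buf) = malloc(32);

        if (!buf)
            return -1;
        if (fail)
            return -2; /* buf is still freed on this path */

        return 0;
    }

    int main(void)
    {
        demo(1);
        demo(0);
        return 0;
    }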
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 8c57df44c40fe8..9d6e85bf227b92 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor,
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->dummy.nbytes)
+- op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
++ op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->data.nbytes)
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 6a716337f48be1..268399dfcf22f0 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -923,7 +923,6 @@ static void xgbe_phy_free_phy_device(struct xgbe_prv_data *pdata)
+
+ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
+ {
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int phy_id = phy_data->phydev->phy_id;
+
+@@ -945,14 +944,7 @@ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
+ phy_write(phy_data->phydev, 0x04, 0x0d01);
+ phy_write(phy_data->phydev, 0x00, 0x9140);
+
+- linkmode_set_bit_array(phy_10_100_features_array,
+- ARRAY_SIZE(phy_10_100_features_array),
+- supported);
+- linkmode_set_bit_array(phy_gbit_features_array,
+- ARRAY_SIZE(phy_gbit_features_array),
+- supported);
+-
+- linkmode_copy(phy_data->phydev->supported, supported);
++ linkmode_copy(phy_data->phydev->supported, PHY_GBIT_FEATURES);
+
+ phy_support_asym_pause(phy_data->phydev);
+
+@@ -964,7 +956,6 @@ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
+
+ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata)
+ {
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ struct xgbe_sfp_eeprom *sfp_eeprom = &phy_data->sfp_eeprom;
+ unsigned int phy_id = phy_data->phydev->phy_id;
+@@ -1028,13 +1019,7 @@ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata)
+ reg = phy_read(phy_data->phydev, 0x00);
+ phy_write(phy_data->phydev, 0x00, reg & ~0x00800);
+
+- linkmode_set_bit_array(phy_10_100_features_array,
+- ARRAY_SIZE(phy_10_100_features_array),
+- supported);
+- linkmode_set_bit_array(phy_gbit_features_array,
+- ARRAY_SIZE(phy_gbit_features_array),
+- supported);
+- linkmode_copy(phy_data->phydev->supported, supported);
++ linkmode_copy(phy_data->phydev->supported, PHY_GBIT_FEATURES);
+ phy_support_asym_pause(phy_data->phydev);
+
+ netif_dbg(pdata, drv, pdata->netdev,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c255445e97f3c5..603e9c968c44bd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4558,7 +4558,7 @@ void bnxt_set_ring_params(struct bnxt *bp)
+ /* Changing allocation mode of RX rings.
+ * TODO: Update when extending xdp_rxq_info to support allocation modes.
+ */
+-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
++static void __bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ {
+ struct net_device *dev = bp->dev;
+
+@@ -4579,15 +4579,30 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ bp->rx_skb_func = bnxt_rx_page_skb;
+ }
+ bp->rx_dir = DMA_BIDIRECTIONAL;
+- /* Disable LRO or GRO_HW */
+- netdev_update_features(dev);
+ } else {
+ dev->max_mtu = bp->max_mtu;
+ bp->flags &= ~BNXT_FLAG_RX_PAGE_MODE;
+ bp->rx_dir = DMA_FROM_DEVICE;
+ bp->rx_skb_func = bnxt_rx_skb;
+ }
+- return 0;
++}
++
++void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
++{
++ __bnxt_set_rx_skb_mode(bp, page_mode);
++
++ if (!page_mode) {
++ int rx, tx;
++
++ bnxt_get_max_rings(bp, &rx, &tx, true);
++ if (rx > 1) {
++ bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
++ bp->dev->hw_features |= NETIF_F_LRO;
++ }
++ }
++
++ /* Update LRO and GRO_HW availability */
++ netdev_update_features(bp->dev);
+ }
+
+ static void bnxt_free_vnic_attributes(struct bnxt *bp)
+@@ -15909,7 +15924,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (bp->max_fltr < BNXT_MAX_FLTR)
+ bp->max_fltr = BNXT_MAX_FLTR;
+ bnxt_init_l2_fltr_tbl(bp);
+- bnxt_set_rx_skb_mode(bp, false);
++ __bnxt_set_rx_skb_mode(bp, false);
+ bnxt_set_tpa_flags(bp);
+ bnxt_set_ring_params(bp);
+ bnxt_rdma_aux_device_init(bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 9e05704d94450e..bee645f58d0bde 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -2796,7 +2796,7 @@ void bnxt_reuse_rx_data(struct bnxt_rx_ring_info *rxr, u16 cons, void *data);
+ u32 bnxt_fw_health_readl(struct bnxt *bp, int reg_idx);
+ void bnxt_set_tpa_flags(struct bnxt *bp);
+ void bnxt_set_ring_params(struct bnxt *);
+-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
++void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
+ void bnxt_insert_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
+ void bnxt_del_one_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
+ int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index f88b641533fcc5..dc51dce209d5f0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -422,15 +422,8 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog)
+ bnxt_set_rx_skb_mode(bp, true);
+ xdp_features_set_redirect_target(dev, true);
+ } else {
+- int rx, tx;
+-
+ xdp_features_clear_redirect_target(dev);
+ bnxt_set_rx_skb_mode(bp, false);
+- bnxt_get_max_rings(bp, &rx, &tx, true);
+- if (rx > 1) {
+- bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
+- bp->dev->hw_features |= NETIF_F_LRO;
+- }
+ }
+ bp->tx_nr_rings_xdp = tx_xdp;
+ bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tc + tx_xdp;
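
The bnxt refactor follows the usual kernel double-underscore convention: __bnxt_set_rx_skb_mode() performs only the bare mode switch (safe at probe time, before the netdev is registered), while the public wrapper additionally restores aggregation/LRO and refreshes the feature set. The split, reduced to a sketch:

    #include <stdbool.h>

    struct dev {
        bool page_mode;
        bool lro;
    };

    /* Bare state switch: no side effects beyond the mode itself. */
    static void __set_rx_mode(struct dev *d, bool page_mode)
    {
        d->page_mode = page_mode;
    }

    /* Full-service wrapper for post-probe callers: switches the mode
     * and reconciles the dependent feature. */
    static void set_rx_mode(struct dev *d, bool page_mode)
    {
        __set_rx_mode(d, page_mode);
        d->lro = !page_mode;
    }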
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 9d9fcec41488e3..49d1748e0c043d 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1591,19 +1591,22 @@ static void fec_enet_tx(struct net_device *ndev, int budget)
+ fec_enet_tx_queue(ndev, i, budget);
+ }
+
+-static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
++static int fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
+ struct bufdesc *bdp, int index)
+ {
+ struct page *new_page;
+ dma_addr_t phys_addr;
+
+ new_page = page_pool_dev_alloc_pages(rxq->page_pool);
+- WARN_ON(!new_page);
+- rxq->rx_skb_info[index].page = new_page;
++ if (unlikely(!new_page))
++ return -ENOMEM;
+
++ rxq->rx_skb_info[index].page = new_page;
+ rxq->rx_skb_info[index].offset = FEC_ENET_XDP_HEADROOM;
+ phys_addr = page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM;
+ bdp->cbd_bufaddr = cpu_to_fec32(phys_addr);
++
++ return 0;
+ }
+
+ static u32
+@@ -1698,6 +1701,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
+ int cpu = smp_processor_id();
+ struct xdp_buff xdp;
+ struct page *page;
++ __fec32 cbd_bufaddr;
+ u32 sub_len = 4;
+
+ #if !defined(CONFIG_M5272)
+@@ -1766,12 +1770,17 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
+
+ index = fec_enet_get_bd_index(bdp, &rxq->bd);
+ page = rxq->rx_skb_info[index].page;
++ cbd_bufaddr = bdp->cbd_bufaddr;
++ if (fec_enet_update_cbd(rxq, bdp, index)) {
++ ndev->stats.rx_dropped++;
++ goto rx_processing_done;
++ }
++
+ dma_sync_single_for_cpu(&fep->pdev->dev,
+- fec32_to_cpu(bdp->cbd_bufaddr),
++ fec32_to_cpu(cbd_bufaddr),
+ pkt_len,
+ DMA_FROM_DEVICE);
+ prefetch(page_address(page));
+- fec_enet_update_cbd(rxq, bdp, index);
+
+ if (xdp_prog) {
+ xdp_buff_clear_frags_flag(&xdp);
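
The fec fix reorders RX refill so the replacement page is allocated before the filled one is handed up the stack; on allocation failure the old page simply stays in the ring and the frame is counted as dropped, so a slot can never end up empty. The pattern in compilable form, with malloc() standing in for page_pool_dev_alloc_pages():

    #include <stdlib.h>

    #define BUF_SZ 2048

    struct ring_slot { void *buf; };

    /* Returns the filled buffer for processing, or NULL to drop the
     * frame; the slot always keeps a valid buffer either way. */
    static void *take_and_refill(struct ring_slot *slot)
    {
        void *full = slot->buf;
        void *fresh = malloc(BUF_SZ);

        if (!fresh)
            return NULL; /* caller bumps rx_dropped */

        slot->buf = fresh;
        return full;     /* caller processes, then frees */
    }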
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index d6f80da30decf4..558cda577191d6 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -1047,5 +1047,10 @@ static inline void ice_clear_rdma_cap(struct ice_pf *pf)
+ clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ }
+
++static inline enum ice_phy_model ice_get_phy_model(const struct ice_hw *hw)
++{
++ return hw->ptp.phy_model;
++}
++
+ extern const struct xdp_metadata_ops ice_xdp_md_ops;
+ #endif /* _ICE_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.c b/drivers/net/ethernet/intel/ice/ice_adapter.c
+index ad84d8ad49a63f..f3e195974a8efa 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.c
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.c
+@@ -40,11 +40,17 @@ static struct ice_adapter *ice_adapter_new(void)
+ spin_lock_init(&adapter->ptp_gltsyn_time_lock);
+ refcount_set(&adapter->refcount, 1);
+
++ mutex_init(&adapter->ports.lock);
++ INIT_LIST_HEAD(&adapter->ports.ports);
++
+ return adapter;
+ }
+
+ static void ice_adapter_free(struct ice_adapter *adapter)
+ {
++ WARN_ON(!list_empty(&adapter->ports.ports));
++ mutex_destroy(&adapter->ports.lock);
++
+ kfree(adapter);
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.h b/drivers/net/ethernet/intel/ice/ice_adapter.h
+index 9d11014ec02ff2..e233225848b384 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.h
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.h
+@@ -4,22 +4,42 @@
+ #ifndef _ICE_ADAPTER_H_
+ #define _ICE_ADAPTER_H_
+
++#include <linux/types.h>
+ #include <linux/spinlock_types.h>
+ #include <linux/refcount_types.h>
+
+ struct pci_dev;
++struct ice_pf;
++
++/**
++ * struct ice_port_list - data used to store the list of adapter ports
++ *
++ * This structure contains data used to maintain a list of adapter ports
++ *
++ * @ports: list of ports
++ * @lock: protect access to the ports list
++ */
++struct ice_port_list {
++ struct list_head ports;
++ /* To synchronize the ports list operations */
++ struct mutex lock;
++};
+
+ /**
+ * struct ice_adapter - PCI adapter resources shared across PFs
+ * @ptp_gltsyn_time_lock: Spinlock protecting access to the GLTSYN_TIME
+ * register of the PTP clock.
+ * @refcount: Reference count. struct ice_pf objects hold the references.
++ * @ctrl_pf: Control PF of the adapter
++ * @ports: Ports list
+ */
+ struct ice_adapter {
++ refcount_t refcount;
+ /* For access to the GLTSYN_TIME register */
+ spinlock_t ptp_gltsyn_time_lock;
+
+- refcount_t refcount;
++ struct ice_pf *ctrl_pf;
++ struct ice_port_list ports;
+ };
+
+ struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev);
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 79a6edd0be0ec4..80f3dfd2712430 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -1648,6 +1648,7 @@ struct ice_aqc_get_port_options_elem {
+ #define ICE_AQC_PORT_OPT_MAX_LANE_25G 5
+ #define ICE_AQC_PORT_OPT_MAX_LANE_50G 6
+ #define ICE_AQC_PORT_OPT_MAX_LANE_100G 7
++#define ICE_AQC_PORT_OPT_MAX_LANE_200G 8
+
+ u8 global_scid[2];
+ u8 phy_scid[2];
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index f1324e25b2af1c..068a467de1d56d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -4074,6 +4074,57 @@ ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid,
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ }
+
++/**
++ * ice_get_phy_lane_number - Get PHY lane number for current adapter
++ * @hw: pointer to the hw struct
++ *
++ * Return: PHY lane number on success, negative error code otherwise.
++ */
++int ice_get_phy_lane_number(struct ice_hw *hw)
++{
++ struct ice_aqc_get_port_options_elem *options;
++ unsigned int lport = 0;
++ unsigned int lane;
++ int err;
++
++ options = kcalloc(ICE_AQC_PORT_OPT_MAX, sizeof(*options), GFP_KERNEL);
++ if (!options)
++ return -ENOMEM;
++
++ for (lane = 0; lane < ICE_MAX_PORT_PER_PCI_DEV; lane++) {
++ u8 options_count = ICE_AQC_PORT_OPT_MAX;
++ u8 speed, active_idx, pending_idx;
++ bool active_valid, pending_valid;
++
++ err = ice_aq_get_port_options(hw, options, &options_count, lane,
++ true, &active_idx, &active_valid,
++ &pending_idx, &pending_valid);
++ if (err)
++ goto err;
++
++ if (!active_valid)
++ continue;
++
++ speed = options[active_idx].max_lane_speed;
++ /* If we don't get speed for this lane, it's unoccupied */
++ if (speed > ICE_AQC_PORT_OPT_MAX_LANE_200G)
++ continue;
++
++ if (hw->pf_id == lport) {
++ kfree(options);
++ return lane;
++ }
++
++ lport++;
++ }
++
++ /* PHY lane not found */
++ err = -ENXIO;
++err:
++ kfree(options);
++ return err;
++}
++
+ /**
+ * ice_aq_sff_eeprom
+ * @hw: pointer to the HW struct
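
ice_get_phy_lane_number() maps a PF id to the Nth occupied PHY lane: lanes whose active port option reports no usable speed are skipped rather than counted. The core of that mapping as a compilable sketch (struct lane_opt is illustrative, not driver API):

    #define MAX_LANES 8

    struct lane_opt {
        int active_valid; /* lane has a valid active port option */
        int speed;        /* negative when the lane is unoccupied */
    };

    /* Return the lane index backing logical port pf_id, or -1. */
    static int lane_for_port(const struct lane_opt *opts, int pf_id)
    {
        int lane, lport = 0;

        for (lane = 0; lane < MAX_LANES; lane++) {
            if (!opts[lane].active_valid || opts[lane].speed < 0)
                continue;
            if (lport == pf_id)
                return lane;
            lport++;
        }

        return -1;
    }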
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
+index 27208a60cece51..fe6f88cfd94866 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.h
++++ b/drivers/net/ethernet/intel/ice/ice_common.h
+@@ -193,6 +193,7 @@ ice_aq_get_port_options(struct ice_hw *hw,
+ int
+ ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid,
+ u8 new_option);
++int ice_get_phy_lane_number(struct ice_hw *hw);
+ int
+ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
+ u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 8f2e758c394277..45eefe22fb5b73 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1144,7 +1144,7 @@ ice_link_event(struct ice_pf *pf, struct ice_port_info *pi, bool link_up,
+ if (link_up == old_link && link_speed == old_link_speed)
+ return 0;
+
+- ice_ptp_link_change(pf, pf->hw.pf_id, link_up);
++ ice_ptp_link_change(pf, link_up);
+
+ if (ice_is_dcb_active(pf)) {
+ if (test_bit(ICE_FLAG_DCB_ENA, pf->flags))
+@@ -6744,7 +6744,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ ice_print_link_msg(vsi, true);
+ netif_tx_start_all_queues(vsi->netdev);
+ netif_carrier_on(vsi->netdev);
+- ice_ptp_link_change(pf, pf->hw.pf_id, true);
++ ice_ptp_link_change(pf, true);
+ }
+
+ /* Perform an initial read of the statistics registers now to
+@@ -7214,7 +7214,7 @@ int ice_down(struct ice_vsi *vsi)
+
+ if (vsi->netdev) {
+ vlan_err = ice_vsi_del_vlan_zero(vsi);
+- ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false);
++ ice_ptp_link_change(vsi->back, false);
+ netif_carrier_off(vsi->netdev);
+ netif_tx_disable(vsi->netdev);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index ef2e858f49bb0e..7c6f81beaee460 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -16,6 +16,18 @@ static const struct ptp_pin_desc ice_pin_desc_e810t[] = {
+ { "U.FL2", UFL2, PTP_PF_NONE, 2, { 0, } },
+ };
+
++static struct ice_pf *ice_get_ctrl_pf(struct ice_pf *pf)
++{
++ return !pf->adapter ? NULL : pf->adapter->ctrl_pf;
++}
++
++static struct ice_ptp *ice_get_ctrl_ptp(struct ice_pf *pf)
++{
++ struct ice_pf *ctrl_pf = ice_get_ctrl_pf(pf);
++
++ return !ctrl_pf ? NULL : &ctrl_pf->ptp;
++}
++
+ /**
+ * ice_get_sma_config_e810t
+ * @hw: pointer to the hw struct
+@@ -800,8 +812,8 @@ static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
+ struct ice_ptp_port *port;
+ unsigned int i;
+
+- mutex_lock(&pf->ptp.ports_owner.lock);
+- list_for_each_entry(port, &pf->ptp.ports_owner.ports, list_member) {
++ mutex_lock(&pf->adapter->ports.lock);
++ list_for_each_entry(port, &pf->adapter->ports.ports, list_node) {
+ struct ice_ptp_tx *tx = &port->tx;
+
+ if (!tx || !tx->init)
+@@ -809,7 +821,7 @@ static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
+
+ ice_ptp_process_tx_tstamp(tx);
+ }
+- mutex_unlock(&pf->ptp.ports_owner.lock);
++ mutex_unlock(&pf->adapter->ports.lock);
+
+ for (i = 0; i < ICE_GET_QUAD_NUM(pf->hw.ptp.num_lports); i++) {
+ u64 tstamp_ready;
+@@ -974,7 +986,7 @@ ice_ptp_flush_all_tx_tracker(struct ice_pf *pf)
+ {
+ struct ice_ptp_port *port;
+
+- list_for_each_entry(port, &pf->ptp.ports_owner.ports, list_member)
++ list_for_each_entry(port, &pf->adapter->ports.ports, list_node)
+ ice_ptp_flush_tx_tracker(ptp_port_to_pf(port), &port->tx);
+ }
+
+@@ -1363,7 +1375,7 @@ ice_ptp_port_phy_stop(struct ice_ptp_port *ptp_port)
+
+ mutex_lock(&ptp_port->ps_lock);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_stop_phy_timer_eth56g(hw, port, true);
+ break;
+@@ -1409,7 +1421,7 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
+
+ mutex_lock(&ptp_port->ps_lock);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_start_phy_timer_eth56g(hw, port);
+ break;
+@@ -1454,10 +1466,9 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
+ /**
+ * ice_ptp_link_change - Reconfigure PTP after link status change
+ * @pf: Board private structure
+- * @port: Port for which the PHY start is set
+ * @linkup: Link is up or down
+ */
+-void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
++void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
+ {
+ struct ice_ptp_port *ptp_port;
+ struct ice_hw *hw = &pf->hw;
+@@ -1465,14 +1476,7 @@ void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
+ if (pf->ptp.state != ICE_PTP_READY)
+ return;
+
+- if (WARN_ON_ONCE(port >= hw->ptp.num_lports))
+- return;
+-
+ ptp_port = &pf->ptp.port;
+- if (ice_is_e825c(hw) && hw->ptp.is_2x50g_muxed_topo)
+- port *= 2;
+- if (WARN_ON_ONCE(ptp_port->port_num != port))
+- return;
+
+ /* Update cached link status for this port immediately */
+ ptp_port->link_up = linkup;
+@@ -1480,8 +1484,7 @@ void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
+ /* Skip HW writes if reset is in progress */
+ if (pf->hw.reset_ongoing)
+ return;
+-
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_E810:
+ /* Do not reconfigure E810 PHY */
+ return;
+@@ -1514,7 +1517,7 @@ static int ice_ptp_cfg_phy_interrupt(struct ice_pf *pf, bool ena, u32 threshold)
+
+ ice_ptp_reset_ts_memory(hw);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G: {
+ int port;
+
+@@ -1553,7 +1556,7 @@ static int ice_ptp_cfg_phy_interrupt(struct ice_pf *pf, bool ena, u32 threshold)
+ case ICE_PHY_UNSUP:
+ default:
+ dev_warn(dev, "%s: Unexpected PHY model %d\n", __func__,
+- hw->ptp.phy_model);
++ ice_get_phy_model(hw));
+ return -EOPNOTSUPP;
+ }
+ }
+@@ -1575,10 +1578,10 @@ static void ice_ptp_restart_all_phy(struct ice_pf *pf)
+ {
+ struct list_head *entry;
+
+- list_for_each(entry, &pf->ptp.ports_owner.ports) {
++ list_for_each(entry, &pf->adapter->ports.ports) {
+ struct ice_ptp_port *port = list_entry(entry,
+ struct ice_ptp_port,
+- list_member);
++ list_node);
+
+ if (port->link_up)
+ ice_ptp_port_phy_restart(port);
+@@ -2059,7 +2062,7 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ /* For Vernier mode on E82X, we need to recalibrate after new settime.
+ * Start with marking timestamps as invalid.
+ */
+- if (hw->ptp.phy_model == ICE_PHY_E82X) {
++ if (ice_get_phy_model(hw) == ICE_PHY_E82X) {
+ err = ice_ptp_clear_phy_offset_ready_e82x(hw);
+ if (err)
+ dev_warn(ice_pf_to_dev(pf), "Failed to mark timestamps as invalid before settime\n");
+@@ -2083,7 +2086,7 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ ice_ptp_enable_all_clkout(pf);
+
+ /* Recalibrate and re-enable timestamp blocks for E822/E823 */
+- if (hw->ptp.phy_model == ICE_PHY_E82X)
++ if (ice_get_phy_model(hw) == ICE_PHY_E82X)
+ ice_ptp_restart_all_phy(pf);
+ exit:
+ if (err) {
+@@ -2895,6 +2898,50 @@ void ice_ptp_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
+ dev_err(ice_pf_to_dev(pf), "PTP reset failed %d\n", err);
+ }
+
++static bool ice_is_primary(struct ice_hw *hw)
++{
++ return ice_is_e825c(hw) && ice_is_dual(hw) ?
++ !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_PRIMARY_M) : true;
++}
++
++static int ice_ptp_setup_adapter(struct ice_pf *pf)
++{
++ if (!ice_pf_src_tmr_owned(pf) || !ice_is_primary(&pf->hw))
++ return -EPERM;
++
++ pf->adapter->ctrl_pf = pf;
++
++ return 0;
++}
++
++static int ice_ptp_setup_pf(struct ice_pf *pf)
++{
++ struct ice_ptp *ctrl_ptp = ice_get_ctrl_ptp(pf);
++ struct ice_ptp *ptp = &pf->ptp;
++
++ if (WARN_ON(!ctrl_ptp) || ice_get_phy_model(&pf->hw) == ICE_PHY_UNSUP)
++ return -ENODEV;
++
++ INIT_LIST_HEAD(&ptp->port.list_node);
++ mutex_lock(&pf->adapter->ports.lock);
++
++ list_add(&ptp->port.list_node,
++ &pf->adapter->ports.ports);
++ mutex_unlock(&pf->adapter->ports.lock);
++
++ return 0;
++}
++
++static void ice_ptp_cleanup_pf(struct ice_pf *pf)
++{
++ struct ice_ptp *ptp = &pf->ptp;
++
++ if (ice_get_phy_model(&pf->hw) != ICE_PHY_UNSUP) {
++ mutex_lock(&pf->adapter->ports.lock);
++ list_del(&ptp->port.list_node);
++ mutex_unlock(&pf->adapter->ports.lock);
++ }
++}
+ /**
+ * ice_ptp_aux_dev_to_aux_pf - Get auxiliary PF handle for the auxiliary device
+ * @aux_dev: auxiliary device to get the auxiliary PF for
+@@ -2946,9 +2993,9 @@ static int ice_ptp_auxbus_probe(struct auxiliary_device *aux_dev,
+ if (WARN_ON(!owner_pf))
+ return -ENODEV;
+
+- INIT_LIST_HEAD(&aux_pf->ptp.port.list_member);
++ INIT_LIST_HEAD(&aux_pf->ptp.port.list_node);
+ mutex_lock(&owner_pf->ptp.ports_owner.lock);
+- list_add(&aux_pf->ptp.port.list_member,
++ list_add(&aux_pf->ptp.port.list_node,
+ &owner_pf->ptp.ports_owner.ports);
+ mutex_unlock(&owner_pf->ptp.ports_owner.lock);
+
+@@ -2965,7 +3012,7 @@ static void ice_ptp_auxbus_remove(struct auxiliary_device *aux_dev)
+ struct ice_pf *aux_pf = ice_ptp_aux_dev_to_aux_pf(aux_dev);
+
+ mutex_lock(&owner_pf->ptp.ports_owner.lock);
+- list_del(&aux_pf->ptp.port.list_member);
++ list_del(&aux_pf->ptp.port.list_node);
+ mutex_unlock(&owner_pf->ptp.ports_owner.lock);
+ }
+
+@@ -3025,7 +3072,7 @@ ice_ptp_auxbus_create_id_table(struct ice_pf *pf, const char *name)
+ * ice_ptp_register_auxbus_driver - Register PTP auxiliary bus driver
+ * @pf: Board private structure
+ */
+-static int ice_ptp_register_auxbus_driver(struct ice_pf *pf)
++static int __always_unused ice_ptp_register_auxbus_driver(struct ice_pf *pf)
+ {
+ struct auxiliary_driver *aux_driver;
+ struct ice_ptp *ptp;
+@@ -3068,7 +3115,7 @@ static int ice_ptp_register_auxbus_driver(struct ice_pf *pf)
+ * ice_ptp_unregister_auxbus_driver - Unregister PTP auxiliary bus driver
+ * @pf: Board private structure
+ */
+-static void ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
++static void __always_unused ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
+ {
+ struct auxiliary_driver *aux_driver = &pf->ptp.ports_owner.aux_driver;
+
+@@ -3087,15 +3134,12 @@ static void ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
+ */
+ int ice_ptp_clock_index(struct ice_pf *pf)
+ {
+- struct auxiliary_device *aux_dev;
+- struct ice_pf *owner_pf;
++ struct ice_ptp *ctrl_ptp = ice_get_ctrl_ptp(pf);
+ struct ptp_clock *clock;
+
+- aux_dev = &pf->ptp.port.aux_dev;
+- owner_pf = ice_ptp_aux_dev_to_owner_pf(aux_dev);
+- if (!owner_pf)
++ if (!ctrl_ptp)
+ return -1;
+- clock = owner_pf->ptp.clock;
++ clock = ctrl_ptp->clock;
+
+ return clock ? ptp_clock_index(clock) : -1;
+ }
+@@ -3155,15 +3199,7 @@ static int ice_ptp_init_owner(struct ice_pf *pf)
+ if (err)
+ goto err_clk;
+
+- err = ice_ptp_register_auxbus_driver(pf);
+- if (err) {
+- dev_err(ice_pf_to_dev(pf), "Failed to register PTP auxbus driver");
+- goto err_aux;
+- }
+-
+ return 0;
+-err_aux:
+- ptp_clock_unregister(pf->ptp.clock);
+ err_clk:
+ pf->ptp.clock = NULL;
+ err_exit:
+@@ -3209,7 +3245,7 @@ static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
+
+ mutex_init(&ptp_port->ps_lock);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_ptp_init_tx_eth56g(pf, &ptp_port->tx,
+ ptp_port->port_num);
+@@ -3239,7 +3275,7 @@ static void ice_ptp_release_auxbus_device(struct device *dev)
+ * ice_ptp_create_auxbus_device - Create PTP auxiliary bus device
+ * @pf: Board private structure
+ */
+-static int ice_ptp_create_auxbus_device(struct ice_pf *pf)
++static __always_unused int ice_ptp_create_auxbus_device(struct ice_pf *pf)
+ {
+ struct auxiliary_device *aux_dev;
+ struct ice_ptp *ptp;
+@@ -3286,7 +3322,7 @@ static int ice_ptp_create_auxbus_device(struct ice_pf *pf)
+ * ice_ptp_remove_auxbus_device - Remove PTP auxiliary bus device
+ * @pf: Board private structure
+ */
+-static void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
++static __always_unused void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
+ {
+ struct auxiliary_device *aux_dev = &pf->ptp.port.aux_dev;
+
+@@ -3307,7 +3343,7 @@ static void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
+ */
+ static void ice_ptp_init_tx_interrupt_mode(struct ice_pf *pf)
+ {
+- switch (pf->hw.ptp.phy_model) {
++ switch (ice_get_phy_model(&pf->hw)) {
+ case ICE_PHY_E82X:
+ /* E822 based PHY has the clock owner process the interrupt
+ * for all ports.
+@@ -3339,10 +3375,17 @@ void ice_ptp_init(struct ice_pf *pf)
+ {
+ struct ice_ptp *ptp = &pf->ptp;
+ struct ice_hw *hw = &pf->hw;
+- int err;
++ int lane_num, err;
+
+ ptp->state = ICE_PTP_INITIALIZING;
+
++ lane_num = ice_get_phy_lane_number(hw);
++ if (lane_num < 0) {
++ err = lane_num;
++ goto err_exit;
++ }
++
++ ptp->port.port_num = (u8)lane_num;
+ ice_ptp_init_hw(hw);
+
+ ice_ptp_init_tx_interrupt_mode(pf);
+@@ -3350,19 +3393,22 @@ void ice_ptp_init(struct ice_pf *pf)
+ /* If this function owns the clock hardware, it must allocate and
+ * configure the PTP clock device to represent it.
+ */
+- if (ice_pf_src_tmr_owned(pf)) {
++ if (ice_pf_src_tmr_owned(pf) && ice_is_primary(hw)) {
++ err = ice_ptp_setup_adapter(pf);
++ if (err)
++ goto err_exit;
+ err = ice_ptp_init_owner(pf);
+ if (err)
+- goto err;
++ goto err_exit;
+ }
+
+- ptp->port.port_num = hw->pf_id;
+- if (ice_is_e825c(hw) && hw->ptp.is_2x50g_muxed_topo)
+- ptp->port.port_num = hw->pf_id * 2;
++ err = ice_ptp_setup_pf(pf);
++ if (err)
++ goto err_exit;
+
+ err = ice_ptp_init_port(pf, &ptp->port);
+ if (err)
+- goto err;
++ goto err_exit;
+
+ /* Start the PHY timestamping block */
+ ice_ptp_reset_phy_timestamping(pf);
+@@ -3370,20 +3416,16 @@ void ice_ptp_init(struct ice_pf *pf)
+ /* Configure initial Tx interrupt settings */
+ ice_ptp_cfg_tx_interrupt(pf);
+
+- err = ice_ptp_create_auxbus_device(pf);
+- if (err)
+- goto err;
+-
+ ptp->state = ICE_PTP_READY;
+
+ err = ice_ptp_init_work(pf, ptp);
+ if (err)
+- goto err;
++ goto err_exit;
+
+ dev_info(ice_pf_to_dev(pf), "PTP init successful\n");
+ return;
+
+-err:
++err_exit:
+ /* If we registered a PTP clock, release it */
+ if (pf->ptp.clock) {
+ ptp_clock_unregister(ptp->clock);
+@@ -3410,7 +3452,7 @@ void ice_ptp_release(struct ice_pf *pf)
+ /* Disable timestamping for both Tx and Rx */
+ ice_ptp_disable_timestamp_mode(pf);
+
+- ice_ptp_remove_auxbus_device(pf);
++ ice_ptp_cleanup_pf(pf);
+
+ ice_ptp_release_tx_tracker(pf, &pf->ptp.port.tx);
+
+@@ -3425,9 +3467,6 @@ void ice_ptp_release(struct ice_pf *pf)
+ pf->ptp.kworker = NULL;
+ }
+
+- if (ice_pf_src_tmr_owned(pf))
+- ice_ptp_unregister_auxbus_driver(pf);
+-
+ if (!pf->ptp.clock)
+ return;
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.h b/drivers/net/ethernet/intel/ice/ice_ptp.h
+index 2db2257a0fb2f3..f1cfa6aa4e76bf 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
+@@ -169,7 +169,7 @@ struct ice_ptp_tx {
+ * ready for PTP functionality. It is used to track the port initialization
+ * and determine when the port's PHY offset is valid.
+ *
+- * @list_member: list member structure of auxiliary device
++ * @list_node: list member structure
+ * @tx: Tx timestamp tracking for this port
+ * @aux_dev: auxiliary device associated with this port
+ * @ov_work: delayed work task for tracking when PHY offset is valid
+@@ -179,7 +179,7 @@ struct ice_ptp_tx {
+ * @port_num: the port number this structure represents
+ */
+ struct ice_ptp_port {
+- struct list_head list_member;
++ struct list_head list_node;
+ struct ice_ptp_tx tx;
+ struct auxiliary_device aux_dev;
+ struct kthread_delayed_work ov_work;
+@@ -205,6 +205,7 @@ enum ice_ptp_tx_interrupt {
+ * @ports: list of porst handled by this port owner
+ * @lock: protect access to ports list
+ */
++
+ struct ice_ptp_port_owner {
+ struct auxiliary_driver aux_driver;
+ struct list_head ports;
+@@ -331,7 +332,7 @@ void ice_ptp_prepare_for_reset(struct ice_pf *pf,
+ enum ice_reset_req reset_type);
+ void ice_ptp_init(struct ice_pf *pf);
+ void ice_ptp_release(struct ice_pf *pf);
+-void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup);
++void ice_ptp_link_change(struct ice_pf *pf, bool linkup);
+ #else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
+ static inline int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr)
+ {
+@@ -379,7 +380,7 @@ static inline void ice_ptp_prepare_for_reset(struct ice_pf *pf,
+ }
+ static inline void ice_ptp_init(struct ice_pf *pf) { }
+ static inline void ice_ptp_release(struct ice_pf *pf) { }
+-static inline void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
++static inline void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
+ {
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+index 3005dd252a1026..bdb1020147d1c2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+@@ -131,7 +131,7 @@ struct ice_eth56g_mac_reg_cfg eth56g_mac_cfg[NUM_ICE_ETH56G_LNK_SPD] = {
+ .rx_offset = {
+ .serdes = 0xffffeb27, /* -10.42424 */
+ .no_fec = 0xffffcccd, /* -25.6 */
+- .fc = 0xfffe0014, /* -255.96 */
++ .fc = 0xfffc557b, /* -469.26 */
+ .sfd = 0x4a4, /* 2.32 */
+ .bs_ds = 0x32 /* 0.0969697 */
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index 3816e45b6ab44a..7190fde16c8681 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -804,7 +804,7 @@ static u32 ice_ptp_tmr_cmd_to_port_reg(struct ice_hw *hw,
+ /* Certain hardware families share the same register values for the
+ * port register and source timer register.
+ */
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_E810:
+ return ice_ptp_tmr_cmd_to_src_reg(hw, cmd) & TS_CMD_MASK_E810;
+ default:
+@@ -877,31 +877,46 @@ static void ice_ptp_exec_tmr_cmd(struct ice_hw *hw)
+ * The following functions operate on devices with the ETH 56G PHY.
+ */
+
++/**
++ * ice_ptp_get_dest_dev_e825 - get destination PHY for given port number
++ * @hw: pointer to the HW struct
++ * @port: destination port
++ *
++ * Return: destination sideband queue PHY device.
++ */
++static enum ice_sbq_msg_dev ice_ptp_get_dest_dev_e825(struct ice_hw *hw,
++ u8 port)
++{
++ /* On a single complex E825, PHY 0 is always destination device phy_0
++ * and PHY 1 is phy_0_peer.
++ */
++ if (port >= hw->ptp.ports_per_phy)
++ return eth56g_phy_1;
++ else
++ return eth56g_phy_0;
++}
++
+ /**
+ * ice_write_phy_eth56g - Write a PHY port register
+ * @hw: pointer to the HW struct
+- * @phy_idx: PHY index
++ * @port: destination port
+ * @addr: PHY register address
+ * @val: Value to write
+ *
+ * Return: 0 on success, other error codes when failed to write to PHY
+ */
+-static int ice_write_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+- u32 val)
++static int ice_write_phy_eth56g(struct ice_hw *hw, u8 port, u32 addr, u32 val)
+ {
+- struct ice_sbq_msg_input phy_msg;
++ struct ice_sbq_msg_input msg = {
++ .dest_dev = ice_ptp_get_dest_dev_e825(hw, port),
++ .opcode = ice_sbq_msg_wr,
++ .msg_addr_low = lower_16_bits(addr),
++ .msg_addr_high = upper_16_bits(addr),
++ .data = val
++ };
+ int err;
+
+- phy_msg.opcode = ice_sbq_msg_wr;
+-
+- phy_msg.msg_addr_low = lower_16_bits(addr);
+- phy_msg.msg_addr_high = upper_16_bits(addr);
+-
+- phy_msg.data = val;
+- phy_msg.dest_dev = hw->ptp.phy.eth56g.phy_addr[phy_idx];
+-
+- err = ice_sbq_rw_reg(hw, &phy_msg, ICE_AQ_FLAG_RD);
+-
++ err = ice_sbq_rw_reg(hw, &msg, ICE_AQ_FLAG_RD);
+ if (err)
+ ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n",
+ err);
+@@ -912,41 +927,36 @@ static int ice_write_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+ /**
+ * ice_read_phy_eth56g - Read a PHY port register
+ * @hw: pointer to the HW struct
+- * @phy_idx: PHY index
++ * @port: destination port
+ * @addr: PHY register address
+ * @val: Value to write
+ *
+ * Return: 0 on success, other error codes when failed to read from PHY
+ */
+-static int ice_read_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+- u32 *val)
++static int ice_read_phy_eth56g(struct ice_hw *hw, u8 port, u32 addr, u32 *val)
+ {
+- struct ice_sbq_msg_input phy_msg;
++ struct ice_sbq_msg_input msg = {
++ .dest_dev = ice_ptp_get_dest_dev_e825(hw, port),
++ .opcode = ice_sbq_msg_rd,
++ .msg_addr_low = lower_16_bits(addr),
++ .msg_addr_high = upper_16_bits(addr)
++ };
+ int err;
+
+- phy_msg.opcode = ice_sbq_msg_rd;
+-
+- phy_msg.msg_addr_low = lower_16_bits(addr);
+- phy_msg.msg_addr_high = upper_16_bits(addr);
+-
+- phy_msg.data = 0;
+- phy_msg.dest_dev = hw->ptp.phy.eth56g.phy_addr[phy_idx];
+-
+- err = ice_sbq_rw_reg(hw, &phy_msg, ICE_AQ_FLAG_RD);
+- if (err) {
++ err = ice_sbq_rw_reg(hw, &msg, ICE_AQ_FLAG_RD);
++ if (err)
+ ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n",
+ err);
+- return err;
+- }
+-
+- *val = phy_msg.data;
++ else
++ *val = msg.data;
+
+- return 0;
++ return err;
+ }
+
+ /**
+ * ice_phy_res_address_eth56g - Calculate a PHY port register address
+- * @port: Port number to be written
++ * @hw: pointer to the HW struct
++ * @lane: Lane number to be written
+ * @res_type: resource type (register/memory)
+ * @offset: Offset from PHY port register base
+ * @addr: The result address
+@@ -955,17 +965,19 @@ static int ice_read_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+ * * %0 - success
+ * * %EINVAL - invalid port number or resource type
+ */
+-static int ice_phy_res_address_eth56g(u8 port, enum eth56g_res_type res_type,
+- u32 offset, u32 *addr)
++static int ice_phy_res_address_eth56g(struct ice_hw *hw, u8 lane,
++ enum eth56g_res_type res_type,
++ u32 offset,
++ u32 *addr)
+ {
+- u8 lane = port % ICE_PORTS_PER_QUAD;
+- u8 phy = ICE_GET_QUAD_NUM(port);
+-
+ if (res_type >= NUM_ETH56G_PHY_RES)
+ return -EINVAL;
+
+- *addr = eth56g_phy_res[res_type].base[phy] +
++ /* Lanes 4..7 are in fact 0..3 on a second PHY */
++ lane %= hw->ptp.ports_per_phy;
++ *addr = eth56g_phy_res[res_type].base[0] +
+ lane * eth56g_phy_res[res_type].step + offset;
++
+ return 0;
+ }
+
+@@ -985,19 +997,17 @@ static int ice_phy_res_address_eth56g(u8 port, enum eth56g_res_type res_type,
+ static int ice_write_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
+ u32 val, enum eth56g_res_type res_type)
+ {
+- u8 phy_port = port % hw->ptp.ports_per_phy;
+- u8 phy_idx = port / hw->ptp.ports_per_phy;
+ u32 addr;
+ int err;
+
+ if (port >= hw->ptp.num_lports)
+ return -EINVAL;
+
+- err = ice_phy_res_address_eth56g(phy_port, res_type, offset, &addr);
++ err = ice_phy_res_address_eth56g(hw, port, res_type, offset, &addr);
+ if (err)
+ return err;
+
+- return ice_write_phy_eth56g(hw, phy_idx, addr, val);
++ return ice_write_phy_eth56g(hw, port, addr, val);
+ }
+
+ /**
+@@ -1016,19 +1026,17 @@ static int ice_write_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
+ static int ice_read_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
+ u32 *val, enum eth56g_res_type res_type)
+ {
+- u8 phy_port = port % hw->ptp.ports_per_phy;
+- u8 phy_idx = port / hw->ptp.ports_per_phy;
+ u32 addr;
+ int err;
+
+ if (port >= hw->ptp.num_lports)
+ return -EINVAL;
+
+- err = ice_phy_res_address_eth56g(phy_port, res_type, offset, &addr);
++ err = ice_phy_res_address_eth56g(hw, port, res_type, offset, &addr);
+ if (err)
+ return err;
+
+- return ice_read_phy_eth56g(hw, phy_idx, addr, val);
++ return ice_read_phy_eth56g(hw, port, addr, val);
+ }
+
+ /**
+@@ -1177,6 +1185,56 @@ static int ice_write_port_mem_eth56g(struct ice_hw *hw, u8 port, u16 offset,
+ return ice_write_port_eth56g(hw, port, offset, val, ETH56G_PHY_MEM_PTP);
+ }
+
++/**
++ * ice_write_quad_ptp_reg_eth56g - Write a PHY quad register
++ * @hw: pointer to the HW struct
++ * @offset: PHY register offset
++ * @port: Port number
++ * @val: Value to write
++ *
++ * Return:
++ * * %0 - success
++ * * %EIO - invalid port number or resource type
++ * * %other - failed to write to PHY
++ */
++static int ice_write_quad_ptp_reg_eth56g(struct ice_hw *hw, u8 port,
++ u32 offset, u32 val)
++{
++ u32 addr;
++
++ if (port >= hw->ptp.num_lports)
++ return -EIO;
++
++ addr = eth56g_phy_res[ETH56G_PHY_REG_PTP].base[0] + offset;
++
++ return ice_write_phy_eth56g(hw, port, addr, val);
++}
++
++/**
++ * ice_read_quad_ptp_reg_eth56g - Read a PHY quad register
++ * @hw: pointer to the HW struct
++ * @offset: PHY register offset
++ * @port: Port number
++ * @val: Value to read
++ *
++ * Return:
++ * * %0 - success
++ * * %EIO - invalid port number or resource type
++ * * %other - failed to read from PHY
++ */
++static int ice_read_quad_ptp_reg_eth56g(struct ice_hw *hw, u8 port,
++ u32 offset, u32 *val)
++{
++ u32 addr;
++
++ if (port >= hw->ptp.num_lports)
++ return -EIO;
++
++ addr = eth56g_phy_res[ETH56G_PHY_REG_PTP].base[0] + offset;
++
++ return ice_read_phy_eth56g(hw, port, addr, val);
++}
++
+ /**
+ * ice_is_64b_phy_reg_eth56g - Check if this is a 64bit PHY register
+ * @low_addr: the low address to check
+@@ -1896,7 +1954,6 @@ ice_phy_get_speed_eth56g(struct ice_link_status *li)
+ */
+ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ {
+- u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
+ u32 val;
+ int err;
+
+@@ -1911,8 +1968,8 @@ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ switch (ice_phy_get_speed_eth56g(&hw->port_info->phy.link_info)) {
+ case ICE_ETH56G_LNK_SPD_1G:
+ case ICE_ETH56G_LNK_SPD_2_5G:
+- err = ice_read_ptp_reg_eth56g(hw, port_blk,
+- PHY_GPCS_CONFIG_REG0, &val);
++ err = ice_read_quad_ptp_reg_eth56g(hw, port,
++ PHY_GPCS_CONFIG_REG0, &val);
+ if (err) {
+ ice_debug(hw, ICE_DBG_PTP, "Failed to read PHY_GPCS_CONFIG_REG0, status: %d",
+ err);
+@@ -1923,8 +1980,8 @@ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ val |= FIELD_PREP(PHY_GPCS_CONFIG_REG0_TX_THR_M,
+ ICE_ETH56G_NOMINAL_TX_THRESH);
+
+- err = ice_write_ptp_reg_eth56g(hw, port_blk,
+- PHY_GPCS_CONFIG_REG0, val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port,
++ PHY_GPCS_CONFIG_REG0, val);
+ if (err) {
+ ice_debug(hw, ICE_DBG_PTP, "Failed to write PHY_GPCS_CONFIG_REG0, status: %d",
+ err);
+@@ -1965,50 +2022,47 @@ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ */
+ int ice_phy_cfg_ptp_1step_eth56g(struct ice_hw *hw, u8 port)
+ {
+- u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
+- u8 blk_port = port & (ICE_PORTS_PER_QUAD - 1);
++ u8 quad_lane = port % ICE_PORTS_PER_QUAD;
++ u32 addr, val, peer_delay;
+ bool enable, sfd_ena;
+- u32 val, peer_delay;
+ int err;
+
+ enable = hw->ptp.phy.eth56g.onestep_ena;
+ peer_delay = hw->ptp.phy.eth56g.peer_delay;
+ sfd_ena = hw->ptp.phy.eth56g.sfd_ena;
+
+- /* PHY_PTP_1STEP_CONFIG */
+- err = ice_read_ptp_reg_eth56g(hw, port_blk, PHY_PTP_1STEP_CONFIG, &val);
++ addr = PHY_PTP_1STEP_CONFIG;
++ err = ice_read_quad_ptp_reg_eth56g(hw, port, addr, &val);
+ if (err)
+ return err;
+
+ if (enable)
+- val |= blk_port;
++ val |= BIT(quad_lane);
+ else
+- val &= ~blk_port;
++ val &= ~BIT(quad_lane);
+
+ val &= ~(PHY_PTP_1STEP_T1S_UP64_M | PHY_PTP_1STEP_T1S_DELTA_M);
+
+- err = ice_write_ptp_reg_eth56g(hw, port_blk, PHY_PTP_1STEP_CONFIG, val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
+ if (err)
+ return err;
+
+- /* PHY_PTP_1STEP_PEER_DELAY */
++ addr = PHY_PTP_1STEP_PEER_DELAY(quad_lane);
+ val = FIELD_PREP(PHY_PTP_1STEP_PD_DELAY_M, peer_delay);
+ if (peer_delay)
+ val |= PHY_PTP_1STEP_PD_ADD_PD_M;
+ val |= PHY_PTP_1STEP_PD_DLY_V_M;
+- err = ice_write_ptp_reg_eth56g(hw, port_blk,
+- PHY_PTP_1STEP_PEER_DELAY(blk_port), val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
+ if (err)
+ return err;
+
+ val &= ~PHY_PTP_1STEP_PD_DLY_V_M;
+- err = ice_write_ptp_reg_eth56g(hw, port_blk,
+- PHY_PTP_1STEP_PEER_DELAY(blk_port), val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
+ if (err)
+ return err;
+
+- /* PHY_MAC_XIF_MODE */
+- err = ice_read_mac_reg_eth56g(hw, port, PHY_MAC_XIF_MODE, &val);
++ addr = PHY_MAC_XIF_MODE;
++ err = ice_read_mac_reg_eth56g(hw, port, addr, &val);
+ if (err)
+ return err;
+
+@@ -2028,7 +2082,7 @@ int ice_phy_cfg_ptp_1step_eth56g(struct ice_hw *hw, u8 port)
+ FIELD_PREP(PHY_MAC_XIF_TS_BIN_MODE_M, enable) |
+ FIELD_PREP(PHY_MAC_XIF_TS_SFD_ENA_M, sfd_ena);
+
+- return ice_write_mac_reg_eth56g(hw, port, PHY_MAC_XIF_MODE, val);
++ return ice_write_mac_reg_eth56g(hw, port, addr, val);
+ }
+
+ /**
+@@ -2070,21 +2124,22 @@ static u32 ice_ptp_calc_bitslip_eth56g(struct ice_hw *hw, u8 port, u32 bs,
+ bool fc, bool rs,
+ enum ice_eth56g_link_spd spd)
+ {
+- u8 port_offset = port & (ICE_PORTS_PER_QUAD - 1);
+- u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
+ u32 bitslip;
+ int err;
+
+ if (!bs || rs)
+ return 0;
+
+- if (spd == ICE_ETH56G_LNK_SPD_1G || spd == ICE_ETH56G_LNK_SPD_2_5G)
++ if (spd == ICE_ETH56G_LNK_SPD_1G || spd == ICE_ETH56G_LNK_SPD_2_5G) {
+ err = ice_read_gpcs_reg_eth56g(hw, port, PHY_GPCS_BITSLIP,
+ &bitslip);
+- else
+- err = ice_read_ptp_reg_eth56g(hw, port_blk,
+- PHY_REG_SD_BIT_SLIP(port_offset),
+- &bitslip);
++ } else {
++ u8 quad_lane = port % ICE_PORTS_PER_QUAD;
++ u32 addr;
++
++ addr = PHY_REG_SD_BIT_SLIP(quad_lane);
++ err = ice_read_quad_ptp_reg_eth56g(hw, port, addr, &bitslip);
++ }
+ if (err)
+ return 0;
+
+@@ -2644,59 +2699,29 @@ static int ice_get_phy_tx_tstamp_ready_eth56g(struct ice_hw *hw, u8 port,
+ }
+
+ /**
+- * ice_is_muxed_topo - detect breakout 2x50G topology for E825C
+- * @hw: pointer to the HW struct
+- *
+- * Return: true if it's 2x50 breakout topology, false otherwise
+- */
+-static bool ice_is_muxed_topo(struct ice_hw *hw)
+-{
+- u8 link_topo;
+- bool mux;
+- u32 val;
+-
+- val = rd32(hw, GLGEN_SWITCH_MODE_CONFIG);
+- mux = FIELD_GET(GLGEN_SWITCH_MODE_CONFIG_25X4_QUAD_M, val);
+- val = rd32(hw, GLGEN_MAC_LINK_TOPO);
+- link_topo = FIELD_GET(GLGEN_MAC_LINK_TOPO_LINK_TOPO_M, val);
+-
+- return (mux && link_topo == ICE_LINK_TOPO_UP_TO_2_LINKS);
+-}
+-
+-/**
+- * ice_ptp_init_phy_e825c - initialize PHY parameters
++ * ice_ptp_init_phy_e825 - initialize PHY parameters
+ * @hw: pointer to the HW struct
+ */
+-static void ice_ptp_init_phy_e825c(struct ice_hw *hw)
++static void ice_ptp_init_phy_e825(struct ice_hw *hw)
+ {
+ struct ice_ptp_hw *ptp = &hw->ptp;
+ struct ice_eth56g_params *params;
+- u8 phy;
++ u32 phy_rev;
++ int err;
+
+ ptp->phy_model = ICE_PHY_ETH56G;
+ params = &ptp->phy.eth56g;
+ params->onestep_ena = false;
+ params->peer_delay = 0;
+ params->sfd_ena = false;
+- params->phy_addr[0] = eth56g_phy_0;
+- params->phy_addr[1] = eth56g_phy_1;
+ params->num_phys = 2;
+ ptp->ports_per_phy = 4;
+ ptp->num_lports = params->num_phys * ptp->ports_per_phy;
+
+ ice_sb_access_ena_eth56g(hw, true);
+- for (phy = 0; phy < params->num_phys; phy++) {
+- u32 phy_rev;
+- int err;
+-
+- err = ice_read_phy_eth56g(hw, phy, PHY_REG_REVISION, &phy_rev);
+- if (err || phy_rev != PHY_REVISION_ETH56G) {
+- ptp->phy_model = ICE_PHY_UNSUP;
+- return;
+- }
+- }
+-
+- ptp->is_2x50g_muxed_topo = ice_is_muxed_topo(hw);
++ err = ice_read_phy_eth56g(hw, hw->pf_id, PHY_REG_REVISION, &phy_rev);
++ if (err || phy_rev != PHY_REVISION_ETH56G)
++ ptp->phy_model = ICE_PHY_UNSUP;
+ }
+
+ /* E822 family functions
+@@ -2715,10 +2740,9 @@ static void ice_fill_phy_msg_e82x(struct ice_hw *hw,
+ struct ice_sbq_msg_input *msg, u8 port,
+ u16 offset)
+ {
+- int phy_port, phy, quadtype;
++ int phy_port, quadtype;
+
+ phy_port = port % hw->ptp.ports_per_phy;
+- phy = port / hw->ptp.ports_per_phy;
+ quadtype = ICE_GET_QUAD_NUM(port) %
+ ICE_GET_QUAD_NUM(hw->ptp.ports_per_phy);
+
+@@ -2730,12 +2754,7 @@ static void ice_fill_phy_msg_e82x(struct ice_hw *hw,
+ msg->msg_addr_high = P_Q1_H(P_4_BASE + offset, phy_port);
+ }
+
+- if (phy == 0)
+- msg->dest_dev = rmn_0;
+- else if (phy == 1)
+- msg->dest_dev = rmn_1;
+- else
+- msg->dest_dev = rmn_2;
++ msg->dest_dev = rmn_0;
+ }
+
+ /**
+@@ -5395,7 +5414,7 @@ void ice_ptp_init_hw(struct ice_hw *hw)
+ else if (ice_is_e810(hw))
+ ice_ptp_init_phy_e810(ptp);
+ else if (ice_is_e825c(hw))
+- ice_ptp_init_phy_e825c(hw);
++ ice_ptp_init_phy_e825(hw);
+ else
+ ptp->phy_model = ICE_PHY_UNSUP;
+ }
+@@ -5418,7 +5437,7 @@ void ice_ptp_init_hw(struct ice_hw *hw)
+ static int ice_ptp_write_port_cmd(struct ice_hw *hw, u8 port,
+ enum ice_ptp_tmr_cmd cmd)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_ptp_write_port_cmd_eth56g(hw, port, cmd);
+ case ICE_PHY_E82X:
+@@ -5483,7 +5502,7 @@ static int ice_ptp_port_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+ u32 port;
+
+ /* PHY models which can program all ports simultaneously */
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_E810:
+ return ice_ptp_port_cmd_e810(hw, cmd);
+ default:
+@@ -5562,7 +5581,7 @@ int ice_ptp_init_time(struct ice_hw *hw, u64 time)
+
+ /* PHY timers */
+ /* Fill Rx and Tx ports and send msg to PHY */
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_ptp_prep_phy_time_eth56g(hw,
+ (u32)(time & 0xFFFFFFFF));
+@@ -5608,7 +5627,7 @@ int ice_ptp_write_incval(struct ice_hw *hw, u64 incval)
+ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), lower_32_bits(incval));
+ wr32(hw, GLTSYN_SHADJ_H(tmr_idx), upper_32_bits(incval));
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_ptp_prep_phy_incval_eth56g(hw, incval);
+ break;
+@@ -5677,7 +5696,7 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
+ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), 0);
+ wr32(hw, GLTSYN_SHADJ_H(tmr_idx), adj);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_ptp_prep_phy_adj_eth56g(hw, adj);
+ break;
+@@ -5710,7 +5729,7 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
+ */
+ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_read_ptp_tstamp_eth56g(hw, block, idx, tstamp);
+ case ICE_PHY_E810:
+@@ -5740,7 +5759,7 @@ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
+ */
+ int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_clear_ptp_tstamp_eth56g(hw, block, idx);
+ case ICE_PHY_E810:
+@@ -5803,7 +5822,7 @@ static int ice_get_pf_c827_idx(struct ice_hw *hw, u8 *idx)
+ */
+ void ice_ptp_reset_ts_memory(struct ice_hw *hw)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ ice_ptp_reset_ts_memory_eth56g(hw);
+ break;
+@@ -5832,7 +5851,7 @@ int ice_ptp_init_phc(struct ice_hw *hw)
+ /* Clear event err indications for auxiliary pins */
+ (void)rd32(hw, GLTSYN_STAT(src_idx));
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_ptp_init_phc_eth56g(hw);
+ case ICE_PHY_E810:
+@@ -5857,7 +5876,7 @@ int ice_ptp_init_phc(struct ice_hw *hw)
+ */
+ int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_get_phy_tx_tstamp_ready_eth56g(hw, block,
+ tstamp_ready);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 4c8b8457134427..3499062218b59e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -452,6 +452,11 @@ static inline u64 ice_get_base_incval(struct ice_hw *hw)
+ }
+ }
+
++static inline bool ice_is_dual(struct ice_hw *hw)
++{
++ return !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_DUAL_M);
++}
++
+ #define PFTSYN_SEM_BYTES 4
+
+ #define ICE_PTP_CLOCK_INDEX_0 0x00
+diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
+index 45768796691fec..609f31e0dfdede 100644
+--- a/drivers/net/ethernet/intel/ice/ice_type.h
++++ b/drivers/net/ethernet/intel/ice/ice_type.h
+@@ -850,7 +850,6 @@ struct ice_mbx_data {
+
+ struct ice_eth56g_params {
+ u8 num_phys;
+- u8 phy_addr[2];
+ bool onestep_ena;
+ bool sfd_ena;
+ u32 peer_delay;
+@@ -881,7 +880,6 @@ struct ice_ptp_hw {
+ union ice_phy_params phy;
+ u8 num_lports;
+ u8 ports_per_phy;
+- bool is_2x50g_muxed_topo;
+ };
+
+ /* Port hardware description */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index ca92e518be7669..1baf8933a07cb0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -724,6 +724,12 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ /* check esn */
+ if (x->props.flags & XFRM_STATE_ESN)
+ mlx5e_ipsec_update_esn_state(sa_entry);
++ else
++ /* According to RFC4303, section "3.3.3. Sequence Number Generation",
++ * the first packet sent using a given SA will contain a sequence
++ * number of 1.
++ */
++ sa_entry->esn_state.esn = 1;
+
+ mlx5e_ipsec_build_accel_xfrm_attrs(sa_entry, &sa_entry->attrs);
+
+@@ -768,9 +774,12 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ MLX5_IPSEC_RESCHED);
+
+ if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+- x->props.mode == XFRM_MODE_TUNNEL)
+- xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
+- MLX5E_IPSEC_TUNNEL_SA);
++ x->props.mode == XFRM_MODE_TUNNEL) {
++ xa_lock_bh(&ipsec->sadb);
++ __xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
++ MLX5E_IPSEC_TUNNEL_SA);
++ xa_unlock_bh(&ipsec->sadb);
++ }
+
+ out:
+ x->xso.offload_handle = (unsigned long)sa_entry;
+@@ -797,7 +806,6 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
+ {
+ struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+- struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
+ struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+ struct mlx5e_ipsec_sa_entry *old;
+
+@@ -806,12 +814,6 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
+
+ old = xa_erase_bh(&ipsec->sadb, sa_entry->ipsec_obj_id);
+ WARN_ON(old != sa_entry);
+-
+- if (attrs->mode == XFRM_MODE_TUNNEL &&
+- attrs->type == XFRM_DEV_OFFLOAD_PACKET)
+- /* Make sure that no ARP requests are running in parallel */
+- flush_workqueue(ipsec->wq);
+-
+ }
+
+ static void mlx5e_xfrm_free_state(struct xfrm_state *x)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+index e51b03d4c717f1..57861d34d46f85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+@@ -1718,23 +1718,21 @@ static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
+ goto err_alloc;
+ }
+
+- if (attrs->family == AF_INET)
+- setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
+- else
+- setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+-
+ setup_fte_no_frags(spec);
+ setup_fte_upper_proto_match(spec, &attrs->upspec);
+
+ switch (attrs->type) {
+ case XFRM_DEV_OFFLOAD_CRYPTO:
++ if (attrs->family == AF_INET)
++ setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
++ else
++ setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+ setup_fte_spi(spec, attrs->spi, false);
+ setup_fte_esp(spec);
+ setup_fte_reg_a(spec);
+ break;
+ case XFRM_DEV_OFFLOAD_PACKET:
+- if (attrs->reqid)
+- setup_fte_reg_c4(spec, attrs->reqid);
++ setup_fte_reg_c4(spec, attrs->reqid);
+ err = setup_pkt_reformat(ipsec, attrs, &flow_act);
+ if (err)
+ goto err_pkt_reformat;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+index 53cfa39188cb0e..820debf3fbbf22 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+@@ -91,8 +91,9 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
+ EXPORT_SYMBOL_GPL(mlx5_ipsec_device_caps);
+
+ static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn,
+- struct mlx5_accel_esp_xfrm_attrs *attrs)
++ struct mlx5e_ipsec_sa_entry *sa_entry)
+ {
++ struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
+ void *aso_ctx;
+
+ aso_ctx = MLX5_ADDR_OF(ipsec_obj, obj, ipsec_aso);
+@@ -120,8 +121,12 @@ static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn,
+ * active.
+ */
+ MLX5_SET(ipsec_obj, obj, aso_return_reg, MLX5_IPSEC_ASO_REG_C_4_5);
+- if (attrs->dir == XFRM_DEV_OFFLOAD_OUT)
++ if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) {
+ MLX5_SET(ipsec_aso, aso_ctx, mode, MLX5_IPSEC_ASO_INC_SN);
++ if (!attrs->replay_esn.trigger)
++ MLX5_SET(ipsec_aso, aso_ctx, mode_parameter,
++ sa_entry->esn_state.esn);
++ }
+
+ if (attrs->lft.hard_packet_limit != XFRM_INF) {
+ MLX5_SET(ipsec_aso, aso_ctx, remove_flow_pkt_cnt,
+@@ -175,7 +180,7 @@ static int mlx5_create_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry)
+
+ res = &mdev->mlx5e_res.hw_objs;
+ if (attrs->type == XFRM_DEV_OFFLOAD_PACKET)
+- mlx5e_ipsec_packet_setup(obj, res->pdn, attrs);
++ mlx5e_ipsec_packet_setup(obj, res->pdn, sa_entry);
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+ if (!err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 2eabfcc247c6ae..0ce999706d412a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2709,6 +2709,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
+ break;
+ case MLX5_FLOW_NAMESPACE_RDMA_TX:
+ root_ns = steering->rdma_tx_root_ns;
++ prio = RDMA_TX_BYPASS_PRIO;
+ break;
+ case MLX5_FLOW_NAMESPACE_RDMA_RX_COUNTERS:
+ root_ns = steering->rdma_rx_root_ns;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+index ab2717012b79b5..39e80704b1c425 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+@@ -530,7 +530,7 @@ int mlx5_lag_port_sel_create(struct mlx5_lag *ldev,
+ set_tt_map(port_sel, hash_type);
+ err = mlx5_lag_create_definers(ldev, hash_type, ports);
+ if (err)
+- return err;
++ goto clear_port_sel;
+
+ if (port_sel->tunnel) {
+ err = mlx5_lag_create_inner_ttc_table(ldev);
+@@ -549,6 +549,8 @@ int mlx5_lag_port_sel_create(struct mlx5_lag *ldev,
+ mlx5_destroy_ttc_table(port_sel->inner.ttc);
+ destroy_definers:
+ mlx5_lag_destroy_definers(ldev);
++clear_port_sel:
++ memset(port_sel, 0, sizeof(*port_sel));
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+index a96be98be032f5..b96909fbeb12de 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+@@ -257,6 +257,7 @@ static int mlx5_sf_add(struct mlx5_core_dev *dev, struct mlx5_sf_table *table,
+ return 0;
+
+ esw_err:
++ mlx5_sf_function_id_erase(table, sf);
+ mlx5_sf_free(table, sf);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wc.c b/drivers/net/ethernet/mellanox/mlx5/core/wc.c
+index 1bed75eca97db8..740b719e7072df 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wc.c
+@@ -382,6 +382,7 @@ static void mlx5_core_test_wc(struct mlx5_core_dev *mdev)
+
+ bool mlx5_wc_support_get(struct mlx5_core_dev *mdev)
+ {
++ struct mutex *wc_state_lock = &mdev->wc_state_lock;
+ struct mlx5_core_dev *parent = NULL;
+
+ if (!MLX5_CAP_GEN(mdev, bf)) {
+@@ -400,32 +401,31 @@ bool mlx5_wc_support_get(struct mlx5_core_dev *mdev)
+ */
+ goto out;
+
+- mutex_lock(&mdev->wc_state_lock);
+-
+- if (mdev->wc_state != MLX5_WC_STATE_UNINITIALIZED)
+- goto unlock;
+-
+ #ifdef CONFIG_MLX5_SF
+- if (mlx5_core_is_sf(mdev))
++ if (mlx5_core_is_sf(mdev)) {
+ parent = mdev->priv.parent_mdev;
++ wc_state_lock = &parent->wc_state_lock;
++ }
+ #endif
+
+- if (parent) {
+- mutex_lock(&parent->wc_state_lock);
++ mutex_lock(wc_state_lock);
+
++ if (mdev->wc_state != MLX5_WC_STATE_UNINITIALIZED)
++ goto unlock;
++
++ if (parent) {
+ mlx5_core_test_wc(parent);
+
+ mlx5_core_dbg(mdev, "parent set wc_state=%d\n",
+ parent->wc_state);
+ mdev->wc_state = parent->wc_state;
+
+- mutex_unlock(&parent->wc_state_lock);
++ } else {
++ mlx5_core_test_wc(mdev);
+ }
+
+- mlx5_core_test_wc(mdev);
+-
+ unlock:
+- mutex_unlock(&mdev->wc_state_lock);
++ mutex_unlock(wc_state_lock);
+ out:
+ mlx5_core_dbg(mdev, "wc_state=%d\n", mdev->wc_state);
+
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+index 9d97cd281f18e4..c03558adda91eb 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+@@ -458,7 +458,8 @@ int nfp_bpf_event_output(struct nfp_app_bpf *bpf, const void *data,
+ map_id_full = be64_to_cpu(cbe->map_ptr);
+ map_id = map_id_full;
+
+- if (len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size)
++ if (size_add(pkt_size, data_size) > INT_MAX ||
++ len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size)
+ return -EINVAL;
+ if (cbe->hdr.ver != NFP_CCM_ABI_VERSION)
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 907af4651c5534..6f6b0566c65bcb 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -2756,6 +2756,7 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = {
+ .net_features = NETIF_F_RXCSUM,
+ .stats_len = ARRAY_SIZE(ravb_gstrings_stats),
+ .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
++ .tx_max_frame_size = SZ_2K,
+ .rx_max_frame_size = SZ_2K,
+ .rx_buffer_size = SZ_2K +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 8d02d2b2142937..dc5e247ca5d1a6 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -127,15 +127,15 @@ struct cpsw_ale_dev_id {
+
+ static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
+ {
+- int idx, idx2;
++ int idx, idx2, index;
+ u32 hi_val = 0;
+
+ idx = start / 32;
+ idx2 = (start + bits - 1) / 32;
+ /* Check if bits to be fetched exceed a word */
+ if (idx != idx2) {
+- idx2 = 2 - idx2; /* flip */
+- hi_val = ale_entry[idx2] << ((idx2 * 32) - start);
++ index = 2 - idx2; /* flip */
++ hi_val = ale_entry[index] << ((idx2 * 32) - start);
+ }
+ start -= idx * 32;
+ idx = 2 - idx; /* flip */
+@@ -145,16 +145,16 @@ static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
+ static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits,
+ u32 value)
+ {
+- int idx, idx2;
++ int idx, idx2, index;
+
+ value &= BITMASK(bits);
+ idx = start / 32;
+ idx2 = (start + bits - 1) / 32;
+ /* Check if bits to be set exceed a word */
+ if (idx != idx2) {
+- idx2 = 2 - idx2; /* flip */
+- ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32)));
+- ale_entry[idx2] |= (value >> ((idx2 * 32) - start));
++ index = 2 - idx2; /* flip */
++ ale_entry[index] &= ~(BITMASK(bits + start - (idx2 * 32)));
++ ale_entry[index] |= (value >> ((idx2 * 32) - start));
+ }
+ start -= idx * 32;
+ idx = 2 - idx; /* flip */
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 1fcbcaa85ebdb4..de10a2d08c428e 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2056,6 +2056,12 @@ axienet_ethtools_set_coalesce(struct net_device *ndev,
+ return -EBUSY;
+ }
+
++ if (ecoalesce->rx_max_coalesced_frames > 255 ||
++ ecoalesce->tx_max_coalesced_frames > 255) {
++ NL_SET_ERR_MSG(extack, "frames must be less than 256");
++ return -EINVAL;
++ }
++
+ if (ecoalesce->rx_max_coalesced_frames)
+ lp->coalesce_count_rx = ecoalesce->rx_max_coalesced_frames;
+ if (ecoalesce->rx_coalesce_usecs)
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 70f981887518aa..47406ce9901612 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1526,8 +1526,8 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
+ goto out_encap;
+ }
+
+- gn = net_generic(dev_net(dev), gtp_net_id);
+- list_add_rcu(>p->list, &gn->gtp_dev_list);
++ gn = net_generic(src_net, gtp_net_id);
++ list_add(>p->list, &gn->gtp_dev_list);
+ dev->priv_destructor = gtp_destructor;
+
+ netdev_dbg(dev, "registered new GTP interface\n");
+@@ -1553,7 +1553,7 @@ static void gtp_dellink(struct net_device *dev, struct list_head *head)
+ 		hlist_for_each_entry_safe(pctx, next, &gtp->tid_hash[i], hlist_tid)
+ pdp_context_delete(pctx);
+
+-	list_del_rcu(&gtp->list);
++	list_del(&gtp->list);
+ unregister_netdevice_queue(dev, head);
+ }
+
+@@ -2279,16 +2279,19 @@ static int gtp_genl_dump_pdp(struct sk_buff *skb,
+ struct gtp_dev *last_gtp = (struct gtp_dev *)cb->args[2], *gtp;
+ int i, j, bucket = cb->args[0], skip = cb->args[1];
+ struct net *net = sock_net(skb->sk);
++ struct net_device *dev;
+ struct pdp_ctx *pctx;
+- struct gtp_net *gn;
+-
+- gn = net_generic(net, gtp_net_id);
+
+ if (cb->args[4])
+ return 0;
+
+ rcu_read_lock();
+- list_for_each_entry_rcu(gtp, &gn->gtp_dev_list, list) {
++ for_each_netdev_rcu(net, dev) {
++		if (dev->rtnl_link_ops != &gtp_link_ops)
++ continue;
++
++ gtp = netdev_priv(dev);
++
+ if (last_gtp && last_gtp != gtp)
+ continue;
+ else
+@@ -2483,9 +2486,14 @@ static void __net_exit gtp_net_exit_batch_rtnl(struct list_head *net_list,
+
+ list_for_each_entry(net, net_list, exit_list) {
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+- struct gtp_dev *gtp;
++ struct gtp_dev *gtp, *gtp_next;
++ struct net_device *dev;
++
++ for_each_netdev(net, dev)
++			if (dev->rtnl_link_ops == &gtp_link_ops)
++ gtp_dellink(dev, dev_to_kill);
+
+- list_for_each_entry(gtp, &gn->gtp_dev_list, list)
++ list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list)
+ gtp_dellink(gtp->dev, dev_to_kill);
+ }
+ }
+diff --git a/drivers/net/pfcp.c b/drivers/net/pfcp.c
+index 69434fd13f9612..68d0d9e92a2209 100644
+--- a/drivers/net/pfcp.c
++++ b/drivers/net/pfcp.c
+@@ -206,8 +206,8 @@ static int pfcp_newlink(struct net *net, struct net_device *dev,
+ goto exit_del_pfcp_sock;
+ }
+
+- pn = net_generic(dev_net(dev), pfcp_net_id);
+- list_add_rcu(&pfcp->list, &pn->pfcp_dev_list);
++ pn = net_generic(net, pfcp_net_id);
++ list_add(&pfcp->list, &pn->pfcp_dev_list);
+
+ netdev_dbg(dev, "registered new PFCP interface\n");
+
+@@ -224,7 +224,7 @@ static void pfcp_dellink(struct net_device *dev, struct list_head *head)
+ {
+ struct pfcp_dev *pfcp = netdev_priv(dev);
+
+- list_del_rcu(&pfcp->list);
++ list_del(&pfcp->list);
+ unregister_netdevice_queue(dev, head);
+ }
+
+@@ -247,11 +247,16 @@ static int __net_init pfcp_net_init(struct net *net)
+ static void __net_exit pfcp_net_exit(struct net *net)
+ {
+ struct pfcp_net *pn = net_generic(net, pfcp_net_id);
+- struct pfcp_dev *pfcp;
++ struct pfcp_dev *pfcp, *pfcp_next;
++ struct net_device *dev;
+ LIST_HEAD(list);
+
+ rtnl_lock();
+- list_for_each_entry(pfcp, &pn->pfcp_dev_list, list)
++ for_each_netdev(net, dev)
++ if (dev->rtnl_link_ops == &pfcp_link_ops)
++ pfcp_dellink(dev, &list);
++
++ list_for_each_entry_safe(pfcp, pfcp_next, &pn->pfcp_dev_list, list)
+ pfcp_dellink(pfcp->dev, &list);
+
+ unregister_netdevice_many(&list);
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 0bda83d0fc3e08..eaf31c823cbe88 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -36,7 +36,7 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
+ */
+ id->nsfeat |= 1 << 4;
+ /* NPWG = Namespace Preferred Write Granularity. 0's based */
+- id->npwg = lpp0b;
++ id->npwg = to0based(bdev_io_min(bdev) / bdev_logical_block_size(bdev));
+ /* NPWA = Namespace Preferred Write Alignment. 0's based */
+ id->npwa = id->npwg;
+ /* NPDG = Namespace Preferred Deallocate Granularity. 0's based */
+diff --git a/drivers/platform/x86/dell/dell-uart-backlight.c b/drivers/platform/x86/dell/dell-uart-backlight.c
+index 3995f90add4568..c45bc332af7a02 100644
+--- a/drivers/platform/x86/dell/dell-uart-backlight.c
++++ b/drivers/platform/x86/dell/dell-uart-backlight.c
+@@ -283,6 +283,9 @@ static int dell_uart_bl_serdev_probe(struct serdev_device *serdev)
+ init_waitqueue_head(&dell_bl->wait_queue);
+ dell_bl->dev = dev;
+
++ serdev_device_set_drvdata(serdev, dell_bl);
++ serdev_device_set_client_ops(serdev, &dell_uart_bl_serdev_ops);
++
+ ret = devm_serdev_device_open(dev, serdev);
+ if (ret)
+ return dev_err_probe(dev, ret, "opening UART device\n");
+@@ -290,8 +293,6 @@ static int dell_uart_bl_serdev_probe(struct serdev_device *serdev)
+ /* 9600 bps, no flow control, these are the default but set them to be sure */
+ serdev_device_set_baudrate(serdev, 9600);
+ serdev_device_set_flow_control(serdev, false);
+- serdev_device_set_drvdata(serdev, dell_bl);
+- serdev_device_set_client_ops(serdev, &dell_uart_bl_serdev_ops);
+
+ get_version[0] = DELL_SOF(GET_CMD_LEN);
+ get_version[1] = CMD_GET_VERSION;
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index 1e46e30dae9669..dbcd3087aaa4b0 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -804,6 +804,7 @@ EXPORT_SYMBOL_GPL(isst_if_cdev_unregister);
+ static const struct x86_cpu_id isst_cpu_ids[] = {
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, SST_HPM_SUPPORTED),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, SST_HPM_SUPPORTED),
++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, SST_HPM_SUPPORTED),
+ X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, 0),
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, SST_HPM_SUPPORTED),
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, SST_HPM_SUPPORTED),
+diff --git a/drivers/platform/x86/intel/tpmi_power_domains.c b/drivers/platform/x86/intel/tpmi_power_domains.c
+index 0609a8320f7ec1..12fb0943b5dc37 100644
+--- a/drivers/platform/x86/intel/tpmi_power_domains.c
++++ b/drivers/platform/x86/intel/tpmi_power_domains.c
+@@ -81,6 +81,7 @@ static const struct x86_cpu_id tpmi_cpu_ids[] = {
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, NULL),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, NULL),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, NULL),
++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, NULL),
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, NULL),
+ X86_MATCH_VFM(INTEL_PANTHERCOVE_X, NULL),
+ {}
+diff --git a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+index d525bdc8ca9b3f..32d9b6009c4229 100644
+--- a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
++++ b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+@@ -199,14 +199,15 @@ static int yt2_1380_fc_serdev_probe(struct serdev_device *serdev)
+ if (ret)
+ return ret;
+
++ serdev_device_set_drvdata(serdev, fc);
++ serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops);
++
+ ret = devm_serdev_device_open(dev, serdev);
+ if (ret)
+ return dev_err_probe(dev, ret, "opening UART device\n");
+
+ serdev_device_set_baudrate(serdev, 600);
+ serdev_device_set_flow_control(serdev, false);
+- serdev_device_set_drvdata(serdev, fc);
+- serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops);
+
+ ret = devm_extcon_register_notifier_all(dev, fc->extcon, &fc->nb);
+ if (ret)
+diff --git a/drivers/pmdomain/imx/imx8mp-blk-ctrl.c b/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
+index 77e889165eed3c..a19e806bb14726 100644
+--- a/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
+@@ -770,7 +770,7 @@ static void imx8mp_blk_ctrl_remove(struct platform_device *pdev)
+
+ of_genpd_del_provider(pdev->dev.of_node);
+
+- for (i = 0; bc->onecell_data.num_domains; i++) {
++ for (i = 0; i < bc->onecell_data.num_domains; i++) {
+ struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i];
+
+ pm_genpd_remove(&domain->genpd);
+diff --git a/drivers/reset/reset-rzg2l-usbphy-ctrl.c b/drivers/reset/reset-rzg2l-usbphy-ctrl.c
+index 1cd157f4f03b47..4e2ac1f0060c0d 100644
+--- a/drivers/reset/reset-rzg2l-usbphy-ctrl.c
++++ b/drivers/reset/reset-rzg2l-usbphy-ctrl.c
+@@ -176,6 +176,7 @@ static int rzg2l_usbphy_ctrl_probe(struct platform_device *pdev)
+ vdev->dev.parent = dev;
+ priv->vdev = vdev;
+
++ device_set_of_node_from_dev(&vdev->dev, dev);
+ error = platform_device_add(vdev);
+ if (error)
+ goto err_device_put;
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 05b936ad353be7..6cc9e61cca07de 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -10589,14 +10589,17 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ }
+
+ /*
+- * Set the default power management level for runtime and system PM.
++ * Set the default power management level for runtime and system PM if
++ * not set by the host controller drivers.
+ * Default power saving mode is to keep UFS link in Hibern8 state
+ * and UFS device in sleep state.
+ */
+- hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ if (!hba->rpm_lvl)
++ hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+ UFS_SLEEP_PWR_MODE,
+ UIC_LINK_HIBERN8_STATE);
+- hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ if (!hba->spm_lvl)
++ hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+ UFS_SLEEP_PWR_MODE,
+ UIC_LINK_HIBERN8_STATE);
+
+diff --git a/fs/afs/addr_prefs.c b/fs/afs/addr_prefs.c
+index a189ff8a5034e0..c0384201b8feb5 100644
+--- a/fs/afs/addr_prefs.c
++++ b/fs/afs/addr_prefs.c
+@@ -413,8 +413,10 @@ int afs_proc_addr_prefs_write(struct file *file, char *buf, size_t size)
+
+ do {
+ argc = afs_split_string(&buf, argv, ARRAY_SIZE(argv));
+- if (argc < 0)
+- return argc;
++ if (argc < 0) {
++ ret = argc;
++ goto done;
++ }
+ if (argc < 2)
+ goto inval;
+
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 0c4d14c59ebec5..395b8b880ce786 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -797,6 +797,10 @@ static int get_canonical_dev_path(const char *dev_path, char *canonical)
+ if (ret)
+ goto out;
+ resolved_path = d_path(&path, path_buf, PATH_MAX);
++ if (IS_ERR(resolved_path)) {
++ ret = PTR_ERR(resolved_path);
++ goto out;
++ }
+ ret = strscpy(canonical, resolved_path, PATH_MAX);
+ out:
+ kfree(path_buf);
+diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
+index 89b11336a83697..1806bff8e59bc3 100644
+--- a/fs/cachefiles/daemon.c
++++ b/fs/cachefiles/daemon.c
+@@ -15,6 +15,7 @@
+ #include <linux/namei.h>
+ #include <linux/poll.h>
+ #include <linux/mount.h>
++#include <linux/security.h>
+ #include <linux/statfs.h>
+ #include <linux/ctype.h>
+ #include <linux/string.h>
+@@ -576,7 +577,7 @@ static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args)
+ */
+ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
+ {
+- char *secctx;
++ int err;
+
+ _enter(",%s", args);
+
+@@ -585,16 +586,16 @@ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
+ return -EINVAL;
+ }
+
+- if (cache->secctx) {
++ if (cache->have_secid) {
+ pr_err("Second security context specified\n");
+ return -EINVAL;
+ }
+
+- secctx = kstrdup(args, GFP_KERNEL);
+- if (!secctx)
+- return -ENOMEM;
++ err = security_secctx_to_secid(args, strlen(args), &cache->secid);
++ if (err)
++ return err;
+
+- cache->secctx = secctx;
++ cache->have_secid = true;
+ return 0;
+ }
+
+@@ -820,7 +821,6 @@ static void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
+ put_cred(cache->cache_cred);
+
+ kfree(cache->rootdirname);
+- kfree(cache->secctx);
+ kfree(cache->tag);
+
+ _leave("");
+diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
+index 7b99bd98de75b8..38c236e38cef85 100644
+--- a/fs/cachefiles/internal.h
++++ b/fs/cachefiles/internal.h
+@@ -122,7 +122,6 @@ struct cachefiles_cache {
+ #define CACHEFILES_STATE_CHANGED 3 /* T if state changed (poll trigger) */
+ #define CACHEFILES_ONDEMAND_MODE 4 /* T if in on-demand read mode */
+ char *rootdirname; /* name of cache root directory */
+- char *secctx; /* LSM security context */
+ char *tag; /* cache binding tag */
+ refcount_t unbind_pincount;/* refcount to do daemon unbind */
+ struct xarray reqs; /* xarray of pending on-demand requests */
+@@ -130,6 +129,8 @@ struct cachefiles_cache {
+ struct xarray ondemand_ids; /* xarray for ondemand_id allocation */
+ u32 ondemand_id_next;
+ u32 msg_id_next;
++ u32 secid; /* LSM security id */
++ bool have_secid; /* whether "secid" was set */
+ };
+
+ static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)
+diff --git a/fs/cachefiles/security.c b/fs/cachefiles/security.c
+index fe777164f1d894..fc6611886b3b5e 100644
+--- a/fs/cachefiles/security.c
++++ b/fs/cachefiles/security.c
+@@ -18,7 +18,7 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
+ struct cred *new;
+ int ret;
+
+- _enter("{%s}", cache->secctx);
++ _enter("{%u}", cache->have_secid ? cache->secid : 0);
+
+ new = prepare_kernel_cred(current);
+ if (!new) {
+@@ -26,8 +26,8 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
+ goto error;
+ }
+
+- if (cache->secctx) {
+- ret = set_security_override_from_ctx(new, cache->secctx);
++ if (cache->have_secid) {
++ ret = set_security_override(new, cache->secid);
+ if (ret < 0) {
+ put_cred(new);
+ pr_err("Security denies permission to nominate security context: error %d\n",
+diff --git a/fs/file.c b/fs/file.c
+index eb093e73697206..4cb952541dd036 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -21,6 +21,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/close_range.h>
+ #include <net/sock.h>
++#include <linux/init_task.h>
+
+ #include "internal.h"
+
+diff --git a/fs/hfs/super.c b/fs/hfs/super.c
+index eeac99765f0d61..cf13b5cc108488 100644
+--- a/fs/hfs/super.c
++++ b/fs/hfs/super.c
+@@ -419,11 +419,13 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ goto bail_no_root;
+ res = hfs_cat_find_brec(sb, HFS_ROOT_CNID, &fd);
+ if (!res) {
+- if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) {
++ if (fd.entrylength != sizeof(rec.dir)) {
+ res = -EIO;
+ goto bail_hfs_find;
+ }
+ hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength);
++ if (rec.type != HFS_CDR_DIR)
++ res = -EIO;
+ }
+ if (res)
+ goto bail_hfs_find;
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 25d1ede6bb0eb0..1bad460275ebe2 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1138,7 +1138,7 @@ static void iomap_write_delalloc_scan(struct inode *inode,
+ start_byte, end_byte, iomap, punch);
+
+ /* move offset to start of next folio in range */
+- start_byte = folio_next_index(folio) << PAGE_SHIFT;
++ start_byte = folio_pos(folio) + folio_size(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index e70eb4ea21c038..a44132c986538b 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -249,16 +249,17 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
+
+ /* Deal with the trickiest case: that this subreq is in the middle of a
+ * folio, not touching either edge, but finishes first. In such a
+- * case, we donate to the previous subreq, if there is one, so that the
+- * donation is only handled when that completes - and remove this
+- * subreq from the list.
++ * case, we donate to the previous subreq, if there is one and if it is
++ * contiguous, so that the donation is only handled when that completes
++ * - and remove this subreq from the list.
+ *
+ * If the previous subreq finished first, we will have acquired their
+ * donation and should be able to unlock folios and/or donate nextwards.
+ */
+ if (!subreq->consumed &&
+ !prev_donated &&
+- !list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
++ !list_is_first(&subreq->rreq_link, &rreq->subrequests) &&
++ subreq->start == prev->start + prev->len) {
+ prev = list_prev_entry(subreq, rreq_link);
+ WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
+ subreq->start += subreq->len;
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index b4521b09605881..387a7a176ad84b 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -404,6 +404,8 @@ static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos)
+ if (!iov_iter_count(iter))
+ return acc;
+ }
++
++ cond_resched();
+ }
+
+ return acc;
+diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c
+index 85925ec0051a97..3310d1ad4d0e98 100644
+--- a/fs/qnx6/inode.c
++++ b/fs/qnx6/inode.c
+@@ -179,8 +179,7 @@ static int qnx6_statfs(struct dentry *dentry, struct kstatfs *buf)
+ */
+ static const char *qnx6_checkroot(struct super_block *s)
+ {
+- static char match_root[2][3] = {".\0\0", "..\0"};
+- int i, error = 0;
++ int error = 0;
+ struct qnx6_dir_entry *dir_entry;
+ struct inode *root = d_inode(s->s_root);
+ struct address_space *mapping = root->i_mapping;
+@@ -189,11 +188,9 @@ static const char *qnx6_checkroot(struct super_block *s)
+ if (IS_ERR(folio))
+ return "error reading root directory";
+ dir_entry = kmap_local_folio(folio, 0);
+- for (i = 0; i < 2; i++) {
+- /* maximum 3 bytes - due to match_root limitation */
+- if (strncmp(dir_entry[i].de_fname, match_root[i], 3))
+- error = 1;
+- }
++ if (memcmp(dir_entry[0].de_fname, ".", 2) ||
++ memcmp(dir_entry[1].de_fname, "..", 3))
++ error = 1;
+ folio_release_kmap(folio, dir_entry);
+ if (error)
+ return "error reading root directory.";
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index fe40152b915d82..fb51cdf5520617 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1044,6 +1044,7 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ /* Release netns reference for this server. */
+ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
++ kfree(server->hostname);
+ kfree(server);
+
+ length = atomic_dec_return(&tcpSesAllocCount);
+@@ -1670,8 +1671,6 @@ cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ kfree_sensitive(server->session_key.response);
+ server->session_key.response = NULL;
+ server->session_key.len = 0;
+- kfree(server->hostname);
+- server->hostname = NULL;
+
+ task = xchg(&server->tsk, NULL);
+ if (task)
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index aa1e65ccb61584..6caaa62d2b1f89 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -379,6 +379,7 @@ extern void __init hrtimers_init(void);
+ extern void sysrq_timer_list_show(void);
+
+ int hrtimers_prepare_cpu(unsigned int cpu);
++int hrtimers_cpu_starting(unsigned int cpu);
+ #ifdef CONFIG_HOTPLUG_CPU
+ int hrtimers_cpu_dying(unsigned int cpu);
+ #else
+diff --git a/include/linux/poll.h b/include/linux/poll.h
+index d1ea4f3714a848..fc641b50f1298e 100644
+--- a/include/linux/poll.h
++++ b/include/linux/poll.h
+@@ -41,8 +41,16 @@ typedef struct poll_table_struct {
+
+ static inline void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p)
+ {
+- if (p && p->_qproc && wait_address)
++ if (p && p->_qproc && wait_address) {
+ p->_qproc(filp, wait_address, p);
++ /*
++ * This memory barrier is paired in the wq_has_sleeper().
++ * See the comment above prepare_to_wait(), we need to
++ * ensure that subsequent tests in this thread can't be
++ * reordered with __add_wait_queue() in _qproc() paths.
++ */
++ smp_mb();
++ }
+ }
+
+ /*
+diff --git a/include/linux/pruss_driver.h b/include/linux/pruss_driver.h
+index c9a31c567e85bf..2e18fef1a2e109 100644
+--- a/include/linux/pruss_driver.h
++++ b/include/linux/pruss_driver.h
+@@ -144,32 +144,32 @@ static inline int pruss_release_mem_region(struct pruss *pruss,
+ static inline int pruss_cfg_get_gpmux(struct pruss *pruss,
+ enum pruss_pru_id pru_id, u8 *mux)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_set_gpmux(struct pruss *pruss,
+ enum pruss_pru_id pru_id, u8 mux)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_gpimode(struct pruss *pruss,
+ enum pruss_pru_id pru_id,
+ enum pruss_gpi_mode mode)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_miirt_enable(struct pruss *pruss, bool enable)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_xfr_enable(struct pruss *pruss,
+ enum pru_type pru_type,
+- bool enable);
++ bool enable)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ #endif /* CONFIG_TI_PRUSS */
+diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
+index cb40f1a1d0811d..75342022d14414 100644
+--- a/include/linux/userfaultfd_k.h
++++ b/include/linux/userfaultfd_k.h
+@@ -247,6 +247,13 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
+ vma_is_shmem(vma);
+ }
+
++static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
++{
++ struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
++
++ return uffd_ctx && (uffd_ctx->features & UFFD_FEATURE_EVENT_REMAP) == 0;
++}
++
+ extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
+ extern void dup_userfaultfd_complete(struct list_head *);
+ void dup_userfaultfd_fail(struct list_head *);
+@@ -402,6 +409,11 @@ static inline bool userfaultfd_wp_async(struct vm_area_struct *vma)
+ return false;
+ }
+
++static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
++{
++ return false;
++}
++
+ #endif /* CONFIG_USERFAULTFD */
+
+ static inline bool userfaultfd_wp_use_markers(struct vm_area_struct *vma)
+diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
+index 793e6fd78bc5c0..60a5347922becc 100644
+--- a/include/net/page_pool/helpers.h
++++ b/include/net/page_pool/helpers.h
+@@ -294,7 +294,7 @@ static inline long page_pool_unref_page(struct page *page, long nr)
+
+ static inline void page_pool_ref_netmem(netmem_ref netmem)
+ {
+- atomic_long_inc(&netmem_to_page(netmem)->pp_ref_count);
++ atomic_long_inc(netmem_get_pp_ref_count_ref(netmem));
+ }
+
+ static inline void page_pool_ref_page(struct page *page)
+diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
+index bb8a59c6caa219..d36c857dd24974 100644
+--- a/include/trace/events/mmflags.h
++++ b/include/trace/events/mmflags.h
+@@ -13,6 +13,69 @@
+ * Thus most bits set go first.
+ */
+
++/* These define the values that are enums (the bits) */
++#define TRACE_GFP_FLAGS_GENERAL \
++ TRACE_GFP_EM(DMA) \
++ TRACE_GFP_EM(HIGHMEM) \
++ TRACE_GFP_EM(DMA32) \
++ TRACE_GFP_EM(MOVABLE) \
++ TRACE_GFP_EM(RECLAIMABLE) \
++ TRACE_GFP_EM(HIGH) \
++ TRACE_GFP_EM(IO) \
++ TRACE_GFP_EM(FS) \
++ TRACE_GFP_EM(ZERO) \
++ TRACE_GFP_EM(DIRECT_RECLAIM) \
++ TRACE_GFP_EM(KSWAPD_RECLAIM) \
++ TRACE_GFP_EM(WRITE) \
++ TRACE_GFP_EM(NOWARN) \
++ TRACE_GFP_EM(RETRY_MAYFAIL) \
++ TRACE_GFP_EM(NOFAIL) \
++ TRACE_GFP_EM(NORETRY) \
++ TRACE_GFP_EM(MEMALLOC) \
++ TRACE_GFP_EM(COMP) \
++ TRACE_GFP_EM(NOMEMALLOC) \
++ TRACE_GFP_EM(HARDWALL) \
++ TRACE_GFP_EM(THISNODE) \
++ TRACE_GFP_EM(ACCOUNT) \
++ TRACE_GFP_EM(ZEROTAGS)
++
++#ifdef CONFIG_KASAN_HW_TAGS
++# define TRACE_GFP_FLAGS_KASAN \
++ TRACE_GFP_EM(SKIP_ZERO) \
++ TRACE_GFP_EM(SKIP_KASAN)
++#else
++# define TRACE_GFP_FLAGS_KASAN
++#endif
++
++#ifdef CONFIG_LOCKDEP
++# define TRACE_GFP_FLAGS_LOCKDEP \
++ TRACE_GFP_EM(NOLOCKDEP)
++#else
++# define TRACE_GFP_FLAGS_LOCKDEP
++#endif
++
++#ifdef CONFIG_SLAB_OBJ_EXT
++# define TRACE_GFP_FLAGS_SLAB \
++ TRACE_GFP_EM(NO_OBJ_EXT)
++#else
++# define TRACE_GFP_FLAGS_SLAB
++#endif
++
++#define TRACE_GFP_FLAGS \
++ TRACE_GFP_FLAGS_GENERAL \
++ TRACE_GFP_FLAGS_KASAN \
++ TRACE_GFP_FLAGS_LOCKDEP \
++ TRACE_GFP_FLAGS_SLAB
++
++#undef TRACE_GFP_EM
++#define TRACE_GFP_EM(a) TRACE_DEFINE_ENUM(___GFP_##a##_BIT);
++
++TRACE_GFP_FLAGS
++
++/* Just in case these are ever used */
++TRACE_DEFINE_ENUM(___GFP_UNUSED_BIT);
++TRACE_DEFINE_ENUM(___GFP_LAST_BIT);
++
+ #define gfpflag_string(flag) {(__force unsigned long)flag, #flag}
+
+ #define __def_gfpflag_names \
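The TRACE_GFP_FLAGS hunk above is an X-macro: the flag list is written once and expanded more than once with different definitions of TRACE_GFP_EM (here into TRACE_DEFINE_ENUM() statements), so the expansions can never drift apart. A minimal standalone sketch of the technique, using hypothetical DEMO_* names rather than real GFP bits:

    /* X-macro sketch: one list, two expansions. All names hypothetical. */
    #include <stdio.h>

    #define FLAG_LIST \
            X(READ)   \
            X(WRITE)  \
            X(ASYNC)

    /* First expansion: define the enum bits. */
    #define X(name) DEMO_##name##_BIT,
    enum { FLAG_LIST DEMO_LAST_BIT };
    #undef X

    /* Second expansion: build a name table from the same list. */
    #define X(name) [DEMO_##name##_BIT] = #name,
    static const char *flag_names[] = { FLAG_LIST };
    #undef X

    int main(void)
    {
            for (int i = 0; i < DEMO_LAST_BIT; i++)
                    printf("bit %d = %s\n", i, flag_names[i]);
            return 0;
    }

Adding a flag means touching only FLAG_LIST; both the enum and the name table follow automatically.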
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index d293d52a3e00e1..9ee6c9145b1df9 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2179,7 +2179,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ },
+ [CPUHP_AP_HRTIMERS_DYING] = {
+ .name = "hrtimers:dying",
+- .startup.single = NULL,
++ .startup.single = hrtimers_cpu_starting,
+ .teardown.single = hrtimers_cpu_dying,
+ },
+ [CPUHP_AP_TICK_DYING] = {
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index 383fd43ac61222..7e1340da5acae6 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -89,6 +89,7 @@ find $cpio_dir -type f -print0 |
+
+ # Create archive and try to normalize metadata for reproducibility.
+ tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
++ --exclude=".__afs*" --exclude=".nfs*" \
+ --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
+ -I $XZ -cf $tarfile -C $cpio_dir/ . > /dev/null
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index f928a67a07d29a..4c4681cb9337b4 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2630,6 +2630,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+ {
+ struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx);
+ bool prev_on_scx = prev->sched_class == &ext_sched_class;
++ bool prev_on_rq = prev->scx.flags & SCX_TASK_QUEUED;
+ int nr_loops = SCX_DSP_MAX_LOOPS;
+
+ lockdep_assert_rq_held(rq);
+@@ -2662,8 +2663,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+ * See scx_ops_disable_workfn() for the explanation on the
+ * bypassing test.
+ */
+- if ((prev->scx.flags & SCX_TASK_QUEUED) &&
+- prev->scx.slice && !scx_rq_bypassing(rq)) {
++ if (prev_on_rq && prev->scx.slice && !scx_rq_bypassing(rq)) {
+ rq->scx.flags |= SCX_RQ_BAL_KEEP;
+ goto has_tasks;
+ }
+@@ -2696,6 +2696,10 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+
+ flush_dispatch_buf(rq);
+
++ if (prev_on_rq && prev->scx.slice) {
++ rq->scx.flags |= SCX_RQ_BAL_KEEP;
++ goto has_tasks;
++ }
+ if (rq->scx.local_dsq.nr)
+ goto has_tasks;
+ if (consume_global_dsq(rq))
+@@ -2721,8 +2725,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+ * Didn't find another task to run. Keep running @prev unless
+ * %SCX_OPS_ENQ_LAST is in effect.
+ */
+- if ((prev->scx.flags & SCX_TASK_QUEUED) &&
+- (!static_branch_unlikely(&scx_ops_enq_last) ||
++ if (prev_on_rq && (!static_branch_unlikely(&scx_ops_enq_last) ||
+ scx_rq_bypassing(rq))) {
+ rq->scx.flags |= SCX_RQ_BAL_KEEP;
+ goto has_tasks;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 1ca96c99872f08..60be5f8bbe7115 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4065,7 +4065,11 @@ static void update_cfs_group(struct sched_entity *se)
+ struct cfs_rq *gcfs_rq = group_cfs_rq(se);
+ long shares;
+
+- if (!gcfs_rq)
++ /*
++ * When a group becomes empty, preserve its weight. This matters for
++ * DELAY_DEQUEUE.
++ */
++ if (!gcfs_rq || !gcfs_rq->load.weight)
+ return;
+
+ if (throttled_hierarchy(gcfs_rq))
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index cddcd08ea827f9..ee20f5032a0366 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2156,6 +2156,15 @@ int hrtimers_prepare_cpu(unsigned int cpu)
+ }
+
+ cpu_base->cpu = cpu;
++ hrtimer_cpu_base_init_expiry_lock(cpu_base);
++ return 0;
++}
++
++int hrtimers_cpu_starting(unsigned int cpu)
++{
++ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
++
++ /* Clear out any left over state from a CPU down operation */
+ cpu_base->active_bases = 0;
+ cpu_base->hres_active = 0;
+ cpu_base->hang_detected = 0;
+@@ -2164,7 +2173,6 @@ int hrtimers_prepare_cpu(unsigned int cpu)
+ cpu_base->expires_next = KTIME_MAX;
+ cpu_base->softirq_expires_next = KTIME_MAX;
+ cpu_base->online = 1;
+- hrtimer_cpu_base_init_expiry_lock(cpu_base);
+ return 0;
+ }
+
+@@ -2240,6 +2248,7 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ void __init hrtimers_init(void)
+ {
+ hrtimers_prepare_cpu(smp_processor_id());
++ hrtimers_cpu_starting(smp_processor_id());
+ open_softirq(HRTIMER_SOFTIRQ, hrtimer_run_softirq);
+ }
+
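Taken together with the kernel/cpu.c hunk above, this fix splits hrtimer bring-up in two: hrtimers_prepare_cpu() keeps only the one-time initialization done from the control CPU, while the new hrtimers_cpu_starting() clears state left over from a previous offline operation and runs on the incoming CPU itself (CPUHP_AP_HRTIMERS_DYING runs on the hotplugged CPU); the boot CPU therefore calls both from hrtimers_init(). A hedged sketch of the general prepare/teardown callback pairing, using a dynamic hotplug state; the demo_* names are hypothetical:

    #include <linux/cpuhotplug.h>

    static int demo_cpu_prepare(unsigned int cpu)
    {
            /* Runs on a control CPU before @cpu comes up: do one-time
             * allocation here, but do not reset state that an earlier
             * offline operation may still need. */
            return 0;
    }

    static int demo_cpu_dead(unsigned int cpu)
    {
            /* Runs after @cpu went away: undo demo_cpu_prepare(). */
            return 0;
    }

    static int __init demo_hp_init(void)
    {
            int ret;

            ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "demo:prepare",
                                    demo_cpu_prepare, demo_cpu_dead);
            return ret < 0 ? ret : 0;
    }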
+diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
+index 8d57f7686bb03a..371a62a749aad3 100644
+--- a/kernel/time/timer_migration.c
++++ b/kernel/time/timer_migration.c
+@@ -534,8 +534,13 @@ static void __walk_groups(up_f up, struct tmigr_walk *data,
+ break;
+
+ child = group;
+- group = group->parent;
++ /*
++ * Pairs with the store release on group connection
++ * to make sure group initialization is visible.
++ */
++ group = READ_ONCE(group->parent);
+ data->childmask = child->groupmask;
++ WARN_ON_ONCE(!data->childmask);
+ } while (group);
+ }
+
+@@ -1487,6 +1492,21 @@ static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
+ s.seq = 0;
+ atomic_set(&group->migr_state, s.state);
+
++ /*
++ * If this is a new top-level, prepare its groupmask in advance.
++ * This avoids accidents where yet another new top-level is
++ * created in the future and made visible before the current groupmask.
++ */
++ if (list_empty(&tmigr_level_list[lvl])) {
++ group->groupmask = BIT(0);
++ /*
++ * The previous top level has prepared its groupmask already,
++ * simply account it as the first child.
++ */
++ if (lvl > 0)
++ group->num_children = 1;
++ }
++
+ timerqueue_init_head(&group->events);
+ timerqueue_init(&group->groupevt.nextevt);
+ group->groupevt.nextevt.expires = KTIME_MAX;
+@@ -1550,8 +1570,25 @@ static void tmigr_connect_child_parent(struct tmigr_group *child,
+ raw_spin_lock_irq(&child->lock);
+ raw_spin_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING);
+
+- child->parent = parent;
+- child->groupmask = BIT(parent->num_children++);
++ if (activate) {
++ /*
++ * @child is the old top and @parent the new one. In this
++ * case groupmask is pre-initialized and @child already
++ * accounted, along with its new sibling corresponding to the
++ * CPU going up.
++ */
++ WARN_ON_ONCE(child->groupmask != BIT(0) || parent->num_children != 2);
++ } else {
++ /* Adding @child for the CPU going up to @parent. */
++ child->groupmask = BIT(parent->num_children++);
++ }
++
++ /*
++ * Make sure parent initialization is visible before publishing it to a
++ * racing CPU entering/exiting idle. This RELEASE barrier enforces an
++ * address dependency that pairs with the READ_ONCE() in __walk_groups().
++ */
++ smp_store_release(&child->parent, parent);
+
+ raw_spin_unlock(&parent->lock);
+ raw_spin_unlock_irq(&child->lock);
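The timer_migration.c change is a classic lockless publish: the parent group is fully initialized first, then the child->parent pointer is stored with release semantics, and the walker's READ_ONCE() load creates the address dependency that makes the initialization visible. A minimal sketch of the same pattern; demo_* names are hypothetical, not kernel symbols:

    #include <linux/compiler.h>
    #include <asm/barrier.h>

    struct demo_group {
            unsigned long groupmask;
            struct demo_group *parent;
    };

    static void demo_publish(struct demo_group *child,
                             struct demo_group *parent)
    {
            parent->groupmask = 1UL;        /* init before publish */
            smp_store_release(&child->parent, parent);
    }

    static unsigned long demo_walk(struct demo_group *g)
    {
            struct demo_group *p = READ_ONCE(g->parent);

            /* Address dependency: the load of p->groupmask cannot be
             * reordered before the load of g->parent. */
            return p ? p->groupmask : 0;
    }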
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 56fa431c52af7b..dc83baab85a140 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -3004,7 +3004,7 @@ static inline loff_t folio_seek_hole_data(struct xa_state *xas,
+ if (ops->is_partially_uptodate(folio, offset, bsz) ==
+ seek_data)
+ break;
+- start = (start + bsz) & ~(bsz - 1);
++ start = (start + bsz) & ~((u64)bsz - 1);
+ offset += bsz;
+ } while (offset < folio_size(folio));
+ unlock:
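The one-character filemap.c fix is about integer width: start is a 64-bit offset, but bsz is 32-bit, so ~(bsz - 1) is a 32-bit mask that zero-extends and clears the upper half of the offset for files larger than 4 GiB; casting bsz to u64 first widens the mask. A standalone demonstration of the difference:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t start = 0x100000000ULL + 0x1234;  /* offset > 4 GiB */
            uint32_t bsz = 4096;                       /* block size */

            uint64_t buggy = (start + bsz) & ~(bsz - 1);          /* 32-bit mask */
            uint64_t fixed = (start + bsz) & ~((uint64_t)bsz - 1);/* 64-bit mask */

            printf("buggy: 0x%llx\n", (unsigned long long)buggy); /* 0x2000 */
            printf("fixed: 0x%llx\n", (unsigned long long)fixed); /* 0x100002000 */
            return 0;
    }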
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 7e0f72cd9fd4a0..f127b61f04a825 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2132,6 +2132,16 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
+ return pmd;
+ }
+
++static pmd_t clear_uffd_wp_pmd(pmd_t pmd)
++{
++ if (pmd_present(pmd))
++ pmd = pmd_clear_uffd_wp(pmd);
++ else if (is_swap_pmd(pmd))
++ pmd = pmd_swp_clear_uffd_wp(pmd);
++
++ return pmd;
++}
++
+ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
+ {
+@@ -2170,6 +2180,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
+ }
+ pmd = move_soft_dirty_pmd(pmd);
++ if (vma_has_uffd_without_event_remap(vma))
++ pmd = clear_uffd_wp_pmd(pmd);
+ set_pmd_at(mm, new_addr, new_pmd, pmd);
+ if (force_flush)
+ flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 2fa87b9ecec6c7..4a8a4f3535caf7 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5395,6 +5395,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+ unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte,
+ unsigned long sz)
+ {
++ bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
+ struct hstate *h = hstate_vma(vma);
+ struct mm_struct *mm = vma->vm_mm;
+ spinlock_t *src_ptl, *dst_ptl;
+@@ -5411,7 +5412,18 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+ spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+
+ pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
+- set_huge_pte_at(mm, new_addr, dst_pte, pte, sz);
++
++ if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
++ huge_pte_clear(mm, new_addr, dst_pte, sz);
++ else {
++ if (need_clear_uffd_wp) {
++ if (pte_present(pte))
++ pte = huge_pte_clear_uffd_wp(pte);
++ else if (is_swap_pte(pte))
++ pte = pte_swp_clear_uffd_wp(pte);
++ }
++ set_huge_pte_at(mm, new_addr, dst_pte, pte, sz);
++ }
+
+ if (src_ptl != dst_ptl)
+ spin_unlock(src_ptl);
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 74f5f4c51ab8c8..5f878ee05ff80b 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1071,7 +1071,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
+ pr_debug("%s(0x%px, %zu)\n", __func__, ptr, size);
+
+ if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr))
+- create_object_percpu((__force unsigned long)ptr, size, 0, gfp);
++ create_object_percpu((__force unsigned long)ptr, size, 1, gfp);
+ }
+ EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
+
+diff --git a/mm/mremap.c b/mm/mremap.c
+index dee98ff2bbd644..1b2edd65c2a172 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -138,6 +138,7 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ struct vm_area_struct *new_vma, pmd_t *new_pmd,
+ unsigned long new_addr, bool need_rmap_locks)
+ {
++ bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
+ struct mm_struct *mm = vma->vm_mm;
+ pte_t *old_pte, *new_pte, pte;
+ spinlock_t *old_ptl, *new_ptl;
+@@ -207,7 +208,18 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ force_flush = true;
+ pte = move_pte(pte, old_addr, new_addr);
+ pte = move_soft_dirty_pte(pte);
+- set_pte_at(mm, new_addr, new_pte, pte);
++
++ if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
++ pte_clear(mm, new_addr, new_pte);
++ else {
++ if (need_clear_uffd_wp) {
++ if (pte_present(pte))
++ pte = pte_clear_uffd_wp(pte);
++ else if (is_swap_pte(pte))
++ pte = pte_swp_clear_uffd_wp(pte);
++ }
++ set_pte_at(mm, new_addr, new_pte, pte);
++ }
+ }
+
+ arch_leave_lazy_mmu_mode();
+@@ -269,6 +281,15 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
+ return false;
+
++ /* If this pmd belongs to a uffd vma with remap events disabled, we need
++ * to ensure that the uffd-wp state is cleared from all pgtables. This
++ * means recursing into lower page tables in move_page_tables(), and we
++ * can reuse the existing code if we simply treat the entry as "not
++ * moved".
++ */
++ if (vma_has_uffd_without_event_remap(vma))
++ return false;
++
+ /*
+ * We don't have to worry about the ordering of src and dst
+ * ptlocks because exclusive mmap_lock prevents deadlock.
+@@ -324,6 +345,15 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+ if (WARN_ON_ONCE(!pud_none(*new_pud)))
+ return false;
+
++ /* If this pud belongs to a uffd vma with remap events disabled, we need
++ * to ensure that the uffd-wp state is cleared from all pgtables. This
++ * means recursing into lower page tables in move_page_tables(), and we
++ * can reuse the existing code if we simply treat the entry as "not
++ * moved".
++ */
++ if (vma_has_uffd_without_event_remap(vma))
++ return false;
++
+ /*
+ * We don't have to worry about the ordering of src and dst
+ * ptlocks because exclusive mmap_lock prevents deadlock.
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 67a680e4b484d7..d81d667907448c 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4637,6 +4637,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
+ reset_batch_size(walk);
+ }
+
++ __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
++ stat.nr_demoted);
++
+ item = PGSTEAL_KSWAPD + reclaimer_offset();
+ if (!cgroup_reclaim(sc))
+ __count_vm_events(item, reclaimed);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 54a53fae9e98f5..46da488ff0703f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -11263,6 +11263,7 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
+ bool is_sockarray = map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY;
+ struct sock_reuseport *reuse;
+ struct sock *selected_sk;
++ int err;
+
+ selected_sk = map->ops->map_lookup_elem(map, key);
+ if (!selected_sk)
+@@ -11270,10 +11271,6 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
+
+ reuse = rcu_dereference(selected_sk->sk_reuseport_cb);
+ if (!reuse) {
+- /* Lookup in sock_map can return TCP ESTABLISHED sockets. */
+- if (sk_is_refcounted(selected_sk))
+- sock_put(selected_sk);
+-
+ /* reuseport_array has only sk with non NULL sk_reuseport_cb.
+ * The only (!reuse) case here is - the sk has already been
+ * unhashed (e.g. by close()), so treat it as -ENOENT.
+@@ -11281,24 +11278,33 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
+ * Other maps (e.g. sock_map) do not provide this guarantee and
+ * the sk may never be in the reuseport group to begin with.
+ */
+- return is_sockarray ? -ENOENT : -EINVAL;
++ err = is_sockarray ? -ENOENT : -EINVAL;
++ goto error;
+ }
+
+ if (unlikely(reuse->reuseport_id != reuse_kern->reuseport_id)) {
+ struct sock *sk = reuse_kern->sk;
+
+- if (sk->sk_protocol != selected_sk->sk_protocol)
+- return -EPROTOTYPE;
+- else if (sk->sk_family != selected_sk->sk_family)
+- return -EAFNOSUPPORT;
+-
+- /* Catch all. Likely bound to a different sockaddr. */
+- return -EBADFD;
++ if (sk->sk_protocol != selected_sk->sk_protocol) {
++ err = -EPROTOTYPE;
++ } else if (sk->sk_family != selected_sk->sk_family) {
++ err = -EAFNOSUPPORT;
++ } else {
++ /* Catch all. Likely bound to a different sockaddr. */
++ err = -EBADFD;
++ }
++ goto error;
+ }
+
+ reuse_kern->selected_sk = selected_sk;
+
+ return 0;
++error:
++ /* Lookup in sock_map can return TCP ESTABLISHED sockets. */
++ if (sk_is_refcounted(selected_sk))
++ sock_put(selected_sk);
++
++ return err;
+ }
+
+ static const struct bpf_func_proto sk_select_reuseport_proto = {
+diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
+index b28424ae06d5fa..8614988fc67b9a 100644
+--- a/net/core/netdev-genl-gen.c
++++ b/net/core/netdev-genl-gen.c
+@@ -178,6 +178,16 @@ static const struct genl_multicast_group netdev_nl_mcgrps[] = {
+ [NETDEV_NLGRP_PAGE_POOL] = { "page-pool", },
+ };
+
++static void __netdev_nl_sock_priv_init(void *priv)
++{
++ netdev_nl_sock_priv_init(priv);
++}
++
++static void __netdev_nl_sock_priv_destroy(void *priv)
++{
++ netdev_nl_sock_priv_destroy(priv);
++}
++
+ struct genl_family netdev_nl_family __ro_after_init = {
+ .name = NETDEV_FAMILY_NAME,
+ .version = NETDEV_FAMILY_VERSION,
+@@ -189,6 +199,6 @@ struct genl_family netdev_nl_family __ro_after_init = {
+ .mcgrps = netdev_nl_mcgrps,
+ .n_mcgrps = ARRAY_SIZE(netdev_nl_mcgrps),
+ .sock_priv_size = sizeof(struct list_head),
+- .sock_priv_init = (void *)netdev_nl_sock_priv_init,
+- .sock_priv_destroy = (void *)netdev_nl_sock_priv_destroy,
++ .sock_priv_init = __netdev_nl_sock_priv_init,
++ .sock_priv_destroy = __netdev_nl_sock_priv_destroy,
+ };
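This hunk and the ynl-gen-c.py hunk further down exist for the same reason: with control-flow integrity (kCFI) enabled, an indirect call is checked against the exact function-pointer type, so calling a void (*)(struct list_head *) through a void (*)(void *) slot via a forced cast trips the check at runtime. The generated trampolines give the genetlink core a callee whose type matches the slot exactly. A hedged sketch with hypothetical demo_* names:

    #include <linux/list.h>

    struct demo_ops {
            void (*init)(void *priv);       /* slot type: void (*)(void *) */
    };

    static void demo_priv_init(struct list_head *priv)
    {
            INIT_LIST_HEAD(priv);
    }

    /* Trampoline whose type matches the slot, so a CFI-checked
     * indirect call through .init succeeds. */
    static void __demo_priv_init(void *priv)
    {
            demo_priv_init(priv);
    }

    static const struct demo_ops demo_ops = {
            /* not: .init = (void *)demo_priv_init,  -- CFI violation */
            .init = __demo_priv_init,
    };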
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index 34f68ef74b8f2c..b6db4910359bb5 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -851,6 +851,9 @@ static ssize_t get_imix_entries(const char __user *buffer,
+ unsigned long weight;
+ unsigned long size;
+
++ if (pkt_dev->n_imix_entries >= MAX_IMIX_ENTRIES)
++ return -E2BIG;
++
+ len = num_arg(&buffer[i], max_digits, &size);
+ if (len < 0)
+ return len;
+@@ -880,9 +883,6 @@ static ssize_t get_imix_entries(const char __user *buffer,
+
+ i++;
+ pkt_dev->n_imix_entries++;
+-
+- if (pkt_dev->n_imix_entries > MAX_IMIX_ENTRIES)
+- return -E2BIG;
+ } while (c == ' ');
+
+ return i;
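The pktgen.c change is an ordering fix: the old code wrote the entry first and only then noticed n_imix_entries had run past MAX_IMIX_ENTRIES, so the out-of-bounds slot was already written before the -E2BIG. Moving the check to the top of the loop rejects the input before any write. The shape of the fix as a standalone sketch; DEMO_MAX and the demo_* names are hypothetical:

    #include <errno.h>
    #include <stddef.h>

    #define DEMO_MAX 20

    struct demo_entry { unsigned long size, weight; };

    static int demo_add(struct demo_entry *tbl, size_t *n,
                        unsigned long size, unsigned long weight)
    {
            if (*n >= DEMO_MAX)             /* check first ... */
                    return -E2BIG;

            tbl[*n].size = size;            /* ... then write */
            tbl[*n].weight = weight;
            (*n)++;
            return 0;
    }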
+diff --git a/net/mac802154/iface.c b/net/mac802154/iface.c
+index c0e2da5072bea2..9e4631fade90c9 100644
+--- a/net/mac802154/iface.c
++++ b/net/mac802154/iface.c
+@@ -684,6 +684,10 @@ void ieee802154_if_remove(struct ieee802154_sub_if_data *sdata)
+ ASSERT_RTNL();
+
+ mutex_lock(&sdata->local->iflist_mtx);
++ if (list_empty(&sdata->local->interfaces)) {
++ mutex_unlock(&sdata->local->iflist_mtx);
++ return;
++ }
+ list_del_rcu(&sdata->list);
+ mutex_unlock(&sdata->local->iflist_mtx);
+
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index a62bc874bf1e17..123f3f2972841a 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -607,7 +607,6 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
+ }
+ opts->ext_copy.use_ack = 1;
+ opts->suboptions = OPTION_MPTCP_DSS;
+- WRITE_ONCE(msk->old_wspace, __mptcp_space((struct sock *)msk));
+
+ /* Add kind/length/subtype/flag overhead if mapping is not populated */
+ if (dss_size == 0)
+@@ -1288,7 +1287,7 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th)
+ }
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICT);
+ }
+- return;
++ goto update_wspace;
+ }
+
+ if (rcv_wnd_new != rcv_wnd_old) {
+@@ -1313,6 +1312,9 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th)
+ th->window = htons(new_win);
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDSHARED);
+ }
++
++update_wspace:
++ WRITE_ONCE(msk->old_wspace, tp->rcv_wnd);
+ }
+
+ __sum16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum)
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index a93e661ef5c435..73526f1d768fcb 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -760,10 +760,15 @@ static inline u64 mptcp_data_avail(const struct mptcp_sock *msk)
+
+ static inline bool mptcp_epollin_ready(const struct sock *sk)
+ {
++ u64 data_avail = mptcp_data_avail(mptcp_sk(sk));
++
++ if (!data_avail)
++ return false;
++
+ /* mptcp doesn't have to deal with small skbs in the receive queue,
+- * at it can always coalesce them
++ * as it can always coalesce them
+ */
+- return (mptcp_data_avail(mptcp_sk(sk)) >= sk->sk_rcvlowat) ||
++ return (data_avail >= sk->sk_rcvlowat) ||
+ (mem_cgroup_sockets_enabled && sk->sk_memcg &&
+ mem_cgroup_under_socket_pressure(sk->sk_memcg)) ||
+ READ_ONCE(tcp_memory_pressure);
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index ef0f8f73826f53..4e0842df5234ea 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -289,6 +289,7 @@ enum {
+ ncsi_dev_state_config_sp = 0x0301,
+ ncsi_dev_state_config_cis,
+ ncsi_dev_state_config_oem_gma,
++ ncsi_dev_state_config_apply_mac,
+ ncsi_dev_state_config_clear_vids,
+ ncsi_dev_state_config_svf,
+ ncsi_dev_state_config_ev,
+@@ -322,6 +323,7 @@ struct ncsi_dev_priv {
+ #define NCSI_DEV_RESHUFFLE 4
+ #define NCSI_DEV_RESET 8 /* Reset state of NC */
+ unsigned int gma_flag; /* OEM GMA flag */
++ struct sockaddr pending_mac; /* MAC address received from GMA */
+ spinlock_t lock; /* Protect the NCSI device */
+ unsigned int package_probe_id;/* Current ID during probe */
+ unsigned int package_num; /* Number of packages */
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index 5cf55bde366d18..bf276eaf933075 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -1038,7 +1038,7 @@ static void ncsi_configure_channel(struct ncsi_dev_priv *ndp)
+ : ncsi_dev_state_config_clear_vids;
+ break;
+ case ncsi_dev_state_config_oem_gma:
+- nd->state = ncsi_dev_state_config_clear_vids;
++ nd->state = ncsi_dev_state_config_apply_mac;
+
+ nca.package = np->id;
+ nca.channel = nc->id;
+@@ -1050,10 +1050,22 @@ static void ncsi_configure_channel(struct ncsi_dev_priv *ndp)
+ nca.type = NCSI_PKT_CMD_OEM;
+ ret = ncsi_gma_handler(&nca, nc->version.mf_id);
+ }
+- if (ret < 0)
++ if (ret < 0) {
++ nd->state = ncsi_dev_state_config_clear_vids;
+ schedule_work(&ndp->work);
++ }
+
+ break;
++ case ncsi_dev_state_config_apply_mac:
++ rtnl_lock();
++ ret = dev_set_mac_address(dev, &ndp->pending_mac, NULL);
++ rtnl_unlock();
++ if (ret < 0)
+ netdev_warn(dev, "NCSI: Writing MAC address to device failed\n");
++
++ nd->state = ncsi_dev_state_config_clear_vids;
++
++ fallthrough;
+ case ncsi_dev_state_config_clear_vids:
+ case ncsi_dev_state_config_svf:
+ case ncsi_dev_state_config_ev:
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index e28be33bdf2c48..14bd66909ca455 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -628,16 +628,14 @@ static int ncsi_rsp_handler_snfc(struct ncsi_request *nr)
+ static int ncsi_rsp_handler_oem_gma(struct ncsi_request *nr, int mfr_id)
+ {
+ struct ncsi_dev_priv *ndp = nr->ndp;
++ struct sockaddr *saddr = &ndp->pending_mac;
+ struct net_device *ndev = ndp->ndev.dev;
+ struct ncsi_rsp_oem_pkt *rsp;
+- struct sockaddr saddr;
+ u32 mac_addr_off = 0;
+- int ret = 0;
+
+ /* Get the response header */
+ rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp);
+
+- saddr.sa_family = ndev->type;
+ ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ if (mfr_id == NCSI_OEM_MFR_BCM_ID)
+ mac_addr_off = BCM_MAC_ADDR_OFFSET;
+@@ -646,22 +644,17 @@ static int ncsi_rsp_handler_oem_gma(struct ncsi_request *nr, int mfr_id)
+ else if (mfr_id == NCSI_OEM_MFR_INTEL_ID)
+ mac_addr_off = INTEL_MAC_ADDR_OFFSET;
+
+- memcpy(saddr.sa_data, &rsp->data[mac_addr_off], ETH_ALEN);
++ saddr->sa_family = ndev->type;
++ memcpy(saddr->sa_data, &rsp->data[mac_addr_off], ETH_ALEN);
+ if (mfr_id == NCSI_OEM_MFR_BCM_ID || mfr_id == NCSI_OEM_MFR_INTEL_ID)
+- eth_addr_inc((u8 *)saddr.sa_data);
+- if (!is_valid_ether_addr((const u8 *)saddr.sa_data))
++ eth_addr_inc((u8 *)saddr->sa_data);
++ if (!is_valid_ether_addr((const u8 *)saddr->sa_data))
+ return -ENXIO;
+
+ /* Set the flag for GMA command which should only be called once */
+ ndp->gma_flag = 1;
+
+- rtnl_lock();
+- ret = dev_set_mac_address(ndev, &saddr, NULL);
+- rtnl_unlock();
+- if (ret < 0)
+- netdev_warn(ndev, "NCSI: 'Writing mac address to device failed\n");
+-
+- return ret;
++ return 0;
+ }
+
+ /* Response handler for Mellanox card */
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 16e26001468449..704c858cf2093b 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -934,7 +934,9 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
+ {
+ struct vport *vport = ovs_vport_rcu(dp, out_port);
+
+- if (likely(vport && netif_carrier_ok(vport->dev))) {
++ if (likely(vport &&
++ netif_running(vport->dev) &&
++ netif_carrier_ok(vport->dev))) {
+ u16 mru = OVS_CB(skb)->mru;
+ u32 cutlen = OVS_CB(skb)->cutlen;
+
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index b52b798aa4c292..15724f171b0f96 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -491,6 +491,15 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ */
+ vsk->transport->release(vsk);
+ vsock_deassign_transport(vsk);
++
++ /* transport's release() and destruct() can touch some socket
++ * state, since we are reassigning the socket to a new transport
++ * during vsock_connect(), let's reset these fields to have a
++ * clean state.
++ */
++ sock_reset_flag(sk, SOCK_DONE);
++ sk->sk_state = TCP_CLOSE;
++ vsk->peer_shutdown = 0;
+ }
+
+ /* We increase the module refcnt to prevent the transport unloading
+@@ -870,6 +879,9 @@ EXPORT_SYMBOL_GPL(vsock_create_connected);
+
+ s64 vsock_stream_has_data(struct vsock_sock *vsk)
+ {
++ if (WARN_ON(!vsk->transport))
++ return 0;
++
+ return vsk->transport->stream_has_data(vsk);
+ }
+ EXPORT_SYMBOL_GPL(vsock_stream_has_data);
+@@ -878,6 +890,9 @@ s64 vsock_connectible_has_data(struct vsock_sock *vsk)
+ {
+ struct sock *sk = sk_vsock(vsk);
+
++ if (WARN_ON(!vsk->transport))
++ return 0;
++
+ if (sk->sk_type == SOCK_SEQPACKET)
+ return vsk->transport->seqpacket_has_data(vsk);
+ else
+@@ -887,6 +902,9 @@ EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
+
+ s64 vsock_stream_has_space(struct vsock_sock *vsk)
+ {
++ if (WARN_ON(!vsk->transport))
++ return 0;
++
+ return vsk->transport->stream_has_space(vsk);
+ }
+ EXPORT_SYMBOL_GPL(vsock_stream_has_space);
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 9acc13ab3f822d..7f7de6d8809655 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -26,6 +26,9 @@
+ /* Threshold for detecting small packets to copy */
+ #define GOOD_COPY_LEN 128
+
++static void virtio_transport_cancel_close_work(struct vsock_sock *vsk,
++ bool cancel_timeout);
++
+ static const struct virtio_transport *
+ virtio_transport_get_ops(struct vsock_sock *vsk)
+ {
+@@ -1109,6 +1112,8 @@ void virtio_transport_destruct(struct vsock_sock *vsk)
+ {
+ struct virtio_vsock_sock *vvs = vsk->trans;
+
++ virtio_transport_cancel_close_work(vsk, true);
++
+ kfree(vvs);
+ vsk->trans = NULL;
+ }
+@@ -1204,17 +1209,11 @@ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+ }
+ }
+
+-static void virtio_transport_do_close(struct vsock_sock *vsk,
+- bool cancel_timeout)
++static void virtio_transport_cancel_close_work(struct vsock_sock *vsk,
++ bool cancel_timeout)
+ {
+ struct sock *sk = sk_vsock(vsk);
+
+- sock_set_flag(sk, SOCK_DONE);
+- vsk->peer_shutdown = SHUTDOWN_MASK;
+- if (vsock_stream_has_data(vsk) <= 0)
+- sk->sk_state = TCP_CLOSING;
+- sk->sk_state_change(sk);
+-
+ if (vsk->close_work_scheduled &&
+ (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
+ vsk->close_work_scheduled = false;
+@@ -1226,6 +1225,20 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
+ }
+ }
+
++static void virtio_transport_do_close(struct vsock_sock *vsk,
++ bool cancel_timeout)
++{
++ struct sock *sk = sk_vsock(vsk);
++
++ sock_set_flag(sk, SOCK_DONE);
++ vsk->peer_shutdown = SHUTDOWN_MASK;
++ if (vsock_stream_has_data(vsk) <= 0)
++ sk->sk_state = TCP_CLOSING;
++ sk->sk_state_change(sk);
++
++ virtio_transport_cancel_close_work(vsk, cancel_timeout);
++}
++
+ static void virtio_transport_close_timeout(struct work_struct *work)
+ {
+ struct vsock_sock *vsk =
+@@ -1628,8 +1641,11 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+
+ lock_sock(sk);
+
+- /* Check if sk has been closed before lock_sock */
+- if (sock_flag(sk, SOCK_DONE)) {
++ /* Check if sk has been closed or assigned to another transport before
++ * lock_sock (note: listener sockets are not assigned to any transport)
++ */
++ if (sock_flag(sk, SOCK_DONE) ||
++ (sk->sk_state != TCP_LISTEN && vsk->transport != &t->transport)) {
+ (void)virtio_transport_reset_no_sock(t, skb);
+ release_sock(sk);
+ sock_put(sk);
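The virtio_transport_common.c changes factor the close-work cancellation out of virtio_transport_do_close() so that virtio_transport_destruct() can also cancel a still-pending delayed close before the per-socket state (vvs) is freed, closing a use-after-free window. A hedged sketch of the cancel-before-free pattern; demo_* is hypothetical, and the real code additionally drops a socket reference held by the scheduled work:

    #include <linux/workqueue.h>
    #include <linux/slab.h>

    struct demo_sock {
            struct delayed_work close_work;
            bool close_work_scheduled;
            void *per_sock_state;
    };

    static void demo_destruct(struct demo_sock *ds)
    {
            /* cancel_delayed_work() returns true only if the work was
             * still pending; in that case the bookkeeping done when it
             * was scheduled must be unwound here. */
            if (ds->close_work_scheduled &&
                cancel_delayed_work(&ds->close_work))
                    ds->close_work_scheduled = false;

            kfree(ds->per_sock_state);      /* now safe to free */
    }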
+diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
+index 4aa6e74ec2957b..f201d9eca1df2f 100644
+--- a/net/vmw_vsock/vsock_bpf.c
++++ b/net/vmw_vsock/vsock_bpf.c
+@@ -77,6 +77,7 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ size_t len, int flags, int *addr_len)
+ {
+ struct sk_psock *psock;
++ struct vsock_sock *vsk;
+ int copied;
+
+ psock = sk_psock_get(sk);
+@@ -84,6 +85,13 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ return __vsock_recvmsg(sk, msg, len, flags);
+
+ lock_sock(sk);
++ vsk = vsock_sk(sk);
++
++ if (!vsk->transport) {
++ copied = -ENODEV;
++ goto out;
++ }
++
+ if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+@@ -108,6 +116,7 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+ }
+
++out:
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index 14df15e3569525..105706abf281b2 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -626,6 +626,7 @@ struct aa_profile *aa_alloc_null(struct aa_profile *parent, const char *name,
+
+ /* TODO: ideally we should inherit abi from parent */
+ profile->label.flags |= FLAG_NULL;
++ profile->attach.xmatch = aa_get_pdb(nullpdb);
+ rules = list_first_entry(&profile->rules, typeof(*rules), list);
+ rules->file = aa_get_pdb(nullpdb);
+ rules->policy = aa_get_pdb(nullpdb);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3ed82f98e2de9e..a9f6138b59b0c1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10625,6 +10625,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
++ SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
++ SND_PCI_QUIRK(0x1043, 0x1e83, "ASUS GA605W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10979,6 +10981,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
+ SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+diff --git a/tools/net/ynl/ynl-gen-c.py b/tools/net/ynl/ynl-gen-c.py
+index 717530bc9c52e7..463f1394ab971b 100755
+--- a/tools/net/ynl/ynl-gen-c.py
++++ b/tools/net/ynl/ynl-gen-c.py
+@@ -2361,6 +2361,17 @@ def print_kernel_family_struct_src(family, cw):
+ if not kernel_can_gen_family_struct(family):
+ return
+
++ if 'sock-priv' in family.kernel_family:
++ # Generate "trampolines" to make CFI happy
++ cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_init",
++ [f"{family.c_name}_nl_sock_priv_init(priv);"],
++ ["void *priv"])
++ cw.nl()
++ cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_destroy",
++ [f"{family.c_name}_nl_sock_priv_destroy(priv);"],
++ ["void *priv"])
++ cw.nl()
++
+ cw.block_start(f"struct genl_family {family.ident_name}_nl_family __ro_after_init =")
+ cw.p('.name\t\t= ' + family.fam_key + ',')
+ cw.p('.version\t= ' + family.ver_key + ',')
+@@ -2378,9 +2389,8 @@ def print_kernel_family_struct_src(family, cw):
+ cw.p(f'.n_mcgrps\t= ARRAY_SIZE({family.c_name}_nl_mcgrps),')
+ if 'sock-priv' in family.kernel_family:
+ cw.p(f'.sock_priv_size\t= sizeof({family.kernel_family["sock-priv"]}),')
+- # Force cast here, actual helpers take pointer to the real type.
+- cw.p(f'.sock_priv_init\t= (void *){family.c_name}_nl_sock_priv_init,')
+- cw.p(f'.sock_priv_destroy = (void *){family.c_name}_nl_sock_priv_destroy,')
++ cw.p(f'.sock_priv_init\t= __{family.c_name}_nl_sock_priv_init,')
++ cw.p(f'.sock_priv_destroy = __{family.c_name}_nl_sock_priv_destroy,')
+ cw.block_end(';')
+
+
+diff --git a/tools/testing/selftests/mm/cow.c b/tools/testing/selftests/mm/cow.c
+index 32c6ccc2a6be98..1238e1c5aae150 100644
+--- a/tools/testing/selftests/mm/cow.c
++++ b/tools/testing/selftests/mm/cow.c
+@@ -758,7 +758,7 @@ static void do_run_with_base_page(test_fn fn, bool swapout)
+ }
+
+ /* Populate a base page. */
+- memset(mem, 0, pagesize);
++ memset(mem, 1, pagesize);
+
+ if (swapout) {
+ madvise(mem, pagesize, MADV_PAGEOUT);
+@@ -824,12 +824,12 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run, size_t thpsize)
+ * Try to populate a THP. Touch the first sub-page and test if
+ * we get the last sub-page populated automatically.
+ */
+- mem[0] = 0;
++ mem[0] = 1;
+ if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
+ ksft_test_result_skip("Did not get a THP populated\n");
+ goto munmap;
+ }
+- memset(mem, 0, thpsize);
++ memset(mem, 1, thpsize);
+
+ size = thpsize;
+ switch (thp_run) {
+@@ -1012,7 +1012,7 @@ static void run_with_hugetlb(test_fn fn, const char *desc, size_t hugetlbsize)
+ }
+
+ /* Populate an huge page. */
+- memset(mem, 0, hugetlbsize);
++ memset(mem, 1, hugetlbsize);
+
+ /*
+ * We need a total of two hugetlb pages to handle COW/unsharing
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index 4209b95690394b..414addef9a4514 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -25,6 +25,8 @@
+ #include <sys/types.h>
+ #include <sys/mman.h>
+
++#include <arpa/inet.h>
++
+ #include <netdb.h>
+ #include <netinet/in.h>
+
+@@ -1211,23 +1213,42 @@ static void parse_setsock_options(const char *name)
+ exit(1);
+ }
+
+-void xdisconnect(int fd, int addrlen)
++void xdisconnect(int fd)
+ {
+- struct sockaddr_storage empty;
++ socklen_t addrlen = sizeof(struct sockaddr_storage);
++ struct sockaddr_storage addr, empty;
+ int msec_sleep = 10;
+- int queued = 1;
+- int i;
++ void *raw_addr;
++ int i, cmdlen;
++ char cmd[128];
++
++ /* get the local address and convert it to string */
++ if (getsockname(fd, (struct sockaddr *)&addr, &addrlen) < 0)
++ xerror("getsockname");
++
++ if (addr.ss_family == AF_INET)
++ raw_addr = &(((struct sockaddr_in *)&addr)->sin_addr);
++ else if (addr.ss_family == AF_INET6)
++ raw_addr = &(((struct sockaddr_in6 *)&addr)->sin6_addr);
++ else
++ xerror("bad family");
++
++ strcpy(cmd, "ss -M | grep -q ");
++ cmdlen = strlen(cmd);
++ if (!inet_ntop(addr.ss_family, raw_addr, &cmd[cmdlen],
++ sizeof(cmd) - cmdlen))
++ xerror("inet_ntop");
+
+ shutdown(fd, SHUT_WR);
+
+- /* while until the pending data is completely flushed, the later
++ /*
++ * wait until the pending data is completely flushed and all
++ * the MPTCP sockets reached the closed status.
+ * disconnect will bypass/ignore/drop any pending data.
+ */
+ for (i = 0; ; i += msec_sleep) {
+- if (ioctl(fd, SIOCOUTQ, &queued) < 0)
+- xerror("can't query out socket queue: %d", errno);
+-
+- if (!queued)
++ /* closed socket are not listed by 'ss' */
++ if (system(cmd) != 0)
+ break;
+
+ if (i > poll_timeout)
+@@ -1281,9 +1302,9 @@ int main_loop(void)
+ return ret;
+
+ if (cfg_truncate > 0) {
+- xdisconnect(fd, peer->ai_addrlen);
++ xdisconnect(fd);
+ } else if (--cfg_repeat > 0) {
+- xdisconnect(fd, peer->ai_addrlen);
++ xdisconnect(fd);
+
+ /* the socket could be unblocking at this point, we need the
+ * connect to be blocking
+diff --git a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+index 37d9bf6fb7458d..6f4c3f5a1c5d99 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+@@ -20,7 +20,7 @@ s32 BPF_STRUCT_OPS(ddsp_bogus_dsq_fail_select_cpu, struct task_struct *p,
+ * If we dispatch to a bogus DSQ that will fall back to the
+ * builtin global DSQ, we fail gracefully.
+ */
+- scx_bpf_dispatch_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
++ scx_bpf_dsq_insert_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
+ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+index dffc97d9cdf141..e4a55027778fd0 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+@@ -17,8 +17,8 @@ s32 BPF_STRUCT_OPS(ddsp_vtimelocal_fail_select_cpu, struct task_struct *p,
+
+ if (cpu >= 0) {
+ /* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */
+- scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
+- p->scx.dsq_vtime, 0);
++ scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
++ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+index 6a7db1502c29e1..fbda6bf5467128 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+@@ -43,9 +43,12 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev)
+ if (!p)
+ return;
+
+- target = bpf_get_prandom_u32() % nr_cpus;
++ if (p->nr_cpus_allowed == nr_cpus)
++ target = bpf_get_prandom_u32() % nr_cpus;
++ else
++ target = scx_bpf_task_cpu(p);
+
+- scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
+ bpf_task_release(p);
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.c b/tools/testing/selftests/sched_ext/dsp_local_on.c
+index 472851b5685487..0ff27e57fe4303 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.c
+@@ -34,9 +34,10 @@ static enum scx_test_status run(void *ctx)
+ /* Just sleeping is fine, plenty of scheduling events happening */
+ sleep(1);
+
+- SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_ERROR));
+ bpf_link__destroy(link);
+
++ SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
++
+ return SCX_TEST_PASS;
+ }
+
+@@ -50,7 +51,7 @@ static void cleanup(void *ctx)
+ struct scx_test dsp_local_on = {
+ .name = "dsp_local_on",
+ .description = "Verify we can directly dispatch tasks to a local DSQs "
+- "from osp.dispatch()",
++ "from ops.dispatch()",
+ .setup = setup,
+ .run = run,
+ .cleanup = cleanup,
+diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+index 1efb50d61040ad..a7cf868d5e311d 100644
+--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
++++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+@@ -31,7 +31,7 @@ void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p,
+ /* Can only call from ops.select_cpu() */
+ scx_bpf_select_cpu_dfl(p, 0, 0, &found);
+
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c
+index d75d4faf07f6d5..4bc36182d3ffc2 100644
+--- a/tools/testing/selftests/sched_ext/exit.bpf.c
++++ b/tools/testing/selftests/sched_ext/exit.bpf.c
+@@ -33,7 +33,7 @@ void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags)
+ if (exit_point == EXIT_ENQUEUE)
+ EXIT_CLEANLY();
+
+- scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+@@ -41,7 +41,7 @@ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+ if (exit_point == EXIT_DISPATCH)
+ EXIT_CLEANLY();
+
+- scx_bpf_consume(DSQ_ID);
++ scx_bpf_dsq_move_to_local(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(exit_enable, struct task_struct *p)
+diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
+index 4d4cd8d966dba6..430f5e13bf5544 100644
+--- a/tools/testing/selftests/sched_ext/maximal.bpf.c
++++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
+@@ -12,6 +12,8 @@
+
+ char _license[] SEC("license") = "GPL";
+
++#define DSQ_ID 0
++
+ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
+ u64 wake_flags)
+ {
+@@ -20,7 +22,7 @@ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
+
+ void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags)
+ {
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+@@ -28,7 +30,7 @@ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+
+ void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev)
+ {
+- scx_bpf_consume(SCX_DSQ_GLOBAL);
++ scx_bpf_dsq_move_to_local(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags)
+@@ -123,7 +125,7 @@ void BPF_STRUCT_OPS(maximal_cgroup_set_weight, struct cgroup *cgrp, u32 weight)
+
+ s32 BPF_STRUCT_OPS_SLEEPABLE(maximal_init)
+ {
+- return 0;
++ return scx_bpf_create_dsq(DSQ_ID, -1);
+ }
+
+ void BPF_STRUCT_OPS(maximal_exit, struct scx_exit_info *info)
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+index f171ac47097060..13d0f5be788d12 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_enqueue, struct task_struct *p,
+ }
+ scx_bpf_put_idle_cpumask(idle_mask);
+
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+index 9efdbb7da92887..815f1d5d61ac43 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+@@ -67,7 +67,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p,
+ saw_local = true;
+ }
+
+- scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, enq_flags);
+ }
+
+ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_init_task,
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+index 59bfc4f36167a7..4bb99699e9209c 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+@@ -29,7 +29,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+
+ dispatch:
+- scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+index 3bbd5fcdfb18e0..2a75de11b2cfd5 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+@@ -18,7 +18,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_bad_dsq_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching to a random DSQ should fail. */
+- scx_bpf_dispatch(p, 0xcafef00d, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, 0xcafef00d, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+index 0fda57fe0ecfae..99d075695c9743 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+@@ -18,8 +18,8 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_dbl_dsp_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching twice in a row is disallowed. */
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+index e6c67bcf5e6e35..bfcb96cd4954bd 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+@@ -2,8 +2,8 @@
+ /*
+ * A scheduler that validates that enqueue flags are properly stored and
+ * applied at dispatch time when a task is directly dispatched from
+- * ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and
+- * making the test a very basic vtime scheduler.
++ * ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(),
++ * and making the test a very basic vtime scheduler.
+ *
+ * Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
+ * Copyright (c) 2024 David Vernet <dvernet@meta.com>
+@@ -47,13 +47,13 @@ s32 BPF_STRUCT_OPS(select_cpu_vtime_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+ scx_bpf_test_and_clear_cpu_idle(cpu);
+ ddsp:
+- scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
++ scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
+ return cpu;
+ }
+
+ void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p)
+ {
+- if (scx_bpf_consume(VTIME_DSQ))
++ if (scx_bpf_dsq_move_to_local(VTIME_DSQ))
+ consumed = true;
+ }
+
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json b/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
+index 58189327f6444a..383fbda07245c8 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
+@@ -78,10 +78,10 @@
+ "setup": [
+ "$TC qdisc add dev $DEV1 ingress"
+ ],
+- "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0xff",
++ "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0x1f",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 protocol ip prio 1 flow",
+- "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 255 baseclass",
++ "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 31 baseclass",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DEV1 ingress"
+diff --git a/tools/testing/shared/linux/maple_tree.h b/tools/testing/shared/linux/maple_tree.h
+index 06c89bdcc51541..f67d47d32857ce 100644
+--- a/tools/testing/shared/linux/maple_tree.h
++++ b/tools/testing/shared/linux/maple_tree.h
+@@ -2,6 +2,6 @@
+ #define atomic_t int32_t
+ #define atomic_inc(x) uatomic_inc(x)
+ #define atomic_read(x) uatomic_read(x)
+-#define atomic_set(x, y) do {} while (0)
++#define atomic_set(x, y) uatomic_set(x, y)
+ #define U8_MAX UCHAR_MAX
+ #include "../../../../include/linux/maple_tree.h"
+diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
+index e01f66f9898279..3e1b6adc027b99 100644
+--- a/tools/testing/vma/linux/atomic.h
++++ b/tools/testing/vma/linux/atomic.h
+@@ -6,7 +6,7 @@
+ #define atomic_t int32_t
+ #define atomic_inc(x) uatomic_inc(x)
+ #define atomic_read(x) uatomic_read(x)
+-#define atomic_set(x, y) do {} while (0)
++#define atomic_set(x, y) uatomic_set(x, y)
+ #define U8_MAX UCHAR_MAX
+
+ #endif /* _LINUX_ATOMIC_H */
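The last two hunks fix the same latent bug in the userspace test shims: atomic_set() was stubbed out as a no-op, so any test code that relied on it to (re)initialize a counter silently kept the stale value; mapping it to liburcu's uatomic_set() restores real store semantics. A standalone illustration of the failure mode:

    #include <assert.h>

    static int counter = 5;

    #define atomic_set_buggy(x, y)  do {} while (0)
    #define atomic_set_fixed(x, y)  (*(x) = (y))    /* uatomic_set-like */

    int main(void)
    {
            atomic_set_buggy(&counter, 0);
            assert(counter == 5);           /* silently unchanged */

            atomic_set_fixed(&counter, 0);
            assert(counter == 0);
            return 0;
    }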
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-30 12:47 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-01-30 12:47 UTC (permalink / raw
To: gentoo-commits
commit: 5e62dc25f02e5f291109108ac41ca60469dc4531
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 30 12:47:37 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 30 12:47:37 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5e62dc25
Update CPU Optimization patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
5010_enable-cpu-optimizations-universal.patch | 64 ++++++++++++++++-----------
1 file changed, 38 insertions(+), 26 deletions(-)
diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index 0758b0ba..5011aaa6 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -116,13 +116,13 @@ REFERENCES
4. http://www.linuxforge.net/docs/linux/linux-gcc.php
---
- arch/x86/Kconfig.cpu | 359 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile | 87 +++++++-
- arch/x86/include/asm/vermagic.h | 70 +++++++
- 3 files changed, 499 insertions(+), 17 deletions(-)
+ arch/x86/Kconfig.cpu | 367 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile | 89 +++++++-
+ arch/x86/include/asm/vermagic.h | 72 +++++++
+ 3 files changed, 511 insertions(+), 17 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 2a7279d80460..abfadddd1b23 100644
+index ce5ed2c2db0c..6d89f21aba52 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -155,9 +155,8 @@ config MPENTIUM4
@@ -252,7 +252,7 @@ index 2a7279d80460..abfadddd1b23 100644
+
+config MZEN5
+ bool "AMD Zen 5"
-+ depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 191000)
++ depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 190100)
+ help
+ Select this for AMD Family 19h Zen 5 processors.
+
@@ -280,7 +280,7 @@ index 2a7279d80460..abfadddd1b23 100644
help
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,14 +388,191 @@ config MCORE2
+@@ -278,14 +388,199 @@ config MCORE2
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
@@ -388,14 +388,22 @@ index 2a7279d80460..abfadddd1b23 100644
+
+ Enables -march=cannonlake
+
-+config MICELAKE
++config MICELAKE_CLIENT
+ bool "Intel Ice Lake"
+ help
+
-+ Select this for 10th Gen Core processors in the Ice Lake family.
++ Select this for 10th Gen Core client processors in the Ice Lake family.
+
+ Enables -march=icelake-client
+
++config MICELAKE_SERVER
++ bool "Intel Ice Lake Server"
++ help
++
++ Select this for 10th Gen Core server processors in the Ice Lake family.
++
++ Enables -march=icelake-server
++
+config MCASCADELAKE
+ bool "Intel Cascade Lake"
+ help
@@ -478,7 +486,7 @@ index 2a7279d80460..abfadddd1b23 100644
config GENERIC_CPU
bool "Generic-x86-64"
-@@ -294,6 +581,26 @@ config GENERIC_CPU
+@@ -294,6 +589,26 @@ config GENERIC_CPU
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
@@ -505,7 +513,7 @@ index 2a7279d80460..abfadddd1b23 100644
endchoice
config X86_GENERIC
-@@ -308,6 +615,30 @@ config X86_GENERIC
+@@ -308,6 +623,30 @@ config X86_GENERIC
This is really intended for distributors who need more
generic optimizations.
@@ -536,34 +544,34 @@ index 2a7279d80460..abfadddd1b23 100644
#
# Define implied options from the CPU selection here
config X86_INTERNODE_CACHE_SHIFT
-@@ -318,7 +649,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +657,7 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || MPSC
- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
++ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
default "4" if MELAN || M486SX || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-@@ -336,11 +667,11 @@ config X86_ALIGNMENT_16
+@@ -336,11 +675,11 @@ config X86_ALIGNMENT_16
config X86_INTEL_USERCOPY
def_bool y
- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
config X86_USE_PPRO_CHECKSUM
def_bool y
- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
#
# P6_NOPs are a relatively minor optimization that require a family >=
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index cd75e78a06c1..396d1db12bca 100644
+index 3419ffa2a350..aafb069de612 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
-@@ -181,15 +181,96 @@ else
+@@ -152,15 +152,98 @@ else
cflags-$(CONFIG_MK8) += -march=k8
cflags-$(CONFIG_MPSC) += -march=nocona
cflags-$(CONFIG_MCORE2) += -march=core2
@@ -605,7 +613,8 @@ index cd75e78a06c1..396d1db12bca 100644
+ cflags-$(CONFIG_MSKYLAKE) += -march=skylake
+ cflags-$(CONFIG_MSKYLAKEX) += -march=skylake-avx512
+ cflags-$(CONFIG_MCANNONLAKE) += -march=cannonlake
-+ cflags-$(CONFIG_MICELAKE) += -march=icelake-client
++ cflags-$(CONFIG_MICELAKE_CLIENT) += -march=icelake-client
++ cflags-$(CONFIG_MICELAKE_SERVER) += -march=icelake-server
+ cflags-$(CONFIG_MCASCADELAKE) += -march=cascadelake
+ cflags-$(CONFIG_MCOOPERLAKE) += -march=cooperlake
+ cflags-$(CONFIG_MTIGERLAKE) += -march=tigerlake
@@ -650,7 +659,8 @@ index cd75e78a06c1..396d1db12bca 100644
+ rustflags-$(CONFIG_MSKYLAKE) += -Ctarget-cpu=skylake
+ rustflags-$(CONFIG_MSKYLAKEX) += -Ctarget-cpu=skylake-avx512
+ rustflags-$(CONFIG_MCANNONLAKE) += -Ctarget-cpu=cannonlake
-+ rustflags-$(CONFIG_MICELAKE) += -Ctarget-cpu=icelake-client
++ rustflags-$(CONFIG_MICELAKE_CLIENT) += -Ctarget-cpu=icelake-client
++ rustflags-$(CONFIG_MICELAKE_SERVER) += -Ctarget-cpu=icelake-server
+ rustflags-$(CONFIG_MCASCADELAKE) += -Ctarget-cpu=cascadelake
+ rustflags-$(CONFIG_MCOOPERLAKE) += -Ctarget-cpu=cooperlake
+ rustflags-$(CONFIG_MTIGERLAKE) += -Ctarget-cpu=tigerlake
@@ -664,10 +674,10 @@ index cd75e78a06c1..396d1db12bca 100644
KBUILD_CFLAGS += -mno-red-zone
diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec3..f4e29563473d 100644
+index 75884d2cdec3..2fdae271f47f 100644
--- a/arch/x86/include/asm/vermagic.h
+++ b/arch/x86/include/asm/vermagic.h
-@@ -17,6 +17,54 @@
+@@ -17,6 +17,56 @@
#define MODULE_PROC_FAMILY "586MMX "
#elif defined CONFIG_MCORE2
#define MODULE_PROC_FAMILY "CORE2 "
@@ -699,8 +709,10 @@ index 75884d2cdec3..f4e29563473d 100644
+#define MODULE_PROC_FAMILY "SKYLAKEX "
+#elif defined CONFIG_MCANNONLAKE
+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MICELAKE_CLIENT
++#define MODULE_PROC_FAMILY "ICELAKE_CLIENT "
++#elif defined CONFIG_MICELAKE_SERVER
++#define MODULE_PROC_FAMILY "ICELAKE_SERVER "
+#elif defined CONFIG_MCASCADELAKE
+#define MODULE_PROC_FAMILY "CASCADELAKE "
+#elif defined CONFIG_MCOOPERLAKE
@@ -722,7 +734,7 @@ index 75884d2cdec3..f4e29563473d 100644
#elif defined CONFIG_MATOM
#define MODULE_PROC_FAMILY "ATOM "
#elif defined CONFIG_M686
-@@ -35,6 +83,28 @@
+@@ -35,6 +85,28 @@
#define MODULE_PROC_FAMILY "K7 "
#elif defined CONFIG_MK8
#define MODULE_PROC_FAMILY "K8 "
@@ -752,5 +764,5 @@ index 75884d2cdec3..f4e29563473d 100644
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
--
-2.46.2
+2.47.1
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-01 23:07 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-01 23:07 UTC (permalink / raw
To: gentoo-commits
commit: 05ce594d8f20da5e0040106fb43894994a64fd0b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 1 23:06:45 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 1 23:06:45 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=05ce594d
Linux patch 6.12.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-6.12.12.patch | 1388 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1392 insertions(+)
diff --git a/0000_README b/0000_README
index 9c94906b..17858f75 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-6.12.11.patch
From: https://www.kernel.org
Desc: Linux 6.12.11
+Patch: 1011_linux-6.12.12.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.12
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1011_linux-6.12.12.patch b/1011_linux-6.12.12.patch
new file mode 100644
index 00000000..921cacc4
--- /dev/null
+++ b/1011_linux-6.12.12.patch
@@ -0,0 +1,1388 @@
+diff --git a/Makefile b/Makefile
+index 7cf8f11975f89c..9e6246e733eb94 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index bf636b28e3e16e..5bb8b78bf250a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -63,7 +63,8 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+
+ bool should_use_dmub_lock(struct dc_link *link)
+ {
+- if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
++ if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 ||
++ link->psr_settings.psr_version == DC_PSR_VERSION_1)
+ return true;
+
+ if (link->replay_settings.replay_feature_enabled)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+index 3ea54fd52e4683..e2a3764d9d181a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+@@ -578,8 +578,8 @@ static void CalculateBytePerPixelAndBlockSizes(
+ {
+ *BytePerPixelDETY = 0;
+ *BytePerPixelDETC = 0;
+- *BytePerPixelY = 0;
+- *BytePerPixelC = 0;
++ *BytePerPixelY = 1;
++ *BytePerPixelC = 1;
+
+ if (SourcePixelFormat == dml2_444_64) {
+ *BytePerPixelDETY = 8;
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index fc35f47e2849ed..ca7f43c8d6f1b3 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -507,6 +507,9 @@ int drmm_connector_hdmi_init(struct drm_device *dev,
+ if (!supported_formats || !(supported_formats & BIT(HDMI_COLORSPACE_RGB)))
+ return -EINVAL;
+
++ if (connector->ycbcr_420_allowed != !!(supported_formats & BIT(HDMI_COLORSPACE_YUV420)))
++ return -EINVAL;
++
+ if (!(max_bpc == 8 || max_bpc == 10 || max_bpc == 12))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index da203045df9bec..72b6a119412fa7 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -107,8 +107,10 @@ v3d_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->bin_job->base, V3D_BIN);
+ trace_v3d_bcl_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->bin_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+@@ -118,8 +120,10 @@ v3d_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->render_job->base, V3D_RENDER);
+ trace_v3d_rcl_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->render_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+@@ -129,8 +133,10 @@ v3d_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->csd_job->base, V3D_CSD);
+ trace_v3d_csd_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->csd_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+@@ -167,8 +173,10 @@ v3d_hub_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->tfu_job->base, V3D_TFU);
+ trace_v3d_tfu_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->tfu_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 0f23be98c56e22..ceb3b1a72e235c 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -506,7 +506,6 @@
+ #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100
+
+ #define I2C_VENDOR_ID_GOODIX 0x27c6
+-#define I2C_DEVICE_ID_GOODIX_01E0 0x01e0
+ #define I2C_DEVICE_ID_GOODIX_01E8 0x01e8
+ #define I2C_DEVICE_ID_GOODIX_01E9 0x01e9
+ #define I2C_DEVICE_ID_GOODIX_01F0 0x01f0
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e936019d21fecf..d1b7ccfb3e051f 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1452,8 +1452,7 @@ static const __u8 *mt_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ {
+ if (hdev->vendor == I2C_VENDOR_ID_GOODIX &&
+ (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E9 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E0)) {
++ hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) {
+ if (rdesc[607] == 0x15) {
+ rdesc[607] = 0x25;
+ dev_info(
+@@ -2079,10 +2078,7 @@ static const struct hid_device_id mt_devices[] = {
+ I2C_DEVICE_ID_GOODIX_01E8) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E9) },
+- { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+- HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E0) },
++ I2C_DEVICE_ID_GOODIX_01E8) },
+
+ /* GoodTouch panels */
+ { .driver_data = MT_CLS_NSMU,
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 9843b52bd017a0..34428349fa3118 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -1370,17 +1370,6 @@ static int wacom_led_register_one(struct device *dev, struct wacom *wacom,
+ if (!name)
+ return -ENOMEM;
+
+- if (!read_only) {
+- led->trigger.name = name;
+- error = devm_led_trigger_register(dev, &led->trigger);
+- if (error) {
+- hid_err(wacom->hdev,
+- "failed to register LED trigger %s: %d\n",
+- led->cdev.name, error);
+- return error;
+- }
+- }
+-
+ led->group = group;
+ led->id = id;
+ led->wacom = wacom;
+@@ -1397,6 +1386,19 @@ static int wacom_led_register_one(struct device *dev, struct wacom *wacom,
+ led->cdev.brightness_set = wacom_led_readonly_brightness_set;
+ }
+
++ if (!read_only) {
++ led->trigger.name = name;
++ if (id == wacom->led.groups[group].select)
++ led->trigger.brightness = wacom_leds_brightness_get(led);
++ error = devm_led_trigger_register(dev, &led->trigger);
++ if (error) {
++ hid_err(wacom->hdev,
++ "failed to register LED trigger %s: %d\n",
++ led->cdev.name, error);
++ return error;
++ }
++ }
++
+ error = devm_led_classdev_register(dev, &led->cdev);
+ if (error) {
+ hid_err(wacom->hdev,
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 2a4ec55ddb47ed..291d91f6864676 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -194,7 +194,7 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
+ scsi_cmd[14] = ata_command;
+
+ err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
+- ATA_SECT_SIZE, HZ, 5, NULL);
++ ATA_SECT_SIZE, 10 * HZ, 5, NULL);
+ if (err > 0)
+ err = -EIO;
+ return err;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 22ea58bf76cb5c..77fddab9d9502e 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -150,6 +150,7 @@ static const struct xpad_device {
+ { 0x045e, 0x028e, "Microsoft X-Box 360 pad", 0, XTYPE_XBOX360 },
+ { 0x045e, 0x028f, "Microsoft X-Box 360 pad v2", 0, XTYPE_XBOX360 },
+ { 0x045e, 0x0291, "Xbox 360 Wireless Receiver (XBOX)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
++ { 0x045e, 0x02a9, "Xbox 360 Wireless Receiver (Unofficial)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
+ { 0x045e, 0x02d1, "Microsoft X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x045e, 0x02dd, "Microsoft X-Box One pad (Firmware 2015)", 0, XTYPE_XBOXONE },
+ { 0x045e, 0x02e3, "Microsoft X-Box One Elite pad", MAP_PADDLES, XTYPE_XBOXONE },
+@@ -305,6 +306,7 @@ static const struct xpad_device {
+ { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ { 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 },
+ { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 },
++ { 0x1a86, 0xe310, "QH Electronics Controller", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+@@ -373,16 +375,19 @@ static const struct xpad_device {
+ { 0x294b, 0x3303, "Snakebyte GAMEPAD BASE X", 0, XTYPE_XBOXONE },
+ { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE },
+ { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE },
+- { 0x2dc8, 0x3106, "8BitDo Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 },
+ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1220, "Wooting Two HE", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1230, "Wooting Two HE (ARM)", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 },
+ { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
++ { 0x3285, 0x0646, "Nacon Pro Compact", 0, XTYPE_XBOXONE },
++ { 0x3285, 0x0663, "Nacon Evol-X", 0, XTYPE_XBOXONE },
+ { 0x3537, 0x1004, "GameSir T4 Kaleid", 0, XTYPE_XBOX360 },
+ { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
+ { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+@@ -514,6 +519,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */
+ XPAD_XBOX360_VENDOR(0x17ef), /* Lenovo */
+ XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */
++ XPAD_XBOX360_VENDOR(0x1a86), /* QH Electronics */
+ XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */
+ XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */
+ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */
+@@ -530,6 +536,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir controllers */
+ XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */
+ XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */
++ XPAD_XBOXONE_VENDOR(0x3285), /* Nacon Evol-X */
+ XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */
+ XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */
+ { }
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index 5855d4fc6e6a4d..f7b08b359c9c67 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -89,7 +89,7 @@ static const unsigned short atkbd_set2_keycode[ATKBD_KEYMAP_SIZE] = {
+ 0, 46, 45, 32, 18, 5, 4, 95, 0, 57, 47, 33, 20, 19, 6,183,
+ 0, 49, 48, 35, 34, 21, 7,184, 0, 0, 50, 36, 22, 8, 9,185,
+ 0, 51, 37, 23, 24, 11, 10, 0, 0, 52, 53, 38, 39, 25, 12, 0,
+- 0, 89, 40, 0, 26, 13, 0, 0, 58, 54, 28, 27, 0, 43, 0, 85,
++ 0, 89, 40, 0, 26, 13, 0,193, 58, 54, 28, 27, 0, 43, 0, 85,
+ 0, 86, 91, 90, 92, 0, 14, 94, 0, 79,124, 75, 71,121, 0, 0,
+ 82, 83, 80, 76, 77, 72, 1, 69, 87, 78, 81, 74, 55, 73, 70, 99,
+
+diff --git a/drivers/irqchip/irq-sunxi-nmi.c b/drivers/irqchip/irq-sunxi-nmi.c
+index bb92fd85e975f8..0b431215202434 100644
+--- a/drivers/irqchip/irq-sunxi-nmi.c
++++ b/drivers/irqchip/irq-sunxi-nmi.c
+@@ -186,7 +186,8 @@ static int __init sunxi_sc_nmi_irq_init(struct device_node *node,
+ gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit;
+ gc->chip_types[0].chip.irq_eoi = irq_gc_ack_set_bit;
+ gc->chip_types[0].chip.irq_set_type = sunxi_sc_nmi_set_type;
+- gc->chip_types[0].chip.flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED;
++ gc->chip_types[0].chip.flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED |
++ IRQCHIP_SKIP_SET_WAKE;
+ gc->chip_types[0].regs.ack = reg_offs->pend;
+ gc->chip_types[0].regs.mask = reg_offs->enable;
+ gc->chip_types[0].regs.type = reg_offs->ctrl;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index f95898f68d68a5..4ce0c05c512910 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -8147,6 +8147,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x817e, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x8186, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x818a, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x317f, 0xff, 0xff, 0xff),
+@@ -8157,12 +8159,18 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x1102, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x11f2, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x06f8, 0xe033, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x07b8, 0x8188, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x07b8, 0x8189, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9041, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9043, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0b05, 0x17ba, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x1e1e, 0xff, 0xff, 0xff),
+@@ -8179,6 +8187,10 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x13d3, 0x3357, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x13d3, 0x3358, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x13d3, 0x3359, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330b, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2019, 0x4902, 0xff, 0xff, 0xff),
+@@ -8193,6 +8205,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x4856, 0x0091, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x9846, 0x9041, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0xcdab, 0x8010, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x04f2, 0xaff7, 0xff, 0xff, 0xff),
+@@ -8218,6 +8232,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0586, 0x341f, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x06f8, 0xe033, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x06f8, 0xe035, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0b05, 0x17ab, 0xff, 0xff, 0xff),
+@@ -8226,6 +8242,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0df6, 0x0070, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x0df6, 0x0077, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0789, 0x016d, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x07aa, 0x0056, 0xff, 0xff, 0xff),
+@@ -8248,6 +8266,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330a, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330d, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2019, 0xab2b, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x20f4, 0x624d, 0xff, 0xff, 0xff),
+diff --git a/drivers/of/unittest-data/tests-platform.dtsi b/drivers/of/unittest-data/tests-platform.dtsi
+index fa39611071b32f..cd310b26b50c81 100644
+--- a/drivers/of/unittest-data/tests-platform.dtsi
++++ b/drivers/of/unittest-data/tests-platform.dtsi
+@@ -34,5 +34,18 @@ dev@100 {
+ };
+ };
+ };
++
++ platform-tests-2 {
++ // No #address-cells or #size-cells
++ node {
++ #address-cells = <1>;
++ #size-cells = <1>;
++
++ test-device@100 {
++ compatible = "test-sub-device";
++ reg = <0x100 1>;
++ };
++ };
++ };
+ };
+ };
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index daf9a2dddd7e0d..576e9beefc7c8f 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -1342,6 +1342,7 @@ static void __init of_unittest_bus_3cell_ranges(void)
+ static void __init of_unittest_reg(void)
+ {
+ struct device_node *np;
++ struct resource res;
+ int ret;
+ u64 addr, size;
+
+@@ -1358,6 +1359,19 @@ static void __init of_unittest_reg(void)
+ np, addr);
+
+ of_node_put(np);
++
++ np = of_find_node_by_path("/testcase-data/platform-tests-2/node/test-device@100");
++ if (!np) {
++ pr_err("missing testcase data\n");
++ return;
++ }
++
++ ret = of_address_to_resource(np, 0, &res);
++ unittest(ret == -EINVAL, "of_address_to_resource(%pOF) expected error on untranslatable address\n",
++ np);
++
++ of_node_put(np);
++
+ }
+
+ struct of_unittest_expected_res {
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index fde7de3b1e5538..9b47f91c5b9720 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -4104,7 +4104,7 @@ iscsi_if_rx(struct sk_buff *skb)
+ }
+ do {
+ /*
+- * special case for GET_STATS:
++ * special case for GET_STATS, GET_CHAP and GET_HOST_STATS:
+ * on success - sending reply and stats from
+ * inside of if_recv_msg(),
+ * on error - fall through.
+@@ -4113,6 +4113,8 @@ iscsi_if_rx(struct sk_buff *skb)
+ break;
+ if (ev->type == ISCSI_UEVENT_GET_CHAP && !err)
+ break;
++ if (ev->type == ISCSI_UEVENT_GET_HOST_STATS && !err)
++ break;
+ err = iscsi_if_send_reply(portid, nlh->nlmsg_type,
+ ev, sizeof(*ev));
+ if (err == -EAGAIN && --retries < 0) {
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index d0b55c1fa908a5..b3c588b102d900 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -171,6 +171,12 @@ do { \
+ dev_warn(&(dev)->device, fmt, ##__VA_ARGS__); \
+ } while (0)
+
++#define storvsc_log_ratelimited(dev, level, fmt, ...) \
++do { \
++ if (do_logging(level)) \
++ dev_warn_ratelimited(&(dev)->device, fmt, ##__VA_ARGS__); \
++} while (0)
++
+ struct vmscsi_request {
+ u16 length;
+ u8 srb_status;
+@@ -1177,7 +1183,7 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
+ int loglevel = (stor_pkt->vm_srb.cdb[0] == TEST_UNIT_READY) ?
+ STORVSC_LOGGING_WARN : STORVSC_LOGGING_ERROR;
+
+- storvsc_log(device, loglevel,
++ storvsc_log_ratelimited(device, loglevel,
+ "tag#%d cmd 0x%x status: scsi 0x%x srb 0x%x hv 0x%x\n",
+ scsi_cmd_to_rq(request->cmd)->tag,
+ stor_pkt->vm_srb.cdb[0],
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index bc143a86c2ddf0..53d9fc41acc522 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1420,10 +1420,6 @@ void gserial_disconnect(struct gserial *gser)
+ /* REVISIT as above: how best to track this? */
+ port->port_line_coding = gser->port_line_coding;
+
+- /* disable endpoints, aborting down any active I/O */
+- usb_ep_disable(gser->out);
+- usb_ep_disable(gser->in);
+-
+ port->port_usb = NULL;
+ gser->ioport = NULL;
+ if (port->port.count > 0) {
+@@ -1435,6 +1431,10 @@ void gserial_disconnect(struct gserial *gser)
+ spin_unlock(&port->port_lock);
+ spin_unlock_irqrestore(&serial_port_lock, flags);
+
++ /* disable endpoints, aborting down any active I/O */
++ usb_ep_disable(gser->out);
++ usb_ep_disable(gser->in);
++
+ /* finally, free any unused/unusable I/O buffers */
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (port->port.count == 0)
+diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
+index a317bdbd00ad5c..72fe83a6c97801 100644
+--- a/drivers/usb/serial/quatech2.c
++++ b/drivers/usb/serial/quatech2.c
+@@ -503,7 +503,7 @@ static void qt2_process_read_urb(struct urb *urb)
+
+ newport = *(ch + 3);
+
+- if (newport > serial->num_ports) {
++ if (newport >= serial->num_ports) {
+ dev_err(&port->dev,
+ "%s - port change to invalid port: %i\n",
+ __func__, newport);
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index e53757d1d0958a..3bf1043cd7957c 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -388,6 +388,11 @@ static ssize_t vfio_platform_read_mmio(struct vfio_platform_region *reg,
+ {
+ unsigned int done = 0;
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+@@ -467,6 +472,11 @@ static ssize_t vfio_platform_write_mmio(struct vfio_platform_region *reg,
+ {
+ unsigned int done = 0;
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index f7dd64856c9bff..ac6f50167076d8 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -251,6 +251,7 @@ static int do_gfs2_set_flags(struct inode *inode, u32 reqflags, u32 mask)
+ error = filemap_fdatawait(inode->i_mapping);
+ if (error)
+ goto out;
++ truncate_inode_pages(inode->i_mapping, 0);
+ if (new_flags & GFS2_DIF_JDATA)
+ gfs2_ordered_del_inode(ip);
+ }
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 46966fd8bcf9f0..b0f262223b5351 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -241,9 +241,16 @@ const struct inode_operations simple_dir_inode_operations = {
+ };
+ EXPORT_SYMBOL(simple_dir_inode_operations);
+
+-/* 0 is '.', 1 is '..', so always start with offset 2 or more */
++/* simple_offset_add() never assigns these to a dentry */
+ enum {
+- DIR_OFFSET_MIN = 2,
++ DIR_OFFSET_FIRST = 2, /* Find first real entry */
++ DIR_OFFSET_EOD = S32_MAX,
++};
++
++/* simple_offset_add() allocation range */
++enum {
++ DIR_OFFSET_MIN = DIR_OFFSET_FIRST + 1,
++ DIR_OFFSET_MAX = DIR_OFFSET_EOD - 1,
+ };
+
+ static void offset_set(struct dentry *dentry, long offset)
+@@ -287,9 +294,10 @@ int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry)
+ return -EBUSY;
+
+ ret = mtree_alloc_cyclic(&octx->mt, &offset, dentry, DIR_OFFSET_MIN,
+- LONG_MAX, &octx->next_offset, GFP_KERNEL);
+- if (ret < 0)
+- return ret;
++ DIR_OFFSET_MAX, &octx->next_offset,
++ GFP_KERNEL);
++ if (unlikely(ret < 0))
++ return ret == -EBUSY ? -ENOSPC : ret;
+
+ offset_set(dentry, offset);
+ return 0;
+@@ -325,38 +333,6 @@ void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry)
+ offset_set(dentry, 0);
+ }
+
+-/**
+- * simple_offset_empty - Check if a dentry can be unlinked
+- * @dentry: dentry to be tested
+- *
+- * Returns 0 if @dentry is a non-empty directory; otherwise returns 1.
+- */
+-int simple_offset_empty(struct dentry *dentry)
+-{
+- struct inode *inode = d_inode(dentry);
+- struct offset_ctx *octx;
+- struct dentry *child;
+- unsigned long index;
+- int ret = 1;
+-
+- if (!inode || !S_ISDIR(inode->i_mode))
+- return ret;
+-
+- index = DIR_OFFSET_MIN;
+- octx = inode->i_op->get_offset_ctx(inode);
+- mt_for_each(&octx->mt, child, index, LONG_MAX) {
+- spin_lock(&child->d_lock);
+- if (simple_positive(child)) {
+- spin_unlock(&child->d_lock);
+- ret = 0;
+- break;
+- }
+- spin_unlock(&child->d_lock);
+- }
+-
+- return ret;
+-}
+-
+ /**
+ * simple_offset_rename - handle directory offsets for rename
+ * @old_dir: parent directory of source entry
+@@ -450,14 +426,6 @@ void simple_offset_destroy(struct offset_ctx *octx)
+ mtree_destroy(&octx->mt);
+ }
+
+-static int offset_dir_open(struct inode *inode, struct file *file)
+-{
+- struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
+-
+- file->private_data = (void *)ctx->next_offset;
+- return 0;
+-}
+-
+ /**
+ * offset_dir_llseek - Advance the read position of a directory descriptor
+ * @file: an open directory whose position is to be updated
+@@ -471,9 +439,6 @@ static int offset_dir_open(struct inode *inode, struct file *file)
+ */
+ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
+ {
+- struct inode *inode = file->f_inode;
+- struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
+-
+ switch (whence) {
+ case SEEK_CUR:
+ offset += file->f_pos;
+@@ -486,62 +451,89 @@ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
+ return -EINVAL;
+ }
+
+- /* In this case, ->private_data is protected by f_pos_lock */
+- if (!offset)
+- file->private_data = (void *)ctx->next_offset;
+ return vfs_setpos(file, offset, LONG_MAX);
+ }
+
+-static struct dentry *offset_find_next(struct offset_ctx *octx, loff_t offset)
++static struct dentry *find_positive_dentry(struct dentry *parent,
++ struct dentry *dentry,
++ bool next)
+ {
+- MA_STATE(mas, &octx->mt, offset, offset);
++ struct dentry *found = NULL;
++
++ spin_lock(&parent->d_lock);
++ if (next)
++ dentry = d_next_sibling(dentry);
++ else if (!dentry)
++ dentry = d_first_child(parent);
++ hlist_for_each_entry_from(dentry, d_sib) {
++ if (!simple_positive(dentry))
++ continue;
++ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
++ if (simple_positive(dentry))
++ found = dget_dlock(dentry);
++ spin_unlock(&dentry->d_lock);
++ if (likely(found))
++ break;
++ }
++ spin_unlock(&parent->d_lock);
++ return found;
++}
++
++static noinline_for_stack struct dentry *
++offset_dir_lookup(struct dentry *parent, loff_t offset)
++{
++ struct inode *inode = d_inode(parent);
++ struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
+ struct dentry *child, *found = NULL;
+
+- rcu_read_lock();
+- child = mas_find(&mas, LONG_MAX);
+- if (!child)
+- goto out;
+- spin_lock(&child->d_lock);
+- if (simple_positive(child))
+- found = dget_dlock(child);
+- spin_unlock(&child->d_lock);
+-out:
+- rcu_read_unlock();
++ MA_STATE(mas, &octx->mt, offset, offset);
++
++ if (offset == DIR_OFFSET_FIRST)
++ found = find_positive_dentry(parent, NULL, false);
++ else {
++ rcu_read_lock();
++ child = mas_find(&mas, DIR_OFFSET_MAX);
++ found = find_positive_dentry(parent, child, false);
++ rcu_read_unlock();
++ }
+ return found;
+ }
+
+ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
+ {
+ struct inode *inode = d_inode(dentry);
+- long offset = dentry2offset(dentry);
+
+- return ctx->actor(ctx, dentry->d_name.name, dentry->d_name.len, offset,
+- inode->i_ino, fs_umode_to_dtype(inode->i_mode));
++ return dir_emit(ctx, dentry->d_name.name, dentry->d_name.len,
++ inode->i_ino, fs_umode_to_dtype(inode->i_mode));
+ }
+
+-static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, long last_index)
++static void offset_iterate_dir(struct file *file, struct dir_context *ctx)
+ {
+- struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
++ struct dentry *dir = file->f_path.dentry;
+ struct dentry *dentry;
+
++ dentry = offset_dir_lookup(dir, ctx->pos);
++ if (!dentry)
++ goto out_eod;
+ while (true) {
+- dentry = offset_find_next(octx, ctx->pos);
+- if (!dentry)
+- return;
+-
+- if (dentry2offset(dentry) >= last_index) {
+- dput(dentry);
+- return;
+- }
++ struct dentry *next;
+
+- if (!offset_dir_emit(ctx, dentry)) {
+- dput(dentry);
+- return;
+- }
++ ctx->pos = dentry2offset(dentry);
++ if (!offset_dir_emit(ctx, dentry))
++ break;
+
+- ctx->pos = dentry2offset(dentry) + 1;
++ next = find_positive_dentry(dir, dentry, true);
+ dput(dentry);
++
++ if (!next)
++ goto out_eod;
++ dentry = next;
+ }
++ dput(dentry);
++ return;
++
++out_eod:
++ ctx->pos = DIR_OFFSET_EOD;
+ }
+
+ /**
+@@ -561,6 +553,8 @@ static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, lon
+ *
+ * On return, @ctx->pos contains an offset that will read the next entry
+ * in this directory when offset_readdir() is called again with @ctx.
++ * Caller places this value in the d_off field of the last entry in the
++ * user's buffer.
+ *
+ * Return values:
+ * %0 - Complete
+@@ -568,19 +562,17 @@ static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, lon
+ static int offset_readdir(struct file *file, struct dir_context *ctx)
+ {
+ struct dentry *dir = file->f_path.dentry;
+- long last_index = (long)file->private_data;
+
+ lockdep_assert_held(&d_inode(dir)->i_rwsem);
+
+ if (!dir_emit_dots(file, ctx))
+ return 0;
+-
+- offset_iterate_dir(d_inode(dir), ctx, last_index);
++ if (ctx->pos != DIR_OFFSET_EOD)
++ offset_iterate_dir(file, ctx);
+ return 0;
+ }
+
+ const struct file_operations simple_offset_dir_operations = {
+- .open = offset_dir_open,
+ .llseek = offset_dir_llseek,
+ .iterate_shared = offset_readdir,
+ .read = generic_read_dir,
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index a55f0044d30bde..b935c1a62d10cf 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -176,27 +176,27 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ struct kvec *out_iov, int *out_buftype, struct dentry *dentry)
+ {
+
+- struct reparse_data_buffer *rbuf;
++ struct smb2_query_info_rsp *qi_rsp = NULL;
+ struct smb2_compound_vars *vars = NULL;
+- struct kvec *rsp_iov, *iov;
+- struct smb_rqst *rqst;
+- int rc;
+- __le16 *utf16_path = NULL;
+ __u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+- struct cifs_fid fid;
++ struct cifs_open_info_data *idata;
+ struct cifs_ses *ses = tcon->ses;
++ struct reparse_data_buffer *rbuf;
+ struct TCP_Server_Info *server;
+- int num_rqst = 0, i;
+ int resp_buftype[MAX_COMPOUND];
+- struct smb2_query_info_rsp *qi_rsp = NULL;
+- struct cifs_open_info_data *idata;
++ int retries = 0, cur_sleep = 1;
++ __u8 delete_pending[8] = {1,};
++ struct kvec *rsp_iov, *iov;
+ struct inode *inode = NULL;
+- int flags = 0;
+- __u8 delete_pending[8] = {1, 0, 0, 0, 0, 0, 0, 0};
++ __le16 *utf16_path = NULL;
++ struct smb_rqst *rqst;
+ unsigned int size[2];
+- void *data[2];
++ struct cifs_fid fid;
++ int num_rqst = 0, i;
+ unsigned int len;
+- int retries = 0, cur_sleep = 1;
++ int tmp_rc, rc;
++ int flags = 0;
++ void *data[2];
+
+ replay_again:
+ /* reinitialize for possible replay */
+@@ -637,7 +637,14 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ tcon->need_reconnect = true;
+ }
+
++ tmp_rc = rc;
+ for (i = 0; i < num_cmds; i++) {
++ char *buf = rsp_iov[i + i].iov_base;
++
++ if (buf && resp_buftype[i + 1] != CIFS_NO_BUFFER)
++ rc = server->ops->map_error(buf, false);
++ else
++ rc = tmp_rc;
+ switch (cmds[i]) {
+ case SMB2_OP_QUERY_INFO:
+ idata = in_iov[i].iov_base;
+@@ -803,6 +810,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ SMB2_close_free(&rqst[num_rqst]);
++ rc = tmp_rc;
+
+ num_cmds += 2;
+ if (out_iov && out_buftype) {
+@@ -858,22 +866,52 @@ static int parse_create_response(struct cifs_open_info_data *data,
+ return rc;
+ }
+
++/* Check only if SMB2_OP_QUERY_WSL_EA command failed in the compound chain */
++static bool ea_unsupported(int *cmds, int num_cmds,
++ struct kvec *out_iov, int *out_buftype)
++{
++ int i;
++
++ if (cmds[num_cmds - 1] != SMB2_OP_QUERY_WSL_EA)
++ return false;
++
++ for (i = 1; i < num_cmds - 1; i++) {
++ struct smb2_hdr *hdr = out_iov[i].iov_base;
++
++ if (out_buftype[i] == CIFS_NO_BUFFER || !hdr ||
++ hdr->Status != STATUS_SUCCESS)
++ return false;
++ }
++ return true;
++}
++
++static inline void free_rsp_iov(struct kvec *iovs, int *buftype, int count)
++{
++ int i;
++
++ for (i = 0; i < count; i++) {
++ free_rsp_buf(buftype[i], iovs[i].iov_base);
++ memset(&iovs[i], 0, sizeof(*iovs));
++ buftype[i] = CIFS_NO_BUFFER;
++ }
++}
++
+ int smb2_query_path_info(const unsigned int xid,
+ struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
++ struct kvec in_iov[3], out_iov[5] = {};
++ struct cached_fid *cfid = NULL;
+ struct cifs_open_parms oparms;
+- __u32 create_options = 0;
+ struct cifsFileInfo *cfile;
+- struct cached_fid *cfid = NULL;
++ __u32 create_options = 0;
++ int out_buftype[5] = {};
+ struct smb2_hdr *hdr;
+- struct kvec in_iov[3], out_iov[3] = {};
+- int out_buftype[3] = {};
++ int num_cmds = 0;
+ int cmds[3];
+ bool islink;
+- int i, num_cmds = 0;
+ int rc, rc2;
+
+ data->adjust_tz = false;
+@@ -943,14 +981,14 @@ int smb2_query_path_info(const unsigned int xid,
+ if (rc || !data->reparse_point)
+ goto out;
+
+- if (!tcon->posix_extensions)
+- cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
+ /*
+ * Skip SMB2_OP_GET_REPARSE if symlink already parsed in create
+ * response.
+ */
+ if (data->reparse.tag != IO_REPARSE_TAG_SYMLINK)
+ cmds[num_cmds++] = SMB2_OP_GET_REPARSE;
++ if (!tcon->posix_extensions)
++ cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
+
+ oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+ FILE_READ_ATTRIBUTES |
+@@ -958,9 +996,18 @@ int smb2_query_path_info(const unsigned int xid,
+ FILE_OPEN, create_options |
+ OPEN_REPARSE_POINT, ACL_NO_MODE);
+ cifs_get_readable_path(tcon, full_path, &cfile);
++ free_rsp_iov(out_iov, out_buftype, ARRAY_SIZE(out_iov));
+ rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
+ &oparms, in_iov, cmds, num_cmds,
+- cfile, NULL, NULL, NULL);
++ cfile, out_iov, out_buftype, NULL);
++ if (rc && ea_unsupported(cmds, num_cmds,
++ out_iov, out_buftype)) {
++ if (data->reparse.tag != IO_REPARSE_TAG_LX_BLK &&
++ data->reparse.tag != IO_REPARSE_TAG_LX_CHR)
++ rc = 0;
++ else
++ rc = -EOPNOTSUPP;
++ }
+ break;
+ case -EREMOTE:
+ break;
+@@ -978,8 +1025,7 @@ int smb2_query_path_info(const unsigned int xid,
+ }
+
+ out:
+- for (i = 0; i < ARRAY_SIZE(out_buftype); i++)
+- free_rsp_buf(out_buftype[i], out_iov[i].iov_base);
++ free_rsp_iov(out_iov, out_buftype, ARRAY_SIZE(out_iov));
+ return rc;
+ }
+
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 4b5cad44a12683..fc3de42d9d764f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3434,7 +3434,6 @@ struct offset_ctx {
+ void simple_offset_init(struct offset_ctx *octx);
+ int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry);
+ void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry);
+-int simple_offset_empty(struct dentry *dentry);
+ int simple_offset_rename(struct inode *old_dir, struct dentry *old_dentry,
+ struct inode *new_dir, struct dentry *new_dentry);
+ int simple_offset_rename_exchange(struct inode *old_dir,
+diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
+index 709ad84809e1ea..8934c7da47f4c3 100644
+--- a/include/linux/seccomp.h
++++ b/include/linux/seccomp.h
+@@ -50,10 +50,10 @@ struct seccomp_data;
+
+ #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+ static inline int secure_computing(void) { return 0; }
+-static inline int __secure_computing(const struct seccomp_data *sd) { return 0; }
+ #else
+ static inline void secure_computing_strict(int this_syscall) { return; }
+ #endif
++static inline int __secure_computing(const struct seccomp_data *sd) { return 0; }
+
+ static inline long prctl_get_seccomp(void)
+ {
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index 6f3b6de230bd2b..a67bae350416b3 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -1153,6 +1153,13 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
+ struct io_rsrc_data *data;
+ int i, ret, nbufs;
+
++ /*
++ * Accounting state is shared between the two rings; that only works if
++ * both rings are accounted towards the same counters.
++ */
++ if (ctx->user != src_ctx->user || ctx->mm_account != src_ctx->mm_account)
++ return -EINVAL;
++
+ /*
+ * Drop our own lock here. We'll setup the data we need and reference
+ * the source buffers, then re-grab, check, and assign at the end.
+diff --git a/mm/filemap.c b/mm/filemap.c
+index dc83baab85a140..05adf0392625da 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -4383,6 +4383,20 @@ static void filemap_cachestat(struct address_space *mapping,
+ rcu_read_unlock();
+ }
+
++/*
++ * See mincore: reveal pagecache information only for files
++ * that the calling process has write access to, or could (if
++ * tried) open for writing.
++ */
++static inline bool can_do_cachestat(struct file *f)
++{
++ if (f->f_mode & FMODE_WRITE)
++ return true;
++ if (inode_owner_or_capable(file_mnt_idmap(f), file_inode(f)))
++ return true;
++ return file_permission(f, MAY_WRITE) == 0;
++}
++
+ /*
+ * The cachestat(2) system call.
+ *
+@@ -4442,6 +4456,11 @@ SYSCALL_DEFINE4(cachestat, unsigned int, fd,
+ return -EOPNOTSUPP;
+ }
+
++ if (!can_do_cachestat(fd_file(f))) {
++ fdput(f);
++ return -EPERM;
++ }
++
+ if (flags != 0) {
+ fdput(f);
+ return -EINVAL;
+diff --git a/mm/shmem.c b/mm/shmem.c
+index dd4eb11c84b59e..5960e5035f9835 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3700,7 +3700,7 @@ static int shmem_unlink(struct inode *dir, struct dentry *dentry)
+
+ static int shmem_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+- if (!simple_offset_empty(dentry))
++ if (!simple_empty(dentry))
+ return -ENOTEMPTY;
+
+ drop_nlink(d_inode(dentry));
+@@ -3757,7 +3757,7 @@ static int shmem_rename2(struct mnt_idmap *idmap,
+ return simple_offset_rename_exchange(old_dir, old_dentry,
+ new_dir, new_dentry);
+
+- if (!simple_offset_empty(new_dentry))
++ if (!simple_empty(new_dentry))
+ return -ENOTEMPTY;
+
+ if (flags & RENAME_WHITEOUT) {
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 0030ce8fecfc56..7fefb2eb3fcd80 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -251,7 +251,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
+ struct zswap_pool *pool;
+ char name[38]; /* 'zswap' + 32 char (max) num + \0 */
+ gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
+- int ret;
++ int ret, cpu;
+
+ if (!zswap_has_pool) {
+ /* if either are unset, pool initialization failed, and we
+@@ -285,6 +285,9 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
+ goto error;
+ }
+
++ for_each_possible_cpu(cpu)
++ mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
++
+ ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE,
+ &pool->node);
+ if (ret)
+@@ -812,36 +815,41 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
+ {
+ struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
+ struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
+- struct crypto_acomp *acomp;
+- struct acomp_req *req;
++ struct crypto_acomp *acomp = NULL;
++ struct acomp_req *req = NULL;
++ u8 *buffer = NULL;
+ int ret;
+
+- mutex_init(&acomp_ctx->mutex);
+-
+- acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
+- if (!acomp_ctx->buffer)
+- return -ENOMEM;
++ buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
++ if (!buffer) {
++ ret = -ENOMEM;
++ goto fail;
++ }
+
+ acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
+ if (IS_ERR(acomp)) {
+ pr_err("could not alloc crypto acomp %s : %ld\n",
+ pool->tfm_name, PTR_ERR(acomp));
+ ret = PTR_ERR(acomp);
+- goto acomp_fail;
++ goto fail;
+ }
+- acomp_ctx->acomp = acomp;
+- acomp_ctx->is_sleepable = acomp_is_async(acomp);
+
+- req = acomp_request_alloc(acomp_ctx->acomp);
++ req = acomp_request_alloc(acomp);
+ if (!req) {
+ pr_err("could not alloc crypto acomp_request %s\n",
+ pool->tfm_name);
+ ret = -ENOMEM;
+- goto req_fail;
++ goto fail;
+ }
+- acomp_ctx->req = req;
+
++ /*
++ * Only hold the mutex after completing allocations, otherwise we may
++ * recurse into zswap through reclaim and attempt to hold the mutex
++ * again resulting in a deadlock.
++ */
++ mutex_lock(&acomp_ctx->mutex);
+ crypto_init_wait(&acomp_ctx->wait);
++
+ /*
+ * if the backend of acomp is async zip, crypto_req_done() will wakeup
+ * crypto_wait_req(); if the backend of acomp is scomp, the callback
+@@ -850,12 +858,17 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
+ acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ crypto_req_done, &acomp_ctx->wait);
+
++ acomp_ctx->buffer = buffer;
++ acomp_ctx->acomp = acomp;
++ acomp_ctx->is_sleepable = acomp_is_async(acomp);
++ acomp_ctx->req = req;
++ mutex_unlock(&acomp_ctx->mutex);
+ return 0;
+
+-req_fail:
+- crypto_free_acomp(acomp_ctx->acomp);
+-acomp_fail:
+- kfree(acomp_ctx->buffer);
++fail:
++ if (acomp)
++ crypto_free_acomp(acomp);
++ kfree(buffer);
+ return ret;
+ }
+
+@@ -864,17 +877,45 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
+ struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
+ struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
+
++ mutex_lock(&acomp_ctx->mutex);
+ if (!IS_ERR_OR_NULL(acomp_ctx)) {
+ if (!IS_ERR_OR_NULL(acomp_ctx->req))
+ acomp_request_free(acomp_ctx->req);
++ acomp_ctx->req = NULL;
+ if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+ crypto_free_acomp(acomp_ctx->acomp);
+ kfree(acomp_ctx->buffer);
+ }
++ mutex_unlock(&acomp_ctx->mutex);
+
+ return 0;
+ }
+
++static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool)
++{
++ struct crypto_acomp_ctx *acomp_ctx;
++
++ for (;;) {
++ acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
++ mutex_lock(&acomp_ctx->mutex);
++ if (likely(acomp_ctx->req))
++ return acomp_ctx;
++ /*
++ * It is possible that we were migrated to a different CPU after
++ * getting the per-CPU ctx but before the mutex was acquired. If
++ * the old CPU got offlined, zswap_cpu_comp_dead() could have
++ * already freed ctx->req (among other things) and set it to
++ * NULL. Just try again on the new CPU that we ended up on.
++ */
++ mutex_unlock(&acomp_ctx->mutex);
++ }
++}
++
++static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
++{
++ mutex_unlock(&acomp_ctx->mutex);
++}
++
+ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+ {
+ struct crypto_acomp_ctx *acomp_ctx;
+@@ -887,10 +928,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+ gfp_t gfp;
+ u8 *dst;
+
+- acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+-
+- mutex_lock(&acomp_ctx->mutex);
+-
++ acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
+ dst = acomp_ctx->buffer;
+ sg_init_table(&input, 1);
+ sg_set_folio(&input, folio, PAGE_SIZE, 0);
+@@ -943,7 +981,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+ else if (alloc_ret)
+ zswap_reject_alloc_fail++;
+
+- mutex_unlock(&acomp_ctx->mutex);
++ acomp_ctx_put_unlock(acomp_ctx);
+ return comp_ret == 0 && alloc_ret == 0;
+ }
+
+@@ -954,9 +992,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
+ struct crypto_acomp_ctx *acomp_ctx;
+ u8 *src;
+
+- acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+- mutex_lock(&acomp_ctx->mutex);
+-
++ acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
+ src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+ /*
+ * If zpool_map_handle is atomic, we cannot reliably utilize its mapped buffer
+@@ -980,10 +1016,10 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
+ acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
+ BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
+ BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
+- mutex_unlock(&acomp_ctx->mutex);
+
+ if (src != acomp_ctx->buffer)
+ zpool_unmap_handle(zpool, entry->handle);
++ acomp_ctx_put_unlock(acomp_ctx);
+ }
+
+ /*********************************
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index f80bc05d4c5a50..516038a4416380 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -91,6 +91,8 @@ ets_class_from_arg(struct Qdisc *sch, unsigned long arg)
+ {
+ struct ets_sched *q = qdisc_priv(sch);
+
++ if (arg == 0 || arg > q->nbands)
++ return NULL;
+ return &q->classes[arg - 1];
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a9f6138b59b0c1..8c4de5a253addf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10916,8 +10916,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x38e0, "Yoga Y990 Intel VECO Dual", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38f8, "Yoga Book 9i", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+- SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+- SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
++ SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC),
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 7092842480ef17..0d9d1d250f2b5e 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -2397,6 +2397,7 @@ config SND_SOC_WM8993
+
+ config SND_SOC_WM8994
+ tristate
++ depends on MFD_WM8994
+
+ config SND_SOC_WM8995
+ tristate
+diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c
+index d0098b4558b529..8ec4083cd3b807 100644
+--- a/sound/soc/codecs/cs42l43.c
++++ b/sound/soc/codecs/cs42l43.c
+@@ -2446,6 +2446,7 @@ static const struct dev_pm_ops cs42l43_codec_pm_ops = {
+ SYSTEM_SLEEP_PM_OPS(cs42l43_codec_suspend, cs42l43_codec_resume)
+ NOIRQ_SYSTEM_SLEEP_PM_OPS(cs42l43_codec_suspend_noirq, cs42l43_codec_resume_noirq)
+ RUNTIME_PM_OPS(NULL, cs42l43_codec_runtime_resume, NULL)
++ SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ static const struct platform_device_id cs42l43_codec_id_table[] = {
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index 61729e5b50a8e4..f508df01145bfb 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -39,7 +39,9 @@ struct es8316_priv {
+ struct snd_soc_jack *jack;
+ int irq;
+ unsigned int sysclk;
+- unsigned int allowed_rates[ARRAY_SIZE(supported_mclk_lrck_ratios)];
++ /* ES83xx supports halving the MCLK so it supports twice as many rates
++ */
++ unsigned int allowed_rates[ARRAY_SIZE(supported_mclk_lrck_ratios) * 2];
+ struct snd_pcm_hw_constraint_list sysclk_constraints;
+ bool jd_inverted;
+ };
+@@ -386,6 +388,12 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+
+ if (freq % ratio == 0)
+ es8316->allowed_rates[count++] = freq / ratio;
++
++ /* We also check if the halved MCLK produces a valid rate
++ * since the codec supports halving the MCLK.
++ */
++ if ((freq / ratio) % 2 == 0)
++ es8316->allowed_rates[count++] = freq / ratio / 2;
+ }
+
+ if (count) {
+diff --git a/sound/soc/samsung/Kconfig b/sound/soc/samsung/Kconfig
+index 4b1ea7b2c79617..60b4b7b7521554 100644
+--- a/sound/soc/samsung/Kconfig
++++ b/sound/soc/samsung/Kconfig
+@@ -127,8 +127,9 @@ config SND_SOC_SAMSUNG_TM2_WM5110
+
+ config SND_SOC_SAMSUNG_ARIES_WM8994
+ tristate "SoC I2S Audio support for WM8994 on Aries"
+- depends on SND_SOC_SAMSUNG && MFD_WM8994 && IIO && EXTCON
++ depends on SND_SOC_SAMSUNG && I2C && IIO && EXTCON
+ select SND_SOC_BT_SCO
++ select MFD_WM8994
+ select SND_SOC_WM8994
+ select SND_SAMSUNG_I2S
+ help
+@@ -140,8 +141,9 @@ config SND_SOC_SAMSUNG_ARIES_WM8994
+
+ config SND_SOC_SAMSUNG_MIDAS_WM1811
+ tristate "SoC I2S Audio support for Midas boards"
+- depends on SND_SOC_SAMSUNG && IIO
++ depends on SND_SOC_SAMSUNG && I2C && IIO
+ select SND_SAMSUNG_I2S
++ select MFD_WM8994
+ select SND_SOC_WM8994
+ help
+ Say Y if you want to add support for SoC audio on the Midas boards.
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 8ba0aff8be2ec2..7968d6a2f592ac 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2239,6 +2239,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
++ DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+ QUIRK_FLAG_FIXED_RATE),
+ DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-08 11:26 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-08 11:26 UTC (permalink / raw
To: gentoo-commits
commit: 2415ed0f24c93b7a8b46736d437efcca53246fd4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 8 11:25:59 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 8 11:25:59 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2415ed0f
Linux patch 6.12.13
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1012_linux-6.12.13.patch | 26355 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 26359 insertions(+)
diff --git a/0000_README b/0000_README
index 17858f75..ceb862e7 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-6.12.12.patch
From: https://www.kernel.org
Desc: Linux 6.12.12
+Patch: 1012_linux-6.12.13.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.13
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1012_linux-6.12.13.patch b/1012_linux-6.12.13.patch
new file mode 100644
index 00000000..784a2897
--- /dev/null
+++ b/1012_linux-6.12.13.patch
@@ -0,0 +1,26355 @@
+diff --git a/Documentation/core-api/symbol-namespaces.rst b/Documentation/core-api/symbol-namespaces.rst
+index 12e4aecdae9452..d1154eb438101a 100644
+--- a/Documentation/core-api/symbol-namespaces.rst
++++ b/Documentation/core-api/symbol-namespaces.rst
+@@ -68,7 +68,7 @@ is to define the default namespace in the ``Makefile`` of the subsystem. E.g. to
+ export all symbols defined in usb-common into the namespace USB_COMMON, add a
+ line like this to drivers/usb/common/Makefile::
+
+- ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_COMMON
++ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_COMMON"'
+
+ That will affect all EXPORT_SYMBOL() and EXPORT_SYMBOL_GPL() statements. A
+ symbol exported with EXPORT_SYMBOL_NS() while this definition is present, will
+@@ -79,7 +79,7 @@ A second option to define the default namespace is directly in the compilation
+ unit as preprocessor statement. The above example would then read::
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+- #define DEFAULT_SYMBOL_NAMESPACE USB_COMMON
++ #define DEFAULT_SYMBOL_NAMESPACE "USB_COMMON"
+
+ within the corresponding compilation unit before any EXPORT_SYMBOL macro is
+ used.
+diff --git a/Documentation/devicetree/bindings/clock/imx93-clock.yaml b/Documentation/devicetree/bindings/clock/imx93-clock.yaml
+index ccb53c6b96c119..98c0800732ef5d 100644
+--- a/Documentation/devicetree/bindings/clock/imx93-clock.yaml
++++ b/Documentation/devicetree/bindings/clock/imx93-clock.yaml
+@@ -16,6 +16,7 @@ description: |
+ properties:
+ compatible:
+ enum:
++ - fsl,imx91-ccm
+ - fsl,imx93-ccm
+
+ reg:
+diff --git a/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml b/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml
+index e850a8894758df..bb40bb9e036ee0 100644
+--- a/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml
++++ b/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml
+@@ -27,7 +27,7 @@ properties:
+ description: |
+ For multicolor LED support this property should be defined as either
+ LED_COLOR_ID_RGB or LED_COLOR_ID_MULTI which can be found in
+- include/linux/leds/common.h.
++ include/dt-bindings/leds/common.h.
+ enum: [ 8, 9 ]
+
+ required:
+diff --git a/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml b/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml
+index bb81307dc11b89..4fc78efaa5504a 100644
+--- a/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml
++++ b/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml
+@@ -50,15 +50,15 @@ properties:
+ minimum: 0
+ maximum: 1
+
+- rohm,charger-sense-resistor-ohms:
+- minimum: 10000000
+- maximum: 50000000
++ rohm,charger-sense-resistor-micro-ohms:
++ minimum: 10000
++ maximum: 50000
+ description: |
+- BD71827 and BD71828 have SAR ADC for measuring charging currents.
+- External sense resistor (RSENSE in data sheet) should be used. If
+- something other but 30MOhm resistor is used the resistance value
+- should be given here in Ohms.
+- default: 30000000
++ BD71815 has SAR ADC for measuring charging currents. External sense
++ resistor (RSENSE in data sheet) should be used. If something other
++ but a 30 mOhm resistor is used the resistance value should be given
++ here in micro Ohms.
++ default: 30000
+
+ regulators:
+ $ref: /schemas/regulator/rohm,bd71815-regulator.yaml
+@@ -67,7 +67,7 @@ properties:
+
+ gpio-reserved-ranges:
+ description: |
+- Usage of BD71828 GPIO pins can be changed via OTP. This property can be
++ Usage of BD71815 GPIO pins can be changed via OTP. This property can be
+ used to mark the pins which should not be configured for GPIO. Please see
+ the ../gpio/gpio.txt for more information.
+
+@@ -113,7 +113,7 @@ examples:
+ gpio-controller;
+ #gpio-cells = <2>;
+
+- rohm,charger-sense-resistor-ohms = <10000000>;
++ rohm,charger-sense-resistor-micro-ohms = <10000>;
+
+ regulators {
+ buck1: buck1 {
+diff --git a/Documentation/devicetree/bindings/mmc/mmc-controller.yaml b/Documentation/devicetree/bindings/mmc/mmc-controller.yaml
+index 58ae298cd2fcf4..23884b8184a9df 100644
+--- a/Documentation/devicetree/bindings/mmc/mmc-controller.yaml
++++ b/Documentation/devicetree/bindings/mmc/mmc-controller.yaml
+@@ -25,7 +25,7 @@ properties:
+ "#address-cells":
+ const: 1
+ description: |
+- The cell is the slot ID if a function subnode is used.
++ The cell is the SDIO function number if a function subnode is used.
+
+ "#size-cells":
+ const: 0
+diff --git a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+index cd4aa27218a1b6..fa6743bb269d44 100644
+--- a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+@@ -35,10 +35,6 @@ properties:
+ $ref: regulator.yaml#
+ unevaluatedProperties: false
+
+- properties:
+- regulator-compatible:
+- pattern: "^vbuck[1-4]$"
+-
+ additionalProperties: false
+
+ required:
+@@ -56,7 +52,6 @@ examples:
+
+ regulators {
+ vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+ regulator-enable-ramp-delay = <256>;
+@@ -64,7 +59,6 @@ examples:
+ };
+
+ vbuck3 {
+- regulator-compatible = "vbuck3";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+ regulator-enable-ramp-delay = <256>;
+diff --git a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
+index bba40158dd5c5a..8e50b900d51c27 100644
+--- a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
++++ b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
+@@ -272,7 +272,7 @@ The available attributes are:
+ echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode
+
+ Async mode without interrupts (caller must poll) can be enabled by
+- writing 'async' to it::
++ writing 'async' to it (please see Caveat)::
+
+ echo async > /sys/bus/dsa/drivers/crypto/sync_mode
+
+@@ -283,6 +283,13 @@ The available attributes are:
+
+ The default mode is 'sync'.
+
++ Caveat: since the only mechanism that iaa_crypto currently implements
++ for async polling without interrupts is via the 'sync' mode as
++ described earlier, writing 'async' to
++ '/sys/bus/dsa/drivers/crypto/sync_mode' will internally enable the
++ 'sync' mode. This is to ensure correct iaa_crypto behavior until true
++ async polling without interrupts is enabled in iaa_crypto.
++
+ .. _iaa_default_config:
+
+ IAA Default Configuration
+diff --git a/Documentation/translations/it_IT/core-api/symbol-namespaces.rst b/Documentation/translations/it_IT/core-api/symbol-namespaces.rst
+index 17abc25ee4c1e4..6657f82c0101f1 100644
+--- a/Documentation/translations/it_IT/core-api/symbol-namespaces.rst
++++ b/Documentation/translations/it_IT/core-api/symbol-namespaces.rst
+@@ -69,7 +69,7 @@ Per esempio per esportare tutti i simboli definiti in usb-common nello spazio
+ dei nomi USB_COMMON, si può aggiungere la seguente linea in
+ drivers/usb/common/Makefile::
+
+- ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_COMMON
++ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_COMMON"'
+
+ Questo cambierà tutte le macro EXPORT_SYMBOL() ed EXPORT_SYMBOL_GPL(). Invece,
+ un simbolo esportato con EXPORT_SYMBOL_NS() non verrà cambiato e il simbolo
+@@ -79,7 +79,7 @@ Una seconda possibilità è quella di definire il simbolo di preprocessore
+ direttamente nei file da compilare. L'esempio precedente diventerebbe::
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+- #define DEFAULT_SYMBOL_NAMESPACE USB_COMMON
++ #define DEFAULT_SYMBOL_NAMESPACE "USB_COMMON"
+
+ Questo va messo prima di un qualsiasi uso di EXPORT_SYMBOL.
+
+diff --git a/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst b/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst
+index bb16f0611046d3..f3e73834f7d7df 100644
+--- a/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst
++++ b/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst
+@@ -66,7 +66,7 @@
+ 子系统的 ``Makefile`` 中定义默认命名空间。例如,如果要将usb-common中定义的所有符号导
+ 出到USB_COMMON命名空间,可以在drivers/usb/common/Makefile中添加这样一行::
+
+- ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_COMMON
++ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_COMMON"'
+
+ 这将影响所有 EXPORT_SYMBOL() 和 EXPORT_SYMBOL_GPL() 语句。当这个定义存在时,
+ 用EXPORT_SYMBOL_NS()导出的符号仍然会被导出到作为命名空间参数传递的命名空间中,
+@@ -76,7 +76,7 @@
+ 成::
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+- #define DEFAULT_SYMBOL_NAMESPACE USB_COMMON
++ #define DEFAULT_SYMBOL_NAMESPACE "USB_COMMON"
+
+ 应置于相关编译单元中任何 EXPORT_SYMBOL 宏之前
+
+diff --git a/Makefile b/Makefile
+index 9e6246e733eb94..5442ff45f963ed 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -509,7 +509,7 @@ KGZIP = gzip
+ KBZIP2 = bzip2
+ KLZOP = lzop
+ LZMA = lzma
+-LZ4 = lz4c
++LZ4 = lz4
+ XZ = xz
+ ZSTD = zstd
+
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts
+index 98477792aa005a..14d17510310680 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts
++++ b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts
+@@ -284,12 +284,12 @@ &i2c10 {
+ &i2c11 {
+ status = "okay";
+ power-sensor@10 {
+- compatible = "adi, adm1272";
++ compatible = "adi,adm1272";
+ reg = <0x10>;
+ };
+
+ power-sensor@12 {
+- compatible = "adi, adm1272";
++ compatible = "adi,adm1272";
+ reg = <0x12>;
+ };
+
+@@ -461,22 +461,20 @@ adc@1f {
+ };
+
+ pwm@20{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ gpio@22{
+ compatible = "ti,tca6424";
+ reg = <0x22>;
++ gpio-controller;
++ #gpio-cells = <2>;
+ };
+
+ pwm@23{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x23>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ adc@33 {
+@@ -511,22 +509,20 @@ adc@1f {
+ };
+
+ pwm@20{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ gpio@22{
+ compatible = "ti,tca6424";
+ reg = <0x22>;
++ gpio-controller;
++ #gpio-cells = <2>;
+ };
+
+ pwm@23{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x23>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ adc@33 {
+diff --git a/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi b/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi
+index 6b6e77596ffa86..b108265e9bde42 100644
+--- a/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi
+@@ -440,7 +440,7 @@ gmac0: ethernet@ff800000 {
+ clocks = <&l4_mp_clk>, <&peri_emac_ptp_clk>;
+ clock-names = "stmmaceth", "ptp_ref";
+ resets = <&rst EMAC0_RESET>, <&rst EMAC0_OCP_RESET>;
+- reset-names = "stmmaceth", "ahb";
++ reset-names = "stmmaceth", "stmmaceth-ocp";
+ snps,axi-config = <&socfpga_axi_setup>;
+ status = "disabled";
+ };
+@@ -460,7 +460,7 @@ gmac1: ethernet@ff802000 {
+ clocks = <&l4_mp_clk>, <&peri_emac_ptp_clk>;
+ clock-names = "stmmaceth", "ptp_ref";
+ resets = <&rst EMAC1_RESET>, <&rst EMAC1_OCP_RESET>;
+- reset-names = "stmmaceth", "ahb";
++ reset-names = "stmmaceth", "stmmaceth-ocp";
+ snps,axi-config = <&socfpga_axi_setup>;
+ status = "disabled";
+ };
+@@ -480,7 +480,7 @@ gmac2: ethernet@ff804000 {
+ clocks = <&l4_mp_clk>, <&peri_emac_ptp_clk>;
+ clock-names = "stmmaceth", "ptp_ref";
+ resets = <&rst EMAC2_RESET>, <&rst EMAC2_OCP_RESET>;
+- reset-names = "stmmaceth", "ahb";
++ reset-names = "stmmaceth", "stmmaceth-ocp";
+ snps,axi-config = <&socfpga_axi_setup>;
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/mediatek/mt7623.dtsi b/arch/arm/boot/dts/mediatek/mt7623.dtsi
+index 814586abc2979e..fd7a89cc337d69 100644
+--- a/arch/arm/boot/dts/mediatek/mt7623.dtsi
++++ b/arch/arm/boot/dts/mediatek/mt7623.dtsi
+@@ -308,7 +308,7 @@ pwrap: pwrap@1000d000 {
+ clock-names = "spi", "wrap";
+ };
+
+- cir: cir@10013000 {
++ cir: ir-receiver@10013000 {
+ compatible = "mediatek,mt7623-cir";
+ reg = <0 0x10013000 0 0x1000>;
+ interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts
+index 15239834d886ed..35a933eec5738f 100644
+--- a/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts
++++ b/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts
+@@ -197,6 +197,7 @@ qspi1_flash: flash@0 {
+
+ &sdmmc0 {
+ bus-width = <4>;
++ no-1-8-v;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sdmmc0_default>;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts b/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts
+index 951a0c97d3c6bb..5933840bb8f7e0 100644
+--- a/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts
++++ b/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts
+@@ -514,6 +514,7 @@ kernel@200000 {
+
+ &sdmmc0 {
+ bus-width = <4>;
++ no-1-8-v;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sdmmc0_default>;
+ disable-wp;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi b/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi
+index 028961eb71089c..91ca23a66bf3c2 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi
+@@ -135,6 +135,7 @@ vgen6_reg: vldo4 {
+ lm75a: temperature-sensor@48 {
+ compatible = "national,lm75a";
+ reg = <0x48>;
++ vs-supply = <&vgen4_reg>;
+ };
+
+ /* NXP SE97BTP with temperature sensor + eeprom, TQMa7x 02xx */
+diff --git a/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi b/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi
+index ddad6497775b8e..ffb7233b063d23 100644
+--- a/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi
+@@ -85,8 +85,8 @@ regulators {
+
+ vddcpu: buck1 { /* VDD_CPU_1V2 */
+ regulator-name = "vddcpu";
+- regulator-min-microvolt = <1250000>;
+- regulator-max-microvolt = <1250000>;
++ regulator-min-microvolt = <1350000>;
++ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-initial-mode = <0>;
+ regulator-over-current-protection;
+diff --git a/arch/arm/boot/dts/st/stm32mp151.dtsi b/arch/arm/boot/dts/st/stm32mp151.dtsi
+index 4f878ec102c1f6..fdc42a89bd37d4 100644
+--- a/arch/arm/boot/dts/st/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp151.dtsi
+@@ -129,7 +129,7 @@ ipcc: mailbox@4c001000 {
+ reg = <0x4c001000 0x400>;
+ st,proc-id = <0>;
+ interrupts-extended =
+- <&exti 61 1>,
++ <&exti 61 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "rx", "tx";
+ clocks = <&rcc IPCC>;
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi
+index bb4f8a0b937f37..abe2dfe706364b 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi
+@@ -6,18 +6,6 @@
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/pwm/pwm.h>
+
+-/ {
+- aliases {
+- serial0 = &uart4;
+- serial1 = &usart3;
+- serial2 = &uart8;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-};
+-
+ &adc {
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi
+index 171d7c7658fa86..0fb4e55843b9d2 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -7,16 +7,6 @@
+ #include <dt-bindings/pwm/pwm.h>
+
+ / {
+- aliases {
+- serial0 = &uart4;
+- serial1 = &usart3;
+- serial2 = &uart8;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-
+ clk_ext_audio_codec: clock-codec {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi
+index b5bc53accd6b2f..01c693cc03446c 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi
+@@ -7,16 +7,6 @@
+ #include <dt-bindings/pwm/pwm.h>
+
+ / {
+- aliases {
+- serial0 = &uart4;
+- serial1 = &usart3;
+- serial2 = &uart8;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-
+ led {
+ compatible = "gpio-leds";
+
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
+index 74a11ccc5333f8..142d4a8731f8d4 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
+@@ -14,6 +14,13 @@ aliases {
+ ethernet1 = &ksz8851;
+ rtc0 = &hwrtc;
+ rtc1 = &rtc;
++ serial0 = &uart4;
++ serial1 = &uart8;
++ serial2 = &usart3;
++ };
++
++ chosen {
++ stdout-path = "serial0:115200n8";
+ };
+
+ memory@c0000000 {
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index b9b995f8a36e14..05a1547642b60f 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -598,7 +598,21 @@ static int at91_suspend_finish(unsigned long val)
+ return 0;
+ }
+
+-static void at91_pm_switch_ba_to_vbat(void)
++/**
++ * at91_pm_switch_ba_to_auto() - Configure Backup Unit Power Switch
++ * to automatic/hardware mode.
++ *
++ * The Backup Unit Power Switch can be managed either by software or hardware.
++ * Enabling hardware mode allows the automatic transition of power between
++ * VDDANA (or VDDIN33) and VDDBU (or VBAT, respectively), based on the
++ * availability of these power sources.
++ *
++ * If the Backup Unit Power Switch is already in automatic mode, no action is
++ * required. If it is in software-controlled mode, it is switched to automatic
++ * mode to enhance safety and eliminate the need for toggling between power
++ * sources.
++ */
++static void at91_pm_switch_ba_to_auto(void)
+ {
+ unsigned int offset = offsetof(struct at91_pm_sfrbu_regs, pswbu);
+ unsigned int val;
+@@ -609,24 +623,19 @@ static void at91_pm_switch_ba_to_vbat(void)
+
+ val = readl(soc_pm.data.sfrbu + offset);
+
+- /* Already on VBAT. */
+- if (!(val & soc_pm.sfrbu_regs.pswbu.state))
++ /* Already on auto/hardware. */
++ if (!(val & soc_pm.sfrbu_regs.pswbu.ctrl))
+ return;
+
+- val &= ~soc_pm.sfrbu_regs.pswbu.softsw;
+- val |= soc_pm.sfrbu_regs.pswbu.key | soc_pm.sfrbu_regs.pswbu.ctrl;
++ val &= ~soc_pm.sfrbu_regs.pswbu.ctrl;
++ val |= soc_pm.sfrbu_regs.pswbu.key;
+ writel(val, soc_pm.data.sfrbu + offset);
+-
+- /* Wait for update. */
+- val = readl(soc_pm.data.sfrbu + offset);
+- while (val & soc_pm.sfrbu_regs.pswbu.state)
+- val = readl(soc_pm.data.sfrbu + offset);
+ }
+
+ static void at91_pm_suspend(suspend_state_t state)
+ {
+ if (soc_pm.data.mode == AT91_PM_BACKUP) {
+- at91_pm_switch_ba_to_vbat();
++ at91_pm_switch_ba_to_auto();
+
+ cpu_suspend(0, at91_suspend_finish);
+
+diff --git a/arch/arm/mach-omap1/board-nokia770.c b/arch/arm/mach-omap1/board-nokia770.c
+index 3312ef93355da7..a5bf5554800fe1 100644
+--- a/arch/arm/mach-omap1/board-nokia770.c
++++ b/arch/arm/mach-omap1/board-nokia770.c
+@@ -289,7 +289,7 @@ static struct gpiod_lookup_table nokia770_irq_gpio_table = {
+ GPIO_LOOKUP("gpio-0-15", 15, "ads7846_irq",
+ GPIO_ACTIVE_HIGH),
+ /* GPIO used for retu IRQ */
+- GPIO_LOOKUP("gpio-48-63", 15, "retu_irq",
++ GPIO_LOOKUP("gpio-48-63", 14, "retu_irq",
+ GPIO_ACTIVE_HIGH),
+ /* GPIO used for tahvo IRQ */
+ GPIO_LOOKUP("gpio-32-47", 8, "tahvo_irq",
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
+index 379c2c8466f504..86d44349e09517 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
+@@ -390,6 +390,8 @@ &sound {
+ &tcon0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_rgb666_pins>;
++ assigned-clocks = <&ccu CLK_TCON0>;
++ assigned-clock-parents = <&ccu CLK_PLL_VIDEO0_2X>;
+
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts
+index b407e1dd08a737..ec055510af8b68 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts
+@@ -369,6 +369,8 @@ &sound {
+ &tcon0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_rgb666_pins>;
++ assigned-clocks = <&ccu CLK_TCON0>;
++ assigned-clock-parents = <&ccu CLK_PLL_VIDEO0_2X>;
+
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index a5c3920e0f048e..0fecf0abb204c7 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -445,6 +445,8 @@ tcon0: lcd-controller@1c0c000 {
+ clock-names = "ahb", "tcon-ch0";
+ clock-output-names = "tcon-data-clock";
+ #clock-cells = <0>;
++ assigned-clocks = <&ccu CLK_TCON0>;
++ assigned-clock-parents = <&ccu CLK_PLL_MIPI>;
+ resets = <&ccu RST_BUS_TCON0>, <&ccu RST_BUS_LVDS>;
+ reset-names = "lcd", "lvds";
+
+diff --git a/arch/arm64/boot/dts/freescale/imx93.dtsi b/arch/arm64/boot/dts/freescale/imx93.dtsi
+index 04b9b3d31f4faf..7bc3852c6ef8fb 100644
+--- a/arch/arm64/boot/dts/freescale/imx93.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx93.dtsi
+@@ -917,7 +917,7 @@ xcvr: xcvr@42680000 {
+ reg-names = "ram", "regs", "rxfifo", "txfifo";
+ interrupts = <GIC_SPI 203 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 204 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX93_CLK_BUS_WAKEUP>,
++ clocks = <&clk IMX93_CLK_SPDIF_IPG>,
+ <&clk IMX93_CLK_SPDIF_GATE>,
+ <&clk IMX93_CLK_DUMMY>,
+ <&clk IMX93_CLK_AUD_XCVR_GATE>;
+diff --git a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+index b1ea7dcaed17dc..47234d0858dd21 100644
+--- a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
++++ b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+@@ -435,7 +435,7 @@ &cp1_eth1 {
+ managed = "in-band-status";
+ phy-mode = "sgmii";
+ phy = <&cp1_phy0>;
+- phys = <&cp0_comphy3 1>;
++ phys = <&cp1_comphy3 1>;
+ status = "okay";
+ };
+
+@@ -444,7 +444,7 @@ &cp1_eth2 {
+ managed = "in-band-status";
+ phy-mode = "sgmii";
+ phy = <&cp1_phy1>;
+- phys = <&cp0_comphy5 2>;
++ phys = <&cp1_comphy5 2>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt7988a.dtsi b/arch/arm64/boot/dts/mediatek/mt7988a.dtsi
+index aa728331e876b7..284e240b79977f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7988a.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7988a.dtsi
+@@ -129,6 +129,7 @@ i2c@11003000 {
+ reg = <0 0x11003000 0 0x1000>,
+ <0 0x10217080 0 0x80>;
+ interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>;
++ clock-div = <1>;
+ clocks = <&infracfg CLK_INFRA_I2C_BCK>,
+ <&infracfg CLK_INFRA_66M_AP_DMA_BCK>;
+ clock-names = "main", "dma";
+@@ -142,6 +143,7 @@ i2c@11004000 {
+ reg = <0 0x11004000 0 0x1000>,
+ <0 0x10217100 0 0x80>;
+ interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
++ clock-div = <1>;
+ clocks = <&infracfg CLK_INFRA_I2C_BCK>,
+ <&infracfg CLK_INFRA_66M_AP_DMA_BCK>;
+ clock-names = "main", "dma";
+@@ -155,6 +157,7 @@ i2c@11005000 {
+ reg = <0 0x11005000 0 0x1000>,
+ <0 0x10217180 0 0x80>;
+ interrupts = <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
++ clock-div = <1>;
+ clocks = <&infracfg CLK_INFRA_I2C_BCK>,
+ <&infracfg CLK_INFRA_66M_AP_DMA_BCK>;
+ clock-names = "main", "dma";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+index b4d85147b77b0b..309e2d104fdc9f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+@@ -931,7 +931,7 @@ pmic: pmic {
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+- clock: mt6397clock {
++ clock: clocks {
+ compatible = "mediatek,mt6397-clk";
+ #clock-cells = <1>;
+ };
+@@ -942,11 +942,10 @@ pio6397: pinctrl {
+ #gpio-cells = <2>;
+ };
+
+- regulator: mt6397regulator {
++ regulators {
+ compatible = "mediatek,mt6397-regulator";
+
+ mt6397_vpca15_reg: buck_vpca15 {
+- regulator-compatible = "buck_vpca15";
+ regulator-name = "vpca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -956,7 +955,6 @@ mt6397_vpca15_reg: buck_vpca15 {
+ };
+
+ mt6397_vpca7_reg: buck_vpca7 {
+- regulator-compatible = "buck_vpca7";
+ regulator-name = "vpca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -966,7 +964,6 @@ mt6397_vpca7_reg: buck_vpca7 {
+ };
+
+ mt6397_vsramca15_reg: buck_vsramca15 {
+- regulator-compatible = "buck_vsramca15";
+ regulator-name = "vsramca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -975,7 +972,6 @@ mt6397_vsramca15_reg: buck_vsramca15 {
+ };
+
+ mt6397_vsramca7_reg: buck_vsramca7 {
+- regulator-compatible = "buck_vsramca7";
+ regulator-name = "vsramca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -984,7 +980,6 @@ mt6397_vsramca7_reg: buck_vsramca7 {
+ };
+
+ mt6397_vcore_reg: buck_vcore {
+- regulator-compatible = "buck_vcore";
+ regulator-name = "vcore";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -993,7 +988,6 @@ mt6397_vcore_reg: buck_vcore {
+ };
+
+ mt6397_vgpu_reg: buck_vgpu {
+- regulator-compatible = "buck_vgpu";
+ regulator-name = "vgpu";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -1002,7 +996,6 @@ mt6397_vgpu_reg: buck_vgpu {
+ };
+
+ mt6397_vdrm_reg: buck_vdrm {
+- regulator-compatible = "buck_vdrm";
+ regulator-name = "vdrm";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1400000>;
+@@ -1011,7 +1004,6 @@ mt6397_vdrm_reg: buck_vdrm {
+ };
+
+ mt6397_vio18_reg: buck_vio18 {
+- regulator-compatible = "buck_vio18";
+ regulator-name = "vio18";
+ regulator-min-microvolt = <1620000>;
+ regulator-max-microvolt = <1980000>;
+@@ -1020,18 +1012,15 @@ mt6397_vio18_reg: buck_vio18 {
+ };
+
+ mt6397_vtcxo_reg: ldo_vtcxo {
+- regulator-compatible = "ldo_vtcxo";
+ regulator-name = "vtcxo";
+ regulator-always-on;
+ };
+
+ mt6397_va28_reg: ldo_va28 {
+- regulator-compatible = "ldo_va28";
+ regulator-name = "va28";
+ };
+
+ mt6397_vcama_reg: ldo_vcama {
+- regulator-compatible = "ldo_vcama";
+ regulator-name = "vcama";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+@@ -1039,18 +1028,15 @@ mt6397_vcama_reg: ldo_vcama {
+ };
+
+ mt6397_vio28_reg: ldo_vio28 {
+- regulator-compatible = "ldo_vio28";
+ regulator-name = "vio28";
+ regulator-always-on;
+ };
+
+ mt6397_vusb_reg: ldo_vusb {
+- regulator-compatible = "ldo_vusb";
+ regulator-name = "vusb";
+ };
+
+ mt6397_vmc_reg: ldo_vmc {
+- regulator-compatible = "ldo_vmc";
+ regulator-name = "vmc";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1058,7 +1044,6 @@ mt6397_vmc_reg: ldo_vmc {
+ };
+
+ mt6397_vmch_reg: ldo_vmch {
+- regulator-compatible = "ldo_vmch";
+ regulator-name = "vmch";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1066,7 +1051,6 @@ mt6397_vmch_reg: ldo_vmch {
+ };
+
+ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+- regulator-compatible = "ldo_vemc3v3";
+ regulator-name = "vemc_3v3";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1074,7 +1058,6 @@ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+ };
+
+ mt6397_vgp1_reg: ldo_vgp1 {
+- regulator-compatible = "ldo_vgp1";
+ regulator-name = "vcamd";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+@@ -1082,7 +1065,6 @@ mt6397_vgp1_reg: ldo_vgp1 {
+ };
+
+ mt6397_vgp2_reg: ldo_vgp2 {
+- regulator-compatible = "ldo_vgp2";
+ regulator-name = "vcamio";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1090,7 +1072,6 @@ mt6397_vgp2_reg: ldo_vgp2 {
+ };
+
+ mt6397_vgp3_reg: ldo_vgp3 {
+- regulator-compatible = "ldo_vgp3";
+ regulator-name = "vcamaf";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+@@ -1098,7 +1079,6 @@ mt6397_vgp3_reg: ldo_vgp3 {
+ };
+
+ mt6397_vgp4_reg: ldo_vgp4 {
+- regulator-compatible = "ldo_vgp4";
+ regulator-name = "vgp4";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1106,7 +1086,6 @@ mt6397_vgp4_reg: ldo_vgp4 {
+ };
+
+ mt6397_vgp5_reg: ldo_vgp5 {
+- regulator-compatible = "ldo_vgp5";
+ regulator-name = "vgp5";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3000000>;
+@@ -1114,7 +1093,6 @@ mt6397_vgp5_reg: ldo_vgp5 {
+ };
+
+ mt6397_vgp6_reg: ldo_vgp6 {
+- regulator-compatible = "ldo_vgp6";
+ regulator-name = "vgp6";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1123,7 +1101,6 @@ mt6397_vgp6_reg: ldo_vgp6 {
+ };
+
+ mt6397_vibr_reg: ldo_vibr {
+- regulator-compatible = "ldo_vibr";
+ regulator-name = "vibr";
+ regulator-min-microvolt = <1300000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1131,7 +1108,7 @@ mt6397_vibr_reg: ldo_vibr {
+ };
+ };
+
+- rtc: mt6397rtc {
++ rtc: rtc {
+ compatible = "mediatek,mt6397-rtc";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+index bb4671c18e3bd4..9fffed0ef4bff4 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+@@ -307,11 +307,10 @@ pmic: pmic {
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+- mt6397regulator: mt6397regulator {
++ regulators {
+ compatible = "mediatek,mt6397-regulator";
+
+ mt6397_vpca15_reg: buck_vpca15 {
+- regulator-compatible = "buck_vpca15";
+ regulator-name = "vpca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -320,7 +319,6 @@ mt6397_vpca15_reg: buck_vpca15 {
+ };
+
+ mt6397_vpca7_reg: buck_vpca7 {
+- regulator-compatible = "buck_vpca7";
+ regulator-name = "vpca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -329,7 +327,6 @@ mt6397_vpca7_reg: buck_vpca7 {
+ };
+
+ mt6397_vsramca15_reg: buck_vsramca15 {
+- regulator-compatible = "buck_vsramca15";
+ regulator-name = "vsramca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -338,7 +335,6 @@ mt6397_vsramca15_reg: buck_vsramca15 {
+ };
+
+ mt6397_vsramca7_reg: buck_vsramca7 {
+- regulator-compatible = "buck_vsramca7";
+ regulator-name = "vsramca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -347,7 +343,6 @@ mt6397_vsramca7_reg: buck_vsramca7 {
+ };
+
+ mt6397_vcore_reg: buck_vcore {
+- regulator-compatible = "buck_vcore";
+ regulator-name = "vcore";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -356,7 +351,6 @@ mt6397_vcore_reg: buck_vcore {
+ };
+
+ mt6397_vgpu_reg: buck_vgpu {
+- regulator-compatible = "buck_vgpu";
+ regulator-name = "vgpu";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -365,7 +359,6 @@ mt6397_vgpu_reg: buck_vgpu {
+ };
+
+ mt6397_vdrm_reg: buck_vdrm {
+- regulator-compatible = "buck_vdrm";
+ regulator-name = "vdrm";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1400000>;
+@@ -374,7 +367,6 @@ mt6397_vdrm_reg: buck_vdrm {
+ };
+
+ mt6397_vio18_reg: buck_vio18 {
+- regulator-compatible = "buck_vio18";
+ regulator-name = "vio18";
+ regulator-min-microvolt = <1620000>;
+ regulator-max-microvolt = <1980000>;
+@@ -383,19 +375,16 @@ mt6397_vio18_reg: buck_vio18 {
+ };
+
+ mt6397_vtcxo_reg: ldo_vtcxo {
+- regulator-compatible = "ldo_vtcxo";
+ regulator-name = "vtcxo";
+ regulator-always-on;
+ };
+
+ mt6397_va28_reg: ldo_va28 {
+- regulator-compatible = "ldo_va28";
+ regulator-name = "va28";
+ regulator-always-on;
+ };
+
+ mt6397_vcama_reg: ldo_vcama {
+- regulator-compatible = "ldo_vcama";
+ regulator-name = "vcama";
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <2800000>;
+@@ -403,18 +392,15 @@ mt6397_vcama_reg: ldo_vcama {
+ };
+
+ mt6397_vio28_reg: ldo_vio28 {
+- regulator-compatible = "ldo_vio28";
+ regulator-name = "vio28";
+ regulator-always-on;
+ };
+
+ mt6397_vusb_reg: ldo_vusb {
+- regulator-compatible = "ldo_vusb";
+ regulator-name = "vusb";
+ };
+
+ mt6397_vmc_reg: ldo_vmc {
+- regulator-compatible = "ldo_vmc";
+ regulator-name = "vmc";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+@@ -422,7 +408,6 @@ mt6397_vmc_reg: ldo_vmc {
+ };
+
+ mt6397_vmch_reg: ldo_vmch {
+- regulator-compatible = "ldo_vmch";
+ regulator-name = "vmch";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -430,7 +415,6 @@ mt6397_vmch_reg: ldo_vmch {
+ };
+
+ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+- regulator-compatible = "ldo_vemc3v3";
+ regulator-name = "vemc_3v3";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -438,7 +422,6 @@ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+ };
+
+ mt6397_vgp1_reg: ldo_vgp1 {
+- regulator-compatible = "ldo_vgp1";
+ regulator-name = "vcamd";
+ regulator-min-microvolt = <1220000>;
+ regulator-max-microvolt = <3300000>;
+@@ -446,7 +429,6 @@ mt6397_vgp1_reg: ldo_vgp1 {
+ };
+
+ mt6397_vgp2_reg: ldo_vgp2 {
+- regulator-compatible = "ldo_vgp2";
+ regulator-name = "vcamio";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -454,7 +436,6 @@ mt6397_vgp2_reg: ldo_vgp2 {
+ };
+
+ mt6397_vgp3_reg: ldo_vgp3 {
+- regulator-compatible = "ldo_vgp3";
+ regulator-name = "vcamaf";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -462,7 +443,6 @@ mt6397_vgp3_reg: ldo_vgp3 {
+ };
+
+ mt6397_vgp4_reg: ldo_vgp4 {
+- regulator-compatible = "ldo_vgp4";
+ regulator-name = "vgp4";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -470,7 +450,6 @@ mt6397_vgp4_reg: ldo_vgp4 {
+ };
+
+ mt6397_vgp5_reg: ldo_vgp5 {
+- regulator-compatible = "ldo_vgp5";
+ regulator-name = "vgp5";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3000000>;
+@@ -478,7 +457,6 @@ mt6397_vgp5_reg: ldo_vgp5 {
+ };
+
+ mt6397_vgp6_reg: ldo_vgp6 {
+- regulator-compatible = "ldo_vgp6";
+ regulator-name = "vgp6";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -486,7 +464,6 @@ mt6397_vgp6_reg: ldo_vgp6 {
+ };
+
+ mt6397_vibr_reg: ldo_vibr {
+- regulator-compatible = "ldo_vibr";
+ regulator-name = "vibr";
+ regulator-min-microvolt = <1300000>;
+ regulator-max-microvolt = <3300000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+index 65860b33c01fe8..3935d83a047e08 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+@@ -26,6 +26,10 @@ &touchscreen {
+ hid-descr-addr = <0x0001>;
+ };
+
++&mt6358codec {
++ mediatek,dmic-mode = <1>; /* one-wire */
++};
++
+ &qca_wifi {
+ qcom,ath10k-calibration-variant = "GO_DAMU";
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts
+index e8241587949b2b..561770fcf69e66 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts
+@@ -12,3 +12,18 @@ / {
+ chassis-type = "laptop";
+ compatible = "google,juniper-sku17", "google,juniper", "mediatek,mt8183";
+ };
++
++&i2c0 {
++ touchscreen@40 {
++ compatible = "hid-over-i2c";
++ reg = <0x40>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchscreen_pins>;
++
++ interrupts-extended = <&pio 155 IRQ_TYPE_LEVEL_LOW>;
++
++ post-power-on-delay-ms = <70>;
++ hid-descr-addr = <0x0001>;
++ };
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi
+index 76d33540166f90..c942e461a177ef 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi
+@@ -6,6 +6,21 @@
+ /dts-v1/;
+ #include "mt8183-kukui-jacuzzi.dtsi"
+
++&i2c0 {
++ touchscreen@40 {
++ compatible = "hid-over-i2c";
++ reg = <0x40>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchscreen_pins>;
++
++ interrupts-extended = <&pio 155 IRQ_TYPE_LEVEL_LOW>;
++
++ post-power-on-delay-ms = <70>;
++ hid-descr-addr = <0x0001>;
++ };
++};
++
+ &i2c2 {
+ trackpad@2c {
+ compatible = "hid-over-i2c";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+index 49e053b932e76c..80888bd4ad823d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+@@ -39,8 +39,6 @@ pp1800_mipibrdg: pp1800-mipibrdg {
+ pp3300_panel: pp3300-panel {
+ compatible = "regulator-fixed";
+ regulator-name = "pp3300_panel";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pp3300_panel_pins>;
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 0a6578aacf8280..9cd5e0cef02a29 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1024,7 +1024,8 @@ pwrap: pwrap@1000d000 {
+ };
+
+ keyboard: keyboard@10010000 {
+- compatible = "mediatek,mt6779-keypad";
++ compatible = "mediatek,mt8183-keypad",
++ "mediatek,mt6779-keypad";
+ reg = <0 0x10010000 0 0x1000>;
+ interrupts = <GIC_SPI 186 IRQ_TYPE_EDGE_FALLING>;
+ clocks = <&clk26m>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index 148c332018b0d8..ac34ba3afacb05 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -1570,6 +1570,8 @@ ssusb0: usb@11201000 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
++ wakeup-source;
++ mediatek,syscon-wakeup = <&pericfg 0x420 2>;
+ status = "disabled";
+
+ usb_host0: usb@11200000 {
+@@ -1583,8 +1585,6 @@ usb_host0: usb@11200000 {
+ <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_XHCI>;
+ clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck", "xhci_ck";
+ interrupts = <GIC_SPI 294 IRQ_TYPE_LEVEL_HIGH 0>;
+- mediatek,syscon-wakeup = <&pericfg 0x420 2>;
+- wakeup-source;
+ status = "disabled";
+ };
+ };
+@@ -1636,6 +1636,8 @@ ssusb1: usb@11281000 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
++ wakeup-source;
++ mediatek,syscon-wakeup = <&pericfg 0x424 2>;
+ status = "disabled";
+
+ usb_host1: usb@11280000 {
+@@ -1649,8 +1651,6 @@ usb_host1: usb@11280000 {
+ <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_XHCI>;
+ clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck","xhci_ck";
+ interrupts = <GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH 0>;
+- mediatek,syscon-wakeup = <&pericfg 0x424 2>;
+- wakeup-source;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+index 08d71ddf36683e..ad52c1d6e4eef7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+@@ -1420,7 +1420,6 @@ mt6315_6: pmic@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+@@ -1430,7 +1429,6 @@ mt6315_6_vbuck1: vbuck1 {
+ };
+
+ mt6315_6_vbuck3: vbuck3 {
+- regulator-compatible = "vbuck3";
+ regulator-name = "Vlcpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+@@ -1447,7 +1445,6 @@ mt6315_7: pmic@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <800000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index 2c7b2223ee76b1..5056e07399e23a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -1285,7 +1285,6 @@ mt6315@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+@@ -1303,7 +1302,6 @@ mt6315@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-demo.dts b/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
+index 31d424b8fc7ced..bfb75296795c39 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
+@@ -137,7 +137,6 @@ charger {
+ richtek,vinovp-microvolt = <14500000>;
+
+ otg_vbus_regulator: usb-otg-vbus-regulator {
+- regulator-compatible = "usb-otg-vbus";
+ regulator-name = "usb-otg-vbus";
+ regulator-min-microvolt = <4425000>;
+ regulator-max-microvolt = <5825000>;
+@@ -149,7 +148,6 @@ regulator {
+ LDO_VIN3-supply = <&mt6360_buck2>;
+
+ mt6360_buck1: buck1 {
+- regulator-compatible = "BUCK1";
+ regulator-name = "mt6360,buck1";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1300000>;
+@@ -160,7 +158,6 @@ MT6360_OPMODE_LP
+ };
+
+ mt6360_buck2: buck2 {
+- regulator-compatible = "BUCK2";
+ regulator-name = "mt6360,buck2";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1300000>;
+@@ -171,7 +168,6 @@ MT6360_OPMODE_LP
+ };
+
+ mt6360_ldo1: ldo1 {
+- regulator-compatible = "LDO1";
+ regulator-name = "mt6360,ldo1";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3600000>;
+@@ -180,7 +176,6 @@ mt6360_ldo1: ldo1 {
+ };
+
+ mt6360_ldo2: ldo2 {
+- regulator-compatible = "LDO2";
+ regulator-name = "mt6360,ldo2";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3600000>;
+@@ -189,7 +184,6 @@ mt6360_ldo2: ldo2 {
+ };
+
+ mt6360_ldo3: ldo3 {
+- regulator-compatible = "LDO3";
+ regulator-name = "mt6360,ldo3";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3600000>;
+@@ -198,7 +192,6 @@ mt6360_ldo3: ldo3 {
+ };
+
+ mt6360_ldo5: ldo5 {
+- regulator-compatible = "LDO5";
+ regulator-name = "mt6360,ldo5";
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <3600000>;
+@@ -207,7 +200,6 @@ mt6360_ldo5: ldo5 {
+ };
+
+ mt6360_ldo6: ldo6 {
+- regulator-compatible = "LDO6";
+ regulator-name = "mt6360,ldo6";
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <2100000>;
+@@ -216,7 +208,6 @@ mt6360_ldo6: ldo6 {
+ };
+
+ mt6360_ldo7: ldo7 {
+- regulator-compatible = "LDO7";
+ regulator-name = "mt6360,ldo7";
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <2100000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index ade685ed2190b7..f013dbad9dc4ea 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -1611,9 +1611,6 @@ pcie1: pcie@112f8000 {
+ phy-names = "pcie-phy";
+ power-domains = <&spm MT8195_POWER_DOMAIN_PCIE_MAC_P1>;
+
+- resets = <&infracfg_ao MT8195_INFRA_RST2_PCIE_P1_SWRST>;
+- reset-names = "mac";
+-
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 7>;
+ interrupt-map = <0 0 0 1 &pcie_intc1 0>,
+@@ -3138,7 +3135,7 @@ larb20: larb@1b010000 {
+ };
+
+ ovl0: ovl@1c000000 {
+- compatible = "mediatek,mt8195-disp-ovl", "mediatek,mt8183-disp-ovl";
++ compatible = "mediatek,mt8195-disp-ovl";
+ reg = <0 0x1c000000 0 0x1000>;
+ interrupts = <GIC_SPI 636 IRQ_TYPE_LEVEL_HIGH 0>;
+ power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS0>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8365.dtsi b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
+index 9c91fe8ea0f969..2bf8c9d02b6ee7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8365.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
+@@ -449,7 +449,8 @@ pwrap: pwrap@1000d000 {
+ };
+
+ keypad: keypad@10010000 {
+- compatible = "mediatek,mt6779-keypad";
++ compatible = "mediatek,mt8365-keypad",
++ "mediatek,mt6779-keypad";
+ reg = <0 0x10010000 0 0x1000>;
+ wakeup-source;
+ interrupts = <GIC_SPI 124 IRQ_TYPE_EDGE_FALLING>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+index b4b48eb93f3c54..6f34b06a0359a7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+@@ -820,7 +820,6 @@ mt6315_6: pmic@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+@@ -837,7 +836,6 @@ mt6315_7: pmic@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+index 14ec970c4e491f..41dc34837b02e7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+@@ -812,7 +812,6 @@ mt6315_6: pmic@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+@@ -829,7 +828,6 @@ mt6315_7: pmic@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8516.dtsi b/arch/arm64/boot/dts/mediatek/mt8516.dtsi
+index d0b03dc4d3f43a..e30623ebac0e1b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8516.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8516.dtsi
+@@ -144,10 +144,10 @@ reserved-memory {
+ #size-cells = <2>;
+ ranges;
+
+- /* 128 KiB reserved for ARM Trusted Firmware (BL31) */
++ /* 192 KiB reserved for ARM Trusted Firmware (BL31) */
+ bl31_secmon_reserved: secmon@43000000 {
+ no-map;
+- reg = <0 0x43000000 0 0x20000>;
++ reg = <0 0x43000000 0 0x30000>;
+ };
+ };
+
+@@ -206,7 +206,7 @@ watchdog@10007000 {
+ compatible = "mediatek,mt8516-wdt",
+ "mediatek,mt6589-wdt";
+ reg = <0 0x10007000 0 0x1000>;
+- interrupts = <GIC_SPI 198 IRQ_TYPE_EDGE_FALLING>;
++ interrupts = <GIC_SPI 198 IRQ_TYPE_LEVEL_LOW>;
+ #reset-cells = <1>;
+ };
+
+@@ -268,7 +268,7 @@ gic: interrupt-controller@10310000 {
+ interrupt-parent = <&gic>;
+ interrupt-controller;
+ reg = <0 0x10310000 0 0x1000>,
+- <0 0x10320000 0 0x1000>,
++ <0 0x1032f000 0 0x2000>,
+ <0 0x10340000 0 0x2000>,
+ <0 0x10360000 0 0x2000>;
+ interrupts = <GIC_PPI 9
+@@ -344,6 +344,7 @@ i2c0: i2c@11009000 {
+ reg = <0 0x11009000 0 0x90>,
+ <0 0x11000180 0 0x80>;
+ interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_LOW>;
++ clock-div = <2>;
+ clocks = <&topckgen CLK_TOP_I2C0>,
+ <&topckgen CLK_TOP_APDMA>;
+ clock-names = "main", "dma";
+@@ -358,6 +359,7 @@ i2c1: i2c@1100a000 {
+ reg = <0 0x1100a000 0 0x90>,
+ <0 0x11000200 0 0x80>;
+ interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_LOW>;
++ clock-div = <2>;
+ clocks = <&topckgen CLK_TOP_I2C1>,
+ <&topckgen CLK_TOP_APDMA>;
+ clock-names = "main", "dma";
+@@ -372,6 +374,7 @@ i2c2: i2c@1100b000 {
+ reg = <0 0x1100b000 0 0x90>,
+ <0 0x11000280 0 0x80>;
+ interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_LOW>;
++ clock-div = <2>;
+ clocks = <&topckgen CLK_TOP_I2C2>,
+ <&topckgen CLK_TOP_APDMA>;
+ clock-names = "main", "dma";
+diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+index ec8dfb3d1c6d69..a356db5fcc5f3c 100644
+--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+@@ -47,7 +47,6 @@ key-volume-down {
+ };
+
+ &i2c0 {
+- clock-div = <2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins_a>;
+ status = "okay";
+@@ -156,7 +155,6 @@ cam-pwdn-hog {
+ };
+
+ &i2c2 {
+- clock-div = <2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins_a>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+index 984c85eab41afd..570331baa09ee3 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+@@ -3900,7 +3900,7 @@ spi@c260000 {
+ assigned-clock-parents = <&bpmp TEGRA234_CLK_PLLP_OUT0>;
+ resets = <&bpmp TEGRA234_RESET_SPI2>;
+ reset-names = "spi";
+- dmas = <&gpcdma 19>, <&gpcdma 19>;
++ dmas = <&gpcdma 16>, <&gpcdma 16>;
+ dma-names = "rx", "tx";
+ dma-coherent;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/Makefile b/arch/arm64/boot/dts/qcom/Makefile
+index ae002c7cf1268a..b13c169ec70d26 100644
+--- a/arch/arm64/boot/dts/qcom/Makefile
++++ b/arch/arm64/boot/dts/qcom/Makefile
+@@ -207,6 +207,9 @@ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r1.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r2.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r3.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-db845c.dtb
++
++sdm845-db845c-navigation-mezzanine-dtbs := sdm845-db845c.dtb sdm845-db845c-navigation-mezzanine.dtbo
++
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-db845c-navigation-mezzanine.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-lg-judyln.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-lg-judyp.dtb
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 0ee44706b70ba3..800bfe83dbf837 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -125,7 +125,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8939.dtsi b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+index 7af210789879af..effa3aaeb25054 100644
+--- a/arch/arm64/boot/dts/qcom/msm8939.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+@@ -34,7 +34,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index fc2a7f13f690ee..8a7de1dba2b9d0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -34,7 +34,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ clock-output-names = "sleep_clk";
+ };
+ };
+@@ -437,6 +437,15 @@ usb3: usb@f92f8800 {
+ #size-cells = <1>;
+ ranges;
+
++ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 311 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 310 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-names = "pwr_event",
++ "qusb2_phy",
++ "hs_phy_irq",
++ "ss_phy_irq";
++
+ clocks = <&gcc GCC_USB30_MASTER_CLK>,
+ <&gcc GCC_SYS_NOC_USB3_AXI_CLK>,
+ <&gcc GCC_USB30_SLEEP_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
+index f8e9d90afab000..dbad8f57f2fa34 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
++++ b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
+@@ -64,7 +64,7 @@ led@0 {
+ };
+
+ led@1 {
+- reg = <0>;
++ reg = <1>;
+ chan-name = "button-backlight1";
+ led-cur = /bits/ 8 <0x32>;
+ max-cur = /bits/ 8 <0xc8>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index e5966724f37c69..0a8884145865d6 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -3065,9 +3065,14 @@ usb3: usb@6af8800 {
+ #size-cells = <1>;
+ ranges;
+
+- interrupts = <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>,
++ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-names = "hs_phy_irq", "ss_phy_irq";
++ interrupt-names = "pwr_event",
++ "qusb2_phy",
++ "hs_phy_irq",
++ "ss_phy_irq";
+
+ clocks = <&gcc GCC_SYS_NOC_USB3_AXI_CLK>,
+ <&gcc GCC_USB30_MASTER_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts b/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
+index 4667e47a74bc5b..75930f95769663 100644
+--- a/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
++++ b/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
+@@ -942,8 +942,6 @@ &usb_1_hsphy {
+
+ qcom,squelch-detector-bp = <(-2090)>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index cddc16bac0cea4..81a161c0cc5a82 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -28,7 +28,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi b/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi
+index f6960e2d466a26..e6ac529e6b7216 100644
+--- a/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi
+@@ -367,7 +367,7 @@ &pm8550b_eusb2_repeater {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &ufs_mem_hc {
+diff --git a/arch/arm64/boot/dts/qcom/qdu1000-idp.dts b/arch/arm64/boot/dts/qcom/qdu1000-idp.dts
+index e65305f8136c88..c73eda75faf820 100644
+--- a/arch/arm64/boot/dts/qcom/qdu1000-idp.dts
++++ b/arch/arm64/boot/dts/qcom/qdu1000-idp.dts
+@@ -31,7 +31,7 @@ xo_board: xo-board-clk {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+index 1888d99d398b11..f99fb9159e0b68 100644
+--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
++++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+@@ -545,7 +545,7 @@ can@0 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/qru1000-idp.dts b/arch/arm64/boot/dts/qcom/qru1000-idp.dts
+index 1c781d9e24cf4d..52ce51e56e2fdc 100644
+--- a/arch/arm64/boot/dts/qcom/qru1000-idp.dts
++++ b/arch/arm64/boot/dts/qcom/qru1000-idp.dts
+@@ -31,7 +31,7 @@ xo_board: xo-board-clk {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi b/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi
+index 0c1b21def4b62c..adb71aeff339b5 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi
+@@ -517,7 +517,7 @@ &serdes1 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32764>;
++ clock-frequency = <32000>;
+ };
+
+ &spi16 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi b/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi
+index ee35a454dbf6f3..59162b3afcb841 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi
+@@ -6,82 +6,82 @@
+ * by Qualcomm firmware.
+ */
+
+-&CPU0 {
++&cpu0 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU1 {
++&cpu1 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU2 {
++&cpu2 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU3 {
++&cpu3 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU4 {
++&cpu4 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU5 {
++&cpu5 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU6 {
++&cpu6 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&BIG_CPU_SLEEP_0
+- &BIG_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&big_cpu_sleep_0
++ &big_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU7 {
++&cpu7 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&BIG_CPU_SLEEP_0
+- &BIG_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&big_cpu_sleep_0
++ &big_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+ /delete-node/ &domain_idle_states;
+
+ &idle_states {
+- CLUSTER_SLEEP_0: cluster-sleep-0 {
++ cluster_sleep_0: cluster-sleep-0 {
+ compatible = "arm,idle-state";
+ idle-state-name = "cluster-power-down";
+ arm,psci-suspend-param = <0x40003444>;
+@@ -92,15 +92,15 @@ CLUSTER_SLEEP_0: cluster-sleep-0 {
+ };
+ };
+
+-/delete-node/ &CPU_PD0;
+-/delete-node/ &CPU_PD1;
+-/delete-node/ &CPU_PD2;
+-/delete-node/ &CPU_PD3;
+-/delete-node/ &CPU_PD4;
+-/delete-node/ &CPU_PD5;
+-/delete-node/ &CPU_PD6;
+-/delete-node/ &CPU_PD7;
+-/delete-node/ &CLUSTER_PD;
++/delete-node/ &cpu_pd0;
++/delete-node/ &cpu_pd1;
++/delete-node/ &cpu_pd2;
++/delete-node/ &cpu_pd3;
++/delete-node/ &cpu_pd4;
++/delete-node/ &cpu_pd5;
++/delete-node/ &cpu_pd6;
++/delete-node/ &cpu_pd7;
++/delete-node/ &cluster_pd;
+
+ &apps_rsc {
+ /delete-property/ power-domains;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+index 3c124bbe2f4c94..25b17b0425f24e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+@@ -53,14 +53,14 @@ skin-temp-crit {
+ cooling-maps {
+ map0 {
+ trip = <&skin_temp_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+
+ map1 {
+ trip = <&skin_temp_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
+index b2df22faafe889..f57976906d6304 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
+@@ -71,14 +71,14 @@ skin-temp-crit {
+ cooling-maps {
+ map0 {
+ trip = <&skin_temp_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+
+ map1 {
+ trip = <&skin_temp_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
+index ac8d4589e3fb74..f7300ffbb4519a 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
+@@ -12,11 +12,11 @@
+
+ / {
+ thermal-zones {
+- 5v-choke-thermal {
++ choke-5v-thermal {
+ thermal-sensors = <&pm6150_adc_tm 1>;
+
+ trips {
+- 5v-choke-crit {
++ choke-5v-crit {
+ temperature = <125000>;
+ hysteresis = <1000>;
+ type = "critical";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi
+index 00229b1515e605..ff8996b4de4e1e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi
+@@ -78,6 +78,7 @@ panel: panel@0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_rst>;
+ avdd-supply = <&ppvar_lcd>;
++ avee-supply = <&ppvar_lcd>;
+ pp1800-supply = <&v1p8_disp>;
+ pp3300-supply = <&pp3300_dx_edp>;
+ backlight = <&backlight>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi
+index af89d80426abbd..d4925be3b1fcf5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi
+@@ -78,14 +78,14 @@ skin-temp-crit {
+ cooling-maps {
+ map0 {
+ trip = <&skin_temp_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+
+ map1 {
+ trip = <&skin_temp_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index b5ebf898032512..249b257fc6a74b 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -77,28 +77,28 @@ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+- CPU0: cpu@0 {
++ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x0>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD0>;
++ power-domains = <&cpu_pd0>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+- next-level-cache = <&L2_0>;
++ next-level-cache = <&l2_0>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_0: l2-cache {
++ l2_0: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
+- L3_0: l3-cache {
++ next-level-cache = <&l3_0>;
++ l3_0: l3-cache {
+ compatible = "cache";
+ cache-level = <3>;
+ cache-unified;
+@@ -106,206 +106,206 @@ L3_0: l3-cache {
+ };
+ };
+
+- CPU1: cpu@100 {
++ cpu1: cpu@100 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x100>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD1>;
++ power-domains = <&cpu_pd1>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_100>;
++ next-level-cache = <&l2_100>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_100: l2-cache {
++ l2_100: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU2: cpu@200 {
++ cpu2: cpu@200 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x200>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD2>;
++ power-domains = <&cpu_pd2>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_200>;
++ next-level-cache = <&l2_200>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_200: l2-cache {
++ l2_200: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU3: cpu@300 {
++ cpu3: cpu@300 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x300>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD3>;
++ power-domains = <&cpu_pd3>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_300>;
++ next-level-cache = <&l2_300>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_300: l2-cache {
++ l2_300: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU4: cpu@400 {
++ cpu4: cpu@400 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x400>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD4>;
++ power-domains = <&cpu_pd4>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_400>;
++ next-level-cache = <&l2_400>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_400: l2-cache {
++ l2_400: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU5: cpu@500 {
++ cpu5: cpu@500 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x500>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD5>;
++ power-domains = <&cpu_pd5>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_500>;
++ next-level-cache = <&l2_500>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_500: l2-cache {
++ l2_500: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU6: cpu@600 {
++ cpu6: cpu@600 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x600>;
+ clocks = <&cpufreq_hw 1>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD6>;
++ power-domains = <&cpu_pd6>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <1024>;
+ dynamic-power-coefficient = <480>;
+- next-level-cache = <&L2_600>;
++ next-level-cache = <&l2_600>;
+ operating-points-v2 = <&cpu6_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 1>;
+- L2_600: l2-cache {
++ l2_600: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU7: cpu@700 {
++ cpu7: cpu@700 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x700>;
+ clocks = <&cpufreq_hw 1>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD7>;
++ power-domains = <&cpu_pd7>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <1024>;
+ dynamic-power-coefficient = <480>;
+- next-level-cache = <&L2_700>;
++ next-level-cache = <&l2_700>;
+ operating-points-v2 = <&cpu6_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 1>;
+- L2_700: l2-cache {
++ l2_700: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+ cpu-map {
+ cluster0 {
+ core0 {
+- cpu = <&CPU0>;
++ cpu = <&cpu0>;
+ };
+
+ core1 {
+- cpu = <&CPU1>;
++ cpu = <&cpu1>;
+ };
+
+ core2 {
+- cpu = <&CPU2>;
++ cpu = <&cpu2>;
+ };
+
+ core3 {
+- cpu = <&CPU3>;
++ cpu = <&cpu3>;
+ };
+
+ core4 {
+- cpu = <&CPU4>;
++ cpu = <&cpu4>;
+ };
+
+ core5 {
+- cpu = <&CPU5>;
++ cpu = <&cpu5>;
+ };
+
+ core6 {
+- cpu = <&CPU6>;
++ cpu = <&cpu6>;
+ };
+
+ core7 {
+- cpu = <&CPU7>;
++ cpu = <&cpu7>;
+ };
+ };
+ };
+@@ -313,7 +313,7 @@ core7 {
+ idle_states: idle-states {
+ entry-method = "psci";
+
+- LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
++ little_cpu_sleep_0: cpu-sleep-0-0 {
+ compatible = "arm,idle-state";
+ idle-state-name = "little-power-down";
+ arm,psci-suspend-param = <0x40000003>;
+@@ -323,7 +323,7 @@ LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+ local-timer-stop;
+ };
+
+- LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
++ little_cpu_sleep_1: cpu-sleep-0-1 {
+ compatible = "arm,idle-state";
+ idle-state-name = "little-rail-power-down";
+ arm,psci-suspend-param = <0x40000004>;
+@@ -333,7 +333,7 @@ LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
+ local-timer-stop;
+ };
+
+- BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
++ big_cpu_sleep_0: cpu-sleep-1-0 {
+ compatible = "arm,idle-state";
+ idle-state-name = "big-power-down";
+ arm,psci-suspend-param = <0x40000003>;
+@@ -343,7 +343,7 @@ BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+ local-timer-stop;
+ };
+
+- BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
++ big_cpu_sleep_1: cpu-sleep-1-1 {
+ compatible = "arm,idle-state";
+ idle-state-name = "big-rail-power-down";
+ arm,psci-suspend-param = <0x40000004>;
+@@ -355,7 +355,7 @@ BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
+ };
+
+ domain_idle_states: domain-idle-states {
+- CLUSTER_SLEEP_PC: cluster-sleep-0 {
++ cluster_sleep_pc: cluster-sleep-0 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-l3-power-collapse";
+ arm,psci-suspend-param = <0x41000044>;
+@@ -364,7 +364,7 @@ CLUSTER_SLEEP_PC: cluster-sleep-0 {
+ min-residency-us = <6118>;
+ };
+
+- CLUSTER_SLEEP_CX_RET: cluster-sleep-1 {
++ cluster_sleep_cx_ret: cluster-sleep-1 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-cx-retention";
+ arm,psci-suspend-param = <0x41001244>;
+@@ -373,7 +373,7 @@ CLUSTER_SLEEP_CX_RET: cluster-sleep-1 {
+ min-residency-us = <8467>;
+ };
+
+- CLUSTER_AOSS_SLEEP: cluster-sleep-2 {
++ cluster_aoss_sleep: cluster-sleep-2 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-power-down";
+ arm,psci-suspend-param = <0x4100b244>;
+@@ -583,59 +583,59 @@ psci {
+ compatible = "arm,psci-1.0";
+ method = "smc";
+
+- CPU_PD0: cpu0 {
++ cpu_pd0: power-domain-cpu0 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD1: cpu1 {
++ cpu_pd1: power-domain-cpu1 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD2: cpu2 {
++ cpu_pd2: power-domain-cpu2 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD3: cpu3 {
++ cpu_pd3: power-domain-cpu3 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD4: cpu4 {
++ cpu_pd4: power-domain-cpu4 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD5: cpu5 {
++ cpu_pd5: power-domain-cpu5 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD6: cpu6 {
++ cpu_pd6: power-domain-cpu6 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&BIG_CPU_SLEEP_0 &BIG_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&big_cpu_sleep_0 &big_cpu_sleep_1>;
+ };
+
+- CPU_PD7: cpu7 {
++ cpu_pd7: power-domain-cpu7 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&BIG_CPU_SLEEP_0 &BIG_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&big_cpu_sleep_0 &big_cpu_sleep_1>;
+ };
+
+- CLUSTER_PD: cpu-cluster0 {
++ cluster_pd: power-domain-cluster {
+ #power-domain-cells = <0>;
+- domain-idle-states = <&CLUSTER_SLEEP_PC
+- &CLUSTER_SLEEP_CX_RET
+- &CLUSTER_AOSS_SLEEP>;
++ domain-idle-states = <&cluster_sleep_pc
++ &cluster_sleep_cx_ret
++ &cluster_aoss_sleep>;
+ };
+ };
+
+@@ -2546,7 +2546,7 @@ etm@7040000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07040000 0 0x1000>;
+
+- cpu = <&CPU0>;
++ cpu = <&cpu0>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2566,7 +2566,7 @@ etm@7140000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07140000 0 0x1000>;
+
+- cpu = <&CPU1>;
++ cpu = <&cpu1>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2586,7 +2586,7 @@ etm@7240000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07240000 0 0x1000>;
+
+- cpu = <&CPU2>;
++ cpu = <&cpu2>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2606,7 +2606,7 @@ etm@7340000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07340000 0 0x1000>;
+
+- cpu = <&CPU3>;
++ cpu = <&cpu3>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2626,7 +2626,7 @@ etm@7440000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07440000 0 0x1000>;
+
+- cpu = <&CPU4>;
++ cpu = <&cpu4>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2646,7 +2646,7 @@ etm@7540000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07540000 0 0x1000>;
+
+- cpu = <&CPU5>;
++ cpu = <&cpu5>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2666,7 +2666,7 @@ etm@7640000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07640000 0 0x1000>;
+
+- cpu = <&CPU6>;
++ cpu = <&cpu6>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2686,7 +2686,7 @@ etm@7740000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07740000 0 0x1000>;
+
+- cpu = <&CPU7>;
++ cpu = <&cpu7>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -3734,7 +3734,7 @@ apps_rsc: rsc@18200000 {
+ <SLEEP_TCS 3>,
+ <WAKE_TCS 3>,
+ <CONTROL_TCS 1>;
+- power-domains = <&CLUSTER_PD>;
++ power-domains = <&cluster_pd>;
+
+ rpmhcc: clock-controller {
+ compatible = "qcom,sc7180-rpmh-clk";
+@@ -4063,21 +4063,21 @@ cpu0_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu0_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu0_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4111,21 +4111,21 @@ cpu1_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu1_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu1_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4159,21 +4159,21 @@ cpu2_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu2_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu2_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4207,21 +4207,21 @@ cpu3_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu3_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu3_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4255,21 +4255,21 @@ cpu4_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu4_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu4_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4303,21 +4303,21 @@ cpu5_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu5_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu5_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4351,13 +4351,13 @@ cpu6_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu6_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu6_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4391,13 +4391,13 @@ cpu7_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu7_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu7_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4431,13 +4431,13 @@ cpu8_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu8_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu8_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4471,13 +4471,13 @@ cpu9_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu9_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu9_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 3d8410683402fd..8fbc95cf63fe7e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -83,7 +83,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
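Many of the qcom hunks in this patch replace sleep-clock rates of 32000 (or 32768) Hz with 32764 Hz, the rate these Qualcomm PMICs actually drive onto the sleep-clock line. Anything converting counter ticks to wall time with the DT-declared rate drifts when that rate is wrong; the following is a standalone sketch of the size of the error (plain C, illustrative only — the two frequencies are the only values taken from the hunks):

	#include <stdio.h>

	int main(void)
	{
		const double actual_hz  = 32764.0; /* rate the PMIC really outputs */
		const double assumed_hz = 32000.0; /* rate the DT previously claimed */
		const double day = 24.0 * 60.0 * 60.0;

		/* Ticks the counter accumulates in one real day... */
		double ticks = actual_hz * day;

		/* ...converted back to seconds with the wrong divisor. */
		double apparent = ticks / assumed_hz;

		printf("apparent day: %.0f s, drift: %.0f s (~%.0f min)\n",
		       apparent, apparent - day, (apparent - day) / 60.0);
		return 0;
	}

With these numbers the apparent day comes out roughly 34 minutes long, which is why the exact 32764 value matters for RTC and timer accuracy.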
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index 80a57aa228397e..b1e0e51a558291 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -2725,7 +2725,7 @@ usb_2_qmpphy1: phy@88f1000 {
+
+ remoteproc_adsp: remoteproc@3000000 {
+ compatible = "qcom,sc8280xp-adsp-pas";
+- reg = <0 0x03000000 0 0x100>;
++ reg = <0 0x03000000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 162 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -3882,26 +3882,26 @@ camss: camss@ac5a000 {
+ "vfe3",
+ "csid3";
+
+- interrupts = <GIC_SPI 359 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 477 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 478 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 479 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 640 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 641 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 758 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 759 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 760 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 761 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 762 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 764 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 359 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 360 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 448 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 464 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 465 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 466 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 467 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 468 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 469 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 477 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 478 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 479 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 640 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 641 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 758 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 759 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 760 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 761 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 762 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 764 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csid1_lite",
+ "vfe_lite1",
+ "csiphy3",
+@@ -5205,7 +5205,7 @@ cpufreq_hw: cpufreq@18591000 {
+
+ remoteproc_nsp0: remoteproc@1b300000 {
+ compatible = "qcom,sc8280xp-nsp0-pas";
+- reg = <0 0x1b300000 0 0x100>;
++ reg = <0 0x1b300000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_nsp0_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -5336,7 +5336,7 @@ compute-cb@14 {
+
+ remoteproc_nsp1: remoteproc@21300000 {
+ compatible = "qcom,sc8280xp-nsp1-pas";
+- reg = <0 0x21300000 0 0x100>;
++ reg = <0 0x21300000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 887 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_nsp1_in 0 IRQ_TYPE_EDGE_RISING>,
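The camss hunks above flip the GIC SPI trigger type from level-high to edge-rising so the DT describes how the block actually signals its interrupts; with the wrong type an IRQ can be missed or can storm. Drivers normally pass no trigger flags and let the IRQ core apply the DT-specified type. A minimal kernel-style sketch of that convention (the device and names here are hypothetical, not from the camss driver):

	#include <linux/interrupt.h>
	#include <linux/platform_device.h>

	static irqreturn_t demo_isr(int irq, void *data)
	{
		/* Handle the event; an edge IRQ latches one event per edge. */
		return IRQ_HANDLED;
	}

	static int demo_probe(struct platform_device *pdev)
	{
		int irq = platform_get_irq(pdev, 0);

		if (irq < 0)
			return irq;

		/*
		 * Passing 0 for the trigger flags tells the IRQ core to use
		 * the type encoded in the device tree (edge-rising here).
		 */
		return devm_request_irq(&pdev->dev, irq, demo_isr, 0,
					"demo-camss-irq", pdev);
	}

Because the consumer defers to the DT like this, fixing the trigger type is purely a devicetree change.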
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dts
+deleted file mode 100644
+index a21caa6f3fa259..00000000000000
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dts
++++ /dev/null
+@@ -1,104 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2022, Linaro Ltd.
+- */
+-
+-/dts-v1/;
+-
+-#include "sdm845-db845c.dts"
+-
+-&camss {
+- vdda-phy-supply = <&vreg_l1a_0p875>;
+- vdda-pll-supply = <&vreg_l26a_1p2>;
+-
+- status = "okay";
+-
+- ports {
+- port@0 {
+- csiphy0_ep: endpoint {
+- data-lanes = <0 1 2 3>;
+- remote-endpoint = <&ov8856_ep>;
+- };
+- };
+- };
+-};
+-
+-&cci {
+- status = "okay";
+-};
+-
+-&cci_i2c0 {
+- camera@10 {
+- compatible = "ovti,ov8856";
+- reg = <0x10>;
+-
+- /* CAM0_RST_N */
+- reset-gpios = <&tlmm 9 GPIO_ACTIVE_LOW>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&cam0_default>;
+-
+- clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+- clock-names = "xvclk";
+- clock-frequency = <19200000>;
+-
+- /*
+- * The &vreg_s4a_1p8 trace is powered on as a,
+- * so it is represented by a fixed regulator.
+- *
+- * The 2.8V vdda-supply and 1.2V vddd-supply regulators
+- * both have to be enabled through the power management
+- * gpios.
+- */
+- dovdd-supply = <&vreg_lvs1a_1p8>;
+- avdd-supply = <&cam0_avdd_2v8>;
+- dvdd-supply = <&cam0_dvdd_1v2>;
+-
+- port {
+- ov8856_ep: endpoint {
+- link-frequencies = /bits/ 64
+- <360000000 180000000>;
+- data-lanes = <1 2 3 4>;
+- remote-endpoint = <&csiphy0_ep>;
+- };
+- };
+- };
+-};
+-
+-&cci_i2c1 {
+- camera@60 {
+- compatible = "ovti,ov7251";
+-
+- /* I2C address as per ov7251.txt linux documentation */
+- reg = <0x60>;
+-
+- /* CAM3_RST_N */
+- enable-gpios = <&tlmm 21 GPIO_ACTIVE_HIGH>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&cam3_default>;
+-
+- clocks = <&clock_camcc CAM_CC_MCLK3_CLK>;
+- clock-names = "xclk";
+- clock-frequency = <24000000>;
+-
+- /*
+- * The &vreg_s4a_1p8 trace always powered on.
+- *
+- * The 2.8V vdda-supply regulator is enabled when the
+- * vreg_s4a_1p8 trace is pulled high.
+- * It too is represented by a fixed regulator.
+- *
+- * No 1.2V vddd-supply regulator is used.
+- */
+- vdddo-supply = <&vreg_lvs1a_1p8>;
+- vdda-supply = <&cam3_avdd_2v8>;
+-
+- status = "disabled";
+-
+- port {
+- ov7251_ep: endpoint {
+- data-lanes = <0 1>;
+-/* remote-endpoint = <&csiphy3_ep>; */
+- };
+- };
+- };
+-};
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dtso b/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dtso
+new file mode 100644
+index 00000000000000..51f1a4883ab8f0
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dtso
+@@ -0,0 +1,70 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2022, Linaro Ltd.
++ */
++
++/dts-v1/;
++/plugin/;
++
++#include <dt-bindings/clock/qcom,camcc-sdm845.h>
++#include <dt-bindings/gpio/gpio.h>
++
++&camss {
++ vdda-phy-supply = <&vreg_l1a_0p875>;
++ vdda-pll-supply = <&vreg_l26a_1p2>;
++
++ status = "okay";
++
++ ports {
++ port@0 {
++ csiphy0_ep: endpoint {
++ data-lanes = <0 1 2 3>;
++ remote-endpoint = <&ov8856_ep>;
++ };
++ };
++ };
++};
++
++&cci {
++ status = "okay";
++};
++
++&cci_i2c0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ camera@10 {
++ compatible = "ovti,ov8856";
++ reg = <0x10>;
++
++ /* CAM0_RST_N */
++ reset-gpios = <&tlmm 9 GPIO_ACTIVE_LOW>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&cam0_default>;
++
++ clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
++ clock-names = "xvclk";
++ clock-frequency = <19200000>;
++
++ /*
++ * The &vreg_s4a_1p8 trace is powered on as a,
++ * so it is represented by a fixed regulator.
++ *
++ * The 2.8V vdda-supply and 1.2V vddd-supply regulators
++ * both have to be enabled through the power management
++ * gpios.
++ */
++ dovdd-supply = <&vreg_lvs1a_1p8>;
++ avdd-supply = <&cam0_avdd_2v8>;
++ dvdd-supply = <&cam0_dvdd_1v2>;
++
++ port {
++ ov8856_ep: endpoint {
++ link-frequencies = /bits/ 64
++ <360000000 180000000>;
++ data-lanes = <1 2 3 4>;
++ remote-endpoint = <&csiphy0_ep>;
++ };
++ };
++ };
++};
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 54077549b9da7f..0a0cef9dfcc416 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4326,16 +4326,16 @@ camss: camss@acb3000 {
+ "vfe1",
+ "vfe_lite";
+
+- interrupts = <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 477 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 478 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 479 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 464 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 466 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 468 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 477 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 478 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 479 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 448 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 465 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 467 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 469 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csid0",
+ "csid1",
+ "csid2",
+diff --git a/arch/arm64/boot/dts/qcom/sdx75.dtsi b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+index 7cf3fcb469a868..dcb925348e3f31 100644
+--- a/arch/arm64/boot/dts/qcom/sdx75.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+@@ -34,7 +34,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm4450.dtsi b/arch/arm64/boot/dts/qcom/sm4450.dtsi
+index 1e05cd00b635ee..0bbacab6842c3e 100644
+--- a/arch/arm64/boot/dts/qcom/sm4450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm4450.dtsi
+@@ -29,7 +29,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+index 133610d14fc41a..1f7fd429ad4286 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+@@ -28,7 +28,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ clock-output-names = "sleep_clk";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6375.dtsi b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+index 4d519dd6e7ef2f..72e01437ded125 100644
+--- a/arch/arm64/boot/dts/qcom/sm6375.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+@@ -29,7 +29,7 @@ xo_board_clk: xo-board-clk {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm7125.dtsi b/arch/arm64/boot/dts/qcom/sm7125.dtsi
+index 12dd72859a433b..a53145a610a3c8 100644
+--- a/arch/arm64/boot/dts/qcom/sm7125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm7125.dtsi
+@@ -6,11 +6,11 @@
+ #include "sc7180.dtsi"
+
+ /* SM7125 uses Kryo 465 instead of Kryo 468 */
+-&CPU0 { compatible = "qcom,kryo465"; };
+-&CPU1 { compatible = "qcom,kryo465"; };
+-&CPU2 { compatible = "qcom,kryo465"; };
+-&CPU3 { compatible = "qcom,kryo465"; };
+-&CPU4 { compatible = "qcom,kryo465"; };
+-&CPU5 { compatible = "qcom,kryo465"; };
+-&CPU6 { compatible = "qcom,kryo465"; };
+-&CPU7 { compatible = "qcom,kryo465"; };
++&cpu0 { compatible = "qcom,kryo465"; };
++&cpu1 { compatible = "qcom,kryo465"; };
++&cpu2 { compatible = "qcom,kryo465"; };
++&cpu3 { compatible = "qcom,kryo465"; };
++&cpu4 { compatible = "qcom,kryo465"; };
++&cpu5 { compatible = "qcom,kryo465"; };
++&cpu6 { compatible = "qcom,kryo465"; };
++&cpu7 { compatible = "qcom,kryo465"; };
+diff --git a/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts b/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts
+index 2ee2561b57b1d6..52b16a4fdc4321 100644
+--- a/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts
++++ b/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts
+@@ -32,7 +32,7 @@ / {
+ chassis-type = "handset";
+
+ /* required for bootloader to select correct board */
+- qcom,msm-id = <434 0x10000>, <459 0x10000>;
++ qcom,msm-id = <459 0x10000>;
+ qcom,board-id = <8 32>;
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts b/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts
+index b039773c44653a..a1323a8b8e6bfb 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts
++++ b/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts
+@@ -376,8 +376,8 @@ da7280@4a {
+ pinctrl-0 = <&da7280_intr_default>;
+
+ dlg,actuator-type = "LRA";
+- dlg,dlg,const-op-mode = <1>;
+- dlg,dlg,periodic-op-mode = <1>;
++ dlg,const-op-mode = <1>;
++ dlg,periodic-op-mode = <1>;
+ dlg,nom-microvolt = <2000000>;
+ dlg,abs-max-microvolt = <2000000>;
+ dlg,imax-microamp = <129000>;
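The da7280 hunk removes a doubled vendor prefix: property lookups are exact string matches, so "dlg,dlg,const-op-mode" never resolves and the configured mode is silently dropped. A hedged sketch of the lookup pattern (the property names come from the hunk; the helper around them is illustrative, not the real da7280 driver):

	#include <linux/device.h>
	#include <linux/property.h>

	static int demo_read_op_modes(struct device *dev)
	{
		u32 const_mode, periodic_mode;

		/* Returns a negative error if the DT spells the name differently. */
		if (device_property_read_u32(dev, "dlg,const-op-mode", &const_mode))
			const_mode = 0;		/* silently falls back to a default */

		if (device_property_read_u32(dev, "dlg,periodic-op-mode",
					     &periodic_mode))
			periodic_mode = 0;

		dev_info(dev, "const=%u periodic=%u\n", const_mode, periodic_mode);
		return 0;
	}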
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 630f4eff20bf81..faa36d17b9f2c9 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -84,7 +84,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+@@ -4481,20 +4481,20 @@ camss: camss@ac6a000 {
+ "vfe_lite0",
+ "vfe_lite1";
+
+- interrupts = <GIC_SPI 477 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 478 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 479 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 359 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 477 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 478 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 479 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 448 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 86 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 89 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 464 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 466 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 468 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 359 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 465 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 467 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 469 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 360 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csiphy0",
+ "csiphy1",
+ "csiphy2",
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 37a2aba0d4cae0..041750d71e4550 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -42,7 +42,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 38cb524cc56893..f7d52e491b694b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -43,7 +43,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-hdk.dts b/arch/arm64/boot/dts/qcom/sm8550-hdk.dts
+index 01c92160260572..29bc1ddfc7b25f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-hdk.dts
+@@ -1172,7 +1172,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+index ab447fc252f7dd..5648ab60ba4c4b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+@@ -825,7 +825,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-qrd.dts b/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
+index 6052dd922ec55c..3a6cb279130489 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
+@@ -1005,7 +1005,7 @@ &remoteproc_mpss {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts b/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts
+index 3d351e90bb3986..62a6b90697b063 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts
+@@ -565,7 +565,7 @@ &remoteproc_mpss {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts b/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts
+index 85d487ef80a0be..d90dc7b37c4a74 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts
+@@ -722,7 +722,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650-hdk.dts b/arch/arm64/boot/dts/qcom/sm8650-hdk.dts
+index 127c7aacd4fc31..59363267d2e0ab 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8650-hdk.dts
+@@ -1117,7 +1117,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650-mtp.dts b/arch/arm64/boot/dts/qcom/sm8650-mtp.dts
+index c63822f5b12789..74275ca668c76f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8650-mtp.dts
+@@ -734,7 +734,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650-qrd.dts b/arch/arm64/boot/dts/qcom/sm8650-qrd.dts
+index 8ca0d28eba9bd0..1689699d6de710 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650-qrd.dts
++++ b/arch/arm64/boot/dts/qcom/sm8650-qrd.dts
+@@ -1045,7 +1045,7 @@ &remoteproc_mpss {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &spi4 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index 01ac3769ffa62f..cd54fd723ce40e 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -5624,7 +5624,7 @@ compute-cb@8 {
+
+ /* note: secure cb9 in downstream */
+
+- compute-cb@10 {
++ compute-cb@12 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <12>;
+
+@@ -5634,7 +5634,7 @@ compute-cb@10 {
+ dma-coherent;
+ };
+
+- compute-cb@11 {
++ compute-cb@13 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <13>;
+
+@@ -5644,7 +5644,7 @@ compute-cb@11 {
+ dma-coherent;
+ };
+
+- compute-cb@12 {
++ compute-cb@14 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <14>;
+
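The sm8650 compute-cb renames bring each unit address in line with the node's reg value, which dtc expects to match (compute-cb@12 with reg = <12>). The driver identifies the context bank by reg, not by node name, so the rename is cosmetic to the binding but fixes validation warnings. A hedged sketch of the reg lookup (illustrative, not the actual fastrpc code):

	#include <linux/of.h>

	static int demo_read_cb_id(struct device_node *np)
	{
		u32 cb_id;
		int ret;

		/* "reg" carries the context-bank number, e.g. <12>. */
		ret = of_property_read_u32(np, "reg", &cb_id);
		if (ret)
			return ret;

		return cb_id;
	}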
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+index cdb401767c4206..89e39d55278579 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+@@ -680,14 +680,14 @@ &qupv3_2 {
+
+ &remoteproc_adsp {
+ firmware-name = "qcom/x1e80100/microsoft/Romulus/qcadsp8380.mbn",
+- "qcom/x1e80100/microsoft/Romulus/adsp_dtb.mbn";
++ "qcom/x1e80100/microsoft/Romulus/adsp_dtbs.elf";
+
+ status = "okay";
+ };
+
+ &remoteproc_cdsp {
+ firmware-name = "qcom/x1e80100/microsoft/Romulus/qccdsp8380.mbn",
+- "qcom/x1e80100/microsoft/Romulus/cdsp_dtb.mbn";
++ "qcom/x1e80100/microsoft/Romulus/cdsp_dtbs.elf";
+
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index a97ceff939d882..f0797df9619b15 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -38,7 +38,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi b/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
+index 21bfa4e03972ff..612cdc7efabbcc 100644
+--- a/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
+@@ -42,11 +42,6 @@ aliases {
+ #endif
+ };
+
+- chosen {
+- bootargs = "ignore_loglevel";
+- stdout-path = "serial0:115200n8";
+- };
+-
+ memory@48000000 {
+ device_type = "memory";
+ /* First 128MB is reserved for secure area. */
+diff --git a/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi b/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi
+index 7945d44e6ee159..af2ab1629104b0 100644
+--- a/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi
++++ b/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi
+@@ -12,10 +12,15 @@
+ / {
+ aliases {
+ i2c0 = &i2c0;
+- serial0 = &scif0;
++ serial3 = &scif0;
+ mmc1 = &sdhi1;
+ };
+
++ chosen {
++ bootargs = "ignore_loglevel";
++ stdout-path = "serial3:115200n8";
++ };
++
+ keys {
+ compatible = "gpio-keys";
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts b/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts
+index bd6419a5c20a22..8311af4c8689f6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts
+@@ -74,6 +74,23 @@ vcc_io: regulator-3v3-vcc-io {
+ vin-supply = <&vcc5v0_sys>;
+ };
+
++ /*
++ * HW revision prior to v1.2 must pull GPIO4_D6 low to access sdmmc.
++ * This is modeled as an always-on active low fixed regulator.
++ */
++ vcc_sd: regulator-3v3-vcc-sd {
++ compatible = "regulator-fixed";
++ gpios = <&gpio4 RK_PD6 GPIO_ACTIVE_LOW>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&sdmmc_2030>;
++ regulator-name = "vcc_sd";
++ regulator-always-on;
++ regulator-boot-on;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ vin-supply = <&vcc_io>;
++ };
++
+ vcc5v0_sys: regulator-5v0-vcc-sys {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc5v0_sys";
+@@ -181,6 +198,12 @@ pwr_led: pwr-led {
+ };
+ };
+
++ sdmmc {
++ sdmmc_2030: sdmmc-2030 {
++ rockchip,pins = <4 RK_PD6 RK_FUNC_GPIO &pcfg_pull_none>;
++ };
++ };
++
+ wifi {
+ wifi_reg_on: wifi-reg-on {
+ rockchip,pins = <0 RK_PA2 RK_FUNC_GPIO &pcfg_pull_none>;
+@@ -233,7 +256,7 @@ &sdmmc {
+ cap-mmc-highspeed;
+ cap-sd-highspeed;
+ disable-wp;
+- vmmc-supply = <&vcc_io>;
++ vmmc-supply = <&vcc_sd>;
+ status = "okay";
+ };
+
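The new vcc_sd node shows a common trick, spelled out in its comment: a board strap (GPIO4_D6 held low on pre-v1.2 boards) is modeled as an always-on, active-low fixed regulator so that the MMC core's standard vmmc handling keeps the rail in the right state with no driver changes. On the consumer side this is just the usual supply lookup; a hedged, generic sketch (not the actual Rockchip MMC code):

	#include <linux/err.h>
	#include <linux/regulator/consumer.h>

	static int demo_power_up_card(struct device *dev)
	{
		struct regulator *vmmc;

		/* Resolves the vmmc-supply phandle set in the board DT. */
		vmmc = devm_regulator_get(dev, "vmmc");
		if (IS_ERR(vmmc))
			return PTR_ERR(vmmc);

		/* For an always-on regulator this only bumps the use count. */
		return regulator_enable(vmmc);
	}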
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts
+index 170b14f92f51b5..f9ef0af8aa1ac8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts
+@@ -53,7 +53,7 @@ hdmi_tx_5v: hdmi-tx-5v-regulator {
+
+ pdm_codec: pdm-codec {
+ compatible = "dmic-codec";
+- num-channels = <1>;
++ num-channels = <2>;
+ #sound-dai-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/Makefile b/arch/arm64/boot/dts/ti/Makefile
+index bcd392c3206e50..562e6d57bc9919 100644
+--- a/arch/arm64/boot/dts/ti/Makefile
++++ b/arch/arm64/boot/dts/ti/Makefile
+@@ -41,10 +41,6 @@ dtb-$(CONFIG_ARCH_K3) += k3-am62x-sk-csi2-imx219.dtbo
+ dtb-$(CONFIG_ARCH_K3) += k3-am62x-sk-hdmi-audio.dtbo
+
+ # Boards with AM64x SoC
+-k3-am642-hummingboard-t-pcie-dtbs := \
+- k3-am642-hummingboard-t.dtb k3-am642-hummingboard-t-pcie.dtbo
+-k3-am642-hummingboard-t-usb3-dtbs := \
+- k3-am642-hummingboard-t.dtb k3-am642-hummingboard-t-usb3.dtbo
+ dtb-$(CONFIG_ARCH_K3) += k3-am642-evm.dtb
+ dtb-$(CONFIG_ARCH_K3) += k3-am642-evm-icssg1-dualemac.dtbo
+ dtb-$(CONFIG_ARCH_K3) += k3-am642-evm-icssg1-dualemac-mii.dtbo
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 5b92aef5b284b7..60c6814206a1f9 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -23,7 +23,6 @@ gic500: interrupt-controller@1800000 {
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+ <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+- <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+ <0x01 0x00000000 0x00 0x2000>, /* GICC */
+ <0x01 0x00010000 0x00 0x1000>, /* GICH */
+ <0x01 0x00020000 0x00 0x2000>; /* GICV */
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+index 16a578ae2b412f..56945d29e0150b 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+@@ -18,7 +18,6 @@ gic500: interrupt-controller@1800000 {
+ compatible = "arm,gic-v3";
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+ <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+- <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+ <0x01 0x00000000 0x00 0x2000>, /* GICC */
+ <0x01 0x00010000 0x00 0x1000>, /* GICH */
+ <0x01 0x00020000 0x00 0x2000>; /* GICV */
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dts b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dts
+new file mode 100644
+index 00000000000000..023b2a6aaa5668
+--- /dev/null
++++ b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
++ *
++ * DTS for SolidRun AM642 HummingBoard-T,
++ * running on Cortex A53, with PCI-E.
++ *
++ */
++
++#include "k3-am642-hummingboard-t.dts"
++
++#include "k3-serdes.h"
++
++/ {
++ model = "SolidRun AM642 HummingBoard-T with PCI-E";
++};
++
++&pcie0_rc {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pcie0_default_pins>;
++ reset-gpios = <&main_gpio1 15 GPIO_ACTIVE_HIGH>;
++ phys = <&serdes0_link>;
++ phy-names = "pcie-phy";
++ num-lanes = <1>;
++ status = "okay";
++};
++
++&serdes0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ serdes0_link: phy@0 {
++ reg = <0>;
++ cdns,num-lanes = <1>;
++ cdns,phy-type = <PHY_TYPE_PCIE>;
++ #phy-cells = <0>;
++ resets = <&serdes_wiz0 1>;
++ };
++};
++
++&serdes_ln_ctrl {
++ idle-states = <AM64_SERDES0_LANE0_PCIE0>;
++};
++
++&serdes_mux {
++ idle-state = <1>;
++};
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dtso b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dtso
+deleted file mode 100644
+index bd9a5caf20da5b..00000000000000
+--- a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dtso
++++ /dev/null
+@@ -1,45 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/*
+- * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
+- *
+- * Overlay for SolidRun AM642 HummingBoard-T to enable PCI-E.
+- */
+-
+-/dts-v1/;
+-/plugin/;
+-
+-#include <dt-bindings/gpio/gpio.h>
+-#include <dt-bindings/phy/phy.h>
+-
+-#include "k3-serdes.h"
+-
+-&pcie0_rc {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pcie0_default_pins>;
+- reset-gpios = <&main_gpio1 15 GPIO_ACTIVE_HIGH>;
+- phys = <&serdes0_link>;
+- phy-names = "pcie-phy";
+- num-lanes = <1>;
+- status = "okay";
+-};
+-
+-&serdes0 {
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- serdes0_link: phy@0 {
+- reg = <0>;
+- cdns,num-lanes = <1>;
+- cdns,phy-type = <PHY_TYPE_PCIE>;
+- #phy-cells = <0>;
+- resets = <&serdes_wiz0 1>;
+- };
+-};
+-
+-&serdes_ln_ctrl {
+- idle-states = <AM64_SERDES0_LANE0_PCIE0>;
+-};
+-
+-&serdes_mux {
+- idle-state = <1>;
+-};
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dts b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dts
+new file mode 100644
+index 00000000000000..ee9bd618f37010
+--- /dev/null
++++ b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
++ *
++ * DTS for SolidRun AM642 HummingBoard-T,
++ * running on Cortex A53, with USB-3.1 Gen 1.
++ *
++ */
++
++#include "k3-am642-hummingboard-t.dts"
++
++#include "k3-serdes.h"
++
++/ {
++ model = "SolidRun AM642 HummingBoard-T with USB-3.1 Gen 1";
++};
++
++&serdes0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ serdes0_link: phy@0 {
++ reg = <0>;
++ cdns,num-lanes = <1>;
++ cdns,phy-type = <PHY_TYPE_USB3>;
++ #phy-cells = <0>;
++ resets = <&serdes_wiz0 1>;
++ };
++};
++
++&serdes_ln_ctrl {
++ idle-states = <AM64_SERDES0_LANE0_USB>;
++};
++
++&serdes_mux {
++ idle-state = <0>;
++};
++
++&usbss0 {
++ /delete-property/ ti,usb2-only;
++};
++
++&usb0 {
++ maximum-speed = "super-speed";
++ phys = <&serdes0_link>;
++ phy-names = "cdns3,usb3-phy";
++};
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dtso b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dtso
+deleted file mode 100644
+index ffcc3bd3c7bc5d..00000000000000
+--- a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dtso
++++ /dev/null
+@@ -1,44 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/*
+- * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
+- *
+- * Overlay for SolidRun AM642 HummingBoard-T to enable USB-3.1.
+- */
+-
+-/dts-v1/;
+-/plugin/;
+-
+-#include <dt-bindings/phy/phy.h>
+-
+-#include "k3-serdes.h"
+-
+-&serdes0 {
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- serdes0_link: phy@0 {
+- reg = <0>;
+- cdns,num-lanes = <1>;
+- cdns,phy-type = <PHY_TYPE_USB3>;
+- #phy-cells = <0>;
+- resets = <&serdes_wiz0 1>;
+- };
+-};
+-
+-&serdes_ln_ctrl {
+- idle-states = <AM64_SERDES0_LANE0_USB>;
+-};
+-
+-&serdes_mux {
+- idle-state = <0>;
+-};
+-
+-&usbss0 {
+- /delete-property/ ti,usb2-only;
+-};
+-
+-&usb0 {
+- maximum-speed = "super-speed";
+- phys = <&serdes0_link>;
+- phy-names = "cdns3,usb3-phy";
+-};
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 5fdbfea7a5b295..8fe7dbae33bf90 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1347,7 +1347,6 @@ CONFIG_SM_DISPCC_6115=m
+ CONFIG_SM_DISPCC_8250=y
+ CONFIG_SM_DISPCC_8450=m
+ CONFIG_SM_DISPCC_8550=m
+-CONFIG_SM_DISPCC_8650=m
+ CONFIG_SM_GCC_4450=y
+ CONFIG_SM_GCC_6115=y
+ CONFIG_SM_GCC_8350=y
+diff --git a/arch/hexagon/include/asm/cmpxchg.h b/arch/hexagon/include/asm/cmpxchg.h
+index bf6cf5579cf459..9c58fb81f7fd67 100644
+--- a/arch/hexagon/include/asm/cmpxchg.h
++++ b/arch/hexagon/include/asm/cmpxchg.h
+@@ -56,7 +56,7 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
+ __typeof__(ptr) __ptr = (ptr); \
+ __typeof__(*(ptr)) __old = (old); \
+ __typeof__(*(ptr)) __new = (new); \
+- __typeof__(*(ptr)) __oldval = 0; \
++ __typeof__(*(ptr)) __oldval = (__typeof__(*(ptr))) 0; \
+ \
+ asm volatile( \
+ "1: %0 = memw_locked(%1);\n" \
+diff --git a/arch/hexagon/kernel/traps.c b/arch/hexagon/kernel/traps.c
+index 75e062722d285b..040a958de1dfc5 100644
+--- a/arch/hexagon/kernel/traps.c
++++ b/arch/hexagon/kernel/traps.c
+@@ -195,8 +195,10 @@ int die(const char *str, struct pt_regs *regs, long err)
+ printk(KERN_EMERG "Oops: %s[#%d]:\n", str, ++die.counter);
+
+ if (notify_die(DIE_OOPS, str, regs, err, pt_cause(regs), SIGSEGV) ==
+- NOTIFY_STOP)
++ NOTIFY_STOP) {
++ spin_unlock_irq(&die.lock);
+ return 1;
++ }
+
+ print_modules();
+ show_regs(regs);
+diff --git a/arch/loongarch/include/asm/hw_breakpoint.h b/arch/loongarch/include/asm/hw_breakpoint.h
+index d78330916bd18a..13b2462f3d8c9d 100644
+--- a/arch/loongarch/include/asm/hw_breakpoint.h
++++ b/arch/loongarch/include/asm/hw_breakpoint.h
+@@ -38,8 +38,8 @@ struct arch_hw_breakpoint {
+ * Limits.
+ * Changing these will require modifications to the register accessors.
+ */
+-#define LOONGARCH_MAX_BRP 8
+-#define LOONGARCH_MAX_WRP 8
++#define LOONGARCH_MAX_BRP 14
++#define LOONGARCH_MAX_WRP 14
+
+ /* Virtual debug register bases. */
+ #define CSR_CFG_ADDR 0
+diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
+index 64ad277e096edd..aaa4ad6b85944a 100644
+--- a/arch/loongarch/include/asm/loongarch.h
++++ b/arch/loongarch/include/asm/loongarch.h
+@@ -959,6 +959,36 @@
+ #define LOONGARCH_CSR_DB7CTRL 0x34a /* data breakpoint 7 control */
+ #define LOONGARCH_CSR_DB7ASID 0x34b /* data breakpoint 7 asid */
+
++#define LOONGARCH_CSR_DB8ADDR 0x350 /* data breakpoint 8 address */
++#define LOONGARCH_CSR_DB8MASK 0x351 /* data breakpoint 8 mask */
++#define LOONGARCH_CSR_DB8CTRL 0x352 /* data breakpoint 8 control */
++#define LOONGARCH_CSR_DB8ASID 0x353 /* data breakpoint 8 asid */
++
++#define LOONGARCH_CSR_DB9ADDR 0x358 /* data breakpoint 9 address */
++#define LOONGARCH_CSR_DB9MASK 0x359 /* data breakpoint 9 mask */
++#define LOONGARCH_CSR_DB9CTRL 0x35a /* data breakpoint 9 control */
++#define LOONGARCH_CSR_DB9ASID 0x35b /* data breakpoint 9 asid */
++
++#define LOONGARCH_CSR_DB10ADDR 0x360 /* data breakpoint 10 address */
++#define LOONGARCH_CSR_DB10MASK 0x361 /* data breakpoint 10 mask */
++#define LOONGARCH_CSR_DB10CTRL 0x362 /* data breakpoint 10 control */
++#define LOONGARCH_CSR_DB10ASID 0x363 /* data breakpoint 10 asid */
++
++#define LOONGARCH_CSR_DB11ADDR 0x368 /* data breakpoint 11 address */
++#define LOONGARCH_CSR_DB11MASK 0x369 /* data breakpoint 11 mask */
++#define LOONGARCH_CSR_DB11CTRL 0x36a /* data breakpoint 11 control */
++#define LOONGARCH_CSR_DB11ASID 0x36b /* data breakpoint 11 asid */
++
++#define LOONGARCH_CSR_DB12ADDR 0x370 /* data breakpoint 12 address */
++#define LOONGARCH_CSR_DB12MASK 0x371 /* data breakpoint 12 mask */
++#define LOONGARCH_CSR_DB12CTRL 0x372 /* data breakpoint 12 control */
++#define LOONGARCH_CSR_DB12ASID 0x373 /* data breakpoint 12 asid */
++
++#define LOONGARCH_CSR_DB13ADDR 0x378 /* data breakpoint 13 address */
++#define LOONGARCH_CSR_DB13MASK 0x379 /* data breakpoint 13 mask */
++#define LOONGARCH_CSR_DB13CTRL 0x37a /* data breakpoint 13 control */
++#define LOONGARCH_CSR_DB13ASID 0x37b /* data breakpoint 13 asid */
++
+ #define LOONGARCH_CSR_FWPC 0x380 /* instruction breakpoint config */
+ #define LOONGARCH_CSR_FWPS 0x381 /* instruction breakpoint status */
+
+@@ -1002,6 +1032,36 @@
+ #define LOONGARCH_CSR_IB7CTRL 0x3ca /* inst breakpoint 7 control */
+ #define LOONGARCH_CSR_IB7ASID 0x3cb /* inst breakpoint 7 asid */
+
++#define LOONGARCH_CSR_IB8ADDR 0x3d0 /* inst breakpoint 8 address */
++#define LOONGARCH_CSR_IB8MASK 0x3d1 /* inst breakpoint 8 mask */
++#define LOONGARCH_CSR_IB8CTRL 0x3d2 /* inst breakpoint 8 control */
++#define LOONGARCH_CSR_IB8ASID 0x3d3 /* inst breakpoint 8 asid */
++
++#define LOONGARCH_CSR_IB9ADDR 0x3d8 /* inst breakpoint 9 address */
++#define LOONGARCH_CSR_IB9MASK 0x3d9 /* inst breakpoint 9 mask */
++#define LOONGARCH_CSR_IB9CTRL 0x3da /* inst breakpoint 9 control */
++#define LOONGARCH_CSR_IB9ASID 0x3db /* inst breakpoint 9 asid */
++
++#define LOONGARCH_CSR_IB10ADDR 0x3e0 /* inst breakpoint 10 address */
++#define LOONGARCH_CSR_IB10MASK 0x3e1 /* inst breakpoint 10 mask */
++#define LOONGARCH_CSR_IB10CTRL 0x3e2 /* inst breakpoint 10 control */
++#define LOONGARCH_CSR_IB10ASID 0x3e3 /* inst breakpoint 10 asid */
++
++#define LOONGARCH_CSR_IB11ADDR 0x3e8 /* inst breakpoint 11 address */
++#define LOONGARCH_CSR_IB11MASK 0x3e9 /* inst breakpoint 11 mask */
++#define LOONGARCH_CSR_IB11CTRL 0x3ea /* inst breakpoint 11 control */
++#define LOONGARCH_CSR_IB11ASID 0x3eb /* inst breakpoint 11 asid */
++
++#define LOONGARCH_CSR_IB12ADDR 0x3f0 /* inst breakpoint 12 address */
++#define LOONGARCH_CSR_IB12MASK 0x3f1 /* inst breakpoint 12 mask */
++#define LOONGARCH_CSR_IB12CTRL 0x3f2 /* inst breakpoint 12 control */
++#define LOONGARCH_CSR_IB12ASID 0x3f3 /* inst breakpoint 12 asid */
++
++#define LOONGARCH_CSR_IB13ADDR 0x3f8 /* inst breakpoint 13 address */
++#define LOONGARCH_CSR_IB13MASK 0x3f9 /* inst breakpoint 13 mask */
++#define LOONGARCH_CSR_IB13CTRL 0x3fa /* inst breakpoint 13 control */
++#define LOONGARCH_CSR_IB13ASID 0x3fb /* inst breakpoint 13 asid */
++
+ #define LOONGARCH_CSR_DEBUG 0x500 /* debug config */
+ #define LOONGARCH_CSR_DERA 0x501 /* debug era */
+ #define LOONGARCH_CSR_DESAVE 0x502 /* debug save */
+diff --git a/arch/loongarch/kernel/hw_breakpoint.c b/arch/loongarch/kernel/hw_breakpoint.c
+index a6e4b605bfa8d6..c35f9bf3803349 100644
+--- a/arch/loongarch/kernel/hw_breakpoint.c
++++ b/arch/loongarch/kernel/hw_breakpoint.c
+@@ -51,7 +51,13 @@ int hw_breakpoint_slots(int type)
+ READ_WB_REG_CASE(OFF, 4, REG, T, VAL); \
+ READ_WB_REG_CASE(OFF, 5, REG, T, VAL); \
+ READ_WB_REG_CASE(OFF, 6, REG, T, VAL); \
+- READ_WB_REG_CASE(OFF, 7, REG, T, VAL);
++ READ_WB_REG_CASE(OFF, 7, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 8, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 9, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 10, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 11, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 12, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 13, REG, T, VAL);
+
+ #define GEN_WRITE_WB_REG_CASES(OFF, REG, T, VAL) \
+ WRITE_WB_REG_CASE(OFF, 0, REG, T, VAL); \
+@@ -61,7 +67,13 @@ int hw_breakpoint_slots(int type)
+ WRITE_WB_REG_CASE(OFF, 4, REG, T, VAL); \
+ WRITE_WB_REG_CASE(OFF, 5, REG, T, VAL); \
+ WRITE_WB_REG_CASE(OFF, 6, REG, T, VAL); \
+- WRITE_WB_REG_CASE(OFF, 7, REG, T, VAL);
++ WRITE_WB_REG_CASE(OFF, 7, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 8, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 9, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 10, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 11, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 12, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 13, REG, T, VAL);
+
+ static u64 read_wb_reg(int reg, int n, int t)
+ {
+diff --git a/arch/loongarch/power/platform.c b/arch/loongarch/power/platform.c
+index 0909729dc2e153..5bbdb9fd76e5d0 100644
+--- a/arch/loongarch/power/platform.c
++++ b/arch/loongarch/power/platform.c
+@@ -17,7 +17,7 @@ void enable_gpe_wakeup(void)
+ if (acpi_gbl_reduced_hardware)
+ return;
+
+- acpi_enable_all_wakeup_gpes();
++ acpi_hw_enable_all_wakeup_gpes();
+ }
+
+ void enable_pci_wakeup(void)
+diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
+index 18a3028ac3b6de..dad2e7980f245b 100644
+--- a/arch/powerpc/include/asm/hugetlb.h
++++ b/arch/powerpc/include/asm/hugetlb.h
+@@ -15,6 +15,15 @@
+
+ extern bool hugetlb_disabled;
+
++static inline bool hugepages_supported(void)
++{
++ if (hugetlb_disabled)
++ return false;
++
++ return HPAGE_SHIFT != 0;
++}
++#define hugepages_supported hugepages_supported
++
+ void __init hugetlbpage_init_defaultsize(void);
+
+ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index 76381e14e800c7..0ebae6e4c19dd7 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -687,7 +687,7 @@ void iommu_table_clear(struct iommu_table *tbl)
+ void iommu_table_reserve_pages(struct iommu_table *tbl,
+ unsigned long res_start, unsigned long res_end)
+ {
+- int i;
++ unsigned long i;
+
+ WARN_ON_ONCE(res_end < res_start);
+ /*
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index 534cd159e9ab4c..ae6f7a235d8b24 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -1650,7 +1650,8 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ iommu_table_setparms_common(newtbl, pci->phb->bus->number, create.liobn,
+ dynamic_addr, dynamic_len, page_shift, NULL,
+ &iommu_table_lpar_multi_ops);
+- iommu_init_table(newtbl, pci->phb->node, start, end);
++ iommu_init_table(newtbl, pci->phb->node,
++ start >> page_shift, end >> page_shift);
+
+ pci->table_group->tables[default_win_removed ? 0 : 1] = newtbl;
+
+@@ -2065,7 +2066,9 @@ static long spapr_tce_create_table(struct iommu_table_group *table_group, int nu
+ offset, 1UL << window_shift,
+ IOMMU_PAGE_SHIFT_4K, NULL,
+ &iommu_table_lpar_multi_ops);
+- iommu_init_table(tbl, pci->phb->node, start, end);
++ iommu_init_table(tbl, pci->phb->node,
++ start >> IOMMU_PAGE_SHIFT_4K,
++ end >> IOMMU_PAGE_SHIFT_4K);
+
+ table_group->tables[0] = tbl;
+
+@@ -2136,7 +2139,7 @@ static long spapr_tce_create_table(struct iommu_table_group *table_group, int nu
+ /* New table for using DDW instead of the default DMA window */
+ iommu_table_setparms_common(tbl, pci->phb->bus->number, create.liobn, win_addr,
+ 1UL << len, page_shift, NULL, &iommu_table_lpar_multi_ops);
+- iommu_init_table(tbl, pci->phb->node, start, end);
++ iommu_init_table(tbl, pci->phb->node, start >> page_shift, end >> page_shift);
+
+ pci->table_group->tables[num] = tbl;
+ set_iommu_table_base(&pdev->dev, tbl);
+@@ -2205,6 +2208,9 @@ static long spapr_tce_unset_window(struct iommu_table_group *table_group, int nu
+ const char *win_name;
+ int ret = -ENODEV;
+
++ if (!tbl) /* The table was never created OR window was never opened */
++ return 0;
++
+ mutex_lock(&dma_win_init_mutex);
+
+ if ((num == 0) && is_default_window_table(table_group, tbl))
+diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
+index 682b3feee45114..a30fb2fb8a2b16 100644
+--- a/arch/riscv/kernel/vector.c
++++ b/arch/riscv/kernel/vector.c
+@@ -309,7 +309,7 @@ static int __init riscv_v_sysctl_init(void)
+ static int __init riscv_v_sysctl_init(void) { return 0; }
+ #endif /* ! CONFIG_SYSCTL */
+
+-static int riscv_v_init(void)
++static int __init riscv_v_init(void)
+ {
+ return riscv_v_sysctl_init();
+ }
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index cc1f9cffe2a5f4..62f2c9e8e05f7d 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -65,6 +65,7 @@ config S390
+ select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM
+ select ARCH_ENABLE_MEMORY_HOTREMOVE
+ select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
++ select ARCH_HAS_CPU_FINALIZE_INIT
+ select ARCH_HAS_CURRENT_STACK_POINTER
+ select ARCH_HAS_DEBUG_VIRTUAL
+ select ARCH_HAS_DEBUG_VM_PGTABLE
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 7fd57398221ea3..9b772093278704 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -22,7 +22,7 @@ KBUILD_AFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -D__ASSEMBLY__
+ ifndef CONFIG_AS_IS_LLVM
+ KBUILD_AFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),$(aflags_dwarf))
+ endif
+-KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2 -mpacked-stack
++KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2 -mpacked-stack -std=gnu11
+ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
+ KBUILD_CFLAGS_DECOMPRESSOR += -D__DECOMPRESSOR
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float -mbackchain
+diff --git a/arch/s390/include/asm/sclp.h b/arch/s390/include/asm/sclp.h
+index eb00fa1771da07..ad17d91ad2e661 100644
+--- a/arch/s390/include/asm/sclp.h
++++ b/arch/s390/include/asm/sclp.h
+@@ -137,6 +137,7 @@ void sclp_early_printk(const char *s);
+ void __sclp_early_printk(const char *s, unsigned int len);
+ void sclp_emergency_printk(const char *s);
+
++int sclp_init(void);
+ int sclp_early_get_memsize(unsigned long *mem);
+ int sclp_early_get_hsa_size(unsigned long *hsa_size);
+ int _sclp_get_core_info(struct sclp_core_info *info);
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index e2e0aa463fbd1e..c3075e4a8efc31 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -981,7 +981,7 @@ static int cfdiag_push_sample(struct perf_event *event,
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+ raw.frag.size = cpuhw->usedss;
+ raw.frag.data = cpuhw->stop;
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ overflow = perf_event_overflow(event, &data, &regs);
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index fa732545426611..10725f5a6f0fd1 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -478,7 +478,7 @@ static int paicrypt_push_sample(size_t rawsize, struct paicrypt_map *cpump,
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+ raw.frag.size = rawsize;
+ raw.frag.data = cpump->save;
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ overflow = perf_event_overflow(event, &data, &regs);
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index 7f462bef1fc075..a8f0bad99cf04f 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -503,7 +503,7 @@ static int paiext_push_sample(size_t rawsize, struct paiext_map *cpump,
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+ raw.frag.size = rawsize;
+ raw.frag.data = cpump->save;
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ overflow = perf_event_overflow(event, &data, &regs);
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index a3fea683b22706..99f165726ca9ef 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -1006,3 +1006,8 @@ void __init setup_arch(char **cmdline_p)
+ /* Add system specific data to the random pool */
+ setup_randomness();
+ }
++
++void __init arch_cpu_finalize_init(void)
++{
++ sclp_init();
++}
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 24eccaa293371b..bdcf2a3b6c41b3 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -13,7 +13,7 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS -D__NO_FORTIFY
+ $(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE
+ $(call if_changed_rule,as_o_S)
+
+-KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes
++KBUILD_CFLAGS := -std=gnu11 -fno-strict-aliasing -Wall -Wstrict-prototypes
+ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -Os -m64 -msoft-float -fno-common
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index e91970b01d6243..c3a2f6f57770ab 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -1118,7 +1118,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
+ .data = ibs_data.data,
+ },
+ };
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ if (perf_ibs == &perf_ibs_op)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 427d1daf06d06a..6b981868905f5d 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1735,7 +1735,7 @@ struct kvm_x86_ops {
+ bool allow_apicv_in_x2apic_without_x2apic_virtualization;
+ void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
+ void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
+- void (*hwapic_isr_update)(int isr);
++ void (*hwapic_isr_update)(struct kvm_vcpu *vcpu, int isr);
+ void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
+ void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
+ void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 766f092dab80b3..f1fac08fdef28c 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -495,14 +495,6 @@ static int x86_cluster_flags(void)
+ }
+ #endif
+
+-static int x86_die_flags(void)
+-{
+- if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
+- return x86_sched_itmt_flags();
+-
+- return 0;
+-}
+-
+ /*
+ * Set if a package/die has multiple NUMA nodes inside.
+ * AMD Magny-Cours, Intel Cluster-on-Die, and Intel
+@@ -538,7 +530,7 @@ static void __init build_sched_topology(void)
+ */
+ if (!x86_has_numa_in_package) {
+ x86_topology[i++] = (struct sched_domain_topology_level){
+- cpu_cpu_mask, x86_die_flags, SD_INIT_NAME(PKG)
++ cpu_cpu_mask, x86_sched_itmt_flags, SD_INIT_NAME(PKG)
+ };
+ }
+
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 95c6beb8ce2799..375bbb9600d3c1 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -763,7 +763,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
+ * just set SVI.
+ */
+ if (unlikely(apic->apicv_active))
+- kvm_x86_call(hwapic_isr_update)(vec);
++ kvm_x86_call(hwapic_isr_update)(apic->vcpu, vec);
+ else {
+ ++apic->isr_count;
+ BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
+@@ -808,7 +808,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
+ * and must be left alone.
+ */
+ if (unlikely(apic->apicv_active))
+- kvm_x86_call(hwapic_isr_update)(apic_find_highest_isr(apic));
++ kvm_x86_call(hwapic_isr_update)(apic->vcpu, apic_find_highest_isr(apic));
+ else {
+ --apic->isr_count;
+ BUG_ON(apic->isr_count < 0);
+@@ -2786,7 +2786,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
+ if (apic->apicv_active) {
+ kvm_x86_call(apicv_post_state_restore)(vcpu);
+ kvm_x86_call(hwapic_irr_update)(vcpu, -1);
+- kvm_x86_call(hwapic_isr_update)(-1);
++ kvm_x86_call(hwapic_isr_update)(vcpu, -1);
+ }
+
+ vcpu->arch.apic_arb_prio = 0;
+@@ -3102,9 +3102,8 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
+ kvm_apic_update_apicv(vcpu);
+ if (apic->apicv_active) {
+ kvm_x86_call(apicv_post_state_restore)(vcpu);
+- kvm_x86_call(hwapic_irr_update)(vcpu,
+- apic_find_highest_irr(apic));
+- kvm_x86_call(hwapic_isr_update)(apic_find_highest_isr(apic));
++ kvm_x86_call(hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
++ kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+ }
+ kvm_make_request(KVM_REQ_EVENT, vcpu);
+ if (ioapic_in_kernel(vcpu->kvm))
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 92fee5e8a3c741..968ddf71405446 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6847,7 +6847,7 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
+ kvm_release_pfn_clean(pfn);
+ }
+
+-void vmx_hwapic_isr_update(int max_isr)
++void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+ {
+ u16 status;
+ u8 old;
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index a55981c5216e63..48dc76bf0ec03a 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -48,7 +48,7 @@ void vmx_migrate_timers(struct kvm_vcpu *vcpu);
+ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+ void vmx_apicv_pre_state_restore(struct kvm_vcpu *vcpu);
+ void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+-void vmx_hwapic_isr_update(int max_isr);
++void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
+ int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu);
+ void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+ int trig_mode, int vector);
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 88e3ad73c38549..9aed61fcd0bf94 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -118,17 +118,18 @@ static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs,
+
+ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
+ {
+- unsigned short nr_vecs = bip->bip_max_vcnt - 1;
+- struct bio_vec *copy = &bip->bip_vec[1];
+- size_t bytes = bip->bip_iter.bi_size;
+- struct iov_iter iter;
++ unsigned short orig_nr_vecs = bip->bip_max_vcnt - 1;
++ struct bio_vec *orig_bvecs = &bip->bip_vec[1];
++ struct bio_vec *bounce_bvec = &bip->bip_vec[0];
++ size_t bytes = bounce_bvec->bv_len;
++ struct iov_iter orig_iter;
+ int ret;
+
+- iov_iter_bvec(&iter, ITER_DEST, copy, nr_vecs, bytes);
+- ret = copy_to_iter(bvec_virt(bip->bip_vec), bytes, &iter);
++ iov_iter_bvec(&orig_iter, ITER_DEST, orig_bvecs, orig_nr_vecs, bytes);
++ ret = copy_to_iter(bvec_virt(bounce_bvec), bytes, &orig_iter);
+ WARN_ON_ONCE(ret != bytes);
+
+- bio_integrity_unpin_bvec(copy, nr_vecs, true);
++ bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs, true);
+ }
+
+ /**
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 4f791a3114a12c..42023addf9cda6 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -629,8 +629,14 @@ static void __submit_bio(struct bio *bio)
+ blk_mq_submit_bio(bio);
+ } else if (likely(bio_queue_enter(bio) == 0)) {
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+-
+- disk->fops->submit_bio(bio);
++
++ if ((bio->bi_opf & REQ_POLLED) &&
++ !(disk->queue->limits.features & BLK_FEAT_POLL)) {
++ bio->bi_status = BLK_STS_NOTSUPP;
++ bio_endio(bio);
++ } else {
++ disk->fops->submit_bio(bio);
++ }
+ blk_queue_exit(disk->queue);
+ }
+
+@@ -805,12 +811,6 @@ void submit_bio_noacct(struct bio *bio)
+ }
+ }
+
+- if (!(q->limits.features & BLK_FEAT_POLL) &&
+- (bio->bi_opf & REQ_POLLED)) {
+- bio_clear_polled(bio);
+- goto not_supported;
+- }
+-
+ switch (bio_op(bio)) {
+ case REQ_OP_READ:
+ break;
+@@ -935,7 +935,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
+ return 0;
+
+ q = bdev_get_queue(bdev);
+- if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
++ if (cookie == BLK_QC_T_NONE)
+ return 0;
+
+ blk_flush_plug(current->plug, false);
+@@ -956,7 +956,8 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
+ } else {
+ struct gendisk *disk = q->disk;
+
+- if (disk && disk->fops->poll_bio)
++ if ((q->limits.features & BLK_FEAT_POLL) && disk &&
++ disk->fops->poll_bio)
+ ret = disk->fops->poll_bio(bio, iob, flags);
+ }
+ blk_queue_exit(q);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 4e76651e786d19..662e52ab06467f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3092,14 +3092,21 @@ void blk_mq_submit_bio(struct bio *bio)
+ }
+
+ /*
+- * Device reconfiguration may change logical block size, so alignment
+- * check has to be done with queue usage counter held
++ * Device reconfiguration may change logical block size or reduce the
++ * number of poll queues, so the checks for alignment and poll support
++ * have to be done with queue usage counter held.
+ */
+ if (unlikely(bio_unaligned(bio, q))) {
+ bio_io_error(bio);
+ goto queue_exit;
+ }
+
++ if ((bio->bi_opf & REQ_POLLED) && !blk_mq_can_poll(q)) {
++ bio->bi_status = BLK_STS_NOTSUPP;
++ bio_endio(bio);
++ goto queue_exit;
++ }
++
+ bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
+ if (!bio)
+ goto queue_exit;
+@@ -4325,12 +4332,6 @@ void blk_mq_release(struct request_queue *q)
+ blk_mq_sysfs_deinit(q);
+ }
+
+-static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
+-{
+- return set->nr_maps > HCTX_TYPE_POLL &&
+- set->map[HCTX_TYPE_POLL].nr_queues;
+-}
+-
+ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+ struct queue_limits *lim, void *queuedata)
+ {
+@@ -4341,7 +4342,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+ if (!lim)
+ lim = &default_lim;
+ lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+- if (blk_mq_can_poll(set))
++ if (set->nr_maps > HCTX_TYPE_POLL)
+ lim->features |= BLK_FEAT_POLL;
+
+ q = blk_alloc_queue(lim, set->numa_node);
+@@ -5029,8 +5030,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ fallback:
+ blk_mq_update_queue_map(set);
+ list_for_each_entry(q, &set->tag_list, tag_set_list) {
+- struct queue_limits lim;
+-
+ blk_mq_realloc_hw_ctxs(set, q);
+
+ if (q->nr_hw_queues != set->nr_hw_queues) {
+@@ -5044,13 +5043,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ set->nr_hw_queues = prev_nr_hw_queues;
+ goto fallback;
+ }
+- lim = queue_limits_start_update(q);
+- if (blk_mq_can_poll(set))
+- lim.features |= BLK_FEAT_POLL;
+- else
+- lim.features &= ~BLK_FEAT_POLL;
+- if (queue_limits_commit_update(q, &lim) < 0)
+- pr_warn("updating the poll flag failed\n");
+ blk_mq_map_swqueue(q);
+ }
+
+@@ -5110,9 +5102,9 @@ static int blk_hctx_poll(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+ int blk_mq_poll(struct request_queue *q, blk_qc_t cookie,
+ struct io_comp_batch *iob, unsigned int flags)
+ {
+- struct blk_mq_hw_ctx *hctx = xa_load(&q->hctx_table, cookie);
+-
+- return blk_hctx_poll(q, hctx, iob, flags);
++ if (!blk_mq_can_poll(q))
++ return 0;
++ return blk_hctx_poll(q, xa_load(&q->hctx_table, cookie), iob, flags);
+ }
+
+ int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index f4ac1af77a267e..364c0415293cf7 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -451,4 +451,10 @@ do { \
+ #define blk_mq_run_dispatch_ops(q, dispatch_ops) \
+ __blk_mq_run_dispatch_ops(q, true, dispatch_ops) \
+
++static inline bool blk_mq_can_poll(struct request_queue *q)
++{
++ return (q->limits.features & BLK_FEAT_POLL) &&
++ q->tag_set->map[HCTX_TYPE_POLL].nr_queues;
++}
++
+ #endif
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 207577145c54f4..692b27266220fe 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -256,10 +256,17 @@ static ssize_t queue_##_name##_show(struct gendisk *disk, char *page) \
+ !!(disk->queue->limits.features & _feature)); \
+ }
+
+-QUEUE_SYSFS_FEATURE_SHOW(poll, BLK_FEAT_POLL);
+ QUEUE_SYSFS_FEATURE_SHOW(fua, BLK_FEAT_FUA);
+ QUEUE_SYSFS_FEATURE_SHOW(dax, BLK_FEAT_DAX);
+
++static ssize_t queue_poll_show(struct gendisk *disk, char *page)
++{
++ if (queue_is_mq(disk->queue))
++ return sysfs_emit(page, "%u\n", blk_mq_can_poll(disk->queue));
++ return sysfs_emit(page, "%u\n",
++ !!(disk->queue->limits.features & BLK_FEAT_POLL));
++}
++
+ static ssize_t queue_zoned_show(struct gendisk *disk, char *page)
+ {
+ if (blk_queue_is_zoned(disk->queue))
+diff --git a/block/genhd.c b/block/genhd.c
+index 8645cf3b0816e4..99344f53c78975 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -778,7 +778,7 @@ static ssize_t disk_badblocks_store(struct device *dev,
+ }
+
+ #ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
+-void blk_request_module(dev_t devt)
++static bool blk_probe_dev(dev_t devt)
+ {
+ unsigned int major = MAJOR(devt);
+ struct blk_major_name **n;
+@@ -788,14 +788,26 @@ void blk_request_module(dev_t devt)
+ if ((*n)->major == major && (*n)->probe) {
+ (*n)->probe(devt);
+ mutex_unlock(&major_names_lock);
+- return;
++ return true;
+ }
+ }
+ mutex_unlock(&major_names_lock);
++ return false;
++}
++
++void blk_request_module(dev_t devt)
++{
++ int error;
++
++ if (blk_probe_dev(devt))
++ return;
+
+- if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
+- /* Make old-style 2.4 aliases work */
+- request_module("block-major-%d", MAJOR(devt));
++ error = request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt));
++ /* Make old-style 2.4 aliases work */
++ if (error > 0)
++ error = request_module("block-major-%d", MAJOR(devt));
++ if (!error)
++ blk_probe_dev(devt);
+ }
+ #endif /* CONFIG_BLOCK_LEGACY_AUTOLOAD */
+
+diff --git a/block/partitions/ldm.h b/block/partitions/ldm.h
+index e259180c89148b..aa3bd050d8cdd0 100644
+--- a/block/partitions/ldm.h
++++ b/block/partitions/ldm.h
+@@ -1,5 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-or-later
+-/**
++/*
+ * ldm - Part of the Linux-NTFS project.
+ *
+ * Copyright (C) 2001,2002 Richard Russon <ldm@flatcap.org>
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 004d27e41315ff..c067412d909a16 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -1022,6 +1022,8 @@ static void __init crypto_start_tests(void)
+ if (IS_ENABLED(CONFIG_CRYPTO_MANAGER_DISABLE_TESTS))
+ return;
+
++ set_crypto_boot_test_finished();
++
+ for (;;) {
+ struct crypto_larval *larval = NULL;
+ struct crypto_alg *q;
+@@ -1053,8 +1055,6 @@ static void __init crypto_start_tests(void)
+ if (!larval)
+ break;
+ }
+-
+- set_crypto_boot_test_finished();
+ }
+
+ static int __init crypto_algapi_init(void)
+diff --git a/drivers/acpi/acpica/achware.h b/drivers/acpi/acpica/achware.h
+index 79bbfe00d241f9..b8543a34caeada 100644
+--- a/drivers/acpi/acpica/achware.h
++++ b/drivers/acpi/acpica/achware.h
+@@ -103,8 +103,6 @@ acpi_hw_get_gpe_status(struct acpi_gpe_event_info *gpe_event_info,
+
+ acpi_status acpi_hw_enable_all_runtime_gpes(void);
+
+-acpi_status acpi_hw_enable_all_wakeup_gpes(void);
+-
+ u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number);
+
+ acpi_status
+diff --git a/drivers/acpi/fan_core.c b/drivers/acpi/fan_core.c
+index 7cea4495f19bbe..300e5d91998648 100644
+--- a/drivers/acpi/fan_core.c
++++ b/drivers/acpi/fan_core.c
+@@ -371,19 +371,25 @@ static int acpi_fan_probe(struct platform_device *pdev)
+ result = sysfs_create_link(&pdev->dev.kobj,
+ &cdev->device.kobj,
+ "thermal_cooling");
+- if (result)
++ if (result) {
+ dev_err(&pdev->dev, "Failed to create sysfs link 'thermal_cooling'\n");
++ goto err_unregister;
++ }
+
+ result = sysfs_create_link(&cdev->device.kobj,
+ &pdev->dev.kobj,
+ "device");
+ if (result) {
+ dev_err(&pdev->dev, "Failed to create sysfs link 'device'\n");
+- goto err_end;
++ goto err_remove_link;
+ }
+
+ return 0;
+
++err_remove_link:
++ sysfs_remove_link(&pdev->dev.kobj, "thermal_cooling");
++err_unregister:
++ thermal_cooling_device_unregister(cdev);
+ err_end:
+ if (fan->acpi4)
+ acpi_fan_delete_attributes(device);
+diff --git a/drivers/base/class.c b/drivers/base/class.c
+index cb5359235c7020..ce460e1ab1376d 100644
+--- a/drivers/base/class.c
++++ b/drivers/base/class.c
+@@ -323,8 +323,12 @@ void class_dev_iter_init(struct class_dev_iter *iter, const struct class *class,
+ struct subsys_private *sp = class_to_subsys(class);
+ struct klist_node *start_knode = NULL;
+
+- if (!sp)
++ memset(iter, 0, sizeof(*iter));
++ if (!sp) {
++ pr_crit("%s: class %p was not registered yet\n",
++ __func__, class);
+ return;
++ }
+
+ if (start)
+ start_knode = &start->p->knode_class;
+@@ -351,6 +355,9 @@ struct device *class_dev_iter_next(struct class_dev_iter *iter)
+ struct klist_node *knode;
+ struct device *dev;
+
++ if (!iter->sp)
++ return NULL;
++
+ while (1) {
+ knode = klist_next(&iter->ki);
+ if (!knode)
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b852050d8a9665..450458267e6e64 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2180,6 +2180,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ flush_workqueue(nbd->recv_workq);
+ nbd_clear_que(nbd);
+ nbd->task_setup = NULL;
++ clear_bit(NBD_RT_BOUND, &nbd->config->runtime_flags);
+ mutex_unlock(&nbd->config_lock);
+
+ if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
+index ff45ed76646957..226ffc743238e9 100644
+--- a/drivers/block/ps3disk.c
++++ b/drivers/block/ps3disk.c
+@@ -384,9 +384,9 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
+ unsigned int devidx;
+ struct queue_limits lim = {
+ .logical_block_size = dev->blk_size,
+- .max_hw_sectors = dev->bounce_size >> 9,
++ .max_hw_sectors = BOUNCE_SIZE >> 9,
+ .max_segments = -1,
+- .max_segment_size = dev->bounce_size,
++ .max_segment_size = BOUNCE_SIZE,
+ .dma_alignment = dev->blk_size - 1,
+ .features = BLK_FEAT_WRITE_CACHE |
+ BLK_FEAT_ROTATIONAL,
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index a1153ada74d206..0a60660fc8ce80 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -553,6 +553,9 @@ static const char *btbcm_get_board_name(struct device *dev)
+
+ /* get rid of any '/' in the compatible string */
+ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
++ if (!board_type)
++ return NULL;
++
+ strreplace(board_type, '/', '-');
+
+ return board_type;
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index a028984f27829c..84a1ad61c4ad5f 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1336,13 +1336,12 @@ static void btnxpuart_tx_work(struct work_struct *work)
+
+ while ((skb = nxp_dequeue(nxpdev))) {
+ len = serdev_device_write_buf(serdev, skb->data, skb->len);
+- serdev_device_wait_until_sent(serdev, 0);
+ hdev->stat.byte_tx += len;
+
+ skb_pull(skb, len);
+ if (skb->len > 0) {
+ skb_queue_head(&nxpdev->txq, skb);
+- break;
++ continue;
+ }
+
+ switch (hci_skb_pkt_type(skb)) {
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 0bcb44cf7b31d7..0a6ca6dfb94841 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1351,12 +1351,14 @@ int btrtl_setup_realtek(struct hci_dev *hdev)
+
+ btrtl_set_quirks(hdev, btrtl_dev);
+
+- hci_set_hw_info(hdev,
++ if (btrtl_dev->ic_info) {
++ hci_set_hw_info(hdev,
+ "RTL lmp_subver=%u hci_rev=%u hci_ver=%u hci_bus=%u",
+ btrtl_dev->ic_info->lmp_subver,
+ btrtl_dev->ic_info->hci_rev,
+ btrtl_dev->ic_info->hci_ver,
+ btrtl_dev->ic_info->hci_bus);
++ }
+
+ btrtl_free(btrtl_dev);
+ return ret;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 0c85c981a8334a..258a5cb6f27afe 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2632,8 +2632,15 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ struct btmtk_data *btmtk_data = hci_get_priv(data->hdev);
+ int err;
+
++ /*
++ * The function usb_driver_claim_interface() is documented to need
++ * locks held if it's not called from a probe routine. The code here
++ * is called from the hci_power_on workqueue, so grab the lock.
++ */
++ device_lock(&btmtk_data->isopkt_intf->dev);
+ err = usb_driver_claim_interface(&btusb_driver,
+ btmtk_data->isopkt_intf, data);
++ device_unlock(&btmtk_data->isopkt_intf->dev);
+ if (err < 0) {
+ btmtk_data->isopkt_intf = NULL;
+ bt_dev_err(data->hdev, "Failed to claim iso interface");
+diff --git a/drivers/cdx/Makefile b/drivers/cdx/Makefile
+index 749a3295c2bdc1..3ca7068a305256 100644
+--- a/drivers/cdx/Makefile
++++ b/drivers/cdx/Makefile
+@@ -5,7 +5,7 @@
+ # Copyright (C) 2022-2023, Advanced Micro Devices, Inc.
+ #
+
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CDX_BUS
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"CDX_BUS"'
+
+ obj-$(CONFIG_CDX_BUS) += cdx.o controller/
+
+diff --git a/drivers/char/ipmi/ipmb_dev_int.c b/drivers/char/ipmi/ipmb_dev_int.c
+index 7296127181eca3..8a14fd0291d89b 100644
+--- a/drivers/char/ipmi/ipmb_dev_int.c
++++ b/drivers/char/ipmi/ipmb_dev_int.c
+@@ -321,6 +321,9 @@ static int ipmb_probe(struct i2c_client *client)
+ ipmb_dev->miscdev.name = devm_kasprintf(&client->dev, GFP_KERNEL,
+ "%s%d", "ipmb-",
+ client->adapter->nr);
++ if (!ipmb_dev->miscdev.name)
++ return -ENOMEM;
++
+ ipmb_dev->miscdev.fops = &ipmb_fops;
+ ipmb_dev->miscdev.parent = &client->dev;
+ ret = misc_register(&ipmb_dev->miscdev);
+diff --git a/drivers/char/ipmi/ssif_bmc.c b/drivers/char/ipmi/ssif_bmc.c
+index a14fafc583d4d8..310f17dd9511a5 100644
+--- a/drivers/char/ipmi/ssif_bmc.c
++++ b/drivers/char/ipmi/ssif_bmc.c
+@@ -292,7 +292,6 @@ static void complete_response(struct ssif_bmc_ctx *ssif_bmc)
+ ssif_bmc->nbytes_processed = 0;
+ ssif_bmc->remain_len = 0;
+ ssif_bmc->busy = false;
+- memset(&ssif_bmc->part_buf, 0, sizeof(struct ssif_part_buffer));
+ wake_up_all(&ssif_bmc->wait_queue);
+ }
+
+@@ -744,9 +743,11 @@ static void on_stop_event(struct ssif_bmc_ctx *ssif_bmc, u8 *val)
+ ssif_bmc->aborting = true;
+ }
+ } else if (ssif_bmc->state == SSIF_RES_SENDING) {
+- if (ssif_bmc->is_singlepart_read || ssif_bmc->block_num == 0xFF)
++ if (ssif_bmc->is_singlepart_read || ssif_bmc->block_num == 0xFF) {
++ memset(&ssif_bmc->part_buf, 0, sizeof(struct ssif_part_buffer));
+ /* Invalidate response buffer to denote it is sent */
+ complete_response(ssif_bmc);
++ }
+ ssif_bmc->state = SSIF_READY;
+ }
+
+diff --git a/drivers/clk/analogbits/wrpll-cln28hpc.c b/drivers/clk/analogbits/wrpll-cln28hpc.c
+index 65d422a588e1f1..9d178afc73bdd1 100644
+--- a/drivers/clk/analogbits/wrpll-cln28hpc.c
++++ b/drivers/clk/analogbits/wrpll-cln28hpc.c
+@@ -292,7 +292,7 @@ int wrpll_configure_for_rate(struct wrpll_cfg *c, u32 target_rate,
+ vco = vco_pre * f;
+ }
+
+- delta = abs(target_rate - vco);
++ delta = abs(target_vco_rate - vco);
+ if (delta < best_delta) {
+ best_delta = delta;
+ best_r = r;
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index d02451f951cf05..5b4ab94193c2b0 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -5391,8 +5391,10 @@ const char *of_clk_get_parent_name(const struct device_node *np, int index)
+ count++;
+ }
+ /* We went off the end of 'clock-indices' without finding it */
+- if (of_property_present(clkspec.np, "clock-indices") && !found)
++ if (of_property_present(clkspec.np, "clock-indices") && !found) {
++ of_node_put(clkspec.np);
+ return NULL;
++ }
+
+ if (of_property_read_string_index(clkspec.np, "clock-output-names",
+ index,
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 516dbd170c8a35..fb18f507f12135 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -399,8 +399,9 @@ static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_r
+
+ static const char * const imx8mp_clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out",
+ "dummy", "dummy", "gpu_pll_out", "vpu_pll_out",
+- "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3",
+- "dummy", "dummy", "osc_24m", "dummy", "osc_32k"};
++ "arm_pll_out", "sys_pll1_out", "sys_pll2_out",
++ "sys_pll3_out", "dummy", "dummy", "osc_24m",
++ "dummy", "osc_32k"};
+
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index c6a9bc8ecc1fc7..c5f358a75f307b 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -15,6 +15,11 @@
+
+ #include "clk.h"
+
++#define IMX93_CLK_END 208
++
++#define PLAT_IMX93 BIT(0)
++#define PLAT_IMX91 BIT(1)
++
+ enum clk_sel {
+ LOW_SPEED_IO_SEL,
+ NON_IO_SEL,
+@@ -33,6 +38,7 @@ static u32 share_count_sai2;
+ static u32 share_count_sai3;
+ static u32 share_count_mub;
+ static u32 share_count_pdm;
++static u32 share_count_spdif;
+
+ static const char * const a55_core_sels[] = {"a55_alt", "arm_pll"};
+ static const char *parent_names[MAX_SEL][4] = {
+@@ -53,6 +59,7 @@ static const struct imx93_clk_root {
+ u32 off;
+ enum clk_sel sel;
+ unsigned long flags;
++ unsigned long plat;
+ } root_array[] = {
+ /* a55/m33/bus critical clk for system run */
+ { IMX93_CLK_A55_PERIPH, "a55_periph_root", 0x0000, FAST_SEL, CLK_IS_CRITICAL },
+@@ -63,9 +70,9 @@ static const struct imx93_clk_root {
+ { IMX93_CLK_BUS_AON, "bus_aon_root", 0x0300, LOW_SPEED_IO_SEL, CLK_IS_CRITICAL },
+ { IMX93_CLK_WAKEUP_AXI, "wakeup_axi_root", 0x0380, FAST_SEL, CLK_IS_CRITICAL },
+ { IMX93_CLK_SWO_TRACE, "swo_trace_root", 0x0400, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_M33_SYSTICK, "m33_systick_root", 0x0480, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_FLEXIO1, "flexio1_root", 0x0500, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_FLEXIO2, "flexio2_root", 0x0580, LOW_SPEED_IO_SEL, },
++ { IMX93_CLK_M33_SYSTICK, "m33_systick_root", 0x0480, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_FLEXIO1, "flexio1_root", 0x0500, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_FLEXIO2, "flexio2_root", 0x0580, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_LPTMR1, "lptmr1_root", 0x0700, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_LPTMR2, "lptmr2_root", 0x0780, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_TPM2, "tpm2_root", 0x0880, TPM_SEL, },
+@@ -120,15 +127,15 @@ static const struct imx93_clk_root {
+ { IMX93_CLK_HSIO_ACSCAN_80M, "hsio_acscan_80m_root", 0x1f80, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_HSIO_ACSCAN_480M, "hsio_acscan_480m_root", 0x2000, MISC_SEL, },
+ { IMX93_CLK_NIC_AXI, "nic_axi_root", 0x2080, FAST_SEL, CLK_IS_CRITICAL, },
+- { IMX93_CLK_ML_APB, "ml_apb_root", 0x2180, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_ML, "ml_root", 0x2200, FAST_SEL, },
++ { IMX93_CLK_ML_APB, "ml_apb_root", 0x2180, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ML, "ml_root", 0x2200, FAST_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_MEDIA_AXI, "media_axi_root", 0x2280, FAST_SEL, },
+ { IMX93_CLK_MEDIA_APB, "media_apb_root", 0x2300, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_MEDIA_LDB, "media_ldb_root", 0x2380, VIDEO_SEL, },
++ { IMX93_CLK_MEDIA_LDB, "media_ldb_root", 0x2380, VIDEO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_MEDIA_DISP_PIX, "media_disp_pix_root", 0x2400, VIDEO_SEL, },
+ { IMX93_CLK_CAM_PIX, "cam_pix_root", 0x2480, VIDEO_SEL, },
+- { IMX93_CLK_MIPI_TEST_BYTE, "mipi_test_byte_root", 0x2500, VIDEO_SEL, },
+- { IMX93_CLK_MIPI_PHY_CFG, "mipi_phy_cfg_root", 0x2580, VIDEO_SEL, },
++ { IMX93_CLK_MIPI_TEST_BYTE, "mipi_test_byte_root", 0x2500, VIDEO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_MIPI_PHY_CFG, "mipi_phy_cfg_root", 0x2580, VIDEO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_ADC, "adc_root", 0x2700, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_PDM, "pdm_root", 0x2780, AUDIO_SEL, },
+ { IMX93_CLK_TSTMR1, "tstmr1_root", 0x2800, LOW_SPEED_IO_SEL, },
+@@ -137,13 +144,16 @@ static const struct imx93_clk_root {
+ { IMX93_CLK_MQS2, "mqs2_root", 0x2980, AUDIO_SEL, },
+ { IMX93_CLK_AUDIO_XCVR, "audio_xcvr_root", 0x2a00, NON_IO_SEL, },
+ { IMX93_CLK_SPDIF, "spdif_root", 0x2a80, AUDIO_SEL, },
+- { IMX93_CLK_ENET, "enet_root", 0x2b00, NON_IO_SEL, },
+- { IMX93_CLK_ENET_TIMER1, "enet_timer1_root", 0x2b80, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_ENET_TIMER2, "enet_timer2_root", 0x2c00, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_ENET_REF, "enet_ref_root", 0x2c80, NON_IO_SEL, },
+- { IMX93_CLK_ENET_REF_PHY, "enet_ref_phy_root", 0x2d00, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_I3C1_SLOW, "i3c1_slow_root", 0x2d80, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_I3C2_SLOW, "i3c2_slow_root", 0x2e00, LOW_SPEED_IO_SEL, },
++ { IMX93_CLK_ENET, "enet_root", 0x2b00, NON_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_TIMER1, "enet_timer1_root", 0x2b80, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_TIMER2, "enet_timer2_root", 0x2c00, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_REF, "enet_ref_root", 0x2c80, NON_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_REF_PHY, "enet_ref_phy_root", 0x2d00, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX91_CLK_ENET1_QOS_TSN, "enet1_qos_tsn_root", 0x2b00, NON_IO_SEL, 0, PLAT_IMX91, },
++ { IMX91_CLK_ENET_TIMER, "enet_timer_root", 0x2b80, LOW_SPEED_IO_SEL, 0, PLAT_IMX91, },
++ { IMX91_CLK_ENET2_REGULAR, "enet2_regular_root", 0x2c80, NON_IO_SEL, 0, PLAT_IMX91, },
++ { IMX93_CLK_I3C1_SLOW, "i3c1_slow_root", 0x2d80, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_I3C2_SLOW, "i3c2_slow_root", 0x2e00, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_USB_PHY_BURUNIN, "usb_phy_root", 0x2e80, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_PAL_CAME_SCAN, "pal_came_scan_root", 0x2f00, MISC_SEL, }
+ };
+@@ -155,6 +165,7 @@ static const struct imx93_clk_ccgr {
+ u32 off;
+ unsigned long flags;
+ u32 *shared_count;
++ unsigned long plat;
+ } ccgr_array[] = {
+ { IMX93_CLK_A55_GATE, "a55_alt", "a55_alt_root", 0x8000, },
+ /* M33 critical clk for system run */
+@@ -167,10 +178,10 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_WDOG5_GATE, "wdog5", "osc_24m", 0x8400, },
+ { IMX93_CLK_SEMA1_GATE, "sema1", "bus_aon_root", 0x8440, },
+ { IMX93_CLK_SEMA2_GATE, "sema2", "bus_wakeup_root", 0x8480, },
+- { IMX93_CLK_MU1_A_GATE, "mu1_a", "bus_aon_root", 0x84c0, CLK_IGNORE_UNUSED },
+- { IMX93_CLK_MU2_A_GATE, "mu2_a", "bus_wakeup_root", 0x84c0, CLK_IGNORE_UNUSED },
+- { IMX93_CLK_MU1_B_GATE, "mu1_b", "bus_aon_root", 0x8500, 0, &share_count_mub },
+- { IMX93_CLK_MU2_B_GATE, "mu2_b", "bus_wakeup_root", 0x8500, 0, &share_count_mub },
++ { IMX93_CLK_MU1_A_GATE, "mu1_a", "bus_aon_root", 0x84c0, CLK_IGNORE_UNUSED, NULL, PLAT_IMX93 },
++ { IMX93_CLK_MU2_A_GATE, "mu2_a", "bus_wakeup_root", 0x84c0, CLK_IGNORE_UNUSED, NULL, PLAT_IMX93 },
++ { IMX93_CLK_MU1_B_GATE, "mu1_b", "bus_aon_root", 0x8500, 0, &share_count_mub, PLAT_IMX93 },
++ { IMX93_CLK_MU2_B_GATE, "mu2_b", "bus_wakeup_root", 0x8500, 0, &share_count_mub, PLAT_IMX93 },
+ { IMX93_CLK_EDMA1_GATE, "edma1", "m33_root", 0x8540, },
+ { IMX93_CLK_EDMA2_GATE, "edma2", "wakeup_axi_root", 0x8580, },
+ { IMX93_CLK_FLEXSPI1_GATE, "flexspi1", "flexspi1_root", 0x8640, },
+@@ -178,8 +189,8 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_GPIO2_GATE, "gpio2", "bus_wakeup_root", 0x88c0, },
+ { IMX93_CLK_GPIO3_GATE, "gpio3", "bus_wakeup_root", 0x8900, },
+ { IMX93_CLK_GPIO4_GATE, "gpio4", "bus_wakeup_root", 0x8940, },
+- { IMX93_CLK_FLEXIO1_GATE, "flexio1", "flexio1_root", 0x8980, },
+- { IMX93_CLK_FLEXIO2_GATE, "flexio2", "flexio2_root", 0x89c0, },
++ { IMX93_CLK_FLEXIO1_GATE, "flexio1", "flexio1_root", 0x8980, 0, NULL, PLAT_IMX93},
++ { IMX93_CLK_FLEXIO2_GATE, "flexio2", "flexio2_root", 0x89c0, 0, NULL, PLAT_IMX93},
+ { IMX93_CLK_LPIT1_GATE, "lpit1", "bus_aon_root", 0x8a00, },
+ { IMX93_CLK_LPIT2_GATE, "lpit2", "bus_wakeup_root", 0x8a40, },
+ { IMX93_CLK_LPTMR1_GATE, "lptmr1", "lptmr1_root", 0x8a80, },
+@@ -228,10 +239,10 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_SAI3_GATE, "sai3", "sai3_root", 0x94c0, 0, &share_count_sai3},
+ { IMX93_CLK_SAI3_IPG, "sai3_ipg_clk", "bus_wakeup_root", 0x94c0, 0, &share_count_sai3},
+ { IMX93_CLK_MIPI_CSI_GATE, "mipi_csi", "media_apb_root", 0x9580, },
+- { IMX93_CLK_MIPI_DSI_GATE, "mipi_dsi", "media_apb_root", 0x95c0, },
+- { IMX93_CLK_LVDS_GATE, "lvds", "media_ldb_root", 0x9600, },
++ { IMX93_CLK_MIPI_DSI_GATE, "mipi_dsi", "media_apb_root", 0x95c0, 0, NULL, PLAT_IMX93 },
++ { IMX93_CLK_LVDS_GATE, "lvds", "media_ldb_root", 0x9600, 0, NULL, PLAT_IMX93 },
+ { IMX93_CLK_LCDIF_GATE, "lcdif", "media_apb_root", 0x9640, },
+- { IMX93_CLK_PXP_GATE, "pxp", "media_apb_root", 0x9680, },
++ { IMX93_CLK_PXP_GATE, "pxp", "media_apb_root", 0x9680, 0, NULL, PLAT_IMX93 },
+ { IMX93_CLK_ISI_GATE, "isi", "media_apb_root", 0x96c0, },
+ { IMX93_CLK_NIC_MEDIA_GATE, "nic_media", "media_axi_root", 0x9700, },
+ { IMX93_CLK_USB_CONTROLLER_GATE, "usb_controller", "hsio_root", 0x9a00, },
+@@ -242,10 +253,13 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_MQS1_GATE, "mqs1", "sai1_root", 0x9b00, },
+ { IMX93_CLK_MQS2_GATE, "mqs2", "sai3_root", 0x9b40, },
+ { IMX93_CLK_AUD_XCVR_GATE, "aud_xcvr", "audio_xcvr_root", 0x9b80, },
+- { IMX93_CLK_SPDIF_GATE, "spdif", "spdif_root", 0x9c00, },
++ { IMX93_CLK_SPDIF_IPG, "spdif_ipg_clk", "bus_wakeup_root", 0x9c00, 0, &share_count_spdif},
++ { IMX93_CLK_SPDIF_GATE, "spdif", "spdif_root", 0x9c00, 0, &share_count_spdif},
+ { IMX93_CLK_HSIO_32K_GATE, "hsio_32k", "osc_32k", 0x9dc0, },
+- { IMX93_CLK_ENET1_GATE, "enet1", "wakeup_axi_root", 0x9e00, },
+- { IMX93_CLK_ENET_QOS_GATE, "enet_qos", "wakeup_axi_root", 0x9e40, },
++ { IMX93_CLK_ENET1_GATE, "enet1", "wakeup_axi_root", 0x9e00, 0, NULL, PLAT_IMX93, },
++ { IMX93_CLK_ENET_QOS_GATE, "enet_qos", "wakeup_axi_root", 0x9e40, 0, NULL, PLAT_IMX93, },
++ { IMX91_CLK_ENET2_REGULAR_GATE, "enet2_regular", "wakeup_axi_root", 0x9e00, 0, NULL, PLAT_IMX91, },
++ { IMX91_CLK_ENET1_QOS_TSN_GATE, "enet1_qos_tsn", "wakeup_axi_root", 0x9e40, 0, NULL, PLAT_IMX91, },
+ /* Critical because clk accessed during CPU idle */
+ { IMX93_CLK_SYS_CNT_GATE, "sys_cnt", "osc_24m", 0x9e80, CLK_IS_CRITICAL},
+ { IMX93_CLK_TSTMR1_GATE, "tstmr1", "bus_aon_root", 0x9ec0, },
+@@ -265,6 +279,7 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ const struct imx93_clk_ccgr *ccgr;
+ void __iomem *base, *anatop_base;
+ int i, ret;
++ const unsigned long plat = (unsigned long)device_get_match_data(&pdev->dev);
+
+ clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ IMX93_CLK_END), GFP_KERNEL);
+@@ -314,17 +329,20 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+
+ for (i = 0; i < ARRAY_SIZE(root_array); i++) {
+ root = &root_array[i];
+- clks[root->clk] = imx93_clk_composite_flags(root->name,
+- parent_names[root->sel],
+- 4, base + root->off, 3,
+- root->flags);
++ if (!root->plat || root->plat & plat)
++ clks[root->clk] = imx93_clk_composite_flags(root->name,
++ parent_names[root->sel],
++ 4, base + root->off, 3,
++ root->flags);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(ccgr_array); i++) {
+ ccgr = &ccgr_array[i];
+- clks[ccgr->clk] = imx93_clk_gate(NULL, ccgr->name, ccgr->parent_name,
+- ccgr->flags, base + ccgr->off, 0, 1, 1, 3,
+- ccgr->shared_count);
++ if (!ccgr->plat || ccgr->plat & plat)
++ clks[ccgr->clk] = imx93_clk_gate(NULL,
++ ccgr->name, ccgr->parent_name,
++ ccgr->flags, base + ccgr->off, 0, 1, 1, 3,
++ ccgr->shared_count);
+ }
+
+ clks[IMX93_CLK_A55_SEL] = imx_clk_hw_mux2("a55_sel", base + 0x4820, 0, 1, a55_core_sels,
+@@ -354,7 +372,8 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ }
+
+ static const struct of_device_id imx93_clk_of_match[] = {
+- { .compatible = "fsl,imx93-ccm" },
++ { .compatible = "fsl,imx93-ccm", .data = (void *)PLAT_IMX93 },
++ { .compatible = "fsl,imx91-ccm", .data = (void *)PLAT_IMX91 },
+ { /* Sentinel */ },
+ };
+ MODULE_DEVICE_TABLE(of, imx93_clk_of_match);
+diff --git a/drivers/clk/qcom/camcc-x1e80100.c b/drivers/clk/qcom/camcc-x1e80100.c
+index 85e76c7712ad84..b73524ae64b1b2 100644
+--- a/drivers/clk/qcom/camcc-x1e80100.c
++++ b/drivers/clk/qcom/camcc-x1e80100.c
+@@ -2212,6 +2212,8 @@ static struct clk_branch cam_cc_sfe_0_fast_ahb_clk = {
+ },
+ };
+
++static struct gdsc cam_cc_titan_top_gdsc;
++
+ static struct gdsc cam_cc_bps_gdsc = {
+ .gdscr = 0x10004,
+ .en_rest_wait_val = 0x2,
+@@ -2221,6 +2223,7 @@ static struct gdsc cam_cc_bps_gdsc = {
+ .name = "cam_cc_bps_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2233,6 +2236,7 @@ static struct gdsc cam_cc_ife_0_gdsc = {
+ .name = "cam_cc_ife_0_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2245,6 +2249,7 @@ static struct gdsc cam_cc_ife_1_gdsc = {
+ .name = "cam_cc_ife_1_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2257,6 +2262,7 @@ static struct gdsc cam_cc_ipe_0_gdsc = {
+ .name = "cam_cc_ipe_0_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2269,6 +2275,7 @@ static struct gdsc cam_cc_sfe_0_gdsc = {
+ .name = "cam_cc_sfe_0_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index dc3aa7014c3ed1..c6692808a8228c 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -454,7 +454,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s0_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+@@ -470,7 +470,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s1_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+@@ -486,7 +486,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s2_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+@@ -502,7 +502,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s3_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+@@ -518,7 +518,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s4_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+@@ -534,7 +534,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s5_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+@@ -550,7 +550,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s6_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+@@ -566,7 +566,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s7_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+@@ -582,7 +582,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s0_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -598,7 +598,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s1_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -614,7 +614,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s2_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+@@ -630,7 +630,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s3_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -646,7 +646,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s4_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -662,7 +662,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s5_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -678,7 +678,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s6_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = {
+@@ -694,7 +694,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s7_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = {
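
The run of hunks above uniformly switches the QUP serial-engine clock sources
from clk_rcg2_shared_ops to clk_rcg2_ops. The two ops tables differ chiefly in
disable behavior: the shared variant parks the RCG on a safe configuration
while the clock is off, whereas the plain variant leaves the programmed rate
alone. A minimal userspace sketch of that vtable-swap pattern follows; every
name in it is hypothetical, not the real clk framework API.

    /*
     * Hypothetical model of the clk_ops vtable pattern: a "shared" ops
     * variant parks the clock on a safe rate when disabled, the plain
     * variant leaves the rate untouched.
     */
    #include <stdio.h>

    struct clk;

    struct clk_ops {
        void (*disable)(struct clk *clk);
    };

    struct clk {
        const char *name;
        unsigned long rate;
        const struct clk_ops *ops;
    };

    static void plain_disable(struct clk *clk)
    {
        /* Rate configuration is left as programmed. */
        printf("%s: disabled at %lu Hz\n", clk->name, clk->rate);
    }

    static void shared_disable(struct clk *clk)
    {
        /* Park on a safe source; a real driver would cache the old rate. */
        clk->rate = 19200000;
        printf("%s: parked at %lu Hz\n", clk->name, clk->rate);
    }

    static const struct clk_ops plain_ops  = { .disable = plain_disable };
    static const struct clk_ops shared_ops = { .disable = shared_disable };

    int main(void)
    {
        struct clk qup = { .name = "qup_s0", .rate = 100000000,
                           .ops = &plain_ops };  /* was &shared_ops */

        qup.ops->disable(&qup);
        (void)shared_ops;
        return 0;
    }
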
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 8ea25aa25dff04..7288af845434d8 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -6083,7 +6083,7 @@ static struct gdsc gcc_usb20_prim_gdsc = {
+ .pd = {
+ .name = "gcc_usb20_prim_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
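
The gcc-x1e80100 hunk keeps the USB2 GDSC in retention instead of letting it
power fully off. In the mainline gdsc driver the PWRSTS_* values are
OR-combinations of per-state bits; the sketch below models that mask with
assumed bit values to show what the OFF_ON -> RET_ON change permits.

    /* Illustrative GDSC power-state mask; bit values assumed. */
    #include <stdio.h>

    #define PWRSTS_OFF (1 << 0)
    #define PWRSTS_RET (1 << 1)
    #define PWRSTS_ON  (1 << 2)

    #define PWRSTS_OFF_ON (PWRSTS_OFF | PWRSTS_ON)
    #define PWRSTS_RET_ON (PWRSTS_RET | PWRSTS_ON)

    int main(void)
    {
        unsigned int pwrsts = PWRSTS_RET_ON;

        /* With RET_ON the deepest reachable state is retention, not off. */
        printf("may power off: %s\n", (pwrsts & PWRSTS_OFF) ? "yes" : "no");
        printf("may retain:    %s\n", (pwrsts & PWRSTS_RET) ? "yes" : "no");
        return 0;
    }
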
+diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c
+index 76285fbbdeaa2d..4b5d8b741e4e17 100644
+--- a/drivers/clk/ralink/clk-mtmips.c
++++ b/drivers/clk/ralink/clk-mtmips.c
+@@ -264,7 +264,6 @@ static int mtmips_register_pherip_clocks(struct device_node *np,
+ }
+
+ static struct mtmips_clk_fixed rt3883_fixed_clocks[] = {
+- CLK_FIXED("xtal", NULL, 40000000),
+ CLK_FIXED("periph", "xtal", 40000000)
+ };
+
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index 1b421b8097965b..0f27c33192e10d 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -981,7 +981,7 @@ static void __init cpg_mssr_reserved_exit(struct cpg_mssr_priv *priv)
+ static int __init cpg_mssr_reserved_init(struct cpg_mssr_priv *priv,
+ const struct cpg_mssr_info *info)
+ {
+- struct device_node *soc = of_find_node_by_path("/soc");
++ struct device_node *soc __free(device_node) = of_find_node_by_path("/soc");
+ struct device_node *node;
+ uint32_t args[MAX_PHANDLE_ARGS];
+ unsigned int *ids = NULL;
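
The renesas-cpg-mssr hunk converts the /soc lookup to the scope-based
__free(device_node) helper so the node reference is dropped automatically when
the variable leaves scope. The same mechanism can be modeled with the
GCC/Clang cleanup attribute; everything below is a hypothetical stand-in for
of_find_node_by_path()/of_node_put().

    /* Userspace sketch of scope-based cleanup (GCC/Clang extension). */
    #include <stdio.h>

    struct device_node { int refcount; };

    static void put_node(struct device_node **np)
    {
        if (*np) {
            (*np)->refcount--;
            printf("put: refcount=%d\n", (*np)->refcount);
        }
    }

    #define __free_node __attribute__((cleanup(put_node)))

    int main(void)
    {
        struct device_node soc = { .refcount = 1 };

        {
            /* The reference is released automatically at end of scope. */
            struct device_node *np __free_node = &soc;
            np->refcount++;
            printf("got: refcount=%d\n", np->refcount);
        }
        return 0;
    }
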
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
+index c255dba2c96db3..6727a3e30a1297 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
+@@ -535,11 +535,11 @@ static SUNXI_CCU_M_WITH_MUX_GATE(de_clk, "de", de_parents,
+ CLK_SET_RATE_PARENT);
+
+ /*
+- * DSI output seems to work only when PLL_MIPI selected. Set it and prevent
+- * the mux from reparenting.
++ * Experiments showed that RGB output requires pll-video0-2x, while DSI
++ * requires pll-mipi. With the wrong clock the output does not work and
++ * the screen stays blank.
++ * sun50i-a64.dtsi assigns pll-mipi as the TCON0 parent by default.
+ */
+-#define SUN50I_A64_TCON0_CLK_REG 0x118
+-
+ static const char * const tcon0_parents[] = { "pll-mipi", "pll-video0-2x" };
+ static const u8 tcon0_table[] = { 0, 2, };
+ static SUNXI_CCU_MUX_TABLE_WITH_GATE_CLOSEST(tcon0_clk, "tcon0", tcon0_parents,
+@@ -959,11 +959,6 @@ static int sun50i_a64_ccu_probe(struct platform_device *pdev)
+
+ writel(0x515, reg + SUN50I_A64_PLL_MIPI_REG);
+
+- /* Set PLL MIPI as parent for TCON0 */
+- val = readl(reg + SUN50I_A64_TCON0_CLK_REG);
+- val &= ~GENMASK(26, 24);
+- writel(val | (0 << 24), reg + SUN50I_A64_TCON0_CLK_REG);
+-
+ ret = devm_sunxi_ccu_probe(&pdev->dev, reg, &sun50i_a64_ccu_desc);
+ if (ret)
+ return ret;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a64.h b/drivers/clk/sunxi-ng/ccu-sun50i-a64.h
+index a8c11c0b4e0676..dfba88a5ad0f7c 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a64.h
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a64.h
+@@ -21,7 +21,6 @@
+
+ /* PLL_VIDEO0 exported for HDMI PHY */
+
+-#define CLK_PLL_VIDEO0_2X 8
+ #define CLK_PLL_VE 9
+ #define CLK_PLL_DDR0 10
+
+@@ -32,7 +31,6 @@
+ #define CLK_PLL_PERIPH1_2X 14
+ #define CLK_PLL_VIDEO1 15
+ #define CLK_PLL_GPU 16
+-#define CLK_PLL_MIPI 17
+ #define CLK_PLL_HSIC 18
+ #define CLK_PLL_DE 19
+ #define CLK_PLL_DDR1 20
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 1015fab9525157..4c9555fc61844d 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -657,7 +657,7 @@ static struct ccu_div apb_pclk = {
+ .hw.init = CLK_HW_INIT_PARENTS_DATA("apb-pclk",
+ apb_parents,
+ &ccu_div_ops,
+- 0),
++ CLK_IGNORE_UNUSED),
+ },
+ };
+
+@@ -794,13 +794,13 @@ static CCU_GATE(CLK_X2X_CPUSYS, x2x_cpusys_clk, "x2x-cpusys", axi4_cpusys2_aclk_
+ 0x134, BIT(7), 0);
+ static CCU_GATE(CLK_CPU2AON_X2H, cpu2aon_x2h_clk, "cpu2aon-x2h", axi_aclk_pd, 0x138, BIT(8), 0);
+ static CCU_GATE(CLK_CPU2PERI_X2H, cpu2peri_x2h_clk, "cpu2peri-x2h", axi4_cpusys2_aclk_pd,
+- 0x140, BIT(9), 0);
++ 0x140, BIT(9), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB1_HCLK, perisys_apb1_hclk, "perisys-apb1-hclk", perisys_ahb_hclk_pd,
+ 0x150, BIT(9), 0);
+ static CCU_GATE(CLK_PERISYS_APB2_HCLK, perisys_apb2_hclk, "perisys-apb2-hclk", perisys_ahb_hclk_pd,
+- 0x150, BIT(10), 0);
++ 0x150, BIT(10), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB3_HCLK, perisys_apb3_hclk, "perisys-apb3-hclk", perisys_ahb_hclk_pd,
+- 0x150, BIT(11), 0);
++ 0x150, BIT(11), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB4_HCLK, perisys_apb4_hclk, "perisys-apb4-hclk", perisys_ahb_hclk_pd,
+ 0x150, BIT(12), 0);
+ static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0);
+@@ -896,7 +896,6 @@ static struct ccu_common *th1520_div_clks[] = {
+ &vo_axi_clk.common,
+ &vp_apb_clk.common,
+ &vp_axi_clk.common,
+- &cpu2vp_clk.common,
+ &venc_clk.common,
+ &dpu0_clk.common,
+ &dpu1_clk.common,
+@@ -916,6 +915,7 @@ static struct ccu_common *th1520_gate_clks[] = {
+ &bmu_clk.common,
+ &cpu2aon_x2h_clk.common,
+ &cpu2peri_x2h_clk.common,
++ &cpu2vp_clk.common,
+ &perisys_apb1_hclk.common,
+ &perisys_apb2_hclk.common,
+ &perisys_apb3_hclk.common,
+@@ -1048,7 +1048,8 @@ static int th1520_clk_probe(struct platform_device *pdev)
+ hw = devm_clk_hw_register_gate_parent_data(dev,
+ cg->common.hw.init->name,
+ cg->common.hw.init->parent_data,
+- 0, base + cg->common.cfg0,
++ cg->common.hw.init->flags,
++ base + cg->common.cfg0,
+ ffs(cg->enable) - 1, 0, NULL);
+ if (IS_ERR(hw))
+ return PTR_ERR(hw);
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 0f04feb6cafaf1..47e910c22a80bd 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -626,7 +626,14 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
+ #endif
+
+ #ifdef CONFIG_ACPI_CPPC_LIB
+-static u64 get_max_boost_ratio(unsigned int cpu)
++/*
++ * get_max_boost_ratio: Computes the max_boost_ratio as the ratio
++ * between the highest_perf and the nominal_perf.
++ *
++ * Returns the max_boost_ratio for @cpu. Also returns the CPPC nominal
++ * frequency via @nominal_freq if it is a non-NULL pointer.
++ */
++static u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
+ {
+ struct cppc_perf_caps perf_caps;
+ u64 highest_perf, nominal_perf;
+@@ -655,6 +662,9 @@ static u64 get_max_boost_ratio(unsigned int cpu)
+
+ nominal_perf = perf_caps.nominal_perf;
+
++ if (nominal_freq)
++ *nominal_freq = perf_caps.nominal_freq;
++
+ if (!highest_perf || !nominal_perf) {
+ pr_debug("CPU%d: highest or nominal performance missing\n", cpu);
+ return 0;
+@@ -667,8 +677,12 @@ static u64 get_max_boost_ratio(unsigned int cpu)
+
+ return div_u64(highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
+ }
++
+ #else
+-static inline u64 get_max_boost_ratio(unsigned int cpu) { return 0; }
++static inline u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
++{
++ return 0;
++}
+ #endif
+
+ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+@@ -678,9 +692,9 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ struct acpi_cpufreq_data *data;
+ unsigned int cpu = policy->cpu;
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
++ u64 max_boost_ratio, nominal_freq = 0;
+ unsigned int valid_states = 0;
+ unsigned int result = 0;
+- u64 max_boost_ratio;
+ unsigned int i;
+ #ifdef CONFIG_SMP
+ static int blacklisted;
+@@ -830,16 +844,20 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ }
+ freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
+
+- max_boost_ratio = get_max_boost_ratio(cpu);
++ max_boost_ratio = get_max_boost_ratio(cpu, &nominal_freq);
+ if (max_boost_ratio) {
+- unsigned int freq = freq_table[0].frequency;
++ unsigned int freq = nominal_freq;
+
+ /*
+- * Because the loop above sorts the freq_table entries in the
+- * descending order, freq is the maximum frequency in the table.
+- * Assume that it corresponds to the CPPC nominal frequency and
+- * use it to set cpuinfo.max_freq.
++ * The loop above sorts the freq_table entries in the
++ * descending order. If ACPI CPPC has not advertised
++ * the nominal frequency (this is possible in CPPC
++ * revisions prior to 3), then use the first entry in
++ * the pstate table as a proxy for nominal frequency.
+ */
++ if (!freq)
++ freq = freq_table[0].frequency;
++
+ policy->cpuinfo.max_freq = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT;
+ } else {
+ /*
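
The acpi-cpufreq hunks feed the CPPC nominal frequency into the boost scaling
instead of assuming the first freq_table entry is nominal. The arithmetic
visible above is plain fixed-point: ratio = (highest_perf <<
SCHED_CAPACITY_SHIFT) / nominal_perf, then max_freq = nominal_freq * ratio >>
SCHED_CAPACITY_SHIFT. A self-contained check of that math, with sample CPPC
values assumed:

    /* Fixed-point boost-ratio math from the hunk; sample values assumed. */
    #include <stdio.h>
    #include <stdint.h>

    #define SCHED_CAPACITY_SHIFT 10

    int main(void)
    {
        uint64_t highest_perf = 300;   /* assumed CPPC highest_perf */
        uint64_t nominal_perf = 200;   /* assumed CPPC nominal_perf */
        uint64_t nominal_freq = 2000;  /* assumed nominal freq, MHz */

        uint64_t ratio = (highest_perf << SCHED_CAPACITY_SHIFT) / nominal_perf;
        uint64_t max_freq = (nominal_freq * ratio) >> SCHED_CAPACITY_SHIFT;

        /* 300/200 = 1.5x boost -> 3000 MHz */
        printf("max_freq = %llu MHz\n", (unsigned long long)max_freq);
        return 0;
    }
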
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 900d6844c43d3f..e7399780638393 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -143,14 +143,12 @@ static unsigned long qcom_lmh_get_throttle_freq(struct qcom_cpufreq_data *data)
+ }
+
+ /* Get the frequency requested by the cpufreq core for the CPU */
+-static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
++static unsigned int qcom_cpufreq_get_freq(struct cpufreq_policy *policy)
+ {
+ struct qcom_cpufreq_data *data;
+ const struct qcom_cpufreq_soc_data *soc_data;
+- struct cpufreq_policy *policy;
+ unsigned int index;
+
+- policy = cpufreq_cpu_get_raw(cpu);
+ if (!policy)
+ return 0;
+
+@@ -163,12 +161,10 @@ static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
+ return policy->freq_table[index].frequency;
+ }
+
+-static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
++static unsigned int __qcom_cpufreq_hw_get(struct cpufreq_policy *policy)
+ {
+ struct qcom_cpufreq_data *data;
+- struct cpufreq_policy *policy;
+
+- policy = cpufreq_cpu_get_raw(cpu);
+ if (!policy)
+ return 0;
+
+@@ -177,7 +173,12 @@ static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
+ if (data->throttle_irq >= 0)
+ return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
+
+- return qcom_cpufreq_get_freq(cpu);
++ return qcom_cpufreq_get_freq(policy);
++}
++
++static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
++{
++ return __qcom_cpufreq_hw_get(cpufreq_cpu_get_raw(cpu));
+ }
+
+ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
+@@ -363,7 +364,7 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
+ * If h/w throttled frequency is higher than what cpufreq has requested
+ * for, then stop polling and switch back to interrupt mechanism.
+ */
+- if (throttled_freq >= qcom_cpufreq_get_freq(cpu))
++ if (throttled_freq >= qcom_cpufreq_get_freq(cpufreq_cpu_get_raw(cpu)))
+ enable_irq(data->throttle_irq);
+ else
+ mod_delayed_work(system_highpri_wq, &data->throttle_work,
+@@ -441,7 +442,6 @@ static int qcom_cpufreq_hw_lmh_init(struct cpufreq_policy *policy, int index)
+ return data->throttle_irq;
+
+ data->cancel_throttle = false;
+- data->policy = policy;
+
+ mutex_init(&data->throttle_lock);
+ INIT_DEFERRABLE_WORK(&data->throttle_work, qcom_lmh_dcvs_poll);
+@@ -552,6 +552,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+
+ policy->driver_data = data;
+ policy->dvfs_possible_from_any_cpu = true;
++ data->policy = policy;
+
+ ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy);
+ if (ret) {
+@@ -622,11 +623,24 @@ static unsigned long qcom_cpufreq_hw_recalc_rate(struct clk_hw *hw, unsigned lon
+ {
+ struct qcom_cpufreq_data *data = container_of(hw, struct qcom_cpufreq_data, cpu_clk);
+
+- return qcom_lmh_get_throttle_freq(data);
++ return __qcom_cpufreq_hw_get(data->policy) * HZ_PER_KHZ;
++}
++
++/*
++ * Since we cannot determine the rate closest to the target rate, let's just
++ * return the actual rate at which the clock is running. This is needed to
++ * make the clk_set_rate() API work properly.
++ */
++static int qcom_cpufreq_hw_determine_rate(struct clk_hw *hw, struct clk_rate_request *req)
++{
++ req->rate = qcom_cpufreq_hw_recalc_rate(hw, 0);
++
++ return 0;
+ }
+
+ static const struct clk_ops qcom_cpufreq_hw_clk_ops = {
+ .recalc_rate = qcom_cpufreq_hw_recalc_rate,
++ .determine_rate = qcom_cpufreq_hw_determine_rate,
+ };
+
+ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
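
The new determine_rate callback in qcom-cpufreq-hw simply reports the rate the
clock is already running at, since the hardware exposes no closest-match
logic; per the comment in the hunk, clk_set_rate() needs such a callback to
work. A tiny model of the pattern, names hypothetical:

    /* Minimal model: determine_rate echoes the current hardware rate. */
    #include <stdio.h>

    struct clk_rate_request { unsigned long rate; };

    static unsigned long current_hw_rate(void)
    {
        return 1804800000UL;  /* assumed: rate read back from hardware */
    }

    static int determine_rate(struct clk_rate_request *req)
    {
        /* No closest-match logic available: report the actual rate. */
        req->rate = current_hw_rate();
        return 0;
    }

    int main(void)
    {
        struct clk_rate_request req = { .rate = 2000000000UL };

        determine_rate(&req);
        printf("granted rate: %lu Hz\n", req.rate);
        return 0;
    }
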
+diff --git a/drivers/crypto/caam/blob_gen.c b/drivers/crypto/caam/blob_gen.c
+index 87781c1534ee5b..079a22cc9f02be 100644
+--- a/drivers/crypto/caam/blob_gen.c
++++ b/drivers/crypto/caam/blob_gen.c
+@@ -2,6 +2,7 @@
+ /*
+ * Copyright (C) 2015 Pengutronix, Steffen Trumtrar <kernel@pengutronix.de>
+ * Copyright (C) 2021 Pengutronix, Ahmad Fatoum <kernel@pengutronix.de>
++ * Copyright 2024 NXP
+ */
+
+ #define pr_fmt(fmt) "caam blob_gen: " fmt
+@@ -104,7 +105,7 @@ int caam_process_blob(struct caam_blob_priv *priv,
+ }
+
+ ctrlpriv = dev_get_drvdata(jrdev->parent);
+- moo = FIELD_GET(CSTA_MOO, rd_reg32(&ctrlpriv->ctrl->perfmon.status));
++ moo = FIELD_GET(CSTA_MOO, rd_reg32(&ctrlpriv->jr[0]->perfmon.status));
+ if (moo != CSTA_MOO_SECURE && moo != CSTA_MOO_TRUSTED)
+ dev_warn(jrdev,
+ "using insecure test key, enable HAB to use unique device key!\n");
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 410c83712e2851..30c2b1a64695c0 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -37,6 +37,7 @@ struct sec_aead_req {
+ u8 *a_ivin;
+ dma_addr_t a_ivin_dma;
+ struct aead_request *aead_req;
++ bool fallback;
+ };
+
+ /* SEC request of Crypto */
+@@ -90,9 +91,7 @@ struct sec_auth_ctx {
+ dma_addr_t a_key_dma;
+ u8 *a_key;
+ u8 a_key_len;
+- u8 mac_len;
+ u8 a_alg;
+- bool fallback;
+ struct crypto_shash *hash_tfm;
+ struct crypto_aead *fallback_aead_tfm;
+ };
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 0558f98e221f63..a9b1b9b0b03bf7 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -948,15 +948,14 @@ static int sec_aead_mac_init(struct sec_aead_req *req)
+ struct aead_request *aead_req = req->aead_req;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+ size_t authsize = crypto_aead_authsize(tfm);
+- u8 *mac_out = req->out_mac;
+ struct scatterlist *sgl = aead_req->src;
++ u8 *mac_out = req->out_mac;
+ size_t copy_size;
+ off_t skip_size;
+
+ /* Copy input mac */
+ skip_size = aead_req->assoclen + aead_req->cryptlen - authsize;
+- copy_size = sg_pcopy_to_buffer(sgl, sg_nents(sgl), mac_out,
+- authsize, skip_size);
++ copy_size = sg_pcopy_to_buffer(sgl, sg_nents(sgl), mac_out, authsize, skip_size);
+ if (unlikely(copy_size != authsize))
+ return -EINVAL;
+
+@@ -1120,10 +1119,7 @@ static int sec_aead_setauthsize(struct crypto_aead *aead, unsigned int authsize)
+ struct sec_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+
+- if (unlikely(a_ctx->fallback_aead_tfm))
+- return crypto_aead_setauthsize(a_ctx->fallback_aead_tfm, authsize);
+-
+- return 0;
++ return crypto_aead_setauthsize(a_ctx->fallback_aead_tfm, authsize);
+ }
+
+ static int sec_aead_fallback_setkey(struct sec_auth_ctx *a_ctx,
+@@ -1139,7 +1135,6 @@ static int sec_aead_fallback_setkey(struct sec_auth_ctx *a_ctx,
+ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ const u32 keylen, const enum sec_hash_alg a_alg,
+ const enum sec_calg c_alg,
+- const enum sec_mac_len mac_len,
+ const enum sec_cmode c_mode)
+ {
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+@@ -1151,7 +1146,6 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+
+ ctx->a_ctx.a_alg = a_alg;
+ ctx->c_ctx.c_alg = c_alg;
+- ctx->a_ctx.mac_len = mac_len;
+ c_ctx->c_mode = c_mode;
+
+ if (c_mode == SEC_CMODE_CCM || c_mode == SEC_CMODE_GCM) {
+@@ -1162,13 +1156,7 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ }
+ memcpy(c_ctx->c_key, key, keylen);
+
+- if (unlikely(a_ctx->fallback_aead_tfm)) {
+- ret = sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
++ return sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+ }
+
+ ret = crypto_authenc_extractkeys(&keys, key, keylen);
+@@ -1187,10 +1175,15 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ goto bad_key;
+ }
+
+- if ((ctx->a_ctx.mac_len & SEC_SQE_LEN_RATE_MASK) ||
+- (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK)) {
++ if (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK) {
+ ret = -EINVAL;
+- dev_err(dev, "MAC or AUTH key length error!\n");
++ dev_err(dev, "AUTH key length error!\n");
++ goto bad_key;
++ }
++
++ ret = sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
++ if (ret) {
++ dev_err(dev, "set sec fallback key err!\n");
+ goto bad_key;
+ }
+
+@@ -1202,27 +1195,19 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ }
+
+
+-#define GEN_SEC_AEAD_SETKEY_FUNC(name, aalg, calg, maclen, cmode) \
+-static int sec_setkey_##name(struct crypto_aead *tfm, const u8 *key, \
+- u32 keylen) \
+-{ \
+- return sec_aead_setkey(tfm, key, keylen, aalg, calg, maclen, cmode);\
+-}
+-
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha1, SEC_A_HMAC_SHA1,
+- SEC_CALG_AES, SEC_HMAC_SHA1_MAC, SEC_CMODE_CBC)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha256, SEC_A_HMAC_SHA256,
+- SEC_CALG_AES, SEC_HMAC_SHA256_MAC, SEC_CMODE_CBC)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha512, SEC_A_HMAC_SHA512,
+- SEC_CALG_AES, SEC_HMAC_SHA512_MAC, SEC_CMODE_CBC)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_ccm, 0, SEC_CALG_AES,
+- SEC_HMAC_CCM_MAC, SEC_CMODE_CCM)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_gcm, 0, SEC_CALG_AES,
+- SEC_HMAC_GCM_MAC, SEC_CMODE_GCM)
+-GEN_SEC_AEAD_SETKEY_FUNC(sm4_ccm, 0, SEC_CALG_SM4,
+- SEC_HMAC_CCM_MAC, SEC_CMODE_CCM)
+-GEN_SEC_AEAD_SETKEY_FUNC(sm4_gcm, 0, SEC_CALG_SM4,
+- SEC_HMAC_GCM_MAC, SEC_CMODE_GCM)
++#define GEN_SEC_AEAD_SETKEY_FUNC(name, aalg, calg, cmode) \
++static int sec_setkey_##name(struct crypto_aead *tfm, const u8 *key, u32 keylen) \
++{ \
++ return sec_aead_setkey(tfm, key, keylen, aalg, calg, cmode); \
++}
++
++GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha1, SEC_A_HMAC_SHA1, SEC_CALG_AES, SEC_CMODE_CBC)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha256, SEC_A_HMAC_SHA256, SEC_CALG_AES, SEC_CMODE_CBC)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha512, SEC_A_HMAC_SHA512, SEC_CALG_AES, SEC_CMODE_CBC)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_ccm, 0, SEC_CALG_AES, SEC_CMODE_CCM)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_gcm, 0, SEC_CALG_AES, SEC_CMODE_GCM)
++GEN_SEC_AEAD_SETKEY_FUNC(sm4_ccm, 0, SEC_CALG_SM4, SEC_CMODE_CCM)
++GEN_SEC_AEAD_SETKEY_FUNC(sm4_gcm, 0, SEC_CALG_SM4, SEC_CMODE_GCM)
+
+ static int sec_aead_sgl_map(struct sec_ctx *ctx, struct sec_req *req)
+ {
+@@ -1470,9 +1455,10 @@ static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req,
+ static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
+ {
+ struct aead_request *aead_req = req->aead_req.aead_req;
+- struct sec_cipher_req *c_req = &req->c_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
++ size_t authsize = crypto_aead_authsize(tfm);
+ struct sec_aead_req *a_req = &req->aead_req;
+- size_t authsize = ctx->a_ctx.mac_len;
++ struct sec_cipher_req *c_req = &req->c_req;
+ u32 data_size = aead_req->cryptlen;
+ u8 flage = 0;
+ u8 cm, cl;
+@@ -1513,10 +1499,8 @@ static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
+ static void sec_aead_set_iv(struct sec_ctx *ctx, struct sec_req *req)
+ {
+ struct aead_request *aead_req = req->aead_req.aead_req;
+- struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+- size_t authsize = crypto_aead_authsize(tfm);
+- struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_aead_req *a_req = &req->aead_req;
++ struct sec_cipher_req *c_req = &req->c_req;
+
+ memcpy(c_req->c_ivin, aead_req->iv, ctx->c_ctx.ivsize);
+
+@@ -1524,15 +1508,11 @@ static void sec_aead_set_iv(struct sec_ctx *ctx, struct sec_req *req)
+ /*
+ * CCM 16Byte Cipher_IV: {1B_Flage,13B_IV,2B_counter},
+ * the counter must set to 0x01
++ * CCM 16Byte Auth_IV: {1B_AFlage,13B_IV,2B_Ptext_length}
+ */
+- ctx->a_ctx.mac_len = authsize;
+- /* CCM 16Byte Auth_IV: {1B_AFlage,13B_IV,2B_Ptext_length} */
+ set_aead_auth_iv(ctx, req);
+- }
+-
+- /* GCM 12Byte Cipher_IV == Auth_IV */
+- if (ctx->c_ctx.c_mode == SEC_CMODE_GCM) {
+- ctx->a_ctx.mac_len = authsize;
++ } else if (ctx->c_ctx.c_mode == SEC_CMODE_GCM) {
++ /* GCM 12Byte Cipher_IV == Auth_IV */
+ memcpy(a_req->a_ivin, c_req->c_ivin, SEC_AIV_SIZE);
+ }
+ }
+@@ -1542,9 +1522,11 @@ static void sec_auth_bd_fill_xcm(struct sec_auth_ctx *ctx, int dir,
+ {
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ /* C_ICV_Len is MAC size, 0x4 ~ 0x10 */
+- sec_sqe->type2.icvw_kmode |= cpu_to_le16((u16)ctx->mac_len);
++ sec_sqe->type2.icvw_kmode |= cpu_to_le16((u16)authsize);
+
+ /* mode set to CCM/GCM, don't set {A_Alg, AKey_Len, MAC_Len} */
+ sec_sqe->type2.a_key_addr = sec_sqe->type2.c_key_addr;
+@@ -1568,9 +1550,11 @@ static void sec_auth_bd_fill_xcm_v3(struct sec_auth_ctx *ctx, int dir,
+ {
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ /* C_ICV_Len is MAC size, 0x4 ~ 0x10 */
+- sqe3->c_icv_key |= cpu_to_le16((u16)ctx->mac_len << SEC_MAC_OFFSET_V3);
++ sqe3->c_icv_key |= cpu_to_le16((u16)authsize << SEC_MAC_OFFSET_V3);
+
+ /* mode set to CCM/GCM, don't set {A_Alg, AKey_Len, MAC_Len} */
+ sqe3->a_key_addr = sqe3->c_key_addr;
+@@ -1594,11 +1578,12 @@ static void sec_auth_bd_fill_ex(struct sec_auth_ctx *ctx, int dir,
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ sec_sqe->type2.a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+- sec_sqe->type2.mac_key_alg =
+- cpu_to_le32(ctx->mac_len / SEC_SQE_LEN_RATE);
++ sec_sqe->type2.mac_key_alg = cpu_to_le32(authsize / SEC_SQE_LEN_RATE);
+
+ sec_sqe->type2.mac_key_alg |=
+ cpu_to_le32((u32)((ctx->a_key_len) /
+@@ -1648,11 +1633,13 @@ static void sec_auth_bd_fill_ex_v3(struct sec_auth_ctx *ctx, int dir,
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ sqe3->a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+ sqe3->auth_mac_key |=
+- cpu_to_le32((u32)(ctx->mac_len /
++ cpu_to_le32((u32)(authsize /
+ SEC_SQE_LEN_RATE) << SEC_MAC_OFFSET_V3);
+
+ sqe3->auth_mac_key |=
+@@ -1703,9 +1690,9 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
+ {
+ struct aead_request *a_req = req->aead_req.aead_req;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(a_req);
++ size_t authsize = crypto_aead_authsize(tfm);
+ struct sec_aead_req *aead_req = &req->aead_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+- size_t authsize = crypto_aead_authsize(tfm);
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ struct aead_request *backlog_aead_req;
+ struct sec_req *backlog_req;
+@@ -1718,10 +1705,8 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
+ if (!err && c_req->encrypt) {
+ struct scatterlist *sgl = a_req->dst;
+
+- sz = sg_pcopy_from_buffer(sgl, sg_nents(sgl),
+- aead_req->out_mac,
+- authsize, a_req->cryptlen +
+- a_req->assoclen);
++ sz = sg_pcopy_from_buffer(sgl, sg_nents(sgl), aead_req->out_mac,
++ authsize, a_req->cryptlen + a_req->assoclen);
+ if (unlikely(sz != authsize)) {
+ dev_err(c->dev, "copy out mac err!\n");
+ err = -EINVAL;
+@@ -1929,8 +1914,10 @@ static void sec_aead_exit(struct crypto_aead *tfm)
+
+ static int sec_aead_ctx_init(struct crypto_aead *tfm, const char *hash_name)
+ {
++ struct aead_alg *alg = crypto_aead_alg(tfm);
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+- struct sec_auth_ctx *auth_ctx = &ctx->a_ctx;
++ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
++ const char *aead_name = alg->base.cra_name;
+ int ret;
+
+ ret = sec_aead_init(tfm);
+@@ -1939,11 +1926,20 @@ static int sec_aead_ctx_init(struct crypto_aead *tfm, const char *hash_name)
+ return ret;
+ }
+
+- auth_ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0);
+- if (IS_ERR(auth_ctx->hash_tfm)) {
++ a_ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0);
++ if (IS_ERR(a_ctx->hash_tfm)) {
+ dev_err(ctx->dev, "aead alloc shash error!\n");
+ sec_aead_exit(tfm);
+- return PTR_ERR(auth_ctx->hash_tfm);
++ return PTR_ERR(a_ctx->hash_tfm);
++ }
++
++ a_ctx->fallback_aead_tfm = crypto_alloc_aead(aead_name, 0,
++ CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);
++ if (IS_ERR(a_ctx->fallback_aead_tfm)) {
++ dev_err(ctx->dev, "aead driver alloc fallback tfm error!\n");
++ crypto_free_shash(ctx->a_ctx.hash_tfm);
++ sec_aead_exit(tfm);
++ return PTR_ERR(a_ctx->fallback_aead_tfm);
+ }
+
+ return 0;
+@@ -1953,6 +1949,7 @@ static void sec_aead_ctx_exit(struct crypto_aead *tfm)
+ {
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+
++ crypto_free_aead(ctx->a_ctx.fallback_aead_tfm);
+ crypto_free_shash(ctx->a_ctx.hash_tfm);
+ sec_aead_exit(tfm);
+ }
+@@ -1979,7 +1976,6 @@ static int sec_aead_xcm_ctx_init(struct crypto_aead *tfm)
+ sec_aead_exit(tfm);
+ return PTR_ERR(a_ctx->fallback_aead_tfm);
+ }
+- a_ctx->fallback = false;
+
+ return 0;
+ }
+@@ -2233,21 +2229,20 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ {
+ struct aead_request *req = sreq->aead_req.aead_req;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+- size_t authsize = crypto_aead_authsize(tfm);
++ size_t sz = crypto_aead_authsize(tfm);
+ u8 c_mode = ctx->c_ctx.c_mode;
+ struct device *dev = ctx->dev;
+ int ret;
+
+- if (unlikely(req->cryptlen + req->assoclen > MAX_INPUT_DATA_LEN ||
+- req->assoclen > SEC_MAX_AAD_LEN)) {
+- dev_err(dev, "aead input spec error!\n");
++ /* Hardware does not handle cases where authsize is less than 4 bytes */
++ if (unlikely(sz < MIN_MAC_LEN)) {
++ sreq->aead_req.fallback = true;
+ return -EINVAL;
+ }
+
+- if (unlikely((c_mode == SEC_CMODE_GCM && authsize < DES_BLOCK_SIZE) ||
+- (c_mode == SEC_CMODE_CCM && (authsize < MIN_MAC_LEN ||
+- authsize & MAC_LEN_MASK)))) {
+- dev_err(dev, "aead input mac length error!\n");
++ if (unlikely(req->cryptlen + req->assoclen > MAX_INPUT_DATA_LEN ||
++ req->assoclen > SEC_MAX_AAD_LEN)) {
++ dev_err(dev, "aead input spec error!\n");
+ return -EINVAL;
+ }
+
+@@ -2266,7 +2261,7 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ if (sreq->c_req.encrypt)
+ sreq->c_req.c_len = req->cryptlen;
+ else
+- sreq->c_req.c_len = req->cryptlen - authsize;
++ sreq->c_req.c_len = req->cryptlen - sz;
+ if (c_mode == SEC_CMODE_CBC) {
+ if (unlikely(sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
+ dev_err(dev, "aead crypto length error!\n");
+@@ -2292,8 +2287,8 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+
+ if (ctx->sec->qm.ver == QM_HW_V2) {
+ if (unlikely(!req->cryptlen || (!sreq->c_req.encrypt &&
+- req->cryptlen <= authsize))) {
+- ctx->a_ctx.fallback = true;
++ req->cryptlen <= authsize))) {
++ sreq->aead_req.fallback = true;
+ return -EINVAL;
+ }
+ }
+@@ -2321,16 +2316,9 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ bool encrypt)
+ {
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+- struct device *dev = ctx->dev;
+ struct aead_request *subreq;
+ int ret;
+
+- /* Kunpeng920 aead mode not support input 0 size */
+- if (!a_ctx->fallback_aead_tfm) {
+- dev_err(dev, "aead fallback tfm is NULL!\n");
+- return -EINVAL;
+- }
+-
+ subreq = aead_request_alloc(a_ctx->fallback_aead_tfm, GFP_KERNEL);
+ if (!subreq)
+ return -ENOMEM;
+@@ -2362,10 +2350,11 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+ req->aead_req.aead_req = a_req;
+ req->c_req.encrypt = encrypt;
+ req->ctx = ctx;
++ req->aead_req.fallback = false;
+
+ ret = sec_aead_param_check(ctx, req);
+ if (unlikely(ret)) {
+- if (ctx->a_ctx.fallback)
++ if (req->aead_req.fallback)
+ return sec_aead_soft_crypto(ctx, a_req, encrypt);
+ return -EINVAL;
+ }
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+index 27a0ee5ad9131c..04725b514382f8 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.h
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+@@ -23,17 +23,6 @@ enum sec_hash_alg {
+ SEC_A_HMAC_SHA512 = 0x15,
+ };
+
+-enum sec_mac_len {
+- SEC_HMAC_CCM_MAC = 16,
+- SEC_HMAC_GCM_MAC = 16,
+- SEC_SM3_MAC = 32,
+- SEC_HMAC_SM3_MAC = 32,
+- SEC_HMAC_MD5_MAC = 16,
+- SEC_HMAC_SHA1_MAC = 20,
+- SEC_HMAC_SHA256_MAC = 32,
+- SEC_HMAC_SHA512_MAC = 64,
+-};
+-
+ enum sec_cmode {
+ SEC_CMODE_ECB = 0x0,
+ SEC_CMODE_CBC = 0x1,
+diff --git a/drivers/crypto/intel/iaa/Makefile b/drivers/crypto/intel/iaa/Makefile
+index b64b208d234408..55bda7770fac79 100644
+--- a/drivers/crypto/intel/iaa/Makefile
++++ b/drivers/crypto/intel/iaa/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for IAA crypto device drivers
+ #
+
+-ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE=IDXD
++ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE='"IDXD"'
+
+ obj-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO) := iaa_crypto.o
+
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index 237f8700007021..d2f07e34f3142d 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -173,7 +173,7 @@ static int set_iaa_sync_mode(const char *name)
+ async_mode = false;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async")) {
+- async_mode = true;
++ async_mode = false;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async_irq")) {
+ async_mode = true;
+diff --git a/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c b/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
+index f8a77bff88448d..e43361392c83f7 100644
+--- a/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
++++ b/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
+@@ -471,6 +471,7 @@ static int init_ixp_crypto(struct device *dev)
+ return -ENODEV;
+ }
+ npe_id = npe_spec.args[0];
++ of_node_put(npe_spec.np);
+
+ ret = of_parse_phandle_with_fixed_args(np, "queue-rx", 1, 0,
+ &queue_spec);
+@@ -479,6 +480,7 @@ static int init_ixp_crypto(struct device *dev)
+ return -ENODEV;
+ }
+ recv_qid = queue_spec.args[0];
++ of_node_put(queue_spec.np);
+
+ ret = of_parse_phandle_with_fixed_args(np, "queue-txready", 1, 0,
+ &queue_spec);
+@@ -487,6 +489,7 @@ static int init_ixp_crypto(struct device *dev)
+ return -ENODEV;
+ }
+ send_qid = queue_spec.args[0];
++ of_node_put(queue_spec.np);
+ } else {
+ /*
+ * Hardcoded engine when using platform data, this goes away
+diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile
+index eac73cbfdd38e2..7acf9c576149ba 100644
+--- a/drivers/crypto/intel/qat/qat_common/Makefile
++++ b/drivers/crypto/intel/qat/qat_common/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_CRYPTO_DEV_QAT) += intel_qat.o
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CRYPTO_QAT
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"CRYPTO_QAT"'
+ intel_qat-objs := adf_cfg.o \
+ adf_isr.o \
+ adf_ctl_drv.o \
+diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c
+index ae7a0f8435fc63..3106fd1e84b91e 100644
+--- a/drivers/crypto/tegra/tegra-se-aes.c
++++ b/drivers/crypto/tegra/tegra-se-aes.c
+@@ -1752,10 +1752,13 @@ static int tegra_cmac_digest(struct ahash_request *req)
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
++ int ret;
+
+- tegra_cmac_init(req);
+- rctx->task |= SHA_UPDATE | SHA_FINAL;
++ ret = tegra_cmac_init(req);
++ if (ret)
++ return ret;
+
++ rctx->task |= SHA_UPDATE | SHA_FINAL;
+ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+diff --git a/drivers/crypto/tegra/tegra-se-hash.c b/drivers/crypto/tegra/tegra-se-hash.c
+index 4d4bd727f49869..0b5cdd5676b17e 100644
+--- a/drivers/crypto/tegra/tegra-se-hash.c
++++ b/drivers/crypto/tegra/tegra-se-hash.c
+@@ -615,13 +615,16 @@ static int tegra_sha_digest(struct ahash_request *req)
+ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
++ int ret;
+
+ if (ctx->fallback)
+ return tegra_sha_fallback_digest(req);
+
+- tegra_sha_init(req);
+- rctx->task |= SHA_UPDATE | SHA_FINAL;
++ ret = tegra_sha_init(req);
++ if (ret)
++ return ret;
+
++ rctx->task |= SHA_UPDATE | SHA_FINAL;
+ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+diff --git a/drivers/dma/idxd/Makefile b/drivers/dma/idxd/Makefile
+index 2b4a0d406e1e71..9ff9d7b87b649d 100644
+--- a/drivers/dma/idxd/Makefile
++++ b/drivers/dma/idxd/Makefile
+@@ -1,4 +1,4 @@
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=IDXD
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"IDXD"'
+
+ obj-$(CONFIG_INTEL_IDXD_BUS) += idxd_bus.o
+ idxd_bus-y := bus.o
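
Several Makefiles in this patch change -DDEFAULT_SYMBOL_NAMESPACE=IDXD to
-DDEFAULT_SYMBOL_NAMESPACE='"IDXD"', tracking the kernel's move to treating
symbol namespaces as string literals rather than bare tokens. A
preprocessor-level illustration with deliberately simplified macros (not the
real module.h definitions):

    /*
     * Simplified stand-ins for the namespace macros; the real kernel
     * definitions live in include/linux/module.h.
     */
    #include <stdio.h>

    #define DEFAULT_SYMBOL_NAMESPACE "IDXD"  /* was the bare token IDXD */

    #define EXPORT_SYMBOL_NS(sym, ns)  static const char *sym##_ns = ns

    EXPORT_SYMBOL_NS(idxd_wq_get, DEFAULT_SYMBOL_NAMESPACE);

    int main(void)
    {
        /* With a string literal the namespace can be used directly. */
        printf("namespace: %s\n", idxd_wq_get_ns);
        return 0;
    }
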
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index 5f8d2e93ff3fb5..7f861fb07cb837 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -208,7 +208,6 @@ struct edma_desc {
+ struct edma_cc;
+
+ struct edma_tc {
+- struct device_node *node;
+ u16 id;
+ };
+
+@@ -2466,13 +2465,13 @@ static int edma_probe(struct platform_device *pdev)
+ if (ret || i == ecc->num_tc)
+ break;
+
+- ecc->tc_list[i].node = tc_args.np;
+ ecc->tc_list[i].id = i;
+ queue_priority_mapping[i][1] = tc_args.args[0];
+ if (queue_priority_mapping[i][1] > lowest_priority) {
+ lowest_priority = queue_priority_mapping[i][1];
+ info->default_queue = i;
+ }
++ of_node_put(tc_args.np);
+ }
+
+ /* See if we have optional dma-channel-mask array */
+diff --git a/drivers/firewire/device-attribute-test.c b/drivers/firewire/device-attribute-test.c
+index 2f123c6b0a1659..97478a96d1c965 100644
+--- a/drivers/firewire/device-attribute-test.c
++++ b/drivers/firewire/device-attribute-test.c
+@@ -99,6 +99,7 @@ static void device_attr_simple_avc(struct kunit *test)
+ struct device *unit0_dev = (struct device *)&unit0.device;
+ static const int unit0_expected_ids[] = {0x00ffffff, 0x00ffffff, 0x0000a02d, 0x00010001};
+ char *buf = kunit_kzalloc(test, PAGE_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
+ int ids[4] = {0, 0, 0, 0};
+
+ // Ensure associations for node and unit devices.
+@@ -180,6 +181,7 @@ static void device_attr_legacy_avc(struct kunit *test)
+ struct device *unit0_dev = (struct device *)&unit0.device;
+ static const int unit0_expected_ids[] = {0x00012345, 0x00fedcba, 0x00abcdef, 0x00543210};
+ char *buf = kunit_kzalloc(test, PAGE_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
+ int ids[4] = {0, 0, 0, 0};
+
+ // Ensure associations for node and unit devices.
+diff --git a/drivers/firmware/efi/sysfb_efi.c b/drivers/firmware/efi/sysfb_efi.c
+index cc807ed35aedf7..1e509595ac0343 100644
+--- a/drivers/firmware/efi/sysfb_efi.c
++++ b/drivers/firmware/efi/sysfb_efi.c
+@@ -91,6 +91,7 @@ void efifb_setup_from_dmi(struct screen_info *si, const char *opt)
+ _ret_; \
+ })
+
++#ifdef CONFIG_EFI
+ static int __init efifb_set_system(const struct dmi_system_id *id)
+ {
+ struct efifb_dmi_info *info = id->driver_data;
+@@ -346,7 +347,6 @@ static const struct fwnode_operations efifb_fwnode_ops = {
+ .add_links = efifb_add_links,
+ };
+
+-#ifdef CONFIG_EFI
+ static struct fwnode_handle efifb_fwnode;
+
+ __init void sysfb_apply_efi_quirks(void)
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 14afd68664a911..a6bdedbbf70888 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -2001,13 +2001,17 @@ static int qcom_scm_probe(struct platform_device *pdev)
+
+ irq = platform_get_irq_optional(pdev, 0);
+ if (irq < 0) {
+- if (irq != -ENXIO)
+- return irq;
++ if (irq != -ENXIO) {
++ ret = irq;
++ goto err;
++ }
+ } else {
+ ret = devm_request_threaded_irq(__scm->dev, irq, NULL, qcom_scm_irq_handler,
+ IRQF_ONESHOT, "qcom-scm", __scm);
+- if (ret < 0)
+- return dev_err_probe(scm->dev, ret, "Failed to request qcom-scm irq\n");
++ if (ret < 0) {
++ dev_err_probe(scm->dev, ret, "Failed to request qcom-scm irq\n");
++ goto err;
++ }
+ }
+
+ __get_convention();
+@@ -2026,14 +2030,18 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ qcom_scm_disable_sdi();
+
+ ret = of_reserved_mem_device_init(__scm->dev);
+- if (ret && ret != -ENODEV)
+- return dev_err_probe(__scm->dev, ret,
+- "Failed to setup the reserved memory region for TZ mem\n");
++ if (ret && ret != -ENODEV) {
++ dev_err_probe(__scm->dev, ret,
++ "Failed to setup the reserved memory region for TZ mem\n");
++ goto err;
++ }
+
+ ret = qcom_tzmem_enable(__scm->dev);
+- if (ret)
+- return dev_err_probe(__scm->dev, ret,
+- "Failed to enable the TrustZone memory allocator\n");
++ if (ret) {
++ dev_err_probe(__scm->dev, ret,
++ "Failed to enable the TrustZone memory allocator\n");
++ goto err;
++ }
+
+ memset(&pool_config, 0, sizeof(pool_config));
+ pool_config.initial_size = 0;
+@@ -2041,9 +2049,11 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ pool_config.max_size = SZ_256K;
+
+ __scm->mempool = devm_qcom_tzmem_pool_new(__scm->dev, &pool_config);
+- if (IS_ERR(__scm->mempool))
+- return dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
+- "Failed to create the SCM memory pool\n");
++ if (IS_ERR(__scm->mempool)) {
++ dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
++ "Failed to create the SCM memory pool\n");
++ goto err;
++ }
+
+ /*
+ * Initialize the QSEECOM interface.
+@@ -2059,6 +2069,12 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ WARN(ret < 0, "failed to initialize qseecom: %d\n", ret);
+
+ return 0;
++
++err:
++ /* Paired with smp_load_acquire() in qcom_scm_is_available(). */
++ smp_store_release(&__scm, NULL);
++
++ return ret;
+ }
+
+ static void qcom_scm_shutdown(struct platform_device *pdev)
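
The qcom_scm probe rework funnels every failure path through one err: label
that clears the global __scm pointer with smp_store_release(), pairing, as the
comment says, with an smp_load_acquire() in qcom_scm_is_available();
release/acquire ordering ensures a reader that sees the pointer also sees
everything initialized before it was published. A C11-atomics sketch of that
publish pattern, structures hypothetical:

    /* Release/acquire publication of a ready device, C11 atomics. */
    #include <stdatomic.h>
    #include <stdio.h>

    struct scm { int ready; };

    static struct scm scm_instance;
    static _Atomic(struct scm *) scm_ptr;

    static void probe_success(void)
    {
        scm_instance.ready = 1;
        /* Publish: everything above is visible to acquire readers. */
        atomic_store_explicit(&scm_ptr, &scm_instance, memory_order_release);
    }

    static int scm_is_available(void)
    {
        struct scm *s = atomic_load_explicit(&scm_ptr, memory_order_acquire);

        return s && s->ready;
    }

    int main(void)
    {
        printf("before probe: %d\n", scm_is_available());
        probe_success();
        printf("after probe:  %d\n", scm_is_available());
        return 0;
    }
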
+diff --git a/drivers/gpio/gpio-idio-16.c b/drivers/gpio/gpio-idio-16.c
+index 53b1eb876a1257..2c951258929721 100644
+--- a/drivers/gpio/gpio-idio-16.c
++++ b/drivers/gpio/gpio-idio-16.c
+@@ -14,7 +14,7 @@
+
+ #include "gpio-idio-16.h"
+
+-#define DEFAULT_SYMBOL_NAMESPACE GPIO_IDIO_16
++#define DEFAULT_SYMBOL_NAMESPACE "GPIO_IDIO_16"
+
+ #define IDIO_16_DAT_BASE 0x0
+ #define IDIO_16_OUT_BASE IDIO_16_DAT_BASE
+diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c
+index 4cb455b2bdee71..619b6fb9d833a4 100644
+--- a/drivers/gpio/gpio-mxc.c
++++ b/drivers/gpio/gpio-mxc.c
+@@ -490,8 +490,7 @@ static int mxc_gpio_probe(struct platform_device *pdev)
+ port->gc.request = mxc_gpio_request;
+ port->gc.free = mxc_gpio_free;
+ port->gc.to_irq = mxc_gpio_to_irq;
+- port->gc.base = (pdev->id < 0) ? of_alias_get_id(np, "gpio") * 32 :
+- pdev->id * 32;
++ port->gc.base = of_alias_get_id(np, "gpio") * 32;
+
+ err = devm_gpiochip_add_data(&pdev->dev, &port->gc, port);
+ if (err)
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 3f2d33ee20cca9..e49802f26e07f8 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1088,7 +1088,8 @@ static int pca953x_probe(struct i2c_client *client)
+ */
+ reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+ if (IS_ERR(reset_gpio))
+- return PTR_ERR(reset_gpio);
++ return dev_err_probe(dev, PTR_ERR(reset_gpio),
++ "Failed to get reset gpio\n");
+ }
+
+ chip->client = client;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+index 3bc0cbf45bc59a..a46d6dd6de32fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+@@ -1133,8 +1133,7 @@ uint64_t kgd_gfx_v9_hqd_get_pq_addr(struct amdgpu_device *adev,
+ uint32_t low, high;
+ uint64_t queue_addr = 0;
+
+- if (!adev->debug_exp_resets &&
+- !adev->gfx.num_gfx_rings)
++ if (!amdgpu_gpu_recovery)
+ return 0;
+
+ kgd_gfx_v9_acquire_queue(adev, pipe_id, queue_id, inst);
+@@ -1185,6 +1184,9 @@ uint64_t kgd_gfx_v9_hqd_reset(struct amdgpu_device *adev,
+ uint32_t low, high, pipe_reset_data = 0;
+ uint64_t queue_addr = 0;
+
++ if (!amdgpu_gpu_recovery)
++ return 0;
++
+ kgd_gfx_v9_acquire_queue(adev, pipe_id, queue_id, inst);
+ amdgpu_gfx_rlc_enter_safe_mode(adev, inst);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 9f922ec50ea2dc..ae9ca6788df78c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -2065,6 +2065,7 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
+ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_GDS);
+ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_GWS);
+ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_OA);
++ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_DOORBELL);
+ ttm_device_fini(&adev->mman.bdev);
+ adev->mman.initialized = false;
+ DRM_INFO("amdgpu: ttm finalized\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index e7cd51c95141e1..e2501c98e107d3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -7251,10 +7251,6 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
+ unsigned long flags;
+ int i, r;
+
+- if (!adev->debug_exp_resets &&
+- !adev->gfx.num_gfx_rings)
+- return -EINVAL;
+-
+ if (amdgpu_sriov_vf(adev))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index ffdb966c4127ee..5dc3454d7d3610 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -3062,9 +3062,6 @@ static void gfx_v9_4_3_ring_soft_recovery(struct amdgpu_ring *ring,
+ struct amdgpu_device *adev = ring->adev;
+ uint32_t value = 0;
+
+- if (!adev->debug_exp_resets)
+- return;
+-
+ value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+ value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+@@ -3580,9 +3577,6 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
+ unsigned long flags;
+ int r;
+
+- if (!adev->debug_exp_resets)
+- return -EINVAL;
+-
+ if (amdgpu_sriov_vf(adev))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index 6fca2915ea8fd5..84c6b0f5c4c0b2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -943,6 +943,8 @@ static int vcn_v4_0_3_start_sriov(struct amdgpu_device *adev)
+ for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+ vcn_inst = GET_INST(VCN, i);
+
++ vcn_v4_0_3_fw_shared_init(adev, vcn_inst);
++
+ memset(&header, 0, sizeof(struct mmsch_v4_0_3_init_header));
+ header.version = MMSCH_VERSION;
+ header.total_size = sizeof(struct mmsch_v4_0_3_init_header) >> 2;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index a0bc2c0ac04d96..20ad72d1b0d9b3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -697,6 +697,8 @@ struct amdgpu_dm_connector {
+ struct drm_dp_mst_port *mst_output_port;
+ struct amdgpu_dm_connector *mst_root;
+ struct drm_dp_aux *dsc_aux;
++ uint32_t mst_local_bw;
++ uint16_t vc_full_pbn;
+ struct mutex handle_mst_msg_ready;
+
+ /* TODO see if we can merge with ddc_bus or make a dm_connector */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 3d624ae6d9bdfe..754dbc544f03a3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -155,6 +155,17 @@ amdgpu_dm_mst_connector_late_register(struct drm_connector *connector)
+ return 0;
+ }
+
++
++static inline void
++amdgpu_dm_mst_reset_mst_connector_setting(struct amdgpu_dm_connector *aconnector)
++{
++ aconnector->edid = NULL;
++ aconnector->dsc_aux = NULL;
++ aconnector->mst_output_port->passthrough_aux = NULL;
++ aconnector->mst_local_bw = 0;
++ aconnector->vc_full_pbn = 0;
++}
++
+ static void
+ amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
+ {
+@@ -182,9 +193,7 @@ amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
+
+ dc_sink_release(dc_sink);
+ aconnector->dc_sink = NULL;
+- aconnector->edid = NULL;
+- aconnector->dsc_aux = NULL;
+- port->passthrough_aux = NULL;
++ amdgpu_dm_mst_reset_mst_connector_setting(aconnector);
+ }
+
+ aconnector->mst_status = MST_STATUS_DEFAULT;
+@@ -500,9 +509,7 @@ dm_dp_mst_detect(struct drm_connector *connector,
+
+ dc_sink_release(aconnector->dc_sink);
+ aconnector->dc_sink = NULL;
+- aconnector->edid = NULL;
+- aconnector->dsc_aux = NULL;
+- port->passthrough_aux = NULL;
++ amdgpu_dm_mst_reset_mst_connector_setting(aconnector);
+
+ amdgpu_dm_set_mst_status(&aconnector->mst_status,
+ MST_REMOTE_EDID | MST_ALLOCATE_NEW_PAYLOAD | MST_CLEAR_ALLOCATED_PAYLOAD,
+@@ -1815,9 +1822,18 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ struct drm_dp_mst_port *immediate_upstream_port = NULL;
+ uint32_t end_link_bw = 0;
+
+- /*Get last DP link BW capability*/
+- if (dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw)) {
+- if (stream_kbps > end_link_bw) {
++ /* Get last DP link BW capability; the mode must be supported by the legacy peer. */
++ if (aconnector->mst_output_port->pdt != DP_PEER_DEVICE_DP_LEGACY_CONV &&
++ aconnector->mst_output_port->pdt != DP_PEER_DEVICE_NONE) {
++ if (aconnector->vc_full_pbn != aconnector->mst_output_port->full_pbn) {
++ dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw);
++ aconnector->vc_full_pbn = aconnector->mst_output_port->full_pbn;
++ aconnector->mst_local_bw = end_link_bw;
++ } else {
++ end_link_bw = aconnector->mst_local_bw;
++ }
++
++ if (end_link_bw > 0 && stream_kbps > end_link_bw) {
+ DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link."
+ "Mode required bw can't fit into last link\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+index e1da48b05d0094..961d8936150ab7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+@@ -194,6 +194,9 @@ void dpp_reset(struct dpp *dpp_base)
+ dpp->filter_h = NULL;
+ dpp->filter_v = NULL;
+
++ memset(&dpp_base->pos, 0, sizeof(dpp_base->pos));
++ memset(&dpp_base->att, 0, sizeof(dpp_base->att));
++
+ memset(&dpp->scl_data, 0, sizeof(dpp->scl_data));
+ memset(&dpp->pwl_data, 0, sizeof(dpp->pwl_data));
+ }
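
dpp_reset() above, and the new hubp_reset() in the following hunks, zero the
driver's cached copies of position/attribute state so a stale software shadow
cannot claim the freshly reset hardware is already programmed. The
shadow-register pattern in miniature, with invented fields:

    /* Cached register shadow cleared on reset so caching can't go stale. */
    #include <string.h>
    #include <stdio.h>

    struct hw_shadow {
        unsigned int pos_x, pos_y;  /* cached cursor position regs */
    };

    static void hw_write(struct hw_shadow *s, unsigned int x, unsigned int y)
    {
        if (s->pos_x == x && s->pos_y == y)
            return;                 /* shadow says nothing changed */
        s->pos_x = x;
        s->pos_y = y;
        printf("programmed %u,%u\n", x, y);
    }

    static void hw_reset(struct hw_shadow *s)
    {
        /* Hardware registers were reset; invalidate the shadow too. */
        memset(s, 0, sizeof(*s));
    }

    int main(void)
    {
        struct hw_shadow s = { 0 };

        hw_write(&s, 10, 20);
        hw_reset(&s);
        hw_write(&s, 10, 20);  /* reprogrammed, not skipped */
        return 0;
    }
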
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
+index 22ac2b7e49aeae..da963f73829f6c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
+@@ -532,6 +532,12 @@ void hubp1_dcc_control(struct hubp *hubp, bool enable,
+ SECONDARY_SURFACE_DCC_IND_64B_BLK, dcc_ind_64b_blk);
+ }
+
++void hubp_reset(struct hubp *hubp)
++{
++ memset(&hubp->pos, 0, sizeof(hubp->pos));
++ memset(&hubp->att, 0, sizeof(hubp->att));
++}
++
+ void hubp1_program_surface_config(
+ struct hubp *hubp,
+ enum surface_pixel_format format,
+@@ -1337,8 +1343,9 @@ static void hubp1_wait_pipe_read_start(struct hubp *hubp)
+
+ void hubp1_init(struct hubp *hubp)
+ {
+- //do nothing
++ hubp_reset(hubp);
+ }
++
+ static const struct hubp_funcs dcn10_hubp_funcs = {
+ .hubp_program_surface_flip_and_addr =
+ hubp1_program_surface_flip_and_addr,
+@@ -1351,6 +1358,7 @@ static const struct hubp_funcs dcn10_hubp_funcs = {
+ .hubp_set_vm_context0_settings = hubp1_set_vm_context0_settings,
+ .set_blank = hubp1_set_blank,
+ .dcc_control = hubp1_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_hubp_blank_en = hubp1_set_hubp_blank_en,
+ .set_cursor_attributes = hubp1_cursor_set_attributes,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
+index 69119b2fdce23b..193e48b440ef18 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
+@@ -746,6 +746,8 @@ void hubp1_dcc_control(struct hubp *hubp,
+ bool enable,
+ enum hubp_ind_block_size independent_64b_blks);
+
++void hubp_reset(struct hubp *hubp);
++
+ bool hubp1_program_surface_flip_and_addr(
+ struct hubp *hubp,
+ const struct dc_plane_address *address,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+index 0637e4c552d8a2..b405fa22f87a9e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+@@ -1660,6 +1660,7 @@ static struct hubp_funcs dcn20_hubp_funcs = {
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
+ .dcc_control = hubp2_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c
+index cd2bfcc5127650..6efcb10abf3dee 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c
+@@ -121,6 +121,7 @@ static struct hubp_funcs dcn201_hubp_funcs = {
+ .set_cursor_position = hubp1_cursor_set_position,
+ .set_blank = hubp1_set_blank,
+ .dcc_control = hubp1_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .hubp_clk_cntl = hubp1_clk_cntl,
+ .hubp_vtg_sel = hubp1_vtg_sel,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
+index e13d69a22c1c7f..4e2d9d381db393 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
+@@ -811,6 +811,8 @@ static void hubp21_init(struct hubp *hubp)
+ struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
+ //hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
+ REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
++
++ hubp_reset(hubp);
+ }
+ static struct hubp_funcs dcn21_hubp_funcs = {
+ .hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
+@@ -823,6 +825,7 @@ static struct hubp_funcs dcn21_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp21_set_vm_system_aperture_settings,
+ .set_blank = hubp1_set_blank,
+ .dcc_control = hubp1_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = hubp21_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp1_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+index 60a64d29035274..c55b1b8be8ffd6 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+@@ -483,6 +483,8 @@ void hubp3_init(struct hubp *hubp)
+ struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+ //hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
+ REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
++
++ hubp_reset(hubp);
+ }
+
+ static struct hubp_funcs dcn30_hubp_funcs = {
+@@ -497,6 +499,7 @@ static struct hubp_funcs dcn30_hubp_funcs = {
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+index 8394e8c069199f..a65a0ddee64672 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+@@ -79,6 +79,7 @@ static struct hubp_funcs dcn31_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+ .set_blank = hubp2_set_blank,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+index ca5b4b28a66441..45023fa9b708dc 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+@@ -181,6 +181,7 @@ static struct hubp_funcs dcn32_hubp_funcs = {
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp32_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
+index d1f05b82b3dd5c..e7625290c0e467 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
+@@ -199,6 +199,7 @@ static struct hubp_funcs dcn35_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+ .set_blank = hubp2_set_blank,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+index b1ebf5053b4fc3..2d52100510f05f 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+@@ -141,7 +141,7 @@ void hubp401_update_mall_sel(struct hubp *hubp, uint32_t mall_sel, bool c_cursor
+
+ void hubp401_init(struct hubp *hubp)
+ {
+- //For now nothing to do, HUBPREQ_DEBUG_DB register is removed on DCN4x.
++ hubp_reset(hubp);
+ }
+
+ void hubp401_vready_at_or_After_vsync(struct hubp *hubp,
+@@ -974,6 +974,7 @@ static struct hubp_funcs dcn401_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = hubp401_set_viewport,
+ .set_cursor_attributes = hubp32_cursor_set_attributes,
+ .set_cursor_position = hubp401_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index a6a1db5ba8bad1..fd0530251c6e5a 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1286,6 +1286,7 @@ void dcn10_plane_atomic_power_down(struct dc *dc,
+ if (hws->funcs.hubp_pg_control)
+ hws->funcs.hubp_pg_control(hws, hubp->inst, false);
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ REG_SET(DC_IP_REQUEST_CNTL, 0,
+@@ -1447,6 +1448,7 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
+ /* Disable on the current state so the new one isn't cleared. */
+ pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ pipe_ctx->stream_res.tg = tg;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index bd309dbdf7b2a7..f6b17bd3f714fa 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -787,6 +787,7 @@ void dcn35_init_pipes(struct dc *dc, struct dc_state *context)
+ /* Disable on the current state so the new one isn't cleared. */
+ pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ pipe_ctx->stream_res.tg = tg;
+@@ -940,6 +941,7 @@ void dcn35_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ /*to do, need to support both case*/
+ hubp->power_gated = true;
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ pipe_ctx->stream = NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+index 16580d62427891..eec16b0a199dd4 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+@@ -152,6 +152,8 @@ struct hubp_funcs {
+ void (*dcc_control)(struct hubp *hubp, bool enable,
+ enum hubp_ind_block_size blk_size);
+
++ void (*hubp_reset)(struct hubp *hubp);
++
+ void (*mem_program_viewport)(
+ struct hubp *hubp,
+ const struct rect *viewport,
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+index b56298d9da98f3..5c54c9fd446196 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+@@ -1420,6 +1420,8 @@ int atomctrl_get_smc_sclk_range_table(struct pp_hwmgr *hwmgr, struct pp_atom_ctr
+ GetIndexIntoMasterTable(DATA, SMU_Info),
+ &size, &frev, &crev);
+
++ if (!psmu_info)
++ return -EINVAL;
+
+ for (i = 0; i < psmu_info->ucSclkEntryNum; i++) {
+ table->entry[i].ucVco_setting = psmu_info->asSclkFcwRangeEntry[i].ucVco_setting;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c
+index 3007b054c873c9..776d58ea63ae90 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c
+@@ -1120,13 +1120,14 @@ static int vega10_enable_se_edc_force_stall_config(struct pp_hwmgr *hwmgr)
+ result = vega10_program_didt_config_registers(hwmgr, SEEDCForceStallPatternConfig_Vega10, VEGA10_CONFIGREG_DIDT);
+ result |= vega10_program_didt_config_registers(hwmgr, SEEDCCtrlForceStallConfig_Vega10, VEGA10_CONFIGREG_DIDT);
+ if (0 != result)
+- return result;
++ goto exit_safe_mode;
+
+ vega10_didt_set_mask(hwmgr, false);
+
++exit_safe_mode:
+ amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+
+- return 0;
++ return result;
+ }
+
+ static int vega10_disable_se_edc_force_stall_config(struct pp_hwmgr *hwmgr)
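The vega10 hunk above converts the early return into a goto so the RLC safe-mode exit always runs, and the saved error code is what the function finally returns. A minimal userspace sketch of this goto-unwind idiom (all names below are illustrative stand-ins, not taken from the driver):

#include <stdio.h>

static void enter_safe_mode(void) { puts("enter safe mode"); }
static void exit_safe_mode(void)  { puts("exit safe mode"); }
static int  program_registers(int fail) { return fail ? -1 : 0; }

/*
 * On failure, jump past the success-only work but still run the
 * mandatory cleanup; the final return carries the saved error code.
 */
static int configure(int fail)
{
	int result;

	enter_safe_mode();
	result = program_registers(fail);
	if (result)
		goto exit;

	/* success-only work would go here */
exit:
	exit_safe_mode();
	return result;
}

int main(void)
{
	printf("ok path:  %d\n", configure(0));
	printf("err path: %d\n", configure(1));
	return 0;
}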
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 008d86cc562af7..cf891e7677c0e2 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -300,7 +300,7 @@
+ #define MAX_CR_LEVEL 0x03
+ #define MAX_EQ_LEVEL 0x03
+ #define AUX_WAIT_TIMEOUT_MS 15
+-#define AUX_FIFO_MAX_SIZE 32
++#define AUX_FIFO_MAX_SIZE 16
+ #define PIXEL_CLK_DELAY 1
+ #define PIXEL_CLK_INVERSE 0
+ #define ADJUST_PHASE_THRESHOLD 80000
+diff --git a/drivers/gpu/drm/display/drm_hdmi_state_helper.c b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+index feb7a3a759811a..936a8f95d80f7e 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_state_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+@@ -347,6 +347,8 @@ static int hdmi_generate_avi_infoframe(const struct drm_connector *connector,
+ is_limited_range ? HDMI_QUANTIZATION_RANGE_LIMITED : HDMI_QUANTIZATION_RANGE_FULL;
+ int ret;
+
++ infoframe->set = false;
++
+ ret = drm_hdmi_avi_infoframe_from_display_mode(frame, connector, mode);
+ if (ret)
+ return ret;
+@@ -376,6 +378,8 @@ static int hdmi_generate_spd_infoframe(const struct drm_connector *connector,
+ &infoframe->data.spd;
+ int ret;
+
++ infoframe->set = false;
++
+ ret = hdmi_spd_infoframe_init(frame,
+ connector->hdmi.vendor,
+ connector->hdmi.product);
+@@ -398,6 +402,8 @@ static int hdmi_generate_hdr_infoframe(const struct drm_connector *connector,
+ &infoframe->data.drm;
+ int ret;
+
++ infoframe->set = false;
++
+ if (connector->max_bpc < 10)
+ return 0;
+
+@@ -425,6 +431,8 @@ static int hdmi_generate_hdmi_vendor_infoframe(const struct drm_connector *conne
+ &infoframe->data.vendor.hdmi;
+ int ret;
+
++ infoframe->set = false;
++
+ if (!info->has_hdmi_infoframe)
+ return 0;
+
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index 5c0c9d4e3be183..d3f6df047f5a2b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -342,6 +342,7 @@ void *etnaviv_gem_vmap(struct drm_gem_object *obj)
+ static void *etnaviv_gem_vmap_impl(struct etnaviv_gem_object *obj)
+ {
+ struct page **pages;
++ pgprot_t prot;
+
+ lockdep_assert_held(&obj->lock);
+
+@@ -349,8 +350,19 @@ static void *etnaviv_gem_vmap_impl(struct etnaviv_gem_object *obj)
+ if (IS_ERR(pages))
+ return NULL;
+
+- return vmap(pages, obj->base.size >> PAGE_SHIFT,
+- VM_MAP, pgprot_writecombine(PAGE_KERNEL));
++ switch (obj->flags & ETNA_BO_CACHE_MASK) {
++ case ETNA_BO_CACHED:
++ prot = PAGE_KERNEL;
++ break;
++ case ETNA_BO_UNCACHED:
++ prot = pgprot_noncached(PAGE_KERNEL);
++ break;
++ case ETNA_BO_WC:
++ default:
++ prot = pgprot_writecombine(PAGE_KERNEL);
++ }
++
++ return vmap(pages, obj->base.size >> PAGE_SHIFT, VM_MAP, prot);
+ }
+
+ static inline enum dma_data_direction etnaviv_op_to_dma_dir(u32 op)
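The etnaviv hunk above stops assuming write-combine and instead derives the kernel mapping attributes from the BO's cache-mode flags, keeping write-combine as the default. A userspace sketch of the same mask-and-switch dispatch (the flag values are made up; only the shape matches the driver):

#include <stdio.h>

#define BO_CACHE_MASK	0x3
#define BO_CACHED	0x1
#define BO_UNCACHED	0x2
#define BO_WC		0x3

/* Mask out the cache-mode bits and pick a mapping attribute,
 * falling back to write-combine for unknown values. */
static const char *pick_mapping(unsigned int flags)
{
	switch (flags & BO_CACHE_MASK) {
	case BO_CACHED:
		return "cached";
	case BO_UNCACHED:
		return "uncached";
	case BO_WC:
	default:
		return "write-combine";
	}
}

int main(void)
{
	printf("%s\n", pick_mapping(BO_CACHED));
	printf("%s\n", pick_mapping(0));	/* default: write-combine */
	return 0;
}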
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 14db7376c712d1..e386b059187acf 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1603,7 +1603,9 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
+
+ gmu->dev = &pdev->dev;
+
+- of_dma_configure(gmu->dev, node, true);
++ ret = of_dma_configure(gmu->dev, node, true);
++ if (ret)
++ return ret;
+
+ pm_runtime_enable(gmu->dev);
+
+@@ -1668,7 +1670,9 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
+
+ gmu->dev = &pdev->dev;
+
+- of_dma_configure(gmu->dev, node, true);
++ ret = of_dma_configure(gmu->dev, node, true);
++ if (ret)
++ return ret;
+
+ /* Fow now, don't do anything fancy until we get our feet under us */
+ gmu->idle_level = GMU_IDLE_STATE_ACTIVE;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
+index eb5dfff2ec4f48..e187e7b1cef167 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
+@@ -160,6 +160,7 @@ static const struct dpu_lm_cfg sm8650_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x400,
+@@ -167,6 +168,7 @@ static const struct dpu_lm_cfg sm8650_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x400,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
+index cbbdaebe357ec4..daef07924886a5 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
+@@ -65,6 +65,54 @@ static const struct dpu_sspp_cfg sdm670_sspp[] = {
+ },
+ };
+
++static const struct dpu_lm_cfg sdm670_lm[] = {
++ {
++ .name = "lm_0", .id = LM_0,
++ .base = 0x44000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_1,
++ .pingpong = PINGPONG_0,
++ .dspp = DSPP_0,
++ }, {
++ .name = "lm_1", .id = LM_1,
++ .base = 0x45000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_0,
++ .pingpong = PINGPONG_1,
++ .dspp = DSPP_1,
++ }, {
++ .name = "lm_2", .id = LM_2,
++ .base = 0x46000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_5,
++ .pingpong = PINGPONG_2,
++ }, {
++ .name = "lm_5", .id = LM_5,
++ .base = 0x49000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_2,
++ .pingpong = PINGPONG_3,
++ },
++};
++
++static const struct dpu_dspp_cfg sdm670_dspp[] = {
++ {
++ .name = "dspp_0", .id = DSPP_0,
++ .base = 0x54000, .len = 0x1800,
++ .features = DSPP_SC7180_MASK,
++ .sblk = &sdm845_dspp_sblk,
++ }, {
++ .name = "dspp_1", .id = DSPP_1,
++ .base = 0x56000, .len = 0x1800,
++ .features = DSPP_SC7180_MASK,
++ .sblk = &sdm845_dspp_sblk,
++ },
++};
++
+ static const struct dpu_dsc_cfg sdm670_dsc[] = {
+ {
+ .name = "dsc_0", .id = DSC_0,
+@@ -88,8 +136,10 @@ const struct dpu_mdss_cfg dpu_sdm670_cfg = {
+ .ctl = sdm845_ctl,
+ .sspp_count = ARRAY_SIZE(sdm670_sspp),
+ .sspp = sdm670_sspp,
+- .mixer_count = ARRAY_SIZE(sdm845_lm),
+- .mixer = sdm845_lm,
++ .mixer_count = ARRAY_SIZE(sdm670_lm),
++ .mixer = sdm670_lm,
++ .dspp_count = ARRAY_SIZE(sdm670_dspp),
++ .dspp = sdm670_dspp,
+ .pingpong_count = ARRAY_SIZE(sdm845_pp),
+ .pingpong = sdm845_pp,
+ .dsc_count = ARRAY_SIZE(sdm670_dsc),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 6ccfde82fecdb4..421afacb724803 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -164,6 +164,7 @@ static const struct dpu_lm_cfg sm8150_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -171,6 +172,7 @@ static const struct dpu_lm_cfg sm8150_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index bab19ddd1d4f97..641023b102bf59 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -163,6 +163,7 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -170,6 +171,7 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+index a57d50b1f02807..e8916ae826a6da 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+@@ -162,6 +162,7 @@ static const struct dpu_lm_cfg sm8250_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -169,6 +170,7 @@ static const struct dpu_lm_cfg sm8250_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+index aced16e350daa1..f7c08e89c88203 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+@@ -162,6 +162,7 @@ static const struct dpu_lm_cfg sm8350_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -169,6 +170,7 @@ static const struct dpu_lm_cfg sm8350_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+index ad48defa154f7d..a1dbbf5c652ff9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+@@ -160,6 +160,7 @@ static const struct dpu_lm_cfg sm8550_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -167,6 +168,7 @@ static const struct dpu_lm_cfg sm8550_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+index a3e60ac70689e7..e084406ebb0711 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+@@ -159,6 +159,7 @@ static const struct dpu_lm_cfg x1e80100_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -166,6 +167,7 @@ static const struct dpu_lm_cfg x1e80100_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c
+index 576995ddce37e9..8bbc7fb881d599 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c
+@@ -389,7 +389,7 @@ struct drm_encoder *mdp4_lcdc_encoder_init(struct drm_device *dev,
+
+ /* TODO: different regulators in other cases? */
+ mdp4_lcdc_encoder->regs[0].supply = "lvds-vccs-3p3v";
+- mdp4_lcdc_encoder->regs[1].supply = "lvds-vccs-3p3v";
++ mdp4_lcdc_encoder->regs[1].supply = "lvds-pll-vdda";
+ mdp4_lcdc_encoder->regs[2].supply = "lvds-vdda";
+
+ ret = devm_regulator_bulk_get(dev->dev,
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
+index a599fc5d63c524..f4e01da5c55b00 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.c
++++ b/drivers/gpu/drm/msm/dp/dp_audio.c
+@@ -329,10 +329,10 @@ static void dp_audio_safe_to_exit_level(struct dp_audio_private *audio)
+ safe_to_exit_level = 5;
+ break;
+ default:
++ safe_to_exit_level = 14;
+ drm_dbg_dp(audio->drm_dev,
+ "setting the default safe_to_exit_level = %u\n",
+ safe_to_exit_level);
+- safe_to_exit_level = 14;
+ break;
+ }
+
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
+index e6ffaf92d26d32..1c4211cfa2a476 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
+@@ -137,7 +137,7 @@ static inline u32 pll_get_integloop_gain(u64 frac_start, u64 bclk, u32 ref_clk,
+
+ base <<= (digclk_divsel == 2 ? 1 : 0);
+
+- return (base <= 2046 ? base : 2046);
++ return base;
+ }
+
+ static inline u32 pll_get_pll_cmp(u64 fdata, unsigned long ref_clk)
+diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
+index af6a6fcb11736f..6749f0fbca96d5 100644
+--- a/drivers/gpu/drm/msm/msm_kms.c
++++ b/drivers/gpu/drm/msm/msm_kms.c
+@@ -244,7 +244,6 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
+ ret = priv->kms_init(ddev);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "failed to load kms\n");
+- priv->kms = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
+index 6fbff516c1c1f0..01dff89bed4e1d 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.c
++++ b/drivers/gpu/drm/panthor/panthor_device.c
+@@ -445,8 +445,8 @@ int panthor_device_resume(struct device *dev)
+ drm_dev_enter(&ptdev->base, &cookie)) {
+ panthor_gpu_resume(ptdev);
+ panthor_mmu_resume(ptdev);
+- ret = drm_WARN_ON(&ptdev->base, panthor_fw_resume(ptdev));
+- if (!ret) {
++ ret = panthor_fw_resume(ptdev);
++ if (!drm_WARN_ON(&ptdev->base, ret)) {
+ panthor_sched_resume(ptdev);
+ } else {
+ panthor_mmu_suspend(ptdev);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index 9873172e3fd331..5880d87fe6b3aa 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -33,7 +33,6 @@
+ #include <uapi/linux/videodev2.h>
+ #include <dt-bindings/soc/rockchip,vop2.h>
+
+-#include "rockchip_drm_drv.h"
+ #include "rockchip_drm_gem.h"
+ #include "rockchip_drm_vop2.h"
+ #include "rockchip_rgb.h"
+@@ -550,6 +549,25 @@ static bool rockchip_vop2_mod_supported(struct drm_plane *plane, u32 format,
+ if (modifier == DRM_FORMAT_MOD_INVALID)
+ return false;
+
++ if (vop2->data->soc_id == 3568 || vop2->data->soc_id == 3566) {
++ if (vop2_cluster_window(win)) {
++ if (modifier == DRM_FORMAT_MOD_LINEAR) {
++ drm_dbg_kms(vop2->drm,
++ "Cluster window only supports format with afbc\n");
++ return false;
++ }
++ }
++ }
++
++ if (format == DRM_FORMAT_XRGB2101010 || format == DRM_FORMAT_XBGR2101010) {
++ if (vop2->data->soc_id == 3588) {
++ if (!rockchip_afbc(plane, modifier)) {
++ drm_dbg_kms(vop2->drm, "Only support 32 bpp format with afbc\n");
++ return false;
++ }
++ }
++ }
++
+ if (modifier == DRM_FORMAT_MOD_LINEAR)
+ return true;
+
+@@ -1320,6 +1338,12 @@ static void vop2_plane_atomic_update(struct drm_plane *plane,
+ &fb->format->format,
+ afbc_en ? "AFBC" : "", &yrgb_mst);
+
++ if (vop2->data->soc_id > 3568) {
++ vop2_win_write(win, VOP2_WIN_AXI_BUS_ID, win->data->axi_bus_id);
++ vop2_win_write(win, VOP2_WIN_AXI_YRGB_R_ID, win->data->axi_yrgb_r_id);
++ vop2_win_write(win, VOP2_WIN_AXI_UV_R_ID, win->data->axi_uv_r_id);
++ }
++
+ if (vop2_cluster_window(win))
+ vop2_win_write(win, VOP2_WIN_AFBC_HALF_BLOCK_EN, half_block_en);
+
+@@ -1721,9 +1745,9 @@ static unsigned long rk3588_calc_cru_cfg(struct vop2_video_port *vp, int id,
+ else
+ dclk_out_rate = v_pixclk >> 2;
+
+- dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000);
++ dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000000);
+ if (!dclk_rate) {
+- drm_err(vop2->drm, "DP dclk_out_rate out of range, dclk_out_rate: %ld KHZ\n",
++ drm_err(vop2->drm, "DP dclk_out_rate out of range, dclk_out_rate: %ld Hz\n",
+ dclk_out_rate);
+ return 0;
+ }
+@@ -1738,9 +1762,9 @@ static unsigned long rk3588_calc_cru_cfg(struct vop2_video_port *vp, int id,
+ * dclk_rate = N * dclk_core_rate N = (1,2,4 ),
+ * we get a little factor here
+ */
+- dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000);
++ dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000000);
+ if (!dclk_rate) {
+- drm_err(vop2->drm, "MIPI dclk out of range, dclk_out_rate: %ld KHZ\n",
++ drm_err(vop2->drm, "MIPI dclk out of range, dclk_out_rate: %ld Hz\n",
+ dclk_out_rate);
+ return 0;
+ }
+@@ -2159,7 +2183,6 @@ static int vop2_find_start_mixer_id_for_vp(struct vop2 *vop2, u8 port_id)
+
+ static void vop2_setup_cluster_alpha(struct vop2 *vop2, struct vop2_win *main_win)
+ {
+- u32 offset = (main_win->data->phys_id * 0x10);
+ struct vop2_alpha_config alpha_config;
+ struct vop2_alpha alpha;
+ struct drm_plane_state *bottom_win_pstate;
+@@ -2167,6 +2190,7 @@ static void vop2_setup_cluster_alpha(struct vop2 *vop2, struct vop2_win *main_wi
+ u16 src_glb_alpha_val, dst_glb_alpha_val;
+ bool premulti_en = false;
+ bool swap = false;
++ u32 offset = 0;
+
+ /* At one win mode, win0 is dst/bottom win, and win1 is a all zero src/top win */
+ bottom_win_pstate = main_win->base.state;
+@@ -2185,6 +2209,22 @@ static void vop2_setup_cluster_alpha(struct vop2 *vop2, struct vop2_win *main_wi
+ vop2_parse_alpha(&alpha_config, &alpha);
+
+ alpha.src_color_ctrl.bits.src_dst_swap = swap;
++
++ switch (main_win->data->phys_id) {
++ case ROCKCHIP_VOP2_CLUSTER0:
++ offset = 0x0;
++ break;
++ case ROCKCHIP_VOP2_CLUSTER1:
++ offset = 0x10;
++ break;
++ case ROCKCHIP_VOP2_CLUSTER2:
++ offset = 0x20;
++ break;
++ case ROCKCHIP_VOP2_CLUSTER3:
++ offset = 0x30;
++ break;
++ }
++
+ vop2_writel(vop2, RK3568_CLUSTER0_MIX_SRC_COLOR_CTRL + offset,
+ alpha.src_color_ctrl.val);
+ vop2_writel(vop2, RK3568_CLUSTER0_MIX_DST_COLOR_CTRL + offset,
+@@ -2232,6 +2272,12 @@ static void vop2_setup_alpha(struct vop2_video_port *vp)
+ struct vop2_win *win = to_vop2_win(plane);
+ int zpos = plane->state->normalized_zpos;
+
++ /*
++ * Alpha only needs to be configured from the second layer onwards.
++ */
++ if (zpos == 0)
++ continue;
++
+ if (plane->state->pixel_blend_mode == DRM_MODE_BLEND_PREMULTI)
+ premulti_en = 1;
+ else
+@@ -2308,7 +2354,10 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ struct drm_plane *plane;
+ u32 layer_sel = 0;
+ u32 port_sel;
+- unsigned int nlayer, ofs;
++ u8 layer_id;
++ u8 old_layer_id;
++ u8 layer_sel_id;
++ unsigned int ofs;
+ u32 ovl_ctrl;
+ int i;
+ struct vop2_video_port *vp0 = &vop2->vps[0];
+@@ -2352,9 +2401,30 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ for (i = 0; i < vp->id; i++)
+ ofs += vop2->vps[i].nlayers;
+
+- nlayer = 0;
+ drm_atomic_crtc_for_each_plane(plane, &vp->crtc) {
+ struct vop2_win *win = to_vop2_win(plane);
++ struct vop2_win *old_win;
++
++ layer_id = (u8)(plane->state->normalized_zpos + ofs);
++
++ /*
++ * Find the layer this win was bound to in the old state.
++ */
++ for (old_layer_id = 0; old_layer_id < vop2->data->win_size; old_layer_id++) {
++ layer_sel_id = (layer_sel >> (4 * old_layer_id)) & 0xf;
++ if (layer_sel_id == win->data->layer_sel_id)
++ break;
++ }
++
++ /*
++ * Find the win that was bound to this layer in the old state.
++ */
++ for (i = 0; i < vop2->data->win_size; i++) {
++ old_win = &vop2->win[i];
++ layer_sel_id = (layer_sel >> (4 * layer_id)) & 0xf;
++ if (layer_sel_id == old_win->data->layer_sel_id)
++ break;
++ }
+
+ switch (win->data->phys_id) {
+ case ROCKCHIP_VOP2_CLUSTER0:
+@@ -2399,17 +2469,14 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ break;
+ }
+
+- layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(plane->state->normalized_zpos + ofs,
+- 0x7);
+- layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(plane->state->normalized_zpos + ofs,
+- win->data->layer_sel_id);
+- nlayer++;
+- }
+-
+- /* configure unused layers to 0x5 (reserved) */
+- for (; nlayer < vp->nlayers; nlayer++) {
+- layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(nlayer + ofs, 0x7);
+- layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(nlayer + ofs, 5);
++ layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(layer_id, 0x7);
++ layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(layer_id, win->data->layer_sel_id);
++ /*
++ * When we bind a window from layerM to layerN, we also need to move the old
++ * window on layerN to layerM to avoid one window being selected by two or more layers.
++ */
++ layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(old_layer_id, 0x7);
++ layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(old_layer_id, old_win->data->layer_sel_id);
+ }
+
+ vop2_writel(vop2, RK3568_OVL_LAYER_SEL, layer_sel);
+@@ -2444,9 +2511,11 @@ static void vop2_setup_dly_for_windows(struct vop2 *vop2)
+ sdly |= FIELD_PREP(RK3568_SMART_DLY_NUM__ESMART1, dly);
+ break;
+ case ROCKCHIP_VOP2_SMART0:
++ case ROCKCHIP_VOP2_ESMART2:
+ sdly |= FIELD_PREP(RK3568_SMART_DLY_NUM__SMART0, dly);
+ break;
+ case ROCKCHIP_VOP2_SMART1:
++ case ROCKCHIP_VOP2_ESMART3:
+ sdly |= FIELD_PREP(RK3568_SMART_DLY_NUM__SMART1, dly);
+ break;
+ }
+@@ -2865,6 +2934,10 @@ static struct reg_field vop2_cluster_regs[VOP2_WIN_MAX_REG] = {
+ [VOP2_WIN_Y2R_EN] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL0, 8, 8),
+ [VOP2_WIN_R2Y_EN] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL0, 9, 9),
+ [VOP2_WIN_CSC_MODE] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL0, 10, 11),
++ [VOP2_WIN_AXI_YRGB_R_ID] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL2, 0, 3),
++ [VOP2_WIN_AXI_UV_R_ID] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL2, 5, 8),
++ /* RK3588 only, reserved bit on rk3568 */
++ [VOP2_WIN_AXI_BUS_ID] = REG_FIELD(RK3568_CLUSTER_CTRL, 13, 13),
+
+ /* Scale */
+ [VOP2_WIN_SCALE_YRGB_X] = REG_FIELD(RK3568_CLUSTER_WIN_SCL_FACTOR_YRGB, 0, 15),
+@@ -2957,6 +3030,10 @@ static struct reg_field vop2_esmart_regs[VOP2_WIN_MAX_REG] = {
+ [VOP2_WIN_YMIRROR] = REG_FIELD(RK3568_SMART_CTRL1, 31, 31),
+ [VOP2_WIN_COLOR_KEY] = REG_FIELD(RK3568_SMART_COLOR_KEY_CTRL, 0, 29),
+ [VOP2_WIN_COLOR_KEY_EN] = REG_FIELD(RK3568_SMART_COLOR_KEY_CTRL, 31, 31),
++ [VOP2_WIN_AXI_YRGB_R_ID] = REG_FIELD(RK3568_SMART_CTRL1, 4, 8),
++ [VOP2_WIN_AXI_UV_R_ID] = REG_FIELD(RK3568_SMART_CTRL1, 12, 16),
++ /* RK3588 only, reserved register on rk3568 */
++ [VOP2_WIN_AXI_BUS_ID] = REG_FIELD(RK3588_SMART_AXI_CTRL, 1, 1),
+
+ /* Scale */
+ [VOP2_WIN_SCALE_YRGB_X] = REG_FIELD(RK3568_SMART_REGION0_SCL_FACTOR_YRGB, 0, 15),
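The layer-mixer rework above maintains one invariant: when a window moves to a new layer, the window previously selected by that layer is written back to the window's old layer, so no window id is ever selected by two 4-bit LAYER_SEL fields at once. A self-contained sketch of that nibble swap on a register image (helper names are mine, not the driver's):

#include <stdint.h>
#include <stdio.h>

#define LAYER_FIELD(layer, v)	((uint32_t)((v) & 0xf) << (4 * (layer)))
#define LAYER_MASK(layer)	LAYER_FIELD(layer, 0xf)

/* Move window id @win to layer @to, writing the previous occupant of
 * @to back into @from so both layers stay uniquely assigned. */
static uint32_t layer_sel_move(uint32_t sel, unsigned int from,
			       unsigned int to, uint32_t win)
{
	uint32_t old = (sel >> (4 * to)) & 0xf;

	sel &= ~(LAYER_MASK(to) | LAYER_MASK(from));
	sel |= LAYER_FIELD(to, win) | LAYER_FIELD(from, old);
	return sel;
}

int main(void)
{
	uint32_t sel = LAYER_FIELD(0, 2) | LAYER_FIELD(1, 6);

	sel = layer_sel_move(sel, 0, 1, 2);	/* window 6 swaps back to layer 0 */
	printf("layer0=%u layer1=%u\n", sel & 0xf, (sel >> 4) & 0xf);
	return 0;
}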
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+index 615a16196aff6b..130aaa40316d13 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+@@ -9,6 +9,7 @@
+
+ #include <linux/regmap.h>
+ #include <drm/drm_modes.h>
++#include "rockchip_drm_drv.h"
+ #include "rockchip_drm_vop.h"
+
+ #define VOP2_VP_FEATURE_OUTPUT_10BIT BIT(0)
+@@ -78,6 +79,9 @@ enum vop2_win_regs {
+ VOP2_WIN_COLOR_KEY,
+ VOP2_WIN_COLOR_KEY_EN,
+ VOP2_WIN_DITHER_UP,
++ VOP2_WIN_AXI_BUS_ID,
++ VOP2_WIN_AXI_YRGB_R_ID,
++ VOP2_WIN_AXI_UV_R_ID,
+
+ /* scale regs */
+ VOP2_WIN_SCALE_YRGB_X,
+@@ -140,6 +144,10 @@ struct vop2_win_data {
+ unsigned int layer_sel_id;
+ uint64_t feature;
+
++ uint8_t axi_bus_id;
++ uint8_t axi_yrgb_r_id;
++ uint8_t axi_uv_r_id;
++
+ unsigned int max_upscale_factor;
+ unsigned int max_downscale_factor;
+ const u8 dly[VOP2_DLY_MODE_MAX];
+@@ -308,6 +316,7 @@ enum dst_factor_mode {
+
+ #define RK3568_CLUSTER_WIN_CTRL0 0x00
+ #define RK3568_CLUSTER_WIN_CTRL1 0x04
++#define RK3568_CLUSTER_WIN_CTRL2 0x08
+ #define RK3568_CLUSTER_WIN_YRGB_MST 0x10
+ #define RK3568_CLUSTER_WIN_CBR_MST 0x14
+ #define RK3568_CLUSTER_WIN_VIR 0x18
+@@ -330,6 +339,7 @@ enum dst_factor_mode {
+ /* (E)smart register definition, offset relative to window base */
+ #define RK3568_SMART_CTRL0 0x00
+ #define RK3568_SMART_CTRL1 0x04
++#define RK3588_SMART_AXI_CTRL 0x08
+ #define RK3568_SMART_REGION0_CTRL 0x10
+ #define RK3568_SMART_REGION0_YRGB_MST 0x14
+ #define RK3568_SMART_REGION0_CBR_MST 0x18
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+index 18efb3fe1c000f..e473a8f8fd32d4 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+@@ -313,7 +313,7 @@ static const struct vop2_video_port_data rk3588_vop_video_ports[] = {
+ * AXI1 is a read only bus.
+ *
+ * Every window on a AXI bus must assigned two unique
+- * read id(yrgb_id/uv_id, valid id are 0x1~0xe).
++ * read id(yrgb_r_id/uv_r_id, valid id are 0x1~0xe).
+ *
+ * AXI0:
+ * Cluster0/1, Esmart0/1, WriteBack
+@@ -333,6 +333,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 0,
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 2,
++ .axi_uv_r_id = 3,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -349,6 +352,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_PRIMARY,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 6,
++ .axi_uv_r_id = 7,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -364,6 +370,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_PRIMARY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 2,
++ .axi_uv_r_id = 3,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -379,6 +388,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_PRIMARY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 6,
++ .axi_uv_r_id = 7,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -393,6 +405,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 2,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 0x0a,
++ .axi_uv_r_id = 0x0b,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+@@ -406,6 +421,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 3,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 0x0c,
++ .axi_uv_r_id = 0x01,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+@@ -419,6 +437,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 6,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 0x0a,
++ .axi_uv_r_id = 0x0b,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+@@ -432,6 +453,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 7,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 0x0c,
++ .axi_uv_r_id = 0x0d,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+diff --git a/drivers/gpu/drm/v3d/v3d_debugfs.c b/drivers/gpu/drm/v3d/v3d_debugfs.c
+index 19e3ee7ac897fe..76816f2551c100 100644
+--- a/drivers/gpu/drm/v3d/v3d_debugfs.c
++++ b/drivers/gpu/drm/v3d/v3d_debugfs.c
+@@ -237,8 +237,8 @@ static int v3d_measure_clock(struct seq_file *m, void *unused)
+ if (v3d->ver >= 40) {
+ int cycle_count_reg = V3D_PCTR_CYCLE_COUNT(v3d->ver);
+ V3D_CORE_WRITE(core, V3D_V4_PCTR_0_SRC_0_3,
+- V3D_SET_FIELD(cycle_count_reg,
+- V3D_PCTR_S0));
++ V3D_SET_FIELD_VER(cycle_count_reg,
++ V3D_PCTR_S0, v3d->ver));
+ V3D_CORE_WRITE(core, V3D_V4_PCTR_0_CLR, 1);
+ V3D_CORE_WRITE(core, V3D_V4_PCTR_0_EN, 1);
+ } else {
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 6ee56cbd3f1bfc..e3013ac3a5c2a6 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -240,17 +240,18 @@ void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon)
+
+ for (i = 0; i < ncounters; i++) {
+ u32 source = i / 4;
+- u32 channel = V3D_SET_FIELD(perfmon->counters[i], V3D_PCTR_S0);
++ u32 channel = V3D_SET_FIELD_VER(perfmon->counters[i], V3D_PCTR_S0,
++ v3d->ver);
+
+ i++;
+- channel |= V3D_SET_FIELD(i < ncounters ? perfmon->counters[i] : 0,
+- V3D_PCTR_S1);
++ channel |= V3D_SET_FIELD_VER(i < ncounters ? perfmon->counters[i] : 0,
++ V3D_PCTR_S1, v3d->ver);
+ i++;
+- channel |= V3D_SET_FIELD(i < ncounters ? perfmon->counters[i] : 0,
+- V3D_PCTR_S2);
++ channel |= V3D_SET_FIELD_VER(i < ncounters ? perfmon->counters[i] : 0,
++ V3D_PCTR_S2, v3d->ver);
+ i++;
+- channel |= V3D_SET_FIELD(i < ncounters ? perfmon->counters[i] : 0,
+- V3D_PCTR_S3);
++ channel |= V3D_SET_FIELD_VER(i < ncounters ? perfmon->counters[i] : 0,
++ V3D_PCTR_S3, v3d->ver);
+ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_SRC_X(source), channel);
+ }
+
+diff --git a/drivers/gpu/drm/v3d/v3d_regs.h b/drivers/gpu/drm/v3d/v3d_regs.h
+index 1b1a62ad95852b..6da3c69082bd6d 100644
+--- a/drivers/gpu/drm/v3d/v3d_regs.h
++++ b/drivers/gpu/drm/v3d/v3d_regs.h
+@@ -15,6 +15,14 @@
+ fieldval & field##_MASK; \
+ })
+
++#define V3D_SET_FIELD_VER(value, field, ver) \
++ ({ \
++ typeof(ver) _ver = (ver); \
++ u32 fieldval = (value) << field##_SHIFT(_ver); \
++ WARN_ON((fieldval & ~field##_MASK(_ver)) != 0); \
++ fieldval & field##_MASK(_ver); \
++ })
++
+ #define V3D_GET_FIELD(word, field) (((word) & field##_MASK) >> \
+ field##_SHIFT)
+
+@@ -354,18 +362,15 @@
+ #define V3D_V4_PCTR_0_SRC_28_31 0x0067c
+ #define V3D_V4_PCTR_0_SRC_X(x) (V3D_V4_PCTR_0_SRC_0_3 + \
+ 4 * (x))
+-# define V3D_PCTR_S0_MASK V3D_MASK(6, 0)
+-# define V3D_V7_PCTR_S0_MASK V3D_MASK(7, 0)
+-# define V3D_PCTR_S0_SHIFT 0
+-# define V3D_PCTR_S1_MASK V3D_MASK(14, 8)
+-# define V3D_V7_PCTR_S1_MASK V3D_MASK(15, 8)
+-# define V3D_PCTR_S1_SHIFT 8
+-# define V3D_PCTR_S2_MASK V3D_MASK(22, 16)
+-# define V3D_V7_PCTR_S2_MASK V3D_MASK(23, 16)
+-# define V3D_PCTR_S2_SHIFT 16
+-# define V3D_PCTR_S3_MASK V3D_MASK(30, 24)
+-# define V3D_V7_PCTR_S3_MASK V3D_MASK(31, 24)
+-# define V3D_PCTR_S3_SHIFT 24
++# define V3D_PCTR_S0_MASK(ver) (((ver) >= 71) ? V3D_MASK(7, 0) : V3D_MASK(6, 0))
++# define V3D_PCTR_S0_SHIFT(ver) 0
++# define V3D_PCTR_S1_MASK(ver) (((ver) >= 71) ? V3D_MASK(15, 8) : V3D_MASK(14, 8))
++# define V3D_PCTR_S1_SHIFT(ver) 8
++# define V3D_PCTR_S2_MASK(ver) (((ver) >= 71) ? V3D_MASK(23, 16) : V3D_MASK(22, 16))
++# define V3D_PCTR_S2_SHIFT(ver) 16
++# define V3D_PCTR_S3_MASK(ver) (((ver) >= 71) ? V3D_MASK(31, 24) : V3D_MASK(30, 24))
++# define V3D_PCTR_S3_SHIFT(ver) 24
++
+ #define V3D_PCTR_CYCLE_COUNT(ver) ((ver >= 71) ? 0 : 32)
+
+ /* Output values of the counters. */
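The v3d change above parameterizes the field masks by hardware version so a single V3D_SET_FIELD_VER() serves both the 7-bit (pre-v71) and 8-bit (v71+) counter-source layouts. A standalone sketch of the same version-keyed macro technique (the S0 field geometry is copied from the hunk, everything else is illustrative; it uses the GCC/clang statement-expression extension just like the kernel macro, and drops the WARN_ON):

#include <stdint.h>
#include <stdio.h>

#define MASK(hi, lo)	((~0u >> (31 - (hi))) & ~((1u << (lo)) - 1))

#define S0_MASK(ver)	(((ver) >= 71) ? MASK(7, 0) : MASK(6, 0))
#define S0_SHIFT(ver)	0

/* Shift the value into place and clamp it to the field width that the
 * given hardware version actually has. */
#define SET_FIELD_VER(value, field, ver)				\
	({								\
		unsigned int _ver = (ver);				\
		uint32_t fieldval = (uint32_t)(value) << field##_SHIFT(_ver); \
		fieldval & field##_MASK(_ver);				\
	})

int main(void)
{
	/* 0xff fits the 8-bit v71 field but is truncated on older parts */
	printf("v71: 0x%x\n", SET_FIELD_VER(0xff, S0, 71));
	printf("v42: 0x%x\n", SET_FIELD_VER(0xff, S0, 42));
	return 0;
}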
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 935ccc38d12958..155deef867ac09 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1125,6 +1125,8 @@ static void hid_apply_multiplier(struct hid_device *hid,
+ while (multiplier_collection->parent_idx != -1 &&
+ multiplier_collection->type != HID_COLLECTION_LOGICAL)
+ multiplier_collection = &hid->collection[multiplier_collection->parent_idx];
++ if (multiplier_collection->type != HID_COLLECTION_LOGICAL)
++ multiplier_collection = NULL;
+
+ effective_multiplier = hid_calculate_multiplier(hid, multiplier);
+
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index fda9dce3da9980..9d80635a91ebd8 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -810,10 +810,23 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ break;
+ }
+
+- if ((usage->hid & 0xf0) == 0x90) { /* SystemControl*/
+- switch (usage->hid & 0xf) {
+- case 0xb: map_key_clear(KEY_DO_NOT_DISTURB); break;
+- default: goto ignore;
++ if ((usage->hid & 0xf0) == 0x90) { /* SystemControl & D-pad */
++ switch (usage->hid) {
++ case HID_GD_UP: usage->hat_dir = 1; break;
++ case HID_GD_DOWN: usage->hat_dir = 5; break;
++ case HID_GD_RIGHT: usage->hat_dir = 3; break;
++ case HID_GD_LEFT: usage->hat_dir = 7; break;
++ case HID_GD_DO_NOT_DISTURB:
++ map_key_clear(KEY_DO_NOT_DISTURB); break;
++ default: goto unknown;
++ }
++
++ if (usage->hid <= HID_GD_LEFT) {
++ if (field->dpad) {
++ map_abs(field->dpad);
++ goto ignore;
++ }
++ map_abs(ABS_HAT0X);
+ }
+ break;
+ }
+@@ -844,22 +857,6 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ if (field->application == HID_GD_SYSTEM_CONTROL)
+ goto ignore;
+
+- if ((usage->hid & 0xf0) == 0x90) { /* D-pad */
+- switch (usage->hid) {
+- case HID_GD_UP: usage->hat_dir = 1; break;
+- case HID_GD_DOWN: usage->hat_dir = 5; break;
+- case HID_GD_RIGHT: usage->hat_dir = 3; break;
+- case HID_GD_LEFT: usage->hat_dir = 7; break;
+- default: goto unknown;
+- }
+- if (field->dpad) {
+- map_abs(field->dpad);
+- goto ignore;
+- }
+- map_abs(ABS_HAT0X);
+- break;
+- }
+-
+ switch (usage->hid) {
+ /* These usage IDs map directly to the usage codes. */
+ case HID_GD_X: case HID_GD_Y: case HID_GD_Z:
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index d1b7ccfb3e051f..e07d63db5e1f47 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2078,7 +2078,7 @@ static const struct hid_device_id mt_devices[] = {
+ I2C_DEVICE_ID_GOODIX_01E8) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E8) },
++ I2C_DEVICE_ID_GOODIX_01E9) },
+
+ /* GoodTouch panels */
+ { .driver_data = MT_CLS_NSMU,
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index cf1679b0d4fbb5..6c3e758bbb09e3 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -170,6 +170,14 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
+ ep = &usbif->cur_altsetting->endpoint[1];
+ b_ep = ep->desc.bEndpointAddress;
+
++ /* Are the expected endpoints present? */
++ u8 ep_addr[1] = {b_ep};
++
++ if (!usb_check_int_endpoints(usbif, ep_addr)) {
++ hid_err(hdev, "Unexpected non-int endpoint\n");
++ return;
++ }
++
+ for (i = 0; i < ARRAY_SIZE(setup_arr); ++i) {
+ memcpy(send_buf, setup_arr[i], setup_arr_sizes[i]);
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 08a3c863f80a27..58480a3f4683fe 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -413,7 +413,7 @@ config SENSORS_ASPEED
+ will be called aspeed_pwm_tacho.
+
+ config SENSORS_ASPEED_G6
+- tristate "ASPEED g6 PWM and Fan tach driver"
++ tristate "ASPEED G6 PWM and Fan tach driver"
+ depends on ARCH_ASPEED || COMPILE_TEST
+ depends on PWM
+ help
+@@ -421,7 +421,7 @@ config SENSORS_ASPEED_G6
+ controllers.
+
+ This driver can also be built as a module. If so, the module
+- will be called aspeed_pwm_tacho.
++ will be called aspeed_g6_pwm_tach.
+
+ config SENSORS_ATXP1
+ tristate "Attansic ATXP1 VID controller"
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index ee04795b98aabe..fa3351351825b7 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -42,6 +42,9 @@
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#undef DEFAULT_SYMBOL_NAMESPACE
++#define DEFAULT_SYMBOL_NAMESPACE "HWMON_NCT6775"
++
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
+@@ -56,9 +59,6 @@
+ #include "lm75.h"
+ #include "nct6775.h"
+
+-#undef DEFAULT_SYMBOL_NAMESPACE
+-#define DEFAULT_SYMBOL_NAMESPACE HWMON_NCT6775
+-
+ #define USE_ALTERNATE
+
+ /* used to set data->name = nct6775_device_names[data->sio_kind] */
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index 9d88b4fa03e423..b3282785523d48 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -8,6 +8,9 @@
+ * Copyright (C) 2007 MontaVista Software Inc.
+ * Copyright (C) 2009 Provigent Ltd.
+ */
++
++#define DEFAULT_SYMBOL_NAMESPACE "I2C_DW_COMMON"
++
+ #include <linux/acpi.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -29,8 +32,6 @@
+ #include <linux/types.h>
+ #include <linux/units.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE I2C_DW_COMMON
+-
+ #include "i2c-designware-core.h"
+
+ static char *abort_sources[] = {
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index e8ac9a7bf0b3d2..28188c6d0555e0 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -8,6 +8,9 @@
+ * Copyright (C) 2007 MontaVista Software Inc.
+ * Copyright (C) 2009 Provigent Ltd.
+ */
++
++#define DEFAULT_SYMBOL_NAMESPACE "I2C_DW"
++
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/errno.h>
+@@ -22,8 +25,6 @@
+ #include <linux/regmap.h>
+ #include <linux/reset.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE I2C_DW
+-
+ #include "i2c-designware-core.h"
+
+ #define AMD_TIMEOUT_MIN_US 25
+diff --git a/drivers/i2c/busses/i2c-designware-slave.c b/drivers/i2c/busses/i2c-designware-slave.c
+index 7035296aa24ce0..f0f0f1f2131d0a 100644
+--- a/drivers/i2c/busses/i2c-designware-slave.c
++++ b/drivers/i2c/busses/i2c-designware-slave.c
+@@ -6,6 +6,9 @@
+ *
+ * Copyright (C) 2016 Synopsys Inc.
+ */
++
++#define DEFAULT_SYMBOL_NAMESPACE "I2C_DW"
++
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/errno.h>
+@@ -16,8 +19,6 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/regmap.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE I2C_DW
+-
+ #include "i2c-designware-core.h"
+
+ static void i2c_dw_configure_fifo_slave(struct dw_i2c_dev *dev)
+diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
+index 8d694672c1104f..dbcd3984f25780 100644
+--- a/drivers/i3c/master/dw-i3c-master.c
++++ b/drivers/i3c/master/dw-i3c-master.c
+@@ -1624,6 +1624,7 @@ EXPORT_SYMBOL_GPL(dw_i3c_common_probe);
+
+ void dw_i3c_common_remove(struct dw_i3c_master *master)
+ {
++ cancel_work_sync(&master->hj_work);
+ i3c_master_unregister(&master->base);
+
+ pm_runtime_disable(master->dev);
+diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
+index 1211f4317a9f4f..aba96ca9bce5df 100644
+--- a/drivers/infiniband/hw/Makefile
++++ b/drivers/infiniband/hw/Makefile
+@@ -11,7 +11,7 @@ obj-$(CONFIG_INFINIBAND_OCRDMA) += ocrdma/
+ obj-$(CONFIG_INFINIBAND_VMWARE_PVRDMA) += vmw_pvrdma/
+ obj-$(CONFIG_INFINIBAND_USNIC) += usnic/
+ obj-$(CONFIG_INFINIBAND_HFI1) += hfi1/
+-obj-$(CONFIG_INFINIBAND_HNS) += hns/
++obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns/
+ obj-$(CONFIG_INFINIBAND_QEDR) += qedr/
+ obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re/
+ obj-$(CONFIG_INFINIBAND_ERDMA) += erdma/
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 14e434ff51edea..a7067c3c067972 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -4395,9 +4395,10 @@ int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct vm_area_struct *vma)
+ case BNXT_RE_MMAP_TOGGLE_PAGE:
+ /* Driver doesn't expect write access for user space */
+ if (vma->vm_flags & VM_WRITE)
+- return -EFAULT;
+- ret = vm_insert_page(vma, vma->vm_start,
+- virt_to_page((void *)bnxt_entry->mem_offset));
++ ret = -EFAULT;
++ else
++ ret = vm_insert_page(vma, vma->vm_start,
++ virt_to_page((void *)bnxt_entry->mem_offset));
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
+index 80970a1738f8a6..034b85c4225555 100644
+--- a/drivers/infiniband/hw/cxgb4/device.c
++++ b/drivers/infiniband/hw/cxgb4/device.c
+@@ -1114,8 +1114,10 @@ static inline struct sk_buff *copy_gl_to_skb_pkt(const struct pkt_gl *gl,
+ * The math here assumes sizeof cpl_pass_accept_req >= sizeof
+ * cpl_rx_pkt.
+ */
+- skb = alloc_skb(gl->tot_len + sizeof(struct cpl_pass_accept_req) +
+- sizeof(struct rss_header) - pktshift, GFP_ATOMIC);
++ skb = alloc_skb(size_add(gl->tot_len,
++ sizeof(struct cpl_pass_accept_req) +
++ sizeof(struct rss_header)) - pktshift,
++ GFP_ATOMIC);
+ if (unlikely(!skb))
+ return NULL;
+
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index 7b5c4522b426a6..955f061a55e9ae 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -1599,6 +1599,7 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
+ int count;
+ int rq_flushed = 0, sq_flushed;
+ unsigned long flag;
++ struct ib_event ev;
+
+ pr_debug("qhp %p rchp %p schp %p\n", qhp, rchp, schp);
+
+@@ -1607,6 +1608,13 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
+ if (schp != rchp)
+ spin_lock(&schp->lock);
+ spin_lock(&qhp->lock);
++ if (qhp->srq && qhp->attr.state == C4IW_QP_STATE_ERROR &&
++ qhp->ibqp.event_handler) {
++ ev.device = qhp->ibqp.device;
++ ev.element.qp = &qhp->ibqp;
++ ev.event = IB_EVENT_QP_LAST_WQE_REACHED;
++ qhp->ibqp.event_handler(&ev, qhp->ibqp.qp_context);
++ }
+
+ if (qhp->wq.flushed) {
+ spin_unlock(&qhp->lock);
+diff --git a/drivers/infiniband/hw/hns/Kconfig b/drivers/infiniband/hw/hns/Kconfig
+index ab3fbba70789ca..44cdb706fe276d 100644
+--- a/drivers/infiniband/hw/hns/Kconfig
++++ b/drivers/infiniband/hw/hns/Kconfig
+@@ -1,21 +1,11 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-config INFINIBAND_HNS
+- tristate "HNS RoCE Driver"
+- depends on NET_VENDOR_HISILICON
+- depends on ARM64 || (COMPILE_TEST && 64BIT)
+- depends on (HNS_DSAF && HNS_ENET) || HNS3
+- help
+- This is a RoCE/RDMA driver for the Hisilicon RoCE engine.
+-
+- To compile HIP08 driver as module, choose M here.
+-
+ config INFINIBAND_HNS_HIP08
+- bool "Hisilicon Hip08 Family RoCE support"
+- depends on INFINIBAND_HNS && PCI && HNS3
+- depends on INFINIBAND_HNS=m || HNS3=y
++ tristate "Hisilicon Hip08 Family RoCE support"
++ depends on ARM64 || (COMPILE_TEST && 64BIT)
++ depends on PCI && HNS3
+ help
+ RoCE driver support for Hisilicon RoCE engine in Hisilicon Hip08 SoC.
+ The RoCE engine is a PCI device.
+
+- To compile this driver, choose Y here: if INFINIBAND_HNS is m, this
+- module will be called hns-roce-hw-v2.
++ To compile this driver, choose M here. This module will be called
++ hns-roce-hw-v2.
+diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
+index be1e1cdbcfa8a8..7917af8e6380e8 100644
+--- a/drivers/infiniband/hw/hns/Makefile
++++ b/drivers/infiniband/hw/hns/Makefile
+@@ -5,12 +5,9 @@
+
+ ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
+
+-hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
++hns-roce-hw-v2-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
+ hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
+ hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o hns_roce_srq.o hns_roce_restrack.o \
+- hns_roce_debugfs.o
++ hns_roce_debugfs.o hns_roce_hw_v2.o
+
+-ifdef CONFIG_INFINIBAND_HNS_HIP08
+-hns-roce-hw-v2-objs := hns_roce_hw_v2.o $(hns-roce-objs)
+-obj-$(CONFIG_INFINIBAND_HNS) += hns-roce-hw-v2.o
+-endif
++obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns-roce-hw-v2.o
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 529db874d67c69..b1bbdcff631d56 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -351,7 +351,7 @@ static int mlx4_ib_del_gid(const struct ib_gid_attr *attr, void **context)
+ struct mlx4_port_gid_table *port_gid_table;
+ int ret = 0;
+ int hw_update = 0;
+- struct gid_entry *gids;
++ struct gid_entry *gids = NULL;
+
+ if (!rdma_cap_roce_gid_table(attr->device, attr->port_num))
+ return -EINVAL;
+@@ -389,10 +389,10 @@ static int mlx4_ib_del_gid(const struct ib_gid_attr *attr, void **context)
+ }
+ spin_unlock_bh(&iboe->lock);
+
+- if (!ret && hw_update) {
++ if (gids)
+ ret = mlx4_ib_update_gids(gids, ibdev, attr->port_num);
+- kfree(gids);
+- }
++
++ kfree(gids);
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 4b37446758fd4e..64b441542cd5dd 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -228,13 +228,27 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
+ unsigned long idx = ib_umem_start(odp) >> MLX5_IMR_MTT_SHIFT;
+ struct mlx5_ib_mr *imr = mr->parent;
+
++ /*
++ * If userspace is racing to free the parent implicit ODP MR then we can
++ * lose the race with parent destruction. In this case
++ * mlx5_ib_free_odp_mr() will free everything in the implicit_children
++ * xarray, so a NOP is fine. This child MR cannot be destroyed here because
++ * we are under its umem_mutex.
++ */
+ if (!refcount_inc_not_zero(&imr->mmkey.usecount))
+ return;
+
+- xa_erase(&imr->implicit_children, idx);
++ xa_lock(&imr->implicit_children);
++ if (__xa_cmpxchg(&imr->implicit_children, idx, mr, NULL, GFP_KERNEL) !=
++ mr) {
++ xa_unlock(&imr->implicit_children);
++ return;
++ }
++
+ if (MLX5_CAP_ODP(mr_to_mdev(mr)->mdev, mem_page_fault))
+- xa_erase(&mr_to_mdev(mr)->odp_mkeys,
+- mlx5_base_mkey(mr->mmkey.key));
++ __xa_erase(&mr_to_mdev(mr)->odp_mkeys,
++ mlx5_base_mkey(mr->mmkey.key));
++ xa_unlock(&imr->implicit_children);
+
+ /* Freeing a MR is a sleeping operation, so bounce to a work queue */
+ INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work);
+@@ -500,18 +514,18 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr,
+ refcount_inc(&ret->mmkey.usecount);
+ goto out_lock;
+ }
+- xa_unlock(&imr->implicit_children);
+
+ if (MLX5_CAP_ODP(dev->mdev, mem_page_fault)) {
+- ret = xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
+- &mr->mmkey, GFP_KERNEL);
++ ret = __xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
++ &mr->mmkey, GFP_KERNEL);
+ if (xa_is_err(ret)) {
+ ret = ERR_PTR(xa_err(ret));
+- xa_erase(&imr->implicit_children, idx);
+- goto out_mr;
++ __xa_erase(&imr->implicit_children, idx);
++ goto out_lock;
+ }
+ mr->mmkey.type = MLX5_MKEY_IMPLICIT_CHILD;
+ }
++ xa_unlock(&imr->implicit_children);
+ mlx5_ib_dbg(mr_to_mdev(imr), "key %x mr %p\n", mr->mmkey.key, mr);
+ return mr;
+
+@@ -944,8 +958,7 @@ static struct mlx5_ib_mkey *find_odp_mkey(struct mlx5_ib_dev *dev, u32 key)
+ /*
+ * Handle a single data segment in a page-fault WQE or RDMA region.
+ *
+- * Returns number of OS pages retrieved on success. The caller may continue to
+- * the next data segment.
++ * Returns zero on success. The caller may continue to the next data segment.
+ * Can return the following error codes:
+ * -EAGAIN to designate a temporary error. The caller will abort handling the
+ * page fault and resolve it.
+@@ -958,7 +971,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ u32 *bytes_committed,
+ u32 *bytes_mapped)
+ {
+- int npages = 0, ret, i, outlen, cur_outlen = 0, depth = 0;
++ int ret, i, outlen, cur_outlen = 0, depth = 0, pages_in_range;
+ struct pf_frame *head = NULL, *frame;
+ struct mlx5_ib_mkey *mmkey;
+ struct mlx5_ib_mr *mr;
+@@ -993,13 +1006,20 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ case MLX5_MKEY_MR:
+ mr = container_of(mmkey, struct mlx5_ib_mr, mmkey);
+
++ pages_in_range = (ALIGN(io_virt + bcnt, PAGE_SIZE) -
++ (io_virt & PAGE_MASK)) >>
++ PAGE_SHIFT;
+ ret = pagefault_mr(mr, io_virt, bcnt, bytes_mapped, 0, false);
+ if (ret < 0)
+ goto end;
+
+ mlx5_update_odp_stats(mr, faults, ret);
+
+- npages += ret;
++ if (ret < pages_in_range) {
++ ret = -EFAULT;
++ goto end;
++ }
++
+ ret = 0;
+ break;
+
+@@ -1090,7 +1110,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ kfree(out);
+
+ *bytes_committed = 0;
+- return ret ? ret : npages;
++ return ret;
+ }
+
+ /*
+@@ -1109,8 +1129,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ * the committed bytes).
+ * @receive_queue: receive WQE end of sg list
+ *
+- * Returns the number of pages loaded if positive, zero for an empty WQE, or a
+- * negative error code.
++ * Returns zero for success or a negative error code.
+ */
+ static int pagefault_data_segments(struct mlx5_ib_dev *dev,
+ struct mlx5_pagefault *pfault,
+@@ -1118,7 +1137,7 @@ static int pagefault_data_segments(struct mlx5_ib_dev *dev,
+ void *wqe_end, u32 *bytes_mapped,
+ u32 *total_wqe_bytes, bool receive_queue)
+ {
+- int ret = 0, npages = 0;
++ int ret = 0;
+ u64 io_virt;
+ __be32 key;
+ u32 byte_count;
+@@ -1175,10 +1194,9 @@ static int pagefault_data_segments(struct mlx5_ib_dev *dev,
+ bytes_mapped);
+ if (ret < 0)
+ break;
+- npages += ret;
+ }
+
+- return ret < 0 ? ret : npages;
++ return ret;
+ }
+
+ /*
+@@ -1414,12 +1432,6 @@ static void mlx5_ib_mr_wqe_pfault_handler(struct mlx5_ib_dev *dev,
+ free_page((unsigned long)wqe_start);
+ }
+
+-static int pages_in_range(u64 address, u32 length)
+-{
+- return (ALIGN(address + length, PAGE_SIZE) -
+- (address & PAGE_MASK)) >> PAGE_SHIFT;
+-}
+-
+ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev,
+ struct mlx5_pagefault *pfault)
+ {
+@@ -1458,7 +1470,7 @@ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev,
+ if (ret == -EAGAIN) {
+ /* We're racing with an invalidation, don't prefetch */
+ prefetch_activated = 0;
+- } else if (ret < 0 || pages_in_range(address, length) > ret) {
++ } else if (ret < 0) {
+ mlx5_ib_page_fault_resume(dev, pfault, 1);
+ if (ret != -ENOENT)
+ mlx5_ib_dbg(dev, "PAGE FAULT error %d. QP 0x%llx, type: 0x%x\n",
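The odp.c hunk above closes a teardown race by doing the lookup check and the erase inside one xa_lock critical section: __xa_cmpxchg() removes the implicit_children entry only if it still points at this MR, and the odp_mkeys erase happens under the same lock. A userspace sketch of that "erase only if still mine" idiom using a pthread mutex in place of the xarray lock (all names below are invented):

#include <pthread.h>
#include <stdio.h>

#define NSLOTS 8

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static void *slots[NSLOTS];	/* stands in for the xarray */

/* Erase slots[idx] only if it still holds @me; if another thread
 * already replaced or cleared it, back off without touching the new
 * occupant. Check and erase share one critical section. */
static int erase_if_mine(unsigned int idx, void *me)
{
	int erased = 0;

	pthread_mutex_lock(&table_lock);
	if (slots[idx] == me) {
		slots[idx] = NULL;
		erased = 1;
	}
	pthread_mutex_unlock(&table_lock);
	return erased;
}

int main(void)
{
	int a, b;

	slots[3] = &a;
	printf("erase &b: %d\n", erase_if_mine(3, &b));	/* 0: not ours */
	printf("erase &a: %d\n", erase_if_mine(3, &a));	/* 1: removed */
	return 0;
}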
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index d2f57ead78ad12..003f681e5dc022 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -129,7 +129,7 @@ enum rxe_device_param {
+ enum rxe_port_param {
+ RXE_PORT_GID_TBL_LEN = 1024,
+ RXE_PORT_PORT_CAP_FLAGS = IB_PORT_CM_SUP,
+- RXE_PORT_MAX_MSG_SZ = 0x800000,
++ RXE_PORT_MAX_MSG_SZ = (1UL << 31),
+ RXE_PORT_BAD_PKEY_CNTR = 0,
+ RXE_PORT_QKEY_VIOL_CNTR = 0,
+ RXE_PORT_LID = 0,
+diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
+index 67567d62195e86..d9cb682fd71f88 100644
+--- a/drivers/infiniband/sw/rxe/rxe_pool.c
++++ b/drivers/infiniband/sw/rxe/rxe_pool.c
+@@ -178,7 +178,6 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable)
+ {
+ struct rxe_pool *pool = elem->pool;
+ struct xarray *xa = &pool->xa;
+- static int timeout = RXE_POOL_TIMEOUT;
+ int ret, err = 0;
+ void *xa_ret;
+
+@@ -202,19 +201,19 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable)
+ * return to rdma-core
+ */
+ if (sleepable) {
+- if (!completion_done(&elem->complete) && timeout) {
++ if (!completion_done(&elem->complete)) {
+ ret = wait_for_completion_timeout(&elem->complete,
+- timeout);
++ msecs_to_jiffies(50000));
+
+ /* Shouldn't happen. There are still references to
+ * the object but, rather than deadlock, free the
+ * object or pass back to rdma-core.
+ */
+ if (WARN_ON(!ret))
+- err = -EINVAL;
++ err = -ETIMEDOUT;
+ }
+ } else {
+- unsigned long until = jiffies + timeout;
++ unsigned long until = jiffies + RXE_POOL_TIMEOUT;
+
+ /* AH objects are unique in that the destroy_ah verb
+ * can be called in atomic context. This delay
+@@ -226,7 +225,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable)
+ mdelay(1);
+
+ if (WARN_ON(!completion_done(&elem->complete)))
+- err = -EINVAL;
++ err = -ETIMEDOUT;
+ }
+
+ if (pool->cleanup)
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 8a5fc20fd18692..589ac0d8489dbd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -696,7 +696,7 @@ static int validate_send_wr(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
+ for (i = 0; i < ibwr->num_sge; i++)
+ length += ibwr->sg_list[i].length;
+
+- if (length > (1UL << 31)) {
++ if (length > RXE_PORT_MAX_MSG_SZ) {
+ rxe_err_qp(qp, "message length too long\n");
+ break;
+ }
+@@ -980,8 +980,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
+ for (i = 0; i < num_sge; i++)
+ length += ibwr->sg_list[i].length;
+
+- /* IBA max message size is 2^31 */
+- if (length >= (1UL<<31)) {
++ if (length > RXE_PORT_MAX_MSG_SZ) {
+ err = -EINVAL;
+ rxe_dbg("message length too long\n");
+ goto err_out;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 4e17d546d4ccf3..bf38ac6f87c47a 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -584,6 +584,9 @@ static void dev_free(struct kref *ref)
+ list_del(&dev->entry);
+ mutex_unlock(&pool->mutex);
+
++ if (pool->ops && pool->ops->deinit)
++ pool->ops->deinit(dev);
++
+ ib_dealloc_pd(dev->ib_pd);
+ kfree(dev);
+ }
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 2916e77f589b81..7289ae0b83aced 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -3978,7 +3978,6 @@ static struct srp_host *srp_add_port(struct srp_device *device, u32 port)
+ return host;
+
+ put_host:
+- device_del(&host->dev);
+ put_device(&host->dev);
+ return NULL;
+ }
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 6386fa4556d9b8..6fac9ee8dd3ed0 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -87,7 +87,6 @@ int amd_iommu_complete_ppr(struct device *dev, u32 pasid, int status, int tag);
+ */
+ void amd_iommu_flush_all_caches(struct amd_iommu *iommu);
+ void amd_iommu_update_and_flush_device_table(struct protection_domain *domain);
+-void amd_iommu_domain_update(struct protection_domain *domain);
+ void amd_iommu_domain_flush_pages(struct protection_domain *domain,
+ u64 address, size_t size);
+ void amd_iommu_dev_flush_pasid_pages(struct iommu_dev_data *dev_data,
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 8364cd6fa47d01..a24a97a2c6469b 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -1606,15 +1606,6 @@ void amd_iommu_update_and_flush_device_table(struct protection_domain *domain)
+ domain_flush_complete(domain);
+ }
+
+-void amd_iommu_domain_update(struct protection_domain *domain)
+-{
+- /* Update device table */
+- amd_iommu_update_and_flush_device_table(domain);
+-
+- /* Flush domain TLB(s) and wait for completion */
+- amd_iommu_domain_flush_all(domain);
+-}
+-
+ int amd_iommu_complete_ppr(struct device *dev, u32 pasid, int status, int tag)
+ {
+ struct iommu_dev_data *dev_data;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 353fea58cd318a..f1a8f8c75cb0e9 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2702,9 +2702,14 @@ static int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
+ * Translation Requests and Translated transactions are denied
+ * as though ATS is disabled for the stream (STE.EATS == 0b00),
+ * causing F_BAD_ATS_TREQ and F_TRANSL_FORBIDDEN events
+- * (IHI0070Ea 5.2 Stream Table Entry). Thus ATS can only be
+- * enabled if we have arm_smmu_domain, those always have page
+- * tables.
++ * (IHI0070Ea 5.2 Stream Table Entry).
++ *
++ * However, if we have installed a CD table and are using S1DSS
++ * then ATS will work in S1DSS bypass. See "13.6.4 Full ATS
++ * skipping stage 1".
++ *
++ * Disable ATS if we are going to create a normal 0b100 bypass
++ * STE.
+ */
+ state->ats_enabled = arm_smmu_ats_supported(master);
+ }
+@@ -3017,8 +3022,10 @@ static void arm_smmu_attach_dev_ste(struct iommu_domain *domain,
+ if (arm_smmu_ssids_in_use(&master->cd_table)) {
+ /*
+ * If a CD table has to be present then we need to run with ATS
+- * on even though the RID will fail ATS queries with UR. This is
+- * because we have no idea what the PASID's need.
++ * on because we have to assume a PASID is using ATS. For
++		 * IDENTITY this will set things up so that S1DSS=bypass, which
++		 * follows the explanation in "13.6.4 Full ATS skipping stage 1"
++		 * and allows ATS on the RID to work.
+ */
+ state.cd_needs_ats = true;
+ arm_smmu_attach_prepare(&state, domain);
+diff --git a/drivers/iommu/iommufd/iova_bitmap.c b/drivers/iommu/iommufd/iova_bitmap.c
+index d90b9e253412ff..2cdc4f542df472 100644
+--- a/drivers/iommu/iommufd/iova_bitmap.c
++++ b/drivers/iommu/iommufd/iova_bitmap.c
+@@ -130,7 +130,7 @@ struct iova_bitmap {
+ static unsigned long iova_bitmap_offset_to_index(struct iova_bitmap *bitmap,
+ unsigned long iova)
+ {
+- unsigned long pgsize = 1 << bitmap->mapped.pgshift;
++ unsigned long pgsize = 1UL << bitmap->mapped.pgshift;
+
+ return iova / (BITS_PER_TYPE(*bitmap->bitmap) * pgsize);
+ }
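+
+The 1UL change above avoids a classic promotion bug: the literal 1 is an
+int, so "1 << pgshift" overflows before the result is widened, even when
+it is stored in an unsigned long. A one-line sketch (assuming a 64-bit
+unsigned long):
+
+static unsigned long example_pgsize(unsigned int pgshift)
+{
+	/* 1UL shifts in the wider type; valid for pgshift up to 63 */
+	return 1UL << pgshift;
+}
+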
+diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
+index b5f5d27ee9634e..649fe79d0f0cc6 100644
+--- a/drivers/iommu/iommufd/main.c
++++ b/drivers/iommu/iommufd/main.c
+@@ -130,7 +130,7 @@ static int iommufd_object_dec_wait_shortterm(struct iommufd_ctx *ictx,
+ if (wait_event_timeout(ictx->destroy_wait,
+ refcount_read(&to_destroy->shortterm_users) ==
+ 0,
+- msecs_to_jiffies(10000)))
++ msecs_to_jiffies(60000)))
+ return 0;
+
+ pr_crit("Time out waiting for iommufd object to become free\n");
+diff --git a/drivers/leds/leds-cht-wcove.c b/drivers/leds/leds-cht-wcove.c
+index b4998402b8c6f0..711ac4bd60580d 100644
+--- a/drivers/leds/leds-cht-wcove.c
++++ b/drivers/leds/leds-cht-wcove.c
+@@ -394,7 +394,7 @@ static int cht_wc_leds_probe(struct platform_device *pdev)
+ led->cdev.pattern_clear = cht_wc_leds_pattern_clear;
+ led->cdev.max_brightness = 255;
+
+- ret = led_classdev_register(&pdev->dev, &led->cdev);
++ ret = devm_led_classdev_register(&pdev->dev, &led->cdev);
+ if (ret < 0)
+ return ret;
+ }
+@@ -406,10 +406,6 @@ static int cht_wc_leds_probe(struct platform_device *pdev)
+ static void cht_wc_leds_remove(struct platform_device *pdev)
+ {
+ struct cht_wc_leds *leds = platform_get_drvdata(pdev);
+- int i;
+-
+- for (i = 0; i < CHT_WC_LED_COUNT; i++)
+- led_classdev_unregister(&leds->leds[i].cdev);
+
+ /* Restore LED1 regs if hw-control was active else leave LED1 off */
+ if (!(leds->led1_initial_regs.ctrl & CHT_WC_LED1_SWCTL))
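+
+The devm_ conversion above ties the LED class device's lifetime to the
+bound struct device, which is why the manual unregister loop in remove()
+can go away. A hypothetical probe sketch of the same pattern:
+
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/leds.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+static int example_led_probe(struct platform_device *pdev)
+{
+	struct led_classdev *cdev;
+
+	cdev = devm_kzalloc(&pdev->dev, sizeof(*cdev), GFP_KERNEL);
+	if (!cdev)
+		return -ENOMEM;
+
+	cdev->name = "example:status";
+	cdev->max_brightness = 255;
+
+	/* unregistered automatically when the driver unbinds */
+	return devm_led_classdev_register(&pdev->dev, cdev);
+}
+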
+diff --git a/drivers/leds/leds-netxbig.c b/drivers/leds/leds-netxbig.c
+index af5a908b8d9edd..e95287416ef879 100644
+--- a/drivers/leds/leds-netxbig.c
++++ b/drivers/leds/leds-netxbig.c
+@@ -439,6 +439,7 @@ static int netxbig_leds_get_of_pdata(struct device *dev,
+ }
+ gpio_ext_pdev = of_find_device_by_node(gpio_ext_np);
+ if (!gpio_ext_pdev) {
++ of_node_put(gpio_ext_np);
+ dev_err(dev, "Failed to find platform device for gpio-ext\n");
+ return -ENODEV;
+ }
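+
+The one-line fix above plugs an OF node reference leak: a node obtained
+via of_parse_phandle() must be released on every exit path, not only on
+success. A hypothetical sketch of the balanced pattern:
+
+#include <linux/errno.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+
+static int example_lookup(struct device_node *np)
+{
+	struct device_node *ext_np;
+	struct platform_device *ext_pdev;
+
+	ext_np = of_parse_phandle(np, "gpio-ext", 0);
+	if (!ext_np)
+		return -ENODEV;
+
+	ext_pdev = of_find_device_by_node(ext_np);
+	of_node_put(ext_np);	/* drop the node ref on both outcomes */
+	if (!ext_pdev)
+		return -ENODEV;
+
+	platform_device_put(ext_pdev);	/* sketch only: release immediately */
+	return 0;
+}
+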
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index c3a42dd66ce551..2e3087556adb37 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1671,24 +1671,13 @@ __acquires(bitmap->lock)
+ }
+
+ static int bitmap_startwrite(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool behind)
++ unsigned long sectors)
+ {
+ struct bitmap *bitmap = mddev->bitmap;
+
+ if (!bitmap)
+ return 0;
+
+- if (behind) {
+- int bw;
+- atomic_inc(&bitmap->behind_writes);
+- bw = atomic_read(&bitmap->behind_writes);
+- if (bw > bitmap->behind_writes_used)
+- bitmap->behind_writes_used = bw;
+-
+- pr_debug("inc write-behind count %d/%lu\n",
+- bw, bitmap->mddev->bitmap_info.max_write_behind);
+- }
+-
+ while (sectors) {
+ sector_t blocks;
+ bitmap_counter_t *bmc;
+@@ -1737,21 +1726,13 @@ static int bitmap_startwrite(struct mddev *mddev, sector_t offset,
+ }
+
+ static void bitmap_endwrite(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool success, bool behind)
++ unsigned long sectors)
+ {
+ struct bitmap *bitmap = mddev->bitmap;
+
+ if (!bitmap)
+ return;
+
+- if (behind) {
+- if (atomic_dec_and_test(&bitmap->behind_writes))
+- wake_up(&bitmap->behind_wait);
+- pr_debug("dec write-behind count %d/%lu\n",
+- atomic_read(&bitmap->behind_writes),
+- bitmap->mddev->bitmap_info.max_write_behind);
+- }
+-
+ while (sectors) {
+ sector_t blocks;
+ unsigned long flags;
+@@ -1764,15 +1745,16 @@ static void bitmap_endwrite(struct mddev *mddev, sector_t offset,
+ return;
+ }
+
+- if (success && !bitmap->mddev->degraded &&
+- bitmap->events_cleared < bitmap->mddev->events) {
+- bitmap->events_cleared = bitmap->mddev->events;
+- bitmap->need_sync = 1;
+- sysfs_notify_dirent_safe(bitmap->sysfs_can_clear);
+- }
+-
+- if (!success && !NEEDED(*bmc))
++ if (!bitmap->mddev->degraded) {
++ if (bitmap->events_cleared < bitmap->mddev->events) {
++ bitmap->events_cleared = bitmap->mddev->events;
++ bitmap->need_sync = 1;
++ sysfs_notify_dirent_safe(
++ bitmap->sysfs_can_clear);
++ }
++ } else if (!NEEDED(*bmc)) {
+ *bmc |= NEEDED_MASK;
++ }
+
+ if (COUNTER(*bmc) == COUNTER_MAX)
+ wake_up(&bitmap->overflow_wait);
+@@ -2062,6 +2044,37 @@ static void md_bitmap_free(void *data)
+ kfree(bitmap);
+ }
+
++static void bitmap_start_behind_write(struct mddev *mddev)
++{
++ struct bitmap *bitmap = mddev->bitmap;
++ int bw;
++
++ if (!bitmap)
++ return;
++
++ atomic_inc(&bitmap->behind_writes);
++ bw = atomic_read(&bitmap->behind_writes);
++ if (bw > bitmap->behind_writes_used)
++ bitmap->behind_writes_used = bw;
++
++ pr_debug("inc write-behind count %d/%lu\n",
++ bw, bitmap->mddev->bitmap_info.max_write_behind);
++}
++
++static void bitmap_end_behind_write(struct mddev *mddev)
++{
++ struct bitmap *bitmap = mddev->bitmap;
++
++ if (!bitmap)
++ return;
++
++ if (atomic_dec_and_test(&bitmap->behind_writes))
++ wake_up(&bitmap->behind_wait);
++ pr_debug("dec write-behind count %d/%lu\n",
++ atomic_read(&bitmap->behind_writes),
++ bitmap->mddev->bitmap_info.max_write_behind);
++}
++
+ static void bitmap_wait_behind_writes(struct mddev *mddev)
+ {
+ struct bitmap *bitmap = mddev->bitmap;
+@@ -2342,7 +2355,10 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats)
+
+ if (!bitmap)
+ return -ENOENT;
+-
++ if (bitmap->mddev->bitmap_info.external)
++ return -ENOENT;
++ if (!bitmap->storage.sb_page) /* no superblock */
++ return -EINVAL;
+ sb = kmap_local_page(bitmap->storage.sb_page);
+ stats->sync_size = le64_to_cpu(sb->sync_size);
+ kunmap_local(sb);
+@@ -2981,6 +2997,9 @@ static struct bitmap_operations bitmap_ops = {
+ .dirty_bits = bitmap_dirty_bits,
+ .unplug = bitmap_unplug,
+ .daemon_work = bitmap_daemon_work,
++
++ .start_behind_write = bitmap_start_behind_write,
++ .end_behind_write = bitmap_end_behind_write,
+ .wait_behind_writes = bitmap_wait_behind_writes,
+
+ .startwrite = bitmap_startwrite,
+diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
+index 662e6fc141a775..31c93019c76bf3 100644
+--- a/drivers/md/md-bitmap.h
++++ b/drivers/md/md-bitmap.h
+@@ -84,12 +84,15 @@ struct bitmap_operations {
+ unsigned long e);
+ void (*unplug)(struct mddev *mddev, bool sync);
+ void (*daemon_work)(struct mddev *mddev);
++
++ void (*start_behind_write)(struct mddev *mddev);
++ void (*end_behind_write)(struct mddev *mddev);
+ void (*wait_behind_writes)(struct mddev *mddev);
+
+ int (*startwrite)(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool behind);
++ unsigned long sectors);
+ void (*endwrite)(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool success, bool behind);
++ unsigned long sectors);
+ bool (*start_sync)(struct mddev *mddev, sector_t offset,
+ sector_t *blocks, bool degraded);
+ void (*end_sync)(struct mddev *mddev, sector_t offset, sector_t *blocks);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 67108c397c5a86..44c4c518430d9b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8376,6 +8376,10 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ return 0;
+
+ spin_unlock(&all_mddevs_lock);
++
++	/* prevent the bitmap from being freed after checking */
++ mutex_lock(&mddev->bitmap_info.mutex);
++
+ spin_lock(&mddev->lock);
+ if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
+ seq_printf(seq, "%s : ", mdname(mddev));
+@@ -8451,6 +8455,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+ }
+ spin_unlock(&mddev->lock);
++ mutex_unlock(&mddev->bitmap_info.mutex);
+ spin_lock(&all_mddevs_lock);
+
+ if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
+@@ -8745,12 +8750,32 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+ }
+ EXPORT_SYMBOL_GPL(md_submit_discard_bio);
+
++static void md_bitmap_start(struct mddev *mddev,
++ struct md_io_clone *md_io_clone)
++{
++ if (mddev->pers->bitmap_sector)
++ mddev->pers->bitmap_sector(mddev, &md_io_clone->offset,
++ &md_io_clone->sectors);
++
++ mddev->bitmap_ops->startwrite(mddev, md_io_clone->offset,
++ md_io_clone->sectors);
++}
++
++static void md_bitmap_end(struct mddev *mddev, struct md_io_clone *md_io_clone)
++{
++ mddev->bitmap_ops->endwrite(mddev, md_io_clone->offset,
++ md_io_clone->sectors);
++}
++
+ static void md_end_clone_io(struct bio *bio)
+ {
+ struct md_io_clone *md_io_clone = bio->bi_private;
+ struct bio *orig_bio = md_io_clone->orig_bio;
+ struct mddev *mddev = md_io_clone->mddev;
+
++ if (bio_data_dir(orig_bio) == WRITE && mddev->bitmap)
++ md_bitmap_end(mddev, md_io_clone);
++
+ if (bio->bi_status && !orig_bio->bi_status)
+ orig_bio->bi_status = bio->bi_status;
+
+@@ -8775,6 +8800,12 @@ static void md_clone_bio(struct mddev *mddev, struct bio **bio)
+ if (blk_queue_io_stat(bdev->bd_disk->queue))
+ md_io_clone->start_time = bio_start_io_acct(*bio);
+
++ if (bio_data_dir(*bio) == WRITE && mddev->bitmap) {
++ md_io_clone->offset = (*bio)->bi_iter.bi_sector;
++ md_io_clone->sectors = bio_sectors(*bio);
++ md_bitmap_start(mddev, md_io_clone);
++ }
++
+ clone->bi_end_io = md_end_clone_io;
+ clone->bi_private = md_io_clone;
+ *bio = clone;
+@@ -8793,6 +8824,9 @@ void md_free_cloned_bio(struct bio *bio)
+ struct bio *orig_bio = md_io_clone->orig_bio;
+ struct mddev *mddev = md_io_clone->mddev;
+
++ if (bio_data_dir(orig_bio) == WRITE && mddev->bitmap)
++ md_bitmap_end(mddev, md_io_clone);
++
+ if (bio->bi_status && !orig_bio->bi_status)
+ orig_bio->bi_status = bio->bi_status;
+
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 5d2e6bd58e4da2..8826dce9717da9 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -746,6 +746,9 @@ struct md_personality
+ void *(*takeover) (struct mddev *mddev);
+ /* Changes the consistency policy of an active array. */
+ int (*change_consistency_policy)(struct mddev *mddev, const char *buf);
++	/* convert IO ranges from the array to the bitmap */
++ void (*bitmap_sector)(struct mddev *mddev, sector_t *offset,
++ unsigned long *sectors);
+ };
+
+ struct md_sysfs_entry {
+@@ -828,6 +831,8 @@ struct md_io_clone {
+ struct mddev *mddev;
+ struct bio *orig_bio;
+ unsigned long start_time;
++ sector_t offset;
++ unsigned long sectors;
+ struct bio bio_clone;
+ };
+
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 6c9d24203f39f0..d83fe3b3abc009 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -420,10 +420,8 @@ static void close_write(struct r1bio *r1_bio)
+ r1_bio->behind_master_bio = NULL;
+ }
+
+- /* clear the bitmap if all writes complete successfully */
+- mddev->bitmap_ops->endwrite(mddev, r1_bio->sector, r1_bio->sectors,
+- !test_bit(R1BIO_Degraded, &r1_bio->state),
+- test_bit(R1BIO_BehindIO, &r1_bio->state));
++ if (test_bit(R1BIO_BehindIO, &r1_bio->state))
++ mddev->bitmap_ops->end_behind_write(mddev);
+ md_write_end(mddev);
+ }
+
+@@ -480,8 +478,6 @@ static void raid1_end_write_request(struct bio *bio)
+ if (!test_bit(Faulty, &rdev->flags))
+ set_bit(R1BIO_WriteError, &r1_bio->state);
+ else {
+- /* Fail the request */
+- set_bit(R1BIO_Degraded, &r1_bio->state);
+ /* Finished with this branch */
+ r1_bio->bios[mirror] = NULL;
+ to_put = bio;
+@@ -1492,11 +1488,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ break;
+ }
+ r1_bio->bios[i] = NULL;
+- if (!rdev || test_bit(Faulty, &rdev->flags)) {
+- if (i < conf->raid_disks)
+- set_bit(R1BIO_Degraded, &r1_bio->state);
++ if (!rdev || test_bit(Faulty, &rdev->flags))
+ continue;
+- }
+
+ atomic_inc(&rdev->nr_pending);
+ if (test_bit(WriteErrorSeen, &rdev->flags)) {
+@@ -1522,16 +1515,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ */
+ max_sectors = bad_sectors;
+ rdev_dec_pending(rdev, mddev);
+- /* We don't set R1BIO_Degraded as that
+- * only applies if the disk is
+- * missing, so it might be re-added,
+- * and we want to know to recover this
+- * chunk.
+- * In this case the device is here,
+- * and the fact that this chunk is not
+- * in-sync is recorded in the bad
+- * block log
+- */
+ continue;
+ }
+ if (is_bad) {
+@@ -1611,9 +1594,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ stats.behind_writes < max_write_behind)
+ alloc_behind_master_bio(r1_bio, bio);
+
+- mddev->bitmap_ops->startwrite(
+- mddev, r1_bio->sector, r1_bio->sectors,
+- test_bit(R1BIO_BehindIO, &r1_bio->state));
++ if (test_bit(R1BIO_BehindIO, &r1_bio->state))
++ mddev->bitmap_ops->start_behind_write(mddev);
+ first_clone = 0;
+ }
+
+@@ -2567,12 +2549,10 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
+ * errors.
+ */
+ fail = true;
+- if (!narrow_write_error(r1_bio, m)) {
++ if (!narrow_write_error(r1_bio, m))
+ md_error(conf->mddev,
+ conf->mirrors[m].rdev);
+ /* an I/O failed, we can't clear the bitmap */
+- set_bit(R1BIO_Degraded, &r1_bio->state);
+- }
+ rdev_dec_pending(conf->mirrors[m].rdev,
+ conf->mddev);
+ }
+@@ -2663,8 +2643,6 @@ static void raid1d(struct md_thread *thread)
+ list_del(&r1_bio->retry_list);
+ idx = sector_to_idx(r1_bio->sector);
+ atomic_dec(&conf->nr_queued[idx]);
+- if (mddev->degraded)
+- set_bit(R1BIO_Degraded, &r1_bio->state);
+ if (test_bit(R1BIO_WriteError, &r1_bio->state))
+ close_write(r1_bio);
+ raid_end_bio_io(r1_bio);
+diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
+index 5300cbaa58a415..33f318fcc268d8 100644
+--- a/drivers/md/raid1.h
++++ b/drivers/md/raid1.h
+@@ -188,7 +188,6 @@ struct r1bio {
+ enum r1bio_state {
+ R1BIO_Uptodate,
+ R1BIO_IsSync,
+- R1BIO_Degraded,
+ R1BIO_BehindIO,
+ /* Set ReadError on bios that experience a readerror so that
+ * raid1d knows what to do with them.
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 862b1fb71d864b..daf42acc4fb6f3 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -428,10 +428,6 @@ static void close_write(struct r10bio *r10_bio)
+ {
+ struct mddev *mddev = r10_bio->mddev;
+
+- /* clear the bitmap if all writes complete successfully */
+- mddev->bitmap_ops->endwrite(mddev, r10_bio->sector, r10_bio->sectors,
+- !test_bit(R10BIO_Degraded, &r10_bio->state),
+- false);
+ md_write_end(mddev);
+ }
+
+@@ -501,7 +497,6 @@ static void raid10_end_write_request(struct bio *bio)
+ set_bit(R10BIO_WriteError, &r10_bio->state);
+ else {
+ /* Fail the request */
+- set_bit(R10BIO_Degraded, &r10_bio->state);
+ r10_bio->devs[slot].bio = NULL;
+ to_put = bio;
+ dec_rdev = 1;
+@@ -1430,10 +1425,8 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ r10_bio->devs[i].bio = NULL;
+ r10_bio->devs[i].repl_bio = NULL;
+
+- if (!rdev && !rrdev) {
+- set_bit(R10BIO_Degraded, &r10_bio->state);
++ if (!rdev && !rrdev)
+ continue;
+- }
+ if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+ sector_t first_bad;
+ sector_t dev_sector = r10_bio->devs[i].addr;
+@@ -1450,14 +1443,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ * to other devices yet
+ */
+ max_sectors = bad_sectors;
+- /* We don't set R10BIO_Degraded as that
+- * only applies if the disk is missing,
+- * so it might be re-added, and we want to
+- * know to recover this chunk.
+- * In this case the device is here, and the
+- * fact that this chunk is not in-sync is
+- * recorded in the bad block log.
+- */
+ continue;
+ }
+ if (is_bad) {
+@@ -1493,8 +1478,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ md_account_bio(mddev, &bio);
+ r10_bio->master_bio = bio;
+ atomic_set(&r10_bio->remaining, 1);
+- mddev->bitmap_ops->startwrite(mddev, r10_bio->sector, r10_bio->sectors,
+- false);
+
+ for (i = 0; i < conf->copies; i++) {
+ if (r10_bio->devs[i].bio)
+@@ -2910,11 +2893,8 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
+ rdev_dec_pending(rdev, conf->mddev);
+ } else if (bio != NULL && bio->bi_status) {
+ fail = true;
+- if (!narrow_write_error(r10_bio, m)) {
++ if (!narrow_write_error(r10_bio, m))
+ md_error(conf->mddev, rdev);
+- set_bit(R10BIO_Degraded,
+- &r10_bio->state);
+- }
+ rdev_dec_pending(rdev, conf->mddev);
+ }
+ bio = r10_bio->devs[m].repl_bio;
+@@ -2973,8 +2953,6 @@ static void raid10d(struct md_thread *thread)
+ r10_bio = list_first_entry(&tmp, struct r10bio,
+ retry_list);
+ list_del(&r10_bio->retry_list);
+- if (mddev->degraded)
+- set_bit(R10BIO_Degraded, &r10_bio->state);
+
+ if (test_bit(R10BIO_WriteError,
+ &r10_bio->state))
+diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
+index 2e75e88d08023f..3f16ad6904a9fb 100644
+--- a/drivers/md/raid10.h
++++ b/drivers/md/raid10.h
+@@ -161,7 +161,6 @@ enum r10bio_state {
+ R10BIO_IsSync,
+ R10BIO_IsRecover,
+ R10BIO_IsReshape,
+- R10BIO_Degraded,
+ /* Set ReadError on bios that experience a read error
+ * so that raid10d knows what to do with them.
+ */
+diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
+index b4f7b79fd187d0..011246e16a99e5 100644
+--- a/drivers/md/raid5-cache.c
++++ b/drivers/md/raid5-cache.c
+@@ -313,10 +313,6 @@ void r5c_handle_cached_data_endio(struct r5conf *conf,
+ if (sh->dev[i].written) {
+ set_bit(R5_UPTODATE, &sh->dev[i].flags);
+ r5c_return_dev_pending_writes(conf, &sh->dev[i]);
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- !test_bit(STRIPE_DEGRADED, &sh->state),
+- false);
+ }
+ }
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 2fa1f270fb1d3c..39e7596e78c0b0 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -906,8 +906,7 @@ static bool stripe_can_batch(struct stripe_head *sh)
+ if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ return false;
+ return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+- !test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
+- is_full_stripe_write(sh);
++ is_full_stripe_write(sh);
+ }
+
+ /* we only do back search */
+@@ -1345,8 +1344,6 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
+ submit_bio_noacct(rbi);
+ }
+ if (!rdev && !rrdev) {
+- if (op_is_write(op))
+- set_bit(STRIPE_DEGRADED, &sh->state);
+ pr_debug("skip op %d on disc %d for sector %llu\n",
+ bi->bi_opf, i, (unsigned long long)sh->sector);
+ clear_bit(R5_LOCKED, &sh->dev[i].flags);
+@@ -2884,7 +2881,6 @@ static void raid5_end_write_request(struct bio *bi)
+ set_bit(R5_MadeGoodRepl, &sh->dev[i].flags);
+ } else {
+ if (bi->bi_status) {
+- set_bit(STRIPE_DEGRADED, &sh->state);
+ set_bit(WriteErrorSeen, &rdev->flags);
+ set_bit(R5_WriteError, &sh->dev[i].flags);
+ if (!test_and_set_bit(WantReplacement, &rdev->flags))
+@@ -3548,29 +3544,9 @@ static void __add_stripe_bio(struct stripe_head *sh, struct bio *bi,
+ (*bip)->bi_iter.bi_sector, sh->sector, dd_idx,
+ sh->dev[dd_idx].sector);
+
+- if (conf->mddev->bitmap && firstwrite) {
+- /* Cannot hold spinlock over bitmap_startwrite,
+- * but must ensure this isn't added to a batch until
+- * we have added to the bitmap and set bm_seq.
+- * So set STRIPE_BITMAP_PENDING to prevent
+- * batching.
+- * If multiple __add_stripe_bio() calls race here they
+- * much all set STRIPE_BITMAP_PENDING. So only the first one
+- * to complete "bitmap_startwrite" gets to set
+- * STRIPE_BIT_DELAY. This is important as once a stripe
+- * is added to a batch, STRIPE_BIT_DELAY cannot be changed
+- * any more.
+- */
+- set_bit(STRIPE_BITMAP_PENDING, &sh->state);
+- spin_unlock_irq(&sh->stripe_lock);
+- conf->mddev->bitmap_ops->startwrite(conf->mddev, sh->sector,
+- RAID5_STRIPE_SECTORS(conf), false);
+- spin_lock_irq(&sh->stripe_lock);
+- clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
+- if (!sh->batch_head) {
+- sh->bm_seq = conf->seq_flush+1;
+- set_bit(STRIPE_BIT_DELAY, &sh->state);
+- }
++ if (conf->mddev->bitmap && firstwrite && !sh->batch_head) {
++ sh->bm_seq = conf->seq_flush+1;
++ set_bit(STRIPE_BIT_DELAY, &sh->state);
+ }
+ }
+
+@@ -3621,7 +3597,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ BUG_ON(sh->batch_head);
+ for (i = disks; i--; ) {
+ struct bio *bi;
+- int bitmap_end = 0;
+
+ if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
+ struct md_rdev *rdev = conf->disks[i].rdev;
+@@ -3646,8 +3621,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ sh->dev[i].towrite = NULL;
+ sh->overwrite_disks = 0;
+ spin_unlock_irq(&sh->stripe_lock);
+- if (bi)
+- bitmap_end = 1;
+
+ log_stripe_write_finished(sh);
+
+@@ -3662,11 +3635,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ bio_io_error(bi);
+ bi = nextbi;
+ }
+- if (bitmap_end)
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- false, false);
+- bitmap_end = 0;
+ /* and fail all 'written' */
+ bi = sh->dev[i].written;
+ sh->dev[i].written = NULL;
+@@ -3675,7 +3643,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ sh->dev[i].page = sh->dev[i].orig_page;
+ }
+
+- if (bi) bitmap_end = 1;
+ while (bi && bi->bi_iter.bi_sector <
+ sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
+ struct bio *bi2 = r5_next_bio(conf, bi, sh->dev[i].sector);
+@@ -3709,10 +3676,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ bi = nextbi;
+ }
+ }
+- if (bitmap_end)
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- false, false);
+ /* If we were in the middle of a write the parity block might
+ * still be locked - so just clear all R5_LOCKED flags
+ */
+@@ -4061,10 +4024,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
+ bio_endio(wbi);
+ wbi = wbi2;
+ }
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- !test_bit(STRIPE_DEGRADED, &sh->state),
+- false);
++
+ if (head_sh->batch_head) {
+ sh = list_first_entry(&sh->batch_list,
+ struct stripe_head,
+@@ -4341,7 +4301,6 @@ static void handle_parity_checks5(struct r5conf *conf, struct stripe_head *sh,
+ s->locked++;
+ set_bit(R5_Wantwrite, &dev->flags);
+
+- clear_bit(STRIPE_DEGRADED, &sh->state);
+ set_bit(STRIPE_INSYNC, &sh->state);
+ break;
+ case check_state_run:
+@@ -4498,7 +4457,6 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ clear_bit(R5_Wantwrite, &dev->flags);
+ s->locked--;
+ }
+- clear_bit(STRIPE_DEGRADED, &sh->state);
+
+ set_bit(STRIPE_INSYNC, &sh->state);
+ break;
+@@ -4892,8 +4850,7 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
+ (1 << STRIPE_COMPUTE_RUN) |
+ (1 << STRIPE_DISCARD) |
+ (1 << STRIPE_BATCH_READY) |
+- (1 << STRIPE_BATCH_ERR) |
+- (1 << STRIPE_BITMAP_PENDING)),
++ (1 << STRIPE_BATCH_ERR)),
+ "stripe state: %lx\n", sh->state);
+ WARN_ONCE(head_sh->state & ((1 << STRIPE_DISCARD) |
+ (1 << STRIPE_REPLACED)),
+@@ -4901,7 +4858,6 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
+
+ set_mask_bits(&sh->state, ~(STRIPE_EXPAND_SYNC_FLAGS |
+ (1 << STRIPE_PREREAD_ACTIVE) |
+- (1 << STRIPE_DEGRADED) |
+ (1 << STRIPE_ON_UNPLUG_LIST)),
+ head_sh->state & (1 << STRIPE_INSYNC));
+
+@@ -5785,10 +5741,6 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
+ }
+ spin_unlock_irq(&sh->stripe_lock);
+ if (conf->mddev->bitmap) {
+- for (d = 0; d < conf->raid_disks - conf->max_degraded;
+- d++)
+- mddev->bitmap_ops->startwrite(mddev, sh->sector,
+- RAID5_STRIPE_SECTORS(conf), false);
+ sh->bm_seq = conf->seq_flush + 1;
+ set_bit(STRIPE_BIT_DELAY, &sh->state);
+ }
+@@ -5929,6 +5881,54 @@ static enum reshape_loc get_reshape_loc(struct mddev *mddev,
+ return LOC_BEHIND_RESHAPE;
+ }
+
++static void raid5_bitmap_sector(struct mddev *mddev, sector_t *offset,
++ unsigned long *sectors)
++{
++ struct r5conf *conf = mddev->private;
++ sector_t start = *offset;
++ sector_t end = start + *sectors;
++ sector_t prev_start = start;
++ sector_t prev_end = end;
++ int sectors_per_chunk;
++ enum reshape_loc loc;
++ int dd_idx;
++
++ sectors_per_chunk = conf->chunk_sectors *
++ (conf->raid_disks - conf->max_degraded);
++ start = round_down(start, sectors_per_chunk);
++ end = round_up(end, sectors_per_chunk);
++
++ start = raid5_compute_sector(conf, start, 0, &dd_idx, NULL);
++ end = raid5_compute_sector(conf, end, 0, &dd_idx, NULL);
++
++ /*
++ * For LOC_INSIDE_RESHAPE, this IO will wait for reshape to make
++ * progress, hence it's the same as LOC_BEHIND_RESHAPE.
++ */
++ loc = get_reshape_loc(mddev, conf, prev_start);
++ if (likely(loc != LOC_AHEAD_OF_RESHAPE)) {
++ *offset = start;
++ *sectors = end - start;
++ return;
++ }
++
++ sectors_per_chunk = conf->prev_chunk_sectors *
++ (conf->previous_raid_disks - conf->max_degraded);
++ prev_start = round_down(prev_start, sectors_per_chunk);
++ prev_end = round_down(prev_end, sectors_per_chunk);
++
++ prev_start = raid5_compute_sector(conf, prev_start, 1, &dd_idx, NULL);
++ prev_end = raid5_compute_sector(conf, prev_end, 1, &dd_idx, NULL);
++
++ /*
++	 * For LOC_AHEAD_OF_RESHAPE, reshape can make progress before this IO
++	 * is handled in make_stripe_request(); we can't know that here, hence
++	 * we set bits for both the old and the new layout.
++ */
++ *offset = min(start, prev_start);
++ *sectors = max(end, prev_end) - *offset;
++}
++
+ static enum stripe_result make_stripe_request(struct mddev *mddev,
+ struct r5conf *conf, struct stripe_request_ctx *ctx,
+ sector_t logical_sector, struct bio *bi)
+@@ -8977,6 +8977,7 @@ static struct md_personality raid6_personality =
+ .takeover = raid6_takeover,
+ .change_consistency_policy = raid5_change_consistency_policy,
+ .prepare_suspend = raid5_prepare_suspend,
++ .bitmap_sector = raid5_bitmap_sector,
+ };
+ static struct md_personality raid5_personality =
+ {
+@@ -9002,6 +9003,7 @@ static struct md_personality raid5_personality =
+ .takeover = raid5_takeover,
+ .change_consistency_policy = raid5_change_consistency_policy,
+ .prepare_suspend = raid5_prepare_suspend,
++ .bitmap_sector = raid5_bitmap_sector,
+ };
+
+ static struct md_personality raid4_personality =
+@@ -9028,6 +9030,7 @@ static struct md_personality raid4_personality =
+ .takeover = raid4_takeover,
+ .change_consistency_policy = raid5_change_consistency_policy,
+ .prepare_suspend = raid5_prepare_suspend,
++ .bitmap_sector = raid5_bitmap_sector,
+ };
+
+ static int __init raid5_init(void)
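+
+raid5_bitmap_sector() above widens a bitmap range to whole stripes before
+bits are set. A worked example under assumed geometry (chunk_sectors = 64,
+4 data disks, so one stripe spans 256 array sectors): an IO of [300, 700)
+is widened to the stripe-aligned range [256, 768). The kernel round
+helpers expect a power-of-two alignment, which holds here:
+
+#include <linux/math.h>
+#include <linux/types.h>
+
+static void example_align_to_stripe(sector_t *start, sector_t *end)
+{
+	const unsigned int sectors_per_chunk = 64 * 4;	/* assumed geometry */
+
+	*start = round_down(*start, sectors_per_chunk);	/* 300 -> 256 */
+	*end = round_up(*end, sectors_per_chunk);	/* 700 -> 768 */
+}
+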
+diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
+index 896ecfc4afa6fa..2e42c1641049f9 100644
+--- a/drivers/md/raid5.h
++++ b/drivers/md/raid5.h
+@@ -358,7 +358,6 @@ enum {
+ STRIPE_REPLACED,
+ STRIPE_PREREAD_ACTIVE,
+ STRIPE_DELAYED,
+- STRIPE_DEGRADED,
+ STRIPE_BIT_DELAY,
+ STRIPE_EXPANDING,
+ STRIPE_EXPAND_SOURCE,
+@@ -372,9 +371,6 @@ enum {
+ STRIPE_ON_RELEASE_LIST,
+ STRIPE_BATCH_READY,
+ STRIPE_BATCH_ERR,
+- STRIPE_BITMAP_PENDING, /* Being added to bitmap, don't add
+- * to batch yet.
+- */
+ STRIPE_LOG_TRAPPED, /* trapped into log (see raid5-cache.c)
+ * this bit is used in two scenarios:
+ *
+diff --git a/drivers/media/i2c/imx290.c b/drivers/media/i2c/imx290.c
+index 458905dfb3e110..a87a265cd83957 100644
+--- a/drivers/media/i2c/imx290.c
++++ b/drivers/media/i2c/imx290.c
+@@ -269,7 +269,6 @@ static const struct cci_reg_sequence imx290_global_init_settings[] = {
+ { IMX290_WINWV, 1097 },
+ { IMX290_XSOUTSEL, IMX290_XSOUTSEL_XVSOUTSEL_VSYNC |
+ IMX290_XSOUTSEL_XHSOUTSEL_HSYNC },
+- { CCI_REG8(0x3011), 0x02 },
+ { CCI_REG8(0x3012), 0x64 },
+ { CCI_REG8(0x3013), 0x00 },
+ };
+@@ -277,6 +276,7 @@ static const struct cci_reg_sequence imx290_global_init_settings[] = {
+ static const struct cci_reg_sequence imx290_global_init_settings_290[] = {
+ { CCI_REG8(0x300f), 0x00 },
+ { CCI_REG8(0x3010), 0x21 },
++ { CCI_REG8(0x3011), 0x00 },
+ { CCI_REG8(0x3016), 0x09 },
+ { CCI_REG8(0x3070), 0x02 },
+ { CCI_REG8(0x3071), 0x11 },
+@@ -330,6 +330,7 @@ static const struct cci_reg_sequence xclk_regs[][IMX290_NUM_CLK_REGS] = {
+ };
+
+ static const struct cci_reg_sequence imx290_global_init_settings_327[] = {
++ { CCI_REG8(0x3011), 0x02 },
+ { CCI_REG8(0x309e), 0x4A },
+ { CCI_REG8(0x309f), 0x4A },
+ { CCI_REG8(0x313b), 0x61 },
+diff --git a/drivers/media/i2c/imx412.c b/drivers/media/i2c/imx412.c
+index 0bfe3046fcc872..c74097a59c4285 100644
+--- a/drivers/media/i2c/imx412.c
++++ b/drivers/media/i2c/imx412.c
+@@ -547,7 +547,7 @@ static int imx412_update_exp_gain(struct imx412 *imx412, u32 exposure, u32 gain)
+
+ lpfr = imx412->vblank + imx412->cur_mode->height;
+
+- dev_dbg(imx412->dev, "Set exp %u, analog gain %u, lpfr %u",
++ dev_dbg(imx412->dev, "Set exp %u, analog gain %u, lpfr %u\n",
+ exposure, gain, lpfr);
+
+ ret = imx412_write_reg(imx412, IMX412_REG_HOLD, 1, 1);
+@@ -594,7 +594,7 @@ static int imx412_set_ctrl(struct v4l2_ctrl *ctrl)
+ case V4L2_CID_VBLANK:
+ imx412->vblank = imx412->vblank_ctrl->val;
+
+- dev_dbg(imx412->dev, "Received vblank %u, new lpfr %u",
++ dev_dbg(imx412->dev, "Received vblank %u, new lpfr %u\n",
+ imx412->vblank,
+ imx412->vblank + imx412->cur_mode->height);
+
+@@ -613,7 +613,7 @@ static int imx412_set_ctrl(struct v4l2_ctrl *ctrl)
+ exposure = ctrl->val;
+ analog_gain = imx412->again_ctrl->val;
+
+- dev_dbg(imx412->dev, "Received exp %u, analog gain %u",
++ dev_dbg(imx412->dev, "Received exp %u, analog gain %u\n",
+ exposure, analog_gain);
+
+ ret = imx412_update_exp_gain(imx412, exposure, analog_gain);
+@@ -622,7 +622,7 @@ static int imx412_set_ctrl(struct v4l2_ctrl *ctrl)
+
+ break;
+ default:
+- dev_err(imx412->dev, "Invalid control %d", ctrl->id);
++ dev_err(imx412->dev, "Invalid control %d\n", ctrl->id);
+ ret = -EINVAL;
+ }
+
+@@ -803,14 +803,14 @@ static int imx412_start_streaming(struct imx412 *imx412)
+ ret = imx412_write_regs(imx412, reg_list->regs,
+ reg_list->num_of_regs);
+ if (ret) {
+- dev_err(imx412->dev, "fail to write initial registers");
++ dev_err(imx412->dev, "fail to write initial registers\n");
+ return ret;
+ }
+
+ /* Setup handler will write actual exposure and gain */
+ ret = __v4l2_ctrl_handler_setup(imx412->sd.ctrl_handler);
+ if (ret) {
+- dev_err(imx412->dev, "fail to setup handler");
++ dev_err(imx412->dev, "fail to setup handler\n");
+ return ret;
+ }
+
+@@ -821,7 +821,7 @@ static int imx412_start_streaming(struct imx412 *imx412)
+ ret = imx412_write_reg(imx412, IMX412_REG_MODE_SELECT,
+ 1, IMX412_MODE_STREAMING);
+ if (ret) {
+- dev_err(imx412->dev, "fail to start streaming");
++ dev_err(imx412->dev, "fail to start streaming\n");
+ return ret;
+ }
+
+@@ -895,7 +895,7 @@ static int imx412_detect(struct imx412 *imx412)
+ return ret;
+
+ if (val != IMX412_ID) {
+- dev_err(imx412->dev, "chip id mismatch: %x!=%x",
++ dev_err(imx412->dev, "chip id mismatch: %x!=%x\n",
+ IMX412_ID, val);
+ return -ENXIO;
+ }
+@@ -927,7 +927,7 @@ static int imx412_parse_hw_config(struct imx412 *imx412)
+ imx412->reset_gpio = devm_gpiod_get_optional(imx412->dev, "reset",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(imx412->reset_gpio)) {
+- dev_err(imx412->dev, "failed to get reset gpio %ld",
++ dev_err(imx412->dev, "failed to get reset gpio %ld\n",
+ PTR_ERR(imx412->reset_gpio));
+ return PTR_ERR(imx412->reset_gpio);
+ }
+@@ -935,13 +935,13 @@ static int imx412_parse_hw_config(struct imx412 *imx412)
+ /* Get sensor input clock */
+ imx412->inclk = devm_clk_get(imx412->dev, NULL);
+ if (IS_ERR(imx412->inclk)) {
+- dev_err(imx412->dev, "could not get inclk");
++ dev_err(imx412->dev, "could not get inclk\n");
+ return PTR_ERR(imx412->inclk);
+ }
+
+ rate = clk_get_rate(imx412->inclk);
+ if (rate != IMX412_INCLK_RATE) {
+- dev_err(imx412->dev, "inclk frequency mismatch");
++ dev_err(imx412->dev, "inclk frequency mismatch\n");
+ return -EINVAL;
+ }
+
+@@ -966,14 +966,14 @@ static int imx412_parse_hw_config(struct imx412 *imx412)
+
+ if (bus_cfg.bus.mipi_csi2.num_data_lanes != IMX412_NUM_DATA_LANES) {
+ dev_err(imx412->dev,
+- "number of CSI2 data lanes %d is not supported",
++ "number of CSI2 data lanes %d is not supported\n",
+ bus_cfg.bus.mipi_csi2.num_data_lanes);
+ ret = -EINVAL;
+ goto done_endpoint_free;
+ }
+
+ if (!bus_cfg.nr_of_link_frequencies) {
+- dev_err(imx412->dev, "no link frequencies defined");
++ dev_err(imx412->dev, "no link frequencies defined\n");
+ ret = -EINVAL;
+ goto done_endpoint_free;
+ }
+@@ -1034,7 +1034,7 @@ static int imx412_power_on(struct device *dev)
+
+ ret = clk_prepare_enable(imx412->inclk);
+ if (ret) {
+- dev_err(imx412->dev, "fail to enable inclk");
++ dev_err(imx412->dev, "fail to enable inclk\n");
+ goto error_reset;
+ }
+
+@@ -1145,7 +1145,7 @@ static int imx412_init_controls(struct imx412 *imx412)
+ imx412->hblank_ctrl->flags |= V4L2_CTRL_FLAG_READ_ONLY;
+
+ if (ctrl_hdlr->error) {
+- dev_err(imx412->dev, "control init failed: %d",
++ dev_err(imx412->dev, "control init failed: %d\n",
+ ctrl_hdlr->error);
+ v4l2_ctrl_handler_free(ctrl_hdlr);
+ return ctrl_hdlr->error;
+@@ -1183,7 +1183,7 @@ static int imx412_probe(struct i2c_client *client)
+
+ ret = imx412_parse_hw_config(imx412);
+ if (ret) {
+- dev_err(imx412->dev, "HW configuration is not supported");
++ dev_err(imx412->dev, "HW configuration is not supported\n");
+ return ret;
+ }
+
+@@ -1191,14 +1191,14 @@ static int imx412_probe(struct i2c_client *client)
+
+ ret = imx412_power_on(imx412->dev);
+ if (ret) {
+- dev_err(imx412->dev, "failed to power-on the sensor");
++ dev_err(imx412->dev, "failed to power-on the sensor\n");
+ goto error_mutex_destroy;
+ }
+
+ /* Check module identity */
+ ret = imx412_detect(imx412);
+ if (ret) {
+- dev_err(imx412->dev, "failed to find sensor: %d", ret);
++ dev_err(imx412->dev, "failed to find sensor: %d\n", ret);
+ goto error_power_off;
+ }
+
+@@ -1208,7 +1208,7 @@ static int imx412_probe(struct i2c_client *client)
+
+ ret = imx412_init_controls(imx412);
+ if (ret) {
+- dev_err(imx412->dev, "failed to init controls: %d", ret);
++ dev_err(imx412->dev, "failed to init controls: %d\n", ret);
+ goto error_power_off;
+ }
+
+@@ -1222,14 +1222,14 @@ static int imx412_probe(struct i2c_client *client)
+ imx412->pad.flags = MEDIA_PAD_FL_SOURCE;
+ ret = media_entity_pads_init(&imx412->sd.entity, 1, &imx412->pad);
+ if (ret) {
+- dev_err(imx412->dev, "failed to init entity pads: %d", ret);
++ dev_err(imx412->dev, "failed to init entity pads: %d\n", ret);
+ goto error_handler_free;
+ }
+
+ ret = v4l2_async_register_subdev_sensor(&imx412->sd);
+ if (ret < 0) {
+ dev_err(imx412->dev,
+- "failed to register async subdev: %d", ret);
++ "failed to register async subdev: %d\n", ret);
+ goto error_media_entity;
+ }
+
+diff --git a/drivers/media/i2c/ov9282.c b/drivers/media/i2c/ov9282.c
+index 9f52af6f047f3c..87e5d7ce5a47ee 100644
+--- a/drivers/media/i2c/ov9282.c
++++ b/drivers/media/i2c/ov9282.c
+@@ -40,7 +40,7 @@
+ /* Exposure control */
+ #define OV9282_REG_EXPOSURE 0x3500
+ #define OV9282_EXPOSURE_MIN 1
+-#define OV9282_EXPOSURE_OFFSET 12
++#define OV9282_EXPOSURE_OFFSET 25
+ #define OV9282_EXPOSURE_STEP 1
+ #define OV9282_EXPOSURE_DEFAULT 0x0282
+
+diff --git a/drivers/media/platform/marvell/mcam-core.c b/drivers/media/platform/marvell/mcam-core.c
+index c81593c969e057..a62c3a484cb3ff 100644
+--- a/drivers/media/platform/marvell/mcam-core.c
++++ b/drivers/media/platform/marvell/mcam-core.c
+@@ -935,7 +935,12 @@ static int mclk_enable(struct clk_hw *hw)
+ ret = pm_runtime_resume_and_get(cam->dev);
+ if (ret < 0)
+ return ret;
+- clk_enable(cam->clk[0]);
++ ret = clk_enable(cam->clk[0]);
++ if (ret) {
++ pm_runtime_put(cam->dev);
++ return ret;
++ }
++
+ mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div);
+ mcam_ctlr_power_up(cam);
+
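+
+This hunk, like the s5pcsis and s3c-camif changes further down, stops
+ignoring the return value of clk_enable(), which can fail. A hedged sketch
+of the unwind pattern with a hypothetical resume path:
+
+#include <linux/clk.h>
+#include <linux/pm_runtime.h>
+
+static int example_resume(struct device *dev, struct clk *clk)
+{
+	int ret;
+
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret < 0)
+		return ret;
+
+	ret = clk_enable(clk);
+	if (ret) {
+		pm_runtime_put(dev);	/* undo the runtime-PM reference */
+		return ret;
+	}
+
+	return 0;
+}
+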
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 1bf85c1cf96435..b8c9bb017fb5f6 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -2679,11 +2679,12 @@ static void mxc_jpeg_detach_pm_domains(struct mxc_jpeg_dev *jpeg)
+ int i;
+
+ for (i = 0; i < jpeg->num_domains; i++) {
+- if (jpeg->pd_dev[i] && !pm_runtime_suspended(jpeg->pd_dev[i]))
++ if (!IS_ERR_OR_NULL(jpeg->pd_dev[i]) &&
++ !pm_runtime_suspended(jpeg->pd_dev[i]))
+ pm_runtime_force_suspend(jpeg->pd_dev[i]);
+- if (jpeg->pd_link[i] && !IS_ERR(jpeg->pd_link[i]))
++ if (!IS_ERR_OR_NULL(jpeg->pd_link[i]))
+ device_link_del(jpeg->pd_link[i]);
+- if (jpeg->pd_dev[i] && !IS_ERR(jpeg->pd_dev[i]))
++ if (!IS_ERR_OR_NULL(jpeg->pd_dev[i]))
+ dev_pm_domain_detach(jpeg->pd_dev[i], true);
+ jpeg->pd_dev[i] = NULL;
+ jpeg->pd_link[i] = NULL;
+diff --git a/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c b/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c
+index 4091f1c0e78bdc..a71eb30323c8d2 100644
+--- a/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c
++++ b/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c
+@@ -861,6 +861,7 @@ int mxc_isi_video_buffer_prepare(struct mxc_isi_dev *isi, struct vb2_buffer *vb2
+ const struct mxc_isi_format_info *info,
+ const struct v4l2_pix_format_mplane *pix)
+ {
++ struct vb2_v4l2_buffer *v4l2_buf = to_vb2_v4l2_buffer(vb2);
+ unsigned int i;
+
+ for (i = 0; i < info->mem_planes; i++) {
+@@ -875,6 +876,8 @@ int mxc_isi_video_buffer_prepare(struct mxc_isi_dev *isi, struct vb2_buffer *vb2
+ vb2_set_plane_payload(vb2, i, size);
+ }
+
++ v4l2_buf->field = pix->field;
++
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/samsung/exynos4-is/mipi-csis.c b/drivers/media/platform/samsung/exynos4-is/mipi-csis.c
+index 4b9b20ba35041c..38c5f22b850b97 100644
+--- a/drivers/media/platform/samsung/exynos4-is/mipi-csis.c
++++ b/drivers/media/platform/samsung/exynos4-is/mipi-csis.c
+@@ -940,13 +940,19 @@ static int s5pcsis_pm_resume(struct device *dev, bool runtime)
+ state->supplies);
+ goto unlock;
+ }
+- clk_enable(state->clock[CSIS_CLK_GATE]);
++ ret = clk_enable(state->clock[CSIS_CLK_GATE]);
++ if (ret) {
++ phy_power_off(state->phy);
++ regulator_bulk_disable(CSIS_NUM_SUPPLIES,
++ state->supplies);
++ goto unlock;
++ }
+ }
+ if (state->flags & ST_STREAMING)
+ s5pcsis_start_stream(state);
+
+ state->flags &= ~ST_SUSPENDED;
+- unlock:
++unlock:
+ mutex_unlock(&state->lock);
+ return ret ? -EAGAIN : 0;
+ }
+diff --git a/drivers/media/platform/samsung/s3c-camif/camif-core.c b/drivers/media/platform/samsung/s3c-camif/camif-core.c
+index e4529f666e2060..8c597dd01713a6 100644
+--- a/drivers/media/platform/samsung/s3c-camif/camif-core.c
++++ b/drivers/media/platform/samsung/s3c-camif/camif-core.c
+@@ -527,10 +527,19 @@ static void s3c_camif_remove(struct platform_device *pdev)
+ static int s3c_camif_runtime_resume(struct device *dev)
+ {
+ struct camif_dev *camif = dev_get_drvdata(dev);
++ int ret;
++
++ ret = clk_enable(camif->clock[CLK_GATE]);
++ if (ret)
++ return ret;
+
+- clk_enable(camif->clock[CLK_GATE]);
+ /* null op on s3c244x */
+- clk_enable(camif->clock[CLK_CAM]);
++ ret = clk_enable(camif->clock[CLK_CAM]);
++ if (ret) {
++ clk_disable(camif->clock[CLK_GATE]);
++ return ret;
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/media/rc/iguanair.c b/drivers/media/rc/iguanair.c
+index 276bf3c8a8cb49..8af94246e5916e 100644
+--- a/drivers/media/rc/iguanair.c
++++ b/drivers/media/rc/iguanair.c
+@@ -194,8 +194,10 @@ static int iguanair_send(struct iguanair *ir, unsigned size)
+ if (rc)
+ return rc;
+
+- if (wait_for_completion_timeout(&ir->completion, TIMEOUT) == 0)
++ if (wait_for_completion_timeout(&ir->completion, TIMEOUT) == 0) {
++ usb_kill_urb(ir->urb_out);
+ return -ETIMEDOUT;
++ }
+
+ return rc;
+ }
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index 0d2c42819d3909..218f712f56b17c 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -322,13 +322,16 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ ret = -EOPNOTSUPP;
+ } else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ (msg[0].addr == state->af9033_i2c_addr[1])) {
++ /* demod access via firmware interface */
++ u32 reg;
++
+ if (msg[0].len < 3 || msg[1].len < 1) {
+ ret = -EOPNOTSUPP;
+ goto unlock;
+ }
+- /* demod access via firmware interface */
+- u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+- msg[0].buf[2];
++
++ reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
++ msg[0].buf[2];
+
+ if (msg[0].addr == state->af9033_i2c_addr[1])
+ reg |= 0x100000;
+@@ -385,13 +388,16 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ ret = -EOPNOTSUPP;
+ } else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ (msg[0].addr == state->af9033_i2c_addr[1])) {
++ /* demod access via firmware interface */
++ u32 reg;
++
+ if (msg[0].len < 3) {
+ ret = -EOPNOTSUPP;
+ goto unlock;
+ }
+- /* demod access via firmware interface */
+- u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+- msg[0].buf[2];
++
++ reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
++ msg[0].buf[2];
+
+ if (msg[0].addr == state->af9033_i2c_addr[1])
+ reg |= 0x100000;
+diff --git a/drivers/media/usb/dvb-usb-v2/lmedm04.c b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+index 8a34e6c0d6a6d1..f0537b741d1352 100644
+--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c
++++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+@@ -373,6 +373,7 @@ static int lme2510_int_read(struct dvb_usb_adapter *adap)
+ struct dvb_usb_device *d = adap_to_d(adap);
+ struct lme2510_state *lme_int = adap_to_priv(adap);
+ struct usb_host_endpoint *ep;
++ int ret;
+
+ lme_int->lme_urb = usb_alloc_urb(0, GFP_KERNEL);
+
+@@ -390,11 +391,20 @@ static int lme2510_int_read(struct dvb_usb_adapter *adap)
+
+ /* Quirk of pipe reporting PIPE_BULK but behaves as interrupt */
+ ep = usb_pipe_endpoint(d->udev, lme_int->lme_urb->pipe);
++ if (!ep) {
++ usb_free_urb(lme_int->lme_urb);
++ return -ENODEV;
++ }
+
+ if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK)
+ lme_int->lme_urb->pipe = usb_rcvbulkpipe(d->udev, 0xa);
+
+- usb_submit_urb(lme_int->lme_urb, GFP_KERNEL);
++ ret = usb_submit_urb(lme_int->lme_urb, GFP_KERNEL);
++ if (ret) {
++ usb_free_urb(lme_int->lme_urb);
++ return ret;
++ }
++
+ info("INT Interrupt Service Started");
+
+ return 0;
+diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
+index 16fa17bbd15eaa..83ed7821fa2a77 100644
+--- a/drivers/media/usb/uvc/uvc_queue.c
++++ b/drivers/media/usb/uvc/uvc_queue.c
+@@ -483,7 +483,8 @@ static void uvc_queue_buffer_complete(struct kref *ref)
+
+ buf->state = buf->error ? UVC_BUF_STATE_ERROR : UVC_BUF_STATE_DONE;
+ vb2_set_plane_payload(&buf->buf.vb2_buf, 0, buf->bytesused);
+- vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);
++ vb2_buffer_done(&buf->buf.vb2_buf, buf->error ? VB2_BUF_STATE_ERROR :
++ VB2_BUF_STATE_DONE);
+ }
+
+ /*
+diff --git a/drivers/media/usb/uvc/uvc_status.c b/drivers/media/usb/uvc/uvc_status.c
+index a78a88c710e24a..b5f6682ff38311 100644
+--- a/drivers/media/usb/uvc/uvc_status.c
++++ b/drivers/media/usb/uvc/uvc_status.c
+@@ -269,6 +269,7 @@ int uvc_status_init(struct uvc_device *dev)
+ dev->int_urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!dev->int_urb) {
+ kfree(dev->status);
++ dev->status = NULL;
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/memory/tegra/tegra20-emc.c b/drivers/memory/tegra/tegra20-emc.c
+index 7193f848d17e66..9b7d30a21a5bd0 100644
+--- a/drivers/memory/tegra/tegra20-emc.c
++++ b/drivers/memory/tegra/tegra20-emc.c
+@@ -474,14 +474,15 @@ tegra_emc_find_node_by_ram_code(struct tegra_emc *emc)
+
+ ram_code = tegra_read_ram_code();
+
+- for (np = of_find_node_by_name(dev->of_node, "emc-tables"); np;
+- np = of_find_node_by_name(np, "emc-tables")) {
++ for_each_child_of_node(dev->of_node, np) {
++ if (!of_node_name_eq(np, "emc-tables"))
++ continue;
+ err = of_property_read_u32(np, "nvidia,ram-code", &value);
+ if (err || value != ram_code) {
+ struct device_node *lpddr2_np;
+ bool cfg_mismatches = false;
+
+- lpddr2_np = of_find_node_by_name(np, "lpddr2");
++ lpddr2_np = of_get_child_by_name(np, "lpddr2");
+ if (lpddr2_np) {
+ const struct lpddr2_info *info;
+
+@@ -518,7 +519,6 @@ tegra_emc_find_node_by_ram_code(struct tegra_emc *emc)
+ }
+
+ if (cfg_mismatches) {
+- of_node_put(np);
+ continue;
+ }
+ }
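+
+The loop conversion above relies on the reference semantics of
+for_each_child_of_node(): the iterator drops the previous child's
+reference each time it advances, so a plain "continue" needs no manual
+of_node_put(); only breaking out leaves the caller holding a reference.
+A hypothetical sketch:
+
+#include <linux/of.h>
+#include <linux/types.h>
+
+static struct device_node *example_find_table(struct device_node *parent,
+					      u32 wanted)
+{
+	struct device_node *np;
+	u32 value;
+
+	for_each_child_of_node(parent, np) {
+		if (of_property_read_u32(np, "nvidia,ram-code", &value))
+			continue;	/* ref dropped by the iterator */
+		if (value == wanted)
+			return np;	/* caller now owns this reference */
+	}
+
+	return NULL;
+}
+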
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index 2ce15f60eb1071..729e79e1be49fa 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -15,6 +15,7 @@
+ #include <linux/io.h>
+ #include <linux/init.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_platform.h>
+@@ -27,7 +28,7 @@
+
+ static struct platform_driver syscon_driver;
+
+-static DEFINE_SPINLOCK(syscon_list_slock);
++static DEFINE_MUTEX(syscon_list_lock);
+ static LIST_HEAD(syscon_list);
+
+ struct syscon {
+@@ -54,6 +55,8 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ struct resource res;
+ struct reset_control *reset;
+
++ WARN_ON(!mutex_is_locked(&syscon_list_lock));
++
+ struct syscon *syscon __free(kfree) = kzalloc(sizeof(*syscon), GFP_KERNEL);
+ if (!syscon)
+ return ERR_PTR(-ENOMEM);
+@@ -144,9 +147,7 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ syscon->regmap = regmap;
+ syscon->np = np;
+
+- spin_lock(&syscon_list_slock);
+ list_add_tail(&syscon->list, &syscon_list);
+- spin_unlock(&syscon_list_slock);
+
+ return_ptr(syscon);
+
+@@ -167,7 +168,7 @@ static struct regmap *device_node_get_regmap(struct device_node *np,
+ {
+ struct syscon *entry, *syscon = NULL;
+
+- spin_lock(&syscon_list_slock);
++ mutex_lock(&syscon_list_lock);
+
+ list_for_each_entry(entry, &syscon_list, list)
+ if (entry->np == np) {
+@@ -175,11 +176,11 @@ static struct regmap *device_node_get_regmap(struct device_node *np,
+ break;
+ }
+
+- spin_unlock(&syscon_list_slock);
+-
+ if (!syscon)
+ syscon = of_syscon_register(np, check_res);
+
++ mutex_unlock(&syscon_list_lock);
++
+ if (IS_ERR(syscon))
+ return ERR_CAST(syscon);
+
+@@ -210,7 +211,7 @@ int of_syscon_register_regmap(struct device_node *np, struct regmap *regmap)
+ return -ENOMEM;
+
+ /* check if syscon entry already exists */
+- spin_lock(&syscon_list_slock);
++ mutex_lock(&syscon_list_lock);
+
+ list_for_each_entry(entry, &syscon_list, list)
+ if (entry->np == np) {
+@@ -223,12 +224,12 @@ int of_syscon_register_regmap(struct device_node *np, struct regmap *regmap)
+
+ /* register the regmap in syscon list */
+ list_add_tail(&syscon->list, &syscon_list);
+- spin_unlock(&syscon_list_slock);
++ mutex_unlock(&syscon_list_lock);
+
+ return 0;
+
+ err_unlock:
+- spin_unlock(&syscon_list_slock);
++ mutex_unlock(&syscon_list_lock);
+ kfree(syscon);
+ return ret;
+ }
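+
+The syscon conversion swaps a spinlock for a mutex because
+of_syscon_register() sleeps (GFP_KERNEL allocation, ioremap), and holding
+the mutex across both the lookup and the registration also prevents two
+callers from racing to register the same node twice. A hypothetical
+lookup-or-create sketch of that shape:
+
+#include <linux/list.h>
+#include <linux/mutex.h>
+
+struct example_entry {
+	struct list_head list;
+	const void *key;
+};
+
+static DEFINE_MUTEX(example_lock);
+static LIST_HEAD(example_list);
+
+static struct example_entry *
+example_get(const void *key, struct example_entry *(*create)(const void *))
+{
+	struct example_entry *entry, *found = NULL;
+
+	mutex_lock(&example_lock);
+	list_for_each_entry(entry, &example_list, list) {
+		if (entry->key == key) {
+			found = entry;
+			break;
+		}
+	}
+	if (!found) {
+		found = create(key);	/* may sleep; the mutex makes this legal */
+		if (found)
+			list_add_tail(&found->list, &example_list);
+	}
+	mutex_unlock(&example_lock);
+
+	return found;
+}
+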
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index f150d8769f1986..285a748748d701 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -286,6 +286,7 @@ static int rtsx_usb_get_status_with_bulk(struct rtsx_ucr *ucr, u16 *status)
+ int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
+ {
+ int ret;
++ u8 interrupt_val = 0;
+ u16 *buf;
+
+ if (!status)
+@@ -308,6 +309,20 @@ int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
+ ret = rtsx_usb_get_status_with_bulk(ucr, status);
+ }
+
++ rtsx_usb_read_register(ucr, CARD_INT_PEND, &interrupt_val);
++ /* Cross check presence with interrupts */
++ if (*status & XD_CD)
++ if (!(interrupt_val & XD_INT))
++ *status &= ~XD_CD;
++
++ if (*status & SD_CD)
++ if (!(interrupt_val & SD_INT))
++ *status &= ~SD_CD;
++
++ if (*status & MS_CD)
++ if (!(interrupt_val & MS_INT))
++ *status &= ~MS_CD;
++
+ /* usb_control_msg may return positive when success */
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/mtd/hyperbus/hbmc-am654.c b/drivers/mtd/hyperbus/hbmc-am654.c
+index dbe3eb361cca28..4b6cbee23fe893 100644
+--- a/drivers/mtd/hyperbus/hbmc-am654.c
++++ b/drivers/mtd/hyperbus/hbmc-am654.c
+@@ -174,26 +174,30 @@ static int am654_hbmc_probe(struct platform_device *pdev)
+ priv->hbdev.np = of_get_next_child(np, NULL);
+ ret = of_address_to_resource(priv->hbdev.np, 0, &res);
+ if (ret)
+- return ret;
++ goto put_node;
+
+ if (of_property_read_bool(dev->of_node, "mux-controls")) {
+ struct mux_control *control = devm_mux_control_get(dev, NULL);
+
+- if (IS_ERR(control))
+- return PTR_ERR(control);
++ if (IS_ERR(control)) {
++ ret = PTR_ERR(control);
++ goto put_node;
++ }
+
+ ret = mux_control_select(control, 1);
+ if (ret) {
+ dev_err(dev, "Failed to select HBMC mux\n");
+- return ret;
++ goto put_node;
+ }
+ priv->mux_ctrl = control;
+ }
+
+ priv->hbdev.map.size = resource_size(&res);
+ priv->hbdev.map.virt = devm_ioremap_resource(dev, &res);
+- if (IS_ERR(priv->hbdev.map.virt))
+- return PTR_ERR(priv->hbdev.map.virt);
++ if (IS_ERR(priv->hbdev.map.virt)) {
++ ret = PTR_ERR(priv->hbdev.map.virt);
++ goto disable_mux;
++ }
+
+ priv->ctlr.dev = dev;
+ priv->ctlr.ops = &am654_hbmc_ops;
+@@ -226,6 +230,8 @@ static int am654_hbmc_probe(struct platform_device *pdev)
+ disable_mux:
+ if (priv->mux_ctrl)
+ mux_control_deselect(priv->mux_ctrl);
++put_node:
++ of_node_put(priv->hbdev.np);
+ return ret;
+ }
+
+@@ -241,6 +247,7 @@ static void am654_hbmc_remove(struct platform_device *pdev)
+
+ if (dev_priv->rx_chan)
+ dma_release_channel(dev_priv->rx_chan);
++ of_node_put(priv->hbdev.np);
+ }
+
+ static const struct of_device_id am654_hbmc_dt_ids[] = {
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 1b2ec0fec60c7a..e76df6a00ed4f5 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2342,6 +2342,11 @@ static int brcmnand_write(struct mtd_info *mtd, struct nand_chip *chip,
+ brcmnand_send_cmd(host, CMD_PROGRAM_PAGE);
+ status = brcmnand_waitfunc(chip);
+
++ if (status < 0) {
++ ret = status;
++ goto out;
++ }
++
+ if (status & NAND_STATUS_FAIL) {
+ dev_info(ctrl->dev, "program failed at %llx\n",
+ (unsigned long long)addr);
+diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
+index d73ef262991d61..6fee9a41839c0b 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.h
++++ b/drivers/net/ethernet/broadcom/bgmac.h
+@@ -328,8 +328,7 @@
+ #define BGMAC_RX_FRAME_OFFSET 30 /* There are 2 unused bytes between header and real data */
+ #define BGMAC_RX_BUF_OFFSET (NET_SKB_PAD + NET_IP_ALIGN - \
+ BGMAC_RX_FRAME_OFFSET)
+-/* Jumbo frame size with FCS */
+-#define BGMAC_RX_MAX_FRAME_SIZE 9724
++#define BGMAC_RX_MAX_FRAME_SIZE 1536
+ #define BGMAC_RX_BUF_SIZE (BGMAC_RX_FRAME_OFFSET + BGMAC_RX_MAX_FRAME_SIZE)
+ #define BGMAC_RX_ALLOC_SIZE (SKB_DATA_ALIGN(BGMAC_RX_BUF_SIZE + BGMAC_RX_BUF_OFFSET) + \
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
+index 150cc94ae9f884..25a604379b2f43 100644
+--- a/drivers/net/ethernet/davicom/dm9000.c
++++ b/drivers/net/ethernet/davicom/dm9000.c
+@@ -1777,10 +1777,11 @@ static void dm9000_drv_remove(struct platform_device *pdev)
+
+ unregister_netdev(ndev);
+ dm9000_release_board(pdev, dm);
+- free_netdev(ndev); /* free device structure */
+ if (dm->power_supply)
+ regulator_disable(dm->power_supply);
+
++ free_netdev(ndev); /* free device structure */
++
+ dev_dbg(&pdev->dev, "released and freed device\n");
+ }
+
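+
+The reordering above fixes a use-after-free: the private area returned by
+netdev_priv() is part of the net_device allocation, so free_netdev() must
+be the last user of it. A hypothetical remove-path sketch:
+
+#include <linux/netdevice.h>
+#include <linux/regulator/consumer.h>
+
+struct example_priv {
+	struct regulator *power_supply;
+};
+
+static void example_remove(struct net_device *ndev)
+{
+	struct example_priv *priv = netdev_priv(ndev);
+
+	unregister_netdev(ndev);
+
+	if (priv->power_supply)
+		regulator_disable(priv->power_supply);
+
+	/* frees priv as well: nothing may touch it afterwards */
+	free_netdev(ndev);
+}
+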
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 49d1748e0c043d..2b05d9c6c21a43 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -840,6 +840,8 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int hdr_len, total_len, data_left;
+ struct bufdesc *bdp = txq->bd.cur;
++ struct bufdesc *tmp_bdp;
++ struct bufdesc_ex *ebdp;
+ struct tso_t tso;
+ unsigned int index = 0;
+ int ret;
+@@ -913,7 +915,34 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
+ return 0;
+
+ err_release:
+- /* TODO: Release all used data descriptors for TSO */
++ /* Release all used data descriptors for TSO */
++ tmp_bdp = txq->bd.cur;
++
++ while (tmp_bdp != bdp) {
++ /* Unmap data buffers */
++ if (tmp_bdp->cbd_bufaddr &&
++ !IS_TSO_HEADER(txq, fec32_to_cpu(tmp_bdp->cbd_bufaddr)))
++ dma_unmap_single(&fep->pdev->dev,
++ fec32_to_cpu(tmp_bdp->cbd_bufaddr),
++ fec16_to_cpu(tmp_bdp->cbd_datlen),
++ DMA_TO_DEVICE);
++
++ /* Clear standard buffer descriptor fields */
++ tmp_bdp->cbd_sc = 0;
++ tmp_bdp->cbd_datlen = 0;
++ tmp_bdp->cbd_bufaddr = 0;
++
++ /* Handle extended descriptor if enabled */
++ if (fep->bufdesc_ex) {
++ ebdp = (struct bufdesc_ex *)tmp_bdp;
++ ebdp->cbd_esc = 0;
++ }
++
++ tmp_bdp = fec_enet_get_nextdesc(tmp_bdp, &txq->bd);
++ }
++
++ dev_kfree_skb_any(skb);
++
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index 9a63fbc6940831..b25fb400f4767e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -40,6 +40,21 @@ EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
+ */
+ static DEFINE_MUTEX(hnae3_common_lock);
+
++/* ensure the drivers are unloaded one by one */
++static DEFINE_MUTEX(hnae3_unload_lock);
++
++void hnae3_acquire_unload_lock(void)
++{
++ mutex_lock(&hnae3_unload_lock);
++}
++EXPORT_SYMBOL(hnae3_acquire_unload_lock);
++
++void hnae3_release_unload_lock(void)
++{
++ mutex_unlock(&hnae3_unload_lock);
++}
++EXPORT_SYMBOL(hnae3_release_unload_lock);
++
+ static bool hnae3_client_match(enum hnae3_client_type client_type)
+ {
+ if (client_type == HNAE3_CLIENT_KNIC ||
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index d873523e84f271..388c70331a55b5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -963,4 +963,6 @@ int hnae3_register_client(struct hnae3_client *client);
+ void hnae3_set_client_init_flag(struct hnae3_client *client,
+ struct hnae3_ae_dev *ae_dev,
+ unsigned int inited);
++void hnae3_acquire_unload_lock(void);
++void hnae3_release_unload_lock(void);
+ #endif
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 73825b6bd485d1..dc60ac3bde7f2c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -6002,9 +6002,11 @@ module_init(hns3_init_module);
+ */
+ static void __exit hns3_exit_module(void)
+ {
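++	/* Serialize unload with the hclge/hclgevf modules */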
++ hnae3_acquire_unload_lock();
+ pci_unregister_driver(&hns3_driver);
+ hnae3_unregister_client(&client);
+ hns3_dbg_unregister_debugfs();
++ hnae3_release_unload_lock();
+ }
+ module_exit(hns3_exit_module);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 9a67fe0554a52b..06eedf80cfac4f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -12929,9 +12929,11 @@ static int __init hclge_init(void)
+
+ static void __exit hclge_exit(void)
+ {
++ hnae3_acquire_unload_lock();
+ hnae3_unregister_ae_algo_prepare(&ae_algo);
+ hnae3_unregister_ae_algo(&ae_algo);
+ destroy_workqueue(hclge_wq);
++ hnae3_release_unload_lock();
+ }
+ module_init(hclge_init);
+ module_exit(hclge_exit);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index d47bd8d6145f97..fd5992164846b1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -3412,8 +3412,10 @@ static int __init hclgevf_init(void)
+
+ static void __exit hclgevf_exit(void)
+ {
++ hnae3_acquire_unload_lock();
+ hnae3_unregister_ae_algo(&ae_algovf);
+ destroy_workqueue(hclgevf_wq);
++ hnae3_release_unload_lock();
+ }
+ module_init(hclgevf_init);
+ module_exit(hclgevf_exit);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index f782402cd78986..5516795cc250a8 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -773,6 +773,11 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
+ f->state = IAVF_VLAN_ADD;
+ adapter->num_vlan_filters++;
+ iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);
++ } else if (f->state == IAVF_VLAN_REMOVE) {
++		/* IAVF_VLAN_REMOVE means the VLAN hasn't been removed yet,
++		 * so it is safe to simply flip the state back to active here.
++		 */
++ f->state = IAVF_VLAN_ACTIVE;
+ }
+
+ clearout:
+@@ -793,8 +798,18 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
+
+ f = iavf_find_vlan(adapter, vlan);
+ if (f) {
+- f->state = IAVF_VLAN_REMOVE;
+- iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER);
++		/* IAVF_VLAN_ADD means the VLAN hasn't been added yet,
++		 * so it can simply be removed from the list.
++		 */
++ if (f->state == IAVF_VLAN_ADD) {
++ list_del(&f->list);
++ kfree(f);
++ adapter->num_vlan_filters--;
++ } else {
++ f->state = IAVF_VLAN_REMOVE;
++ iavf_schedule_aq_request(adapter,
++ IAVF_FLAG_AQ_DEL_VLAN_FILTER);
++ }
+ }
+
+ spin_unlock_bh(&adapter->mac_vlan_list_lock);
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 80f3dfd2712430..66ae0352c6bca0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -1491,7 +1491,23 @@ struct ice_aqc_dnl_equa_param {
+ #define ICE_AQC_RX_EQU_POST1 (0x12 << ICE_AQC_RX_EQU_SHIFT)
+ #define ICE_AQC_RX_EQU_BFLF (0x13 << ICE_AQC_RX_EQU_SHIFT)
+ #define ICE_AQC_RX_EQU_BFHF (0x14 << ICE_AQC_RX_EQU_SHIFT)
+-#define ICE_AQC_RX_EQU_DRATE (0x15 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_GAINHF (0x20 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_GAINLF (0x21 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_GAINDC (0x22 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_BW (0x23 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_GAIN (0x30 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_GAIN2 (0x31 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_2 (0x32 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_3 (0x33 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_4 (0x34 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_5 (0x35 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_6 (0x36 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_7 (0x37 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_8 (0x38 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_9 (0x39 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_10 (0x3A << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_11 (0x3B << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_12 (0x3C << ICE_AQC_RX_EQU_SHIFT)
+ #define ICE_AQC_TX_EQU_PRE1 0x0
+ #define ICE_AQC_TX_EQU_PRE3 0x3
+ #define ICE_AQC_TX_EQU_ATTEN 0x4
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index d5cc934d135949..7d1feeb317be34 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -693,75 +693,52 @@ static int ice_get_port_topology(struct ice_hw *hw, u8 lport,
+ static int ice_get_tx_rx_equa(struct ice_hw *hw, u8 serdes_num,
+ struct ice_serdes_equalization_to_ethtool *ptr)
+ {
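++	/* One AQ query per equalization parameter: input value, opcode
++	 * and where to store the result.
++	 */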
++ static const int tx = ICE_AQC_OP_CODE_TX_EQU;
++ static const int rx = ICE_AQC_OP_CODE_RX_EQU;
++ struct {
++ int data_in;
++ int opcode;
++ int *out;
++ } aq_params[] = {
++ { ICE_AQC_TX_EQU_PRE1, tx, &ptr->tx_equ_pre1 },
++ { ICE_AQC_TX_EQU_PRE3, tx, &ptr->tx_equ_pre3 },
++ { ICE_AQC_TX_EQU_ATTEN, tx, &ptr->tx_equ_atten },
++ { ICE_AQC_TX_EQU_POST1, tx, &ptr->tx_equ_post1 },
++ { ICE_AQC_TX_EQU_PRE2, tx, &ptr->tx_equ_pre2 },
++ { ICE_AQC_RX_EQU_PRE2, rx, &ptr->rx_equ_pre2 },
++ { ICE_AQC_RX_EQU_PRE1, rx, &ptr->rx_equ_pre1 },
++ { ICE_AQC_RX_EQU_POST1, rx, &ptr->rx_equ_post1 },
++ { ICE_AQC_RX_EQU_BFLF, rx, &ptr->rx_equ_bflf },
++ { ICE_AQC_RX_EQU_BFHF, rx, &ptr->rx_equ_bfhf },
++ { ICE_AQC_RX_EQU_CTLE_GAINHF, rx, &ptr->rx_equ_ctle_gainhf },
++ { ICE_AQC_RX_EQU_CTLE_GAINLF, rx, &ptr->rx_equ_ctle_gainlf },
++ { ICE_AQC_RX_EQU_CTLE_GAINDC, rx, &ptr->rx_equ_ctle_gaindc },
++ { ICE_AQC_RX_EQU_CTLE_BW, rx, &ptr->rx_equ_ctle_bw },
++ { ICE_AQC_RX_EQU_DFE_GAIN, rx, &ptr->rx_equ_dfe_gain },
++ { ICE_AQC_RX_EQU_DFE_GAIN2, rx, &ptr->rx_equ_dfe_gain_2 },
++ { ICE_AQC_RX_EQU_DFE_2, rx, &ptr->rx_equ_dfe_2 },
++ { ICE_AQC_RX_EQU_DFE_3, rx, &ptr->rx_equ_dfe_3 },
++ { ICE_AQC_RX_EQU_DFE_4, rx, &ptr->rx_equ_dfe_4 },
++ { ICE_AQC_RX_EQU_DFE_5, rx, &ptr->rx_equ_dfe_5 },
++ { ICE_AQC_RX_EQU_DFE_6, rx, &ptr->rx_equ_dfe_6 },
++ { ICE_AQC_RX_EQU_DFE_7, rx, &ptr->rx_equ_dfe_7 },
++ { ICE_AQC_RX_EQU_DFE_8, rx, &ptr->rx_equ_dfe_8 },
++ { ICE_AQC_RX_EQU_DFE_9, rx, &ptr->rx_equ_dfe_9 },
++ { ICE_AQC_RX_EQU_DFE_10, rx, &ptr->rx_equ_dfe_10 },
++ { ICE_AQC_RX_EQU_DFE_11, rx, &ptr->rx_equ_dfe_11 },
++ { ICE_AQC_RX_EQU_DFE_12, rx, &ptr->rx_equ_dfe_12 },
++ };
+ int err;
+
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE1,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_pre1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE3,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_pre3);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_ATTEN,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_atten);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_POST1,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_post1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE2,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_pre2);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_PRE2,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_pre2);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_PRE1,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_pre1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_POST1,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_post1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_BFLF,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_bflf);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_BFHF,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_bfhf);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_DRATE,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_drate);
+- if (err)
+- return err;
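++	/* Stop at the first failed query; err is 0 if every parameter was read */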
++ for (int i = 0; i < ARRAY_SIZE(aq_params); i++) {
++ err = ice_aq_get_phy_equalization(hw, aq_params[i].data_in,
++ aq_params[i].opcode,
++ serdes_num, aq_params[i].out);
++ if (err)
++ break;
++ }
+
+- return 0;
++ return err;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.h b/drivers/net/ethernet/intel/ice/ice_ethtool.h
+index 9acccae38625ae..23b2cfbc9684c0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.h
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.h
+@@ -10,17 +10,33 @@ struct ice_phy_type_to_ethtool {
+ };
+
+ struct ice_serdes_equalization_to_ethtool {
+- int rx_equalization_pre2;
+- int rx_equalization_pre1;
+- int rx_equalization_post1;
+- int rx_equalization_bflf;
+- int rx_equalization_bfhf;
+- int rx_equalization_drate;
+- int tx_equalization_pre1;
+- int tx_equalization_pre3;
+- int tx_equalization_atten;
+- int tx_equalization_post1;
+- int tx_equalization_pre2;
++ int rx_equ_pre2;
++ int rx_equ_pre1;
++ int rx_equ_post1;
++ int rx_equ_bflf;
++ int rx_equ_bfhf;
++ int rx_equ_ctle_gainhf;
++ int rx_equ_ctle_gainlf;
++ int rx_equ_ctle_gaindc;
++ int rx_equ_ctle_bw;
++ int rx_equ_dfe_gain;
++ int rx_equ_dfe_gain_2;
++ int rx_equ_dfe_2;
++ int rx_equ_dfe_3;
++ int rx_equ_dfe_4;
++ int rx_equ_dfe_5;
++ int rx_equ_dfe_6;
++ int rx_equ_dfe_7;
++ int rx_equ_dfe_8;
++ int rx_equ_dfe_9;
++ int rx_equ_dfe_10;
++ int rx_equ_dfe_11;
++ int rx_equ_dfe_12;
++ int tx_equ_pre1;
++ int tx_equ_pre3;
++ int tx_equ_atten;
++ int tx_equ_post1;
++ int tx_equ_pre2;
+ };
+
+ struct ice_regdump_to_ethtool {
+diff --git a/drivers/net/ethernet/intel/ice/ice_parser.h b/drivers/net/ethernet/intel/ice/ice_parser.h
+index 6509d807627cee..4f56d53d56b9ad 100644
+--- a/drivers/net/ethernet/intel/ice/ice_parser.h
++++ b/drivers/net/ethernet/intel/ice/ice_parser.h
+@@ -257,7 +257,6 @@ ice_pg_nm_cam_match(struct ice_pg_nm_cam_item *table, int size,
+ /*** ICE_SID_RXPARSER_BOOST_TCAM and ICE_SID_LBL_RXPARSER_TMEM sections ***/
+ #define ICE_BST_TCAM_TABLE_SIZE 256
+ #define ICE_BST_TCAM_KEY_SIZE 20
+-#define ICE_BST_KEY_TCAM_SIZE 19
+
+ /* Boost TCAM item */
+ struct ice_bst_tcam_item {
+@@ -401,7 +400,6 @@ u16 ice_xlt_kb_flag_get(struct ice_xlt_kb *kb, u64 pkt_flag);
+ #define ICE_PARSER_GPR_NUM 128
+ #define ICE_PARSER_FLG_NUM 64
+ #define ICE_PARSER_ERR_NUM 16
+-#define ICE_BST_KEY_SIZE 10
+ #define ICE_MARKER_ID_SIZE 9
+ #define ICE_MARKER_MAX_SIZE \
+ (ICE_MARKER_ID_SIZE * BITS_PER_BYTE - 1)
+@@ -431,13 +429,13 @@ struct ice_parser_rt {
+ u8 pkt_buf[ICE_PARSER_MAX_PKT_LEN + ICE_PARSER_PKT_REV];
+ u16 pkt_len;
+ u16 po;
+- u8 bst_key[ICE_BST_KEY_SIZE];
++ u8 bst_key[ICE_BST_TCAM_KEY_SIZE];
+ struct ice_pg_cam_key pg_key;
++ u8 pg_prio;
+ struct ice_alu *alu0;
+ struct ice_alu *alu1;
+ struct ice_alu *alu2;
+ struct ice_pg_cam_action *action;
+- u8 pg_prio;
+ struct ice_gpr_pu pu;
+ u8 markers[ICE_MARKER_ID_SIZE];
+ bool protocols[ICE_PO_PAIR_SIZE];
+diff --git a/drivers/net/ethernet/intel/ice/ice_parser_rt.c b/drivers/net/ethernet/intel/ice/ice_parser_rt.c
+index dedf5e854e4b76..3995d662e05099 100644
+--- a/drivers/net/ethernet/intel/ice/ice_parser_rt.c
++++ b/drivers/net/ethernet/intel/ice/ice_parser_rt.c
+@@ -125,22 +125,20 @@ static void ice_bst_key_init(struct ice_parser_rt *rt,
+ else
+ key[idd] = imem->b_kb.prio;
+
+- idd = ICE_BST_KEY_TCAM_SIZE - 1;
++ idd = ICE_BST_TCAM_KEY_SIZE - 2;
+ for (i = idd; i >= 0; i--) {
+ int j;
+
+ j = ho + idd - i;
+ if (j < ICE_PARSER_MAX_PKT_LEN)
+- key[i] = rt->pkt_buf[ho + idd - i];
++ key[i] = rt->pkt_buf[j];
+ else
+ key[i] = 0;
+ }
+
+- ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Generated Boost TCAM Key:\n");
+- ice_debug(rt->psr->hw, ICE_DBG_PARSER, "%02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+- key[0], key[1], key[2], key[3], key[4],
+- key[5], key[6], key[7], key[8], key[9]);
+- ice_debug(rt->psr->hw, ICE_DBG_PARSER, "\n");
++ ice_debug_array_w_prefix(rt->psr->hw, ICE_DBG_PARSER,
++ KBUILD_MODNAME ": Generated Boost TCAM Key",
++ key, ICE_BST_TCAM_KEY_SIZE);
+ }
+
+ static u16 ice_bit_rev_u16(u16 v, int len)
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+index 4849590a5591f1..b28991dd187036 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+@@ -376,6 +376,9 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+ if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
+ break;
+
++ /* Ensure no other fields are read until DD flag is checked */
++ dma_rmb();
++
+ /* strip off FW internal code */
+ desc_err = le16_to_cpu(desc->ret_val) & 0xff;
+
+@@ -563,6 +566,9 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+ if (!(flags & IDPF_CTLQ_FLAG_DD))
+ break;
+
++ /* Ensure no other fields are read until DD flag is checked */
++ dma_rmb();
++
+ q_msg[i].vmvf_type = (flags &
+ (IDPF_CTLQ_FLAG_FTYPE_VM |
+ IDPF_CTLQ_FLAG_FTYPE_PF)) >>
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index db476b3314c8a5..dfd56fc5ff6550 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -174,7 +174,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ pci_set_master(pdev);
+ pci_set_drvdata(pdev, adapter);
+
+- adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
++ adapter->init_wq = alloc_workqueue("%s-%s-init",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->init_wq) {
+@@ -183,7 +184,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_free;
+ }
+
+- adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
++ adapter->serv_wq = alloc_workqueue("%s-%s-service",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->serv_wq) {
+@@ -192,7 +194,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_serv_wq_alloc;
+ }
+
+- adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
++ adapter->mbx_wq = alloc_workqueue("%s-%s-mbx",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->mbx_wq) {
+@@ -201,7 +204,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_mbx_wq_alloc;
+ }
+
+- adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
++ adapter->stats_wq = alloc_workqueue("%s-%s-stats",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->stats_wq) {
+@@ -210,7 +214,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_stats_wq_alloc;
+ }
+
+- adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
++ adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->vc_event_wq) {
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index d46c95f91b0d81..99bdb95bf22661 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -612,14 +612,15 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter,
+ return -EINVAL;
+ }
+ xn = &adapter->vcxn_mngr->ring[xn_idx];
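++	/* Hold the transaction lock across the salt check so it
++	 * cannot race with a concurrent transaction completion.
++	 */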
++ idpf_vc_xn_lock(xn);
+ salt = FIELD_GET(IDPF_VC_XN_SALT_M, msg_info);
+ if (xn->salt != salt) {
+ dev_err_ratelimited(&adapter->pdev->dev, "Transaction salt does not match (%02x != %02x)\n",
+ xn->salt, salt);
++ idpf_vc_xn_unlock(xn);
+ return -EINVAL;
+ }
+
+- idpf_vc_xn_lock(xn);
+ switch (xn->state) {
+ case IDPF_VC_XN_WAITING:
+ /* success */
+@@ -3077,12 +3078,21 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)
+ */
+ void idpf_vc_core_deinit(struct idpf_adapter *adapter)
+ {
++ bool remove_in_prog;
++
+ if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))
+ return;
+
++ /* Avoid transaction timeouts when called during reset */
++ remove_in_prog = test_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
++ if (!remove_in_prog)
++ idpf_vc_xn_shutdown(adapter->vcxn_mngr);
++
+ idpf_deinit_task(adapter);
+ idpf_intr_rel(adapter);
+- idpf_vc_xn_shutdown(adapter->vcxn_mngr);
++
++ if (remove_in_prog)
++ idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+
+ cancel_delayed_work_sync(&adapter->serv_task);
+ cancel_delayed_work_sync(&adapter->mbx_task);
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 549436efc20488..730aa5632cceee 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -995,12 +995,6 @@ static void octep_get_stats64(struct net_device *netdev,
+ struct octep_device *oct = netdev_priv(netdev);
+ int q;
+
+- if (netif_running(netdev))
+- octep_ctrl_net_get_if_stats(oct,
+- OCTEP_CTRL_NET_INVALID_VFID,
+- &oct->iface_rx_stats,
+- &oct->iface_tx_stats);
+-
+ tx_packets = 0;
+ tx_bytes = 0;
+ rx_packets = 0;
+@@ -1018,10 +1012,6 @@ static void octep_get_stats64(struct net_device *netdev,
+ stats->tx_bytes = tx_bytes;
+ stats->rx_packets = rx_packets;
+ stats->rx_bytes = rx_bytes;
+- stats->multicast = oct->iface_rx_stats.mcast_pkts;
+- stats->rx_errors = oct->iface_rx_stats.err_pkts;
+- stats->collisions = oct->iface_tx_stats.xscol;
+- stats->tx_fifo_errors = oct->iface_tx_stats.undflw;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+index 7e6771c9cdbbab..4c699514fd57a0 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+@@ -799,14 +799,6 @@ static void octep_vf_get_stats64(struct net_device *netdev,
+ stats->tx_bytes = tx_bytes;
+ stats->rx_packets = rx_packets;
+ stats->rx_bytes = rx_bytes;
+- if (!octep_vf_get_if_stats(oct)) {
+- stats->multicast = oct->iface_rx_stats.mcast_pkts;
+- stats->rx_errors = oct->iface_rx_stats.err_pkts;
+- stats->rx_dropped = oct->iface_rx_stats.dropped_pkts_fifo_full +
+- oct->iface_rx_stats.err_pkts;
+- stats->rx_missed_errors = oct->iface_rx_stats.dropped_pkts_fifo_full;
+- stats->tx_dropped = oct->iface_tx_stats.dropped;
+- }
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
+index 2c26eb18528372..20cf7ba9d75084 100644
+--- a/drivers/net/ethernet/mediatek/airoha_eth.c
++++ b/drivers/net/ethernet/mediatek/airoha_eth.c
+@@ -258,11 +258,11 @@
+ #define REG_GDM3_FWD_CFG GDM3_BASE
+ #define GDM3_PAD_EN_MASK BIT(28)
+
+-#define REG_GDM4_FWD_CFG (GDM4_BASE + 0x100)
++#define REG_GDM4_FWD_CFG GDM4_BASE
+ #define GDM4_PAD_EN_MASK BIT(28)
+ #define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8)
+
+-#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x33c)
++#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c)
+ #define GDM4_SPORT_OFF2_MASK GENMASK(19, 16)
+ #define GDM4_SPORT_OFF1_MASK GENMASK(15, 12)
+ #define GDM4_SPORT_OFF0_MASK GENMASK(11, 8)
+@@ -2123,17 +2123,14 @@ static void airoha_hw_cleanup(struct airoha_qdma *qdma)
+ if (!qdma->q_rx[i].ndesc)
+ continue;
+
+- napi_disable(&qdma->q_rx[i].napi);
+ netif_napi_del(&qdma->q_rx[i].napi);
+ airoha_qdma_cleanup_rx_queue(&qdma->q_rx[i]);
+ if (qdma->q_rx[i].page_pool)
+ page_pool_destroy(qdma->q_rx[i].page_pool);
+ }
+
+- for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++) {
+- napi_disable(&qdma->q_tx_irq[i].napi);
++ for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++)
+ netif_napi_del(&qdma->q_tx_irq[i].napi);
+- }
+
+ for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) {
+ if (!qdma->q_tx[i].ndesc)
+@@ -2158,6 +2155,21 @@ static void airoha_qdma_start_napi(struct airoha_qdma *qdma)
+ }
+ }
+
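++/* counterpart of airoha_qdma_start_napi(): disable NAPI on all queues */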
++static void airoha_qdma_stop_napi(struct airoha_qdma *qdma)
++{
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++)
++ napi_disable(&qdma->q_tx_irq[i].napi);
++
++ for (i = 0; i < ARRAY_SIZE(qdma->q_rx); i++) {
++ if (!qdma->q_rx[i].ndesc)
++ continue;
++
++ napi_disable(&qdma->q_rx[i].napi);
++ }
++}
++
+ static void airoha_update_hw_stats(struct airoha_gdm_port *port)
+ {
+ struct airoha_eth *eth = port->qdma->eth;
+@@ -2713,7 +2725,7 @@ static int airoha_probe(struct platform_device *pdev)
+
+ err = airoha_hw_init(pdev, eth);
+ if (err)
+- goto error;
++ goto error_hw_cleanup;
+
+ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
+ 		airoha_qdma_start_napi(&eth->qdma[i]);
+@@ -2728,13 +2740,16 @@ static int airoha_probe(struct platform_device *pdev)
+ err = airoha_alloc_gdm_port(eth, np);
+ if (err) {
+ of_node_put(np);
+- goto error;
++ goto error_napi_stop;
+ }
+ }
+
+ return 0;
+
+-error:
++error_napi_stop:
++ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
++		airoha_qdma_stop_napi(&eth->qdma[i]);
++error_hw_cleanup:
+ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
+ 		airoha_hw_cleanup(&eth->qdma[i]);
+
+@@ -2755,8 +2770,10 @@ static void airoha_remove(struct platform_device *pdev)
+ struct airoha_eth *eth = platform_get_drvdata(pdev);
+ int i;
+
+- for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
++ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) {
++		airoha_qdma_stop_napi(&eth->qdma[i]);
+ 		airoha_hw_cleanup(&eth->qdma[i]);
++ }
+
+ for (i = 0; i < ARRAY_SIZE(eth->ports); i++) {
+ struct airoha_gdm_port *port = eth->ports[i];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
+index 3f4c58bada3745..ab5f8f07f1f7e5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
+@@ -70,7 +70,7 @@
+ u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \
+ _HWS_SET32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \
+ _HWS_SET32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \
+- (bit_off) % BITS_IN_DW, second_dw_mask); \
++ (bit_off + BITS_IN_DW) % BITS_IN_DW, second_dw_mask); \
+ } else { \
+ _HWS_SET32(p, v, byte_off, (bit_off), (mask)); \
+ } \
+diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+index 46245e0b24623d..43c84900369a36 100644
+--- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
++++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+@@ -14,7 +14,6 @@
+ #define MLXFW_FSM_STATE_WAIT_TIMEOUT_MS 30000
+ #define MLXFW_FSM_STATE_WAIT_ROUNDS \
+ (MLXFW_FSM_STATE_WAIT_TIMEOUT_MS / MLXFW_FSM_STATE_WAIT_CYCLE_MS)
+-#define MLXFW_FSM_MAX_COMPONENT_SIZE (10 * (1 << 20))
+
+ static const int mlxfw_fsm_state_errno[] = {
+ [MLXFW_FSM_STATE_ERR_ERROR] = -EIO,
+@@ -229,7 +228,6 @@ static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
+ return err;
+ }
+
+- comp_max_size = min_t(u32, comp_max_size, MLXFW_FSM_MAX_COMPONENT_SIZE);
+ if (comp->data_size > comp_max_size) {
+ MLXFW_ERR_MSG(mlxfw_dev, extack,
+ "Component size is bigger than limit", -EINVAL);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+index 69cd689dbc83e9..5afe6b155ef0d5 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+@@ -1003,10 +1003,10 @@ static void mlxsw_sp_mr_route_stats_update(struct mlxsw_sp *mlxsw_sp,
+ mr->mr_ops->route_stats(mlxsw_sp, mr_route->route_priv, &packets,
+ &bytes);
+
+- if (mr_route->mfc->mfc_un.res.pkt != packets)
+- mr_route->mfc->mfc_un.res.lastuse = jiffies;
+- mr_route->mfc->mfc_un.res.pkt = packets;
+- mr_route->mfc->mfc_un.res.bytes = bytes;
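++	/* pkt, bytes and lastuse are accessed locklessly elsewhere,
++	 * hence the atomic accessors and WRITE_ONCE().
++	 */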
++ if (atomic_long_read(&mr_route->mfc->mfc_un.res.pkt) != packets)
++ WRITE_ONCE(mr_route->mfc->mfc_un.res.lastuse, jiffies);
++ atomic_long_set(&mr_route->mfc->mfc_un.res.pkt, packets);
++ atomic_long_set(&mr_route->mfc->mfc_un.res.bytes, bytes);
+ }
+
+ static void mlxsw_sp_mr_stats_update(struct work_struct *work)
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 6f6b0566c65bcb..cc4f0d16c76303 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -3208,10 +3208,15 @@ static int ravb_suspend(struct device *dev)
+
+ netif_device_detach(ndev);
+
+- if (priv->wol_enabled)
+- return ravb_wol_setup(ndev);
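++	/* Both the WoL setup and ravb_close() paths need the RTNL lock */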
++ rtnl_lock();
++ if (priv->wol_enabled) {
++ ret = ravb_wol_setup(ndev);
++ rtnl_unlock();
++ return ret;
++ }
+
+ ret = ravb_close(ndev);
++ rtnl_unlock();
+ if (ret)
+ return ret;
+
+@@ -3236,19 +3241,20 @@ static int ravb_resume(struct device *dev)
+ if (!netif_running(ndev))
+ return 0;
+
++ rtnl_lock();
+ /* If WoL is enabled restore the interface. */
+- if (priv->wol_enabled) {
++ if (priv->wol_enabled)
+ ret = ravb_wol_restore(ndev);
+- if (ret)
+- return ret;
+- } else {
++ else
+ ret = pm_runtime_force_resume(dev);
+- if (ret)
+- return ret;
++ if (ret) {
++ rtnl_unlock();
++ return ret;
+ }
+
+ /* Reopening the interface will restore the device to the working state. */
+ ret = ravb_open(ndev);
++ rtnl_unlock();
+ if (ret < 0)
+ goto out_rpm_put;
+
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 7a25903e35c305..bc12c0c7347f6b 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -3494,10 +3494,12 @@ static int sh_eth_suspend(struct device *dev)
+
+ netif_device_detach(ndev);
+
++ rtnl_lock();
+ if (mdp->wol_enabled)
+ ret = sh_eth_wol_setup(ndev);
+ else
+ ret = sh_eth_close(ndev);
++ rtnl_unlock();
+
+ return ret;
+ }
+@@ -3511,10 +3513,12 @@ static int sh_eth_resume(struct device *dev)
+ if (!netif_running(ndev))
+ return 0;
+
++ rtnl_lock();
+ if (mdp->wol_enabled)
+ ret = sh_eth_wol_restore(ndev);
+ else
+ ret = sh_eth_open(ndev);
++ rtnl_unlock();
+
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/net/ethernet/sfc/ef100_ethtool.c b/drivers/net/ethernet/sfc/ef100_ethtool.c
+index 5c2551369812cb..6c3b74000d3b6a 100644
+--- a/drivers/net/ethernet/sfc/ef100_ethtool.c
++++ b/drivers/net/ethernet/sfc/ef100_ethtool.c
+@@ -59,6 +59,7 @@ const struct ethtool_ops ef100_ethtool_ops = {
+ .get_rxfh_indir_size = efx_ethtool_get_rxfh_indir_size,
+ .get_rxfh_key_size = efx_ethtool_get_rxfh_key_size,
+ .rxfh_per_ctx_key = true,
++ .cap_rss_rxnfc_adds = true,
+ .rxfh_priv_size = sizeof(struct efx_rss_context_priv),
+ .get_rxfh = efx_ethtool_get_rxfh,
+ .set_rxfh = efx_ethtool_set_rxfh,
+diff --git a/drivers/net/ethernet/sfc/ethtool.c b/drivers/net/ethernet/sfc/ethtool.c
+index bb1930818beba4..83d715544f7fb2 100644
+--- a/drivers/net/ethernet/sfc/ethtool.c
++++ b/drivers/net/ethernet/sfc/ethtool.c
+@@ -263,6 +263,7 @@ const struct ethtool_ops efx_ethtool_ops = {
+ .get_rxfh_indir_size = efx_ethtool_get_rxfh_indir_size,
+ .get_rxfh_key_size = efx_ethtool_get_rxfh_key_size,
+ .rxfh_per_ctx_key = true,
++ .cap_rss_rxnfc_adds = true,
+ .rxfh_priv_size = sizeof(struct efx_rss_context_priv),
+ .get_rxfh = efx_ethtool_get_rxfh,
+ .set_rxfh = efx_ethtool_set_rxfh,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index cf7b59b8cc64b3..918d7f2e8ba992 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7236,6 +7236,36 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
+ if (priv->dma_cap.tsoen)
+ dev_info(priv->device, "TSO supported\n");
+
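++	/* Cap the platform queue counts and FIFO sizes at what the
++	 * hardware DMA capability register reports.
++	 */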
++ if (priv->dma_cap.number_rx_queues &&
++ priv->plat->rx_queues_to_use > priv->dma_cap.number_rx_queues) {
++ dev_warn(priv->device,
++ "Number of Rx queues (%u) exceeds dma capability\n",
++ priv->plat->rx_queues_to_use);
++ priv->plat->rx_queues_to_use = priv->dma_cap.number_rx_queues;
++ }
++ if (priv->dma_cap.number_tx_queues &&
++ priv->plat->tx_queues_to_use > priv->dma_cap.number_tx_queues) {
++ dev_warn(priv->device,
++ "Number of Tx queues (%u) exceeds dma capability\n",
++ priv->plat->tx_queues_to_use);
++ priv->plat->tx_queues_to_use = priv->dma_cap.number_tx_queues;
++ }
++
++ if (priv->dma_cap.rx_fifo_size &&
++ priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
++ dev_warn(priv->device,
++ "Rx FIFO size (%u) exceeds dma capability\n",
++ priv->plat->rx_fifo_size);
++ priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
++ }
++ if (priv->dma_cap.tx_fifo_size &&
++ priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
++ dev_warn(priv->device,
++ "Tx FIFO size (%u) exceeds dma capability\n",
++ priv->plat->tx_fifo_size);
++ priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size;
++ }
++
+ priv->hw->vlan_fail_q_en =
+ (priv->plat->flags & STMMAC_FLAG_VLAN_FAIL_Q_EN);
+ priv->hw->vlan_fail_q = priv->plat->vlan_fail_q;
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index dfca13b82bdce2..b13c7e958e6b4e 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2207,7 +2207,7 @@ static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+
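++		/* irq holds a negative errno when the request failed */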
+- if (tx_chn->irq)
++ if (tx_chn->irq > 0)
+ devm_free_irq(dev, tx_chn->irq, tx_chn);
+
+ netif_napi_del(&tx_chn->napi_tx);
+diff --git a/drivers/net/netdevsim/netdevsim.h b/drivers/net/netdevsim/netdevsim.h
+index bf02efa10956a6..84181dcb98831f 100644
+--- a/drivers/net/netdevsim/netdevsim.h
++++ b/drivers/net/netdevsim/netdevsim.h
+@@ -129,6 +129,7 @@ struct netdevsim {
+ u32 sleep;
+ u32 __ports[2][NSIM_UDP_TUNNEL_N_PORTS];
+ u32 (*ports)[NSIM_UDP_TUNNEL_N_PORTS];
++ struct dentry *ddir;
+ struct debugfs_u32_array dfs_ports[2];
+ } udp_ports;
+
+diff --git a/drivers/net/netdevsim/udp_tunnels.c b/drivers/net/netdevsim/udp_tunnels.c
+index 02dc3123eb6c16..640b4983a9a0d1 100644
+--- a/drivers/net/netdevsim/udp_tunnels.c
++++ b/drivers/net/netdevsim/udp_tunnels.c
+@@ -112,9 +112,11 @@ nsim_udp_tunnels_info_reset_write(struct file *file, const char __user *data,
+ struct net_device *dev = file->private_data;
+ struct netdevsim *ns = netdev_priv(dev);
+
+- memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports));
+ rtnl_lock();
+- udp_tunnel_nic_reset_ntf(dev);
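++	/* Only reset tunnel state while the netdev is still registered */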
++ if (dev->reg_state == NETREG_REGISTERED) {
++ memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports));
++ udp_tunnel_nic_reset_ntf(dev);
++ }
+ rtnl_unlock();
+
+ return count;
+@@ -144,23 +146,23 @@ int nsim_udp_tunnels_info_create(struct nsim_dev *nsim_dev,
+ else
+ ns->udp_ports.ports = nsim_dev->udp_ports.__ports;
+
+- debugfs_create_u32("udp_ports_inject_error", 0600,
+- ns->nsim_dev_port->ddir,
++ ns->udp_ports.ddir = debugfs_create_dir("udp_ports",
++ ns->nsim_dev_port->ddir);
++
++ debugfs_create_u32("inject_error", 0600, ns->udp_ports.ddir,
+ &ns->udp_ports.inject_error);
+
+ ns->udp_ports.dfs_ports[0].array = ns->udp_ports.ports[0];
+ ns->udp_ports.dfs_ports[0].n_elements = NSIM_UDP_TUNNEL_N_PORTS;
+- debugfs_create_u32_array("udp_ports_table0", 0400,
+- ns->nsim_dev_port->ddir,
++ debugfs_create_u32_array("table0", 0400, ns->udp_ports.ddir,
+ &ns->udp_ports.dfs_ports[0]);
+
+ ns->udp_ports.dfs_ports[1].array = ns->udp_ports.ports[1];
+ ns->udp_ports.dfs_ports[1].n_elements = NSIM_UDP_TUNNEL_N_PORTS;
+- debugfs_create_u32_array("udp_ports_table1", 0400,
+- ns->nsim_dev_port->ddir,
++ debugfs_create_u32_array("table1", 0400, ns->udp_ports.ddir,
+ &ns->udp_ports.dfs_ports[1]);
+
+- debugfs_create_file("udp_ports_reset", 0200, ns->nsim_dev_port->ddir,
++ debugfs_create_file("reset", 0200, ns->udp_ports.ddir,
+ dev, &nsim_udp_tunnels_info_reset_fops);
+
+ /* Note: it's not normal to allocate the info struct like this!
+@@ -196,6 +198,9 @@ int nsim_udp_tunnels_info_create(struct nsim_dev *nsim_dev,
+
+ void nsim_udp_tunnels_info_destroy(struct net_device *dev)
+ {
++ struct netdevsim *ns = netdev_priv(dev);
++
++ debugfs_remove_recursive(ns->udp_ports.ddir);
+ kfree(dev->udp_tunnel_nic_info);
+ dev->udp_tunnel_nic_info = NULL;
+ }
+diff --git a/drivers/net/phy/marvell-88q2xxx.c b/drivers/net/phy/marvell-88q2xxx.c
+index c812f16eaa3a88..b3a5a0af19da66 100644
+--- a/drivers/net/phy/marvell-88q2xxx.c
++++ b/drivers/net/phy/marvell-88q2xxx.c
+@@ -95,6 +95,10 @@
+
+ #define MDIO_MMD_PCS_MV_TDR_OFF_CUTOFF 65246
+
++struct mv88q2xxx_priv {
++ bool enable_temp;
++};
++
+ struct mmd_val {
+ int devad;
+ u32 regnum;
+@@ -669,17 +673,12 @@ static const struct hwmon_chip_info mv88q2xxx_hwmon_chip_info = {
+
+ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+ {
++ struct mv88q2xxx_priv *priv = phydev->priv;
+ struct device *dev = &phydev->mdio.dev;
+ struct device *hwmon;
+ char *hwmon_name;
+- int ret;
+-
+- /* Enable temperature sense */
+- ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, MDIO_MMD_PCS_MV_TEMP_SENSOR2,
+- MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
+- if (ret < 0)
+- return ret;
+
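++	/* Just record the request here; the sensor is enabled in config_init() */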
++ priv->enable_temp = true;
+ hwmon_name = devm_hwmon_sanitize_name(dev, dev_name(dev));
+ if (IS_ERR(hwmon_name))
+ return PTR_ERR(hwmon_name);
+@@ -702,6 +701,14 @@ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+
+ static int mv88q2xxx_probe(struct phy_device *phydev)
+ {
++ struct mv88q2xxx_priv *priv;
++
++ priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
++ if (!priv)
++ return -ENOMEM;
++
++ phydev->priv = priv;
++
+ return mv88q2xxx_hwmon_probe(phydev);
+ }
+
+@@ -792,6 +799,18 @@ static int mv88q222x_revb1_revb2_config_init(struct phy_device *phydev)
+
+ static int mv88q222x_config_init(struct phy_device *phydev)
+ {
++ struct mv88q2xxx_priv *priv = phydev->priv;
++ int ret;
++
++ /* Enable temperature sense */
++ if (priv->enable_temp) {
++ ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
++ MDIO_MMD_PCS_MV_TEMP_SENSOR2,
++ MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
++ if (ret < 0)
++ return ret;
++ }
++
+ if (phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] == PHY_ID_88Q2220_REVB0)
+ return mv88q222x_revb0_config_init(phydev);
+ else
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 5aa41d5f7765a6..5ca6ecf0ce5fbc 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1329,9 +1329,9 @@ int tap_queue_resize(struct tap_dev *tap)
+ list_for_each_entry(q, &tap->queue_list, next)
+ rings[i++] = &q->ring;
+
+- ret = ptr_ring_resize_multiple(rings, n,
+- dev->tx_queue_len, GFP_KERNEL,
+- __skb_array_destroy_skb);
++ ret = ptr_ring_resize_multiple_bh(rings, n,
++ dev->tx_queue_len, GFP_KERNEL,
++ __skb_array_destroy_skb);
+
+ kfree(rings);
+ return ret;
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 1c85dda83825d8..7f4ef219eee44f 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -1175,6 +1175,13 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ return -EBUSY;
+ }
+
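++	/* port_dev already sits below the team device; enslaving it would create a loop */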
++ if (netdev_has_upper_dev(port_dev, dev)) {
++ NL_SET_ERR_MSG(extack, "Device is already a lower device of the team interface");
++ netdev_err(dev, "Device %s is already a lower device of the team interface\n",
++ portname);
++ return -EBUSY;
++ }
++
+ if (port_dev->features & NETIF_F_VLAN_CHALLENGED &&
+ vlan_uses_dev(dev)) {
+ NL_SET_ERR_MSG(extack, "Device is VLAN challenged and team device has VLAN set up");
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 03fe9e3ee7af15..6fc60950100c7c 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3697,9 +3697,9 @@ static int tun_queue_resize(struct tun_struct *tun)
+ list_for_each_entry(tfile, &tun->disabled, next)
+ rings[i++] = &tfile->tx_ring;
+
+- ret = ptr_ring_resize_multiple(rings, n,
+- dev->tx_queue_len, GFP_KERNEL,
+- tun_ptr_free);
++ ret = ptr_ring_resize_multiple_bh(rings, n,
++ dev->tx_queue_len, GFP_KERNEL,
++ tun_ptr_free);
+
+ kfree(rings);
+ return ret;
+diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c
+index 01a3b2417a5401..ddff6f19ff98eb 100644
+--- a/drivers/net/usb/rtl8150.c
++++ b/drivers/net/usb/rtl8150.c
+@@ -71,6 +71,14 @@
+ #define MSR_SPEED (1<<3)
+ #define MSR_LINK (1<<2)
+
++/* USB endpoints */
++enum rtl8150_usb_ep {
++ RTL8150_USB_EP_CONTROL = 0,
++ RTL8150_USB_EP_BULK_IN = 1,
++ RTL8150_USB_EP_BULK_OUT = 2,
++ RTL8150_USB_EP_INT_IN = 3,
++};
++
+ /* Interrupt pipe data */
+ #define INT_TSR 0x00
+ #define INT_RSR 0x01
+@@ -867,6 +875,13 @@ static int rtl8150_probe(struct usb_interface *intf,
+ struct usb_device *udev = interface_to_usbdev(intf);
+ rtl8150_t *dev;
+ struct net_device *netdev;
++ static const u8 bulk_ep_addr[] = {
++ RTL8150_USB_EP_BULK_IN | USB_DIR_IN,
++ RTL8150_USB_EP_BULK_OUT | USB_DIR_OUT,
++ 0};
++ static const u8 int_ep_addr[] = {
++ RTL8150_USB_EP_INT_IN | USB_DIR_IN,
++ 0};
+
+ netdev = alloc_etherdev(sizeof(rtl8150_t));
+ if (!netdev)
+@@ -880,6 +895,13 @@ static int rtl8150_probe(struct usb_interface *intf,
+ return -ENOMEM;
+ }
+
++ /* Verify that all required endpoints are present */
++ if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
++ !usb_check_int_endpoints(intf, int_ep_addr)) {
++ dev_err(&intf->dev, "couldn't find required endpoints\n");
++ goto out;
++ }
++
+ tasklet_setup(&dev->tl, rx_fixup);
+ spin_lock_init(&dev->rx_pool_lock);
+
+diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
+index d2023e7131bd4f..6e6e9f05509ab0 100644
+--- a/drivers/net/vxlan/vxlan_vnifilter.c
++++ b/drivers/net/vxlan/vxlan_vnifilter.c
+@@ -411,6 +411,11 @@ static int vxlan_vnifilter_dump(struct sk_buff *skb, struct netlink_callback *cb
+ struct tunnel_msg *tmsg;
+ struct net_device *dev;
+
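++	/* The dump request must be large enough to carry a struct tunnel_msg */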
++ if (cb->nlh->nlmsg_len < nlmsg_msg_size(sizeof(struct tunnel_msg))) {
++ NL_SET_ERR_MSG(cb->extack, "Invalid msg length");
++ return -EINVAL;
++ }
++
+ tmsg = nlmsg_data(cb->nlh);
+
+ if (tmsg->flags & ~TUNNEL_MSG_VALID_USER_FLAGS) {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 40088e62572e12..40b52d12b43235 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -3872,6 +3872,7 @@ int ath11k_dp_process_rx_err(struct ath11k_base *ab, struct napi_struct *napi,
+ ath11k_hal_rx_msdu_link_info_get(link_desc_va, &num_msdus, msdu_cookies,
+ &rbm);
+ if (rbm != HAL_RX_BUF_RBM_WBM_IDLE_DESC_LIST &&
++ rbm != HAL_RX_BUF_RBM_SW1_BM &&
+ rbm != HAL_RX_BUF_RBM_SW3_BM) {
+ ab->soc_stats.invalid_rbm++;
+ ath11k_warn(ab, "invalid return buffer manager %d\n", rbm);
+diff --git a/drivers/net/wireless/ath/ath11k/hal_rx.c b/drivers/net/wireless/ath/ath11k/hal_rx.c
+index 8f7dd43dc1bd8e..753bd93f02123d 100644
+--- a/drivers/net/wireless/ath/ath11k/hal_rx.c
++++ b/drivers/net/wireless/ath/ath11k/hal_rx.c
+@@ -372,7 +372,8 @@ int ath11k_hal_wbm_desc_parse_err(struct ath11k_base *ab, void *desc,
+
+ ret_buf_mgr = FIELD_GET(BUFFER_ADDR_INFO1_RET_BUF_MGR,
+ wbm_desc->buf_addr_info.info1);
+- if (ret_buf_mgr != HAL_RX_BUF_RBM_SW3_BM) {
++ if (ret_buf_mgr != HAL_RX_BUF_RBM_SW1_BM &&
++ ret_buf_mgr != HAL_RX_BUF_RBM_SW3_BM) {
+ ab->soc_stats.invalid_rbm++;
+ return -EINVAL;
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 8946141aa0dce6..fbf5d57283576f 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -7220,9 +7220,9 @@ ath12k_mac_vdev_start_restart(struct ath12k_vif *arvif,
+ chandef->chan->band,
+ arvif->vif->type);
+ arg.min_power = 0;
+- arg.max_power = chandef->chan->max_power * 2;
+- arg.max_reg_power = chandef->chan->max_reg_power * 2;
+- arg.max_antenna_gain = chandef->chan->max_antenna_gain * 2;
++ arg.max_power = chandef->chan->max_power;
++ arg.max_reg_power = chandef->chan->max_reg_power;
++ arg.max_antenna_gain = chandef->chan->max_antenna_gain;
+
+ arg.pref_tx_streams = ar->num_tx_chains;
+ arg.pref_rx_streams = ar->num_rx_chains;
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 408776562a7e56..cd36cab6db75d3 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -1590,7 +1590,10 @@ static int wcn36xx_probe(struct platform_device *pdev)
+ }
+
+ n_channels = wcn_band_2ghz.n_channels + wcn_band_5ghz.n_channels;
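++	/* Size the allocation per struct wcn36xx_chan_survey, not per byte */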
+- wcn->chan_survey = devm_kmalloc(wcn->dev, n_channels, GFP_KERNEL);
++ wcn->chan_survey = devm_kcalloc(wcn->dev,
++ n_channels,
++ sizeof(struct wcn36xx_chan_survey),
++ GFP_KERNEL);
+ if (!wcn->chan_survey) {
+ ret = -ENOMEM;
+ goto out_wq;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+index 31e080e4da6697..ab3d6cfcb02bde 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+@@ -6,6 +6,8 @@
+ #ifndef _fwil_h_
+ #define _fwil_h_
+
++#include "debug.h"
++
+ /*******************************************************************************
+ * Dongle command codes that are interpreted by firmware
+ ******************************************************************************/
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+index 091fb6fd7c787c..834f7c9bb9e92d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+@@ -13,9 +13,12 @@
+ #include <linux/efi.h>
+ #include "fw/runtime.h"
+
+-#define IWL_EFI_VAR_GUID EFI_GUID(0x92daaf2f, 0xc02b, 0x455b, \
+- 0xb2, 0xec, 0xf5, 0xa3, \
+- 0x59, 0x4f, 0x4a, 0xea)
++#define IWL_EFI_WIFI_GUID EFI_GUID(0x92daaf2f, 0xc02b, 0x455b, \
++ 0xb2, 0xec, 0xf5, 0xa3, \
++ 0x59, 0x4f, 0x4a, 0xea)
++#define IWL_EFI_WIFI_BT_GUID EFI_GUID(0xe65d8884, 0xd4af, 0x4b20, \
++ 0x8d, 0x03, 0x77, 0x2e, \
++ 0xcc, 0x3d, 0xa5, 0x31)
+
+ struct iwl_uefi_pnvm_mem_desc {
+ __le32 addr;
+@@ -61,7 +64,7 @@ void *iwl_uefi_get_pnvm(struct iwl_trans *trans, size_t *len)
+
+ *len = 0;
+
+- data = iwl_uefi_get_variable(IWL_UEFI_OEM_PNVM_NAME, &IWL_EFI_VAR_GUID,
++ data = iwl_uefi_get_variable(IWL_UEFI_OEM_PNVM_NAME, &IWL_EFI_WIFI_GUID,
+ &package_size);
+ if (IS_ERR(data)) {
+ IWL_DEBUG_FW(trans,
+@@ -76,18 +79,18 @@ void *iwl_uefi_get_pnvm(struct iwl_trans *trans, size_t *len)
+ return data;
+ }
+
+-static
+-void *iwl_uefi_get_verified_variable(struct iwl_trans *trans,
+- efi_char16_t *uefi_var_name,
+- char *var_name,
+- unsigned int expected_size,
+- unsigned long *size)
++static void *
++iwl_uefi_get_verified_variable_guid(struct iwl_trans *trans,
++ efi_guid_t *guid,
++ efi_char16_t *uefi_var_name,
++ char *var_name,
++ unsigned int expected_size,
++ unsigned long *size)
+ {
+ void *var;
+ unsigned long var_size;
+
+- var = iwl_uefi_get_variable(uefi_var_name, &IWL_EFI_VAR_GUID,
+- &var_size);
++ var = iwl_uefi_get_variable(uefi_var_name, guid, &var_size);
+
+ if (IS_ERR(var)) {
+ IWL_DEBUG_RADIO(trans,
+@@ -112,6 +115,18 @@ void *iwl_uefi_get_verified_variable(struct iwl_trans *trans,
+ return var;
+ }
+
++static void *
++iwl_uefi_get_verified_variable(struct iwl_trans *trans,
++ efi_char16_t *uefi_var_name,
++ char *var_name,
++ unsigned int expected_size,
++ unsigned long *size)
++{
++ return iwl_uefi_get_verified_variable_guid(trans, &IWL_EFI_WIFI_GUID,
++ uefi_var_name, var_name,
++ expected_size, size);
++}
++
+ int iwl_uefi_handle_tlv_mem_desc(struct iwl_trans *trans, const u8 *data,
+ u32 tlv_len, struct iwl_pnvm_image *pnvm_data)
+ {
+@@ -311,8 +326,9 @@ void iwl_uefi_get_step_table(struct iwl_trans *trans)
+ if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ return;
+
+- data = iwl_uefi_get_verified_variable(trans, IWL_UEFI_STEP_NAME,
+- "STEP", sizeof(*data), NULL);
++ data = iwl_uefi_get_verified_variable_guid(trans, &IWL_EFI_WIFI_BT_GUID,
++ IWL_UEFI_STEP_NAME,
++ "STEP", sizeof(*data), NULL);
+ if (IS_ERR(data))
+ return;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/coex.c b/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
+index b607961970e970..9b8624304fa308 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
+@@ -530,18 +530,15 @@ static void iwl_mvm_bt_coex_notif_iterator(void *_data, u8 *mac,
+ struct ieee80211_vif *vif)
+ {
+ struct iwl_mvm *mvm = _data;
++ struct ieee80211_bss_conf *link_conf;
++ unsigned int link_id;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+ return;
+
+- for (int link_id = 0;
+- link_id < IEEE80211_MLD_MAX_NUM_LINKS;
+- link_id++) {
+- struct ieee80211_bss_conf *link_conf =
+- rcu_dereference_check(vif->link_conf[link_id],
+- lockdep_is_held(&mvm->mutex));
++ for_each_vif_active_link(vif, link_conf, link_id) {
+ struct ieee80211_chanctx_conf *chanctx_conf =
+ rcu_dereference_check(link_conf->chanctx_conf,
+ lockdep_is_held(&mvm->mutex));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index ca026b5256ce33..5f4942f6cc68e4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1880,7 +1880,9 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ IWL_DEBUG_TX_REPLY(mvm,
+ "Next reclaimed packet:%d\n",
+ next_reclaimed);
+- iwl_mvm_count_mpdu(mvmsta, sta_id, 1, true, 0);
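++			/* Internal TIDs such as IWL_MGMT_TID are out of
++			 * range for the per-TID MPDU counters.
++			 */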
++ if (tid < IWL_MAX_TID_COUNT)
++ iwl_mvm_count_mpdu(mvmsta, sta_id, 1,
++ true, 0);
+ } else {
+ IWL_DEBUG_TX_REPLY(mvm,
+ "NDP - don't update next_reclaimed\n");
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 9d5561f441347b..0ca83f1a3e3ea2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -958,11 +958,11 @@ int mt76_set_channel(struct mt76_phy *phy, struct cfg80211_chan_def *chandef,
+
+ if (chandef->chan != phy->main_chan)
+ memset(phy->chan_state, 0, sizeof(*phy->chan_state));
+- mt76_worker_enable(&dev->tx_worker);
+
+ ret = dev->drv->set_channel(phy);
+
+ clear_bit(MT76_RESET, &phy->state);
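++	/* Resume the tx worker only once the channel switch has completed */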
++ mt76_worker_enable(&dev->tx_worker);
+ mt76_worker_schedule(&dev->tx_worker);
+
+ mutex_unlock(&dev->mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 96e34277fece9b..1cc8fc8fefe740 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -1113,7 +1113,7 @@ mt7615_mcu_uni_add_dev(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ {
+ struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+
+- return mt76_connac_mcu_uni_add_dev(phy->mt76, &vif->bss_conf,
++ return mt76_connac_mcu_uni_add_dev(phy->mt76, &vif->bss_conf, &mvif->mt76,
+ &mvif->sta.wcid, enable);
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 864246f9408899..7d07e720e4ec1d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1137,10 +1137,10 @@ EXPORT_SYMBOL_GPL(mt76_connac_mcu_wtbl_ba_tlv);
+
+ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ struct ieee80211_bss_conf *bss_conf,
++ struct mt76_vif *mvif,
+ struct mt76_wcid *wcid,
+ bool enable)
+ {
+- struct mt76_vif *mvif = (struct mt76_vif *)bss_conf->vif->drv_priv;
+ struct mt76_dev *dev = phy->dev;
+ struct {
+ struct {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 1b0e80dfc346b8..57a8340fa70097 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -1938,6 +1938,7 @@ void mt76_connac_mcu_sta_ba_tlv(struct sk_buff *skb,
+ bool enable, bool tx);
+ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ struct ieee80211_bss_conf *bss_conf,
++ struct mt76_vif *mvif,
+ struct mt76_wcid *wcid,
+ bool enable);
+ int mt76_connac_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 6bef96e3d2a3d9..77d82ccd73079d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -82,7 +82,7 @@ static ssize_t mt7915_thermal_temp_store(struct device *dev,
+ return ret;
+
+ mutex_lock(&phy->dev->mt76.mutex);
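++	/* Clamp the raw millidegree value first so DIV_ROUND_CLOSEST() cannot overflow */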
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 60, 130);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 60 * 1000, 130 * 1000), 1000);
+
+ if ((i - 1 == MT7915_CRIT_TEMP_IDX &&
+ val > phy->throttle_temp[MT7915_MAX_TEMP_IDX]) ||
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index cf77ce0c875991..799e8d2cc7e6ec 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1388,6 +1388,8 @@ mt7915_mac_restart(struct mt7915_dev *dev)
+ if (dev_is_pci(mdev->dev)) {
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ if (dev->hif2) {
++ mt76_wr(dev, MT_PCIE_RECOG_ID,
++ dev->hif2->index | MT_PCIE_RECOG_ID_SEM);
+ if (is_mt7915(mdev))
+ mt76_wr(dev, MT_PCIE1_MAC_INT_ENABLE, 0xff);
+ else
+@@ -1442,9 +1444,11 @@ static void
+ mt7915_mac_full_reset(struct mt7915_dev *dev)
+ {
+ struct mt76_phy *ext_phy;
++ struct mt7915_phy *phy2;
+ int i;
+
+ ext_phy = dev->mt76.phys[MT_BAND1];
++ phy2 = ext_phy ? ext_phy->priv : NULL;
+
+ dev->recovery.hw_full_reset = true;
+
+@@ -1474,6 +1478,9 @@ mt7915_mac_full_reset(struct mt7915_dev *dev)
+
+ memset(dev->mt76.wcid_mask, 0, sizeof(dev->mt76.wcid_mask));
+ dev->mt76.vif_mask = 0;
++ dev->phy.omac_mask = 0;
++ if (phy2)
++ phy2->omac_mask = 0;
+
+ i = mt76_wcid_alloc(dev->mt76.wcid_mask, MT7915_WTBL_STA);
+ dev->mt76.global_wcid.idx = i;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index d75e8dea1fbdc8..8c0d63cebf3e14 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -246,8 +246,10 @@ static int mt7915_add_interface(struct ieee80211_hw *hw,
+ phy->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
+
+ idx = mt76_wcid_alloc(dev->mt76.wcid_mask, mt7915_wtbl_size(dev));
+- if (idx < 0)
+- return -ENOSPC;
++ if (idx < 0) {
++ ret = -ENOSPC;
++ goto out;
++ }
+
+ INIT_LIST_HEAD(&mvif->sta.rc_list);
+ INIT_LIST_HEAD(&mvif->sta.wcid.poll_list);
+@@ -619,8 +621,9 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
+ if (changed & BSS_CHANGED_ASSOC)
+ set_bss_info = vif->cfg.assoc;
+ if (changed & BSS_CHANGED_BEACON_ENABLED &&
++ info->enable_beacon &&
+ vif->type != NL80211_IFTYPE_AP)
+- set_bss_info = set_sta = info->enable_beacon;
++ set_bss_info = set_sta = 1;
+
+ if (set_bss_info == 1)
+ mt7915_mcu_add_bss_info(phy, vif, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index 44e112b8b5b368..2e7604eed27b02 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -484,7 +484,7 @@ static u32 __mt7915_reg_addr(struct mt7915_dev *dev, u32 addr)
+ continue;
+
+ ofs = addr - dev->reg.map[i].phys;
+- if (ofs > dev->reg.map[i].size)
++ if (ofs >= dev->reg.map[i].size)
+ continue;
+
+ return dev->reg.map[i].maps + ofs;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index ac0b1f0eb27c14..5fe872ef2e939b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -191,6 +191,7 @@ struct mt7915_hif {
+ struct device *dev;
+ void __iomem *regs;
+ int irq;
++ u32 index;
+ };
+
+ struct mt7915_phy {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+index 39132894e8ea29..07b0a5766eab7d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+@@ -42,6 +42,7 @@ static struct mt7915_hif *mt7915_pci_get_hif2(u32 idx)
+ continue;
+
+ get_device(hif->dev);
++ hif->index = idx;
+ goto out;
+ }
+ hif = NULL;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 047106b65d2bc6..bd1455698ebe5f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -647,6 +647,7 @@ mt7921_vif_connect_iter(void *priv, u8 *mac,
+ ieee80211_disconnect(vif, true);
+
+ mt76_connac_mcu_uni_add_dev(&dev->mphy, &vif->bss_conf,
++ &mvif->bss_conf.mt76,
+ &mvif->sta.deflink.wcid, true);
+ mt7921_mcu_set_tx(dev, vif);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index a7f5bfbc02ed1f..e2dfd3670c4c93 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -308,6 +308,7 @@ mt7921_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ mvif->bss_conf.mt76.wmm_idx = mvif->bss_conf.mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+
+ ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, &vif->bss_conf,
++ &mvif->bss_conf.mt76,
+ &mvif->sta.deflink.wcid, true);
+ if (ret)
+ goto out;
+@@ -531,7 +532,13 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ } else {
+ if (idx == *wcid_keyidx)
+ *wcid_keyidx = -1;
+- goto out;
++
++		/* For security reasons we don't trigger key deletion when
++		 * reassociating, but we should trigger the deletion process
++		 * to avoid using an incorrect cipher after disconnection.
++		 */
++ if (vif->type != NL80211_IFTYPE_STATION || vif->cfg.assoc)
++ goto out;
+ }
+
+ mt76_wcid_key_setup(&dev->mt76, wcid, key);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+index 634c42bbf23f67..a095fb31e391a1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+@@ -49,7 +49,7 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
+ break;
+ mlink = list_first_entry(&sta_poll_list,
+ struct mt792x_link_sta, wcid.poll_list);
+- msta = container_of(mlink, struct mt792x_sta, deflink);
++ msta = mlink->sta;
+ spin_lock_bh(&dev->mt76.sta_poll_lock);
+ list_del_init(&mlink->wcid.poll_list);
+ spin_unlock_bh(&dev->mt76.sta_poll_lock);
+@@ -1271,6 +1271,7 @@ mt7925_vif_connect_iter(void *priv, u8 *mac,
+ struct mt792x_dev *dev = mvif->phy->dev;
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ struct ieee80211_bss_conf *bss_conf;
++ struct mt792x_bss_conf *mconf;
+ int i;
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+@@ -1278,8 +1279,9 @@ mt7925_vif_connect_iter(void *priv, u8 *mac,
+
+ for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+ bss_conf = mt792x_vif_to_bss_conf(vif, i);
++ mconf = mt792x_vif_to_link(mvif, i);
+
+- mt76_connac_mcu_uni_add_dev(&dev->mphy, bss_conf,
++ mt76_connac_mcu_uni_add_dev(&dev->mphy, bss_conf, &mconf->mt76,
+ &mvif->sta.deflink.wcid, true);
+ mt7925_mcu_set_tx(dev, bss_conf);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index 791c8b00e11264..ddc67423efe2cb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -365,18 +365,14 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ mconf->mt76.omac_idx = ieee80211_vif_is_mld(vif) ?
+ 0 : mconf->mt76.idx;
+ mconf->mt76.band_idx = 0xff;
+- mconf->mt76.wmm_idx = mconf->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
++ mconf->mt76.wmm_idx = ieee80211_vif_is_mld(vif) ?
++ 0 : mconf->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+
+ if (mvif->phy->mt76->chandef.chan->band != NL80211_BAND_2GHZ)
+ mconf->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL + 4;
+ else
+ mconf->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL;
+
+- ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf,
+- &mlink->wcid, true);
+- if (ret)
+- goto out;
+-
+ dev->mt76.vif_mask |= BIT_ULL(mconf->mt76.idx);
+ mvif->phy->omac_mask |= BIT_ULL(mconf->mt76.omac_idx);
+
+@@ -384,7 +380,7 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+
+ INIT_LIST_HEAD(&mlink->wcid.poll_list);
+ mlink->wcid.idx = idx;
+- mlink->wcid.phy_idx = mconf->mt76.band_idx;
++ mlink->wcid.phy_idx = 0;
+ mlink->wcid.hw_key_idx = -1;
+ mlink->wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ mt76_wcid_init(&mlink->wcid);
+@@ -395,6 +391,12 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ ewma_rssi_init(&mconf->rssi);
+
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mlink->wcid);
++
++ ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mconf->mt76,
++ &mlink->wcid, true);
++ if (ret)
++ goto out;
++
+ if (vif->txq) {
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+ mtxq->wcid = idx;
+@@ -837,6 +839,7 @@ static int mt7925_mac_link_sta_add(struct mt76_dev *mdev,
+ u8 link_id = link_sta->link_id;
+ struct mt792x_link_sta *mlink;
+ struct mt792x_sta *msta;
++ struct mt76_wcid *wcid;
+ int ret, idx;
+
+ msta = (struct mt792x_sta *)link_sta->sta->drv_priv;
+@@ -850,11 +853,20 @@ static int mt7925_mac_link_sta_add(struct mt76_dev *mdev,
+ INIT_LIST_HEAD(&mlink->wcid.poll_list);
+ mlink->wcid.sta = 1;
+ mlink->wcid.idx = idx;
+- mlink->wcid.phy_idx = mconf->mt76.band_idx;
++ mlink->wcid.phy_idx = 0;
+ mlink->wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ mlink->last_txs = jiffies;
+ mlink->wcid.link_id = link_sta->link_id;
+ mlink->wcid.link_valid = !!link_sta->sta->valid_links;
++ mlink->sta = msta;
++
++ wcid = &mlink->wcid;
++ ewma_signal_init(&wcid->rssi);
++ rcu_assign_pointer(dev->mt76.wcid[wcid->idx], wcid);
++ mt76_wcid_init(wcid);
++ ewma_avg_signal_init(&mlink->avg_ack_signal);
++ memset(mlink->airtime_ac, 0,
++ sizeof(msta->deflink.airtime_ac));
+
+ ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+ if (ret)
+@@ -866,9 +878,14 @@ static int mt7925_mac_link_sta_add(struct mt76_dev *mdev,
+ link_conf = mt792x_vif_to_bss_conf(vif, link_id);
+
+ /* should update bss info before STA add */
+- if (vif->type == NL80211_IFTYPE_STATION && !link_sta->sta->tdls)
+- mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx,
+- link_conf, link_sta, false);
++ if (vif->type == NL80211_IFTYPE_STATION && !link_sta->sta->tdls) {
++ if (ieee80211_vif_is_mld(vif))
++ mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx,
++ link_conf, link_sta, link_sta != mlink->pri_link);
++ else
++ mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx,
++ link_conf, link_sta, false);
++ }
+
+ if (ieee80211_vif_is_mld(vif) &&
+ link_sta == mlink->pri_link) {
+@@ -904,7 +921,6 @@ mt7925_mac_sta_add_links(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, unsigned long new_links)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+- struct mt76_wcid *wcid;
+ unsigned int link_id;
+ int err = 0;
+
+@@ -921,14 +937,6 @@ mt7925_mac_sta_add_links(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ err = -ENOMEM;
+ break;
+ }
+-
+- wcid = &mlink->wcid;
+- ewma_signal_init(&wcid->rssi);
+- rcu_assign_pointer(dev->mt76.wcid[wcid->idx], wcid);
+- mt76_wcid_init(wcid);
+- ewma_avg_signal_init(&mlink->avg_ack_signal);
+- memset(mlink->airtime_ac, 0,
+- sizeof(msta->deflink.airtime_ac));
+ }
+
+ msta->valid_links |= BIT(link_id);
+@@ -1141,8 +1149,7 @@ static void mt7925_mac_link_sta_remove(struct mt76_dev *mdev,
+ struct mt792x_bss_conf *mconf;
+
+ mconf = mt792x_link_conf_to_mconf(link_conf);
+- mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx, link_conf,
+- link_sta, false);
++ mt792x_mac_link_bss_remove(dev, mconf, mlink);
+ }
+
+ spin_lock_bh(&mdev->sta_poll_lock);
+@@ -1200,12 +1207,45 @@ void mt7925_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ {
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
++ struct {
++ struct {
++ u8 omac_idx;
++ u8 band_idx;
++ __le16 pad;
++ } __packed hdr;
++ struct req_tlv {
++ __le16 tag;
++ __le16 len;
++ u8 active;
++ u8 link_idx; /* hw link idx */
++ u8 omac_addr[ETH_ALEN];
++ } __packed tlv;
++ } dev_req = {
++ .hdr = {
++ .omac_idx = 0,
++ .band_idx = 0,
++ },
++ .tlv = {
++ .tag = cpu_to_le16(DEV_INFO_ACTIVE),
++ .len = cpu_to_le16(sizeof(struct req_tlv)),
++ .active = true,
++ },
++ };
+ unsigned long rem;
+
+ rem = ieee80211_vif_is_mld(vif) ? msta->valid_links : BIT(0);
+
+ mt7925_mac_sta_remove_links(dev, vif, sta, rem);
+
++ if (ieee80211_vif_is_mld(vif)) {
++ mt7925_mcu_set_dbdc(&dev->mphy, false);
++
++		/* recover the omac address for the legacy interface */
++ memcpy(dev_req.tlv.omac_addr, vif->addr, ETH_ALEN);
++ mt76_mcu_send_msg(mdev, MCU_UNI_CMD(DEV_INFO_UPDATE),
++ &dev_req, sizeof(dev_req), true);
++ }
++
+ if (vif->type == NL80211_IFTYPE_STATION) {
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
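
The dev_req structure added above follows the usual MCU message shape: a small packed header followed by a tagged, length-prefixed TLV, with multi-byte fields fixed to little-endian via cpu_to_le16(). A minimal userspace sketch of the same layout idea (the tag value and field widths here are hypothetical, and it relies on a little-endian host in place of cpu_to_le16()):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct req_tlv {
	uint16_t tag;			/* little-endian on the wire */
	uint16_t len;			/* covers this TLV only */
	uint8_t  active;
	uint8_t  link_idx;
	uint8_t  omac_addr[6];
} __attribute__((packed));

struct dev_req {
	struct {
		uint8_t  omac_idx;
		uint8_t  band_idx;
		uint16_t pad;
	} __attribute__((packed)) hdr;
	struct req_tlv tlv;
} __attribute__((packed));

int main(void)
{
	struct dev_req req = {
		/* hypothetical tag; plain stores are wire-correct on LE hosts */
		.tlv = { .tag = 0x0001, .len = sizeof(struct req_tlv), .active = 1 },
	};
	const uint8_t mac[6] = { 0x02, 0x00, 0x00, 0xaa, 0xbb, 0xcc };

	memcpy(req.tlv.omac_addr, mac, sizeof(mac));
	printf("msg %zu bytes, tlv len %u\n", sizeof(req), req.tlv.len);
	return 0;
}

Setting .len to the size of the TLV alone, rather than the whole message, is what lets the firmware walk a chain of TLVs without knowing the header layout.
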
+@@ -1250,22 +1290,22 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ case IEEE80211_AMPDU_RX_START:
+ mt76_rx_aggr_start(&dev->mt76, &msta->deflink.wcid, tid, ssn,
+ params->buf_size);
+- mt7925_mcu_uni_rx_ba(dev, params, true);
++ mt7925_mcu_uni_rx_ba(dev, vif, params, true);
+ break;
+ case IEEE80211_AMPDU_RX_STOP:
+ mt76_rx_aggr_stop(&dev->mt76, &msta->deflink.wcid, tid);
+- mt7925_mcu_uni_rx_ba(dev, params, false);
++ mt7925_mcu_uni_rx_ba(dev, vif, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_OPERATIONAL:
+ mtxq->aggr = true;
+ mtxq->send_bar = false;
+- mt7925_mcu_uni_tx_ba(dev, params, true);
++ mt7925_mcu_uni_tx_ba(dev, vif, params, true);
+ break;
+ case IEEE80211_AMPDU_TX_STOP_FLUSH:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->deflink.wcid.ampdu_state);
+- mt7925_mcu_uni_tx_ba(dev, params, false);
++ mt7925_mcu_uni_tx_ba(dev, vif, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_START:
+ set_bit(tid, &msta->deflink.wcid.ampdu_state);
+@@ -1274,7 +1314,7 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ case IEEE80211_AMPDU_TX_STOP_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->deflink.wcid.ampdu_state);
+- mt7925_mcu_uni_tx_ba(dev, params, false);
++ mt7925_mcu_uni_tx_ba(dev, vif, params, false);
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ break;
+ }
+@@ -1895,6 +1935,13 @@ static void mt7925_link_info_changed(struct ieee80211_hw *hw,
+ if (changed & (BSS_CHANGED_QOS | BSS_CHANGED_BEACON_ENABLED))
+ mt7925_mcu_set_tx(dev, info);
+
++ if (changed & BSS_CHANGED_BSSID) {
++ if (ieee80211_vif_is_mld(vif) &&
++ hweight16(mvif->valid_links) == 2)
++ /* Indicate the secondary setup done */
++ mt7925_mcu_uni_bss_bcnft(dev, info, true);
++ }
++
+ mt792x_mutex_release(dev);
+ }
+
+@@ -1946,6 +1993,8 @@ mt7925_change_vif_links(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ GFP_KERNEL);
+ mlink = devm_kzalloc(dev->mt76.dev, sizeof(*mlink),
+ GFP_KERNEL);
++ if (!mconf || !mlink)
++ return -ENOMEM;
+ }
+
+ mconfs[link_id] = mconf;
+@@ -1974,6 +2023,8 @@ mt7925_change_vif_links(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ goto free;
+
+ if (mconf != &mvif->bss_conf) {
++ mt7925_mcu_set_bss_pm(dev, link_conf, true);
++
+ err = mt7925_set_mlo_roc(phy, &mvif->bss_conf,
+ vif->active_links);
+ if (err < 0)
+@@ -2071,18 +2122,16 @@ static void mt7925_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct mt792x_chanctx *mctx = (struct mt792x_chanctx *)ctx->drv_priv;
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+- struct ieee80211_bss_conf *pri_link_conf;
+ struct mt792x_bss_conf *mconf;
+
+ mutex_lock(&dev->mt76.mutex);
+
+ if (ieee80211_vif_is_mld(vif)) {
+ mconf = mt792x_vif_to_link(mvif, link_conf->link_id);
+- pri_link_conf = mt792x_vif_to_bss_conf(vif, mvif->deflink_id);
+
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ mconf == &mvif->bss_conf)
+- mt7925_mcu_add_bss_info(&dev->phy, NULL, pri_link_conf,
++ mt7925_mcu_add_bss_info(&dev->phy, NULL, link_conf,
+ NULL, false);
+ } else {
+ mconf = &mvif->bss_conf;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 748ea6adbc6b39..ce3d8197b026a6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -123,10 +123,8 @@ EXPORT_SYMBOL_GPL(mt7925_mcu_regval);
+ int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
+ struct ieee80211_bss_conf *link_conf)
+ {
+- struct ieee80211_vif *mvif = container_of((void *)link_conf->vif,
+- struct ieee80211_vif,
+- drv_priv);
+ struct mt792x_bss_conf *mconf = mt792x_link_conf_to_mconf(link_conf);
++ struct ieee80211_vif *mvif = link_conf->vif;
+ struct sk_buff *skb;
+ int i, len = min_t(int, mvif->cfg.arp_addr_cnt,
+ IEEE80211_BSS_ARP_ADDR_LIST_LEN);
+@@ -531,10 +529,10 @@ void mt7925_mcu_rx_event(struct mt792x_dev *dev, struct sk_buff *skb)
+
+ static int
+ mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
++ struct mt76_wcid *wcid,
+ struct ieee80211_ampdu_params *params,
+ bool enable, bool tx)
+ {
+- struct mt76_wcid *wcid = (struct mt76_wcid *)params->sta->drv_priv;
+ struct sta_rec_ba_uni *ba;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+@@ -562,28 +560,60 @@ mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+
+ /** starec & wtbl **/
+ int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+- struct mt792x_vif *mvif = msta->vif;
++ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++ struct mt792x_link_sta *mlink;
++ struct mt792x_bss_conf *mconf;
++ unsigned long usable_links = ieee80211_vif_usable_links(vif);
++ struct mt76_wcid *wcid;
++	u8 link_id;
++	int ret = 0;
++
++ for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
++ mconf = mt792x_vif_to_link(mvif, link_id);
++ mlink = mt792x_sta_to_link(msta, link_id);
++ wcid = &mlink->wcid;
+
+- if (enable && !params->amsdu)
+- msta->deflink.wcid.amsdu = false;
++ if (enable && !params->amsdu)
++ mlink->wcid.amsdu = false;
+
+- return mt7925_mcu_sta_ba(&dev->mt76, &mvif->bss_conf.mt76, params,
+- enable, true);
++ ret = mt7925_mcu_sta_ba(&dev->mt76, &mconf->mt76, wcid, params,
++ enable, true);
++ if (ret < 0)
++ break;
++ }
++
++ return ret;
+ }
+
+ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+- struct mt792x_vif *mvif = msta->vif;
++ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++ struct mt792x_link_sta *mlink;
++ struct mt792x_bss_conf *mconf;
++ unsigned long usable_links = ieee80211_vif_usable_links(vif);
++ struct mt76_wcid *wcid;
++	u8 link_id;
++	int ret = 0;
++
++ for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
++ mconf = mt792x_vif_to_link(mvif, link_id);
++ mlink = mt792x_sta_to_link(msta, link_id);
++ wcid = &mlink->wcid;
++
++ ret = mt7925_mcu_sta_ba(&dev->mt76, &mconf->mt76, wcid, params,
++ enable, false);
++ if (ret < 0)
++ break;
++ }
+
+- return mt7925_mcu_sta_ba(&dev->mt76, &mvif->bss_conf.mt76, params,
+- enable, false);
++ return ret;
+ }
+
+ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+@@ -638,7 +668,7 @@ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ for (offset = 0; offset < len; offset += le32_to_cpu(clc->len)) {
+ clc = (const struct mt7925_clc *)(clc_base + offset);
+
+- if (clc->idx > ARRAY_SIZE(phy->clc))
++ if (clc->idx >= ARRAY_SIZE(phy->clc))
+ break;
+
+ /* do not init buf again if chip reset triggered */
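
The one-character change from '>' to '>=' above closes an off-by-one: an index equal to ARRAY_SIZE() is already one past the last valid element. A standalone sketch of the boundary case (hypothetical array size, not the driver's clc table):

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
	int clc_buf[8];
	unsigned int idx = ARRAY_SIZE(clc_buf);	/* 8: the first invalid index */

	if (idx > ARRAY_SIZE(clc_buf))		/* buggy: 8 > 8 is false */
		puts("rejected by '>'");
	if (idx >= ARRAY_SIZE(clc_buf))		/* fixed: 8 >= 8 is true */
		puts("rejected by '>='");
	return 0;
}
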
+@@ -823,7 +853,7 @@ mt7925_mcu_get_nic_capability(struct mt792x_dev *dev)
+ mt7925_mcu_parse_phy_cap(dev, tlv->data);
+ break;
+ case MT_NIC_CAP_CHIP_CAP:
+- memcpy(&dev->phy.chip_cap, (void *)skb->data, sizeof(u64));
++ dev->phy.chip_cap = le64_to_cpu(*(__le64 *)tlv->data);
+ break;
+ case MT_NIC_CAP_EML_CAP:
+ mt7925_mcu_parse_eml_cap(dev, tlv->data);
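
The chip-capability fix above swaps a raw memcpy() from skb->data for le64_to_cpu() on tlv->data, correcting both the source pointer and the byte order on big-endian hosts. As a rough userspace illustration of why the explicit decode matters (not driver code; get_le64() stands in for le64_to_cpu()):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t get_le64(const uint8_t *p)
{
	uint64_t v = 0;

	for (int i = 7; i >= 0; i--)
		v = (v << 8) | p[i];	/* byte 0 is least significant */
	return v;
}

int main(void)
{
	const uint8_t buf[8] = { 0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11 };
	uint64_t cap;

	memcpy(&cap, buf, sizeof(cap));	/* only correct on little-endian hosts */
	printf("raw memcpy:         %#llx\n", (unsigned long long)cap);
	printf("explicit LE decode: %#llx\n", (unsigned long long)get_le64(buf));
	return 0;
}
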
+@@ -1153,7 +1183,12 @@ int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
+ u8 rsv[4];
+ } __packed hdr;
+ struct roc_acquire_tlv roc[2];
+- } __packed req;
++ } __packed req = {
++ .roc[0].tag = cpu_to_le16(UNI_ROC_NUM),
++ .roc[0].len = cpu_to_le16(sizeof(struct roc_acquire_tlv)),
++ .roc[1].tag = cpu_to_le16(UNI_ROC_NUM),
++ .roc[1].len = cpu_to_le16(sizeof(struct roc_acquire_tlv))
++ };
+
+ if (!mconf || hweight16(vif->valid_links) < 2 ||
+ hweight16(sel_links) != 2)
+@@ -1200,6 +1235,8 @@ int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
+ req.roc[i].bw_from_ap = CMD_CBW_20MHZ;
+ req.roc[i].center_chan = center_ch;
+ req.roc[i].center_chan_from_ap = center_ch;
++ req.roc[i].center_chan2 = 0;
++ req.roc[i].center_chan2_from_ap = 0;
+
+ /* STR : 0xfe indicates BAND_ALL with enabling DBDC
+ * EMLSR : 0xff indicates (BAND_AUTO) without DBDC
+@@ -1215,7 +1252,7 @@ int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
+ }
+
+ return mt76_mcu_send_msg(&mvif->phy->dev->mt76, MCU_UNI_CMD(ROC),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+@@ -1264,7 +1301,7 @@ int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+ }
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(ROC),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+@@ -1294,7 +1331,7 @@ int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(ROC),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ int mt7925_mcu_set_eeprom(struct mt792x_dev *dev)
+@@ -1357,7 +1394,7 @@ int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev,
+ &ps_req, sizeof(ps_req), true);
+ }
+
+-static int
++int
+ mt7925_mcu_uni_bss_bcnft(struct mt792x_dev *dev,
+ struct ieee80211_bss_conf *link_conf, bool enable)
+ {
+@@ -1447,12 +1484,12 @@ mt7925_mcu_set_bss_pm(struct mt792x_dev *dev,
+ int err;
+
+ err = mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+- &req1, sizeof(req1), false);
++ &req1, sizeof(req1), true);
+ if (err < 0 || !enable)
+ return err;
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ static void
+@@ -1898,7 +1935,11 @@ int mt7925_mcu_sta_update(struct mt792x_dev *dev,
+ mlink = mt792x_sta_to_link(msta, link_sta->link_id);
+ }
+ info.wcid = link_sta ? &mlink->wcid : &mvif->sta.deflink.wcid;
+- info.newly = link_sta ? state != MT76_STA_INFO_STATE_ASSOC : true;
++
++ if (link_sta)
++ info.newly = state != MT76_STA_INFO_STATE_ASSOC;
++ else
++ info.newly = state == MT76_STA_INFO_STATE_ASSOC ? false : true;
+
+ if (ieee80211_vif_is_mld(vif))
+ err = mt7925_mcu_mlo_sta_cmd(&dev->mphy, &info);
+@@ -1914,32 +1955,21 @@ int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ {
+ #define MT7925_FIF_BIT_CLR BIT(1)
+ #define MT7925_FIF_BIT_SET BIT(0)
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+- unsigned long valid = ieee80211_vif_is_mld(vif) ?
+- mvif->valid_links : BIT(0);
+- struct ieee80211_bss_conf *bss_conf;
+ int err = 0;
+- int i;
+
+ if (enable) {
+- for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+- bss_conf = mt792x_vif_to_bss_conf(vif, i);
+- err = mt7925_mcu_uni_bss_bcnft(dev, bss_conf, true);
+- if (err < 0)
+- return err;
+- }
++ err = mt7925_mcu_uni_bss_bcnft(dev, &vif->bss_conf, true);
++ if (err < 0)
++ return err;
+
+ return mt7925_mcu_set_rxfilter(dev, 0,
+ MT7925_FIF_BIT_SET,
+ MT_WF_RFCR_DROP_OTHER_BEACON);
+ }
+
+- for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+- bss_conf = mt792x_vif_to_bss_conf(vif, i);
+- err = mt7925_mcu_set_bss_pm(dev, bss_conf, false);
+- if (err)
+- return err;
+- }
++ err = mt7925_mcu_set_bss_pm(dev, &vif->bss_conf, false);
++ if (err < 0)
++ return err;
+
+ return mt7925_mcu_set_rxfilter(dev, 0,
+ MT7925_FIF_BIT_CLR,
+@@ -1976,8 +2006,6 @@ int mt7925_get_txpwr_info(struct mt792x_dev *dev, u8 band_idx, struct mt7925_txp
+ int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable)
+ {
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+-
+ struct {
+ struct {
+ u8 band_idx;
+@@ -1991,7 +2019,7 @@ int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ } __packed enable;
+ } __packed req = {
+ .hdr = {
+- .band_idx = mvif->bss_conf.mt76.band_idx,
++ .band_idx = 0,
+ },
+ .enable = {
+ .tag = cpu_to_le16(UNI_SNIFFER_ENABLE),
+@@ -2050,7 +2078,7 @@ int mt7925_mcu_config_sniffer(struct mt792x_vif *vif,
+ } __packed tlv;
+ } __packed req = {
+ .hdr = {
+- .band_idx = vif->bss_conf.mt76.band_idx,
++ .band_idx = 0,
+ },
+ .tlv = {
+ .tag = cpu_to_le16(UNI_SNIFFER_CONFIG),
+@@ -2179,11 +2207,27 @@ void mt7925_mcu_bss_rlm_tlv(struct sk_buff *skb, struct mt76_phy *phy,
+ req = (struct bss_rlm_tlv *)tlv;
+ req->control_channel = chandef->chan->hw_value;
+ req->center_chan = ieee80211_frequency_to_channel(freq1);
+- req->center_chan2 = ieee80211_frequency_to_channel(freq2);
++ req->center_chan2 = 0;
+ req->tx_streams = hweight8(phy->antenna_mask);
+ req->ht_op_info = 4; /* set HT 40M allowed */
+ req->rx_streams = hweight8(phy->antenna_mask);
+- req->band = band;
++ req->center_chan2 = 0;
++ req->sco = 0;
++ req->band = 1;
++
++ switch (band) {
++ case NL80211_BAND_2GHZ:
++ req->band = 1;
++ break;
++ case NL80211_BAND_5GHZ:
++ req->band = 2;
++ break;
++ case NL80211_BAND_6GHZ:
++ req->band = 3;
++ break;
++ default:
++ break;
++ }
+
+ switch (chandef->width) {
+ case NL80211_CHAN_WIDTH_40:
+@@ -2194,6 +2238,7 @@ void mt7925_mcu_bss_rlm_tlv(struct sk_buff *skb, struct mt76_phy *phy,
+ break;
+ case NL80211_CHAN_WIDTH_80P80:
+ req->bw = CMD_CBW_8080MHZ;
++ req->center_chan2 = ieee80211_frequency_to_channel(freq2);
+ break;
+ case NL80211_CHAN_WIDTH_160:
+ req->bw = CMD_CBW_160MHZ;
+@@ -2463,6 +2508,7 @@ static void
+ mt7925_mcu_bss_mld_tlv(struct sk_buff *skb,
+ struct ieee80211_bss_conf *link_conf)
+ {
++ struct ieee80211_vif *vif = link_conf->vif;
+ struct mt792x_bss_conf *mconf = mt792x_link_conf_to_mconf(link_conf);
+ struct mt792x_vif *mvif = (struct mt792x_vif *)link_conf->vif->drv_priv;
+ struct bss_mld_tlv *mld;
+@@ -2483,7 +2529,7 @@ mt7925_mcu_bss_mld_tlv(struct sk_buff *skb,
+ mld->eml_enable = !!(link_conf->vif->cfg.eml_cap &
+ IEEE80211_EML_CAP_EMLSR_SUPP);
+
+- memcpy(mld->mac_addr, link_conf->addr, ETH_ALEN);
++ memcpy(mld->mac_addr, vif->addr, ETH_ALEN);
+ }
+
+ static void
+@@ -2614,7 +2660,7 @@ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ MCU_UNI_CMD(BSS_INFO_UPDATE), true);
+ }
+
+-int mt7925_mcu_set_dbdc(struct mt76_phy *phy)
++int mt7925_mcu_set_dbdc(struct mt76_phy *phy, bool enable)
+ {
+ struct mt76_dev *mdev = phy->dev;
+
+@@ -2634,7 +2680,7 @@ int mt7925_mcu_set_dbdc(struct mt76_phy *phy)
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_MBMC_SETTING, sizeof(*conf));
+ conf = (struct mbmc_conf_tlv *)tlv;
+
+- conf->mbmc_en = 1;
++ conf->mbmc_en = enable;
+ conf->band = 0; /* unused */
+
+ err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SET_DBDC_PARMS),
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+index ac53bdc993322f..fe6a613ba00889 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+@@ -616,7 +616,7 @@ mt7925_mcu_get_cipher(int cipher)
+ }
+ }
+
+-int mt7925_mcu_set_dbdc(struct mt76_phy *phy);
++int mt7925_mcu_set_dbdc(struct mt76_phy *phy, bool enable);
+ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *scan_req);
+ int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+@@ -643,4 +643,7 @@ int mt7925_mcu_set_chctx(struct mt76_phy *phy, struct mt76_vif *mvif,
+ int mt7925_mcu_set_rate_txpower(struct mt76_phy *phy);
+ int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
+ struct ieee80211_bss_conf *link_conf);
++int
++mt7925_mcu_uni_bss_bcnft(struct mt792x_dev *dev,
++ struct ieee80211_bss_conf *link_conf, bool enable);
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+index f5c02e5f506633..df3c705d1cb3fa 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+@@ -242,9 +242,11 @@ int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ bool enable);
+ int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+ void mt7925_scan_work(struct work_struct *work);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x.h b/drivers/net/wireless/mediatek/mt76/mt792x.h
+index ab12616ec2b87c..2b8b9b2977f74a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x.h
++++ b/drivers/net/wireless/mediatek/mt76/mt792x.h
+@@ -241,6 +241,7 @@ static inline struct mt792x_bss_conf *
+ mt792x_vif_to_link(struct mt792x_vif *mvif, u8 link_id)
+ {
+ struct ieee80211_vif *vif;
++ struct mt792x_bss_conf *bss_conf;
+
+ vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv);
+
+@@ -248,8 +249,10 @@ mt792x_vif_to_link(struct mt792x_vif *mvif, u8 link_id)
+ link_id >= IEEE80211_LINK_UNSPECIFIED)
+ return &mvif->bss_conf;
+
+- return rcu_dereference_protected(mvif->link_conf[link_id],
+- lockdep_is_held(&mvif->phy->dev->mt76.mutex));
++ bss_conf = rcu_dereference_protected(mvif->link_conf[link_id],
++ lockdep_is_held(&mvif->phy->dev->mt76.mutex));
++
++ return bss_conf ? bss_conf : &mvif->bss_conf;
+ }
+
+ static inline struct mt792x_link_sta *
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_core.c b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+index 78fe37c2e07b59..b87eed4d168df5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_core.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+@@ -147,7 +147,8 @@ void mt792x_mac_link_bss_remove(struct mt792x_dev *dev,
+ link_conf = mt792x_vif_to_bss_conf(vif, mconf->link_id);
+
+ mt76_connac_free_pending_tx_skbs(&dev->pm, &mlink->wcid);
+- mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mlink->wcid, false);
++ mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mconf->mt76,
++ &mlink->wcid, false);
+
+ rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_mac.c b/drivers/net/wireless/mediatek/mt76/mt792x_mac.c
+index 106273935b267f..05978d9c7b916a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_mac.c
+@@ -153,7 +153,7 @@ struct mt76_wcid *mt792x_rx_get_wcid(struct mt792x_dev *dev, u16 idx,
+ return NULL;
+
+ link = container_of(wcid, struct mt792x_link_sta, wcid);
+- sta = container_of(link, struct mt792x_sta, deflink);
++ sta = link->sta;
+ if (!sta->vif)
+ return NULL;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 5e96973226bbb5..d8a013812d1e37 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -16,9 +16,6 @@
+
+ static const struct ieee80211_iface_limit if_limits[] = {
+ {
+- .max = 1,
+- .types = BIT(NL80211_IFTYPE_ADHOC)
+- }, {
+ .max = 16,
+ .types = BIT(NL80211_IFTYPE_AP)
+ #ifdef CONFIG_MAC80211_MESH
+@@ -85,7 +82,7 @@ static ssize_t mt7996_thermal_temp_store(struct device *dev,
+ return ret;
+
+ mutex_lock(&phy->dev->mt76.mutex);
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 40, 130);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 40 * 1000, 130 * 1000), 1000);
+
+ /* add a safety margin ~10 */
+ if ((i - 1 == MT7996_CRIT_TEMP_IDX &&
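
Swapping the order of clamp_val() and DIV_ROUND_CLOSEST() above is not cosmetic: DIV_ROUND_CLOSEST(x, d) adds d/2 before dividing, so an unclamped out-of-range sysfs value can wrap that addition. Clamping in millidegrees first keeps the rounding safe. A hedged sketch with simplified macros (the kernel's clamp_val() also casts to the value's type, which this ignores):

#include <limits.h>
#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))
#define CLAMP(v, lo, hi)	((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

int main(void)
{
	unsigned int val = UINT_MAX;	/* hypothetical bogus sysfs input */

	/* old order: UINT_MAX + 500 wraps to 499, divides to 0, clamps to 40 */
	printf("round-then-clamp: %u\n",
	       CLAMP(DIV_ROUND_CLOSEST(val, 1000u), 40u, 130u));

	/* fixed order: clamp to [40000, 130000] first, then round to 130 */
	printf("clamp-then-round: %u\n",
	       DIV_ROUND_CLOSEST(CLAMP(val, 40u * 1000u, 130u * 1000u), 1000u));
	return 0;
}
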
+@@ -1080,6 +1077,9 @@ mt7996_init_he_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ he_cap_elem->phy_cap_info[2] = IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ |
+ IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ;
+
++ he_cap_elem->phy_cap_info[7] =
++ IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
++
+ switch (iftype) {
+ case NL80211_IFTYPE_AP:
+ he_cap_elem->mac_cap_info[0] |= IEEE80211_HE_MAC_CAP0_TWT_RES;
+@@ -1119,8 +1119,7 @@ mt7996_init_he_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
+ he_cap_elem->phy_cap_info[7] |=
+- IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP |
+- IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
++ IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP;
+ he_cap_elem->phy_cap_info[8] |=
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G |
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU |
+@@ -1190,7 +1189,9 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+
+ eht_cap_elem->mac_cap_info[0] =
+ IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+- IEEE80211_EHT_MAC_CAP0_OM_CONTROL;
++ IEEE80211_EHT_MAC_CAP0_OM_CONTROL |
++ u8_encode_bits(IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_11454,
++ IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK);
+
+ eht_cap_elem->phy_cap_info[0] =
+ IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+@@ -1233,21 +1234,20 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK;
+
+ eht_cap_elem->phy_cap_info[4] =
++ IEEE80211_EHT_PHY_CAP4_EHT_MU_PPDU_4_EHT_LTF_08_GI |
+ u8_encode_bits(min_t(int, sts - 1, 2),
+ IEEE80211_EHT_PHY_CAP4_MAX_NC_MASK);
+
+ eht_cap_elem->phy_cap_info[5] =
+ u8_encode_bits(IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_16US,
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK) |
+- u8_encode_bits(u8_get_bits(0x11, GENMASK(1, 0)),
++ u8_encode_bits(u8_get_bits(1, GENMASK(1, 0)),
+ IEEE80211_EHT_PHY_CAP5_MAX_NUM_SUPP_EHT_LTF_MASK);
+
+ val = width == NL80211_CHAN_WIDTH_320 ? 0xf :
+ width == NL80211_CHAN_WIDTH_160 ? 0x7 :
+ width == NL80211_CHAN_WIDTH_80 ? 0x3 : 0x1;
+ eht_cap_elem->phy_cap_info[6] =
+- u8_encode_bits(u8_get_bits(0x11, GENMASK(4, 2)),
+- IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK) |
+ u8_encode_bits(val, IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK);
+
+ val = u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_RX) |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 0d21414e2c884a..f590902fdeea37 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -819,6 +819,7 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ struct ieee80211_key_conf *key, int pid,
+ enum mt76_txq_id qid, u32 changed)
+ {
++ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = info->control.vif;
+ u8 band_idx = (info->hw_queue & MT_TX_HW_QUEUE_PHY) >> 2;
+@@ -886,8 +887,9 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ val = MT_TXD6_DIS_MAT | MT_TXD6_DAS;
+ if (is_mt7996(&dev->mt76))
+ val |= FIELD_PREP(MT_TXD6_MSDU_CNT, 1);
+- else
++ else if (is_8023 || !ieee80211_is_mgmt(hdr->frame_control))
+ val |= FIELD_PREP(MT_TXD6_MSDU_CNT_V2, 1);
++
+ txwi[6] = cpu_to_le32(val);
+ txwi[7] = 0;
+
+@@ -897,7 +899,6 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ mt7996_mac_write_txwi_80211(dev, txwi, skb, key);
+
+ if (txwi[1] & cpu_to_le32(MT_TXD1_FIXED_RATE)) {
+- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ bool mcast = ieee80211_is_data(hdr->frame_control) &&
+ is_multicast_ether_addr(hdr->addr1);
+ u8 idx = MT7996_BASIC_RATES_TBL;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 39f071ece35e6e..4d11083b86c092 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -496,8 +496,7 @@ static void mt7996_configure_filter(struct ieee80211_hw *hw,
+
+ MT76_FILTER(CONTROL, MT_WF_RFCR_DROP_CTS |
+ MT_WF_RFCR_DROP_RTS |
+- MT_WF_RFCR_DROP_CTL_RSV |
+- MT_WF_RFCR_DROP_NDPA);
++ MT_WF_RFCR_DROP_CTL_RSV);
+
+ *total_flags = flags;
+ mt76_wr(dev, MT_WF_RFCR(phy->mt76->band_idx), phy->rxfilter);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 6c445a9dbc03d8..265958f7b78711 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -2070,7 +2070,7 @@ mt7996_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7996_dev *dev,
+ cap |= STA_CAP_VHT_TX_STBC;
+ if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_1)
+ cap |= STA_CAP_VHT_RX_STBC;
+- if (vif->bss_conf.vht_ldpc &&
++ if ((vif->type != NL80211_IFTYPE_AP || vif->bss_conf.vht_ldpc) &&
+ (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC))
+ cap |= STA_CAP_VHT_LDPC;
+
+@@ -3666,6 +3666,13 @@ int mt7996_mcu_get_chip_config(struct mt7996_dev *dev, u32 *cap)
+
+ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ {
++ enum {
++ IDX_TX_TIME,
++ IDX_RX_TIME,
++ IDX_OBSS_AIRTIME,
++ IDX_NON_WIFI_TIME,
++ IDX_NUM
++ };
+ struct {
+ struct {
+ u8 band;
+@@ -3675,16 +3682,15 @@ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ __le16 tag;
+ __le16 len;
+ __le32 offs;
+- } data[4];
++ } data[IDX_NUM];
+ } __packed req = {
+ .hdr.band = phy->mt76->band_idx,
+ };
+- /* strict order */
+ static const u32 offs[] = {
+- UNI_MIB_TX_TIME,
+- UNI_MIB_RX_TIME,
+- UNI_MIB_OBSS_AIRTIME,
+- UNI_MIB_NON_WIFI_TIME,
++ [IDX_TX_TIME] = UNI_MIB_TX_TIME,
++ [IDX_RX_TIME] = UNI_MIB_RX_TIME,
++ [IDX_OBSS_AIRTIME] = UNI_MIB_OBSS_AIRTIME,
++ [IDX_NON_WIFI_TIME] = UNI_MIB_NON_WIFI_TIME,
+ };
+ struct mt76_channel_state *state = phy->mt76->chan_state;
+ struct mt76_channel_state *state_ts = &phy->state_ts;
+@@ -3693,7 +3699,7 @@ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ struct sk_buff *skb;
+ int i, ret;
+
+- for (i = 0; i < 4; i++) {
++ for (i = 0; i < IDX_NUM; i++) {
+ req.data[i].tag = cpu_to_le16(UNI_CMD_MIB_DATA);
+ req.data[i].len = cpu_to_le16(sizeof(req.data[i]));
+ req.data[i].offs = cpu_to_le32(offs[i]);
+@@ -3712,17 +3718,24 @@ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ goto out;
+
+ #define __res_u64(s) le64_to_cpu(res[s].data)
+- state->cc_tx += __res_u64(1) - state_ts->cc_tx;
+- state->cc_bss_rx += __res_u64(2) - state_ts->cc_bss_rx;
+- state->cc_rx += __res_u64(2) + __res_u64(3) - state_ts->cc_rx;
+- state->cc_busy += __res_u64(0) + __res_u64(1) + __res_u64(2) + __res_u64(3) -
++ state->cc_tx += __res_u64(IDX_TX_TIME) - state_ts->cc_tx;
++ state->cc_bss_rx += __res_u64(IDX_RX_TIME) - state_ts->cc_bss_rx;
++ state->cc_rx += __res_u64(IDX_RX_TIME) +
++ __res_u64(IDX_OBSS_AIRTIME) -
++ state_ts->cc_rx;
++ state->cc_busy += __res_u64(IDX_TX_TIME) +
++ __res_u64(IDX_RX_TIME) +
++ __res_u64(IDX_OBSS_AIRTIME) +
++ __res_u64(IDX_NON_WIFI_TIME) -
+ state_ts->cc_busy;
+-
+ out:
+- state_ts->cc_tx = __res_u64(1);
+- state_ts->cc_bss_rx = __res_u64(2);
+- state_ts->cc_rx = __res_u64(2) + __res_u64(3);
+- state_ts->cc_busy = __res_u64(0) + __res_u64(1) + __res_u64(2) + __res_u64(3);
++ state_ts->cc_tx = __res_u64(IDX_TX_TIME);
++ state_ts->cc_bss_rx = __res_u64(IDX_RX_TIME);
++ state_ts->cc_rx = __res_u64(IDX_RX_TIME) + __res_u64(IDX_OBSS_AIRTIME);
++ state_ts->cc_busy = __res_u64(IDX_TX_TIME) +
++ __res_u64(IDX_RX_TIME) +
++ __res_u64(IDX_OBSS_AIRTIME) +
++ __res_u64(IDX_NON_WIFI_TIME);
+ #undef __res_u64
+
+ dev_kfree_skb(skb);
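
Replacing the bare indices 0-3 and the "strict order" comment with a local enum plus designated initializers ties each request slot to its MIB offset by name, so reordering the enum can no longer silently misalign the table. A small sketch of the idiom (the offset values are hypothetical):

#include <stdio.h>

enum { IDX_TX_TIME, IDX_RX_TIME, IDX_OBSS_AIRTIME, IDX_NON_WIFI_TIME, IDX_NUM };

/* offsets bound to slots by designator, not by position in the list */
static const unsigned int offs[] = {
	[IDX_TX_TIME]       = 0x100,
	[IDX_RX_TIME]       = 0x104,
	[IDX_OBSS_AIRTIME]  = 0x108,
	[IDX_NON_WIFI_TIME] = 0x10c,
};

int main(void)
{
	for (int i = 0; i < IDX_NUM; i++)
		printf("slot %d -> offset %#x\n", i, offs[i]);
	return 0;
}
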
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+index 40e45fb2b62607..442f72450352b0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+@@ -177,7 +177,7 @@ static u32 __mt7996_reg_addr(struct mt7996_dev *dev, u32 addr)
+ continue;
+
+ ofs = addr - dev->reg.map[i].phys;
+- if (ofs > dev->reg.map[i].size)
++ if (ofs >= dev->reg.map[i].size)
+ continue;
+
+ return dev->reg.map[i].mapped + ofs;
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index 58ff068233894e..f9e67b8c3b3c89 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -33,9 +33,9 @@ int __mt76u_vendor_request(struct mt76_dev *dev, u8 req, u8 req_type,
+
+ ret = usb_control_msg(udev, pipe, req, req_type, val,
+ offset, buf, len, MT_VEND_REQ_TOUT_MS);
+- if (ret == -ENODEV)
++ if (ret == -ENODEV || ret == -EPROTO)
+ set_bit(MT76_REMOVED, &dev->phy.state);
+- if (ret >= 0 || ret == -ENODEV)
++ if (ret >= 0 || ret == -ENODEV || ret == -EPROTO)
+ return ret;
+ usleep_range(5000, 10000);
+ }
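
Treating -EPROTO like -ENODEV changes which errors escape the vendor-request retry loop: a surprise-removed USB device tends to fail with protocol errors, and retrying those only burns 5-10 ms per pass before failing anyway. A hedged sketch of the retry shape (stubbed transfer; the errno policy is the point, not the exact return values):

#include <errno.h>
#include <stdio.h>

/* stand-in for usb_control_msg(); a yanked device keeps failing this way */
static int fake_control_msg(void)
{
	return -EPROTO;
}

static int vendor_request(int eproto_is_fatal)
{
	int ret = -EIO;

	for (int i = 0; i < 3; i++) {
		ret = fake_control_msg();
		if (ret >= 0 || ret == -ENODEV ||
		    (eproto_is_fatal && ret == -EPROTO))
			return ret;	/* success or fatal: stop retrying */
		/* transient failure: the driver sleeps 5-10 ms and retries */
	}
	return ret;	/* last error after exhausting retries */
}

int main(void)
{
	printf("-EPROTO transient: %d after 3 tries\n", vendor_request(0));
	printf("-EPROTO fatal:     %d immediately\n", vendor_request(1));
	return 0;
}
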
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index aab4605de9c47c..ff61867d142fa4 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -575,9 +575,15 @@ static void rtl_free_entries_from_ack_queue(struct ieee80211_hw *hw,
+
+ void rtl_deinit_core(struct ieee80211_hw *hw)
+ {
++ struct rtl_priv *rtlpriv = rtl_priv(hw);
++
+ rtl_c2hcmd_launcher(hw, 0);
+ rtl_free_entries_from_scan_list(hw);
+ rtl_free_entries_from_ack_queue(hw, false);
++ if (rtlpriv->works.rtl_wq) {
++ destroy_workqueue(rtlpriv->works.rtl_wq);
++ rtlpriv->works.rtl_wq = NULL;
++ }
+ }
+ EXPORT_SYMBOL_GPL(rtl_deinit_core);
+
+@@ -2696,9 +2702,6 @@ MODULE_AUTHOR("Larry Finger <Larry.FInger@lwfinger.net>");
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Realtek 802.11n PCI wireless core");
+
+-struct rtl_global_var rtl_global_var = {};
+-EXPORT_SYMBOL_GPL(rtl_global_var);
+-
+ static int __init rtl_core_module_init(void)
+ {
+ BUILD_BUG_ON(TX_PWR_BY_RATE_NUM_RATE < TX_PWR_BY_RATE_NUM_SECTION);
+@@ -2712,10 +2715,6 @@ static int __init rtl_core_module_init(void)
+ /* add debugfs */
+ rtl_debugfs_add_topdir();
+
+- /* init some global vars */
+- INIT_LIST_HEAD(&rtl_global_var.glb_priv_list);
+- spin_lock_init(&rtl_global_var.glb_list_lock);
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.h b/drivers/net/wireless/realtek/rtlwifi/base.h
+index f081a9a90563f5..f3a6a43a42eca8 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.h
++++ b/drivers/net/wireless/realtek/rtlwifi/base.h
+@@ -124,7 +124,6 @@ int rtl_send_smps_action(struct ieee80211_hw *hw,
+ u8 *rtl_find_ie(u8 *data, unsigned int len, u8 ie);
+ void rtl_recognize_peer(struct ieee80211_hw *hw, u8 *data, unsigned int len);
+ u8 rtl_tid_to_ac(u8 tid);
+-extern struct rtl_global_var rtl_global_var;
+ void rtl_phy_scan_operation_backup(struct ieee80211_hw *hw, u8 operation);
+
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 11709b6c83f1aa..0eafc4d125f91d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -295,46 +295,6 @@ static bool rtl_pci_get_amd_l1_patch(struct ieee80211_hw *hw)
+ return status;
+ }
+
+-static bool rtl_pci_check_buddy_priv(struct ieee80211_hw *hw,
+- struct rtl_priv **buddy_priv)
+-{
+- struct rtl_priv *rtlpriv = rtl_priv(hw);
+- struct rtl_pci_priv *pcipriv = rtl_pcipriv(hw);
+- struct rtl_priv *tpriv = NULL, *iter;
+- struct rtl_pci_priv *tpcipriv = NULL;
+-
+- if (!list_empty(&rtlpriv->glb_var->glb_priv_list)) {
+- list_for_each_entry(iter, &rtlpriv->glb_var->glb_priv_list,
+- list) {
+- tpcipriv = (struct rtl_pci_priv *)iter->priv;
+- rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+- "pcipriv->ndis_adapter.funcnumber %x\n",
+- pcipriv->ndis_adapter.funcnumber);
+- rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+- "tpcipriv->ndis_adapter.funcnumber %x\n",
+- tpcipriv->ndis_adapter.funcnumber);
+-
+- if (pcipriv->ndis_adapter.busnumber ==
+- tpcipriv->ndis_adapter.busnumber &&
+- pcipriv->ndis_adapter.devnumber ==
+- tpcipriv->ndis_adapter.devnumber &&
+- pcipriv->ndis_adapter.funcnumber !=
+- tpcipriv->ndis_adapter.funcnumber) {
+- tpriv = iter;
+- break;
+- }
+- }
+- }
+-
+- rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+- "find_buddy_priv %d\n", tpriv != NULL);
+-
+- if (tpriv)
+- *buddy_priv = tpriv;
+-
+- return tpriv != NULL;
+-}
+-
+ static void rtl_pci_parse_configuration(struct pci_dev *pdev,
+ struct ieee80211_hw *hw)
+ {
+@@ -1696,8 +1656,6 @@ static void rtl_pci_deinit(struct ieee80211_hw *hw)
+ synchronize_irq(rtlpci->pdev->irq);
+ tasklet_kill(&rtlpriv->works.irq_tasklet);
+ cancel_work_sync(&rtlpriv->works.lps_change_work);
+-
+- destroy_workqueue(rtlpriv->works.rtl_wq);
+ }
+
+ static int rtl_pci_init(struct ieee80211_hw *hw, struct pci_dev *pdev)
+@@ -2011,7 +1969,6 @@ static bool _rtl_pci_find_adapter(struct pci_dev *pdev,
+ pcipriv->ndis_adapter.amd_l1_patch);
+
+ rtl_pci_parse_configuration(pdev, hw);
+- list_add_tail(&rtlpriv->list, &rtlpriv->glb_var->glb_priv_list);
+
+ return true;
+ }
+@@ -2158,7 +2115,6 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ rtlpriv->rtlhal.interface = INTF_PCI;
+ rtlpriv->cfg = (struct rtl_hal_cfg *)(id->driver_data);
+ rtlpriv->intf_ops = &rtl_pci_ops;
+- rtlpriv->glb_var = &rtl_global_var;
+ rtl_efuse_ops_init(hw);
+
+ /* MEM map */
+@@ -2209,7 +2165,7 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ if (rtlpriv->cfg->ops->init_sw_vars(hw)) {
+ pr_err("Can't init_sw_vars\n");
+ err = -ENODEV;
+- goto fail3;
++ goto fail2;
+ }
+ rtl_init_sw_leds(hw);
+
+@@ -2227,14 +2183,14 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ err = rtl_pci_init(hw, pdev);
+ if (err) {
+ pr_err("Failed to init PCI\n");
+- goto fail3;
++ goto fail4;
+ }
+
+ err = ieee80211_register_hw(hw);
+ if (err) {
+ pr_err("Can't register mac80211 hw.\n");
+ err = -ENODEV;
+- goto fail3;
++ goto fail5;
+ }
+ rtlpriv->mac80211.mac80211_registered = 1;
+
+@@ -2257,16 +2213,19 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ set_bit(RTL_STATUS_INTERFACE_START, &rtlpriv->status);
+ return 0;
+
+-fail3:
+- pci_set_drvdata(pdev, NULL);
++fail5:
++ rtl_pci_deinit(hw);
++fail4:
+ rtl_deinit_core(hw);
++fail3:
++ wait_for_completion(&rtlpriv->firmware_loading_complete);
++ rtlpriv->cfg->ops->deinit_sw_vars(hw);
+
+ fail2:
+ if (rtlpriv->io.pci_mem_start != 0)
+ pci_iounmap(pdev, (void __iomem *)rtlpriv->io.pci_mem_start);
+
+ pci_release_regions(pdev);
+- complete(&rtlpriv->firmware_loading_complete);
+
+ fail1:
+ if (hw)
+@@ -2317,7 +2276,6 @@ void rtl_pci_disconnect(struct pci_dev *pdev)
+ if (rtlpci->using_msi)
+ pci_disable_msi(rtlpci->pdev);
+
+- list_del(&rtlpriv->list);
+ if (rtlpriv->io.pci_mem_start != 0) {
+ pci_iounmap(pdev, (void __iomem *)rtlpriv->io.pci_mem_start);
+ pci_release_regions(pdev);
+@@ -2376,7 +2334,6 @@ EXPORT_SYMBOL(rtl_pci_resume);
+ const struct rtl_intf_ops rtl_pci_ops = {
+ .adapter_start = rtl_pci_start,
+ .adapter_stop = rtl_pci_stop,
+- .check_buddy_priv = rtl_pci_check_buddy_priv,
+ .adapter_tx = rtl_pci_tx,
+ .flush = rtl_pci_flush,
+ .reset_trx_ring = rtl_pci_reset_trx_ring,
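
The relabeled failure path in rtl_pci_probe() restores the conventional unwind ladder: each successful step gains a matching label, and a failure at step N jumps to the label that undoes steps N-1 down to 1 in reverse order. A schematic of the idiom with stub init/deinit pairs (not the driver's actual steps):

#include <stdio.h>

static int init_a(void) { puts("init a"); return 0; }
static int init_b(void) { puts("init b"); return 0; }
static int init_c(void) { puts("init c"); return -1; }	/* hypothetical failure */
static void deinit_a(void) { puts("deinit a"); }
static void deinit_b(void) { puts("deinit b"); }

static int probe(void)
{
	int err;

	err = init_a();
	if (err)
		goto fail1;
	err = init_b();
	if (err)
		goto fail2;
	err = init_c();
	if (err)
		goto fail3;
	return 0;

fail3:
	deinit_b();	/* undo init_b */
fail2:
	deinit_a();	/* undo init_a */
fail1:
	return err;
}

int main(void)
{
	printf("probe() = %d\n", probe());
	return 0;
}
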
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c
+index bbf8ff63dcedb4..e63c67b1861b5f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c
+@@ -64,22 +64,23 @@ static void rtl92se_fw_cb(const struct firmware *firmware, void *context)
+
+ rtl_dbg(rtlpriv, COMP_ERR, DBG_LOUD,
+ "Firmware callback routine entered!\n");
+- complete(&rtlpriv->firmware_loading_complete);
+ if (!firmware) {
+ pr_err("Firmware %s not available\n", fw_name);
+ rtlpriv->max_fw_size = 0;
+- return;
++ goto exit;
+ }
+ if (firmware->size > rtlpriv->max_fw_size) {
+ pr_err("Firmware is too big!\n");
+ rtlpriv->max_fw_size = 0;
+ release_firmware(firmware);
+- return;
++ goto exit;
+ }
+ pfirmware = (struct rt_firmware *)rtlpriv->rtlhal.pfirmware;
+ memcpy(pfirmware->sz_fw_tmpbuffer, firmware->data, firmware->size);
+ pfirmware->sz_fw_tmpbufferlen = firmware->size;
+ release_firmware(firmware);
++exit:
++ complete(&rtlpriv->firmware_loading_complete);
+ }
+
+ static int rtl92s_init_sw_vars(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+index 1be51ea3f3c820..9eddbada8af12c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+@@ -2033,8 +2033,10 @@ static bool _rtl8821ae_phy_config_bb_with_pgheaderfile(struct ieee80211_hw *hw,
+ if (!_rtl8821ae_check_condition(hw, v1)) {
+ i += 2; /* skip the pair of expression*/
+ v2 = array[i+1];
+- while (v2 != 0xDEAD)
++ while (v2 != 0xDEAD) {
+ i += 3;
++ v2 = array[i + 1];
++ }
+ }
+ }
+ }
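
The added braces and the re-read of v2 inside the loop are what make the skip terminate: previously the condition kept testing the same stale value, spinning forever whenever the first skipped entry was not the 0xDEAD sentinel. A compact reproduction of the pattern (hypothetical table contents):

#include <stdio.h>

int main(void)
{
	/* a condition pair, then 3-word entries, terminated by 0xDEAD */
	const unsigned int array[] = { 0x1, 0x2,
				       0x10, 0x3, 0xAAAA,
				       0x14, 0x3, 0xBBBB,
				       0x0, 0xDEAD };
	unsigned int i = 0, v2;

	i += 2;				/* skip the condition pair */
	v2 = array[i + 1];
	while (v2 != 0xDEAD) {
		i += 3;			/* skip one entry */
		v2 = array[i + 1];	/* the fix: re-read the sentinel */
	}
	printf("stopped at i=%u\n", i);
	return 0;
}
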
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index d37a017b2b814f..f5718e570011e6 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -629,11 +629,6 @@ static void _rtl_usb_cleanup_rx(struct ieee80211_hw *hw)
+ tasklet_kill(&rtlusb->rx_work_tasklet);
+ cancel_work_sync(&rtlpriv->works.lps_change_work);
+
+- if (rtlpriv->works.rtl_wq) {
+- destroy_workqueue(rtlpriv->works.rtl_wq);
+- rtlpriv->works.rtl_wq = NULL;
+- }
+-
+ skb_queue_purge(&rtlusb->rx_queue);
+
+ while ((urb = usb_get_from_anchor(&rtlusb->rx_cleanup_urbs))) {
+@@ -1028,19 +1023,22 @@ int rtl_usb_probe(struct usb_interface *intf,
+ err = ieee80211_register_hw(hw);
+ if (err) {
+ pr_err("Can't register mac80211 hw.\n");
+- goto error_out;
++ goto error_init_vars;
+ }
+ rtlpriv->mac80211.mac80211_registered = 1;
+
+ set_bit(RTL_STATUS_INTERFACE_START, &rtlpriv->status);
+ return 0;
+
++error_init_vars:
++ wait_for_completion(&rtlpriv->firmware_loading_complete);
++ rtlpriv->cfg->ops->deinit_sw_vars(hw);
+ error_out:
++ rtl_usb_deinit(hw);
+ rtl_deinit_core(hw);
+ error_out2:
+ _rtl_usb_io_handler_release(hw);
+ usb_put_dev(udev);
+- complete(&rtlpriv->firmware_loading_complete);
+ kfree(rtlpriv->usb_data);
+ ieee80211_free_hw(hw);
+ return -ENODEV;
+diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+index ae6e351bc83c91..f1830ddcdd8c19 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+@@ -2270,8 +2270,6 @@ struct rtl_intf_ops {
+ /*com */
+ int (*adapter_start)(struct ieee80211_hw *hw);
+ void (*adapter_stop)(struct ieee80211_hw *hw);
+- bool (*check_buddy_priv)(struct ieee80211_hw *hw,
+- struct rtl_priv **buddy_priv);
+
+ int (*adapter_tx)(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta,
+@@ -2514,14 +2512,6 @@ struct dig_t {
+ u32 rssi_max;
+ };
+
+-struct rtl_global_var {
+- /* from this list we can get
+- * other adapter's rtl_priv
+- */
+- struct list_head glb_priv_list;
+- spinlock_t glb_list_lock;
+-};
+-
+ #define IN_4WAY_TIMEOUT_TIME (30 * MSEC_PER_SEC) /* 30 seconds */
+
+ struct rtl_btc_info {
+@@ -2667,9 +2657,7 @@ struct rtl_scan_list {
+ struct rtl_priv {
+ struct ieee80211_hw *hw;
+ struct completion firmware_loading_complete;
+- struct list_head list;
+ struct rtl_priv *buddy_priv;
+- struct rtl_global_var *glb_var;
+ struct rtl_dmsp_ctl dmsp_ctl;
+ struct rtl_locks locks;
+ struct rtl_works works;
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
+index ba6332da8019c1..4df4e04c3e67d7 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.c
++++ b/drivers/net/wireless/realtek/rtw89/chan.c
+@@ -10,6 +10,10 @@
+ #include "ps.h"
+ #include "util.h"
+
++static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
++ enum rtw89_chanctx_idx idx1,
++ enum rtw89_chanctx_idx idx2);
++
+ static enum rtw89_subband rtw89_get_subband_type(enum rtw89_band band,
+ u8 center_chan)
+ {
+@@ -226,11 +230,15 @@ static void rtw89_config_default_chandef(struct rtw89_dev *rtwdev)
+ void rtw89_entity_init(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
+
+ hal->entity_pause = false;
+ bitmap_zero(hal->entity_map, NUM_OF_RTW89_CHANCTX);
+ bitmap_zero(hal->changes, NUM_OF_RTW89_CHANCTX_CHANGES);
+ atomic_set(&hal->roc_chanctx_idx, RTW89_CHANCTX_IDLE);
++
++ INIT_LIST_HEAD(&mgnt->active_list);
++
+ rtw89_config_default_chandef(rtwdev);
+ }
+
+@@ -272,6 +280,143 @@ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev,
+ }
+ }
+
++static void rtw89_normalize_link_chanctx(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++ struct rtw89_vif_link *cur;
++
++ if (unlikely(!rtwvif_link->chanctx_assigned))
++ return;
++
++ cur = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (!cur || !cur->chanctx_assigned)
++ return;
++
++ if (cur == rtwvif_link)
++ return;
++
++ rtw89_swap_chanctx(rtwdev, rtwvif_link->chanctx_idx, cur->chanctx_idx);
++}
++
++const struct rtw89_chan *__rtw89_mgnt_chan_get(struct rtw89_dev *rtwdev,
++ const char *caller_message,
++ u8 link_index)
++{
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
++ enum rtw89_chanctx_idx chanctx_idx;
++ enum rtw89_chanctx_idx roc_idx;
++ enum rtw89_entity_mode mode;
++ u8 role_index;
++
++ lockdep_assert_held(&rtwdev->mutex);
++
++ if (unlikely(link_index >= __RTW89_MLD_MAX_LINK_NUM)) {
++ WARN(1, "link index %u is invalid (max link inst num: %d)\n",
++ link_index, __RTW89_MLD_MAX_LINK_NUM);
++ goto dflt;
++ }
++
++ mode = rtw89_get_entity_mode(rtwdev);
++ switch (mode) {
++ case RTW89_ENTITY_MODE_SCC_OR_SMLD:
++ case RTW89_ENTITY_MODE_MCC:
++ role_index = 0;
++ break;
++ case RTW89_ENTITY_MODE_MCC_PREPARE:
++ role_index = 1;
++ break;
++ default:
++ WARN(1, "Invalid ent mode: %d\n", mode);
++ goto dflt;
++ }
++
++ chanctx_idx = mgnt->chanctx_tbl[role_index][link_index];
++ if (chanctx_idx == RTW89_CHANCTX_IDLE)
++ goto dflt;
++
++ roc_idx = atomic_read(&hal->roc_chanctx_idx);
++ if (roc_idx != RTW89_CHANCTX_IDLE) {
++ /* ROC is ongoing (given ROC runs on RTW89_ROC_BY_LINK_INDEX).
++ * If @link_index is the same as RTW89_ROC_BY_LINK_INDEX, get
++ * the ongoing ROC chanctx.
++ */
++ if (link_index == RTW89_ROC_BY_LINK_INDEX)
++ chanctx_idx = roc_idx;
++ }
++
++ return rtw89_chan_get(rtwdev, chanctx_idx);
++
++dflt:
++ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
++ "%s (%s): prefetch NULL on link index %u\n",
++ __func__, caller_message ?: "", link_index);
++
++ return rtw89_chan_get(rtwdev, RTW89_CHANCTX_0);
++}
++EXPORT_SYMBOL(__rtw89_mgnt_chan_get);
++
++static void rtw89_entity_recalc_mgnt_roles(struct rtw89_dev *rtwdev)
++{
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
++ struct rtw89_vif_link *link;
++ struct rtw89_vif *role;
++ u8 pos = 0;
++ int i, j;
++
++ lockdep_assert_held(&rtwdev->mutex);
++
++ for (i = 0; i < RTW89_MAX_INTERFACE_NUM; i++)
++ mgnt->active_roles[i] = NULL;
++
++ for (i = 0; i < RTW89_MAX_INTERFACE_NUM; i++) {
++ for (j = 0; j < __RTW89_MLD_MAX_LINK_NUM; j++)
++ mgnt->chanctx_tbl[i][j] = RTW89_CHANCTX_IDLE;
++ }
++
++	/* To be consistent with legacy behavior, expect the first active role
++	 * that uses RTW89_CHANCTX_0 to be put at position 0, and make its
++	 * first link instance take RTW89_CHANCTX_0. (normalizing)
++ */
++ list_for_each_entry(role, &mgnt->active_list, mgnt_entry) {
++ for (i = 0; i < role->links_inst_valid_num; i++) {
++ link = rtw89_vif_get_link_inst(role, i);
++ if (!link || !link->chanctx_assigned)
++ continue;
++
++ if (link->chanctx_idx == RTW89_CHANCTX_0) {
++ rtw89_normalize_link_chanctx(rtwdev, link);
++
++ list_del(&role->mgnt_entry);
++ list_add(&role->mgnt_entry, &mgnt->active_list);
++ goto fill;
++ }
++ }
++ }
++
++fill:
++ list_for_each_entry(role, &mgnt->active_list, mgnt_entry) {
++ if (unlikely(pos >= RTW89_MAX_INTERFACE_NUM)) {
++ rtw89_warn(rtwdev,
++ "%s: active roles are over max iface num\n",
++ __func__);
++ break;
++ }
++
++ for (i = 0; i < role->links_inst_valid_num; i++) {
++ link = rtw89_vif_get_link_inst(role, i);
++ if (!link || !link->chanctx_assigned)
++ continue;
++
++ mgnt->chanctx_tbl[pos][i] = link->chanctx_idx;
++ }
++
++ mgnt->active_roles[pos++] = role;
++ }
++}
++
+ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
+ {
+ DECLARE_BITMAP(recalc_map, NUM_OF_RTW89_CHANCTX) = {};
+@@ -298,9 +443,14 @@ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
+ set_bit(RTW89_CHANCTX_0, recalc_map);
+ fallthrough;
+ case 1:
+- mode = RTW89_ENTITY_MODE_SCC;
++ mode = RTW89_ENTITY_MODE_SCC_OR_SMLD;
+ break;
+ case 2 ... NUM_OF_RTW89_CHANCTX:
++ if (w.active_roles == 1) {
++ mode = RTW89_ENTITY_MODE_SCC_OR_SMLD;
++ break;
++ }
++
+ if (w.active_roles != NUM_OF_RTW89_MCC_ROLES) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "unhandled ent: %d chanctxs %d roles\n",
+@@ -327,6 +477,8 @@ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
+ rtw89_assign_entity_chan(rtwdev, idx, &chan);
+ }
+
++ rtw89_entity_recalc_mgnt_roles(rtwdev);
++
+ if (hal->entity_pause)
+ return rtw89_get_entity_mode(rtwdev);
+
+@@ -650,7 +802,7 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+
+ mcc_role->limit.max_toa = max_toa_us / 1024;
+ mcc_role->limit.max_tob = max_tob_us / 1024;
+- mcc_role->limit.max_dur = max_dur_us / 1024;
++ mcc_role->limit.max_dur = mcc_role->limit.max_toa + mcc_role->limit.max_tob;
+ mcc_role->limit.enable = true;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -716,6 +868,7 @@ struct rtw89_mcc_fill_role_selector {
+ };
+
+ static_assert((u8)NUM_OF_RTW89_CHANCTX >= NUM_OF_RTW89_MCC_ROLES);
++static_assert(RTW89_MAX_INTERFACE_NUM >= NUM_OF_RTW89_MCC_ROLES);
+
+ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+@@ -745,14 +898,18 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+
+ static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
+ struct rtw89_mcc_fill_role_selector sel = {};
+ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
+ int ret;
++ int i;
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (!rtw89_vif_is_active_role(rtwvif))
+- continue;
++ for (i = 0; i < NUM_OF_RTW89_MCC_ROLES; i++) {
++ rtwvif = mgnt->active_roles[i];
++ if (!rtwvif)
++ break;
+
+ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
+ if (unlikely(!rtwvif_link)) {
+@@ -760,14 +917,7 @@ static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+ continue;
+ }
+
+- if (sel.bind_vif[rtwvif_link->chanctx_idx]) {
+- rtw89_warn(rtwdev,
+- "MCC skip extra vif <macid %d> on chanctx[%d]\n",
+- rtwvif_link->mac_id, rtwvif_link->chanctx_idx);
+- continue;
+- }
+-
+- sel.bind_vif[rtwvif_link->chanctx_idx] = rtwvif_link;
++ sel.bind_vif[i] = rtwvif_link;
+ }
+
+ ret = rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_fill_role_iterator, &sel);
+@@ -2381,7 +2531,25 @@ void rtw89_chanctx_pause(struct rtw89_dev *rtwdev,
+ hal->entity_pause = true;
+ }
+
+-void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
++static void rtw89_chanctx_proceed_cb(struct rtw89_dev *rtwdev,
++ const struct rtw89_chanctx_cb_parm *parm)
++{
++ int ret;
++
++ if (!parm || !parm->cb)
++ return;
++
++ ret = parm->cb(rtwdev, parm->data);
++ if (ret)
++ rtw89_warn(rtwdev, "%s (%s): cb failed: %d\n", __func__,
++ parm->caller ?: "unknown", ret);
++}
++
++/* Pass @cb_parm if there is a @cb_parm->cb which needs to be invoked right
++ * after calling rtw89_set_channel() and right before proceeding with the
++ * entity according to the mode.
++ */
++void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev,
++ const struct rtw89_chanctx_cb_parm *cb_parm)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_entity_mode mode;
+@@ -2389,14 +2557,18 @@ void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+- if (!hal->entity_pause)
++ if (unlikely(!hal->entity_pause)) {
++ rtw89_chanctx_proceed_cb(rtwdev, cb_parm);
+ return;
++ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "chanctx proceed\n");
+
+ hal->entity_pause = false;
+ rtw89_set_channel(rtwdev);
+
++ rtw89_chanctx_proceed_cb(rtwdev, cb_parm);
++
+ mode = rtw89_get_entity_mode(rtwdev);
+ switch (mode) {
+ case RTW89_ENTITY_MODE_MCC:
+@@ -2501,12 +2673,18 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
+ struct rtw89_entity_weight w = {};
+
+ rtwvif_link->chanctx_idx = cfg->idx;
+ rtwvif_link->chanctx_assigned = true;
+ cfg->ref_count++;
+
++ if (list_empty(&rtwvif->mgnt_entry))
++ list_add_tail(&rtwvif->mgnt_entry, &mgnt->active_list);
++
+ if (cfg->idx == RTW89_CHANCTX_0)
+ goto out;
+
+@@ -2526,6 +2704,7 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_chanctx_idx roll;
+ enum rtw89_entity_mode cur;
+@@ -2536,6 +2715,9 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ rtwvif_link->chanctx_assigned = false;
+ cfg->ref_count--;
+
++ if (!rtw89_vif_is_active_role(rtwvif))
++ list_del_init(&rtwvif->mgnt_entry);
++
+ if (cfg->ref_count != 0)
+ goto out;
+
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.h b/drivers/net/wireless/realtek/rtw89/chan.h
+index 4ed777ea506485..092a6f676894f5 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.h
++++ b/drivers/net/wireless/realtek/rtw89/chan.h
+@@ -38,23 +38,32 @@ enum rtw89_chanctx_pause_reasons {
+ RTW89_CHANCTX_PAUSE_REASON_ROC,
+ };
+
++struct rtw89_chanctx_cb_parm {
++ int (*cb)(struct rtw89_dev *rtwdev, void *data);
++ void *data;
++ const char *caller;
++};
++
+ struct rtw89_entity_weight {
+ unsigned int active_chanctxs;
+ unsigned int active_roles;
+ };
+
+-static inline bool rtw89_get_entity_state(struct rtw89_dev *rtwdev)
++static inline bool rtw89_get_entity_state(struct rtw89_dev *rtwdev,
++ enum rtw89_phy_idx phy_idx)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+
+- return READ_ONCE(hal->entity_active);
++ return READ_ONCE(hal->entity_active[phy_idx]);
+ }
+
+-static inline void rtw89_set_entity_state(struct rtw89_dev *rtwdev, bool active)
++static inline void rtw89_set_entity_state(struct rtw89_dev *rtwdev,
++ enum rtw89_phy_idx phy_idx,
++ bool active)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+
+- WRITE_ONCE(hal->entity_active, active);
++ WRITE_ONCE(hal->entity_active[phy_idx], active);
+ }
+
+ static inline
+@@ -97,7 +106,16 @@ void rtw89_queue_chanctx_change(struct rtw89_dev *rtwdev,
+ void rtw89_chanctx_track(struct rtw89_dev *rtwdev);
+ void rtw89_chanctx_pause(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_pause_reasons rsn);
+-void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev);
++void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev,
++ const struct rtw89_chanctx_cb_parm *cb_parm);
++
++const struct rtw89_chan *__rtw89_mgnt_chan_get(struct rtw89_dev *rtwdev,
++ const char *caller_message,
++ u8 link_index);
++
++#define rtw89_mgnt_chan_get(rtwdev, link_index) \
++ __rtw89_mgnt_chan_get(rtwdev, __func__, link_index)
++
+ int rtw89_chanctx_ops_add(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx);
+ void rtw89_chanctx_ops_remove(struct rtw89_dev *rtwdev,
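
The rtw89_mgnt_chan_get() wrapper above uses a common kernel idiom: a macro that forwards __func__ so the double-underscore helper can name its caller in debug output without every call site passing a string by hand. A minimal sketch of the pattern (names are illustrative):

#include <stdio.h>

static int __get_chan(const char *caller, int link_index)
{
	if (link_index < 0) {
		printf("%s: bad link index %d\n",
		       caller ? caller : "", link_index);
		return -1;
	}
	return 36 + link_index;	/* hypothetical channel number */
}

#define get_chan(link_index) __get_chan(__func__, link_index)

int main(void)
{
	printf("chan = %d\n", get_chan(0));
	printf("chan = %d\n", get_chan(-1));	/* logs "main" as the caller */
	return 0;
}
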
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 5b8e65f6de6a4e..f82a26be6fa82b 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -192,13 +192,13 @@ static const struct ieee80211_iface_combination rtw89_iface_combs[] = {
+ {
+ .limits = rtw89_iface_limits,
+ .n_limits = ARRAY_SIZE(rtw89_iface_limits),
+- .max_interfaces = 2,
++ .max_interfaces = RTW89_MAX_INTERFACE_NUM,
+ .num_different_channels = 1,
+ },
+ {
+ .limits = rtw89_iface_limits_mcc,
+ .n_limits = ARRAY_SIZE(rtw89_iface_limits_mcc),
+- .max_interfaces = 2,
++ .max_interfaces = RTW89_MAX_INTERFACE_NUM,
+ .num_different_channels = 2,
+ },
+ };
+@@ -341,83 +341,47 @@ void rtw89_get_channel_params(const struct cfg80211_chan_def *chandef,
+ rtw89_chan_create(chan, center_chan, channel->hw_value, band, bandwidth);
+ }
+
+-void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev)
++static void __rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev,
++ const struct rtw89_chan *chan,
++ enum rtw89_phy_idx phy_idx)
+ {
+- struct rtw89_hal *hal = &rtwdev->hal;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- const struct rtw89_chan *chan;
+- enum rtw89_chanctx_idx chanctx_idx;
+- enum rtw89_chanctx_idx roc_idx;
+- enum rtw89_phy_idx phy_idx;
+- enum rtw89_entity_mode mode;
+ bool entity_active;
+
+- entity_active = rtw89_get_entity_state(rtwdev);
++ entity_active = rtw89_get_entity_state(rtwdev, phy_idx);
+ if (!entity_active)
+ return;
+
+- mode = rtw89_get_entity_mode(rtwdev);
+- switch (mode) {
+- case RTW89_ENTITY_MODE_SCC:
+- case RTW89_ENTITY_MODE_MCC:
+- chanctx_idx = RTW89_CHANCTX_0;
+- break;
+- case RTW89_ENTITY_MODE_MCC_PREPARE:
+- chanctx_idx = RTW89_CHANCTX_1;
+- break;
+- default:
+- WARN(1, "Invalid ent mode: %d\n", mode);
+- return;
+- }
++ chip->ops->set_txpwr(rtwdev, chan, phy_idx);
++}
+
+- roc_idx = atomic_read(&hal->roc_chanctx_idx);
+- if (roc_idx != RTW89_CHANCTX_IDLE)
+- chanctx_idx = roc_idx;
++void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev)
++{
++ const struct rtw89_chan *chan;
+
+- phy_idx = RTW89_PHY_0;
+- chan = rtw89_chan_get(rtwdev, chanctx_idx);
+- chip->ops->set_txpwr(rtwdev, chan, phy_idx);
++ chan = rtw89_mgnt_chan_get(rtwdev, 0);
++ __rtw89_core_set_chip_txpwr(rtwdev, chan, RTW89_PHY_0);
++
++ if (!rtwdev->support_mlo)
++ return;
++
++ chan = rtw89_mgnt_chan_get(rtwdev, 1);
++ __rtw89_core_set_chip_txpwr(rtwdev, chan, RTW89_PHY_1);
+ }
+
+-int rtw89_set_channel(struct rtw89_dev *rtwdev)
++static void __rtw89_set_channel(struct rtw89_dev *rtwdev,
++ const struct rtw89_chan *chan,
++ enum rtw89_mac_idx mac_idx,
++ enum rtw89_phy_idx phy_idx)
+ {
+- struct rtw89_hal *hal = &rtwdev->hal;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_chan_rcd *chan_rcd;
+- const struct rtw89_chan *chan;
+- enum rtw89_chanctx_idx chanctx_idx;
+- enum rtw89_chanctx_idx roc_idx;
+- enum rtw89_mac_idx mac_idx;
+- enum rtw89_phy_idx phy_idx;
+ struct rtw89_channel_help_params bak;
+- enum rtw89_entity_mode mode;
+ bool entity_active;
+
+- entity_active = rtw89_get_entity_state(rtwdev);
+-
+- mode = rtw89_entity_recalc(rtwdev);
+- switch (mode) {
+- case RTW89_ENTITY_MODE_SCC:
+- case RTW89_ENTITY_MODE_MCC:
+- chanctx_idx = RTW89_CHANCTX_0;
+- break;
+- case RTW89_ENTITY_MODE_MCC_PREPARE:
+- chanctx_idx = RTW89_CHANCTX_1;
+- break;
+- default:
+- WARN(1, "Invalid ent mode: %d\n", mode);
+- return -EINVAL;
+- }
+-
+- roc_idx = atomic_read(&hal->roc_chanctx_idx);
+- if (roc_idx != RTW89_CHANCTX_IDLE)
+- chanctx_idx = roc_idx;
++ entity_active = rtw89_get_entity_state(rtwdev, phy_idx);
+
+- mac_idx = RTW89_MAC_0;
+- phy_idx = RTW89_PHY_0;
+-
+- chan = rtw89_chan_get(rtwdev, chanctx_idx);
+- chan_rcd = rtw89_chan_rcd_get(rtwdev, chanctx_idx);
++ chan_rcd = rtw89_chan_rcd_get_by_chan(chan);
+
+ rtw89_chip_set_channel_prepare(rtwdev, &bak, chan, mac_idx, phy_idx);
+
+@@ -432,7 +396,29 @@ int rtw89_set_channel(struct rtw89_dev *rtwdev)
+ rtw89_chip_rfk_band_changed(rtwdev, phy_idx, chan);
+ }
+
+- rtw89_set_entity_state(rtwdev, true);
++ rtw89_set_entity_state(rtwdev, phy_idx, true);
++}
++
++int rtw89_set_channel(struct rtw89_dev *rtwdev)
++{
++ const struct rtw89_chan *chan;
++ enum rtw89_entity_mode mode;
++
++ mode = rtw89_entity_recalc(rtwdev);
++ if (mode < 0 || mode >= NUM_OF_RTW89_ENTITY_MODE) {
++ WARN(1, "Invalid ent mode: %d\n", mode);
++ return -EINVAL;
++ }
++
++ chan = rtw89_mgnt_chan_get(rtwdev, 0);
++ __rtw89_set_channel(rtwdev, chan, RTW89_MAC_0, RTW89_PHY_0);
++
++ if (!rtwdev->support_mlo)
++ return 0;
++
++ chan = rtw89_mgnt_chan_get(rtwdev, 1);
++ __rtw89_set_channel(rtwdev, chan, RTW89_MAC_1, RTW89_PHY_1);
++
+ return 0;
+ }
+
+@@ -3157,9 +3143,10 @@ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
+
+- rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, RTW89_ROC_BY_LINK_INDEX);
+ if (unlikely(!rtwvif_link)) {
+- rtw89_err(rtwdev, "roc start: find no link on HW-0\n");
++ rtw89_err(rtwdev, "roc start: find no link on HW-%u\n",
++ RTW89_ROC_BY_LINK_INDEX);
+ return;
+ }
+
+@@ -3211,9 +3198,10 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
+
+- rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, RTW89_ROC_BY_LINK_INDEX);
+ if (unlikely(!rtwvif_link)) {
+- rtw89_err(rtwdev, "roc end: find no link on HW-0\n");
++ rtw89_err(rtwdev, "roc end: find no link on HW-%u\n",
++ RTW89_ROC_BY_LINK_INDEX);
+ return;
+ }
+
+@@ -3224,7 +3212,7 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+
+ roc->state = RTW89_ROC_IDLE;
+ rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, NULL);
+- rtw89_chanctx_proceed(rtwdev);
++ rtw89_chanctx_proceed(rtwdev, NULL);
+ ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, false);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index de33320b1354cd..ff3048d2489f12 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -3424,6 +3424,8 @@ enum rtw89_roc_state {
+ RTW89_ROC_MGMT,
+ };
+
++#define RTW89_ROC_BY_LINK_INDEX 0
++
+ struct rtw89_roc {
+ struct ieee80211_channel chan;
+ struct delayed_work roc_work;
+@@ -4619,7 +4621,7 @@ enum rtw89_chanctx_changes {
+ };
+
+ enum rtw89_entity_mode {
+- RTW89_ENTITY_MODE_SCC,
++ RTW89_ENTITY_MODE_SCC_OR_SMLD,
+ RTW89_ENTITY_MODE_MCC_PREPARE,
+ RTW89_ENTITY_MODE_MCC,
+
+@@ -4628,6 +4630,16 @@ enum rtw89_entity_mode {
+ RTW89_ENTITY_MODE_UNHANDLED = -ESRCH,
+ };
+
++#define RTW89_MAX_INTERFACE_NUM 2
++
++/* only valid when running with chanctx_ops */
++struct rtw89_entity_mgnt {
++ struct list_head active_list;
++ struct rtw89_vif *active_roles[RTW89_MAX_INTERFACE_NUM];
++ enum rtw89_chanctx_idx chanctx_tbl[RTW89_MAX_INTERFACE_NUM]
++ [__RTW89_MLD_MAX_LINK_NUM];
++};
++
+ struct rtw89_chanctx {
+ struct cfg80211_chan_def chandef;
+ struct rtw89_chan chan;
+@@ -4668,9 +4680,10 @@ struct rtw89_hal {
+ struct rtw89_chanctx chanctx[NUM_OF_RTW89_CHANCTX];
+ struct cfg80211_chan_def roc_chandef;
+
+- bool entity_active;
++ bool entity_active[RTW89_PHY_MAX];
+ bool entity_pause;
+ enum rtw89_entity_mode entity_mode;
++ struct rtw89_entity_mgnt entity_mgnt;
+
+ struct rtw89_edcca_bak edcca_bak;
+ u32 disabled_dm_bitmap; /* bitmap of enum rtw89_dm_type */
+@@ -5607,6 +5620,7 @@ struct rtw89_dev {
+ struct rtw89_vif {
+ struct rtw89_dev *rtwdev;
+ struct list_head list;
++ struct list_head mgnt_entry;
+
+ u8 mac_addr[ETH_ALEN];
+ __be32 ip_addr;
+@@ -6361,6 +6375,15 @@ const struct rtw89_chan_rcd *rtw89_chan_rcd_get(struct rtw89_dev *rtwdev,
+ return &hal->chanctx[idx].rcd;
+ }
+
++static inline
++const struct rtw89_chan_rcd *rtw89_chan_rcd_get_by_chan(const struct rtw89_chan *chan)
++{
++ const struct rtw89_chanctx *chanctx =
++ container_of_const(chan, struct rtw89_chanctx, chan);
++
++ return &chanctx->rcd;
++}
++
+ static inline
+ const struct rtw89_chan *rtw89_scan_chan_get(struct rtw89_dev *rtwdev)
+ {
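rtw89_chan_rcd_get_by_chan() above steps from a member pointer back to its enclosing struct via container_of_const(), avoiding a second chanctx-index lookup. A self-contained sketch of the same arithmetic with the classic container_of():

#include <stddef.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct chan_rcd { unsigned char prev_band; };

struct chanctx {
        int chan;
        struct chan_rcd rcd;
};

static struct chanctx *ctx_from_rcd(struct chan_rcd *rcd)
{
        /* subtract the member offset to recover the containing object */
        return container_of(rcd, struct chanctx, rcd);
}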
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index e6bceef691e9be..620e076d1b597d 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6637,21 +6637,24 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_HW_SCAN);
+ }
+
+-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
+- struct rtw89_vif_link *rtwvif_link,
+- bool aborted)
++struct rtw89_hw_scan_complete_cb_data {
++ struct rtw89_vif_link *rtwvif_link;
++ bool aborted;
++};
++
++static int rtw89_hw_scan_complete_cb(struct rtw89_dev *rtwdev, void *data)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
++ struct rtw89_hw_scan_complete_cb_data *cb_data = data;
++ struct rtw89_vif_link *rtwvif_link = cb_data->rtwvif_link;
+ struct cfg80211_scan_info info = {
+- .aborted = aborted,
++ .aborted = cb_data->aborted,
+ };
+ struct rtw89_vif *rtwvif;
+
+ if (!rtwvif_link)
+- return;
+-
+- rtw89_chanctx_proceed(rtwdev);
++ return -EINVAL;
+
+ rtwvif = rtwvif_link->rtwvif;
+
+@@ -6672,6 +6675,29 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
+ scan_info->last_chan_idx = 0;
+ scan_info->scanning_vif = NULL;
+ scan_info->abort = false;
++
++ return 0;
++}
++
++void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ bool aborted)
++{
++ struct rtw89_hw_scan_complete_cb_data cb_data = {
++ .rtwvif_link = rtwvif_link,
++ .aborted = aborted,
++ };
++ const struct rtw89_chanctx_cb_parm cb_parm = {
++ .cb = rtw89_hw_scan_complete_cb,
++ .data = &cb_data,
++ .caller = __func__,
++ };
++
++	/* The things here need to be done after setting the channel (for coex)
++	 * and before proceeding with the entity mode (for MCC). So, pass a
++	 * callback for them so they run in the right sequence rather than
++	 * doing them directly.
++	 */
++ rtw89_chanctx_proceed(rtwdev, &cb_parm);
+ }
+
+ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev,
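The conversion above turns the tail of rtw89_hw_scan_complete() into a callback handed to rtw89_chanctx_proceed(), so it runs after the channel is set but before the entity mode proceeds. A hedged sketch of the callback-parameter shape; the field names mirror the hunk but the surrounding logic is illustrative:

struct chanctx_cb_parm {
        int (*cb)(void *data);
        void *data;
        const char *caller;     /* for diagnostics only */
};

static int chanctx_proceed(const struct chanctx_cb_parm *parm)
{
        int ret;

        /* ... set the channel first (needed by coex) ... */

        if (parm && parm->cb) {
                ret = parm->cb(parm->data);
                if (ret)
                        return ret;
        }

        /* ... then proceed with the entity mode (needed by MCC) ... */
        return 0;
}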
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 4e15d539e3d1c4..4574aa62839b02 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -1483,7 +1483,8 @@ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ clear_bit(RTW89_FLAG_CMAC1_FUNC, rtwdev->flags);
+ clear_bit(RTW89_FLAG_FW_RDY, rtwdev->flags);
+ rtw89_write8(rtwdev, R_AX_SCOREBOARD + 3, MAC_AX_NOTIFY_PWR_MAJOR);
+- rtw89_set_entity_state(rtwdev, false);
++ rtw89_set_entity_state(rtwdev, RTW89_PHY_0, false);
++ rtw89_set_entity_state(rtwdev, RTW89_PHY_1, false);
+ }
+
+ return 0;
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 13fb3cac27016b..8351a70d325d4a 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -189,8 +189,10 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+
+ rtw89_core_txq_init(rtwdev, vif->txq);
+
+- if (!rtw89_rtwvif_in_list(rtwdev, rtwvif))
++ if (!rtw89_rtwvif_in_list(rtwdev, rtwvif)) {
+ list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
++ INIT_LIST_HEAD(&rtwvif->mgnt_entry);
++ }
+
+ ether_addr_copy(rtwvif->mac_addr, vif->addr);
+
+@@ -1271,11 +1273,11 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw,
+ if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw))
+ return;
+
+- if (!rtwdev->scanning)
+- return;
+-
+ mutex_lock(&rtwdev->mutex);
+
++ if (!rtwdev->scanning)
++ goto out;
++
+ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
+ if (unlikely(!rtwvif_link)) {
+ rtw89_err(rtwdev, "cancel hw scan: find no link on HW-0\n");
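The reorder above is a classic check-under-lock fix: rtwdev->scanning is now tested only while rtwdev->mutex is held, so the flag cannot flip between the test and the work that depends on it. The shape of the pattern, in a minimal sketch:

#include <linux/mutex.h>
#include <linux/types.h>

struct dev_ctx {
        struct mutex lock;
        bool scanning;
};

static void cancel_scan(struct dev_ctx *d)
{
        mutex_lock(&d->lock);
        if (!d->scanning)
                goto out;       /* checked while the lock is held */

        /* ... cancel work that is only valid while scanning ... */
out:
        mutex_unlock(&d->lock);
}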
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 0c77b8524160db..42805ed7ca1202 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -2612,24 +2612,24 @@ static int wl1271_op_add_interface(struct ieee80211_hw *hw,
+ if (test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags) ||
+ test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) {
+ ret = -EBUSY;
+- goto out;
++ goto out_unlock;
+ }
+
+
+ ret = wl12xx_init_vif_data(wl, vif);
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+
+ wlvif->wl = wl;
+ role_type = wl12xx_get_role_type(wl, wlvif);
+ if (role_type == WL12XX_INVALID_ROLE_TYPE) {
+ ret = -EINVAL;
+- goto out;
++ goto out_unlock;
+ }
+
+ ret = wlcore_allocate_hw_queue_base(wl, wlvif);
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+
+ /*
+ * TODO: after the nvs issue will be solved, move this block
+@@ -2644,7 +2644,7 @@ static int wl1271_op_add_interface(struct ieee80211_hw *hw,
+
+ ret = wl12xx_init_fw(wl);
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+ }
+
+ /*
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 249914b90dbfa7..4c409efd8cec17 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3085,7 +3085,7 @@ int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp, u8 csi,
+ static int nvme_get_effects_log(struct nvme_ctrl *ctrl, u8 csi,
+ struct nvme_effects_log **log)
+ {
+- struct nvme_effects_log *cel = xa_load(&ctrl->cels, csi);
++ struct nvme_effects_log *old, *cel = xa_load(&ctrl->cels, csi);
+ int ret;
+
+ if (cel)
+@@ -3102,7 +3102,11 @@ static int nvme_get_effects_log(struct nvme_ctrl *ctrl, u8 csi,
+ return ret;
+ }
+
+- xa_store(&ctrl->cels, csi, cel, GFP_KERNEL);
++ old = xa_store(&ctrl->cels, csi, cel, GFP_KERNEL);
++ if (xa_is_err(old)) {
++ kfree(cel);
++ return xa_err(old);
++ }
+ out:
+ *log = cel;
+ return 0;
+@@ -3164,6 +3168,25 @@ static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
+ return ret;
+ }
+
++static int nvme_init_effects_log(struct nvme_ctrl *ctrl,
++ u8 csi, struct nvme_effects_log **log)
++{
++ struct nvme_effects_log *effects, *old;
++
++ effects = kzalloc(sizeof(*effects), GFP_KERNEL);
++ if (!effects)
++ return -ENOMEM;
++
++ old = xa_store(&ctrl->cels, csi, effects, GFP_KERNEL);
++ if (xa_is_err(old)) {
++ kfree(effects);
++ return xa_err(old);
++ }
++
++ *log = effects;
++ return 0;
++}
++
+ static void nvme_init_known_nvm_effects(struct nvme_ctrl *ctrl)
+ {
+ struct nvme_effects_log *log = ctrl->effects;
+@@ -3210,10 +3233,9 @@ static int nvme_init_effects(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ }
+
+ if (!ctrl->effects) {
+- ctrl->effects = kzalloc(sizeof(*ctrl->effects), GFP_KERNEL);
+- if (!ctrl->effects)
+- return -ENOMEM;
+- xa_store(&ctrl->cels, NVME_CSI_NVM, ctrl->effects, GFP_KERNEL);
++ ret = nvme_init_effects_log(ctrl, NVME_CSI_NVM, &ctrl->effects);
++ if (ret < 0)
++ return ret;
+ }
+
+ nvme_init_known_nvm_effects(ctrl);
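Both nvme hunks close the same leak: xa_store() may fail and return an error-encoded pointer, in which case nothing was inserted and the freshly allocated log must be freed by the caller. A minimal sketch of the xarray idiom:

#include <linux/xarray.h>
#include <linux/slab.h>

static int store_or_free(struct xarray *xa, unsigned long index, void *obj)
{
        void *old = xa_store(xa, index, obj, GFP_KERNEL);

        if (xa_is_err(old)) {   /* store failed; xa still lacks the entry */
                kfree(obj);     /* nobody else holds a reference: free it */
                return xa_err(old);
        }
        return 0;
}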
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 55abfe5e1d2548..8305d3c1280748 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -54,6 +54,8 @@ MODULE_PARM_DESC(tls_handshake_timeout,
+ "nvme TLS handshake timeout in seconds (default 10)");
+ #endif
+
++static atomic_t nvme_tcp_cpu_queues[NR_CPUS];
++
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+ /* lockdep can detect a circular dependency of the form
+ * sk_lock -> mmap_lock (page fault) -> fs locks -> sk_lock
+@@ -127,6 +129,7 @@ enum nvme_tcp_queue_flags {
+ NVME_TCP_Q_ALLOCATED = 0,
+ NVME_TCP_Q_LIVE = 1,
+ NVME_TCP_Q_POLLING = 2,
++ NVME_TCP_Q_IO_CPU_SET = 3,
+ };
+
+ enum nvme_tcp_recv_state {
+@@ -1562,23 +1565,56 @@ static bool nvme_tcp_poll_queue(struct nvme_tcp_queue *queue)
+ ctrl->io_queues[HCTX_TYPE_POLL];
+ }
+
++/**
++ * Track the number of queues assigned to each cpu using a global per-cpu
++ * counter and select the least used cpu from the mq_map. Our goal is to
++ * spread different controllers' I/O threads across different cpu cores.
++ *
++ * Note that the accounting is not 100% perfect, but it doesn't need to be;
++ * we simply make a best effort to select the best candidate cpu core that
++ * we find at any given point.
++ */
+ static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
+ {
+ struct nvme_tcp_ctrl *ctrl = queue->ctrl;
+- int qid = nvme_tcp_queue_id(queue);
+- int n = 0;
++ struct blk_mq_tag_set *set = &ctrl->tag_set;
++ int qid = nvme_tcp_queue_id(queue) - 1;
++ unsigned int *mq_map = NULL;
++ int cpu, min_queues = INT_MAX, io_cpu;
++
++ if (wq_unbound)
++ goto out;
+
+ if (nvme_tcp_default_queue(queue))
+- n = qid - 1;
++ mq_map = set->map[HCTX_TYPE_DEFAULT].mq_map;
+ else if (nvme_tcp_read_queue(queue))
+- n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] - 1;
++ mq_map = set->map[HCTX_TYPE_READ].mq_map;
+ else if (nvme_tcp_poll_queue(queue))
+- n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] -
+- ctrl->io_queues[HCTX_TYPE_READ] - 1;
+- if (wq_unbound)
+- queue->io_cpu = WORK_CPU_UNBOUND;
+- else
+- queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
++ mq_map = set->map[HCTX_TYPE_POLL].mq_map;
++
++ if (WARN_ON(!mq_map))
++ goto out;
++
++ /* Search for the least used cpu from the mq_map */
++ io_cpu = WORK_CPU_UNBOUND;
++ for_each_online_cpu(cpu) {
++ int num_queues = atomic_read(&nvme_tcp_cpu_queues[cpu]);
++
++ if (mq_map[cpu] != qid)
++ continue;
++ if (num_queues < min_queues) {
++ io_cpu = cpu;
++ min_queues = num_queues;
++ }
++ }
++ if (io_cpu != WORK_CPU_UNBOUND) {
++ queue->io_cpu = io_cpu;
++ atomic_inc(&nvme_tcp_cpu_queues[io_cpu]);
++ set_bit(NVME_TCP_Q_IO_CPU_SET, &queue->flags);
++ }
++out:
++ dev_dbg(ctrl->ctrl.device, "queue %d: using cpu %d\n",
++ qid, queue->io_cpu);
+ }
+
+ static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid)
+@@ -1722,7 +1758,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid,
+
+ queue->sock->sk->sk_allocation = GFP_ATOMIC;
+ queue->sock->sk->sk_use_task_frag = false;
+- nvme_tcp_set_queue_io_cpu(queue);
++ queue->io_cpu = WORK_CPU_UNBOUND;
+ queue->request = NULL;
+ queue->data_remaining = 0;
+ queue->ddgst_remaining = 0;
+@@ -1844,6 +1880,9 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ if (!test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
+ return;
+
++ if (test_and_clear_bit(NVME_TCP_Q_IO_CPU_SET, &queue->flags))
++ atomic_dec(&nvme_tcp_cpu_queues[queue->io_cpu]);
++
+ mutex_lock(&queue->queue_lock);
+ if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
+ __nvme_tcp_stop_queue(queue);
+@@ -1878,9 +1917,10 @@ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+ nvme_tcp_init_recv_ctx(queue);
+ nvme_tcp_setup_sock_ops(queue);
+
+- if (idx)
++ if (idx) {
++ nvme_tcp_set_queue_io_cpu(queue);
+ ret = nvmf_connect_io_queue(nctrl, idx);
+- else
++ } else
+ ret = nvmf_connect_admin_queue(nctrl);
+
+ if (!ret) {
+@@ -2856,6 +2896,7 @@ static struct nvmf_transport_ops nvme_tcp_transport = {
+ static int __init nvme_tcp_init_module(void)
+ {
+ unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_SYSFS;
++ int cpu;
+
+ BUILD_BUG_ON(sizeof(struct nvme_tcp_hdr) != 8);
+ BUILD_BUG_ON(sizeof(struct nvme_tcp_cmd_pdu) != 72);
+@@ -2873,6 +2914,9 @@ static int __init nvme_tcp_init_module(void)
+ if (!nvme_tcp_wq)
+ return -ENOMEM;
+
++ for_each_possible_cpu(cpu)
++ atomic_set(&nvme_tcp_cpu_queues[cpu], 0);
++
+ nvmf_register_transport(&nvme_tcp_transport);
+ return 0;
+ }
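The new nvme_tcp_set_queue_io_cpu() picks, among the online CPUs that the block layer mapped to this queue, the one currently serving the fewest TCP queues. A condensed sketch of the selection loop; the counter array follows the hunk, the rest is illustrative:

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <linux/limits.h>

static atomic_t cpu_queues[NR_CPUS];

static int pick_io_cpu(const unsigned int *mq_map, int qid)
{
        int cpu, io_cpu = -1, min_queues = INT_MAX;

        for_each_online_cpu(cpu) {
                int n = atomic_read(&cpu_queues[cpu]);

                if (mq_map[cpu] != qid)
                        continue;       /* CPU not mapped to this queue */
                if (n < min_queues) {
                        io_cpu = cpu;
                        min_queues = n;
                }
        }
        if (io_cpu >= 0)
                atomic_inc(&cpu_queues[io_cpu]);        /* best-effort account */
        return io_cpu;
}

The matching atomic_dec() on queue stop, gated by the NVME_TCP_Q_IO_CPU_SET flag, keeps the counters balanced without needing exact accounting.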
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 546e76ac407cfd..8c80f4dc8b3fae 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -8,7 +8,6 @@
+
+ #define pr_fmt(fmt) "OF: fdt: " fmt
+
+-#include <linux/acpi.h>
+ #include <linux/crash_dump.h>
+ #include <linux/crc32.h>
+ #include <linux/kernel.h>
+@@ -512,8 +511,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
+ break;
+ memblock_reserve(base, size);
+ }
+-
+- fdt_init_reserved_mem();
+ }
+
+ /**
+@@ -1214,14 +1211,10 @@ void __init unflatten_device_tree(void)
+ {
+ void *fdt = initial_boot_params;
+
+- /* Don't use the bootloader provided DTB if ACPI is enabled */
+- if (!acpi_disabled)
+- fdt = NULL;
++ /* Save the statically-placed regions in the reserved_mem array */
++ fdt_scan_reserved_mem_reg_nodes();
+
+- /*
+- * Populate an empty root node when ACPI is enabled or bootloader
+- * doesn't provide one.
+- */
++ /* Populate an empty root node when bootloader doesn't provide one */
+ if (!fdt) {
+ fdt = (void *) __dtb_empty_root_begin;
+ /* fdt_totalsize() will be used for copy size */
+diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
+index c235d6c909a16a..10698862252572 100644
+--- a/drivers/of/of_private.h
++++ b/drivers/of/of_private.h
+@@ -9,6 +9,7 @@
+ */
+
+ #define FDT_ALIGN_SIZE 8
++#define MAX_RESERVED_REGIONS 64
+
+ /**
+ * struct alias_prop - Alias property in 'aliases' node
+@@ -183,7 +184,7 @@ static inline struct device_node *__of_get_dma_parent(const struct device_node *
+ #endif
+
+ int fdt_scan_reserved_mem(void);
+-void fdt_init_reserved_mem(void);
++void __init fdt_scan_reserved_mem_reg_nodes(void);
+
+ bool of_fdt_device_is_available(const void *blob, unsigned long node);
+
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 46e1c3fbc7692c..45445a1600a968 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -27,7 +27,6 @@
+
+ #include "of_private.h"
+
+-#define MAX_RESERVED_REGIONS 64
+ static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
+ static int reserved_mem_count;
+
+@@ -51,11 +50,13 @@ static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
+ memblock_phys_free(base, size);
+ }
+
+- kmemleak_ignore_phys(base);
++ if (!err)
++ kmemleak_ignore_phys(base);
+
+ return err;
+ }
+
++static void __init fdt_init_reserved_mem_node(struct reserved_mem *rmem);
+ /*
+ * fdt_reserved_mem_save_node() - save fdt node for second pass initialization
+ */
+@@ -74,6 +75,9 @@ static void __init fdt_reserved_mem_save_node(unsigned long node, const char *un
+ rmem->base = base;
+ rmem->size = size;
+
++ /* Call the region specific initialization function */
++ fdt_init_reserved_mem_node(rmem);
++
+ reserved_mem_count++;
+ return;
+ }
+@@ -106,7 +110,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ phys_addr_t base, size;
+ int len;
+ const __be32 *prop;
+- int first = 1;
+ bool nomap;
+
+ prop = of_get_flat_dt_prop(node, "reg", &len);
+@@ -134,10 +137,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ uname, &base, (unsigned long)(size / SZ_1M));
+
+ len -= t_len;
+- if (first) {
+- fdt_reserved_mem_save_node(node, uname, base, size);
+- first = 0;
+- }
+ }
+ return 0;
+ }
+@@ -165,12 +164,82 @@ static int __init __reserved_mem_check_root(unsigned long node)
+ return 0;
+ }
+
++static void __init __rmem_check_for_overlap(void);
++
++/**
++ * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined
++ * reserved memory regions.
++ *
++ * This function is used to scan through the DT and store the
++ * information for the reserved memory regions that are defined using
++ * the "reg" property. The region node number, name, base address, and
++ * size are all stored in the reserved_mem array by calling the
++ * fdt_reserved_mem_save_node() function.
++ */
++void __init fdt_scan_reserved_mem_reg_nodes(void)
++{
++ int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
++ const void *fdt = initial_boot_params;
++ phys_addr_t base, size;
++ const __be32 *prop;
++ int node, child;
++ int len;
++
++ if (!fdt)
++ return;
++
++ node = fdt_path_offset(fdt, "/reserved-memory");
++ if (node < 0) {
++ pr_info("Reserved memory: No reserved-memory node in the DT\n");
++ return;
++ }
++
++ if (__reserved_mem_check_root(node)) {
++ pr_err("Reserved memory: unsupported node format, ignoring\n");
++ return;
++ }
++
++ fdt_for_each_subnode(child, fdt, node) {
++ const char *uname;
++
++ prop = of_get_flat_dt_prop(child, "reg", &len);
++ if (!prop)
++ continue;
++ if (!of_fdt_device_is_available(fdt, child))
++ continue;
++
++ uname = fdt_get_name(fdt, child, NULL);
++ if (len && len % t_len != 0) {
++ pr_err("Reserved memory: invalid reg property in '%s', skipping node.\n",
++ uname);
++ continue;
++ }
++
++ if (len > t_len)
++ pr_warn("%s() ignores %d regions in node '%s'\n",
++ __func__, len / t_len - 1, uname);
++
++ base = dt_mem_next_cell(dt_root_addr_cells, &prop);
++ size = dt_mem_next_cell(dt_root_size_cells, &prop);
++
++ if (size)
++ fdt_reserved_mem_save_node(child, uname, base, size);
++ }
++
++ /* check for overlapping reserved regions */
++ __rmem_check_for_overlap();
++}
++
++static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname);
++
+ /*
+ * fdt_scan_reserved_mem() - scan a single FDT node for reserved memory
+ */
+ int __init fdt_scan_reserved_mem(void)
+ {
+ int node, child;
++ int dynamic_nodes_cnt = 0;
++ int dynamic_nodes[MAX_RESERVED_REGIONS];
+ const void *fdt = initial_boot_params;
+
+ node = fdt_path_offset(fdt, "/reserved-memory");
+@@ -192,8 +261,24 @@ int __init fdt_scan_reserved_mem(void)
+ uname = fdt_get_name(fdt, child, NULL);
+
+ err = __reserved_mem_reserve_reg(child, uname);
+- if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL))
+- fdt_reserved_mem_save_node(child, uname, 0, 0);
++ /*
++ * Save the nodes for the dynamically-placed regions
++ * into an array which will be used for allocation right
++ * after all the statically-placed regions are reserved
++ * or marked as no-map. This is done to avoid dynamically
++ * allocating from one of the statically-placed regions.
++ */
++ if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL)) {
++ dynamic_nodes[dynamic_nodes_cnt] = child;
++ dynamic_nodes_cnt++;
++ }
++ }
++ for (int i = 0; i < dynamic_nodes_cnt; i++) {
++ const char *uname;
++
++ child = dynamic_nodes[i];
++ uname = fdt_get_name(fdt, child, NULL);
++ __reserved_mem_alloc_size(child, uname);
+ }
+ return 0;
+ }
+@@ -253,8 +338,7 @@ static int __init __reserved_mem_alloc_in_range(phys_addr_t size,
+ * __reserved_mem_alloc_size() - allocate reserved memory described by
+ * 'size', 'alignment' and 'alloc-ranges' properties.
+ */
+-static int __init __reserved_mem_alloc_size(unsigned long node,
+- const char *uname, phys_addr_t *res_base, phys_addr_t *res_size)
++static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname)
+ {
+ int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
+ phys_addr_t start = 0, end = 0;
+@@ -334,9 +418,8 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
+ return -ENOMEM;
+ }
+
+- *res_base = base;
+- *res_size = size;
+-
++ /* Save region in the reserved_mem array */
++ fdt_reserved_mem_save_node(node, uname, base, size);
+ return 0;
+ }
+
+@@ -425,48 +508,37 @@ static void __init __rmem_check_for_overlap(void)
+ }
+
+ /**
+- * fdt_init_reserved_mem() - allocate and init all saved reserved memory regions
++ * fdt_init_reserved_mem_node() - Initialize a reserved memory region
++ * @rmem: reserved_mem struct of the memory region to be initialized.
++ *
++ * This function is used to call the region specific initialization
++ * function for a reserved memory region.
+ */
+-void __init fdt_init_reserved_mem(void)
++static void __init fdt_init_reserved_mem_node(struct reserved_mem *rmem)
+ {
+- int i;
+-
+- /* check for overlapping reserved regions */
+- __rmem_check_for_overlap();
+-
+- for (i = 0; i < reserved_mem_count; i++) {
+- struct reserved_mem *rmem = &reserved_mem[i];
+- unsigned long node = rmem->fdt_node;
+- int err = 0;
+- bool nomap;
++ unsigned long node = rmem->fdt_node;
++ int err = 0;
++ bool nomap;
+
+- nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
++ nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+
+- if (rmem->size == 0)
+- err = __reserved_mem_alloc_size(node, rmem->name,
+- &rmem->base, &rmem->size);
+- if (err == 0) {
+- err = __reserved_mem_init_node(rmem);
+- if (err != 0 && err != -ENOENT) {
+- pr_info("node %s compatible matching fail\n",
+- rmem->name);
+- if (nomap)
+- memblock_clear_nomap(rmem->base, rmem->size);
+- else
+- memblock_phys_free(rmem->base,
+- rmem->size);
+- } else {
+- phys_addr_t end = rmem->base + rmem->size - 1;
+- bool reusable =
+- (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
+-
+- pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
+- &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
+- nomap ? "nomap" : "map",
+- reusable ? "reusable" : "non-reusable",
+- rmem->name ? rmem->name : "unknown");
+- }
+- }
++ err = __reserved_mem_init_node(rmem);
++ if (err != 0 && err != -ENOENT) {
++ pr_info("node %s compatible matching fail\n", rmem->name);
++ if (nomap)
++ memblock_clear_nomap(rmem->base, rmem->size);
++ else
++ memblock_phys_free(rmem->base, rmem->size);
++ } else {
++ phys_addr_t end = rmem->base + rmem->size - 1;
++ bool reusable =
++ (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
++
++ pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
++ &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
++ nomap ? "nomap" : "map",
++ reusable ? "reusable" : "non-reusable",
++ rmem->name ? rmem->name : "unknown");
+ }
+ }
+
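The restructuring above makes reserved-memory setup two-pass: the scan first reserves or nomaps every statically-placed ("reg") region while collecting the dynamically-placed ("size") nodes in an array, and only then allocates those, so the allocator can no longer hand back a statically-placed range. A sketch of the deferral under that reading; reserve_static() and alloc_dynamic() are hypothetical stand-ins:

#define MAX_NODES 64

static int dynamic_nodes[MAX_NODES];
static int dynamic_cnt;

static void reserve_static(int node) { /* memblock_reserve()-like, elided */ }
static void alloc_dynamic(int node)  { /* memblock_alloc()-like, elided */ }

static void scan_pass(int node, int has_reg)
{
        if (has_reg)
                reserve_static(node);                   /* reserve right away */
        else if (dynamic_cnt < MAX_NODES)
                dynamic_nodes[dynamic_cnt++] = node;    /* defer allocation */
}

static void alloc_pass(void)
{
        for (int i = 0; i < dynamic_cnt; i++)
                alloc_dynamic(dynamic_nodes[i]);        /* from what remains */
}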
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 7bd8390f2fba5e..906a33ae717f78 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1317,9 +1317,9 @@ static struct device_node *parse_interrupt_map(struct device_node *np,
+ addrcells = of_bus_n_addr_cells(np);
+
+ imap = of_get_property(np, "interrupt-map", &imaplen);
+- imaplen /= sizeof(*imap);
+ if (!imap)
+ return NULL;
++ imaplen /= sizeof(*imap);
+
+ imap_end = imap + imaplen;
+
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 3aa18737470fa2..5ac209472c0cf6 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -101,11 +101,30 @@ struct opp_table *_find_opp_table(struct device *dev)
+ * representation in the OPP table and manage the clock configuration themselves
+ * in a platform-specific way.
+ */
+-static bool assert_single_clk(struct opp_table *opp_table)
++static bool assert_single_clk(struct opp_table *opp_table,
++ unsigned int __always_unused index)
+ {
+ return !WARN_ON(opp_table->clk_count > 1);
+ }
+
++/*
++ * Returns true if clock table is large enough to contain the clock index.
++ */
++static bool assert_clk_index(struct opp_table *opp_table,
++ unsigned int index)
++{
++ return opp_table->clk_count > index;
++}
++
++/*
++ * Returns true if bandwidth table is large enough to contain the bandwidth index.
++ */
++static bool assert_bandwidth_index(struct opp_table *opp_table,
++ unsigned int index)
++{
++ return opp_table->path_count > index;
++}
++
+ /**
+ * dev_pm_opp_get_voltage() - Gets the voltage corresponding to an opp
+ * @opp: opp for which voltage has to be returned for
+@@ -499,12 +518,12 @@ static struct dev_pm_opp *_opp_table_find_key(struct opp_table *opp_table,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+ bool (*compare)(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp,
+ unsigned long opp_key, unsigned long key),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+
+ /* Assert that the requirement is met */
+- if (assert && !assert(opp_table))
++ if (assert && !assert(opp_table, index))
+ return ERR_PTR(-EINVAL);
+
+ mutex_lock(&opp_table->lock);
+@@ -532,7 +551,7 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+ bool (*compare)(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp,
+ unsigned long opp_key, unsigned long key),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ struct opp_table *opp_table;
+ struct dev_pm_opp *opp;
+@@ -555,7 +574,7 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available,
+ static struct dev_pm_opp *_find_key_exact(struct device *dev,
+ unsigned long key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ /*
+ * The value of key will be updated here, but will be ignored as the
+@@ -568,7 +587,7 @@ static struct dev_pm_opp *_find_key_exact(struct device *dev,
+ static struct dev_pm_opp *_opp_table_find_key_ceil(struct opp_table *opp_table,
+ unsigned long *key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ return _opp_table_find_key(opp_table, key, index, available, read,
+ _compare_ceil, assert);
+@@ -577,7 +596,7 @@ static struct dev_pm_opp *_opp_table_find_key_ceil(struct opp_table *opp_table,
+ static struct dev_pm_opp *_find_key_ceil(struct device *dev, unsigned long *key,
+ int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ return _find_key(dev, key, index, available, read, _compare_ceil,
+ assert);
+@@ -586,7 +605,7 @@ static struct dev_pm_opp *_find_key_ceil(struct device *dev, unsigned long *key,
+ static struct dev_pm_opp *_find_key_floor(struct device *dev,
+ unsigned long *key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ return _find_key(dev, key, index, available, read, _compare_floor,
+ assert);
+@@ -647,7 +666,8 @@ struct dev_pm_opp *
+ dev_pm_opp_find_freq_exact_indexed(struct device *dev, unsigned long freq,
+ u32 index, bool available)
+ {
+- return _find_key_exact(dev, freq, index, available, _read_freq, NULL);
++ return _find_key_exact(dev, freq, index, available, _read_freq,
++ assert_clk_index);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact_indexed);
+
+@@ -707,7 +727,8 @@ struct dev_pm_opp *
+ dev_pm_opp_find_freq_ceil_indexed(struct device *dev, unsigned long *freq,
+ u32 index)
+ {
+- return _find_key_ceil(dev, freq, index, true, _read_freq, NULL);
++ return _find_key_ceil(dev, freq, index, true, _read_freq,
++ assert_clk_index);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil_indexed);
+
+@@ -760,7 +781,7 @@ struct dev_pm_opp *
+ dev_pm_opp_find_freq_floor_indexed(struct device *dev, unsigned long *freq,
+ u32 index)
+ {
+- return _find_key_floor(dev, freq, index, true, _read_freq, NULL);
++ return _find_key_floor(dev, freq, index, true, _read_freq, assert_clk_index);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor_indexed);
+
+@@ -878,7 +899,8 @@ struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev, unsigned int *bw,
+ unsigned long temp = *bw;
+ struct dev_pm_opp *opp;
+
+- opp = _find_key_ceil(dev, &temp, index, true, _read_bw, NULL);
++ opp = _find_key_ceil(dev, &temp, index, true, _read_bw,
++ assert_bandwidth_index);
+ *bw = temp;
+ return opp;
+ }
+@@ -909,7 +931,8 @@ struct dev_pm_opp *dev_pm_opp_find_bw_floor(struct device *dev,
+ unsigned long temp = *bw;
+ struct dev_pm_opp *opp;
+
+- opp = _find_key_floor(dev, &temp, index, true, _read_bw, NULL);
++ opp = _find_key_floor(dev, &temp, index, true, _read_bw,
++ assert_bandwidth_index);
+ *bw = temp;
+ return opp;
+ }
+@@ -1702,7 +1725,7 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
+ if (IS_ERR(opp_table))
+ return;
+
+- if (!assert_single_clk(opp_table))
++ if (!assert_single_clk(opp_table, 0))
+ goto put_table;
+
+ mutex_lock(&opp_table->lock);
+@@ -2054,7 +2077,7 @@ int _opp_add_v1(struct opp_table *opp_table, struct device *dev,
+ unsigned long tol, u_volt = data->u_volt;
+ int ret;
+
+- if (!assert_single_clk(opp_table))
++ if (!assert_single_clk(opp_table, 0))
+ return -EINVAL;
+
+ new_opp = _opp_allocate(opp_table);
+@@ -2911,7 +2934,7 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
+ return r;
+ }
+
+- if (!assert_single_clk(opp_table)) {
++ if (!assert_single_clk(opp_table, 0)) {
+ r = -EINVAL;
+ goto put_table;
+ }
+@@ -2987,7 +3010,7 @@ int dev_pm_opp_adjust_voltage(struct device *dev, unsigned long freq,
+ return r;
+ }
+
+- if (!assert_single_clk(opp_table)) {
++ if (!assert_single_clk(opp_table, 0)) {
+ r = -EINVAL;
+ goto put_table;
+ }
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 55c8cfef97d489..dcab0e7ace1068 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -959,7 +959,7 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
+
+ ret = _of_opp_alloc_required_opps(opp_table, new_opp);
+ if (ret)
+- goto free_opp;
++ goto put_node;
+
+ if (!of_property_read_u32(np, "clock-latency-ns", &val))
+ new_opp->clock_latency_ns = val;
+@@ -1009,6 +1009,8 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
+
+ free_required_opps:
+ _of_opp_free_required_opps(opp_table, new_opp);
++put_node:
++ of_node_put(np);
+ free_opp:
+ _opp_free(new_opp);
+
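The opp/of.c fix adds the of_node_put() that the new error path was missing: every node reference taken during parsing must be dropped on every exit. A generic sketch of the balanced-put pattern, with a hypothetical consumer function:

#include <linux/of.h>

static int setup_from_node(struct device_node *np)
{
        return 0;       /* hypothetical consumer, elided */
}

static int parse_one(struct device_node *parent)
{
        struct device_node *np;
        int ret;

        np = of_get_child_by_name(parent, "opp-shared");        /* takes a ref */
        if (!np)
                return -ENOENT;

        ret = setup_from_node(np);
        /* success or failure, the reference must be dropped exactly once */
        of_node_put(np);
        return ret;
}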
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index c8d5c90aa4d45b..ad3028b755d16a 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -598,10 +598,9 @@ static int imx_pcie_attach_pd(struct device *dev)
+
+ static int imx6sx_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+ {
+- if (enable)
+- regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+- IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
+-
++ regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
++ IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
++ enable ? 0 : IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
+ return 0;
+ }
+
+@@ -630,19 +629,20 @@ static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+ {
+ int offset = imx_pcie_grp_offset(imx_pcie);
+
+- if (enable) {
+- regmap_clear_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
+- regmap_set_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN);
+- }
+-
++ regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
++ IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE,
++ enable ? 0 : IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
++ regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
++ IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN,
++ enable ? IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN : 0);
+ return 0;
+ }
+
+ static int imx7d_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+ {
+- if (!enable)
+- regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+- IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
++ regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
++ IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
++ enable ? 0 : IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
+ return 0;
+ }
+
+@@ -775,6 +775,7 @@ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
+ static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie)
+ {
+ reset_control_deassert(imx_pcie->pciephy_reset);
++ reset_control_deassert(imx_pcie->apps_reset);
+
+ if (imx_pcie->drvdata->core_reset)
+ imx_pcie->drvdata->core_reset(imx_pcie, false);
+@@ -966,7 +967,9 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp)
+ goto err_clk_disable;
+ }
+
+- ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
++ ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE,
++ imx_pcie->drvdata->mode == DW_PCIE_EP_TYPE ?
++ PHY_MODE_PCIE_EP : PHY_MODE_PCIE_RC);
+ if (ret) {
+ dev_err(dev, "unable to set PCIe PHY mode\n");
+ goto err_phy_exit;
+@@ -1391,7 +1394,6 @@ static int imx_pcie_probe(struct platform_device *pdev)
+ switch (imx_pcie->drvdata->variant) {
+ case IMX8MQ:
+ case IMX8MQ_EP:
+- case IMX7D:
+ if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR)
+ imx_pcie->controller_id = 1;
+ break;
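Several of the imx6 hunks collapse "set bits on one path, clear on the other" into a single unconditional regmap_update_bits() call, so both the enable and disable directions always program a known register state. A sketch with a hypothetical register and bit:

#include <linux/regmap.h>
#include <linux/bits.h>
#include <linux/types.h>

#define MY_GPR12        0x30            /* hypothetical register offset */
#define MY_TEST_PD      BIT(30)         /* hypothetical powerdown bit */

static int enable_ref_clk(struct regmap *gpr, bool enable)
{
        /* mask picks the bit; value clears it on enable, sets it on disable */
        return regmap_update_bits(gpr, MY_GPR12, MY_TEST_PD,
                                  enable ? 0 : MY_TEST_PD);
}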
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 3e41865c72904e..120e2aca5164ab 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -946,6 +946,7 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
+ return ret;
+ }
+
++ dw_pcie_stop_link(pci);
+ if (pci->pp.ops->deinit)
+ pci->pp.ops->deinit(&pci->pp);
+
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 6483e1874477ef..4c141e05f84e9c 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1559,6 +1559,8 @@ static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
+ pci_lock_rescan_remove();
+ pci_rescan_bus(pp->bridge->bus);
+ pci_unlock_rescan_remove();
++
++ qcom_pcie_icc_opp_update(pcie);
+ } else {
+ dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
+ status);
+diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c
+index 047e2cef5afcd5..c5e0d025bc4359 100644
+--- a/drivers/pci/controller/pcie-rcar-ep.c
++++ b/drivers/pci/controller/pcie-rcar-ep.c
+@@ -107,7 +107,7 @@ static int rcar_pcie_parse_outbound_ranges(struct rcar_pcie_endpoint *ep,
+ }
+ if (!devm_request_mem_region(&pdev->dev, res->start,
+ resource_size(res),
+- outbound_name)) {
++ res->name)) {
+ dev_err(pcie->dev, "Cannot request memory region %s.\n",
+ outbound_name);
+ return -EIO;
+diff --git a/drivers/pci/controller/plda/pcie-microchip-host.c b/drivers/pci/controller/plda/pcie-microchip-host.c
+index 48f60a04b740ba..3fdfffdf027001 100644
+--- a/drivers/pci/controller/plda/pcie-microchip-host.c
++++ b/drivers/pci/controller/plda/pcie-microchip-host.c
+@@ -7,27 +7,31 @@
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ */
+
++#include <linux/align.h>
++#include <linux/bits.h>
+ #include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/irqchip/chained_irq.h>
+ #include <linux/irqdomain.h>
++#include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/msi.h>
+ #include <linux/of_address.h>
+ #include <linux/of_pci.h>
+ #include <linux/pci-ecam.h>
+ #include <linux/platform_device.h>
++#include <linux/wordpart.h>
+
+ #include "../../pci.h"
+ #include "pcie-plda.h"
+
++#define MC_MAX_NUM_INBOUND_WINDOWS 8
++#define MPFS_NC_BOUNCE_ADDR 0x80000000
++
+ /* PCIe Bridge Phy and Controller Phy offsets */
+ #define MC_PCIE1_BRIDGE_ADDR 0x00008000u
+ #define MC_PCIE1_CTRL_ADDR 0x0000a000u
+
+-#define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR)
+-#define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR)
+-
+ /* PCIe Controller Phy Regs */
+ #define SEC_ERROR_EVENT_CNT 0x20
+ #define DED_ERROR_EVENT_CNT 0x24
+@@ -128,7 +132,6 @@
+ [EVENT_LOCAL_ ## x] = { __stringify(x), s }
+
+ #define PCIE_EVENT(x) \
+- .base = MC_PCIE_CTRL_ADDR, \
+ .offset = PCIE_EVENT_INT, \
+ .mask_offset = PCIE_EVENT_INT, \
+ .mask_high = 1, \
+@@ -136,7 +139,6 @@
+ .enb_mask = PCIE_EVENT_INT_ENB_MASK
+
+ #define SEC_EVENT(x) \
+- .base = MC_PCIE_CTRL_ADDR, \
+ .offset = SEC_ERROR_INT, \
+ .mask_offset = SEC_ERROR_INT_MASK, \
+ .mask = SEC_ERROR_INT_ ## x ## _INT, \
+@@ -144,7 +146,6 @@
+ .enb_mask = 0
+
+ #define DED_EVENT(x) \
+- .base = MC_PCIE_CTRL_ADDR, \
+ .offset = DED_ERROR_INT, \
+ .mask_offset = DED_ERROR_INT_MASK, \
+ .mask_high = 1, \
+@@ -152,7 +153,6 @@
+ .enb_mask = 0
+
+ #define LOCAL_EVENT(x) \
+- .base = MC_PCIE_BRIDGE_ADDR, \
+ .offset = ISTATUS_LOCAL, \
+ .mask_offset = IMASK_LOCAL, \
+ .mask_high = 0, \
+@@ -179,7 +179,8 @@ struct event_map {
+
+ struct mc_pcie {
+ struct plda_pcie_rp plda;
+- void __iomem *axi_base_addr;
++ void __iomem *bridge_base_addr;
++ void __iomem *ctrl_base_addr;
+ };
+
+ struct cause {
+@@ -253,7 +254,6 @@ static struct event_map local_status_to_event[] = {
+ };
+
+ static struct {
+- u32 base;
+ u32 offset;
+ u32 mask;
+ u32 shift;
+@@ -325,8 +325,7 @@ static inline u32 reg_to_event(u32 reg, struct event_map field)
+
+ static u32 pcie_events(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+- u32 reg = readl_relaxed(ctrl_base_addr + PCIE_EVENT_INT);
++ u32 reg = readl_relaxed(port->ctrl_base_addr + PCIE_EVENT_INT);
+ u32 val = 0;
+ int i;
+
+@@ -338,8 +337,7 @@ static u32 pcie_events(struct mc_pcie *port)
+
+ static u32 sec_errors(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+- u32 reg = readl_relaxed(ctrl_base_addr + SEC_ERROR_INT);
++ u32 reg = readl_relaxed(port->ctrl_base_addr + SEC_ERROR_INT);
+ u32 val = 0;
+ int i;
+
+@@ -351,8 +349,7 @@ static u32 sec_errors(struct mc_pcie *port)
+
+ static u32 ded_errors(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+- u32 reg = readl_relaxed(ctrl_base_addr + DED_ERROR_INT);
++ u32 reg = readl_relaxed(port->ctrl_base_addr + DED_ERROR_INT);
+ u32 val = 0;
+ int i;
+
+@@ -364,8 +361,7 @@ static u32 ded_errors(struct mc_pcie *port)
+
+ static u32 local_events(struct mc_pcie *port)
+ {
+- void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+- u32 reg = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
++ u32 reg = readl_relaxed(port->bridge_base_addr + ISTATUS_LOCAL);
+ u32 val = 0;
+ int i;
+
+@@ -412,8 +408,12 @@ static void mc_ack_event_irq(struct irq_data *data)
+ void __iomem *addr;
+ u32 mask;
+
+- addr = mc_port->axi_base_addr + event_descs[event].base +
+- event_descs[event].offset;
++ if (event_descs[event].offset == ISTATUS_LOCAL)
++ addr = mc_port->bridge_base_addr;
++ else
++ addr = mc_port->ctrl_base_addr;
++
++ addr += event_descs[event].offset;
+ mask = event_descs[event].mask;
+ mask |= event_descs[event].enb_mask;
+
+@@ -429,8 +429,12 @@ static void mc_mask_event_irq(struct irq_data *data)
+ u32 mask;
+ u32 val;
+
+- addr = mc_port->axi_base_addr + event_descs[event].base +
+- event_descs[event].mask_offset;
++ if (event_descs[event].offset == ISTATUS_LOCAL)
++ addr = mc_port->bridge_base_addr;
++ else
++ addr = mc_port->ctrl_base_addr;
++
++ addr += event_descs[event].mask_offset;
+ mask = event_descs[event].mask;
+ if (event_descs[event].enb_mask) {
+ mask <<= PCIE_EVENT_INT_ENB_SHIFT;
+@@ -460,8 +464,12 @@ static void mc_unmask_event_irq(struct irq_data *data)
+ u32 mask;
+ u32 val;
+
+- addr = mc_port->axi_base_addr + event_descs[event].base +
+- event_descs[event].mask_offset;
++ if (event_descs[event].offset == ISTATUS_LOCAL)
++ addr = mc_port->bridge_base_addr;
++ else
++ addr = mc_port->ctrl_base_addr;
++
++ addr += event_descs[event].mask_offset;
+ mask = event_descs[event].mask;
+
+ if (event_descs[event].enb_mask)
+@@ -554,26 +562,20 @@ static const struct plda_event mc_event = {
+
+ static inline void mc_clear_secs(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+-
+- writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
+- SEC_ERROR_INT);
+- writel_relaxed(0, ctrl_base_addr + SEC_ERROR_EVENT_CNT);
++ writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT,
++ port->ctrl_base_addr + SEC_ERROR_INT);
++ writel_relaxed(0, port->ctrl_base_addr + SEC_ERROR_EVENT_CNT);
+ }
+
+ static inline void mc_clear_deds(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+-
+- writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
+- DED_ERROR_INT);
+- writel_relaxed(0, ctrl_base_addr + DED_ERROR_EVENT_CNT);
++ writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT,
++ port->ctrl_base_addr + DED_ERROR_INT);
++ writel_relaxed(0, port->ctrl_base_addr + DED_ERROR_EVENT_CNT);
+ }
+
+ static void mc_disable_interrupts(struct mc_pcie *port)
+ {
+- void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+ u32 val;
+
+ /* Ensure ECC bypass is enabled */
+@@ -581,22 +583,22 @@ static void mc_disable_interrupts(struct mc_pcie *port)
+ ECC_CONTROL_RX_RAM_ECC_BYPASS |
+ ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS |
+ ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS;
+- writel_relaxed(val, ctrl_base_addr + ECC_CONTROL);
++ writel_relaxed(val, port->ctrl_base_addr + ECC_CONTROL);
+
+ /* Disable SEC errors and clear any outstanding */
+- writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
+- SEC_ERROR_INT_MASK);
++ writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT,
++ port->ctrl_base_addr + SEC_ERROR_INT_MASK);
+ mc_clear_secs(port);
+
+ /* Disable DED errors and clear any outstanding */
+- writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
+- DED_ERROR_INT_MASK);
++ writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT,
++ port->ctrl_base_addr + DED_ERROR_INT_MASK);
+ mc_clear_deds(port);
+
+ /* Disable local interrupts and clear any outstanding */
+- writel_relaxed(0, bridge_base_addr + IMASK_LOCAL);
+- writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_LOCAL);
+- writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_MSI);
++ writel_relaxed(0, port->bridge_base_addr + IMASK_LOCAL);
++ writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_LOCAL);
++ writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_MSI);
+
+ /* Disable PCIe events and clear any outstanding */
+ val = PCIE_EVENT_INT_L2_EXIT_INT |
+@@ -605,11 +607,96 @@ static void mc_disable_interrupts(struct mc_pcie *port)
+ PCIE_EVENT_INT_L2_EXIT_INT_MASK |
+ PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK |
+ PCIE_EVENT_INT_DLUP_EXIT_INT_MASK;
+- writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT);
++ writel_relaxed(val, port->ctrl_base_addr + PCIE_EVENT_INT);
+
+ /* Disable host interrupts and clear any outstanding */
+- writel_relaxed(0, bridge_base_addr + IMASK_HOST);
+- writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
++ writel_relaxed(0, port->bridge_base_addr + IMASK_HOST);
++ writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_HOST);
++}
++
++static void mc_pcie_setup_inbound_atr(struct mc_pcie *port, int window_index,
++ u64 axi_addr, u64 pcie_addr, u64 size)
++{
++ u32 table_offset = window_index * ATR_ENTRY_SIZE;
++ void __iomem *table_addr = port->bridge_base_addr + table_offset;
++ u32 atr_sz;
++ u32 val;
++
++ atr_sz = ilog2(size) - 1;
++
++ val = ALIGN_DOWN(lower_32_bits(pcie_addr), SZ_4K);
++ val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
++ val |= ATR_IMPL_ENABLE;
++
++ writel(val, table_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
++
++ writel(upper_32_bits(pcie_addr), table_addr + ATR0_PCIE_WIN0_SRC_ADDR);
++
++ writel(lower_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_LSB);
++ writel(upper_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_UDW);
++
++ writel(TRSL_ID_AXI4_MASTER_0, table_addr + ATR0_PCIE_WIN0_TRSL_PARAM);
++}
++
++static int mc_pcie_setup_inbound_ranges(struct platform_device *pdev,
++ struct mc_pcie *port)
++{
++ struct device *dev = &pdev->dev;
++ struct device_node *dn = dev->of_node;
++ struct of_range_parser parser;
++ struct of_range range;
++ int atr_index = 0;
++
++ /*
++ * MPFS PCIe Root Port is 32-bit only, behind a Fabric Interface
++ * Controller FPGA logic block which contains the AXI-S interface.
++ *
++ * From the point of view of the PCIe Root Port, there are only two
++ * supported Root Port configurations:
++ *
++ * Configuration 1: for use with fully coherent designs; supports a
++ * window from 0x0 (CPU space) to specified PCIe space.
++ *
++ * Configuration 2: for use with non-coherent designs; supports two
++ * 1 GB windows to CPU space; one mapping CPU space 0 to PCIe space
++ * 0x80000000 and a second mapping CPU space 0x40000000 to PCIe
++ * space 0xc0000000. This cfg needs two windows because of how the
++ * MSI space is allocated in the AXI-S range on MPFS.
++ *
++ * The FIC interface outside the PCIe block *must* complete the
++ * inbound address translation as per MCHP MPFS FPGA design
++ * guidelines.
++ */
++ if (device_property_read_bool(dev, "dma-noncoherent")) {
++ /*
++		 * The same two tables are always needed in this case; two are
++		 * required because of hardware interactions between address and size.
++ */
++ mc_pcie_setup_inbound_atr(port, 0, 0,
++ MPFS_NC_BOUNCE_ADDR, SZ_1G);
++ mc_pcie_setup_inbound_atr(port, 1, SZ_1G,
++ MPFS_NC_BOUNCE_ADDR + SZ_1G, SZ_1G);
++ } else {
++ /* Find any DMA ranges */
++ if (of_pci_dma_range_parser_init(&parser, dn)) {
++ /* No DMA range property - setup default */
++ mc_pcie_setup_inbound_atr(port, 0, 0, 0, SZ_4G);
++ return 0;
++ }
++
++ for_each_of_range(&parser, &range) {
++ if (atr_index >= MC_MAX_NUM_INBOUND_WINDOWS) {
++ dev_err(dev, "too many inbound ranges; %d available tables\n",
++ MC_MAX_NUM_INBOUND_WINDOWS);
++ return -EINVAL;
++ }
++ mc_pcie_setup_inbound_atr(port, atr_index, 0,
++ range.pci_addr, range.size);
++ atr_index++;
++ }
++ }
++
++ return 0;
+ }
+
+ static int mc_platform_init(struct pci_config_window *cfg)
+@@ -617,12 +704,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
+ struct device *dev = cfg->parent;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
+- void __iomem *bridge_base_addr =
+- port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ int ret;
+
+ /* Configure address translation table 0 for PCIe config space */
+- plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
++ plda_pcie_setup_window(port->bridge_base_addr, 0, cfg->res.start,
+ cfg->res.start,
+ resource_size(&cfg->res));
+
+@@ -634,6 +719,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
+ if (ret)
+ return ret;
+
++ ret = mc_pcie_setup_inbound_ranges(pdev, port);
++ if (ret)
++ return ret;
++
+ port->plda.event_ops = &mc_event_ops;
+ port->plda.event_irq_chip = &mc_event_irq_chip;
+ port->plda.events_bitmap = GENMASK(NUM_EVENTS - 1, 0);
+@@ -649,7 +738,7 @@ static int mc_platform_init(struct pci_config_window *cfg)
+ static int mc_host_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+- void __iomem *bridge_base_addr;
++ void __iomem *apb_base_addr;
+ struct plda_pcie_rp *plda;
+ int ret;
+ u32 val;
+@@ -661,30 +750,45 @@ static int mc_host_probe(struct platform_device *pdev)
+ plda = &port->plda;
+ plda->dev = dev;
+
+- port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(port->axi_base_addr))
+- return PTR_ERR(port->axi_base_addr);
++ port->bridge_base_addr = devm_platform_ioremap_resource_byname(pdev,
++ "bridge");
++ port->ctrl_base_addr = devm_platform_ioremap_resource_byname(pdev,
++ "ctrl");
++ if (!IS_ERR(port->bridge_base_addr) && !IS_ERR(port->ctrl_base_addr))
++ goto addrs_set;
++
++ /*
++ * The original, incorrect, binding that lumped the control and
++ * bridge addresses together still needs to be handled by the driver.
++ */
++ apb_base_addr = devm_platform_ioremap_resource_byname(pdev, "apb");
++ if (IS_ERR(apb_base_addr))
++ return dev_err_probe(dev, PTR_ERR(apb_base_addr),
++ "both legacy apb register and ctrl/bridge regions missing");
++
++ port->bridge_base_addr = apb_base_addr + MC_PCIE1_BRIDGE_ADDR;
++ port->ctrl_base_addr = apb_base_addr + MC_PCIE1_CTRL_ADDR;
+
++addrs_set:
+ mc_disable_interrupts(port);
+
+- bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+- plda->bridge_addr = bridge_base_addr;
++ plda->bridge_addr = port->bridge_base_addr;
+ plda->num_events = NUM_EVENTS;
+
+ /* Allow enabling MSI by disabling MSI-X */
+- val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
++ val = readl(port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
+ val &= ~MSIX_CAP_MASK;
+- writel(val, bridge_base_addr + PCIE_PCI_IRQ_DW0);
++ writel(val, port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
+
+ /* Pick num vectors from bitfile programmed onto FPGA fabric */
+- val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
++ val = readl(port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
+ val &= NUM_MSI_MSGS_MASK;
+ val >>= NUM_MSI_MSGS_SHIFT;
+
+ plda->msi.num_vectors = 1 << val;
+
+ /* Pick vector address from design */
+- plda->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
++ plda->msi.vector_phy = readl_relaxed(port->bridge_base_addr + IMSI_ADDR);
+
+ ret = mc_pcie_init_clks(dev);
+ if (ret) {
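The probe rework above prefers the new split "bridge"/"ctrl" register regions and only falls back to carving both out of the legacy "apb" region. A condensed sketch of that fallback, with the offsets the hunk keeps and an abridged error string:

#include <linux/platform_device.h>
#include <linux/err.h>
#include <linux/io.h>

#define MC_PCIE1_BRIDGE_ADDR    0x00008000u
#define MC_PCIE1_CTRL_ADDR      0x0000a000u

static int map_regions(struct platform_device *pdev,
                       void __iomem **bridge, void __iomem **ctrl)
{
        void __iomem *apb;

        *bridge = devm_platform_ioremap_resource_byname(pdev, "bridge");
        *ctrl = devm_platform_ioremap_resource_byname(pdev, "ctrl");
        if (!IS_ERR(*bridge) && !IS_ERR(*ctrl))
                return 0;

        /* legacy binding: one "apb" region holding both blocks */
        apb = devm_platform_ioremap_resource_byname(pdev, "apb");
        if (IS_ERR(apb))
                return dev_err_probe(&pdev->dev, PTR_ERR(apb),
                                     "no usable register regions\n");

        *bridge = apb + MC_PCIE1_BRIDGE_ADDR;
        *ctrl = apb + MC_PCIE1_CTRL_ADDR;
        return 0;
}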
+diff --git a/drivers/pci/controller/plda/pcie-plda-host.c b/drivers/pci/controller/plda/pcie-plda-host.c
+index 8533dc618d45f0..4153214ca41038 100644
+--- a/drivers/pci/controller/plda/pcie-plda-host.c
++++ b/drivers/pci/controller/plda/pcie-plda-host.c
+@@ -8,11 +8,14 @@
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ */
+
++#include <linux/align.h>
++#include <linux/bitfield.h>
+ #include <linux/irqchip/chained_irq.h>
+ #include <linux/irqdomain.h>
+ #include <linux/msi.h>
+ #include <linux/pci_regs.h>
+ #include <linux/pci-ecam.h>
++#include <linux/wordpart.h>
+
+ #include "pcie-plda.h"
+
+@@ -502,8 +505,9 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_PARAM);
+
+- val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
+- ATR_IMPL_ENABLE;
++ val = ALIGN_DOWN(lower_32_bits(axi_addr), SZ_4K);
++ val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
++ val |= ATR_IMPL_ENABLE;
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_SRCADDR_PARAM);
+
+@@ -518,13 +522,20 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ val = upper_32_bits(pci_addr);
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
++}
++EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
++
++void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port)
++{
++ void __iomem *bridge_base_addr = port->bridge_addr;
++ u32 val;
+
+ val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+ val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
+ writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+ writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+ }
+-EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
++EXPORT_SYMBOL_GPL(plda_pcie_setup_inbound_address_translation);
+
+ int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
+ struct plda_pcie_rp *port)
+diff --git a/drivers/pci/controller/plda/pcie-plda.h b/drivers/pci/controller/plda/pcie-plda.h
+index 0e7dc0d8e5ba11..61ece26065ea09 100644
+--- a/drivers/pci/controller/plda/pcie-plda.h
++++ b/drivers/pci/controller/plda/pcie-plda.h
+@@ -89,14 +89,15 @@
+
+ /* PCIe AXI slave table init defines */
+ #define ATR0_AXI4_SLV0_SRCADDR_PARAM 0x800u
+-#define ATR_SIZE_SHIFT 1
+-#define ATR_IMPL_ENABLE 1
++#define ATR_SIZE_MASK GENMASK(6, 1)
++#define ATR_IMPL_ENABLE BIT(0)
+ #define ATR0_AXI4_SLV0_SRC_ADDR 0x804u
+ #define ATR0_AXI4_SLV0_TRSL_ADDR_LSB 0x808u
+ #define ATR0_AXI4_SLV0_TRSL_ADDR_UDW 0x80cu
+ #define ATR0_AXI4_SLV0_TRSL_PARAM 0x810u
+ #define PCIE_TX_RX_INTERFACE 0x00000000u
+ #define PCIE_CONFIG_INTERFACE 0x00000001u
++#define TRSL_ID_AXI4_MASTER_0 0x00000004u
+
+ #define CONFIG_SPACE_ADDR_OFFSET 0x1000u
+
+@@ -204,6 +205,7 @@ int plda_init_interrupts(struct platform_device *pdev,
+ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ phys_addr_t axi_addr, phys_addr_t pci_addr,
+ size_t size);
++void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port);
+ int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
+ struct plda_pcie_rp *port);
+ int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
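
The register packing changed above replaces open-coded shifts (`atr_sz << ATR_SIZE_SHIFT`) with GENMASK()/FIELD_PREP(), which place the value into the field and let the compiler check that it fits. A small self-contained sketch of the same idiom, with illustrative field names:

	#include <linux/align.h>
	#include <linux/bitfield.h>
	#include <linux/bits.h>
	#include <linux/sizes.h>

	#define EX_ATR_SIZE_MASK	GENMASK(6, 1)
	#define EX_ATR_IMPL_ENABLE	BIT(0)

	static u32 ex_pack_atr(u32 atr_sz, u32 src_lo)
	{
		/* FIELD_PREP() shifts atr_sz into bits 6:1 of the word. */
		return ALIGN_DOWN(src_lo, SZ_4K) |
		       FIELD_PREP(EX_ATR_SIZE_MASK, atr_sz) |
		       EX_ATR_IMPL_ENABLE;
	}
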
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 7c2ed6eae53ad1..14b4c68ab4e1a2 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -251,7 +251,7 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
+
+ fail_back_rx:
+ dma_release_channel(epf_test->dma_chan_rx);
+- epf_test->dma_chan_tx = NULL;
++ epf_test->dma_chan_rx = NULL;
+
+ fail_back_tx:
+ dma_cap_zero(mask);
+@@ -361,8 +361,8 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+
+ ktime_get_ts64(&start);
+ if (reg->flags & FLAG_USE_DMA) {
+- if (epf_test->dma_private) {
+- dev_err(dev, "Cannot transfer data using DMA\n");
++ if (!dma_has_cap(DMA_MEMCPY, epf_test->dma_chan_tx->device->cap_mask)) {
++ dev_err(dev, "DMA controller doesn't support MEMCPY\n");
+ ret = -EINVAL;
+ goto err_map_addr;
+ }
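
The pci-epf-test hunks above fix the rx error path (which previously cleared dma_chan_tx instead of dma_chan_rx) and replace an indirect dma_private check with a direct capability query on the channel. A sketch of that query, assuming an already-requested channel:

	#include <linux/dmaengine.h>

	/* True if the channel's controller advertises memory-to-memory
	 * copy support (DMA_MEMCPY in its capability mask). */
	static bool ex_chan_can_memcpy(struct dma_chan *chan)
	{
		return dma_has_cap(DMA_MEMCPY, chan->device->cap_mask);
	}
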
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index 62f7dff437309f..de665342dc16d0 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -856,7 +856,7 @@ void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_pci_epc_release, devm_pci_epc_match,
++ r = devres_release(dev, devm_pci_epc_release, devm_pci_epc_match,
+ epc);
+ dev_WARN_ONCE(dev, r, "couldn't find PCI EPC resource\n");
+ }
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index f4f10c60c1d23b..dcc662be080004 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -438,9 +438,9 @@ static void nmk_prcm_altcx_set_mode(struct nmk_pinctrl *npct,
+ * - Any spurious wake up event during switch sequence to be ignored and
+ * cleared
+ */
+-static void nmk_gpio_glitch_slpm_init(unsigned int *slpm)
++static int nmk_gpio_glitch_slpm_init(unsigned int *slpm)
+ {
+- int i;
++ int i, j, ret;
+
+ for (i = 0; i < NMK_MAX_BANKS; i++) {
+ struct nmk_gpio_chip *chip = nmk_gpio_chips[i];
+@@ -449,11 +449,21 @@ static void nmk_gpio_glitch_slpm_init(unsigned int *slpm)
+ if (!chip)
+ break;
+
+- clk_enable(chip->clk);
++ ret = clk_enable(chip->clk);
++ if (ret) {
++ for (j = 0; j < i; j++) {
++ chip = nmk_gpio_chips[j];
++ clk_disable(chip->clk);
++ }
++
++ return ret;
++ }
+
+ slpm[i] = readl(chip->addr + NMK_GPIO_SLPC);
+ writel(temp, chip->addr + NMK_GPIO_SLPC);
+ }
++
++ return 0;
+ }
+
+ static void nmk_gpio_glitch_slpm_restore(unsigned int *slpm)
+@@ -923,7 +933,9 @@ static int nmk_pmx_set(struct pinctrl_dev *pctldev, unsigned int function,
+
+ slpm[nmk_chip->bank] &= ~BIT(bit);
+ }
+- nmk_gpio_glitch_slpm_init(slpm);
++ ret = nmk_gpio_glitch_slpm_init(slpm);
++ if (ret)
++ goto out_pre_slpm_init;
+ }
+
+ for (i = 0; i < g->grp.npins; i++) {
+@@ -940,7 +952,10 @@ static int nmk_pmx_set(struct pinctrl_dev *pctldev, unsigned int function,
+ dev_dbg(npct->dev, "setting pin %d to altsetting %d\n",
+ g->grp.pins[i], g->altsetting);
+
+- clk_enable(nmk_chip->clk);
++ ret = clk_enable(nmk_chip->clk);
++ if (ret)
++ goto out_glitch;
++
+ /*
+ * If the pin is switching to altfunc, and there was an
+ * interrupt installed on it which has been lazy disabled,
+@@ -988,6 +1003,7 @@ static int nmk_gpio_request_enable(struct pinctrl_dev *pctldev,
+ struct nmk_gpio_chip *nmk_chip;
+ struct gpio_chip *chip;
+ unsigned int bit;
++ int ret;
+
+ if (!range) {
+ dev_err(npct->dev, "invalid range\n");
+@@ -1004,7 +1020,9 @@ static int nmk_gpio_request_enable(struct pinctrl_dev *pctldev,
+
+ find_nmk_gpio_from_pin(pin, &bit);
+
+- clk_enable(nmk_chip->clk);
++ ret = clk_enable(nmk_chip->clk);
++ if (ret)
++ return ret;
+ /* There is no glitch when converting any pin to GPIO */
+ __nmk_gpio_set_mode(nmk_chip, bit, NMK_GPIO_ALT_GPIO);
+ clk_disable(nmk_chip->clk);
+@@ -1058,6 +1076,7 @@ static int nmk_pin_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ unsigned long cfg;
+ int pull, slpm, output, val, i;
+ bool lowemi, gpiomode, sleep;
++ int ret;
+
+ nmk_chip = find_nmk_gpio_from_pin(pin, &bit);
+ if (!nmk_chip) {
+@@ -1116,7 +1135,9 @@ static int nmk_pin_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ output ? (val ? "high" : "low") : "",
+ lowemi ? "on" : "off");
+
+- clk_enable(nmk_chip->clk);
++ ret = clk_enable(nmk_chip->clk);
++ if (ret)
++ return ret;
+ if (gpiomode)
+ /* No glitch when going to GPIO mode */
+ __nmk_gpio_set_mode(nmk_chip, bit, NMK_GPIO_ALT_GPIO);
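
The Nomadik changes above follow the usual enable-with-rollback pattern: when enabling clock i fails, every clock enabled before it is disabled again before the error is returned, so the refcounts stay balanced. A generic sketch of the pattern:

	#include <linux/clk.h>

	static int ex_enable_all(struct clk **clks, int n)
	{
		int i, ret;

		for (i = 0; i < n; i++) {
			ret = clk_enable(clks[i]);
			if (ret) {
				/* Undo everything enabled so far. */
				while (--i >= 0)
					clk_disable(clks[i]);
				return ret;
			}
		}

		return 0;
	}
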
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 7f66ec73199a9c..a12766b3bc8a73 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -908,12 +908,13 @@ static bool amd_gpio_should_save(struct amd_gpio *gpio_dev, unsigned int pin)
+ return false;
+ }
+
+-static int amd_gpio_suspend(struct device *dev)
++static int amd_gpio_suspend_hibernate_common(struct device *dev, bool is_suspend)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+ unsigned long flags;
+ int i;
++ u32 wake_mask = is_suspend ? WAKE_SOURCE_SUSPEND : WAKE_SOURCE_HIBERNATE;
+
+ for (i = 0; i < desc->npins; i++) {
+ int pin = desc->pins[i].number;
+@@ -925,11 +926,11 @@ static int amd_gpio_suspend(struct device *dev)
+ gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin * 4) & ~PIN_IRQ_PENDING;
+
+ /* mask any interrupts not intended to be a wake source */
+- if (!(gpio_dev->saved_regs[i] & WAKE_SOURCE)) {
++ if (!(gpio_dev->saved_regs[i] & wake_mask)) {
+ writel(gpio_dev->saved_regs[i] & ~BIT(INTERRUPT_MASK_OFF),
+ gpio_dev->base + pin * 4);
+- pm_pr_dbg("Disabling GPIO #%d interrupt for suspend.\n",
+- pin);
++ pm_pr_dbg("Disabling GPIO #%d interrupt for %s.\n",
++ pin, is_suspend ? "suspend" : "hibernate");
+ }
+
+ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+@@ -938,6 +939,16 @@ static int amd_gpio_suspend(struct device *dev)
+ return 0;
+ }
+
++static int amd_gpio_suspend(struct device *dev)
++{
++ return amd_gpio_suspend_hibernate_common(dev, true);
++}
++
++static int amd_gpio_hibernate(struct device *dev)
++{
++ return amd_gpio_suspend_hibernate_common(dev, false);
++}
++
+ static int amd_gpio_resume(struct device *dev)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+@@ -961,8 +972,12 @@ static int amd_gpio_resume(struct device *dev)
+ }
+
+ static const struct dev_pm_ops amd_gpio_pm_ops = {
+- SET_LATE_SYSTEM_SLEEP_PM_OPS(amd_gpio_suspend,
+- amd_gpio_resume)
++ .suspend_late = amd_gpio_suspend,
++ .resume_early = amd_gpio_resume,
++ .freeze_late = amd_gpio_hibernate,
++ .thaw_early = amd_gpio_resume,
++ .poweroff_late = amd_gpio_hibernate,
++ .restore_early = amd_gpio_resume,
+ };
+ #endif
+
+diff --git a/drivers/pinctrl/pinctrl-amd.h b/drivers/pinctrl/pinctrl-amd.h
+index cf59089f277639..c9522c62d7910f 100644
+--- a/drivers/pinctrl/pinctrl-amd.h
++++ b/drivers/pinctrl/pinctrl-amd.h
+@@ -80,10 +80,9 @@
+ #define FUNCTION_MASK GENMASK(1, 0)
+ #define FUNCTION_INVALID GENMASK(7, 0)
+
+-#define WAKE_SOURCE (BIT(WAKE_CNTRL_OFF_S0I3) | \
+- BIT(WAKE_CNTRL_OFF_S3) | \
+- BIT(WAKE_CNTRL_OFF_S4) | \
+- BIT(WAKECNTRL_Z_OFF))
++#define WAKE_SOURCE_SUSPEND (BIT(WAKE_CNTRL_OFF_S0I3) | \
++ BIT(WAKE_CNTRL_OFF_S3))
++#define WAKE_SOURCE_HIBERNATE BIT(WAKE_CNTRL_OFF_S4)
+
+ struct amd_function {
+ const char *name;
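
Spelling out the six sleep callbacks, rather than using SET_LATE_SYSTEM_SLEEP_PM_OPS (which reuses one suspend/resume pair for everything), is what lets the AMD driver above arm a different wake mask for hibernate (S4) than for suspend (S0i3/S3). A skeleton of that shape, with illustrative empty callbacks standing in for the wake-mask programming:

	#include <linux/pm.h>

	static int ex_suspend(struct device *dev)   { return 0; /* suspend wake mask */ }
	static int ex_hibernate(struct device *dev) { return 0; /* hibernate wake mask */ }
	static int ex_resume(struct device *dev)    { return 0; }

	static const struct dev_pm_ops ex_pm_ops = {
		.suspend_late	= ex_suspend,
		.resume_early	= ex_resume,
		.freeze_late	= ex_hibernate,
		.thaw_early	= ex_resume,
		.poweroff_late	= ex_hibernate,
		.restore_early	= ex_resume,
	};
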
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index b79c211c037496..ac6dc22b37c98e 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -636,7 +636,7 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
+ if (clk_enable(b->drvdata->pclk)) {
+ dev_err(b->gpio_chip.parent,
+ "unable to enable clock for pending IRQs\n");
+- return;
++ goto out;
+ }
+ }
+
+@@ -652,6 +652,7 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
+ if (eintd->nr_banks)
+ clk_disable(eintd->banks[0]->drvdata->pclk);
+
++out:
+ chained_irq_exit(chip, desc);
+ }
+
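
The Exynos fix above makes the early clock-failure return jump to a label instead, so chained_irq_exit() still runs. The invariant is that chained_irq_enter() and chained_irq_exit() must always pair, even on a bail-out. A sketch of the shape (ex_enable_resources() is a hypothetical helper):

	#include <linux/irq.h>
	#include <linux/irqchip/chained_irq.h>

	static void ex_demux(struct irq_desc *desc)
	{
		struct irq_chip *chip = irq_desc_get_chip(desc);

		chained_irq_enter(chip, desc);

		if (ex_enable_resources())	/* hypothetical */
			goto out;		/* still take the exit path */

		/* ... read pending bits, generic_handle_irq() each ... */
	out:
		chained_irq_exit(chip, desc);
	}
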
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 5b7fa77c118436..03f3f707d27555 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -86,7 +86,6 @@ struct stm32_pinctrl_group {
+
+ struct stm32_gpio_bank {
+ void __iomem *base;
+- struct clk *clk;
+ struct reset_control *rstc;
+ spinlock_t lock;
+ struct gpio_chip gpio_chip;
+@@ -108,6 +107,7 @@ struct stm32_pinctrl {
+ unsigned ngroups;
+ const char **grp_names;
+ struct stm32_gpio_bank *banks;
++ struct clk_bulk_data *clks;
+ unsigned nbanks;
+ const struct stm32_pinctrl_match_data *match_data;
+ struct irq_domain *domain;
+@@ -1308,12 +1308,6 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+ if (IS_ERR(bank->base))
+ return PTR_ERR(bank->base);
+
+- err = clk_prepare_enable(bank->clk);
+- if (err) {
+- dev_err(dev, "failed to prepare_enable clk (%d)\n", err);
+- return err;
+- }
+-
+ bank->gpio_chip = stm32_gpio_template;
+
+ fwnode_property_read_string(fwnode, "st,bank-name", &bank->gpio_chip.label);
+@@ -1360,26 +1354,20 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+ bank->fwnode, &stm32_gpio_domain_ops,
+ bank);
+
+- if (!bank->domain) {
+- err = -ENODEV;
+- goto err_clk;
+- }
++ if (!bank->domain)
++ return -ENODEV;
+ }
+
+ names = devm_kcalloc(dev, npins, sizeof(char *), GFP_KERNEL);
+- if (!names) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!names)
++ return -ENOMEM;
+
+ for (i = 0; i < npins; i++) {
+ stm32_pin = stm32_pctrl_get_desc_pin_from_gpio(pctl, bank, i);
+ if (stm32_pin && stm32_pin->pin.name) {
+ names[i] = devm_kasprintf(dev, GFP_KERNEL, "%s", stm32_pin->pin.name);
+- if (!names[i]) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!names[i])
++ return -ENOMEM;
+ } else {
+ names[i] = NULL;
+ }
+@@ -1390,15 +1378,11 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+ err = gpiochip_add_data(&bank->gpio_chip, bank);
+ if (err) {
+ dev_err(dev, "Failed to add gpiochip(%d)!\n", bank_nr);
+- goto err_clk;
++ return err;
+ }
+
+ dev_info(dev, "%s bank added\n", bank->gpio_chip.label);
+ return 0;
+-
+-err_clk:
+- clk_disable_unprepare(bank->clk);
+- return err;
+ }
+
+ static struct irq_domain *stm32_pctrl_get_irq_domain(struct platform_device *pdev)
+@@ -1621,6 +1605,11 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ if (!pctl->banks)
+ return -ENOMEM;
+
++ pctl->clks = devm_kcalloc(dev, banks, sizeof(*pctl->clks),
++ GFP_KERNEL);
++ if (!pctl->clks)
++ return -ENOMEM;
++
+ i = 0;
+ for_each_gpiochip_node(dev, child) {
+ struct stm32_gpio_bank *bank = &pctl->banks[i];
+@@ -1632,24 +1621,27 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ return -EPROBE_DEFER;
+ }
+
+- bank->clk = of_clk_get_by_name(np, NULL);
+- if (IS_ERR(bank->clk)) {
++ pctl->clks[i].clk = of_clk_get_by_name(np, NULL);
++ if (IS_ERR(pctl->clks[i].clk)) {
+ fwnode_handle_put(child);
+- return dev_err_probe(dev, PTR_ERR(bank->clk),
++ return dev_err_probe(dev, PTR_ERR(pctl->clks[i].clk),
+ "failed to get clk\n");
+ }
++ pctl->clks[i].id = "pctl";
+ i++;
+ }
+
++ ret = clk_bulk_prepare_enable(banks, pctl->clks);
++ if (ret) {
++ dev_err(dev, "failed to prepare_enable clk (%d)\n", ret);
++ return ret;
++ }
++
+ for_each_gpiochip_node(dev, child) {
+ ret = stm32_gpiolib_register_bank(pctl, child);
+ if (ret) {
+ fwnode_handle_put(child);
+-
+- for (i = 0; i < pctl->nbanks; i++)
+- clk_disable_unprepare(pctl->banks[i].clk);
+-
+- return ret;
++ goto err_register;
+ }
+
+ pctl->nbanks++;
+@@ -1658,6 +1650,15 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ dev_info(dev, "Pinctrl STM32 initialized\n");
+
+ return 0;
++err_register:
++ for (i = 0; i < pctl->nbanks; i++) {
++ struct stm32_gpio_bank *bank = &pctl->banks[i];
++
++ gpiochip_remove(&bank->gpio_chip);
++ }
++
++ clk_bulk_disable_unprepare(banks, pctl->clks);
++ return ret;
+ }
+
+ static int __maybe_unused stm32_pinctrl_restore_gpio_regs(
+@@ -1726,10 +1727,8 @@ static int __maybe_unused stm32_pinctrl_restore_gpio_regs(
+ int __maybe_unused stm32_pinctrl_suspend(struct device *dev)
+ {
+ struct stm32_pinctrl *pctl = dev_get_drvdata(dev);
+- int i;
+
+- for (i = 0; i < pctl->nbanks; i++)
+- clk_disable(pctl->banks[i].clk);
++ clk_bulk_disable(pctl->nbanks, pctl->clks);
+
+ return 0;
+ }
+@@ -1738,10 +1737,11 @@ int __maybe_unused stm32_pinctrl_resume(struct device *dev)
+ {
+ struct stm32_pinctrl *pctl = dev_get_drvdata(dev);
+ struct stm32_pinctrl_group *g = pctl->groups;
+- int i;
++ int i, ret;
+
+- for (i = 0; i < pctl->nbanks; i++)
+- clk_enable(pctl->banks[i].clk);
++ ret = clk_bulk_enable(pctl->nbanks, pctl->clks);
++ if (ret)
++ return ret;
+
+ for (i = 0; i < pctl->ngroups; i++, g++)
+ stm32_pinctrl_restore_gpio_regs(pctl, g->pin);
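
The STM32 conversion above replaces per-bank struct clk handling with one clk_bulk_data array, so probe, suspend, and resume each collapse into a single bulk call. A sketch of the API shape with illustrative names:

	#include <linux/clk.h>
	#include <linux/device.h>

	static int ex_clks_on(struct device *dev,
			      struct clk_bulk_data *clks, int num)
	{
		int ret;

		ret = clk_bulk_prepare_enable(num, clks);
		if (ret)
			return dev_err_probe(dev, ret,
					     "failed to prepare_enable clk\n");
		return 0;
	}

	static void ex_clks_off_for_suspend(struct clk_bulk_data *clks, int num)
	{
		clk_bulk_disable(num, clks);	/* stay prepared for resume */
	}
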
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 9d18dfca6a673b..9ff7b487dc4892 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -1168,7 +1168,7 @@ static int mlxbf_pmc_program_l3_counter(unsigned int blk_num, u32 cnt_num, u32 e
+ /* Method to handle crspace counter programming */
+ static int mlxbf_pmc_program_crspace_counter(unsigned int blk_num, u32 cnt_num, u32 evt)
+ {
+- void *addr;
++ void __iomem *addr;
+ u32 word;
+ int ret;
+
+@@ -1192,7 +1192,7 @@ static int mlxbf_pmc_program_crspace_counter(unsigned int blk_num, u32 cnt_num,
+ /* Method to clear crspace counter value */
+ static int mlxbf_pmc_clear_crspace_counter(unsigned int blk_num, u32 cnt_num)
+ {
+- void *addr;
++ void __iomem *addr;
+
+ addr = pmc->block[blk_num].mmio_base +
+ MLXBF_PMC_CRSPACE_PERFMON_VAL0(pmc->block[blk_num].counters) +
+@@ -1405,7 +1405,7 @@ static int mlxbf_pmc_read_l3_event(unsigned int blk_num, u32 cnt_num, u64 *resul
+ static int mlxbf_pmc_read_crspace_event(unsigned int blk_num, u32 cnt_num, u64 *result)
+ {
+ u32 word, evt;
+- void *addr;
++ void __iomem *addr;
+ int ret;
+
+ addr = pmc->block[blk_num].mmio_base +
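
The mlxbf-pmc hunks above add the __iomem annotation that sparse uses to keep MMIO pointers separate from ordinary memory; access then goes through the read/write accessors rather than a plain dereference. A minimal sketch:

	#include <linux/io.h>
	#include <linux/types.h>

	/* MMIO pointers carry __iomem; readl()/writel() do the access. */
	static u32 ex_read_reg(void __iomem *base, unsigned long offset)
	{
		return readl(base + offset);
	}
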
+diff --git a/drivers/platform/x86/x86-android-tablets/lenovo.c b/drivers/platform/x86/x86-android-tablets/lenovo.c
+index ae087f1471c174..a60efbaf4817fe 100644
+--- a/drivers/platform/x86/x86-android-tablets/lenovo.c
++++ b/drivers/platform/x86/x86-android-tablets/lenovo.c
+@@ -601,7 +601,7 @@ static const struct regulator_init_data lenovo_yoga_tab2_1380_bq24190_vbus_init_
+ .num_consumer_supplies = 1,
+ };
+
+-struct bq24190_platform_data lenovo_yoga_tab2_1380_bq24190_pdata = {
++static struct bq24190_platform_data lenovo_yoga_tab2_1380_bq24190_pdata = {
+ .regulator_init_data = &lenovo_yoga_tab2_1380_bq24190_vbus_init_data,
+ };
+
+@@ -726,7 +726,7 @@ static const struct platform_device_info lenovo_yoga_tab2_1380_pdevs[] __initcon
+ },
+ };
+
+-const char * const lenovo_yoga_tab2_1380_modules[] __initconst = {
++static const char * const lenovo_yoga_tab2_1380_modules[] __initconst = {
+ "bq24190_charger", /* For the Vbus regulator for lc824206xa */
+ NULL
+ };
+diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c
+index 791fdc9326dd60..93e662912b5313 100644
+--- a/drivers/pps/clients/pps-gpio.c
++++ b/drivers/pps/clients/pps-gpio.c
+@@ -214,8 +214,8 @@ static int pps_gpio_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- dev_info(data->pps->dev, "Registered IRQ %d as PPS source\n",
+- data->irq);
++ dev_dbg(&data->pps->dev, "Registered IRQ %d as PPS source\n",
++ data->irq);
+
+ return 0;
+ }
+diff --git a/drivers/pps/clients/pps-ktimer.c b/drivers/pps/clients/pps-ktimer.c
+index d33106bd7a290f..2f465549b843f7 100644
+--- a/drivers/pps/clients/pps-ktimer.c
++++ b/drivers/pps/clients/pps-ktimer.c
+@@ -56,7 +56,7 @@ static struct pps_source_info pps_ktimer_info = {
+
+ static void __exit pps_ktimer_exit(void)
+ {
+- dev_info(pps->dev, "ktimer PPS source unregistered\n");
++ dev_dbg(&pps->dev, "ktimer PPS source unregistered\n");
+
+ del_timer_sync(&ktimer);
+ pps_unregister_source(pps);
+@@ -74,7 +74,7 @@ static int __init pps_ktimer_init(void)
+ timer_setup(&ktimer, pps_ktimer_event, 0);
+ mod_timer(&ktimer, jiffies + HZ);
+
+- dev_info(pps->dev, "ktimer PPS source registered\n");
++ dev_dbg(&pps->dev, "ktimer PPS source registered\n");
+
+ return 0;
+ }
+diff --git a/drivers/pps/clients/pps-ldisc.c b/drivers/pps/clients/pps-ldisc.c
+index 443d6bae19d14d..fa5660f3c4b707 100644
+--- a/drivers/pps/clients/pps-ldisc.c
++++ b/drivers/pps/clients/pps-ldisc.c
+@@ -32,7 +32,7 @@ static void pps_tty_dcd_change(struct tty_struct *tty, bool active)
+ pps_event(pps, &ts, active ? PPS_CAPTUREASSERT :
+ PPS_CAPTURECLEAR, NULL);
+
+- dev_dbg(pps->dev, "PPS %s at %lu\n",
++ dev_dbg(&pps->dev, "PPS %s at %lu\n",
+ active ? "assert" : "clear", jiffies);
+ }
+
+@@ -69,7 +69,7 @@ static int pps_tty_open(struct tty_struct *tty)
+ goto err_unregister;
+ }
+
+- dev_info(pps->dev, "source \"%s\" added\n", info.path);
++ dev_dbg(&pps->dev, "source \"%s\" added\n", info.path);
+
+ return 0;
+
+@@ -89,7 +89,7 @@ static void pps_tty_close(struct tty_struct *tty)
+ if (WARN_ON(!pps))
+ return;
+
+- dev_info(pps->dev, "removed\n");
++ dev_info(&pps->dev, "removed\n");
+ pps_unregister_source(pps);
+ }
+
+diff --git a/drivers/pps/clients/pps_parport.c b/drivers/pps/clients/pps_parport.c
+index abaffb4e1c1ce9..24db06750297d5 100644
+--- a/drivers/pps/clients/pps_parport.c
++++ b/drivers/pps/clients/pps_parport.c
+@@ -81,7 +81,7 @@ static void parport_irq(void *handle)
+ /* check the signal (no signal means the pulse is lost this time) */
+ if (!signal_is_set(port)) {
+ local_irq_restore(flags);
+- dev_err(dev->pps->dev, "lost the signal\n");
++ dev_err(&dev->pps->dev, "lost the signal\n");
+ goto out_assert;
+ }
+
+@@ -98,7 +98,7 @@ static void parport_irq(void *handle)
+ /* timeout */
+ dev->cw_err++;
+ if (dev->cw_err >= CLEAR_WAIT_MAX_ERRORS) {
+- dev_err(dev->pps->dev, "disabled clear edge capture after %d"
++ dev_err(&dev->pps->dev, "disabled clear edge capture after %d"
+ " timeouts\n", dev->cw_err);
+ dev->cw = 0;
+ dev->cw_err = 0;
+diff --git a/drivers/pps/kapi.c b/drivers/pps/kapi.c
+index d9d566f70ed199..92d1b62ea239d7 100644
+--- a/drivers/pps/kapi.c
++++ b/drivers/pps/kapi.c
+@@ -41,7 +41,7 @@ static void pps_add_offset(struct pps_ktime *ts, struct pps_ktime *offset)
+ static void pps_echo_client_default(struct pps_device *pps, int event,
+ void *data)
+ {
+- dev_info(pps->dev, "echo %s %s\n",
++ dev_info(&pps->dev, "echo %s %s\n",
+ event & PPS_CAPTUREASSERT ? "assert" : "",
+ event & PPS_CAPTURECLEAR ? "clear" : "");
+ }
+@@ -112,7 +112,7 @@ struct pps_device *pps_register_source(struct pps_source_info *info,
+ goto kfree_pps;
+ }
+
+- dev_info(pps->dev, "new PPS source %s\n", info->name);
++ dev_dbg(&pps->dev, "new PPS source %s\n", info->name);
+
+ return pps;
+
+@@ -166,7 +166,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event,
+ /* check event type */
+ BUG_ON((event & (PPS_CAPTUREASSERT | PPS_CAPTURECLEAR)) == 0);
+
+- dev_dbg(pps->dev, "PPS event at %lld.%09ld\n",
++ dev_dbg(&pps->dev, "PPS event at %lld.%09ld\n",
+ (s64)ts->ts_real.tv_sec, ts->ts_real.tv_nsec);
+
+ timespec_to_pps_ktime(&ts_real, ts->ts_real);
+@@ -188,7 +188,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event,
+ /* Save the time stamp */
+ pps->assert_tu = ts_real;
+ pps->assert_sequence++;
+- dev_dbg(pps->dev, "capture assert seq #%u\n",
++ dev_dbg(&pps->dev, "capture assert seq #%u\n",
+ pps->assert_sequence);
+
+ captured = ~0;
+@@ -202,7 +202,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event,
+ /* Save the time stamp */
+ pps->clear_tu = ts_real;
+ pps->clear_sequence++;
+- dev_dbg(pps->dev, "capture clear seq #%u\n",
++ dev_dbg(&pps->dev, "capture clear seq #%u\n",
+ pps->clear_sequence);
+
+ captured = ~0;
+diff --git a/drivers/pps/kc.c b/drivers/pps/kc.c
+index 50dc59af45be24..fbd23295afd7d9 100644
+--- a/drivers/pps/kc.c
++++ b/drivers/pps/kc.c
+@@ -43,11 +43,11 @@ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args)
+ pps_kc_hardpps_mode = 0;
+ pps_kc_hardpps_dev = NULL;
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_info(pps->dev, "unbound kernel"
++ dev_info(&pps->dev, "unbound kernel"
+ " consumer\n");
+ } else {
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_err(pps->dev, "selected kernel consumer"
++ dev_err(&pps->dev, "selected kernel consumer"
+ " is not bound\n");
+ return -EINVAL;
+ }
+@@ -57,11 +57,11 @@ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args)
+ pps_kc_hardpps_mode = bind_args->edge;
+ pps_kc_hardpps_dev = pps;
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_info(pps->dev, "bound kernel consumer: "
++ dev_info(&pps->dev, "bound kernel consumer: "
+ "edge=0x%x\n", bind_args->edge);
+ } else {
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_err(pps->dev, "another kernel consumer"
++ dev_err(&pps->dev, "another kernel consumer"
+ " is already bound\n");
+ return -EINVAL;
+ }
+@@ -83,7 +83,7 @@ void pps_kc_remove(struct pps_device *pps)
+ pps_kc_hardpps_mode = 0;
+ pps_kc_hardpps_dev = NULL;
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_info(pps->dev, "unbound kernel consumer"
++ dev_info(&pps->dev, "unbound kernel consumer"
+ " on device removal\n");
+ } else
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
+index 25d47907db175e..6a02245ea35fec 100644
+--- a/drivers/pps/pps.c
++++ b/drivers/pps/pps.c
+@@ -25,7 +25,7 @@
+ * Local variables
+ */
+
+-static dev_t pps_devt;
++static int pps_major;
+ static struct class *pps_class;
+
+ static DEFINE_MUTEX(pps_idr_lock);
+@@ -62,7 +62,7 @@ static int pps_cdev_pps_fetch(struct pps_device *pps, struct pps_fdata *fdata)
+ else {
+ unsigned long ticks;
+
+- dev_dbg(pps->dev, "timeout %lld.%09d\n",
++ dev_dbg(&pps->dev, "timeout %lld.%09d\n",
+ (long long) fdata->timeout.sec,
+ fdata->timeout.nsec);
+ ticks = fdata->timeout.sec * HZ;
+@@ -80,7 +80,7 @@ static int pps_cdev_pps_fetch(struct pps_device *pps, struct pps_fdata *fdata)
+
+ /* Check for pending signals */
+ if (err == -ERESTARTSYS) {
+- dev_dbg(pps->dev, "pending signal caught\n");
++ dev_dbg(&pps->dev, "pending signal caught\n");
+ return -EINTR;
+ }
+
+@@ -98,7 +98,7 @@ static long pps_cdev_ioctl(struct file *file,
+
+ switch (cmd) {
+ case PPS_GETPARAMS:
+- dev_dbg(pps->dev, "PPS_GETPARAMS\n");
++ dev_dbg(&pps->dev, "PPS_GETPARAMS\n");
+
+ spin_lock_irq(&pps->lock);
+
+@@ -114,7 +114,7 @@ static long pps_cdev_ioctl(struct file *file,
+ break;
+
+ case PPS_SETPARAMS:
+- dev_dbg(pps->dev, "PPS_SETPARAMS\n");
++ dev_dbg(&pps->dev, "PPS_SETPARAMS\n");
+
+ /* Check the capabilities */
+ if (!capable(CAP_SYS_TIME))
+@@ -124,14 +124,14 @@ static long pps_cdev_ioctl(struct file *file,
+ if (err)
+ return -EFAULT;
+ if (!(params.mode & (PPS_CAPTUREASSERT | PPS_CAPTURECLEAR))) {
+- dev_dbg(pps->dev, "capture mode unspecified (%x)\n",
++ dev_dbg(&pps->dev, "capture mode unspecified (%x)\n",
+ params.mode);
+ return -EINVAL;
+ }
+
+ /* Check for supported capabilities */
+ if ((params.mode & ~pps->info.mode) != 0) {
+- dev_dbg(pps->dev, "unsupported capabilities (%x)\n",
++ dev_dbg(&pps->dev, "unsupported capabilities (%x)\n",
+ params.mode);
+ return -EINVAL;
+ }
+@@ -144,7 +144,7 @@ static long pps_cdev_ioctl(struct file *file,
+ /* Restore the read only parameters */
+ if ((params.mode & (PPS_TSFMT_TSPEC | PPS_TSFMT_NTPFP)) == 0) {
+ /* section 3.3 of RFC 2783 interpreted */
+- dev_dbg(pps->dev, "time format unspecified (%x)\n",
++ dev_dbg(&pps->dev, "time format unspecified (%x)\n",
+ params.mode);
+ pps->params.mode |= PPS_TSFMT_TSPEC;
+ }
+@@ -165,7 +165,7 @@ static long pps_cdev_ioctl(struct file *file,
+ break;
+
+ case PPS_GETCAP:
+- dev_dbg(pps->dev, "PPS_GETCAP\n");
++ dev_dbg(&pps->dev, "PPS_GETCAP\n");
+
+ err = put_user(pps->info.mode, iuarg);
+ if (err)
+@@ -176,7 +176,7 @@ static long pps_cdev_ioctl(struct file *file,
+ case PPS_FETCH: {
+ struct pps_fdata fdata;
+
+- dev_dbg(pps->dev, "PPS_FETCH\n");
++ dev_dbg(&pps->dev, "PPS_FETCH\n");
+
+ err = copy_from_user(&fdata, uarg, sizeof(struct pps_fdata));
+ if (err)
+@@ -206,7 +206,7 @@ static long pps_cdev_ioctl(struct file *file,
+ case PPS_KC_BIND: {
+ struct pps_bind_args bind_args;
+
+- dev_dbg(pps->dev, "PPS_KC_BIND\n");
++ dev_dbg(&pps->dev, "PPS_KC_BIND\n");
+
+ /* Check the capabilities */
+ if (!capable(CAP_SYS_TIME))
+@@ -218,7 +218,7 @@ static long pps_cdev_ioctl(struct file *file,
+
+ /* Check for supported capabilities */
+ if ((bind_args.edge & ~pps->info.mode) != 0) {
+- dev_err(pps->dev, "unsupported capabilities (%x)\n",
++ dev_err(&pps->dev, "unsupported capabilities (%x)\n",
+ bind_args.edge);
+ return -EINVAL;
+ }
+@@ -227,7 +227,7 @@ static long pps_cdev_ioctl(struct file *file,
+ if (bind_args.tsformat != PPS_TSFMT_TSPEC ||
+ (bind_args.edge & ~PPS_CAPTUREBOTH) != 0 ||
+ bind_args.consumer != PPS_KC_HARDPPS) {
+- dev_err(pps->dev, "invalid kernel consumer bind"
++ dev_err(&pps->dev, "invalid kernel consumer bind"
+ " parameters (%x)\n", bind_args.edge);
+ return -EINVAL;
+ }
+@@ -259,7 +259,7 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ struct pps_fdata fdata;
+ int err;
+
+- dev_dbg(pps->dev, "PPS_FETCH\n");
++ dev_dbg(&pps->dev, "PPS_FETCH\n");
+
+ err = copy_from_user(&compat, uarg, sizeof(struct pps_fdata_compat));
+ if (err)
+@@ -296,20 +296,36 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ #define pps_cdev_compat_ioctl NULL
+ #endif
+
++static struct pps_device *pps_idr_get(unsigned long id)
++{
++ struct pps_device *pps;
++
++ mutex_lock(&pps_idr_lock);
++ pps = idr_find(&pps_idr, id);
++ if (pps)
++ get_device(&pps->dev);
++
++ mutex_unlock(&pps_idr_lock);
++ return pps;
++}
++
+ static int pps_cdev_open(struct inode *inode, struct file *file)
+ {
+- struct pps_device *pps = container_of(inode->i_cdev,
+- struct pps_device, cdev);
++ struct pps_device *pps = pps_idr_get(iminor(inode));
++
++ if (!pps)
++ return -ENODEV;
++
+ file->private_data = pps;
+- kobject_get(&pps->dev->kobj);
+ return 0;
+ }
+
+ static int pps_cdev_release(struct inode *inode, struct file *file)
+ {
+- struct pps_device *pps = container_of(inode->i_cdev,
+- struct pps_device, cdev);
+- kobject_put(&pps->dev->kobj);
++ struct pps_device *pps = file->private_data;
++
++ WARN_ON(pps->id != iminor(inode));
++ put_device(&pps->dev);
+ return 0;
+ }
+
+@@ -331,22 +347,13 @@ static void pps_device_destruct(struct device *dev)
+ {
+ struct pps_device *pps = dev_get_drvdata(dev);
+
+- cdev_del(&pps->cdev);
+-
+- /* Now we can release the ID for re-use */
+ pr_debug("deallocating pps%d\n", pps->id);
+- mutex_lock(&pps_idr_lock);
+- idr_remove(&pps_idr, pps->id);
+- mutex_unlock(&pps_idr_lock);
+-
+- kfree(dev);
+ kfree(pps);
+ }
+
+ int pps_register_cdev(struct pps_device *pps)
+ {
+ int err;
+- dev_t devt;
+
+ mutex_lock(&pps_idr_lock);
+ /*
+@@ -363,40 +370,29 @@ int pps_register_cdev(struct pps_device *pps)
+ goto out_unlock;
+ }
+ pps->id = err;
+- mutex_unlock(&pps_idr_lock);
+-
+- devt = MKDEV(MAJOR(pps_devt), pps->id);
+-
+- cdev_init(&pps->cdev, &pps_cdev_fops);
+- pps->cdev.owner = pps->info.owner;
+
+- err = cdev_add(&pps->cdev, devt, 1);
+- if (err) {
+- pr_err("%s: failed to add char device %d:%d\n",
+- pps->info.name, MAJOR(pps_devt), pps->id);
++ pps->dev.class = pps_class;
++ pps->dev.parent = pps->info.dev;
++ pps->dev.devt = MKDEV(pps_major, pps->id);
++ dev_set_drvdata(&pps->dev, pps);
++ dev_set_name(&pps->dev, "pps%d", pps->id);
++ err = device_register(&pps->dev);
++ if (err)
+ goto free_idr;
+- }
+- pps->dev = device_create(pps_class, pps->info.dev, devt, pps,
+- "pps%d", pps->id);
+- if (IS_ERR(pps->dev)) {
+- err = PTR_ERR(pps->dev);
+- goto del_cdev;
+- }
+
+ /* Override the release function with our own */
+- pps->dev->release = pps_device_destruct;
++ pps->dev.release = pps_device_destruct;
+
+- pr_debug("source %s got cdev (%d:%d)\n", pps->info.name,
+- MAJOR(pps_devt), pps->id);
++ pr_debug("source %s got cdev (%d:%d)\n", pps->info.name, pps_major,
++ pps->id);
+
++ get_device(&pps->dev);
++ mutex_unlock(&pps_idr_lock);
+ return 0;
+
+-del_cdev:
+- cdev_del(&pps->cdev);
+-
+ free_idr:
+- mutex_lock(&pps_idr_lock);
+ idr_remove(&pps_idr, pps->id);
++ put_device(&pps->dev);
+ out_unlock:
+ mutex_unlock(&pps_idr_lock);
+ return err;
+@@ -406,7 +402,13 @@ void pps_unregister_cdev(struct pps_device *pps)
+ {
+ pr_debug("unregistering pps%d\n", pps->id);
+ pps->lookup_cookie = NULL;
+- device_destroy(pps_class, pps->dev->devt);
++ device_destroy(pps_class, pps->dev.devt);
++
++ /* Now we can release the ID for re-use */
++ mutex_lock(&pps_idr_lock);
++ idr_remove(&pps_idr, pps->id);
++ put_device(&pps->dev);
++ mutex_unlock(&pps_idr_lock);
+ }
+
+ /*
+@@ -426,6 +428,11 @@ void pps_unregister_cdev(struct pps_device *pps)
+ * so that it will not be used again, even if the pps device cannot
+ * be removed from the idr due to pending references holding the minor
+ * number in use.
++ *
++ * Since pps_idr holds a reference to the device, the returned
++ * pps_device is guaranteed to be valid until pps_unregister_cdev() is
++ * called on it. But after calling pps_unregister_cdev(), it may be
++ * freed at any time.
+ */
+ struct pps_device *pps_lookup_dev(void const *cookie)
+ {
+@@ -448,13 +455,11 @@ EXPORT_SYMBOL(pps_lookup_dev);
+ static void __exit pps_exit(void)
+ {
+ class_destroy(pps_class);
+- unregister_chrdev_region(pps_devt, PPS_MAX_SOURCES);
++ __unregister_chrdev(pps_major, 0, PPS_MAX_SOURCES, "pps");
+ }
+
+ static int __init pps_init(void)
+ {
+- int err;
+-
+ pps_class = class_create("pps");
+ if (IS_ERR(pps_class)) {
+ pr_err("failed to allocate class\n");
+@@ -462,8 +467,9 @@ static int __init pps_init(void)
+ }
+ pps_class->dev_groups = pps_groups;
+
+- err = alloc_chrdev_region(&pps_devt, 0, PPS_MAX_SOURCES, "pps");
+- if (err < 0) {
++ pps_major = __register_chrdev(0, 0, PPS_MAX_SOURCES, "pps",
++ &pps_cdev_fops);
++ if (pps_major < 0) {
+ pr_err("failed to allocate char device region\n");
+ goto remove_class;
+ }
+@@ -476,8 +482,7 @@ static int __init pps_init(void)
+
+ remove_class:
+ class_destroy(pps_class);
+-
+- return err;
++ return pps_major;
+ }
+
+ subsys_initcall(pps_init);
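
The pps rework above embeds struct device in struct pps_device and pins it by reference counting: the idr holds one reference for the lifetime of the registration, and each open() takes another via get_device(), so an in-flight fetch cannot race with unregistration. A condensed sketch of that lookup-and-pin pattern (all names illustrative):

	#include <linux/device.h>
	#include <linux/fs.h>
	#include <linux/idr.h>
	#include <linux/mutex.h>

	struct ex_dev {
		struct device dev;	/* embedded; freed from dev.release */
	};

	static DEFINE_MUTEX(ex_idr_lock);
	static DEFINE_IDR(ex_idr);

	/* Look up by minor and take a reference under the idr lock, so the
	 * device cannot be freed between idr_find() and get_device(). */
	static struct ex_dev *ex_idr_get(unsigned long id)
	{
		struct ex_dev *d;

		mutex_lock(&ex_idr_lock);
		d = idr_find(&ex_idr, id);
		if (d)
			get_device(&d->dev);
		mutex_unlock(&ex_idr_lock);
		return d;
	}

	static int ex_open(struct inode *inode, struct file *file)
	{
		struct ex_dev *d = ex_idr_get(iminor(inode));

		if (!d)
			return -ENODEV;
		file->private_data = d;
		return 0;
	}

	static int ex_release(struct inode *inode, struct file *file)
	{
		struct ex_dev *d = file->private_data;

		put_device(&d->dev);	/* drop the open()-time reference */
		return 0;
	}
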
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index ea96a14d72d141..bf6468c56419c5 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -4,6 +4,7 @@
+ *
+ * Copyright (C) 2010 OMICRON electronics GmbH
+ */
++#include <linux/compat.h>
+ #include <linux/module.h>
+ #include <linux/posix-clock.h>
+ #include <linux/poll.h>
+@@ -176,6 +177,9 @@ long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd,
+ struct timespec64 ts;
+ int enable, err = 0;
+
++ if (in_compat_syscall() && cmd != PTP_ENABLE_PPS && cmd != PTP_ENABLE_PPS2)
++ arg = (unsigned long)compat_ptr(arg);
++
+ tsevq = pccontext->private_clkdata;
+
+ switch (cmd) {
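
The ptp_ioctl change above handles 32-bit callers: any command whose argument is a user pointer must pass through compat_ptr(), while the two PPS-enable commands carry a plain integer and are exempt. A sketch of the same guard:

	#include <linux/compat.h>
	#include <linux/ptp_clock.h>

	static unsigned long ex_fixup_arg(unsigned int cmd, unsigned long arg)
	{
		/* Pointer-carrying commands from compat tasks need compat_ptr(). */
		if (in_compat_syscall() &&
		    cmd != PTP_ENABLE_PPS && cmd != PTP_ENABLE_PPS2)
			arg = (unsigned long)compat_ptr(arg);

		return arg;
	}
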
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 5feecaadde8e05..120db96d9e95d6 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -4420,7 +4420,7 @@ ptp_ocp_complete(struct ptp_ocp *bp)
+
+ pps = pps_lookup_dev(bp->ptp);
+ if (pps)
+- ptp_ocp_symlink(bp, pps->dev, "pps");
++ ptp_ocp_symlink(bp, &pps->dev, "pps");
+
+ ptp_ocp_debugfs_add_device(bp);
+
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 210368099a0642..174939359ae3eb 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -6,7 +6,7 @@
+ * Copyright (C) 2011-2012 Avionic Design GmbH
+ */
+
+-#define DEFAULT_SYMBOL_NAMESPACE PWM
++#define DEFAULT_SYMBOL_NAMESPACE "PWM"
+
+ #include <linux/acpi.h>
+ #include <linux/module.h>
+diff --git a/drivers/pwm/pwm-dwc-core.c b/drivers/pwm/pwm-dwc-core.c
+index c8425493b95d85..6dabec93a3c641 100644
+--- a/drivers/pwm/pwm-dwc-core.c
++++ b/drivers/pwm/pwm-dwc-core.c
+@@ -9,7 +9,7 @@
+ * Author: Raymond Tan <raymond.tan@intel.com>
+ */
+
+-#define DEFAULT_SYMBOL_NAMESPACE dwc_pwm
++#define DEFAULT_SYMBOL_NAMESPACE "dwc_pwm"
+
+ #include <linux/bitops.h>
+ #include <linux/export.h>
+diff --git a/drivers/pwm/pwm-lpss.c b/drivers/pwm/pwm-lpss.c
+index 867e2bc8c601c8..3b99feb3bb4918 100644
+--- a/drivers/pwm/pwm-lpss.c
++++ b/drivers/pwm/pwm-lpss.c
+@@ -19,7 +19,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/time.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE PWM_LPSS
++#define DEFAULT_SYMBOL_NAMESPACE "PWM_LPSS"
+
+ #include "pwm-lpss.h"
+
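
The three PWM hunks above (and the later sc16is7xx and usb-storage hunks in this patch) apply the tree-wide conversion of symbol namespaces from bare identifiers to string literals. Both sides of the interface change together; a sketch with an illustrative namespace name, assuming the post-conversion quoted syntax:

	/* Exporting side: the namespace is a quoted string, defined
	 * before linux/export.h is pulled in. */
	#define DEFAULT_SYMBOL_NAMESPACE "EXAMPLE_NS"

	#include <linux/export.h>
	#include <linux/module.h>

	void ex_helper(void) { }
	EXPORT_SYMBOL_GPL(ex_helper);

	/* Importing module declares, likewise as a string: */
	MODULE_IMPORT_NS("EXAMPLE_NS");
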
+diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
+index 989731256f5030..5832dce8ed9d58 100644
+--- a/drivers/pwm/pwm-stm32-lp.c
++++ b/drivers/pwm/pwm-stm32-lp.c
+@@ -167,8 +167,12 @@ static int stm32_pwm_lp_get_state(struct pwm_chip *chip,
+ regmap_read(priv->regmap, STM32_LPTIM_CR, &val);
+ state->enabled = !!FIELD_GET(STM32_LPTIM_ENABLE, val);
+ /* Keep PWM counter clock refcount in sync with PWM initial state */
+- if (state->enabled)
+- clk_enable(priv->clk);
++ if (state->enabled) {
++ int ret = clk_enable(priv->clk);
++
++ if (ret)
++ return ret;
++ }
+
+ regmap_read(priv->regmap, STM32_LPTIM_CFGR, &val);
+ presc = FIELD_GET(STM32_LPTIM_PRESC, val);
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index eb24054f972973..4f231f8aae7d4c 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -688,8 +688,11 @@ static int stm32_pwm_probe(struct platform_device *pdev)
+ chip->ops = &stm32pwm_ops;
+
+ /* Initialize clock refcount to number of enabled PWM channels. */
+- for (i = 0; i < num_enabled; i++)
+- clk_enable(priv->clk);
++ for (i = 0; i < num_enabled; i++) {
++ ret = clk_enable(priv->clk);
++ if (ret)
++ return ret;
++ }
+
+ ret = devm_pwmchip_add(dev, chip);
+ if (ret < 0)
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 1179766811f583..4bb2652740d001 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -4946,7 +4946,7 @@ int _regulator_bulk_get(struct device *dev, int num_consumers,
+ consumers[i].supply, get_type);
+ if (IS_ERR(consumers[i].consumer)) {
+ ret = dev_err_probe(dev, PTR_ERR(consumers[i].consumer),
+- "Failed to get supply '%s'",
++ "Failed to get supply '%s'\n",
+ consumers[i].supply);
+ consumers[i].consumer = NULL;
+ goto err;
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 3f490d81abc28f..deab0b95b6637d 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -446,7 +446,7 @@ int of_regulator_match(struct device *dev, struct device_node *node,
+ "failed to parse DT for regulator %pOFn\n",
+ child);
+ of_node_put(child);
+- return -EINVAL;
++ goto err_put;
+ }
+ match->of_node = of_node_get(child);
+ count++;
+@@ -455,6 +455,18 @@ int of_regulator_match(struct device *dev, struct device_node *node,
+ }
+
+ return count;
++
++err_put:
++ for (i = 0; i < num_matches; i++) {
++ struct of_regulator_match *match = &matches[i];
++
++ match->init_data = NULL;
++ if (match->of_node) {
++ of_node_put(match->of_node);
++ match->of_node = NULL;
++ }
++ }
++ return -EINVAL;
+ }
+ EXPORT_SYMBOL_GPL(of_regulator_match);
+
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index e744c07507eede..f98a11d4cf2920 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -1326,6 +1326,11 @@ static int scp_cluster_init(struct platform_device *pdev, struct mtk_scp_of_clus
+ return ret;
+ }
+
++static const struct of_device_id scp_core_match[] = {
++ { .compatible = "mediatek,scp-core" },
++ {}
++};
++
+ static int scp_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -1357,13 +1362,15 @@ static int scp_probe(struct platform_device *pdev)
+ INIT_LIST_HEAD(&scp_cluster->mtk_scp_list);
+ mutex_init(&scp_cluster->cluster_lock);
+
+- ret = devm_of_platform_populate(dev);
++ ret = of_platform_populate(dev_of_node(dev), scp_core_match, NULL, dev);
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to populate platform devices\n");
+
+ ret = scp_cluster_init(pdev, scp_cluster);
+- if (ret)
++ if (ret) {
++ of_platform_depopulate(dev);
+ return ret;
++ }
+
+ return 0;
+ }
+@@ -1379,6 +1386,7 @@ static void scp_remove(struct platform_device *pdev)
+ rproc_del(scp->rproc);
+ scp_free(scp);
+ }
++ of_platform_depopulate(&pdev->dev);
+ mutex_destroy(&scp_cluster->cluster_lock);
+ }
+
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index f276956f2c5cec..ef6febe3563307 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2486,6 +2486,13 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+ rproc->dev.driver_data = rproc;
+ idr_init(&rproc->notifyids);
+
++ /* Assign a unique device index and name */
++ rproc->index = ida_alloc(&rproc_dev_index, GFP_KERNEL);
++ if (rproc->index < 0) {
++ dev_err(dev, "ida_alloc failed: %d\n", rproc->index);
++ goto put_device;
++ }
++
+ rproc->name = kstrdup_const(name, GFP_KERNEL);
+ if (!rproc->name)
+ goto put_device;
+@@ -2496,13 +2503,6 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+ if (rproc_alloc_ops(rproc, ops))
+ goto put_device;
+
+- /* Assign a unique device index and name */
+- rproc->index = ida_alloc(&rproc_dev_index, GFP_KERNEL);
+- if (rproc->index < 0) {
+- dev_err(dev, "ida_alloc failed: %d\n", rproc->index);
+- goto put_device;
+- }
+-
+ dev_set_name(&rproc->dev, "remoteproc%d", rproc->index);
+
+ atomic_set(&rproc->power, 0);
+diff --git a/drivers/rtc/rtc-loongson.c b/drivers/rtc/rtc-loongson.c
+index e8ffc1ab90b02f..90e9d97a86b487 100644
+--- a/drivers/rtc/rtc-loongson.c
++++ b/drivers/rtc/rtc-loongson.c
+@@ -114,6 +114,13 @@ static irqreturn_t loongson_rtc_isr(int irq, void *id)
+ struct loongson_rtc_priv *priv = (struct loongson_rtc_priv *)id;
+
+ rtc_update_irq(priv->rtcdev, 1, RTC_AF | RTC_IRQF);
++
++ /*
++	 * The TOY_MATCH0_REG should be cleared to 0 here,
++ * otherwise the interrupt cannot be cleared.
++ */
++ regmap_write(priv->regmap, TOY_MATCH0_REG, 0);
++
+ return IRQ_HANDLED;
+ }
+
+@@ -131,11 +138,7 @@ static u32 loongson_rtc_handler(void *id)
+ writel(RTC_STS, priv->pm_base + PM1_STS_REG);
+ spin_unlock(&priv->lock);
+
+- /*
+- * The TOY_MATCH0_REG should be cleared 0 here,
+- * otherwise the interrupt cannot be cleared.
+- */
+- return regmap_write(priv->regmap, TOY_MATCH0_REG, 0);
++ return ACPI_INTERRUPT_HANDLED;
+ }
+
+ static int loongson_rtc_set_enabled(struct device *dev)
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index fdbc07f14036af..905986c616559b 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -322,7 +322,16 @@ static const struct rtc_class_ops pcf85063_rtc_ops = {
+ static int pcf85063_nvmem_read(void *priv, unsigned int offset,
+ void *val, size_t bytes)
+ {
+- return regmap_read(priv, PCF85063_REG_RAM, val);
++ unsigned int tmp;
++ int ret;
++
++ ret = regmap_read(priv, PCF85063_REG_RAM, &tmp);
++ if (ret < 0)
++ return ret;
++
++ *(u8 *)val = tmp;
++
++ return 0;
+ }
+
+ static int pcf85063_nvmem_write(void *priv, unsigned int offset,
+diff --git a/drivers/rtc/rtc-tps6594.c b/drivers/rtc/rtc-tps6594.c
+index e696676341378e..7c6246e3f02923 100644
+--- a/drivers/rtc/rtc-tps6594.c
++++ b/drivers/rtc/rtc-tps6594.c
+@@ -37,7 +37,7 @@
+ #define MAX_OFFSET (277774)
+
+ // Number of ticks per hour
+-#define TICKS_PER_HOUR (32768 * 3600)
++#define TICKS_PER_HOUR (32768 * 3600LL)
+
+ // Multiplier for ppb conversions
+ #define PPB_MULT NANO
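
For the tps6594 fix above: 32768 * 3600 = 117,964,800 still fits in a 32-bit int, but the driver's offset conversion multiplies TICKS_PER_HOUR further, and that product does not. The LL suffix makes the constant long long so the whole expression is evaluated in 64 bits. In miniature (the factor 300 stands in for an assumed correction value):

	int ppb = 300;
	long long bad  = 32768 * 3600 * ppb;	/* 32-bit multiply: ~3.5e10,
						 * overflows INT_MAX (undefined) */
	long long good = 32768 * 3600LL * ppb;	/* promoted to 64-bit throughout */
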
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index fbffd451031fdb..45bd001206a2b8 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -245,7 +245,6 @@ static void sclp_request_timeout(bool force_restart);
+ static void sclp_process_queue(void);
+ static void __sclp_make_read_req(void);
+ static int sclp_init_mask(int calculate);
+-static int sclp_init(void);
+
+ static void
+ __sclp_queue_read_req(void)
+@@ -1251,8 +1250,7 @@ static struct platform_driver sclp_pdrv = {
+
+ /* Initialize SCLP driver. Return zero if driver is operational, non-zero
+ * otherwise. */
+-static int
+-sclp_init(void)
++int sclp_init(void)
+ {
+ unsigned long flags;
+ int rc = 0;
+@@ -1305,13 +1303,7 @@ sclp_init(void)
+
+ static __init int sclp_initcall(void)
+ {
+- int rc;
+-
+- rc = platform_driver_register(&sclp_pdrv);
+- if (rc)
+- return rc;
+-
+- return sclp_init();
++ return platform_driver_register(&sclp_pdrv);
+ }
+
+ arch_initcall(sclp_initcall);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 10b8e4dc64f8b0..7589f48aebc80f 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -2951,6 +2951,7 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc)
+ .max_hw_sectors = MPI3MR_MAX_APP_XFER_SECTORS,
+ .max_segments = MPI3MR_MAX_APP_XFER_SEGMENTS,
+ };
++ struct request_queue *q;
+
+ device_initialize(bsg_dev);
+
+@@ -2966,14 +2967,17 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc)
+ return;
+ }
+
+- mrioc->bsg_queue = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), &lim,
++ q = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), &lim,
+ mpi3mr_bsg_request, NULL, 0);
+- if (IS_ERR(mrioc->bsg_queue)) {
++ if (IS_ERR(q)) {
+ ioc_err(mrioc, "%s: bsg registration failed\n",
+ dev_name(bsg_dev));
+ device_del(bsg_dev);
+ put_device(bsg_dev);
++ return;
+ }
++
++ mrioc->bsg_queue = q;
+ }
+
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 16ac2267c71e19..c1d8f2c91a5e51 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -5629,8 +5629,7 @@ _base_static_config_pages(struct MPT3SAS_ADAPTER *ioc)
+ if (!ioc->is_gen35_ioc && ioc->manu_pg11.EEDPTagMode == 0) {
+ pr_err("%s: overriding NVDATA EEDPTagMode setting\n",
+ ioc->name);
+- ioc->manu_pg11.EEDPTagMode &= ~0x3;
+- ioc->manu_pg11.EEDPTagMode |= 0x1;
++ ioc->manu_pg11.EEDPTagMode = 0x1;
+ mpt3sas_config_set_manufacturing_pg11(ioc, &mpi_reply,
+ &ioc->manu_pg11);
+ }
+diff --git a/drivers/soc/atmel/soc.c b/drivers/soc/atmel/soc.c
+index 2a42b28931c96d..298b542dd1c064 100644
+--- a/drivers/soc/atmel/soc.c
++++ b/drivers/soc/atmel/soc.c
+@@ -399,7 +399,7 @@ static const struct of_device_id at91_soc_allowed_list[] __initconst = {
+
+ static int __init atmel_soc_device_init(void)
+ {
+- struct device_node *np = of_find_node_by_path("/");
++ struct device_node *np __free(device_node) = of_find_node_by_path("/");
+
+ if (!of_match_node(at91_soc_allowed_list, np))
+ return 0;
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 4a2f84c4d22e5f..532b2e9c31d0d3 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -1561,10 +1561,15 @@ static int omap2_mcspi_probe(struct platform_device *pdev)
+ }
+
+ mcspi->ref_clk = devm_clk_get_optional_enabled(&pdev->dev, NULL);
+- if (IS_ERR(mcspi->ref_clk))
+- mcspi->ref_clk_hz = OMAP2_MCSPI_MAX_FREQ;
+- else
++ if (IS_ERR(mcspi->ref_clk)) {
++ status = PTR_ERR(mcspi->ref_clk);
++ dev_err_probe(&pdev->dev, status, "Failed to get ref_clk");
++ goto free_ctlr;
++ }
++ if (mcspi->ref_clk)
+ mcspi->ref_clk_hz = clk_get_rate(mcspi->ref_clk);
++ else
++ mcspi->ref_clk_hz = OMAP2_MCSPI_MAX_FREQ;
+ ctlr->max_speed_hz = mcspi->ref_clk_hz;
+ ctlr->min_speed_hz = mcspi->ref_clk_hz >> 15;
+
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index b67455bda972b2..de4c182474329d 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -379,12 +379,21 @@ static int zynq_qspi_setup_op(struct spi_device *spi)
+ {
+ struct spi_controller *ctlr = spi->controller;
+ struct zynq_qspi *qspi = spi_controller_get_devdata(ctlr);
++ int ret;
+
+ if (ctlr->busy)
+ return -EBUSY;
+
+- clk_enable(qspi->refclk);
+- clk_enable(qspi->pclk);
++ ret = clk_enable(qspi->refclk);
++ if (ret)
++ return ret;
++
++ ret = clk_enable(qspi->pclk);
++ if (ret) {
++ clk_disable(qspi->refclk);
++ return ret;
++ }
++
+ zynq_qspi_write(qspi, ZYNQ_QSPI_ENABLE_OFFSET,
+ ZYNQ_QSPI_ENABLE_ENABLE_MASK);
+
+diff --git a/drivers/staging/media/imx/imx-media-of.c b/drivers/staging/media/imx/imx-media-of.c
+index 118bff988bc7e6..bb28daa4d71334 100644
+--- a/drivers/staging/media/imx/imx-media-of.c
++++ b/drivers/staging/media/imx/imx-media-of.c
+@@ -54,22 +54,18 @@ int imx_media_add_of_subdevs(struct imx_media_dev *imxmd,
+ break;
+
+ ret = imx_media_of_add_csi(imxmd, csi_np);
++ of_node_put(csi_np);
+ if (ret) {
+ /* unavailable or already added is not an error */
+ if (ret == -ENODEV || ret == -EEXIST) {
+- of_node_put(csi_np);
+ continue;
+ }
+
+ /* other error, can't continue */
+- goto err_out;
++ return ret;
+ }
+ }
+
+ return 0;
+-
+-err_out:
+- of_node_put(csi_np);
+- return ret;
+ }
+ EXPORT_SYMBOL_GPL(imx_media_add_of_subdevs);
+diff --git a/drivers/staging/media/max96712/max96712.c b/drivers/staging/media/max96712/max96712.c
+index 6bdbccbee05ac3..b528727ada75c6 100644
+--- a/drivers/staging/media/max96712/max96712.c
++++ b/drivers/staging/media/max96712/max96712.c
+@@ -421,7 +421,6 @@ static int max96712_probe(struct i2c_client *client)
+ return -ENOMEM;
+
+ priv->client = client;
+- i2c_set_clientdata(client, priv);
+
+ priv->regmap = devm_regmap_init_i2c(client, &max96712_i2c_regmap);
+ if (IS_ERR(priv->regmap))
+@@ -454,7 +453,8 @@ static int max96712_probe(struct i2c_client *client)
+
+ static void max96712_remove(struct i2c_client *client)
+ {
+- struct max96712_priv *priv = i2c_get_clientdata(client);
++ struct v4l2_subdev *sd = i2c_get_clientdata(client);
++ struct max96712_priv *priv = container_of(sd, struct max96712_priv, sd);
+
+ v4l2_async_unregister_subdev(&priv->sd);
+
+diff --git a/drivers/tty/mips_ejtag_fdc.c b/drivers/tty/mips_ejtag_fdc.c
+index afbf7738c7c47c..58b28be63c79b1 100644
+--- a/drivers/tty/mips_ejtag_fdc.c
++++ b/drivers/tty/mips_ejtag_fdc.c
+@@ -1154,7 +1154,7 @@ static char kgdbfdc_rbuf[4];
+
+ /* write buffer to allow compaction */
+ static unsigned int kgdbfdc_wbuflen;
+-static char kgdbfdc_wbuf[4];
++static u8 kgdbfdc_wbuf[4];
+
+ static void __iomem *kgdbfdc_setup(void)
+ {
+@@ -1215,7 +1215,7 @@ static int kgdbfdc_read_char(void)
+ /* push an FDC word from write buffer to TX FIFO */
+ static void kgdbfdc_push_one(void)
+ {
+- const char *bufs[1] = { kgdbfdc_wbuf };
++ const u8 *bufs[1] = { kgdbfdc_wbuf };
+ struct fdc_word word;
+ void __iomem *regs;
+ unsigned int i;
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3509af7dc52b88..11519aa2598a01 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2059,7 +2059,8 @@ static void serial8250_break_ctl(struct uart_port *port, int break_state)
+ serial8250_rpm_put(up);
+ }
+
+-static void wait_for_lsr(struct uart_8250_port *up, int bits)
++/* Returns true if @bits were set, false on timeout */
++static bool wait_for_lsr(struct uart_8250_port *up, int bits)
+ {
+ unsigned int status, tmout = 10000;
+
+@@ -2074,11 +2075,11 @@ static void wait_for_lsr(struct uart_8250_port *up, int bits)
+ udelay(1);
+ touch_nmi_watchdog();
+ }
++
++ return (tmout != 0);
+ }
+
+-/*
+- * Wait for transmitter & holding register to empty
+- */
++/* Wait for transmitter and holding register to empty with timeout */
+ static void wait_for_xmitr(struct uart_8250_port *up, int bits)
+ {
+ unsigned int tmout;
+@@ -3297,6 +3298,16 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS);
+ }
+
++static void fifo_wait_for_lsr(struct uart_8250_port *up, unsigned int count)
++{
++ unsigned int i;
++
++ for (i = 0; i < count; i++) {
++ if (wait_for_lsr(up, UART_LSR_THRE))
++ return;
++ }
++}
++
+ /*
+ * Print a string to the serial port using the device FIFO
+ *
+@@ -3306,13 +3317,15 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ static void serial8250_console_fifo_write(struct uart_8250_port *up,
+ const char *s, unsigned int count)
+ {
+- int i;
+ const char *end = s + count;
+ unsigned int fifosize = up->tx_loadsz;
++ unsigned int tx_count = 0;
+ bool cr_sent = false;
++ unsigned int i;
+
+ while (s != end) {
+- wait_for_lsr(up, UART_LSR_THRE);
++ /* Allow timeout for each byte of a possibly full FIFO */
++ fifo_wait_for_lsr(up, fifosize);
+
+ for (i = 0; i < fifosize && s != end; ++i) {
+ if (*s == '\n' && !cr_sent) {
+@@ -3323,7 +3336,14 @@ static void serial8250_console_fifo_write(struct uart_8250_port *up,
+ cr_sent = false;
+ }
+ }
++ tx_count = i;
+ }
++
++ /*
++ * Allow timeout for each byte written since the caller will only wait
++ * for UART_LSR_BOTH_EMPTY using the timeout of a single character
++ */
++ fifo_wait_for_lsr(up, tx_count);
+ }
+
+ /*
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index ad88a33a504f53..6a0a1cce3a897f 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -8,7 +8,7 @@
+ */
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+-#define DEFAULT_SYMBOL_NAMESPACE SERIAL_NXP_SC16IS7XX
++#define DEFAULT_SYMBOL_NAMESPACE "SERIAL_NXP_SC16IS7XX"
+
+ #include <linux/bits.h>
+ #include <linux/clk.h>
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 6c09d97ae00658..58023f735c195f 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -257,6 +257,7 @@ int ufs_bsg_probe(struct ufs_hba *hba)
+ NULL, 0);
+ if (IS_ERR(q)) {
+ ret = PTR_ERR(q);
++ device_del(bsg_dev);
+ goto out;
+ }
+
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 98114c2827c098..244e3e04e1ad74 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1660,8 +1660,6 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ u8 tx_thr_num_pkt_prd = 0;
+ u8 tx_max_burst_prd = 0;
+ u8 tx_fifo_resize_max_num;
+- const char *usb_psy_name;
+- int ret;
+
+ /* default to highest possible threshold */
+ lpm_nyet_threshold = 0xf;
+@@ -1696,13 +1694,6 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+
+ dwc->sys_wakeup = device_may_wakeup(dwc->sysdev);
+
+- ret = device_property_read_string(dev, "usb-psy-name", &usb_psy_name);
+- if (ret >= 0) {
+- dwc->usb_psy = power_supply_get_by_name(usb_psy_name);
+- if (!dwc->usb_psy)
+- dev_err(dev, "couldn't get usb power supply\n");
+- }
+-
+ dwc->has_lpm_erratum = device_property_read_bool(dev,
+ "snps,has-lpm-erratum");
+ device_property_read_u8(dev, "snps,lpm-nyet-threshold",
+@@ -2105,6 +2096,23 @@ static int dwc3_get_num_ports(struct dwc3 *dwc)
+ return 0;
+ }
+
++static struct power_supply *dwc3_get_usb_power_supply(struct dwc3 *dwc)
++{
++ struct power_supply *usb_psy;
++ const char *usb_psy_name;
++ int ret;
++
++ ret = device_property_read_string(dwc->dev, "usb-psy-name", &usb_psy_name);
++ if (ret < 0)
++ return NULL;
++
++ usb_psy = power_supply_get_by_name(usb_psy_name);
++ if (!usb_psy)
++ return ERR_PTR(-EPROBE_DEFER);
++
++ return usb_psy;
++}
++
+ static int dwc3_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -2161,6 +2169,10 @@ static int dwc3_probe(struct platform_device *pdev)
+
+ dwc3_get_software_properties(dwc);
+
++ dwc->usb_psy = dwc3_get_usb_power_supply(dwc);
++ if (IS_ERR(dwc->usb_psy))
++ return dev_err_probe(dev, PTR_ERR(dwc->usb_psy), "couldn't get usb power supply\n");
++
+ dwc->reset = devm_reset_control_array_get_optional_shared(dev);
+ if (IS_ERR(dwc->reset)) {
+ ret = PTR_ERR(dwc->reset);
+@@ -2585,12 +2597,15 @@ static int dwc3_resume(struct device *dev)
+ pinctrl_pm_select_default_state(dev);
+
+ pm_runtime_disable(dev);
+- pm_runtime_set_active(dev);
++ ret = pm_runtime_set_active(dev);
++ if (ret)
++ goto out;
+
+ ret = dwc3_resume_common(dwc, PMSG_RESUME);
+ if (ret)
+ pm_runtime_set_suspended(dev);
+
++out:
+ pm_runtime_enable(dev);
+
+ return ret;
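
The dwc3 refactor above moves the power-supply lookup out of dwc3_get_properties() so probe can defer: NULL means the optional usb-psy-name property is absent, while ERR_PTR(-EPROBE_DEFER) means the property names a supply that is not registered yet. A sketch of the tristate return:

	#include <linux/err.h>
	#include <linux/power_supply.h>
	#include <linux/property.h>

	static struct power_supply *ex_get_usb_psy(struct device *dev)
	{
		struct power_supply *psy;
		const char *name;

		/* Property absent: the supply is optional, return NULL. */
		if (device_property_read_string(dev, "usb-psy-name", &name) < 0)
			return NULL;

		/* Named but not registered yet: ask the core to retry probe. */
		psy = power_supply_get_by_name(name);
		if (!psy)
			return ERR_PTR(-EPROBE_DEFER);

		return psy;
	}
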
+diff --git a/drivers/usb/dwc3/dwc3-am62.c b/drivers/usb/dwc3/dwc3-am62.c
+index 538185a4d1b4fb..c507e576bbe084 100644
+--- a/drivers/usb/dwc3/dwc3-am62.c
++++ b/drivers/usb/dwc3/dwc3-am62.c
+@@ -166,6 +166,7 @@ static int phy_syscon_pll_refclk(struct dwc3_am62 *am62)
+ if (ret)
+ return ret;
+
++ of_node_put(args.np);
+ am62->offset = args.args[0];
+
+ /* Core voltage. PHY_CORE_VOLTAGE bit Recommended to be 0 always */
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 15bb3aa12aa8b4..48dee166e5d89c 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1066,7 +1066,6 @@ static void usbg_cmd_work(struct work_struct *work)
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+ TCM_UNSUPPORTED_SCSI_OPCODE, 1);
+- transport_generic_free_cmd(&cmd->se_cmd, 0);
+ }
+
+ static struct usbg_cmd *usbg_get_cmd(struct f_uas *fu,
+@@ -1195,7 +1194,6 @@ static void bot_cmd_work(struct work_struct *work)
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+ TCM_UNSUPPORTED_SCSI_OPCODE, 1);
+- transport_generic_free_cmd(&cmd->se_cmd, 0);
+ }
+
+ static int bot_submit_command(struct f_uas *fu,
+@@ -2051,9 +2049,14 @@ static void tcm_delayed_set_alt(struct work_struct *wq)
+
+ static int tcm_get_alt(struct usb_function *f, unsigned intf)
+ {
+- if (intf == bot_intf_desc.bInterfaceNumber)
++ struct f_uas *fu = to_f_uas(f);
++
++ if (fu->iface != intf)
++ return -EOPNOTSUPP;
++
++ if (fu->flags & USBG_IS_BOT)
+ return USB_G_ALT_INT_BBB;
+- if (intf == uasp_intf_desc.bInterfaceNumber)
++ else if (fu->flags & USBG_IS_UAS)
+ return USB_G_ALT_INT_UAS;
+
+ return -EOPNOTSUPP;
+@@ -2063,6 +2066,9 @@ static int tcm_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ {
+ struct f_uas *fu = to_f_uas(f);
+
++ if (fu->iface != intf)
++ return -EOPNOTSUPP;
++
+ if ((alt == USB_G_ALT_INT_BBB) || (alt == USB_G_ALT_INT_UAS)) {
+ struct guas_setup_wq *work;
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index b267dae14d3904..4384b86ea7b66c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -422,7 +422,8 @@ static void xhci_handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ if ((xhci->cmd_ring->dequeue != xhci->cmd_ring->enqueue) &&
+ !(xhci->xhc_state & XHCI_STATE_DYING)) {
+ xhci->current_cmd = cur_cmd;
+- xhci_mod_cmd_timer(xhci);
++ if (cur_cmd)
++ xhci_mod_cmd_timer(xhci);
+ xhci_ring_cmd_db(xhci);
+ }
+ }
+diff --git a/drivers/usb/storage/Makefile b/drivers/usb/storage/Makefile
+index 46635fa4a3405d..28db337f190bf5 100644
+--- a/drivers/usb/storage/Makefile
++++ b/drivers/usb/storage/Makefile
+@@ -8,7 +8,7 @@
+
+ ccflags-y := -I $(srctree)/drivers/scsi
+
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_STORAGE
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_STORAGE"'
+
+ obj-$(CONFIG_USB_UAS) += uas.o
+ obj-$(CONFIG_USB_STORAGE) += usb-storage.o
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 24a6a4354df8ba..b2c83f552da55d 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -27,6 +27,7 @@
+ #define VPPS_NEW_MIN_PERCENT 95
+ #define VPPS_VALID_MIN_MV 100
+ #define VSINKDISCONNECT_PD_MIN_PERCENT 90
++#define VPPS_SHUTDOWN_MIN_PERCENT 85
+
+ struct tcpci {
+ struct device *dev;
+@@ -366,7 +367,8 @@ static int tcpci_enable_auto_vbus_discharge(struct tcpc_dev *dev, bool enable)
+ }
+
+ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum typec_pwr_opmode mode,
+- bool pps_active, u32 requested_vbus_voltage_mv)
++ bool pps_active, u32 requested_vbus_voltage_mv,
++ u32 apdo_min_voltage_mv)
+ {
+ struct tcpci *tcpci = tcpc_to_tcpci(dev);
+ unsigned int pwr_ctrl, threshold = 0;
+@@ -388,9 +390,12 @@ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum ty
+ threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
+ } else if (mode == TYPEC_PWR_MODE_PD) {
+ if (pps_active)
+- threshold = ((VPPS_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+- VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
+- VSINKDISCONNECT_PD_MIN_PERCENT / 100;
++ /*
++ * To prevent a disconnect when the source is in Current Limit Mode,
++ * set the threshold to the lowest possible voltage, vPpsShutdown (min).
++ */
++ threshold = VPPS_SHUTDOWN_MIN_PERCENT * apdo_min_voltage_mv / 100 -
++ VSINKPD_MIN_IR_DROP_MV;
+ else
+ threshold = ((VSRC_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+ VSINKPD_MIN_IR_DROP_MV - VSRC_VALID_MIN_MV) *
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 7ae341a403424c..48ddf27704619d 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2928,10 +2928,12 @@ static int tcpm_set_auto_vbus_discharge_threshold(struct tcpm_port *port,
+ return 0;
+
+ ret = port->tcpc->set_auto_vbus_discharge_threshold(port->tcpc, mode, pps_active,
+- requested_vbus_voltage);
++ requested_vbus_voltage,
++ port->pps_data.min_volt);
+ tcpm_log_force(port,
+- "set_auto_vbus_discharge_threshold mode:%d pps_active:%c vbus:%u ret:%d",
+- mode, pps_active ? 'y' : 'n', requested_vbus_voltage, ret);
++ "set_auto_vbus_discharge_threshold mode:%d pps_active:%c vbus:%u pps_apdo_min_volt:%u ret:%d",
++ mode, pps_active ? 'y' : 'n', requested_vbus_voltage,
++ port->pps_data.min_volt, ret);
+
+ return ret;
+ }
+@@ -4757,7 +4759,7 @@ static void run_state_machine(struct tcpm_port *port)
+ port->caps_count = 0;
+ port->pd_capable = true;
+ tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
+- PD_T_SEND_SOURCE_CAP);
++ PD_T_SENDER_RESPONSE);
+ }
+ break;
+ case SRC_SEND_CAPABILITIES_TIMEOUT:
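To make the tcpci change concrete, the following standalone userspace calculation compares the old request-based PPS discharge threshold with the new vPpsShutdown-based one; the sample voltages and the VSINKPD_MIN_IR_DROP_MV value are assumptions for illustration only:

    #include <stdio.h>

    #define VSINKPD_MIN_IR_DROP_MV          750   /* assumed value */
    #define VPPS_NEW_MIN_PERCENT            95
    #define VPPS_VALID_MIN_MV               100
    #define VSINKDISCONNECT_PD_MIN_PERCENT  90
    #define VPPS_SHUTDOWN_MIN_PERCENT       85

    int main(void)
    {
            unsigned int requested_mv = 5000;   /* current PPS request */
            unsigned int apdo_min_mv = 3300;    /* APDO minimum voltage */

            unsigned int old_thr = ((VPPS_NEW_MIN_PERCENT * requested_mv / 100) -
                                    VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
                                   VSINKDISCONNECT_PD_MIN_PERCENT / 100;
            unsigned int new_thr = VPPS_SHUTDOWN_MIN_PERCENT * apdo_min_mv / 100 -
                                   VSINKPD_MIN_IR_DROP_MV;

            /* prints old=3510 mV new=2055 mV: the new threshold tracks the
             * APDO floor, so a source folding back in Current Limit Mode no
             * longer trips a spurious Sink disconnect */
            printf("old=%u mV new=%u mV\n", old_thr, new_thr);
            return 0;
    }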
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c b/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c
+index d5a43b3bf45ec9..c46108a16a9dd3 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c
+@@ -102,6 +102,7 @@ struct device_node *dss_of_port_get_parent_device(struct device_node *port)
+ np = of_get_next_parent(np);
+ }
+
++ of_node_put(np);
+ return NULL;
+ }
+
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 563d842014dfba..cc239251e19383 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -301,6 +301,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ node = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
+ if (node) {
+ ret = of_address_to_resource(node, 0, &res);
++ of_node_put(node);
+ if (ret) {
+ dev_err(dev, "No memory address assigned to the region.\n");
+ goto err_iomap;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index ada363af5aab8e..50edd1cae28ace 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1472,7 +1472,12 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ op->file[1].vnode = vnode;
+ }
+
+- return afs_do_sync_operation(op);
++ ret = afs_do_sync_operation(op);
++
++ /* Not all systems that can host afs servers have ENOTEMPTY. */
++ if (ret == -EEXIST)
++ ret = -ENOTEMPTY;
++ return ret;
+
+ error:
+ return afs_put_operation(op);
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index c9d620175e80ca..d9760b2a8d8de4 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -1346,6 +1346,15 @@ extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
+ extern int afs_extract_data(struct afs_call *, bool);
+ extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause);
+
++static inline void afs_see_call(struct afs_call *call, enum afs_call_trace why)
++{
++ int r = refcount_read(&call->ref);
++
++ trace_afs_call(call->debug_id, why, r,
++ atomic_read(&call->net->nr_outstanding_calls),
++ __builtin_return_address(0));
++}
++
+ static inline void afs_make_op_call(struct afs_operation *op, struct afs_call *call,
+ gfp_t gfp)
+ {
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 9f2a3bb56ec69e..a122c6366ce19f 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -430,11 +430,16 @@ void afs_make_call(struct afs_call *call, gfp_t gfp)
+ return;
+
+ error_do_abort:
+- if (ret != -ECONNABORTED) {
++ if (ret != -ECONNABORTED)
+ rxrpc_kernel_abort_call(call->net->socket, rxcall,
+ RX_USER_ABORT, ret,
+ afs_abort_send_data_error);
+- } else {
++ if (call->async) {
++ afs_see_call(call, afs_call_trace_async_abort);
++ return;
++ }
++
++ if (ret == -ECONNABORTED) {
+ len = 0;
+ iov_iter_kvec(&msg.msg_iter, ITER_DEST, NULL, 0, 0);
+ rxrpc_kernel_recv_data(call->net->socket, rxcall,
+@@ -445,6 +450,8 @@ void afs_make_call(struct afs_call *call, gfp_t gfp)
+ call->error = ret;
+ trace_afs_call_done(call);
+ error_kill_call:
++ if (call->async)
++ afs_see_call(call, afs_call_trace_async_kill);
+ if (call->type->done)
+ call->type->done(call);
+
+@@ -602,7 +609,6 @@ static void afs_deliver_to_call(struct afs_call *call)
+ abort_code = 0;
+ call_complete:
+ afs_set_call_complete(call, ret, remote_abort);
+- state = AFS_CALL_COMPLETE;
+ goto done;
+ }
+
+diff --git a/fs/afs/xdr_fs.h b/fs/afs/xdr_fs.h
+index 8ca8681645077d..cc5f143d21a347 100644
+--- a/fs/afs/xdr_fs.h
++++ b/fs/afs/xdr_fs.h
+@@ -88,7 +88,7 @@ union afs_xdr_dir_block {
+
+ struct {
+ struct afs_xdr_dir_hdr hdr;
+- u8 alloc_ctrs[AFS_DIR_MAX_BLOCKS];
++ u8 alloc_ctrs[AFS_DIR_BLOCKS_WITH_CTR];
+ __be16 hashtable[AFS_DIR_HASHTBL_SIZE];
+ } meta;
+
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index 024227aba4cd5f..362845f9aaaefa 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -666,8 +666,9 @@ static int yfs_deliver_fs_remove_file2(struct afs_call *call)
+ static void yfs_done_fs_remove_file2(struct afs_call *call)
+ {
+ if (call->error == -ECONNABORTED &&
+- call->abort_code == RX_INVALID_OPERATION) {
+- set_bit(AFS_SERVER_FL_NO_RM2, &call->server->flags);
++ (call->abort_code == RX_INVALID_OPERATION ||
++ call->abort_code == RXGEN_OPCODE)) {
++ set_bit(AFS_SERVER_FL_NO_RM2, &call->op->server->flags);
+ call->op->flags |= AFS_OPERATION_DOWNGRADE;
+ }
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index a3c861b2a6d25d..9d9ce308488dd3 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2001,6 +2001,53 @@ static int can_nocow_file_extent(struct btrfs_path *path,
+ return ret < 0 ? ret : can_nocow;
+ }
+
++/*
++ * Cleanup the dirty folios which will never be submitted due to error.
++ *
++ * When running a delalloc range, we may need to split the ranges (due to
++ * fragmentation or NOCOW). If we hit an error in the later part, we will
++ * error out, and the previously successful ranges will never be submitted;
++ * thus we have to clean up those folios by clearing their dirty flag and
++ * starting and finishing the writeback.
++ */
++static void cleanup_dirty_folios(struct btrfs_inode *inode,
++ struct folio *locked_folio,
++ u64 start, u64 end, int error)
++{
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
++ struct address_space *mapping = inode->vfs_inode.i_mapping;
++ pgoff_t start_index = start >> PAGE_SHIFT;
++ pgoff_t end_index = end >> PAGE_SHIFT;
++ u32 len;
++
++ ASSERT(end + 1 - start < U32_MAX);
++ ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
++ IS_ALIGNED(end + 1, fs_info->sectorsize));
++ len = end + 1 - start;
++
++ /*
++ * Handle the locked folio first.
++ * The btrfs_folio_clamp_*() helpers can handle ranges outside the folio.
++ */
++ btrfs_folio_clamp_finish_io(fs_info, locked_folio, start, len);
++
++ for (pgoff_t index = start_index; index <= end_index; index++) {
++ struct folio *folio;
++
++ /* Already handled at the beginning. */
++ if (index == locked_folio->index)
++ continue;
++ folio = __filemap_get_folio(mapping, index, FGP_LOCK, GFP_NOFS);
++ /* Cache already dropped, no need to do any cleanup. */
++ if (IS_ERR(folio))
++ continue;
++ btrfs_folio_clamp_finish_io(fs_info, locked_folio, start, len);
++ folio_unlock(folio);
++ folio_put(folio);
++ }
++ mapping_set_error(mapping, error);
++}
++
+ /*
+ * Called when nocow writeback calls back. This checks for snapshots or COW copies
+ * of the extents that exist in the file, and COWs the file as required.
+@@ -2016,6 +2063,11 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ struct btrfs_root *root = inode->root;
+ struct btrfs_path *path;
+ u64 cow_start = (u64)-1;
++ /*
++ * If not 0, represents the inclusive end of the last fallback_to_cow()
++ * range. Only for error handling.
++ */
++ u64 cow_end = 0;
+ u64 cur_offset = start;
+ int ret;
+ bool check_prev = true;
+@@ -2176,6 +2228,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ found_key.offset - 1);
+ cow_start = (u64)-1;
+ if (ret) {
++ cow_end = found_key.offset - 1;
+ btrfs_dec_nocow_writers(nocow_bg);
+ goto error;
+ }
+@@ -2249,24 +2302,54 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ cow_start = cur_offset;
+
+ if (cow_start != (u64)-1) {
+- cur_offset = end;
+ ret = fallback_to_cow(inode, locked_folio, cow_start, end);
+ cow_start = (u64)-1;
+- if (ret)
++ if (ret) {
++ cow_end = end;
+ goto error;
++ }
+ }
+
+ btrfs_free_path(path);
+ return 0;
+
+ error:
++ /*
++ * There are several error cases:
++ *
++ * 1) Failed without falling back to COW
++ * start cur_offset end
++ * |/////////////| |
++ *
++ * For range [start, cur_offset) the folios are already unlocked (except
++ * @locked_folio), EXTENT_DELALLOC already removed.
++ * Only need to clear the dirty flag as they will never be submitted.
++ * Ordered extent and extent maps are handled by
++ * btrfs_mark_ordered_io_finished() inside run_delalloc_range().
++ *
++ * 2) Failed with error from fallback_to_cow()
++ * start cur_offset cow_end end
++ * |/////////////|-----------| |
++ *
++ * For range [start, cur_offset) it's the same as case 1).
++ * But for range [cur_offset, cow_end), the folios have dirty flag
++ * cleared and unlocked, EXTENT_DELALLOC cleared by cow_file_range().
++ *
++ * Thus we should not call extent_clear_unlock_delalloc() on range
++ * [cur_offset, cow_end), as the folios are already unlocked.
++ *
++ * So clear the folio dirty flags for [start, cur_offset) first.
++ */
++ if (cur_offset > start)
++ cleanup_dirty_folios(inode, locked_folio, start, cur_offset - 1, ret);
++
+ /*
+ * If an error happened while a COW region is outstanding, cur_offset
+- * needs to be reset to cow_start to ensure the COW region is unlocked
+- * as well.
++ * needs to be reset to @cow_end + 1 to skip the COW range, as
++ * cow_file_range() will do the proper cleanup on error.
+ */
+- if (cow_start != (u64)-1)
+- cur_offset = cow_start;
++ if (cow_end)
++ cur_offset = cow_end + 1;
+
+ /*
+ * We need to lock the extent here because we're clearing DELALLOC and
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index e70ed857fc743b..4fcd6cd4c1c244 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1839,9 +1839,19 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ * Thus its reserved space should all be zero, no matter if qgroup
+ * is consistent or the mode.
+ */
+- WARN_ON(qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] ||
+- qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] ||
+- qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]);
++ if (qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] ||
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] ||
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]) {
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++ btrfs_warn_rl(fs_info,
++"to be deleted qgroup %u/%llu has non-zero numbers, data %llu meta prealloc %llu meta pertrans %llu",
++ btrfs_qgroup_level(qgroup->qgroupid),
++ btrfs_qgroup_subvolid(qgroup->qgroupid),
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA],
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC],
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]);
++
++ }
+ /*
+ * The same for rfer/excl numbers, but that's only if our qgroup is
+ * consistent and if it's in regular qgroup mode.
+@@ -1850,8 +1860,9 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ */
+ if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_FULL &&
+ !(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT)) {
+- if (WARN_ON(qgroup->rfer || qgroup->excl ||
+- qgroup->rfer_cmpr || qgroup->excl_cmpr)) {
++ if (qgroup->rfer || qgroup->excl ||
++ qgroup->rfer_cmpr || qgroup->excl_cmpr) {
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+ btrfs_warn_rl(fs_info,
+ "to be deleted qgroup %u/%llu has non-zero numbers, rfer %llu rfer_cmpr %llu excl %llu excl_cmpr %llu",
+ btrfs_qgroup_level(qgroup->qgroupid),
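The qgroup hunks swap a bare WARN_ON() for a pattern worth naming: always emit a rate-limited warning, but only produce a backtrace on CONFIG_BTRFS_DEBUG builds. A hedged sketch with a hypothetical condition and message:

    if (leaked_reservations) {      /* hypothetical leak check */
            /* backtrace on debug builds only, never on production */
            WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
            btrfs_warn_rl(fs_info,
                          "qgroup %llu has non-zero reserved bytes",
                          qgroupid);
    }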
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index fe4d719d506bf5..ec7328a6bfd755 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -868,6 +868,7 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ unsigned long writeback_bitmap;
+ unsigned long ordered_bitmap;
+ unsigned long checked_bitmap;
++ unsigned long locked_bitmap;
+ unsigned long flags;
+
+ ASSERT(folio_test_private(folio) && folio_get_private(folio));
+@@ -880,15 +881,16 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ GET_SUBPAGE_BITMAP(subpage, fs_info, writeback, &writeback_bitmap);
+ GET_SUBPAGE_BITMAP(subpage, fs_info, ordered, &ordered_bitmap);
+ GET_SUBPAGE_BITMAP(subpage, fs_info, checked, &checked_bitmap);
+- GET_SUBPAGE_BITMAP(subpage, fs_info, locked, &checked_bitmap);
++ GET_SUBPAGE_BITMAP(subpage, fs_info, locked, &locked_bitmap);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+
+ dump_page(folio_page(folio, 0), "btrfs subpage dump");
+ btrfs_warn(fs_info,
+-"start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl",
++"start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl locked=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl",
+ start, len, folio_pos(folio),
+ sectors_per_page, &uptodate_bitmap,
+ sectors_per_page, &dirty_bitmap,
++ sectors_per_page, &locked_bitmap,
+ sectors_per_page, &writeback_bitmap,
+ sectors_per_page, &ordered_bitmap,
+ sectors_per_page, &checked_bitmap);
+diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h
+index 4b85d91d0e18b0..cdb554e0d215e2 100644
+--- a/fs/btrfs/subpage.h
++++ b/fs/btrfs/subpage.h
+@@ -152,6 +152,19 @@ DECLARE_BTRFS_SUBPAGE_OPS(writeback);
+ DECLARE_BTRFS_SUBPAGE_OPS(ordered);
+ DECLARE_BTRFS_SUBPAGE_OPS(checked);
+
++/*
++ * Helper for error cleanup, where a folio will have its dirty flag cleared,
++ * with writeback started and finished.
++ */
++static inline void btrfs_folio_clamp_finish_io(struct btrfs_fs_info *fs_info,
++ struct folio *locked_folio,
++ u64 start, u32 len)
++{
++ btrfs_folio_clamp_clear_dirty(fs_info, locked_folio, start, len);
++ btrfs_folio_clamp_set_writeback(fs_info, locked_folio, start, len);
++ btrfs_folio_clamp_clear_writeback(fs_info, locked_folio, start, len);
++}
++
+ bool btrfs_subpage_clear_and_test_dirty(const struct btrfs_fs_info *fs_info,
+ struct folio *folio, u64 start, u32 len);
+
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 8292e488d3d777..73343503ea60e4 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -972,7 +972,7 @@ static int btrfs_fill_super(struct super_block *sb,
+
+ err = open_ctree(sb, fs_devices);
+ if (err) {
+- btrfs_err(fs_info, "open_ctree failed");
++ btrfs_err(fs_info, "open_ctree failed: %d", err);
+ return err;
+ }
+
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index dddedaef5e93dd..0c01e4423ee2a8 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -824,9 +824,12 @@ static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
+ r->res_first_lkid = 0;
+ }
+
+- /* A dir record will not be on the scan list. */
+- if (r->res_dir_nodeid != our_nodeid)
+- del_scan(ls, r);
++ /* We always deactivate the scan timer for the rsb when
++ * we move it out of the inactive state, as the rsb state
++ * can change and scan timers are only for inactive
++ * rsbs.
++ */
++ del_scan(ls, r);
+ list_move(&r->res_slow_list, &ls->ls_slow_active);
+ rsb_clear_flag(r, RSB_INACTIVE);
+ kref_init(&r->res_ref); /* ref is now used in active state */
+@@ -989,10 +992,10 @@ static int find_rsb_nodir(struct dlm_ls *ls, const void *name, int len,
+ r->res_nodeid = 0;
+ }
+
++ del_scan(ls, r);
+ list_move(&r->res_slow_list, &ls->ls_slow_active);
+ rsb_clear_flag(r, RSB_INACTIVE);
+ kref_init(&r->res_ref);
+- del_scan(ls, r);
+ write_unlock_bh(&ls->ls_rsbtbl_lock);
+
+ goto out;
+@@ -1337,9 +1340,13 @@ static int _dlm_master_lookup(struct dlm_ls *ls, int from_nodeid, const char *na
+ __dlm_master_lookup(ls, r, our_nodeid, from_nodeid, true, flags,
+ r_nodeid, result);
+
+- /* A dir record rsb should never be on scan list. */
+- /* Try to fix this with del_scan? */
+- WARN_ON(!list_empty(&r->res_scan_list));
++ /* A dir record rsb should never be on the scan list,
++ * except when we are both the dir and the master node.
++ * This function should only be called by the dir
++ * node.
++ */
++ WARN_ON(!list_empty(&r->res_scan_list) &&
++ r->res_master_nodeid != our_nodeid);
+
+ write_unlock_bh(&ls->ls_rsbtbl_lock);
+
+@@ -1430,16 +1437,23 @@ static void deactivate_rsb(struct kref *kref)
+ list_move(&r->res_slow_list, &ls->ls_slow_inactive);
+
+ /*
+- * When the rsb becomes unused:
+- * - If it's not a dir record for a remote master rsb,
+- * then it is put on the scan list to be freed.
+- * - If it's a dir record for a remote master rsb,
+- * then it is kept in the inactive state until
+- * receive_remove() from the master node.
++ * When the rsb becomes unused, there are two possibilities:
++ * 1. Leave the inactive rsb in place (don't remove it).
++ * 2. Add it to the scan list to be removed.
++ *
++ * 1 is done when the rsb is acting as the dir record
++ * for a remotely mastered rsb. The rsb must be left
++ * in place as an inactive rsb to act as the dir record.
++ *
++ * 2 is done when a) the rsb is neither the master nor the
++ * dir record, b) the rsb is both the master and the
++ * dir record, or c) the rsb is the master but not the dir record.
++ *
++ * (If no directory is used, the rsb can always be removed.)
+ */
+- if (!dlm_no_directory(ls) &&
+- (r->res_master_nodeid != our_nodeid) &&
+- (dlm_dir_nodeid(r) != our_nodeid))
++ if (dlm_no_directory(ls) ||
++ (r->res_master_nodeid == our_nodeid ||
++ dlm_dir_nodeid(r) != our_nodeid))
+ add_scan(ls, r);
+
+ if (r->res_lvbptr) {
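The deactivate_rsb() rework boils down to one predicate: an inactive rsb is kept off the scan list only while it is acting as the dir record for a remotely mastered rsb. A hedged sketch of that decision, with a hypothetical helper name:

    static bool example_keep_inactive(struct dlm_ls *ls, struct dlm_rsb *r,
                                      int our_nodeid)
    {
            /*
             * Keep the rsb in place (no scan timer) only when we are the
             * dir node but not the master; in every other case add_scan()
             * queues it for removal.
             */
            return !dlm_no_directory(ls) &&
                   r->res_master_nodeid != our_nodeid &&
                   dlm_dir_nodeid(r) == our_nodeid;
    }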
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index cb3a10b041c278..f2d88a3581695a 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -462,7 +462,8 @@ static bool dlm_lowcomms_con_has_addr(const struct connection *con,
+ int dlm_lowcomms_addr(int nodeid, struct sockaddr_storage *addr)
+ {
+ struct connection *con;
+- bool ret, idx;
++ bool ret;
++ int idx;
+
+ idx = srcu_read_lock(&connections_srcu);
+ con = nodeid2con(nodeid, GFP_NOFS);
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 77e785a6dfa7ff..edbabb3256c9ac 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -205,12 +205,6 @@ enum {
+ EROFS_ZIP_CACHE_READAROUND
+ };
+
+-/* basic unit of the workstation of a super_block */
+-struct erofs_workgroup {
+- pgoff_t index;
+- struct lockref lockref;
+-};
+-
+ enum erofs_kmap_type {
+ EROFS_NO_KMAP, /* don't map the buffer */
+ EROFS_KMAP, /* use kmap_local_page() to map the buffer */
+@@ -452,20 +446,15 @@ static inline void erofs_pagepool_add(struct page **pagepool, struct page *page)
+ void erofs_release_pages(struct page **pagepool);
+
+ #ifdef CONFIG_EROFS_FS_ZIP
+-void erofs_workgroup_put(struct erofs_workgroup *grp);
+-struct erofs_workgroup *erofs_find_workgroup(struct super_block *sb,
+- pgoff_t index);
+-struct erofs_workgroup *erofs_insert_workgroup(struct super_block *sb,
+- struct erofs_workgroup *grp);
+-void erofs_workgroup_free_rcu(struct erofs_workgroup *grp);
++extern atomic_long_t erofs_global_shrink_cnt;
+ void erofs_shrinker_register(struct super_block *sb);
+ void erofs_shrinker_unregister(struct super_block *sb);
+ int __init erofs_init_shrinker(void);
+ void erofs_exit_shrinker(void);
+ int __init z_erofs_init_subsystem(void);
+ void z_erofs_exit_subsystem(void);
+-int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
+- struct erofs_workgroup *egrp);
++unsigned long z_erofs_shrink_scan(struct erofs_sb_info *sbi,
++ unsigned long nr_shrink);
+ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ int flags);
+ void *z_erofs_get_gbuf(unsigned int requiredpages);
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 1a00f061798a3c..a8fb4b525f5443 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -44,12 +44,15 @@ __Z_EROFS_BVSET(z_erofs_bvset_inline, Z_EROFS_INLINE_BVECS);
+ * A: Field should be accessed / updated in atomic for parallelized code.
+ */
+ struct z_erofs_pcluster {
+- struct erofs_workgroup obj;
+ struct mutex lock;
++ struct lockref lockref;
+
+ /* A: point to next chained pcluster or TAILs */
+ z_erofs_next_pcluster_t next;
+
++ /* I: start block address of this pcluster */
++ erofs_off_t index;
++
+ /* L: the maximum decompression size of this round */
+ unsigned int length;
+
+@@ -108,7 +111,7 @@ struct z_erofs_decompressqueue {
+
+ static inline bool z_erofs_is_inline_pcluster(struct z_erofs_pcluster *pcl)
+ {
+- return !pcl->obj.index;
++ return !pcl->index;
+ }
+
+ static inline unsigned int z_erofs_pclusterpages(struct z_erofs_pcluster *pcl)
+@@ -548,7 +551,7 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ if (READ_ONCE(pcl->compressed_bvecs[i].page))
+ continue;
+
+- page = find_get_page(mc, pcl->obj.index + i);
++ page = find_get_page(mc, pcl->index + i);
+ if (!page) {
+ /* I/O is needed, not possible to decompress directly */
+ standalone = false;
+@@ -564,13 +567,13 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ continue;
+ set_page_private(newpage, Z_EROFS_PREALLOCATED_PAGE);
+ }
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ if (!pcl->compressed_bvecs[i].page) {
+ pcl->compressed_bvecs[i].page = page ? page : newpage;
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ continue;
+ }
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+
+ if (page)
+ put_page(page);
+@@ -587,11 +590,9 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ }
+
+ /* (erofs_shrinker) disconnect cached encoded data with pclusters */
+-int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
+- struct erofs_workgroup *grp)
++static int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
++ struct z_erofs_pcluster *pcl)
+ {
+- struct z_erofs_pcluster *const pcl =
+- container_of(grp, struct z_erofs_pcluster, obj);
+ unsigned int pclusterpages = z_erofs_pclusterpages(pcl);
+ struct folio *folio;
+ int i;
+@@ -626,8 +627,8 @@ static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
+ return true;
+
+ ret = false;
+- spin_lock(&pcl->obj.lockref.lock);
+- if (pcl->obj.lockref.count <= 0) {
++ spin_lock(&pcl->lockref.lock);
++ if (pcl->lockref.count <= 0) {
+ DBG_BUGON(z_erofs_is_inline_pcluster(pcl));
+ for (; bvec < end; ++bvec) {
+ if (bvec->page && page_folio(bvec->page) == folio) {
+@@ -638,7 +639,7 @@ static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
+ }
+ }
+ }
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ return ret;
+ }
+
+@@ -689,15 +690,15 @@ static int z_erofs_attach_page(struct z_erofs_decompress_frontend *fe,
+
+ if (exclusive) {
+ /* give priority for inplaceio to use file pages first */
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ while (fe->icur > 0) {
+ if (pcl->compressed_bvecs[--fe->icur].page)
+ continue;
+ pcl->compressed_bvecs[fe->icur] = *bvec;
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ return 0;
+ }
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+
+ /* otherwise, check if it can be used as a bvpage */
+ if (fe->mode >= Z_EROFS_PCLUSTER_FOLLOWED &&
+@@ -710,13 +711,30 @@ static int z_erofs_attach_page(struct z_erofs_decompress_frontend *fe,
+ return ret;
+ }
+
++static bool z_erofs_get_pcluster(struct z_erofs_pcluster *pcl)
++{
++ if (lockref_get_not_zero(&pcl->lockref))
++ return true;
++
++ spin_lock(&pcl->lockref.lock);
++ if (__lockref_is_dead(&pcl->lockref)) {
++ spin_unlock(&pcl->lockref.lock);
++ return false;
++ }
++
++ if (!pcl->lockref.count++)
++ atomic_long_dec(&erofs_global_shrink_cnt);
++ spin_unlock(&pcl->lockref.lock);
++ return true;
++}
++
+ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ {
+ struct erofs_map_blocks *map = &fe->map;
+ struct super_block *sb = fe->inode->i_sb;
++ struct erofs_sb_info *sbi = EROFS_SB(sb);
+ bool ztailpacking = map->m_flags & EROFS_MAP_META;
+- struct z_erofs_pcluster *pcl;
+- struct erofs_workgroup *grp;
++ struct z_erofs_pcluster *pcl, *pre;
+ int err;
+
+ if (!(map->m_flags & EROFS_MAP_ENCODED) ||
+@@ -730,8 +748,8 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ if (IS_ERR(pcl))
+ return PTR_ERR(pcl);
+
+- spin_lock_init(&pcl->obj.lockref.lock);
+- pcl->obj.lockref.count = 1; /* one ref for this request */
++ spin_lock_init(&pcl->lockref.lock);
++ pcl->lockref.count = 1; /* one ref for this request */
+ pcl->algorithmformat = map->m_algorithmformat;
+ pcl->length = 0;
+ pcl->partial = true;
+@@ -749,19 +767,26 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ DBG_BUGON(!mutex_trylock(&pcl->lock));
+
+ if (ztailpacking) {
+- pcl->obj.index = 0; /* which indicates ztailpacking */
++ pcl->index = 0; /* which indicates ztailpacking */
+ } else {
+- pcl->obj.index = erofs_blknr(sb, map->m_pa);
+-
+- grp = erofs_insert_workgroup(fe->inode->i_sb, &pcl->obj);
+- if (IS_ERR(grp)) {
+- err = PTR_ERR(grp);
+- goto err_out;
++ pcl->index = erofs_blknr(sb, map->m_pa);
++ while (1) {
++ xa_lock(&sbi->managed_pslots);
++ pre = __xa_cmpxchg(&sbi->managed_pslots, pcl->index,
++ NULL, pcl, GFP_KERNEL);
++ if (!pre || xa_is_err(pre) || z_erofs_get_pcluster(pre)) {
++ xa_unlock(&sbi->managed_pslots);
++ break;
++ }
++ /* try to legitimize the current in-tree one */
++ xa_unlock(&sbi->managed_pslots);
++ cond_resched();
+ }
+-
+- if (grp != &pcl->obj) {
+- fe->pcl = container_of(grp,
+- struct z_erofs_pcluster, obj);
++ if (xa_is_err(pre)) {
++ err = xa_err(pre);
++ goto err_out;
++ } else if (pre) {
++ fe->pcl = pre;
+ err = -EEXIST;
+ goto err_out;
+ }
+@@ -781,7 +806,7 @@ static int z_erofs_pcluster_begin(struct z_erofs_decompress_frontend *fe)
+ struct erofs_map_blocks *map = &fe->map;
+ struct super_block *sb = fe->inode->i_sb;
+ erofs_blk_t blknr = erofs_blknr(sb, map->m_pa);
+- struct erofs_workgroup *grp = NULL;
++ struct z_erofs_pcluster *pcl = NULL;
+ int ret;
+
+ DBG_BUGON(fe->pcl);
+@@ -789,14 +814,23 @@ static int z_erofs_pcluster_begin(struct z_erofs_decompress_frontend *fe)
+ DBG_BUGON(fe->owned_head == Z_EROFS_PCLUSTER_NIL);
+
+ if (!(map->m_flags & EROFS_MAP_META)) {
+- grp = erofs_find_workgroup(sb, blknr);
++ while (1) {
++ rcu_read_lock();
++ pcl = xa_load(&EROFS_SB(sb)->managed_pslots, blknr);
++ if (!pcl || z_erofs_get_pcluster(pcl)) {
++ DBG_BUGON(pcl && blknr != pcl->index);
++ rcu_read_unlock();
++ break;
++ }
++ rcu_read_unlock();
++ }
+ } else if ((map->m_pa & ~PAGE_MASK) + map->m_plen > PAGE_SIZE) {
+ DBG_BUGON(1);
+ return -EFSCORRUPTED;
+ }
+
+- if (grp) {
+- fe->pcl = container_of(grp, struct z_erofs_pcluster, obj);
++ if (pcl) {
++ fe->pcl = pcl;
+ ret = -EEXIST;
+ } else {
+ ret = z_erofs_register_pcluster(fe);
+@@ -851,12 +885,72 @@ static void z_erofs_rcu_callback(struct rcu_head *head)
+ struct z_erofs_pcluster, rcu));
+ }
+
+-void erofs_workgroup_free_rcu(struct erofs_workgroup *grp)
++static bool erofs_try_to_release_pcluster(struct erofs_sb_info *sbi,
++ struct z_erofs_pcluster *pcl)
+ {
+- struct z_erofs_pcluster *const pcl =
+- container_of(grp, struct z_erofs_pcluster, obj);
++ int free = false;
++
++ spin_lock(&pcl->lockref.lock);
++ if (pcl->lockref.count)
++ goto out;
++
++ /*
++ * Note that all cached folios should be detached before being deleted
++ * from the XArray. Otherwise some folios could still be attached to the
++ * orphan old pcluster when the new one is available in the tree.
++ */
++ if (erofs_try_to_free_all_cached_folios(sbi, pcl))
++ goto out;
++
++ /*
++ * It's impossible to fail after the pcluster is frozen, but in order
++ * to avoid some race conditions, add a DBG_BUGON to observe this.
++ */
++ DBG_BUGON(__xa_erase(&sbi->managed_pslots, pcl->index) != pcl);
++
++ lockref_mark_dead(&pcl->lockref);
++ free = true;
++out:
++ spin_unlock(&pcl->lockref.lock);
++ if (free) {
++ atomic_long_dec(&erofs_global_shrink_cnt);
++ call_rcu(&pcl->rcu, z_erofs_rcu_callback);
++ }
++ return free;
++}
++
++unsigned long z_erofs_shrink_scan(struct erofs_sb_info *sbi,
++ unsigned long nr_shrink)
++{
++ struct z_erofs_pcluster *pcl;
++ unsigned long index, freed = 0;
++
++ xa_lock(&sbi->managed_pslots);
++ xa_for_each(&sbi->managed_pslots, index, pcl) {
++ /* try to shrink each valid pcluster */
++ if (!erofs_try_to_release_pcluster(sbi, pcl))
++ continue;
++ xa_unlock(&sbi->managed_pslots);
++
++ ++freed;
++ if (!--nr_shrink)
++ return freed;
++ xa_lock(&sbi->managed_pslots);
++ }
++ xa_unlock(&sbi->managed_pslots);
++ return freed;
++}
++
++static void z_erofs_put_pcluster(struct z_erofs_pcluster *pcl)
++{
++ if (lockref_put_or_lock(&pcl->lockref))
++ return;
+
+- call_rcu(&pcl->rcu, z_erofs_rcu_callback);
++ DBG_BUGON(__lockref_is_dead(&pcl->lockref));
++ if (pcl->lockref.count == 1)
++ atomic_long_inc(&erofs_global_shrink_cnt);
++ --pcl->lockref.count;
++ spin_unlock(&pcl->lockref.lock);
+ }
+
+ static void z_erofs_pcluster_end(struct z_erofs_decompress_frontend *fe)
+@@ -877,7 +971,7 @@ static void z_erofs_pcluster_end(struct z_erofs_decompress_frontend *fe)
+ * any longer if the pcluster isn't hosted by ourselves.
+ */
+ if (fe->mode < Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE)
+- erofs_workgroup_put(&pcl->obj);
++ z_erofs_put_pcluster(pcl);
+
+ fe->pcl = NULL;
+ }
+@@ -1309,7 +1403,7 @@ static int z_erofs_decompress_queue(const struct z_erofs_decompressqueue *io,
+ if (z_erofs_is_inline_pcluster(be.pcl))
+ z_erofs_free_pcluster(be.pcl);
+ else
+- erofs_workgroup_put(&be.pcl->obj);
++ z_erofs_put_pcluster(be.pcl);
+ }
+ return err;
+ }
+@@ -1391,9 +1485,9 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ bvec->bv_offset = 0;
+ bvec->bv_len = PAGE_SIZE;
+ repeat:
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ zbv = pcl->compressed_bvecs[nr];
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ if (!zbv.page)
+ goto out_allocfolio;
+
+@@ -1455,23 +1549,23 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ folio_put(folio);
+ out_allocfolio:
+ page = __erofs_allocpage(&f->pagepool, gfp, true);
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ if (unlikely(pcl->compressed_bvecs[nr].page != zbv.page)) {
+ if (page)
+ erofs_pagepool_add(&f->pagepool, page);
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ cond_resched();
+ goto repeat;
+ }
+ pcl->compressed_bvecs[nr].page = page ? page : ERR_PTR(-ENOMEM);
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ bvec->bv_page = page;
+ if (!page)
+ return;
+ folio = page_folio(page);
+ out_tocache:
+ if (!tocache || bs != PAGE_SIZE ||
+- filemap_add_folio(mc, folio, pcl->obj.index + nr, gfp)) {
++ filemap_add_folio(mc, folio, pcl->index + nr, gfp)) {
+ /* turn into a temporary shortlived folio (1 ref) */
+ folio->private = (void *)Z_EROFS_SHORTLIVED_PAGE;
+ return;
+@@ -1603,7 +1697,7 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+
+ /* no device id here, thus it will always succeed */
+ mdev = (struct erofs_map_dev) {
+- .m_pa = erofs_pos(sb, pcl->obj.index),
++ .m_pa = erofs_pos(sb, pcl->index),
+ };
+ (void)erofs_map_dev(sb, &mdev);
+
+diff --git a/fs/erofs/zutil.c b/fs/erofs/zutil.c
+index 37afe202484091..75704f58ecfa92 100644
+--- a/fs/erofs/zutil.c
++++ b/fs/erofs/zutil.c
+@@ -2,6 +2,7 @@
+ /*
+ * Copyright (C) 2018 HUAWEI, Inc.
+ * https://www.huawei.com/
++ * Copyright (C) 2024 Alibaba Cloud
+ */
+ #include "internal.h"
+
+@@ -19,13 +20,12 @@ static unsigned int z_erofs_gbuf_count, z_erofs_gbuf_nrpages,
+ module_param_named(global_buffers, z_erofs_gbuf_count, uint, 0444);
+ module_param_named(reserved_pages, z_erofs_rsv_nrpages, uint, 0444);
+
+-static atomic_long_t erofs_global_shrink_cnt; /* for all mounted instances */
+-/* protected by 'erofs_sb_list_lock' */
+-static unsigned int shrinker_run_no;
++atomic_long_t erofs_global_shrink_cnt; /* for all mounted instances */
+
+-/* protects the mounted 'erofs_sb_list' */
++/* protects `shrinker_run_no` and the mounted `erofs_sb_list` */
+ static DEFINE_SPINLOCK(erofs_sb_list_lock);
+ static LIST_HEAD(erofs_sb_list);
++static unsigned int shrinker_run_no;
+ static struct shrinker *erofs_shrinker_info;
+
+ static unsigned int z_erofs_gbuf_id(void)
+@@ -214,145 +214,6 @@ void erofs_release_pages(struct page **pagepool)
+ }
+ }
+
+-static bool erofs_workgroup_get(struct erofs_workgroup *grp)
+-{
+- if (lockref_get_not_zero(&grp->lockref))
+- return true;
+-
+- spin_lock(&grp->lockref.lock);
+- if (__lockref_is_dead(&grp->lockref)) {
+- spin_unlock(&grp->lockref.lock);
+- return false;
+- }
+-
+- if (!grp->lockref.count++)
+- atomic_long_dec(&erofs_global_shrink_cnt);
+- spin_unlock(&grp->lockref.lock);
+- return true;
+-}
+-
+-struct erofs_workgroup *erofs_find_workgroup(struct super_block *sb,
+- pgoff_t index)
+-{
+- struct erofs_sb_info *sbi = EROFS_SB(sb);
+- struct erofs_workgroup *grp;
+-
+-repeat:
+- rcu_read_lock();
+- grp = xa_load(&sbi->managed_pslots, index);
+- if (grp) {
+- if (!erofs_workgroup_get(grp)) {
+- /* prefer to relax rcu read side */
+- rcu_read_unlock();
+- goto repeat;
+- }
+-
+- DBG_BUGON(index != grp->index);
+- }
+- rcu_read_unlock();
+- return grp;
+-}
+-
+-struct erofs_workgroup *erofs_insert_workgroup(struct super_block *sb,
+- struct erofs_workgroup *grp)
+-{
+- struct erofs_sb_info *const sbi = EROFS_SB(sb);
+- struct erofs_workgroup *pre;
+-
+- DBG_BUGON(grp->lockref.count < 1);
+-repeat:
+- xa_lock(&sbi->managed_pslots);
+- pre = __xa_cmpxchg(&sbi->managed_pslots, grp->index,
+- NULL, grp, GFP_KERNEL);
+- if (pre) {
+- if (xa_is_err(pre)) {
+- pre = ERR_PTR(xa_err(pre));
+- } else if (!erofs_workgroup_get(pre)) {
+- /* try to legitimize the current in-tree one */
+- xa_unlock(&sbi->managed_pslots);
+- cond_resched();
+- goto repeat;
+- }
+- grp = pre;
+- }
+- xa_unlock(&sbi->managed_pslots);
+- return grp;
+-}
+-
+-static void __erofs_workgroup_free(struct erofs_workgroup *grp)
+-{
+- atomic_long_dec(&erofs_global_shrink_cnt);
+- erofs_workgroup_free_rcu(grp);
+-}
+-
+-void erofs_workgroup_put(struct erofs_workgroup *grp)
+-{
+- if (lockref_put_or_lock(&grp->lockref))
+- return;
+-
+- DBG_BUGON(__lockref_is_dead(&grp->lockref));
+- if (grp->lockref.count == 1)
+- atomic_long_inc(&erofs_global_shrink_cnt);
+- --grp->lockref.count;
+- spin_unlock(&grp->lockref.lock);
+-}
+-
+-static bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi,
+- struct erofs_workgroup *grp)
+-{
+- int free = false;
+-
+- spin_lock(&grp->lockref.lock);
+- if (grp->lockref.count)
+- goto out;
+-
+- /*
+- * Note that all cached pages should be detached before deleted from
+- * the XArray. Otherwise some cached pages could be still attached to
+- * the orphan old workgroup when the new one is available in the tree.
+- */
+- if (erofs_try_to_free_all_cached_folios(sbi, grp))
+- goto out;
+-
+- /*
+- * It's impossible to fail after the workgroup is freezed,
+- * however in order to avoid some race conditions, add a
+- * DBG_BUGON to observe this in advance.
+- */
+- DBG_BUGON(__xa_erase(&sbi->managed_pslots, grp->index) != grp);
+-
+- lockref_mark_dead(&grp->lockref);
+- free = true;
+-out:
+- spin_unlock(&grp->lockref.lock);
+- if (free)
+- __erofs_workgroup_free(grp);
+- return free;
+-}
+-
+-static unsigned long erofs_shrink_workstation(struct erofs_sb_info *sbi,
+- unsigned long nr_shrink)
+-{
+- struct erofs_workgroup *grp;
+- unsigned int freed = 0;
+- unsigned long index;
+-
+- xa_lock(&sbi->managed_pslots);
+- xa_for_each(&sbi->managed_pslots, index, grp) {
+- /* try to shrink each valid workgroup */
+- if (!erofs_try_to_release_workgroup(sbi, grp))
+- continue;
+- xa_unlock(&sbi->managed_pslots);
+-
+- ++freed;
+- if (!--nr_shrink)
+- return freed;
+- xa_lock(&sbi->managed_pslots);
+- }
+- xa_unlock(&sbi->managed_pslots);
+- return freed;
+-}
+-
+ void erofs_shrinker_register(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+@@ -369,8 +230,8 @@ void erofs_shrinker_unregister(struct super_block *sb)
+ struct erofs_sb_info *const sbi = EROFS_SB(sb);
+
+ mutex_lock(&sbi->umount_mutex);
+- /* clean up all remaining workgroups in memory */
+- erofs_shrink_workstation(sbi, ~0UL);
++ /* clean up all remaining pclusters in memory */
++ z_erofs_shrink_scan(sbi, ~0UL);
+
+ spin_lock(&erofs_sb_list_lock);
+ list_del(&sbi->list);
+@@ -418,9 +279,7 @@ static unsigned long erofs_shrink_scan(struct shrinker *shrink,
+
+ spin_unlock(&erofs_sb_list_lock);
+ sbi->shrinker_run_no = run_no;
+-
+- freed += erofs_shrink_workstation(sbi, nr - freed);
+-
++ freed += z_erofs_shrink_scan(sbi, nr - freed);
+ spin_lock(&erofs_sb_list_lock);
+ /* Get the next list element before we move this one */
+ p = p->next;
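The relocated shrinker walk keeps the old lock-juggling shape: hold xa_lock only while stepping the iterator and drop it around each release, which does real work (detaching cached folios, scheduling RCU frees) and would otherwise hold the array lock far too long. A hedged sketch with a hypothetical try_release() helper:

    static unsigned long example_shrink(struct erofs_sb_info *sbi,
                                        unsigned long nr_shrink)
    {
            struct z_erofs_pcluster *pcl;
            unsigned long index, freed = 0;

            xa_lock(&sbi->managed_pslots);
            xa_for_each(&sbi->managed_pslots, index, pcl) {
                    if (!try_release(sbi, pcl))       /* hypothetical helper */
                            continue;
                    xa_unlock(&sbi->managed_pslots);  /* drop around the slow part */
                    ++freed;
                    if (!--nr_shrink)
                            return freed;
                    /* retake; xa_for_each() copes with concurrent changes */
                    xa_lock(&sbi->managed_pslots);
            }
            xa_unlock(&sbi->managed_pslots);
            return freed;
    }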
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 47a5c806cf1628..54dd52de7269da 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -175,7 +175,8 @@ static unsigned long dir_block_index(unsigned int level,
+ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
+ struct page *dentry_page,
+ const struct f2fs_filename *fname,
+- int *max_slots)
++ int *max_slots,
++ bool use_hash)
+ {
+ struct f2fs_dentry_block *dentry_blk;
+ struct f2fs_dentry_ptr d;
+@@ -183,7 +184,7 @@ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
+ dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page);
+
+ make_dentry_ptr_block(dir, &d, dentry_blk);
+- return f2fs_find_target_dentry(&d, fname, max_slots);
++ return f2fs_find_target_dentry(&d, fname, max_slots, use_hash);
+ }
+
+ static inline int f2fs_match_name(const struct inode *dir,
+@@ -208,7 +209,8 @@ static inline int f2fs_match_name(const struct inode *dir,
+ }
+
+ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+- const struct f2fs_filename *fname, int *max_slots)
++ const struct f2fs_filename *fname, int *max_slots,
++ bool use_hash)
+ {
+ struct f2fs_dir_entry *de;
+ unsigned long bit_pos = 0;
+@@ -231,7 +233,7 @@ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+ continue;
+ }
+
+- if (de->hash_code == fname->hash) {
++ if (!use_hash || de->hash_code == fname->hash) {
+ res = f2fs_match_name(d->inode, fname,
+ d->filename[bit_pos],
+ le16_to_cpu(de->name_len));
+@@ -258,11 +260,12 @@ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ unsigned int level,
+ const struct f2fs_filename *fname,
+- struct page **res_page)
++ struct page **res_page,
++ bool use_hash)
+ {
+ int s = GET_DENTRY_SLOTS(fname->disk_name.len);
+ unsigned int nbucket, nblock;
+- unsigned int bidx, end_block;
++ unsigned int bidx, end_block, bucket_no;
+ struct page *dentry_page;
+ struct f2fs_dir_entry *de = NULL;
+ pgoff_t next_pgofs;
+@@ -272,8 +275,11 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ nbucket = dir_buckets(level, F2FS_I(dir)->i_dir_level);
+ nblock = bucket_blocks(level);
+
++ bucket_no = use_hash ? le32_to_cpu(fname->hash) % nbucket : 0;
++
++start_find_bucket:
+ bidx = dir_block_index(level, F2FS_I(dir)->i_dir_level,
+- le32_to_cpu(fname->hash) % nbucket);
++ bucket_no);
+ end_block = bidx + nblock;
+
+ while (bidx < end_block) {
+@@ -290,7 +296,7 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ }
+ }
+
+- de = find_in_block(dir, dentry_page, fname, &max_slots);
++ de = find_in_block(dir, dentry_page, fname, &max_slots, use_hash);
+ if (IS_ERR(de)) {
+ *res_page = ERR_CAST(de);
+ de = NULL;
+@@ -307,12 +313,18 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ bidx++;
+ }
+
+- if (!de && room && F2FS_I(dir)->chash != fname->hash) {
+- F2FS_I(dir)->chash = fname->hash;
+- F2FS_I(dir)->clevel = level;
+- }
++ if (de)
++ return de;
+
+- return de;
++ if (likely(use_hash)) {
++ if (room && F2FS_I(dir)->chash != fname->hash) {
++ F2FS_I(dir)->chash = fname->hash;
++ F2FS_I(dir)->clevel = level;
++ }
++ } else if (++bucket_no < nbucket) {
++ goto start_find_bucket;
++ }
++ return NULL;
+ }
+
+ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+@@ -323,11 +335,15 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+ struct f2fs_dir_entry *de = NULL;
+ unsigned int max_depth;
+ unsigned int level;
++ bool use_hash = true;
+
+ *res_page = NULL;
+
++#if IS_ENABLED(CONFIG_UNICODE)
++start_find_entry:
++#endif
+ if (f2fs_has_inline_dentry(dir)) {
+- de = f2fs_find_in_inline_dir(dir, fname, res_page);
++ de = f2fs_find_in_inline_dir(dir, fname, res_page, use_hash);
+ goto out;
+ }
+
+@@ -343,11 +359,18 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+ }
+
+ for (level = 0; level < max_depth; level++) {
+- de = find_in_level(dir, level, fname, res_page);
++ de = find_in_level(dir, level, fname, res_page, use_hash);
+ if (de || IS_ERR(*res_page))
+ break;
+ }
++
+ out:
++#if IS_ENABLED(CONFIG_UNICODE)
++ if (IS_CASEFOLDED(dir) && !de && use_hash) {
++ use_hash = false;
++ goto start_find_entry;
++ }
++#endif
+ /* This is to increase the speed of f2fs_create */
+ if (!de)
+ F2FS_I(dir)->task = current;
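Taken together, the dir.c changes implement a two-pass lookup: a hash-indexed search first, then, only for casefolded directories where the stored hash may not match the folded name, a linear walk of every bucket with hashing disabled. A hedged sketch of the control flow with a hypothetical do_lookup() helper:

    static struct f2fs_dir_entry *example_find(struct inode *dir,
                                               const struct f2fs_filename *fname,
                                               struct page **res_page)
    {
            struct f2fs_dir_entry *de;

            /* pass 1: ordinary hash-bucket lookup */
            de = do_lookup(dir, fname, res_page, true);
            if (!de && IS_CASEFOLDED(dir))
                    /* pass 2: on-disk hash may mismatch, scan every bucket */
                    de = do_lookup(dir, fname, res_page, false);
            return de;
    }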
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index cec3dd205b3df8..b52df8aa95350e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3579,7 +3579,8 @@ int f2fs_prepare_lookup(struct inode *dir, struct dentry *dentry,
+ struct f2fs_filename *fname);
+ void f2fs_free_filename(struct f2fs_filename *fname);
+ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+- const struct f2fs_filename *fname, int *max_slots);
++ const struct f2fs_filename *fname, int *max_slots,
++ bool use_hash);
+ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ unsigned int start_pos, struct fscrypt_str *fstr);
+ void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent,
+@@ -4199,7 +4200,8 @@ int f2fs_write_inline_data(struct inode *inode, struct folio *folio);
+ int f2fs_recover_inline_data(struct inode *inode, struct page *npage);
+ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ const struct f2fs_filename *fname,
+- struct page **res_page);
++ struct page **res_page,
++ bool use_hash);
+ int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
+ struct page *ipage);
+ int f2fs_add_inline_entry(struct inode *dir, const struct f2fs_filename *fname,
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 005babf1bed1e3..3b91a95d42764f 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -352,7 +352,8 @@ int f2fs_recover_inline_data(struct inode *inode, struct page *npage)
+
+ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ const struct f2fs_filename *fname,
+- struct page **res_page)
++ struct page **res_page,
++ bool use_hash)
+ {
+ struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ struct f2fs_dir_entry *de;
+@@ -369,7 +370,7 @@ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ inline_dentry = inline_data_addr(dir, ipage);
+
+ make_dentry_ptr_inline(dir, &d, inline_dentry);
+- de = f2fs_find_target_dentry(&d, fname, NULL);
++ de = f2fs_find_target_dentry(&d, fname, NULL, use_hash);
+ unlock_page(ipage);
+ if (IS_ERR(de)) {
+ *res_page = ERR_CAST(de);
+diff --git a/fs/file_table.c b/fs/file_table.c
+index eed5ffad9997c2..18735dc8269a10 100644
+--- a/fs/file_table.c
++++ b/fs/file_table.c
+@@ -125,7 +125,7 @@ static struct ctl_table fs_stat_sysctls[] = {
+ .data = &sysctl_nr_open,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec_minmax,
++ .proc_handler = proc_douintvec_minmax,
+ .extra1 = &sysctl_nr_open_min,
+ .extra2 = &sysctl_nr_open_max,
+ },
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index 084f6ed2dd7a69..94f3cc42c74035 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -94,32 +94,17 @@ __uml_setup("hostfs=", hostfs_args,
+ static char *__dentry_name(struct dentry *dentry, char *name)
+ {
+ char *p = dentry_path_raw(dentry, name, PATH_MAX);
+- char *root;
+- size_t len;
+- struct hostfs_fs_info *fsi;
+-
+- fsi = dentry->d_sb->s_fs_info;
+- root = fsi->host_root_path;
+- len = strlen(root);
+- if (IS_ERR(p)) {
+- __putname(name);
+- return NULL;
+- }
+-
+- /*
+- * This function relies on the fact that dentry_path_raw() will place
+- * the path name at the end of the provided buffer.
+- */
+- BUG_ON(p + strlen(p) + 1 != name + PATH_MAX);
++ struct hostfs_fs_info *fsi = dentry->d_sb->s_fs_info;
++ char *root = fsi->host_root_path;
++ size_t len = strlen(root);
+
+- strscpy(name, root, PATH_MAX);
+- if (len > p - name) {
++ if (IS_ERR(p) || len > p - name) {
+ __putname(name);
+ return NULL;
+ }
+
+- if (p > name + len)
+- strcpy(name + len, p);
++ memcpy(name, root, len);
++ memmove(name + len, p, name + PATH_MAX - p);
+
+ return name;
+ }
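The rewritten __dentry_name() leans on dentry_path_raw() leaving the path right-justified at the end of the buffer, then splices the host root in front of it in place. A standalone userspace illustration of the buffer arithmetic, with toy sizes rather than hostfs code:

    #include <stdio.h>
    #include <string.h>

    #define BUFLEN 32   /* stands in for PATH_MAX */

    int main(void)
    {
            char name[BUFLEN];
            const char *root = "/host/root";
            size_t len = strlen(root);

            /* dentry_path_raw() leaves "/a/b" at the END of the buffer */
            char *p = name + BUFLEN - sizeof("/a/b");
            memcpy(p, "/a/b", sizeof("/a/b"));

            if (len > (size_t)(p - name))   /* root + path must fit */
                    return 1;

            memcpy(name, root, len);                    /* prefix, no NUL */
            memmove(name + len, p, name + BUFLEN - p);  /* path incl. NUL */
            printf("%s\n", name);                       /* /host/root/a/b */
            return 0;
    }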
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 637528e6368ef7..21b2b38fae9f3a 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -331,7 +331,7 @@ nfs_local_pgio_done(struct nfs_pgio_header *hdr, long status)
+ hdr->res.op_status = NFS4_OK;
+ hdr->task.tk_status = 0;
+ } else {
+- hdr->res.op_status = nfs4_stat_to_errno(status);
++ hdr->res.op_status = nfs_localio_errno_to_nfs4_stat(status);
+ hdr->task.tk_status = status;
+ }
+ }
+@@ -669,7 +669,7 @@ nfs_local_commit_done(struct nfs_commit_data *data, int status)
+ data->task.tk_status = 0;
+ } else {
+ nfs_reset_boot_verifier(data->inode);
+- data->res.op_status = nfs4_stat_to_errno(status);
++ data->res.op_status = nfs_localio_errno_to_nfs4_stat(status);
+ data->task.tk_status = status;
+ }
+ }
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 531c9c20ef1d1b..9f0d69e6526443 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -552,7 +552,7 @@ static int nfs42_do_offload_cancel_async(struct file *dst,
+ .rpc_message = &msg,
+ .callback_ops = &nfs42_offload_cancel_ops,
+ .workqueue = nfsiod_workqueue,
+- .flags = RPC_TASK_ASYNC,
++ .flags = RPC_TASK_ASYNC | RPC_TASK_MOVEABLE,
+ };
+ int status;
+
+diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
+index 9e3ae53e220583..becc3149aa9e5c 100644
+--- a/fs/nfs/nfs42xdr.c
++++ b/fs/nfs/nfs42xdr.c
+@@ -144,9 +144,11 @@
+ decode_putfh_maxsz + \
+ decode_offload_cancel_maxsz)
+ #define NFS4_enc_copy_notify_sz (compound_encode_hdr_maxsz + \
++ encode_sequence_maxsz + \
+ encode_putfh_maxsz + \
+ encode_copy_notify_maxsz)
+ #define NFS4_dec_copy_notify_sz (compound_decode_hdr_maxsz + \
++ decode_sequence_maxsz + \
+ decode_putfh_maxsz + \
+ decode_copy_notify_maxsz)
+ #define NFS4_enc_deallocate_sz (compound_encode_hdr_maxsz + \
+diff --git a/fs/nfs_common/common.c b/fs/nfs_common/common.c
+index 34a115176f97eb..af09aed09fd279 100644
+--- a/fs/nfs_common/common.c
++++ b/fs/nfs_common/common.c
+@@ -15,7 +15,7 @@ static const struct {
+ { NFS_OK, 0 },
+ { NFSERR_PERM, -EPERM },
+ { NFSERR_NOENT, -ENOENT },
+- { NFSERR_IO, -errno_NFSERR_IO},
++ { NFSERR_IO, -EIO },
+ { NFSERR_NXIO, -ENXIO },
+ /* { NFSERR_EAGAIN, -EAGAIN }, */
+ { NFSERR_ACCES, -EACCES },
+@@ -45,7 +45,6 @@ static const struct {
+ { NFSERR_SERVERFAULT, -EREMOTEIO },
+ { NFSERR_BADTYPE, -EBADTYPE },
+ { NFSERR_JUKEBOX, -EJUKEBOX },
+- { -1, -EIO }
+ };
+
+ /**
+@@ -59,26 +58,29 @@ int nfs_stat_to_errno(enum nfs_stat status)
+ {
+ int i;
+
+- for (i = 0; nfs_errtbl[i].stat != -1; i++) {
++ for (i = 0; i < ARRAY_SIZE(nfs_errtbl); i++) {
+ if (nfs_errtbl[i].stat == (int)status)
+ return nfs_errtbl[i].errno;
+ }
+- return nfs_errtbl[i].errno;
++ return -EIO;
+ }
+ EXPORT_SYMBOL_GPL(nfs_stat_to_errno);
+
+ /*
+ * We need to translate between nfs v4 status return values and
+ * the local errno values which may not be the same.
++ *
++ * nfs4_errtbl_common[] is used before the more specialized mappings
++ * available in nfs4_errtbl[] or nfs4_errtbl_localio[].
+ */
+ static const struct {
+ int stat;
+ int errno;
+-} nfs4_errtbl[] = {
++} nfs4_errtbl_common[] = {
+ { NFS4_OK, 0 },
+ { NFS4ERR_PERM, -EPERM },
+ { NFS4ERR_NOENT, -ENOENT },
+- { NFS4ERR_IO, -errno_NFSERR_IO},
++ { NFS4ERR_IO, -EIO },
+ { NFS4ERR_NXIO, -ENXIO },
+ { NFS4ERR_ACCESS, -EACCES },
+ { NFS4ERR_EXIST, -EEXIST },
+@@ -98,15 +100,20 @@ static const struct {
+ { NFS4ERR_BAD_COOKIE, -EBADCOOKIE },
+ { NFS4ERR_NOTSUPP, -ENOTSUPP },
+ { NFS4ERR_TOOSMALL, -ETOOSMALL },
+- { NFS4ERR_SERVERFAULT, -EREMOTEIO },
+ { NFS4ERR_BADTYPE, -EBADTYPE },
+- { NFS4ERR_LOCKED, -EAGAIN },
+ { NFS4ERR_SYMLINK, -ELOOP },
+- { NFS4ERR_OP_ILLEGAL, -EOPNOTSUPP },
+ { NFS4ERR_DEADLOCK, -EDEADLK },
++};
++
++static const struct {
++ int stat;
++ int errno;
++} nfs4_errtbl[] = {
++ { NFS4ERR_SERVERFAULT, -EREMOTEIO },
++ { NFS4ERR_LOCKED, -EAGAIN },
++ { NFS4ERR_OP_ILLEGAL, -EOPNOTSUPP },
+ { NFS4ERR_NOXATTR, -ENODATA },
+ { NFS4ERR_XATTR2BIG, -E2BIG },
+- { -1, -EIO }
+ };
+
+ /*
+@@ -116,7 +123,14 @@ static const struct {
+ int nfs4_stat_to_errno(int stat)
+ {
+ int i;
+- for (i = 0; nfs4_errtbl[i].stat != -1; i++) {
++
++ /* First check nfs4_errtbl_common */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl_common); i++) {
++ if (nfs4_errtbl_common[i].stat == stat)
++ return nfs4_errtbl_common[i].errno;
++ }
++ /* Then check nfs4_errtbl */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl); i++) {
+ if (nfs4_errtbl[i].stat == stat)
+ return nfs4_errtbl[i].errno;
+ }
+@@ -132,3 +146,56 @@ int nfs4_stat_to_errno(int stat)
+ return -stat;
+ }
+ EXPORT_SYMBOL_GPL(nfs4_stat_to_errno);
++
++/*
++ * This table is useful for conversion from local errno to NFS error.
++ * It provides more logically correct mappings for use with LOCALIO
++ * (which is focused on converting from errno to NFS status).
++ */
++static const struct {
++ int stat;
++ int errno;
++} nfs4_errtbl_localio[] = {
++ /* Map errors differently than nfs4_errtbl */
++ { NFS4ERR_IO, -EREMOTEIO },
++ { NFS4ERR_DELAY, -EAGAIN },
++ { NFS4ERR_FBIG, -E2BIG },
++ /* Map errors not handled by nfs4_errtbl */
++ { NFS4ERR_STALE, -EBADF },
++ { NFS4ERR_STALE, -EOPENSTALE },
++ { NFS4ERR_DELAY, -ETIMEDOUT },
++ { NFS4ERR_DELAY, -ERESTARTSYS },
++ { NFS4ERR_DELAY, -ENOMEM },
++ { NFS4ERR_IO, -ETXTBSY },
++ { NFS4ERR_IO, -EBUSY },
++ { NFS4ERR_SERVERFAULT, -ESERVERFAULT },
++ { NFS4ERR_SERVERFAULT, -ENFILE },
++ { NFS4ERR_IO, -EUCLEAN },
++ { NFS4ERR_PERM, -ENOKEY },
++};
++
++/*
++ * Convert an errno to an NFS error code for LOCALIO.
++ */
++__u32 nfs_localio_errno_to_nfs4_stat(int errno)
++{
++ int i;
++
++ /* First check nfs4_errtbl_common */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl_common); i++) {
++ if (nfs4_errtbl_common[i].errno == errno)
++ return nfs4_errtbl_common[i].stat;
++ }
++ /* Then check nfs4_errtbl_localio */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl_localio); i++) {
++ if (nfs4_errtbl_localio[i].errno == errno)
++ return nfs4_errtbl_localio[i].stat;
++ }
++ /* If we cannot translate the error, the recovery routines should
++ * handle it.
++ * Note: remaining NFSv4 error codes have values > 10000, so they
++ * should not conflict with native Linux error codes.
++ */
++ return NFS4ERR_SERVERFAULT;
++}
++EXPORT_SYMBOL_GPL(nfs_localio_errno_to_nfs4_stat);
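The table rework drops the old { -1, -EIO } sentinel rows in favour of ARRAY_SIZE()-bounded loops over a shared common table plus a direction-specific one, with an explicit fallback. A standalone userspace reduction of the scheme, using toy status/errno values rather than the NFS constants:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct map { int stat; int err; };

    static const struct map common_tbl[]   = { { 0, 0 }, { 2, -2 } };
    static const struct map specific_tbl[] = { { 70, -116 } };

    static int stat_to_errno(int stat)
    {
            size_t i;

            for (i = 0; i < ARRAY_SIZE(common_tbl); i++)    /* shared first */
                    if (common_tbl[i].stat == stat)
                            return common_tbl[i].err;
            for (i = 0; i < ARRAY_SIZE(specific_tbl); i++)  /* then specific */
                    if (specific_tbl[i].stat == stat)
                            return specific_tbl[i].err;
            return -5;   /* fixed fallback, no sentinel row required */
    }

    int main(void)
    {
            printf("%d %d %d\n", stat_to_errno(0), stat_to_errno(70),
                   stat_to_errno(9999));   /* 0 -116 -5 */
            return 0;
    }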
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index f61c58fbf117d3..0cc32e9c71cbf0 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -400,7 +400,7 @@ int nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr, ino_t *ino)
+ return 0;
+ }
+
+-void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
++int nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
+ struct folio *folio, struct inode *inode)
+ {
+ size_t from = offset_in_folio(folio, de);
+@@ -410,11 +410,15 @@ void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
+
+ folio_lock(folio);
+ err = nilfs_prepare_chunk(folio, from, to);
+- BUG_ON(err);
++ if (unlikely(err)) {
++ folio_unlock(folio);
++ return err;
++ }
+ de->inode = cpu_to_le64(inode->i_ino);
+ de->file_type = fs_umode_to_ftype(inode->i_mode);
+ nilfs_commit_chunk(folio, mapping, from, to);
+ inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
++ return 0;
+ }
+
+ /*
+@@ -543,7 +547,10 @@ int nilfs_delete_entry(struct nilfs_dir_entry *dir, struct folio *folio)
+ from = (char *)pde - kaddr;
+ folio_lock(folio);
+ err = nilfs_prepare_chunk(folio, from, to);
+- BUG_ON(err);
++ if (unlikely(err)) {
++ folio_unlock(folio);
++ goto out;
++ }
+ if (pde)
+ pde->rec_len = nilfs_rec_len_to_disk(to - from);
+ dir->inode = 0;
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index 1d836a5540f3b1..e02fae6757f126 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -406,8 +406,10 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ err = PTR_ERR(new_de);
+ goto out_dir;
+ }
+- nilfs_set_link(new_dir, new_de, new_folio, old_inode);
++ err = nilfs_set_link(new_dir, new_de, new_folio, old_inode);
+ folio_release_kmap(new_folio, new_de);
++ if (unlikely(err))
++ goto out_dir;
+ nilfs_mark_inode_dirty(new_dir);
+ inode_set_ctime_current(new_inode);
+ if (dir_de)
+@@ -430,28 +432,27 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ */
+ inode_set_ctime_current(old_inode);
+
+- nilfs_delete_entry(old_de, old_folio);
+-
+- if (dir_de) {
+- nilfs_set_link(old_inode, dir_de, dir_folio, new_dir);
+- folio_release_kmap(dir_folio, dir_de);
+- drop_nlink(old_dir);
++ err = nilfs_delete_entry(old_de, old_folio);
++ if (likely(!err)) {
++ if (dir_de) {
++ err = nilfs_set_link(old_inode, dir_de, dir_folio,
++ new_dir);
++ drop_nlink(old_dir);
++ }
++ nilfs_mark_inode_dirty(old_dir);
+ }
+- folio_release_kmap(old_folio, old_de);
+-
+- nilfs_mark_inode_dirty(old_dir);
+ nilfs_mark_inode_dirty(old_inode);
+
+- err = nilfs_transaction_commit(old_dir->i_sb);
+- return err;
+-
+ out_dir:
+ if (dir_de)
+ folio_release_kmap(dir_folio, dir_de);
+ out_old:
+ folio_release_kmap(old_folio, old_de);
+ out:
+- nilfs_transaction_abort(old_dir->i_sb);
++ if (likely(!err))
++ err = nilfs_transaction_commit(old_dir->i_sb);
++ else
++ nilfs_transaction_abort(old_dir->i_sb);
+ return err;
+ }
+
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index dff241c53fc583..cb6ed54accd7ba 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -261,8 +261,8 @@ struct nilfs_dir_entry *nilfs_find_entry(struct inode *, const struct qstr *,
+ int nilfs_delete_entry(struct nilfs_dir_entry *, struct folio *);
+ int nilfs_empty_dir(struct inode *);
+ struct nilfs_dir_entry *nilfs_dotdot(struct inode *, struct folio **);
+-void nilfs_set_link(struct inode *, struct nilfs_dir_entry *,
+- struct folio *, struct inode *);
++int nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
++ struct folio *folio, struct inode *inode);
+
+ /* file.c */
+ extern int nilfs_sync_file(struct file *, loff_t, loff_t, int);
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 6dd8b854cd1f38..06f18fe86407e4 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -392,6 +392,11 @@ void nilfs_clear_dirty_pages(struct address_space *mapping)
+ /**
+ * nilfs_clear_folio_dirty - discard dirty folio
+ * @folio: dirty folio that will be discarded
++ *
++ * nilfs_clear_folio_dirty() clears working states, including the dirty state,
++ * for the folio and its buffers. If the folio has buffers, they are cleared
++ * only after it is confirmed that none of the buffer heads are busy (none
++ * have valid references and none are locked).
+ */
+ void nilfs_clear_folio_dirty(struct folio *folio)
+ {
+@@ -399,10 +404,6 @@ void nilfs_clear_folio_dirty(struct folio *folio)
+
+ BUG_ON(!folio_test_locked(folio));
+
+- folio_clear_uptodate(folio);
+- folio_clear_mappedtodisk(folio);
+- folio_clear_checked(folio);
+-
+ head = folio_buffers(folio);
+ if (head) {
+ const unsigned long clear_bits =
+@@ -410,6 +411,25 @@ void nilfs_clear_folio_dirty(struct folio *folio)
+ BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) |
+ BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) |
+ BIT(BH_Delay));
++ bool busy, invalidated = false;
++
++recheck_buffers:
++ busy = false;
++ bh = head;
++ do {
++ if (atomic_read(&bh->b_count) | buffer_locked(bh)) {
++ busy = true;
++ break;
++ }
++ } while (bh = bh->b_this_page, bh != head);
++
++ if (busy) {
++ if (invalidated)
++ return;
++ invalidate_bh_lrus();
++ invalidated = true;
++ goto recheck_buffers;
++ }
+
+ bh = head;
+ do {
+@@ -419,6 +439,9 @@ void nilfs_clear_folio_dirty(struct folio *folio)
+ } while (bh = bh->b_this_page, bh != head);
+ }
+
++ folio_clear_uptodate(folio);
++ folio_clear_mappedtodisk(folio);
++ folio_clear_checked(folio);
+ __nilfs_clear_folio_dirty(folio);
+ }
+
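The recheck_buffers loop added above scans every buffer head, and on finding a busy one invalidates the buffer-head LRUs exactly once before rescanning; a busy hit after that aborts the clear. A compact userspace analogue of that scan, invalidate, rescan pattern (data is made up; invalidate_lrus() models invalidate_bh_lrus() releasing a cached reference):

    #include <stdbool.h>
    #include <stdio.h>

    static bool busy[3] = { false, true, false };
    static void invalidate_lrus(void) { busy[1] = false; }

    static bool all_idle(void)
    {
        bool invalidated = false;
    recheck:
        for (int i = 0; i < 3; i++) {
            if (busy[i]) {
                if (invalidated)
                    return false;   /* still busy after one retry */
                invalidate_lrus();
                invalidated = true;
                goto recheck;
            }
        }
        return true;
    }

    int main(void) { printf("%s\n", all_idle() ? "idle" : "busy"); }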
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 58725183089733..58a598b548fa28 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -734,7 +734,6 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
+ if (!head)
+ head = create_empty_buffers(folio,
+ i_blocksize(inode), 0);
+- folio_unlock(folio);
+
+ bh = head;
+ do {
+@@ -744,11 +743,14 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
+ list_add_tail(&bh->b_assoc_buffers, listp);
+ ndirties++;
+ if (unlikely(ndirties >= nlimit)) {
++ folio_unlock(folio);
+ folio_batch_release(&fbatch);
+ cond_resched();
+ return ndirties;
+ }
+ } while (bh = bh->b_this_page, bh != head);
++
++ folio_unlock(folio);
+ }
+ folio_batch_release(&fbatch);
+ cond_resched();
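The segment.c hunk moves folio_unlock() so the folio stays locked for the entire buffer walk, unlocking on both the early "limit reached" return and the fall-through path. A sketch of that lock-scope discipline with a pthread mutex standing in for the folio lock (names hypothetical):

    #include <pthread.h>

    static pthread_mutex_t folio_lock = PTHREAD_MUTEX_INITIALIZER;
    static int dirty[8];

    static int collect(int limit)
    {
        int n = 0;

        pthread_mutex_lock(&folio_lock);
        for (int i = 0; i < 8; i++) {
            if (dirty[i] && ++n >= limit) {
                pthread_mutex_unlock(&folio_lock);  /* early exit */
                return n;
            }
        }
        pthread_mutex_unlock(&folio_lock);          /* fall-through exit */
        return n;
    }

    int main(void) { return collect(4); }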
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index 3404e7a30c330c..15d9acd456ecce 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -761,6 +761,11 @@ static int ocfs2_release_dquot(struct dquot *dquot)
+ handle = ocfs2_start_trans(osb,
+ ocfs2_calc_qdel_credits(dquot->dq_sb, dquot->dq_id.type));
+ if (IS_ERR(handle)) {
++ /*
++ * Mark dquot as inactive to avoid endless cycle in
++ * quota_release_workfn().
++ */
++ clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
+ status = PTR_ERR(handle);
+ mlog_errno(status);
+ goto out_ilock;
+diff --git a/fs/pstore/blk.c b/fs/pstore/blk.c
+index 65b2473e22ff9c..fa6b8cb788a1f2 100644
+--- a/fs/pstore/blk.c
++++ b/fs/pstore/blk.c
+@@ -89,7 +89,7 @@ static struct pstore_device_info *pstore_device_info;
+ _##name_ = check_size(name, alignsize); \
+ else \
+ _##name_ = 0; \
+- /* Synchronize module parameters with resuls. */ \
++ /* Synchronize module parameters with results. */ \
+ name = _##name_ / 1024; \
+ dev->zone.name = _##name_; \
+ }
+@@ -121,7 +121,7 @@ static int __register_pstore_device(struct pstore_device_info *dev)
+ if (pstore_device_info)
+ return -EBUSY;
+
+- /* zero means not limit on which backends to attempt to store. */
++ /* zero means no limit on which backends attempt to store. */
+ if (!dev->flags)
+ dev->flags = UINT_MAX;
+
+diff --git a/fs/select.c b/fs/select.c
+index a77907faf2b459..834f438296e2ba 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -787,7 +787,7 @@ static inline int get_sigset_argpack(struct sigset_argpack *to,
+ }
+ return 0;
+ Efault:
+- user_access_end();
++ user_read_access_end();
+ return -EFAULT;
+ }
+
+@@ -1361,7 +1361,7 @@ static inline int get_compat_sigset_argpack(struct compat_sigset_argpack *to,
+ }
+ return 0;
+ Efault:
+- user_access_end();
++ user_read_access_end();
+ return -EFAULT;
+ }
+
+diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c
+index c68ad526a4de1b..ebe9a7d7c70e86 100644
+--- a/fs/smb/client/cifsacl.c
++++ b/fs/smb/client/cifsacl.c
+@@ -1395,7 +1395,7 @@ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ struct smb_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb,
+ const struct cifs_fid *cifsfid, u32 *pacllen,
+- u32 __maybe_unused unused)
++ u32 info)
+ {
+ struct smb_ntsd *pntsd = NULL;
+ unsigned int xid;
+@@ -1407,7 +1407,7 @@ struct smb_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb,
+
+ xid = get_xid();
+ rc = CIFSSMBGetCIFSACL(xid, tlink_tcon(tlink), cifsfid->netfid, &pntsd,
+- pacllen);
++ pacllen, info);
+ free_xid(xid);
+
+ cifs_put_tlink(tlink);
+@@ -1419,7 +1419,7 @@ struct smb_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb,
+ }
+
+ static struct smb_ntsd *get_cifs_acl_by_path(struct cifs_sb_info *cifs_sb,
+- const char *path, u32 *pacllen)
++ const char *path, u32 *pacllen, u32 info)
+ {
+ struct smb_ntsd *pntsd = NULL;
+ int oplock = 0;
+@@ -1446,9 +1446,12 @@ static struct smb_ntsd *get_cifs_acl_by_path(struct cifs_sb_info *cifs_sb,
+ .fid = &fid,
+ };
+
++ if (info & SACL_SECINFO)
++ oparms.desired_access |= SYSTEM_SECURITY;
++
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (!rc) {
+- rc = CIFSSMBGetCIFSACL(xid, tcon, fid.netfid, &pntsd, pacllen);
++ rc = CIFSSMBGetCIFSACL(xid, tcon, fid.netfid, &pntsd, pacllen, info);
+ CIFSSMBClose(xid, tcon, fid.netfid);
+ }
+
+@@ -1472,7 +1475,7 @@ struct smb_ntsd *get_cifs_acl(struct cifs_sb_info *cifs_sb,
+ if (inode)
+ open_file = find_readable_file(CIFS_I(inode), true);
+ if (!open_file)
+- return get_cifs_acl_by_path(cifs_sb, path, pacllen);
++ return get_cifs_acl_by_path(cifs_sb, path, pacllen, info);
+
+ pntsd = get_cifs_acl_by_fid(cifs_sb, &open_file->fid, pacllen, info);
+ cifsFileInfo_put(open_file);
+@@ -1485,7 +1488,7 @@ int set_cifs_acl(struct smb_ntsd *pnntsd, __u32 acllen,
+ {
+ int oplock = 0;
+ unsigned int xid;
+- int rc, access_flags;
++ int rc, access_flags = 0;
+ struct cifs_tcon *tcon;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct tcon_link *tlink = cifs_sb_tlink(cifs_sb);
+@@ -1498,10 +1501,12 @@ int set_cifs_acl(struct smb_ntsd *pnntsd, __u32 acllen,
+ tcon = tlink_tcon(tlink);
+ xid = get_xid();
+
+- if (aclflag == CIFS_ACL_OWNER || aclflag == CIFS_ACL_GROUP)
+- access_flags = WRITE_OWNER;
+- else
+- access_flags = WRITE_DAC;
++ if (aclflag & CIFS_ACL_OWNER || aclflag & CIFS_ACL_GROUP)
++ access_flags |= WRITE_OWNER;
++ if (aclflag & CIFS_ACL_SACL)
++ access_flags |= SYSTEM_SECURITY;
++ if (aclflag & CIFS_ACL_DACL)
++ access_flags |= WRITE_DAC;
+
+ oparms = (struct cifs_open_parms) {
+ .tcon = tcon,
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index a697e53ccee2be..907af3cbffd1bc 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -568,7 +568,7 @@ extern int CIFSSMBSetEA(const unsigned int xid, struct cifs_tcon *tcon,
+ const struct nls_table *nls_codepage,
+ struct cifs_sb_info *cifs_sb);
+ extern int CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon,
+- __u16 fid, struct smb_ntsd **acl_inf, __u32 *buflen);
++ __u16 fid, struct smb_ntsd **acl_inf, __u32 *buflen, __u32 info);
+ extern int CIFSSMBSetCIFSACL(const unsigned int, struct cifs_tcon *, __u16,
+ struct smb_ntsd *pntsd, __u32 len, int aclflag);
+ extern int cifs_do_get_acl(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 0eae60731c20c0..e2a14e25da87ce 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -3427,7 +3427,7 @@ validate_ntransact(char *buf, char **ppparm, char **ppdata,
+ /* Get Security Descriptor (by handle) from remote server for a file or dir */
+ int
+ CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon, __u16 fid,
+- struct smb_ntsd **acl_inf, __u32 *pbuflen)
++ struct smb_ntsd **acl_inf, __u32 *pbuflen, __u32 info)
+ {
+ int rc = 0;
+ int buf_type = 0;
+@@ -3450,7 +3450,7 @@ CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon, __u16 fid,
+ pSMB->MaxSetupCount = 0;
+ pSMB->Fid = fid; /* file handle always le */
+ pSMB->AclFlags = cpu_to_le32(CIFS_ACL_OWNER | CIFS_ACL_GROUP |
+- CIFS_ACL_DACL);
++ CIFS_ACL_DACL | info);
+ pSMB->ByteCount = cpu_to_le16(11); /* 3 bytes pad + 8 bytes parm */
+ inc_rfc1001_len(pSMB, 11);
+ iov[0].iov_base = (char *)pSMB;
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index 273358d20a46c9..50f96259d9adc2 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -413,7 +413,7 @@ _initiate_cifs_search(const unsigned int xid, struct file *file,
+ cifsFile->invalidHandle = false;
+ } else if ((rc == -EOPNOTSUPP) &&
+ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {
+- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
++ cifs_autodisable_serverino(cifs_sb);
+ goto ffirst_retry;
+ }
+ error_exit:
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index d3abb99cc99094..e56a8df23fec9a 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -674,11 +674,12 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+-static void wsl_to_fattr(struct cifs_open_info_data *data,
++static bool wsl_to_fattr(struct cifs_open_info_data *data,
+ struct cifs_sb_info *cifs_sb,
+ u32 tag, struct cifs_fattr *fattr)
+ {
+ struct smb2_file_full_ea_info *ea;
++ bool have_xattr_dev = false;
+ u32 next = 0;
+
+ switch (tag) {
+@@ -721,13 +722,24 @@ static void wsl_to_fattr(struct cifs_open_info_data *data,
+ fattr->cf_uid = wsl_make_kuid(cifs_sb, v);
+ else if (!strncmp(name, SMB2_WSL_XATTR_GID, nlen))
+ fattr->cf_gid = wsl_make_kgid(cifs_sb, v);
+- else if (!strncmp(name, SMB2_WSL_XATTR_MODE, nlen))
++ else if (!strncmp(name, SMB2_WSL_XATTR_MODE, nlen)) {
++ /* File type in reparse point tag and in xattr mode must match. */
++ if (S_DT(fattr->cf_mode) != S_DT(le32_to_cpu(*(__le32 *)v)))
++ return false;
+ fattr->cf_mode = (umode_t)le32_to_cpu(*(__le32 *)v);
+- else if (!strncmp(name, SMB2_WSL_XATTR_DEV, nlen))
++ } else if (!strncmp(name, SMB2_WSL_XATTR_DEV, nlen)) {
+ fattr->cf_rdev = reparse_mkdev(v);
++ have_xattr_dev = true;
++ }
+ } while (next);
+ out:
++
++ /* Major and minor numbers for char and block devices are mandatory. */
++ if (!have_xattr_dev && (tag == IO_REPARSE_TAG_LX_CHR || tag == IO_REPARSE_TAG_LX_BLK))
++ return false;
++
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
++ return true;
+ }
+
+ static bool posix_reparse_to_fattr(struct cifs_sb_info *cifs_sb,
+@@ -801,7 +813,9 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ case IO_REPARSE_TAG_AF_UNIX:
+ case IO_REPARSE_TAG_LX_CHR:
+ case IO_REPARSE_TAG_LX_BLK:
+- wsl_to_fattr(data, cifs_sb, tag, fattr);
++ ok = wsl_to_fattr(data, cifs_sb, tag, fattr);
++ if (!ok)
++ return false;
+ break;
+ case IO_REPARSE_TAG_NFS:
+ ok = posix_reparse_to_fattr(cifs_sb, fattr, data);
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 7571fefeb83aa1..6bacf754b57efd 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -658,7 +658,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+
+ while (bytes_left >= (ssize_t)sizeof(*p)) {
+ memset(&tmp_iface, 0, sizeof(tmp_iface));
+- tmp_iface.speed = le64_to_cpu(p->LinkSpeed);
++ /* default to 1Gbps when link speed is unset */
++ tmp_iface.speed = le64_to_cpu(p->LinkSpeed) ?: 1000000000;
+ tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
+ tmp_iface.rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;
+
+diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c
+index 5cc69beaa62ecf..10a86c02a8b328 100644
+--- a/fs/ubifs/debug.c
++++ b/fs/ubifs/debug.c
+@@ -946,16 +946,20 @@ void ubifs_dump_tnc(struct ubifs_info *c)
+
+ pr_err("\n");
+ pr_err("(pid %d) start dumping TNC tree\n", current->pid);
+- znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, NULL);
+- level = znode->level;
+- pr_err("== Level %d ==\n", level);
+- while (znode) {
+- if (level != znode->level) {
+- level = znode->level;
+- pr_err("== Level %d ==\n", level);
++ if (c->zroot.znode) {
++ znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, NULL);
++ level = znode->level;
++ pr_err("== Level %d ==\n", level);
++ while (znode) {
++ if (level != znode->level) {
++ level = znode->level;
++ pr_err("== Level %d ==\n", level);
++ }
++ ubifs_dump_znode(c, znode);
++ znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, znode);
+ }
+- ubifs_dump_znode(c, znode);
+- znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, znode);
++ } else {
++ pr_err("empty TNC tree in memory\n");
+ }
+ pr_err("(pid %d) finish dumping TNC tree\n", current->pid);
+ }
+diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
+index aa4dbda7b5365e..6bcbdc8bf186da 100644
+--- a/fs/xfs/xfs_buf.c
++++ b/fs/xfs/xfs_buf.c
+@@ -663,9 +663,8 @@ xfs_buf_find_insert(
+ spin_unlock(&bch->bc_lock);
+ goto out_free_buf;
+ }
+- if (bp) {
++ if (bp && atomic_inc_not_zero(&bp->b_hold)) {
+ /* found an existing buffer */
+- atomic_inc(&bp->b_hold);
+ spin_unlock(&bch->bc_lock);
+ error = xfs_buf_find_lock(bp, flags);
+ if (error)
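The xfs_buf change above swaps an unconditional atomic_inc() for atomic_inc_not_zero(), so a buffer whose hold count already reached zero is treated as "not found" rather than resurrected. A sketch of the compare-and-swap loop behind that primitive in portable C11 (atomic_inc_not_zero itself is kernel-only):

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool get_not_zero(atomic_int *hold)
    {
        int old = atomic_load(hold);

        while (old != 0) {
            /* on failure, old is reloaded with the current value */
            if (atomic_compare_exchange_weak(hold, &old, old + 1))
                return true;    /* reference taken */
        }
        return false;           /* object is already on its way out */
    }

    int main(void)
    {
        atomic_int hold = 1;
        return get_not_zero(&hold) ? 0 : 1;
    }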
+diff --git a/fs/xfs/xfs_notify_failure.c b/fs/xfs/xfs_notify_failure.c
+index fa50e5308292d3..0b0b0f31aca274 100644
+--- a/fs/xfs/xfs_notify_failure.c
++++ b/fs/xfs/xfs_notify_failure.c
+@@ -153,6 +153,79 @@ xfs_dax_notify_failure_thaw(
+ thaw_super(sb, FREEZE_HOLDER_USERSPACE);
+ }
+
++static int
++xfs_dax_translate_range(
++ struct xfs_buftarg *btp,
++ u64 offset,
++ u64 len,
++ xfs_daddr_t *daddr,
++ uint64_t *bblen)
++{
++ u64 dev_start = btp->bt_dax_part_off;
++ u64 dev_len = bdev_nr_bytes(btp->bt_bdev);
++ u64 dev_end = dev_start + dev_len - 1;
++
++ /* Notify failure on the whole device. */
++ if (offset == 0 && len == U64_MAX) {
++ offset = dev_start;
++ len = dev_len;
++ }
++
++ /* Ignore the range out of filesystem area */
++ if (offset + len - 1 < dev_start)
++ return -ENXIO;
++ if (offset > dev_end)
++ return -ENXIO;
++
++ /* Calculate the real range when it touches the boundary */
++ if (offset > dev_start)
++ offset -= dev_start;
++ else {
++ len -= dev_start - offset;
++ offset = 0;
++ }
++ if (offset + len - 1 > dev_end)
++ len = dev_end - offset + 1;
++
++ *daddr = BTOBB(offset);
++ *bblen = BTOBB(len);
++ return 0;
++}
++
++static int
++xfs_dax_notify_logdev_failure(
++ struct xfs_mount *mp,
++ u64 offset,
++ u64 len,
++ int mf_flags)
++{
++ xfs_daddr_t daddr;
++ uint64_t bblen;
++ int error;
++
++ /*
++ * Return ENXIO instead of shutting down the filesystem if the failed
++ * region is beyond the end of the log.
++ */
++ error = xfs_dax_translate_range(mp->m_logdev_targp,
++ offset, len, &daddr, &bblen);
++ if (error)
++ return error;
++
++ /*
++ * In the pre-remove case the failure notification is attempting to
++ * trigger a force unmount. The expectation is that the device is
++ * still present, but its removal is in progress and cannot be
++ * cancelled, so proceed with accessing the log device.
++ */
++ if (mf_flags & MF_MEM_PRE_REMOVE)
++ return 0;
++
++ xfs_err(mp, "ondisk log corrupt, shutting down fs!");
++ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_ONDISK);
++ return -EFSCORRUPTED;
++}
++
+ static int
+ xfs_dax_notify_ddev_failure(
+ struct xfs_mount *mp,
+@@ -263,8 +336,9 @@ xfs_dax_notify_failure(
+ int mf_flags)
+ {
+ struct xfs_mount *mp = dax_holder(dax_dev);
+- u64 ddev_start;
+- u64 ddev_end;
++ xfs_daddr_t daddr;
++ uint64_t bblen;
++ int error;
+
+ if (!(mp->m_super->s_flags & SB_BORN)) {
+ xfs_warn(mp, "filesystem is not ready for notify_failure()!");
+@@ -279,17 +353,7 @@ xfs_dax_notify_failure(
+
+ if (mp->m_logdev_targp && mp->m_logdev_targp->bt_daxdev == dax_dev &&
+ mp->m_logdev_targp != mp->m_ddev_targp) {
+- /*
+- * In the pre-remove case the failure notification is attempting
+- * to trigger a force unmount. The expectation is that the
+- * device is still present, but its removal is in progress and
+- * can not be cancelled, proceed with accessing the log device.
+- */
+- if (mf_flags & MF_MEM_PRE_REMOVE)
+- return 0;
+- xfs_err(mp, "ondisk log corrupt, shutting down fs!");
+- xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_ONDISK);
+- return -EFSCORRUPTED;
++ return xfs_dax_notify_logdev_failure(mp, offset, len, mf_flags);
+ }
+
+ if (!xfs_has_rmapbt(mp)) {
+@@ -297,33 +361,12 @@ xfs_dax_notify_failure(
+ return -EOPNOTSUPP;
+ }
+
+- ddev_start = mp->m_ddev_targp->bt_dax_part_off;
+- ddev_end = ddev_start + bdev_nr_bytes(mp->m_ddev_targp->bt_bdev) - 1;
+-
+- /* Notify failure on the whole device. */
+- if (offset == 0 && len == U64_MAX) {
+- offset = ddev_start;
+- len = bdev_nr_bytes(mp->m_ddev_targp->bt_bdev);
+- }
+-
+- /* Ignore the range out of filesystem area */
+- if (offset + len - 1 < ddev_start)
+- return -ENXIO;
+- if (offset > ddev_end)
+- return -ENXIO;
+-
+- /* Calculate the real range when it touches the boundary */
+- if (offset > ddev_start)
+- offset -= ddev_start;
+- else {
+- len -= ddev_start - offset;
+- offset = 0;
+- }
+- if (offset + len - 1 > ddev_end)
+- len = ddev_end - offset + 1;
++ error = xfs_dax_translate_range(mp->m_ddev_targp, offset, len, &daddr,
++ &bblen);
++ if (error)
++ return error;
+
+- return xfs_dax_notify_ddev_failure(mp, BTOBB(offset), BTOBB(len),
+- mf_flags);
++ return xfs_dax_notify_ddev_failure(mp, daddr, bblen, mf_flags);
+ }
+
+ const struct dax_holder_operations xfs_dax_holder_operations = {
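xfs_dax_translate_range() factors out the reject/rebase/clip arithmetic so the data and log devices share it. A standalone sketch mirroring the hunk above (uint64_t in place of xfs_daddr_t, no BTOBB conversion, and the whole-device special case for len == U64_MAX omitted):

    #include <errno.h>
    #include <stdint.h>

    static int translate(uint64_t dev_start, uint64_t dev_len,
                         uint64_t *off, uint64_t *len)
    {
        uint64_t dev_end = dev_start + dev_len - 1;

        if (*off + *len - 1 < dev_start || *off > dev_end)
            return -ENXIO;              /* fully outside the device */
        if (*off > dev_start)
            *off -= dev_start;          /* rebase onto the device */
        else {
            *len -= dev_start - *off;   /* clip the leading part */
            *off = 0;
        }
        if (*off + *len - 1 > dev_end)
            *len = dev_end - *off + 1;  /* clip the trailing part */
        return 0;
    }

    int main(void)
    {
        uint64_t off = 0, len = 8192;
        return translate(4096, 1 << 20, &off, &len);
    }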
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index d076ebd19a61e8..78b24b09048885 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -763,6 +763,7 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+ *event_status))
+ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_hw_disable_all_gpes(void))
++ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_hw_enable_all_wakeup_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
+diff --git a/include/dt-bindings/clock/imx93-clock.h b/include/dt-bindings/clock/imx93-clock.h
+index 787c9e74dc96db..c393fad3a3469c 100644
+--- a/include/dt-bindings/clock/imx93-clock.h
++++ b/include/dt-bindings/clock/imx93-clock.h
+@@ -204,6 +204,11 @@
+ #define IMX93_CLK_A55_SEL 199
+ #define IMX93_CLK_A55_CORE 200
+ #define IMX93_CLK_PDM_IPG 201
+-#define IMX93_CLK_END 202
++#define IMX91_CLK_ENET1_QOS_TSN 202
++#define IMX91_CLK_ENET_TIMER 203
++#define IMX91_CLK_ENET2_REGULAR 204
++#define IMX91_CLK_ENET2_REGULAR_GATE 205
++#define IMX91_CLK_ENET1_QOS_TSN_GATE 206
++#define IMX93_CLK_SPDIF_IPG 207
+
+ #endif
+diff --git a/include/dt-bindings/clock/sun50i-a64-ccu.h b/include/dt-bindings/clock/sun50i-a64-ccu.h
+index 175892189e9dcb..4f220ea7a23cc5 100644
+--- a/include/dt-bindings/clock/sun50i-a64-ccu.h
++++ b/include/dt-bindings/clock/sun50i-a64-ccu.h
+@@ -44,7 +44,9 @@
+ #define _DT_BINDINGS_CLK_SUN50I_A64_H_
+
+ #define CLK_PLL_VIDEO0 7
++#define CLK_PLL_VIDEO0_2X 8
+ #define CLK_PLL_PERIPH0 11
++#define CLK_PLL_MIPI 17
+
+ #define CLK_CPUX 21
+ #define CLK_BUS_MIPI_DSI 28
+diff --git a/include/linux/btf.h b/include/linux/btf.h
+index b8a583194c4a97..d99178ce01d21d 100644
+--- a/include/linux/btf.h
++++ b/include/linux/btf.h
+@@ -352,6 +352,11 @@ static inline bool btf_type_is_scalar(const struct btf_type *t)
+ return btf_type_is_int(t) || btf_type_is_enum(t);
+ }
+
++static inline bool btf_type_is_fwd(const struct btf_type *t)
++{
++ return BTF_INFO_KIND(t->info) == BTF_KIND_FWD;
++}
++
+ static inline bool btf_type_is_typedef(const struct btf_type *t)
+ {
+ return BTF_INFO_KIND(t->info) == BTF_KIND_TYPEDEF;
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index 45e598fe34766f..77e6e195d1d687 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -52,8 +52,8 @@ extern void do_coredump(const kernel_siginfo_t *siginfo);
+ #define __COREDUMP_PRINTK(Level, Format, ...) \
+ do { \
+ char comm[TASK_COMM_LEN]; \
+- \
+- get_task_comm(comm, current); \
++ /* This will always be NUL terminated. */ \
++ memcpy(comm, current->comm, sizeof(comm)); \
+ printk_ratelimited(Level "coredump: %d(%*pE): " Format "\n", \
+ task_tgid_vnr(current), (int)strlen(comm), comm, ##__VA_ARGS__); \
+ } while (0) \
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 12f6dc5675987b..b8b935b526033f 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -734,6 +734,9 @@ struct kernel_ethtool_ts_info {
+ * @rxfh_per_ctx_key: device supports setting different RSS key for each
+ * additional context. Netlink API should report hfunc, key, and input_xfrm
+ * for every context, not just context 0.
++ * @cap_rss_rxnfc_adds: device supports nonzero ring_cookie in filters with
++ * %FLOW_RSS flag; the queue ID from the filter is added to the value from
++ * the indirection table to determine the delivery queue.
+ * @rxfh_indir_space: max size of RSS indirection tables, if indirection table
+ * size as returned by @get_rxfh_indir_size may change during lifetime
+ * of the device. Leave as 0 if the table size is constant.
+@@ -956,6 +959,7 @@ struct ethtool_ops {
+ u32 cap_rss_ctx_supported:1;
+ u32 cap_rss_sym_xor_supported:1;
+ u32 rxfh_per_ctx_key:1;
++ u32 cap_rss_rxnfc_adds:1;
+ u32 rxfh_indir_space;
+ u16 rxfh_key_space;
+ u16 rxfh_priv_size;
+diff --git a/include/linux/export.h b/include/linux/export.h
+index 0bbd02fd351db9..1e04dbc675c2fa 100644
+--- a/include/linux/export.h
++++ b/include/linux/export.h
+@@ -60,7 +60,7 @@
+ #endif
+
+ #ifdef DEFAULT_SYMBOL_NAMESPACE
+-#define _EXPORT_SYMBOL(sym, license) __EXPORT_SYMBOL(sym, license, __stringify(DEFAULT_SYMBOL_NAMESPACE))
++#define _EXPORT_SYMBOL(sym, license) __EXPORT_SYMBOL(sym, license, DEFAULT_SYMBOL_NAMESPACE)
+ #else
+ #define _EXPORT_SYMBOL(sym, license) __EXPORT_SYMBOL(sym, license, "")
+ #endif
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index a7d60a1c72a09a..dd33423012538d 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -218,6 +218,7 @@ struct hid_item {
+ #define HID_GD_DOWN 0x00010091
+ #define HID_GD_RIGHT 0x00010092
+ #define HID_GD_LEFT 0x00010093
++#define HID_GD_DO_NOT_DISTURB 0x0001009b
+ /* Microsoft Win8 Wireless Radio Controls CA usage codes */
+ #define HID_GD_RFKILL_BTN 0x000100c6
+ #define HID_GD_RFKILL_LED 0x000100c7
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 456bca45ff0528..3750e56bfcbb36 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -5053,28 +5053,24 @@ static inline u8 ieee80211_mle_common_size(const u8 *data)
+ {
+ const struct ieee80211_multi_link_elem *mle = (const void *)data;
+ u16 control = le16_to_cpu(mle->control);
+- u8 common = 0;
+
+ switch (u16_get_bits(control, IEEE80211_ML_CONTROL_TYPE)) {
+ case IEEE80211_ML_CONTROL_TYPE_BASIC:
+ case IEEE80211_ML_CONTROL_TYPE_PREQ:
+ case IEEE80211_ML_CONTROL_TYPE_TDLS:
+ case IEEE80211_ML_CONTROL_TYPE_RECONF:
++ case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+ /*
+ * The length is the first octet pointed by mle->variable so no
+ * need to add anything
+ */
+ break;
+- case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+- if (control & IEEE80211_MLC_PRIO_ACCESS_PRES_AP_MLD_MAC_ADDR)
+- common += ETH_ALEN;
+- return common;
+ default:
+ WARN_ON(1);
+ return 0;
+ }
+
+- return sizeof(*mle) + common + mle->variable[0];
++ return sizeof(*mle) + mle->variable[0];
+ }
+
+ /**
+@@ -5312,8 +5308,7 @@ static inline bool ieee80211_mle_size_ok(const u8 *data, size_t len)
+ check_common_len = true;
+ break;
+ case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+- if (control & IEEE80211_MLC_PRIO_ACCESS_PRES_AP_MLD_MAC_ADDR)
+- common += ETH_ALEN;
++ common = ETH_ALEN + 1;
+ break;
+ default:
+ /* we don't know this type */
+diff --git a/include/linux/kallsyms.h b/include/linux/kallsyms.h
+index c3f075e8f60cb6..1c6a6c1704d8d0 100644
+--- a/include/linux/kallsyms.h
++++ b/include/linux/kallsyms.h
+@@ -57,10 +57,10 @@ static inline void *dereference_symbol_descriptor(void *ptr)
+
+ preempt_disable();
+ mod = __module_address((unsigned long)ptr);
+- preempt_enable();
+
+ if (mod)
+ ptr = dereference_module_function_descriptor(mod, ptr);
++ preempt_enable();
+ #endif
+ return ptr;
+ }
+diff --git a/include/linux/mroute_base.h b/include/linux/mroute_base.h
+index 9dd4bf1572553f..58a2401e4b551b 100644
+--- a/include/linux/mroute_base.h
++++ b/include/linux/mroute_base.h
+@@ -146,9 +146,9 @@ struct mr_mfc {
+ unsigned long last_assert;
+ int minvif;
+ int maxvif;
+- unsigned long bytes;
+- unsigned long pkt;
+- unsigned long wrong_if;
++ atomic_long_t bytes;
++ atomic_long_t pkt;
++ atomic_long_t wrong_if;
+ unsigned long lastuse;
+ unsigned char ttls[MAXVIFS];
+ refcount_t refcount;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 8896705ccd638b..02d3bafebbe77c 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2222,7 +2222,7 @@ struct net_device {
+ void *atalk_ptr;
+ #endif
+ #if IS_ENABLED(CONFIG_AX25)
+- void *ax25_ptr;
++ struct ax25_dev __rcu *ax25_ptr;
+ #endif
+ #if IS_ENABLED(CONFIG_CFG80211)
+ struct wireless_dev *ieee80211_ptr;
+diff --git a/include/linux/nfs_common.h b/include/linux/nfs_common.h
+index 5fc02df882521e..a541c3a0288750 100644
+--- a/include/linux/nfs_common.h
++++ b/include/linux/nfs_common.h
+@@ -9,9 +9,10 @@
+ #include <uapi/linux/nfs.h>
+
+ /* Mapping from NFS error code to "errno" error code. */
+-#define errno_NFSERR_IO EIO
+
+ int nfs_stat_to_errno(enum nfs_stat status);
+ int nfs4_stat_to_errno(int stat);
+
++__u32 nfs_localio_errno_to_nfs4_stat(int errno);
++
+ #endif /* _LINUX_NFS_COMMON_H */
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index fb908843f20928..347901525a46ae 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1266,12 +1266,18 @@ static inline void perf_sample_save_callchain(struct perf_sample_data *data,
+ }
+
+ static inline void perf_sample_save_raw_data(struct perf_sample_data *data,
++ struct perf_event *event,
+ struct perf_raw_record *raw)
+ {
+ struct perf_raw_frag *frag = &raw->frag;
+ u32 sum = 0;
+ int size;
+
++ if (!(event->attr.sample_type & PERF_SAMPLE_RAW))
++ return;
++ if (WARN_ON_ONCE(data->sample_flags & PERF_SAMPLE_RAW))
++ return;
++
+ do {
+ sum += frag->size;
+ if (perf_raw_frag_last(frag))
+diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
+index 78c8ac4951b581..c7abce28ed2995 100644
+--- a/include/linux/pps_kernel.h
++++ b/include/linux/pps_kernel.h
+@@ -56,8 +56,7 @@ struct pps_device {
+
+ unsigned int id; /* PPS source unique ID */
+ void const *lookup_cookie; /* For pps_lookup_dev() only */
+- struct cdev cdev;
+- struct device *dev;
++ struct device dev;
+ struct fasync_struct *async_queue; /* fasync method */
+ spinlock_t lock;
+ };
+diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
+index fd037c127bb071..551329220e4f34 100644
+--- a/include/linux/ptr_ring.h
++++ b/include/linux/ptr_ring.h
+@@ -615,15 +615,14 @@ static inline int ptr_ring_resize_noprof(struct ptr_ring *r, int size, gfp_t gfp
+ /*
+ * Note: producer lock is nested within consumer lock, so if you
+ * resize you must make sure all uses nest correctly.
+- * In particular if you consume ring in interrupt or BH context, you must
+- * disable interrupts/BH when doing so.
++ * In particular if you consume ring in BH context, you must
++ * disable BH when doing so.
+ */
+-static inline int ptr_ring_resize_multiple_noprof(struct ptr_ring **rings,
+- unsigned int nrings,
+- int size,
+- gfp_t gfp, void (*destroy)(void *))
++static inline int ptr_ring_resize_multiple_bh_noprof(struct ptr_ring **rings,
++ unsigned int nrings,
++ int size, gfp_t gfp,
++ void (*destroy)(void *))
+ {
+- unsigned long flags;
+ void ***queues;
+ int i;
+
+@@ -638,12 +637,12 @@ static inline int ptr_ring_resize_multiple_noprof(struct ptr_ring **rings,
+ }
+
+ for (i = 0; i < nrings; ++i) {
+- spin_lock_irqsave(&(rings[i])->consumer_lock, flags);
++ spin_lock_bh(&(rings[i])->consumer_lock);
+ spin_lock(&(rings[i])->producer_lock);
+ queues[i] = __ptr_ring_swap_queue(rings[i], queues[i],
+ size, gfp, destroy);
+ spin_unlock(&(rings[i])->producer_lock);
+- spin_unlock_irqrestore(&(rings[i])->consumer_lock, flags);
++ spin_unlock_bh(&(rings[i])->consumer_lock);
+ }
+
+ for (i = 0; i < nrings; ++i)
+@@ -662,8 +661,8 @@ static inline int ptr_ring_resize_multiple_noprof(struct ptr_ring **rings,
+ noqueues:
+ return -ENOMEM;
+ }
+-#define ptr_ring_resize_multiple(...) \
+- alloc_hooks(ptr_ring_resize_multiple_noprof(__VA_ARGS__))
++#define ptr_ring_resize_multiple_bh(...) \
++ alloc_hooks(ptr_ring_resize_multiple_bh_noprof(__VA_ARGS__))
+
+ static inline void ptr_ring_cleanup(struct ptr_ring *r, void (*destroy)(void *))
+ {
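The ptr_ring rename from _irqsave to _bh variants narrows the exclusion to bottom halves while keeping the documented nesting: consumer lock outer, producer lock inner. A sketch of that nesting order with pthread mutexes (the kernel version additionally disables BH when taking the outer lock):

    #include <pthread.h>

    static pthread_mutex_t consumer = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t producer = PTHREAD_MUTEX_INITIALIZER;

    static void resize_one(void)
    {
        pthread_mutex_lock(&consumer);      /* outer lock */
        pthread_mutex_lock(&producer);      /* inner lock */
        /* ... swap the old queue for the resized one ... */
        pthread_mutex_unlock(&producer);
        pthread_mutex_unlock(&consumer);
    }

    int main(void) { resize_one(); return 0; }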
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 02eaf84c8626f4..8982820dae2131 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -944,6 +944,7 @@ struct task_struct {
+ unsigned sched_reset_on_fork:1;
+ unsigned sched_contributes_to_load:1;
+ unsigned sched_migrated:1;
++ unsigned sched_task_hot:1;
+
+ /* Force alignment to the next boundary: */
+ unsigned :0;
+diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
+index 926496c9cc9c3b..bf178238a3083d 100644
+--- a/include/linux/skb_array.h
++++ b/include/linux/skb_array.h
+@@ -199,17 +199,18 @@ static inline int skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
+ return ptr_ring_resize(&a->ring, size, gfp, __skb_array_destroy_skb);
+ }
+
+-static inline int skb_array_resize_multiple_noprof(struct skb_array **rings,
+- int nrings, unsigned int size,
+- gfp_t gfp)
++static inline int skb_array_resize_multiple_bh_noprof(struct skb_array **rings,
++ int nrings,
++ unsigned int size,
++ gfp_t gfp)
+ {
+ BUILD_BUG_ON(offsetof(struct skb_array, ring));
+- return ptr_ring_resize_multiple_noprof((struct ptr_ring **)rings,
+- nrings, size, gfp,
+- __skb_array_destroy_skb);
++ return ptr_ring_resize_multiple_bh_noprof((struct ptr_ring **)rings,
++ nrings, size, gfp,
++ __skb_array_destroy_skb);
+ }
+-#define skb_array_resize_multiple(...) \
+- alloc_hooks(skb_array_resize_multiple_noprof(__VA_ARGS__))
++#define skb_array_resize_multiple_bh(...) \
++ alloc_hooks(skb_array_resize_multiple_bh_noprof(__VA_ARGS__))
+
+ static inline void skb_array_cleanup(struct skb_array *a)
+ {
+diff --git a/include/linux/usb/tcpm.h b/include/linux/usb/tcpm.h
+index 061da9546a8131..b22e659f81ba54 100644
+--- a/include/linux/usb/tcpm.h
++++ b/include/linux/usb/tcpm.h
+@@ -163,7 +163,8 @@ struct tcpc_dev {
+ void (*frs_sourcing_vbus)(struct tcpc_dev *dev);
+ int (*enable_auto_vbus_discharge)(struct tcpc_dev *dev, bool enable);
+ int (*set_auto_vbus_discharge_threshold)(struct tcpc_dev *dev, enum typec_pwr_opmode mode,
+- bool pps_active, u32 requested_vbus_voltage);
++ bool pps_active, u32 requested_vbus_voltage,
++ u32 pps_apdo_min_voltage);
+ bool (*is_vbus_vsafe0v)(struct tcpc_dev *dev);
+ void (*set_partner_usb_comm_capable)(struct tcpc_dev *dev, bool enable);
+ void (*check_contaminant)(struct tcpc_dev *dev);
+diff --git a/include/net/ax25.h b/include/net/ax25.h
+index cb622d84cd0cc4..4ee141aae0a29d 100644
+--- a/include/net/ax25.h
++++ b/include/net/ax25.h
+@@ -231,6 +231,7 @@ typedef struct ax25_dev {
+ #endif
+ refcount_t refcount;
+ bool device_up;
++ struct rcu_head rcu;
+ } ax25_dev;
+
+ typedef struct ax25_cb {
+@@ -290,9 +291,8 @@ static inline void ax25_dev_hold(ax25_dev *ax25_dev)
+
+ static inline void ax25_dev_put(ax25_dev *ax25_dev)
+ {
+- if (refcount_dec_and_test(&ax25_dev->refcount)) {
+- kfree(ax25_dev);
+- }
++ if (refcount_dec_and_test(&ax25_dev->refcount))
++ kfree_rcu(ax25_dev, rcu);
+ }
+ static inline __be16 ax25_type_trans(struct sk_buff *skb, struct net_device *dev)
+ {
+@@ -335,9 +335,9 @@ void ax25_digi_invert(const ax25_digi *, ax25_digi *);
+ extern spinlock_t ax25_dev_lock;
+
+ #if IS_ENABLED(CONFIG_AX25)
+-static inline ax25_dev *ax25_dev_ax25dev(struct net_device *dev)
++static inline ax25_dev *ax25_dev_ax25dev(const struct net_device *dev)
+ {
+- return dev->ax25_ptr;
++ return rcu_dereference_rtnl(dev->ax25_ptr);
+ }
+ #endif
+
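The ax25 conversion above makes dev->ax25_ptr RCU-managed: readers dereference it under RCU or RTNL, and the final put defers the free with kfree_rcu() until a grace period passes. A loose userspace analogue using a C11 atomic pointer (RCU's grace-period machinery has no portable equivalent and is only marked by a comment here):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct dev { void *_Atomic ax25; };

    static void *reader(struct dev *d)
    {
        return atomic_load(&d->ax25);   /* rcu_dereference() analogue */
    }

    static void unpublish(struct dev *d, void *old)
    {
        atomic_store(&d->ax25, NULL);   /* hide it from new readers */
        /* an RCU grace period would elapse here (kfree_rcu) */
        free(old);
    }

    int main(void)
    {
        struct dev d = { .ax25 = malloc(16) };

        unpublish(&d, reader(&d));
        return 0;
    }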
+diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h
+index 74ff688568a0c6..f475757daafba9 100644
+--- a/include/net/inetpeer.h
++++ b/include/net/inetpeer.h
+@@ -96,30 +96,28 @@ static inline struct in6_addr *inetpeer_get_addr_v6(struct inetpeer_addr *iaddr)
+
+ /* can be called with or without local BH being disabled */
+ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
+- const struct inetpeer_addr *daddr,
+- int create);
++ const struct inetpeer_addr *daddr);
+
+ static inline struct inet_peer *inet_getpeer_v4(struct inet_peer_base *base,
+ __be32 v4daddr,
+- int vif, int create)
++ int vif)
+ {
+ struct inetpeer_addr daddr;
+
+ daddr.a4.addr = v4daddr;
+ daddr.a4.vif = vif;
+ daddr.family = AF_INET;
+- return inet_getpeer(base, &daddr, create);
++ return inet_getpeer(base, &daddr);
+ }
+
+ static inline struct inet_peer *inet_getpeer_v6(struct inet_peer_base *base,
+- const struct in6_addr *v6daddr,
+- int create)
++ const struct in6_addr *v6daddr)
+ {
+ struct inetpeer_addr daddr;
+
+ daddr.a6 = *v6daddr;
+ daddr.family = AF_INET6;
+- return inet_getpeer(base, &daddr, create);
++ return inet_getpeer(base, &daddr);
+ }
+
+ static inline int inetpeer_addr_cmp(const struct inetpeer_addr *a,
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 471c353d32a4a5..788513cc384b7f 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -442,6 +442,9 @@ struct nft_set_ext;
+ * @remove: remove element from set
+ * @walk: iterate over all set elements
+ * @get: get set elements
++ * @ksize: kernel set size
++ * @usize: userspace set size
++ * @adjust_maxsize: delta to adjust maximum set size
+ * @commit: commit set elements
+ * @abort: abort set elements
+ * @privsize: function to return size of set private data
+@@ -495,6 +498,9 @@ struct nft_set_ops {
+ const struct nft_set *set,
+ const struct nft_set_elem *elem,
+ unsigned int flags);
++ u32 (*ksize)(u32 size);
++ u32 (*usize)(u32 size);
++ u32 (*adjust_maxsize)(const struct nft_set *set);
+ void (*commit)(struct nft_set *set);
+ void (*abort)(const struct nft_set *set);
+ u64 (*privsize)(const struct nlattr * const nla[],
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index ae60d66640954c..23dd647fe0248c 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -43,6 +43,7 @@ struct netns_xfrm {
+ struct hlist_head __rcu *state_bysrc;
+ struct hlist_head __rcu *state_byspi;
+ struct hlist_head __rcu *state_byseq;
++ struct hlist_head __percpu *state_cache_input;
+ unsigned int state_hmask;
+ unsigned int state_num;
+ struct work_struct state_hash_work;
+diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
+index 4880b3a7aced5b..4229e4fcd2a9ee 100644
+--- a/include/net/pkt_cls.h
++++ b/include/net/pkt_cls.h
+@@ -75,11 +75,11 @@ static inline bool tcf_block_non_null_shared(struct tcf_block *block)
+ }
+
+ #ifdef CONFIG_NET_CLS_ACT
+-DECLARE_STATIC_KEY_FALSE(tcf_bypass_check_needed_key);
++DECLARE_STATIC_KEY_FALSE(tcf_sw_enabled_key);
+
+ static inline bool tcf_block_bypass_sw(struct tcf_block *block)
+ {
+- return block && block->bypass_wanted;
++ return block && !atomic_read(&block->useswcnt);
+ }
+ #endif
+
+@@ -759,6 +759,15 @@ tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common,
+ cls_common->extack = extack;
+ }
+
++static inline void tcf_proto_update_usesw(struct tcf_proto *tp, u32 flags)
++{
++ if (tp->usesw)
++ return;
++ if (tc_skip_sw(flags) && tc_in_hw(flags))
++ return;
++ tp->usesw = true;
++}
++
+ #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
+ static inline struct tc_skb_ext *tc_skb_ext_alloc(struct sk_buff *skb)
+ {
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 5d74fa7e694cc8..1e6324f0d4efda 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -425,6 +425,7 @@ struct tcf_proto {
+ spinlock_t lock;
+ bool deleting;
+ bool counted;
++ bool usesw;
+ refcount_t refcnt;
+ struct rcu_head rcu;
+ struct hlist_node destroy_ht_node;
+@@ -474,9 +475,7 @@ struct tcf_block {
+ struct flow_block flow_block;
+ struct list_head owner_list;
+ bool keep_dst;
+- bool bypass_wanted;
+- atomic_t filtercnt; /* Number of filters */
+- atomic_t skipswcnt; /* Number of skip_sw filters */
++ atomic_t useswcnt;
+ atomic_t offloadcnt; /* Number of oddloaded filters */
+ unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
+ unsigned int lockeddevcnt; /* Number of devs that require rtnl lock. */
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index a0bdd58f401c0f..83e9ef25b8d0d4 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -184,10 +184,13 @@ struct xfrm_state {
+ };
+ struct hlist_node byspi;
+ struct hlist_node byseq;
++ struct hlist_node state_cache;
++ struct hlist_node state_cache_input;
+
+ refcount_t refcnt;
+ spinlock_t lock;
+
++ u32 pcpu_num;
+ struct xfrm_id id;
+ struct xfrm_selector sel;
+ struct xfrm_mark mark;
+@@ -536,6 +539,7 @@ struct xfrm_policy_queue {
+ * @xp_net: network namespace the policy lives in
+ * @bydst: hlist node for SPD hash table or rbtree list
+ * @byidx: hlist node for index hash table
++ * @state_cache_list: hlist head for policy cached xfrm states
+ * @lock: serialize changes to policy structure members
+ * @refcnt: reference count, freed once it reaches 0
+ * @pos: kernel internal tie-breaker to determine age of policy
+@@ -566,6 +570,8 @@ struct xfrm_policy {
+ struct hlist_node bydst;
+ struct hlist_node byidx;
+
++ struct hlist_head state_cache_list;
++
+ /* This lock only affects elements except for entry. */
+ rwlock_t lock;
+ refcount_t refcnt;
+@@ -1217,9 +1223,19 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+
+ if (xo) {
+ x = xfrm_input_state(skb);
+- if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET)
+- return (xo->flags & CRYPTO_DONE) &&
+- (xo->status & CRYPTO_SUCCESS);
++ if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) {
++ bool check = (xo->flags & CRYPTO_DONE) &&
++ (xo->status & CRYPTO_SUCCESS);
++
++ /* The packets here are plain ones and secpath was
++ * needed to indicate that hardware already handled
++ * them and there is no need to do anything in addition.
++ *
++ * Consume secpath which was set by drivers.
++ */
++ secpath_reset(skb);
++ return check;
++ }
+ }
+
+ return __xfrm_check_nopolicy(net, skb, dir) ||
+@@ -1645,6 +1661,10 @@ int xfrm_state_update(struct xfrm_state *x);
+ struct xfrm_state *xfrm_state_lookup(struct net *net, u32 mark,
+ const xfrm_address_t *daddr, __be32 spi,
+ u8 proto, unsigned short family);
++struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
++ const xfrm_address_t *daddr,
++ __be32 spi, u8 proto,
++ unsigned short family);
+ struct xfrm_state *xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+@@ -1684,7 +1704,7 @@ struct xfrmk_spdinfo {
+ u32 spdhmcnt;
+ };
+
+-struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
++struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num);
+ int xfrm_state_delete(struct xfrm_state *x);
+ int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
+ int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
+@@ -1796,7 +1816,7 @@ int verify_spi_info(u8 proto, u32 min, u32 max, struct netlink_ext_ack *extack);
+ int xfrm_alloc_spi(struct xfrm_state *x, u32 minspi, u32 maxspi,
+ struct netlink_ext_ack *extack);
+ struct xfrm_state *xfrm_find_acq(struct net *net, const struct xfrm_mark *mark,
+- u8 mode, u32 reqid, u32 if_id, u8 proto,
++ u8 mode, u32 reqid, u32 if_id, u32 pcpu_num, u8 proto,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr, int create,
+ unsigned short family);
+diff --git a/include/sound/hdaudio_ext.h b/include/sound/hdaudio_ext.h
+index 957295364a5e3c..4c7a40e149a594 100644
+--- a/include/sound/hdaudio_ext.h
++++ b/include/sound/hdaudio_ext.h
+@@ -2,8 +2,6 @@
+ #ifndef __SOUND_HDAUDIO_EXT_H
+ #define __SOUND_HDAUDIO_EXT_H
+
+-#include <linux/io-64-nonatomic-lo-hi.h>
+-#include <linux/iopoll.h>
+ #include <sound/hdaudio.h>
+
+ int snd_hdac_ext_bus_init(struct hdac_bus *bus, struct device *dev,
+@@ -119,49 +117,6 @@ int snd_hdac_ext_bus_link_put(struct hdac_bus *bus, struct hdac_ext_link *hlink)
+
+ void snd_hdac_ext_bus_link_power(struct hdac_device *codec, bool enable);
+
+-#define snd_hdac_adsp_writeb(chip, reg, value) \
+- snd_hdac_reg_writeb(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readb(chip, reg) \
+- snd_hdac_reg_readb(chip, (chip)->dsp_ba + (reg))
+-#define snd_hdac_adsp_writew(chip, reg, value) \
+- snd_hdac_reg_writew(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readw(chip, reg) \
+- snd_hdac_reg_readw(chip, (chip)->dsp_ba + (reg))
+-#define snd_hdac_adsp_writel(chip, reg, value) \
+- snd_hdac_reg_writel(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readl(chip, reg) \
+- snd_hdac_reg_readl(chip, (chip)->dsp_ba + (reg))
+-#define snd_hdac_adsp_writeq(chip, reg, value) \
+- snd_hdac_reg_writeq(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readq(chip, reg) \
+- snd_hdac_reg_readq(chip, (chip)->dsp_ba + (reg))
+-
+-#define snd_hdac_adsp_updateb(chip, reg, mask, val) \
+- snd_hdac_adsp_writeb(chip, reg, \
+- (snd_hdac_adsp_readb(chip, reg) & ~(mask)) | (val))
+-#define snd_hdac_adsp_updatew(chip, reg, mask, val) \
+- snd_hdac_adsp_writew(chip, reg, \
+- (snd_hdac_adsp_readw(chip, reg) & ~(mask)) | (val))
+-#define snd_hdac_adsp_updatel(chip, reg, mask, val) \
+- snd_hdac_adsp_writel(chip, reg, \
+- (snd_hdac_adsp_readl(chip, reg) & ~(mask)) | (val))
+-#define snd_hdac_adsp_updateq(chip, reg, mask, val) \
+- snd_hdac_adsp_writeq(chip, reg, \
+- (snd_hdac_adsp_readq(chip, reg) & ~(mask)) | (val))
+-
+-#define snd_hdac_adsp_readb_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readb_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-#define snd_hdac_adsp_readw_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readw_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-#define snd_hdac_adsp_readl_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readl_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-#define snd_hdac_adsp_readq_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readq_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-
+ struct hdac_ext_device;
+
+ /* ops common to all codec drivers */
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index a0aed1a428a183..9a75590227f262 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -118,6 +118,8 @@ enum yfs_cm_operation {
+ */
+ #define afs_call_traces \
+ EM(afs_call_trace_alloc, "ALLOC") \
++ EM(afs_call_trace_async_abort, "ASYAB") \
++ EM(afs_call_trace_async_kill, "ASYKL") \
+ EM(afs_call_trace_free, "FREE ") \
+ EM(afs_call_trace_get, "GET ") \
+ EM(afs_call_trace_put, "PUT ") \
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index cc22596c7250cf..666fe1779ccc63 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -117,6 +117,7 @@
+ #define rxrpc_call_poke_traces \
+ EM(rxrpc_call_poke_abort, "Abort") \
+ EM(rxrpc_call_poke_complete, "Compl") \
++ EM(rxrpc_call_poke_conn_abort, "Conn-abort") \
+ EM(rxrpc_call_poke_error, "Error") \
+ EM(rxrpc_call_poke_idle, "Idle") \
+ EM(rxrpc_call_poke_set_timeout, "Set-timo") \
+@@ -282,6 +283,7 @@
+ EM(rxrpc_call_see_activate_client, "SEE act-clnt") \
+ EM(rxrpc_call_see_connect_failed, "SEE con-fail") \
+ EM(rxrpc_call_see_connected, "SEE connect ") \
++ EM(rxrpc_call_see_conn_abort, "SEE conn-abt") \
+ EM(rxrpc_call_see_disconnected, "SEE disconn ") \
+ EM(rxrpc_call_see_distribute_error, "SEE dist-err") \
+ EM(rxrpc_call_see_input, "SEE input ") \
+@@ -956,6 +958,29 @@ TRACE_EVENT(rxrpc_rx_abort,
+ __entry->abort_code)
+ );
+
++TRACE_EVENT(rxrpc_rx_conn_abort,
++ TP_PROTO(const struct rxrpc_connection *conn, const struct sk_buff *skb),
++
++ TP_ARGS(conn, skb),
++
++ TP_STRUCT__entry(
++ __field(unsigned int, conn)
++ __field(rxrpc_serial_t, serial)
++ __field(u32, abort_code)
++ ),
++
++ TP_fast_assign(
++ __entry->conn = conn->debug_id;
++ __entry->serial = rxrpc_skb(skb)->hdr.serial;
++ __entry->abort_code = skb->priority;
++ ),
++
++ TP_printk("C=%08x ABORT %08x ac=%d",
++ __entry->conn,
++ __entry->serial,
++ __entry->abort_code)
++ );
++
+ TRACE_EVENT(rxrpc_rx_challenge,
+ TP_PROTO(struct rxrpc_connection *conn, rxrpc_serial_t serial,
+ u32 version, u32 nonce, u32 min_level),
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index f28701500714f6..d73a97e3030a86 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -322,6 +322,7 @@ enum xfrm_attr_type_t {
+ XFRMA_MTIMER_THRESH, /* __u32 in seconds for input SA */
+ XFRMA_SA_DIR, /* __u8 */
+ XFRMA_NAT_KEEPALIVE_INTERVAL, /* __u32 in seconds for NAT keepalive */
++ XFRMA_SA_PCPU, /* __u32 */
+ __XFRMA_MAX
+
+ #define XFRMA_OUTPUT_MARK XFRMA_SET_MARK /* Compatibility */
+@@ -437,6 +438,7 @@ struct xfrm_userpolicy_info {
+ #define XFRM_POLICY_LOCALOK 1 /* Allow user to override global policy */
+ /* Automatically expand selector to include matching ICMP payloads. */
+ #define XFRM_POLICY_ICMP 2
++#define XFRM_POLICY_CPU_ACQUIRE 4
+ __u8 share;
+ };
+
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 883510a3e8d075..874f9e2defd583 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -340,7 +340,7 @@ int io_uring_cmd_sock(struct io_uring_cmd *cmd, unsigned int issue_flags)
+ if (!prot || !prot->ioctl)
+ return -EOPNOTSUPP;
+
+- switch (cmd->sqe->cmd_op) {
++ switch (cmd->cmd_op) {
+ case SOCKET_URING_OP_SIOCINQ:
+ ret = prot->ioctl(sk, SIOCINQ, &arg);
+ if (ret)
+diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
+index e52b3ad231b9c4..93e48c7cad4eff 100644
+--- a/kernel/bpf/arena.c
++++ b/kernel/bpf/arena.c
+@@ -212,7 +212,7 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
+ struct vma_list {
+ struct vm_area_struct *vma;
+ struct list_head head;
+- atomic_t mmap_count;
++ refcount_t mmap_count;
+ };
+
+ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
+@@ -222,7 +222,7 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
+ vml = kmalloc(sizeof(*vml), GFP_KERNEL);
+ if (!vml)
+ return -ENOMEM;
+- atomic_set(&vml->mmap_count, 1);
++ refcount_set(&vml->mmap_count, 1);
+ vma->vm_private_data = vml;
+ vml->vma = vma;
+ list_add(&vml->head, &arena->vma_list);
+@@ -233,7 +233,7 @@ static void arena_vm_open(struct vm_area_struct *vma)
+ {
+ struct vma_list *vml = vma->vm_private_data;
+
+- atomic_inc(&vml->mmap_count);
++ refcount_inc(&vml->mmap_count);
+ }
+
+ static void arena_vm_close(struct vm_area_struct *vma)
+@@ -242,7 +242,7 @@ static void arena_vm_close(struct vm_area_struct *vma)
+ struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+ struct vma_list *vml = vma->vm_private_data;
+
+- if (!atomic_dec_and_test(&vml->mmap_count))
++ if (!refcount_dec_and_test(&vml->mmap_count))
+ return;
+ guard(mutex)(&arena->lock);
+ /* update link list under lock */
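Switching mmap_count from atomic_t to refcount_t buys saturation and warnings on overflow/underflow while keeping the same inc/dec-and-test shape. A plain C11 sketch of that shape (deliberately without refcount_t's saturation checks, which are exactly what the patch adds):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct vml { atomic_int mmap_count; };

    static void vml_open(struct vml *v)
    {
        atomic_fetch_add(&v->mmap_count, 1);
    }

    static void vml_close(struct vml *v)
    {
        /* fetch_sub returning 1 means we dropped the last reference */
        if (atomic_fetch_sub(&v->mmap_count, 1) == 1)
            free(v);
    }

    int main(void)
    {
        struct vml *v = malloc(sizeof(*v));

        atomic_init(&v->mmap_count, 1);
        vml_open(v);
        vml_close(v);
        vml_close(v);   /* last put frees */
        return 0;
    }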
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index c938dea5ddbf3a..050c784498abec 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -797,8 +797,12 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
+ smap->elem_size = offsetof(struct bpf_local_storage_elem,
+ sdata.data[attr->value_size]);
+
+- smap->bpf_ma = bpf_ma;
+- if (bpf_ma) {
++ /* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in non
++ * preemptible context. Thus, enforce all storages to use
++ * bpf_mem_alloc when CONFIG_PREEMPT_RT is enabled.
++ */
++ smap->bpf_ma = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : bpf_ma;
++ if (smap->bpf_ma) {
+ err = bpf_mem_alloc_init(&smap->selem_ma, smap->elem_size, false);
+ if (err)
+ goto free_smap;
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index b3a2ce1e5e22ec..b70d0eef8a284d 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -311,6 +311,20 @@ void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_ops_desc)
+ kfree(arg_info);
+ }
+
++static bool is_module_member(const struct btf *btf, u32 id)
++{
++ const struct btf_type *t;
++
++ t = btf_type_resolve_ptr(btf, id, NULL);
++ if (!t)
++ return false;
++
++ if (!__btf_type_is_struct(t) && !btf_type_is_fwd(t))
++ return false;
++
++ return !strcmp(btf_name_by_offset(btf, t->name_off), "module");
++}
++
+ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ struct btf *btf,
+ struct bpf_verifier_log *log)
+@@ -390,6 +404,13 @@ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ goto errout;
+ }
+
++ if (!st_ops_ids[IDX_MODULE_ID] && is_module_member(btf, member->type)) {
++ pr_warn("'struct module' btf id not found. Is CONFIG_MODULES enabled? bpf_struct_ops '%s' needs module support.\n",
++ st_ops->name);
++ err = -EOPNOTSUPP;
++ goto errout;
++ }
++
+ func_proto = btf_type_resolve_func_ptr(btf,
+ member->type,
+ NULL);
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 41d20b7199c4af..a44f4be592be79 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -498,11 +498,6 @@ bool btf_type_is_void(const struct btf_type *t)
+ return t == &btf_void;
+ }
+
+-static bool btf_type_is_fwd(const struct btf_type *t)
+-{
+- return BTF_INFO_KIND(t->info) == BTF_KIND_FWD;
+-}
+-
+ static bool btf_type_is_datasec(const struct btf_type *t)
+ {
+ return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC;
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 3d45ebe8afb48d..a05aeb34589641 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1593,10 +1593,24 @@ void bpf_timer_cancel_and_free(void *val)
+ * To avoid these issues, punt to workqueue context when we are in a
+ * timer callback.
+ */
+- if (this_cpu_read(hrtimer_running))
++ if (this_cpu_read(hrtimer_running)) {
+ queue_work(system_unbound_wq, &t->cb.delete_work);
+- else
++ return;
++ }
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++ /* If the timer is running on another CPU, also use a kworker to
++ * wait for the completion of the timer instead of trying to
++ * acquire a sleepable lock in hrtimer_cancel() to wait for its
++ * completion.
++ */
++ if (hrtimer_try_to_cancel(&t->timer) >= 0)
++ kfree_rcu(t, cb.rcu);
++ else
++ queue_work(system_unbound_wq, &t->cb.delete_work);
++ } else {
+ bpf_timer_delete_work(&t->cb.delete_work);
++ }
+ }
+
+ /* This function is called by map_delete/update_elem for individual element and
+diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
+index ff5683a57f7712..3b2bdca9f1d4b0 100644
+--- a/kernel/dma/coherent.c
++++ b/kernel/dma/coherent.c
+@@ -330,7 +330,8 @@ int dma_init_global_coherent(phys_addr_t phys_addr, size_t size)
+ #include <linux/of_reserved_mem.h>
+
+ #ifdef CONFIG_DMA_GLOBAL_POOL
+-static struct reserved_mem *dma_reserved_default_memory __initdata;
++static phys_addr_t dma_reserved_default_memory_base __initdata;
++static phys_addr_t dma_reserved_default_memory_size __initdata;
+ #endif
+
+ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
+@@ -376,9 +377,10 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
+
+ #ifdef CONFIG_DMA_GLOBAL_POOL
+ if (of_get_flat_dt_prop(node, "linux,dma-default", NULL)) {
+- WARN(dma_reserved_default_memory,
++ WARN(dma_reserved_default_memory_size,
+ "Reserved memory: region for default DMA coherent area is redefined\n");
+- dma_reserved_default_memory = rmem;
++ dma_reserved_default_memory_base = rmem->base;
++ dma_reserved_default_memory_size = rmem->size;
+ }
+ #endif
+
+@@ -391,10 +393,10 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
+ #ifdef CONFIG_DMA_GLOBAL_POOL
+ static int __init dma_init_reserved_memory(void)
+ {
+- if (!dma_reserved_default_memory)
++ if (!dma_reserved_default_memory_size)
+ return -ENOMEM;
+- return dma_init_global_coherent(dma_reserved_default_memory->base,
+- dma_reserved_default_memory->size);
++ return dma_init_global_coherent(dma_reserved_default_memory_base,
++ dma_reserved_default_memory_size);
+ }
+ core_initcall(dma_init_reserved_memory);
+ #endif /* CONFIG_DMA_GLOBAL_POOL */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index df27d08a723269..501d8c2fedff40 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10375,9 +10375,9 @@ static struct pmu perf_tracepoint = {
+ };
+
+ static int perf_tp_filter_match(struct perf_event *event,
+- struct perf_sample_data *data)
++ struct perf_raw_record *raw)
+ {
+- void *record = data->raw->frag.data;
++ void *record = raw->frag.data;
+
+ /* only top level events have filters set */
+ if (event->parent)
+@@ -10389,7 +10389,7 @@ static int perf_tp_filter_match(struct perf_event *event,
+ }
+
+ static int perf_tp_event_match(struct perf_event *event,
+- struct perf_sample_data *data,
++ struct perf_raw_record *raw,
+ struct pt_regs *regs)
+ {
+ if (event->hw.state & PERF_HES_STOPPED)
+@@ -10400,7 +10400,7 @@ static int perf_tp_event_match(struct perf_event *event,
+ if (event->attr.exclude_kernel && !user_mode(regs))
+ return 0;
+
+- if (!perf_tp_filter_match(event, data))
++ if (!perf_tp_filter_match(event, raw))
+ return 0;
+
+ return 1;
+@@ -10426,6 +10426,7 @@ EXPORT_SYMBOL_GPL(perf_trace_run_bpf_submit);
+ static void __perf_tp_event_target_task(u64 count, void *record,
+ struct pt_regs *regs,
+ struct perf_sample_data *data,
++ struct perf_raw_record *raw,
+ struct perf_event *event)
+ {
+ struct trace_entry *entry = record;
+@@ -10435,13 +10436,17 @@ static void __perf_tp_event_target_task(u64 count, void *record,
+ /* Cannot deliver synchronous signal to other task. */
+ if (event->attr.sigtrap)
+ return;
+- if (perf_tp_event_match(event, data, regs))
++ if (perf_tp_event_match(event, raw, regs)) {
++ perf_sample_data_init(data, 0, 0);
++ perf_sample_save_raw_data(data, event, raw);
+ perf_swevent_event(event, count, data, regs);
++ }
+ }
+
+ static void perf_tp_event_target_task(u64 count, void *record,
+ struct pt_regs *regs,
+ struct perf_sample_data *data,
++ struct perf_raw_record *raw,
+ struct perf_event_context *ctx)
+ {
+ unsigned int cpu = smp_processor_id();
+@@ -10449,15 +10454,15 @@ static void perf_tp_event_target_task(u64 count, void *record,
+ struct perf_event *event, *sibling;
+
+ perf_event_groups_for_cpu_pmu(event, &ctx->pinned_groups, cpu, pmu) {
+- __perf_tp_event_target_task(count, record, regs, data, event);
++ __perf_tp_event_target_task(count, record, regs, data, raw, event);
+ for_each_sibling_event(sibling, event)
+- __perf_tp_event_target_task(count, record, regs, data, sibling);
++ __perf_tp_event_target_task(count, record, regs, data, raw, sibling);
+ }
+
+ perf_event_groups_for_cpu_pmu(event, &ctx->flexible_groups, cpu, pmu) {
+- __perf_tp_event_target_task(count, record, regs, data, event);
++ __perf_tp_event_target_task(count, record, regs, data, raw, event);
+ for_each_sibling_event(sibling, event)
+- __perf_tp_event_target_task(count, record, regs, data, sibling);
++ __perf_tp_event_target_task(count, record, regs, data, raw, sibling);
+ }
+ }
+
+@@ -10475,15 +10480,10 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ },
+ };
+
+- perf_sample_data_init(&data, 0, 0);
+- perf_sample_save_raw_data(&data, &raw);
+-
+ perf_trace_buf_update(record, event_type);
+
+ hlist_for_each_entry_rcu(event, head, hlist_entry) {
+- if (perf_tp_event_match(event, &data, regs)) {
+- perf_swevent_event(event, count, &data, regs);
+-
++ if (perf_tp_event_match(event, &raw, regs)) {
+ /*
+ * Here use the same on-stack perf_sample_data,
+ * some members in data are event-specific and
+@@ -10493,7 +10493,8 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ * because data->sample_flags is set.
+ */
+ perf_sample_data_init(&data, 0, 0);
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
++ perf_swevent_event(event, count, &data, regs);
+ }
+ }
+
+@@ -10510,7 +10511,7 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ goto unlock;
+
+ raw_spin_lock(&ctx->lock);
+- perf_tp_event_target_task(count, record, regs, &data, ctx);
++ perf_tp_event_target_task(count, record, regs, &data, &raw, ctx);
+ raw_spin_unlock(&ctx->lock);
+ unlock:
+ rcu_read_unlock();
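The core.c rework initializes the sample data and saves the raw record per matching event instead of once up front, and the new guard in perf_sample_save_raw_data() makes that cheap: events without PERF_SAMPLE_RAW skip the copy entirely, and a double save is refused. A small sketch of the guard (struct and flag names hypothetical):

    #include <stddef.h>

    #define SAMPLE_RAW 0x1u     /* stand-in for PERF_SAMPLE_RAW */

    struct sample { unsigned int flags; const void *raw; };

    static void save_raw(struct sample *s, unsigned int wanted,
                         const void *raw)
    {
        if (!(wanted & SAMPLE_RAW))
            return;             /* event never asked for raw data */
        if (s->flags & SAMPLE_RAW)
            return;             /* refuse to save twice */
        s->raw = raw;
        s->flags |= SAMPLE_RAW;
    }

    int main(void)
    {
        struct sample s = { 0, NULL };
        int payload = 42;

        save_raw(&s, SAMPLE_RAW, &payload);
        return s.raw ? 0 : 1;
    }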
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index fe0272cd84a51a..a29df4b02a2ed9 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -441,10 +441,6 @@ static inline struct cpumask *irq_desc_get_pending_mask(struct irq_desc *desc)
+ {
+ return desc->pending_mask;
+ }
+-static inline bool handle_enforce_irqctx(struct irq_data *data)
+-{
+- return irqd_is_handle_enforce_irqctx(data);
+-}
+ bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
+ #else /* CONFIG_GENERIC_PENDING_IRQ */
+ static inline bool irq_can_move_pcntxt(struct irq_data *data)
+@@ -471,11 +467,12 @@ static inline bool irq_fixup_move_pending(struct irq_desc *desc, bool fclear)
+ {
+ return false;
+ }
++#endif /* !CONFIG_GENERIC_PENDING_IRQ */
++
+ static inline bool handle_enforce_irqctx(struct irq_data *data)
+ {
+- return false;
++ return irqd_is_handle_enforce_irqctx(data);
+ }
+-#endif /* !CONFIG_GENERIC_PENDING_IRQ */
+
+ #if !defined(CONFIG_IRQ_DOMAIN) || !defined(CONFIG_IRQ_DOMAIN_HIERARCHY)
+ static inline int irq_domain_activate_irq(struct irq_data *data, bool reserve)
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 49b9bca9de12f7..93a07387af3b75 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -2583,7 +2583,10 @@ static noinline int do_init_module(struct module *mod)
+ #endif
+ ret = module_enable_rodata_ro(mod, true);
+ if (ret)
+- goto fail_mutex_unlock;
++ pr_warn("%s: module_enable_rodata_ro_after_init() returned %d, "
++ "ro_after_init data might still be writable\n",
++ mod->name, ret);
++
+ mod_tree_remove_init(mod);
+ module_arch_freeing_init(mod);
+ for_class_mod_mem_type(type, init) {
+@@ -2622,8 +2625,6 @@ static noinline int do_init_module(struct module *mod)
+
+ return 0;
+
+-fail_mutex_unlock:
+- mutex_unlock(&module_mutex);
+ fail_free_freeinit:
+ kfree(freeinit);
+ fail:
+diff --git a/kernel/padata.c b/kernel/padata.c
+index d899f34558afcc..22770372bdf329 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -47,6 +47,22 @@ struct padata_mt_job_state {
+ static void padata_free_pd(struct parallel_data *pd);
+ static void __init padata_mt_helper(struct work_struct *work);
+
++static inline void padata_get_pd(struct parallel_data *pd)
++{
++ refcount_inc(&pd->refcnt);
++}
++
++static inline void padata_put_pd_cnt(struct parallel_data *pd, int cnt)
++{
++ if (refcount_sub_and_test(cnt, &pd->refcnt))
++ padata_free_pd(pd);
++}
++
++static inline void padata_put_pd(struct parallel_data *pd)
++{
++ padata_put_pd_cnt(pd, 1);
++}
++
+ static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index)
+ {
+ int cpu, target_cpu;
+@@ -206,7 +222,7 @@ int padata_do_parallel(struct padata_shell *ps,
+ if ((pinst->flags & PADATA_RESET))
+ goto out;
+
+- refcount_inc(&pd->refcnt);
++ padata_get_pd(pd);
+ padata->pd = pd;
+ padata->cb_cpu = *cb_cpu;
+
+@@ -336,8 +352,14 @@ static void padata_reorder(struct parallel_data *pd)
+ smp_mb();
+
+ reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
+- if (!list_empty(&reorder->list) && padata_find_next(pd, false))
++ if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
++ /*
++ * Another context (e.g. the padata_serial_worker) can finish the request.
++ * To avoid a use-after-free, take a pd ref here and put it after reorder_work finishes.
++ */
++ padata_get_pd(pd);
+ queue_work(pinst->serial_wq, &pd->reorder_work);
++ }
+ }
+
+ static void invoke_padata_reorder(struct work_struct *work)
+@@ -348,6 +370,8 @@ static void invoke_padata_reorder(struct work_struct *work)
+ pd = container_of(work, struct parallel_data, reorder_work);
+ padata_reorder(pd);
+ local_bh_enable();
++ /* Pairs with putting the reorder_work in the serial_wq */
++ padata_put_pd(pd);
+ }
+
+ static void padata_serial_worker(struct work_struct *serial_work)
+@@ -380,8 +404,7 @@ static void padata_serial_worker(struct work_struct *serial_work)
+ }
+ local_bh_enable();
+
+- if (refcount_sub_and_test(cnt, &pd->refcnt))
+- padata_free_pd(pd);
++ padata_put_pd_cnt(pd, cnt);
+ }
+
+ /**
+@@ -688,8 +711,7 @@ static int padata_replace(struct padata_instance *pinst)
+ synchronize_rcu();
+
+ list_for_each_entry_continue_reverse(ps, &pinst->pslist, list)
+- if (refcount_dec_and_test(&ps->opd->refcnt))
+- padata_free_pd(ps->opd);
++ padata_put_pd(ps->opd);
+
+ pinst->flags &= ~PADATA_RESET;
+
+@@ -977,7 +999,7 @@ static ssize_t padata_sysfs_store(struct kobject *kobj, struct attribute *attr,
+
+ pinst = kobj2pinst(kobj);
+ pentry = attr2pentry(attr);
+- if (pentry->show)
++ if (pentry->store)
+ ret = pentry->store(pinst, attr, buf, count);
+
+ return ret;
+@@ -1128,11 +1150,16 @@ void padata_free_shell(struct padata_shell *ps)
+ if (!ps)
+ return;
+
++ /*
++ * Wait for all _do_serial calls to finish to avoid touching
++ * freed pd's and ps's.
++ */
++ synchronize_rcu();
++
+ mutex_lock(&ps->pinst->lock);
+ list_del(&ps->list);
+ pd = rcu_dereference_protected(ps->pd, 1);
+- if (refcount_dec_and_test(&pd->refcnt))
+- padata_free_pd(pd);
++ padata_put_pd(pd);
+ mutex_unlock(&ps->pinst->lock);
+
+ kfree(ps);
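
The padata changes above give the refcount manipulations names (padata_get_pd()/padata_put_pd()/padata_put_pd_cnt()) and, crucially, take an extra reference before queueing reorder_work, dropped when the work finishes, so the work item can never touch freed parallel_data. A hedged userspace sketch of the same get-before-queue idiom, using a plain C11 atomic instead of the kernel's refcount_t:

#include <stdatomic.h>
#include <stdlib.h>

struct pd {
        atomic_int refcnt;
        /* ... parallel data ... */
};

static void pd_get(struct pd *p) { atomic_fetch_add(&p->refcnt, 1); }

static void pd_put(struct pd *p)
{
        if (atomic_fetch_sub(&p->refcnt, 1) == 1)
                free(p);                      /* last reference gone */
}

/* Queueing async work: pin the object for the work item's lifetime. */
static void queue_reorder(struct pd *p)
{
        pd_get(p);                            /* paired with pd_put() below */
        /* enqueue_work(reorder_worker, p);   -- hypothetical queueing call */
}

static void reorder_worker(struct pd *p)
{
        /* ... reorder completed requests ... */
        pd_put(p);                            /* pairs with queue_reorder() */
}
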
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index e35829d360390f..b483fcea811b1a 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -608,7 +608,11 @@ int hibernation_platform_enter(void)
+
+ local_irq_disable();
+ system_state = SYSTEM_SUSPEND;
+- syscore_suspend();
++
++ error = syscore_suspend();
++ if (error)
++ goto Enable_irqs;
++
+ if (pm_wakeup_pending()) {
+ error = -EAGAIN;
+ goto Power_up;
+@@ -620,6 +624,7 @@ int hibernation_platform_enter(void)
+
+ Power_up:
+ syscore_resume();
++ Enable_irqs:
+ system_state = SYSTEM_RUNNING;
+ local_irq_enable();
+
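
Here the fix is twofold: syscore_suspend()'s return value is checked instead of discarded, and a new Enable_irqs label ensures the earlier local_irq_disable() is unwound on that failure path. This is the usual kernel goto-unwind ladder; a compilable toy version with stand-in step functions (step_a/step_b and the undo helpers are placeholders, not kernel symbols):

static int irqs_off;

static void disable_irqs_local(void) { irqs_off = 1; }
static void enable_irqs_local(void)  { irqs_off = 0; }
static int  step_a(void)             { return -1; }  /* force the error path */
static void undo_step_a(void)        { }
static int  step_b(void)             { return 0; }

static int enter_platform_state(void)
{
        int error;

        disable_irqs_local();

        error = step_a();                     /* e.g. syscore_suspend() */
        if (error)
                goto enable_irqs;             /* undo only what happened */

        error = step_b();
        if (error)
                goto undo_a;

        enable_irqs_local();
        return 0;

undo_a:
        undo_step_a();                        /* e.g. syscore_resume() */
enable_irqs:
        enable_irqs_local();
        return error;
}
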
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index 3fcb48502adbd8..5eef70000b439d 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -335,3 +335,9 @@ bool printk_get_next_message(struct printk_message *pmsg, u64 seq,
+ void console_prepend_dropped(struct printk_message *pmsg, unsigned long dropped);
+ void console_prepend_replay(struct printk_message *pmsg);
+ #endif
++
++#ifdef CONFIG_SMP
++bool is_printk_cpu_sync_owner(void);
++#else
++static inline bool is_printk_cpu_sync_owner(void) { return false; }
++#endif
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index beb808f4c367b9..7530df62ff7cbc 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -4892,6 +4892,11 @@ void console_try_replay_all(void)
+ static atomic_t printk_cpu_sync_owner = ATOMIC_INIT(-1);
+ static atomic_t printk_cpu_sync_nested = ATOMIC_INIT(0);
+
++bool is_printk_cpu_sync_owner(void)
++{
++ return (atomic_read(&printk_cpu_sync_owner) == raw_smp_processor_id());
++}
++
+ /**
+ * __printk_cpu_sync_wait() - Busy wait until the printk cpu-reentrant
+ * spinning lock is not owned by any CPU.
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index 2b35a9d3919d8b..e6198da7c7354a 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -43,10 +43,15 @@ bool is_printk_legacy_deferred(void)
+ /*
+ * The per-CPU variable @printk_context can be read safely in any
+ * context. CPU migration is always disabled when set.
++ *
++ * A context holding the printk_cpu_sync must not spin waiting for
++ * another CPU. For legacy printing, it could be the console_lock
++ * or the port lock.
+ */
+ return (force_legacy_kthread() ||
+ this_cpu_read(printk_context) ||
+- in_nmi());
++ in_nmi() ||
++ is_printk_cpu_sync_owner());
+ }
+
+ asmlinkage int vprintk(const char *fmt, va_list args)
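
The printk pieces above export a test for whether the current CPU owns the cpu-reentrant printk_cpu_sync lock and add it to is_printk_legacy_deferred(): as the new comment says, the owner must never spin on another lock such as the console_lock or a port lock, or two CPUs can deadlock against each other. A toy model of the ownership test (this_cpu() stands in for raw_smp_processor_id()):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int sync_owner = -1;    /* CPU id of the lock owner, -1 = free */

static int this_cpu(void) { return 0; }  /* raw_smp_processor_id() stand-in */

static bool is_sync_owner(void)
{
        return atomic_load(&sync_owner) == this_cpu();
}

/* The sync owner may not spin on locks other CPUs can hold while those
 * CPUs wait for the sync lock, so its output must be deferred instead. */
static bool must_defer_printing(bool in_nmi)
{
        return in_nmi || is_sync_owner();
}
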
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d07dc87787dff3..aba41c69f09c42 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2024,10 +2024,10 @@ void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
+ */
+ uclamp_rq_inc(rq, p);
+
+- if (!(flags & ENQUEUE_RESTORE)) {
++ psi_enqueue(p, flags);
++
++ if (!(flags & ENQUEUE_RESTORE))
+ sched_info_enqueue(rq, p);
+- psi_enqueue(p, flags & ENQUEUE_MIGRATED);
+- }
+
+ if (sched_core_enabled(rq))
+ sched_core_enqueue(rq, p);
+@@ -2044,10 +2044,10 @@ inline bool dequeue_task(struct rq *rq, struct task_struct *p, int flags)
+ if (!(flags & DEQUEUE_NOCLOCK))
+ update_rq_clock(rq);
+
+- if (!(flags & DEQUEUE_SAVE)) {
++ if (!(flags & DEQUEUE_SAVE))
+ sched_info_dequeue(rq, p);
+- psi_dequeue(p, !(flags & DEQUEUE_SLEEP));
+- }
++
++ psi_dequeue(p, flags);
+
+ /*
+ * Must be before ->dequeue_task() because ->dequeue_task() can 'fail'
+@@ -6507,6 +6507,45 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ #define SM_PREEMPT 1
+ #define SM_RTLOCK_WAIT 2
+
++/*
++ * Helper function for __schedule()
++ *
++ * If the task has no signals pending, deactivate it;
++ * otherwise mark the task's __state as RUNNING.
++ */
++static bool try_to_block_task(struct rq *rq, struct task_struct *p,
++ unsigned long task_state)
++{
++ int flags = DEQUEUE_NOCLOCK;
++
++ if (signal_pending_state(task_state, p)) {
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ return false;
++ }
++
++ p->sched_contributes_to_load =
++ (task_state & TASK_UNINTERRUPTIBLE) &&
++ !(task_state & TASK_NOLOAD) &&
++ !(task_state & TASK_FROZEN);
++
++ if (unlikely(is_special_task_state(task_state)))
++ flags |= DEQUEUE_SPECIAL;
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ block_task(rq, p, flags);
++ return true;
++}
++
+ /*
+ * __schedule() is the main scheduler function.
+ *
+@@ -6554,7 +6593,6 @@ static void __sched notrace __schedule(int sched_mode)
+ * as a preemption by schedule_debug() and RCU.
+ */
+ bool preempt = sched_mode > SM_NONE;
+- bool block = false;
+ unsigned long *switch_count;
+ unsigned long prev_state;
+ struct rq_flags rf;
+@@ -6615,33 +6653,7 @@ static void __sched notrace __schedule(int sched_mode)
+ goto picked;
+ }
+ } else if (!preempt && prev_state) {
+- if (signal_pending_state(prev_state, prev)) {
+- WRITE_ONCE(prev->__state, TASK_RUNNING);
+- } else {
+- int flags = DEQUEUE_NOCLOCK;
+-
+- prev->sched_contributes_to_load =
+- (prev_state & TASK_UNINTERRUPTIBLE) &&
+- !(prev_state & TASK_NOLOAD) &&
+- !(prev_state & TASK_FROZEN);
+-
+- if (unlikely(is_special_task_state(prev_state)))
+- flags |= DEQUEUE_SPECIAL;
+-
+- /*
+- * __schedule() ttwu()
+- * prev_state = prev->state; if (p->on_rq && ...)
+- * if (prev_state) goto out;
+- * p->on_rq = 0; smp_acquire__after_ctrl_dep();
+- * p->state = TASK_WAKING
+- *
+- * Where __schedule() and ttwu() have matching control dependencies.
+- *
+- * After this, schedule() must not care about p->state any more.
+- */
+- block_task(rq, prev, flags);
+- block = true;
+- }
++ try_to_block_task(rq, prev, prev_state);
+ switch_count = &prev->nvcsw;
+ }
+
+@@ -6686,7 +6698,8 @@ static void __sched notrace __schedule(int sched_mode)
+
+ migrate_disable_switch(rq, prev);
+ psi_account_irqtime(rq, prev, next);
+- psi_sched_switch(prev, next, block);
++ psi_sched_switch(prev, next, !task_on_rq_queued(prev) ||
++ prev->se.sched_delayed);
+
+ trace_sched_switch(preempt, prev, next, prev_state);
+
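
The __schedule() hunk is a pure refactor: the inline signal-check/deactivate logic becomes try_to_block_task(), returning false when a pending signal flips the task back to RUNNING and true once it is actually blocked. A shape sketch of that predicate-with-side-effects extraction, using invented types:

#include <stdbool.h>

enum tstate { T_RUNNING, T_BLOCKED };

struct task { bool signal_pending; enum tstate state; };

/* Returns true if the task was blocked, false if a signal woke it up. */
static bool try_to_block(struct task *t)
{
        if (t->signal_pending) {
                t->state = T_RUNNING;         /* abort the sleep */
                return false;
        }
        t->state = T_BLOCKED;                 /* block_task() in the real code */
        return true;
}

static void schedule_slowpath(struct task *prev)
{
        /* The caller no longer needs the old 'block' local or else-arm. */
        try_to_block(prev);
        /* ... pick the next task ... */
}
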
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 28c77904ea749f..e51d5ce730be15 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -83,7 +83,7 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+
+ if (unlikely(sg_policy->limits_changed)) {
+ sg_policy->limits_changed = false;
+- sg_policy->need_freq_update = true;
++ sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
+ return true;
+ }
+
+@@ -96,7 +96,7 @@ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+ unsigned int next_freq)
+ {
+ if (sg_policy->need_freq_update)
+- sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
++ sg_policy->need_freq_update = false;
+ else if (sg_policy->next_freq == next_freq)
+ return false;
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 60be5f8bbe7115..65e7be64487202 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5647,9 +5647,9 @@ static struct sched_entity *
+ pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
+ {
+ /*
+- * Enabling NEXT_BUDDY will affect latency but not fairness.
++ * Picking the ->next buddy will affect latency but not fairness.
+ */
+- if (sched_feat(NEXT_BUDDY) &&
++ if (sched_feat(PICK_BUDDY) &&
+ cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next)) {
+ /* ->next will never be delayed */
+ SCHED_WARN_ON(cfs_rq->next->sched_delayed);
+@@ -9418,6 +9418,8 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+ int tsk_cache_hot;
+
+ lockdep_assert_rq_held(env->src_rq);
++ if (p->sched_task_hot)
++ p->sched_task_hot = 0;
+
+ /*
+ * We do not migrate tasks that are:
+@@ -9490,10 +9492,8 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+
+ if (tsk_cache_hot <= 0 ||
+ env->sd->nr_balance_failed > env->sd->cache_nice_tries) {
+- if (tsk_cache_hot == 1) {
+- schedstat_inc(env->sd->lb_hot_gained[env->idle]);
+- schedstat_inc(p->stats.nr_forced_migrations);
+- }
++ if (tsk_cache_hot == 1)
++ p->sched_task_hot = 1;
+ return 1;
+ }
+
+@@ -9508,6 +9508,12 @@ static void detach_task(struct task_struct *p, struct lb_env *env)
+ {
+ lockdep_assert_rq_held(env->src_rq);
+
++ if (p->sched_task_hot) {
++ p->sched_task_hot = 0;
++ schedstat_inc(env->sd->lb_hot_gained[env->idle]);
++ schedstat_inc(p->stats.nr_forced_migrations);
++ }
++
+ deactivate_task(env->src_rq, p, DEQUEUE_NOCLOCK);
+ set_task_cpu(p, env->dst_cpu);
+ }
+@@ -9668,6 +9674,9 @@ static int detach_tasks(struct lb_env *env)
+
+ continue;
+ next:
++ if (p->sched_task_hot)
++ schedstat_inc(p->stats.nr_failed_migrations_hot);
++
+ list_move(&p->se.group_node, tasks);
+ }
+
+diff --git a/kernel/sched/features.h b/kernel/sched/features.h
+index 290874079f60d9..050d7503064e3a 100644
+--- a/kernel/sched/features.h
++++ b/kernel/sched/features.h
+@@ -31,6 +31,15 @@ SCHED_FEAT(PREEMPT_SHORT, true)
+ */
+ SCHED_FEAT(NEXT_BUDDY, false)
+
++/*
++ * Allow completely ignoring cfs_rq->next; which can be set from various
++ * places:
++ * - NEXT_BUDDY (wakeup preemption)
++ * - yield_to_task()
++ * - cgroup dequeue / pick
++ */
++SCHED_FEAT(PICK_BUDDY, true)
++
+ /*
+ * Consider buddies to be cache hot, decreases the likeliness of a
+ * cache buddy being migrated away, increases cache locality.
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index f2ef520513c4a2..5426969cf478a0 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2095,34 +2095,6 @@ static inline const struct cpumask *task_user_cpus(struct task_struct *p)
+
+ #endif /* CONFIG_SMP */
+
+-#include "stats.h"
+-
+-#if defined(CONFIG_SCHED_CORE) && defined(CONFIG_SCHEDSTATS)
+-
+-extern void __sched_core_account_forceidle(struct rq *rq);
+-
+-static inline void sched_core_account_forceidle(struct rq *rq)
+-{
+- if (schedstat_enabled())
+- __sched_core_account_forceidle(rq);
+-}
+-
+-extern void __sched_core_tick(struct rq *rq);
+-
+-static inline void sched_core_tick(struct rq *rq)
+-{
+- if (sched_core_enabled(rq) && schedstat_enabled())
+- __sched_core_tick(rq);
+-}
+-
+-#else /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS): */
+-
+-static inline void sched_core_account_forceidle(struct rq *rq) { }
+-
+-static inline void sched_core_tick(struct rq *rq) { }
+-
+-#endif /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS) */
+-
+ #ifdef CONFIG_CGROUP_SCHED
+
+ /*
+@@ -3209,6 +3181,34 @@ extern void nohz_run_idle_balance(int cpu);
+ static inline void nohz_run_idle_balance(int cpu) { }
+ #endif
+
++#include "stats.h"
++
++#if defined(CONFIG_SCHED_CORE) && defined(CONFIG_SCHEDSTATS)
++
++extern void __sched_core_account_forceidle(struct rq *rq);
++
++static inline void sched_core_account_forceidle(struct rq *rq)
++{
++ if (schedstat_enabled())
++ __sched_core_account_forceidle(rq);
++}
++
++extern void __sched_core_tick(struct rq *rq);
++
++static inline void sched_core_tick(struct rq *rq)
++{
++ if (sched_core_enabled(rq) && schedstat_enabled())
++ __sched_core_tick(rq);
++}
++
++#else /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS): */
++
++static inline void sched_core_account_forceidle(struct rq *rq) { }
++
++static inline void sched_core_tick(struct rq *rq) { }
++
++#endif /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS) */
++
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+
+ struct irqtime {
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 767e098a3bd132..6ade91bce63ee3 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -127,21 +127,29 @@ static inline void psi_account_irqtime(struct rq *rq, struct task_struct *curr,
+ * go through migration requeues. In this case, *sleeping* states need
+ * to be transferred.
+ */
+-static inline void psi_enqueue(struct task_struct *p, bool migrate)
++static inline void psi_enqueue(struct task_struct *p, int flags)
+ {
+ int clear = 0, set = 0;
+
+ if (static_branch_likely(&psi_disabled))
+ return;
+
++ /* Same runqueue, nothing changed for psi */
++ if (flags & ENQUEUE_RESTORE)
++ return;
++
++ /* psi_sched_switch() will handle the flags */
++ if (task_on_cpu(task_rq(p), p))
++ return;
++
+ if (p->se.sched_delayed) {
+ /* CPU migration of "sleeping" task */
+- SCHED_WARN_ON(!migrate);
++ SCHED_WARN_ON(!(flags & ENQUEUE_MIGRATED));
+ if (p->in_memstall)
+ set |= TSK_MEMSTALL;
+ if (p->in_iowait)
+ set |= TSK_IOWAIT;
+- } else if (migrate) {
++ } else if (flags & ENQUEUE_MIGRATED) {
+ /* CPU migration of runnable task */
+ set = TSK_RUNNING;
+ if (p->in_memstall)
+@@ -158,17 +166,14 @@ static inline void psi_enqueue(struct task_struct *p, bool migrate)
+ psi_task_change(p, clear, set);
+ }
+
+-static inline void psi_dequeue(struct task_struct *p, bool migrate)
++static inline void psi_dequeue(struct task_struct *p, int flags)
+ {
+ if (static_branch_likely(&psi_disabled))
+ return;
+
+- /*
+- * When migrating a task to another CPU, clear all psi
+- * state. The enqueue callback above will work it out.
+- */
+- if (migrate)
+- psi_task_change(p, p->psi_flags, 0);
++ /* Same runqueue, nothing changed for psi */
++ if (flags & DEQUEUE_SAVE)
++ return;
+
+ /*
+ * A voluntary sleep is a dequeue followed by a task switch. To
+@@ -176,6 +181,14 @@ static inline void psi_dequeue(struct task_struct *p, bool migrate)
+ * TSK_RUNNING and TSK_IOWAIT for us when it moves TSK_ONCPU.
+ * Do nothing here.
+ */
++ if (flags & DEQUEUE_SLEEP)
++ return;
++
++ /*
++ * When migrating a task to another CPU, clear all psi
++ * state. The enqueue callback above will work it out.
++ */
++ psi_task_change(p, p->psi_flags, 0);
+ }
+
+ static inline void psi_ttwu_dequeue(struct task_struct *p)
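
psi_enqueue()/psi_dequeue() now receive the whole enqueue/dequeue flag word rather than a pre-computed 'migrate' bool, and filter out the cases that are no-ops for pressure accounting (same-runqueue save/restore, voluntary sleep handled later by psi_sched_switch()). A condensed sketch of the flags-plus-early-return style, with made-up flag names:

#include <stdio.h>

#define ENQ_RESTORE   0x1   /* same runqueue: save/restore pair */
#define ENQ_MIGRATED  0x2   /* task arrived from another CPU */

static void account_enqueue(int flags)
{
        if (flags & ENQ_RESTORE)
                return;                       /* nothing changed, skip */

        if (flags & ENQ_MIGRATED)
                printf("transfer sleeping/running state across CPUs\n");
        else
                printf("task became runnable here\n");
}
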
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 1784ed1fb3fe5d..f9cb7896c1b966 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -1471,7 +1471,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ struct rq *rq, *p_rq;
+ int yielded = 0;
+
+- scoped_guard (irqsave) {
++ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
+ rq = this_rq();
+
+ again:
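
yield_to() narrows its guard from a bare irqsave to holding p->pi_lock with interrupts off for the whole scope. The kernel's scoped_guard is built on __attribute__((cleanup)); a minimal userspace imitation of the same idea with a pthread mutex (illustrative only, not the kernel's guard machinery):

#include <pthread.h>

static pthread_mutex_t pi_lock = PTHREAD_MUTEX_INITIALIZER;
static int yielded;

static void unlock_cleanup(pthread_mutex_t **m)
{
        pthread_mutex_unlock(*m);
}

/* Lock now, unlock automatically when the enclosing scope is left. */
#define SCOPED_GUARD(m)                                                  \
        pthread_mutex_t *scope_guard                                     \
                __attribute__((cleanup(unlock_cleanup))) =               \
                (pthread_mutex_lock(m), (m))

static void do_yield(void)
{
        SCOPED_GUARD(&pi_lock);       /* held for the whole function body */
        yielded = 1;
}                                     /* unlock runs here on every exit path */
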
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 50881898e758d8..449efaaa387a68 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -619,7 +619,8 @@ static const struct bpf_func_proto bpf_perf_event_read_value_proto = {
+
+ static __always_inline u64
+ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+- u64 flags, struct perf_sample_data *sd)
++ u64 flags, struct perf_raw_record *raw,
++ struct perf_sample_data *sd)
+ {
+ struct bpf_array *array = container_of(map, struct bpf_array, map);
+ unsigned int cpu = smp_processor_id();
+@@ -644,6 +645,8 @@ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+ if (unlikely(event->oncpu != cpu))
+ return -EOPNOTSUPP;
+
++ perf_sample_save_raw_data(sd, event, raw);
++
+ return perf_event_output(event, sd, regs);
+ }
+
+@@ -687,9 +690,8 @@ BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
+ }
+
+ perf_sample_data_init(sd, 0, 0);
+- perf_sample_save_raw_data(sd, &raw);
+
+- err = __bpf_perf_event_output(regs, map, flags, sd);
++ err = __bpf_perf_event_output(regs, map, flags, &raw, sd);
+ out:
+ this_cpu_dec(bpf_trace_nest_level);
+ preempt_enable();
+@@ -748,9 +750,8 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+
+ perf_fetch_caller_regs(regs);
+ perf_sample_data_init(sd, 0, 0);
+- perf_sample_save_raw_data(sd, &raw);
+
+- ret = __bpf_perf_event_output(regs, map, flags, sd);
++ ret = __bpf_perf_event_output(regs, map, flags, &raw, sd);
+ out:
+ this_cpu_dec(bpf_event_output_nest_level);
+ preempt_enable();
+@@ -832,7 +833,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
+ if (unlikely(is_global_init(current)))
+ return -EPERM;
+
+- if (irqs_disabled()) {
++ if (!preemptible()) {
+ /* Do an early check on signal validity. Otherwise,
+ * the error is lost in deferred irq_work.
+ */
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index 6c902639728b76..0e9a1d4cf89be0 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -584,10 +584,6 @@ static struct bucket_table *rhashtable_insert_one(
+ */
+ rht_assign_locked(bkt, obj);
+
+- atomic_inc(&ht->nelems);
+- if (rht_grow_above_75(ht, tbl))
+- schedule_work(&ht->run_work);
+-
+ return NULL;
+ }
+
+@@ -615,15 +611,23 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
+ new_tbl = rht_dereference_rcu(tbl->future_tbl, ht);
+ data = ERR_PTR(-EAGAIN);
+ } else {
++ bool inserted;
++
+ flags = rht_lock(tbl, bkt);
+ data = rhashtable_lookup_one(ht, bkt, tbl,
+ hash, key, obj);
+ new_tbl = rhashtable_insert_one(ht, bkt, tbl,
+ hash, obj, data);
++ inserted = data && !new_tbl;
++ if (inserted)
++ atomic_inc(&ht->nelems);
+ if (PTR_ERR(new_tbl) != -EEXIST)
+ data = ERR_CAST(new_tbl);
+
+ rht_unlock(tbl, bkt, flags);
++
++ if (inserted && rht_grow_above_75(ht, tbl))
++ schedule_work(&ht->run_work);
+ }
+ } while (!IS_ERR_OR_NULL(new_tbl));
+
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 53db98d2c4a1b3..ae1d184d035a4d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1139,6 +1139,7 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
+ {
+ struct mem_cgroup *iter;
+ int ret = 0;
++ int i = 0;
+
+ BUG_ON(mem_cgroup_is_root(memcg));
+
+@@ -1147,8 +1148,12 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
+ struct task_struct *task;
+
+ css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
+- while (!ret && (task = css_task_iter_next(&it)))
++ while (!ret && (task = css_task_iter_next(&it))) {
++ /* Avoid potential softlockup warning */
++ if ((++i & 1023) == 0)
++ cond_resched();
+ ret = fn(task, arg);
++ }
+ css_task_iter_end(&it);
+ if (ret) {
+ mem_cgroup_iter_break(memcg, iter);
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 4d7a0004df2cac..8aa712afd8ae1a 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -45,6 +45,7 @@
+ #include <linux/init.h>
+ #include <linux/mmu_notifier.h>
+ #include <linux/cred.h>
++#include <linux/nmi.h>
+
+ #include <asm/tlb.h>
+ #include "internal.h"
+@@ -431,10 +432,15 @@ static void dump_tasks(struct oom_control *oc)
+ mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
+ else {
+ struct task_struct *p;
++ int i = 0;
+
+ rcu_read_lock();
+- for_each_process(p)
++ for_each_process(p) {
++ /* Avoid potential softlockup warning */
++ if ((++i & 1023) == 0)
++ touch_softlockup_watchdog();
+ dump_task(p, oc);
++ }
+ rcu_read_unlock();
+ }
+ }
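
Both the memcg and OOM hunks above break up unbounded task walks by yielding on every 1024th iteration, using the one-AND-per-loop test (++i & 1023) == 0; the memcg walk can sleep so it calls cond_resched(), while the OOM dump runs under RCU and can only pet the watchdog. The idiom in isolation:

#include <stdio.h>

static void yield_point(void)
{
        /* cond_resched() or touch_softlockup_watchdog() in kernel code */
}

static void scan_all(int nr_items)
{
        int i = 0;

        for (int it = 0; it < nr_items; it++) {
                /* One AND per iteration; fires on every 1024th pass. */
                if ((++i & 1023) == 0)
                        yield_point();
                printf("visit %d\n", it);
        }
}
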
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index d6f9fae06a9d81..aa6c714892ec9d 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -467,7 +467,7 @@ static int ax25_ctl_ioctl(const unsigned int cmd, void __user *arg)
+ goto out_put;
+ }
+
+-static void ax25_fillin_cb_from_dev(ax25_cb *ax25, ax25_dev *ax25_dev)
++static void ax25_fillin_cb_from_dev(ax25_cb *ax25, const ax25_dev *ax25_dev)
+ {
+ ax25->rtt = msecs_to_jiffies(ax25_dev->values[AX25_VALUES_T1]) / 2;
+ ax25->t1 = msecs_to_jiffies(ax25_dev->values[AX25_VALUES_T1]);
+@@ -677,22 +677,22 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- rtnl_lock();
+- dev = __dev_get_by_name(&init_net, devname);
++ rcu_read_lock();
++ dev = dev_get_by_name_rcu(&init_net, devname);
+ if (!dev) {
+- rtnl_unlock();
++ rcu_read_unlock();
+ res = -ENODEV;
+ break;
+ }
+
+ ax25->ax25_dev = ax25_dev_ax25dev(dev);
+ if (!ax25->ax25_dev) {
+- rtnl_unlock();
++ rcu_read_unlock();
+ res = -ENODEV;
+ break;
+ }
+ ax25_fillin_cb(ax25, ax25->ax25_dev);
+- rtnl_unlock();
++ rcu_read_unlock();
+ break;
+
+ default:
+diff --git a/net/ax25/ax25_dev.c b/net/ax25/ax25_dev.c
+index 9efd6690b34436..3733c0254a5084 100644
+--- a/net/ax25/ax25_dev.c
++++ b/net/ax25/ax25_dev.c
+@@ -90,7 +90,7 @@ void ax25_dev_device_up(struct net_device *dev)
+
+ spin_lock_bh(&ax25_dev_lock);
+ list_add(&ax25_dev->list, &ax25_dev_list);
+- dev->ax25_ptr = ax25_dev;
++ rcu_assign_pointer(dev->ax25_ptr, ax25_dev);
+ spin_unlock_bh(&ax25_dev_lock);
+
+ ax25_register_dev_sysctl(ax25_dev);
+@@ -125,7 +125,7 @@ void ax25_dev_device_down(struct net_device *dev)
+ }
+ }
+
+- dev->ax25_ptr = NULL;
++ RCU_INIT_POINTER(dev->ax25_ptr, NULL);
+ spin_unlock_bh(&ax25_dev_lock);
+ netdev_put(dev, &ax25_dev->dev_tracker);
+ ax25_dev_put(ax25_dev);
+diff --git a/net/ax25/ax25_ip.c b/net/ax25/ax25_ip.c
+index 36249776c021e7..215d4ccf12b913 100644
+--- a/net/ax25/ax25_ip.c
++++ b/net/ax25/ax25_ip.c
+@@ -122,6 +122,7 @@ netdev_tx_t ax25_ip_xmit(struct sk_buff *skb)
+ if (dev == NULL)
+ dev = skb->dev;
+
++ rcu_read_lock();
+ if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL) {
+ kfree_skb(skb);
+ goto put;
+@@ -202,7 +203,7 @@ netdev_tx_t ax25_ip_xmit(struct sk_buff *skb)
+ ax25_queue_xmit(skb, dev);
+
+ put:
+-
++ rcu_read_unlock();
+ ax25_route_lock_unuse();
+ return NETDEV_TX_OK;
+ }
+diff --git a/net/ax25/ax25_out.c b/net/ax25/ax25_out.c
+index 3db76d2470e954..8bca2ace98e51b 100644
+--- a/net/ax25/ax25_out.c
++++ b/net/ax25/ax25_out.c
+@@ -39,10 +39,14 @@ ax25_cb *ax25_send_frame(struct sk_buff *skb, int paclen, const ax25_address *sr
+ * specified.
+ */
+ if (paclen == 0) {
+- if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
++ rcu_read_lock();
++ ax25_dev = ax25_dev_ax25dev(dev);
++ if (!ax25_dev) {
++ rcu_read_unlock();
+ return NULL;
+-
++ }
+ paclen = ax25_dev->values[AX25_VALUES_PACLEN];
++ rcu_read_unlock();
+ }
+
+ /*
+@@ -53,13 +57,19 @@ ax25_cb *ax25_send_frame(struct sk_buff *skb, int paclen, const ax25_address *sr
+ return ax25; /* It already existed */
+ }
+
+- if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
++ rcu_read_lock();
++ ax25_dev = ax25_dev_ax25dev(dev);
++ if (!ax25_dev) {
++ rcu_read_unlock();
+ return NULL;
++ }
+
+- if ((ax25 = ax25_create_cb()) == NULL)
++ if ((ax25 = ax25_create_cb()) == NULL) {
++ rcu_read_unlock();
+ return NULL;
+-
++ }
+ ax25_fillin_cb(ax25, ax25_dev);
++ rcu_read_unlock();
+
+ ax25->source_addr = *src;
+ ax25->dest_addr = *dest;
+@@ -358,7 +368,9 @@ void ax25_queue_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ unsigned char *ptr;
+
++ rcu_read_lock();
+ skb->protocol = ax25_type_trans(skb, ax25_fwd_dev(dev));
++ rcu_read_unlock();
+
+ ptr = skb_push(skb, 1);
+ *ptr = 0x00; /* KISS */
+diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c
+index b7c4d656a94b71..69de75db0c9c21 100644
+--- a/net/ax25/ax25_route.c
++++ b/net/ax25/ax25_route.c
+@@ -406,6 +406,7 @@ int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+ ax25_route_lock_unuse();
+ return -EHOSTUNREACH;
+ }
++ rcu_read_lock();
+ if ((ax25->ax25_dev = ax25_dev_ax25dev(ax25_rt->dev)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto put;
+@@ -442,6 +443,7 @@ int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+ }
+
+ put:
++ rcu_read_unlock();
+ ax25_route_lock_unuse();
+ return err;
+ }
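
The AX.25 series converts device access from RTNL to RCU: dev->ax25_ptr is published with rcu_assign_pointer() and cleared with RCU_INIT_POINTER(), and every reader is bracketed by rcu_read_lock()/rcu_read_unlock(). The essential ordering contract, sketched with C11 atomics as a stand-in for the RCU primitives (the release store plays the rcu_assign_pointer() role):

#include <stdatomic.h>
#include <stddef.h>

struct ax25_cfg { int paclen; };

static _Atomic(struct ax25_cfg *) dev_cfg;    /* the RCU-protected pointer */

/* Publish: the object must be fully initialized before it becomes
 * visible; the release store provides that ordering. */
static void publish(struct ax25_cfg *cfg)
{
        atomic_store_explicit(&dev_cfg, cfg, memory_order_release);
}

/* Read side: take one snapshot and use only that snapshot; NULL means
 * the device went away, exactly like the !ax25_dev checks above. */
static int read_paclen(void)
{
        struct ax25_cfg *cfg =
                atomic_load_explicit(&dev_cfg, memory_order_acquire);

        return cfg ? cfg->paclen : -1;
}
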
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1867a6a8d76da9..2e0fe38d0e877d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1279,7 +1279,9 @@ int dev_change_name(struct net_device *dev, const char *newname)
+ rollback:
+ ret = device_rename(&dev->dev, dev->name);
+ if (ret) {
++ write_seqlock_bh(&netdev_rename_lock);
+ memcpy(dev->name, oldname, IFNAMSIZ);
++ write_sequnlock_bh(&netdev_rename_lock);
+ WRITE_ONCE(dev->name_assign_type, old_assign_type);
+ up_write(&devnet_rename_sem);
+ return ret;
+@@ -2134,8 +2136,8 @@ EXPORT_SYMBOL_GPL(net_dec_egress_queue);
+ #endif
+
+ #ifdef CONFIG_NET_CLS_ACT
+-DEFINE_STATIC_KEY_FALSE(tcf_bypass_check_needed_key);
+-EXPORT_SYMBOL(tcf_bypass_check_needed_key);
++DEFINE_STATIC_KEY_FALSE(tcf_sw_enabled_key);
++EXPORT_SYMBOL(tcf_sw_enabled_key);
+ #endif
+
+ DEFINE_STATIC_KEY_FALSE(netstamp_needed_key);
+@@ -4028,10 +4030,13 @@ static int tc_run(struct tcx_entry *entry, struct sk_buff *skb,
+ if (!miniq)
+ return ret;
+
+- if (static_branch_unlikely(&tcf_bypass_check_needed_key)) {
+- if (tcf_block_bypass_sw(miniq->block))
+- return ret;
+- }
++ /* Global bypass */
++ if (!static_branch_likely(&tcf_sw_enabled_key))
++ return ret;
++
++ /* Block-wise bypass */
++ if (tcf_block_bypass_sw(miniq->block))
++ return ret;
+
+ tc_skb_cb(skb)->mru = 0;
+ tc_skb_cb(skb)->post_ct = false;
+@@ -9590,6 +9595,10 @@ static int dev_xdp_attach(struct net_device *dev, struct netlink_ext_ack *extack
+ NL_SET_ERR_MSG(extack, "Program bound to different device");
+ return -EINVAL;
+ }
++ if (bpf_prog_is_dev_bound(new_prog->aux) && mode == XDP_MODE_SKB) {
++ NL_SET_ERR_MSG(extack, "Can't attach device-bound programs in generic mode");
++ return -EINVAL;
++ }
+ if (new_prog->expected_attach_type == BPF_XDP_DEVMAP) {
+ NL_SET_ERR_MSG(extack, "BPF_XDP_DEVMAP programs can not be attached to a device");
+ return -EINVAL;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 46da488ff0703f..a2f990bf51e5e1 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -7662,7 +7662,7 @@ static const struct bpf_func_proto bpf_sock_ops_load_hdr_opt_proto = {
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+- .arg2_type = ARG_PTR_TO_MEM,
++ .arg2_type = ARG_PTR_TO_MEM | MEM_WRITE,
+ .arg3_type = ARG_CONST_SIZE,
+ .arg4_type = ARG_ANYTHING,
+ };
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 86a2476678c484..5dd54a81339806 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -303,7 +303,7 @@ static int proc_do_dev_weight(const struct ctl_table *table, int write,
+ int ret, weight;
+
+ mutex_lock(&dev_weight_mutex);
+- ret = proc_dointvec(table, write, buffer, lenp, ppos);
++ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ if (!ret && write) {
+ weight = READ_ONCE(weight_p);
+ WRITE_ONCE(net_hotdata.dev_rx_weight, weight * dev_weight_rx_bias);
+@@ -396,6 +396,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_do_dev_weight,
++ .extra1 = SYSCTL_ONE,
+ },
+ {
+ .procname = "dev_weight_rx_bias",
+@@ -403,6 +404,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_do_dev_weight,
++ .extra1 = SYSCTL_ONE,
+ },
+ {
+ .procname = "dev_weight_tx_bias",
+@@ -410,6 +412,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_do_dev_weight,
++ .extra1 = SYSCTL_ONE,
+ },
+ {
+ .procname = "netdev_max_backlog",
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 65cfe76dafbe2e..8b9692c35e7067 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -992,7 +992,13 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ if (rc)
+ return rc;
+
+- if (ops->get_rxfh) {
++ /* Nonzero ring with RSS only makes sense if NIC adds them together */
++ if (cmd == ETHTOOL_SRXCLSRLINS && info.fs.flow_type & FLOW_RSS &&
++ !ops->cap_rss_rxnfc_adds &&
++ ethtool_get_flow_spec_ring(info.fs.ring_cookie))
++ return -EINVAL;
++
++ if (cmd == ETHTOOL_SRXFH && ops->get_rxfh) {
+ struct ethtool_rxfh_param rxfh = {};
+
+ rc = ops->get_rxfh(dev, &rxfh);
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index e3f0ef6b851bb4..4d18dc29b30438 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -90,7 +90,7 @@ int ethnl_ops_begin(struct net_device *dev)
+ pm_runtime_get_sync(dev->dev.parent);
+
+ if (!netif_device_present(dev) ||
+- dev->reg_state == NETREG_UNREGISTERING) {
++ dev->reg_state >= NETREG_UNREGISTERING) {
+ ret = -ENODEV;
+ goto err;
+ }
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 40c5fbbd155d66..c0217476eb17f9 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -688,9 +688,12 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ frame->is_vlan = true;
+
+ if (frame->is_vlan) {
+- if (skb->mac_len < offsetofend(struct hsr_vlan_ethhdr, vlanhdr))
++ /* Note: skb->mac_len might be wrong here. */
++ if (!pskb_may_pull(skb,
++ skb_mac_offset(skb) +
++ offsetofend(struct hsr_vlan_ethhdr, vlanhdr)))
+ return -EINVAL;
+- vlan_hdr = (struct hsr_vlan_ethhdr *)ethhdr;
++ vlan_hdr = (struct hsr_vlan_ethhdr *)skb_mac_header(skb);
+ proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
+ /* FIXME: */
+ netdev_warn_once(skb->dev, "VLAN not yet supported");
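
The HSR fix stops trusting skb->mac_len and instead uses pskb_may_pull() to guarantee that the full VLAN ethernet header is present and linear before casting the mac header pointer. Stripped of skb machinery, the pattern is "verify N contiguous bytes exist before dereferencing":

#include <stddef.h>
#include <stdint.h>

struct vlan_tag { uint16_t tci; uint16_t encap_proto; };

/* Hand out a header pointer only if the buffer really contains it. */
static const struct vlan_tag *pull_vlan(const uint8_t *buf, size_t len,
                                        size_t mac_off)
{
        if (len < mac_off + sizeof(struct vlan_tag))
                return NULL;                  /* truncated frame: reject */
        return (const struct vlan_tag *)(buf + mac_off);
}
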
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 80c4ea0e12f48a..e0d94270da28a3 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -53,9 +53,9 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ if (sp->len == XFRM_MAX_DEPTH)
+ goto out_reset;
+
+- x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+- (xfrm_address_t *)&ip_hdr(skb)->daddr,
+- spi, IPPROTO_ESP, AF_INET);
++ x = xfrm_input_state_lookup(dev_net(skb->dev), skb->mark,
++ (xfrm_address_t *)&ip_hdr(skb)->daddr,
++ spi, IPPROTO_ESP, AF_INET);
+
+ if (unlikely(x && x->dir && x->dir != XFRM_SA_DIR_IN)) {
+ /* non-offload path will record the error and audit log */
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index c3ad41573b33ea..932bd775fc2682 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -312,7 +312,6 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ struct dst_entry *dst = &rt->dst;
+ struct inet_peer *peer;
+ bool rc = true;
+- int vif;
+
+ if (!apply_ratelimit)
+ return true;
+@@ -321,12 +320,12 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ if (dst->dev && (dst->dev->flags&IFF_LOOPBACK))
+ goto out;
+
+- vif = l3mdev_master_ifindex(dst->dev);
+- peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, vif, 1);
++ rcu_read_lock();
++ peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr,
++ l3mdev_master_ifindex_rcu(dst->dev));
+ rc = inet_peer_xrlim_allow(peer,
+ READ_ONCE(net->ipv4.sysctl_icmp_ratelimit));
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
+ out:
+ if (!rc)
+ __ICMP_INC_STATS(net, ICMP_MIB_RATELIMITHOST);
+diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
+index 5bd7599634517a..9c5ffe3b5f776f 100644
+--- a/net/ipv4/inetpeer.c
++++ b/net/ipv4/inetpeer.c
+@@ -95,6 +95,7 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
+ {
+ struct rb_node **pp, *parent, *next;
+ struct inet_peer *p;
++ u32 now;
+
+ pp = &base->rb_root.rb_node;
+ parent = NULL;
+@@ -108,8 +109,9 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
+ p = rb_entry(parent, struct inet_peer, rb_node);
+ cmp = inetpeer_addr_cmp(daddr, &p->daddr);
+ if (cmp == 0) {
+- if (!refcount_inc_not_zero(&p->refcnt))
+- break;
++ now = jiffies;
++ if (READ_ONCE(p->dtime) != now)
++ WRITE_ONCE(p->dtime, now);
+ return p;
+ }
+ if (gc_stack) {
+@@ -155,9 +157,6 @@ static void inet_peer_gc(struct inet_peer_base *base,
+ for (i = 0; i < gc_cnt; i++) {
+ p = gc_stack[i];
+
+- /* The READ_ONCE() pairs with the WRITE_ONCE()
+- * in inet_putpeer()
+- */
+ delta = (__u32)jiffies - READ_ONCE(p->dtime);
+
+ if (delta < ttl || !refcount_dec_if_one(&p->refcnt))
+@@ -173,31 +172,23 @@ static void inet_peer_gc(struct inet_peer_base *base,
+ }
+ }
+
++/* Must be called under RCU : No refcount change is done here. */
+ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
+- const struct inetpeer_addr *daddr,
+- int create)
++ const struct inetpeer_addr *daddr)
+ {
+ struct inet_peer *p, *gc_stack[PEER_MAX_GC];
+ struct rb_node **pp, *parent;
+ unsigned int gc_cnt, seq;
+- int invalidated;
+
+ /* Attempt a lockless lookup first.
+ * Because of a concurrent writer, we might not find an existing entry.
+ */
+- rcu_read_lock();
+ seq = read_seqbegin(&base->lock);
+ p = lookup(daddr, base, seq, NULL, &gc_cnt, &parent, &pp);
+- invalidated = read_seqretry(&base->lock, seq);
+- rcu_read_unlock();
+
+ if (p)
+ return p;
+
+- /* If no writer did a change during our lookup, we can return early. */
+- if (!create && !invalidated)
+- return NULL;
+-
+ /* retry an exact lookup, taking the lock before.
+ * At least, nodes should be hot in our cache.
+ */
+@@ -206,12 +197,12 @@ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
+
+ gc_cnt = 0;
+ p = lookup(daddr, base, seq, gc_stack, &gc_cnt, &parent, &pp);
+- if (!p && create) {
++ if (!p) {
+ p = kmem_cache_alloc(peer_cachep, GFP_ATOMIC);
+ if (p) {
+ p->daddr = *daddr;
+ p->dtime = (__u32)jiffies;
+- refcount_set(&p->refcnt, 2);
++ refcount_set(&p->refcnt, 1);
+ atomic_set(&p->rid, 0);
+ p->metrics[RTAX_LOCK-1] = INETPEER_METRICS_NEW;
+ p->rate_tokens = 0;
+@@ -236,15 +227,9 @@ EXPORT_SYMBOL_GPL(inet_getpeer);
+
+ void inet_putpeer(struct inet_peer *p)
+ {
+- /* The WRITE_ONCE() pairs with itself (we run lockless)
+- * and the READ_ONCE() in inet_peer_gc()
+- */
+- WRITE_ONCE(p->dtime, (__u32)jiffies);
+-
+ if (refcount_dec_and_test(&p->refcnt))
+ call_rcu(&p->rcu, inetpeer_free_rcu);
+ }
+-EXPORT_SYMBOL_GPL(inet_putpeer);
+
+ /*
+ * Check transmit rate limitation for given message.
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index a92664a5ef2efe..9ca0a183a55ffa 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -82,15 +82,20 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
+ static void ip4_frag_init(struct inet_frag_queue *q, const void *a)
+ {
+ struct ipq *qp = container_of(q, struct ipq, q);
+- struct net *net = q->fqdir->net;
+-
+ const struct frag_v4_compare_key *key = a;
++ struct net *net = q->fqdir->net;
++ struct inet_peer *p = NULL;
+
+ q->key.v4 = *key;
+ qp->ecn = 0;
+- qp->peer = q->fqdir->max_dist ?
+- inet_getpeer_v4(net->ipv4.peers, key->saddr, key->vif, 1) :
+- NULL;
++ if (q->fqdir->max_dist) {
++ rcu_read_lock();
++ p = inet_getpeer_v4(net->ipv4.peers, key->saddr, key->vif);
++ if (p && !refcount_inc_not_zero(&p->refcnt))
++ p = NULL;
++ rcu_read_unlock();
++ }
++ qp->peer = p;
+ }
+
+ static void ip4_frag_free(struct inet_frag_queue *q)
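
Because inet_getpeer() now returns peers without taking a reference (callers stay inside rcu_read_lock()), ip4_frag_init() must upgrade the RCU-stabilized pointer with refcount_inc_not_zero(), keeping the peer only if it was still live. A C11-atomics sketch of that inc-not-zero upgrade:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct peer { atomic_int refcnt; };

/* Succeeds only while the object is still live (refcnt > 0). */
static bool get_not_zero(struct peer *p)
{
        int c = atomic_load(&p->refcnt);

        while (c != 0)
                if (atomic_compare_exchange_weak(&p->refcnt, &c, c + 1))
                        return true;
        return false;                         /* object already dying */
}

static struct peer *lookup_and_pin(struct peer *candidate)
{
        /* rcu_read_lock() would be held around this in the kernel. */
        if (candidate && !get_not_zero(candidate))
                candidate = NULL;             /* lost the race with free */
        return candidate;
}
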
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 449a2ac40bdc00..de0d9cc7806a15 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -817,7 +817,7 @@ static void ipmr_update_thresholds(struct mr_table *mrt, struct mr_mfc *cache,
+ cache->mfc_un.res.maxvif = vifi + 1;
+ }
+ }
+- cache->mfc_un.res.lastuse = jiffies;
++ WRITE_ONCE(cache->mfc_un.res.lastuse, jiffies);
+ }
+
+ static int vif_add(struct net *net, struct mr_table *mrt,
+@@ -1667,9 +1667,9 @@ int ipmr_ioctl(struct sock *sk, int cmd, void *arg)
+ rcu_read_lock();
+ c = ipmr_cache_find(mrt, sr->src.s_addr, sr->grp.s_addr);
+ if (c) {
+- sr->pktcnt = c->_c.mfc_un.res.pkt;
+- sr->bytecnt = c->_c.mfc_un.res.bytes;
+- sr->wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr->pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr->bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr->wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+ return 0;
+ }
+@@ -1739,9 +1739,9 @@ int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+ rcu_read_lock();
+ c = ipmr_cache_find(mrt, sr.src.s_addr, sr.grp.s_addr);
+ if (c) {
+- sr.pktcnt = c->_c.mfc_un.res.pkt;
+- sr.bytecnt = c->_c.mfc_un.res.bytes;
+- sr.wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr.pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr.bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr.wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+@@ -1974,9 +1974,9 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
+ int vif, ct;
+
+ vif = c->_c.mfc_parent;
+- c->_c.mfc_un.res.pkt++;
+- c->_c.mfc_un.res.bytes += skb->len;
+- c->_c.mfc_un.res.lastuse = jiffies;
++ atomic_long_inc(&c->_c.mfc_un.res.pkt);
++ atomic_long_add(skb->len, &c->_c.mfc_un.res.bytes);
++ WRITE_ONCE(c->_c.mfc_un.res.lastuse, jiffies);
+
+ if (c->mfc_origin == htonl(INADDR_ANY) && true_vifi >= 0) {
+ struct mfc_cache *cache_proxy;
+@@ -2007,7 +2007,7 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
+ goto dont_forward;
+ }
+
+- c->_c.mfc_un.res.wrong_if++;
++ atomic_long_inc(&c->_c.mfc_un.res.wrong_if);
+
+ if (true_vifi >= 0 && mrt->mroute_do_assert &&
+ /* pimsm uses asserts, when switching from RPT to SPT,
+@@ -3015,9 +3015,9 @@ static int ipmr_mfc_seq_show(struct seq_file *seq, void *v)
+
+ if (it->cache != &mrt->mfc_unres_queue) {
+ seq_printf(seq, " %8lu %8lu %8lu",
+- mfc->_c.mfc_un.res.pkt,
+- mfc->_c.mfc_un.res.bytes,
+- mfc->_c.mfc_un.res.wrong_if);
++ atomic_long_read(&mfc->_c.mfc_un.res.pkt),
++ atomic_long_read(&mfc->_c.mfc_un.res.bytes),
++ atomic_long_read(&mfc->_c.mfc_un.res.wrong_if));
+ for (n = mfc->_c.mfc_un.res.minvif;
+ n < mfc->_c.mfc_un.res.maxvif; n++) {
+ if (VIF_EXISTS(mrt, n) &&
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index f0af12a2f70bcd..28d77d454d442e 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -263,9 +263,9 @@ int mr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
+ lastuse = READ_ONCE(c->mfc_un.res.lastuse);
+ lastuse = time_after_eq(jiffies, lastuse) ? jiffies - lastuse : 0;
+
+- mfcs.mfcs_packets = c->mfc_un.res.pkt;
+- mfcs.mfcs_bytes = c->mfc_un.res.bytes;
+- mfcs.mfcs_wrong_if = c->mfc_un.res.wrong_if;
++ mfcs.mfcs_packets = atomic_long_read(&c->mfc_un.res.pkt);
++ mfcs.mfcs_bytes = atomic_long_read(&c->mfc_un.res.bytes);
++ mfcs.mfcs_wrong_if = atomic_long_read(&c->mfc_un.res.wrong_if);
+ if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(mfcs), &mfcs, RTA_PAD) ||
+ nla_put_u64_64bit(skb, RTA_EXPIRES, jiffies_to_clock_t(lastuse),
+ RTA_PAD))
+@@ -330,9 +330,6 @@ int mr_table_dump(struct mr_table *mrt, struct sk_buff *skb,
+ list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
+ if (e < s_e)
+ goto next_entry2;
+- if (filter->dev &&
+- !mr_mfc_uses_dev(mrt, mfc, filter->dev))
+- goto next_entry2;
+
+ err = fill(mrt, skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, mfc, RTM_NEWROUTE, flags);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 723ac9181558c3..2a27913588d05a 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -870,11 +870,11 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ }
+ log_martians = IN_DEV_LOG_MARTIANS(in_dev);
+ vif = l3mdev_master_ifindex_rcu(rt->dst.dev);
+- rcu_read_unlock();
+
+ net = dev_net(rt->dst.dev);
+- peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, vif, 1);
++ peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, vif);
+ if (!peer) {
++ rcu_read_unlock();
+ icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST,
+ rt_nexthop(rt, ip_hdr(skb)->daddr));
+ return;
+@@ -893,7 +893,7 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ */
+ if (peer->n_redirects >= ip_rt_redirect_number) {
+ peer->rate_last = jiffies;
+- goto out_put_peer;
++ goto out_unlock;
+ }
+
+ /* Check for load limit; set rate_last to the latest sent
+@@ -914,8 +914,8 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ &ip_hdr(skb)->saddr, inet_iif(skb),
+ &ip_hdr(skb)->daddr, &gw);
+ }
+-out_put_peer:
+- inet_putpeer(peer);
++out_unlock:
++ rcu_read_unlock();
+ }
+
+ static int ip_error(struct sk_buff *skb)
+@@ -975,9 +975,9 @@ static int ip_error(struct sk_buff *skb)
+ break;
+ }
+
++ rcu_read_lock();
+ peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr,
+- l3mdev_master_ifindex(skb->dev), 1);
+-
++ l3mdev_master_ifindex_rcu(skb->dev));
+ send = true;
+ if (peer) {
+ now = jiffies;
+@@ -989,8 +989,9 @@ static int ip_error(struct sk_buff *skb)
+ peer->rate_tokens -= ip_rt_error_cost;
+ else
+ send = false;
+- inet_putpeer(peer);
+ }
++ rcu_read_unlock();
++
+ if (send)
+ icmp_send(skb, ICMP_DEST_UNREACH, code, 0);
+
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index 5dbed91c617825..76c23675ae50ab 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -392,6 +392,10 @@ static void hystart_update(struct sock *sk, u32 delay)
+ if (after(tp->snd_una, ca->end_seq))
+ bictcp_hystart_reset(sk);
+
++ /* hystart triggers when cwnd is larger than some threshold */
++ if (tcp_snd_cwnd(tp) < hystart_low_window)
++ return;
++
+ if (hystart_detect & HYSTART_ACK_TRAIN) {
+ u32 now = bictcp_clock_us(sk);
+
+@@ -467,9 +471,7 @@ __bpf_kfunc static void cubictcp_acked(struct sock *sk, const struct ack_sample
+ if (ca->delay_min == 0 || ca->delay_min > delay)
+ ca->delay_min = delay;
+
+- /* hystart triggers when cwnd is larger than some threshold */
+- if (!ca->found && tcp_in_slow_start(tp) && hystart &&
+- tcp_snd_cwnd(tp) >= hystart_low_window)
++ if (!ca->found && tcp_in_slow_start(tp) && hystart)
+ hystart_update(sk, delay);
+ }
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 8efc58716ce969..6d5387811c32ad 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -265,11 +265,14 @@ static u16 tcp_select_window(struct sock *sk)
+ u32 cur_win, new_win;
+
+ /* Make the window 0 if we failed to queue the data because we
+- * are out of memory. The window is temporary, so we don't store
+- * it on the socket.
++ * are out of memory.
+ */
+- if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM))
++ if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM)) {
++ tp->pred_flags = 0;
++ tp->rcv_wnd = 0;
++ tp->rcv_wup = tp->rcv_nxt;
+ return 0;
++ }
+
+ cur_win = tcp_receive_window(tp);
+ new_win = __tcp_select_window(sk);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index ff85242720a0a9..d2eeb6fc49b382 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -420,6 +420,49 @@ u32 udp_ehashfn(const struct net *net, const __be32 laddr, const __u16 lport,
+ udp_ehash_secret + net_hash_mix(net));
+ }
+
++/**
++ * udp4_lib_lookup1() - Simplified lookup using primary hash (destination port)
++ * @net: Network namespace
++ * @saddr: Source address, network order
++ * @sport: Source port, network order
++ * @daddr: Destination address, network order
++ * @hnum: Destination port, host order
++ * @dif: Destination interface index
++ * @sdif: Destination bridge port index, if relevant
++ * @udptable: Set of UDP hash tables
++ *
++ * Simplified lookup to be used as fallback if no sockets are found due to a
++ * potential race between (receive) address change, and lookup happening before
++ * the rehash operation. This function ignores SO_REUSEPORT groups while scoring
++ * result sockets, because if we have one, we don't need the fallback at all.
++ *
++ * Called under rcu_read_lock().
++ *
++ * Return: socket with highest matching score if any, NULL if none
++ */
++static struct sock *udp4_lib_lookup1(const struct net *net,
++ __be32 saddr, __be16 sport,
++ __be32 daddr, unsigned int hnum,
++ int dif, int sdif,
++ const struct udp_table *udptable)
++{
++ unsigned int slot = udp_hashfn(net, hnum, udptable->mask);
++ struct udp_hslot *hslot = &udptable->hash[slot];
++ struct sock *sk, *result = NULL;
++ int score, badness = 0;
++
++ sk_for_each_rcu(sk, &hslot->head) {
++ score = compute_score(sk, net,
++ saddr, sport, daddr, hnum, dif, sdif);
++ if (score > badness) {
++ result = sk;
++ badness = score;
++ }
++ }
++
++ return result;
++}
++
+ /* called with rcu_read_lock() */
+ static struct sock *udp4_lib_lookup2(const struct net *net,
+ __be32 saddr, __be16 sport,
+@@ -525,6 +568,19 @@ struct sock *__udp4_lib_lookup(const struct net *net, __be32 saddr,
+ result = udp4_lib_lookup2(net, saddr, sport,
+ htonl(INADDR_ANY), hnum, dif, sdif,
+ hslot2, skb);
++ if (!IS_ERR_OR_NULL(result))
++ goto done;
++
++ /* Primary hash (destination port) lookup as fallback for this race:
++ * 1. __ip4_datagram_connect() sets sk_rcv_saddr
++ * 2. lookup (this function): new sk_rcv_saddr, hashes not updated yet
++ * 3. rehash operation updating _secondary and four-tuple_ hashes
++ * The primary hash doesn't need an update after 1., so, thanks to this
++ * further step, 1. and 3. don't need to be atomic against the lookup.
++ */
++ result = udp4_lib_lookup1(net, saddr, sport, daddr, hnum, dif, sdif,
++ udptable);
++
+ done:
+ if (IS_ERR(result))
+ return NULL;
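
udp4_lib_lookup1()'s kerneldoc above spells out the race it closes: __ip4_datagram_connect() updates sk_rcv_saddr before the rehash moves the socket between secondary-hash chains, so a concurrent lookup can miss both two-tuple slots; the port-only primary hash is unaffected by the address change, so it serves as a safe fallback. A schematic of the resulting two-phase lookup (all helpers are stubs):

#include <stddef.h>

struct sock { int id; };

/* Stubs standing in for the real hash-chain scans. */
static struct sock *lookup2_exact(unsigned port, unsigned addr)
{
        (void)port; (void)addr;
        return NULL;                          /* pretend: rehash race, miss */
}
static struct sock *lookup2_wildcard(unsigned port)
{
        (void)port;
        return NULL;
}
static struct sock *lookup1_port_only(unsigned port)
{
        static struct sock fallback = { 1 };
        (void)port;
        return &fallback;                     /* port hash is never stale */
}

static struct sock *udp_lookup(unsigned port, unsigned addr)
{
        struct sock *sk;

        if ((sk = lookup2_exact(port, addr)))   /* common fast path */
                return sk;
        if ((sk = lookup2_wildcard(port)))      /* wildcard-address slot */
                return sk;
        /* Fallback for the connect()/rehash window described above. */
        return lookup1_port_only(port);
}
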
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 919ebfabbe4ee2..7b41fb4f00b587 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -80,9 +80,9 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ if (sp->len == XFRM_MAX_DEPTH)
+ goto out_reset;
+
+- x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+- (xfrm_address_t *)&ipv6_hdr(skb)->daddr,
+- spi, IPPROTO_ESP, AF_INET6);
++ x = xfrm_input_state_lookup(dev_net(skb->dev), skb->mark,
++ (xfrm_address_t *)&ipv6_hdr(skb)->daddr,
++ spi, IPPROTO_ESP, AF_INET6);
+
+ if (unlikely(x && x->dir && x->dir != XFRM_SA_DIR_IN)) {
+ /* non-offload path will record the error and audit log */
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index 071b0bc1179d81..a6984a29fdb9dd 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -222,10 +222,10 @@ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+ if (rt->rt6i_dst.plen < 128)
+ tmo >>= ((128 - rt->rt6i_dst.plen)>>5);
+
+- peer = inet_getpeer_v6(net->ipv6.peers, &fl6->daddr, 1);
++ rcu_read_lock();
++ peer = inet_getpeer_v6(net->ipv6.peers, &fl6->daddr);
+ res = inet_peer_xrlim_allow(peer, tmo);
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
+ }
+ if (!res)
+ __ICMP6_INC_STATS(net, ip6_dst_idev(dst),
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index f26841f1490f5c..434ddf263b88a3 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -613,15 +613,15 @@ int ip6_forward(struct sk_buff *skb)
+ else
+ target = &hdr->daddr;
+
+- peer = inet_getpeer_v6(net->ipv6.peers, &hdr->daddr, 1);
++ rcu_read_lock();
++ peer = inet_getpeer_v6(net->ipv6.peers, &hdr->daddr);
+
+ /* Limit redirects both by destination (here)
+ and by source (inside ndisc_send_redirect)
+ */
+ if (inet_peer_xrlim_allow(peer, 1*HZ))
+ ndisc_send_redirect(skb, target);
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
+ } else {
+ int addrtype = ipv6_addr_type(&hdr->saddr);
+
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index d5057401701c1a..440048d609c37a 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -506,9 +506,9 @@ static int ipmr_mfc_seq_show(struct seq_file *seq, void *v)
+
+ if (it->cache != &mrt->mfc_unres_queue) {
+ seq_printf(seq, " %8lu %8lu %8lu",
+- mfc->_c.mfc_un.res.pkt,
+- mfc->_c.mfc_un.res.bytes,
+- mfc->_c.mfc_un.res.wrong_if);
++ atomic_long_read(&mfc->_c.mfc_un.res.pkt),
++ atomic_long_read(&mfc->_c.mfc_un.res.bytes),
++ atomic_long_read(&mfc->_c.mfc_un.res.wrong_if));
+ for (n = mfc->_c.mfc_un.res.minvif;
+ n < mfc->_c.mfc_un.res.maxvif; n++) {
+ if (VIF_EXISTS(mrt, n) &&
+@@ -870,7 +870,7 @@ static void ip6mr_update_thresholds(struct mr_table *mrt,
+ cache->mfc_un.res.maxvif = vifi + 1;
+ }
+ }
+- cache->mfc_un.res.lastuse = jiffies;
++ WRITE_ONCE(cache->mfc_un.res.lastuse, jiffies);
+ }
+
+ static int mif6_add(struct net *net, struct mr_table *mrt,
+@@ -1928,9 +1928,9 @@ int ip6mr_ioctl(struct sock *sk, int cmd, void *arg)
+ c = ip6mr_cache_find(mrt, &sr->src.sin6_addr,
+ &sr->grp.sin6_addr);
+ if (c) {
+- sr->pktcnt = c->_c.mfc_un.res.pkt;
+- sr->bytecnt = c->_c.mfc_un.res.bytes;
+- sr->wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr->pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr->bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr->wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+ return 0;
+ }
+@@ -2000,9 +2000,9 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+ rcu_read_lock();
+ c = ip6mr_cache_find(mrt, &sr.src.sin6_addr, &sr.grp.sin6_addr);
+ if (c) {
+- sr.pktcnt = c->_c.mfc_un.res.pkt;
+- sr.bytecnt = c->_c.mfc_un.res.bytes;
+- sr.wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr.pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr.bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr.wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+@@ -2125,9 +2125,9 @@ static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
+ int true_vifi = ip6mr_find_vif(mrt, dev);
+
+ vif = c->_c.mfc_parent;
+- c->_c.mfc_un.res.pkt++;
+- c->_c.mfc_un.res.bytes += skb->len;
+- c->_c.mfc_un.res.lastuse = jiffies;
++ atomic_long_inc(&c->_c.mfc_un.res.pkt);
++ atomic_long_add(skb->len, &c->_c.mfc_un.res.bytes);
++ WRITE_ONCE(c->_c.mfc_un.res.lastuse, jiffies);
+
+ if (ipv6_addr_any(&c->mf6c_origin) && true_vifi >= 0) {
+ struct mfc6_cache *cache_proxy;
+@@ -2145,7 +2145,7 @@ static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
+ * Wrong interface: drop packet and (maybe) send PIM assert.
+ */
+ if (rcu_access_pointer(mrt->vif_table[vif].dev) != dev) {
+- c->_c.mfc_un.res.wrong_if++;
++ atomic_long_inc(&c->_c.mfc_un.res.wrong_if);
+
+ if (true_vifi >= 0 && mrt->mroute_do_assert &&
+ /* pimsm uses asserts, when switching from RPT to SPT,
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index aba94a34867379..d044c67019de6d 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1731,10 +1731,12 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
+ "Redirect: destination is not a neighbour\n");
+ goto release;
+ }
+- peer = inet_getpeer_v6(net->ipv6.peers, &ipv6_hdr(skb)->saddr, 1);
++
++ rcu_read_lock();
++ peer = inet_getpeer_v6(net->ipv6.peers, &ipv6_hdr(skb)->saddr);
+ ret = inet_peer_xrlim_allow(peer, 1*HZ);
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
++
+ if (!ret)
+ goto release;
+
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 0cef8ae5d1ea18..896c9c827a288c 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -159,6 +159,49 @@ static int compute_score(struct sock *sk, const struct net *net,
+ return score;
+ }
+
++/**
++ * udp6_lib_lookup1() - Simplified lookup using primary hash (destination port)
++ * @net: Network namespace
++ * @saddr: Source address, network order
++ * @sport: Source port, network order
++ * @daddr: Destination address, network order
++ * @hnum: Destination port, host order
++ * @dif: Destination interface index
++ * @sdif: Destination bridge port index, if relevant
++ * @udptable: Set of UDP hash tables
++ *
++ * Simplified lookup to be used as fallback if no sockets are found due to a
++ * potential race between (receive) address change, and lookup happening before
++ * the rehash operation. This function ignores SO_REUSEPORT groups while scoring
++ * result sockets, because if we have one, we don't need the fallback at all.
++ *
++ * Called under rcu_read_lock().
++ *
++ * Return: socket with highest matching score if any, NULL if none
++ */
++static struct sock *udp6_lib_lookup1(const struct net *net,
++ const struct in6_addr *saddr, __be16 sport,
++ const struct in6_addr *daddr,
++ unsigned int hnum, int dif, int sdif,
++ const struct udp_table *udptable)
++{
++ unsigned int slot = udp_hashfn(net, hnum, udptable->mask);
++ struct udp_hslot *hslot = &udptable->hash[slot];
++ struct sock *sk, *result = NULL;
++ int score, badness = 0;
++
++ sk_for_each_rcu(sk, &hslot->head) {
++ score = compute_score(sk, net,
++ saddr, sport, daddr, hnum, dif, sdif);
++ if (score > badness) {
++ result = sk;
++ badness = score;
++ }
++ }
++
++ return result;
++}
++
+ /* called with rcu_read_lock() */
+ static struct sock *udp6_lib_lookup2(const struct net *net,
+ const struct in6_addr *saddr, __be16 sport,
+@@ -263,6 +306,13 @@ struct sock *__udp6_lib_lookup(const struct net *net,
+ result = udp6_lib_lookup2(net, saddr, sport,
+ &in6addr_any, hnum, dif, sdif,
+ hslot2, skb);
++ if (!IS_ERR_OR_NULL(result))
++ goto done;
++
++ /* Cover address change/lookup/rehash race: see __udp4_lib_lookup() */
++ result = udp6_lib_lookup1(net, saddr, sport, daddr, hnum, dif, sdif,
++ udptable);
++
+ done:
+ if (IS_ERR(result))
+ return NULL;
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index f79fb99271ed84..c56bb4f451e6de 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1354,7 +1354,7 @@ static int pfkey_getspi(struct sock *sk, struct sk_buff *skb, const struct sadb_
+ }
+
+ if (hdr->sadb_msg_seq) {
+- x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq);
++ x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq, UINT_MAX);
+ if (x && !xfrm_addr_equal(&x->id.daddr, xdaddr, family)) {
+ xfrm_state_put(x);
+ x = NULL;
+@@ -1362,7 +1362,8 @@ static int pfkey_getspi(struct sock *sk, struct sk_buff *skb, const struct sadb_
+ }
+
+ if (!x)
+- x = xfrm_find_acq(net, &dummy_mark, mode, reqid, 0, proto, xdaddr, xsaddr, 1, family);
++ x = xfrm_find_acq(net, &dummy_mark, mode, reqid, 0, UINT_MAX,
++ proto, xdaddr, xsaddr, 1, family);
+
+ if (x == NULL)
+ return -ENOENT;
+@@ -1417,7 +1418,7 @@ static int pfkey_acquire(struct sock *sk, struct sk_buff *skb, const struct sadb
+ if (hdr->sadb_msg_seq == 0 || hdr->sadb_msg_errno == 0)
+ return 0;
+
+- x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq);
++ x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq, UINT_MAX);
+ if (x == NULL)
+ return 0;
+
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index 68596ef78b15ee..d0b145888e1398 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -728,7 +728,7 @@ static ssize_t ieee80211_if_parse_active_links(struct ieee80211_sub_if_data *sda
+ {
+ u16 active_links;
+
+- if (kstrtou16(buf, 0, &active_links))
++ if (kstrtou16(buf, 0, &active_links) || !active_links)
+ return -EINVAL;
+
+ return ieee80211_set_active_links(&sdata->vif, active_links) ?: buflen;
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index d382d9729e853f..a06644084d15d1 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -724,6 +724,9 @@ static inline void drv_flush_sta(struct ieee80211_local *local,
+ if (sdata && !check_sdata_in_driver(sdata))
+ return;
+
++ if (!sta->uploaded)
++ return;
++
+ trace_drv_flush_sta(local, sdata, &sta->sta);
+ if (local->ops->flush_sta)
+ local->ops->flush_sta(&local->hw, &sdata->vif, &sta->sta);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 694b43091fec6b..6f3a86040cfcd8 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2994,6 +2994,7 @@ ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta
+ }
+
+ IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_frames);
++ ieee80211_set_qos_hdr(sdata, fwd_skb);
+ ieee80211_add_pending_skb(local, fwd_skb);
+
+ rx_accept:
+diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c
+index b0dd008e2114bc..dd595d9b5e50c7 100644
+--- a/net/mptcp/ctrl.c
++++ b/net/mptcp/ctrl.c
+@@ -405,9 +405,9 @@ void mptcp_active_detect_blackhole(struct sock *ssk, bool expired)
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPCAPABLEACTIVEDROP);
+ subflow->mpc_drop = 1;
+ mptcp_subflow_early_fallback(mptcp_sk(subflow->conn), subflow);
+- } else {
+- subflow->mpc_drop = 0;
+ }
++ } else if (ssk->sk_state == TCP_SYN_SENT) {
++ subflow->mpc_drop = 0;
+ }
+ }
+
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 123f3f2972841a..fd2de185bc939f 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -108,7 +108,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ mp_opt->suboptions |= OPTION_MPTCP_DSS;
+ mp_opt->use_map = 1;
+ mp_opt->mpc_map = 1;
+- mp_opt->use_ack = 0;
+ mp_opt->data_len = get_unaligned_be16(ptr);
+ ptr += 2;
+ }
+@@ -157,11 +156,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ pr_debug("DSS\n");
+ ptr++;
+
+- /* we must clear 'mpc_map' be able to detect MP_CAPABLE
+- * map vs DSS map in mptcp_incoming_options(), and reconstruct
+- * map info accordingly
+- */
+- mp_opt->mpc_map = 0;
+ flags = (*ptr++) & MPTCP_DSS_FLAG_MASK;
+ mp_opt->data_fin = (flags & MPTCP_DSS_DATA_FIN) != 0;
+ mp_opt->dsn64 = (flags & MPTCP_DSS_DSN64) != 0;
+@@ -369,8 +363,11 @@ void mptcp_get_options(const struct sk_buff *skb,
+ const unsigned char *ptr;
+ int length;
+
+- /* initialize option status */
+- mp_opt->suboptions = 0;
++ /* Ensure that casting the whole status to u32 is efficient and safe */
++ BUILD_BUG_ON(sizeof_field(struct mptcp_options_received, status) != sizeof(u32));
++ BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct mptcp_options_received, status),
++ sizeof(u32)));
++ *(u32 *)&mp_opt->status = 0;
+
+ length = (th->doff * 4) - sizeof(struct tcphdr);
+ ptr = (const unsigned char *)(th + 1);
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 45a2b5f05d38b0..8c4f934d198cc6 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -2049,7 +2049,8 @@ int mptcp_pm_nl_set_flags(struct sk_buff *skb, struct genl_info *info)
+ return -EINVAL;
+ }
+ if ((addr.flags & MPTCP_PM_ADDR_FLAG_FULLMESH) &&
+- (entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) {
++ (entry->flags & (MPTCP_PM_ADDR_FLAG_SIGNAL |
++ MPTCP_PM_ADDR_FLAG_IMPLICIT))) {
+ spin_unlock_bh(&pernet->lock);
+ GENL_SET_ERR_MSG(info, "invalid addr flags");
+ return -EINVAL;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 4b9d850ce85a25..fac774825aff39 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1766,8 +1766,10 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
+ * see mptcp_disconnect().
+ * Attempt it again outside the problematic scope.
+ */
+- if (!mptcp_disconnect(sk, 0))
++ if (!mptcp_disconnect(sk, 0)) {
++ sk->sk_disconnects++;
+ sk->sk_socket->state = SS_UNCONNECTED;
++ }
+ }
+ inet_clear_bit(DEFER_CONNECT, sk);
+
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 73526f1d768fcb..b70a303e082878 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -149,22 +149,24 @@ struct mptcp_options_received {
+ u32 subflow_seq;
+ u16 data_len;
+ __sum16 csum;
+- u16 suboptions;
++ struct_group(status,
++ u16 suboptions;
++ u16 use_map:1,
++ dsn64:1,
++ data_fin:1,
++ use_ack:1,
++ ack64:1,
++ mpc_map:1,
++ reset_reason:4,
++ reset_transient:1,
++ echo:1,
++ backup:1,
++ deny_join_id0:1,
++ __unused:2;
++ );
++ u8 join_id;
+ u32 token;
+ u32 nonce;
+- u16 use_map:1,
+- dsn64:1,
+- data_fin:1,
+- use_ack:1,
+- ack64:1,
+- mpc_map:1,
+- reset_reason:4,
+- reset_transient:1,
+- echo:1,
+- backup:1,
+- deny_join_id0:1,
+- __unused:2;
+- u8 join_id;
+ u64 thmac;
+ u8 hmac[MPTCPOPT_HMAC_LEN];
+ struct mptcp_addr_info addr;
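
The protocol.h hunk above wraps the suboptions word and the flag bits into struct_group() so that mptcp_get_options() can clear the whole group with one aligned 32-bit store; the two BUILD_BUG_ONs in options.c pin down exactly the size and alignment that store depends on. A self-contained userspace sketch of the same trick (the struct layout is invented, and a memset would be the strictly conforming spelling of the store):

#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Just enough of the kernel's struct_group() macro for this example. */
#define struct_group(NAME, ...) \
        union { \
                struct { __VA_ARGS__ }; \
                struct { __VA_ARGS__ } NAME; \
        }

struct opts {
        uint32_t subflow_seq;
        struct_group(status,
                uint16_t suboptions;
                uint16_t use_map:1,
                         mpc_map:1,
                         __unused:14;
        );
        uint32_t token;
};

int main(void)
{
        struct opts o = { 0 };

        o.subflow_seq = 7;
        o.suboptions = 0xff;
        o.use_map = 1;
        o.token = 42;

        /* Mirrors the patch's BUILD_BUG_ONs: size and alignment of the group. */
        static_assert(sizeof(o.status) == sizeof(uint32_t), "status is 4 bytes");
        static_assert(offsetof(struct opts, status) % alignof(uint32_t) == 0,
                      "status is 4-byte aligned");

        /* One store clears every grouped field. */
        *(uint32_t *)&o.status = 0;

        printf("suboptions=%u use_map=%u subflow_seq=%u token=%u\n",
               (unsigned)o.suboptions, (unsigned)o.use_map,
               (unsigned)o.subflow_seq, (unsigned)o.token);
        return 0;
}
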
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 14bd66909ca455..4a8ce2949faeac 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -1089,14 +1089,12 @@ static int ncsi_rsp_handler_netlink(struct ncsi_request *nr)
+ static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
+ {
+ struct ncsi_dev_priv *ndp = nr->ndp;
++ struct sockaddr *saddr = &ndp->pending_mac;
+ struct net_device *ndev = ndp->ndev.dev;
+ struct ncsi_rsp_gmcma_pkt *rsp;
+- struct sockaddr saddr;
+- int ret = -1;
+ int i;
+
+ rsp = (struct ncsi_rsp_gmcma_pkt *)skb_network_header(nr->rsp);
+- saddr.sa_family = ndev->type;
+ ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+
+ netdev_info(ndev, "NCSI: Received %d provisioned MAC addresses\n",
+@@ -1108,20 +1106,20 @@ static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
+ rsp->addresses[i][4], rsp->addresses[i][5]);
+ }
+
++ saddr->sa_family = ndev->type;
+ for (i = 0; i < rsp->address_count; i++) {
+- memcpy(saddr.sa_data, &rsp->addresses[i], ETH_ALEN);
+- ret = ndev->netdev_ops->ndo_set_mac_address(ndev, &saddr);
+- if (ret < 0) {
++ if (!is_valid_ether_addr(rsp->addresses[i])) {
+ netdev_warn(ndev, "NCSI: Unable to assign %pM to device\n",
+- saddr.sa_data);
++ rsp->addresses[i]);
+ continue;
+ }
+- netdev_warn(ndev, "NCSI: Set MAC address to %pM\n", saddr.sa_data);
++ memcpy(saddr->sa_data, rsp->addresses[i], ETH_ALEN);
++ netdev_warn(ndev, "NCSI: Will set MAC address to %pM\n", saddr->sa_data);
+ break;
+ }
+
+- ndp->gma_flag = ret == 0;
+- return ret;
++ ndp->gma_flag = 1;
++ return 0;
+ }
+
+ static struct ncsi_rsp_handler {
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 42dc8cc721ff7b..939510247ef5a6 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4647,6 +4647,14 @@ static int nf_tables_fill_set_concat(struct sk_buff *skb,
+ return 0;
+ }
+
++static u32 nft_set_userspace_size(const struct nft_set_ops *ops, u32 size)
++{
++ if (ops->usize)
++ return ops->usize(size);
++
++ return size;
++}
++
+ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ const struct nft_set *set, u16 event, u16 flags)
+ {
+@@ -4717,7 +4725,8 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ if (!nest)
+ goto nla_put_failure;
+ if (set->size &&
+- nla_put_be32(skb, NFTA_SET_DESC_SIZE, htonl(set->size)))
++ nla_put_be32(skb, NFTA_SET_DESC_SIZE,
++ htonl(nft_set_userspace_size(set->ops, set->size))))
+ goto nla_put_failure;
+
+ if (set->field_count > 1 &&
+@@ -4959,7 +4968,7 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ const struct nlattr *nla)
+ {
+- u32 num_regs = 0, key_num_regs = 0;
++ u32 len = 0, num_regs;
+ struct nlattr *attr;
+ int rem, err, i;
+
+@@ -4973,12 +4982,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ }
+
+ for (i = 0; i < desc->field_count; i++)
+- num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
++ len += round_up(desc->field_len[i], sizeof(u32));
+
+- key_num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
+- if (key_num_regs != num_regs)
++ if (len != desc->klen)
+ return -EINVAL;
+
++ num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
+ if (num_regs > NFT_REG32_COUNT)
+ return -E2BIG;
+
+@@ -5085,6 +5094,15 @@ static bool nft_set_is_same(const struct nft_set *set,
+ return true;
+ }
+
++static u32 nft_set_kernel_size(const struct nft_set_ops *ops,
++ const struct nft_set_desc *desc)
++{
++ if (ops->ksize)
++ return ops->ksize(desc->size);
++
++ return desc->size;
++}
++
+ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ const struct nlattr * const nla[])
+ {
+@@ -5267,6 +5285,9 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ if (err < 0)
+ return err;
+
++ if (desc.size)
++ desc.size = nft_set_kernel_size(set->ops, &desc);
++
+ err = 0;
+ if (!nft_set_is_same(set, &desc, exprs, num_exprs, flags)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_SET_NAME]);
+@@ -5289,6 +5310,9 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ if (IS_ERR(ops))
+ return PTR_ERR(ops);
+
++ if (desc.size)
++ desc.size = nft_set_kernel_size(ops, &desc);
++
+ udlen = 0;
+ if (nla[NFTA_SET_USERDATA])
+ udlen = nla_len(nla[NFTA_SET_USERDATA]);
+@@ -6855,6 +6879,27 @@ static bool nft_setelem_valid_key_end(const struct nft_set *set,
+ return true;
+ }
+
++static u32 nft_set_maxsize(const struct nft_set *set)
++{
++ u32 maxsize, delta;
++
++ if (!set->size)
++ return UINT_MAX;
++
++ if (set->ops->adjust_maxsize)
++ delta = set->ops->adjust_maxsize(set);
++ else
++ delta = 0;
++
++ if (check_add_overflow(set->size, set->ndeact, &maxsize))
++ return UINT_MAX;
++
++ if (check_add_overflow(maxsize, delta, &maxsize))
++ return UINT_MAX;
++
++ return maxsize;
++}
++
+ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ const struct nlattr *attr, u32 nlmsg_flags)
+ {
+@@ -7218,7 +7263,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ }
+
+ if (!(flags & NFT_SET_ELEM_CATCHALL)) {
+- unsigned int max = set->size ? set->size + set->ndeact : UINT_MAX;
++ unsigned int max = nft_set_maxsize(set);
+
+ if (!atomic_add_unless(&set->nelems, 1, max)) {
+ err = -ENFILE;
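
nft_set_maxsize() above saturates to UINT_MAX instead of wrapping when set->size + set->ndeact + delta overflows. The kernel's check_add_overflow() wraps a compiler builtin; a userspace model using __builtin_add_overflow() (GCC/Clang) directly:

#include <limits.h>
#include <stdio.h>

static unsigned int sat_maxsize(unsigned int size, unsigned int ndeact,
                                unsigned int delta)
{
        unsigned int maxsize;

        if (!size)
                return UINT_MAX;                 /* unbounded set */
        if (__builtin_add_overflow(size, ndeact, &maxsize))
                return UINT_MAX;                 /* saturate instead of wrapping */
        if (__builtin_add_overflow(maxsize, delta, &maxsize))
                return UINT_MAX;
        return maxsize;
}

int main(void)
{
        printf("%u\n", sat_maxsize(100, 3, 1));          /* 104 */
        printf("%u\n", sat_maxsize(UINT_MAX - 1, 5, 0)); /* saturates */
        return 0;
}
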
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index 2f732fae5a831e..da9ebd00b19891 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -289,6 +289,15 @@ static bool nft_flow_offload_skip(struct sk_buff *skb, int family)
+ return false;
+ }
+
++static void flow_offload_ct_tcp(struct nf_conn *ct)
++{
++ /* conntrack will not see all packets, disable tcp window validation. */
++ spin_lock_bh(&ct->lock);
++ ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++ ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++ spin_unlock_bh(&ct->lock);
++}
++
+ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
+@@ -356,11 +365,8 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ goto err_flow_alloc;
+
+ flow_offload_route_init(flow, &route);
+-
+- if (tcph) {
+- ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
+- ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
+- }
++ if (tcph)
++ flow_offload_ct_tcp(ct);
+
+ __set_bit(NF_FLOW_HW_BIDIRECTIONAL, &flow->flags);
+ ret = flow_offload_add(flowtable, flow);
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index b7ea21327549b3..2e8ef16ff191d4 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -750,6 +750,46 @@ static void nft_rbtree_gc_init(const struct nft_set *set)
+ priv->last_gc = jiffies;
+ }
+
++/* rbtree stores ranges as singleton elements; each range is composed of two
++ * elements ...
++ */
++static u32 nft_rbtree_ksize(u32 size)
++{
++ return size * 2;
++}
++
++/* ... hide this detail to userspace. */
++static u32 nft_rbtree_usize(u32 size)
++{
++ if (!size)
++ return 0;
++
++ return size / 2;
++}
++
++static u32 nft_rbtree_adjust_maxsize(const struct nft_set *set)
++{
++ struct nft_rbtree *priv = nft_set_priv(set);
++ struct nft_rbtree_elem *rbe;
++ struct rb_node *node;
++ const void *key;
++
++ node = rb_last(&priv->root);
++ if (!node)
++ return 0;
++
++ rbe = rb_entry(node, struct nft_rbtree_elem, node);
++ if (!nft_rbtree_interval_end(rbe))
++ return 0;
++
++ key = nft_set_ext_key(&rbe->ext);
++ if (memchr(key, 1, set->klen))
++ return 0;
++
++ /* this is the all-zero no-match element. */
++ return 1;
++}
++
+ const struct nft_set_type nft_set_rbtree_type = {
+ .features = NFT_SET_INTERVAL | NFT_SET_MAP | NFT_SET_OBJECT | NFT_SET_TIMEOUT,
+ .ops = {
+@@ -768,5 +808,8 @@ const struct nft_set_type nft_set_rbtree_type = {
+ .lookup = nft_rbtree_lookup,
+ .walk = nft_rbtree_walk,
+ .get = nft_rbtree_get,
++ .ksize = nft_rbtree_ksize,
++ .usize = nft_rbtree_usize,
++ .adjust_maxsize = nft_rbtree_adjust_maxsize,
+ },
+ };
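
With the ksize()/usize() callbacks above, userspace keeps counting ranges while the rbtree backend reserves the two boundary elements each range needs; the conversion must round-trip so that dumps report the size the user configured. A trivial sketch of that round trip:

#include <assert.h>
#include <stdio.h>

static unsigned int ksize(unsigned int user_size) { return user_size * 2; }

static unsigned int usize(unsigned int kernel_size)
{
        return kernel_size ? kernel_size / 2 : 0;
}

int main(void)
{
        for (unsigned int n = 0; n <= 4; n++)
                assert(usize(ksize(n)) == n);   /* conversion is lossless */

        printf("a set sized for 3 ranges reserves %u elements\n", ksize(3));
        return 0;
}
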
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 59050caab65c8b..72c65d938a150e 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -397,15 +397,15 @@ static int rose_setsockopt(struct socket *sock, int level, int optname,
+ {
+ struct sock *sk = sock->sk;
+ struct rose_sock *rose = rose_sk(sk);
+- int opt;
++ unsigned int opt;
+
+ if (level != SOL_ROSE)
+ return -ENOPROTOOPT;
+
+- if (optlen < sizeof(int))
++ if (optlen < sizeof(unsigned int))
+ return -EINVAL;
+
+- if (copy_from_sockptr(&opt, optval, sizeof(int)))
++ if (copy_from_sockptr(&opt, optval, sizeof(unsigned int)))
+ return -EFAULT;
+
+ switch (optname) {
+@@ -414,31 +414,31 @@ static int rose_setsockopt(struct socket *sock, int level, int optname,
+ return 0;
+
+ case ROSE_T1:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->t1 = opt * HZ;
+ return 0;
+
+ case ROSE_T2:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->t2 = opt * HZ;
+ return 0;
+
+ case ROSE_T3:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->t3 = opt * HZ;
+ return 0;
+
+ case ROSE_HOLDBACK:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->hb = opt * HZ;
+ return 0;
+
+ case ROSE_IDLE:
+- if (opt < 0)
++ if (opt > UINT_MAX / (60 * HZ))
+ return -EINVAL;
+ rose->idle = opt * 60 * HZ;
+ return 0;
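
The ROSE setsockopt hunk above bounds each timer value before multiplying, since `opt * HZ` cannot be checked once the product has already wrapped. A userspace model of the guard (HZ is fixed at 100 here purely for the demonstration):

#include <limits.h>
#include <stdio.h>

#define HZ 100u

static int set_timer_jiffies(unsigned int opt, unsigned int *out)
{
        if (opt < 1 || opt > UINT_MAX / HZ)
                return -1;          /* -EINVAL in the kernel */
        *out = opt * HZ;            /* now provably cannot wrap */
        return 0;
}

int main(void)
{
        unsigned int t1;

        if (set_timer_jiffies(30, &t1) == 0)
                printf("t1 = %u jiffies\n", t1);
        if (set_timer_jiffies(UINT_MAX / HZ + 1, &t1) != 0)
                printf("oversized value rejected\n");
        return 0;
}
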
+diff --git a/net/rose/rose_timer.c b/net/rose/rose_timer.c
+index f06ddbed3fed63..1525773e94aa17 100644
+--- a/net/rose/rose_timer.c
++++ b/net/rose/rose_timer.c
+@@ -122,6 +122,10 @@ static void rose_heartbeat_expiry(struct timer_list *t)
+ struct rose_sock *rose = rose_sk(sk);
+
+ bh_lock_sock(sk);
++ if (sock_owned_by_user(sk)) {
++ sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ/20);
++ goto out;
++ }
+ switch (rose->state) {
+ case ROSE_STATE_0:
+ /* Magic here: If we listen() and a new link dies before it
+@@ -152,6 +156,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
+ }
+
+ rose_start_heartbeat(sk);
++out:
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ }
+@@ -162,6 +167,10 @@ static void rose_timer_expiry(struct timer_list *t)
+ struct sock *sk = &rose->sock;
+
+ bh_lock_sock(sk);
++ if (sock_owned_by_user(sk)) {
++ sk_reset_timer(sk, &rose->timer, jiffies + HZ/20);
++ goto out;
++ }
+ switch (rose->state) {
+ case ROSE_STATE_1: /* T1 */
+ case ROSE_STATE_4: /* T2 */
+@@ -182,6 +191,7 @@ static void rose_timer_expiry(struct timer_list *t)
+ }
+ break;
+ }
++out:
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ }
+@@ -192,6 +202,10 @@ static void rose_idletimer_expiry(struct timer_list *t)
+ struct sock *sk = &rose->sock;
+
+ bh_lock_sock(sk);
++ if (sock_owned_by_user(sk)) {
++ sk_reset_timer(sk, &rose->idletimer, jiffies + HZ/20);
++ goto out;
++ }
+ rose_clear_queues(sk);
+
+ rose_write_internal(sk, ROSE_CLEAR_REQUEST);
+@@ -207,6 +221,7 @@ static void rose_idletimer_expiry(struct timer_list *t)
+ sk->sk_state_change(sk);
+ sock_set_flag(sk, SOCK_DEAD);
+ }
++out:
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ }
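
All three ROSE timer handlers above now share one pattern: if a process currently owns the socket lock, re-arm the timer a fraction of a second later instead of racing the process. A userspace model of that deferral (all names invented):

#include <stdbool.h>
#include <stdio.h>

struct sock_model {
        bool owned_by_user;   /* sock_owned_by_user() stand-in */
        int  rearm_count;     /* how often the timer punted */
};

static void rearm_shortly(struct sock_model *sk)
{
        sk->rearm_count++;    /* sk_reset_timer(sk, ..., jiffies + HZ/20) */
}

static void timer_expiry(struct sock_model *sk)
{
        /* bh_lock_sock(sk) would be taken here */
        if (sk->owned_by_user) {
                rearm_shortly(sk);   /* don't race the process, try again */
                return;              /* goto out; bh_unlock_sock(sk) */
        }
        printf("timer work ran\n");  /* the actual state machine work */
        /* bh_unlock_sock(sk) */
}

int main(void)
{
        struct sock_model sk = { .owned_by_user = true };

        timer_expiry(&sk);           /* punts */
        sk.owned_by_user = false;
        timer_expiry(&sk);           /* runs */
        printf("deferred %d time(s)\n", sk.rearm_count);
        return 0;
}
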
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 598b4ee389fc1e..2a1396cd892f30 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -63,11 +63,12 @@ int rxrpc_abort_conn(struct rxrpc_connection *conn, struct sk_buff *skb,
+ /*
+ * Mark a connection as being remotely aborted.
+ */
+-static bool rxrpc_input_conn_abort(struct rxrpc_connection *conn,
++static void rxrpc_input_conn_abort(struct rxrpc_connection *conn,
+ struct sk_buff *skb)
+ {
+- return rxrpc_set_conn_aborted(conn, skb, skb->priority, -ECONNABORTED,
+- RXRPC_CALL_REMOTELY_ABORTED);
++ trace_rxrpc_rx_conn_abort(conn, skb);
++ rxrpc_set_conn_aborted(conn, skb, skb->priority, -ECONNABORTED,
++ RXRPC_CALL_REMOTELY_ABORTED);
+ }
+
+ /*
+@@ -202,11 +203,14 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn)
+
+ for (i = 0; i < RXRPC_MAXCALLS; i++) {
+ call = conn->channels[i].call;
+- if (call)
++ if (call) {
++ rxrpc_see_call(call, rxrpc_call_see_conn_abort);
+ rxrpc_set_call_completion(call,
+ conn->completion,
+ conn->abort_code,
+ conn->error);
++ rxrpc_poke_call(call, rxrpc_call_poke_conn_abort);
++ }
+ }
+
+ _leave("");
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 552ba84a255c43..5d0842efde69ff 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -238,7 +238,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+ bool use;
+ int slot;
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+
+ while (!list_empty(collector)) {
+ peer = list_entry(collector->next,
+@@ -249,7 +249,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+ continue;
+
+ use = __rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ if (use) {
+ keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME;
+@@ -269,17 +269,17 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+ */
+ slot += cursor;
+ slot &= mask;
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ list_add_tail(&peer->keepalive_link,
+ &rxnet->peer_keepalive[slot & mask]);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+ rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive);
+ }
+ rxrpc_put_peer(peer, rxrpc_peer_put_keepalive);
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ }
+
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+ }
+
+ /*
+@@ -309,7 +309,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work)
+ * second; the bucket at cursor + 1 goes at now + 1s and so
+ * on...
+ */
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ list_splice_init(&rxnet->peer_keepalive_new, &collector);
+
+ stop = cursor + ARRAY_SIZE(rxnet->peer_keepalive);
+@@ -321,7 +321,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work)
+ }
+
+ base = now;
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ rxnet->peer_keepalive_base = base;
+ rxnet->peer_keepalive_cursor = cursor;
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 49dcda67a0d591..956fc7ea4b7346 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -313,10 +313,10 @@ void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer)
+ hash_key = rxrpc_peer_hash_key(local, &peer->srx);
+ rxrpc_init_peer(local, peer, hash_key);
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
+ list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+ }
+
+ /*
+@@ -348,7 +348,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
+ return NULL;
+ }
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+
+ /* Need to check that we aren't racing with someone else */
+ peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
+@@ -361,7 +361,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
+ &rxnet->peer_keepalive_new);
+ }
+
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ if (peer)
+ rxrpc_free_peer(candidate);
+@@ -411,10 +411,10 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
+
+ ASSERT(hlist_empty(&peer->error_targets));
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ hash_del_rcu(&peer->hash_link);
+ list_del_init(&peer->keepalive_link);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ rxrpc_free_peer(peer);
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index bbc778c233c892..dfa3067084948f 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -390,6 +390,7 @@ static struct tcf_proto *tcf_proto_create(const char *kind, u32 protocol,
+ tp->protocol = protocol;
+ tp->prio = prio;
+ tp->chain = chain;
++ tp->usesw = !tp->ops->reoffload;
+ spin_lock_init(&tp->lock);
+ refcount_set(&tp->refcnt, 1);
+
+@@ -410,39 +411,31 @@ static void tcf_proto_get(struct tcf_proto *tp)
+ refcount_inc(&tp->refcnt);
+ }
+
+-static void tcf_maintain_bypass(struct tcf_block *block)
++static void tcf_proto_count_usesw(struct tcf_proto *tp, bool add)
+ {
+- int filtercnt = atomic_read(&block->filtercnt);
+- int skipswcnt = atomic_read(&block->skipswcnt);
+- bool bypass_wanted = filtercnt > 0 && filtercnt == skipswcnt;
+-
+- if (bypass_wanted != block->bypass_wanted) {
+ #ifdef CONFIG_NET_CLS_ACT
+- if (bypass_wanted)
+- static_branch_inc(&tcf_bypass_check_needed_key);
+- else
+- static_branch_dec(&tcf_bypass_check_needed_key);
+-#endif
+- block->bypass_wanted = bypass_wanted;
++ struct tcf_block *block = tp->chain->block;
++ bool counted = false;
++
++ if (!add) {
++ if (tp->usesw && tp->counted) {
++ if (!atomic_dec_return(&block->useswcnt))
++ static_branch_dec(&tcf_sw_enabled_key);
++ tp->counted = false;
++ }
++ return;
+ }
+-}
+-
+-static void tcf_block_filter_cnt_update(struct tcf_block *block, bool *counted, bool add)
+-{
+- lockdep_assert_not_held(&block->cb_lock);
+
+- down_write(&block->cb_lock);
+- if (*counted != add) {
+- if (add) {
+- atomic_inc(&block->filtercnt);
+- *counted = true;
+- } else {
+- atomic_dec(&block->filtercnt);
+- *counted = false;
+- }
++ spin_lock(&tp->lock);
++ if (tp->usesw && !tp->counted) {
++ counted = true;
++ tp->counted = true;
+ }
+- tcf_maintain_bypass(block);
+- up_write(&block->cb_lock);
++ spin_unlock(&tp->lock);
++
++ if (counted && atomic_inc_return(&block->useswcnt) == 1)
++ static_branch_inc(&tcf_sw_enabled_key);
++#endif
+ }
+
+ static void tcf_chain_put(struct tcf_chain *chain);
+@@ -451,7 +444,7 @@ static void tcf_proto_destroy(struct tcf_proto *tp, bool rtnl_held,
+ bool sig_destroy, struct netlink_ext_ack *extack)
+ {
+ tp->ops->destroy(tp, rtnl_held, extack);
+- tcf_block_filter_cnt_update(tp->chain->block, &tp->counted, false);
++ tcf_proto_count_usesw(tp, false);
+ if (sig_destroy)
+ tcf_proto_signal_destroyed(tp->chain, tp);
+ tcf_chain_put(tp->chain);
+@@ -2404,7 +2397,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ tfilter_notify(net, skb, n, tp, block, q, parent, fh,
+ RTM_NEWTFILTER, false, rtnl_held, extack);
+ tfilter_put(tp, fh);
+- tcf_block_filter_cnt_update(block, &tp->counted, true);
++ tcf_proto_count_usesw(tp, true);
+ /* q pointer is NULL for shared blocks */
+ if (q)
+ q->flags &= ~TCQ_F_CAN_BYPASS;
+@@ -3521,8 +3514,6 @@ static void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
+ if (*flags & TCA_CLS_FLAGS_IN_HW)
+ return;
+ *flags |= TCA_CLS_FLAGS_IN_HW;
+- if (tc_skip_sw(*flags))
+- atomic_inc(&block->skipswcnt);
+ atomic_inc(&block->offloadcnt);
+ }
+
+@@ -3531,8 +3522,6 @@ static void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
+ if (!(*flags & TCA_CLS_FLAGS_IN_HW))
+ return;
+ *flags &= ~TCA_CLS_FLAGS_IN_HW;
+- if (tc_skip_sw(*flags))
+- atomic_dec(&block->skipswcnt);
+ atomic_dec(&block->offloadcnt);
+ }
+
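
tcf_proto_count_usesw() above enables the tcf_sw_enabled_key static branch only on the 0->1 transition of the per-block useswcnt and disables it again on 1->0, so the datapath pays for the software-classification check only while some filter actually needs it. A userspace model of counted static-branch gating (the static key is modelled with a plain atomic):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int sw_enabled_key;   /* stand-in for the static key */

static void static_branch_inc(void) { atomic_fetch_add(&sw_enabled_key, 1); }
static void static_branch_dec(void) { atomic_fetch_sub(&sw_enabled_key, 1); }
static bool static_branch_unlikely(void) { return atomic_load(&sw_enabled_key) > 0; }

struct block { atomic_int useswcnt; };

static void count_usesw(struct block *b, bool add)
{
        if (add) {
                if (atomic_fetch_add(&b->useswcnt, 1) + 1 == 1)
                        static_branch_inc();   /* first sw filter: enable */
        } else {
                if (atomic_fetch_sub(&b->useswcnt, 1) - 1 == 0)
                        static_branch_dec();   /* last sw filter: disable */
        }
}

int main(void)
{
        struct block b = { 0 };

        count_usesw(&b, true);
        printf("sw path needed: %d\n", static_branch_unlikely());
        count_usesw(&b, false);
        printf("sw path needed: %d\n", static_branch_unlikely());
        return 0;
}
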
+diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
+index 1941ebec23ff9c..7fbe42f0e5c2b7 100644
+--- a/net/sched/cls_bpf.c
++++ b/net/sched/cls_bpf.c
+@@ -509,6 +509,8 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(prog->gen_flags))
+ prog->gen_flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, prog->gen_flags);
++
+ if (oldprog) {
+ idr_replace(&head->handle_idr, prog, handle);
+ list_replace_rcu(&oldprog->link, &prog->link);
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 1008ec8a464c93..03505673d5234d 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2503,6 +2503,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(fnew->flags))
+ fnew->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, fnew->flags);
++
+ spin_lock(&tp->lock);
+
+ /* tp was deleted concurrently. -EAGAIN will cause caller to lookup
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 9f1e62ca508d04..f03bf5da39ee83 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -228,6 +228,8 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(new->flags))
+ new->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, new->flags);
++
+ *arg = head;
+ rcu_assign_pointer(tp->root, new);
+ return 0;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index d3a03c57545bcc..2a1c00048fd6f4 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -951,6 +951,8 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(new->flags))
+ new->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, new->flags);
++
+ u32_replace_knode(tp, tp_c, new);
+ tcf_unbind_filter(tp, &n->res);
+ tcf_exts_get_net(&n->exts);
+@@ -1164,6 +1166,8 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(n->flags))
+ n->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, n->flags);
++
+ ins = &ht->ht[TC_U32_HASH(handle)];
+ for (pins = rtnl_dereference(*ins); pins;
+ ins = &pins->next, pins = rtnl_dereference(*ins))
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index a1d27bc039a364..d26ac6bd9b1080 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1664,6 +1664,10 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ q = qdisc_lookup(dev, tcm->tcm_handle);
+ if (!q)
+ goto create_n_graft;
++ if (q->parent != tcm->tcm_parent) {
++ NL_SET_ERR_MSG(extack, "Cannot move an existing qdisc to a different parent");
++ return -EINVAL;
++ }
+ if (n->nlmsg_flags & NLM_F_EXCL) {
+ NL_SET_ERR_MSG(extack, "Exclusivity flag on, cannot override");
+ return -EEXIST;
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 38ec18f73de43a..8874ae6680952a 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -911,8 +911,8 @@ static int pfifo_fast_change_tx_queue_len(struct Qdisc *sch,
+ bands[prio] = q;
+ }
+
+- return skb_array_resize_multiple(bands, PFIFO_FAST_BANDS, new_len,
+- GFP_KERNEL);
++ return skb_array_resize_multiple_bh(bands, PFIFO_FAST_BANDS, new_len,
++ GFP_KERNEL);
+ }
+
+ struct Qdisc_ops pfifo_fast_ops __read_mostly = {
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 3b9245a3c767a6..65d5b59da58303 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -77,12 +77,6 @@
+ #define SFQ_EMPTY_SLOT 0xffff
+ #define SFQ_DEFAULT_HASH_DIVISOR 1024
+
+-/* We use 16 bits to store allot, and want to handle packets up to 64K
+- * Scale allot by 8 (1<<3) so that no overflow occurs.
+- */
+-#define SFQ_ALLOT_SHIFT 3
+-#define SFQ_ALLOT_SIZE(X) DIV_ROUND_UP(X, 1 << SFQ_ALLOT_SHIFT)
+-
+ /* This type should contain at least SFQ_MAX_DEPTH + 1 + SFQ_MAX_FLOWS values */
+ typedef u16 sfq_index;
+
+@@ -104,7 +98,7 @@ struct sfq_slot {
+ sfq_index next; /* next slot in sfq RR chain */
+ struct sfq_head dep; /* anchor in dep[] chains */
+ unsigned short hash; /* hash value (index in ht[]) */
+- short allot; /* credit for this slot */
++ int allot; /* credit for this slot */
+
+ unsigned int backlog;
+ struct red_vars vars;
+@@ -120,7 +114,6 @@ struct sfq_sched_data {
+ siphash_key_t perturbation;
+ u8 cur_depth; /* depth of longest slot */
+ u8 flags;
+- unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */
+ struct tcf_proto __rcu *filter_list;
+ struct tcf_block *block;
+ sfq_index *ht; /* Hash table ('divisor' slots) */
+@@ -456,7 +449,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+ */
+ q->tail = slot;
+ /* We could use a bigger initial quantum for new flows */
+- slot->allot = q->scaled_quantum;
++ slot->allot = q->quantum;
+ }
+ if (++sch->q.qlen <= q->limit)
+ return NET_XMIT_SUCCESS;
+@@ -493,7 +486,7 @@ sfq_dequeue(struct Qdisc *sch)
+ slot = &q->slots[a];
+ if (slot->allot <= 0) {
+ q->tail = slot;
+- slot->allot += q->scaled_quantum;
++ slot->allot += q->quantum;
+ goto next_slot;
+ }
+ skb = slot_dequeue_head(slot);
+@@ -512,7 +505,7 @@ sfq_dequeue(struct Qdisc *sch)
+ }
+ q->tail->next = next_a;
+ } else {
+- slot->allot -= SFQ_ALLOT_SIZE(qdisc_pkt_len(skb));
++ slot->allot -= qdisc_pkt_len(skb);
+ }
+ return skb;
+ }
+@@ -595,7 +588,7 @@ static void sfq_rehash(struct Qdisc *sch)
+ q->tail->next = x;
+ }
+ q->tail = slot;
+- slot->allot = q->scaled_quantum;
++ slot->allot = q->quantum;
+ }
+ }
+ sch->q.qlen -= dropped;
+@@ -628,7 +621,8 @@ static void sfq_perturbation(struct timer_list *t)
+ rcu_read_unlock();
+ }
+
+-static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
++static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
++ struct netlink_ext_ack *extack)
+ {
+ struct sfq_sched_data *q = qdisc_priv(sch);
+ struct tc_sfq_qopt *ctl = nla_data(opt);
+@@ -646,14 +640,10 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ (!is_power_of_2(ctl->divisor) || ctl->divisor > 65536))
+ return -EINVAL;
+
+- /* slot->allot is a short, make sure quantum is not too big. */
+- if (ctl->quantum) {
+- unsigned int scaled = SFQ_ALLOT_SIZE(ctl->quantum);
+-
+- if (scaled <= 0 || scaled > SHRT_MAX)
+- return -EINVAL;
++ if ((int)ctl->quantum < 0) {
++ NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
++ return -EINVAL;
+ }
+-
+ if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+ ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
+ return -EINVAL;
+@@ -662,11 +652,13 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ if (!p)
+ return -ENOMEM;
+ }
++ if (ctl->limit == 1) {
++ NL_SET_ERR_MSG_MOD(extack, "invalid limit");
++ return -EINVAL;
++ }
+ sch_tree_lock(sch);
+- if (ctl->quantum) {
++ if (ctl->quantum)
+ q->quantum = ctl->quantum;
+- q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum);
+- }
+ WRITE_ONCE(q->perturb_period, ctl->perturb_period * HZ);
+ if (ctl->flows)
+ q->maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
+@@ -762,12 +754,11 @@ static int sfq_init(struct Qdisc *sch, struct nlattr *opt,
+ q->divisor = SFQ_DEFAULT_HASH_DIVISOR;
+ q->maxflows = SFQ_DEFAULT_FLOWS;
+ q->quantum = psched_mtu(qdisc_dev(sch));
+- q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum);
+ q->perturb_period = 0;
+ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+
+ if (opt) {
+- int err = sfq_change(sch, opt);
++ int err = sfq_change(sch, opt, extack);
+ if (err)
+ return err;
+ }
+@@ -878,7 +869,7 @@ static int sfq_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ if (idx != SFQ_EMPTY_SLOT) {
+ const struct sfq_slot *slot = &q->slots[idx];
+
+- xstats.allot = slot->allot << SFQ_ALLOT_SHIFT;
++ xstats.allot = slot->allot;
+ qs.qlen = slot->qlen;
+ qs.backlog = slot->backlog;
+ }
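
With allot widened to int and the 1<<3 scaling removed, SFQ is left with plain deficit round robin: a slot receives quantum bytes of credit when it reaches the tail, spends qdisc_pkt_len() per dequeued packet, and yields once the credit is spent. A minimal userspace model of that accounting:

#include <stdio.h>

#define QUANTUM 1514   /* one MTU-sized packet of credit per round */

struct slot { int allot; };

/* Returns 1 if the slot may transmit a packet of pkt_len bytes now. */
static int slot_dequeue(struct slot *s, int pkt_len)
{
        if (s->allot <= 0) {
                s->allot += QUANTUM;   /* recharge; caller moves to next slot */
                return 0;
        }
        s->allot -= pkt_len;           /* spend credit, may go negative */
        return 1;
}

int main(void)
{
        struct slot s = { .allot = QUANTUM };
        int sent = 0;

        while (slot_dequeue(&s, 1000))  /* stream of 1000-byte packets */
                sent++;
        printf("sent %d packet(s) before yielding the round\n", sent);
        return 0;
}
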
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 6cc7b846cff1bb..ebc41a7b13dbec 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2738,7 +2738,7 @@ int smc_accept(struct socket *sock, struct socket *new_sock,
+ release_sock(clcsk);
+ } else if (!atomic_read(&smc_sk(nsk)->conn.bytes_to_rcv)) {
+ lock_sock(nsk);
+- smc_rx_wait(smc_sk(nsk), &timeo, smc_rx_data_available);
++ smc_rx_wait(smc_sk(nsk), &timeo, 0, smc_rx_data_available);
+ release_sock(nsk);
+ }
+ }
+diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
+index f0cbe77a80b440..79047721df5110 100644
+--- a/net/smc/smc_rx.c
++++ b/net/smc/smc_rx.c
+@@ -238,22 +238,23 @@ static int smc_rx_splice(struct pipe_inode_info *pipe, char *src, size_t len,
+ return -ENOMEM;
+ }
+
+-static int smc_rx_data_available_and_no_splice_pend(struct smc_connection *conn)
++static int smc_rx_data_available_and_no_splice_pend(struct smc_connection *conn, size_t peeked)
+ {
+- return atomic_read(&conn->bytes_to_rcv) &&
++ return smc_rx_data_available(conn, peeked) &&
+ !atomic_read(&conn->splice_pending);
+ }
+
+ /* blocks rcvbuf consumer until >=len bytes available or timeout or interrupted
+ * @smc smc socket
+ * @timeo pointer to max seconds to wait, pointer to value 0 for no timeout
++ * @peeked number of bytes already peeked
+ * @fcrit add'l criterion to evaluate as function pointer
+ * Returns:
+ * 1 if at least 1 byte available in rcvbuf or if socket error/shutdown.
+ * 0 otherwise (nothing in rcvbuf nor timeout, e.g. interrupted).
+ */
+-int smc_rx_wait(struct smc_sock *smc, long *timeo,
+- int (*fcrit)(struct smc_connection *conn))
++int smc_rx_wait(struct smc_sock *smc, long *timeo, size_t peeked,
++ int (*fcrit)(struct smc_connection *conn, size_t baseline))
+ {
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ struct smc_connection *conn = &smc->conn;
+@@ -262,7 +263,7 @@ int smc_rx_wait(struct smc_sock *smc, long *timeo,
+ struct sock *sk = &smc->sk;
+ int rc;
+
+- if (fcrit(conn))
++ if (fcrit(conn, peeked))
+ return 1;
+ sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ add_wait_queue(sk_sleep(sk), &wait);
+@@ -271,7 +272,7 @@ int smc_rx_wait(struct smc_sock *smc, long *timeo,
+ cflags->peer_conn_abort ||
+ READ_ONCE(sk->sk_shutdown) & RCV_SHUTDOWN ||
+ conn->killed ||
+- fcrit(conn),
++ fcrit(conn, peeked),
+ &wait);
+ remove_wait_queue(sk_sleep(sk), &wait);
+ sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+@@ -322,11 +323,11 @@ static int smc_rx_recv_urg(struct smc_sock *smc, struct msghdr *msg, int len,
+ return -EAGAIN;
+ }
+
+-static bool smc_rx_recvmsg_data_available(struct smc_sock *smc)
++static bool smc_rx_recvmsg_data_available(struct smc_sock *smc, size_t peeked)
+ {
+ struct smc_connection *conn = &smc->conn;
+
+- if (smc_rx_data_available(conn))
++ if (smc_rx_data_available(conn, peeked))
+ return true;
+ else if (conn->urg_state == SMC_URG_VALID)
+ /* we received a single urgent Byte - skip */
+@@ -344,10 +345,10 @@ static bool smc_rx_recvmsg_data_available(struct smc_sock *smc)
+ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ struct pipe_inode_info *pipe, size_t len, int flags)
+ {
+- size_t copylen, read_done = 0, read_remaining = len;
++ size_t copylen, read_done = 0, read_remaining = len, peeked_bytes = 0;
+ size_t chunk_len, chunk_off, chunk_len_sum;
+ struct smc_connection *conn = &smc->conn;
+- int (*func)(struct smc_connection *conn);
++ int (*func)(struct smc_connection *conn, size_t baseline);
+ union smc_host_cursor cons;
+ int readable, chunk;
+ char *rcvbuf_base;
+@@ -384,14 +385,14 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ if (conn->killed)
+ break;
+
+- if (smc_rx_recvmsg_data_available(smc))
++ if (smc_rx_recvmsg_data_available(smc, peeked_bytes))
+ goto copy;
+
+ if (sk->sk_shutdown & RCV_SHUTDOWN) {
+ /* smc_cdc_msg_recv_action() could have run after
+ * above smc_rx_recvmsg_data_available()
+ */
+- if (smc_rx_recvmsg_data_available(smc))
++ if (smc_rx_recvmsg_data_available(smc, peeked_bytes))
+ goto copy;
+ break;
+ }
+@@ -425,26 +426,28 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ }
+ }
+
+- if (!smc_rx_data_available(conn)) {
+- smc_rx_wait(smc, &timeo, smc_rx_data_available);
++ if (!smc_rx_data_available(conn, peeked_bytes)) {
++ smc_rx_wait(smc, &timeo, peeked_bytes, smc_rx_data_available);
+ continue;
+ }
+
+ copy:
+ /* initialize variables for 1st iteration of subsequent loop */
+ /* could be just 1 byte, even after waiting on data above */
+- readable = atomic_read(&conn->bytes_to_rcv);
++ readable = smc_rx_data_available(conn, peeked_bytes);
+ splbytes = atomic_read(&conn->splice_pending);
+ if (!readable || (msg && splbytes)) {
+ if (splbytes)
+ func = smc_rx_data_available_and_no_splice_pend;
+ else
+ func = smc_rx_data_available;
+- smc_rx_wait(smc, &timeo, func);
++ smc_rx_wait(smc, &timeo, peeked_bytes, func);
+ continue;
+ }
+
+ smc_curs_copy(&cons, &conn->local_tx_ctrl.cons, conn);
++ if ((flags & MSG_PEEK) && peeked_bytes)
++ smc_curs_add(conn->rmb_desc->len, &cons, peeked_bytes);
+ /* subsequent splice() calls pick up where previous left */
+ if (splbytes)
+ smc_curs_add(conn->rmb_desc->len, &cons, splbytes);
+@@ -480,6 +483,8 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ }
+ read_remaining -= chunk_len;
+ read_done += chunk_len;
++ if (flags & MSG_PEEK)
++ peeked_bytes += chunk_len;
+
+ if (chunk_len_sum == copylen)
+ break; /* either on 1st or 2nd iteration */
+diff --git a/net/smc/smc_rx.h b/net/smc/smc_rx.h
+index db823c97d824ea..994f5e42d1ba26 100644
+--- a/net/smc/smc_rx.h
++++ b/net/smc/smc_rx.h
+@@ -21,11 +21,11 @@ void smc_rx_init(struct smc_sock *smc);
+
+ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ struct pipe_inode_info *pipe, size_t len, int flags);
+-int smc_rx_wait(struct smc_sock *smc, long *timeo,
+- int (*fcrit)(struct smc_connection *conn));
+-static inline int smc_rx_data_available(struct smc_connection *conn)
++int smc_rx_wait(struct smc_sock *smc, long *timeo, size_t peeked,
++ int (*fcrit)(struct smc_connection *conn, size_t baseline));
++static inline int smc_rx_data_available(struct smc_connection *conn, size_t peeked)
+ {
+- return atomic_read(&conn->bytes_to_rcv);
++ return atomic_read(&conn->bytes_to_rcv) - peeked;
+ }
+
+ #endif /* SMC_RX_H */
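
The SMC hunks above thread a `peeked` baseline through the receive path so that a MSG_PEEK loop measures availability relative to what it has already peeked and advances the consumer cursor accordingly, instead of re-reading the same bytes. A userspace model of the baseline-adjusted loop (buffer handling is simplified):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct conn {
        char   buf[16];
        size_t bytes_to_rcv;
};

static size_t data_available(const struct conn *c, size_t peeked)
{
        return c->bytes_to_rcv - peeked;   /* baseline-adjusted */
}

int main(void)
{
        struct conn c = { .bytes_to_rcv = 8 };
        size_t peeked = 0;
        char out[8];

        memcpy(c.buf, "abcdefgh", 8);

        /* Peek in 3-byte chunks; the cursor advances by 'peeked' each time. */
        while (data_available(&c, peeked)) {
                size_t chunk = data_available(&c, peeked);

                if (chunk > 3)
                        chunk = 3;
                memcpy(out, c.buf + peeked, chunk);  /* smc_curs_add(...) */
                peeked += chunk;
                printf("peeked %zu byte(s), total %zu\n", chunk, peeked);
        }
        return 0;
}
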
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 59e2c46240f5c1..3bfbb789c4beed 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1083,9 +1083,6 @@ static void svc_tcp_fragment_received(struct svc_sock *svsk)
+ /* If we have more data, signal svc_xprt_enqueue() to try again */
+ svsk->sk_tcplen = 0;
+ svsk->sk_marker = xdr_zero;
+-
+- smp_wmb();
+- tcp_set_rcvlowat(svsk->sk_sk, 1);
+ }
+
+ /**
+@@ -1175,17 +1172,10 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
+ goto err_delete;
+ if (len == want)
+ svc_tcp_fragment_received(svsk);
+- else {
+- /* Avoid more ->sk_data_ready() calls until the rest
+- * of the message has arrived. This reduces service
+- * thread wake-ups on large incoming messages. */
+- tcp_set_rcvlowat(svsk->sk_sk,
+- svc_sock_reclen(svsk) - svsk->sk_tcplen);
+-
++ else
+ trace_svcsock_tcp_recv_short(&svsk->sk_xprt,
+ svc_sock_reclen(svsk),
+ svsk->sk_tcplen - sizeof(rpc_fraghdr));
+- }
+ goto err_noclose;
+ error:
+ if (len != -EAGAIN)
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 15724f171b0f96..f5d116a1bdea1a 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1519,6 +1519,11 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ if (err < 0)
+ goto out;
+
++ /* sk_err might have been set as a result of an earlier
++ * (failed) connect attempt.
++ */
++ sk->sk_err = 0;
++
+ /* Mark sock as connecting and set the error code to in
+ * progress in case this is a non-blocking connect.
+ */
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d0aed41ded2f19..18e132cdea72a8 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -763,12 +763,11 @@ static void cfg80211_scan_req_add_chan(struct cfg80211_scan_request *request,
+ }
+ }
+
++ request->n_channels++;
+ request->channels[n_channels] = chan;
+ if (add_to_6ghz)
+ request->scan_6ghz_params[request->n_6ghz_params].channel_idx =
+ n_channels;
+-
+- request->n_channels++;
+ }
+
+ static bool cfg80211_find_ssid_match(struct cfg80211_colocated_ap *ap,
+@@ -858,9 +857,7 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
+ if (ret)
+ continue;
+
+- entry = kzalloc(sizeof(*entry) + IEEE80211_MAX_SSID_LEN,
+- GFP_ATOMIC);
+-
++ entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+ if (!entry)
+ continue;
+
+diff --git a/net/wireless/tests/scan.c b/net/wireless/tests/scan.c
+index 9f458be7165951..79a99cf5e8922f 100644
+--- a/net/wireless/tests/scan.c
++++ b/net/wireless/tests/scan.c
+@@ -810,6 +810,8 @@ static void test_cfg80211_parse_colocated_ap(struct kunit *test)
+ skb_put_data(input, "123", 3);
+
+ ies = kunit_kzalloc(test, struct_size(ies, data, input->len), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_NULL(test, ies);
++
+ ies->len = input->len;
+ memcpy(ies->data, input->data, input->len);
+
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index 91357ccaf4afe3..5b9ee63e30b69d 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -132,6 +132,7 @@ static const struct nla_policy compat_policy[XFRMA_MAX+1] = {
+ [XFRMA_MTIMER_THRESH] = { .type = NLA_U32 },
+ [XFRMA_SA_DIR] = NLA_POLICY_RANGE(NLA_U8, XFRM_SA_DIR_IN, XFRM_SA_DIR_OUT),
+ [XFRMA_NAT_KEEPALIVE_INTERVAL] = { .type = NLA_U32 },
++ [XFRMA_SA_PCPU] = { .type = NLA_U32 },
+ };
+
+ static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
+@@ -282,9 +283,10 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
+ case XFRMA_MTIMER_THRESH:
+ case XFRMA_SA_DIR:
+ case XFRMA_NAT_KEEPALIVE_INTERVAL:
++ case XFRMA_SA_PCPU:
+ return xfrm_nla_cpy(dst, src, nla_len(src));
+ default:
+- BUILD_BUG_ON(XFRMA_MAX != XFRMA_NAT_KEEPALIVE_INTERVAL);
++ BUILD_BUG_ON(XFRMA_MAX != XFRMA_SA_PCPU);
+ pr_warn_once("unsupported nla_type %d\n", src->nla_type);
+ return -EOPNOTSUPP;
+ }
+@@ -439,7 +441,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ int err;
+
+ if (type > XFRMA_MAX) {
+- BUILD_BUG_ON(XFRMA_MAX != XFRMA_NAT_KEEPALIVE_INTERVAL);
++ BUILD_BUG_ON(XFRMA_MAX != XFRMA_SA_PCPU);
+ NL_SET_ERR_MSG(extack, "Bad attribute");
+ return -EOPNOTSUPP;
+ }
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 749e7eea99e465..841a60a6fbfea3 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -572,7 +572,7 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+ goto drop;
+ }
+
+- x = xfrm_state_lookup(net, mark, daddr, spi, nexthdr, family);
++ x = xfrm_input_state_lookup(net, mark, daddr, spi, nexthdr, family);
+ if (x == NULL) {
+ secpath_reset(skb);
+ XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
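
xfrm_input() above now goes through xfrm_input_state_lookup(), which probes a per-CPU state_cache_input list before the full hash tables and promotes a valid hit into that cache. A userspace model of the probe-then-promote pattern (the one-entry-per-CPU cache is a simplification):

#include <stdio.h>

#define NR_CPUS 2
#define TABLE   8

struct state { unsigned int spi; };

static struct state table[TABLE];        /* stand-in for the hash tables */
static struct state *cache[NR_CPUS];     /* one cached entry per CPU */

static struct state *lookup(unsigned int cpu, unsigned int spi)
{
        struct state *x = cache[cpu];

        if (x && x->spi == spi)           /* fast path: per-CPU cache */
                return x;

        for (int i = 0; i < TABLE; i++) { /* slow path: full lookup */
                if (table[i].spi == spi) {
                        cache[cpu] = &table[i];  /* promote for next time */
                        return &table[i];
                }
        }
        return NULL;
}

int main(void)
{
        table[3].spi = 0x1001;

        struct state *x1 = lookup(0, 0x1001);   /* slow path, fills cache */
        struct state *x2 = lookup(0, 0x1001);   /* served from the cache */

        printf("hit=%d cached_hit=%d\n", x1 != NULL, x2 == x1);
        return 0;
}
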
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index a2ea9dbac90b36..8a1b83191a6cdf 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -434,6 +434,7 @@ struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp)
+ if (policy) {
+ write_pnet(&policy->xp_net, net);
+ INIT_LIST_HEAD(&policy->walk.all);
++ INIT_HLIST_HEAD(&policy->state_cache_list);
+ INIT_HLIST_NODE(&policy->bydst);
+ INIT_HLIST_NODE(&policy->byidx);
+ rwlock_init(&policy->lock);
+@@ -475,6 +476,9 @@ EXPORT_SYMBOL(xfrm_policy_destroy);
+
+ static void xfrm_policy_kill(struct xfrm_policy *policy)
+ {
++ struct net *net = xp_net(policy);
++ struct xfrm_state *x;
++
+ xfrm_dev_policy_delete(policy);
+
+ write_lock_bh(&policy->lock);
+@@ -490,6 +494,13 @@ static void xfrm_policy_kill(struct xfrm_policy *policy)
+ if (del_timer(&policy->timer))
+ xfrm_pol_put(policy);
+
++ /* XXX: Flush state cache */
++ spin_lock_bh(&net->xfrm.xfrm_state_lock);
++ hlist_for_each_entry_rcu(x, &policy->state_cache_list, state_cache) {
++ hlist_del_init_rcu(&x->state_cache);
++ }
++ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++
+ xfrm_pol_put(policy);
+ }
+
+@@ -3275,6 +3286,7 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net,
+ dst_release(dst);
+ dst = dst_orig;
+ }
++
+ ok:
+ xfrm_pols_put(pols, drop_pols);
+ if (dst && dst->xfrm &&
+diff --git a/net/xfrm/xfrm_replay.c b/net/xfrm/xfrm_replay.c
+index bc56c630572527..235bbefc2abae2 100644
+--- a/net/xfrm/xfrm_replay.c
++++ b/net/xfrm/xfrm_replay.c
+@@ -714,10 +714,12 @@ static int xfrm_replay_overflow_offload_esn(struct xfrm_state *x, struct sk_buff
+ oseq += skb_shinfo(skb)->gso_segs;
+ }
+
+- if (unlikely(xo->seq.low < replay_esn->oseq)) {
+- XFRM_SKB_CB(skb)->seq.output.hi = ++oseq_hi;
+- xo->seq.hi = oseq_hi;
+- replay_esn->oseq_hi = oseq_hi;
++ if (unlikely(oseq < replay_esn->oseq)) {
++ replay_esn->oseq_hi = ++oseq_hi;
++ if (xo->seq.low < replay_esn->oseq) {
++ XFRM_SKB_CB(skb)->seq.output.hi = oseq_hi;
++ xo->seq.hi = oseq_hi;
++ }
+ if (replay_esn->oseq_hi == 0) {
+ replay_esn->oseq--;
+ replay_esn->oseq_hi--;
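
The xfrm_replay fix above detects the 32-bit wrap by comparing the post-batch oseq against its pre-batch value, since a GSO skb consumes gso_segs sequence numbers at once; the new high word is stored in the replay state unconditionally but copied into the skb only if the skb's own first sequence number lies past the wrap. A userspace model of that bookkeeping (field names are simplified):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct esn { uint32_t oseq, oseq_hi; };

/* Reserve gso_segs numbers; return the skb's 64-bit starting ESN. */
static uint64_t advance(struct esn *e, uint32_t gso_segs)
{
        uint32_t pre  = e->oseq;
        uint32_t low  = pre + 1;          /* first seq of this skb */
        uint32_t post = pre + gso_segs;   /* last seq; may wrap */
        uint32_t hi   = e->oseq_hi;

        if (post < pre) {                 /* batch crossed the 2^32 boundary */
                e->oseq_hi = hi + 1;      /* state always enters the new epoch */
                if (low < pre)            /* skb itself starts past the wrap */
                        hi = e->oseq_hi;
        }
        e->oseq = post;
        return ((uint64_t)hi << 32) | low;
}

int main(void)
{
        struct esn e = { .oseq = UINT32_MAX - 2, .oseq_hi = 0 };

        /* 5 segments straddle the wrap: skb starts in epoch 0, state moves on. */
        uint64_t seq = advance(&e, 5);

        printf("skb starts at esn %" PRIu64 ", state hi=%u low=%u\n",
               seq, e.oseq_hi, e.oseq);
        return 0;
}
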
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 37478d36a8dff7..711e816fc4041e 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -34,6 +34,8 @@
+
+ #define xfrm_state_deref_prot(table, net) \
+ rcu_dereference_protected((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
++#define xfrm_state_deref_check(table, net) \
++ rcu_dereference_check((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
+
+ static void xfrm_state_gc_task(struct work_struct *work);
+
+@@ -62,6 +64,8 @@ static inline unsigned int xfrm_dst_hash(struct net *net,
+ u32 reqid,
+ unsigned short family)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_dst_hash(daddr, saddr, reqid, family, net->xfrm.state_hmask);
+ }
+
+@@ -70,6 +74,8 @@ static inline unsigned int xfrm_src_hash(struct net *net,
+ const xfrm_address_t *saddr,
+ unsigned short family)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_src_hash(daddr, saddr, family, net->xfrm.state_hmask);
+ }
+
+@@ -77,11 +83,15 @@ static inline unsigned int
+ xfrm_spi_hash(struct net *net, const xfrm_address_t *daddr,
+ __be32 spi, u8 proto, unsigned short family)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_spi_hash(daddr, spi, proto, family, net->xfrm.state_hmask);
+ }
+
+ static unsigned int xfrm_seq_hash(struct net *net, u32 seq)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_seq_hash(seq, net->xfrm.state_hmask);
+ }
+
+@@ -665,6 +675,7 @@ struct xfrm_state *xfrm_state_alloc(struct net *net)
+ refcount_set(&x->refcnt, 1);
+ atomic_set(&x->tunnel_users, 0);
+ INIT_LIST_HEAD(&x->km.all);
++ INIT_HLIST_NODE(&x->state_cache);
+ INIT_HLIST_NODE(&x->bydst);
+ INIT_HLIST_NODE(&x->bysrc);
+ INIT_HLIST_NODE(&x->byspi);
+@@ -679,6 +690,7 @@ struct xfrm_state *xfrm_state_alloc(struct net *net)
+ x->lft.hard_packet_limit = XFRM_INF;
+ x->replay_maxage = 0;
+ x->replay_maxdiff = 0;
++ x->pcpu_num = UINT_MAX;
+ spin_lock_init(&x->lock);
+ }
+ return x;
+@@ -743,12 +755,18 @@ int __xfrm_state_delete(struct xfrm_state *x)
+
+ if (x->km.state != XFRM_STATE_DEAD) {
+ x->km.state = XFRM_STATE_DEAD;
++
+ spin_lock(&net->xfrm.xfrm_state_lock);
+ list_del(&x->km.all);
+ hlist_del_rcu(&x->bydst);
+ hlist_del_rcu(&x->bysrc);
+ if (x->km.seq)
+ hlist_del_rcu(&x->byseq);
++ if (!hlist_unhashed(&x->state_cache))
++ hlist_del_rcu(&x->state_cache);
++ if (!hlist_unhashed(&x->state_cache_input))
++ hlist_del_rcu(&x->state_cache_input);
++
+ if (x->id.spi)
+ hlist_del_rcu(&x->byspi);
+ net->xfrm.state_num--;
+@@ -1033,16 +1051,38 @@ xfrm_init_tempstate(struct xfrm_state *x, const struct flowi *fl,
+ x->props.family = tmpl->encap_family;
+ }
+
+-static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
++struct xfrm_hash_state_ptrs {
++ const struct hlist_head *bydst;
++ const struct hlist_head *bysrc;
++ const struct hlist_head *byspi;
++ unsigned int hmask;
++};
++
++static void xfrm_hash_ptrs_get(const struct net *net, struct xfrm_hash_state_ptrs *ptrs)
++{
++ unsigned int sequence;
++
++ do {
++ sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
++
++ ptrs->bydst = xfrm_state_deref_check(net->xfrm.state_bydst, net);
++ ptrs->bysrc = xfrm_state_deref_check(net->xfrm.state_bysrc, net);
++ ptrs->byspi = xfrm_state_deref_check(net->xfrm.state_byspi, net);
++ ptrs->hmask = net->xfrm.state_hmask;
++ } while (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence));
++}
++
++static struct xfrm_state *__xfrm_state_lookup_all(const struct xfrm_hash_state_ptrs *state_ptrs,
++ u32 mark,
+ const xfrm_address_t *daddr,
+ __be32 spi, u8 proto,
+ unsigned short family,
+ struct xfrm_dev_offload *xdo)
+ {
+- unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
++ unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
+ struct xfrm_state *x;
+
+- hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
++ hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ if (xdo->type == XFRM_DEV_OFFLOAD_PACKET) {
+ if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
+@@ -1076,15 +1116,16 @@ static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
+ return NULL;
+ }
+
+-static struct xfrm_state *__xfrm_state_lookup(struct net *net, u32 mark,
++static struct xfrm_state *__xfrm_state_lookup(const struct xfrm_hash_state_ptrs *state_ptrs,
++ u32 mark,
+ const xfrm_address_t *daddr,
+ __be32 spi, u8 proto,
+ unsigned short family)
+ {
+- unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
++ unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
+ struct xfrm_state *x;
+
+- hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
++ hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
+ if (x->props.family != family ||
+ x->id.spi != spi ||
+ x->id.proto != proto ||
+@@ -1101,15 +1142,63 @@ static struct xfrm_state *__xfrm_state_lookup(struct net *net, u32 mark,
+ return NULL;
+ }
+
+-static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
++struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
++ const xfrm_address_t *daddr,
++ __be32 spi, u8 proto,
++ unsigned short family)
++{
++ struct xfrm_hash_state_ptrs state_ptrs;
++ struct hlist_head *state_cache_input;
++ struct xfrm_state *x = NULL;
++
++ state_cache_input = raw_cpu_ptr(net->xfrm.state_cache_input);
++
++ rcu_read_lock();
++ hlist_for_each_entry_rcu(x, state_cache_input, state_cache_input) {
++ if (x->props.family != family ||
++ x->id.spi != spi ||
++ x->id.proto != proto ||
++ !xfrm_addr_equal(&x->id.daddr, daddr, family))
++ continue;
++
++ if ((mark & x->mark.m) != x->mark.v)
++ continue;
++ if (!xfrm_state_hold_rcu(x))
++ continue;
++ goto out;
++ }
++
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
++
++ if (x && x->km.state == XFRM_STATE_VALID) {
++ spin_lock_bh(&net->xfrm.xfrm_state_lock);
++ if (hlist_unhashed(&x->state_cache_input)) {
++ hlist_add_head_rcu(&x->state_cache_input, state_cache_input);
++ } else {
++ hlist_del_rcu(&x->state_cache_input);
++ hlist_add_head_rcu(&x->state_cache_input, state_cache_input);
++ }
++ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++ }
++
++out:
++ rcu_read_unlock();
++ return x;
++}
++EXPORT_SYMBOL(xfrm_input_state_lookup);
++
++static struct xfrm_state *__xfrm_state_lookup_byaddr(const struct xfrm_hash_state_ptrs *state_ptrs,
++ u32 mark,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+ u8 proto, unsigned short family)
+ {
+- unsigned int h = xfrm_src_hash(net, daddr, saddr, family);
++ unsigned int h = __xfrm_src_hash(daddr, saddr, family, state_ptrs->hmask);
+ struct xfrm_state *x;
+
+- hlist_for_each_entry_rcu(x, net->xfrm.state_bysrc + h, bysrc) {
++ hlist_for_each_entry_rcu(x, state_ptrs->bysrc + h, bysrc) {
+ if (x->props.family != family ||
+ x->id.proto != proto ||
+ !xfrm_addr_equal(&x->id.daddr, daddr, family) ||
+@@ -1129,14 +1218,17 @@ static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+ static inline struct xfrm_state *
+ __xfrm_state_locate(struct xfrm_state *x, int use_spi, int family)
+ {
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct net *net = xs_net(x);
+ u32 mark = x->mark.v & x->mark.m;
+
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
+ if (use_spi)
+- return __xfrm_state_lookup(net, mark, &x->id.daddr,
++ return __xfrm_state_lookup(&state_ptrs, mark, &x->id.daddr,
+ x->id.spi, x->id.proto, family);
+ else
+- return __xfrm_state_lookup_byaddr(net, mark,
++ return __xfrm_state_lookup_byaddr(&state_ptrs, mark,
+ &x->id.daddr,
+ &x->props.saddr,
+ x->id.proto, family);
+@@ -1155,6 +1247,12 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ struct xfrm_state **best, int *acq_in_progress,
+ int *error)
+ {
++ /* We need the cpu id just as a lookup key,
++ * we don't require it to be stable.
++ */
++ unsigned int pcpu_id = get_cpu();
++ put_cpu();
++
+ /* Resolution logic:
+ * 1. There is a valid state with matching selector. Done.
+ * 2. Valid state with inappropriate selector. Skip.
+@@ -1174,13 +1272,18 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ &fl->u.__fl_common))
+ return;
+
++ if (x->pcpu_num != UINT_MAX && x->pcpu_num != pcpu_id)
++ return;
++
+ if (!*best ||
++ ((*best)->pcpu_num == UINT_MAX && x->pcpu_num == pcpu_id) ||
+ (*best)->km.dying > x->km.dying ||
+ ((*best)->km.dying == x->km.dying &&
+ (*best)->curlft.add_time < x->curlft.add_time))
+ *best = x;
+ } else if (x->km.state == XFRM_STATE_ACQ) {
+- *acq_in_progress = 1;
++ if (!*best || x->pcpu_num == pcpu_id)
++ *acq_in_progress = 1;
+ } else if (x->km.state == XFRM_STATE_ERROR ||
+ x->km.state == XFRM_STATE_EXPIRED) {
+ if ((!x->sel.family ||
+@@ -1199,6 +1302,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ unsigned short family, u32 if_id)
+ {
+ static xfrm_address_t saddr_wildcard = { };
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct net *net = xp_net(pol);
+ unsigned int h, h_wildcard;
+ struct xfrm_state *x, *x0, *to_put;
+@@ -1209,14 +1313,64 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ unsigned short encap_family = tmpl->encap_family;
+ unsigned int sequence;
+ struct km_event c;
++ unsigned int pcpu_id;
++ bool cached = false;
++
++	/* We need the cpu id just as a lookup key;
++ * we don't require it to be stable.
++ */
++ pcpu_id = get_cpu();
++ put_cpu();
+
+ to_put = NULL;
+
+ sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
+
+ rcu_read_lock();
+- h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
+- hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {
++ hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) {
++ if (x->props.family == encap_family &&
++ x->props.reqid == tmpl->reqid &&
++ (mark & x->mark.m) == x->mark.v &&
++ x->if_id == if_id &&
++ !(x->props.flags & XFRM_STATE_WILDRECV) &&
++ xfrm_state_addr_check(x, daddr, saddr, encap_family) &&
++ tmpl->mode == x->props.mode &&
++ tmpl->id.proto == x->id.proto &&
++ (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
++ xfrm_state_look_at(pol, x, fl, encap_family,
++ &best, &acquire_in_progress, &error);
++ }
++
++ if (best)
++ goto cached;
++
++ hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) {
++ if (x->props.family == encap_family &&
++ x->props.reqid == tmpl->reqid &&
++ (mark & x->mark.m) == x->mark.v &&
++ x->if_id == if_id &&
++ !(x->props.flags & XFRM_STATE_WILDRECV) &&
++ xfrm_addr_equal(&x->id.daddr, daddr, encap_family) &&
++ tmpl->mode == x->props.mode &&
++ tmpl->id.proto == x->id.proto &&
++ (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
++ xfrm_state_look_at(pol, x, fl, family,
++ &best, &acquire_in_progress, &error);
++ }
++
++cached:
++ cached = true;
++ if (best)
++ goto found;
++ else if (error)
++ best = NULL;
++ else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */
++ WARN_ON(1);
++
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask);
++ hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
+ if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
+@@ -1249,8 +1403,9 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ if (best || acquire_in_progress)
+ goto found;
+
+- h_wildcard = xfrm_dst_hash(net, daddr, &saddr_wildcard, tmpl->reqid, encap_family);
+- hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h_wildcard, bydst) {
++ h_wildcard = __xfrm_dst_hash(daddr, &saddr_wildcard, tmpl->reqid,
++ encap_family, state_ptrs.hmask);
++ hlist_for_each_entry_rcu(x, state_ptrs.bydst + h_wildcard, bydst) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
+ if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
+@@ -1282,10 +1437,13 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ }
+
+ found:
+- x = best;
++ if (!(pol->flags & XFRM_POLICY_CPU_ACQUIRE) ||
++ (best && (best->pcpu_num == pcpu_id)))
++ x = best;
++
+ if (!x && !error && !acquire_in_progress) {
+ if (tmpl->id.spi &&
+- (x0 = __xfrm_state_lookup_all(net, mark, daddr,
++ (x0 = __xfrm_state_lookup_all(&state_ptrs, mark, daddr,
+ tmpl->id.spi, tmpl->id.proto,
+ encap_family,
+ &pol->xdo)) != NULL) {
+@@ -1314,6 +1472,8 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ xfrm_init_tempstate(x, fl, tmpl, daddr, saddr, family);
+ memcpy(&x->mark, &pol->mark, sizeof(x->mark));
+ x->if_id = if_id;
++ if ((pol->flags & XFRM_POLICY_CPU_ACQUIRE) && best)
++ x->pcpu_num = pcpu_id;
+
+ error = security_xfrm_state_alloc_acquire(x, pol->security, fl->flowi_secid);
+ if (error) {
+@@ -1352,6 +1512,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ x->km.state = XFRM_STATE_ACQ;
+ x->dir = XFRM_SA_DIR_OUT;
+ list_add(&x->km.all, &net->xfrm.state_all);
++ h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
+ XFRM_STATE_INSERT(bydst, &x->bydst,
+ net->xfrm.state_bydst + h,
+ x->xso.type);
+@@ -1359,6 +1520,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ XFRM_STATE_INSERT(bysrc, &x->bysrc,
+ net->xfrm.state_bysrc + h,
+ x->xso.type);
++ INIT_HLIST_NODE(&x->state_cache);
+ if (x->id.spi) {
+ h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, encap_family);
+ XFRM_STATE_INSERT(byspi, &x->byspi,
+@@ -1392,6 +1554,11 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ x = NULL;
+ error = -ESRCH;
+ }
++
++ /* Use the already installed 'fallback' while the CPU-specific
++ * SA acquire is handled. */
++ if (best)
++ x = best;
+ }
+ out:
+ if (x) {
+@@ -1402,6 +1569,15 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ } else {
+ *err = acquire_in_progress ? -EAGAIN : error;
+ }
++
++ if (x && x->km.state == XFRM_STATE_VALID && !cached &&
++ (!(pol->flags & XFRM_POLICY_CPU_ACQUIRE) || x->pcpu_num == pcpu_id)) {
++ spin_lock_bh(&net->xfrm.xfrm_state_lock);
++ if (hlist_unhashed(&x->state_cache))
++ hlist_add_head_rcu(&x->state_cache, &pol->state_cache_list);
++ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++ }
++
+ rcu_read_unlock();
+ if (to_put)
+ xfrm_state_put(to_put);
+@@ -1524,12 +1700,14 @@ static void __xfrm_state_bump_genids(struct xfrm_state *xnew)
+ unsigned int h;
+ u32 mark = xnew->mark.v & xnew->mark.m;
+ u32 if_id = xnew->if_id;
++ u32 cpu_id = xnew->pcpu_num;
+
+ h = xfrm_dst_hash(net, &xnew->id.daddr, &xnew->props.saddr, reqid, family);
+ hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) {
+ if (x->props.family == family &&
+ x->props.reqid == reqid &&
+ x->if_id == if_id &&
++ x->pcpu_num == cpu_id &&
+ (mark & x->mark.m) == x->mark.v &&
+ xfrm_addr_equal(&x->id.daddr, &xnew->id.daddr, family) &&
+ xfrm_addr_equal(&x->props.saddr, &xnew->props.saddr, family))
+@@ -1552,7 +1730,7 @@ EXPORT_SYMBOL(xfrm_state_insert);
+ static struct xfrm_state *__find_acq_core(struct net *net,
+ const struct xfrm_mark *m,
+ unsigned short family, u8 mode,
+- u32 reqid, u32 if_id, u8 proto,
++ u32 reqid, u32 if_id, u32 pcpu_num, u8 proto,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+ int create)
+@@ -1569,6 +1747,7 @@ static struct xfrm_state *__find_acq_core(struct net *net,
+ x->id.spi != 0 ||
+ x->id.proto != proto ||
+ (mark & x->mark.m) != x->mark.v ||
++ x->pcpu_num != pcpu_num ||
+ !xfrm_addr_equal(&x->id.daddr, daddr, family) ||
+ !xfrm_addr_equal(&x->props.saddr, saddr, family))
+ continue;
+@@ -1602,6 +1781,7 @@ static struct xfrm_state *__find_acq_core(struct net *net,
+ break;
+ }
+
++ x->pcpu_num = pcpu_num;
+ x->km.state = XFRM_STATE_ACQ;
+ x->id.proto = proto;
+ x->props.family = family;
+@@ -1630,7 +1810,7 @@ static struct xfrm_state *__find_acq_core(struct net *net,
+ return x;
+ }
+
+-static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
++static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num);
+
+ int xfrm_state_add(struct xfrm_state *x)
+ {
+@@ -1656,7 +1836,7 @@ int xfrm_state_add(struct xfrm_state *x)
+ }
+
+ if (use_spi && x->km.seq) {
+- x1 = __xfrm_find_acq_byseq(net, mark, x->km.seq);
++ x1 = __xfrm_find_acq_byseq(net, mark, x->km.seq, x->pcpu_num);
+ if (x1 && ((x1->id.proto != x->id.proto) ||
+ !xfrm_addr_equal(&x1->id.daddr, &x->id.daddr, family))) {
+ to_put = x1;
+@@ -1666,7 +1846,7 @@ int xfrm_state_add(struct xfrm_state *x)
+
+ if (use_spi && !x1)
+ x1 = __find_acq_core(net, &x->mark, family, x->props.mode,
+- x->props.reqid, x->if_id, x->id.proto,
++ x->props.reqid, x->if_id, x->pcpu_num, x->id.proto,
+ &x->id.daddr, &x->props.saddr, 0);
+
+ __xfrm_state_bump_genids(x);
+@@ -1791,6 +1971,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ x->props.flags = orig->props.flags;
+ x->props.extra_flags = orig->props.extra_flags;
+
++ x->pcpu_num = orig->pcpu_num;
+ x->if_id = orig->if_id;
+ x->tfcpad = orig->tfcpad;
+ x->replay_maxdiff = orig->replay_maxdiff;
+@@ -2041,10 +2222,13 @@ struct xfrm_state *
+ xfrm_state_lookup(struct net *net, u32 mark, const xfrm_address_t *daddr, __be32 spi,
+ u8 proto, unsigned short family)
+ {
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct xfrm_state *x;
+
+ rcu_read_lock();
+- x = __xfrm_state_lookup(net, mark, daddr, spi, proto, family);
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
+ rcu_read_unlock();
+ return x;
+ }
+@@ -2055,10 +2239,14 @@ xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+ const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ u8 proto, unsigned short family)
+ {
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct xfrm_state *x;
+
+ spin_lock_bh(&net->xfrm.xfrm_state_lock);
+- x = __xfrm_state_lookup_byaddr(net, mark, daddr, saddr, proto, family);
++
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ x = __xfrm_state_lookup_byaddr(&state_ptrs, mark, daddr, saddr, proto, family);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ return x;
+ }
+@@ -2066,13 +2254,14 @@ EXPORT_SYMBOL(xfrm_state_lookup_byaddr);
+
+ struct xfrm_state *
+ xfrm_find_acq(struct net *net, const struct xfrm_mark *mark, u8 mode, u32 reqid,
+- u32 if_id, u8 proto, const xfrm_address_t *daddr,
++ u32 if_id, u32 pcpu_num, u8 proto, const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr, int create, unsigned short family)
+ {
+ struct xfrm_state *x;
+
+ spin_lock_bh(&net->xfrm.xfrm_state_lock);
+- x = __find_acq_core(net, mark, family, mode, reqid, if_id, proto, daddr, saddr, create);
++ x = __find_acq_core(net, mark, family, mode, reqid, if_id, pcpu_num,
++ proto, daddr, saddr, create);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+
+ return x;
+@@ -2207,7 +2396,7 @@ xfrm_state_sort(struct xfrm_state **dst, struct xfrm_state **src, int n,
+
+ /* Silly enough, but I'm lazy to build resolution list */
+
+-static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq)
++static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num)
+ {
+ unsigned int h = xfrm_seq_hash(net, seq);
+ struct xfrm_state *x;
+@@ -2215,6 +2404,7 @@ static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 s
+ hlist_for_each_entry_rcu(x, net->xfrm.state_byseq + h, byseq) {
+ if (x->km.seq == seq &&
+ (mark & x->mark.m) == x->mark.v &&
++ x->pcpu_num == pcpu_num &&
+ x->km.state == XFRM_STATE_ACQ) {
+ xfrm_state_hold(x);
+ return x;
+@@ -2224,12 +2414,12 @@ static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 s
+ return NULL;
+ }
+
+-struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq)
++struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num)
+ {
+ struct xfrm_state *x;
+
+ spin_lock_bh(&net->xfrm.xfrm_state_lock);
+- x = __xfrm_find_acq_byseq(net, mark, seq);
++ x = __xfrm_find_acq_byseq(net, mark, seq, pcpu_num);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ return x;
+ }
+@@ -2988,6 +3178,11 @@ int __net_init xfrm_state_init(struct net *net)
+ net->xfrm.state_byseq = xfrm_hash_alloc(sz);
+ if (!net->xfrm.state_byseq)
+ goto out_byseq;
++
++ net->xfrm.state_cache_input = alloc_percpu(struct hlist_head);
++ if (!net->xfrm.state_cache_input)
++ goto out_state_cache_input;
++
+ net->xfrm.state_hmask = ((sz / sizeof(struct hlist_head)) - 1);
+
+ net->xfrm.state_num = 0;
+@@ -2997,6 +3192,8 @@ int __net_init xfrm_state_init(struct net *net)
+ &net->xfrm.xfrm_state_lock);
+ return 0;
+
++out_state_cache_input:
++ xfrm_hash_free(net->xfrm.state_byseq, sz);
+ out_byseq:
+ xfrm_hash_free(net->xfrm.state_byspi, sz);
+ out_byspi:
+@@ -3026,6 +3223,7 @@ void xfrm_state_fini(struct net *net)
+ xfrm_hash_free(net->xfrm.state_bysrc, sz);
+ WARN_ON(!hlist_empty(net->xfrm.state_bydst));
+ xfrm_hash_free(net->xfrm.state_bydst, sz);
++ free_percpu(net->xfrm.state_cache_input);
+ }
+
+ #ifdef CONFIG_AUDITSYSCALL
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index e3b8ce89831abf..87013623773a2b 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -460,6 +460,12 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ }
+ }
+
++ if (!sa_dir && attrs[XFRMA_SA_PCPU]) {
++ NL_SET_ERR_MSG(extack, "SA_PCPU only supported with SA_DIR");
++ err = -EINVAL;
++ goto out;
++ }
++
+ out:
+ return err;
+ }
+@@ -841,6 +847,12 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ x->nat_keepalive_interval =
+ nla_get_u32(attrs[XFRMA_NAT_KEEPALIVE_INTERVAL]);
+
++ if (attrs[XFRMA_SA_PCPU]) {
++ x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);
++ if (x->pcpu_num >= num_possible_cpus())
++ goto error;
++ }
++
+ err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV], extack);
+ if (err)
+ goto error;
+@@ -1296,6 +1308,11 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
+ if (ret)
+ goto out;
+ }
++ if (x->pcpu_num != UINT_MAX) {
++ ret = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
++ if (ret)
++ goto out;
++ }
+ if (x->dir)
+ ret = nla_put_u8(skb, XFRMA_SA_DIR, x->dir);
+
+@@ -1700,6 +1717,7 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ u32 mark;
+ struct xfrm_mark m;
+ u32 if_id = 0;
++ u32 pcpu_num = UINT_MAX;
+
+ p = nlmsg_data(nlh);
+ err = verify_spi_info(p->info.id.proto, p->min, p->max, extack);
+@@ -1716,8 +1734,16 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (attrs[XFRMA_IF_ID])
+ if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+
++ if (attrs[XFRMA_SA_PCPU]) {
++ pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);
++ if (pcpu_num >= num_possible_cpus()) {
++ err = -EINVAL;
++ goto out_noput;
++ }
++ }
++
+ if (p->info.seq) {
+- x = xfrm_find_acq_byseq(net, mark, p->info.seq);
++ x = xfrm_find_acq_byseq(net, mark, p->info.seq, pcpu_num);
+ if (x && !xfrm_addr_equal(&x->id.daddr, daddr, family)) {
+ xfrm_state_put(x);
+ x = NULL;
+@@ -1726,7 +1752,7 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+
+ if (!x)
+ x = xfrm_find_acq(net, &m, p->info.mode, p->info.reqid,
+- if_id, p->info.id.proto, daddr,
++ if_id, pcpu_num, p->info.id.proto, daddr,
+ &p->info.saddr, 1,
+ family);
+ err = -ENOENT;
+@@ -2526,7 +2552,8 @@ static inline unsigned int xfrm_aevent_msgsize(struct xfrm_state *x)
+ + nla_total_size(sizeof(struct xfrm_mark))
+ + nla_total_size(4) /* XFRM_AE_RTHR */
+ + nla_total_size(4) /* XFRM_AE_ETHR */
+- + nla_total_size(sizeof(x->dir)); /* XFRMA_SA_DIR */
++ + nla_total_size(sizeof(x->dir)) /* XFRMA_SA_DIR */
++ + nla_total_size(4); /* XFRMA_SA_PCPU */
+ }
+
+ static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct km_event *c)
+@@ -2582,6 +2609,11 @@ static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct
+ err = xfrm_if_id_put(skb, x->if_id);
+ if (err)
+ goto out_cancel;
++ if (x->pcpu_num != UINT_MAX) {
++ err = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
++ if (err)
++ goto out_cancel;
++ }
+
+ if (x->dir) {
+ err = nla_put_u8(skb, XFRMA_SA_DIR, x->dir);
+@@ -2852,6 +2884,13 @@ static int xfrm_add_acquire(struct sk_buff *skb, struct nlmsghdr *nlh,
+
+ xfrm_mark_get(attrs, &mark);
+
++ if (attrs[XFRMA_SA_PCPU]) {
++ x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);
++ err = -EINVAL;
++ if (x->pcpu_num >= num_possible_cpus())
++ goto free_state;
++ }
++
+ err = verify_newpolicy_info(&ua->policy, extack);
+ if (err)
+ goto free_state;
+@@ -3182,6 +3221,7 @@ const struct nla_policy xfrma_policy[XFRMA_MAX+1] = {
+ [XFRMA_MTIMER_THRESH] = { .type = NLA_U32 },
+ [XFRMA_SA_DIR] = NLA_POLICY_RANGE(NLA_U8, XFRM_SA_DIR_IN, XFRM_SA_DIR_OUT),
+ [XFRMA_NAT_KEEPALIVE_INTERVAL] = { .type = NLA_U32 },
++ [XFRMA_SA_PCPU] = { .type = NLA_U32 },
+ };
+ EXPORT_SYMBOL_GPL(xfrma_policy);
+
+@@ -3348,7 +3388,8 @@ static inline unsigned int xfrm_expire_msgsize(void)
+ {
+ return NLMSG_ALIGN(sizeof(struct xfrm_user_expire)) +
+ nla_total_size(sizeof(struct xfrm_mark)) +
+- nla_total_size(sizeof_field(struct xfrm_state, dir));
++ nla_total_size(sizeof_field(struct xfrm_state, dir)) +
++ nla_total_size(4); /* XFRMA_SA_PCPU */
+ }
+
+ static int build_expire(struct sk_buff *skb, struct xfrm_state *x, const struct km_event *c)
+@@ -3374,6 +3415,11 @@ static int build_expire(struct sk_buff *skb, struct xfrm_state *x, const struct
+ err = xfrm_if_id_put(skb, x->if_id);
+ if (err)
+ return err;
++ if (x->pcpu_num != UINT_MAX) {
++ err = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
++ if (err)
++ return err;
++ }
+
+ if (x->dir) {
+ err = nla_put_u8(skb, XFRMA_SA_DIR, x->dir);
+@@ -3481,6 +3527,8 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
+ }
+ if (x->if_id)
+ l += nla_total_size(sizeof(x->if_id));
++ if (x->pcpu_num)
++ l += nla_total_size(sizeof(x->pcpu_num));
+
+ /* Must count x->lastused as it may become non-zero behind our back. */
+ l += nla_total_size_64bit(sizeof(u64));
+@@ -3587,6 +3635,7 @@ static inline unsigned int xfrm_acquire_msgsize(struct xfrm_state *x,
+ + nla_total_size(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr)
+ + nla_total_size(sizeof(struct xfrm_mark))
+ + nla_total_size(xfrm_user_sec_ctx_size(x->security))
++ + nla_total_size(4) /* XFRMA_SA_PCPU */
+ + userpolicy_type_attrsize();
+ }
+
+@@ -3623,6 +3672,8 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x,
+ err = xfrm_if_id_put(skb, xp->if_id);
+ if (!err && xp->xdo.dev)
+ err = copy_user_offload(&xp->xdo, skb);
++ if (!err && x->pcpu_num != UINT_MAX)
++ err = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
+ if (err) {
+ nlmsg_cancel(skb, nlh);
+ return err;
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index 57565dfd74a260..07fab2ef534e8d 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -91,6 +91,9 @@ static int parse_path(char *env_path, const char ***const path_list)
+ }
+ }
+ *path_list = malloc(num_paths * sizeof(**path_list));
++ if (!*path_list)
++ return -1;
++
+ for (i = 0; i < num_paths; i++)
+ (*path_list)[i] = strsep(&env_path, ENV_DELIMITER);
+
+@@ -127,6 +130,10 @@ static int populate_ruleset_fs(const char *const env_var, const int ruleset_fd,
+ env_path_name = strdup(env_path_name);
+ unsetenv(env_var);
+ num_paths = parse_path(env_path_name, &path_list);
++ if (num_paths < 0) {
++ fprintf(stderr, "Failed to allocate memory\n");
++ goto out_free_name;
++ }
+ if (num_paths == 1 && path_list[0][0] == '\0') {
+ /*
+ * Allows to not use all possible restrictions (e.g. use
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 01a9f567d5af48..fe5e132fcea89a 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -371,10 +371,10 @@ quiet_cmd_lzo_with_size = LZO $@
+ cmd_lzo_with_size = { cat $(real-prereqs) | $(KLZOP) -9; $(size_append); } > $@
+
+ quiet_cmd_lz4 = LZ4 $@
+- cmd_lz4 = cat $(real-prereqs) | $(LZ4) -l -c1 stdin stdout > $@
++ cmd_lz4 = cat $(real-prereqs) | $(LZ4) -l -9 - - > $@
+
+ quiet_cmd_lz4_with_size = LZ4 $@
+- cmd_lz4_with_size = { cat $(real-prereqs) | $(LZ4) -l -c1 stdin stdout; \
++ cmd_lz4_with_size = { cat $(real-prereqs) | $(LZ4) -l -9 - -; \
+ $(size_append); } > $@
+
+ # U-Boot mkimage
+diff --git a/scripts/genksyms/genksyms.c b/scripts/genksyms/genksyms.c
+index f3901c55df239d..bbc6b7d3088c15 100644
+--- a/scripts/genksyms/genksyms.c
++++ b/scripts/genksyms/genksyms.c
+@@ -239,6 +239,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ "unchanged\n");
+ }
+ sym->is_declared = 1;
++ free_list(defn, NULL);
+ return sym;
+ } else if (!sym->is_declared) {
+ if (sym->is_override && flag_preserve) {
+@@ -247,6 +248,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ print_type_name(type, name);
+ fprintf(stderr, " modversion change\n");
+ sym->is_declared = 1;
++ free_list(defn, NULL);
+ return sym;
+ } else {
+ status = is_unknown_symbol(sym) ?
+@@ -254,6 +256,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ }
+ } else {
+ error_with_pos("redefinition of %s", name);
++ free_list(defn, NULL);
+ return sym;
+ }
+ break;
+@@ -269,11 +272,15 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ break;
+ }
+ }
++
++ free_list(sym->defn, NULL);
++ free(sym->name);
++ free(sym);
+ --nsyms;
+ }
+
+ sym = xmalloc(sizeof(*sym));
+- sym->name = name;
++ sym->name = xstrdup(name);
+ sym->type = type;
+ sym->defn = defn;
+ sym->expansion_trail = NULL;
+@@ -480,7 +487,7 @@ static void read_reference(FILE *f)
+ defn = def;
+ def = read_node(f);
+ }
+- subsym = add_reference_symbol(xstrdup(sym->string), sym->tag,
++ subsym = add_reference_symbol(sym->string, sym->tag,
+ defn, is_extern);
+ subsym->is_override = is_override;
+ free_node(sym);
+diff --git a/scripts/genksyms/genksyms.h b/scripts/genksyms/genksyms.h
+index 21ed2ec2d98ca8..5621533dcb8e43 100644
+--- a/scripts/genksyms/genksyms.h
++++ b/scripts/genksyms/genksyms.h
+@@ -32,7 +32,7 @@ struct string_list {
+
+ struct symbol {
+ struct symbol *hash_next;
+- const char *name;
++ char *name;
+ enum symbol_type type;
+ struct string_list *defn;
+ struct symbol *expansion_trail;
+diff --git a/scripts/genksyms/parse.y b/scripts/genksyms/parse.y
+index 8e9b5e69e8f01d..689cb6bb40b657 100644
+--- a/scripts/genksyms/parse.y
++++ b/scripts/genksyms/parse.y
+@@ -152,14 +152,19 @@ simple_declaration:
+ ;
+
+ init_declarator_list_opt:
+- /* empty */ { $$ = NULL; }
+- | init_declarator_list
++ /* empty */ { $$ = NULL; }
++ | init_declarator_list { free_list(decl_spec, NULL); $$ = $1; }
+ ;
+
+ init_declarator_list:
+ init_declarator
+ { struct string_list *decl = *$1;
+ *$1 = NULL;
++
++ /* avoid sharing among multiple init_declarators */
++ if (decl_spec)
++ decl_spec = copy_list_range(decl_spec, NULL);
++
+ add_symbol(current_name,
+ is_typedef ? SYM_TYPEDEF : SYM_NORMAL, decl, is_extern);
+ current_name = NULL;
+@@ -170,6 +175,11 @@ init_declarator_list:
+ *$3 = NULL;
+ free_list(*$2, NULL);
+ *$2 = decl_spec;
++
++ /* avoid sharing among multiple init_declarators */
++ if (decl_spec)
++ decl_spec = copy_list_range(decl_spec, NULL);
++
+ add_symbol(current_name,
+ is_typedef ? SYM_TYPEDEF : SYM_NORMAL, decl, is_extern);
+ current_name = NULL;
+@@ -472,12 +482,12 @@ enumerator_list:
+ enumerator:
+ IDENT
+ {
+- const char *name = strdup((*$1)->string);
++ const char *name = (*$1)->string;
+ add_symbol(name, SYM_ENUM_CONST, NULL, 0);
+ }
+ | IDENT '=' EXPRESSION_PHRASE
+ {
+- const char *name = strdup((*$1)->string);
++ const char *name = (*$1)->string;
+ struct string_list *expr = copy_list_range(*$3, *$2);
+ add_symbol(name, SYM_ENUM_CONST, expr, 0);
+ }
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index 4286d5e7f95dc1..3b55e7a4131d9a 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -360,10 +360,12 @@ int conf_read_simple(const char *name, int def)
+
+ *p = '\0';
+
+- in = zconf_fopen(env);
++ name = env;
++
++ in = zconf_fopen(name);
+ if (in) {
+ conf_message("using defaults found in %s",
+- env);
++ name);
+ goto load;
+ }
+
+diff --git a/scripts/kconfig/symbol.c b/scripts/kconfig/symbol.c
+index a3af93aaaf32af..453721e66c4ebc 100644
+--- a/scripts/kconfig/symbol.c
++++ b/scripts/kconfig/symbol.c
+@@ -376,6 +376,7 @@ static void sym_warn_unmet_dep(const struct symbol *sym)
+ " Selected by [m]:\n");
+
+ fputs(str_get(&gs), stderr);
++ str_free(&gs);
+ sym_warnings++;
+ }
+
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index e31b97a9f175aa..7adb25150488fc 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -937,10 +937,6 @@ static access_mask_t get_mode_access(const umode_t mode)
+ switch (mode & S_IFMT) {
+ case S_IFLNK:
+ return LANDLOCK_ACCESS_FS_MAKE_SYM;
+- case 0:
+- /* A zero mode translates to S_IFREG. */
+- case S_IFREG:
+- return LANDLOCK_ACCESS_FS_MAKE_REG;
+ case S_IFDIR:
+ return LANDLOCK_ACCESS_FS_MAKE_DIR;
+ case S_IFCHR:
+@@ -951,9 +947,12 @@ static access_mask_t get_mode_access(const umode_t mode)
+ return LANDLOCK_ACCESS_FS_MAKE_FIFO;
+ case S_IFSOCK:
+ return LANDLOCK_ACCESS_FS_MAKE_SOCK;
++ case S_IFREG:
++ case 0:
++ /* A zero mode translates to S_IFREG. */
+ default:
+- WARN_ON_ONCE(1);
+- return 0;
++ /* Treats weird files as regular files. */
++ return LANDLOCK_ACCESS_FS_MAKE_REG;
+ }
+ }
+
+diff --git a/sound/core/seq/Kconfig b/sound/core/seq/Kconfig
+index 0374bbf51cd4d3..e4f58cb985d47c 100644
+--- a/sound/core/seq/Kconfig
++++ b/sound/core/seq/Kconfig
+@@ -62,7 +62,7 @@ config SND_SEQ_VIRMIDI
+
+ config SND_SEQ_UMP
+ bool "Support for UMP events"
+- default y if SND_SEQ_UMP_CLIENT
++ default SND_UMP
+ help
+ Say Y here to enable the support for handling UMP (Universal MIDI
+ Packet) events via ALSA sequencer infrastructure, which is an
+@@ -71,6 +71,6 @@ config SND_SEQ_UMP
+ among legacy and UMP clients.
+
+ config SND_SEQ_UMP_CLIENT
+- def_tristate SND_UMP
++ def_tristate SND_UMP && SND_SEQ_UMP
+
+ endif # SND_SEQUENCER
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8c4de5a253addf..5d99a4ea176a15 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10143,6 +10143,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1360, "Acer Aspire A115", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x141f, "Acer Spin SP513-54N", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+diff --git a/sound/soc/amd/acp/acp-i2s.c b/sound/soc/amd/acp/acp-i2s.c
+index 56ce9e4b6accc7..92c5ff0deea2cd 100644
+--- a/sound/soc/amd/acp/acp-i2s.c
++++ b/sound/soc/amd/acp/acp-i2s.c
+@@ -181,6 +181,7 @@ static int acp_i2s_set_tdm_slot(struct snd_soc_dai *dai, u32 tx_mask, u32 rx_mas
+ break;
+ default:
+ dev_err(dev, "Unknown chip revision %d\n", chip->acp_rev);
++ spin_unlock_irq(&adata->acp_lock);
+ return -EINVAL;
+ }
+ }
+diff --git a/sound/soc/codecs/Makefile b/sound/soc/codecs/Makefile
+index 54cbc3feae3277..69cb0b39f22007 100644
+--- a/sound/soc/codecs/Makefile
++++ b/sound/soc/codecs/Makefile
+@@ -79,7 +79,7 @@ snd-soc-cs35l56-shared-y := cs35l56-shared.o
+ snd-soc-cs35l56-i2c-y := cs35l56-i2c.o
+ snd-soc-cs35l56-spi-y := cs35l56-spi.o
+ snd-soc-cs35l56-sdw-y := cs35l56-sdw.o
+-snd-soc-cs40l50-objs := cs40l50-codec.o
++snd-soc-cs40l50-y := cs40l50-codec.o
+ snd-soc-cs42l42-y := cs42l42.o
+ snd-soc-cs42l42-i2c-y := cs42l42-i2c.o
+ snd-soc-cs42l42-sdw-y := cs42l42-sdw.o
+@@ -324,8 +324,8 @@ snd-soc-wcd-classh-y := wcd-clsh-v2.o
+ snd-soc-wcd-mbhc-y := wcd-mbhc-v2.o
+ snd-soc-wcd9335-y := wcd9335.o
+ snd-soc-wcd934x-y := wcd934x.o
+-snd-soc-wcd937x-objs := wcd937x.o
+-snd-soc-wcd937x-sdw-objs := wcd937x-sdw.o
++snd-soc-wcd937x-y := wcd937x.o
++snd-soc-wcd937x-sdw-y := wcd937x-sdw.o
+ snd-soc-wcd938x-y := wcd938x.o
+ snd-soc-wcd938x-sdw-y := wcd938x-sdw.o
+ snd-soc-wcd939x-y := wcd939x.o
+diff --git a/sound/soc/codecs/da7213.c b/sound/soc/codecs/da7213.c
+index 486db60bf2dd14..f17f02d01f8c0f 100644
+--- a/sound/soc/codecs/da7213.c
++++ b/sound/soc/codecs/da7213.c
+@@ -2191,6 +2191,8 @@ static int da7213_i2c_probe(struct i2c_client *i2c)
+ return ret;
+ }
+
++ mutex_init(&da7213->ctrl_lock);
++
+ pm_runtime_set_autosuspend_delay(&i2c->dev, 100);
+ pm_runtime_use_autosuspend(&i2c->dev);
+ pm_runtime_set_active(&i2c->dev);
+diff --git a/sound/soc/intel/avs/apl.c b/sound/soc/intel/avs/apl.c
+index 27516ef5718591..3dccf0a57a3a11 100644
+--- a/sound/soc/intel/avs/apl.c
++++ b/sound/soc/intel/avs/apl.c
+@@ -12,6 +12,7 @@
+ #include "avs.h"
+ #include "messages.h"
+ #include "path.h"
++#include "registers.h"
+ #include "topology.h"
+
+ static irqreturn_t avs_apl_dsp_interrupt(struct avs_dev *adev)
+@@ -125,7 +126,7 @@ int avs_apl_coredump(struct avs_dev *adev, union avs_notify_msg *msg)
+ struct avs_apl_log_buffer_layout layout;
+ void __iomem *addr, *buf;
+ size_t dump_size;
+- u16 offset = 0;
++ u32 offset = 0;
+ u8 *dump, *pos;
+
+ dump_size = AVS_FW_REGS_SIZE + msg->ext.coredump.stack_dump_size;
+diff --git a/sound/soc/intel/avs/cnl.c b/sound/soc/intel/avs/cnl.c
+index bd3c4bb8bf5a17..03f8fb0dc187f5 100644
+--- a/sound/soc/intel/avs/cnl.c
++++ b/sound/soc/intel/avs/cnl.c
+@@ -9,6 +9,7 @@
+ #include <sound/hdaudio_ext.h>
+ #include "avs.h"
+ #include "messages.h"
++#include "registers.h"
+
+ static void avs_cnl_ipc_interrupt(struct avs_dev *adev)
+ {
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 73d4bde9b2f788..82839d0994ee3e 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -829,10 +829,10 @@ static const struct avs_spec jsl_desc = {
+ .hipc = &cnl_hipc_spec,
+ };
+
+-#define AVS_TGL_BASED_SPEC(sname) \
++#define AVS_TGL_BASED_SPEC(sname, min) \
+ static const struct avs_spec sname##_desc = { \
+ .name = #sname, \
+- .min_fw_version = { 10, 29, 0, 5646 }, \
++ .min_fw_version = { 10, min, 0, 5646 }, \
+ .dsp_ops = &avs_tgl_dsp_ops, \
+ .core_init_mask = 1, \
+ .attributes = AVS_PLATATTR_IMR, \
+@@ -840,11 +840,11 @@ static const struct avs_spec sname##_desc = { \
+ .hipc = &cnl_hipc_spec, \
+ }
+
+-AVS_TGL_BASED_SPEC(lkf);
+-AVS_TGL_BASED_SPEC(tgl);
+-AVS_TGL_BASED_SPEC(ehl);
+-AVS_TGL_BASED_SPEC(adl);
+-AVS_TGL_BASED_SPEC(adl_n);
++AVS_TGL_BASED_SPEC(lkf, 28);
++AVS_TGL_BASED_SPEC(tgl, 29);
++AVS_TGL_BASED_SPEC(ehl, 30);
++AVS_TGL_BASED_SPEC(adl, 35);
++AVS_TGL_BASED_SPEC(adl_n, 35);
+
+ static const struct pci_device_id avs_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, HDA_SKL_LP, &skl_desc) },
+diff --git a/sound/soc/intel/avs/loader.c b/sound/soc/intel/avs/loader.c
+index 890efd2f1feabe..37de077a998386 100644
+--- a/sound/soc/intel/avs/loader.c
++++ b/sound/soc/intel/avs/loader.c
+@@ -308,7 +308,7 @@ avs_hda_init_rom(struct avs_dev *adev, unsigned int dma_id, bool purge)
+ }
+
+ /* await ROM init */
+- ret = snd_hdac_adsp_readq_poll(adev, spec->sram->rom_status_offset, reg,
++ ret = snd_hdac_adsp_readl_poll(adev, spec->sram->rom_status_offset, reg,
+ (reg & 0xF) == AVS_ROM_INIT_DONE ||
+ (reg & 0xF) == APL_ROM_FW_ENTERED,
+ AVS_ROM_INIT_POLLING_US, APL_ROM_INIT_TIMEOUT_US);
+diff --git a/sound/soc/intel/avs/registers.h b/sound/soc/intel/avs/registers.h
+index f76e91cff2a9a6..5b6d60eb3c18bd 100644
+--- a/sound/soc/intel/avs/registers.h
++++ b/sound/soc/intel/avs/registers.h
+@@ -9,6 +9,8 @@
+ #ifndef __SOUND_SOC_INTEL_AVS_REGS_H
+ #define __SOUND_SOC_INTEL_AVS_REGS_H
+
++#include <linux/io-64-nonatomic-lo-hi.h>
++#include <linux/iopoll.h>
+ #include <linux/sizes.h>
+
+ #define AZX_PCIREG_PGCTL 0x44
+@@ -98,4 +100,47 @@
+ #define avs_downlink_addr(adev) \
+ avs_sram_addr(adev, AVS_DOWNLINK_WINDOW)
+
++#define snd_hdac_adsp_writeb(adev, reg, value) \
++ snd_hdac_reg_writeb(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readb(adev, reg) \
++ snd_hdac_reg_readb(&(adev)->base.core, (adev)->dsp_ba + (reg))
++#define snd_hdac_adsp_writew(adev, reg, value) \
++ snd_hdac_reg_writew(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readw(adev, reg) \
++ snd_hdac_reg_readw(&(adev)->base.core, (adev)->dsp_ba + (reg))
++#define snd_hdac_adsp_writel(adev, reg, value) \
++ snd_hdac_reg_writel(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readl(adev, reg) \
++ snd_hdac_reg_readl(&(adev)->base.core, (adev)->dsp_ba + (reg))
++#define snd_hdac_adsp_writeq(adev, reg, value) \
++ snd_hdac_reg_writeq(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readq(adev, reg) \
++ snd_hdac_reg_readq(&(adev)->base.core, (adev)->dsp_ba + (reg))
++
++#define snd_hdac_adsp_updateb(adev, reg, mask, val) \
++ snd_hdac_adsp_writeb(adev, reg, \
++ (snd_hdac_adsp_readb(adev, reg) & ~(mask)) | (val))
++#define snd_hdac_adsp_updatew(adev, reg, mask, val) \
++ snd_hdac_adsp_writew(adev, reg, \
++ (snd_hdac_adsp_readw(adev, reg) & ~(mask)) | (val))
++#define snd_hdac_adsp_updatel(adev, reg, mask, val) \
++ snd_hdac_adsp_writel(adev, reg, \
++ (snd_hdac_adsp_readl(adev, reg) & ~(mask)) | (val))
++#define snd_hdac_adsp_updateq(adev, reg, mask, val) \
++ snd_hdac_adsp_writeq(adev, reg, \
++ (snd_hdac_adsp_readq(adev, reg) & ~(mask)) | (val))
++
++#define snd_hdac_adsp_readb_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readb_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++#define snd_hdac_adsp_readw_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readw_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++#define snd_hdac_adsp_readl_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readl_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++#define snd_hdac_adsp_readq_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readq_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++
+ #endif /* __SOUND_SOC_INTEL_AVS_REGS_H */
+diff --git a/sound/soc/intel/avs/skl.c b/sound/soc/intel/avs/skl.c
+index 34f859d6e5a49a..d66ef000de9ee7 100644
+--- a/sound/soc/intel/avs/skl.c
++++ b/sound/soc/intel/avs/skl.c
+@@ -12,6 +12,7 @@
+ #include "avs.h"
+ #include "cldma.h"
+ #include "messages.h"
++#include "registers.h"
+
+ void avs_skl_ipc_interrupt(struct avs_dev *adev)
+ {
+diff --git a/sound/soc/intel/avs/topology.c b/sound/soc/intel/avs/topology.c
+index 5cda527020c7bf..d612f20ed98937 100644
+--- a/sound/soc/intel/avs/topology.c
++++ b/sound/soc/intel/avs/topology.c
+@@ -1466,7 +1466,7 @@ avs_tplg_path_template_create(struct snd_soc_component *comp, struct avs_tplg *o
+
+ static const struct avs_tplg_token_parser mod_init_config_parsers[] = {
+ {
+- .token = AVS_TKN_MOD_INIT_CONFIG_ID_U32,
++ .token = AVS_TKN_INIT_CONFIG_ID_U32,
+ .type = SND_SOC_TPLG_TUPLE_TYPE_WORD,
+ .offset = offsetof(struct avs_tplg_init_config, id),
+ .parse = avs_parse_word_token,
+@@ -1519,7 +1519,7 @@ static int avs_tplg_parse_initial_configs(struct snd_soc_component *comp,
+ esize = le32_to_cpu(tuples->size) + le32_to_cpu(tmp->size);
+
+ ret = parse_dictionary_entries(comp, tuples, esize, config, 1, sizeof(*config),
+- AVS_TKN_MOD_INIT_CONFIG_ID_U32,
++ AVS_TKN_INIT_CONFIG_ID_U32,
+ mod_init_config_parsers,
+ ARRAY_SIZE(mod_init_config_parsers));
+
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 41042259f2b26e..9f2dc24d44cb54 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -22,6 +22,8 @@ static int quirk_override = -1;
+ module_param_named(quirk, quirk_override, int, 0444);
+ MODULE_PARM_DESC(quirk, "Board-specific quirk override");
+
++#define DMIC_DEFAULT_CHANNELS 2
++
+ static void log_quirks(struct device *dev)
+ {
+ if (SOC_SDW_JACK_JDSRC(sof_sdw_quirk))
+@@ -584,17 +586,32 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3838")
++ DMI_MATCH(DMI_PRODUCT_NAME, "83JX")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
+ },
+ {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3832")
++ DMI_MATCH(DMI_PRODUCT_NAME, "83LC")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83MC")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
++ }, {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83NM")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
+ },
+ {
+ .callback = sof_sdw_quirk_cb,
+@@ -1063,17 +1080,19 @@ static int sof_card_dai_links_create(struct snd_soc_card *card)
+ hdmi_num = SOF_PRE_TGL_HDMI_COUNT;
+
+ /* enable dmic01 & dmic16k */
+- if (sof_sdw_quirk & SOC_SDW_PCH_DMIC || mach_params->dmic_num) {
+- if (ctx->ignore_internal_dmic)
+- dev_warn(dev, "Ignoring PCH DMIC\n");
+- else
+- dmic_num = 2;
++ if (ctx->ignore_internal_dmic) {
++ dev_warn(dev, "Ignoring internal DMIC\n");
++ mach_params->dmic_num = 0;
++ } else if (mach_params->dmic_num) {
++ dmic_num = 2;
++ } else if (sof_sdw_quirk & SOC_SDW_PCH_DMIC) {
++ dmic_num = 2;
++ /*
++ * mach_params->dmic_num will be used to set the cfg-mics value of
++ * card->components string. Set it to the default value.
++ */
++ mach_params->dmic_num = DMIC_DEFAULT_CHANNELS;
+ }
+- /*
+- * mach_params->dmic_num will be used to set the cfg-mics value of card->components
+- * string. Overwrite it to the actual number of PCH DMICs used in the device.
+- */
+- mach_params->dmic_num = dmic_num;
+
+ if (sof_sdw_quirk & SOF_SSP_BT_OFFLOAD_PRESENT)
+ bt_num = 1;
+diff --git a/sound/soc/mediatek/mt8365/Makefile b/sound/soc/mediatek/mt8365/Makefile
+index 52ba45a8498a20..b197025e34bb80 100644
+--- a/sound/soc/mediatek/mt8365/Makefile
++++ b/sound/soc/mediatek/mt8365/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+ # MTK Platform driver
+-snd-soc-mt8365-pcm-objs := \
++snd-soc-mt8365-pcm-y := \
+ mt8365-afe-clk.o \
+ mt8365-afe-pcm.o \
+ mt8365-dai-adda.o \
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index d1f28699652fe3..acd75e48851fcf 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -22,7 +22,6 @@
+
+ #define DRV_NAME "rockchip-i2s-tdm"
+
+-#define DEFAULT_MCLK_FS 256
+ #define CH_GRP_MAX 4 /* The max channel 8 / 2 */
+ #define MULTIPLEX_CH_MAX 10
+
+@@ -70,6 +69,8 @@ struct rk_i2s_tdm_dev {
+ bool has_playback;
+ bool has_capture;
+ struct snd_soc_dai_driver *dai;
++ unsigned int mclk_rx_freq;
++ unsigned int mclk_tx_freq;
+ };
+
+ static int to_ch_num(unsigned int val)
+@@ -645,6 +646,27 @@ static int rockchip_i2s_trcm_mode(struct snd_pcm_substream *substream,
+ return 0;
+ }
+
++static int rockchip_i2s_tdm_set_sysclk(struct snd_soc_dai *cpu_dai, int stream,
++ unsigned int freq, int dir)
++{
++ struct rk_i2s_tdm_dev *i2s_tdm = to_info(cpu_dai);
++
++ if (i2s_tdm->clk_trcm) {
++ i2s_tdm->mclk_tx_freq = freq;
++ i2s_tdm->mclk_rx_freq = freq;
++ } else {
++ if (stream == SNDRV_PCM_STREAM_PLAYBACK)
++ i2s_tdm->mclk_tx_freq = freq;
++ else
++ i2s_tdm->mclk_rx_freq = freq;
++ }
++
++ dev_dbg(i2s_tdm->dev, "The target mclk_%s freq is: %d\n",
++ stream ? "rx" : "tx", freq);
++
++ return 0;
++}
++
+ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params,
+ struct snd_soc_dai *dai)
+@@ -659,15 +681,19 @@ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
+
+ if (i2s_tdm->clk_trcm == TRCM_TX) {
+ mclk = i2s_tdm->mclk_tx;
++ mclk_rate = i2s_tdm->mclk_tx_freq;
+ } else if (i2s_tdm->clk_trcm == TRCM_RX) {
+ mclk = i2s_tdm->mclk_rx;
++ mclk_rate = i2s_tdm->mclk_rx_freq;
+ } else if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mclk = i2s_tdm->mclk_tx;
++ mclk_rate = i2s_tdm->mclk_tx_freq;
+ } else {
+ mclk = i2s_tdm->mclk_rx;
++ mclk_rate = i2s_tdm->mclk_rx_freq;
+ }
+
+- err = clk_set_rate(mclk, DEFAULT_MCLK_FS * params_rate(params));
++ err = clk_set_rate(mclk, mclk_rate);
+ if (err)
+ return err;
+
+@@ -827,6 +853,7 @@ static const struct snd_soc_dai_ops rockchip_i2s_tdm_dai_ops = {
+ .hw_params = rockchip_i2s_tdm_hw_params,
+ .set_bclk_ratio = rockchip_i2s_tdm_set_bclk_ratio,
+ .set_fmt = rockchip_i2s_tdm_set_fmt,
++ .set_sysclk = rockchip_i2s_tdm_set_sysclk,
+ .set_tdm_slot = rockchip_dai_tdm_slot,
+ .trigger = rockchip_i2s_tdm_trigger,
+ };
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index 040ce0431fd2c5..32db2cead8a4ec 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -258,8 +258,7 @@ static void rz_ssi_stream_quit(struct rz_ssi_priv *ssi,
+ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
+ unsigned int channels)
+ {
+- static s8 ckdv[16] = { 1, 2, 4, 8, 16, 32, 64, 128,
+- 6, 12, 24, 48, 96, -1, -1, -1 };
++ static u8 ckdv[] = { 1, 2, 4, 8, 16, 32, 64, 128, 6, 12, 24, 48, 96 };
+ unsigned int channel_bits = 32; /* System Word Length */
+ unsigned long bclk_rate = rate * channels * channel_bits;
+ unsigned int div;
+diff --git a/sound/soc/sunxi/sun4i-spdif.c b/sound/soc/sunxi/sun4i-spdif.c
+index 0aa4164232464e..7cf623cbe9ed4b 100644
+--- a/sound/soc/sunxi/sun4i-spdif.c
++++ b/sound/soc/sunxi/sun4i-spdif.c
+@@ -176,6 +176,7 @@ struct sun4i_spdif_quirks {
+ unsigned int reg_dac_txdata;
+ bool has_reset;
+ unsigned int val_fctl_ftx;
++ unsigned int mclk_multiplier;
+ };
+
+ struct sun4i_spdif_dev {
+@@ -313,6 +314,7 @@ static int sun4i_spdif_hw_params(struct snd_pcm_substream *substream,
+ default:
+ return -EINVAL;
+ }
++ mclk *= host->quirks->mclk_multiplier;
+
+ ret = clk_set_rate(host->spdif_clk, mclk);
+ if (ret < 0) {
+@@ -347,6 +349,7 @@ static int sun4i_spdif_hw_params(struct snd_pcm_substream *substream,
+ default:
+ return -EINVAL;
+ }
++ mclk_div *= host->quirks->mclk_multiplier;
+
+ reg_val = 0;
+ reg_val |= SUN4I_SPDIF_TXCFG_ASS;
+@@ -540,24 +543,28 @@ static struct snd_soc_dai_driver sun4i_spdif_dai = {
+ static const struct sun4i_spdif_quirks sun4i_a10_spdif_quirks = {
+ .reg_dac_txdata = SUN4I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN4I_SPDIF_FCTL_FTX,
++ .mclk_multiplier = 1,
+ };
+
+ static const struct sun4i_spdif_quirks sun6i_a31_spdif_quirks = {
+ .reg_dac_txdata = SUN4I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN4I_SPDIF_FCTL_FTX,
+ .has_reset = true,
++ .mclk_multiplier = 1,
+ };
+
+ static const struct sun4i_spdif_quirks sun8i_h3_spdif_quirks = {
+ .reg_dac_txdata = SUN8I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN4I_SPDIF_FCTL_FTX,
+ .has_reset = true,
++ .mclk_multiplier = 4,
+ };
+
+ static const struct sun4i_spdif_quirks sun50i_h6_spdif_quirks = {
+ .reg_dac_txdata = SUN8I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN50I_H6_SPDIF_FCTL_FTX,
+ .has_reset = true,
++ .mclk_multiplier = 1,
+ };
+
+ static const struct of_device_id sun4i_spdif_of_match[] = {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 7968d6a2f592ac..a97efb7b131ea2 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2343,6 +2343,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
++ DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index 156b62a163c5a6..8a48cc2536f566 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -226,7 +226,7 @@ static int load_xbc_from_initrd(int fd, char **buf)
+ /* Wrong Checksum */
+ rcsum = xbc_calc_checksum(*buf, size);
+ if (csum != rcsum) {
+- pr_err("checksum error: %d != %d\n", csum, rcsum);
++ pr_err("checksum error: %u != %u\n", csum, rcsum);
+ return -EINVAL;
+ }
+
+@@ -395,7 +395,7 @@ static int apply_xbc(const char *path, const char *xbc_path)
+ xbc_get_info(&ret, NULL);
+ printf("\tNumber of nodes: %d\n", ret);
+ printf("\tSize: %u bytes\n", (unsigned int)size);
+- printf("\tChecksum: %d\n", (unsigned int)csum);
++ printf("\tChecksum: %u\n", (unsigned int)csum);
+
+ /* TODO: Check the options by schema */
+ xbc_exit();
+diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
+index 2f082b01ff2284..42ec5ddaab8dc8 100644
+--- a/tools/include/uapi/linux/if_xdp.h
++++ b/tools/include/uapi/linux/if_xdp.h
+@@ -117,12 +117,12 @@ struct xdp_options {
+ ((1ULL << XSK_UNALIGNED_BUF_OFFSET_SHIFT) - 1)
+
+ /* Request transmit timestamp. Upon completion, put it into tx_timestamp
+- * field of union xsk_tx_metadata.
++ * field of struct xsk_tx_metadata.
+ */
+ #define XDP_TXMD_FLAGS_TIMESTAMP (1 << 0)
+
+ /* Request transmit checksum offload. Checksum start position and offset
+- * are communicated via csum_start and csum_offset fields of union
++ * are communicated via csum_start and csum_offset fields of struct
+ * xsk_tx_metadata.
+ */
+ #define XDP_TXMD_FLAGS_CHECKSUM (1 << 1)
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 3c131039c52326..27e7bfae953bd3 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -1185,6 +1185,7 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
+
+ elf = elf_begin(fd, ELF_C_READ, NULL);
+ if (!elf) {
++ err = -LIBBPF_ERRNO__FORMAT;
+ pr_warn("failed to open %s as ELF file\n", path);
+ goto done;
+ }
+diff --git a/tools/lib/bpf/btf_relocate.c b/tools/lib/bpf/btf_relocate.c
+index 4f7399d85eab3d..8ef8003480dac8 100644
+--- a/tools/lib/bpf/btf_relocate.c
++++ b/tools/lib/bpf/btf_relocate.c
+@@ -212,7 +212,7 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
+ * need to match both name and size, otherwise embedding the base
+ * struct/union in the split type is invalid.
+ */
+- for (id = r->nr_dist_base_types; id < r->nr_split_types; id++) {
++ for (id = r->nr_dist_base_types; id < r->nr_dist_base_types + r->nr_split_types; id++) {
+ err = btf_mark_embedded_composite_type_ids(r, id);
+ if (err)
+ goto done;
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 6985ab0f1ca9e8..777600822d8e45 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -567,17 +567,15 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+ }
+ obj->elf = elf_begin(obj->fd, ELF_C_READ_MMAP, NULL);
+ if (!obj->elf) {
+- err = -errno;
+ pr_warn_elf("failed to parse ELF file '%s'", filename);
+- return err;
++ return -EINVAL;
+ }
+
+ /* Sanity check ELF file high-level properties */
+ ehdr = elf64_getehdr(obj->elf);
+ if (!ehdr) {
+- err = -errno;
+ pr_warn_elf("failed to get ELF header for %s", filename);
+- return err;
++ return -EINVAL;
+ }
+ if (ehdr->e_ident[EI_DATA] != host_endianness) {
+ err = -EOPNOTSUPP;
+@@ -593,9 +591,8 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+ }
+
+ if (elf_getshdrstrndx(obj->elf, &obj->shstrs_sec_idx)) {
+- err = -errno;
+ pr_warn_elf("failed to get SHSTRTAB section index for %s", filename);
+- return err;
++ return -EINVAL;
+ }
+
+ scn = NULL;
+@@ -605,26 +602,23 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+
+ shdr = elf64_getshdr(scn);
+ if (!shdr) {
+- err = -errno;
+ pr_warn_elf("failed to get section #%zu header for %s",
+ sec_idx, filename);
+- return err;
++ return -EINVAL;
+ }
+
+ sec_name = elf_strptr(obj->elf, obj->shstrs_sec_idx, shdr->sh_name);
+ if (!sec_name) {
+- err = -errno;
+ pr_warn_elf("failed to get section #%zu name for %s",
+ sec_idx, filename);
+- return err;
++ return -EINVAL;
+ }
+
+ data = elf_getdata(scn, 0);
+ if (!data) {
+- err = -errno;
+ pr_warn_elf("failed to get section #%zu (%s) data from %s",
+ sec_idx, sec_name, filename);
+- return err;
++ return -EINVAL;
+ }
+
+ sec = add_src_sec(obj, sec_name);
+@@ -2635,14 +2629,14 @@ int bpf_linker__finalize(struct bpf_linker *linker)
+
+ /* Finalize ELF layout */
+ if (elf_update(linker->elf, ELF_C_NULL) < 0) {
+- err = -errno;
++ err = -EINVAL;
+ pr_warn_elf("failed to finalize ELF layout");
+ return libbpf_err(err);
+ }
+
+ /* Write out final ELF contents */
+ if (elf_update(linker->elf, ELF_C_WRITE) < 0) {
+- err = -errno;
++ err = -EINVAL;
+ pr_warn_elf("failed to write ELF contents");
+ return libbpf_err(err);
+ }
+diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
+index 93794f01bb67cb..6ff28e7bf5e3da 100644
+--- a/tools/lib/bpf/usdt.c
++++ b/tools/lib/bpf/usdt.c
+@@ -659,7 +659,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ * [0] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation
+ */
+ usdt_abs_ip = note.loc_addr;
+- if (base_addr)
++ if (base_addr && note.base_addr)
+ usdt_abs_ip += base_addr - note.base_addr;
+
+ /* When attaching uprobes (which is what USDTs basically are)
+diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
+index e16cef160bc2cb..ce32cb35007d6f 100644
+--- a/tools/net/ynl/lib/ynl.c
++++ b/tools/net/ynl/lib/ynl.c
+@@ -95,7 +95,7 @@ ynl_err_walk(struct ynl_sock *ys, void *start, void *end, unsigned int off,
+
+ ynl_attr_for_each_payload(start, data_len, attr) {
+ astart_off = (char *)attr - (char *)start;
+- aend_off = astart_off + ynl_attr_data_len(attr);
++ aend_off = (char *)ynl_attr_data_end(attr) - (char *)start;
+ if (aend_off <= off)
+ continue;
+
+diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
+index dc42de1785cee7..908165fcec7de3 100644
+--- a/tools/perf/MANIFEST
++++ b/tools/perf/MANIFEST
+@@ -1,5 +1,6 @@
+ arch/arm64/tools/gen-sysreg.awk
+ arch/arm64/tools/sysreg
++arch/*/include/uapi/asm/bpf_perf_event.h
+ tools/perf
+ tools/arch
+ tools/scripts
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index d6989195a061ff..11e49cafa3af9d 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -2367,10 +2367,10 @@ int cmd_inject(int argc, const char **argv)
+ };
+ int ret;
+ const char *known_build_ids = NULL;
+- bool build_ids;
+- bool build_id_all;
+- bool mmap2_build_ids;
+- bool mmap2_build_id_all;
++ bool build_ids = false;
++ bool build_id_all = false;
++ bool mmap2_build_ids = false;
++ bool mmap2_build_id_all = false;
+
+ struct option options[] = {
+ OPT_BOOLEAN('b', "build-ids", &build_ids,
+diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
+index 062e2b56a2ab57..33a456980664a0 100644
+--- a/tools/perf/builtin-lock.c
++++ b/tools/perf/builtin-lock.c
+@@ -1591,8 +1591,8 @@ static const struct {
+ { LCB_F_PERCPU | LCB_F_WRITE, "pcpu-sem:W", "percpu-rwsem" },
+ { LCB_F_MUTEX, "mutex", "mutex" },
+ { LCB_F_MUTEX | LCB_F_SPIN, "mutex", "mutex" },
+- /* alias for get_type_flag() */
+- { LCB_F_MUTEX | LCB_F_SPIN, "mutex-spin", "mutex" },
++ /* alias for optimistic spinning only */
++ { LCB_F_MUTEX | LCB_F_SPIN, "mutex:spin", "mutex-spin" },
+ };
+
+ static const char *get_type_str(unsigned int flags)
+@@ -1617,19 +1617,6 @@ static const char *get_type_name(unsigned int flags)
+ return "unknown";
+ }
+
+-static unsigned int get_type_flag(const char *str)
+-{
+- for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
+- if (!strcmp(lock_type_table[i].name, str))
+- return lock_type_table[i].flags;
+- }
+- for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
+- if (!strcmp(lock_type_table[i].str, str))
+- return lock_type_table[i].flags;
+- }
+- return UINT_MAX;
+-}
+-
+ static void lock_filter_finish(void)
+ {
+ zfree(&filters.types);
+@@ -2350,29 +2337,58 @@ static int parse_lock_type(const struct option *opt __maybe_unused, const char *
+ int unset __maybe_unused)
+ {
+ char *s, *tmp, *tok;
+- int ret = 0;
+
+ s = strdup(str);
+ if (s == NULL)
+ return -1;
+
+ for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) {
+- unsigned int flags = get_type_flag(tok);
++ bool found = false;
+
+- if (flags == -1U) {
+- pr_err("Unknown lock flags: %s\n", tok);
+- ret = -1;
+- break;
++ /* `tok` is `str` in `lock_type_table` if it contains ':'. */
++ if (strchr(tok, ':')) {
++ for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
++ if (!strcmp(lock_type_table[i].str, tok) &&
++ add_lock_type(lock_type_table[i].flags)) {
++ found = true;
++ break;
++ }
++ }
++
++ if (!found) {
++ pr_err("Unknown lock flags name: %s\n", tok);
++ free(s);
++ return -1;
++ }
++
++ continue;
+ }
+
+- if (!add_lock_type(flags)) {
+- ret = -1;
+- break;
++ /*
++ * Otherwise `tok` is `name` in `lock_type_table`.
++ * Single lock name could contain multiple flags.
++ */
++ for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
++ if (!strcmp(lock_type_table[i].name, tok)) {
++ if (add_lock_type(lock_type_table[i].flags)) {
++ found = true;
++ } else {
++ free(s);
++ return -1;
++ }
++ }
+ }
++
++ if (!found) {
++ pr_err("Unknown lock name: %s\n", tok);
++ free(s);
++ return -1;
++ }
++
+ }
+
+ free(s);
+- return ret;
++ return 0;
+ }
+
+ static bool add_lock_addr(unsigned long addr)
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 5dc17ffee27a2d..645deec294c842 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -1418,7 +1418,7 @@ int cmd_report(int argc, const char **argv)
+ OPT_STRING(0, "addr2line", &addr2line_path, "path",
+ "addr2line binary to use for line numbers"),
+ OPT_BOOLEAN(0, "demangle", &symbol_conf.demangle,
+- "Disable symbol demangling"),
++ "Symbol demangling. Enabled by default, use --no-demangle to disable."),
+ OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel,
+ "Enable kernel symbol demangling"),
+ OPT_BOOLEAN(0, "mem-mode", &report.mem_mode, "mem access profile"),
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index 724a7938632126..ca3e8eca6610e8 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -809,7 +809,7 @@ static void perf_event__process_sample(const struct perf_tool *tool,
+ * invalid --vmlinux ;-)
+ */
+ if (!machine->kptr_restrict_warned && !top->vmlinux_warned &&
+- __map__is_kernel(al.map) && map__has_symbols(al.map)) {
++ __map__is_kernel(al.map) && !map__has_symbols(al.map)) {
+ if (symbol_conf.vmlinux_name) {
+ char serr[256];
+
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index ffa1295273099e..ecd26e058baf67 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -2122,8 +2122,12 @@ static int trace__read_syscall_info(struct trace *trace, int id)
+ return PTR_ERR(sc->tp_format);
+ }
+
++ /*
++ * The tracepoint format contains __syscall_nr field, so it's one more
++ * than the actual number of syscall arguments.
++ */
+ if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ?
+- RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields))
++ RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields - 1))
+ return -ENOMEM;
+
+ sc->args = sc->tp_format->format.fields;
+diff --git a/tools/perf/tests/shell/trace_btf_enum.sh b/tools/perf/tests/shell/trace_btf_enum.sh
+index 5a3b8a5a9b5cf2..8d1e6bbeac9068 100755
+--- a/tools/perf/tests/shell/trace_btf_enum.sh
++++ b/tools/perf/tests/shell/trace_btf_enum.sh
+@@ -26,8 +26,12 @@ check_vmlinux() {
+ trace_landlock() {
+ echo "Tracing syscall ${syscall}"
+
+- # test flight just to see if landlock_add_rule and libbpf are available
+- $TESTPROG
++ # test flight just to see if landlock_add_rule is available
++ if ! perf trace $TESTPROG 2>&1 | grep -q landlock
++ then
++ echo "No landlock system call found, skipping to non-syscall tracing."
++ return
++ fi
+
+ if perf trace -e $syscall $TESTPROG 2>&1 | \
+ grep -q -E ".*landlock_add_rule\(ruleset_fd: 11, rule_type: (LANDLOCK_RULE_PATH_BENEATH|LANDLOCK_RULE_NET_PORT), rule_attr: 0x[a-f0-9]+, flags: 45\) = -1.*"
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index 13608237c50e05..c81444059ad077 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -289,7 +289,10 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session,
+ }
+
+ info_node->info_linear = info_linear;
+- perf_env__insert_bpf_prog_info(env, info_node);
++ if (!perf_env__insert_bpf_prog_info(env, info_node)) {
++ free(info_linear);
++ free(info_node);
++ }
+ info_linear = NULL;
+
+ /*
+@@ -480,7 +483,10 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ info_node = malloc(sizeof(struct bpf_prog_info_node));
+ if (info_node) {
+ info_node->info_linear = info_linear;
+- perf_env__insert_bpf_prog_info(env, info_node);
++ if (!perf_env__insert_bpf_prog_info(env, info_node)) {
++ free(info_linear);
++ free(info_node);
++ }
+ } else
+ free(info_linear);
+
+diff --git a/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
+index 4a62ed593e84ed..e4352881e3faa6 100644
+--- a/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
++++ b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
+@@ -431,9 +431,9 @@ static bool pid_filter__has(struct pids_filtered *pids, pid_t pid)
+ static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
+ {
+ bool augmented, do_output = false;
+- int zero = 0, size, aug_size, index,
+- value_size = sizeof(struct augmented_arg) - offsetof(struct augmented_arg, value);
++ int zero = 0, index, value_size = sizeof(struct augmented_arg) - offsetof(struct augmented_arg, value);
+ u64 output = 0; /* has to be u64, otherwise it won't pass the verifier */
++ s64 aug_size, size;
+ unsigned int nr, *beauty_map;
+ struct beauty_payload_enter *payload;
+ void *arg, *payload_offset;
+@@ -484,14 +484,11 @@ static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
+ } else if (size > 0 && size <= value_size) { /* struct */
+ if (!bpf_probe_read_user(((struct augmented_arg *)payload_offset)->value, size, arg))
+ augmented = true;
+- } else if (size < 0 && size >= -6) { /* buffer */
++ } else if ((int)size < 0 && size >= -6) { /* buffer */
+ index = -(size + 1);
+ barrier_var(index); // Prevent clang (noticed with v18) from removing the &= 7 trick.
+ index &= 7; // Satisfy the bounds checking with the verifier in some kernels.
+- aug_size = args->args[index];
+-
+- if (aug_size > TRACE_AUG_MAX_BUF)
+- aug_size = TRACE_AUG_MAX_BUF;
++ aug_size = args->args[index] > TRACE_AUG_MAX_BUF ? TRACE_AUG_MAX_BUF : args->args[index];
+
+ if (aug_size > 0) {
+ if (!bpf_probe_read_user(((struct augmented_arg *)payload_offset)->value, aug_size, arg))
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index 1edbccfc3281d2..d981b6f4bc5ea2 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -22,15 +22,19 @@ struct perf_env perf_env;
+ #include "bpf-utils.h"
+ #include <bpf/libbpf.h>
+
+-void perf_env__insert_bpf_prog_info(struct perf_env *env,
++bool perf_env__insert_bpf_prog_info(struct perf_env *env,
+ struct bpf_prog_info_node *info_node)
+ {
++ bool ret;
++
+ down_write(&env->bpf_progs.lock);
+- __perf_env__insert_bpf_prog_info(env, info_node);
++ ret = __perf_env__insert_bpf_prog_info(env, info_node);
+ up_write(&env->bpf_progs.lock);
++
++ return ret;
+ }
+
+-void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node)
++bool __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node)
+ {
+ __u32 prog_id = info_node->info_linear->info.id;
+ struct bpf_prog_info_node *node;
+@@ -48,13 +52,14 @@ void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info
+ p = &(*p)->rb_right;
+ } else {
+ pr_debug("duplicated bpf prog info %u\n", prog_id);
+- return;
++ return false;
+ }
+ }
+
+ rb_link_node(&info_node->rb_node, parent, p);
+ rb_insert_color(&info_node->rb_node, &env->bpf_progs.infos);
+ env->bpf_progs.infos_cnt++;
++ return true;
+ }
+
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
+index 51b36c36019be6..38de0af2a68081 100644
+--- a/tools/perf/util/env.h
++++ b/tools/perf/util/env.h
+@@ -176,9 +176,9 @@ const char *perf_env__raw_arch(struct perf_env *env);
+ int perf_env__nr_cpus_avail(struct perf_env *env);
+
+ void perf_env__init(struct perf_env *env);
+-void __perf_env__insert_bpf_prog_info(struct perf_env *env,
++bool __perf_env__insert_bpf_prog_info(struct perf_env *env,
+ struct bpf_prog_info_node *info_node);
+-void perf_env__insert_bpf_prog_info(struct perf_env *env,
++bool perf_env__insert_bpf_prog_info(struct perf_env *env,
+ struct bpf_prog_info_node *info_node);
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+ __u32 prog_id);
+diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c
+index b2536a59c44e64..90c6ce2212e4fe 100644
+--- a/tools/perf/util/expr.c
++++ b/tools/perf/util/expr.c
+@@ -288,7 +288,7 @@ struct expr_parse_ctx *expr__ctx_new(void)
+ {
+ struct expr_parse_ctx *ctx;
+
+- ctx = malloc(sizeof(struct expr_parse_ctx));
++ ctx = calloc(1, sizeof(struct expr_parse_ctx));
+ if (!ctx)
+ return NULL;
+
+@@ -297,9 +297,6 @@ struct expr_parse_ctx *expr__ctx_new(void)
+ free(ctx);
+ return NULL;
+ }
+- ctx->sctx.user_requested_cpu_list = NULL;
+- ctx->sctx.runtime = 0;
+- ctx->sctx.system_wide = false;
+
+ return ctx;
+ }
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index a6386d12afd729..7b99f58f7bf269 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -3188,7 +3188,10 @@ static int process_bpf_prog_info(struct feat_fd *ff, void *data __maybe_unused)
+ /* after reading from file, translate offset to address */
+ bpil_offs_to_addr(info_linear);
+ info_node->info_linear = info_linear;
+- __perf_env__insert_bpf_prog_info(env, info_node);
++ if (!__perf_env__insert_bpf_prog_info(env, info_node)) {
++ free(info_linear);
++ free(info_node);
++ }
+ }
+
+ up_write(&env->bpf_progs.lock);
+@@ -3235,7 +3238,8 @@ static int process_bpf_btf(struct feat_fd *ff, void *data __maybe_unused)
+ if (__do_read(ff, node->data, data_size))
+ goto out;
+
+- __perf_env__insert_btf(env, node);
++ if (!__perf_env__insert_btf(env, node))
++ free(node);
+ node = NULL;
+ }
+
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 27d5345d2b307a..9be2f4479f5257 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1003,7 +1003,7 @@ static int machine__get_running_kernel_start(struct machine *machine,
+
+ err = kallsyms__get_symbol_start(filename, "_edata", &addr);
+ if (err)
+- err = kallsyms__get_function_start(filename, "_etext", &addr);
++ err = kallsyms__get_symbol_start(filename, "_etext", &addr);
+ if (!err)
+ *end = addr;
+
+diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
+index 432399cbe5dd39..09c9cc326c08d4 100644
+--- a/tools/perf/util/maps.c
++++ b/tools/perf/util/maps.c
+@@ -1136,8 +1136,13 @@ struct map *maps__find_next_entry(struct maps *maps, struct map *map)
+ struct map *result = NULL;
+
+ down_read(maps__lock(maps));
++ while (!maps__maps_by_address_sorted(maps)) {
++ up_read(maps__lock(maps));
++ maps__sort_by_address(maps);
++ down_read(maps__lock(maps));
++ }
+ i = maps__by_address_index(maps, map);
+- if (i < maps__nr_maps(maps))
++ if (++i < maps__nr_maps(maps))
+ result = map__get(maps__maps_by_address(maps)[i]);
+
+ up_read(maps__lock(maps));
+diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
+index cb185c5659d6b3..68f5de2d79c72c 100644
+--- a/tools/perf/util/namespaces.c
++++ b/tools/perf/util/namespaces.c
+@@ -266,11 +266,16 @@ pid_t nsinfo__pid(const struct nsinfo *nsi)
+ return RC_CHK_ACCESS(nsi)->pid;
+ }
+
+-pid_t nsinfo__in_pidns(const struct nsinfo *nsi)
++bool nsinfo__in_pidns(const struct nsinfo *nsi)
+ {
+ return RC_CHK_ACCESS(nsi)->in_pidns;
+ }
+
++void nsinfo__set_in_pidns(struct nsinfo *nsi)
++{
++ RC_CHK_ACCESS(nsi)->in_pidns = true;
++}
++
+ void nsinfo__mountns_enter(struct nsinfo *nsi,
+ struct nscookie *nc)
+ {
+diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h
+index 8c0731c6cbb7ee..e95c79b80e27c8 100644
+--- a/tools/perf/util/namespaces.h
++++ b/tools/perf/util/namespaces.h
+@@ -58,7 +58,8 @@ void nsinfo__clear_need_setns(struct nsinfo *nsi);
+ pid_t nsinfo__tgid(const struct nsinfo *nsi);
+ pid_t nsinfo__nstgid(const struct nsinfo *nsi);
+ pid_t nsinfo__pid(const struct nsinfo *nsi);
+-pid_t nsinfo__in_pidns(const struct nsinfo *nsi);
++bool nsinfo__in_pidns(const struct nsinfo *nsi);
++void nsinfo__set_in_pidns(struct nsinfo *nsi);
+
+ void nsinfo__mountns_enter(struct nsinfo *nsi, struct nscookie *nc);
+ void nsinfo__mountns_exit(struct nscookie *nc);
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index ae6af354a81db5..08a399b0be286c 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -33,7 +33,7 @@ static int mperf_get_count_percent(unsigned int self_id, double *percent,
+ unsigned int cpu);
+ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ unsigned int cpu);
+-static struct timespec time_start, time_end;
++static struct timespec *time_start, *time_end;
+
+ static cstate_t mperf_cstates[MPERF_CSTATE_COUNT] = {
+ {
+@@ -174,7 +174,7 @@ static int mperf_get_count_percent(unsigned int id, double *percent,
+ dprint("%s: TSC Ref - mperf_diff: %llu, tsc_diff: %llu\n",
+ mperf_cstates[id].name, mperf_diff, tsc_diff);
+ } else if (max_freq_mode == MAX_FREQ_SYSFS) {
+- timediff = max_frequency * timespec_diff_us(time_start, time_end);
++ timediff = max_frequency * timespec_diff_us(time_start[cpu], time_end[cpu]);
+ *percent = 100.0 * mperf_diff / timediff;
+ dprint("%s: MAXFREQ - mperf_diff: %llu, time_diff: %llu\n",
+ mperf_cstates[id].name, mperf_diff, timediff);
+@@ -207,7 +207,7 @@ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ if (max_freq_mode == MAX_FREQ_TSC_REF) {
+ /* Calculate max_freq from TSC count */
+ tsc_diff = tsc_at_measure_end[cpu] - tsc_at_measure_start[cpu];
+- time_diff = timespec_diff_us(time_start, time_end);
++ time_diff = timespec_diff_us(time_start[cpu], time_end[cpu]);
+ max_frequency = tsc_diff / time_diff;
+ }
+
+@@ -226,9 +226,8 @@ static int mperf_start(void)
+ {
+ int cpu;
+
+- clock_gettime(CLOCK_REALTIME, &time_start);
+-
+ for (cpu = 0; cpu < cpu_count; cpu++) {
++ clock_gettime(CLOCK_REALTIME, &time_start[cpu]);
+ mperf_get_tsc(&tsc_at_measure_start[cpu]);
+ mperf_init_stats(cpu);
+ }
+@@ -243,9 +242,9 @@ static int mperf_stop(void)
+ for (cpu = 0; cpu < cpu_count; cpu++) {
+ mperf_measure_stats(cpu);
+ mperf_get_tsc(&tsc_at_measure_end[cpu]);
++ clock_gettime(CLOCK_REALTIME, &time_end[cpu]);
+ }
+
+- clock_gettime(CLOCK_REALTIME, &time_end);
+ return 0;
+ }
+
+@@ -349,6 +348,8 @@ struct cpuidle_monitor *mperf_register(void)
+ aperf_current_count = calloc(cpu_count, sizeof(unsigned long long));
+ tsc_at_measure_start = calloc(cpu_count, sizeof(unsigned long long));
+ tsc_at_measure_end = calloc(cpu_count, sizeof(unsigned long long));
++ time_start = calloc(cpu_count, sizeof(struct timespec));
++ time_end = calloc(cpu_count, sizeof(struct timespec));
+ mperf_monitor.name_len = strlen(mperf_monitor.name);
+ return &mperf_monitor;
+ }
+@@ -361,6 +362,8 @@ void mperf_unregister(void)
+ free(aperf_current_count);
+ free(tsc_at_measure_start);
+ free(tsc_at_measure_end);
++ free(time_start);
++ free(time_end);
+ free(is_valid);
+ }
+
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index 067717bce1d4ab..56c7ff6efcdabc 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -33,6 +33,9 @@ name as necessary to disambiguate it from others is necessary. Note that option
+ msr0xXXX is a hex offset, eg. msr0x10
+ /sys/path... is an absolute path to a sysfs attribute
+ <device> is a perf device from /sys/bus/event_source/devices/<device> eg. cstate_core
++ On Intel hybrid platforms, instead of one "cpu" perf device there are two, "cpu_core" and "cpu_atom" devices for P and E cores respectively.
++		Turbostat, in this case, allows the user to keep using the "cpu" device; it automatically detects each CPU's type and translates "cpu" to "cpu_core" or "cpu_atom" accordingly.
++ For a complete example see "ADD PERF COUNTER EXAMPLE #2 (using virtual "cpu" device)".
+ <event> is a perf event for given device from /sys/bus/event_source/devices/<device>/events/<event> eg. c1-residency
+ perf/cstate_core/c1-residency would then use /sys/bus/event_source/devices/cstate_core/events/c1-residency
+
+@@ -387,6 +390,28 @@ CPU pCPU%c1 CPU%c1
+
+ .fi
+
++.SH ADD PERF COUNTER EXAMPLE #2 (using virtual cpu device)
++Here we run on a hybrid Raptor Lake platform.
++We limit turbostat to show output for just cpu0 (pcore) and cpu12 (ecore).
++We add a counter showing the number of L3 cache misses, using the virtual "cpu" device,
++labeling it with the column header "VCMISS".
++We add a counter showing the number of L3 cache misses, using the "cpu_core" device,
++labeling it with the column header "PCMISS". This will fail on ecore cpu12.
++We add a counter showing the number of L3 cache misses, using the "cpu_atom" device,
++labeling it with the column header "ECMISS". This will fail on pcore cpu0.
++We display the counters only once, after the conclusion of a 0.1 second sleep.
++.nf
++sudo ./turbostat --quiet --cpu 0,12 --show CPU --add perf/cpu/cache-misses,cpu,delta,raw,VCMISS --add perf/cpu_core/cache-misses,cpu,delta,raw,PCMISS --add perf/cpu_atom/cache-misses,cpu,delta,raw,ECMISS sleep .1
++turbostat: added_perf_counters_init_: perf/cpu_atom/cache-misses: failed to open counter on cpu0
++turbostat: added_perf_counters_init_: perf/cpu_core/cache-misses: failed to open counter on cpu12
++0.104630 sec
++CPU ECMISS PCMISS VCMISS
++- 0x0000000000000000 0x0000000000000000 0x0000000000000000
++0 0x0000000000000000 0x0000000000007951 0x0000000000007796
++12 0x000000000001137a 0x0000000000000000 0x0000000000011392
++
++.fi
++
+ .SH ADD PMT COUNTER EXAMPLE
+ Here we limit turbostat to showing just the CPU number 0.
+ We add two counters, showing crystal clock count and the DC6 residency.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index a5ebee8b23bbe3..235e82fe7d0a56 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -31,6 +31,9 @@
+ )
+ // end copied section
+
++#define CPUID_LEAF_MODEL_ID 0x1A
++#define CPUID_LEAF_MODEL_ID_CORE_TYPE_SHIFT 24
++
+ #define X86_VENDOR_INTEL 0
+
+ #include INTEL_FAMILY_HEADER
+@@ -89,6 +92,11 @@
+ #define PERF_DEV_NAME_BYTES 32
+ #define PERF_EVT_NAME_BYTES 32
+
++#define INTEL_ECORE_TYPE 0x20
++#define INTEL_PCORE_TYPE 0x40
++
++#define ROUND_UP_TO_PAGE_SIZE(n) (((n) + 0x1000UL-1UL) & ~(0x1000UL-1UL))
++
+ enum counter_scope { SCOPE_CPU, SCOPE_CORE, SCOPE_PACKAGE };
+ enum counter_type { COUNTER_ITEMS, COUNTER_CYCLES, COUNTER_SECONDS, COUNTER_USEC, COUNTER_K2M };
+ enum counter_format { FORMAT_RAW, FORMAT_DELTA, FORMAT_PERCENT, FORMAT_AVERAGE };
+@@ -1079,8 +1087,8 @@ int backwards_count;
+ char *progname;
+
+ #define CPU_SUBSET_MAXCPUS 1024 /* need to use before probe... */
+-cpu_set_t *cpu_present_set, *cpu_effective_set, *cpu_allowed_set, *cpu_affinity_set, *cpu_subset;
+-size_t cpu_present_setsize, cpu_effective_setsize, cpu_allowed_setsize, cpu_affinity_setsize, cpu_subset_size;
++cpu_set_t *cpu_present_set, *cpu_possible_set, *cpu_effective_set, *cpu_allowed_set, *cpu_affinity_set, *cpu_subset;
++size_t cpu_present_setsize, cpu_possible_setsize, cpu_effective_setsize, cpu_allowed_setsize, cpu_affinity_setsize, cpu_subset_size;
+ #define MAX_ADDED_THREAD_COUNTERS 24
+ #define MAX_ADDED_CORE_COUNTERS 8
+ #define MAX_ADDED_PACKAGE_COUNTERS 16
+@@ -1848,6 +1856,7 @@ struct cpu_topology {
+ int logical_node_id; /* 0-based count within the package */
+ int physical_core_id;
+ int thread_id;
++ int type;
+ cpu_set_t *put_ids; /* Processing Unit/Thread IDs */
+ } *cpus;
+
+@@ -5659,6 +5668,32 @@ int init_thread_id(int cpu)
+ return 0;
+ }
+
++int set_my_cpu_type(void)
++{
++ unsigned int eax, ebx, ecx, edx;
++ unsigned int max_level;
++
++ __cpuid(0, max_level, ebx, ecx, edx);
++
++ if (max_level < CPUID_LEAF_MODEL_ID)
++ return 0;
++
++ __cpuid(CPUID_LEAF_MODEL_ID, eax, ebx, ecx, edx);
++
++ return (eax >> CPUID_LEAF_MODEL_ID_CORE_TYPE_SHIFT);
++}
++
++int set_cpu_hybrid_type(int cpu)
++{
++ if (cpu_migrate(cpu))
++ return -1;
++
++ int type = set_my_cpu_type();
++
++ cpus[cpu].type = type;
++ return 0;
++}
++
+ /*
+ * snapshot_proc_interrupts()
+ *
+@@ -8188,6 +8223,33 @@ int dir_filter(const struct dirent *dirp)
+ return 0;
+ }
+
++char *possible_file = "/sys/devices/system/cpu/possible";
++char possible_buf[1024];
++
++int initialize_cpu_possible_set(void)
++{
++ FILE *fp;
++
++ fp = fopen(possible_file, "r");
++ if (!fp) {
++ warn("open %s", possible_file);
++ return -1;
++ }
++ if (fread(possible_buf, sizeof(char), 1024, fp) == 0) {
++ warn("read %s", possible_file);
++ goto err;
++ }
++ if (parse_cpu_str(possible_buf, cpu_possible_set, cpu_possible_setsize)) {
++ warnx("%s: cpu str malformat %s\n", possible_file, cpu_effective_str);
++ goto err;
++ }
++ return 0;
++
++err:
++ fclose(fp);
++ return -1;
++}
++
+ void topology_probe(bool startup)
+ {
+ int i;
+@@ -8219,6 +8281,16 @@ void topology_probe(bool startup)
+ CPU_ZERO_S(cpu_present_setsize, cpu_present_set);
+ for_all_proc_cpus(mark_cpu_present);
+
++ /*
++ * Allocate and initialize cpu_possible_set
++ */
++ cpu_possible_set = CPU_ALLOC((topo.max_cpu_num + 1));
++ if (cpu_possible_set == NULL)
++ err(3, "CPU_ALLOC");
++ cpu_possible_setsize = CPU_ALLOC_SIZE((topo.max_cpu_num + 1));
++ CPU_ZERO_S(cpu_possible_setsize, cpu_possible_set);
++ initialize_cpu_possible_set();
++
+ /*
+ * Allocate and initialize cpu_effective_set
+ */
+@@ -8287,6 +8359,8 @@ void topology_probe(bool startup)
+
+ for_all_proc_cpus(init_thread_id);
+
++ for_all_proc_cpus(set_cpu_hybrid_type);
++
+ /*
+ * For online cpus
+ * find max_core_id, max_package_id
+@@ -8551,6 +8625,35 @@ void check_perf_access(void)
+ bic_enabled &= ~BIC_IPC;
+ }
+
++bool perf_has_hybrid_devices(void)
++{
++ /*
++ * 0: unknown
++ * 1: has separate perf device for p and e core
++ * -1: doesn't have separate perf device for p and e core
++ */
++ static int cached;
++
++ if (cached > 0)
++ return true;
++
++ if (cached < 0)
++ return false;
++
++ if (access("/sys/bus/event_source/devices/cpu_core", F_OK)) {
++ cached = -1;
++ return false;
++ }
++
++ if (access("/sys/bus/event_source/devices/cpu_atom", F_OK)) {
++ cached = -1;
++ return false;
++ }
++
++ cached = 1;
++ return true;
++}
++
+ int added_perf_counters_init_(struct perf_counter_info *pinfo)
+ {
+ size_t num_domains = 0;
+@@ -8607,29 +8710,56 @@ int added_perf_counters_init_(struct perf_counter_info *pinfo)
+ if (domain_visited[next_domain])
+ continue;
+
+- perf_type = read_perf_type(pinfo->device);
++ /*
++ * Intel hybrid platforms expose different perf devices for P and E cores.
++		 * Instead of one "/sys/bus/event_source/devices/cpu" device, there are
++		 * "/sys/bus/event_source/devices/{cpu_core,cpu_atom}".
++		 *
++		 * This makes things more complicated for the user, because most of the
++		 * counters are available on both devices and would otherwise have to be handled manually.
++		 *
++		 * The code below allows the user to keep using the old "cpu" name, which is translated accordingly.
++ */
++ const char *perf_device = pinfo->device;
++
++ if (strcmp(perf_device, "cpu") == 0 && perf_has_hybrid_devices()) {
++ switch (cpus[cpu].type) {
++ case INTEL_PCORE_TYPE:
++ perf_device = "cpu_core";
++ break;
++
++ case INTEL_ECORE_TYPE:
++ perf_device = "cpu_atom";
++ break;
++
++ default: /* Don't change, we will probably fail and report a problem soon. */
++ break;
++ }
++ }
++
++ perf_type = read_perf_type(perf_device);
+ if (perf_type == (unsigned int)-1) {
+ warnx("%s: perf/%s/%s: failed to read %s",
+- __func__, pinfo->device, pinfo->event, "type");
++ __func__, perf_device, pinfo->event, "type");
+ continue;
+ }
+
+- perf_config = read_perf_config(pinfo->device, pinfo->event);
++ perf_config = read_perf_config(perf_device, pinfo->event);
+ if (perf_config == (unsigned int)-1) {
+ warnx("%s: perf/%s/%s: failed to read %s",
+- __func__, pinfo->device, pinfo->event, "config");
++ __func__, perf_device, pinfo->event, "config");
+ continue;
+ }
+
+ /* Scale is not required, some counters just don't have it. */
+- perf_scale = read_perf_scale(pinfo->device, pinfo->event);
++ perf_scale = read_perf_scale(perf_device, pinfo->event);
+ if (perf_scale == 0.0)
+ perf_scale = 1.0;
+
+ fd_perf = open_perf_counter(cpu, perf_type, perf_config, -1, 0);
+ if (fd_perf == -1) {
+ warnx("%s: perf/%s/%s: failed to open counter on cpu%d",
+- __func__, pinfo->device, pinfo->event, cpu);
++ __func__, perf_device, pinfo->event, cpu);
+ continue;
+ }
+
+@@ -8639,7 +8769,7 @@ int added_perf_counters_init_(struct perf_counter_info *pinfo)
+
+ if (debug)
+ fprintf(stderr, "Add perf/%s/%s cpu%d: %d\n",
+- pinfo->device, pinfo->event, cpu, pinfo->fd_perf_per_domain[next_domain]);
++ perf_device, pinfo->event, cpu, pinfo->fd_perf_per_domain[next_domain]);
+ }
+
+ pinfo = pinfo->next;
+@@ -8762,7 +8892,7 @@ struct pmt_mmio *pmt_mmio_open(unsigned int target_guid)
+ if (fd_pmt == -1)
+ goto loop_cleanup_and_break;
+
+- mmap_size = (size + 0x1000UL) & (~0x1000UL);
++ mmap_size = ROUND_UP_TO_PAGE_SIZE(size);
+ mmio = mmap(0, mmap_size, PROT_READ, MAP_SHARED, fd_pmt, 0);
+ if (mmio != MAP_FAILED) {
+
+@@ -9001,6 +9131,18 @@ void turbostat_init()
+ }
+ }
+
++void affinitize_child(void)
++{
++ /* Prefer cpu_possible_set, if available */
++ if (sched_setaffinity(0, cpu_possible_setsize, cpu_possible_set)) {
++ warn("sched_setaffinity cpu_possible_set");
++
++ /* Otherwise, allow child to run on same cpu set as turbostat */
++ if (sched_setaffinity(0, cpu_allowed_setsize, cpu_allowed_set))
++ warn("sched_setaffinity cpu_allowed_set");
++ }
++}
++
+ int fork_it(char **argv)
+ {
+ pid_t child_pid;
+@@ -9016,6 +9158,7 @@ int fork_it(char **argv)
+ child_pid = fork();
+ if (!child_pid) {
+ /* child */
++ affinitize_child();
+ execvp(argv[0], argv);
+ err(errno, "exec %s", argv[0]);
+ } else {
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index dacad94e2be42a..c76ad0be54e2ed 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -2419,6 +2419,11 @@ sub get_version {
+ return if ($have_version);
+ doprint "$make kernelrelease ... ";
+ $version = `$make -s kernelrelease | tail -1`;
++ if (!length($version)) {
++ run_command "$make allnoconfig" or return 0;
++ doprint "$make kernelrelease ... ";
++ $version = `$make -s kernelrelease | tail -1`;
++ }
+ chomp($version);
+ doprint "$version\n";
+ $have_version = 1;
+@@ -2960,8 +2965,6 @@ sub run_bisect_test {
+
+ my $failed = 0;
+ my $result;
+- my $output;
+- my $ret;
+
+ $in_bisect = 1;
+
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 43a02931847854..6fc29996ae2938 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -193,9 +193,9 @@ ifeq ($(shell expr $(MAKE_VERSION) \>= 4.4), 1)
+ $(let OUTPUT,$(OUTPUT)/,\
+ $(eval include ../../../build/Makefile.feature))
+ else
+-OUTPUT := $(OUTPUT)/
++override OUTPUT := $(OUTPUT)/
+ $(eval include ../../../build/Makefile.feature)
+-OUTPUT := $(patsubst %/,%,$(OUTPUT))
++override OUTPUT := $(patsubst %/,%,$(OUTPUT))
+ endif
+ endif
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf_distill.c b/tools/testing/selftests/bpf/prog_tests/btf_distill.c
+index ca84726d5ac1b9..b72b966df77b90 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf_distill.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf_distill.c
+@@ -385,7 +385,7 @@ static void test_distilled_base_missing_err(void)
+ "[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED");
+ btf5 = btf__new_empty();
+ if (!ASSERT_OK_PTR(btf5, "empty_reloc_btf"))
+- return;
++ goto cleanup;
+ btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ VALIDATE_RAW_BTF(
+ btf5,
+@@ -478,7 +478,7 @@ static void test_distilled_base_multi_err2(void)
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
+ btf5 = btf__new_empty();
+ if (!ASSERT_OK_PTR(btf5, "empty_reloc_btf"))
+- return;
++ goto cleanup;
+ btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [2] int */
+ VALIDATE_RAW_BTF(
+diff --git a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+index d50cbd8040d45f..e59af2aa660166 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
++++ b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+@@ -171,6 +171,10 @@ static void test_kprobe_fill_link_info(struct test_fill_link_info *skel,
+ /* See also arch_adjust_kprobe_addr(). */
+ if (skel->kconfig->CONFIG_X86_KERNEL_IBT)
+ entry_offset = 4;
++ if (skel->kconfig->CONFIG_PPC64 &&
++ skel->kconfig->CONFIG_KPROBES_ON_FTRACE &&
++ !skel->kconfig->CONFIG_PPC_FTRACE_OUT_OF_LINE)
++ entry_offset = 4;
+ err = verify_perf_link_info(link_fd, type, kprobe_addr, 0, entry_offset);
+ ASSERT_OK(err, "verify_perf_link_info");
+ } else {
+diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+index 21c5a37846adea..40f22454cf05b0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
++++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+@@ -1496,8 +1496,8 @@ static void test_tailcall_bpf2bpf_hierarchy_3(void)
+ RUN_TESTS(tailcall_bpf2bpf_hierarchy3);
+ }
+
+-/* test_tailcall_freplace checks that the attached freplace prog is OK to
+- * update the prog_array map.
++/* test_tailcall_freplace checks that the freplace prog fails to update the
++ * prog_array map, no matter whether the freplace prog attaches to its target.
+ */
+ static void test_tailcall_freplace(void)
+ {
+@@ -1505,7 +1505,7 @@ static void test_tailcall_freplace(void)
+ struct bpf_link *freplace_link = NULL;
+ struct bpf_program *freplace_prog;
+ struct tc_bpf2bpf *tc_skel = NULL;
+- int prog_fd, map_fd;
++ int prog_fd, tc_prog_fd, map_fd;
+ char buff[128] = {};
+ int err, key;
+
+@@ -1523,9 +1523,10 @@ static void test_tailcall_freplace(void)
+ if (!ASSERT_OK_PTR(tc_skel, "tc_bpf2bpf__open_and_load"))
+ goto out;
+
+- prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
++ tc_prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
+ freplace_prog = freplace_skel->progs.entry_freplace;
+- err = bpf_program__set_attach_target(freplace_prog, prog_fd, "subprog");
++ err = bpf_program__set_attach_target(freplace_prog, tc_prog_fd,
++ "subprog_tc");
+ if (!ASSERT_OK(err, "set_attach_target"))
+ goto out;
+
+@@ -1533,27 +1534,116 @@ static void test_tailcall_freplace(void)
+ if (!ASSERT_OK(err, "tailcall_freplace__load"))
+ goto out;
+
+- freplace_link = bpf_program__attach_freplace(freplace_prog, prog_fd,
+- "subprog");
++ map_fd = bpf_map__fd(freplace_skel->maps.jmp_table);
++ prog_fd = bpf_program__fd(freplace_prog);
++ key = 0;
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ ASSERT_ERR(err, "update jmp_table failure");
++
++ freplace_link = bpf_program__attach_freplace(freplace_prog, tc_prog_fd,
++ "subprog_tc");
+ if (!ASSERT_OK_PTR(freplace_link, "attach_freplace"))
+ goto out;
+
+- map_fd = bpf_map__fd(freplace_skel->maps.jmp_table);
+- prog_fd = bpf_program__fd(freplace_prog);
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ ASSERT_ERR(err, "update jmp_table failure");
++
++out:
++ bpf_link__destroy(freplace_link);
++ tailcall_freplace__destroy(freplace_skel);
++ tc_bpf2bpf__destroy(tc_skel);
++}
++
++/* test_tailcall_bpf2bpf_freplace checks that attaching a freplace prog to a
++ * tail callee prog fails, and that updating an extended prog in the
++ * prog_array map fails as well.
++ */
++static void test_tailcall_bpf2bpf_freplace(void)
++{
++ struct tailcall_freplace *freplace_skel = NULL;
++ struct bpf_link *freplace_link = NULL;
++ struct tc_bpf2bpf *tc_skel = NULL;
++ char buff[128] = {};
++ int prog_fd, map_fd;
++ int err, key;
++
++ LIBBPF_OPTS(bpf_test_run_opts, topts,
++ .data_in = buff,
++ .data_size_in = sizeof(buff),
++ .repeat = 1,
++ );
++
++ tc_skel = tc_bpf2bpf__open_and_load();
++ if (!ASSERT_OK_PTR(tc_skel, "tc_bpf2bpf__open_and_load"))
++ goto out;
++
++ prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
++ freplace_skel = tailcall_freplace__open();
++ if (!ASSERT_OK_PTR(freplace_skel, "tailcall_freplace__open"))
++ goto out;
++
++ err = bpf_program__set_attach_target(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_OK(err, "set_attach_target"))
++ goto out;
++
++ err = tailcall_freplace__load(freplace_skel);
++ if (!ASSERT_OK(err, "tailcall_freplace__load"))
++ goto out;
++
++ /* OK to attach then detach freplace prog. */
++
++ freplace_link = bpf_program__attach_freplace(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_OK_PTR(freplace_link, "attach_freplace"))
++ goto out;
++
++ err = bpf_link__destroy(freplace_link);
++ if (!ASSERT_OK(err, "destroy link"))
++ goto out;
++
++ /* OK to update prog_array map then delete element from the map. */
++
+ key = 0;
++ map_fd = bpf_map__fd(freplace_skel->maps.jmp_table);
+ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
+ if (!ASSERT_OK(err, "update jmp_table"))
+ goto out;
+
+- prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
+- err = bpf_prog_test_run_opts(prog_fd, &topts);
+- ASSERT_OK(err, "test_run");
+- ASSERT_EQ(topts.retval, 34, "test_run retval");
++ err = bpf_map_delete_elem(map_fd, &key);
++ if (!ASSERT_OK(err, "delete_elem from jmp_table"))
++ goto out;
++
++ /* Fail to attach a tail callee prog with freplace prog. */
++
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ if (!ASSERT_OK(err, "update jmp_table"))
++ goto out;
++
++ freplace_link = bpf_program__attach_freplace(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_ERR_PTR(freplace_link, "attach_freplace failure"))
++ goto out;
++
++ err = bpf_map_delete_elem(map_fd, &key);
++ if (!ASSERT_OK(err, "delete_elem from jmp_table"))
++ goto out;
++
++ /* Fail to update an extended prog to prog_array map. */
++
++ freplace_link = bpf_program__attach_freplace(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_OK_PTR(freplace_link, "attach_freplace"))
++ goto out;
++
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ if (!ASSERT_ERR(err, "update jmp_table failure"))
++ goto out;
+
+ out:
+ bpf_link__destroy(freplace_link);
+- tc_bpf2bpf__destroy(tc_skel);
+ tailcall_freplace__destroy(freplace_skel);
++ tc_bpf2bpf__destroy(tc_skel);
+ }
+
+ void test_tailcalls(void)
+@@ -1606,4 +1696,6 @@ void test_tailcalls(void)
+ test_tailcall_bpf2bpf_hierarchy_3();
+ if (test__start_subtest("tailcall_freplace"))
+ test_tailcall_freplace();
++ if (test__start_subtest("tailcall_bpf2bpf_freplace"))
++ test_tailcall_bpf2bpf_freplace();
+ }
+diff --git a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+index 79f5087dade224..fe6249d99b315b 100644
+--- a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
++++ b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+@@ -5,10 +5,11 @@
+ #include "bpf_misc.h"
+
+ __noinline
+-int subprog(struct __sk_buff *skb)
++int subprog_tc(struct __sk_buff *skb)
+ {
+ int ret = 1;
+
++ __sink(skb);
+ __sink(ret);
+ /* let verifier know that 'subprog_tc' can change pointers to skb->data */
+ bpf_skb_change_proto(skb, 0, 0);
+@@ -18,7 +19,7 @@ int subprog(struct __sk_buff *skb)
+ SEC("tc")
+ int entry_tc(struct __sk_buff *skb)
+ {
+- return subprog(skb);
++ return subprog_tc(skb);
+ }
+
+ char __license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/test_fill_link_info.c b/tools/testing/selftests/bpf/progs/test_fill_link_info.c
+index 6afa834756e9fd..fac33a14f2009c 100644
+--- a/tools/testing/selftests/bpf/progs/test_fill_link_info.c
++++ b/tools/testing/selftests/bpf/progs/test_fill_link_info.c
+@@ -6,13 +6,20 @@
+ #include <stdbool.h>
+
+ extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
++extern bool CONFIG_PPC_FTRACE_OUT_OF_LINE __kconfig __weak;
++extern bool CONFIG_KPROBES_ON_FTRACE __kconfig __weak;
++extern bool CONFIG_PPC64 __kconfig __weak;
+
+-/* This function is here to have CONFIG_X86_KERNEL_IBT
+- * used and added to object BTF.
++/* This function is here to have CONFIG_X86_KERNEL_IBT,
++ * CONFIG_PPC_FTRACE_OUT_OF_LINE, CONFIG_KPROBES_ON_FTRACE,
++ * CONFIG_PPC64 used and added to object BTF.
+ */
+ int unused(void)
+ {
+- return CONFIG_X86_KERNEL_IBT ? 0 : 1;
++ return CONFIG_X86_KERNEL_IBT ||
++ CONFIG_PPC_FTRACE_OUT_OF_LINE ||
++ CONFIG_KPROBES_ON_FTRACE ||
++ CONFIG_PPC64 ? 0 : 1;
+ }
+
+ SEC("kprobe")
+diff --git a/tools/testing/selftests/bpf/test_tc_tunnel.sh b/tools/testing/selftests/bpf/test_tc_tunnel.sh
+index 7989ec60845455..cb55a908bb0d70 100755
+--- a/tools/testing/selftests/bpf/test_tc_tunnel.sh
++++ b/tools/testing/selftests/bpf/test_tc_tunnel.sh
+@@ -305,6 +305,7 @@ else
+ client_connect
+ verify_data
+ server_listen
++ wait_for_port ${port} ${netcat_opt}
+ fi
+
+ # serverside, use BPF for decap
+diff --git a/tools/testing/selftests/bpf/xdp_hw_metadata.c b/tools/testing/selftests/bpf/xdp_hw_metadata.c
+index 6f9956eed797f3..ad6c08dfd6c8cc 100644
+--- a/tools/testing/selftests/bpf/xdp_hw_metadata.c
++++ b/tools/testing/selftests/bpf/xdp_hw_metadata.c
+@@ -79,7 +79,7 @@ static int open_xsk(int ifindex, struct xsk *xsk, __u32 queue_id)
+ .fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
+ .comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
+ .frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
+- .flags = XSK_UMEM__DEFAULT_FLAGS,
++ .flags = XDP_UMEM_TX_METADATA_LEN,
+ .tx_metadata_len = sizeof(struct xsk_tx_metadata),
+ };
+ __u32 idx = 0;
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+index 384cfa3d38a6cd..92c2f0376c081d 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+@@ -142,7 +142,7 @@ function pre_ethtool {
+ }
+
+ function check_table {
+- local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
++ local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
+ local -n expected=$2
+ local last=$3
+
+@@ -212,7 +212,7 @@ function check_tables {
+ }
+
+ function print_table {
+- local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
++ local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
+ read -a have < $path
+
+ tree $NSIM_DEV_DFS/
+@@ -641,7 +641,7 @@ for port in 0 1; do
+ NSIM_NETDEV=`get_netdev_name old_netdevs`
+ ip link set dev $NSIM_NETDEV up
+
+- echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
++ echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
+
+ msg="1 - create VxLANs v6"
+ exp0=( 0 0 0 0 )
+@@ -663,7 +663,7 @@ for port in 0 1; do
+ new_geneve gnv0 20000
+
+ msg="2 - destroy GENEVE"
+- echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
++ echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
+ exp1=( `mke 20000 2` 0 0 0 )
+ del_dev gnv0
+
+@@ -764,7 +764,7 @@ for port in 0 1; do
+ msg="create VxLANs v4"
+ new_vxlan vxlan0 10000 $NSIM_NETDEV
+
+- echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++ echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="NIC device goes down"
+@@ -775,7 +775,7 @@ for port in 0 1; do
+ fi
+ check_tables
+
+- echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++ echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="NIC device goes up again"
+@@ -789,7 +789,7 @@ for port in 0 1; do
+ del_dev vxlan0
+ check_tables
+
+- echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++ echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="destroy NIC"
+@@ -896,7 +896,7 @@ msg="vacate VxLAN in overflow table"
+ exp0=( `mke 10000 1` `mke 10004 1` 0 `mke 10003 1` )
+ del_dev vxlan2
+
+-echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="tunnels destroyed 2"
+diff --git a/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc b/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc
+index 35e8d47d607259..8a7ce647a60d1c 100644
+--- a/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc
++++ b/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc
+@@ -15,11 +15,11 @@ find_alternate_gid() {
+ tac /etc/group | grep -v ":$original_gid:" | head -1 | cut -d: -f3
+ }
+
+-mount_tracefs_with_options() {
++remount_tracefs_with_options() {
+ local mount_point="$1"
+ local options="$2"
+
+- mount -t tracefs -o "$options" nodev "$mount_point"
++ mount -t tracefs -o "remount,$options" nodev "$mount_point"
+
+ setup
+ }
+@@ -81,7 +81,7 @@ test_gid_mount_option() {
+
+ # Unmount existing tracefs instance and mount with new GID
+ unmount_tracefs "$mount_point"
+- mount_tracefs_with_options "$mount_point" "$new_options"
++ remount_tracefs_with_options "$mount_point" "$new_options"
+
+ check_gid "$mount_point" "$other_group"
+
+@@ -92,7 +92,7 @@ test_gid_mount_option() {
+
+ # Unmount and remount with the original GID
+ unmount_tracefs "$mount_point"
+- mount_tracefs_with_options "$mount_point" "$mount_options"
++ remount_tracefs_with_options "$mount_point" "$mount_options"
+ check_gid "$mount_point" "$original_group"
+ }
+
+diff --git a/tools/testing/selftests/kselftest/ktap_helpers.sh b/tools/testing/selftests/kselftest/ktap_helpers.sh
+index 79a125eb24c2e8..14e7f3ec3f84c3 100644
+--- a/tools/testing/selftests/kselftest/ktap_helpers.sh
++++ b/tools/testing/selftests/kselftest/ktap_helpers.sh
+@@ -40,7 +40,7 @@ ktap_skip_all() {
+ __ktap_test() {
+ result="$1"
+ description="$2"
+- directive="$3" # optional
++ directive="${3:-}" # optional
+
+ local directive_str=
+ [ ! -z "$directive" ] && directive_str="# $directive"
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index a5a72415e37b06..666c9fde76da9d 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -760,33 +760,33 @@
+ /* Report with actual signedness to avoid weird output. */ \
+ switch (is_signed_type(__exp) * 2 + is_signed_type(__seen)) { \
+ case 0: { \
+- unsigned long long __exp_print = (uintptr_t)__exp; \
+- unsigned long long __seen_print = (uintptr_t)__seen; \
+- __TH_LOG("Expected %s (%llu) %s %s (%llu)", \
++ uintmax_t __exp_print = (uintmax_t)__exp; \
++ uintmax_t __seen_print = (uintmax_t)__seen; \
++ __TH_LOG("Expected %s (%ju) %s %s (%ju)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+ } \
+ case 1: { \
+- unsigned long long __exp_print = (uintptr_t)__exp; \
+- long long __seen_print = (intptr_t)__seen; \
+- __TH_LOG("Expected %s (%llu) %s %s (%lld)", \
++ uintmax_t __exp_print = (uintmax_t)__exp; \
++ intmax_t __seen_print = (intmax_t)__seen; \
++ __TH_LOG("Expected %s (%ju) %s %s (%jd)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+ } \
+ case 2: { \
+- long long __exp_print = (intptr_t)__exp; \
+- unsigned long long __seen_print = (uintptr_t)__seen; \
+- __TH_LOG("Expected %s (%lld) %s %s (%llu)", \
++ intmax_t __exp_print = (intmax_t)__exp; \
++ uintmax_t __seen_print = (uintmax_t)__seen; \
++ __TH_LOG("Expected %s (%jd) %s %s (%ju)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+ } \
+ case 3: { \
+- long long __exp_print = (intptr_t)__exp; \
+- long long __seen_print = (intptr_t)__seen; \
+- __TH_LOG("Expected %s (%lld) %s %s (%lld)", \
++ intmax_t __exp_print = (intmax_t)__exp; \
++ intmax_t __seen_print = (intmax_t)__seen; \
++ __TH_LOG("Expected %s (%jd) %s %s (%jd)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+diff --git a/tools/testing/selftests/landlock/Makefile b/tools/testing/selftests/landlock/Makefile
+index 348e2dbdb4e0b9..480f13e77fcc4b 100644
+--- a/tools/testing/selftests/landlock/Makefile
++++ b/tools/testing/selftests/landlock/Makefile
+@@ -13,11 +13,11 @@ TEST_GEN_PROGS := $(src_test:.c=)
+ TEST_GEN_PROGS_EXTENDED := true
+
+ # Short targets:
+-$(TEST_GEN_PROGS): LDLIBS += -lcap
++$(TEST_GEN_PROGS): LDLIBS += -lcap -lpthread
+ $(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
+
+ include ../lib.mk
+
+ # Targets with $(OUTPUT)/ prefix:
+-$(TEST_GEN_PROGS): LDLIBS += -lcap
++$(TEST_GEN_PROGS): LDLIBS += -lcap -lpthread
+ $(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 6788762188feac..97d360eae4f69e 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -2003,8 +2003,7 @@ static void test_execute(struct __test_metadata *const _metadata, const int err,
+ ASSERT_EQ(1, WIFEXITED(status));
+ ASSERT_EQ(err ? 2 : 0, WEXITSTATUS(status))
+ {
+- TH_LOG("Unexpected return code for \"%s\": %s", path,
+- strerror(errno));
++ TH_LOG("Unexpected return code for \"%s\"", path);
+ };
+ }
+
+diff --git a/tools/testing/selftests/net/lib/Makefile b/tools/testing/selftests/net/lib/Makefile
+index 82c3264b115eee..704b88b6a8d2a2 100644
+--- a/tools/testing/selftests/net/lib/Makefile
++++ b/tools/testing/selftests/net/lib/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g
++CFLAGS += -Wall -Wl,--no-as-needed -O2 -g
+ CFLAGS += -I../../../../../usr/include/ $(KHDR_INCLUDES)
+ # Additional include paths needed by kselftest.h
+ CFLAGS += -I../../
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 5d796622e73099..580610c46e5aef 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -2,7 +2,7 @@
+
+ top_srcdir = ../../../../..
+
+-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
++CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+
+ TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ simult_flows.sh mptcp_sockopt.sh userspace_pm.sh
+diff --git a/tools/testing/selftests/net/openvswitch/Makefile b/tools/testing/selftests/net/openvswitch/Makefile
+index 2f1508abc826b7..3fd1da2ec07d54 100644
+--- a/tools/testing/selftests/net/openvswitch/Makefile
++++ b/tools/testing/selftests/net/openvswitch/Makefile
+@@ -2,7 +2,7 @@
+
+ top_srcdir = ../../../../..
+
+-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
++CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+
+ TEST_PROGS := openvswitch.sh
+
+diff --git a/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c b/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c
+index 580fcac0a09f31..b71ef8a493ed1a 100644
+--- a/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c
++++ b/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c
+@@ -20,7 +20,7 @@ static int test_gettimeofday(void)
+ gettimeofday(&tv_end, NULL);
+ }
+
+- timersub(&tv_start, &tv_end, &tv_diff);
++ timersub(&tv_end, &tv_start, &tv_diff);
+
+ printf("time = %.6f\n", tv_diff.tv_sec + (tv_diff.tv_usec) * 1e-6);
+
+diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
+index 5b9772cdf2651b..f6156790c3b4df 100644
+--- a/tools/testing/selftests/rseq/rseq.c
++++ b/tools/testing/selftests/rseq/rseq.c
+@@ -61,7 +61,6 @@ unsigned int rseq_size = -1U;
+ unsigned int rseq_flags;
+
+ static int rseq_ownership;
+-static int rseq_reg_success; /* At least one rseq registration has succeded. */
+
+ /* Allocate a large area for the TLS. */
+ #define RSEQ_THREAD_AREA_ALLOC_SIZE 1024
+@@ -152,14 +151,27 @@ int rseq_register_current_thread(void)
+ }
+ rc = sys_rseq(&__rseq_abi, get_rseq_min_alloc_size(), 0, RSEQ_SIG);
+ if (rc) {
+- if (RSEQ_READ_ONCE(rseq_reg_success)) {
++ /*
++ * After at least one thread has registered successfully
++ * (rseq_size > 0), the registration of other threads should
++ * never fail.
++ */
++ if (RSEQ_READ_ONCE(rseq_size) > 0) {
+ /* Incoherent success/failure within process. */
+ abort();
+ }
+ return -1;
+ }
+ assert(rseq_current_cpu_raw() >= 0);
+- RSEQ_WRITE_ONCE(rseq_reg_success, 1);
++
++ /*
++ * The first thread to register sets the rseq_size to mimic the libc
++ * behavior.
++ */
++ if (RSEQ_READ_ONCE(rseq_size) == 0) {
++ RSEQ_WRITE_ONCE(rseq_size, get_rseq_kernel_feature_size());
++ }
++
+ return 0;
+ }
+
+@@ -235,12 +247,18 @@ void rseq_init(void)
+ return;
+ }
+ rseq_ownership = 1;
+- if (!rseq_available()) {
+- rseq_size = 0;
+- return;
+- }
++
++ /* Calculate the offset of the rseq area from the thread pointer. */
+ rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer();
++
++ /* rseq flags are deprecated, always set to 0. */
+ rseq_flags = 0;
++
++ /*
++ * Set the size to 0 until at least one thread registers to mimic the
++ * libc behavior.
++ */
++ rseq_size = 0;
+ }
+
+ static __attribute__((destructor))
+diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
+index 4e217b620e0c7a..062d10925a1011 100644
+--- a/tools/testing/selftests/rseq/rseq.h
++++ b/tools/testing/selftests/rseq/rseq.h
+@@ -60,7 +60,14 @@
+ extern ptrdiff_t rseq_offset;
+
+ /*
+- * Size of the registered rseq area. 0 if the registration was
++ * The rseq ABI is composed of extensible feature fields. The extensions
++ * are done by appending additional fields at the end of the structure.
++ * The rseq_size defines the size of the active feature set which can be
++ * used by the application for the current rseq registration. Features
++ * starting at offset >= rseq_size are inactive and should not be used.
++ *
++ * The rseq_size is the intersection between the available allocation
++ * size for the rseq area and the feature size supported by the kernel.
+ * unsuccessful.
+ */
+ extern unsigned int rseq_size;
+diff --git a/tools/testing/selftests/timers/clocksource-switch.c b/tools/testing/selftests/timers/clocksource-switch.c
+index c5264594064c85..83faa4e354e389 100644
+--- a/tools/testing/selftests/timers/clocksource-switch.c
++++ b/tools/testing/selftests/timers/clocksource-switch.c
+@@ -156,8 +156,8 @@ int main(int argc, char **argv)
+ /* Check everything is sane before we start switching asynchronously */
+ if (do_sanity_check) {
+ for (i = 0; i < count; i++) {
+- printf("Validating clocksource %s\n",
+- clocksource_list[i]);
++ ksft_print_msg("Validating clocksource %s\n",
++ clocksource_list[i]);
+ if (change_clocksource(clocksource_list[i])) {
+ status = -1;
+ goto out;
+@@ -169,7 +169,7 @@ int main(int argc, char **argv)
+ }
+ }
+
+- printf("Running Asynchronous Switching Tests...\n");
++ ksft_print_msg("Running Asynchronous Switching Tests...\n");
+ pid = fork();
+ if (!pid)
+ return run_tests(runtime);
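A side note on the turbostat pmt_mmio_open() hunk above: the old length
calculation "(size + 0x1000UL) & (~0x1000UL)" does not yield a page-aligned
value, because ~0x1000UL clears only bit 12 rather than the low 12 bits. The
new ROUND_UP_TO_PAGE_SIZE() macro is the usual round-up-to-a-power-of-two
idiom. A standalone sketch, not part of the patch, using the macro exactly as
the patch defines it (the 4 KiB page size is hard-coded, as in turbostat):

  #include <assert.h>
  #include <stdio.h>

  /* Same definition as in the patch above. */
  #define ROUND_UP_TO_PAGE_SIZE(n) (((n) + 0x1000UL-1UL) & ~(0x1000UL-1UL))

  int main(void)
  {
  	/* Old code: clears only bit 12, so the result stays unaligned. */
  	unsigned long old = (0x1234UL + 0x1000UL) & (~0x1000UL);

  	printf("old: %#lx\n", old);                             /* 0x2234 */
  	printf("new: %#lx\n", ROUND_UP_TO_PAGE_SIZE(0x1234UL)); /* 0x2000 */

  	assert(ROUND_UP_TO_PAGE_SIZE(0x1000UL) == 0x1000UL); /* exact multiple kept */
  	assert(ROUND_UP_TO_PAGE_SIZE(0x0001UL) == 0x1000UL); /* rounded up */
  	return 0;
  }

Adding (page size - 1) before masking with ~(page size - 1) rounds any length
up to the next page boundary while leaving exact multiples unchanged, which
is what the mmap() length calculation wants.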
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-16 21:48 Mike Pagano
From: Mike Pagano @ 2025-02-16 21:48 UTC (permalink / raw
To: gentoo-commits
commit: f8e6e0a09a78ef67abed5a29f23c6a2db0d259e9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 16 21:48:06 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Feb 16 21:48:06 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f8e6e0a0
fortify: Hide run-time copy size from value range tracking
Bug: https://bugs.gentoo.org/947270
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
...ortify-copy-size-value-range-tracking-fix.patch | 161 +++++++++++++++++++++
2 files changed, 165 insertions(+)
diff --git a/0000_README b/0000_README
index ceb862e7..499702fa 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-6.12.13.patch
From: https://www.kernel.org
Desc: Linux 6.12.13
+Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
+From: https://git.kernel.org/
+Desc: fortify: Hide run-time copy size from value range tracking
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1500_fortify-copy-size-value-range-tracking-fix.patch b/1500_fortify-copy-size-value-range-tracking-fix.patch
new file mode 100644
index 00000000..f751e02c
--- /dev/null
+++ b/1500_fortify-copy-size-value-range-tracking-fix.patch
@@ -0,0 +1,161 @@
+From 239d87327dcd361b0098038995f8908f3296864f Mon Sep 17 00:00:00 2001
+From: Kees Cook <kees@kernel.org>
+Date: Thu, 12 Dec 2024 17:28:06 -0800
+Subject: fortify: Hide run-time copy size from value range tracking
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+GCC performs value range tracking for variables as a way to provide better
+diagnostics. One place this is regularly seen is with warnings associated
+with bounds-checking, e.g. -Wstringop-overflow, -Wstringop-overread,
+-Warray-bounds, etc. In order to keep the signal-to-noise ratio high,
+warnings aren't emitted when a value range spans the entire range
+representable by a given variable. For example:
+
+ unsigned int len;
+ char dst[8];
+ ...
+ memcpy(dst, src, len);
+
+If len's value is unknown, it has the full "unsigned int" range of [0,
+UINT_MAX], and GCC's compile-time bounds checks against memcpy() will
+be ignored. However, when a code path has been able to narrow the range:
+
+ if (len > 16)
+ return;
+ memcpy(dst, src, len);
+
+Then the range will be updated for the execution path. Above, len is
+now [0, 16] when reading memcpy(), so depending on other optimizations,
+we might see a -Wstringop-overflow warning like:
+
+ error: '__builtin_memcpy' writing between 9 and 16 bytes into region of size 8 [-Werror=stringop-overflow]
+
+When building with CONFIG_FORTIFY_SOURCE, the fortified run-time bounds
+checking can appear to narrow value ranges of lengths for memcpy(),
+depending on how the compiler constructs the execution paths during
+optimization passes, due to the checks against the field sizes. For
+example:
+
+ if (p_size_field != SIZE_MAX &&
+ p_size != p_size_field && p_size_field < size)
+
+As intentionally designed, these checks only affect the kernel warnings
+emitted at run-time and do not block the potentially overflowing memcpy(),
+so GCC thinks it needs to produce a warning about the resulting value
+range that might be reaching the memcpy().
+
+We have seen this manifest a few times now, with the most recent being
+with cpumasks:
+
+In function ‘bitmap_copy’,
+ inlined from ‘cpumask_copy’ at ./include/linux/cpumask.h:839:2,
+ inlined from ‘__padata_set_cpumasks’ at kernel/padata.c:730:2:
+./include/linux/fortify-string.h:114:33: error: ‘__builtin_memcpy’ reading between 257 and 536870904 bytes from a region of size 256 [-Werror=stringop-overread]
+ 114 | #define __underlying_memcpy __builtin_memcpy
+ | ^
+./include/linux/fortify-string.h:633:9: note: in expansion of macro ‘__underlying_memcpy’
+ 633 | __underlying_##op(p, q, __fortify_size); \
+ | ^~~~~~~~~~~~~
+./include/linux/fortify-string.h:678:26: note: in expansion of macro ‘__fortify_memcpy_chk’
+ 678 | #define memcpy(p, q, s) __fortify_memcpy_chk(p, q, s, \
+ | ^~~~~~~~~~~~~~~~~~~~
+./include/linux/bitmap.h:259:17: note: in expansion of macro ‘memcpy’
+ 259 | memcpy(dst, src, len);
+ | ^~~~~~
+kernel/padata.c: In function ‘__padata_set_cpumasks’:
+kernel/padata.c:713:48: note: source object ‘pcpumask’ of size [0, 256]
+ 713 | cpumask_var_t pcpumask,
+ | ~~~~~~~~~~~~~~^~~~~~~~
+
+This warning is _not_ emitted when CONFIG_FORTIFY_SOURCE is disabled,
+and with the recent -fdiagnostics-details we can confirm the origin of
+the warning is due to FORTIFY's bounds checking:
+
+../include/linux/bitmap.h:259:17: note: in expansion of macro 'memcpy'
+ 259 | memcpy(dst, src, len);
+ | ^~~~~~
+ '__padata_set_cpumasks': events 1-2
+../include/linux/fortify-string.h:613:36:
+ 612 | if (p_size_field != SIZE_MAX &&
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ 613 | p_size != p_size_field && p_size_field < size)
+ | ~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
+ | |
+ | (1) when the condition is evaluated to false
+ | (2) when the condition is evaluated to true
+ '__padata_set_cpumasks': event 3
+ 114 | #define __underlying_memcpy __builtin_memcpy
+ | ^
+ | |
+ | (3) out of array bounds here
+
+Note that the cpumask warning started appearing since bitmap functions
+were recently marked __always_inline in commit ed8cd2b3bd9f ("bitmap:
+Switch from inline to __always_inline"), which allowed GCC to gain
+visibility into the variables as they passed through the FORTIFY
+implementation.
+
+In order to silence these false positives but keep otherwise deterministic
+compile-time warnings intact, hide the length variable from GCC with
+OPTIMIZE_HIDE_VAR() before calling the builtin memcpy.
+
+Additionally add a comment about why all the macro args have copies with
+const storage.
+
+Reported-by: "Thomas Weißschuh" <linux@weissschuh.net>
+Closes: https://lore.kernel.org/all/db7190c8-d17f-4a0d-bc2f-5903c79f36c2@t-8ch.de/
+Reported-by: Nilay Shroff <nilay@linux.ibm.com>
+Closes: https://lore.kernel.org/all/20241112124127.1666300-1-nilay@linux.ibm.com/
+Tested-by: Nilay Shroff <nilay@linux.ibm.com>
+Acked-by: Yury Norov <yury.norov@gmail.com>
+Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Kees Cook <kees@kernel.org>
+---
+ include/linux/fortify-string.h | 14 +++++++++++++-
+ 1 file changed, 13 insertions(+), 1 deletion(-)
+
+(limited to 'include/linux/fortify-string.h')
+
+diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
+index 0d99bf11d260a3..e4ce1cae03bf77 100644
+--- a/include/linux/fortify-string.h
++++ b/include/linux/fortify-string.h
+@@ -616,6 +616,12 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ return false;
+ }
+
++/*
++ * To work around what seems to be an optimizer bug, the macro arguments
++ * need to have const copies or the values end up changed by the time they
++ * reach fortify_warn_once(). See commit 6f7630b1b5bc ("fortify: Capture
++ * __bos() results in const temp vars") for more details.
++ */
+ #define __fortify_memcpy_chk(p, q, size, p_size, q_size, \
+ p_size_field, q_size_field, op) ({ \
+ const size_t __fortify_size = (size_t)(size); \
+@@ -623,6 +629,8 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ const size_t __q_size = (q_size); \
+ const size_t __p_size_field = (p_size_field); \
+ const size_t __q_size_field = (q_size_field); \
++ /* Keep a mutable version of the size for the final copy. */ \
++ size_t __copy_size = __fortify_size; \
+ fortify_warn_once(fortify_memcpy_chk(__fortify_size, __p_size, \
+ __q_size, __p_size_field, \
+ __q_size_field, FORTIFY_FUNC_ ##op), \
+@@ -630,7 +638,11 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ __fortify_size, \
+ "field \"" #p "\" at " FILE_LINE, \
+ __p_size_field); \
+- __underlying_##op(p, q, __fortify_size); \
++ /* Hide only the run-time size from value range tracking to */ \
++ /* silence compile-time false positive bounds warnings. */ \
++ if (!__builtin_constant_p(__copy_size)) \
++ OPTIMIZER_HIDE_VAR(__copy_size); \
++ __underlying_##op(p, q, __copy_size); \
+ })
+
+ /*
+--
+cgit 1.2.3-korg
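For readers unfamiliar with OPTIMIZER_HIDE_VAR(), here is a minimal
user-space sketch of the mechanism the commit message describes. It assumes
GCC or Clang inline asm and reproduces the macro roughly as
include/linux/compiler.h defines it; the illustrative fill_local() function
deliberately mirrors the warning scenario from the commit message, so it
demonstrates the diagnostic, not a safe copy:

  #include <string.h>

  /* An empty asm statement that claims to consume and regenerate 'var':
   * it emits no instructions, but the compiler has to discard whatever
   * value range it had inferred for 'var'. */
  #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

  void fill_local(const char *src, unsigned int len)
  {
  	char dst[8];

  	if (len > 16)           /* narrows len's tracked range to [0, 16] */
  		return;

  	/* Without the next line, GCC at -O2 may warn that memcpy() can
  	 * write between 9 and 16 bytes into the 8-byte 'dst'. With it,
  	 * len's range is unknown again and the compile-time warning is
  	 * silenced; in the kernel, FORTIFY's run-time check still fires. */
  	OPTIMIZER_HIDE_VAR(len);
  	memcpy(dst, src, len);
  }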
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-17 11:16 Mike Pagano
From: Mike Pagano @ 2025-02-17 11:16 UTC (permalink / raw
To: gentoo-commits
commit: ac1b056c5231ef785a638ac21cacdd2697fd115c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 17 11:16:10 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 17 11:16:10 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ac1b056c
Linux patch 6.12.14
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1013_linux-6.12.14.patch | 17568 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 17572 insertions(+)
diff --git a/0000_README b/0000_README
index 499702fa..c6c607fe 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-6.12.13.patch
From: https://www.kernel.org
Desc: Linux 6.12.13
+Patch: 1013_linux-6.12.14.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.14
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1013_linux-6.12.14.patch b/1013_linux-6.12.14.patch
new file mode 100644
index 00000000..5243c324
--- /dev/null
+++ b/1013_linux-6.12.14.patch
@@ -0,0 +1,17568 @@
+diff --git a/Documentation/arch/arm64/elf_hwcaps.rst b/Documentation/arch/arm64/elf_hwcaps.rst
+index 694f67fa07d196..ab556426c7ac24 100644
+--- a/Documentation/arch/arm64/elf_hwcaps.rst
++++ b/Documentation/arch/arm64/elf_hwcaps.rst
+@@ -174,22 +174,28 @@ HWCAP2_DCPODP
+ Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
+
+ HWCAP2_SVE2
+- Functionality implied by ID_AA64ZFR0_EL1.SVEver == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SVEver == 0b0001.
+
+ HWCAP2_SVEAES
+- Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.AES == 0b0001.
+
+ HWCAP2_SVEPMULL
+- Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0010.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.AES == 0b0010.
+
+ HWCAP2_SVEBITPERM
+- Functionality implied by ID_AA64ZFR0_EL1.BitPerm == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.BitPerm == 0b0001.
+
+ HWCAP2_SVESHA3
+- Functionality implied by ID_AA64ZFR0_EL1.SHA3 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SHA3 == 0b0001.
+
+ HWCAP2_SVESM4
+- Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SM4 == 0b0001.
+
+ HWCAP2_FLAGM2
+ Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0010.
+@@ -198,16 +204,20 @@ HWCAP2_FRINT
+ Functionality implied by ID_AA64ISAR1_EL1.FRINTTS == 0b0001.
+
+ HWCAP2_SVEI8MM
+- Functionality implied by ID_AA64ZFR0_EL1.I8MM == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.I8MM == 0b0001.
+
+ HWCAP2_SVEF32MM
+- Functionality implied by ID_AA64ZFR0_EL1.F32MM == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.F32MM == 0b0001.
+
+ HWCAP2_SVEF64MM
+- Functionality implied by ID_AA64ZFR0_EL1.F64MM == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.F64MM == 0b0001.
+
+ HWCAP2_SVEBF16
+- Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.BF16 == 0b0001.
+
+ HWCAP2_I8MM
+ Functionality implied by ID_AA64ISAR1_EL1.I8MM == 0b0001.
+@@ -273,7 +283,8 @@ HWCAP2_EBF16
+ Functionality implied by ID_AA64ISAR1_EL1.BF16 == 0b0010.
+
+ HWCAP2_SVE_EBF16
+- Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0010.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.BF16 == 0b0010.
+
+ HWCAP2_CSSC
+ Functionality implied by ID_AA64ISAR2_EL1.CSSC == 0b0001.
+@@ -282,7 +293,8 @@ HWCAP2_RPRFM
+ Functionality implied by ID_AA64ISAR2_EL1.RPRFM == 0b0001.
+
+ HWCAP2_SVE2P1
+- Functionality implied by ID_AA64ZFR0_EL1.SVEver == 0b0010.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SVEver == 0b0010.
+
+ HWCAP2_SME2
+ Functionality implied by ID_AA64SMFR0_EL1.SMEver == 0b0001.
+@@ -309,7 +321,8 @@ HWCAP2_HBC
+ Functionality implied by ID_AA64ISAR2_EL1.BC == 0b0001.
+
+ HWCAP2_SVE_B16B16
+- Functionality implied by ID_AA64ZFR0_EL1.B16B16 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.B16B16 == 0b0001.
+
+ HWCAP2_LRCPC3
+ Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0011.
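Every SVE-related hwcap in the hunk above gains the same qualifier: the base SVE field in ID_AA64PFR0_EL1 must also be set, matching what the kernel actually checks before exposing the capability. Read from userspace, the rule means testing both auxv words. An illustrative program, assuming a Linux arm64 toolchain that ships <asm/hwcap.h>:

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>	/* HWCAP_SVE, HWCAP2_SVE2 on arm64 */

int main(void)
{
	unsigned long hwcap  = getauxval(AT_HWCAP);
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	/* Per the updated documentation, SVE2 is only meaningful when
	 * the base SVE capability is advertised as well. */
	if ((hwcap & HWCAP_SVE) && (hwcap2 & HWCAP2_SVE2))
		puts("SVE2 usable");
	else
		puts("SVE2 not advertised");

	return 0;
}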
+diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst
+index c3e58856f75b36..96c03b9a644e4f 100644
+--- a/Documentation/gpu/drm-kms-helpers.rst
++++ b/Documentation/gpu/drm-kms-helpers.rst
+@@ -230,6 +230,9 @@ Panel Helper Reference
+ .. kernel-doc:: drivers/gpu/drm/drm_panel_orientation_quirks.c
+ :export:
+
++.. kernel-doc:: drivers/gpu/drm/drm_panel_backlight_quirks.c
++ :export:
++
+ Panel Self Refresh Helper Reference
+ ===================================
+
+diff --git a/Makefile b/Makefile
+index 5442ff45f963ed..26a471dbed62a5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi b/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi
+index 6e67d99832ac25..ba7fdaae9c6e6d 100644
+--- a/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi
+@@ -12,6 +12,7 @@ &l4_cfg { /* 0x4a000000 */
+ ranges = <0x00000000 0x4a000000 0x100000>, /* segment 0 */
+ <0x00100000 0x4a100000 0x100000>, /* segment 1 */
+ <0x00200000 0x4a200000 0x100000>; /* segment 2 */
++ dma-ranges;
+
+ segment@0 { /* 0x4a000000 */
+ compatible = "simple-pm-bus";
+@@ -557,6 +558,7 @@ segment@100000 { /* 0x4a100000 */
+ <0x0007e000 0x0017e000 0x001000>, /* ap 124 */
+ <0x00059000 0x00159000 0x001000>, /* ap 125 */
+ <0x0005a000 0x0015a000 0x001000>; /* ap 126 */
++ dma-ranges;
+
+ target-module@2000 { /* 0x4a102000, ap 27 3c.0 */
+ compatible = "ti,sysc";
+diff --git a/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi b/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi
+index 3661340009e7a4..8dca2bed941b64 100644
+--- a/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi
+@@ -446,6 +446,7 @@ &omap3_pmx_core2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <
+ &hsusb2_2_pins
++ &mcspi3hog_pins
+ >;
+
+ hsusb2_2_pins: hsusb2-2-pins {
+@@ -459,6 +460,15 @@ OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d15.hsusb2_d
+ >;
+ };
+
++ mcspi3hog_pins: mcspi3hog-pins {
++ pinctrl-single,pins = <
++ OMAP3630_CORE2_IOPAD(0x25dc, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d0 */
++ OMAP3630_CORE2_IOPAD(0x25de, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d1 */
++ OMAP3630_CORE2_IOPAD(0x25e0, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d2 */
++ OMAP3630_CORE2_IOPAD(0x25e2, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d3 */
++ >;
++ };
++
+ spi_gpio_pins: spi-gpio-pinmux-pins {
+ pinctrl-single,pins = <
+ OMAP3630_CORE2_IOPAD(0x25d8, PIN_OUTPUT | MUX_MODE4) /* clk */
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 07ae3c8e897b7d..22924f61ec9ed2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -290,11 +290,6 @@ dsi_out: endpoint {
+ };
+ };
+
+-&dpi0 {
+- /* TODO Re-enable after DP to Type-C port muxing can be described */
+- status = "disabled";
+-};
+-
+ &gic {
+ mediatek,broken-save-restore-fw;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 9cd5e0cef02a29..5cb6bd3c5acbb0 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1846,6 +1846,7 @@ dpi0: dpi@14015000 {
+ <&mmsys CLK_MM_DPI_MM>,
+ <&apmixedsys CLK_APMIXED_TVDPLL>;
+ clock-names = "pixel", "engine", "pll";
++ status = "disabled";
+
+ port {
+ dpi_out: endpoint { };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+index 570331baa09ee3..2601b43b2d8cad 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+@@ -3815,7 +3815,7 @@ sce-fabric@b600000 {
+ compatible = "nvidia,tegra234-sce-fabric";
+ reg = <0x0 0xb600000 0x0 0x40000>;
+ interrupts = <GIC_SPI 173 IRQ_TYPE_LEVEL_HIGH>;
+- status = "okay";
++ status = "disabled";
+ };
+
+ rce-fabric@be00000 {
+@@ -3995,7 +3995,7 @@ bpmp-fabric@d600000 {
+ };
+
+ dce-fabric@de00000 {
+- compatible = "nvidia,tegra234-sce-fabric";
++ compatible = "nvidia,tegra234-dce-fabric";
+ reg = <0x0 0xde00000 0x0 0x40000>;
+ interrupts = <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+@@ -4018,6 +4018,8 @@ gic: interrupt-controller@f400000 {
+ #redistributor-regions = <1>;
+ #interrupt-cells = <3>;
+ interrupt-controller;
++
++ #address-cells = <0>;
+ };
+
+ smmu_iso: iommu@10000000 {
+diff --git a/arch/arm64/boot/dts/qcom/sdx75.dtsi b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+index dcb925348e3f31..60a5d6d3ca7cc8 100644
+--- a/arch/arm64/boot/dts/qcom/sdx75.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+@@ -893,7 +893,7 @@ tcsr: syscon@1fc0000 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sdx75-mpss-pas";
+- reg = <0 0x04080000 0 0x4040>;
++ reg = <0 0x04080000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 250 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 41216cc319d65e..4adadfd1e51ae9 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -2027,7 +2027,7 @@ dispcc: clock-controller@5f00000 {
+
+ remoteproc_mpss: remoteproc@6080000 {
+ compatible = "qcom,sm6115-mpss-pas";
+- reg = <0x0 0x06080000 0x0 0x100>;
++ reg = <0x0 0x06080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 307 IRQ_TYPE_EDGE_RISING>,
+ <&modem_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2670,9 +2670,9 @@ funnel_apss1_in: endpoint {
+ };
+ };
+
+- remoteproc_adsp: remoteproc@ab00000 {
++ remoteproc_adsp: remoteproc@a400000 {
+ compatible = "qcom,sm6115-adsp-pas";
+- reg = <0x0 0x0ab00000 0x0 0x100>;
++ reg = <0x0 0x0a400000 0x0 0x4040>;
+
+ interrupts-extended = <&intc GIC_SPI 282 IRQ_TYPE_EDGE_RISING>,
+ <&adsp_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2744,7 +2744,7 @@ compute-cb@7 {
+
+ remoteproc_cdsp: remoteproc@b300000 {
+ compatible = "qcom,sm6115-cdsp-pas";
+- reg = <0x0 0x0b300000 0x0 0x100000>;
++ reg = <0x0 0x0b300000 0x0 0x4040>;
+
+ interrupts-extended = <&intc GIC_SPI 265 IRQ_TYPE_EDGE_RISING>,
+ <&cdsp_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 4f8477de7e1b1e..10418fccfea24f 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -936,7 +936,7 @@ uart1: serial@884000 {
+ power-domains = <&rpmhpd SM6350_CX>;
+ operating-points-v2 = <&qup_opp_table>;
+ interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+- <&aggre1_noc MASTER_QUP_0 0 &clk_virt SLAVE_EBI_CH0 0>;
++ <&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_QUP_0 0>;
+ interconnect-names = "qup-core", "qup-config";
+ status = "disabled";
+ };
+@@ -1283,7 +1283,7 @@ tcsr_mutex: hwlock@1f40000 {
+
+ adsp: remoteproc@3000000 {
+ compatible = "qcom,sm6350-adsp-pas";
+- reg = <0 0x03000000 0 0x100>;
++ reg = <0x0 0x03000000 0x0 0x10000>;
+
+ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -1503,7 +1503,7 @@ gpucc: clock-controller@3d90000 {
+
+ mpss: remoteproc@4080000 {
+ compatible = "qcom,sm6350-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_EDGE_RISING>,
+ <&modem_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm6375.dtsi b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+index 72e01437ded125..01371f41f7906b 100644
+--- a/arch/arm64/boot/dts/qcom/sm6375.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+@@ -1516,9 +1516,9 @@ gpucc: clock-controller@5990000 {
+ #power-domain-cells = <1>;
+ };
+
+- remoteproc_mss: remoteproc@6000000 {
++ remoteproc_mss: remoteproc@6080000 {
+ compatible = "qcom,sm6375-mpss-pas";
+- reg = <0 0x06000000 0 0x4040>;
++ reg = <0x0 0x06080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 307 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -1559,7 +1559,7 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+
+ remoteproc_adsp: remoteproc@a400000 {
+ compatible = "qcom,sm6375-adsp-pas";
+- reg = <0 0x0a400000 0 0x100>;
++ reg = <0 0x0a400000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 282 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -1595,9 +1595,9 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+ };
+ };
+
+- remoteproc_cdsp: remoteproc@b000000 {
++ remoteproc_cdsp: remoteproc@b300000 {
+ compatible = "qcom,sm6375-cdsp-pas";
+- reg = <0x0 0x0b000000 0x0 0x100000>;
++ reg = <0x0 0x0b300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 265 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 041750d71e4550..46adf10e5fe4d6 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1876,6 +1876,142 @@ tcsr: syscon@1fc0000 {
+ reg = <0x0 0x1fc0000 0x0 0x30000>;
+ };
+
++ adsp: remoteproc@3000000 {
++ compatible = "qcom,sm8350-adsp-pas";
++ reg = <0x0 0x03000000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx", "lmx";
++
++ memory-region = <&pil_adsp_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ apr {
++ compatible = "qcom,apr-v2";
++ qcom,glink-channels = "apr_audio_svc";
++ qcom,domain = <APR_DOMAIN_ADSP>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ service@3 {
++ reg = <APR_SVC_ADSP_CORE>;
++ compatible = "qcom,q6core";
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++ };
++
++ q6afe: service@4 {
++ compatible = "qcom,q6afe";
++ reg = <APR_SVC_AFE>;
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++
++ q6afedai: dais {
++ compatible = "qcom,q6afe-dais";
++ #address-cells = <1>;
++ #size-cells = <0>;
++ #sound-dai-cells = <1>;
++ };
++
++ q6afecc: clock-controller {
++ compatible = "qcom,q6afe-clocks";
++ #clock-cells = <2>;
++ };
++ };
++
++ q6asm: service@7 {
++ compatible = "qcom,q6asm";
++ reg = <APR_SVC_ASM>;
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++
++ q6asmdai: dais {
++ compatible = "qcom,q6asm-dais";
++ #address-cells = <1>;
++ #size-cells = <0>;
++ #sound-dai-cells = <1>;
++ iommus = <&apps_smmu 0x1801 0x0>;
++
++ dai@0 {
++ reg = <0>;
++ };
++
++ dai@1 {
++ reg = <1>;
++ };
++
++ dai@2 {
++ reg = <2>;
++ };
++ };
++ };
++
++ q6adm: service@8 {
++ compatible = "qcom,q6adm";
++ reg = <APR_SVC_ADM>;
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++
++ q6routing: routing {
++ compatible = "qcom,q6adm-routing";
++ #sound-dai-cells = <0>;
++ };
++ };
++ };
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1803 0x0>;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1804 0x0>;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1805 0x0>;
++ };
++ };
++ };
++ };
++
+ lpass_tlmm: pinctrl@33c0000 {
+ compatible = "qcom,sm8350-lpass-lpi-pinctrl";
+ reg = <0 0x033c0000 0 0x20000>,
+@@ -2078,7 +2214,7 @@ lpass_ag_noc: interconnect@3c40000 {
+
+ mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8350-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2360,6 +2496,115 @@ compute_noc: interconnect@a0c0000 {
+ qcom,bcm-voters = <&apps_bcm_voter>;
+ };
+
++ cdsp: remoteproc@a300000 {
++ compatible = "qcom,sm8350-cdsp-pas";
++ reg = <0x0 0x0a300000 0x0 0x10000>;
++
++ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_CX>,
++ <&rpmhpd RPMHPD_MXC>;
++ power-domain-names = "cx", "mxc";
++
++ interconnects = <&compute_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
++
++ memory-region = <&pil_cdsp_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_cdsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_CDSP
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_CDSP
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "cdsp";
++ qcom,remote-pid = <5>;
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "cdsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@1 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <1>;
++ iommus = <&apps_smmu 0x2161 0x0400>,
++ <&apps_smmu 0x1181 0x0420>;
++ };
++
++ compute-cb@2 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <2>;
++ iommus = <&apps_smmu 0x2162 0x0400>,
++ <&apps_smmu 0x1182 0x0420>;
++ };
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x2163 0x0400>,
++ <&apps_smmu 0x1183 0x0420>;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x2164 0x0400>,
++ <&apps_smmu 0x1184 0x0420>;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x2165 0x0400>,
++ <&apps_smmu 0x1185 0x0420>;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++ iommus = <&apps_smmu 0x2166 0x0400>,
++ <&apps_smmu 0x1186 0x0420>;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++ iommus = <&apps_smmu 0x2167 0x0400>,
++ <&apps_smmu 0x1187 0x0420>;
++ };
++
++ compute-cb@8 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <8>;
++ iommus = <&apps_smmu 0x2168 0x0400>,
++ <&apps_smmu 0x1188 0x0420>;
++ };
++
++ /* note: secure cb9 in downstream */
++ };
++ };
++ };
++
+ usb_1: usb@a6f8800 {
+ compatible = "qcom,sm8350-dwc3", "qcom,dwc3";
+ reg = <0 0x0a6f8800 0 0x400>;
+@@ -3284,142 +3529,6 @@ apps_smmu: iommu@15000000 {
+ <GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- adsp: remoteproc@17300000 {
+- compatible = "qcom,sm8350-adsp-pas";
+- reg = <0 0x17300000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx", "lmx";
+-
+- memory-region = <&pil_adsp_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- apr {
+- compatible = "qcom,apr-v2";
+- qcom,glink-channels = "apr_audio_svc";
+- qcom,domain = <APR_DOMAIN_ADSP>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- service@3 {
+- reg = <APR_SVC_ADSP_CORE>;
+- compatible = "qcom,q6core";
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+- };
+-
+- q6afe: service@4 {
+- compatible = "qcom,q6afe";
+- reg = <APR_SVC_AFE>;
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+-
+- q6afedai: dais {
+- compatible = "qcom,q6afe-dais";
+- #address-cells = <1>;
+- #size-cells = <0>;
+- #sound-dai-cells = <1>;
+- };
+-
+- q6afecc: clock-controller {
+- compatible = "qcom,q6afe-clocks";
+- #clock-cells = <2>;
+- };
+- };
+-
+- q6asm: service@7 {
+- compatible = "qcom,q6asm";
+- reg = <APR_SVC_ASM>;
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+-
+- q6asmdai: dais {
+- compatible = "qcom,q6asm-dais";
+- #address-cells = <1>;
+- #size-cells = <0>;
+- #sound-dai-cells = <1>;
+- iommus = <&apps_smmu 0x1801 0x0>;
+-
+- dai@0 {
+- reg = <0>;
+- };
+-
+- dai@1 {
+- reg = <1>;
+- };
+-
+- dai@2 {
+- reg = <2>;
+- };
+- };
+- };
+-
+- q6adm: service@8 {
+- compatible = "qcom,q6adm";
+- reg = <APR_SVC_ADM>;
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+-
+- q6routing: routing {
+- compatible = "qcom,q6adm-routing";
+- #sound-dai-cells = <0>;
+- };
+- };
+- };
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1803 0x0>;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1804 0x0>;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1805 0x0>;
+- };
+- };
+- };
+- };
+-
+ intc: interrupt-controller@17a00000 {
+ compatible = "arm,gic-v3";
+ #interrupt-cells = <3>;
+@@ -3588,115 +3697,6 @@ cpufreq_hw: cpufreq@18591000 {
+ #freq-domain-cells = <1>;
+ #clock-cells = <1>;
+ };
+-
+- cdsp: remoteproc@98900000 {
+- compatible = "qcom,sm8350-cdsp-pas";
+- reg = <0 0x98900000 0 0x1400000>;
+-
+- interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_CX>,
+- <&rpmhpd RPMHPD_MXC>;
+- power-domain-names = "cx", "mxc";
+-
+- interconnects = <&compute_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
+-
+- memory-region = <&pil_cdsp_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_cdsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_CDSP
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_CDSP
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "cdsp";
+- qcom,remote-pid = <5>;
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "cdsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@1 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <1>;
+- iommus = <&apps_smmu 0x2161 0x0400>,
+- <&apps_smmu 0x1181 0x0420>;
+- };
+-
+- compute-cb@2 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <2>;
+- iommus = <&apps_smmu 0x2162 0x0400>,
+- <&apps_smmu 0x1182 0x0420>;
+- };
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x2163 0x0400>,
+- <&apps_smmu 0x1183 0x0420>;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x2164 0x0400>,
+- <&apps_smmu 0x1184 0x0420>;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x2165 0x0400>,
+- <&apps_smmu 0x1185 0x0420>;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+- iommus = <&apps_smmu 0x2166 0x0400>,
+- <&apps_smmu 0x1186 0x0420>;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+- iommus = <&apps_smmu 0x2167 0x0400>,
+- <&apps_smmu 0x1187 0x0420>;
+- };
+-
+- compute-cb@8 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <8>;
+- iommus = <&apps_smmu 0x2168 0x0400>,
+- <&apps_smmu 0x1188 0x0420>;
+- };
+-
+- /* note: secure cb9 in downstream */
+- };
+- };
+- };
+ };
+
+ thermal_zones: thermal-zones {
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index f7d52e491b694b..d664a88a018efb 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -2492,6 +2492,112 @@ compute-cb@3 {
+ };
+ };
+
++ remoteproc_adsp: remoteproc@3000000 {
++ compatible = "qcom,sm8450-adsp-pas";
++ reg = <0x0 0x03000000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx", "lmx";
++
++ memory-region = <&adsp_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ remoteproc_adsp_glink: glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1801 0x0>;
++ };
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1803 0x0>;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1804 0x0>;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1805 0x0>;
++ };
++ };
++ };
++ };
++
+ wsa2macro: codec@31e0000 {
+ compatible = "qcom,sm8450-lpass-wsa-macro";
+ reg = <0 0x031e0000 0 0x1000>;
+@@ -2688,115 +2794,9 @@ vamacro: codec@33f0000 {
+ status = "disabled";
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,sm8450-adsp-pas";
+- reg = <0 0x30000000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx", "lmx";
+-
+- memory-region = <&adsp_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- remoteproc_adsp_glink: glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1801 0x0>;
+- };
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1803 0x0>;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1804 0x0>;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1805 0x0>;
+- };
+- };
+- };
+- };
+-
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,sm8450-cdsp-pas";
+- reg = <0 0x32300000 0 0x1400000>;
++ reg = <0 0x32300000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2903,7 +2903,7 @@ compute-cb@8 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8450-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 9dc0ee3eb98f87..9ecf4a7fc3287a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -2313,7 +2313,7 @@ ipa: ipa@3f40000 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8550-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2353,6 +2353,137 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+ };
+ };
+
++ remoteproc_adsp: remoteproc@6800000 {
++ compatible = "qcom,sm8550-adsp-pas";
++ reg = <0x0 0x06800000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx", "lmx";
++
++ interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC 0 &mc_virt SLAVE_EBI1 0>;
++
++ memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ remoteproc_adsp_glink: glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1003 0x80>,
++ <&apps_smmu 0x1063 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1004 0x80>,
++ <&apps_smmu 0x1064 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1005 0x80>,
++ <&apps_smmu 0x1065 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++ iommus = <&apps_smmu 0x1006 0x80>,
++ <&apps_smmu 0x1066 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++ iommus = <&apps_smmu 0x1007 0x80>,
++ <&apps_smmu 0x1067 0x0>;
++ dma-coherent;
++ };
++ };
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1001 0x80>,
++ <&apps_smmu 0x1061 0x0>;
++ };
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++ };
++ };
++
+ lpass_wsa2macro: codec@6aa0000 {
+ compatible = "qcom,sm8550-lpass-wsa-macro";
+ reg = <0 0x06aa0000 0 0x1000>;
+@@ -2871,9 +3002,8 @@ mdss: display-subsystem@ae00000 {
+
+ power-domains = <&dispcc MDSS_GDSC>;
+
+- interconnects = <&mmss_noc MASTER_MDP 0 &gem_noc SLAVE_LLCC 0>,
+- <&mc_virt MASTER_LLCC 0 &mc_virt SLAVE_EBI1 0>;
+- interconnect-names = "mdp0-mem", "mdp1-mem";
++ interconnects = <&mmss_noc MASTER_MDP 0 &mc_virt SLAVE_EBI1 0>;
++ interconnect-names = "mdp0-mem";
+
+ iommus = <&apps_smmu 0x1c00 0x2>;
+
+@@ -4575,137 +4705,6 @@ system-cache-controller@25000000 {
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,sm8550-adsp-pas";
+- reg = <0x0 0x30000000 0x0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx", "lmx";
+-
+- interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC 0 &mc_virt SLAVE_EBI1 0>;
+-
+- memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- remoteproc_adsp_glink: glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1003 0x80>,
+- <&apps_smmu 0x1063 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1004 0x80>,
+- <&apps_smmu 0x1064 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1005 0x80>,
+- <&apps_smmu 0x1065 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+- iommus = <&apps_smmu 0x1006 0x80>,
+- <&apps_smmu 0x1066 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+- iommus = <&apps_smmu 0x1007 0x80>,
+- <&apps_smmu 0x1067 0x0>;
+- dma-coherent;
+- };
+- };
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1001 0x80>,
+- <&apps_smmu 0x1061 0x0>;
+- };
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+- };
+- };
+-
+ nsp_noc: interconnect@320c0000 {
+ compatible = "qcom,sm8550-nsp-noc";
+ reg = <0 0x320c0000 0 0xe080>;
+@@ -4715,7 +4714,7 @@ nsp_noc: interconnect@320c0000 {
+
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,sm8550-cdsp-pas";
+- reg = <0x0 0x32300000 0x0 0x1400000>;
++ reg = <0x0 0x32300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index cd54fd723ce40e..416cfb71878a5f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -2853,7 +2853,7 @@ ipa: ipa@3f40000 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8650-mpss-pas";
+- reg = <0 0x04080000 0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2904,6 +2904,154 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+ };
+ };
+
++ remoteproc_adsp: remoteproc@6800000 {
++ compatible = "qcom,sm8650-adsp-pas";
++ reg = <0x0 0x06800000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog",
++ "fatal",
++ "ready",
++ "handover",
++ "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
++ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx",
++ "lmx";
++
++ memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ remoteproc_adsp_glink: glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ qcom,remote-pid = <2>;
++
++ label = "lpass";
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++
++ label = "adsp";
++
++ qcom,non-secure-domain;
++
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++
++ iommus = <&apps_smmu 0x1003 0x80>,
++ <&apps_smmu 0x1043 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++
++ iommus = <&apps_smmu 0x1004 0x80>,
++ <&apps_smmu 0x1044 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++
++ iommus = <&apps_smmu 0x1005 0x80>,
++ <&apps_smmu 0x1045 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++
++ iommus = <&apps_smmu 0x1006 0x80>,
++ <&apps_smmu 0x1046 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++
++ iommus = <&apps_smmu 0x1007 0x40>,
++ <&apps_smmu 0x1067 0x0>,
++ <&apps_smmu 0x1087 0x0>;
++ dma-coherent;
++ };
++ };
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1001 0x80>,
++ <&apps_smmu 0x1061 0x0>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++ };
++ };
++
+ lpass_wsa2macro: codec@6aa0000 {
+ compatible = "qcom,sm8650-lpass-wsa-macro", "qcom,sm8550-lpass-wsa-macro";
+ reg = <0 0x06aa0000 0 0x1000>;
+@@ -3455,11 +3603,8 @@ mdss: display-subsystem@ae00000 {
+ resets = <&dispcc DISP_CC_MDSS_CORE_BCR>;
+
+ interconnects = <&mmss_noc MASTER_MDP QCOM_ICC_TAG_ALWAYS
+- &gem_noc SLAVE_LLCC QCOM_ICC_TAG_ALWAYS>,
+- <&mc_virt MASTER_LLCC QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+- interconnect-names = "mdp0-mem",
+- "mdp1-mem";
++ interconnect-names = "mdp0-mem";
+
+ power-domains = <&dispcc MDSS_GDSC>;
+
+@@ -5324,154 +5469,6 @@ system-cache-controller@25000000 {
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,sm8650-adsp-pas";
+- reg = <0 0x30000000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog",
+- "fatal",
+- "ready",
+- "handover",
+- "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
+- &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx",
+- "lmx";
+-
+- memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- remoteproc_adsp_glink: glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+-
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- qcom,remote-pid = <2>;
+-
+- label = "lpass";
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+-
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+-
+- label = "adsp";
+-
+- qcom,non-secure-domain;
+-
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+-
+- iommus = <&apps_smmu 0x1003 0x80>,
+- <&apps_smmu 0x1043 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+-
+- iommus = <&apps_smmu 0x1004 0x80>,
+- <&apps_smmu 0x1044 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+-
+- iommus = <&apps_smmu 0x1005 0x80>,
+- <&apps_smmu 0x1045 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+-
+- iommus = <&apps_smmu 0x1006 0x80>,
+- <&apps_smmu 0x1046 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+-
+- iommus = <&apps_smmu 0x1007 0x40>,
+- <&apps_smmu 0x1067 0x0>,
+- <&apps_smmu 0x1087 0x0>;
+- dma-coherent;
+- };
+- };
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1001 0x80>,
+- <&apps_smmu 0x1061 0x0>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+- };
+- };
+-
+ nsp_noc: interconnect@320c0000 {
+ compatible = "qcom,sm8650-nsp-noc";
+ reg = <0 0x320c0000 0 0xf080>;
+@@ -5483,7 +5480,7 @@ nsp_noc: interconnect@320c0000 {
+
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,sm8650-cdsp-pas";
+- reg = <0 0x32300000 0 0x1400000>;
++ reg = <0x0 0x32300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts b/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
+index fdde988ae01ebd..b1fa8f3558b3fc 100644
+--- a/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
++++ b/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
+@@ -754,7 +754,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -786,7 +786,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index 2926a1aba76873..b2cf080cab5622 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -591,7 +591,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -623,7 +623,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index c6e0356ed9a2a2..044a2f1432fe32 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -1147,7 +1147,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -1179,7 +1179,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+@@ -1211,7 +1211,7 @@ &usb_1_ss2_hsphy {
+ };
+
+ &usb_1_ss2_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index f22e5c840a2e55..e9ed723f90381a 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -895,7 +895,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -927,7 +927,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+@@ -959,7 +959,7 @@ &usb_1_ss2_hsphy {
+ };
+
+ &usb_1_ss2_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+index 89e39d55278579..19da90704b7cb9 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+@@ -782,7 +782,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e>;
++ vdda-phy-supply = <&vreg_l2j>;
+ vdda-pll-supply = <&vreg_l1j>;
+
+ status = "okay";
+@@ -814,7 +814,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e>;
++ vdda-phy-supply = <&vreg_l2j>;
+ vdda-pll-supply = <&vreg_l2d>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index 5ef030c60abe29..af76aa034d0e17 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -896,7 +896,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -928,7 +928,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+@@ -960,7 +960,7 @@ &usb_1_ss2_hsphy {
+ };
+
+ &usb_1_ss2_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index f0797df9619b15..91e4fbca19f99c 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -3515,6 +3515,143 @@ nsp_noc: interconnect@320c0000 {
+ #interconnect-cells = <2>;
+ };
+
++ remoteproc_adsp: remoteproc@6800000 {
++ compatible = "qcom,x1e80100-adsp-pas";
++ reg = <0x0 0x06800000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog",
++ "fatal",
++ "ready",
++ "handover",
++ "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx",
++ "lmx";
++
++ interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
++ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
++
++ memory-region = <&adspslpi_mem>,
++ <&q6_adsp_dtb_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1003 0x80>,
++ <&apps_smmu 0x1063 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1004 0x80>,
++ <&apps_smmu 0x1064 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1005 0x80>,
++ <&apps_smmu 0x1065 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++ iommus = <&apps_smmu 0x1006 0x80>,
++ <&apps_smmu 0x1066 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++ iommus = <&apps_smmu 0x1007 0x80>,
++ <&apps_smmu 0x1067 0x0>;
++ dma-coherent;
++ };
++ };
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1001 0x80>,
++ <&apps_smmu 0x1061 0x0>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++ };
++ };
++
+ lpass_wsa2macro: codec@6aa0000 {
+ compatible = "qcom,x1e80100-lpass-wsa-macro", "qcom,sm8550-lpass-wsa-macro";
+ reg = <0 0x06aa0000 0 0x1000>;
+@@ -4115,7 +4252,7 @@ usb_2: usb@a2f8800 {
+ <&gcc GCC_USB20_MASTER_CLK>;
+ assigned-clock-rates = <19200000>, <200000000>;
+
+- interrupts-extended = <&intc GIC_SPI 240 IRQ_TYPE_LEVEL_HIGH>,
++ interrupts-extended = <&intc GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>,
+ <&pdc 50 IRQ_TYPE_EDGE_BOTH>,
+ <&pdc 49 IRQ_TYPE_EDGE_BOTH>;
+ interrupt-names = "pwr_event",
+@@ -4141,7 +4278,7 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ usb_2_dwc3: usb@a200000 {
+ compatible = "snps,dwc3";
+ reg = <0 0x0a200000 0 0xcd00>;
+- interrupts = <GIC_SPI 241 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 240 IRQ_TYPE_LEVEL_HIGH>;
+ iommus = <&apps_smmu 0x14e0 0x0>;
+ phys = <&usb_2_hsphy>;
+ phy-names = "usb2-phy";
+@@ -6108,146 +6245,9 @@ system-cache-controller@25000000 {
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,x1e80100-adsp-pas";
+- reg = <0 0x30000000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog",
+- "fatal",
+- "ready",
+- "handover",
+- "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx",
+- "lmx";
+-
+- interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
+- &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-
+- memory-region = <&adspslpi_mem>,
+- <&q6_adsp_dtb_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1003 0x80>,
+- <&apps_smmu 0x1063 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1004 0x80>,
+- <&apps_smmu 0x1064 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1005 0x80>,
+- <&apps_smmu 0x1065 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+- iommus = <&apps_smmu 0x1006 0x80>,
+- <&apps_smmu 0x1066 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+- iommus = <&apps_smmu 0x1007 0x80>,
+- <&apps_smmu 0x1067 0x0>;
+- dma-coherent;
+- };
+- };
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1001 0x80>,
+- <&apps_smmu 0x1061 0x0>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+- };
+- };
+-
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,x1e80100-cdsp-pas";
+- reg = <0 0x32300000 0 0x1400000>;
++ reg = <0x0 0x32300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 650b1ba9c19213..257636d0d2cbb0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -181,7 +181,7 @@ &gmac {
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 50000>;
+ tx_delay = <0x10>;
+- rx_delay = <0x10>;
++ rx_delay = <0x23>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568.dtsi b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+index 0946310e8c1248..6fd67ae2711746 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+@@ -262,6 +262,7 @@ combphy0: phy@fe820000 {
+ assigned-clocks = <&pmucru CLK_PCIEPHY0_REF>;
+ assigned-clock-rates = <100000000>;
+ resets = <&cru SRST_PIPEPHY0>;
++ reset-names = "phy";
+ rockchip,pipe-grf = <&pipegrf>;
+ rockchip,pipe-phy-grf = <&pipe_phy_grf0>;
+ #phy-cells = <1>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+index 0ee0ada6f0ab0f..bc0f57a26c2ff8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+@@ -1762,6 +1762,7 @@ combphy1: phy@fe830000 {
+ assigned-clocks = <&pmucru CLK_PCIEPHY1_REF>;
+ assigned-clock-rates = <100000000>;
+ resets = <&cru SRST_PIPEPHY1>;
++ reset-names = "phy";
+ rockchip,pipe-grf = <&pipegrf>;
+ rockchip,pipe-phy-grf = <&pipe_phy_grf1>;
+ #phy-cells = <1>;
+@@ -1778,6 +1779,7 @@ combphy2: phy@fe840000 {
+ assigned-clocks = <&pmucru CLK_PCIEPHY2_REF>;
+ assigned-clock-rates = <100000000>;
+ resets = <&cru SRST_PIPEPHY2>;
++ reset-names = "phy";
+ rockchip,pipe-grf = <&pipegrf>;
+ rockchip,pipe-phy-grf = <&pipe_phy_grf2>;
+ #phy-cells = <1>;
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index bc0b0d75acef7b..c1f45fd6b3e9a9 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -350,6 +350,11 @@ alternative_cb_end
+ // Narrow PARange to fit the PS field in TCR_ELx
+ ubfx \tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
+ mov \tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
++#ifdef CONFIG_ARM64_LPA2
++alternative_if_not ARM64_HAS_VA52
++ mov \tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
++alternative_else_nop_endif
++#endif
+ cmp \tmp0, \tmp1
+ csel \tmp0, \tmp1, \tmp0, hi
+ bfi \tcr, \tmp0, \pos, #3
+diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
+index fd330c1db289a6..a970def932aacb 100644
+--- a/arch/arm64/include/asm/pgtable-hwdef.h
++++ b/arch/arm64/include/asm/pgtable-hwdef.h
+@@ -218,12 +218,6 @@
+ */
+ #define S1_TABLE_AP (_AT(pmdval_t, 3) << 61)
+
+-/*
+- * Highest possible physical address supported.
+- */
+-#define PHYS_MASK_SHIFT (CONFIG_ARM64_PA_BITS)
+-#define PHYS_MASK ((UL(1) << PHYS_MASK_SHIFT) - 1)
+-
+ #define TTBR_CNP_BIT (UL(1) << 0)
+
+ /*
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 2a11d0c10760b9..3ce7c632fbfbc3 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -78,6 +78,7 @@ extern bool arm64_use_ng_mappings;
+ #define lpa2_is_enabled() false
+ #define PTE_MAYBE_SHARED PTE_SHARED
+ #define PMD_MAYBE_SHARED PMD_SECT_S
++#define PHYS_MASK_SHIFT (CONFIG_ARM64_PA_BITS)
+ #else
+ static inline bool __pure lpa2_is_enabled(void)
+ {
+@@ -86,8 +87,14 @@ static inline bool __pure lpa2_is_enabled(void)
+
+ #define PTE_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PTE_SHARED)
+ #define PMD_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PMD_SECT_S)
++#define PHYS_MASK_SHIFT (lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
+ #endif
+
++/*
++ * Highest possible physical address supported.
++ */
++#define PHYS_MASK ((UL(1) << PHYS_MASK_SHIFT) - 1)
++
+ /*
+ * If we have userspace only BTI we don't want to mark kernel pages
+ * guarded even if the system does support BTI.
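
A side note on the PHYS_MASK change above: with LPA2 support built in but not enabled at runtime, the physical address mask is now capped at 48 bits. A minimal standalone sketch of the same pattern, with lpa2_is_enabled() stubbed out here as an assumption:

    #include <stdbool.h>
    #include <stdio.h>

    /* stand-in for the kernel's runtime LPA2 probe */
    static bool lpa2_is_enabled(void) { return false; }

    #define PA_BITS 52  /* plays the role of CONFIG_ARM64_PA_BITS */
    #define PHYS_MASK_SHIFT (lpa2_is_enabled() ? PA_BITS : 48)
    #define PHYS_MASK ((1UL << PHYS_MASK_SHIFT) - 1)

    int main(void)
    {
        printf("PHYS_MASK = %#lx\n", PHYS_MASK); /* 0xffffffffffff */
        return 0;
    }
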
+diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
+index 8a8acc220371cb..84783efdc9d1f7 100644
+--- a/arch/arm64/include/asm/sparsemem.h
++++ b/arch/arm64/include/asm/sparsemem.h
+@@ -5,7 +5,10 @@
+ #ifndef __ASM_SPARSEMEM_H
+ #define __ASM_SPARSEMEM_H
+
+-#define MAX_PHYSMEM_BITS CONFIG_ARM64_PA_BITS
++#include <asm/pgtable-prot.h>
++
++#define MAX_PHYSMEM_BITS PHYS_MASK_SHIFT
++#define MAX_POSSIBLE_PHYSMEM_BITS (52)
+
+ /*
+ * Section size must be at least 512MB for 64K base
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index db994d1fd97e70..709f2b51be6df3 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1153,12 +1153,6 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
+- /*
+- * We mask out SMPS since even if the hardware
+- * supports priorities the kernel does not at present
+- * and we block access to them.
+- */
+- info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
+ vec_init_vq_map(ARM64_VEC_SME);
+
+ cpacr_restore(cpacr);
+@@ -1406,13 +1400,6 @@ void update_cpu_features(int cpu,
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
+- /*
+- * We mask out SMPS since even if the hardware
+- * supports priorities the kernel does not at present
+- * and we block access to them.
+- */
+- info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
+-
+ /* Probe vector lengths */
+ if (!system_capabilities_finalized())
+ vec_update_vq_map(ARM64_VEC_SME);
+@@ -2923,6 +2910,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
+ .matches = match, \
+ }
+
++#define HWCAP_CAP_MATCH_ID(match, reg, field, min_value, cap_type, cap) \
++ { \
++ __HWCAP_CAP(#cap, cap_type, cap) \
++ HWCAP_CPUID_MATCH(reg, field, min_value) \
++ .matches = match, \
++ }
++
+ #ifdef CONFIG_ARM64_PTR_AUTH
+ static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = {
+ {
+@@ -2951,6 +2945,13 @@ static const struct arm64_cpu_capabilities ptr_auth_hwcap_gen_matches[] = {
+ };
+ #endif
+
++#ifdef CONFIG_ARM64_SVE
++static bool has_sve_feature(const struct arm64_cpu_capabilities *cap, int scope)
++{
++ return system_supports_sve() && has_user_cpuid_feature(cap, scope);
++}
++#endif
++
+ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL),
+ HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES),
+@@ -2993,19 +2994,19 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(ID_AA64MMFR2_EL1, AT, IMP, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+ HWCAP_CAP(ID_AA64PFR0_EL1, SVE, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SVEver, SVE2p1, CAP_HWCAP, KERNEL_HWCAP_SVE2P1),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SVEver, SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, AES, PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, BitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE_B16B16),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, BF16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_SVE_EBF16),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SHA3, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, F32MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, F64MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SVEver, SVE2p1, CAP_HWCAP, KERNEL_HWCAP_SVE2P1),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SVEver, SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, AES, PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, BitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE_B16B16),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, BF16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_SVE_EBF16),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SHA3, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, F32MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, F64MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
+ #endif
+ HWCAP_CAP(ID_AA64PFR1_EL1, SSBS, SSBS2, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ #ifdef CONFIG_ARM64_BTI
+@@ -3376,7 +3377,7 @@ static void verify_hyp_capabilities(void)
+ return;
+
+ safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+- mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
++ mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+ mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+
+ /* Verify VMID bits */
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 44718d0482b3b4..aec5e3947c780a 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -478,6 +478,16 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
+ if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
+ __cpuinfo_store_cpu_32bit(&info->aarch32);
+
++ if (IS_ENABLED(CONFIG_ARM64_SME) &&
++ id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
++ /*
++ * We mask out SMPS since even if the hardware
++ * supports priorities the kernel does not at present
++ * and we block access to them.
++ */
++ info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
++ }
++
+ cpuinfo_detect_icache_policy(info);
+ }
+
+diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
+index 29d4b6244a6f63..5c03f5e0d352da 100644
+--- a/arch/arm64/kernel/pi/idreg-override.c
++++ b/arch/arm64/kernel/pi/idreg-override.c
+@@ -74,6 +74,15 @@ static bool __init mmfr2_varange_filter(u64 val)
+ id_aa64mmfr0_override.val |=
+ (ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
+ id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
++
++ /*
++ * Override PARange to 48 bits - the override will just be
++ * ignored if the actual PARange is smaller, but this is
++ * unlikely to be the case for LPA2 capable silicon.
++ */
++ id_aa64mmfr0_override.val |=
++ ID_AA64MMFR0_EL1_PARANGE_48 << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
++ id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
+ }
+ #endif
+ return true;
+diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
+index f374a3e5a5fe10..e57b043f324b51 100644
+--- a/arch/arm64/kernel/pi/map_kernel.c
++++ b/arch/arm64/kernel/pi/map_kernel.c
+@@ -136,6 +136,12 @@ static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
+ {
+ u64 sctlr = read_sysreg(sctlr_el1);
+ u64 tcr = read_sysreg(tcr_el1) | TCR_DS;
++ u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
++ u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
++ ID_AA64MMFR0_EL1_PARANGE_SHIFT);
++
++ tcr &= ~TCR_IPS_MASK;
++ tcr |= parange << TCR_IPS_SHIFT;
+
+ asm(" msr sctlr_el1, %0 ;"
+ " isb ;"
+diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
+index 1215df59041856..754914d9ec6835 100644
+--- a/arch/arm64/kvm/arch_timer.c
++++ b/arch/arm64/kvm/arch_timer.c
+@@ -466,10 +466,8 @@ static void timer_emulate(struct arch_timer_context *ctx)
+
+ trace_kvm_timer_emulate(ctx, should_fire);
+
+- if (should_fire != ctx->irq.level) {
++ if (should_fire != ctx->irq.level)
+ kvm_timer_update_irq(ctx->vcpu, should_fire, ctx);
+- return;
+- }
+
+ /*
+ * If the timer can fire now, we don't need to have a soft timer
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 70ff9a20ef3af3..117702f033218d 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1998,8 +1998,7 @@ static int kvm_init_vector_slots(void)
+ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+ {
+ struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
+- u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+- unsigned long tcr;
++ unsigned long tcr, ips;
+
+ /*
+ * Calculate the raw per-cpu offset without a translation from the
+@@ -2013,6 +2012,7 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+ params->mair_el2 = read_sysreg(mair_el1);
+
+ tcr = read_sysreg(tcr_el1);
++ ips = FIELD_GET(TCR_IPS_MASK, tcr);
+ if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
+ tcr |= TCR_EPD1_MASK;
+ } else {
+@@ -2022,8 +2022,8 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+ tcr &= ~TCR_T0SZ_MASK;
+ tcr |= TCR_T0SZ(hyp_va_bits);
+ tcr &= ~TCR_EL2_PS_MASK;
+- tcr |= FIELD_PREP(TCR_EL2_PS_MASK, kvm_get_parange(mmfr0));
+- if (kvm_lpa2_is_enabled())
++ tcr |= FIELD_PREP(TCR_EL2_PS_MASK, ips);
++ if (lpa2_is_enabled())
+ tcr |= TCR_EL2_DS;
+ params->tcr_el2 = tcr;
+
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 5f1e2103888b76..0a6956bbfb3269 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -508,6 +508,18 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+
+ static int __init hugetlbpage_init(void)
+ {
++ /*
++ * HugeTLB pages are supported on maximum four page table
++ * levels (PUD, CONT PMD, PMD, CONT PTE) for a given base
++ * page size, corresponding to hugetlb_add_hstate() calls
++ * here.
++ *
++ * HUGE_MAX_HSTATE should at least match maximum supported
++ * HugeTLB page sizes on the platform. Any new addition to
++ * supported HugeTLB page sizes will also require changing
++ * HUGE_MAX_HSTATE as well.
++ */
++ BUILD_BUG_ON(HUGE_MAX_HSTATE < 4);
+ if (pud_sect_supported())
+ hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
+
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 93ba66de160ce4..ea71ef2e343c2c 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -278,7 +278,12 @@ void __init arm64_memblock_init(void)
+
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ extern u16 memstart_offset_seed;
+- u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
++
++ /*
++ * Use the sanitised version of id_aa64mmfr0_el1 so that linear
++ * map randomization can be enabled by shrinking the IPA space.
++ */
++ u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+ int parange = cpuid_feature_extract_unsigned_field(
+ mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+ s64 range = linear_region_size -
+diff --git a/arch/loongarch/include/uapi/asm/ptrace.h b/arch/loongarch/include/uapi/asm/ptrace.h
+index ac915f84165053..aafb3cd9e943e5 100644
+--- a/arch/loongarch/include/uapi/asm/ptrace.h
++++ b/arch/loongarch/include/uapi/asm/ptrace.h
+@@ -72,6 +72,16 @@ struct user_watch_state {
+ } dbg_regs[8];
+ };
+
++struct user_watch_state_v2 {
++ uint64_t dbg_info;
++ struct {
++ uint64_t addr;
++ uint64_t mask;
++ uint32_t ctrl;
++ uint32_t pad;
++ } dbg_regs[14];
++};
++
+ #define PTRACE_SYSEMU 0x1f
+ #define PTRACE_SYSEMU_SINGLESTEP 0x20
+
+diff --git a/arch/loongarch/kernel/ptrace.c b/arch/loongarch/kernel/ptrace.c
+index 19dc6eff45ccc8..5e2402cfcab0a1 100644
+--- a/arch/loongarch/kernel/ptrace.c
++++ b/arch/loongarch/kernel/ptrace.c
+@@ -720,7 +720,7 @@ static int hw_break_set(struct task_struct *target,
+ unsigned int note_type = regset->core_note_type;
+
+ /* Resource info */
+- offset = offsetof(struct user_watch_state, dbg_regs);
++ offset = offsetof(struct user_watch_state_v2, dbg_regs);
+ user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, 0, offset);
+
+ /* (address, mask, ctrl) registers */
+@@ -920,7 +920,7 @@ static const struct user_regset loongarch64_regsets[] = {
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+ [REGSET_HW_BREAK] = {
+ .core_note_type = NT_LOONGARCH_HW_BREAK,
+- .n = sizeof(struct user_watch_state) / sizeof(u32),
++ .n = sizeof(struct user_watch_state_v2) / sizeof(u32),
+ .size = sizeof(u32),
+ .align = sizeof(u32),
+ .regset_get = hw_break_get,
+@@ -928,7 +928,7 @@ static const struct user_regset loongarch64_regsets[] = {
+ },
+ [REGSET_HW_WATCH] = {
+ .core_note_type = NT_LOONGARCH_HW_WATCH,
+- .n = sizeof(struct user_watch_state) / sizeof(u32),
++ .n = sizeof(struct user_watch_state_v2) / sizeof(u32),
+ .size = sizeof(u32),
+ .align = sizeof(u32),
+ .regset_get = hw_break_get,
+diff --git a/arch/m68k/include/asm/vga.h b/arch/m68k/include/asm/vga.h
+index 4742e6bc3ab8ea..cdd414fa8710a9 100644
+--- a/arch/m68k/include/asm/vga.h
++++ b/arch/m68k/include/asm/vga.h
+@@ -9,7 +9,7 @@
+ */
+ #ifndef CONFIG_PCI
+
+-#include <asm/raw_io.h>
++#include <asm/io.h>
+ #include <asm/kmap.h>
+
+ /*
+@@ -29,9 +29,9 @@
+ #define inw_p(port) 0
+ #define outb_p(port, val) do { } while (0)
+ #define outw(port, val) do { } while (0)
+-#define readb raw_inb
+-#define writeb raw_outb
+-#define writew raw_outw
++#define readb __raw_readb
++#define writeb __raw_writeb
++#define writew __raw_writew
+
+ #endif /* CONFIG_PCI */
+ #endif /* _ASM_M68K_VGA_H */
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 467b10f4361aeb..5078ebf071ec07 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1084,7 +1084,6 @@ config CSRC_IOASIC
+
+ config CSRC_R4K
+ select CLOCKSOURCE_WATCHDOG if CPU_FREQ
+- select HAVE_UNSTABLE_SCHED_CLOCK if SMP && 64BIT
+ bool
+
+ config CSRC_SB1250
+diff --git a/arch/mips/kernel/ftrace.c b/arch/mips/kernel/ftrace.c
+index 8c401e42301cbf..f39e85fd58fa99 100644
+--- a/arch/mips/kernel/ftrace.c
++++ b/arch/mips/kernel/ftrace.c
+@@ -248,7 +248,7 @@ int ftrace_disable_ftrace_graph_caller(void)
+ #define S_R_SP (0xafb0 << 16) /* s{d,w} R, offset(sp) */
+ #define OFFSET_MASK 0xffff /* stack offset range: 0 ~ PT_SIZE */
+
+-unsigned long ftrace_get_parent_ra_addr(unsigned long self_ra, unsigned long
++static unsigned long ftrace_get_parent_ra_addr(unsigned long self_ra, unsigned long
+ old_parent_ra, unsigned long parent_ra_addr, unsigned long fp)
+ {
+ unsigned long sp, ip, tmp;
+diff --git a/arch/mips/loongson64/boardinfo.c b/arch/mips/loongson64/boardinfo.c
+index 280989c5a137b5..8bb275c93ac099 100644
+--- a/arch/mips/loongson64/boardinfo.c
++++ b/arch/mips/loongson64/boardinfo.c
+@@ -21,13 +21,11 @@ static ssize_t boardinfo_show(struct kobject *kobj,
+ "BIOS Info\n"
+ "Vendor\t\t\t: %s\n"
+ "Version\t\t\t: %s\n"
+- "ROM Size\t\t: %d KB\n"
+ "Release Date\t\t: %s\n",
+ strsep(&tmp_board_manufacturer, "-"),
+ eboard->name,
+ strsep(&tmp_bios_vendor, "-"),
+ einter->description,
+- einter->size,
+ especial->special_name);
+ }
+ static struct kobj_attribute boardinfo_attr = __ATTR(boardinfo, 0444,
+diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c
+index 265bc57819dfb5..c89e70df43d82b 100644
+--- a/arch/mips/math-emu/cp1emu.c
++++ b/arch/mips/math-emu/cp1emu.c
+@@ -1660,7 +1660,7 @@ static int fpux_emu(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
+ break;
+ }
+
+- case 0x3:
++ case 0x7:
+ if (MIPSInst_FUNC(ir) != pfetch_op)
+ return SIGILL;
+
+diff --git a/arch/mips/pci/pci-legacy.c b/arch/mips/pci/pci-legacy.c
+index ec2567f8efd83b..66898fd182dc1f 100644
+--- a/arch/mips/pci/pci-legacy.c
++++ b/arch/mips/pci/pci-legacy.c
+@@ -29,6 +29,14 @@ static LIST_HEAD(controllers);
+
+ static int pci_initialized;
+
++unsigned long pci_address_to_pio(phys_addr_t address)
++{
++ if (address > IO_SPACE_LIMIT)
++ return (unsigned long)-1;
++
++ return (unsigned long) address;
++}
++
+ /*
+ * We need to avoid collisions with `mirrored' VGA ports
+ * and other strange ISA hardware, so we always want the
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index aa6a3cad275d91..fcc5973f75195a 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -60,8 +60,8 @@ config PARISC
+ select HAVE_ARCH_MMAP_RND_BITS
+ select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_ARCH_HASH
+- select HAVE_ARCH_JUMP_LABEL
+- select HAVE_ARCH_JUMP_LABEL_RELATIVE
++ # select HAVE_ARCH_JUMP_LABEL
++ # select HAVE_ARCH_JUMP_LABEL_RELATIVE
+ select HAVE_ARCH_KFENCE
+ select HAVE_ARCH_SECCOMP_FILTER
+ select HAVE_ARCH_TRACEHOOK
+diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
+index c664fdec75b1ab..6824e8139801c2 100644
+--- a/arch/powerpc/kvm/e500_mmu_host.c
++++ b/arch/powerpc/kvm/e500_mmu_host.c
+@@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
+ return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
+ }
+
+-static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
++static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+ struct kvm_book3e_206_tlb_entry *gtlbe,
+ kvm_pfn_t pfn, unsigned int wimg)
+ {
+@@ -252,11 +252,7 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+ /* Use guest supplied MAS2_G and MAS2_E */
+ ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
+
+- /* Mark the page accessed */
+- kvm_set_pfn_accessed(pfn);
+-
+- if (tlbe_is_writable(gtlbe))
+- kvm_set_pfn_dirty(pfn);
++ return tlbe_is_writable(gtlbe);
+ }
+
+ static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
+@@ -326,6 +322,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ {
+ struct kvm_memory_slot *slot;
+ unsigned long pfn = 0; /* silence GCC warning */
++ struct page *page = NULL;
+ unsigned long hva;
+ int pfnmap = 0;
+ int tsize = BOOK3E_PAGESZ_4K;
+@@ -337,6 +334,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ unsigned int wimg = 0;
+ pgd_t *pgdir;
+ unsigned long flags;
++ bool writable = false;
+
+ /* used to check for invalidations in progress */
+ mmu_seq = kvm->mmu_invalidate_seq;
+@@ -446,7 +444,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+
+ if (likely(!pfnmap)) {
+ tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT);
+- pfn = gfn_to_pfn_memslot(slot, gfn);
++ pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
+ if (is_error_noslot_pfn(pfn)) {
+ if (printk_ratelimit())
+ pr_err("%s: real page not found for gfn %lx\n",
+@@ -481,7 +479,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ if (pte_present(pte)) {
+ wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) &
+ MAS2_WIMGE_MASK;
+- local_irq_restore(flags);
+ } else {
+ local_irq_restore(flags);
+ pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
+@@ -490,8 +487,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ goto out;
+ }
+ }
+- kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
++ local_irq_restore(flags);
+
++ writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
+ kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
+ ref, gvaddr, stlbe);
+
+@@ -499,11 +497,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ kvmppc_mmu_flush_icache(pfn);
+
+ out:
++ kvm_release_faultin_page(kvm, page, !!ret, writable);
+ spin_unlock(&kvm->mmu_lock);
+-
+- /* Drop refcount on page, so that mmu notifiers can clear it */
+- kvm_release_pfn_clean(pfn);
+-
+ return ret;
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c
+index 1893f66371fa43..b12ef382fec709 100644
+--- a/arch/powerpc/platforms/pseries/eeh_pseries.c
++++ b/arch/powerpc/platforms/pseries/eeh_pseries.c
+@@ -580,8 +580,10 @@ static int pseries_eeh_get_state(struct eeh_pe *pe, int *delay)
+
+ switch(rets[0]) {
+ case 0:
+- result = EEH_STATE_MMIO_ACTIVE |
+- EEH_STATE_DMA_ACTIVE;
++ result = EEH_STATE_MMIO_ACTIVE |
++ EEH_STATE_DMA_ACTIVE |
++ EEH_STATE_MMIO_ENABLED |
++ EEH_STATE_DMA_ENABLED;
+ break;
+ case 1:
+ result = EEH_STATE_RESET_ACTIVE |
+diff --git a/arch/s390/include/asm/asm-extable.h b/arch/s390/include/asm/asm-extable.h
+index 4a6b0a8b6412f1..00a67464c44534 100644
+--- a/arch/s390/include/asm/asm-extable.h
++++ b/arch/s390/include/asm/asm-extable.h
+@@ -14,6 +14,7 @@
+ #define EX_TYPE_UA_LOAD_REG 5
+ #define EX_TYPE_UA_LOAD_REGPAIR 6
+ #define EX_TYPE_ZEROPAD 7
++#define EX_TYPE_FPC 8
+
+ #define EX_DATA_REG_ERR_SHIFT 0
+ #define EX_DATA_REG_ERR GENMASK(3, 0)
+@@ -84,4 +85,7 @@
+ #define EX_TABLE_ZEROPAD(_fault, _target, _regdata, _regaddr) \
+ __EX_TABLE(__ex_table, _fault, _target, EX_TYPE_ZEROPAD, _regdata, _regaddr, 0)
+
++#define EX_TABLE_FPC(_fault, _target) \
++ __EX_TABLE(__ex_table, _fault, _target, EX_TYPE_FPC, __stringify(%%r0), __stringify(%%r0), 0)
++
+ #endif /* __ASM_EXTABLE_H */
+diff --git a/arch/s390/include/asm/fpu-insn.h b/arch/s390/include/asm/fpu-insn.h
+index c1e2e521d9af7c..a4c9b4db62ff57 100644
+--- a/arch/s390/include/asm/fpu-insn.h
++++ b/arch/s390/include/asm/fpu-insn.h
+@@ -100,19 +100,12 @@ static __always_inline void fpu_lfpc(unsigned int *fpc)
+ */
+ static inline void fpu_lfpc_safe(unsigned int *fpc)
+ {
+- u32 tmp;
+-
+ instrument_read(fpc, sizeof(*fpc));
+- asm volatile("\n"
+- "0: lfpc %[fpc]\n"
+- "1: nopr %%r7\n"
+- ".pushsection .fixup, \"ax\"\n"
+- "2: lghi %[tmp],0\n"
+- " sfpc %[tmp]\n"
+- " jg 1b\n"
+- ".popsection\n"
+- EX_TABLE(1b, 2b)
+- : [tmp] "=d" (tmp)
++ asm_inline volatile(
++ " lfpc %[fpc]\n"
++ "0: nopr %%r7\n"
++ EX_TABLE_FPC(0b, 0b)
++ :
+ : [fpc] "Q" (*fpc)
+ : "memory");
+ }
+diff --git a/arch/s390/include/asm/futex.h b/arch/s390/include/asm/futex.h
+index eaeaeb3ff0be3e..752a2310f0d6c1 100644
+--- a/arch/s390/include/asm/futex.h
++++ b/arch/s390/include/asm/futex.h
+@@ -44,7 +44,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
+ break;
+ case FUTEX_OP_ANDN:
+ __futex_atomic_op("lr %2,%1\nnr %2,%5\n",
+- ret, oldval, newval, uaddr, oparg);
++ ret, oldval, newval, uaddr, ~oparg);
+ break;
+ case FUTEX_OP_XOR:
+ __futex_atomic_op("lr %2,%1\nxr %2,%5\n",
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 9a5236acc0a860..21ae93cbd8e478 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -162,8 +162,7 @@ static __always_inline void __stackleak_poison(unsigned long erase_low,
+ " la %[addr],256(%[addr])\n"
+ " brctg %[tmp],0b\n"
+ "1: stg %[poison],0(%[addr])\n"
+- " larl %[tmp],3f\n"
+- " ex %[count],0(%[tmp])\n"
++ " exrl %[count],3f\n"
+ " j 4f\n"
+ "2: stg %[poison],0(%[addr])\n"
+ " j 4f\n"
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 377b9aaf8c9248..ff1ddba96352a1 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -52,7 +52,6 @@ SECTIONS
+ SOFTIRQENTRY_TEXT
+ FTRACE_HOTPATCH_TRAMPOLINES_TEXT
+ *(.text.*_indirect_*)
+- *(.fixup)
+ *(.gnu.warning)
+ . = ALIGN(PAGE_SIZE);
+ _etext = .; /* End of text section */
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 89cafea4c41f26..caf40665fce96e 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -1362,8 +1362,14 @@ static struct vsie_page *get_vsie_page(struct kvm *kvm, unsigned long addr)
+ page = radix_tree_lookup(&kvm->arch.vsie.addr_to_page, addr >> 9);
+ rcu_read_unlock();
+ if (page) {
+- if (page_ref_inc_return(page) == 2)
+- return page_to_virt(page);
++ if (page_ref_inc_return(page) == 2) {
++ if (page->index == addr)
++ return page_to_virt(page);
++ /*
++ * We raced with someone reusing + putting this vsie
++ * page before we grabbed it.
++ */
++ }
+ page_ref_dec(page);
+ }
+
+@@ -1393,15 +1399,20 @@ static struct vsie_page *get_vsie_page(struct kvm *kvm, unsigned long addr)
+ kvm->arch.vsie.next++;
+ kvm->arch.vsie.next %= nr_vcpus;
+ }
+- radix_tree_delete(&kvm->arch.vsie.addr_to_page, page->index >> 9);
++ if (page->index != ULONG_MAX)
++ radix_tree_delete(&kvm->arch.vsie.addr_to_page,
++ page->index >> 9);
+ }
+- page->index = addr;
+- /* double use of the same address */
++ /* Mark it as invalid until it resides in the tree. */
++ page->index = ULONG_MAX;
++
++ /* Double use of the same address or allocation failure. */
+ if (radix_tree_insert(&kvm->arch.vsie.addr_to_page, addr >> 9, page)) {
+ page_ref_dec(page);
+ mutex_unlock(&kvm->arch.vsie.mutex);
+ return NULL;
+ }
++ page->index = addr;
+ mutex_unlock(&kvm->arch.vsie.mutex);
+
+ vsie_page = page_to_virt(page);
+@@ -1496,7 +1507,9 @@ void kvm_s390_vsie_destroy(struct kvm *kvm)
+ vsie_page = page_to_virt(page);
+ release_gmap_shadow(vsie_page);
+ /* free the radix tree entry */
+- radix_tree_delete(&kvm->arch.vsie.addr_to_page, page->index >> 9);
++ if (page->index != ULONG_MAX)
++ radix_tree_delete(&kvm->arch.vsie.addr_to_page,
++ page->index >> 9);
+ __free_page(page);
+ }
+ kvm->arch.vsie.page_count = 0;
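
The vsie change above is an instance of a publish-after-insert pattern: the lookup key (page->index) is held at a sentinel until the object is actually in the radix tree, so a racing reader that has already taken a reference can detect that it grabbed a page being recycled. A schematic sketch with hypothetical names and locking elided:

    #include <assert.h>
    #include <limits.h>

    struct page_like {
        unsigned long index;    /* ULONG_MAX == not (yet) in the tree */
    };

    static void publish(struct page_like *p, unsigned long addr)
    {
        p->index = ULONG_MAX;   /* invalid while insertion is in flight */
        /* ... insert into the lookup structure under the mutex ... */
        p->index = addr;        /* expose the key only after success */
    }

    int main(void)
    {
        struct page_like p;
        publish(&p, 0x1000);
        assert(p.index == 0x1000); /* a reader comparing index == addr now matches */
        return 0;
    }
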
+diff --git a/arch/s390/mm/extable.c b/arch/s390/mm/extable.c
+index 0a0738a473af05..812ec5be129169 100644
+--- a/arch/s390/mm/extable.c
++++ b/arch/s390/mm/extable.c
+@@ -77,6 +77,13 @@ static bool ex_handler_zeropad(const struct exception_table_entry *ex, struct pt
+ return true;
+ }
+
++static bool ex_handler_fpc(const struct exception_table_entry *ex, struct pt_regs *regs)
++{
++ asm volatile("sfpc %[val]\n" : : [val] "d" (0));
++ regs->psw.addr = extable_fixup(ex);
++ return true;
++}
++
+ bool fixup_exception(struct pt_regs *regs)
+ {
+ const struct exception_table_entry *ex;
+@@ -99,6 +106,8 @@ bool fixup_exception(struct pt_regs *regs)
+ return ex_handler_ua_load_reg(ex, true, regs);
+ case EX_TYPE_ZEROPAD:
+ return ex_handler_zeropad(ex, regs);
++ case EX_TYPE_FPC:
++ return ex_handler_fpc(ex, regs);
+ }
+ panic("invalid exception table entry");
+ }
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 1b74a000ff6459..56a786ca7354b9 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -171,7 +171,6 @@ void zpci_bus_scan_busses(void)
+ static bool zpci_bus_is_multifunction_root(struct zpci_dev *zdev)
+ {
+ return !s390_pci_no_rid && zdev->rid_available &&
+- zpci_is_device_configured(zdev) &&
+ !zdev->vfn;
+ }
+
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index f2051644de9432..606c74f274593e 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -25,6 +25,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ # avoid errors with '-march=i386', and future flags may depend on the target to
+ # be valid.
+ KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
++KBUILD_CFLAGS += -std=gnu11
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index ae5482a2f0ca0e..ccb8ff37fa9d4b 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -16,6 +16,7 @@
+ # define PAGES_NR 4
+ #endif
+
++# define KEXEC_CONTROL_PAGE_SIZE 4096
+ # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
+
+ #ifndef __ASSEMBLY__
+@@ -43,7 +44,6 @@ struct kimage;
+ /* Maximum address we can use for the control code buffer */
+ # define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE
+
+-# define KEXEC_CONTROL_PAGE_SIZE 4096
+
+ /* The native architecture */
+ # define KEXEC_ARCH KEXEC_ARCH_386
+@@ -58,9 +58,6 @@ struct kimage;
+ /* Maximum address we can use for the control pages */
+ # define KEXEC_CONTROL_MEMORY_LIMIT (MAXMEM-1)
+
+-/* Allocate one page for the pdp and the second for the code */
+-# define KEXEC_CONTROL_PAGE_SIZE (4096UL + 4096UL)
+-
+ /* The native architecture */
+ # define KEXEC_ARCH KEXEC_ARCH_X86_64
+ #endif
+@@ -145,6 +142,19 @@ struct kimage_arch {
+ };
+ #else
+ struct kimage_arch {
++ /*
++ * This is a kimage control page, as it must not overlap with either
++ * source or destination address ranges.
++ */
++ pgd_t *pgd;
++ /*
++ * The virtual mapping of the control code page itself is used only
++ * during the transition, while the current kernel's pages are all
++ * in place. Thus the intermediate page table pages used to map it
++ * are not control pages, but instead just normal pages obtained
++ * with get_zeroed_page(). And have to be tracked (below) so that
++ * they can be freed.
++ */
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 6b981868905f5d..5da67e5c00401b 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -27,6 +27,7 @@
+ #include <linux/hyperv.h>
+ #include <linux/kfifo.h>
+ #include <linux/sched/vhost_task.h>
++#include <linux/call_once.h>
+
+ #include <asm/apic.h>
+ #include <asm/pvclock-abi.h>
+@@ -1446,6 +1447,7 @@ struct kvm_arch {
+ struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
+ struct vhost_task *nx_huge_page_recovery_thread;
+ u64 nx_huge_page_last;
++ struct once nx_once;
+
+ #ifdef CONFIG_X86_64
+ /* The number of TDP MMU pages across all roots. */
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 4efecac49863ec..c70b86f1f2954f 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -226,6 +226,28 @@ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
+ return 0;
+ }
+
++static int __init
++acpi_check_lapic(union acpi_subtable_headers *header, const unsigned long end)
++{
++ struct acpi_madt_local_apic *processor = NULL;
++
++ processor = (struct acpi_madt_local_apic *)header;
++
++ if (BAD_MADT_ENTRY(processor, end))
++ return -EINVAL;
++
++ /* Ignore invalid ID */
++ if (processor->id == 0xff)
++ return 0;
++
++ /* Ignore processors that can not be onlined */
++ if (!acpi_is_processor_usable(processor->lapic_flags))
++ return 0;
++
++ has_lapic_cpus = true;
++ return 0;
++}
++
+ static int __init
+ acpi_parse_lapic(union acpi_subtable_headers * header, const unsigned long end)
+ {
+@@ -257,7 +279,6 @@ acpi_parse_lapic(union acpi_subtable_headers * header, const unsigned long end)
+ processor->processor_id, /* ACPI ID */
+ processor->lapic_flags & ACPI_MADT_ENABLED);
+
+- has_lapic_cpus = true;
+ return 0;
+ }
+
+@@ -1029,6 +1050,8 @@ static int __init early_acpi_parse_madt_lapic_addr_ovr(void)
+ static int __init acpi_parse_madt_lapic_entries(void)
+ {
+ int count, x2count = 0;
++ struct acpi_subtable_proc madt_proc[2];
++ int ret;
+
+ if (!boot_cpu_has(X86_FEATURE_APIC))
+ return -ENODEV;
+@@ -1037,10 +1060,27 @@ static int __init acpi_parse_madt_lapic_entries(void)
+ acpi_parse_sapic, MAX_LOCAL_APIC);
+
+ if (!count) {
+- count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
+- acpi_parse_lapic, MAX_LOCAL_APIC);
+- x2count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_X2APIC,
+- acpi_parse_x2apic, MAX_LOCAL_APIC);
++ /* Check if there are valid LAPIC entries */
++ acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC, acpi_check_lapic, MAX_LOCAL_APIC);
++
++ /*
++ * Enumerate the APIC IDs in the order that they appear in the
++ * MADT, no matter LAPIC entry or x2APIC entry is used.
++ */
++ memset(madt_proc, 0, sizeof(madt_proc));
++ madt_proc[0].id = ACPI_MADT_TYPE_LOCAL_APIC;
++ madt_proc[0].handler = acpi_parse_lapic;
++ madt_proc[1].id = ACPI_MADT_TYPE_LOCAL_X2APIC;
++ madt_proc[1].handler = acpi_parse_x2apic;
++ ret = acpi_table_parse_entries_array(ACPI_SIG_MADT,
++ sizeof(struct acpi_table_madt),
++ madt_proc, ARRAY_SIZE(madt_proc), MAX_LOCAL_APIC);
++ if (ret < 0) {
++ pr_err("Error parsing LAPIC/X2APIC entries\n");
++ return ret;
++ }
++ count = madt_proc[0].count;
++ x2count = madt_proc[1].count;
+ }
+ if (!count && !x2count) {
+ pr_err("No LAPIC entries present\n");
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 9fe9972d2071b9..37b8244899d895 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -582,6 +582,10 @@ static __init void fix_erratum_688(void)
+
+ static __init int init_amd_nbs(void)
+ {
++ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
++ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
++ return 0;
++
+ amd_cache_northbridges();
+ amd_cache_gart();
+
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index 9c9ac606893e99..7223c38a8708fc 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -146,7 +146,8 @@ static void free_transition_pgtable(struct kimage *image)
+ image->arch.pte = NULL;
+ }
+
+-static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
++static int init_transition_pgtable(struct kimage *image, pgd_t *pgd,
++ unsigned long control_page)
+ {
+ pgprot_t prot = PAGE_KERNEL_EXEC_NOENC;
+ unsigned long vaddr, paddr;
+@@ -157,7 +158,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
+ pte_t *pte;
+
+ vaddr = (unsigned long)relocate_kernel;
+- paddr = __pa(page_address(image->control_code_page)+PAGE_SIZE);
++ paddr = control_page;
+ pgd += pgd_index(vaddr);
+ if (!pgd_present(*pgd)) {
+ p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL);
+@@ -216,7 +217,7 @@ static void *alloc_pgt_page(void *data)
+ return p;
+ }
+
+-static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
++static int init_pgtable(struct kimage *image, unsigned long control_page)
+ {
+ struct x86_mapping_info info = {
+ .alloc_pgt_page = alloc_pgt_page,
+@@ -225,12 +226,12 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ .kernpg_flag = _KERNPG_TABLE_NOENC,
+ };
+ unsigned long mstart, mend;
+- pgd_t *level4p;
+ int result;
+ int i;
+
+- level4p = (pgd_t *)__va(start_pgtable);
+- clear_page(level4p);
++ image->arch.pgd = alloc_pgt_page(image);
++ if (!image->arch.pgd)
++ return -ENOMEM;
+
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ info.page_flag |= _PAGE_ENC;
+@@ -244,8 +245,8 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ mstart = pfn_mapped[i].start << PAGE_SHIFT;
+ mend = pfn_mapped[i].end << PAGE_SHIFT;
+
+- result = kernel_ident_mapping_init(&info,
+- level4p, mstart, mend);
++ result = kernel_ident_mapping_init(&info, image->arch.pgd,
++ mstart, mend);
+ if (result)
+ return result;
+ }
+@@ -260,8 +261,8 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ mstart = image->segment[i].mem;
+ mend = mstart + image->segment[i].memsz;
+
+- result = kernel_ident_mapping_init(&info,
+- level4p, mstart, mend);
++ result = kernel_ident_mapping_init(&info, image->arch.pgd,
++ mstart, mend);
+
+ if (result)
+ return result;
+@@ -271,15 +272,19 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ * Prepare EFI systab and ACPI tables for kexec kernel since they are
+ * not covered by pfn_mapped.
+ */
+- result = map_efi_systab(&info, level4p);
++ result = map_efi_systab(&info, image->arch.pgd);
+ if (result)
+ return result;
+
+- result = map_acpi_tables(&info, level4p);
++ result = map_acpi_tables(&info, image->arch.pgd);
+ if (result)
+ return result;
+
+- return init_transition_pgtable(image, level4p);
++ /*
++ * This must be last because the intermediate page table pages it
++ * allocates will not be control pages and may overlap the image.
++ */
++ return init_transition_pgtable(image, image->arch.pgd, control_page);
+ }
+
+ static void load_segments(void)
+@@ -296,14 +301,14 @@ static void load_segments(void)
+
+ int machine_kexec_prepare(struct kimage *image)
+ {
+- unsigned long start_pgtable;
++ unsigned long control_page;
+ int result;
+
+ /* Calculate the offsets */
+- start_pgtable = page_to_pfn(image->control_code_page) << PAGE_SHIFT;
++ control_page = page_to_pfn(image->control_code_page) << PAGE_SHIFT;
+
+ /* Setup the identity mapped 64bit page table */
+- result = init_pgtable(image, start_pgtable);
++ result = init_pgtable(image, control_page);
+ if (result)
+ return result;
+
+@@ -357,13 +362,12 @@ void machine_kexec(struct kimage *image)
+ #endif
+ }
+
+- control_page = page_address(image->control_code_page) + PAGE_SIZE;
++ control_page = page_address(image->control_code_page);
+ __memcpy(control_page, relocate_kernel, KEXEC_CONTROL_CODE_MAX_SIZE);
+
+ page_list[PA_CONTROL_PAGE] = virt_to_phys(control_page);
+ page_list[VA_CONTROL_PAGE] = (unsigned long)control_page;
+- page_list[PA_TABLE_PAGE] =
+- (unsigned long)__pa(page_address(image->control_code_page));
++ page_list[PA_TABLE_PAGE] = (unsigned long)__pa(image->arch.pgd);
+
+ if (image->type == KEXEC_TYPE_DEFAULT)
+ page_list[PA_SWAP_PAGE] = (page_to_pfn(image->swap_page)
+@@ -573,8 +577,7 @@ static void kexec_mark_crashkres(bool protect)
+
+ /* Don't touch the control code page used in crash_kexec().*/
+ control = PFN_PHYS(page_to_pfn(kexec_crash_image->control_code_page));
+- /* Control code page is located in the 2nd page. */
+- kexec_mark_range(crashk_res.start, control + PAGE_SIZE - 1, protect);
++ kexec_mark_range(crashk_res.start, control - 1, protect);
+ control += KEXEC_CONTROL_PAGE_SIZE;
+ kexec_mark_range(control, crashk_res.end, protect);
+ }
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index f63f8fd00a91f3..15507e739c255b 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -838,7 +838,7 @@ void __noreturn stop_this_cpu(void *dummy)
+ #ifdef CONFIG_SMP
+ if (smp_ops.stop_this_cpu) {
+ smp_ops.stop_this_cpu();
+- unreachable();
++ BUG();
+ }
+ #endif
+
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 615922838c510b..dc1dd3f3e67fcd 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -883,7 +883,7 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
+
+ if (smp_ops.stop_this_cpu) {
+ smp_ops.stop_this_cpu();
+- unreachable();
++ BUG();
+ }
+
+ /* Assume hlt works */
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 1b4438e24814b4..9dd3796d075a56 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7227,6 +7227,19 @@ static void mmu_destroy_caches(void)
+ kmem_cache_destroy(mmu_page_header_cache);
+ }
+
++static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
++{
++ /*
++ * The NX recovery thread is spawned on-demand at the first KVM_RUN and
++ * may not be valid even though the VM is globally visible. Do nothing,
++ * as such a VM can't have any possible NX huge pages.
++ */
++ struct vhost_task *nx_thread = READ_ONCE(kvm->arch.nx_huge_page_recovery_thread);
++
++ if (nx_thread)
++ vhost_task_wake(nx_thread);
++}
++
+ static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp)
+ {
+ if (nx_hugepage_mitigation_hard_disabled)
+@@ -7287,7 +7300,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ kvm_mmu_zap_all_fast(kvm);
+ mutex_unlock(&kvm->slots_lock);
+
+- vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
++ kvm_wake_nx_recovery_thread(kvm);
+ }
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7433,7 +7446,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
+ mutex_lock(&kvm_lock);
+
+ list_for_each_entry(kvm, &vm_list, vm_list)
+- vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
++ kvm_wake_nx_recovery_thread(kvm);
+
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7565,20 +7578,34 @@ static bool kvm_nx_huge_page_recovery_worker(void *data)
+ return true;
+ }
+
++static void kvm_mmu_start_lpage_recovery(struct once *once)
++{
++ struct kvm_arch *ka = container_of(once, struct kvm_arch, nx_once);
++ struct kvm *kvm = container_of(ka, struct kvm, arch);
++ struct vhost_task *nx_thread;
++
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ nx_thread = vhost_task_create(kvm_nx_huge_page_recovery_worker,
++ kvm_nx_huge_page_recovery_worker_kill,
++ kvm, "kvm-nx-lpage-recovery");
++
++ if (!nx_thread)
++ return;
++
++ vhost_task_start(nx_thread);
++
++ /* Make the task visible only once it is fully started. */
++ WRITE_ONCE(kvm->arch.nx_huge_page_recovery_thread, nx_thread);
++}
++
+ int kvm_mmu_post_init_vm(struct kvm *kvm)
+ {
+ if (nx_hugepage_mitigation_hard_disabled)
+ return 0;
+
+- kvm->arch.nx_huge_page_last = get_jiffies_64();
+- kvm->arch.nx_huge_page_recovery_thread = vhost_task_create(
+- kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill,
+- kvm, "kvm-nx-lpage-recovery");
+-
++ call_once(&kvm->arch.nx_once, kvm_mmu_start_lpage_recovery);
+ if (!kvm->arch.nx_huge_page_recovery_thread)
+ return -ENOMEM;
+-
+- vhost_task_start(kvm->arch.nx_huge_page_recovery_thread);
+ return 0;
+ }
+
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index fb854cf20ac3be..e9af87b1281407 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -3833,7 +3833,7 @@ static int snp_begin_psc(struct vcpu_svm *svm, struct psc_buffer *psc)
+ goto next_range;
+ }
+
+- unreachable();
++ BUG();
+ }
+
+ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b49e2eb4893080..d760b19d1e513e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11478,6 +11478,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ struct kvm_run *kvm_run = vcpu->run;
+ int r;
+
++ r = kvm_mmu_post_init_vm(vcpu->kvm);
++ if (r)
++ return r;
++
+ vcpu_load(vcpu);
+ kvm_sigset_activate(vcpu);
+ kvm_run->flags = 0;
+@@ -12751,7 +12755,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+
+ int kvm_arch_post_init_vm(struct kvm *kvm)
+ {
+- return kvm_mmu_post_init_vm(kvm);
++ once_init(&kvm->arch.nx_once);
++ return 0;
+ }
+
+ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
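
The x86.c/mmu.c changes above defer creation of the NX recovery thread to the first KVM_RUN and guard it with call_once() so concurrently entering vCPUs spawn it exactly once. A userspace analogue of the same idiom using pthread_once; the KVM-specific names here are stand-ins:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_once_t nx_once = PTHREAD_ONCE_INIT;

    static void start_recovery_worker(void)
    {
        puts("recovery worker started (runs exactly once)");
    }

    static int fake_vcpu_run(void)
    {
        /* every entry point calls this; only the first does the work */
        pthread_once(&nx_once, start_recovery_worker);
        return 0;
    }

    int main(void)
    {
        fake_vcpu_run();
        fake_vcpu_run();    /* no-op the second time */
        return 0;
    }
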
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index e6c469b323ccb7..ac52255fab01f4 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -678,7 +678,7 @@ page_fault_oops(struct pt_regs *regs, unsigned long error_code,
+ ASM_CALL_ARG3,
+ , [arg1] "r" (regs), [arg2] "r" (address), [arg3] "r" (&info));
+
+- unreachable();
++ BUG();
+ }
+ #endif
+
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 98a9bb92d75c88..12d5a0f37432ea 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -1010,4 +1010,34 @@ DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_suspend);
+ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_resume);
+ DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_suspend);
+ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_resume);
++
++/*
++ * Putting PCIe root ports on Ryzen SoCs with USB4 controllers into D3hot
++ * may cause problems when the system attempts wake up from s2idle.
++ *
++ * On the TUXEDO Sirius 16 Gen 1 with a specific old BIOS this manifests as
++ * a system hang.
++ */
++static const struct dmi_system_id quirk_tuxeo_rp_d3_dmi_table[] = {
++ {
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_EXACT_MATCH(DMI_BOARD_NAME, "APX958"),
++ DMI_EXACT_MATCH(DMI_BIOS_VERSION, "V1.00A00_20240108"),
++ },
++ },
++ {}
++};
++
++static void quirk_tuxeo_rp_d3(struct pci_dev *pdev)
++{
++ struct pci_dev *root_pdev;
++
++ if (dmi_check_system(quirk_tuxeo_rp_d3_dmi_table)) {
++ root_pdev = pcie_find_root_port(pdev);
++ if (root_pdev)
++ root_pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3;
++ }
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1502, quirk_tuxeo_rp_d3);
+ #endif /* CONFIG_SUSPEND */
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 721a57700a3b05..55978e0dc17551 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -117,8 +117,8 @@ SYM_FUNC_START(xen_hypercall_hvm)
+ pop %ebx
+ pop %eax
+ #else
+- lea xen_hypercall_amd(%rip), %rbx
+- cmp %rax, %rbx
++ lea xen_hypercall_amd(%rip), %rcx
++ cmp %rax, %rcx
+ #ifdef CONFIG_FRAME_POINTER
+ pop %rax /* Dummy pop. */
+ #endif
+@@ -132,6 +132,7 @@ SYM_FUNC_START(xen_hypercall_hvm)
+ pop %rcx
+ pop %rax
+ #endif
++ FRAME_END
+ /* Use correct hypercall function. */
+ jz xen_hypercall_amd
+ jmp xen_hypercall_intel
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 45a395862fbc88..f1cf7f2909f3a7 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1138,6 +1138,7 @@ static void blkcg_fill_root_iostats(void)
+ blkg_iostat_set(&blkg->iostat.cur, &tmp);
+ u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ }
++ class_dev_iter_exit(&iter);
+ }
+
+ static void blkcg_print_one_stat(struct blkcg_gq *blkg, struct seq_file *s)
+diff --git a/block/fops.c b/block/fops.c
+index 13a67940d0408d..43983be5a2b3b1 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -758,11 +758,12 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ file_accessed(iocb->ki_filp);
+
+ ret = blkdev_direct_IO(iocb, to);
+- if (ret >= 0) {
++ if (ret > 0) {
+ iocb->ki_pos += ret;
+ count -= ret;
+ }
+- iov_iter_revert(to, count - iov_iter_count(to));
++ if (ret != -EIOCBQUEUED)
++ iov_iter_revert(to, count - iov_iter_count(to));
+ if (ret < 0 || !count)
+ goto reexpand;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index 10b7ae0f866c98..ef9a4ba18cb8a8 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -73,8 +73,8 @@ static int ivpu_resume(struct ivpu_device *vdev)
+ int ret;
+
+ retry:
+- pci_restore_state(to_pci_dev(vdev->drm.dev));
+ pci_set_power_state(to_pci_dev(vdev->drm.dev), PCI_D0);
++ pci_restore_state(to_pci_dev(vdev->drm.dev));
+
+ ret = ivpu_hw_power_up(vdev);
+ if (ret) {
+@@ -295,7 +295,10 @@ int ivpu_rpm_get(struct ivpu_device *vdev)
+ int ret;
+
+ ret = pm_runtime_resume_and_get(vdev->drm.dev);
+- drm_WARN_ON(&vdev->drm, ret < 0);
++ if (ret < 0) {
++ ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
++ pm_runtime_set_suspended(vdev->drm.dev);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index ada93cfde9ba1c..cff6685fa6cc6b 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -173,8 +173,6 @@ static struct gen_pool *ghes_estatus_pool;
+ static struct ghes_estatus_cache __rcu *ghes_estatus_caches[GHES_ESTATUS_CACHES_SIZE];
+ static atomic_t ghes_estatus_cache_alloced;
+
+-static int ghes_panic_timeout __read_mostly = 30;
+-
+ static void __iomem *ghes_map(u64 pfn, enum fixed_addresses fixmap_idx)
+ {
+ phys_addr_t paddr;
+@@ -983,14 +981,16 @@ static void __ghes_panic(struct ghes *ghes,
+ struct acpi_hest_generic_status *estatus,
+ u64 buf_paddr, enum fixed_addresses fixmap_idx)
+ {
++ const char *msg = GHES_PFX "Fatal hardware error";
++
+ __ghes_print_estatus(KERN_EMERG, ghes->generic, estatus);
+
+ ghes_clear_estatus(ghes, estatus, buf_paddr, fixmap_idx);
+
+- /* reboot to log the error! */
+ if (!panic_timeout)
+- panic_timeout = ghes_panic_timeout;
+- panic("Fatal hardware error!");
++ pr_emerg("%s but panic disabled\n", msg);
++
++ panic(msg);
+ }
+
+ static int ghes_proc(struct ghes *ghes)
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index 747f83f7114d29..e549914a636c66 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -287,9 +287,7 @@ static acpi_status acpi_platformrt_space_handler(u32 function,
+ if (!handler || !module)
+ goto invalid_guid;
+
+- if (!handler->handler_addr ||
+- !handler->static_data_buffer_addr ||
+- !handler->acpi_param_buffer_addr) {
++ if (!handler->handler_addr) {
+ buffer->prm_status = PRM_HANDLER_ERROR;
+ return AE_OK;
+ }
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 80a52a4e66dd16..e9186339f6e6bb 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -1187,8 +1187,6 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
+ }
+ break;
+ }
+- if (nval == 0)
+- return -EINVAL;
+
+ if (obj->type == ACPI_TYPE_BUFFER) {
+ if (proptype != DEV_PROP_U8)
+@@ -1212,9 +1210,11 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
+ ret = acpi_copy_property_array_uint(items, (u64 *)val, nval);
+ break;
+ case DEV_PROP_STRING:
+- ret = acpi_copy_property_array_string(
+- items, (char **)val,
+- min_t(u32, nval, obj->package.count));
++ nval = min_t(u32, nval, obj->package.count);
++ if (nval == 0)
++ return -ENODATA;
++
++ ret = acpi_copy_property_array_string(items, (char **)val, nval);
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
+index 67f277e1c3bf31..5a46c066abc365 100644
+--- a/drivers/ata/libata-sff.c
++++ b/drivers/ata/libata-sff.c
+@@ -601,7 +601,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct page *page;
+- unsigned int offset;
++ unsigned int offset, count;
+
+ if (!qc->cursg) {
+ qc->curbytes = qc->nbytes;
+@@ -617,25 +617,27 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ page = nth_page(page, (offset >> PAGE_SHIFT));
+ offset %= PAGE_SIZE;
+
+- trace_ata_sff_pio_transfer_data(qc, offset, qc->sect_size);
++ /* don't overrun current sg */
++ count = min(qc->cursg->length - qc->cursg_ofs, qc->sect_size);
++
++ trace_ata_sff_pio_transfer_data(qc, offset, count);
+
+ /*
+ * Split the transfer when it splits a page boundary. Note that the
+ * split still has to be dword aligned like all ATA data transfers.
+ */
+ WARN_ON_ONCE(offset % 4);
+- if (offset + qc->sect_size > PAGE_SIZE) {
++ if (offset + count > PAGE_SIZE) {
+ unsigned int split_len = PAGE_SIZE - offset;
+
+ ata_pio_xfer(qc, page, offset, split_len);
+- ata_pio_xfer(qc, nth_page(page, 1), 0,
+- qc->sect_size - split_len);
++ ata_pio_xfer(qc, nth_page(page, 1), 0, count - split_len);
+ } else {
+- ata_pio_xfer(qc, page, offset, qc->sect_size);
++ ata_pio_xfer(qc, page, offset, count);
+ }
+
+- qc->curbytes += qc->sect_size;
+- qc->cursg_ofs += qc->sect_size;
++ qc->curbytes += count;
++ qc->cursg_ofs += count;
+
+ if (qc->cursg_ofs == qc->cursg->length) {
+ qc->cursg = sg_next(qc->cursg);
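
The libata fix above boils down to clamping each PIO transfer to whatever remains in the current scatterlist entry instead of assuming a full sector is available. The arithmetic in isolation, with sizes chosen for illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int sect_size = 512, sg_length = 700, sg_ofs = 400;

        /* never read past the end of the current sg entry */
        unsigned int count = sg_length - sg_ofs < sect_size ?
                             sg_length - sg_ofs : sect_size;

        printf("count = %u\n", count);  /* 300, not 512 */
        return 0;
    }
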
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 258a5cb6f27afe..6bc6dd417adf64 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -604,6 +604,8 @@ static const struct usb_device_id quirks_table[] = {
+ /* MediaTek MT7922 Bluetooth devices */
+ { USB_DEVICE(0x13d3, 0x3585), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3610), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* MediaTek MT7922A Bluetooth devices */
+ { USB_DEVICE(0x0489, 0xe0d8), .driver_info = BTUSB_MEDIATEK |
+@@ -668,6 +670,8 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3608), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3628), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* Additional Realtek 8723AE Bluetooth devices */
+ { USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/char/misc.c b/drivers/char/misc.c
+index 541edc26ec89a1..2cf595d2e10b85 100644
+--- a/drivers/char/misc.c
++++ b/drivers/char/misc.c
+@@ -63,16 +63,30 @@ static DEFINE_MUTEX(misc_mtx);
+ #define DYNAMIC_MINORS 128 /* like dynamic majors */
+ static DEFINE_IDA(misc_minors_ida);
+
+-static int misc_minor_alloc(void)
++static int misc_minor_alloc(int minor)
+ {
+- int ret;
+-
+- ret = ida_alloc_max(&misc_minors_ida, DYNAMIC_MINORS - 1, GFP_KERNEL);
+- if (ret >= 0) {
+- ret = DYNAMIC_MINORS - ret - 1;
++ int ret = 0;
++
++ if (minor == MISC_DYNAMIC_MINOR) {
++ /* allocate free id */
++ ret = ida_alloc_max(&misc_minors_ida, DYNAMIC_MINORS - 1, GFP_KERNEL);
++ if (ret >= 0) {
++ ret = DYNAMIC_MINORS - ret - 1;
++ } else {
++ ret = ida_alloc_range(&misc_minors_ida, MISC_DYNAMIC_MINOR + 1,
++ MINORMASK, GFP_KERNEL);
++ }
+ } else {
+- ret = ida_alloc_range(&misc_minors_ida, MISC_DYNAMIC_MINOR + 1,
+- MINORMASK, GFP_KERNEL);
++ /* specific minor, check if it is in dynamic or misc dynamic range */
++ if (minor < DYNAMIC_MINORS) {
++ minor = DYNAMIC_MINORS - minor - 1;
++ ret = ida_alloc_range(&misc_minors_ida, minor, minor, GFP_KERNEL);
++ } else if (minor > MISC_DYNAMIC_MINOR) {
++ ret = ida_alloc_range(&misc_minors_ida, minor, minor, GFP_KERNEL);
++ } else {
++ /* case of non-dynamic minors, no need to allocate id */
++ ret = 0;
++ }
+ }
+ return ret;
+ }
+@@ -219,7 +233,7 @@ int misc_register(struct miscdevice *misc)
+ mutex_lock(&misc_mtx);
+
+ if (is_dynamic) {
+- int i = misc_minor_alloc();
++ int i = misc_minor_alloc(misc->minor);
+
+ if (i < 0) {
+ err = -EBUSY;
+@@ -228,6 +242,7 @@ int misc_register(struct miscdevice *misc)
+ misc->minor = i;
+ } else {
+ struct miscdevice *c;
++ int i;
+
+ list_for_each_entry(c, &misc_list, list) {
+ if (c->minor == misc->minor) {
+@@ -235,6 +250,12 @@ int misc_register(struct miscdevice *misc)
+ goto out;
+ }
+ }
++
++ i = misc_minor_alloc(misc->minor);
++ if (i < 0) {
++ err = -EBUSY;
++ goto out;
++ }
+ }
+
+ dev = MKDEV(MISC_MAJOR, misc->minor);
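
The reworked misc_minor_alloc() now reserves fixed minors in the same IDA that backs dynamic ones, so a later dynamic request can never collide with a statically declared device. A rough userspace model of the id mapping, using a plain bitmap in place of the kernel IDA (names are illustrative):

  #include <stdio.h>

  #define DYNAMIC_MINORS     128
  #define MISC_DYNAMIC_MINOR 255

  static unsigned char used[4096];        /* toy stand-in for the IDA */

  static int ida_get(int lo, int hi)      /* first free id in [lo, hi] */
  {
          for (int i = lo; i <= hi; i++)
                  if (!used[i]) { used[i] = 1; return i; }
          return -1;
  }

  /* Mirrors the patched misc_minor_alloc(): dynamic requests count down
   * from DYNAMIC_MINORS - 1, fixed requests reserve their own slot. */
  static int minor_alloc(int minor)
  {
          if (minor == MISC_DYNAMIC_MINOR) {
                  int id = ida_get(0, DYNAMIC_MINORS - 1);
                  return id >= 0 ? DYNAMIC_MINORS - id - 1 : -1;
          }
          if (minor < DYNAMIC_MINORS) {   /* fixed minor in dynamic range */
                  int slot = DYNAMIC_MINORS - minor - 1;
                  return ida_get(slot, slot);
          }
          if (minor > MISC_DYNAMIC_MINOR)
                  return ida_get(minor, minor);
          return 0;       /* classic static minors: nothing to track */
  }

  int main(void)
  {
          printf("reserve 100:   %s\n", minor_alloc(100) >= 0 ? "ok" : "busy");
          printf("reserve 100:   %s\n", minor_alloc(100) >= 0 ? "ok" : "busy");
          printf("dynamic minor: %d\n", minor_alloc(MISC_DYNAMIC_MINOR));
          return 0;
  }
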
+diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
+index 69533d0bfb51e8..cf02ec646f46f0 100644
+--- a/drivers/char/tpm/eventlog/acpi.c
++++ b/drivers/char/tpm/eventlog/acpi.c
+@@ -63,6 +63,11 @@ static bool tpm_is_tpm2_log(void *bios_event_log, u64 len)
+ return n == 0;
+ }
+
++static void tpm_bios_log_free(void *data)
++{
++ kvfree(data);
++}
++
+ /* read binary bios log */
+ int tpm_read_log_acpi(struct tpm_chip *chip)
+ {
+@@ -136,7 +141,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ }
+
+ /* malloc EventLog space */
+- log->bios_event_log = devm_kmalloc(&chip->dev, len, GFP_KERNEL);
++ log->bios_event_log = kvmalloc(len, GFP_KERNEL);
+ if (!log->bios_event_log)
+ return -ENOMEM;
+
+@@ -161,10 +166,16 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ goto err;
+ }
+
++ ret = devm_add_action(&chip->dev, tpm_bios_log_free, log->bios_event_log);
++ if (ret) {
++ log->bios_event_log = NULL;
++ goto err;
++ }
++
+ return format;
+
+ err:
+- devm_kfree(&chip->dev, log->bios_event_log);
++ tpm_bios_log_free(log->bios_event_log);
+ log->bios_event_log = NULL;
+ return ret;
+ }
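
The TPM change swaps devm_kmalloc() for kvmalloc() (so large event logs can fall back to vmalloc) and then ties the buffer's lifetime back to the device via devm_add_action(). The pattern, reduced to plain C with a toy action list standing in for the kernel devres API:

  #include <stdio.h>
  #include <stdlib.h>

  typedef void (*action_fn)(void *);

  static action_fn actions[8];
  static void *action_data[8];
  static int nactions;

  /* Toy devm_add_action(): may fail, in which case the caller must
   * release the resource itself, exactly as the TPM error path does. */
  static int add_action(action_fn fn, void *data)
  {
          if (nactions == 8)
                  return -1;
          actions[nactions] = fn;
          action_data[nactions++] = data;
          return 0;
  }

  static void log_free(void *data) { free(data); }

  static int read_log(void **out)
  {
          void *buf = malloc(64 * 1024);  /* stands in for kvmalloc() */
          if (!buf)
                  return -1;
          if (add_action(log_free, buf)) {
                  log_free(buf);          /* registration failed: free now */
                  return -1;
          }
          *out = buf;
          return 0;
  }

  int main(void)
  {
          void *log;
          if (read_log(&log) == 0)
                  puts("log buffer registered for automatic release");
          while (nactions) {              /* "device teardown" runs actions */
                  nactions--;
                  actions[nactions](action_data[nactions]);
          }
          return 0;
  }
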
+diff --git a/drivers/clk/clk-loongson2.c b/drivers/clk/clk-loongson2.c
+index 7082b4309c6f15..0d9485e83938a1 100644
+--- a/drivers/clk/clk-loongson2.c
++++ b/drivers/clk/clk-loongson2.c
+@@ -294,7 +294,7 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ for (p = data; p->name; p++)
+- clks_num++;
++ clks_num = max(clks_num, p->id + 1);
+
+ clp = devm_kzalloc(dev, struct_size(clp, clk_data.hws, clks_num),
+ GFP_KERNEL);
+@@ -309,6 +309,9 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ clp->clk_data.num = clks_num;
+ clp->dev = dev;
+
++ /* Avoid returning NULL for unused id */
++ memset_p((void **)clp->clk_data.hws, ERR_PTR(-ENOENT), clks_num);
++
+ for (i = 0; i < clks_num; i++) {
+ p = &data[i];
+ switch (p->type) {
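
Here the provider array is sized by the highest clock id rather than the entry count, and unused slots are pre-filled with ERR_PTR(-ENOENT) so lookups of gaps fail cleanly instead of returning NULL. A compact illustration; ERR_ENOENT models the kernel's ERR_PTR(-ENOENT) and the table is made up:

  #include <stdint.h>
  #include <stdio.h>

  #define ERR_ENOENT ((void *)(intptr_t)-2)

  struct clk_desc { const char *name; int id; };

  int main(void)
  {
          /* Sparse table: ids 0 and 5 exist, 1..4 are gaps. */
          struct clk_desc data[] = { { "core", 0 }, { "uart", 5 }, { 0, 0 } };
          size_t clks_num = 0;
          void *hws[16];

          for (struct clk_desc *p = data; p->name; p++)   /* size by max id */
                  if ((size_t)p->id + 1 > clks_num)
                          clks_num = p->id + 1;

          for (size_t i = 0; i < clks_num; i++)           /* poison the gaps */
                  hws[i] = ERR_ENOENT;
          for (struct clk_desc *p = data; p->name; p++)
                  hws[p->id] = (void *)p;                 /* real clocks */

          printf("clks_num=%zu, hws[3] is %s\n", clks_num,
                 hws[3] == ERR_ENOENT ? "ERR_PTR(-ENOENT)" : "a clock");
          return 0;
  }
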
+diff --git a/drivers/clk/mediatek/clk-mt2701-aud.c b/drivers/clk/mediatek/clk-mt2701-aud.c
+index 425c69cfb105a6..e103121cf58e77 100644
+--- a/drivers/clk/mediatek/clk-mt2701-aud.c
++++ b/drivers/clk/mediatek/clk-mt2701-aud.c
+@@ -55,10 +55,16 @@ static const struct mtk_gate audio_clks[] = {
+ GATE_DUMMY(CLK_DUMMY, "aud_dummy"),
+ /* AUDIO0 */
+ GATE_AUDIO0(CLK_AUD_AFE, "audio_afe", "aud_intbus_sel", 2),
++ GATE_DUMMY(CLK_AUD_LRCK_DETECT, "audio_lrck_detect_dummy"),
++ GATE_DUMMY(CLK_AUD_I2S, "audio_i2c_dummy"),
++ GATE_DUMMY(CLK_AUD_APLL_TUNER, "audio_apll_tuner_dummy"),
+ GATE_AUDIO0(CLK_AUD_HDMI, "audio_hdmi", "audpll_sel", 20),
+ GATE_AUDIO0(CLK_AUD_SPDF, "audio_spdf", "audpll_sel", 21),
+ GATE_AUDIO0(CLK_AUD_SPDF2, "audio_spdf2", "audpll_sel", 22),
+ GATE_AUDIO0(CLK_AUD_APLL, "audio_apll", "audpll_sel", 23),
++ GATE_DUMMY(CLK_AUD_TML, "audio_tml_dummy"),
++ GATE_DUMMY(CLK_AUD_AHB_IDLE_EXT, "audio_ahb_idle_ext_dummy"),
++ GATE_DUMMY(CLK_AUD_AHB_IDLE_INT, "audio_ahb_idle_int_dummy"),
+ /* AUDIO1 */
+ GATE_AUDIO1(CLK_AUD_I2SIN1, "audio_i2sin1", "aud_mux1_sel", 0),
+ GATE_AUDIO1(CLK_AUD_I2SIN2, "audio_i2sin2", "aud_mux1_sel", 1),
+@@ -76,10 +82,12 @@ static const struct mtk_gate audio_clks[] = {
+ GATE_AUDIO1(CLK_AUD_ASRCI2, "audio_asrci2", "asm_h_sel", 13),
+ GATE_AUDIO1(CLK_AUD_ASRCO1, "audio_asrco1", "asm_h_sel", 14),
+ GATE_AUDIO1(CLK_AUD_ASRCO2, "audio_asrco2", "asm_h_sel", 15),
++ GATE_DUMMY(CLK_AUD_HDMIRX, "audio_hdmirx_dummy"),
+ GATE_AUDIO1(CLK_AUD_INTDIR, "audio_intdir", "intdir_sel", 20),
+ GATE_AUDIO1(CLK_AUD_A1SYS, "audio_a1sys", "aud_mux1_sel", 21),
+ GATE_AUDIO1(CLK_AUD_A2SYS, "audio_a2sys", "aud_mux2_sel", 22),
+ GATE_AUDIO1(CLK_AUD_AFE_CONN, "audio_afe_conn", "aud_mux1_sel", 23),
++ GATE_DUMMY(CLK_AUD_AFE_PCMIF, "audio_afe_pcmif_dummy"),
+ GATE_AUDIO1(CLK_AUD_AFE_MRGIF, "audio_afe_mrgif", "aud_mux1_sel", 25),
+ /* AUDIO2 */
+ GATE_AUDIO2(CLK_AUD_MMIF_UL1, "audio_ul1", "aud_mux1_sel", 0),
+@@ -100,6 +108,8 @@ static const struct mtk_gate audio_clks[] = {
+ GATE_AUDIO2(CLK_AUD_MMIF_AWB2, "audio_awb2", "aud_mux1_sel", 15),
+ GATE_AUDIO2(CLK_AUD_MMIF_DAI, "audio_dai", "aud_mux1_sel", 16),
+ /* AUDIO3 */
++ GATE_DUMMY(CLK_AUD_DMIC1, "audio_dmic1_dummy"),
++ GATE_DUMMY(CLK_AUD_DMIC2, "audio_dmic2_dummy"),
+ GATE_AUDIO3(CLK_AUD_ASRCI3, "audio_asrci3", "asm_h_sel", 2),
+ GATE_AUDIO3(CLK_AUD_ASRCI4, "audio_asrci4", "asm_h_sel", 3),
+ GATE_AUDIO3(CLK_AUD_ASRCI5, "audio_asrci5", "asm_h_sel", 4),
+diff --git a/drivers/clk/mediatek/clk-mt2701-bdp.c b/drivers/clk/mediatek/clk-mt2701-bdp.c
+index 5da3eabffd3e76..f11c7a4fa37b65 100644
+--- a/drivers/clk/mediatek/clk-mt2701-bdp.c
++++ b/drivers/clk/mediatek/clk-mt2701-bdp.c
+@@ -31,6 +31,7 @@ static const struct mtk_gate_regs bdp1_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &bdp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+
+ static const struct mtk_gate bdp_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "bdp_dummy"),
+ GATE_BDP0(CLK_BDP_BRG_BA, "brg_baclk", "mm_sel", 0),
+ GATE_BDP0(CLK_BDP_BRG_DRAM, "brg_dram", "mm_sel", 1),
+ GATE_BDP0(CLK_BDP_LARB_DRAM, "larb_dram", "mm_sel", 2),
+diff --git a/drivers/clk/mediatek/clk-mt2701-img.c b/drivers/clk/mediatek/clk-mt2701-img.c
+index 875594bc9dcba8..c158e54c46526e 100644
+--- a/drivers/clk/mediatek/clk-mt2701-img.c
++++ b/drivers/clk/mediatek/clk-mt2701-img.c
+@@ -22,6 +22,7 @@ static const struct mtk_gate_regs img_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
+ static const struct mtk_gate img_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "img_dummy"),
+ GATE_IMG(CLK_IMG_SMI_COMM, "img_smi_comm", "mm_sel", 0),
+ GATE_IMG(CLK_IMG_RESZ, "img_resz", "mm_sel", 1),
+ GATE_IMG(CLK_IMG_JPGDEC_SMI, "img_jpgdec_smi", "mm_sel", 5),
+diff --git a/drivers/clk/mediatek/clk-mt2701-mm.c b/drivers/clk/mediatek/clk-mt2701-mm.c
+index bc68fa718878f9..474d87d62e8331 100644
+--- a/drivers/clk/mediatek/clk-mt2701-mm.c
++++ b/drivers/clk/mediatek/clk-mt2701-mm.c
+@@ -31,6 +31,7 @@ static const struct mtk_gate_regs disp1_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &disp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
+ static const struct mtk_gate mm_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "mm_dummy"),
+ GATE_DISP0(CLK_MM_SMI_COMMON, "mm_smi_comm", "mm_sel", 0),
+ GATE_DISP0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1),
+ GATE_DISP0(CLK_MM_CMDQ, "mm_cmdq", "mm_sel", 2),
+diff --git a/drivers/clk/mediatek/clk-mt2701-vdec.c b/drivers/clk/mediatek/clk-mt2701-vdec.c
+index 94db86f8d0a462..5299d92f3aba0f 100644
+--- a/drivers/clk/mediatek/clk-mt2701-vdec.c
++++ b/drivers/clk/mediatek/clk-mt2701-vdec.c
+@@ -31,6 +31,7 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+
+ static const struct mtk_gate vdec_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "vdec_dummy"),
+ GATE_VDEC0(CLK_VDEC_CKGEN, "vdec_cken", "vdec_sel", 0),
+ GATE_VDEC1(CLK_VDEC_LARB, "vdec_larb_cken", "mm_sel", 0),
+ };
+diff --git a/drivers/clk/mmp/pwr-island.c b/drivers/clk/mmp/pwr-island.c
+index edaa2433a472ad..eaf5d2c5e59337 100644
+--- a/drivers/clk/mmp/pwr-island.c
++++ b/drivers/clk/mmp/pwr-island.c
+@@ -106,10 +106,10 @@ struct generic_pm_domain *mmp_pm_domain_register(const char *name,
+ pm_domain->flags = flags;
+ pm_domain->lock = lock;
+
+- pm_genpd_init(&pm_domain->genpd, NULL, true);
+ pm_domain->genpd.name = name;
+ pm_domain->genpd.power_on = mmp_pm_domain_power_on;
+ pm_domain->genpd.power_off = mmp_pm_domain_power_off;
++ pm_genpd_init(&pm_domain->genpd, NULL, true);
+
+ return &pm_domain->genpd;
+ }
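
The reorder matters because pm_genpd_init() makes the domain visible to the framework; the callbacks have to be in place before that point, not after. The generic shape of the fix, with a toy publication step:

  #include <stdio.h>

  struct domain {
          const char *name;
          void (*power_on)(void);
  };

  static struct domain *registered;       /* what the "framework" sees */

  static void genpd_init(struct domain *d)
  {
          registered = d;                 /* publication point */
  }

  static void power_on(void) { puts("power on"); }

  int main(void)
  {
          static struct domain d;

          /* fill in every field first ... */
          d.name = "mmp-island";
          d.power_on = power_on;
          /* ... and only then publish to the framework */
          genpd_init(&d);

          registered->power_on();
          return 0;
  }
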
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 9ba675f229b144..16145f74bbc853 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -1022,6 +1022,7 @@ config SM_GCC_7150
+ config SM_GCC_8150
+ tristate "SM8150 Global Clock Controller"
+ depends on ARM64 || COMPILE_TEST
++ select QCOM_GDSC
+ help
+ Support for the global clock controller on SM8150 devices.
+ Say Y if you want to use peripheral devices such as UART,
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 49687512184b92..10e276dabff93d 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -432,6 +432,8 @@ void clk_alpha_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ mask |= config->pre_div_mask;
+ mask |= config->post_div_mask;
+ mask |= config->vco_mask;
++ mask |= config->alpha_en_mask;
++ mask |= config->alpha_mode_mask;
+
+ regmap_update_bits(regmap, PLL_USER_CTL(pll), mask, val);
+
+diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
+index eefc322ce36798..e6c33010cfbf69 100644
+--- a/drivers/clk/qcom/clk-rpmh.c
++++ b/drivers/clk/qcom/clk-rpmh.c
+@@ -329,7 +329,7 @@ static unsigned long clk_rpmh_bcm_recalc_rate(struct clk_hw *hw,
+ {
+ struct clk_rpmh *c = to_clk_rpmh(hw);
+
+- return c->aggr_state * c->unit;
++ return (unsigned long)c->aggr_state * c->unit;
+ }
+
+ static const struct clk_ops clk_rpmh_bcm_ops = {
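
The cast matters because aggr_state and unit are 32-bit: the product is computed in 32-bit arithmetic and wraps before being widened to unsigned long. A two-line demonstration of the failure mode and the fix on an LP64 target (the values are arbitrary examples):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint32_t aggr_state = 80000;
          uint32_t unit = 76800;

          unsigned long wrong = aggr_state * unit;                /* wraps mod 2^32 */
          unsigned long right = (unsigned long)aggr_state * unit; /* 64-bit math */

          printf("wrong=%lu right=%lu\n", wrong, right);
          return 0;
  }
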
+diff --git a/drivers/clk/qcom/dispcc-sm6350.c b/drivers/clk/qcom/dispcc-sm6350.c
+index 50facb36701af9..2bc6b5f99f5725 100644
+--- a/drivers/clk/qcom/dispcc-sm6350.c
++++ b/drivers/clk/qcom/dispcc-sm6350.c
+@@ -187,13 +187,12 @@ static struct clk_rcg2 disp_cc_mdss_dp_aux_clk_src = {
+ .cmd_rcgr = 0x1144,
+ .mnd_width = 0,
+ .hid_width = 5,
++ .parent_map = disp_cc_parent_map_6,
+ .freq_tbl = ftbl_disp_cc_mdss_dp_aux_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "disp_cc_mdss_dp_aux_clk_src",
+- .parent_data = &(const struct clk_parent_data){
+- .fw_name = "bi_tcxo",
+- },
+- .num_parents = 1,
++ .parent_data = disp_cc_parent_data_6,
++ .num_parents = ARRAY_SIZE(disp_cc_parent_data_6),
+ .ops = &clk_rcg2_ops,
+ },
+ };
+diff --git a/drivers/clk/qcom/gcc-mdm9607.c b/drivers/clk/qcom/gcc-mdm9607.c
+index 6e6068b168e66e..07f1b78d737a15 100644
+--- a/drivers/clk/qcom/gcc-mdm9607.c
++++ b/drivers/clk/qcom/gcc-mdm9607.c
+@@ -535,7 +535,7 @@ static struct clk_rcg2 blsp1_uart5_apps_clk_src = {
+ };
+
+ static struct clk_rcg2 blsp1_uart6_apps_clk_src = {
+- .cmd_rcgr = 0x6044,
++ .cmd_rcgr = 0x7044,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+diff --git a/drivers/clk/qcom/gcc-sm6350.c b/drivers/clk/qcom/gcc-sm6350.c
+index a811fad2aa2785..74346dc026068a 100644
+--- a/drivers/clk/qcom/gcc-sm6350.c
++++ b/drivers/clk/qcom/gcc-sm6350.c
+@@ -182,6 +182,14 @@ static const struct clk_parent_data gcc_parent_data_2_ao[] = {
+ { .hw = &gpll0_out_odd.clkr.hw },
+ };
+
++static const struct parent_map gcc_parent_map_3[] = {
++ { P_BI_TCXO, 0 },
++};
++
++static const struct clk_parent_data gcc_parent_data_3[] = {
++ { .fw_name = "bi_tcxo" },
++};
++
+ static const struct parent_map gcc_parent_map_4[] = {
+ { P_BI_TCXO, 0 },
+ { P_GPLL0_OUT_MAIN, 1 },
+@@ -701,13 +709,12 @@ static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+ .cmd_rcgr = 0x3a0b0,
+ .mnd_width = 0,
+ .hid_width = 5,
++ .parent_map = gcc_parent_map_3,
+ .freq_tbl = ftbl_gcc_ufs_phy_phy_aux_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_ufs_phy_phy_aux_clk_src",
+- .parent_data = &(const struct clk_parent_data){
+- .fw_name = "bi_tcxo",
+- },
+- .num_parents = 1,
++ .parent_data = gcc_parent_data_3,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ .ops = &clk_rcg2_ops,
+ },
+ };
+@@ -764,13 +771,12 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ .cmd_rcgr = 0x1a034,
+ .mnd_width = 0,
+ .hid_width = 5,
++ .parent_map = gcc_parent_map_3,
+ .freq_tbl = ftbl_gcc_usb30_prim_mock_utmi_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_prim_mock_utmi_clk_src",
+- .parent_data = &(const struct clk_parent_data){
+- .fw_name = "bi_tcxo",
+- },
+- .num_parents = 1,
++ .parent_data = gcc_parent_data_3,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ .ops = &clk_rcg2_ops,
+ },
+ };
+diff --git a/drivers/clk/qcom/gcc-sm8550.c b/drivers/clk/qcom/gcc-sm8550.c
+index 5abaeddd6afcc5..862a9bf73bcb5d 100644
+--- a/drivers/clk/qcom/gcc-sm8550.c
++++ b/drivers/clk/qcom/gcc-sm8550.c
+@@ -3003,7 +3003,7 @@ static struct gdsc pcie_0_gdsc = {
+ .pd = {
+ .name = "pcie_0_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3014,7 +3014,7 @@ static struct gdsc pcie_0_phy_gdsc = {
+ .pd = {
+ .name = "pcie_0_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3025,7 +3025,7 @@ static struct gdsc pcie_1_gdsc = {
+ .pd = {
+ .name = "pcie_1_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3036,7 +3036,7 @@ static struct gdsc pcie_1_phy_gdsc = {
+ .pd = {
+ .name = "pcie_1_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
+index fd9d6544bdd53a..9dd5c48f33bed5 100644
+--- a/drivers/clk/qcom/gcc-sm8650.c
++++ b/drivers/clk/qcom/gcc-sm8650.c
+@@ -3437,7 +3437,7 @@ static struct gdsc pcie_0_gdsc = {
+ .pd = {
+ .name = "pcie_0_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+@@ -3448,7 +3448,7 @@ static struct gdsc pcie_0_phy_gdsc = {
+ .pd = {
+ .name = "pcie_0_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+@@ -3459,7 +3459,7 @@ static struct gdsc pcie_1_gdsc = {
+ .pd = {
+ .name = "pcie_1_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+@@ -3470,7 +3470,7 @@ static struct gdsc pcie_1_phy_gdsc = {
+ .pd = {
+ .name = "pcie_1_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a100.c b/drivers/clk/sunxi-ng/ccu-sun50i-a100.c
+index bbaa82978716a9..a59e420b195d77 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a100.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a100.c
+@@ -436,7 +436,7 @@ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc0_clk, "mmc0", mmc_parents, 0x830,
+ 24, 2, /* mux */
+ BIT(31), /* gate */
+ 2, /* post-div */
+- CLK_SET_RATE_NO_REPARENT);
++ 0);
+
+ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc1_clk, "mmc1", mmc_parents, 0x834,
+ 0, 4, /* M */
+@@ -444,7 +444,7 @@ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc1_clk, "mmc1", mmc_parents, 0x834,
+ 24, 2, /* mux */
+ BIT(31), /* gate */
+ 2, /* post-div */
+- CLK_SET_RATE_NO_REPARENT);
++ 0);
+
+ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc2_clk, "mmc2", mmc_parents, 0x838,
+ 0, 4, /* M */
+@@ -452,7 +452,7 @@ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc2_clk, "mmc2", mmc_parents, 0x838,
+ 24, 2, /* mux */
+ BIT(31), /* gate */
+ 2, /* post-div */
+- CLK_SET_RATE_NO_REPARENT);
++ 0);
+
+ static SUNXI_CCU_GATE(bus_mmc0_clk, "bus-mmc0", "ahb3", 0x84c, BIT(0), 0);
+ static SUNXI_CCU_GATE(bus_mmc1_clk, "bus-mmc1", "ahb3", 0x84c, BIT(1), 0);
+diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
+index 588ab1cc6d557c..f089a1b9c0c98a 100644
+--- a/drivers/cpufreq/Kconfig
++++ b/drivers/cpufreq/Kconfig
+@@ -218,7 +218,7 @@ config CPUFREQ_DT
+ If in doubt, say N.
+
+ config CPUFREQ_DT_PLATDEV
+- tristate "Generic DT based cpufreq platdev driver"
++ bool "Generic DT based cpufreq platdev driver"
+ depends on OF
+ help
+ This adds a generic DT based cpufreq platdev driver for frequency
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 18942bfe9c95f7..78ad3221fe077e 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -234,5 +234,3 @@ static int __init cpufreq_dt_platdev_init(void)
+ sizeof(struct cpufreq_dt_platform_data)));
+ }
+ core_initcall(cpufreq_dt_platdev_init);
+-MODULE_DESCRIPTION("Generic DT based cpufreq platdev driver");
+-MODULE_LICENSE("GPL");
+diff --git a/drivers/cpufreq/s3c64xx-cpufreq.c b/drivers/cpufreq/s3c64xx-cpufreq.c
+index c6bdfc308e9908..9cef7152807626 100644
+--- a/drivers/cpufreq/s3c64xx-cpufreq.c
++++ b/drivers/cpufreq/s3c64xx-cpufreq.c
+@@ -24,6 +24,7 @@ struct s3c64xx_dvfs {
+ unsigned int vddarm_max;
+ };
+
++#ifdef CONFIG_REGULATOR
+ static struct s3c64xx_dvfs s3c64xx_dvfs_table[] = {
+ [0] = { 1000000, 1150000 },
+ [1] = { 1050000, 1150000 },
+@@ -31,6 +32,7 @@ static struct s3c64xx_dvfs s3c64xx_dvfs_table[] = {
+ [3] = { 1200000, 1350000 },
+ [4] = { 1300000, 1350000 },
+ };
++#endif
+
+ static struct cpufreq_frequency_table s3c64xx_freq_table[] = {
+ { 0, 0, 66000 },
+@@ -51,15 +53,16 @@ static struct cpufreq_frequency_table s3c64xx_freq_table[] = {
+ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
+ unsigned int index)
+ {
+- struct s3c64xx_dvfs *dvfs;
+- unsigned int old_freq, new_freq;
++ unsigned int new_freq = s3c64xx_freq_table[index].frequency;
+ int ret;
+
++#ifdef CONFIG_REGULATOR
++ struct s3c64xx_dvfs *dvfs;
++ unsigned int old_freq;
++
+ old_freq = clk_get_rate(policy->clk) / 1000;
+- new_freq = s3c64xx_freq_table[index].frequency;
+ dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[index].driver_data];
+
+-#ifdef CONFIG_REGULATOR
+ if (vddarm && new_freq > old_freq) {
+ ret = regulator_set_voltage(vddarm,
+ dvfs->vddarm_min,
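
The table and its only readers are now fenced by the same CONFIG_REGULATOR guard, which avoids an unused-variable warning when the regulator framework is compiled out. The shape of the fix, with HAVE_REGULATOR standing in for the Kconfig symbol:

  #include <stdio.h>

  #define HAVE_REGULATOR 0        /* flip to 1 to compile the voltage path */

  #if HAVE_REGULATOR
  static const int dvfs_uv[] = { 1000000, 1050000, 1100000 };
  #endif

  static int set_target(int index)
  {
          int new_freq = 100000 * (index + 1);

  #if HAVE_REGULATOR
          /* dvfs_uv is only referenced when the guard is on, so no
           * unused-variable warning when it is off */
          printf("set %d kHz at %d uV\n", new_freq, dvfs_uv[index]);
  #else
          printf("set %d kHz\n", new_freq);
  #endif
          return 0;
  }

  int main(void) { return set_target(1); }
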
+diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
+index 7d811728f04782..97b56e92ea33f5 100644
+--- a/drivers/crypto/qce/aead.c
++++ b/drivers/crypto/qce/aead.c
+@@ -786,7 +786,7 @@ static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_devi
+ alg->init = qce_aead_init;
+ alg->exit = qce_aead_exit;
+
+- alg->base.cra_priority = 300;
++ alg->base.cra_priority = 275;
+ alg->base.cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY |
+diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
+index 28b5fd82382775..3ec8297ed3fff7 100644
+--- a/drivers/crypto/qce/core.c
++++ b/drivers/crypto/qce/core.c
+@@ -51,16 +51,19 @@ static void qce_unregister_algs(struct qce_device *qce)
+ static int qce_register_algs(struct qce_device *qce)
+ {
+ const struct qce_algo_ops *ops;
+- int i, ret = -ENODEV;
++ int i, j, ret = -ENODEV;
+
+ for (i = 0; i < ARRAY_SIZE(qce_ops); i++) {
+ ops = qce_ops[i];
+ ret = ops->register_algs(qce);
+- if (ret)
+- break;
++ if (ret) {
++ for (j = i - 1; j >= 0; j--)
++ ops->unregister_algs(qce);
++ return ret;
++ }
+ }
+
+- return ret;
++ return 0;
+ }
+
+ static int qce_handle_request(struct crypto_async_request *async_req)
+@@ -247,7 +250,7 @@ static int qce_crypto_probe(struct platform_device *pdev)
+
+ ret = qce_check_version(qce);
+ if (ret)
+- goto err_clks;
++ goto err_dma;
+
+ spin_lock_init(&qce->lock);
+ tasklet_init(&qce->done_tasklet, qce_tasklet_req_done,
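
Instead of leaving earlier ops registered when a later one fails, the loop now unwinds what it already did before returning. A generic version of the register-or-unwind-all idiom; the ops table is made up for the example:

  #include <stdio.h>

  struct ops {
          const char *name;
          int  (*reg)(void);
          void (*unreg)(void);
  };

  static int ok(void)     { return 0; }
  static int fail(void)   { return -1; }
  static void unreg(void) { }

  static struct ops table[] = {
          { "skcipher", ok,   unreg },
          { "sha",      ok,   unreg },
          { "aead",     fail, unreg },
  };

  /* Register everything, or unwind what already succeeded and fail. */
  static int register_all(void)
  {
          int i, j, ret;

          for (i = 0; i < 3; i++) {
                  ret = table[i].reg();
                  if (ret) {
                          for (j = i - 1; j >= 0; j--)
                                  table[j].unreg();
                          return ret;
                  }
          }
          return 0;
  }

  int main(void)
  {
          printf("register_all() = %d\n", register_all());
          return 0;
  }
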
+diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
+index fc72af8aa9a725..71b748183cfa86 100644
+--- a/drivers/crypto/qce/sha.c
++++ b/drivers/crypto/qce/sha.c
+@@ -482,7 +482,7 @@ static int qce_ahash_register_one(const struct qce_ahash_def *def,
+
+ base = &alg->halg.base;
+ base->cra_blocksize = def->blocksize;
+- base->cra_priority = 300;
++ base->cra_priority = 175;
+ base->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY;
+ base->cra_ctxsize = sizeof(struct qce_sha_ctx);
+ base->cra_alignmask = 0;
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index 5b493fdc1e747f..ffb334eb5b3461 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -461,7 +461,7 @@ static int qce_skcipher_register_one(const struct qce_skcipher_def *def,
+ alg->encrypt = qce_skcipher_encrypt;
+ alg->decrypt = qce_skcipher_decrypt;
+
+- alg->base.cra_priority = 300;
++ alg->base.cra_priority = 275;
+ alg->base.cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY;
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index 71d8b26c4103b9..9f35f69e0f9e2b 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -106,7 +106,7 @@ config ISCSI_IBFT
+ select ISCSI_BOOT_SYSFS
+ select ISCSI_IBFT_FIND if X86
+ depends on ACPI && SCSI && SCSI_LOWLEVEL
+- default n
++ default n
+ help
+ This option enables support for detection and exposing of iSCSI
+ Boot Firmware Table (iBFT) via sysfs to userspace. If you wish to
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index ed4e8ddbe76a50..1141cd06011ff4 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -11,7 +11,7 @@ cflags-y := $(KBUILD_CFLAGS)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+-cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
++cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -std=gnu11 \
+ -fPIC -fno-strict-aliasing -mno-red-zone \
+ -mno-mmx -mno-sse -fshort-wchar \
+ -Wno-pointer-sign \
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index a6bdedbbf70888..2e093c39b610ae 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -217,7 +217,10 @@ static DEFINE_SPINLOCK(scm_query_lock);
+
+ struct qcom_tzmem_pool *qcom_scm_get_tzmem_pool(void)
+ {
+- return __scm ? __scm->mempool : NULL;
++ if (!qcom_scm_is_available())
++ return NULL;
++
++ return __scm->mempool;
+ }
+
+ static enum qcom_scm_convention __get_convention(void)
+@@ -1839,7 +1842,8 @@ static int qcom_scm_qseecom_init(struct qcom_scm *scm)
+ */
+ bool qcom_scm_is_available(void)
+ {
+- return !!READ_ONCE(__scm);
++ /* Paired with smp_store_release() in qcom_scm_probe */
++ return !!smp_load_acquire(&__scm);
+ }
+ EXPORT_SYMBOL_GPL(qcom_scm_is_available);
+
+@@ -1996,7 +2000,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- /* Let all above stores be available after this */
++ /* Paired with smp_load_acquire() in qcom_scm_is_available(). */
+ smp_store_release(&__scm, scm);
+
+ irq = platform_get_irq_optional(pdev, 0);
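
qcom_scm_is_available() now pairs an smp_load_acquire() with the probe path's smp_store_release(), so any caller that observes the pointer also observes every store that initialized the object behind it. The same contract reduced to C11 atomics (a userspace sketch, not the kernel primitives):

  #include <stdatomic.h>
  #include <stdio.h>

  struct scm { int mempool_ready; };

  static struct scm scm_obj;
  static _Atomic(struct scm *) scm_ptr;   /* the __scm stand-in */

  static void probe(void)
  {
          scm_obj.mempool_ready = 1;      /* initialize everything first */
          /* release: publishes all earlier stores with the pointer */
          atomic_store_explicit(&scm_ptr, &scm_obj, memory_order_release);
  }

  static int is_available(void)
  {
          /* acquire: seeing the pointer implies mempool_ready == 1 */
          struct scm *s = atomic_load_explicit(&scm_ptr,
                                               memory_order_acquire);
          return s != 0;
  }

  int main(void)
  {
          probe();
          printf("available=%d\n", is_available());
          return 0;
  }
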
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index e49802f26e07f8..d764a3af634670 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -841,25 +841,6 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, unsigned long *pendin
+ DECLARE_BITMAP(trigger, MAX_LINE);
+ int ret;
+
+- if (chip->driver_data & PCA_PCAL) {
+- /* Read the current interrupt status from the device */
+- ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, trigger);
+- if (ret)
+- return false;
+-
+- /* Check latched inputs and clear interrupt status */
+- ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
+- if (ret)
+- return false;
+-
+- /* Apply filter for rising/falling edge selection */
+- bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise, cur_stat, gc->ngpio);
+-
+- bitmap_and(pending, new_stat, trigger, gc->ngpio);
+-
+- return !bitmap_empty(pending, gc->ngpio);
+- }
+-
+ ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
+ if (ret)
+ return false;
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index deedacdeb23952..f83a8b5a51d0dc 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -1036,20 +1036,23 @@ gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
+ struct configfs_subsystem *subsys = dev->group.cg_subsys;
+ struct gpio_sim_bank *bank;
+ struct gpio_sim_line *line;
++ struct config_item *item;
+
+ /*
+- * The device only needs to depend on leaf line entries. This is
++ * The device only needs to depend on leaf entries. This is
+ * sufficient to lock up all the configfs entries that the
+ * instantiated, alive device depends on.
+ */
+ list_for_each_entry(bank, &dev->bank_list, siblings) {
+ list_for_each_entry(line, &bank->line_list, siblings) {
++ item = line->hog ? &line->hog->item
++ : &line->group.cg_item;
++
+ if (lock)
+- WARN_ON(configfs_depend_item_unlocked(
+- subsys, &line->group.cg_item));
++ WARN_ON(configfs_depend_item_unlocked(subsys,
++ item));
+ else
+- configfs_undepend_item_unlocked(
+- &line->group.cg_item);
++ configfs_undepend_item_unlocked(item);
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 610e159d362ad6..7408ea8caacc3c 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -486,6 +486,10 @@ config DRM_HYPERV
+ config DRM_EXPORT_FOR_TESTS
+ bool
+
++# Separate option as not all DRM drivers use it
++config DRM_PANEL_BACKLIGHT_QUIRKS
++ tristate
++
+ config DRM_LIB_RANDOM
+ bool
+ default n
+diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
+index 784229d4504dcb..84746054c721a3 100644
+--- a/drivers/gpu/drm/Makefile
++++ b/drivers/gpu/drm/Makefile
+@@ -93,6 +93,7 @@ drm-$(CONFIG_DRM_PANIC_SCREEN_QR_CODE) += drm_panic_qr.o
+ obj-$(CONFIG_DRM) += drm.o
+
+ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
++obj-$(CONFIG_DRM_PANEL_BACKLIGHT_QUIRKS) += drm_panel_backlight_quirks.o
+
+ #
+ # Memory-management helpers
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 81d9877c87357d..c27b4c36a7c0f5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -118,9 +118,10 @@
+ * - 3.57.0 - Compute tunneling on GFX10+
+ * - 3.58.0 - Add GFX12 DCC support
+ * - 3.59.0 - Cleared VRAM
++ * - 3.60.0 - Add AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE (Vulkan requirement)
+ */
+ #define KMS_DRIVER_MAJOR 3
+-#define KMS_DRIVER_MINOR 59
++#define KMS_DRIVER_MINOR 60
+ #define KMS_DRIVER_PATCHLEVEL 0
+
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 156abd2ba5a6c6..05ebb8216a55a5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -1837,6 +1837,7 @@ void amdgpu_gfx_enforce_isolation_ring_begin_use(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_device *adev = ring->adev;
+ u32 idx;
++ bool sched_work = false;
+
+ if (!adev->gfx.enable_cleaner_shader)
+ return;
+@@ -1852,15 +1853,19 @@ void amdgpu_gfx_enforce_isolation_ring_begin_use(struct amdgpu_ring *ring)
+ mutex_lock(&adev->enforce_isolation_mutex);
+ if (adev->enforce_isolation[idx]) {
+ if (adev->kfd.init_complete)
+- amdgpu_gfx_kfd_sch_ctrl(adev, idx, false);
++ sched_work = true;
+ }
+ mutex_unlock(&adev->enforce_isolation_mutex);
++
++ if (sched_work)
++ amdgpu_gfx_kfd_sch_ctrl(adev, idx, false);
+ }
+
+ void amdgpu_gfx_enforce_isolation_ring_end_use(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_device *adev = ring->adev;
+ u32 idx;
++ bool sched_work = false;
+
+ if (!adev->gfx.enable_cleaner_shader)
+ return;
+@@ -1876,7 +1881,10 @@ void amdgpu_gfx_enforce_isolation_ring_end_use(struct amdgpu_ring *ring)
+ mutex_lock(&adev->enforce_isolation_mutex);
+ if (adev->enforce_isolation[idx]) {
+ if (adev->kfd.init_complete)
+- amdgpu_gfx_kfd_sch_ctrl(adev, idx, true);
++ sched_work = true;
+ }
+ mutex_unlock(&adev->enforce_isolation_mutex);
++
++ if (sched_work)
++ amdgpu_gfx_kfd_sch_ctrl(adev, idx, true);
+ }
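
Both hunks follow the same recipe: sample the decision under the mutex, then call the potentially slow helper after dropping it, shrinking the critical section. In outline, with a pthread mutex as the stand-in lock (compile with -lpthread):

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static bool enforce_isolation = true;
  static bool kfd_ready = true;

  static void kfd_sch_ctrl(bool enable)   /* may sleep; must run unlocked */
  {
          printf("kfd scheduling %s\n", enable ? "enabled" : "disabled");
  }

  static void ring_begin_use(void)
  {
          bool sched_work = false;

          pthread_mutex_lock(&lock);      /* only the decision is locked */
          if (enforce_isolation && kfd_ready)
                  sched_work = true;
          pthread_mutex_unlock(&lock);

          if (sched_work)                 /* slow call happens lock-free */
                  kfd_sch_ctrl(false);
  }

  int main(void)
  {
          ring_begin_use();
          return 0;
  }
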
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index ae9ca6788df78c..425073d994912f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -309,7 +309,7 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
+ mutex_lock(&adev->mman.gtt_window_lock);
+ while (src_mm.remaining) {
+ uint64_t from, to, cur_size, tiling_flags;
+- uint32_t num_type, data_format, max_com;
++ uint32_t num_type, data_format, max_com, write_compress_disable;
+ struct dma_fence *next;
+
+ /* Never copy more than 256MiB at once to avoid a timeout */
+@@ -340,9 +340,13 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
+ max_com = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_MAX_COMPRESSED_BLOCK);
+ num_type = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_NUMBER_TYPE);
+ data_format = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_DATA_FORMAT);
++ write_compress_disable =
++ AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_WRITE_COMPRESS_DISABLE);
+ copy_flags |= (AMDGPU_COPY_FLAGS_SET(MAX_COMPRESSED, max_com) |
+ AMDGPU_COPY_FLAGS_SET(NUMBER_TYPE, num_type) |
+- AMDGPU_COPY_FLAGS_SET(DATA_FORMAT, data_format));
++ AMDGPU_COPY_FLAGS_SET(DATA_FORMAT, data_format) |
++ AMDGPU_COPY_FLAGS_SET(WRITE_COMPRESS_DISABLE,
++ write_compress_disable));
+ }
+
+ r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index 138d80017f3564..b7742fa74e1de2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -118,6 +118,8 @@ struct amdgpu_copy_mem {
+ #define AMDGPU_COPY_FLAGS_NUMBER_TYPE_MASK 0x07
+ #define AMDGPU_COPY_FLAGS_DATA_FORMAT_SHIFT 8
+ #define AMDGPU_COPY_FLAGS_DATA_FORMAT_MASK 0x3f
++#define AMDGPU_COPY_FLAGS_WRITE_COMPRESS_DISABLE_SHIFT 14
++#define AMDGPU_COPY_FLAGS_WRITE_COMPRESS_DISABLE_MASK 0x1
+
+ #define AMDGPU_COPY_FLAGS_SET(field, value) \
+ (((__u32)(value) & AMDGPU_COPY_FLAGS_##field##_MASK) << AMDGPU_COPY_FLAGS_##field##_SHIFT)
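
The new WRITE_COMPRESS_DISABLE field slots into the same packed copy_flags word as the existing ones, via the SET/GET shift-and-mask macros. A self-contained copy of the packing scheme; the NUMBER_TYPE shift and the field values below are illustrative, only the shifts shown in the hunk are taken from the header:

  #include <stdint.h>
  #include <stdio.h>

  #define FLAGS_NUMBER_TYPE_SHIFT            5
  #define FLAGS_NUMBER_TYPE_MASK             0x07
  #define FLAGS_DATA_FORMAT_SHIFT            8
  #define FLAGS_DATA_FORMAT_MASK             0x3f
  #define FLAGS_WRITE_COMPRESS_DISABLE_SHIFT 14
  #define FLAGS_WRITE_COMPRESS_DISABLE_MASK  0x1

  #define FLAGS_SET(field, value) \
          (((uint32_t)(value) & FLAGS_##field##_MASK) << FLAGS_##field##_SHIFT)
  #define FLAGS_GET(flags, field) \
          (((uint32_t)(flags) >> FLAGS_##field##_SHIFT) & FLAGS_##field##_MASK)

  int main(void)
  {
          uint32_t flags = FLAGS_SET(NUMBER_TYPE, 3) |
                           FLAGS_SET(DATA_FORMAT, 0x2a) |
                           FLAGS_SET(WRITE_COMPRESS_DISABLE, 1);

          /* decode: 2 disables write compression in the packet, 1 keeps it */
          uint32_t write_cm = FLAGS_GET(flags, WRITE_COMPRESS_DISABLE) ? 2 : 1;

          printf("flags=0x%08x write_cm=%u\n", (unsigned)flags,
                 (unsigned)write_cm);
          return 0;
  }
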
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 6c19626ec59e9d..ca130880edfd42 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -3981,17 +3981,6 @@ static void gfx_v12_0_update_coarse_grain_clock_gating(struct amdgpu_device *ade
+
+ if (def != data)
+ WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D, data);
+-
+- data = RREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL);
+- data &= ~SDMA0_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+- WREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL, data);
+-
+- /* Some ASICs only have one SDMA instance, not need to configure SDMA1 */
+- if (adev->sdma.num_instances > 1) {
+- data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
+- data &= ~SDMA1_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+- WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
+- }
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index c77889040760ad..4dd86c682ee6a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -953,10 +953,12 @@ static int sdma_v4_4_2_inst_start(struct amdgpu_device *adev,
+ /* set utc l1 enable flag always to 1 */
+ temp = RREG32_SDMA(i, regSDMA_CNTL);
+ temp = REG_SET_FIELD(temp, SDMA_CNTL, UTC_L1_ENABLE, 1);
+- /* enable context empty interrupt during initialization */
+- temp = REG_SET_FIELD(temp, SDMA_CNTL, CTXEMPTY_INT_ENABLE, 1);
+- WREG32_SDMA(i, regSDMA_CNTL, temp);
+
++ if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) < IP_VERSION(4, 4, 5)) {
++ /* enable context empty interrupt during initialization */
++ temp = REG_SET_FIELD(temp, SDMA_CNTL, CTXEMPTY_INT_ENABLE, 1);
++ WREG32_SDMA(i, regSDMA_CNTL, temp);
++ }
+ if (!amdgpu_sriov_vf(adev)) {
+ if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+ /* unhalt engine */
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+index 9288f37a3cc5c3..843e6b46deee82 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+@@ -1688,11 +1688,12 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
+ uint32_t byte_count,
+ uint32_t copy_flags)
+ {
+- uint32_t num_type, data_format, max_com;
++ uint32_t num_type, data_format, max_com, write_cm;
+
+ max_com = AMDGPU_COPY_FLAGS_GET(copy_flags, MAX_COMPRESSED);
+ data_format = AMDGPU_COPY_FLAGS_GET(copy_flags, DATA_FORMAT);
+ num_type = AMDGPU_COPY_FLAGS_GET(copy_flags, NUMBER_TYPE);
++ write_cm = AMDGPU_COPY_FLAGS_GET(copy_flags, WRITE_COMPRESS_DISABLE) ? 2 : 1;
+
+ ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) |
+ SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR) |
+@@ -1709,7 +1710,7 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
+ if ((copy_flags & (AMDGPU_COPY_FLAGS_READ_DECOMPRESSED | AMDGPU_COPY_FLAGS_WRITE_COMPRESSED)))
+ ib->ptr[ib->length_dw++] = SDMA_DCC_DATA_FORMAT(data_format) | SDMA_DCC_NUM_TYPE(num_type) |
+ ((copy_flags & AMDGPU_COPY_FLAGS_READ_DECOMPRESSED) ? SDMA_DCC_READ_CM(2) : 0) |
+- ((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(1) : 0) |
++ ((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(write_cm) : 0) |
+ SDMA_DCC_MAX_COM(max_com) | SDMA_DCC_MAX_UCOM(1);
+ else
+ ib->ptr[ib->length_dw++] = 0;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index b05be24531e187..d350c7ce35b3d6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -637,6 +637,14 @@ static void kfd_cleanup_nodes(struct kfd_dev *kfd, unsigned int num_nodes)
+ struct kfd_node *knode;
+ unsigned int i;
+
++ /*
++ * flush_work ensures that there are no outstanding
++ * work-queue items that will access interrupt_ring. New work items
++ * can't be created because we stopped interrupt handling above.
++ */
++ flush_workqueue(kfd->ih_wq);
++ destroy_workqueue(kfd->ih_wq);
++
+ for (i = 0; i < num_nodes; i++) {
+ knode = kfd->nodes[i];
+ device_queue_manager_uninit(knode->dqm);
+@@ -1058,21 +1066,6 @@ static int kfd_resume(struct kfd_node *node)
+ return err;
+ }
+
+-static inline void kfd_queue_work(struct workqueue_struct *wq,
+- struct work_struct *work)
+-{
+- int cpu, new_cpu;
+-
+- cpu = new_cpu = smp_processor_id();
+- do {
+- new_cpu = cpumask_next(new_cpu, cpu_online_mask) % nr_cpu_ids;
+- if (cpu_to_node(new_cpu) == numa_node_id())
+- break;
+- } while (cpu != new_cpu);
+-
+- queue_work_on(new_cpu, wq, work);
+-}
+-
+ /* This is called directly from KGD at ISR. */
+ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
+ {
+@@ -1098,7 +1091,7 @@ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
+ patched_ihre, &is_patched)
+ && enqueue_ih_ring_entry(node,
+ is_patched ? patched_ihre : ih_ring_entry)) {
+- kfd_queue_work(node->ih_wq, &node->interrupt_work);
++ queue_work(node->kfd->ih_wq, &node->interrupt_work);
+ spin_unlock_irqrestore(&node->interrupt_lock, flags);
+ return;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index f5b3ed20e891b3..3cfb4a38d17c7f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -2290,9 +2290,9 @@ static int unmap_queues_cpsch(struct device_queue_manager *dqm,
+ */
+ mqd_mgr = dqm->mqd_mgrs[KFD_MQD_TYPE_HIQ];
+ if (mqd_mgr->check_preemption_failed(mqd_mgr, dqm->packet_mgr.priv_queue->queue->mqd)) {
++ while (halt_if_hws_hang)
++ schedule();
+ if (reset_queues_on_hws_hang(dqm)) {
+- while (halt_if_hws_hang)
+- schedule();
+ dqm->is_hws_hang = true;
+ kfd_hws_hang(dqm);
+ retval = -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c b/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c
+index 9b6b6e88259348..15b4b70cf19976 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c
+@@ -62,11 +62,14 @@ int kfd_interrupt_init(struct kfd_node *node)
+ return r;
+ }
+
+- node->ih_wq = alloc_workqueue("KFD IH", WQ_HIGHPRI, 1);
+- if (unlikely(!node->ih_wq)) {
+- kfifo_free(&node->ih_fifo);
+- dev_err(node->adev->dev, "Failed to allocate KFD IH workqueue\n");
+- return -ENOMEM;
++ if (!node->kfd->ih_wq) {
++ node->kfd->ih_wq = alloc_workqueue("KFD IH", WQ_HIGHPRI | WQ_UNBOUND,
++ node->kfd->num_nodes);
++ if (unlikely(!node->kfd->ih_wq)) {
++ kfifo_free(&node->ih_fifo);
++ dev_err(node->adev->dev, "Failed to allocate KFD IH workqueue\n");
++ return -ENOMEM;
++ }
+ }
+ spin_lock_init(&node->interrupt_lock);
+
+@@ -96,16 +99,6 @@ void kfd_interrupt_exit(struct kfd_node *node)
+ spin_lock_irqsave(&node->interrupt_lock, flags);
+ node->interrupts_active = false;
+ spin_unlock_irqrestore(&node->interrupt_lock, flags);
+-
+- /*
+- * flush_work ensures that there are no outstanding
+- * work-queue items that will access interrupt_ring. New work items
+- * can't be created because we stopped interrupt handling above.
+- */
+- flush_workqueue(node->ih_wq);
+-
+- destroy_workqueue(node->ih_wq);
+-
+ kfifo_free(&node->ih_fifo);
+ }
+
+@@ -162,7 +155,7 @@ static void interrupt_wq(struct work_struct *work)
+ /* If we spent more than a second processing signals,
+ * reschedule the worker to avoid soft-lockup warnings
+ */
+- queue_work(dev->ih_wq, &dev->interrupt_work);
++ queue_work(dev->kfd->ih_wq, &dev->interrupt_work);
+ break;
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 26e48fdc872896..75523f30cd38b0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -273,7 +273,6 @@ struct kfd_node {
+
+ /* Interrupts */
+ struct kfifo ih_fifo;
+- struct workqueue_struct *ih_wq;
+ struct work_struct interrupt_work;
+ spinlock_t interrupt_lock;
+
+@@ -366,6 +365,8 @@ struct kfd_dev {
+ struct kfd_node *nodes[MAX_KFD_NODES];
+ unsigned int num_nodes;
+
++ struct workqueue_struct *ih_wq;
++
+ /* Kernel doorbells for KFD device */
+ struct amdgpu_bo *doorbells;
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index ead4317a21680b..dbb63ce316f11e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -86,9 +86,12 @@ void kfd_process_dequeue_from_device(struct kfd_process_device *pdd)
+
+ if (pdd->already_dequeued)
+ return;
+-
++ /* The MES context flush needs to filter out the case which the
++ * KFD process is created without setting up the MES context and
++ * queue for creating a compute queue.
++ */
+ dev->dqm->ops.process_termination(dev->dqm, &pdd->qpd);
+- if (dev->kfd->shared_resources.enable_mes &&
++ if (dev->kfd->shared_resources.enable_mes && !!pdd->proc_ctx_gpu_addr &&
+ down_read_trylock(&dev->adev->reset_domain->sem)) {
+ amdgpu_mes_flush_shader_debugger(dev->adev,
+ pdd->proc_ctx_gpu_addr);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 08c58d0315de7f..85e58e0f6059a6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1036,8 +1036,10 @@ static int amdgpu_dm_audio_component_get_eld(struct device *kdev, int port,
+ continue;
+
+ *enabled = true;
++ mutex_lock(&connector->eld_mutex);
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
++ mutex_unlock(&connector->eld_mutex);
+
+ break;
+ }
+@@ -5497,8 +5499,7 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ const u64 tiling_flags,
+ struct dc_plane_info *plane_info,
+ struct dc_plane_address *address,
+- bool tmz_surface,
+- bool force_disable_dcc)
++ bool tmz_surface)
+ {
+ const struct drm_framebuffer *fb = plane_state->fb;
+ const struct amdgpu_framebuffer *afb =
+@@ -5597,7 +5598,7 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ &plane_info->tiling_info,
+ &plane_info->plane_size,
+ &plane_info->dcc, address,
+- tmz_surface, force_disable_dcc);
++ tmz_surface);
+ if (ret)
+ return ret;
+
+@@ -5618,7 +5619,6 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ struct dc_scaling_info scaling_info;
+ struct dc_plane_info plane_info;
+ int ret;
+- bool force_disable_dcc = false;
+
+ ret = amdgpu_dm_plane_fill_dc_scaling_info(adev, plane_state, &scaling_info);
+ if (ret)
+@@ -5629,13 +5629,11 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ dc_plane_state->clip_rect = scaling_info.clip_rect;
+ dc_plane_state->scaling_quality = scaling_info.scaling_quality;
+
+- force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
+ ret = fill_dc_plane_info_and_addr(adev, plane_state,
+ afb->tiling_flags,
+ &plane_info,
+ &dc_plane_state->address,
+- afb->tmz_surface,
+- force_disable_dcc);
++ afb->tmz_surface);
+ if (ret)
+ return ret;
+
+@@ -9061,7 +9059,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ afb->tiling_flags,
+ &bundle->plane_infos[planes_count],
+ &bundle->flip_addrs[planes_count].address,
+- afb->tmz_surface, false);
++ afb->tmz_surface);
+
+ drm_dbg_state(state->dev, "plane: id=%d dcc_en=%d\n",
+ new_plane_state->plane->index,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 754dbc544f03a3..5bdf44c692180c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1691,16 +1691,16 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ return ret;
+ }
+
+-static unsigned int kbps_from_pbn(unsigned int pbn)
++static uint32_t kbps_from_pbn(unsigned int pbn)
+ {
+- unsigned int kbps = pbn;
++ uint64_t kbps = (uint64_t)pbn;
+
+ kbps *= (1000000 / PEAK_FACTOR_X1000);
+ kbps *= 8;
+ kbps *= 54;
+ kbps /= 64;
+
+- return kbps;
++ return (uint32_t)kbps;
+ }
+
+ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
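
For scale: assuming the usual amdgpu value PEAK_FACTOR_X1000 = 1006, the chain above multiplies pbn by (1000000/1006) * 8 * 54 = 429,408 before the final divide by 64, so a 32-bit intermediate wraps once pbn exceeds roughly 10,000 — well within DP 2.x payload budgets — hence the u64 widening with the result narrowed back to u32 only at the end.
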
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 495e3cd70426db..83c7c8853edeca 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -309,8 +309,7 @@ static int amdgpu_dm_plane_fill_gfx9_plane_attributes_from_modifiers(struct amdg
+ const struct plane_size *plane_size,
+ union dc_tiling_info *tiling_info,
+ struct dc_plane_dcc_param *dcc,
+- struct dc_plane_address *address,
+- const bool force_disable_dcc)
++ struct dc_plane_address *address)
+ {
+ const uint64_t modifier = afb->base.modifier;
+ int ret = 0;
+@@ -318,7 +317,7 @@ static int amdgpu_dm_plane_fill_gfx9_plane_attributes_from_modifiers(struct amdg
+ amdgpu_dm_plane_fill_gfx9_tiling_info_from_modifier(adev, tiling_info, modifier);
+ tiling_info->gfx9.swizzle = amdgpu_dm_plane_modifier_gfx9_swizzle_mode(modifier);
+
+- if (amdgpu_dm_plane_modifier_has_dcc(modifier) && !force_disable_dcc) {
++ if (amdgpu_dm_plane_modifier_has_dcc(modifier)) {
+ uint64_t dcc_address = afb->address + afb->base.offsets[1];
+ bool independent_64b_blks = AMD_FMT_MOD_GET(DCC_INDEPENDENT_64B, modifier);
+ bool independent_128b_blks = AMD_FMT_MOD_GET(DCC_INDEPENDENT_128B, modifier);
+@@ -360,8 +359,7 @@ static int amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(struct amd
+ const struct plane_size *plane_size,
+ union dc_tiling_info *tiling_info,
+ struct dc_plane_dcc_param *dcc,
+- struct dc_plane_address *address,
+- const bool force_disable_dcc)
++ struct dc_plane_address *address)
+ {
+ const uint64_t modifier = afb->base.modifier;
+ int ret = 0;
+@@ -371,7 +369,7 @@ static int amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(struct amd
+
+ tiling_info->gfx9.swizzle = amdgpu_dm_plane_modifier_gfx9_swizzle_mode(modifier);
+
+- if (amdgpu_dm_plane_modifier_has_dcc(modifier) && !force_disable_dcc) {
++ if (amdgpu_dm_plane_modifier_has_dcc(modifier)) {
+ int max_compressed_block = AMD_FMT_MOD_GET(DCC_MAX_COMPRESSED_BLOCK, modifier);
+
+ dcc->enable = 1;
+@@ -839,8 +837,7 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ struct plane_size *plane_size,
+ struct dc_plane_dcc_param *dcc,
+ struct dc_plane_address *address,
+- bool tmz_surface,
+- bool force_disable_dcc)
++ bool tmz_surface)
+ {
+ const struct drm_framebuffer *fb = &afb->base;
+ int ret;
+@@ -900,16 +897,14 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ ret = amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(adev, afb, format,
+ rotation, plane_size,
+ tiling_info, dcc,
+- address,
+- force_disable_dcc);
++ address);
+ if (ret)
+ return ret;
+ } else if (adev->family >= AMDGPU_FAMILY_AI) {
+ ret = amdgpu_dm_plane_fill_gfx9_plane_attributes_from_modifiers(adev, afb, format,
+ rotation, plane_size,
+ tiling_info, dcc,
+- address,
+- force_disable_dcc);
++ address);
+ if (ret)
+ return ret;
+ } else {
+@@ -1000,14 +995,13 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
+ dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
+ struct dc_plane_state *plane_state =
+ dm_plane_state_new->dc_state;
+- bool force_disable_dcc = !plane_state->dcc.enable;
+
+ amdgpu_dm_plane_fill_plane_buffer_attributes(
+ adev, afb, plane_state->format, plane_state->rotation,
+ afb->tiling_flags,
+ &plane_state->tiling_info, &plane_state->plane_size,
+ &plane_state->dcc, &plane_state->address,
+- afb->tmz_surface, force_disable_dcc);
++ afb->tmz_surface);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h
+index 6498359bff6f68..2eef13b1c05a4b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h
+@@ -51,8 +51,7 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ struct plane_size *plane_size,
+ struct dc_plane_dcc_param *dcc,
+ struct dc_plane_address *address,
+- bool tmz_surface,
+- bool force_disable_dcc);
++ bool tmz_surface);
+
+ int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm,
+ struct drm_plane *plane,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 6d4ee8fe615c38..216b525bd75e79 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2032,7 +2032,7 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+
+ dc_enable_stereo(dc, context, dc_streams, context->stream_count);
+
+- if (context->stream_count > get_seamless_boot_stream_count(context) ||
++ if (get_seamless_boot_stream_count(context) == 0 ||
+ context->stream_count == 0) {
+ /* Must wait for no flips to be pending before doing optimize bw */
+ hwss_wait_for_no_pipes_pending(dc, context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index 5bb8b78bf250a0..bf636b28e3e16e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -63,8 +63,7 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+
+ bool should_use_dmub_lock(struct dc_link *link)
+ {
+- if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 ||
+- link->psr_settings.psr_version == DC_PSR_VERSION_1)
++ if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
+ return true;
+
+ if (link->replay_settings.replay_feature_enabled)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/Makefile b/drivers/gpu/drm/amd/display/dc/dml2/Makefile
+index c4378e620cbf91..986a69c5bd4bca 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dml2/Makefile
+@@ -29,7 +29,11 @@ dml2_rcflags := $(CC_FLAGS_NO_FPU)
+
+ ifneq ($(CONFIG_FRAME_WARN),0)
+ ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
++ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy)
++frame_warn_flag := -Wframe-larger-than=4096
++else
+ frame_warn_flag := -Wframe-larger-than=3072
++endif
+ else
+ frame_warn_flag := -Wframe-larger-than=2048
+ endif
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index 8dabb1ac0b684d..6822b07951204b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -6301,9 +6301,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.meta_row_bandwidth_this_state,
+ mode_lib->ms.dpte_row_bandwidth_this_state,
+ mode_lib->ms.NoOfDPPThisState,
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor);
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j]);
+
+ s->VMDataOnlyReturnBWPerState = dml_get_return_bw_mbps_vm_only(
+ &mode_lib->ms.soc,
+@@ -6434,7 +6434,7 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ /* Output */
+ &mode_lib->ms.UrgentBurstFactorCursorPre[k],
+ &mode_lib->ms.UrgentBurstFactorLumaPre[k],
+- &mode_lib->ms.UrgentBurstFactorChroma[k],
++ &mode_lib->ms.UrgentBurstFactorChromaPre[k],
+ &mode_lib->ms.NotUrgentLatencyHidingPre[k]);
+
+ mode_lib->ms.cursor_bw_pre[k] = mode_lib->ms.cache_display_cfg.plane.NumberOfCursors[k] * mode_lib->ms.cache_display_cfg.plane.CursorWidth[k] *
+@@ -6458,9 +6458,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.cursor_bw_pre,
+ mode_lib->ms.prefetch_vmrow_bw,
+ mode_lib->ms.NoOfDPPThisState,
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor,
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j],
+ mode_lib->ms.UrgentBurstFactorLumaPre,
+ mode_lib->ms.UrgentBurstFactorChromaPre,
+ mode_lib->ms.UrgentBurstFactorCursorPre,
+@@ -6517,9 +6517,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.cursor_bw,
+ mode_lib->ms.cursor_bw_pre,
+ mode_lib->ms.NoOfDPPThisState,
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor,
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j],
+ mode_lib->ms.UrgentBurstFactorLumaPre,
+ mode_lib->ms.UrgentBurstFactorChromaPre,
+ mode_lib->ms.UrgentBurstFactorCursorPre);
+@@ -6586,9 +6586,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.cursor_bw_pre,
+ mode_lib->ms.prefetch_vmrow_bw,
+ mode_lib->ms.NoOfDPP[j], // VBA_ERROR DPPPerSurface is not assigned at this point, should use NoOfDpp here
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor,
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j],
+ mode_lib->ms.UrgentBurstFactorLumaPre,
+ mode_lib->ms.UrgentBurstFactorChromaPre,
+ mode_lib->ms.UrgentBurstFactorCursorPre,
+@@ -7809,9 +7809,9 @@ dml_bool_t dml_core_mode_support(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.DETBufferSizeYThisState[k],
+ mode_lib->ms.DETBufferSizeCThisState[k],
+ /* Output */
+- &mode_lib->ms.UrgentBurstFactorCursor[k],
+- &mode_lib->ms.UrgentBurstFactorLuma[k],
+- &mode_lib->ms.UrgentBurstFactorChroma[k],
++ &mode_lib->ms.UrgentBurstFactorCursor[j][k],
++ &mode_lib->ms.UrgentBurstFactorLuma[j][k],
++ &mode_lib->ms.UrgentBurstFactorChroma[j][k],
+ &mode_lib->ms.NotUrgentLatencyHiding[k]);
+ }
+
+@@ -9190,6 +9190,8 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ &locals->FractionOfUrgentBandwidth,
+ &s->dummy_boolean[0]); // dml_bool_t *PrefetchBandwidthSupport
+
++
++
+ if (s->VRatioPrefetchMoreThanMax != false || s->DestinationLineTimesForPrefetchLessThan2 != false) {
+ dml_print("DML::%s: VRatioPrefetchMoreThanMax = %u\n", __func__, s->VRatioPrefetchMoreThanMax);
+ dml_print("DML::%s: DestinationLineTimesForPrefetchLessThan2 = %u\n", __func__, s->DestinationLineTimesForPrefetchLessThan2);
+@@ -9204,6 +9206,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ }
+ }
+
++
+ if (locals->PrefetchModeSupported == true && mode_lib->ms.support.ImmediateFlipSupport == true) {
+ locals->BandwidthAvailableForImmediateFlip = CalculateBandwidthAvailableForImmediateFlip(
+ mode_lib->ms.num_active_planes,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h
+index f951936bb579e6..504c427b3b3191 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h
+@@ -884,11 +884,11 @@ struct mode_support_st {
+ dml_uint_t meta_row_height[__DML_NUM_PLANES__];
+ dml_uint_t meta_row_height_chroma[__DML_NUM_PLANES__];
+ dml_float_t UrgLatency;
+- dml_float_t UrgentBurstFactorCursor[__DML_NUM_PLANES__];
++ dml_float_t UrgentBurstFactorCursor[2][__DML_NUM_PLANES__];
+ dml_float_t UrgentBurstFactorCursorPre[__DML_NUM_PLANES__];
+- dml_float_t UrgentBurstFactorLuma[__DML_NUM_PLANES__];
++ dml_float_t UrgentBurstFactorLuma[2][__DML_NUM_PLANES__];
+ dml_float_t UrgentBurstFactorLumaPre[__DML_NUM_PLANES__];
+- dml_float_t UrgentBurstFactorChroma[__DML_NUM_PLANES__];
++ dml_float_t UrgentBurstFactorChroma[2][__DML_NUM_PLANES__];
+ dml_float_t UrgentBurstFactorChromaPre[__DML_NUM_PLANES__];
+ dml_float_t MaximumSwathWidthInLineBufferLuma;
+ dml_float_t MaximumSwathWidthInLineBufferChroma;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index 866b0abcff1bad..4d64c45930da49 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -533,14 +533,21 @@ static bool optimize_pstate_with_svp_and_drr(struct dml2_context *dml2, struct d
+ static bool call_dml_mode_support_and_programming(struct dc_state *context)
+ {
+ unsigned int result = 0;
+- unsigned int min_state;
++ unsigned int min_state = 0;
+ int min_state_for_g6_temp_read = 0;
++
++
++ if (!context)
++ return false;
++
+ struct dml2_context *dml2 = context->bw_ctx.dml2;
+ struct dml2_wrapper_scratch *s = &dml2->v20.scratch;
+
+- min_state_for_g6_temp_read = calculate_lowest_supported_state_for_temp_read(dml2, context);
++ if (!context->streams[0]->sink->link->dc->caps.is_apu) {
++ min_state_for_g6_temp_read = calculate_lowest_supported_state_for_temp_read(dml2, context);
+
+- ASSERT(min_state_for_g6_temp_read >= 0);
++ ASSERT(min_state_for_g6_temp_read >= 0);
++ }
+
+ if (!dml2->config.use_native_pstate_optimization) {
+ result = optimize_pstate_with_svp_and_drr(dml2, context);
+@@ -551,14 +558,20 @@ static bool call_dml_mode_support_and_programming(struct dc_state *context)
+ /* Upon trying to sett certain frequencies in FRL, min_state_for_g6_temp_read is reported as -1. This leads to an invalid value of min_state causing crashes later on.
+ * Use the default logic for min_state only when min_state_for_g6_temp_read is a valid value. In other cases, use the value calculated by the DML directly.
+ */
+- if (min_state_for_g6_temp_read >= 0)
+- min_state = min_state_for_g6_temp_read > s->mode_support_params.out_lowest_state_idx ? min_state_for_g6_temp_read : s->mode_support_params.out_lowest_state_idx;
+- else
+- min_state = s->mode_support_params.out_lowest_state_idx;
+-
+- if (result)
+- result = dml_mode_programming(&dml2->v20.dml_core_ctx, min_state, &s->cur_display_config, true);
++ if (!context->streams[0]->sink->link->dc->caps.is_apu) {
++ if (min_state_for_g6_temp_read >= 0)
++ min_state = min_state_for_g6_temp_read > s->mode_support_params.out_lowest_state_idx ? min_state_for_g6_temp_read : s->mode_support_params.out_lowest_state_idx;
++ else
++ min_state = s->mode_support_params.out_lowest_state_idx;
++ }
+
++ if (result) {
++ if (!context->streams[0]->sink->link->dc->caps.is_apu) {
++ result = dml_mode_programming(&dml2->v20.dml_core_ctx, min_state, &s->cur_display_config, true);
++ } else {
++ result = dml_mode_programming(&dml2->v20.dml_core_ctx, s->mode_support_params.out_lowest_state_idx, &s->cur_display_config, true);
++ }
++ }
+ return result;
+ }
+
+@@ -687,6 +700,8 @@ static bool dml2_validate_only(struct dc_state *context)
+ build_unoptimized_policy_settings(dml2->v20.dml_core_ctx.project, &dml2->v20.dml_core_ctx.policy);
+
+ map_dc_state_into_dml_display_cfg(dml2, context, &dml2->v20.scratch.cur_display_config);
++ if (!dml2->config.skip_hw_state_mapping)
++ dml2_apply_det_buffer_allocation_policy(dml2, &dml2->v20.scratch.cur_display_config);
+
+ result = pack_and_call_dml_mode_support_ex(dml2,
+ &dml2->v20.scratch.cur_display_config,
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+index 961d8936150ab7..75fb77bca83ba2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+@@ -483,10 +483,11 @@ void dpp1_set_cursor_position(
+ if (src_y_offset + cursor_height <= 0)
+ cur_en = 0; /* not visible beyond top edge*/
+
+- REG_UPDATE(CURSOR0_CONTROL,
+- CUR0_ENABLE, cur_en);
++ if (dpp_base->pos.cur0_ctl.bits.cur0_enable != cur_en) {
++ REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, cur_en);
+
+- dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ }
+ }
+
+ void dpp1_cnv_set_optional_cursor_attributes(
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
+index 3b6ca7974e188d..1236e0f9a2560c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
+@@ -154,9 +154,11 @@ void dpp401_set_cursor_position(
+ struct dcn401_dpp *dpp = TO_DCN401_DPP(dpp_base);
+ uint32_t cur_en = pos->enable ? 1 : 0;
+
+- REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, cur_en);
++ if (dpp_base->pos.cur0_ctl.bits.cur0_enable != cur_en) {
++ REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, cur_en);
+
+- dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ }
+ }
+
+ void dpp401_set_optional_cursor_attributes(
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
+index fe741100c0f880..d347bb06577ac6 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
+@@ -129,7 +129,8 @@ bool hubbub3_program_watermarks(
+ REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+ return wm_pending;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
+index 7fb5523f972244..b98505b240a797 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
+@@ -750,7 +750,8 @@ static bool hubbub31_program_watermarks(
+ REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+ return wm_pending;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
+index 5264dc26cce1fa..32a6be543105c1 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
+@@ -786,7 +786,8 @@ static bool hubbub32_program_watermarks(
+ REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+ hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
+index 5eb3da8d5206e9..dce7269959ce74 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
+@@ -326,7 +326,8 @@ static bool hubbub35_program_watermarks(
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND_COMMIT_THRESHOLD, 0xA);/*hw delta*/
+ REG_UPDATE(DCHUBBUB_ARB_HOSTVM_CNTL, DCHUBBUB_ARB_MAX_QOS_COMMIT_THRESHOLD, 0xF);
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+ hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+index b405fa22f87a9e..c74ee2d50a699a 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+@@ -1044,11 +1044,13 @@ void hubp2_cursor_set_position(
+ if (src_y_offset + cursor_height <= 0)
+ cur_en = 0; /* not visible beyond top edge*/
+
+- if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
+- hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
++ if (hubp->pos.cur_ctl.bits.cur_enable != cur_en) {
++ if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
++ hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
+
+- REG_UPDATE(CURSOR_CONTROL,
++ REG_UPDATE(CURSOR_CONTROL,
+ CURSOR_ENABLE, cur_en);
++ }
+
+ REG_SET_2(CURSOR_POSITION, 0,
+ CURSOR_X_POSITION, pos->x,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+index c55b1b8be8ffd6..5cf7e6771cb49e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+@@ -484,6 +484,8 @@ void hubp3_init(struct hubp *hubp)
+ //hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
+ REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
+
++ REG_UPDATE(DCHUBP_CNTL, HUBP_TTU_DISABLE, 0);
++
+ hubp_reset(hubp);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+index 45023fa9b708dc..c4f41350d1b3ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+@@ -168,6 +168,8 @@ void hubp32_init(struct hubp *hubp)
+ {
+ struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+ REG_WRITE(HUBPREQ_DEBUG_DB, 1 << 8);
++
++ REG_UPDATE(DCHUBP_CNTL, HUBP_TTU_DISABLE, 0);
+ }
+ static struct hubp_funcs dcn32_hubp_funcs = {
+ .hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+index 2d52100510f05f..7013c124efcff8 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+@@ -718,11 +718,13 @@ void hubp401_cursor_set_position(
+ dc_fixpt_from_int(dst_x_offset),
+ param->h_scale_ratio));
+
+- if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
+- hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
++ if (hubp->pos.cur_ctl.bits.cur_enable != cur_en) {
++ if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
++ hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
+
+- REG_UPDATE(CURSOR_CONTROL,
+- CURSOR_ENABLE, cur_en);
++ REG_UPDATE(CURSOR_CONTROL,
++ CURSOR_ENABLE, cur_en);
++ }
+
+ REG_SET_2(CURSOR_POSITION, 0,
+ CURSOR_X_POSITION, x_pos,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index f6b17bd3f714fa..38755ca771401b 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -236,7 +236,8 @@ void dcn35_init_hw(struct dc *dc)
+ }
+
+ hws->funcs.init_pipes(dc, dc->current_state);
+- if (dc->res_pool->hubbub->funcs->allow_self_refresh_control)
++ if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
++ !dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
+ dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+ !dc->res_pool->hubbub->ctx->dc->debug.disable_stutter);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
+index 7d04739c3ba146..4bbbe07ecde7d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
+@@ -671,9 +671,9 @@ static const struct dc_plane_cap plane_cap = {
+
+ /* 6:1 downscaling ratio: 1000/6 = 166.666 */
+ .max_downscale_factor = {
+- .argb8888 = 167,
+- .nv12 = 167,
+- .fp16 = 167
++ .argb8888 = 358,
++ .nv12 = 358,
++ .fp16 = 358
+ },
+ 64,
+ 64
+@@ -694,7 +694,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .disable_dcc = DCC_ENABLE,
+ .vsr_support = true,
+ .performance_trace = false,
+- .max_downscale_src_width = 7680,/*upto 8K*/
++ .max_downscale_src_width = 4096,/*up to true 4K*/
+ .scl_reset_length10 = true,
+ .sanity_checks = false,
+ .underflow_assert_delay_us = 0xFFFFFFFF,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index 2c35eb31475ab8..5a1f24438e472a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1731,7 +1731,6 @@ static ssize_t aldebaran_get_gpu_metrics(struct smu_context *smu,
+
+ gpu_metrics->average_gfx_activity = metrics.AverageGfxActivity;
+ gpu_metrics->average_umc_activity = metrics.AverageUclkActivity;
+- gpu_metrics->average_mm_activity = 0;
+
+ /* Valid power data is available only from primary die */
+ if (aldebaran_is_primary(smu)) {
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c b/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
+index ebccb74306a765..f30b3d5eeca5c5 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
+@@ -160,6 +160,10 @@ static int komeda_wb_connector_add(struct komeda_kms_dev *kms,
+ formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
+ kwb_conn->wb_layer->layer_type,
+ &n_formats);
++ if (!formats) {
++ kfree(kwb_conn);
++ return -ENOMEM;
++ }
+
+ err = drm_writeback_connector_init(&kms->base, wb_conn,
+ &komeda_wb_connector_funcs,
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index a2675b121fe44b..c036bbc92ba96e 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -2002,8 +2002,10 @@ static int anx7625_audio_get_eld(struct device *dev, void *data,
+ memset(buf, 0, len);
+ } else {
+ dev_dbg(dev, "audio copy eld\n");
++ mutex_lock(&ctx->connector->eld_mutex);
+ memcpy(buf, ctx->connector->eld,
+ min(sizeof(ctx->connector->eld), len));
++ mutex_unlock(&ctx->connector->eld_mutex);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index cf891e7677c0e2..faee8e2e82a053 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -296,7 +296,7 @@
+ #define MAX_LANE_COUNT 4
+ #define MAX_LINK_RATE HBR
+ #define AUTO_TRAIN_RETRY 3
+-#define MAX_HDCP_DOWN_STREAM_COUNT 10
++#define MAX_HDCP_DOWN_STREAM_COUNT 127
+ #define MAX_CR_LEVEL 0x03
+ #define MAX_EQ_LEVEL 0x03
+ #define AUX_WAIT_TIMEOUT_MS 15
+@@ -2023,7 +2023,7 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
+ {
+ struct device *dev = it6505->dev;
+ u8 av[5][4], bv[5][4];
+- int i, err;
++ int i, err, retry;
+
+ i = it6505_setup_sha1_input(it6505, it6505->sha1_input);
+ if (i <= 0) {
+@@ -2032,22 +2032,28 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
+ }
+
+ it6505_sha1_digest(it6505, it6505->sha1_input, i, (u8 *)av);
++ /* 1B-05 V' must retry 3 times */
++ for (retry = 0; retry < 3; retry++) {
++ err = it6505_get_dpcd(it6505, DP_AUX_HDCP_V_PRIME(0), (u8 *)bv,
++ sizeof(bv));
+
+- err = it6505_get_dpcd(it6505, DP_AUX_HDCP_V_PRIME(0), (u8 *)bv,
+- sizeof(bv));
++ if (err < 0) {
++ dev_err(dev, "Read V' value Fail %d", retry);
++ continue;
++ }
+
+- if (err < 0) {
+- dev_err(dev, "Read V' value Fail");
+- return false;
+- }
++ for (i = 0; i < 5; i++) {
++ if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
++ bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
++ break;
+
+- for (i = 0; i < 5; i++)
+- if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
+- bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
+- return false;
++ DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d, %d", retry, i);
++ return true;
++ }
++ }
+
+- DRM_DEV_DEBUG_DRIVER(dev, "V' all match!!");
+- return true;
++ DRM_DEV_DEBUG_DRIVER(dev, "V' NOT match!! %d", retry);
++ return false;
+ }
+
+ static void it6505_hdcp_wait_ksv_list(struct work_struct *work)
+@@ -2055,12 +2061,13 @@ static void it6505_hdcp_wait_ksv_list(struct work_struct *work)
+ struct it6505 *it6505 = container_of(work, struct it6505,
+ hdcp_wait_ksv_list);
+ struct device *dev = it6505->dev;
+- unsigned int timeout = 5000;
+- u8 bstatus = 0;
++ u8 bstatus;
+ bool ksv_list_check;
++ /* 1B-04 wait ksv list for 5s */
++ unsigned long timeout = jiffies +
++ msecs_to_jiffies(5000) + 1;
+
+- timeout /= 20;
+- while (timeout > 0) {
++ for (;;) {
+ if (!it6505_get_sink_hpd_status(it6505))
+ return;
+
+@@ -2069,27 +2076,23 @@ static void it6505_hdcp_wait_ksv_list(struct work_struct *work)
+ if (bstatus & DP_BSTATUS_READY)
+ break;
+
+- msleep(20);
+- timeout--;
+- }
++ if (time_after(jiffies, timeout)) {
++ DRM_DEV_DEBUG_DRIVER(dev, "KSV list wait timeout");
++ goto timeout;
++ }
+
+- if (timeout == 0) {
+- DRM_DEV_DEBUG_DRIVER(dev, "timeout and ksv list wait failed");
+- goto timeout;
++ msleep(20);
+ }
+
+ ksv_list_check = it6505_hdcp_part2_ksvlist_check(it6505);
+ DRM_DEV_DEBUG_DRIVER(dev, "ksv list ready, ksv list check %s",
+ ksv_list_check ? "pass" : "fail");
+- if (ksv_list_check) {
+- it6505_set_bits(it6505, REG_HDCP_TRIGGER,
+- HDCP_TRIGGER_KSV_DONE, HDCP_TRIGGER_KSV_DONE);
++
++ if (ksv_list_check)
+ return;
+- }
++
+ timeout:
+- it6505_set_bits(it6505, REG_HDCP_TRIGGER,
+- HDCP_TRIGGER_KSV_DONE | HDCP_TRIGGER_KSV_FAIL,
+- HDCP_TRIGGER_KSV_DONE | HDCP_TRIGGER_KSV_FAIL);
++ it6505_start_hdcp(it6505);
+ }
+
+ static void it6505_hdcp_work(struct work_struct *work)
+@@ -2312,14 +2315,20 @@ static int it6505_process_hpd_irq(struct it6505 *it6505)
+ DRM_DEV_DEBUG_DRIVER(dev, "dp_irq_vector = 0x%02x", dp_irq_vector);
+
+ if (dp_irq_vector & DP_CP_IRQ) {
+- it6505_set_bits(it6505, REG_HDCP_TRIGGER, HDCP_TRIGGER_CPIRQ,
+- HDCP_TRIGGER_CPIRQ);
+-
+ bstatus = it6505_dpcd_read(it6505, DP_AUX_HDCP_BSTATUS);
+ if (bstatus < 0)
+ return bstatus;
+
+ DRM_DEV_DEBUG_DRIVER(dev, "Bstatus = 0x%02x", bstatus);
++
++ /* Check BSTATUS when receiving CP_IRQ */
++ if (bstatus & DP_BSTATUS_R0_PRIME_READY &&
++ it6505->hdcp_status == HDCP_AUTH_GOING)
++ it6505_set_bits(it6505, REG_HDCP_TRIGGER, HDCP_TRIGGER_CPIRQ,
++ HDCP_TRIGGER_CPIRQ);
++ else if (bstatus & (DP_BSTATUS_REAUTH_REQ | DP_BSTATUS_LINK_FAILURE) &&
++ it6505->hdcp_status == HDCP_AUTH_DONE)
++ it6505_start_hdcp(it6505);
+ }
+
+ ret = drm_dp_dpcd_read_link_status(&it6505->aux, link_status);
+@@ -2456,7 +2465,11 @@ static void it6505_irq_hdcp_ksv_check(struct it6505 *it6505)
+ {
+ struct device *dev = it6505->dev;
+
+- DRM_DEV_DEBUG_DRIVER(dev, "HDCP event Interrupt");
++ DRM_DEV_DEBUG_DRIVER(dev, "HDCP repeater R0 event Interrupt");
++ /* 1B-01 HDCP encryption should start when R0 is ready */
++ it6505_set_bits(it6505, REG_HDCP_TRIGGER,
++ HDCP_TRIGGER_KSV_DONE, HDCP_TRIGGER_KSV_DONE);
++
+ schedule_work(&it6505->hdcp_wait_ksv_list);
+ }
+
+diff --git a/drivers/gpu/drm/bridge/ite-it66121.c b/drivers/gpu/drm/bridge/ite-it66121.c
+index 925e42f46cd87f..0f8d3ab30daa68 100644
+--- a/drivers/gpu/drm/bridge/ite-it66121.c
++++ b/drivers/gpu/drm/bridge/ite-it66121.c
+@@ -1452,8 +1452,10 @@ static int it66121_audio_get_eld(struct device *dev, void *data,
+ dev_dbg(dev, "No connector present, passing empty EDID data");
+ memset(buf, 0, len);
+ } else {
++ mutex_lock(&ctx->connector->eld_mutex);
+ memcpy(buf, ctx->connector->eld,
+ min(sizeof(ctx->connector->eld), len));
++ mutex_unlock(&ctx->connector->eld_mutex);
+ }
+ mutex_unlock(&ctx->lock);
+
+diff --git a/drivers/gpu/drm/display/drm_dp_cec.c b/drivers/gpu/drm/display/drm_dp_cec.c
+index 007ceb281d00da..56a4965e518cc2 100644
+--- a/drivers/gpu/drm/display/drm_dp_cec.c
++++ b/drivers/gpu/drm/display/drm_dp_cec.c
+@@ -311,16 +311,6 @@ void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
+ if (!aux->transfer)
+ return;
+
+-#ifndef CONFIG_MEDIA_CEC_RC
+- /*
+- * CEC_CAP_RC is part of CEC_CAP_DEFAULTS, but it is stripped by
+- * cec_allocate_adapter() if CONFIG_MEDIA_CEC_RC is undefined.
+- *
+- * Do this here as well to ensure the tests against cec_caps are
+- * correct.
+- */
+- cec_caps &= ~CEC_CAP_RC;
+-#endif
+ cancel_delayed_work_sync(&aux->cec.unregister_work);
+
+ mutex_lock(&aux->cec.lock);
+@@ -337,7 +327,9 @@ void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
+ num_las = CEC_MAX_LOG_ADDRS;
+
+ if (aux->cec.adap) {
+- if (aux->cec.adap->capabilities == cec_caps &&
++ /* Check if the adapter properties have changed */
++ if ((aux->cec.adap->capabilities & CEC_CAP_MONITOR_ALL) ==
++ (cec_caps & CEC_CAP_MONITOR_ALL) &&
+ aux->cec.adap->available_log_addrs == num_las) {
+ /* Unchanged, so just set the phys addr */
+ cec_s_phys_addr(aux->cec.adap, source_physical_address, false);
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index cee5eafbfb81a8..fd620f7db0dd27 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -741,6 +741,15 @@ static bool drm_client_firmware_config(struct drm_client_dev *client,
+ if ((conn_configured & mask) != mask && conn_configured != conn_seq)
+ goto retry;
+
++ for (i = 0; i < count; i++) {
++ struct drm_connector *connector = connectors[i];
++
++ if (connector->has_tile)
++ drm_client_get_tile_offsets(dev, connectors, connector_count,
++ modes, offsets, i,
++ connector->tile_h_loc, connector->tile_v_loc);
++ }
++
+ /*
+ * If the BIOS didn't enable everything it could, fall back to have the
+ * same user experiencing of lighting up as much as possible like the
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index ca7f43c8d6f1b3..0e6021235a9304 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -277,6 +277,7 @@ static int __drm_connector_init(struct drm_device *dev,
+ INIT_LIST_HEAD(&connector->probed_modes);
+ INIT_LIST_HEAD(&connector->modes);
+ mutex_init(&connector->mutex);
++ mutex_init(&connector->eld_mutex);
+ mutex_init(&connector->edid_override_mutex);
+ mutex_init(&connector->hdmi.infoframes.lock);
+ connector->edid_blob_ptr = NULL;
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 855beafb76ffbe..13bc4c290b17d5 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5605,7 +5605,9 @@ EXPORT_SYMBOL(drm_edid_get_monitor_name);
+
+ static void clear_eld(struct drm_connector *connector)
+ {
++ mutex_lock(&connector->eld_mutex);
+ memset(connector->eld, 0, sizeof(connector->eld));
++ mutex_unlock(&connector->eld_mutex);
+
+ connector->latency_present[0] = false;
+ connector->latency_present[1] = false;
+@@ -5657,6 +5659,8 @@ static void drm_edid_to_eld(struct drm_connector *connector,
+ if (!drm_edid)
+ return;
+
++ mutex_lock(&connector->eld_mutex);
++
+ mnl = get_monitor_name(drm_edid, &eld[DRM_ELD_MONITOR_NAME_STRING]);
+ drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] ELD monitor %s\n",
+ connector->base.id, connector->name,
+@@ -5717,6 +5721,8 @@ static void drm_edid_to_eld(struct drm_connector *connector,
+ drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] ELD size %d, SAD count %d\n",
+ connector->base.id, connector->name,
+ drm_eld_size(eld), total_sad_count);
++
++ mutex_unlock(&connector->eld_mutex);
+ }
+
+ static int _drm_edid_to_sad(const struct drm_edid *drm_edid,
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 29c53f9f449ca8..eaac2e5726e750 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1348,14 +1348,14 @@ int drm_fb_helper_set_par(struct fb_info *info)
+ }
+ EXPORT_SYMBOL(drm_fb_helper_set_par);
+
+-static void pan_set(struct drm_fb_helper *fb_helper, int x, int y)
++static void pan_set(struct drm_fb_helper *fb_helper, int dx, int dy)
+ {
+ struct drm_mode_set *mode_set;
+
+ mutex_lock(&fb_helper->client.modeset_mutex);
+ drm_client_for_each_modeset(mode_set, &fb_helper->client) {
+- mode_set->x = x;
+- mode_set->y = y;
++ mode_set->x += dx;
++ mode_set->y += dy;
+ }
+ mutex_unlock(&fb_helper->client.modeset_mutex);
+ }
+@@ -1364,16 +1364,18 @@ static int pan_display_atomic(struct fb_var_screeninfo *var,
+ struct fb_info *info)
+ {
+ struct drm_fb_helper *fb_helper = info->par;
+- int ret;
++ int ret, dx, dy;
+
+- pan_set(fb_helper, var->xoffset, var->yoffset);
++ dx = var->xoffset - info->var.xoffset;
++ dy = var->yoffset - info->var.yoffset;
++ pan_set(fb_helper, dx, dy);
+
+ ret = drm_client_modeset_commit_locked(&fb_helper->client);
+ if (!ret) {
+ info->var.xoffset = var->xoffset;
+ info->var.yoffset = var->yoffset;
+ } else
+- pan_set(fb_helper, info->var.xoffset, info->var.yoffset);
++ pan_set(fb_helper, -dx, -dy);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/drm_panel_backlight_quirks.c b/drivers/gpu/drm/drm_panel_backlight_quirks.c
+new file mode 100644
+index 00000000000000..c477d98ade2b41
+--- /dev/null
++++ b/drivers/gpu/drm/drm_panel_backlight_quirks.c
+@@ -0,0 +1,94 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/array_size.h>
++#include <linux/dmi.h>
++#include <linux/mod_devicetable.h>
++#include <linux/module.h>
++#include <drm/drm_edid.h>
++#include <drm/drm_utils.h>
++
++struct drm_panel_min_backlight_quirk {
++ struct {
++ enum dmi_field field;
++ const char * const value;
++ } dmi_match;
++ struct drm_edid_ident ident;
++ u8 min_brightness;
++};
++
++static const struct drm_panel_min_backlight_quirk drm_panel_min_backlight_quirks[] = {
++ /* 13 inch matte panel */
++ {
++ .dmi_match.field = DMI_BOARD_VENDOR,
++ .dmi_match.value = "Framework",
++ .ident.panel_id = drm_edid_encode_panel_id('B', 'O', 'E', 0x0bca),
++ .ident.name = "NE135FBM-N41",
++ .min_brightness = 0,
++ },
++ /* 13 inch glossy panel */
++ {
++ .dmi_match.field = DMI_BOARD_VENDOR,
++ .dmi_match.value = "Framework",
++ .ident.panel_id = drm_edid_encode_panel_id('B', 'O', 'E', 0x095f),
++ .ident.name = "NE135FBM-N41",
++ .min_brightness = 0,
++ },
++ /* 13 inch 2.8k panel */
++ {
++ .dmi_match.field = DMI_BOARD_VENDOR,
++ .dmi_match.value = "Framework",
++ .ident.panel_id = drm_edid_encode_panel_id('B', 'O', 'E', 0x0cb4),
++ .ident.name = "NE135A1M-NY1",
++ .min_brightness = 0,
++ },
++};
++
++static bool drm_panel_min_backlight_quirk_matches(const struct drm_panel_min_backlight_quirk *quirk,
++ const struct drm_edid *edid)
++{
++ if (!dmi_match(quirk->dmi_match.field, quirk->dmi_match.value))
++ return false;
++
++ if (!drm_edid_match(edid, &quirk->ident))
++ return false;
++
++ return true;
++}
++
++/**
++ * drm_get_panel_min_brightness_quirk - Get minimum supported brightness level for a panel.
++ * @edid: EDID of the panel to check
++ *
++ * This function checks for platform-specific (e.g. DMI-based) quirks
++ * providing info on the minimum backlight brightness for systems where this
++ * cannot be probed correctly from the hardware or firmware.
++ *
++ * Returns:
++ * A negative error value or
++ * an override value in the range [0, 255] representing 0-100% to be scaled to
++ * the driver's target range.
++ */
++int drm_get_panel_min_brightness_quirk(const struct drm_edid *edid)
++{
++ const struct drm_panel_min_backlight_quirk *quirk;
++ size_t i;
++
++ if (!IS_ENABLED(CONFIG_DMI))
++ return -ENODATA;
++
++ if (!edid)
++ return -EINVAL;
++
++ for (i = 0; i < ARRAY_SIZE(drm_panel_min_backlight_quirks); i++) {
++ quirk = &drm_panel_min_backlight_quirks[i];
++
++ if (drm_panel_min_backlight_quirk_matches(quirk, edid))
++ return quirk->min_brightness;
++ }
++
++ return -ENODATA;
++}
++EXPORT_SYMBOL(drm_get_panel_min_brightness_quirk);
++
++MODULE_DESCRIPTION("Quirks for panel backlight overrides");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
+index 1e26cd4f834798..52059cfff4f0b3 100644
+--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
++++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
+@@ -1643,7 +1643,9 @@ static int hdmi_audio_get_eld(struct device *dev, void *data, uint8_t *buf,
+ struct hdmi_context *hdata = dev_get_drvdata(dev);
+ struct drm_connector *connector = &hdata->connector;
+
++ mutex_lock(&connector->eld_mutex);
+ memcpy(buf, connector->eld, min(sizeof(connector->eld), len));
++ mutex_unlock(&connector->eld_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 90fa73575feb13..45cca965c11b48 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2022,11 +2022,10 @@ icl_dsc_compute_link_config(struct intel_dp *intel_dp,
+ /* Compressed BPP should be less than the Input DSC bpp */
+ dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);
+
+- for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
+- if (valid_dsc_bpp[i] < dsc_min_bpp)
++ for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
++ if (valid_dsc_bpp[i] < dsc_min_bpp ||
++ valid_dsc_bpp[i] > dsc_max_bpp)
+ continue;
+- if (valid_dsc_bpp[i] > dsc_max_bpp)
+- break;
+
+ ret = dsc_compute_link_config(intel_dp,
+ pipe_config,
+@@ -2738,7 +2737,6 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
+
+ crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_ADAPTIVE_SYNC);
+
+- /* Currently only DP_AS_SDP_AVT_FIXED_VTOTAL mode supported */
+ as_sdp->sdp_type = DP_SDP_ADAPTIVE_SYNC;
+ as_sdp->length = 0x9;
+ as_sdp->duration_incr_ms = 0;
+@@ -2750,7 +2748,7 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
+ as_sdp->target_rr = drm_mode_vrefresh(adjusted_mode);
+ as_sdp->target_rr_divider = true;
+ } else {
+- as_sdp->mode = DP_AS_SDP_AVT_FIXED_VTOTAL;
++ as_sdp->mode = DP_AS_SDP_AVT_DYNAMIC_VTOTAL;
+ as_sdp->vtotal = adjusted_mode->vtotal;
+ as_sdp->target_rr = 0;
+ }
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index c8720d31d1013d..62a5287ea1d9c4 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -105,8 +105,6 @@ static const u32 icl_sdr_y_plane_formats[] = {
+ DRM_FORMAT_Y216,
+ DRM_FORMAT_XYUV8888,
+ DRM_FORMAT_XVYU2101010,
+- DRM_FORMAT_XVYU12_16161616,
+- DRM_FORMAT_XVYU16161616,
+ };
+
+ static const u32 icl_sdr_uv_plane_formats[] = {
+@@ -133,8 +131,6 @@ static const u32 icl_sdr_uv_plane_formats[] = {
+ DRM_FORMAT_Y216,
+ DRM_FORMAT_XYUV8888,
+ DRM_FORMAT_XVYU2101010,
+- DRM_FORMAT_XVYU12_16161616,
+- DRM_FORMAT_XVYU16161616,
+ };
+
+ static const u32 icl_hdr_plane_formats[] = {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+index fe69f2c8527d79..ae3343c81a6455 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+@@ -209,8 +209,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
+ struct address_space *mapping = obj->base.filp->f_mapping;
+ unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
+ struct sg_table *st;
+- struct sgt_iter sgt_iter;
+- struct page *page;
+ int ret;
+
+ /*
+@@ -239,9 +237,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
+ * for PAGE_SIZE chunks instead may be helpful.
+ */
+ if (max_segment > PAGE_SIZE) {
+- for_each_sgt_page(page, sgt_iter, st)
+- put_page(page);
+- sg_free_table(st);
++ shmem_sg_free_table(st, mapping, false, false);
+ kfree(st);
+
+ max_segment = PAGE_SIZE;
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index ee12ee0ed41871..b0e94c95940f67 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -5511,12 +5511,20 @@ static inline void guc_log_context(struct drm_printer *p,
+ {
+ drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id.id);
+ drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca);
+- drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
+- ce->ring->head,
+- ce->lrc_reg_state[CTX_RING_HEAD]);
+- drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
+- ce->ring->tail,
+- ce->lrc_reg_state[CTX_RING_TAIL]);
++ if (intel_context_pin_if_active(ce)) {
++ drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
++ ce->ring->head,
++ ce->lrc_reg_state[CTX_RING_HEAD]);
++ drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
++ ce->ring->tail,
++ ce->lrc_reg_state[CTX_RING_TAIL]);
++ intel_context_unpin(ce);
++ } else {
++ drm_printf(p, "\t\tLRC Head: Internal %u, Memory not pinned\n",
++ ce->ring->head);
++ drm_printf(p, "\t\tLRC Tail: Internal %u, Memory not pinned\n",
++ ce->ring->tail);
++ }
+ drm_printf(p, "\t\tContext Pin Count: %u\n",
+ atomic_read(&ce->pin_count));
+ drm_printf(p, "\t\tGuC ID Ref Count: %u\n",
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index d586aea3089841..9c83bab0a53091 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -121,6 +121,8 @@ r535_gsp_msgq_wait(struct nvkm_gsp *gsp, u32 repc, u32 *prepc, int *ptime)
+ return mqe->data;
+ }
+
++ size = ALIGN(repc + GSP_MSG_HDR_SIZE, GSP_PAGE_SIZE);
++
+ msg = kvmalloc(repc, GFP_KERNEL);
+ if (!msg)
+ return ERR_PTR(-ENOMEM);
+@@ -129,19 +131,15 @@ r535_gsp_msgq_wait(struct nvkm_gsp *gsp, u32 repc, u32 *prepc, int *ptime)
+ len = min_t(u32, repc, len);
+ memcpy(msg, mqe->data, len);
+
+- rptr += DIV_ROUND_UP(len, GSP_PAGE_SIZE);
+- if (rptr == gsp->msgq.cnt)
+- rptr = 0;
+-
+ repc -= len;
+
+ if (repc) {
+ mqe = (void *)((u8 *)gsp->shm.msgq.ptr + 0x1000 + 0 * 0x1000);
+ memcpy(msg + len, mqe, repc);
+-
+- rptr += DIV_ROUND_UP(repc, GSP_PAGE_SIZE);
+ }
+
++ rptr = (rptr + DIV_ROUND_UP(size, GSP_PAGE_SIZE)) % gsp->msgq.cnt;
++
+ mb();
+ (*gsp->msgq.rptr) = rptr;
+ return msg;
+@@ -163,7 +161,7 @@ r535_gsp_cmdq_push(struct nvkm_gsp *gsp, void *argv)
+ u64 *end;
+ u64 csum = 0;
+ int free, time = 1000000;
+- u32 wptr, size;
++ u32 wptr, size, step;
+ u32 off = 0;
+
+ argc = ALIGN(GSP_MSG_HDR_SIZE + argc, GSP_PAGE_SIZE);
+@@ -197,7 +195,9 @@ r535_gsp_cmdq_push(struct nvkm_gsp *gsp, void *argv)
+ }
+
+ cqe = (void *)((u8 *)gsp->shm.cmdq.ptr + 0x1000 + wptr * 0x1000);
+- size = min_t(u32, argc, (gsp->cmdq.cnt - wptr) * GSP_PAGE_SIZE);
++ step = min_t(u32, free, (gsp->cmdq.cnt - wptr));
++ size = min_t(u32, argc, step * GSP_PAGE_SIZE);
++
+ memcpy(cqe, (u8 *)cmd + off, size);
+
+ wptr += DIV_ROUND_UP(size, 0x1000);
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index 5b69cc8011b42b..8d64ba18572ec4 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -775,8 +775,10 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port,
+ if (!dig->pin || dig->pin->id != port)
+ continue;
+ *enabled = true;
++ mutex_lock(&connector->eld_mutex);
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
++ mutex_unlock(&connector->eld_mutex);
+ break;
+ }
+
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index b04538907f956c..f576b1aa86d143 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -947,9 +947,6 @@ static void cdn_dp_pd_event_work(struct work_struct *work)
+ {
+ struct cdn_dp_device *dp = container_of(work, struct cdn_dp_device,
+ event_work);
+- struct drm_connector *connector = &dp->connector;
+- enum drm_connector_status old_status;
+-
+ int ret;
+
+ mutex_lock(&dp->lock);
+@@ -1009,11 +1006,7 @@ static void cdn_dp_pd_event_work(struct work_struct *work)
+
+ out:
+ mutex_unlock(&dp->lock);
+-
+- old_status = connector->status;
+- connector->status = connector->funcs->detect(connector, false);
+- if (old_status != connector->status)
+- drm_kms_helper_hotplug_event(dp->drm_dev);
++ drm_connector_helper_hpd_irq_event(&dp->connector);
+ }
+
+ static int cdn_dp_pd_event(struct notifier_block *nb,
+diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
+index 847470f747c0ef..3c8f3532c79723 100644
+--- a/drivers/gpu/drm/sti/sti_hdmi.c
++++ b/drivers/gpu/drm/sti/sti_hdmi.c
+@@ -1225,7 +1225,9 @@ static int hdmi_audio_get_eld(struct device *dev, void *data, uint8_t *buf, size
+ struct drm_connector *connector = hdmi->drm_connector;
+
+ DRM_DEBUG_DRIVER("\n");
++ mutex_lock(&connector->eld_mutex);
+ memcpy(buf, connector->eld, min(sizeof(connector->eld), len));
++ mutex_unlock(&connector->eld_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+index 294773342e710d..4ba869e0e794c7 100644
+--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+@@ -46,7 +46,7 @@ static struct drm_display_mode *find_preferred_mode(struct drm_connector *connec
+ struct drm_display_mode *mode, *preferred;
+
+ mutex_lock(&drm->mode_config.mutex);
+- preferred = list_first_entry(&connector->modes, struct drm_display_mode, head);
++ preferred = list_first_entry_or_null(&connector->modes, struct drm_display_mode, head);
+ list_for_each_entry(mode, &connector->modes, head)
+ if (mode->type & DRM_MODE_TYPE_PREFERRED)
+ preferred = mode;
+@@ -105,9 +105,8 @@ static int set_connector_edid(struct kunit *test, struct drm_connector *connecto
+ mutex_lock(&drm->mode_config.mutex);
+ ret = connector->funcs->fill_modes(connector, 4096, 4096);
+ mutex_unlock(&drm->mode_config.mutex);
+- KUNIT_ASSERT_GT(test, ret, 0);
+
+- return 0;
++ return ret;
+ }
+
+ static const struct drm_connector_hdmi_funcs dummy_connector_hdmi_funcs = {
+@@ -223,7 +222,7 @@ drm_atomic_helper_connector_hdmi_init(struct kunit *test,
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ return priv;
+ }
+@@ -728,7 +727,7 @@ static void drm_test_check_output_bpc_crtc_mode_changed(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -802,7 +801,7 @@ static void drm_test_check_output_bpc_crtc_mode_not_changed(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -873,7 +872,7 @@ static void drm_test_check_output_bpc_dvi(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_dvi_1080p,
+ ARRAY_SIZE(test_edid_dvi_1080p));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_FALSE(test, info->is_hdmi);
+@@ -920,7 +919,7 @@ static void drm_test_check_tmds_char_rate_rgb_8bpc(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -967,7 +966,7 @@ static void drm_test_check_tmds_char_rate_rgb_10bpc(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -1014,7 +1013,7 @@ static void drm_test_check_tmds_char_rate_rgb_12bpc(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -1121,7 +1120,7 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1190,7 +1189,7 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1254,7 +1253,7 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1314,7 +1313,7 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1381,7 +1380,7 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1447,7 +1446,7 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1507,7 +1506,7 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 7e0a5ea7ab859a..6b83d02b5d62a5 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -2192,9 +2192,9 @@ static int vc4_hdmi_audio_get_eld(struct device *dev, void *data,
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+ struct drm_connector *connector = &vc4_hdmi->connector;
+
+- mutex_lock(&vc4_hdmi->mutex);
++ mutex_lock(&connector->eld_mutex);
+ memcpy(buf, connector->eld, min(sizeof(connector->eld), len));
+- mutex_unlock(&vc4_hdmi->mutex);
++ mutex_unlock(&connector->eld_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
+index 64c236169db88a..5dc8eeaf7123c4 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
+@@ -194,6 +194,13 @@ struct virtio_gpu_framebuffer {
+ #define to_virtio_gpu_framebuffer(x) \
+ container_of(x, struct virtio_gpu_framebuffer, base)
+
++struct virtio_gpu_plane_state {
++ struct drm_plane_state base;
++ struct virtio_gpu_fence *fence;
++};
++#define to_virtio_gpu_plane_state(x) \
++ container_of(x, struct virtio_gpu_plane_state, base)
++
+ struct virtio_gpu_queue {
+ struct virtqueue *vq;
+ spinlock_t qlock;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index a72a2dbda031c2..7acd38b962c621 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -66,11 +66,28 @@ uint32_t virtio_gpu_translate_format(uint32_t drm_fourcc)
+ return format;
+ }
+
++static struct
++drm_plane_state *virtio_gpu_plane_duplicate_state(struct drm_plane *plane)
++{
++ struct virtio_gpu_plane_state *new;
++
++ if (WARN_ON(!plane->state))
++ return NULL;
++
++ new = kzalloc(sizeof(*new), GFP_KERNEL);
++ if (!new)
++ return NULL;
++
++ __drm_atomic_helper_plane_duplicate_state(plane, &new->base);
++
++ return &new->base;
++}
++
+ static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
+ .reset = drm_atomic_helper_plane_reset,
+- .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
++ .atomic_duplicate_state = virtio_gpu_plane_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+ };
+
+@@ -138,11 +155,13 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
+ struct drm_device *dev = plane->dev;
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+ struct virtio_gpu_object *bo;
+
+ vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
++ vgplane_st = to_virtio_gpu_plane_state(plane->state);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+- if (vgfb->fence) {
++ if (vgplane_st->fence) {
+ struct virtio_gpu_object_array *objs;
+
+ objs = virtio_gpu_array_alloc(1);
+@@ -151,13 +170,11 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
+ virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
+ virtio_gpu_array_lock_resv(objs);
+ virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
+- width, height, objs, vgfb->fence);
++ width, height, objs,
++ vgplane_st->fence);
+ virtio_gpu_notify(vgdev);
+-
+- dma_fence_wait_timeout(&vgfb->fence->f, true,
++ dma_fence_wait_timeout(&vgplane_st->fence->f, true,
+ msecs_to_jiffies(50));
+- dma_fence_put(&vgfb->fence->f);
+- vgfb->fence = NULL;
+ } else {
+ virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
+ width, height, NULL, NULL);
+@@ -247,20 +264,23 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ struct drm_device *dev = plane->dev;
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+ struct virtio_gpu_object *bo;
+
+ if (!new_state->fb)
+ return 0;
+
+ vgfb = to_virtio_gpu_framebuffer(new_state->fb);
++ vgplane_st = to_virtio_gpu_plane_state(new_state);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+ if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+ return 0;
+
+- if (bo->dumb && (plane->state->fb != new_state->fb)) {
+- vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
++ if (bo->dumb) {
++ vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
++ vgdev->fence_drv.context,
+ 0);
+- if (!vgfb->fence)
++ if (!vgplane_st->fence)
+ return -ENOMEM;
+ }
+
+@@ -270,15 +290,15 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
+ struct drm_plane_state *state)
+ {
+- struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+
+ if (!state->fb)
+ return;
+
+- vgfb = to_virtio_gpu_framebuffer(state->fb);
+- if (vgfb->fence) {
+- dma_fence_put(&vgfb->fence->f);
+- vgfb->fence = NULL;
++ vgplane_st = to_virtio_gpu_plane_state(state);
++ if (vgplane_st->fence) {
++ dma_fence_put(&vgplane_st->fence->f);
++ vgplane_st->fence = NULL;
+ }
+ }
+
+@@ -291,6 +311,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_output *output = NULL;
+ struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+ struct virtio_gpu_object *bo = NULL;
+ uint32_t handle;
+
+@@ -303,6 +324,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
+
+ if (plane->state->fb) {
+ vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
++ vgplane_st = to_virtio_gpu_plane_state(plane->state);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+ handle = bo->hw_res_handle;
+ } else {
+@@ -322,11 +344,9 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
+ (vgdev, 0,
+ plane->state->crtc_w,
+ plane->state->crtc_h,
+- 0, 0, objs, vgfb->fence);
++ 0, 0, objs, vgplane_st->fence);
+ virtio_gpu_notify(vgdev);
+- dma_fence_wait(&vgfb->fence->f, true);
+- dma_fence_put(&vgfb->fence->f);
+- vgfb->fence = NULL;
++ dma_fence_wait(&vgplane_st->fence->f, true);
+ }
+
+ if (plane->state->fb != old_state->fb) {
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index 85aa3ab0da3b87..8050938389b68f 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -104,11 +104,7 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+ drm_puts(&p, "\n**** GuC CT ****\n");
+ xe_guc_ct_snapshot_print(ss->ct, &p);
+
+- /*
+- * Don't add a new section header here because the mesa debug decoder
+- * tool expects the context information to be in the 'GuC CT' section.
+- */
+- /* drm_puts(&p, "\n**** Contexts ****\n"); */
++ drm_puts(&p, "\n**** Contexts ****\n");
+ xe_guc_exec_queue_snapshot_print(ss->ge, &p);
+
+ drm_puts(&p, "\n**** Job ****\n");
+@@ -337,42 +333,34 @@ int xe_devcoredump_init(struct xe_device *xe)
+ /**
+ * xe_print_blob_ascii85 - print a BLOB to some useful location in ASCII85
+ *
+- * The output is split to multiple lines because some print targets, e.g. dmesg
+- * cannot handle arbitrarily long lines. Note also that printing to dmesg in
+- * piece-meal fashion is not possible, each separate call to drm_puts() has a
+- * line-feed automatically added! Therefore, the entire output line must be
+- * constructed in a local buffer first, then printed in one atomic output call.
++ * The output is split into multiple calls to drm_puts() because some print
++ * targets, e.g. dmesg, cannot handle arbitrarily long lines. These targets may
++ * add newlines, as is the case with dmesg: each drm_puts() call creates a
++ * separate line.
+ *
+ * There is also a scheduler yield call to prevent the 'task has been stuck for
+ * 120s' kernel hang check feature from firing when printing to a slow target
+ * such as dmesg over a serial port.
+ *
+- * TODO: Add compression prior to the ASCII85 encoding to shrink huge buffers down.
+- *
+ * @p: the printer object to output to
+ * @prefix: optional prefix to add to output string
++ * @suffix: optional suffix to add at the end. 0 disables it, so nothing
++ * extra is added to the output; this is useful when using multiple
++ * calls to dump data to @p
+ * @blob: the Binary Large OBject to dump out
+ * @offset: offset in bytes to skip from the front of the BLOB, must be a multiple of sizeof(u32)
+ * @size: the size in bytes of the BLOB, must be a multiple of sizeof(u32)
+ */
+-void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix, char suffix,
+ const void *blob, size_t offset, size_t size)
+ {
+ const u32 *blob32 = (const u32 *)blob;
+ char buff[ASCII85_BUFSZ], *line_buff;
+ size_t line_pos = 0;
+
+- /*
+- * Splitting blobs across multiple lines is not compatible with the mesa
+- * debug decoder tool. Note that even dropping the explicit '\n' below
+- * doesn't help because the GuC log is so big some underlying implementation
+- * still splits the lines at 512K characters. So just bail completely for
+- * the moment.
+- */
+- return;
+-
+ #define DMESG_MAX_LINE_LEN 800
+-#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
++ /* Always leave space for the suffix char and the \0 */
++#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "<suffix>\0" */
+
+ if (size & 3)
+ drm_printf(p, "Size not word aligned: %zu", size);
+@@ -404,7 +392,6 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+ line_pos += strlen(line_buff + line_pos);
+
+ if ((line_pos + MIN_SPACE) >= DMESG_MAX_LINE_LEN) {
+- line_buff[line_pos++] = '\n';
+ line_buff[line_pos++] = 0;
+
+ drm_puts(p, line_buff);
+@@ -416,10 +403,11 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+ }
+ }
+
++ if (suffix)
++ line_buff[line_pos++] = suffix;
++
+ if (line_pos) {
+- line_buff[line_pos++] = '\n';
+ line_buff[line_pos++] = 0;
+-
+ drm_puts(p, line_buff);
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.h b/drivers/gpu/drm/xe/xe_devcoredump.h
+index a4eebc285fc837..b231c8ad799f69 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.h
++++ b/drivers/gpu/drm/xe/xe_devcoredump.h
+@@ -26,7 +26,7 @@ static inline int xe_devcoredump_init(struct xe_device *xe)
+ }
+ #endif
+
+-void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix, char suffix,
+ const void *blob, size_t offset, size_t size);
+
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
+index be47780ec2a7e7..50851638003b9b 100644
+--- a/drivers/gpu/drm/xe/xe_guc_log.c
++++ b/drivers/gpu/drm/xe/xe_guc_log.c
+@@ -78,7 +78,7 @@ void xe_guc_log_print(struct xe_guc_log *log, struct drm_printer *p)
+
+ xe_map_memcpy_from(xe, copy, &log->bo->vmap, 0, size);
+
+- xe_print_blob_ascii85(p, "Log data", copy, 0, size);
++ xe_print_blob_ascii85(p, "Log data", '\n', copy, 0, size);
+
+ vfree(copy);
+ }
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index a4b47319ad8ead..bcdd168cdc6d79 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -432,6 +432,26 @@ static int asus_kbd_get_functions(struct hid_device *hdev,
+ return ret;
+ }
+
++static int asus_kbd_disable_oobe(struct hid_device *hdev)
++{
++ const u8 init[][6] = {
++ { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 },
++ { FEATURE_KBD_REPORT_ID, 0xBA, 0xC5, 0xC4 },
++ { FEATURE_KBD_REPORT_ID, 0xD0, 0x8F, 0x01 },
++ { FEATURE_KBD_REPORT_ID, 0xD0, 0x85, 0xFF }
++ };
++ int ret;
++
++ for (size_t i = 0; i < ARRAY_SIZE(init); i++) {
++ ret = asus_kbd_set_report(hdev, init[i], sizeof(init[i]));
++ if (ret < 0)
++ return ret;
++ }
++
++ hid_info(hdev, "Disabled OOBE for keyboard\n");
++ return 0;
++}
++
+ static void asus_schedule_work(struct asus_kbd_leds *led)
+ {
+ unsigned long flags;
+@@ -534,6 +554,12 @@ static int asus_kbd_register_leds(struct hid_device *hdev)
+ ret = asus_kbd_init(hdev, FEATURE_KBD_LED_REPORT_ID2);
+ if (ret < 0)
+ return ret;
++
++ if (dmi_match(DMI_PRODUCT_FAMILY, "ProArt P16")) {
++ ret = asus_kbd_disable_oobe(hdev);
++ if (ret < 0)
++ return ret;
++ }
+ } else {
+ /* Initialize keyboard */
+ ret = asus_kbd_init(hdev, FEATURE_KBD_REPORT_ID);
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e07d63db5e1f47..369414c92fccbe 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2308,6 +2308,11 @@ static const struct hid_device_id mt_devices[] = {
+ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY, USB_VENDOR_ID_SIS_TOUCH,
+ HID_ANY_ID) },
+
++ /* Hantick */
++ { .driver_data = MT_CLS_NSMU,
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288) },
++
+ /* Generic MT device */
+ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH, HID_ANY_ID, HID_ANY_ID) },
+
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 7bd86eef6ec761..4c94c03cb57396 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -730,23 +730,30 @@ static int sensor_hub_probe(struct hid_device *hdev,
+ return ret;
+ }
+
++static int sensor_hub_finalize_pending_fn(struct device *dev, void *data)
++{
++ struct hid_sensor_hub_device *hsdev = dev->platform_data;
++
++ if (hsdev->pending.status)
++ complete(&hsdev->pending.ready);
++
++ return 0;
++}
++
+ static void sensor_hub_remove(struct hid_device *hdev)
+ {
+ struct sensor_hub_data *data = hid_get_drvdata(hdev);
+ unsigned long flags;
+- int i;
+
+ hid_dbg(hdev, " hardware removed\n");
+ hid_hw_close(hdev);
+ hid_hw_stop(hdev);
++
+ spin_lock_irqsave(&data->lock, flags);
+- for (i = 0; i < data->hid_sensor_client_cnt; ++i) {
+- struct hid_sensor_hub_device *hsdev =
+- data->hid_sensor_hub_client_devs[i].platform_data;
+- if (hsdev->pending.status)
+- complete(&hsdev->pending.ready);
+- }
++ device_for_each_child(&hdev->dev, NULL,
++ sensor_hub_finalize_pending_fn);
+ spin_unlock_irqrestore(&data->lock, flags);
++
+ mfd_remove_devices(&hdev->dev);
+ mutex_destroy(&data->mutex);
+ }
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 5a599c90e7a2c7..c7033ffaba3919 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -4943,6 +4943,10 @@ static const struct wacom_features wacom_features_0x94 =
+ HID_DEVICE(BUS_I2C, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
+ .driver_data = (kernel_ulong_t)&wacom_features_##prod
+
++#define PCI_DEVICE_WACOM(prod) \
++ HID_DEVICE(BUS_PCI, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
++ .driver_data = (kernel_ulong_t)&wacom_features_##prod
++
+ #define USB_DEVICE_LENOVO(prod) \
+ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, prod), \
+ .driver_data = (kernel_ulong_t)&wacom_features_##prod
+@@ -5112,6 +5116,7 @@ const struct hid_device_id wacom_ids[] = {
+
+ { USB_DEVICE_WACOM(HID_ANY_ID) },
+ { I2C_DEVICE_WACOM(HID_ANY_ID) },
++ { PCI_DEVICE_WACOM(HID_ANY_ID) },
+ { BT_DEVICE_WACOM(HID_ANY_ID) },
+ { }
+ };
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 14ae0cfc325efb..d2499f302b5083 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -355,6 +355,25 @@ static const struct acpi_device_id i2c_acpi_force_400khz_device_ids[] = {
+ {}
+ };
+
++static const struct acpi_device_id i2c_acpi_force_100khz_device_ids[] = {
++ /*
++ * When a 400kHz frequency is used on this model of ELAN touchpad in Linux,
++ * excessive smoothing (similar to when the touchpad's firmware detects
++ * a noisy signal) is sometimes applied. As some devices' (e.g., Lenovo
++ * V15 G4) ACPI tables specify a 400kHz frequency for this device and
++ * some I2C busses (e.g., Designware I2C) default to a 400kHz frequency,
++ * force the speed to 100kHz as a workaround.
++ *
++ * For future investigation: This problem may be related to the default
++ * HCNT/LCNT values given by some busses' drivers, because they are not
++ * specified in the aforementioned devices' ACPI tables, and because
++ * the device works without issues on Windows at what is expected to be
++ * a 400kHz frequency. The root cause of the issue is not known.
++ */
++ { "ELAN06FA", 0 },
++ {}
++};
++
+ static acpi_status i2c_acpi_lookup_speed(acpi_handle handle, u32 level,
+ void *data, void **return_value)
+ {
+@@ -373,6 +392,9 @@ static acpi_status i2c_acpi_lookup_speed(acpi_handle handle, u32 level,
+ if (acpi_match_device_ids(adev, i2c_acpi_force_400khz_device_ids) == 0)
+ lookup->force_speed = I2C_MAX_FAST_MODE_FREQ;
+
++ if (acpi_match_device_ids(adev, i2c_acpi_force_100khz_device_ids) == 0)
++ lookup->force_speed = I2C_MAX_STANDARD_MODE_FREQ;
++
+ return AE_OK;
+ }
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 42310c9a00c2d1..53ab814b676ffd 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1919,7 +1919,7 @@ static int i3c_master_bus_init(struct i3c_master_controller *master)
+ goto err_bus_cleanup;
+
+ if (master->ops->set_speed) {
+- master->ops->set_speed(master, I3C_OPEN_DRAIN_NORMAL_SPEED);
++ ret = master->ops->set_speed(master, I3C_OPEN_DRAIN_NORMAL_SPEED);
+ if (ret)
+ goto err_bus_cleanup;
+ }
+diff --git a/drivers/iio/light/as73211.c b/drivers/iio/light/as73211.c
+index be0068081ebbbb..11fbdcdd26d656 100644
+--- a/drivers/iio/light/as73211.c
++++ b/drivers/iio/light/as73211.c
+@@ -177,6 +177,12 @@ struct as73211_data {
+ BIT(AS73211_SCAN_INDEX_TEMP) | \
+ AS73211_SCAN_MASK_COLOR)
+
++static const unsigned long as73211_scan_masks[] = {
++ AS73211_SCAN_MASK_COLOR,
++ AS73211_SCAN_MASK_ALL,
++ 0
++};
++
+ static const struct iio_chan_spec as73211_channels[] = {
+ {
+ .type = IIO_TEMP,
+@@ -672,9 +678,12 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
+
+ /* AS73211 starts reading at address 2 */
+ ret = i2c_master_recv(data->client,
+- (char *)&scan.chan[1], 3 * sizeof(scan.chan[1]));
++ (char *)&scan.chan[0], 3 * sizeof(scan.chan[0]));
+ if (ret < 0)
+ goto done;
++
++ /* Avoid pushing uninitialized data */
++ scan.chan[3] = 0;
+ }
+
+ if (data_result) {
+@@ -682,9 +691,15 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
+ * Saturate all channels (in case of overflows). Temperature channel
+ * is not affected by overflows.
+ */
+- scan.chan[1] = cpu_to_le16(U16_MAX);
+- scan.chan[2] = cpu_to_le16(U16_MAX);
+- scan.chan[3] = cpu_to_le16(U16_MAX);
++ if (*indio_dev->active_scan_mask == AS73211_SCAN_MASK_ALL) {
++ scan.chan[1] = cpu_to_le16(U16_MAX);
++ scan.chan[2] = cpu_to_le16(U16_MAX);
++ scan.chan[3] = cpu_to_le16(U16_MAX);
++ } else {
++ scan.chan[0] = cpu_to_le16(U16_MAX);
++ scan.chan[1] = cpu_to_le16(U16_MAX);
++ scan.chan[2] = cpu_to_le16(U16_MAX);
++ }
+ }
+
+ iio_push_to_buffers_with_timestamp(indio_dev, &scan, iio_get_time_ns(indio_dev));
+@@ -758,6 +773,7 @@ static int as73211_probe(struct i2c_client *client)
+ indio_dev->channels = data->spec_dev->channels;
+ indio_dev->num_channels = data->spec_dev->num_channels;
+ indio_dev->modes = INDIO_DIRECT_MODE;
++ indio_dev->available_scan_masks = as73211_scan_masks;
+
+ ret = i2c_smbus_read_byte_data(data->client, AS73211_REG_OSR);
+ if (ret < 0)
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 45d9dc9c6c8fda..bb02b6adbf2c21 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -2021,6 +2021,11 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+ struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
++ bool is_odp = is_odp_mr(mr);
++ int ret = 0;
++
++ if (is_odp)
++ mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+
+ if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
+ ent = mr->mmkey.cache_ent;
+@@ -2032,7 +2037,7 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ ent->tmp_cleanup_scheduled = true;
+ }
+ spin_unlock_irq(&ent->mkeys_queue.lock);
+- return 0;
++ goto out;
+ }
+
+ if (ent) {
+@@ -2041,7 +2046,15 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ mr->mmkey.cache_ent = NULL;
+ spin_unlock_irq(&ent->mkeys_queue.lock);
+ }
+- return destroy_mkey(dev, mr);
++ ret = destroy_mkey(dev, mr);
++out:
++ if (is_odp) {
++ if (!ret)
++ to_ib_umem_odp(mr->umem)->private = NULL;
++ mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex);
++ }
++
++ return ret;
+ }
+
+ static int __mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 64b441542cd5dd..1d3bf56157702d 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -282,6 +282,8 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
+ if (!umem_odp->npages)
+ goto out;
+ mr = umem_odp->private;
++ if (!mr)
++ goto out;
+
+ start = max_t(u64, ib_umem_start(umem_odp), range->start);
+ end = min_t(u64, ib_umem_end(umem_odp), range->end);
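[Editor's note: the two mlx5 hunks above cooperate: mlx5_revoke_mr() clears umem_odp->private under umem_mutex once the mkey is destroyed, and the invalidation path re-checks the pointer under the same lock. A hedged sketch of the general teardown pattern, with illustrative names rather than mlx5 API:]

    /* Teardown publishes NULL under the lock... */
    mutex_lock(&obj->lock);
    obj->private = NULL;            /* invalidation must no longer touch it */
    mutex_unlock(&obj->lock);

    /* ...and the notifier re-checks under the same lock before use. */
    mutex_lock(&obj->lock);
    if (!obj->private)
            goto out;               /* raced with teardown; nothing to do */
    use(obj->private);              /* use() is a placeholder */
    out:
    mutex_unlock(&obj->lock);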
+diff --git a/drivers/input/misc/nxp-bbnsm-pwrkey.c b/drivers/input/misc/nxp-bbnsm-pwrkey.c
+index eb4173f9c82044..7ba8d166d68c18 100644
+--- a/drivers/input/misc/nxp-bbnsm-pwrkey.c
++++ b/drivers/input/misc/nxp-bbnsm-pwrkey.c
+@@ -187,6 +187,12 @@ static int bbnsm_pwrkey_probe(struct platform_device *pdev)
+ return 0;
+ }
+
++static void bbnsm_pwrkey_remove(struct platform_device *pdev)
++{
++ dev_pm_clear_wake_irq(&pdev->dev);
++ device_init_wakeup(&pdev->dev, false);
++}
++
+ static int __maybe_unused bbnsm_pwrkey_suspend(struct device *dev)
+ {
+ struct platform_device *pdev = to_platform_device(dev);
+@@ -223,6 +229,8 @@ static struct platform_driver bbnsm_pwrkey_driver = {
+ .of_match_table = bbnsm_pwrkey_ids,
+ },
+ .probe = bbnsm_pwrkey_probe,
++ .remove = bbnsm_pwrkey_remove,
++
+ };
+ module_platform_driver(bbnsm_pwrkey_driver);
+
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f1a8f8c75cb0e9..6bf8ecbbe0c263 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -4616,7 +4616,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ /* Initialise in-memory data structures */
+ ret = arm_smmu_init_structures(smmu);
+ if (ret)
+- return ret;
++ goto err_free_iopf;
+
+ /* Record our private device structure */
+ platform_set_drvdata(pdev, smmu);
+@@ -4627,22 +4627,29 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ /* Reset the device */
+ ret = arm_smmu_device_reset(smmu);
+ if (ret)
+- return ret;
++ goto err_disable;
+
+ /* And we're up. Go go go! */
+ ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+ "smmu3.%pa", &ioaddr);
+ if (ret)
+- return ret;
++ goto err_disable;
+
+ ret = iommu_device_register(&smmu->iommu, &arm_smmu_ops, dev);
+ if (ret) {
+ dev_err(dev, "Failed to register iommu\n");
+- iommu_device_sysfs_remove(&smmu->iommu);
+- return ret;
++ goto err_free_sysfs;
+ }
+
+ return 0;
++
++err_free_sysfs:
++ iommu_device_sysfs_remove(&smmu->iommu);
++err_disable:
++ arm_smmu_device_disable(smmu);
++err_free_iopf:
++ iopf_queue_free(smmu->evtq.iopf);
++ return ret;
+ }
+
+ static void arm_smmu_device_remove(struct platform_device *pdev)
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index 6e41ddaa24d636..d525ab43a4aebf 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -79,7 +79,6 @@
+ #define TEGRA241_VCMDQ_PAGE1(q) (TEGRA241_VCMDQ_PAGE1_BASE + 0x80*(q))
+ #define VCMDQ_ADDR GENMASK(47, 5)
+ #define VCMDQ_LOG2SIZE GENMASK(4, 0)
+-#define VCMDQ_LOG2SIZE_MAX 19
+
+ #define TEGRA241_VCMDQ_BASE 0x00000
+ #define TEGRA241_VCMDQ_CONS_INDX_BASE 0x00008
+@@ -505,12 +504,15 @@ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+ struct arm_smmu_cmdq *cmdq = &vcmdq->cmdq;
+ struct arm_smmu_queue *q = &cmdq->q;
+ char name[16];
++ u32 regval;
+ int ret;
+
+ snprintf(name, 16, "vcmdq%u", vcmdq->idx);
+
+- /* Queue size, capped to ensure natural alignment */
+- q->llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, VCMDQ_LOG2SIZE_MAX);
++ /* Cap queue size to SMMU's IDR1.CMDQS and ensure natural alignment */
++ regval = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
++ q->llq.max_n_shift =
++ min_t(u32, CMDQ_MAX_SZ_SHIFT, FIELD_GET(IDR1_CMDQS, regval));
+
+ /* Use the common helper to init the VCMDQ, and then... */
+ ret = arm_smmu_init_one_queue(smmu, q, vcmdq->page0,
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 6372f3e25c4bc2..601fb878d0ef25 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -567,6 +567,7 @@ static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
+ { .compatible = "qcom,sc8180x-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ { .compatible = "qcom,sc8280xp-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ { .compatible = "qcom,sdm630-smmu-v2", .data = &qcom_smmu_v2_data },
++ { .compatible = "qcom,sdm670-smmu-v2", .data = &qcom_smmu_v2_data },
+ { .compatible = "qcom,sdm845-smmu-v2", .data = &qcom_smmu_v2_data },
+ { .compatible = "qcom,sdm845-smmu-500", .data = &sdm845_smmu_500_data },
+ { .compatible = "qcom,sm6115-smmu-500", .data = &qcom_smmu_500_impl0_data},
+diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
+index b8393a8c075396..95e2e99ab27241 100644
+--- a/drivers/iommu/iommufd/fault.c
++++ b/drivers/iommu/iommufd/fault.c
+@@ -98,15 +98,23 @@ static void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
+ {
+ struct iommufd_fault *fault = hwpt->fault;
+ struct iopf_group *group, *next;
++ struct list_head free_list;
+ unsigned long index;
+
+ if (!fault)
+ return;
++ INIT_LIST_HEAD(&free_list);
+
+ mutex_lock(&fault->mutex);
++ spin_lock(&fault->lock);
+ list_for_each_entry_safe(group, next, &fault->deliver, node) {
+ if (group->attach_handle != &handle->handle)
+ continue;
++ list_move(&group->node, &free_list);
++ }
++ spin_unlock(&fault->lock);
++
++ list_for_each_entry_safe(group, next, &free_list, node) {
+ list_del(&group->node);
+ iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
+ iopf_free_group(group);
+@@ -208,6 +216,7 @@ void iommufd_fault_destroy(struct iommufd_object *obj)
+ {
+ struct iommufd_fault *fault = container_of(obj, struct iommufd_fault, obj);
+ struct iopf_group *group, *next;
++ unsigned long index;
+
+ /*
+ * The iommufd object's reference count is zero at this point.
+@@ -220,6 +229,13 @@ void iommufd_fault_destroy(struct iommufd_object *obj)
+ iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
+ iopf_free_group(group);
+ }
++ xa_for_each(&fault->response, index, group) {
++ xa_erase(&fault->response, index);
++ iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
++ iopf_free_group(group);
++ }
++ xa_destroy(&fault->response);
++ mutex_destroy(&fault->mutex);
+ }
+
+ static void iommufd_compose_fault_message(struct iommu_fault *fault,
+@@ -242,7 +258,7 @@ static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
+ {
+ size_t fault_size = sizeof(struct iommu_hwpt_pgfault);
+ struct iommufd_fault *fault = filep->private_data;
+- struct iommu_hwpt_pgfault data;
++ struct iommu_hwpt_pgfault data = {};
+ struct iommufd_device *idev;
+ struct iopf_group *group;
+ struct iopf_fault *iopf;
+@@ -253,17 +269,19 @@ static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
+ return -ESPIPE;
+
+ mutex_lock(&fault->mutex);
+- while (!list_empty(&fault->deliver) && count > done) {
+- group = list_first_entry(&fault->deliver,
+- struct iopf_group, node);
+-
+- if (group->fault_count * fault_size > count - done)
++ while ((group = iommufd_fault_deliver_fetch(fault))) {
++ if (done >= count ||
++ group->fault_count * fault_size > count - done) {
++ iommufd_fault_deliver_restore(fault, group);
+ break;
++ }
+
+ rc = xa_alloc(&fault->response, &group->cookie, group,
+ xa_limit_32b, GFP_KERNEL);
+- if (rc)
++ if (rc) {
++ iommufd_fault_deliver_restore(fault, group);
+ break;
++ }
+
+ idev = to_iommufd_handle(group->attach_handle)->idev;
+ list_for_each_entry(iopf, &group->faults, list) {
+@@ -272,13 +290,12 @@ static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
+ group->cookie);
+ if (copy_to_user(buf + done, &data, fault_size)) {
+ xa_erase(&fault->response, group->cookie);
++ iommufd_fault_deliver_restore(fault, group);
+ rc = -EFAULT;
+ break;
+ }
+ done += fault_size;
+ }
+-
+- list_del(&group->node);
+ }
+ mutex_unlock(&fault->mutex);
+
+@@ -336,10 +353,10 @@ static __poll_t iommufd_fault_fops_poll(struct file *filep,
+ __poll_t pollflags = EPOLLOUT;
+
+ poll_wait(filep, &fault->wait_queue, wait);
+- mutex_lock(&fault->mutex);
++ spin_lock(&fault->lock);
+ if (!list_empty(&fault->deliver))
+ pollflags |= EPOLLIN | EPOLLRDNORM;
+- mutex_unlock(&fault->mutex);
++ spin_unlock(&fault->lock);
+
+ return pollflags;
+ }
+@@ -381,6 +398,7 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ INIT_LIST_HEAD(&fault->deliver);
+ xa_init_flags(&fault->response, XA_FLAGS_ALLOC1);
+ mutex_init(&fault->mutex);
++ spin_lock_init(&fault->lock);
+ init_waitqueue_head(&fault->wait_queue);
+
+ filep = anon_inode_getfile("[iommufd-pgfault]", &iommufd_fault_fops,
+@@ -429,9 +447,9 @@ int iommufd_fault_iopf_handler(struct iopf_group *group)
+ hwpt = group->attach_handle->domain->fault_data;
+ fault = hwpt->fault;
+
+- mutex_lock(&fault->mutex);
++ spin_lock(&fault->lock);
+ list_add_tail(&group->node, &fault->deliver);
+- mutex_unlock(&fault->mutex);
++ spin_unlock(&fault->lock);
+
+ wake_up_interruptible(&fault->wait_queue);
+
+diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
+index f1d865e6fab66a..c1f82cb6824256 100644
+--- a/drivers/iommu/iommufd/iommufd_private.h
++++ b/drivers/iommu/iommufd/iommufd_private.h
+@@ -462,14 +462,39 @@ struct iommufd_fault {
+ struct iommufd_ctx *ictx;
+ struct file *filep;
+
+- /* The lists of outstanding faults protected by below mutex. */
+- struct mutex mutex;
++ spinlock_t lock; /* protects the deliver list */
+ struct list_head deliver;
++ struct mutex mutex; /* serializes response flows */
+ struct xarray response;
+
+ struct wait_queue_head wait_queue;
+ };
+
++/* Fetch the first node out of the fault->deliver list */
++static inline struct iopf_group *
++iommufd_fault_deliver_fetch(struct iommufd_fault *fault)
++{
++ struct list_head *list = &fault->deliver;
++ struct iopf_group *group = NULL;
++
++ spin_lock(&fault->lock);
++ if (!list_empty(list)) {
++ group = list_first_entry(list, struct iopf_group, node);
++ list_del(&group->node);
++ }
++ spin_unlock(&fault->lock);
++ return group;
++}
++
++/* Restore a node back to the head of the fault->deliver list */
++static inline void iommufd_fault_deliver_restore(struct iommufd_fault *fault,
++ struct iopf_group *group)
++{
++ spin_lock(&fault->lock);
++ list_add(&group->node, &fault->deliver);
++ spin_unlock(&fault->lock);
++}
++
+ struct iommufd_attach_handle {
+ struct iommu_attach_handle handle;
+ struct iommufd_device *idev;
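[Editor's note: to make the intended usage of the two helpers above concrete, a consumer loop looks roughly like this; fetch pops one group under fault->lock, and restore pushes it back to the list head so a failed delivery neither loses nor reorders faults. room_left() and consume() are placeholders, not iommufd functions.]

    struct iopf_group *group;

    while ((group = iommufd_fault_deliver_fetch(fault))) {
            if (!room_left(buf)) {                          /* placeholder check */
                    iommufd_fault_deliver_restore(fault, group);
                    break;                                  /* retry on next read() */
            }
            consume(buf, group);                            /* placeholder delivery */
    }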
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 66ce15027f28d7..c1f30483600859 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -169,6 +169,7 @@ config IXP4XX_IRQ
+
+ config LAN966X_OIC
+ tristate "Microchip LAN966x OIC Support"
++ depends on MCHP_LAN966X_PCI || COMPILE_TEST
+ select GENERIC_IRQ_CHIP
+ select IRQ_DOMAIN
+ help
+diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
+index da5250f0155cfa..2b1684c60e3cac 100644
+--- a/drivers/irqchip/irq-apple-aic.c
++++ b/drivers/irqchip/irq-apple-aic.c
+@@ -577,7 +577,8 @@ static void __exception_irq_entry aic_handle_fiq(struct pt_regs *regs)
+ AIC_FIQ_HWIRQ(AIC_TMR_EL02_VIRT));
+ }
+
+- if (read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & PMCR0_IACT) {
++ if ((read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & (PMCR0_IMODE | PMCR0_IACT)) ==
++ (FIELD_PREP(PMCR0_IMODE, PMCR0_IMODE_FIQ) | PMCR0_IACT)) {
+ int irq;
+ if (cpumask_test_cpu(smp_processor_id(),
+ &aic_irqc->fiq_aff[AIC_CPU_PMU_P]->aff))
+diff --git a/drivers/irqchip/irq-mvebu-icu.c b/drivers/irqchip/irq-mvebu-icu.c
+index b337f6c05f184f..4eebed39880a5b 100644
+--- a/drivers/irqchip/irq-mvebu-icu.c
++++ b/drivers/irqchip/irq-mvebu-icu.c
+@@ -68,7 +68,8 @@ static int mvebu_icu_translate(struct irq_domain *d, struct irq_fwspec *fwspec,
+ unsigned long *hwirq, unsigned int *type)
+ {
+ unsigned int param_count = static_branch_unlikely(&legacy_bindings) ? 3 : 2;
+- struct mvebu_icu_msi_data *msi_data = d->host_data;
++ struct msi_domain_info *info = d->host_data;
++ struct mvebu_icu_msi_data *msi_data = info->chip_data;
+ struct mvebu_icu *icu = msi_data->icu;
+
+ /* Check the count of the parameters in dt */
+diff --git a/drivers/leds/leds-lp8860.c b/drivers/leds/leds-lp8860.c
+index 7a136fd8172061..06196d851ade71 100644
+--- a/drivers/leds/leds-lp8860.c
++++ b/drivers/leds/leds-lp8860.c
+@@ -265,7 +265,7 @@ static int lp8860_init(struct lp8860_led *led)
+ goto out;
+ }
+
+- reg_count = ARRAY_SIZE(lp8860_eeprom_disp_regs) / sizeof(lp8860_eeprom_disp_regs[0]);
++ reg_count = ARRAY_SIZE(lp8860_eeprom_disp_regs);
+ for (i = 0; i < reg_count; i++) {
+ ret = regmap_write(led->eeprom_regmap,
+ lp8860_eeprom_disp_regs[i].reg,
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index 19ef56cbcfd39d..46c921000a34cf 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -388,7 +388,6 @@ static void tegra_hsp_sm_recv32(struct tegra_hsp_channel *channel)
+ value = tegra_hsp_channel_readl(channel, HSP_SM_SHRD_MBOX);
+ value &= ~HSP_SM_SHRD_MBOX_FULL;
+ msg = (void *)(unsigned long)value;
+- mbox_chan_received_data(channel->chan, msg);
+
+ /*
+ * Need to clear all bits here since some producers, such as TCU, depend
+@@ -398,6 +397,8 @@ static void tegra_hsp_sm_recv32(struct tegra_hsp_channel *channel)
+ * explicitly, so we have to make sure we cover all possible cases.
+ */
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SM_SHRD_MBOX);
++
++ mbox_chan_received_data(channel->chan, msg);
+ }
+
+ static const struct tegra_hsp_sm_ops tegra_hsp_sm_32bit_ops = {
+@@ -433,7 +434,6 @@ static void tegra_hsp_sm_recv128(struct tegra_hsp_channel *channel)
+ value[3] = tegra_hsp_channel_readl(channel, HSP_SHRD_MBOX_TYPE1_DATA3);
+
+ msg = (void *)(unsigned long)value;
+- mbox_chan_received_data(channel->chan, msg);
+
+ /*
+ * Clear data registers and tag.
+@@ -443,6 +443,8 @@ static void tegra_hsp_sm_recv128(struct tegra_hsp_channel *channel)
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SHRD_MBOX_TYPE1_DATA2);
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SHRD_MBOX_TYPE1_DATA3);
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SHRD_MBOX_TYPE1_TAG);
++
++ mbox_chan_received_data(channel->chan, msg);
+ }
+
+ static const struct tegra_hsp_sm_ops tegra_hsp_sm_128bit_ops = {
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index 521d08b9ab47e3..d59fcb74b34794 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -905,7 +905,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+ struct device_node *nc, *np = pdev->dev.of_node;
+- struct zynqmp_ipi_pdata __percpu *pdata;
++ struct zynqmp_ipi_pdata *pdata;
+ struct of_phandle_args out_irq;
+ struct zynqmp_ipi_mbox *mbox;
+ int num_mboxes, ret = -EINVAL;
+diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
+index 1e9db8e4acdf65..0b1870a09e1fdc 100644
+--- a/drivers/md/Kconfig
++++ b/drivers/md/Kconfig
+@@ -61,6 +61,19 @@ config MD_BITMAP_FILE
+ various kernel APIs and can only work with files on a file system not
+ actually sitting on the MD device.
+
++config MD_LINEAR
++ tristate "Linear (append) mode"
++ depends on BLK_DEV_MD
++ help
++ If you say Y here, then your multiple devices driver will be able to
++ use the so-called linear mode, i.e. it will combine the hard disk
++ partitions by simply appending one to the other.
++
++ To compile this as a module, choose M here: the module
++ will be called linear.
++
++ If unsure, say Y.
++
+ config MD_RAID0
+ tristate "RAID-0 (striping) mode"
+ depends on BLK_DEV_MD
+diff --git a/drivers/md/Makefile b/drivers/md/Makefile
+index 476a214e4bdc26..87bdfc9fe14c55 100644
+--- a/drivers/md/Makefile
++++ b/drivers/md/Makefile
+@@ -29,12 +29,14 @@ dm-zoned-y += dm-zoned-target.o dm-zoned-metadata.o dm-zoned-reclaim.o
+
+ md-mod-y += md.o md-bitmap.o
+ raid456-y += raid5.o raid5-cache.o raid5-ppl.o
++linear-y += md-linear.o
+
+ # Note: link order is important. All raid personalities
+ # must come before md.o, as they each initialise
+ # themselves, and md.o may use the personalities when it
+ # is auto-initialised.
+
++obj-$(CONFIG_MD_LINEAR) += linear.o
+ obj-$(CONFIG_MD_RAID0) += raid0.o
+ obj-$(CONFIG_MD_RAID1) += raid1.o
+ obj-$(CONFIG_MD_RAID10) += raid10.o
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 1ae2c71bb383b7..78c975d7cd5f42 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -59,6 +59,7 @@ struct convert_context {
+ struct bio *bio_out;
+ struct bvec_iter iter_out;
+ atomic_t cc_pending;
++ unsigned int tag_offset;
+ u64 cc_sector;
+ union {
+ struct skcipher_request *req;
+@@ -1256,6 +1257,7 @@ static void crypt_convert_init(struct crypt_config *cc,
+ if (bio_out)
+ ctx->iter_out = bio_out->bi_iter;
+ ctx->cc_sector = sector + cc->iv_offset;
++ ctx->tag_offset = 0;
+ init_completion(&ctx->restart);
+ }
+
+@@ -1588,7 +1590,6 @@ static void crypt_free_req(struct crypt_config *cc, void *req, struct bio *base_
+ static blk_status_t crypt_convert(struct crypt_config *cc,
+ struct convert_context *ctx, bool atomic, bool reset_pending)
+ {
+- unsigned int tag_offset = 0;
+ unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT;
+ int r;
+
+@@ -1611,9 +1612,9 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ atomic_inc(&ctx->cc_pending);
+
+ if (crypt_integrity_aead(cc))
+- r = crypt_convert_block_aead(cc, ctx, ctx->r.req_aead, tag_offset);
++ r = crypt_convert_block_aead(cc, ctx, ctx->r.req_aead, ctx->tag_offset);
+ else
+- r = crypt_convert_block_skcipher(cc, ctx, ctx->r.req, tag_offset);
++ r = crypt_convert_block_skcipher(cc, ctx, ctx->r.req, ctx->tag_offset);
+
+ switch (r) {
+ /*
+@@ -1633,8 +1634,8 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ * exit and continue processing in a workqueue
+ */
+ ctx->r.req = NULL;
++ ctx->tag_offset++;
+ ctx->cc_sector += sector_step;
+- tag_offset++;
+ return BLK_STS_DEV_RESOURCE;
+ }
+ } else {
+@@ -1648,8 +1649,8 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ */
+ case -EINPROGRESS:
+ ctx->r.req = NULL;
++ ctx->tag_offset++;
+ ctx->cc_sector += sector_step;
+- tag_offset++;
+ continue;
+ /*
+ * The request was already processed (synchronously).
+@@ -1657,7 +1658,7 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ case 0:
+ atomic_dec(&ctx->cc_pending);
+ ctx->cc_sector += sector_step;
+- tag_offset++;
++ ctx->tag_offset++;
+ if (!atomic)
+ cond_resched();
+ continue;
+@@ -2092,7 +2093,6 @@ static void kcryptd_crypt_write_continue(struct work_struct *work)
+ struct crypt_config *cc = io->cc;
+ struct convert_context *ctx = &io->ctx;
+ int crypt_finished;
+- sector_t sector = io->sector;
+ blk_status_t r;
+
+ wait_for_completion(&ctx->restart);
+@@ -2109,10 +2109,8 @@ static void kcryptd_crypt_write_continue(struct work_struct *work)
+ }
+
+ /* Encryption was already finished, submit io now */
+- if (crypt_finished) {
++ if (crypt_finished)
+ kcryptd_crypt_write_io_submit(io, 0);
+- io->sector = sector;
+- }
+
+ crypt_dec_pending(io);
+ }
+@@ -2123,14 +2121,13 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ struct convert_context *ctx = &io->ctx;
+ struct bio *clone;
+ int crypt_finished;
+- sector_t sector = io->sector;
+ blk_status_t r;
+
+ /*
+ * Prevent io from disappearing until this function completes.
+ */
+ crypt_inc_pending(io);
+- crypt_convert_init(cc, ctx, NULL, io->base_bio, sector);
++ crypt_convert_init(cc, ctx, NULL, io->base_bio, io->sector);
+
+ clone = crypt_alloc_buffer(io, io->base_bio->bi_iter.bi_size);
+ if (unlikely(!clone)) {
+@@ -2147,8 +2144,6 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ io->ctx.iter_in = clone->bi_iter;
+ }
+
+- sector += bio_sectors(clone);
+-
+ crypt_inc_pending(io);
+ r = crypt_convert(cc, ctx,
+ test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true);
+@@ -2172,10 +2167,8 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ }
+
+ /* Encryption was already finished, submit io now */
+- if (crypt_finished) {
++ if (crypt_finished)
+ kcryptd_crypt_write_io_submit(io, 0);
+- io->sector = sector;
+- }
+
+ dec:
+ crypt_dec_pending(io);
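[Editor's note: the dm-crypt change above fixes a bug class worth spelling out: crypt_convert() can return early (-EINPROGRESS, BLK_STS_DEV_RESOURCE) and be re-entered from a workqueue, so the integrity-tag cursor must live in the per-request convert_context rather than on the stack, where re-entry would silently reset it to 0. A minimal sketch of the idea; need_requeue() is a stand-in for the early-return conditions.]

    struct ctx { unsigned int tag_offset; };       /* survives a requeue */

    static int convert(struct ctx *c, unsigned int blocks)
    {
            while (c->tag_offset < blocks) {
                    /* ...process one block at tag c->tag_offset... */
                    c->tag_offset++;
                    if (need_requeue())             /* stand-in for -EINPROGRESS */
                            return -EAGAIN;         /* caller re-enters, cursor kept */
            }
            return 0;
    }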
+diff --git a/drivers/md/md-autodetect.c b/drivers/md/md-autodetect.c
+index b2a00f213c2cd7..4b80165afd2331 100644
+--- a/drivers/md/md-autodetect.c
++++ b/drivers/md/md-autodetect.c
+@@ -49,6 +49,7 @@ static int md_setup_ents __initdata;
+ * instead of just one. -- KTK
+ * 18May2000: Added support for persistent-superblock arrays:
+ * md=n,0,factor,fault,device-list uses RAID0 for device n
++ * md=n,-1,factor,fault,device-list uses LINEAR for device n
+ * md=n,device-list reads a RAID superblock from the devices
+ * elements in device-list are read by name_to_kdev_t so can be
+ * a hex number or something like /dev/hda1 /dev/sdb
+@@ -87,7 +88,7 @@ static int __init md_setup(char *str)
+ md_setup_ents++;
+ switch (get_option(&str, &level)) { /* RAID level */
+ case 2: /* could be 0 or -1.. */
+- if (level == 0) {
++ if (level == 0 || level == LEVEL_LINEAR) {
+ if (get_option(&str, &factor) != 2 || /* Chunk Size */
+ get_option(&str, &fault) != 2) {
+ printk(KERN_WARNING "md: Too few arguments supplied to md=.\n");
+@@ -95,7 +96,10 @@ static int __init md_setup(char *str)
+ }
+ md_setup_args[ent].level = level;
+ md_setup_args[ent].chunk = 1 << (factor+12);
+- pername = "raid0";
++ if (level == LEVEL_LINEAR)
++ pername = "linear";
++ else
++ pername = "raid0";
+ break;
+ }
+ fallthrough;
+diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
+new file mode 100644
+index 00000000000000..369aed044b409f
+--- /dev/null
++++ b/drivers/md/md-linear.c
+@@ -0,0 +1,352 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * linear.c : Multiple Devices driver for Linux
++ * Copyright (C) 1994-96 Marc ZYNGIER <zyngier@ufr-info-p7.ibp.fr> or <maz@gloups.fdn.fr>
++ */
++
++#include <linux/blkdev.h>
++#include <linux/raid/md_u.h>
++#include <linux/seq_file.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <trace/events/block.h>
++#include "md.h"
++
++struct dev_info {
++ struct md_rdev *rdev;
++ sector_t end_sector;
++};
++
++struct linear_conf {
++ struct rcu_head rcu;
++ sector_t array_sectors;
++ /* a copy of mddev->raid_disks */
++ int raid_disks;
++ struct dev_info disks[] __counted_by(raid_disks);
++};
++
++/*
++ * find which device holds a particular offset
++ */
++static inline struct dev_info *which_dev(struct mddev *mddev, sector_t sector)
++{
++ int lo, mid, hi;
++ struct linear_conf *conf;
++
++ lo = 0;
++ hi = mddev->raid_disks - 1;
++ conf = mddev->private;
++
++ /*
++ * Binary Search
++ */
++
++ while (hi > lo) {
++
++ mid = (hi + lo) / 2;
++ if (sector < conf->disks[mid].end_sector)
++ hi = mid;
++ else
++ lo = mid + 1;
++ }
++
++ return conf->disks + lo;
++}
++
++static sector_t linear_size(struct mddev *mddev, sector_t sectors, int raid_disks)
++{
++ struct linear_conf *conf;
++ sector_t array_sectors;
++
++ conf = mddev->private;
++ WARN_ONCE(sectors || raid_disks,
++ "%s does not support generic reshape\n", __func__);
++ array_sectors = conf->array_sectors;
++
++ return array_sectors;
++}
++
++static int linear_set_limits(struct mddev *mddev)
++{
++ struct queue_limits lim;
++ int err;
++
++ md_init_stacking_limits(&lim);
++ lim.max_hw_sectors = mddev->chunk_sectors;
++ lim.max_write_zeroes_sectors = mddev->chunk_sectors;
++ lim.io_min = mddev->chunk_sectors << 9;
++ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
++ if (err)
++ return err;
++
++ return queue_limits_set(mddev->gendisk->queue, &lim);
++}
++
++static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
++{
++ struct linear_conf *conf;
++ struct md_rdev *rdev;
++ int ret = -EINVAL;
++ int cnt;
++ int i;
++
++ conf = kzalloc(struct_size(conf, disks, raid_disks), GFP_KERNEL);
++ if (!conf)
++ return ERR_PTR(-ENOMEM);
++
++ /*
++ * conf->raid_disks is a copy of mddev->raid_disks. The copy is
++ * kept because mddev->raid_disks may not match the number of
++ * entries in conf->disks[] while it is being updated in
++ * linear_add() and the old conf->disks[] array is still being
++ * iterated in linear_congested(). conf->raid_disks always
++ * matches the number of entries in the conf->disks[] array, and
++ * mddev->private is updated with rcu_assign_pointer() in
++ * linear_add(), so the race is avoided.
++ */
++ conf->raid_disks = raid_disks;
++
++ cnt = 0;
++ conf->array_sectors = 0;
++
++ rdev_for_each(rdev, mddev) {
++ int j = rdev->raid_disk;
++ struct dev_info *disk = conf->disks + j;
++ sector_t sectors;
++
++ if (j < 0 || j >= raid_disks || disk->rdev) {
++ pr_warn("md/linear:%s: disk numbering problem. Aborting!\n",
++ mdname(mddev));
++ goto out;
++ }
++
++ disk->rdev = rdev;
++ if (mddev->chunk_sectors) {
++ sectors = rdev->sectors;
++ sector_div(sectors, mddev->chunk_sectors);
++ rdev->sectors = sectors * mddev->chunk_sectors;
++ }
++
++ conf->array_sectors += rdev->sectors;
++ cnt++;
++ }
++ if (cnt != raid_disks) {
++ pr_warn("md/linear:%s: not enough drives present. Aborting!\n",
++ mdname(mddev));
++ goto out;
++ }
++
++ /*
++ * Here we calculate the device offsets.
++ */
++ conf->disks[0].end_sector = conf->disks[0].rdev->sectors;
++
++ for (i = 1; i < raid_disks; i++)
++ conf->disks[i].end_sector =
++ conf->disks[i-1].end_sector +
++ conf->disks[i].rdev->sectors;
++
++ if (!mddev_is_dm(mddev)) {
++ ret = linear_set_limits(mddev);
++ if (ret)
++ goto out;
++ }
++
++ return conf;
++
++out:
++ kfree(conf);
++ return ERR_PTR(ret);
++}
++
++static int linear_run(struct mddev *mddev)
++{
++ struct linear_conf *conf;
++ int ret;
++
++ if (md_check_no_bitmap(mddev))
++ return -EINVAL;
++
++ conf = linear_conf(mddev, mddev->raid_disks);
++ if (IS_ERR(conf))
++ return PTR_ERR(conf);
++
++ mddev->private = conf;
++ md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
++
++ ret = md_integrity_register(mddev);
++ if (ret) {
++ kfree(conf);
++ mddev->private = NULL;
++ }
++ return ret;
++}
++
++static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
++{
++ /* Adding a drive to a linear array allows the array to grow.
++ * It is permitted if the new drive has a matching superblock
++ * already on it, with raid_disk equal to raid_disks.
++ * It is achieved by creating a new linear_private_data structure
++ * and swapping it in place of the current one.
++ * The current one is never freed until the array is stopped.
++ * This avoids races.
++ */
++ struct linear_conf *newconf, *oldconf;
++
++ if (rdev->saved_raid_disk != mddev->raid_disks)
++ return -EINVAL;
++
++ rdev->raid_disk = rdev->saved_raid_disk;
++ rdev->saved_raid_disk = -1;
++
++ newconf = linear_conf(mddev, mddev->raid_disks + 1);
++ if (IS_ERR(newconf))
++ return PTR_ERR(newconf);
++
++ /* newconf->raid_disks already keeps a copy of the increased
++ * value of mddev->raid_disks; WARN_ONCE() is just used to make
++ * sure of this. It is possible that oldconf is still referenced
++ * in linear_congested(), therefore kfree_rcu() is used to free
++ * oldconf only after no one references it anymore.
++ */
++ oldconf = rcu_dereference_protected(mddev->private,
++ lockdep_is_held(&mddev->reconfig_mutex));
++ mddev->raid_disks++;
++ WARN_ONCE(mddev->raid_disks != newconf->raid_disks,
++ "copied raid_disks doesn't match mddev->raid_disks");
++ rcu_assign_pointer(mddev->private, newconf);
++ md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
++ set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
++ kfree_rcu(oldconf, rcu);
++ return 0;
++}
++
++static void linear_free(struct mddev *mddev, void *priv)
++{
++ struct linear_conf *conf = priv;
++
++ kfree(conf);
++}
++
++static bool linear_make_request(struct mddev *mddev, struct bio *bio)
++{
++ struct dev_info *tmp_dev;
++ sector_t start_sector, end_sector, data_offset;
++ sector_t bio_sector = bio->bi_iter.bi_sector;
++
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
++ return true;
++
++ tmp_dev = which_dev(mddev, bio_sector);
++ start_sector = tmp_dev->end_sector - tmp_dev->rdev->sectors;
++ end_sector = tmp_dev->end_sector;
++ data_offset = tmp_dev->rdev->data_offset;
++
++ if (unlikely(bio_sector >= end_sector ||
++ bio_sector < start_sector))
++ goto out_of_bounds;
++
++ if (unlikely(is_rdev_broken(tmp_dev->rdev))) {
++ md_error(mddev, tmp_dev->rdev);
++ bio_io_error(bio);
++ return true;
++ }
++
++ if (unlikely(bio_end_sector(bio) > end_sector)) {
++ /* This bio crosses a device boundary, so we have to split it */
++ struct bio *split = bio_split(bio, end_sector - bio_sector,
++ GFP_NOIO, &mddev->bio_set);
++
++ if (IS_ERR(split)) {
++ bio->bi_status = errno_to_blk_status(PTR_ERR(split));
++ bio_endio(bio);
++ return true;
++ }
++
++ bio_chain(split, bio);
++ submit_bio_noacct(bio);
++ bio = split;
++ }
++
++ md_account_bio(mddev, &bio);
++ bio_set_dev(bio, tmp_dev->rdev->bdev);
++ bio->bi_iter.bi_sector = bio->bi_iter.bi_sector -
++ start_sector + data_offset;
++
++ if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
++ !bdev_max_discard_sectors(bio->bi_bdev))) {
++ /* Just ignore it */
++ bio_endio(bio);
++ } else {
++ if (mddev->gendisk)
++ trace_block_bio_remap(bio, disk_devt(mddev->gendisk),
++ bio_sector);
++ mddev_check_write_zeroes(mddev, bio);
++ submit_bio_noacct(bio);
++ }
++ return true;
++
++out_of_bounds:
++ pr_err("md/linear:%s: make_request: Sector %llu out of bounds on dev %pg: %llu sectors, offset %llu\n",
++ mdname(mddev),
++ (unsigned long long)bio->bi_iter.bi_sector,
++ tmp_dev->rdev->bdev,
++ (unsigned long long)tmp_dev->rdev->sectors,
++ (unsigned long long)start_sector);
++ bio_io_error(bio);
++ return true;
++}
++
++static void linear_status(struct seq_file *seq, struct mddev *mddev)
++{
++ seq_printf(seq, " %dk rounding", mddev->chunk_sectors / 2);
++}
++
++static void linear_error(struct mddev *mddev, struct md_rdev *rdev)
++{
++ if (!test_and_set_bit(MD_BROKEN, &mddev->flags)) {
++ char *md_name = mdname(mddev);
++
++ pr_crit("md/linear%s: Disk failure on %pg detected, failing array.\n",
++ md_name, rdev->bdev);
++ }
++}
++
++static void linear_quiesce(struct mddev *mddev, int state)
++{
++}
++
++static struct md_personality linear_personality = {
++ .name = "linear",
++ .level = LEVEL_LINEAR,
++ .owner = THIS_MODULE,
++ .make_request = linear_make_request,
++ .run = linear_run,
++ .free = linear_free,
++ .status = linear_status,
++ .hot_add_disk = linear_add,
++ .size = linear_size,
++ .quiesce = linear_quiesce,
++ .error_handler = linear_error,
++};
++
++static int __init linear_init(void)
++{
++ return register_md_personality(&linear_personality);
++}
++
++static void linear_exit(void)
++{
++ unregister_md_personality(&linear_personality);
++}
++
++module_init(linear_init);
++module_exit(linear_exit);
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Linear device concatenation personality for MD (deprecated)");
++MODULE_ALIAS("md-personality-1"); /* LINEAR - deprecated*/
++MODULE_ALIAS("md-linear");
++MODULE_ALIAS("md-level--1");
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 44c4c518430d9b..fff28aea23c89e 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8124,7 +8124,7 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
+ return;
+ mddev->pers->error_handler(mddev, rdev);
+
+- if (mddev->pers->level == 0)
++ if (mddev->pers->level == 0 || mddev->pers->level == LEVEL_LINEAR)
+ return;
+
+ if (mddev->degraded && !test_bit(MD_BROKEN, &mddev->flags))
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index e1ae0f9fad4326..cb21df46bab169 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -3566,15 +3566,15 @@ static int ccs_probe(struct i2c_client *client)
+ out_cleanup:
+ ccs_cleanup(sensor);
+
++out_free_ccs_limits:
++ kfree(sensor->ccs_limits);
++
+ out_release_mdata:
+ kvfree(sensor->mdata.backing);
+
+ out_release_sdata:
+ kvfree(sensor->sdata.backing);
+
+-out_free_ccs_limits:
+- kfree(sensor->ccs_limits);
+-
+ out_power_off:
+ ccs_power_off(&client->dev);
+ mutex_destroy(&sensor->mutex);
+diff --git a/drivers/media/i2c/ccs/ccs-data.c b/drivers/media/i2c/ccs/ccs-data.c
+index 08400edf77ced1..2591dba51e17e2 100644
+--- a/drivers/media/i2c/ccs/ccs-data.c
++++ b/drivers/media/i2c/ccs/ccs-data.c
+@@ -10,6 +10,7 @@
+ #include <linux/limits.h>
+ #include <linux/mm.h>
+ #include <linux/slab.h>
++#include <linux/string.h>
+
+ #include "ccs-data-defs.h"
+
+@@ -97,7 +98,7 @@ ccs_data_parse_length_specifier(const struct __ccs_data_length_specifier *__len,
+ plen = ((size_t)
+ (__len3->length[0] &
+ ((1 << CCS_DATA_LENGTH_SPECIFIER_SIZE_SHIFT) - 1))
+- << 16) + (__len3->length[0] << 8) + __len3->length[1];
++ << 16) + (__len3->length[1] << 8) + __len3->length[2];
+ break;
+ }
+ default:
+@@ -948,15 +949,15 @@ int ccs_data_parse(struct ccs_data_container *ccsdata, const void *data,
+
+ rval = __ccs_data_parse(&bin, ccsdata, data, len, dev, verbose);
+ if (rval)
+- return rval;
++ goto out_cleanup;
+
+ rval = bin_backing_alloc(&bin);
+ if (rval)
+- return rval;
++ goto out_cleanup;
+
+ rval = __ccs_data_parse(&bin, ccsdata, data, len, dev, false);
+ if (rval)
+- goto out_free;
++ goto out_cleanup;
+
+ if (verbose && ccsdata->version)
+ print_ccs_data_version(dev, ccsdata->version);
+@@ -965,15 +966,16 @@ int ccs_data_parse(struct ccs_data_container *ccsdata, const void *data,
+ rval = -EPROTO;
+ dev_dbg(dev, "parsing mismatch; base %p; now %p; end %p\n",
+ bin.base, bin.now, bin.end);
+- goto out_free;
++ goto out_cleanup;
+ }
+
+ ccsdata->backing = bin.base;
+
+ return 0;
+
+-out_free:
++out_cleanup:
+ kvfree(bin.base);
++ memset(ccsdata, 0, sizeof(*ccsdata));
+
+ return rval;
+ }
+diff --git a/drivers/media/i2c/ds90ub913.c b/drivers/media/i2c/ds90ub913.c
+index 8eed4a200fd89b..b5375d73662996 100644
+--- a/drivers/media/i2c/ds90ub913.c
++++ b/drivers/media/i2c/ds90ub913.c
+@@ -793,7 +793,6 @@ static void ub913_subdev_uninit(struct ub913_data *priv)
+ v4l2_async_unregister_subdev(&priv->sd);
+ ub913_v4l2_nf_unregister(priv);
+ v4l2_subdev_cleanup(&priv->sd);
+- fwnode_handle_put(priv->sd.fwnode);
+ media_entity_cleanup(&priv->sd.entity);
+ }
+
+diff --git a/drivers/media/i2c/ds90ub953.c b/drivers/media/i2c/ds90ub953.c
+index 16f88db1498162..10daecf6f45798 100644
+--- a/drivers/media/i2c/ds90ub953.c
++++ b/drivers/media/i2c/ds90ub953.c
+@@ -1291,7 +1291,6 @@ static void ub953_subdev_uninit(struct ub953_data *priv)
+ v4l2_async_unregister_subdev(&priv->sd);
+ ub953_v4l2_notifier_unregister(priv);
+ v4l2_subdev_cleanup(&priv->sd);
+- fwnode_handle_put(priv->sd.fwnode);
+ media_entity_cleanup(&priv->sd.entity);
+ }
+
+diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c
+index 58424d8f72af03..432457a761b116 100644
+--- a/drivers/media/i2c/ds90ub960.c
++++ b/drivers/media/i2c/ds90ub960.c
+@@ -352,6 +352,8 @@
+
+ #define UB960_SR_I2C_RX_ID(n) (0xf8 + (n)) /* < UB960_FPD_RX_NPORTS */
+
++#define UB9702_SR_REFCLK_FREQ 0x3d
++
+ /* Indirect register blocks */
+ #define UB960_IND_TARGET_PAT_GEN 0x00
+ #define UB960_IND_TARGET_RX_ANA(n) (0x01 + (n))
+@@ -1575,16 +1577,24 @@ static int ub960_rxport_wait_locks(struct ub960_data *priv,
+
+ ub960_rxport_read16(priv, nport, UB960_RR_RX_FREQ_HIGH, &v);
+
+- ret = ub960_rxport_get_strobe_pos(priv, nport, &strobe_pos);
+- if (ret)
+- return ret;
++ if (priv->hw_data->is_ub9702) {
++ dev_dbg(dev, "\trx%u: locked, freq %llu Hz\n",
++ nport, (v * 1000000ULL) >> 8);
++ } else {
++ ret = ub960_rxport_get_strobe_pos(priv, nport,
++ &strobe_pos);
++ if (ret)
++ return ret;
+
+- ret = ub960_rxport_get_eq_level(priv, nport, &eq_level);
+- if (ret)
+- return ret;
++ ret = ub960_rxport_get_eq_level(priv, nport, &eq_level);
++ if (ret)
++ return ret;
+
+- dev_dbg(dev, "\trx%u: locked, SP: %d, EQ: %u, freq %llu Hz\n",
+- nport, strobe_pos, eq_level, (v * 1000000ULL) >> 8);
++ dev_dbg(dev,
++ "\trx%u: locked, SP: %d, EQ: %u, freq %llu Hz\n",
++ nport, strobe_pos, eq_level,
++ (v * 1000000ULL) >> 8);
++ }
+ }
+
+ return 0;
+@@ -2523,7 +2533,7 @@ static int ub960_configure_ports_for_streaming(struct ub960_data *priv,
+ for (i = 0; i < 8; i++)
+ ub960_rxport_write(priv, nport,
+ UB960_RR_VC_ID_MAP(i),
+- nport);
++ (nport << 4) | nport);
+ }
+
+ break;
+@@ -2940,6 +2950,54 @@ static const struct v4l2_subdev_pad_ops ub960_pad_ops = {
+ .set_fmt = ub960_set_fmt,
+ };
+
++static void ub960_log_status_ub960_sp_eq(struct ub960_data *priv,
++ unsigned int nport)
++{
++ struct device *dev = &priv->client->dev;
++ u8 eq_level;
++ s8 strobe_pos;
++ u8 v = 0;
++
++ /* Strobe */
++
++ ub960_read(priv, UB960_XR_AEQ_CTL1, &v);
++
++ dev_info(dev, "\t%s strobe\n",
++ (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) ? "Adaptive" :
++ "Manual");
++
++ if (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) {
++ ub960_read(priv, UB960_XR_SFILTER_CFG, &v);
++
++ dev_info(dev, "\tStrobe range [%d, %d]\n",
++ ((v >> UB960_XR_SFILTER_CFG_SFILTER_MIN_SHIFT) & 0xf) - 7,
++ ((v >> UB960_XR_SFILTER_CFG_SFILTER_MAX_SHIFT) & 0xf) - 7);
++ }
++
++ ub960_rxport_get_strobe_pos(priv, nport, &strobe_pos);
++
++ dev_info(dev, "\tStrobe pos %d\n", strobe_pos);
++
++ /* EQ */
++
++ ub960_rxport_read(priv, nport, UB960_RR_AEQ_BYPASS, &v);
++
++ dev_info(dev, "\t%s EQ\n",
++ (v & UB960_RR_AEQ_BYPASS_ENABLE) ? "Manual" :
++ "Adaptive");
++
++ if (!(v & UB960_RR_AEQ_BYPASS_ENABLE)) {
++ ub960_rxport_read(priv, nport, UB960_RR_AEQ_MIN_MAX, &v);
++
++ dev_info(dev, "\tEQ range [%u, %u]\n",
++ (v >> UB960_RR_AEQ_MIN_MAX_AEQ_FLOOR_SHIFT) & 0xf,
++ (v >> UB960_RR_AEQ_MIN_MAX_AEQ_MAX_SHIFT) & 0xf);
++ }
++
++ if (ub960_rxport_get_eq_level(priv, nport, &eq_level) == 0)
++ dev_info(dev, "\tEQ level %u\n", eq_level);
++}
++
+ static int ub960_log_status(struct v4l2_subdev *sd)
+ {
+ struct ub960_data *priv = sd_to_ub960(sd);
+@@ -2987,8 +3045,6 @@ static int ub960_log_status(struct v4l2_subdev *sd)
+
+ for (nport = 0; nport < priv->hw_data->num_rxports; nport++) {
+ struct ub960_rxport *rxport = priv->rxports[nport];
+- u8 eq_level;
+- s8 strobe_pos;
+ unsigned int i;
+
+ dev_info(dev, "RX %u\n", nport);
+@@ -3024,44 +3080,8 @@ static int ub960_log_status(struct v4l2_subdev *sd)
+ ub960_rxport_read(priv, nport, UB960_RR_CSI_ERR_COUNTER, &v);
+ dev_info(dev, "\tcsi_err_counter %u\n", v);
+
+- /* Strobe */
+-
+- ub960_read(priv, UB960_XR_AEQ_CTL1, &v);
+-
+- dev_info(dev, "\t%s strobe\n",
+- (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) ? "Adaptive" :
+- "Manual");
+-
+- if (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) {
+- ub960_read(priv, UB960_XR_SFILTER_CFG, &v);
+-
+- dev_info(dev, "\tStrobe range [%d, %d]\n",
+- ((v >> UB960_XR_SFILTER_CFG_SFILTER_MIN_SHIFT) & 0xf) - 7,
+- ((v >> UB960_XR_SFILTER_CFG_SFILTER_MAX_SHIFT) & 0xf) - 7);
+- }
+-
+- ub960_rxport_get_strobe_pos(priv, nport, &strobe_pos);
+-
+- dev_info(dev, "\tStrobe pos %d\n", strobe_pos);
+-
+- /* EQ */
+-
+- ub960_rxport_read(priv, nport, UB960_RR_AEQ_BYPASS, &v);
+-
+- dev_info(dev, "\t%s EQ\n",
+- (v & UB960_RR_AEQ_BYPASS_ENABLE) ? "Manual" :
+- "Adaptive");
+-
+- if (!(v & UB960_RR_AEQ_BYPASS_ENABLE)) {
+- ub960_rxport_read(priv, nport, UB960_RR_AEQ_MIN_MAX, &v);
+-
+- dev_info(dev, "\tEQ range [%u, %u]\n",
+- (v >> UB960_RR_AEQ_MIN_MAX_AEQ_FLOOR_SHIFT) & 0xf,
+- (v >> UB960_RR_AEQ_MIN_MAX_AEQ_MAX_SHIFT) & 0xf);
+- }
+-
+- if (ub960_rxport_get_eq_level(priv, nport, &eq_level) == 0)
+- dev_info(dev, "\tEQ level %u\n", eq_level);
++ if (!priv->hw_data->is_ub9702)
++ ub960_log_status_ub960_sp_eq(priv, nport);
+
+ /* GPIOs */
+ for (i = 0; i < UB960_NUM_BC_GPIOS; i++) {
+@@ -3837,7 +3857,10 @@ static int ub960_enable_core_hw(struct ub960_data *priv)
+ if (ret)
+ goto err_pd_gpio;
+
+- ret = ub960_read(priv, UB960_XR_REFCLK_FREQ, &refclk_freq);
++ if (priv->hw_data->is_ub9702)
++ ret = ub960_read(priv, UB9702_SR_REFCLK_FREQ, &refclk_freq);
++ else
++ ret = ub960_read(priv, UB960_XR_REFCLK_FREQ, &refclk_freq);
+ if (ret)
+ goto err_pd_gpio;
+
+diff --git a/drivers/media/i2c/imx296.c b/drivers/media/i2c/imx296.c
+index 83149fa729c424..f3bec16b527c44 100644
+--- a/drivers/media/i2c/imx296.c
++++ b/drivers/media/i2c/imx296.c
+@@ -954,6 +954,8 @@ static int imx296_identify_model(struct imx296 *sensor)
+ return ret;
+ }
+
++ usleep_range(2000, 5000);
++
+ ret = imx296_read(sensor, IMX296_SENSOR_INFO);
+ if (ret < 0) {
+ dev_err(sensor->dev, "failed to read sensor information (%d)\n",
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index c1d3fce4a7d383..8566bc2edde978 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1982,6 +1982,7 @@ static int ov5640_get_light_freq(struct ov5640_dev *sensor)
+ light_freq = 50;
+ } else {
+ /* 60Hz */
++ light_freq = 60;
+ }
+ }
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys.c b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+index c85e056cb904b2..17bc8cabcbdb59 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+@@ -1133,6 +1133,7 @@ static int isys_probe(struct auxiliary_device *auxdev,
+ free_fw_msg_bufs:
+ free_fw_msg_bufs(isys);
+ out_remove_pkg_dir_shared_buffer:
++ cpu_latency_qos_remove_request(&isys->pm_qos);
+ if (!isp->secure_mode)
+ ipu6_cpd_free_pkg_dir(adev);
+ remove_shared_buffer:
+diff --git a/drivers/media/platform/marvell/mmp-driver.c b/drivers/media/platform/marvell/mmp-driver.c
+index ff9d151121d5eb..4fa171d469cabc 100644
+--- a/drivers/media/platform/marvell/mmp-driver.c
++++ b/drivers/media/platform/marvell/mmp-driver.c
+@@ -231,13 +231,23 @@ static int mmpcam_probe(struct platform_device *pdev)
+
+ mcam_init_clk(mcam);
+
++ /*
++ * Register with V4L.
++ */
++
++ ret = v4l2_device_register(mcam->dev, &mcam->v4l2_dev);
++ if (ret)
++ return ret;
++
+ /*
+ * Create a match of the sensor against its OF node.
+ */
+ ep = fwnode_graph_get_next_endpoint(of_fwnode_handle(pdev->dev.of_node),
+ NULL);
+- if (!ep)
+- return -ENODEV;
++ if (!ep) {
++ ret = -ENODEV;
++ goto out_v4l2_device_unregister;
++ }
+
+ v4l2_async_nf_init(&mcam->notifier, &mcam->v4l2_dev);
+
+@@ -246,7 +256,7 @@ static int mmpcam_probe(struct platform_device *pdev)
+ fwnode_handle_put(ep);
+ if (IS_ERR(asd)) {
+ ret = PTR_ERR(asd);
+- goto out;
++ goto out_v4l2_device_unregister;
+ }
+
+ /*
+@@ -254,7 +264,7 @@ static int mmpcam_probe(struct platform_device *pdev)
+ */
+ ret = mccic_register(mcam);
+ if (ret)
+- goto out;
++ goto out_v4l2_device_unregister;
+
+ /*
+ * Add OF clock provider.
+@@ -283,6 +293,8 @@ static int mmpcam_probe(struct platform_device *pdev)
+ return 0;
+ out:
+ mccic_shutdown(mcam);
++out_v4l2_device_unregister:
++ v4l2_device_unregister(&mcam->v4l2_dev);
+
+ return ret;
+ }
+@@ -293,6 +305,7 @@ static void mmpcam_remove(struct platform_device *pdev)
+ struct mcam_camera *mcam = &cam->mcam;
+
+ mccic_shutdown(mcam);
++ v4l2_device_unregister(&mcam->v4l2_dev);
+ pm_runtime_force_suspend(mcam->dev);
+ }
+
+diff --git a/drivers/media/platform/nuvoton/npcm-video.c b/drivers/media/platform/nuvoton/npcm-video.c
+index 60fbb91400355c..db454c9d2641f8 100644
+--- a/drivers/media/platform/nuvoton/npcm-video.c
++++ b/drivers/media/platform/nuvoton/npcm-video.c
+@@ -1667,9 +1667,9 @@ static int npcm_video_ece_init(struct npcm_video *video)
+ dev_info(dev, "Support HEXTILE pixel format\n");
+
+ ece_pdev = of_find_device_by_node(ece_node);
+- if (IS_ERR(ece_pdev)) {
++ if (!ece_pdev) {
+ dev_err(dev, "Failed to find ECE device\n");
+- return PTR_ERR(ece_pdev);
++ return -ENODEV;
+ }
+ of_node_put(ece_node);
+
+diff --git a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
+index 9f768f011fa25a..0f6918f4db383f 100644
+--- a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
++++ b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
+@@ -893,7 +893,7 @@ struct dcmipp_ent_device *dcmipp_bytecap_ent_init(struct device *dev,
+ q->dev = dev;
+
+ /* DCMIPP requires 16 bytes aligned buffers */
+- ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32) & ~0x0f);
++ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(dev, "Failed to set DMA mask\n");
+ goto err_mutex_destroy;
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 4fe26e82e3d1c1..4837d8df9c0386 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1579,6 +1579,40 @@ static void uvc_ctrl_send_slave_event(struct uvc_video_chain *chain,
+ uvc_ctrl_send_event(chain, handle, ctrl, mapping, val, changes);
+ }
+
++static void uvc_ctrl_set_handle(struct uvc_fh *handle, struct uvc_control *ctrl,
++ struct uvc_fh *new_handle)
++{
++ lockdep_assert_held(&handle->chain->ctrl_mutex);
++
++ if (new_handle) {
++ if (ctrl->handle)
++ dev_warn_ratelimited(&handle->stream->dev->udev->dev,
++ "UVC non compliance: Setting an async control with a pending operation.");
++
++ if (new_handle == ctrl->handle)
++ return;
++
++ if (ctrl->handle) {
++ WARN_ON(!ctrl->handle->pending_async_ctrls);
++ if (ctrl->handle->pending_async_ctrls)
++ ctrl->handle->pending_async_ctrls--;
++ }
++
++ ctrl->handle = new_handle;
++ handle->pending_async_ctrls++;
++ return;
++ }
++
++ /* Cannot clear the handle for a control not owned by us. */
++ if (WARN_ON(ctrl->handle != handle))
++ return;
++
++ ctrl->handle = NULL;
++ if (WARN_ON(!handle->pending_async_ctrls))
++ return;
++ handle->pending_async_ctrls--;
++}
++
+ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+ struct uvc_control *ctrl, const u8 *data)
+ {
+@@ -1589,7 +1623,8 @@ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+ mutex_lock(&chain->ctrl_mutex);
+
+ handle = ctrl->handle;
+- ctrl->handle = NULL;
++ if (handle)
++ uvc_ctrl_set_handle(handle, ctrl, NULL);
+
+ list_for_each_entry(mapping, &ctrl->info.mappings, list) {
+ s32 value = __uvc_ctrl_get_value(mapping, data);
+@@ -1640,10 +1675,8 @@ bool uvc_ctrl_status_event_async(struct urb *urb, struct uvc_video_chain *chain,
+ struct uvc_device *dev = chain->dev;
+ struct uvc_ctrl_work *w = &dev->async_ctrl;
+
+- if (list_empty(&ctrl->info.mappings)) {
+- ctrl->handle = NULL;
++ if (list_empty(&ctrl->info.mappings))
+ return false;
+- }
+
+ w->data = data;
+ w->urb = urb;
+@@ -1673,13 +1706,13 @@ static void uvc_ctrl_send_events(struct uvc_fh *handle,
+ {
+ struct uvc_control_mapping *mapping;
+ struct uvc_control *ctrl;
+- u32 changes = V4L2_EVENT_CTRL_CH_VALUE;
+ unsigned int i;
+ unsigned int j;
+
+ for (i = 0; i < xctrls_count; ++i) {
+- ctrl = uvc_find_control(handle->chain, xctrls[i].id, &mapping);
++ u32 changes = V4L2_EVENT_CTRL_CH_VALUE;
+
++ ctrl = uvc_find_control(handle->chain, xctrls[i].id, &mapping);
+ if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+ /* Notification will be sent from an Interrupt event. */
+ continue;
+@@ -1811,7 +1844,10 @@ int uvc_ctrl_begin(struct uvc_video_chain *chain)
+ }
+
+ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+- struct uvc_entity *entity, int rollback, struct uvc_control **err_ctrl)
++ struct uvc_fh *handle,
++ struct uvc_entity *entity,
++ int rollback,
++ struct uvc_control **err_ctrl)
+ {
+ struct uvc_control *ctrl;
+ unsigned int i;
+@@ -1859,6 +1895,10 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ *err_ctrl = ctrl;
+ return ret;
+ }
++
++ if (!rollback && handle &&
++ ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
++ uvc_ctrl_set_handle(handle, ctrl, handle);
+ }
+
+ return 0;
+@@ -1895,8 +1935,8 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+
+ /* Find the control. */
+ list_for_each_entry(entity, &chain->entities, chain) {
+- ret = uvc_ctrl_commit_entity(chain->dev, entity, rollback,
+- &err_ctrl);
++ ret = uvc_ctrl_commit_entity(chain->dev, handle, entity,
++ rollback, &err_ctrl);
+ if (ret < 0) {
+ if (ctrls)
+ ctrls->error_idx =
+@@ -2046,9 +2086,6 @@ int uvc_ctrl_set(struct uvc_fh *handle,
+ mapping->set(mapping, value,
+ uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));
+
+- if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+- ctrl->handle = handle;
+-
+ ctrl->dirty = 1;
+ ctrl->modified = 1;
+ return 0;
+@@ -2377,7 +2414,7 @@ int uvc_ctrl_restore_values(struct uvc_device *dev)
+ ctrl->dirty = 1;
+ }
+
+- ret = uvc_ctrl_commit_entity(dev, entity, 0, NULL);
++ ret = uvc_ctrl_commit_entity(dev, NULL, entity, 0, NULL);
+ if (ret < 0)
+ return ret;
+ }
+@@ -2770,6 +2807,26 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
+ return 0;
+ }
+
++void uvc_ctrl_cleanup_fh(struct uvc_fh *handle)
++{
++ struct uvc_entity *entity;
++
++ guard(mutex)(&handle->chain->ctrl_mutex);
++
++ if (!handle->pending_async_ctrls)
++ return;
++
++ list_for_each_entry(entity, &handle->chain->dev->entities, list) {
++ for (unsigned int i = 0; i < entity->ncontrols; ++i) {
++ if (entity->controls[i].handle != handle)
++ continue;
++ uvc_ctrl_set_handle(handle, &entity->controls[i], NULL);
++ }
++ }
++
++ WARN_ON(handle->pending_async_ctrls);
++}
++
+ /*
+ * Cleanup device controls.
+ */
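[Editor's note: taken together, the uvc_ctrl.c hunks above implement a small ownership protocol: an asynchronous control remembers the file handle that last set it, the handle counts how many controls still point at it, and uvc_ctrl_cleanup_fh() sweeps the remainder on release so no stale ctrl->handle outlives the file handle. In hedged sketch form, with the locking elided and for_each_control() as a placeholder for the entity/control iteration:]

    /* set: transfer ownership to new_handle, dropping any previous owner */
    old = ctrl->handle;
    if (old && old != new_handle)
            old->pending_async_ctrls--;
    ctrl->handle = new_handle;
    new_handle->pending_async_ctrls++;

    /* release: sweep every control still owned by this handle */
    for_each_control(ctrl)
            if (ctrl->handle == handle)
                    uvc_ctrl_set_handle(handle, ctrl, NULL);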
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 9f38a9b23c0181..d832aa55056f39 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -775,27 +775,14 @@ static const u8 uvc_media_transport_input_guid[16] =
+ UVC_GUID_UVC_MEDIA_TRANSPORT_INPUT;
+ static const u8 uvc_processing_guid[16] = UVC_GUID_UVC_PROCESSING;
+
+-static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
+- u16 id, unsigned int num_pads,
+- unsigned int extra_size)
++static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
++ unsigned int num_pads, unsigned int extra_size)
+ {
+ struct uvc_entity *entity;
+ unsigned int num_inputs;
+ unsigned int size;
+ unsigned int i;
+
+- /* Per UVC 1.1+ spec 3.7.2, the ID should be non-zero. */
+- if (id == 0) {
+- dev_err(&dev->udev->dev, "Found Unit with invalid ID 0.\n");
+- return ERR_PTR(-EINVAL);
+- }
+-
+- /* Per UVC 1.1+ spec 3.7.2, the ID is unique. */
+- if (uvc_entity_by_id(dev, id)) {
+- dev_err(&dev->udev->dev, "Found multiple Units with ID %u\n", id);
+- return ERR_PTR(-EINVAL);
+- }
+-
+ extra_size = roundup(extra_size, sizeof(*entity->pads));
+ if (num_pads)
+ num_inputs = type & UVC_TERM_OUTPUT ? num_pads : num_pads - 1;
+@@ -805,7 +792,7 @@ static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
+ + num_inputs;
+ entity = kzalloc(size, GFP_KERNEL);
+ if (entity == NULL)
+- return ERR_PTR(-ENOMEM);
++ return NULL;
+
+ entity->id = id;
+ entity->type = type;
+@@ -917,10 +904,10 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
+ break;
+ }
+
+- unit = uvc_alloc_new_entity(dev, UVC_VC_EXTENSION_UNIT,
+- buffer[3], p + 1, 2 * n);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(UVC_VC_EXTENSION_UNIT, buffer[3],
++ p + 1, 2*n);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1029,10 +1016,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- term = uvc_alloc_new_entity(dev, type | UVC_TERM_INPUT,
+- buffer[3], 1, n + p);
+- if (IS_ERR(term))
+- return PTR_ERR(term);
++ term = uvc_alloc_entity(type | UVC_TERM_INPUT, buffer[3],
++ 1, n + p);
++ if (term == NULL)
++ return -ENOMEM;
+
+ if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA) {
+ term->camera.bControlSize = n;
+@@ -1088,10 +1075,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return 0;
+ }
+
+- term = uvc_alloc_new_entity(dev, type | UVC_TERM_OUTPUT,
+- buffer[3], 1, 0);
+- if (IS_ERR(term))
+- return PTR_ERR(term);
++ term = uvc_alloc_entity(type | UVC_TERM_OUTPUT, buffer[3],
++ 1, 0);
++ if (term == NULL)
++ return -ENOMEM;
+
+ memcpy(term->baSourceID, &buffer[7], 1);
+
+@@ -1110,10 +1097,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
+- p + 1, 0);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, 0);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->baSourceID, &buffer[5], p);
+
+@@ -1133,9 +1119,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3], 2, n);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(buffer[2], buffer[3], 2, n);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->baSourceID, &buffer[4], 1);
+ unit->processing.wMaxMultiplier =
+@@ -1162,10 +1148,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
+- p + 1, n);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, n);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1295,20 +1280,19 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ struct gpio_desc *gpio_privacy;
+ int irq;
+
+- gpio_privacy = devm_gpiod_get_optional(&dev->udev->dev, "privacy",
++ gpio_privacy = devm_gpiod_get_optional(&dev->intf->dev, "privacy",
+ GPIOD_IN);
+ if (IS_ERR_OR_NULL(gpio_privacy))
+ return PTR_ERR_OR_ZERO(gpio_privacy);
+
+ irq = gpiod_to_irq(gpio_privacy);
+ if (irq < 0)
+- return dev_err_probe(&dev->udev->dev, irq,
++ return dev_err_probe(&dev->intf->dev, irq,
+ "No IRQ for privacy GPIO\n");
+
+- unit = uvc_alloc_new_entity(dev, UVC_EXT_GPIO_UNIT,
+- UVC_EXT_GPIO_UNIT_ID, 0, 1);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
++ if (!unit)
++ return -ENOMEM;
+
+ unit->gpio.gpio_privacy = gpio_privacy;
+ unit->gpio.irq = irq;
+@@ -1329,15 +1313,27 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ static int uvc_gpio_init_irq(struct uvc_device *dev)
+ {
+ struct uvc_entity *unit = dev->gpio_unit;
++ int ret;
+
+ if (!unit || unit->gpio.irq < 0)
+ return 0;
+
+- return devm_request_threaded_irq(&dev->udev->dev, unit->gpio.irq, NULL,
+- uvc_gpio_irq,
+- IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
+- IRQF_TRIGGER_RISING,
+- "uvc_privacy_gpio", dev);
++ ret = request_threaded_irq(unit->gpio.irq, NULL, uvc_gpio_irq,
++ IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
++ IRQF_TRIGGER_RISING,
++ "uvc_privacy_gpio", dev);
++
++ unit->gpio.initialized = !ret;
++
++ return ret;
++}
++
++static void uvc_gpio_deinit(struct uvc_device *dev)
++{
++ if (!dev->gpio_unit || !dev->gpio_unit->gpio.initialized)
++ return;
++
++ free_irq(dev->gpio_unit->gpio.irq, dev);
+ }
+
+ /* ------------------------------------------------------------------------
+@@ -1934,6 +1930,8 @@ static void uvc_unregister_video(struct uvc_device *dev)
+ {
+ struct uvc_streaming *stream;
+
++ uvc_gpio_deinit(dev);
++
+ list_for_each_entry(stream, &dev->streams, list) {
+ /* Nothing to do here, continue. */
+ if (!video_is_registered(&stream->vdev))
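/*
 * Illustrative sketch of the IRQ-lifecycle pattern in the uvc_gpio
 * change above, with hypothetical names: the driver swaps a
 * device-managed IRQ for a manually requested one plus an
 * "initialized" flag, so teardown frees the IRQ only if the request
 * actually succeeded.
 */
#include <stdbool.h>

struct demo_unit {
	int irq;
	bool initialized;
};

static int demo_init_irq(struct demo_unit *u, int (*request_irq_fn)(int))
{
	int ret = request_irq_fn(u->irq);

	u->initialized = !ret;		/* mark ready only on success */
	return ret;
}

static void demo_deinit_irq(struct demo_unit *u, void (*free_irq_fn)(int))
{
	if (!u->initialized)
		return;			/* nothing requested, nothing to free */
	free_irq_fn(u->irq);
	u->initialized = false;
}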
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index f4988f03640aec..7bcd706281daf3 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -659,6 +659,8 @@ static int uvc_v4l2_release(struct file *file)
+
+ uvc_dbg(stream->dev, CALLS, "%s\n", __func__);
+
++ uvc_ctrl_cleanup_fh(handle);
++
+ /* Only free resources if this is a privileged handle. */
+ if (uvc_has_privileges(handle))
+ uvc_queue_release(&stream->queue);
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index e00f38dd07d935..d2fe01bcd209e5 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -79,6 +79,27 @@ int uvc_query_ctrl(struct uvc_device *dev, u8 query, u8 unit,
+ if (likely(ret == size))
+ return 0;
+
++ /*
++ * Some devices return shorter USB control packets than expected if the
++ * returned value can fit in less bytes. Zero all the bytes that the
++ * device has not written.
++ *
++ * This quirk is applied to all controls, regardless of their data type.
++ * Most controls are little-endian integers, in which case the missing
++ * bytes become 0 MSBs. For other data types, a different heuristic
++ * could be implemented if a device is found needing it.
++ *
++ * We exclude UVC_GET_INFO from the quirk. UVC_GET_LEN does not need
++ * to be excluded because its size is always 1.
++ */
++ if (ret > 0 && query != UVC_GET_INFO) {
++ memset(data + ret, 0, size - ret);
++ dev_warn_once(&dev->udev->dev,
++ "UVC non compliance: %s control %u on unit %u returned %d bytes when we expected %u.\n",
++ uvc_query_name(query), cs, unit, ret, size);
++ return 0;
++ }
++
+ if (ret != -EPIPE) {
+ dev_err(&dev->udev->dev,
+ "Failed to query (%s) UVC control %u on unit %u: %d (exp. %u).\n",
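/*
 * Minimal userspace sketch of the short-read padding quirk above,
 * assuming a little-endian integer control; names are hypothetical.
 * Bytes the device never sent become zero MSBs, so the numeric value
 * of the control is preserved.
 */
#include <stdint.h>
#include <string.h>

static int demo_pad_short_read(uint8_t *data, int ret, int size)
{
	if (ret <= 0 || ret > size)
		return -1;			/* nothing usable returned */
	memset(data + ret, 0, size - ret);	/* zero the missing tail */
	return 0;				/* treat as a full-size read */
}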
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index b7d24a853ce4f1..272dc9cf01ee7d 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -234,6 +234,7 @@ struct uvc_entity {
+ u8 *bmControls;
+ struct gpio_desc *gpio_privacy;
+ int irq;
++ bool initialized;
+ } gpio;
+ };
+
+@@ -337,7 +338,11 @@ struct uvc_video_chain {
+ struct uvc_entity *processing; /* Processing unit */
+ struct uvc_entity *selector; /* Selector unit */
+
+- struct mutex ctrl_mutex; /* Protects ctrl.info */
++ struct mutex ctrl_mutex; /*
++ * Protects ctrl.info,
++ * ctrl.handle and
++ * uvc_fh.pending_async_ctrls
++ */
+
+ struct v4l2_prio_state prio; /* V4L2 priority state */
+ u32 caps; /* V4L2 chain-wide caps */
+@@ -612,6 +617,7 @@ struct uvc_fh {
+ struct uvc_video_chain *chain;
+ struct uvc_streaming *stream;
+ enum uvc_handle_state state;
++ unsigned int pending_async_ctrls;
+ };
+
+ struct uvc_driver {
+@@ -795,6 +801,8 @@ int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+ int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
+ struct uvc_xu_control_query *xqry);
+
++void uvc_ctrl_cleanup_fh(struct uvc_fh *handle);
++
+ /* Utility functions */
+ struct usb_host_endpoint *uvc_find_endpoint(struct usb_host_interface *alts,
+ u8 epaddr);
+diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
+index 4bb91359e3a9a7..937d358697e19a 100644
+--- a/drivers/media/v4l2-core/v4l2-mc.c
++++ b/drivers/media/v4l2-core/v4l2-mc.c
+@@ -329,7 +329,7 @@ int v4l2_create_fwnode_links_to_pad(struct v4l2_subdev *src_sd,
+ if (!(sink->flags & MEDIA_PAD_FL_SINK))
+ return -EINVAL;
+
+- fwnode_graph_for_each_endpoint(dev_fwnode(src_sd->dev), endpoint) {
++ fwnode_graph_for_each_endpoint(src_sd->fwnode, endpoint) {
+ struct fwnode_handle *remote_ep;
+ int src_idx, sink_idx, ret;
+ struct media_pad *src;
+diff --git a/drivers/mfd/lpc_ich.c b/drivers/mfd/lpc_ich.c
+index f14901660147f5..4b7d0cb9340f1a 100644
+--- a/drivers/mfd/lpc_ich.c
++++ b/drivers/mfd/lpc_ich.c
+@@ -834,8 +834,9 @@ static const struct pci_device_id lpc_ich_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x2917), LPC_ICH9ME},
+ { PCI_VDEVICE(INTEL, 0x2918), LPC_ICH9},
+ { PCI_VDEVICE(INTEL, 0x2919), LPC_ICH9M},
+- { PCI_VDEVICE(INTEL, 0x3197), LPC_GLK},
+ { PCI_VDEVICE(INTEL, 0x2b9c), LPC_COUGARMOUNTAIN},
++ { PCI_VDEVICE(INTEL, 0x3197), LPC_GLK},
++ { PCI_VDEVICE(INTEL, 0x31e8), LPC_GLK},
+ { PCI_VDEVICE(INTEL, 0x3a14), LPC_ICH10DO},
+ { PCI_VDEVICE(INTEL, 0x3a16), LPC_ICH10R},
+ { PCI_VDEVICE(INTEL, 0x3a18), LPC_ICH10},
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 74181b8c386b78..e567a36275afc5 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -992,7 +992,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ mmap_read_lock(current->mm);
+ vma = find_vma(current->mm, ctx->args[i].ptr);
+ if (vma)
+- pages[i].addr += ctx->args[i].ptr -
++ pages[i].addr += (ctx->args[i].ptr & PAGE_MASK) -
+ vma->vm_start;
+ mmap_read_unlock(current->mm);
+
+@@ -1019,8 +1019,8 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ (pkt_size - rlen);
+ pages[i].addr = pages[i].addr & PAGE_MASK;
+
+- pg_start = (args & PAGE_MASK) >> PAGE_SHIFT;
+- pg_end = ((args + len - 1) & PAGE_MASK) >> PAGE_SHIFT;
++ pg_start = (rpra[i].buf.pv & PAGE_MASK) >> PAGE_SHIFT;
++ pg_end = ((rpra[i].buf.pv + len - 1) & PAGE_MASK) >> PAGE_SHIFT;
+ pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE;
+ args = args + mlen;
+ rlen -= mlen;
+@@ -2344,7 +2344,7 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
+
+ err = fastrpc_device_register(rdev, data, false, domains[domain_id]);
+ if (err)
+- goto fdev_error;
++ goto populate_error;
+ break;
+ default:
+ err = -EINVAL;
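/*
 * Sketch of the page-span arithmetic used in the fastrpc fix above,
 * with a 4 KiB page and hypothetical standalone constants: aligning
 * the start address and the last byte down to page boundaries before
 * subtracting yields the number of pages the buffer touches.
 */
#include <stdint.h>

#define DEMO_PAGE_SHIFT	12
#define DEMO_PAGE_SIZE	(1UL << DEMO_PAGE_SHIFT)
#define DEMO_PAGE_MASK	(~(DEMO_PAGE_SIZE - 1))

static uint64_t demo_span_bytes(uint64_t addr, uint64_t len)
{
	uint64_t pg_start = (addr & DEMO_PAGE_MASK) >> DEMO_PAGE_SHIFT;
	uint64_t pg_end = ((addr + len - 1) & DEMO_PAGE_MASK) >> DEMO_PAGE_SHIFT;

	/* e.g. addr = 0x1ff0, len = 0x20 crosses into a second page,
	 * so the result is 2 pages = 0x2000 bytes
	 */
	return (pg_end - pg_start + 1) * DEMO_PAGE_SIZE;
}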
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 9566837c9848e6..4b19b8a16b0968 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -458,6 +458,8 @@ static unsigned mmc_sdio_get_max_clock(struct mmc_card *card)
+ if (mmc_card_sd_combo(card))
+ max_dtr = min(max_dtr, mmc_sd_get_max_clock(card));
+
++ max_dtr = min_not_zero(max_dtr, card->quirk_max_rate);
++
+ return max_dtr;
+ }
+
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index ef3a44f2dff16d..d84aa20f035894 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -303,6 +303,7 @@ static struct esdhc_soc_data usdhc_s32g2_data = {
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ | ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ | ESDHC_FLAG_SKIP_ERR004536 | ESDHC_FLAG_SKIP_CD_WAKE,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ static struct esdhc_soc_data usdhc_imx7ulp_data = {
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 8716004fcf6c90..945d08531de376 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -134,9 +134,18 @@
+ /* Timeout value to avoid infinite waiting for pwr_irq */
+ #define MSM_PWR_IRQ_TIMEOUT_MS 5000
+
++/* Max load for eMMC Vdd supply */
++#define MMC_VMMC_MAX_LOAD_UA 570000
++
+ /* Max load for eMMC Vdd-io supply */
+ #define MMC_VQMMC_MAX_LOAD_UA 325000
+
++/* Max load for SD Vdd supply */
++#define SD_VMMC_MAX_LOAD_UA 800000
++
++/* Max load for SD Vdd-io supply */
++#define SD_VQMMC_MAX_LOAD_UA 22000
++
+ #define msm_host_readl(msm_host, host, offset) \
+ msm_host->var_ops->msm_readl_relaxed(host, offset)
+
+@@ -1403,11 +1412,48 @@ static int sdhci_msm_set_pincfg(struct sdhci_msm_host *msm_host, bool level)
+ return ret;
+ }
+
+-static int sdhci_msm_set_vmmc(struct mmc_host *mmc)
++static void msm_config_vmmc_regulator(struct mmc_host *mmc, bool hpm)
++{
++ int load;
++
++ if (!hpm)
++ load = 0;
++ else if (!mmc->card)
++ load = max(MMC_VMMC_MAX_LOAD_UA, SD_VMMC_MAX_LOAD_UA);
++ else if (mmc_card_mmc(mmc->card))
++ load = MMC_VMMC_MAX_LOAD_UA;
++ else if (mmc_card_sd(mmc->card))
++ load = SD_VMMC_MAX_LOAD_UA;
++ else
++ return;
++
++ regulator_set_load(mmc->supply.vmmc, load);
++}
++
++static void msm_config_vqmmc_regulator(struct mmc_host *mmc, bool hpm)
++{
++ int load;
++
++ if (!hpm)
++ load = 0;
++ else if (!mmc->card)
++ load = max(MMC_VQMMC_MAX_LOAD_UA, SD_VQMMC_MAX_LOAD_UA);
++ else if (mmc_card_sd(mmc->card))
++ load = SD_VQMMC_MAX_LOAD_UA;
++ else
++ return;
++
++ regulator_set_load(mmc->supply.vqmmc, load);
++}
++
++static int sdhci_msm_set_vmmc(struct sdhci_msm_host *msm_host,
++ struct mmc_host *mmc, bool hpm)
+ {
+ if (IS_ERR(mmc->supply.vmmc))
+ return 0;
+
++ msm_config_vmmc_regulator(mmc, hpm);
++
+ return mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, mmc->ios.vdd);
+ }
+
+@@ -1420,6 +1466,8 @@ static int msm_toggle_vqmmc(struct sdhci_msm_host *msm_host,
+ if (msm_host->vqmmc_enabled == level)
+ return 0;
+
++ msm_config_vqmmc_regulator(mmc, level);
++
+ if (level) {
+ /* Set the IO voltage regulator to default voltage level */
+ if (msm_host->caps_0 & CORE_3_0V_SUPPORT)
+@@ -1642,7 +1690,8 @@ static void sdhci_msm_handle_pwr_irq(struct sdhci_host *host, int irq)
+ }
+
+ if (pwr_state) {
+- ret = sdhci_msm_set_vmmc(mmc);
++ ret = sdhci_msm_set_vmmc(msm_host, mmc,
++ pwr_state & REQ_BUS_ON);
+ if (!ret)
+ ret = sdhci_msm_set_vqmmc(msm_host, mmc,
+ pwr_state & REQ_BUS_ON);
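/*
 * Sketch of the regulator load-selection ladder above, with a
 * hypothetical helper: before the card type is known, request the
 * larger of the eMMC and SD maxima so either card can be powered;
 * once the type is known, request its exact maximum. Values mirror
 * the MMC_VMMC/SD_VMMC constants above.
 */
enum demo_card { DEMO_CARD_UNKNOWN, DEMO_CARD_EMMC, DEMO_CARD_SD };

static int demo_vmmc_load_ua(enum demo_card card, int high_power)
{
	if (!high_power)
		return 0;		/* low-power mode: drop the load */
	switch (card) {
	case DEMO_CARD_UNKNOWN:
		return 800000;		/* max(570000, 800000) */
	case DEMO_CARD_EMMC:
		return 570000;		/* MMC_VMMC_MAX_LOAD_UA */
	case DEMO_CARD_SD:
		return 800000;		/* SD_VMMC_MAX_LOAD_UA */
	}
	return 0;
}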
+diff --git a/drivers/mtd/nand/onenand/onenand_base.c b/drivers/mtd/nand/onenand/onenand_base.c
+index f66385faf631cd..0dc2ea4fc857b7 100644
+--- a/drivers/mtd/nand/onenand/onenand_base.c
++++ b/drivers/mtd/nand/onenand/onenand_base.c
+@@ -2923,6 +2923,7 @@ static int do_otp_read(struct mtd_info *mtd, loff_t from, size_t len,
+ ret = ONENAND_IS_4KB_PAGE(this) ?
+ onenand_mlc_read_ops_nolock(mtd, from, &ops) :
+ onenand_read_ops_nolock(mtd, from, &ops);
++ *retlen = ops.retlen;
+
+ /* Exit OTP access mode */
+ this->command(mtd, ONENAND_CMD_RESET, 0, 0);
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index 30be4ed68fad29..ef6a22f372f95c 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -1537,7 +1537,7 @@ static int ubi_mtd_param_parse(const char *val, const struct kernel_param *kp)
+ if (token) {
+ int err = kstrtoint(token, 10, &p->ubi_num);
+
+- if (err) {
++ if (err || p->ubi_num < UBI_DEV_NUM_AUTO) {
+ pr_err("UBI error: bad value for ubi_num parameter: %s\n",
+ token);
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index fe0e3e2a811718..71e50fc65c1478 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -1441,7 +1441,9 @@ void aq_nic_deinit(struct aq_nic_s *self, bool link_down)
+ aq_ptp_ring_free(self);
+ aq_ptp_free(self);
+
+- if (likely(self->aq_fw_ops->deinit) && link_down) {
++ /* May be invoked during hot unplug. */
++ if (pci_device_is_present(self->pdev) &&
++ likely(self->aq_fw_ops->deinit) && link_down) {
+ mutex_lock(&self->fwreq_mutex);
+ self->aq_fw_ops->deinit(self->aq_hw);
+ mutex_unlock(&self->fwreq_mutex);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index 0715ea5bf13ed9..3b082114f2e538 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -41,9 +41,12 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ {
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+ struct device *kdev = &priv->pdev->dev;
++ u32 phy_wolopts = 0;
+
+- if (dev->phydev)
++ if (dev->phydev) {
+ phy_ethtool_get_wol(dev->phydev, wol);
++ phy_wolopts = wol->wolopts;
++ }
+
+ /* MAC is not wake-up capable, return what the PHY does */
+ if (!device_can_wakeup(kdev))
+@@ -51,9 +54,14 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+
+ /* Overlay MAC capabilities with that of the PHY queried before */
+ wol->supported |= WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
+- wol->wolopts = priv->wolopts;
+- memset(wol->sopass, 0, sizeof(wol->sopass));
++ wol->wolopts |= priv->wolopts;
+
++ /* Return the PHY configured magic password */
++ if (phy_wolopts & WAKE_MAGICSECURE)
++ return;
++
++ /* Otherwise the MAC one */
++ memset(wol->sopass, 0, sizeof(wol->sopass));
+ if (wol->wolopts & WAKE_MAGICSECURE)
+ memcpy(wol->sopass, priv->sopass, sizeof(priv->sopass));
+ }
+@@ -70,7 +78,7 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ /* Try Wake-on-LAN from the PHY first */
+ if (dev->phydev) {
+ ret = phy_ethtool_set_wol(dev->phydev, wol);
+- if (ret != -EOPNOTSUPP)
++ if (ret != -EOPNOTSUPP && wol->wolopts)
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index d178138981a967..717e110d23c914 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -55,6 +55,7 @@
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+ #include <linux/crc32poly.h>
++#include <linux/dmi.h>
+
+ #include <net/checksum.h>
+ #include <net/gso.h>
+@@ -18154,6 +18155,50 @@ static int tg3_resume(struct device *device)
+
+ static SIMPLE_DEV_PM_OPS(tg3_pm_ops, tg3_suspend, tg3_resume);
+
++/* Systems where ACPI _PTS (Prepare To Sleep) S5 will result in a fatal
++ * PCIe AER event on the tg3 device if the tg3 device is not, or cannot
++ * be, powered down.
++ */
++static const struct dmi_system_id tg3_restart_aer_quirk_table[] = {
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R440"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R540"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R640"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R650"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R740"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R750"),
++ },
++ },
++ {}
++};
++
+ static void tg3_shutdown(struct pci_dev *pdev)
+ {
+ struct net_device *dev = pci_get_drvdata(pdev);
+@@ -18170,6 +18215,19 @@ static void tg3_shutdown(struct pci_dev *pdev)
+
+ if (system_state == SYSTEM_POWER_OFF)
+ tg3_power_down(tp);
++ else if (system_state == SYSTEM_RESTART &&
++ dmi_first_match(tg3_restart_aer_quirk_table) &&
++ pdev->current_state != PCI_D3cold &&
++ pdev->current_state != PCI_UNKNOWN) {
++ /* Disable PCIe AER on the tg3 to avoid a fatal
++ * error during this system restart.
++ */
++ pcie_capability_clear_word(pdev, PCI_EXP_DEVCTL,
++ PCI_EXP_DEVCTL_CERE |
++ PCI_EXP_DEVCTL_NFERE |
++ PCI_EXP_DEVCTL_FERE |
++ PCI_EXP_DEVCTL_URRE);
++ }
+
+ rtnl_unlock();
+
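/*
 * Sketch of the DEVCTL bit-clearing done in tg3_shutdown() above: AER
 * error signalling is gated by four enable bits in the PCIe Device
 * Control register, and clearing them silences the device during the
 * quirky restart. Bit values follow the PCIe specification; the
 * register word here is just a local variable.
 */
#include <stdint.h>

#define DEMO_DEVCTL_CERE	0x0001	/* correctable error reporting */
#define DEMO_DEVCTL_NFERE	0x0002	/* non-fatal error reporting */
#define DEMO_DEVCTL_FERE	0x0004	/* fatal error reporting */
#define DEMO_DEVCTL_URRE	0x0008	/* unsupported request reporting */

static uint16_t demo_disable_aer(uint16_t devctl)
{
	return devctl & ~(DEMO_DEVCTL_CERE | DEMO_DEVCTL_NFERE |
			  DEMO_DEVCTL_FERE | DEMO_DEVCTL_URRE);
}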
+diff --git a/drivers/net/ethernet/intel/ice/devlink/devlink.c b/drivers/net/ethernet/intel/ice/devlink/devlink.c
+index 415445cefdb2aa..b1efd287b3309c 100644
+--- a/drivers/net/ethernet/intel/ice/devlink/devlink.c
++++ b/drivers/net/ethernet/intel/ice/devlink/devlink.c
+@@ -977,6 +977,9 @@ static int ice_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv
+
+ /* preallocate memory for ice_sched_node */
+ node = devm_kzalloc(ice_hw_to_dev(pi->hw), sizeof(*node), GFP_KERNEL);
++ if (!node)
++ return -ENOMEM;
++
+ *priv = node;
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 8208055d6e7fc5..f12fb3a2b6ad94 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -527,15 +527,14 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
+ * @xdp: xdp_buff used as input to the XDP program
+ * @xdp_prog: XDP program to run
+ * @xdp_ring: ring to be used for XDP_TX action
+- * @rx_buf: Rx buffer to store the XDP action
+ * @eop_desc: Last descriptor in packet to read metadata from
+ *
+ * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
+ */
+-static void
++static u32
+ ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
+- struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *eop_desc)
++ union ice_32b_rx_flex_desc *eop_desc)
+ {
+ unsigned int ret = ICE_XDP_PASS;
+ u32 act;
+@@ -574,7 +573,7 @@ ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ ret = ICE_XDP_CONSUMED;
+ }
+ exit:
+- ice_set_rx_bufs_act(xdp, rx_ring, ret);
++ return ret;
+ }
+
+ /**
+@@ -860,10 +859,8 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ xdp_buff_set_frags_flag(xdp);
+ }
+
+- if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
+- ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
++ if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS))
+ return -ENOMEM;
+- }
+
+ __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
+ rx_buf->page_offset, size);
+@@ -924,7 +921,6 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
+ struct ice_rx_buf *rx_buf;
+
+ rx_buf = &rx_ring->rx_buf[ntc];
+- rx_buf->pgcnt = page_count(rx_buf->page);
+ prefetchw(rx_buf->page);
+
+ if (!size)
+@@ -940,6 +936,31 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
+ return rx_buf;
+ }
+
++/**
++ * ice_get_pgcnts - grab page_count() for gathered fragments
++ * @rx_ring: Rx descriptor ring to store the page counts on
++ *
++ * This function is intended to be called right before running XDP
++ * program so that the page recycling mechanism will be able to take
++ * a correct decision regarding underlying pages; this is done in such
++ * way as XDP program can change the refcount of page
++ */
++static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
++{
++ u32 nr_frags = rx_ring->nr_frags + 1;
++ u32 idx = rx_ring->first_desc;
++ struct ice_rx_buf *rx_buf;
++ u32 cnt = rx_ring->count;
++
++ for (int i = 0; i < nr_frags; i++) {
++ rx_buf = &rx_ring->rx_buf[idx];
++ rx_buf->pgcnt = page_count(rx_buf->page);
++
++ if (++idx == cnt)
++ idx = 0;
++ }
++}
++
+ /**
+ * ice_build_skb - Build skb around an existing buffer
+ * @rx_ring: Rx descriptor ring to transact packets on
+@@ -1051,12 +1072,12 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
+ rx_buf->page_offset + headlen, size,
+ xdp->frame_sz);
+ } else {
+- /* buffer is unused, change the act that should be taken later
+- * on; data was copied onto skb's linear part so there's no
++ /* buffer is unused, restore biased page count in Rx buffer;
++ * data was copied onto skb's linear part so there's no
+ * need for adjusting page offset and we can reuse this buffer
+ * as-is
+ */
+- rx_buf->act = ICE_SKB_CONSUMED;
++ rx_buf->pagecnt_bias++;
+ }
+
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+@@ -1103,6 +1124,65 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
+ rx_buf->page = NULL;
+ }
+
++/**
++ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
++ * @rx_ring: Rx ring with all the auxiliary data
++ * @xdp: XDP buffer carrying linear + frags part
++ * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
++ * @ntc: a current next_to_clean value to be stored at rx_ring
++ * @verdict: return code from XDP program execution
++ *
++ * Walk through gathered fragments and satisfy internal page
++ * recycle mechanism; we take here an action related to verdict
++ * returned by XDP program;
++ */
++static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
++ u32 *xdp_xmit, u32 ntc, u32 verdict)
++{
++ u32 nr_frags = rx_ring->nr_frags + 1;
++ u32 idx = rx_ring->first_desc;
++ u32 cnt = rx_ring->count;
++ u32 post_xdp_frags = 1;
++ struct ice_rx_buf *buf;
++ int i;
++
++ if (unlikely(xdp_buff_has_frags(xdp)))
++ post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
++
++ for (i = 0; i < post_xdp_frags; i++) {
++ buf = &rx_ring->rx_buf[idx];
++
++ if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
++ ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
++ *xdp_xmit |= verdict;
++ } else if (verdict & ICE_XDP_CONSUMED) {
++ buf->pagecnt_bias++;
++ } else if (verdict == ICE_XDP_PASS) {
++ ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
++ }
++
++ ice_put_rx_buf(rx_ring, buf);
++
++ if (++idx == cnt)
++ idx = 0;
++ }
++ /* handle buffers that represented frags released by XDP prog;
++ * for these we keep pagecnt_bias as-is; refcount from struct page
++ * has been decremented within XDP prog and we do not have to increase
++ * the biased refcnt
++ */
++ for (; i < nr_frags; i++) {
++ buf = &rx_ring->rx_buf[idx];
++ ice_put_rx_buf(rx_ring, buf);
++ if (++idx == cnt)
++ idx = 0;
++ }
++
++ xdp->data = NULL;
++ rx_ring->first_desc = ntc;
++ rx_ring->nr_frags = 0;
++}
++
+ /**
+ * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @rx_ring: Rx descriptor ring to transact packets on
+@@ -1120,15 +1200,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+ unsigned int offset = rx_ring->rx_offset;
+ struct xdp_buff *xdp = &rx_ring->xdp;
+- u32 cached_ntc = rx_ring->first_desc;
+ struct ice_tx_ring *xdp_ring = NULL;
+ struct bpf_prog *xdp_prog = NULL;
+ u32 ntc = rx_ring->next_to_clean;
++ u32 cached_ntu, xdp_verdict;
+ u32 cnt = rx_ring->count;
+ u32 xdp_xmit = 0;
+- u32 cached_ntu;
+ bool failure;
+- u32 first;
+
+ xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+ if (xdp_prog) {
+@@ -1190,6 +1268,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
+ xdp_buff_clear_frags_flag(xdp);
+ } else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
++ ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
+ break;
+ }
+ if (++ntc == cnt)
+@@ -1199,15 +1278,15 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ if (ice_is_non_eop(rx_ring, rx_desc))
+ continue;
+
+- ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);
+- if (rx_buf->act == ICE_XDP_PASS)
++ ice_get_pgcnts(rx_ring);
++ xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
++ if (xdp_verdict == ICE_XDP_PASS)
+ goto construct_skb;
+ total_rx_bytes += xdp_get_buff_len(xdp);
+ total_rx_pkts++;
+
+- xdp->data = NULL;
+- rx_ring->first_desc = ntc;
+- rx_ring->nr_frags = 0;
++ ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
++
+ continue;
+ construct_skb:
+ if (likely(ice_ring_uses_build_skb(rx_ring)))
+@@ -1217,18 +1296,12 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ /* exit if we failed to retrieve a buffer */
+ if (!skb) {
+ rx_ring->ring_stats->rx_stats.alloc_page_failed++;
+- rx_buf->act = ICE_XDP_CONSUMED;
+- if (unlikely(xdp_buff_has_frags(xdp)))
+- ice_set_rx_bufs_act(xdp, rx_ring,
+- ICE_XDP_CONSUMED);
+- xdp->data = NULL;
+- rx_ring->first_desc = ntc;
+- rx_ring->nr_frags = 0;
+- break;
++ xdp_verdict = ICE_XDP_CONSUMED;
+ }
+- xdp->data = NULL;
+- rx_ring->first_desc = ntc;
+- rx_ring->nr_frags = 0;
++ ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
++
++ if (!skb)
++ break;
+
+ stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
+ if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
+@@ -1257,23 +1330,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ total_rx_pkts++;
+ }
+
+- first = rx_ring->first_desc;
+- while (cached_ntc != first) {
+- struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc];
+-
+- if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+- ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+- xdp_xmit |= buf->act;
+- } else if (buf->act & ICE_XDP_CONSUMED) {
+- buf->pagecnt_bias++;
+- } else if (buf->act == ICE_XDP_PASS) {
+- ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+- }
+-
+- ice_put_rx_buf(rx_ring, buf);
+- if (++cached_ntc >= cnt)
+- cached_ntc = 0;
+- }
+ rx_ring->next_to_clean = ntc;
+ /* return up to cleaned_count buffers to hardware */
+ failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring));
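/*
 * Minimal sketch of the biased page-refcount bookkeeping the ice
 * changes above rely on; structure and helper names are hypothetical.
 * The driver keeps an elevated page refcount ("bias") so reclaiming an
 * unused buffer is a plain increment, and a page is considered
 * recyclable roughly when the driver holds the only live reference.
 */
struct demo_rx_buf {
	unsigned int pgcnt;		/* page_count() snapshot */
	unsigned int pagecnt_bias;	/* references the driver still owns */
};

static void demo_on_consumed(struct demo_rx_buf *buf)
{
	buf->pagecnt_bias++;		/* buffer unused: take the ref back */
}

static int demo_page_recyclable(const struct demo_rx_buf *buf)
{
	/* a difference of one means no one else references the page */
	return buf->pgcnt - buf->pagecnt_bias == 1;
}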
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index feba314a3fe441..7130992d417798 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -201,7 +201,6 @@ struct ice_rx_buf {
+ struct page *page;
+ unsigned int page_offset;
+ unsigned int pgcnt;
+- unsigned int act;
+ unsigned int pagecnt_bias;
+ };
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+index afcead4baef4b1..f6c2b16ab45674 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+@@ -5,49 +5,6 @@
+ #define _ICE_TXRX_LIB_H_
+ #include "ice.h"
+
+-/**
+- * ice_set_rx_bufs_act - propagate Rx buffer action to frags
+- * @xdp: XDP buffer representing frame (linear and frags part)
+- * @rx_ring: Rx ring struct
+- * act: action to store onto Rx buffers related to XDP buffer parts
+- *
+- * Set action that should be taken before putting Rx buffer from first frag
+- * to the last.
+- */
+-static inline void
+-ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
+- const unsigned int act)
+-{
+- u32 sinfo_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
+- u32 nr_frags = rx_ring->nr_frags + 1;
+- u32 idx = rx_ring->first_desc;
+- u32 cnt = rx_ring->count;
+- struct ice_rx_buf *buf;
+-
+- for (int i = 0; i < nr_frags; i++) {
+- buf = &rx_ring->rx_buf[idx];
+- buf->act = act;
+-
+- if (++idx == cnt)
+- idx = 0;
+- }
+-
+- /* adjust pagecnt_bias on frags freed by XDP prog */
+- if (sinfo_frags < rx_ring->nr_frags && act == ICE_XDP_CONSUMED) {
+- u32 delta = rx_ring->nr_frags - sinfo_frags;
+-
+- while (delta) {
+- if (idx == 0)
+- idx = cnt - 1;
+- else
+- idx--;
+- buf = &rx_ring->rx_buf[idx];
+- buf->pagecnt_bias--;
+- delta--;
+- }
+- }
+-}
+-
+ /**
+ * ice_test_staterr - tests bits in Rx descriptor status and error fields
+ * @status_err_n: Rx descriptor status_error0 or status_error1 bits
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
+index 7d0124b283dace..d7a3465e6a7241 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
+@@ -157,17 +157,14 @@ octep_get_ethtool_stats(struct net_device *netdev,
+ iface_rx_stats,
+ iface_tx_stats);
+
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_iq *iq = oct->iq[q];
+- struct octep_oq *oq = oct->oq[q];
+-
+- tx_packets += iq->stats.instr_completed;
+- tx_bytes += iq->stats.bytes_sent;
+- tx_busy_errors += iq->stats.tx_busy;
+-
+- rx_packets += oq->stats.packets;
+- rx_bytes += oq->stats.bytes;
+- rx_alloc_errors += oq->stats.alloc_failures;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ tx_packets += oct->stats_iq[q].instr_completed;
++ tx_bytes += oct->stats_iq[q].bytes_sent;
++ tx_busy_errors += oct->stats_iq[q].tx_busy;
++
++ rx_packets += oct->stats_oq[q].packets;
++ rx_bytes += oct->stats_oq[q].bytes;
++ rx_alloc_errors += oct->stats_oq[q].alloc_failures;
+ }
+ i = 0;
+ data[i++] = rx_packets;
+@@ -205,22 +202,18 @@ octep_get_ethtool_stats(struct net_device *netdev,
+ data[i++] = iface_rx_stats->err_pkts;
+
+ /* Per Tx Queue stats */
+- for (q = 0; q < oct->num_iqs; q++) {
+- struct octep_iq *iq = oct->iq[q];
+-
+- data[i++] = iq->stats.instr_posted;
+- data[i++] = iq->stats.instr_completed;
+- data[i++] = iq->stats.bytes_sent;
+- data[i++] = iq->stats.tx_busy;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ data[i++] = oct->stats_iq[q].instr_posted;
++ data[i++] = oct->stats_iq[q].instr_completed;
++ data[i++] = oct->stats_iq[q].bytes_sent;
++ data[i++] = oct->stats_iq[q].tx_busy;
+ }
+
+ /* Per Rx Queue stats */
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_oq *oq = oct->oq[q];
+-
+- data[i++] = oq->stats.packets;
+- data[i++] = oq->stats.bytes;
+- data[i++] = oq->stats.alloc_failures;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ data[i++] = oct->stats_oq[q].packets;
++ data[i++] = oct->stats_oq[q].bytes;
++ data[i++] = oct->stats_oq[q].alloc_failures;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 730aa5632cceee..a89f80bac39b8d 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -822,7 +822,7 @@ static inline int octep_iq_full_check(struct octep_iq *iq)
+ if (unlikely(IQ_INSTR_SPACE(iq) >
+ OCTEP_WAKE_QUEUE_THRESHOLD)) {
+ netif_start_subqueue(iq->netdev, iq->q_no);
+- iq->stats.restart_cnt++;
++ iq->stats->restart_cnt++;
+ return 0;
+ }
+
+@@ -960,7 +960,7 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
+ wmb();
+ /* Ring Doorbell to notify the NIC of new packets */
+ writel(iq->fill_cnt, iq->doorbell_reg);
+- iq->stats.instr_posted += iq->fill_cnt;
++ iq->stats->instr_posted += iq->fill_cnt;
+ iq->fill_cnt = 0;
+ return NETDEV_TX_OK;
+
+@@ -991,22 +991,19 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
+ static void octep_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+ {
+- u64 tx_packets, tx_bytes, rx_packets, rx_bytes;
+ struct octep_device *oct = netdev_priv(netdev);
++ u64 tx_packets, tx_bytes, rx_packets, rx_bytes;
+ int q;
+
+ tx_packets = 0;
+ tx_bytes = 0;
+ rx_packets = 0;
+ rx_bytes = 0;
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_iq *iq = oct->iq[q];
+- struct octep_oq *oq = oct->oq[q];
+-
+- tx_packets += iq->stats.instr_completed;
+- tx_bytes += iq->stats.bytes_sent;
+- rx_packets += oq->stats.packets;
+- rx_bytes += oq->stats.bytes;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ tx_packets += oct->stats_iq[q].instr_completed;
++ tx_bytes += oct->stats_iq[q].bytes_sent;
++ rx_packets += oct->stats_oq[q].packets;
++ rx_bytes += oct->stats_oq[q].bytes;
+ }
+ stats->tx_packets = tx_packets;
+ stats->tx_bytes = tx_bytes;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
+index fee59e0e0138fe..936b786f428168 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
+@@ -257,11 +257,17 @@ struct octep_device {
+ /* Pointers to Octeon Tx queues */
+ struct octep_iq *iq[OCTEP_MAX_IQ];
+
++ /* Per iq stats */
++ struct octep_iq_stats stats_iq[OCTEP_MAX_IQ];
++
+ /* Rx queues (OQ: Output Queue) */
+ u16 num_oqs;
+ /* Pointers to Octeon Rx queues */
+ struct octep_oq *oq[OCTEP_MAX_OQ];
+
++ /* Per oq stats */
++ struct octep_oq_stats stats_oq[OCTEP_MAX_OQ];
++
+ /* Hardware port number of the PCIe interface */
+ u16 pcie_port;
+
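/*
 * Sketch of the octeon_ep stats refactor above, with hypothetical
 * types: statistics move from each queue struct into fixed per-device
 * arrays, and the queue keeps only a pointer. Counters then survive
 * queue teardown and re-creation, and readers can always iterate the
 * full array regardless of how many queues are currently set up.
 */
#define DEMO_MAX_QUEUES 8

struct demo_iq_stats { unsigned long long instr_completed; };

struct demo_iq {
	struct demo_iq_stats *stats;	/* points into the device array */
};

struct demo_device {
	struct demo_iq *iq[DEMO_MAX_QUEUES];
	struct demo_iq_stats stats_iq[DEMO_MAX_QUEUES];	/* owned here */
};

static void demo_setup_iq(struct demo_device *dev, struct demo_iq *iq, int q)
{
	iq->stats = &dev->stats_iq[q];	/* wire the queue to its slot */
	dev->iq[q] = iq;
}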
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+index 8af75cb37c3ee8..82b6b19e76b47a 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+@@ -87,7 +87,7 @@ static int octep_oq_refill(struct octep_device *oct, struct octep_oq *oq)
+ page = dev_alloc_page();
+ if (unlikely(!page)) {
+ dev_err(oq->dev, "refill: rx buffer alloc failed\n");
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+
+@@ -98,7 +98,7 @@ static int octep_oq_refill(struct octep_device *oct, struct octep_oq *oq)
+ "OQ-%d buffer refill: DMA mapping error!\n",
+ oq->q_no);
+ put_page(page);
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+ oq->buff_info[refill_idx].page = page;
+@@ -134,6 +134,7 @@ static int octep_setup_oq(struct octep_device *oct, int q_no)
+ oq->netdev = oct->netdev;
+ oq->dev = &oct->pdev->dev;
+ oq->q_no = q_no;
++ oq->stats = &oct->stats_oq[q_no];
+ oq->max_count = CFG_GET_OQ_NUM_DESC(oct->conf);
+ oq->ring_size_mask = oq->max_count - 1;
+ oq->buffer_size = CFG_GET_OQ_BUF_SIZE(oct->conf);
+@@ -443,7 +444,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+ if (!skb) {
+ octep_oq_drop_rx(oq, buff_info,
+ &read_idx, &desc_used);
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ continue;
+ }
+ skb_reserve(skb, data_offset);
+@@ -494,8 +495,8 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+
+ oq->host_read_idx = read_idx;
+ oq->refill_count += desc_used;
+- oq->stats.packets += pkt;
+- oq->stats.bytes += rx_bytes;
++ oq->stats->packets += pkt;
++ oq->stats->bytes += rx_bytes;
+
+ return pkt;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
+index 3b08e2d560dc39..b4696c93d0e6a9 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
+@@ -186,8 +186,8 @@ struct octep_oq {
+ */
+ u8 __iomem *pkts_sent_reg;
+
+- /* Statistics for this OQ. */
+- struct octep_oq_stats stats;
++ /* Pointer to statistics for this OQ. */
++ struct octep_oq_stats *stats;
+
+ /* Packets pending to be processed */
+ u32 pkts_pending;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+index 06851b78aa28c8..08ee90013fef3b 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+@@ -81,9 +81,9 @@ int octep_iq_process_completions(struct octep_iq *iq, u16 budget)
+ }
+
+ iq->pkts_processed += compl_pkts;
+- iq->stats.instr_completed += compl_pkts;
+- iq->stats.bytes_sent += compl_bytes;
+- iq->stats.sgentry_sent += compl_sg;
++ iq->stats->instr_completed += compl_pkts;
++ iq->stats->bytes_sent += compl_bytes;
++ iq->stats->sgentry_sent += compl_sg;
+ iq->flush_index = fi;
+
+ netdev_tx_completed_queue(iq->netdev_q, compl_pkts, compl_bytes);
+@@ -187,6 +187,7 @@ static int octep_setup_iq(struct octep_device *oct, int q_no)
+ iq->netdev = oct->netdev;
+ iq->dev = &oct->pdev->dev;
+ iq->q_no = q_no;
++ iq->stats = &oct->stats_iq[q_no];
+ iq->max_count = CFG_GET_IQ_NUM_DESC(oct->conf);
+ iq->ring_size_mask = iq->max_count - 1;
+ iq->fill_threshold = CFG_GET_IQ_DB_MIN(oct->conf);
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+index 875a2c34091ffe..58fb39dda977c0 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+@@ -170,8 +170,8 @@ struct octep_iq {
+ */
+ u16 flush_index;
+
+- /* Statistics for this input queue. */
+- struct octep_iq_stats stats;
++ /* Pointer to statistics for this input queue. */
++ struct octep_iq_stats *stats;
+
+ /* Pointer to the Virtual Base addr of the input ring. */
+ struct octep_tx_desc_hw *desc_ring;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c
+index a1979b45e355c6..12ddb77141cc35 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c
+@@ -121,12 +121,9 @@ static void octep_vf_get_ethtool_stats(struct net_device *netdev,
+ iface_tx_stats = &oct->iface_tx_stats;
+ iface_rx_stats = &oct->iface_rx_stats;
+
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_vf_iq *iq = oct->iq[q];
+- struct octep_vf_oq *oq = oct->oq[q];
+-
+- tx_busy_errors += iq->stats.tx_busy;
+- rx_alloc_errors += oq->stats.alloc_failures;
++ for (q = 0; q < OCTEP_VF_MAX_QUEUES; q++) {
++ tx_busy_errors += oct->stats_iq[q].tx_busy;
++ rx_alloc_errors += oct->stats_oq[q].alloc_failures;
+ }
+ i = 0;
+ data[i++] = rx_alloc_errors;
+@@ -141,22 +138,18 @@ static void octep_vf_get_ethtool_stats(struct net_device *netdev,
+ data[i++] = iface_rx_stats->dropped_octets_fifo_full;
+
+ /* Per Tx Queue stats */
+- for (q = 0; q < oct->num_iqs; q++) {
+- struct octep_vf_iq *iq = oct->iq[q];
+-
+- data[i++] = iq->stats.instr_posted;
+- data[i++] = iq->stats.instr_completed;
+- data[i++] = iq->stats.bytes_sent;
+- data[i++] = iq->stats.tx_busy;
++ for (q = 0; q < OCTEP_VF_MAX_QUEUES; q++) {
++ data[i++] = oct->stats_iq[q].instr_posted;
++ data[i++] = oct->stats_iq[q].instr_completed;
++ data[i++] = oct->stats_iq[q].bytes_sent;
++ data[i++] = oct->stats_iq[q].tx_busy;
+ }
+
+ /* Per Rx Queue stats */
+ for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_vf_oq *oq = oct->oq[q];
+-
+- data[i++] = oq->stats.packets;
+- data[i++] = oq->stats.bytes;
+- data[i++] = oq->stats.alloc_failures;
++ data[i++] = oct->stats_oq[q].packets;
++ data[i++] = oct->stats_oq[q].bytes;
++ data[i++] = oct->stats_oq[q].alloc_failures;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+index 4c699514fd57a0..18c922dd5fc64d 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+@@ -574,7 +574,7 @@ static int octep_vf_iq_full_check(struct octep_vf_iq *iq)
+ * caused queues to get re-enabled after
+ * being stopped
+ */
+- iq->stats.restart_cnt++;
++ iq->stats->restart_cnt++;
+ fallthrough;
+ case 1: /* Queue left enabled, since IQ is not yet full*/
+ return 0;
+@@ -731,7 +731,7 @@ static netdev_tx_t octep_vf_start_xmit(struct sk_buff *skb,
+ /* Flush the hw descriptors before writing to doorbell */
+ smp_wmb();
+ writel(iq->fill_cnt, iq->doorbell_reg);
+- iq->stats.instr_posted += iq->fill_cnt;
++ iq->stats->instr_posted += iq->fill_cnt;
+ iq->fill_cnt = 0;
+ return NETDEV_TX_OK;
+ }
+@@ -786,14 +786,11 @@ static void octep_vf_get_stats64(struct net_device *netdev,
+ tx_bytes = 0;
+ rx_packets = 0;
+ rx_bytes = 0;
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_vf_iq *iq = oct->iq[q];
+- struct octep_vf_oq *oq = oct->oq[q];
+-
+- tx_packets += iq->stats.instr_completed;
+- tx_bytes += iq->stats.bytes_sent;
+- rx_packets += oq->stats.packets;
+- rx_bytes += oq->stats.bytes;
++ for (q = 0; q < OCTEP_VF_MAX_QUEUES; q++) {
++ tx_packets += oct->stats_iq[q].instr_completed;
++ tx_bytes += oct->stats_iq[q].bytes_sent;
++ rx_packets += oct->stats_oq[q].packets;
++ rx_bytes += oct->stats_oq[q].bytes;
+ }
+ stats->tx_packets = tx_packets;
+ stats->tx_bytes = tx_bytes;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
+index 5769f62545cd44..1a352f41f823cd 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
+@@ -246,11 +246,17 @@ struct octep_vf_device {
+ /* Pointers to Octeon Tx queues */
+ struct octep_vf_iq *iq[OCTEP_VF_MAX_IQ];
+
++ /* Per iq stats */
++ struct octep_vf_iq_stats stats_iq[OCTEP_VF_MAX_IQ];
++
+ /* Rx queues (OQ: Output Queue) */
+ u16 num_oqs;
+ /* Pointers to Octeon Rx queues */
+ struct octep_vf_oq *oq[OCTEP_VF_MAX_OQ];
+
++ /* Per oq stats */
++ struct octep_vf_oq_stats stats_oq[OCTEP_VF_MAX_OQ];
++
+ /* Hardware port number of the PCIe interface */
+ u16 pcie_port;
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
+index 82821bc28634b6..d70c8be3cfc40b 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
+@@ -87,7 +87,7 @@ static int octep_vf_oq_refill(struct octep_vf_device *oct, struct octep_vf_oq *o
+ page = dev_alloc_page();
+ if (unlikely(!page)) {
+ dev_err(oq->dev, "refill: rx buffer alloc failed\n");
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+
+@@ -98,7 +98,7 @@ static int octep_vf_oq_refill(struct octep_vf_device *oct, struct octep_vf_oq *o
+ "OQ-%d buffer refill: DMA mapping error!\n",
+ oq->q_no);
+ put_page(page);
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+ oq->buff_info[refill_idx].page = page;
+@@ -134,6 +134,7 @@ static int octep_vf_setup_oq(struct octep_vf_device *oct, int q_no)
+ oq->netdev = oct->netdev;
+ oq->dev = &oct->pdev->dev;
+ oq->q_no = q_no;
++ oq->stats = &oct->stats_oq[q_no];
+ oq->max_count = CFG_GET_OQ_NUM_DESC(oct->conf);
+ oq->ring_size_mask = oq->max_count - 1;
+ oq->buffer_size = CFG_GET_OQ_BUF_SIZE(oct->conf);
+@@ -458,8 +459,8 @@ static int __octep_vf_oq_process_rx(struct octep_vf_device *oct,
+
+ oq->host_read_idx = read_idx;
+ oq->refill_count += desc_used;
+- oq->stats.packets += pkt;
+- oq->stats.bytes += rx_bytes;
++ oq->stats->packets += pkt;
++ oq->stats->bytes += rx_bytes;
+
+ return pkt;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h
+index fe46838b5200ff..9e296b7d7e3494 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h
+@@ -187,7 +187,7 @@ struct octep_vf_oq {
+ u8 __iomem *pkts_sent_reg;
+
+ /* Statistics for this OQ. */
+- struct octep_vf_oq_stats stats;
++ struct octep_vf_oq_stats *stats;
+
+ /* Packets pending to be processed */
+ u32 pkts_pending;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c
+index 47a5c054fdb636..8180e5ce3d7efe 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c
+@@ -82,9 +82,9 @@ int octep_vf_iq_process_completions(struct octep_vf_iq *iq, u16 budget)
+ }
+
+ iq->pkts_processed += compl_pkts;
+- iq->stats.instr_completed += compl_pkts;
+- iq->stats.bytes_sent += compl_bytes;
+- iq->stats.sgentry_sent += compl_sg;
++ iq->stats->instr_completed += compl_pkts;
++ iq->stats->bytes_sent += compl_bytes;
++ iq->stats->sgentry_sent += compl_sg;
+ iq->flush_index = fi;
+
+ netif_subqueue_completed_wake(iq->netdev, iq->q_no, compl_pkts,
+@@ -186,6 +186,7 @@ static int octep_vf_setup_iq(struct octep_vf_device *oct, int q_no)
+ iq->netdev = oct->netdev;
+ iq->dev = &oct->pdev->dev;
+ iq->q_no = q_no;
++ iq->stats = &oct->stats_iq[q_no];
+ iq->max_count = CFG_GET_IQ_NUM_DESC(oct->conf);
+ iq->ring_size_mask = iq->max_count - 1;
+ iq->fill_threshold = CFG_GET_IQ_DB_MIN(oct->conf);
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h
+index f338b975103c30..1cede90e3a5fae 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h
+@@ -129,7 +129,7 @@ struct octep_vf_iq {
+ u16 flush_index;
+
+ /* Statistics for this input queue. */
+- struct octep_vf_iq_stats stats;
++ struct octep_vf_iq_stats *stats;
+
+ /* Pointer to the Virtual Base addr of the input ring. */
+ struct octep_vf_tx_desc_hw *desc_ring;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index b306ae79bf97a6..863196ad0ddc73 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -322,17 +322,16 @@ static void mlx5_pps_out(struct work_struct *work)
+ }
+ }
+
+-static void mlx5_timestamp_overflow(struct work_struct *work)
++static long mlx5_timestamp_overflow(struct ptp_clock_info *ptp_info)
+ {
+- struct delayed_work *dwork = to_delayed_work(work);
+ struct mlx5_core_dev *mdev;
+ struct mlx5_timer *timer;
+ struct mlx5_clock *clock;
+ unsigned long flags;
+
+- timer = container_of(dwork, struct mlx5_timer, overflow_work);
+- clock = container_of(timer, struct mlx5_clock, timer);
++ clock = container_of(ptp_info, struct mlx5_clock, ptp_info);
+ mdev = container_of(clock, struct mlx5_core_dev, clock);
++ timer = &clock->timer;
+
+ if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+ goto out;
+@@ -343,7 +342,7 @@ static void mlx5_timestamp_overflow(struct work_struct *work)
+ write_sequnlock_irqrestore(&clock->lock, flags);
+
+ out:
+- schedule_delayed_work(&timer->overflow_work, timer->overflow_period);
++ return timer->overflow_period;
+ }
+
+ static int mlx5_ptp_settime_real_time(struct mlx5_core_dev *mdev,
+@@ -521,6 +520,7 @@ static int mlx5_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+ timer->cycles.mult = mult;
+ mlx5_update_clock_info_page(mdev);
+ write_sequnlock_irqrestore(&clock->lock, flags);
++ ptp_schedule_worker(clock->ptp, timer->overflow_period);
+
+ return 0;
+ }
+@@ -856,6 +856,7 @@ static const struct ptp_clock_info mlx5_ptp_clock_info = {
+ .settime64 = mlx5_ptp_settime,
+ .enable = NULL,
+ .verify = NULL,
++ .do_aux_work = mlx5_timestamp_overflow,
+ };
+
+ static int mlx5_query_mtpps_pin_mode(struct mlx5_core_dev *mdev, u8 pin,
+@@ -1056,12 +1057,11 @@ static void mlx5_init_overflow_period(struct mlx5_clock *clock)
+ do_div(ns, NSEC_PER_SEC / HZ);
+ timer->overflow_period = ns;
+
+- INIT_DELAYED_WORK(&timer->overflow_work, mlx5_timestamp_overflow);
+- if (timer->overflow_period)
+- schedule_delayed_work(&timer->overflow_work, 0);
+- else
++ if (!timer->overflow_period) {
++ timer->overflow_period = HZ;
+ mlx5_core_warn(mdev,
+- "invalid overflow period, overflow_work is not scheduled\n");
++ "invalid overflow period, overflow_work is scheduled once per second\n");
++ }
+
+ if (clock_info)
+ clock_info->overflow_period = timer->overflow_period;
+@@ -1176,6 +1176,9 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+
+ MLX5_NB_INIT(&clock->pps_nb, mlx5_pps_event, PPS_EVENT);
+ mlx5_eq_notifier_register(mdev, &clock->pps_nb);
++
++ if (clock->ptp)
++ ptp_schedule_worker(clock->ptp, 0);
+ }
+
+ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+@@ -1192,7 +1195,6 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+ }
+
+ cancel_work_sync(&clock->pps_info.out_work);
+- cancel_delayed_work_sync(&clock->timer.overflow_work);
+
+ if (mdev->clock_info) {
+ free_page((unsigned long)mdev->clock_info);
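/*
 * Sketch of the PTP aux-work pattern the mlx5 change above adopts,
 * with a hypothetical callback: instead of a self-rescheduling delayed
 * work item, the clock registers a .do_aux_work handler and the PTP
 * core re-runs it after the number of jiffies the handler returns.
 */
static long demo_overflow_aux_work(void *clock)
{
	(void)clock;	/* the real callback derives its state from this */
	/* fold elapsed hardware cycles into the software timecounter
	 * here, so a counter wraparound is never missed ...
	 */
	return 100;	/* ask to run again in 100 jiffies */
}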
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index b13c7e958e6b4e..3c0d067c360992 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2201,8 +2201,6 @@ static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ struct device *dev = common->dev;
+ int i;
+
+- devm_remove_action(dev, am65_cpsw_nuss_free_tx_chns, common);
+-
+ common->tx_ch_rate_msk = 0;
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+@@ -2224,8 +2222,6 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+
+- netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
+- am65_cpsw_nuss_tx_poll);
+ hrtimer_init(&tx_chn->tx_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
+ tx_chn->tx_hrtimer.function = &am65_cpsw_nuss_tx_timer_callback;
+
+@@ -2238,9 +2234,21 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
+ tx_chn->id, tx_chn->irq, ret);
+ goto err;
+ }
++
++ netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
++ am65_cpsw_nuss_tx_poll);
+ }
+
++ return 0;
++
+ err:
++ for (--i ; i >= 0 ; i--) {
++ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
++
++ netif_napi_del(&tx_chn->napi_tx);
++ devm_free_irq(dev, tx_chn->irq, tx_chn);
++ }
++
+ return ret;
+ }
+
+@@ -2321,12 +2329,10 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
+ goto err;
+ }
+
++ return 0;
++
+ err:
+- i = devm_add_action(dev, am65_cpsw_nuss_free_tx_chns, common);
+- if (i) {
+- dev_err(dev, "Failed to add free_tx_chns action %d\n", i);
+- return i;
+- }
++ am65_cpsw_nuss_free_tx_chns(common);
+
+ return ret;
+ }
+@@ -2354,7 +2360,6 @@ static void am65_cpsw_nuss_remove_rx_chns(struct am65_cpsw_common *common)
+
+ rx_chn = &common->rx_chns;
+ flows = rx_chn->flows;
+- devm_remove_action(dev, am65_cpsw_nuss_free_rx_chns, common);
+
+ for (i = 0; i < common->rx_ch_num_flows; i++) {
+ if (!(flows[i].irq < 0))
+@@ -2453,7 +2458,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ i, &rx_flow_cfg);
+ if (ret) {
+ dev_err(dev, "Failed to init rx flow%d %d\n", i, ret);
+- goto err;
++ goto err_flow;
+ }
+ if (!i)
+ fdqring_id =
+@@ -2465,14 +2470,12 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ dev_err(dev, "Failed to get rx dma irq %d\n",
+ flow->irq);
+ ret = flow->irq;
+- goto err;
++ goto err_flow;
+ }
+
+ snprintf(flow->name,
+ sizeof(flow->name), "%s-rx%d",
+ dev_name(dev), i);
+- netif_napi_add(common->dma_ndev, &flow->napi_rx,
+- am65_cpsw_nuss_rx_poll);
+ hrtimer_init(&flow->rx_hrtimer, CLOCK_MONOTONIC,
+ HRTIMER_MODE_REL_PINNED);
+ flow->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback;
+@@ -2485,20 +2488,28 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ dev_err(dev, "failure requesting rx %d irq %u, %d\n",
+ i, flow->irq, ret);
+ flow->irq = -EINVAL;
+- goto err;
++ goto err_flow;
+ }
++
++ netif_napi_add(common->dma_ndev, &flow->napi_rx,
++ am65_cpsw_nuss_rx_poll);
+ }
+
+ /* setup classifier to route priorities to flows */
+ cpsw_ale_classifier_setup_default(common->ale, common->rx_ch_num_flows);
+
+-err:
+- i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common);
+- if (i) {
+- dev_err(dev, "Failed to add free_rx_chns action %d\n", i);
+- return i;
++ return 0;
++
++err_flow:
++ for (--i; i >= 0 ; i--) {
++ flow = &rx_chn->flows[i];
++ netif_napi_del(&flow->napi_rx);
++ devm_free_irq(dev, flow->irq, flow);
+ }
+
++err:
++ am65_cpsw_nuss_free_rx_chns(common);
++
+ return ret;
+ }
+
+@@ -3324,7 +3335,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+ return ret;
+ ret = am65_cpsw_nuss_init_rx_chns(common);
+ if (ret)
+- return ret;
++ goto err_remove_tx;
+
+ /* The DMA Channels are not guaranteed to be in a clean state.
+ * Reset and disable them to ensure that they are back to the
+@@ -3345,7 +3356,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+
+ ret = am65_cpsw_nuss_register_devlink(common);
+ if (ret)
+- return ret;
++ goto err_remove_rx;
+
+ for (i = 0; i < common->port_num; i++) {
+ port = &common->ports[i];
+@@ -3376,6 +3387,10 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+ err_cleanup_ndev:
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_unregister_devlink(common);
++err_remove_rx:
++ am65_cpsw_nuss_remove_rx_chns(common);
++err_remove_tx:
++ am65_cpsw_nuss_remove_tx_chns(common);
+
+ return ret;
+ }
+@@ -3395,6 +3410,8 @@ int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
+ return ret;
+
+ ret = am65_cpsw_nuss_init_rx_chns(common);
++ if (ret)
++ am65_cpsw_nuss_remove_tx_chns(common);
+
+ return ret;
+ }
+@@ -3652,6 +3669,8 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
+ */
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_unregister_devlink(common);
++ am65_cpsw_nuss_remove_rx_chns(common);
++ am65_cpsw_nuss_remove_tx_chns(common);
+ am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
+ am65_cpsw_disable_serdes_phy(common);
+@@ -3713,8 +3732,10 @@ static int am65_cpsw_nuss_resume(struct device *dev)
+ if (ret)
+ return ret;
+ ret = am65_cpsw_nuss_init_rx_chns(common);
+- if (ret)
++ if (ret) {
++ am65_cpsw_nuss_remove_tx_chns(common);
+ return ret;
++ }
+
+ /* If RX IRQ was disabled before suspend, keep it disabled */
+ for (i = 0; i < common->rx_ch_num_flows; i++) {
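/*
 * Sketch of the unwind-on-error loops added to am65-cpsw above, with
 * hypothetical acquire/release hooks: when setup fails at index i,
 * walk back over indices [0, i) and release exactly what was acquired,
 * in reverse order.
 */
static int demo_setup_all(int n, int (*acquire)(int), void (*release)(int))
{
	int i, ret;

	for (i = 0; i < n; i++) {
		ret = acquire(i);
		if (ret)
			goto err;
	}
	return 0;

err:
	for (--i; i >= 0; i--)	/* index i failed; undo 0..i-1 */
		release(i);
	return ret;
}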
+diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
+index 5af5ade4fc6418..ae43103c76cbd8 100644
+--- a/drivers/net/phy/nxp-c45-tja11xx.c
++++ b/drivers/net/phy/nxp-c45-tja11xx.c
+@@ -1296,6 +1296,8 @@ static int nxp_c45_soft_reset(struct phy_device *phydev)
+ if (ret)
+ return ret;
+
++ usleep_range(2000, 2050);
++
+ return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
+ VEND1_DEVICE_CONTROL, ret,
+ !(ret & DEVICE_CONTROL_RESET), 20000,
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 6fc60950100c7c..fae1a0ab36bdfe 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -580,7 +580,7 @@ static inline bool tun_not_capable(struct tun_struct *tun)
+ struct net *net = dev_net(tun->dev);
+
+ return ((uid_valid(tun->owner) && !uid_eq(cred->euid, tun->owner)) ||
+- (gid_valid(tun->group) && !in_egroup_p(tun->group))) &&
++ (gid_valid(tun->group) && !in_egroup_p(tun->group))) &&
+ !ns_capable(net->user_ns, CAP_NET_ADMIN);
+ }
+
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 46afb95ffabe3b..a19789b571905a 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -61,7 +61,18 @@
+ #define IPHETH_USBINTF_PROTO 1
+
+ #define IPHETH_IP_ALIGN 2 /* padding at front of URB */
+-#define IPHETH_NCM_HEADER_SIZE (12 + 96) /* NCMH + NCM0 */
++/* On iOS devices, NCM headers in RX have a fixed size regardless of DPE count:
++ * - NTH16 (NCMH): 12 bytes, as per CDC NCM 1.0 spec
++ * - NDP16 (NCM0): 96 bytes, of which
++ * - NDP16 fixed header: 8 bytes
++ * - maximum of 22 DPEs (21 datagrams + trailer), 4 bytes each
++ */
++#define IPHETH_NDP16_MAX_DPE 22
++#define IPHETH_NDP16_HEADER_SIZE (sizeof(struct usb_cdc_ncm_ndp16) + \
++ IPHETH_NDP16_MAX_DPE * \
++ sizeof(struct usb_cdc_ncm_dpe16))
++#define IPHETH_NCM_HEADER_SIZE (sizeof(struct usb_cdc_ncm_nth16) + \
++ IPHETH_NDP16_HEADER_SIZE)
+ #define IPHETH_TX_BUF_SIZE ETH_FRAME_LEN
+ #define IPHETH_RX_BUF_SIZE_LEGACY (IPHETH_IP_ALIGN + ETH_FRAME_LEN)
+ #define IPHETH_RX_BUF_SIZE_NCM 65536
+@@ -207,15 +218,23 @@ static int ipheth_rcvbulk_callback_legacy(struct urb *urb)
+ return ipheth_consume_skb(buf, len, dev);
+ }
+
++/* In "NCM mode", the iOS device encapsulates RX (phone->computer) traffic
++ * in NCM Transfer Blocks (similarly to CDC NCM). However, unlike reverse
++ * tethering (handled by the `cdc_ncm` driver), regular tethering is not
++ * compliant with the CDC NCM spec, as the device is missing the necessary
++ * descriptors, and TX (computer->phone) traffic is not encapsulated
++ * at all. Thus `ipheth` implements a very limited subset of the spec with
++ * the sole purpose of parsing RX URBs.
++ */
+ static int ipheth_rcvbulk_callback_ncm(struct urb *urb)
+ {
+ struct usb_cdc_ncm_nth16 *ncmh;
+ struct usb_cdc_ncm_ndp16 *ncm0;
+ struct usb_cdc_ncm_dpe16 *dpe;
+ struct ipheth_device *dev;
++ u16 dg_idx, dg_len;
+ int retval = -EINVAL;
+ char *buf;
+- int len;
+
+ dev = urb->context;
+
+@@ -226,40 +245,42 @@ static int ipheth_rcvbulk_callback_ncm(struct urb *urb)
+
+ ncmh = urb->transfer_buffer;
+ if (ncmh->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH16_SIGN) ||
+- le16_to_cpu(ncmh->wNdpIndex) >= urb->actual_length) {
+- dev->net->stats.rx_errors++;
+- return retval;
+- }
++ /* On iOS, NDP16 directly follows NTH16 */
++ ncmh->wNdpIndex != cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16)))
++ goto rx_error;
+
+- ncm0 = urb->transfer_buffer + le16_to_cpu(ncmh->wNdpIndex);
+- if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN) ||
+- le16_to_cpu(ncmh->wHeaderLength) + le16_to_cpu(ncm0->wLength) >=
+- urb->actual_length) {
+- dev->net->stats.rx_errors++;
+- return retval;
+- }
++ ncm0 = urb->transfer_buffer + sizeof(struct usb_cdc_ncm_nth16);
++ if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN))
++ goto rx_error;
+
+ dpe = ncm0->dpe16;
+- while (le16_to_cpu(dpe->wDatagramIndex) != 0 &&
+- le16_to_cpu(dpe->wDatagramLength) != 0) {
+- if (le16_to_cpu(dpe->wDatagramIndex) >= urb->actual_length ||
+- le16_to_cpu(dpe->wDatagramIndex) +
+- le16_to_cpu(dpe->wDatagramLength) > urb->actual_length) {
++ for (int dpe_i = 0; dpe_i < IPHETH_NDP16_MAX_DPE; ++dpe_i, ++dpe) {
++ dg_idx = le16_to_cpu(dpe->wDatagramIndex);
++ dg_len = le16_to_cpu(dpe->wDatagramLength);
++
++ /* Null DPE must be present after last datagram pointer entry
++ * (3.3.1 USB CDC NCM spec v1.0)
++ */
++ if (dg_idx == 0 && dg_len == 0)
++ return 0;
++
++ if (dg_idx < IPHETH_NCM_HEADER_SIZE ||
++ dg_idx >= urb->actual_length ||
++ dg_len > urb->actual_length - dg_idx) {
+ dev->net->stats.rx_length_errors++;
+ return retval;
+ }
+
+- buf = urb->transfer_buffer + le16_to_cpu(dpe->wDatagramIndex);
+- len = le16_to_cpu(dpe->wDatagramLength);
++ buf = urb->transfer_buffer + dg_idx;
+
+- retval = ipheth_consume_skb(buf, len, dev);
++ retval = ipheth_consume_skb(buf, dg_len, dev);
+ if (retval != 0)
+ return retval;
+-
+- dpe++;
+ }
+
+- return 0;
++rx_error:
++ dev->net->stats.rx_errors++;
++ return retval;
+ }
+
+ static void ipheth_rcvbulk_callback(struct urb *urb)
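The ipheth hunk above replaces the open-ended DPE walk with a bounded loop and a stricter validity predicate: a datagram must start beyond the fixed 108-byte header, fit entirely inside the URB, and a null DPE must terminate the list. A minimal userspace restatement of that predicate, assuming the 12-byte NTH16 plus 96-byte NDP16 layout described in the header comment (an illustrative sketch, not driver code):

#include <stdint.h>
#include <stddef.h>

#define NCM_HEADER_SIZE (12 + 96) /* NTH16 + fixed-size NDP16 */

/* Nonzero when a datagram pointer entry lies fully inside the URB. */
static int dpe_in_bounds(uint16_t dg_idx, uint16_t dg_len, size_t urb_len)
{
	return dg_idx >= NCM_HEADER_SIZE &&
	       dg_idx < urb_len &&
	       dg_len <= urb_len - dg_idx;
}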
+diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
+index 1341374a4588a0..616ecc38d1726c 100644
+--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
++++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
+@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
+ if (likely(cpu < tq_number))
+ tq = &adapter->tx_queue[cpu];
+ else
+- tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
++ tq = &adapter->tx_queue[cpu % tq_number];
+
+ return tq;
+ }
+@@ -124,6 +124,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ u32 buf_size;
+ u32 dw2;
+
++ spin_lock_irq(&tq->tx_lock);
+ dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
+ dw2 |= xdpf->len;
+ ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
+@@ -134,6 +135,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+
+ if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
+ tq->stats.tx_ring_full++;
++ spin_unlock_irq(&tq->tx_lock);
+ return -ENOSPC;
+ }
+
+@@ -142,8 +144,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
+ xdpf->data, buf_size,
+ DMA_TO_DEVICE);
+- if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
++ if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
++ spin_unlock_irq(&tq->tx_lock);
+ return -EFAULT;
++ }
+ tbi->map_type |= VMXNET3_MAP_SINGLE;
+ } else { /* XDP buffer from page pool */
+ page = virt_to_page(xdpf->data);
+@@ -182,6 +186,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ dma_wmb();
+ gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
+ VMXNET3_TXD_GEN);
++ spin_unlock_irq(&tq->tx_lock);
+
+ /* No need to handle the case when tx_num_deferred doesn't reach
+ * threshold. Backend driver at hypervisor side will poll and reset
+@@ -225,6 +230,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
+ {
+ struct vmxnet3_adapter *adapter = netdev_priv(dev);
+ struct vmxnet3_tx_queue *tq;
++ struct netdev_queue *nq;
+ int i;
+
+ if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))
+@@ -236,6 +242,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
+ if (tq->stopped)
+ return -ENETDOWN;
+
++ nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
++
++ __netif_tx_lock(nq, smp_processor_id());
+ for (i = 0; i < n; i++) {
+ if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
+ tq->stats.xdp_xmit_err++;
+@@ -243,6 +252,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
+ }
+ }
+ tq->stats.xdp_xmit += i;
++ __netif_tx_unlock(nq);
+
+ return i;
+ }
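Two independent fixes sit in the vmxnet3 hunks above: the descriptor ring is now serialized with tq->tx_lock (and the xmit batch with __netif_tx_lock), and the fallback queue selection switches from reciprocal_scale() to a plain modulo. The latter matters because reciprocal_scale() is a multiply-shift that only distributes inputs spanning the full 32-bit range (hashes); for a small CPU id it collapses to queue 0. A standalone demonstration, with the helper reimplemented locally for illustration:

#include <stdint.h>
#include <stdio.h>

/* Same math as the kernel's reciprocal_scale(): map val into [0, ep_ro). */
static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
{
	return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
}

int main(void)
{
	uint32_t tq_number = 4;

	for (uint32_t cpu = 0; cpu < 8; cpu++)
		printf("cpu %u -> reciprocal %u, modulo %u\n", cpu,
		       reciprocal_scale(cpu, tq_number), cpu % tq_number);
	return 0; /* reciprocal_scale() yields 0 for every small cpu id */
}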
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index da72fd2d541ff7..20ab9b1eea2836 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -540,6 +540,11 @@ void brcmf_txfinalize(struct brcmf_if *ifp, struct sk_buff *txp, bool success)
+ struct ethhdr *eh;
+ u16 type;
+
++ if (!ifp) {
++ brcmu_pkt_buf_free_skb(txp);
++ return;
++ }
++
+ eh = (struct ethhdr *)(txp->data);
+ type = ntohs(eh->h_proto);
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+index af930e34c21f8a..22c064848124d8 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+@@ -97,13 +97,13 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ /* Set board-type to the first string of the machine compatible prop */
+ root = of_find_node_by_path("/");
+ if (root && err) {
+- char *board_type;
++ char *board_type = NULL;
+ const char *tmp;
+
+- of_property_read_string_index(root, "compatible", 0, &tmp);
+-
+ /* get rid of '/' in the compatible string to be able to find the FW */
+- board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
++ if (!of_property_read_string_index(root, "compatible", 0, &tmp))
++ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
++
+ if (!board_type) {
+ of_node_put(root);
+ return;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
+index d69879e1bd870c..d362c4337616b4 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
+@@ -23423,6 +23423,9 @@ wlc_phy_iqcal_gainparams_nphy(struct brcms_phy *pi, u16 core_no,
+ break;
+ }
+
++ if (WARN_ON(k == NPHY_IQCAL_NUMGAINS))
++ return;
++
+ params->txgm = tbl_iqcal_gainparams_nphy[band_idx][k][1];
+ params->pga = tbl_iqcal_gainparams_nphy[band_idx][k][2];
+ params->pad = tbl_iqcal_gainparams_nphy[band_idx][k][3];
+diff --git a/drivers/net/wireless/intel/iwlwifi/Makefile b/drivers/net/wireless/intel/iwlwifi/Makefile
+index 64c1233142451a..a3052684b341f2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/Makefile
++++ b/drivers/net/wireless/intel/iwlwifi/Makefile
+@@ -11,7 +11,7 @@ iwlwifi-objs += pcie/ctxt-info.o pcie/ctxt-info-gen3.o
+ iwlwifi-objs += pcie/trans-gen2.o pcie/tx-gen2.o
+ iwlwifi-$(CONFIG_IWLDVM) += cfg/1000.o cfg/2000.o cfg/5000.o cfg/6000.o
+ iwlwifi-$(CONFIG_IWLMVM) += cfg/7000.o cfg/8000.o cfg/9000.o cfg/22000.o
+-iwlwifi-$(CONFIG_IWLMVM) += cfg/ax210.o cfg/bz.o cfg/sc.o
++iwlwifi-$(CONFIG_IWLMVM) += cfg/ax210.o cfg/bz.o cfg/sc.o cfg/dr.o
+ iwlwifi-objs += iwl-dbg-tlv.o
+ iwlwifi-objs += iwl-trans.o
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/dr.c b/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
+new file mode 100644
+index 00000000000000..ab7c0f8d54f425
+--- /dev/null
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
+@@ -0,0 +1,167 @@
++// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
++/*
++ * Copyright (C) 2024 Intel Corporation
++ */
++#include <linux/module.h>
++#include <linux/stringify.h>
++#include "iwl-config.h"
++#include "iwl-prph.h"
++#include "fw/api/txq.h"
++
++/* Highest firmware API version supported */
++#define IWL_DR_UCODE_API_MAX 96
++
++/* Lowest firmware API version supported */
++#define IWL_DR_UCODE_API_MIN 96
++
++/* NVM versions */
++#define IWL_DR_NVM_VERSION 0x0a1d
++
++/* Memory offsets and lengths */
++#define IWL_DR_DCCM_OFFSET 0x800000 /* LMAC1 */
++#define IWL_DR_DCCM_LEN 0x10000 /* LMAC1 */
++#define IWL_DR_DCCM2_OFFSET 0x880000
++#define IWL_DR_DCCM2_LEN 0x8000
++#define IWL_DR_SMEM_OFFSET 0x400000
++#define IWL_DR_SMEM_LEN 0xD0000
++
++#define IWL_DR_A_PE_A_FW_PRE "iwlwifi-dr-a0-pe-a0"
++#define IWL_BR_A_PET_A_FW_PRE "iwlwifi-br-a0-petc-a0"
++#define IWL_BR_A_PE_A_FW_PRE "iwlwifi-br-a0-pe-a0"
++
++#define IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(api) \
++ IWL_DR_A_PE_A_FW_PRE "-" __stringify(api) ".ucode"
++#define IWL_BR_A_PET_A_FW_MODULE_FIRMWARE(api) \
++ IWL_BR_A_PET_A_FW_PRE "-" __stringify(api) ".ucode"
++#define IWL_BR_A_PE_A_FW_MODULE_FIRMWARE(api) \
++ IWL_BR_A_PE_A_FW_PRE "-" __stringify(api) ".ucode"
++
++static const struct iwl_base_params iwl_dr_base_params = {
++ .eeprom_size = OTP_LOW_IMAGE_SIZE_32K,
++ .num_of_queues = 512,
++ .max_tfd_queue_size = 65536,
++ .shadow_ram_support = true,
++ .led_compensation = 57,
++ .wd_timeout = IWL_LONG_WD_TIMEOUT,
++ .max_event_log_size = 512,
++ .shadow_reg_enable = true,
++ .pcie_l1_allowed = true,
++};
++
++#define IWL_DEVICE_DR_COMMON \
++ .ucode_api_max = IWL_DR_UCODE_API_MAX, \
++ .ucode_api_min = IWL_DR_UCODE_API_MIN, \
++ .led_mode = IWL_LED_RF_STATE, \
++ .nvm_hw_section_num = 10, \
++ .non_shared_ant = ANT_B, \
++ .dccm_offset = IWL_DR_DCCM_OFFSET, \
++ .dccm_len = IWL_DR_DCCM_LEN, \
++ .dccm2_offset = IWL_DR_DCCM2_OFFSET, \
++ .dccm2_len = IWL_DR_DCCM2_LEN, \
++ .smem_offset = IWL_DR_SMEM_OFFSET, \
++ .smem_len = IWL_DR_SMEM_LEN, \
++ .apmg_not_supported = true, \
++ .trans.mq_rx_supported = true, \
++ .vht_mu_mimo_supported = true, \
++ .mac_addr_from_csr = 0x30, \
++ .nvm_ver = IWL_DR_NVM_VERSION, \
++ .trans.rf_id = true, \
++ .trans.gen2 = true, \
++ .nvm_type = IWL_NVM_EXT, \
++ .dbgc_supported = true, \
++ .min_umac_error_event_table = 0xD0000, \
++ .d3_debug_data_base_addr = 0x401000, \
++ .d3_debug_data_length = 60 * 1024, \
++ .mon_smem_regs = { \
++ .write_ptr = { \
++ .addr = LDBG_M2S_BUF_WPTR, \
++ .mask = LDBG_M2S_BUF_WPTR_VAL_MSK, \
++ }, \
++ .cycle_cnt = { \
++ .addr = LDBG_M2S_BUF_WRAP_CNT, \
++ .mask = LDBG_M2S_BUF_WRAP_CNT_VAL_MSK, \
++ }, \
++ }, \
++ .trans.umac_prph_offset = 0x300000, \
++ .trans.device_family = IWL_DEVICE_FAMILY_DR, \
++ .trans.base_params = &iwl_dr_base_params, \
++ .min_txq_size = 128, \
++ .gp2_reg_addr = 0xd02c68, \
++ .min_ba_txq_size = IWL_DEFAULT_QUEUE_SIZE_EHT, \
++ .mon_dram_regs = { \
++ .write_ptr = { \
++ .addr = DBGC_CUR_DBGBUF_STATUS, \
++ .mask = DBGC_CUR_DBGBUF_STATUS_OFFSET_MSK, \
++ }, \
++ .cycle_cnt = { \
++ .addr = DBGC_DBGBUF_WRAP_AROUND, \
++ .mask = 0xffffffff, \
++ }, \
++ .cur_frag = { \
++ .addr = DBGC_CUR_DBGBUF_STATUS, \
++ .mask = DBGC_CUR_DBGBUF_STATUS_IDX_MSK, \
++ }, \
++ }, \
++ .mon_dbgi_regs = { \
++ .write_ptr = { \
++ .addr = DBGI_SRAM_FIFO_POINTERS, \
++ .mask = DBGI_SRAM_FIFO_POINTERS_WR_PTR_MSK, \
++ }, \
++ }
++
++#define IWL_DEVICE_DR \
++ IWL_DEVICE_DR_COMMON, \
++ .uhb_supported = true, \
++ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM, \
++ .num_rbds = IWL_NUM_RBDS_DR_EHT, \
++ .ht_params = &iwl_22000_ht_params
++
++/*
++ * This size was picked according to 8 MSDUs inside 512 A-MSDUs in an
++ * A-MPDU, with additional overhead to account for processing time.
++ */
++#define IWL_NUM_RBDS_DR_EHT (512 * 16)
++
++const struct iwl_cfg_trans_params iwl_dr_trans_cfg = {
++ .device_family = IWL_DEVICE_FAMILY_DR,
++ .base_params = &iwl_dr_base_params,
++ .mq_rx_supported = true,
++ .rf_id = true,
++ .gen2 = true,
++ .integrated = true,
++ .umac_prph_offset = 0x300000,
++ .xtal_latency = 12000,
++ .low_latency_xtal = true,
++ .ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
++};
++
++const char iwl_dr_name[] = "Intel(R) TBD Dr device";
++
++const struct iwl_cfg iwl_cfg_dr = {
++ .fw_name_mac = "dr",
++ IWL_DEVICE_DR,
++};
++
++const struct iwl_cfg_trans_params iwl_br_trans_cfg = {
++ .device_family = IWL_DEVICE_FAMILY_DR,
++ .base_params = &iwl_dr_base_params,
++ .mq_rx_supported = true,
++ .rf_id = true,
++ .gen2 = true,
++ .integrated = true,
++ .umac_prph_offset = 0x300000,
++ .xtal_latency = 12000,
++ .low_latency_xtal = true,
++ .ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
++};
++
++const char iwl_br_name[] = "Intel(R) TBD Br device";
++
++const struct iwl_cfg iwl_cfg_br = {
++ .fw_name_mac = "br",
++ IWL_DEVICE_DR,
++};
++
++MODULE_FIRMWARE(IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_BR_A_PET_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_BR_A_PE_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX));
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 0bc32291815e1b..a26c5573d20916 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -108,7 +108,7 @@ static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+ size_t expected_size)
+ {
+ union acpi_object *obj;
+- int ret = 0;
++ int ret;
+
+ obj = iwl_acpi_get_dsm_object(dev, rev, func, NULL, guid);
+ if (IS_ERR(obj)) {
+@@ -123,8 +123,10 @@ static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+ } else if (obj->type == ACPI_TYPE_BUFFER) {
+ __le64 le_value = 0;
+
+- if (WARN_ON_ONCE(expected_size > sizeof(le_value)))
+- return -EINVAL;
++ if (WARN_ON_ONCE(expected_size > sizeof(le_value))) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ /* if the buffer size doesn't match the expected size */
+ if (obj->buffer.length != expected_size)
+@@ -145,8 +147,9 @@ static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+ }
+
+ IWL_DEBUG_DEV_RADIO(dev,
+- "ACPI: DSM method evaluated: func=%d, ret=%d\n",
+- func, ret);
++ "ACPI: DSM method evaluated: func=%d, value=%lld\n",
++ func, *value);
++ ret = 0;
+ out:
+ ACPI_FREE(obj);
+ return ret;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 17721bb47e2511..89744dbedb4a5a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -38,6 +38,7 @@ enum iwl_device_family {
+ IWL_DEVICE_FAMILY_AX210,
+ IWL_DEVICE_FAMILY_BZ,
+ IWL_DEVICE_FAMILY_SC,
++ IWL_DEVICE_FAMILY_DR,
+ };
+
+ /*
+@@ -424,6 +425,8 @@ struct iwl_cfg {
+ #define IWL_CFG_MAC_TYPE_SC2 0x49
+ #define IWL_CFG_MAC_TYPE_SC2F 0x4A
+ #define IWL_CFG_MAC_TYPE_BZ_W 0x4B
++#define IWL_CFG_MAC_TYPE_BR 0x4C
++#define IWL_CFG_MAC_TYPE_DR 0x4D
+
+ #define IWL_CFG_RF_TYPE_TH 0x105
+ #define IWL_CFG_RF_TYPE_TH1 0x108
+@@ -434,6 +437,7 @@ struct iwl_cfg {
+ #define IWL_CFG_RF_TYPE_GF 0x10D
+ #define IWL_CFG_RF_TYPE_FM 0x112
+ #define IWL_CFG_RF_TYPE_WH 0x113
++#define IWL_CFG_RF_TYPE_PE 0x114
+
+ #define IWL_CFG_RF_ID_TH 0x1
+ #define IWL_CFG_RF_ID_TH1 0x1
+@@ -506,6 +510,8 @@ extern const struct iwl_cfg_trans_params iwl_ma_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_bz_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_gl_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_sc_trans_cfg;
++extern const struct iwl_cfg_trans_params iwl_dr_trans_cfg;
++extern const struct iwl_cfg_trans_params iwl_br_trans_cfg;
+ extern const char iwl9162_name[];
+ extern const char iwl9260_name[];
+ extern const char iwl9260_1_name[];
+@@ -551,6 +557,8 @@ extern const char iwl_mtp_name[];
+ extern const char iwl_sc_name[];
+ extern const char iwl_sc2_name[];
+ extern const char iwl_sc2f_name[];
++extern const char iwl_dr_name[];
++extern const char iwl_br_name[];
+ #if IS_ENABLED(CONFIG_IWLDVM)
+ extern const struct iwl_cfg iwl5300_agn_cfg;
+ extern const struct iwl_cfg iwl5100_agn_cfg;
+@@ -658,6 +666,8 @@ extern const struct iwl_cfg iwl_cfg_gl;
+ extern const struct iwl_cfg iwl_cfg_sc;
+ extern const struct iwl_cfg iwl_cfg_sc2;
+ extern const struct iwl_cfg iwl_cfg_sc2f;
++extern const struct iwl_cfg iwl_cfg_dr;
++extern const struct iwl_cfg iwl_cfg_br;
+ #endif /* CONFIG_IWLMVM */
+
+ #endif /* __IWL_CONFIG_H__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 8fb2aa28224212..9dd0e0a51ce5cc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -540,6 +540,9 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0xE340, PCI_ANY_ID, iwl_sc_trans_cfg)},
+ {IWL_PCI_DEVICE(0xD340, PCI_ANY_ID, iwl_sc_trans_cfg)},
+ {IWL_PCI_DEVICE(0x6E70, PCI_ANY_ID, iwl_sc_trans_cfg)},
++
++/* Dr devices */
++ {IWL_PCI_DEVICE(0x272F, PCI_ANY_ID, iwl_dr_trans_cfg)},
+ #endif /* CONFIG_IWLMVM */
+
+ {0}
+@@ -1182,6 +1185,19 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ iwl_cfg_sc2f, iwl_sc2f_name),
++/* Dr */
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_DR, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_dr, iwl_dr_name),
++
++/* Br */
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BR, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_br, iwl_br_name),
+ #endif /* CONFIG_IWLMVM */
+ };
+ EXPORT_SYMBOL_IF_IWLWIFI_KUNIT(iwl_dev_info_table);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index bfdbc15abaa9a7..928e0b07a9bf18 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -2,9 +2,14 @@
+ /* Copyright (C) 2020 MediaTek Inc. */
+
+ #include <linux/firmware.h>
++#include <linux/moduleparam.h>
+ #include "mt7915.h"
+ #include "eeprom.h"
+
++static bool enable_6ghz;
++module_param(enable_6ghz, bool, 0644);
++MODULE_PARM_DESC(enable_6ghz, "Enable 6 GHz instead of 5 GHz on hardware that supports both");
++
+ static int mt7915_eeprom_load_precal(struct mt7915_dev *dev)
+ {
+ struct mt76_dev *mdev = &dev->mt76;
+@@ -170,8 +175,20 @@ static void mt7915_eeprom_parse_band_config(struct mt7915_phy *phy)
+ phy->mt76->cap.has_6ghz = true;
+ return;
+ case MT_EE_V2_BAND_SEL_5GHZ_6GHZ:
+- phy->mt76->cap.has_5ghz = true;
+- phy->mt76->cap.has_6ghz = true;
++ if (enable_6ghz) {
++ phy->mt76->cap.has_6ghz = true;
++ u8p_replace_bits(&eeprom[MT_EE_WIFI_CONF + band],
++ MT_EE_V2_BAND_SEL_6GHZ,
++ MT_EE_WIFI_CONF0_BAND_SEL);
++ } else {
++ phy->mt76->cap.has_5ghz = true;
++ u8p_replace_bits(&eeprom[MT_EE_WIFI_CONF + band],
++ MT_EE_V2_BAND_SEL_5GHZ,
++ MT_EE_WIFI_CONF0_BAND_SEL);
++ }
++ /* force to buffer mode */
++ dev->flash_mode = true;
++
+ return;
+ default:
+ phy->mt76->cap.has_2ghz = true;
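With this change, hardware whose EEPROM advertises both 5 GHz and 6 GHz defaults to 5 GHz, and 6 GHz has to be opted into at load time, for example via a modprobe option file (module name assumed to be mt7915e for the PCI driver):

options mt7915e enable_6ghz=1

The parameter is writable (0644), but the band choice appears to take effect only while the EEPROM is parsed at probe, so changing it requires reloading the module.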
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 77d82ccd73079d..bc983ab10b0c7a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -1239,14 +1239,14 @@ int mt7915_register_device(struct mt7915_dev *dev)
+ if (ret)
+ goto unreg_dev;
+
+- ieee80211_queue_work(mt76_hw(dev), &dev->init_work);
+-
+ if (phy2) {
+ ret = mt7915_register_ext_phy(dev, phy2);
+ if (ret)
+ goto unreg_thermal;
+ }
+
++ ieee80211_queue_work(mt76_hw(dev), &dev->init_work);
++
+ dev->recovery.hw_init_done = true;
+
+ ret = mt7915_init_debugfs(&dev->phy);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+index 8aa4f0203208ab..e3459295ad884e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+@@ -21,6 +21,9 @@ static const struct usb_device_id mt7921u_device_table[] = {
+ /* Netgear, Inc. [A8000,AXE3000] */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9060, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
++ /* TP-Link TXE50UH */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x35bc, 0x0107, 0xff, 0xff, 0xff),
++ .driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
+ { },
+ };
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h
+index c269942b3f4ab1..af8d17b9e012ca 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h
+@@ -197,9 +197,9 @@ enum rtl8821a_h2c_cmd {
+
+ /* _MEDIA_STATUS_RPT_PARM_CMD1 */
+ #define SET_H2CCMD_MSRRPT_PARM_OPMODE(__cmd, __value) \
+- u8p_replace_bits(__cmd + 1, __value, BIT(0))
++ u8p_replace_bits(__cmd, __value, BIT(0))
+ #define SET_H2CCMD_MSRRPT_PARM_MACID_IND(__cmd, __value) \
+- u8p_replace_bits(__cmd + 1, __value, BIT(1))
++ u8p_replace_bits(__cmd, __value, BIT(1))
+
+ /* AP_OFFLOAD */
+ #define SET_H2CCMD_AP_OFFLOAD_ON(__cmd, __value) \
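The rtl8821ae fix above drops a stray "+ 1" so the OPMODE and MACID_IND fields are written into the byte the caller actually passes, not one byte past it. A self-contained illustration with a local stand-in for u8p_replace_bits() (the real helper comes from linux/bitfield.h):

#include <stdint.h>
#include <stdio.h>

/* Local stand-in: put val into the bits of *p selected by mask. */
static void replace_bits8(uint8_t *p, uint8_t val, uint8_t mask)
{
	unsigned int shift = (unsigned int)__builtin_ctz(mask);

	*p = (uint8_t)((*p & ~mask) | ((uint8_t)(val << shift) & mask));
}

int main(void)
{
	uint8_t cmd[2] = { 0, 0 };

	replace_bits8(cmd + 1, 1, 0x01); /* pre-fix: lands in cmd[1] */
	replace_bits8(cmd, 1, 0x01);     /* post-fix: lands in cmd[0] */
	printf("cmd[0]=%#x cmd[1]=%#x\n", cmd[0], cmd[1]);
	return 0;
}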
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 945117afe1438b..c808bb271e9d0f 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -508,12 +508,12 @@ struct rtw_5g_txpwr_idx {
+ struct rtw_5g_vht_ns_pwr_idx_diff vht_2s_diff;
+ struct rtw_5g_vht_ns_pwr_idx_diff vht_3s_diff;
+ struct rtw_5g_vht_ns_pwr_idx_diff vht_4s_diff;
+-};
++} __packed;
+
+ struct rtw_txpwr_idx {
+ struct rtw_2g_txpwr_idx pwr_idx_2g;
+ struct rtw_5g_txpwr_idx pwr_idx_5g;
+-};
++} __packed;
+
+ struct rtw_channel_params {
+ u8 center_chan;
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8703b.c b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+index 222608de33cdec..a977aad9c650f5 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8703b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+@@ -911,7 +911,7 @@ static void rtw8703b_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
+ rtw_write32_mask(rtwdev, REG_FPGA0_RFMOD, BIT_MASK_RFMOD, 0x0);
+ rtw_write32_mask(rtwdev, REG_FPGA1_RFMOD, BIT_MASK_RFMOD, 0x0);
+ rtw_write32_mask(rtwdev, REG_OFDM0_TX_PSD_NOISE,
+- GENMASK(31, 20), 0x0);
++ GENMASK(31, 30), 0x0);
+ rtw_write32(rtwdev, REG_BBRX_DFIR, 0x4A880000);
+ rtw_write32(rtwdev, REG_OFDM0_A_TX_AFE, 0x19F60000);
+ break;
+@@ -1257,9 +1257,9 @@ static u8 rtw8703b_iqk_rx_path(struct rtw_dev *rtwdev,
+ rtw_write32(rtwdev, REG_RXIQK_TONE_A_11N, 0x38008c1c);
+ rtw_write32(rtwdev, REG_TX_IQK_TONE_B, 0x38008c1c);
+ rtw_write32(rtwdev, REG_RX_IQK_TONE_B, 0x38008c1c);
+- rtw_write32(rtwdev, REG_TXIQK_PI_A_11N, 0x8216000f);
++ rtw_write32(rtwdev, REG_TXIQK_PI_A_11N, 0x8214030f);
+ rtw_write32(rtwdev, REG_RXIQK_PI_A_11N, 0x28110000);
+- rtw_write32(rtwdev, REG_TXIQK_PI_B, 0x28110000);
++ rtw_write32(rtwdev, REG_TXIQK_PI_B, 0x82110000);
+ rtw_write32(rtwdev, REG_RXIQK_PI_B, 0x28110000);
+
+ /* LOK setting */
+@@ -1431,7 +1431,7 @@ void rtw8703b_iqk_fill_a_matrix(struct rtw_dev *rtwdev, const s32 result[])
+ return;
+
+ tmp_rx_iqi |= FIELD_PREP(BIT_MASK_RXIQ_S1_X, result[IQK_S1_RX_X]);
+- tmp_rx_iqi |= FIELD_PREP(BIT_MASK_RXIQ_S1_Y1, result[IQK_S1_RX_X]);
++ tmp_rx_iqi |= FIELD_PREP(BIT_MASK_RXIQ_S1_Y1, result[IQK_S1_RX_Y]);
+ rtw_write32(rtwdev, REG_A_RXIQI, tmp_rx_iqi);
+ rtw_write32_mask(rtwdev, REG_RXIQK_MATRIX_LSB_11N, BIT_MASK_RXIQ_S1_Y2,
+ BIT_SET_RXIQ_S1_Y2(result[IQK_S1_RX_Y]));
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8723x.h b/drivers/net/wireless/realtek/rtw88/rtw8723x.h
+index e93bfce994bf82..a99af527c92cfb 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8723x.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8723x.h
+@@ -47,7 +47,7 @@ struct rtw8723xe_efuse {
+ u8 device_id[2];
+ u8 sub_vendor_id[2];
+ u8 sub_device_id[2];
+-};
++} __packed;
+
+ struct rtw8723xu_efuse {
+ u8 res4[48]; /* 0xd0 */
+@@ -56,12 +56,12 @@ struct rtw8723xu_efuse {
+ u8 usb_option; /* 0x104 */
+ u8 res5[2]; /* 0x105 */
+ u8 mac_addr[ETH_ALEN]; /* 0x107 */
+-};
++} __packed;
+
+ struct rtw8723xs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+ u8 mac_addr[ETH_ALEN]; /* 0x11a */
+-};
++} __packed;
+
+ struct rtw8723x_efuse {
+ __le16 rtl_id;
+@@ -96,7 +96,7 @@ struct rtw8723x_efuse {
+ struct rtw8723xu_efuse u;
+ struct rtw8723xs_efuse s;
+ };
+-};
++} __packed;
+
+ #define RTW8723X_IQK_ADDA_REG_NUM 16
+ #define RTW8723X_IQK_MAC8_REG_NUM 3
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.h b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+index 91ed921407bbe7..10172f4d74bf28 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+@@ -27,7 +27,7 @@ struct rtw8821cu_efuse {
+ u8 res11[0xcf];
+ u8 package_type; /* 0x1fb */
+ u8 res12[0x4];
+-};
++} __packed;
+
+ struct rtw8821ce_efuse {
+ u8 mac_addr[ETH_ALEN]; /* 0xd0 */
+@@ -47,7 +47,8 @@ struct rtw8821ce_efuse {
+ u8 ltr_en:1;
+ u8 res1:2;
+ u8 obff:2;
+- u8 res2:3;
++ u8 res2_1:1;
++ u8 res2_2:2;
+ u8 obff_cap:2;
+ u8 res3:4;
+ u8 res4[3];
+@@ -63,7 +64,7 @@ struct rtw8821ce_efuse {
+ u8 res6:1;
+ u8 port_t_power_on_value:5;
+ u8 res7;
+-};
++} __packed;
+
+ struct rtw8821cs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+@@ -101,7 +102,7 @@ struct rtw8821c_efuse {
+ struct rtw8821cu_efuse u;
+ struct rtw8821cs_efuse s;
+ };
+-};
++} __packed;
+
+ static inline void
+ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.h b/drivers/net/wireless/realtek/rtw88/rtw8822b.h
+index cf85e63966a1c7..e815bc97c218af 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.h
+@@ -27,7 +27,7 @@ struct rtw8822bu_efuse {
+ u8 res11[0xcf];
+ u8 package_type; /* 0x1fb */
+ u8 res12[0x4];
+-};
++} __packed;
+
+ struct rtw8822be_efuse {
+ u8 mac_addr[ETH_ALEN]; /* 0xd0 */
+@@ -47,7 +47,8 @@ struct rtw8822be_efuse {
+ u8 ltr_en:1;
+ u8 res1:2;
+ u8 obff:2;
+- u8 res2:3;
++ u8 res2_1:1;
++ u8 res2_2:2;
+ u8 obff_cap:2;
+ u8 res3:4;
+ u8 res4[3];
+@@ -63,7 +64,7 @@ struct rtw8822be_efuse {
+ u8 res6:1;
+ u8 port_t_power_on_value:5;
+ u8 res7;
+-};
++} __packed;
+
+ struct rtw8822bs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+@@ -103,7 +104,7 @@ struct rtw8822b_efuse {
+ struct rtw8822bu_efuse u;
+ struct rtw8822bs_efuse s;
+ };
+-};
++} __packed;
+
+ static inline void
+ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.h b/drivers/net/wireless/realtek/rtw88/rtw8822c.h
+index e2b383d633cd23..fc62b67a15f216 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.h
+@@ -14,7 +14,7 @@ struct rtw8822cu_efuse {
+ u8 res1[3];
+ u8 mac_addr[ETH_ALEN]; /* 0x157 */
+ u8 res2[0x3d];
+-};
++} __packed;
+
+ struct rtw8822cs_efuse {
+ u8 res0[0x4a]; /* 0x120 */
+@@ -39,7 +39,8 @@ struct rtw8822ce_efuse {
+ u8 ltr_en:1;
+ u8 res1:2;
+ u8 obff:2;
+- u8 res2:3;
++ u8 res2_1:1;
++ u8 res2_2:2;
+ u8 obff_cap:2;
+ u8 res3:4;
+ u8 class_code[3];
+@@ -55,7 +56,7 @@ struct rtw8822ce_efuse {
+ u8 res6:1;
+ u8 port_t_power_on_value:5;
+ u8 res7;
+-};
++} __packed;
+
+ struct rtw8822c_efuse {
+ __le16 rtl_id;
+@@ -102,7 +103,7 @@ struct rtw8822c_efuse {
+ struct rtw8822cu_efuse u;
+ struct rtw8822cs_efuse s;
+ };
+-};
++} __packed;
+
+ enum rtw8822c_dpk_agc_phase {
+ RTW_DPK_GAIN_CHECK,
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index b67e551fcee3ef..1d62b38526c486 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -1193,6 +1193,8 @@ static void rtw_sdio_indicate_tx_status(struct rtw_dev *rtwdev,
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_hw *hw = rtwdev->hw;
+
++ skb_pull(skb, rtwdev->chip->tx_pkt_desc_sz);
++
+ /* enqueue to wait for tx report */
+ if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
+ rtw_tx_report_enqueue(rtwdev, skb, tx_data->sn);
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index 4b47b45f897cbc..5c31639b4cade9 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -3892,7 +3892,6 @@ static void rtw89_phy_cfo_set_crystal_cap(struct rtw89_dev *rtwdev,
+
+ if (!force && cfo->crystal_cap == crystal_cap)
+ return;
+- crystal_cap = clamp_t(u8, crystal_cap, 0, 127);
+ if (chip->chip_id == RTL8852A || chip->chip_id == RTL8851B) {
+ rtw89_phy_cfo_set_xcap_reg(rtwdev, true, crystal_cap);
+ rtw89_phy_cfo_set_xcap_reg(rtwdev, false, crystal_cap);
+@@ -4015,7 +4014,7 @@ static void rtw89_phy_cfo_crystal_cap_adjust(struct rtw89_dev *rtwdev,
+ s32 curr_cfo)
+ {
+ struct rtw89_cfo_tracking_info *cfo = &rtwdev->cfo_tracking;
+- s8 crystal_cap = cfo->crystal_cap;
++ int crystal_cap = cfo->crystal_cap;
+ s32 cfo_abs = abs(curr_cfo);
+ int sign;
+
+@@ -4036,15 +4035,17 @@ static void rtw89_phy_cfo_crystal_cap_adjust(struct rtw89_dev *rtwdev,
+ }
+ sign = curr_cfo > 0 ? 1 : -1;
+ if (cfo_abs > CFO_TRK_STOP_TH_4)
+- crystal_cap += 7 * sign;
++ crystal_cap += 3 * sign;
+ else if (cfo_abs > CFO_TRK_STOP_TH_3)
+- crystal_cap += 5 * sign;
+- else if (cfo_abs > CFO_TRK_STOP_TH_2)
+ crystal_cap += 3 * sign;
++ else if (cfo_abs > CFO_TRK_STOP_TH_2)
++ crystal_cap += 1 * sign;
+ else if (cfo_abs > CFO_TRK_STOP_TH_1)
+ crystal_cap += 1 * sign;
+ else
+ return;
++
++ crystal_cap = clamp(crystal_cap, 0, 127);
+ rtw89_phy_cfo_set_crystal_cap(rtwdev, (u8)crystal_cap, false);
+ rtw89_debug(rtwdev, RTW89_DBG_CFO,
+ "X_cap{Curr,Default}={0x%x,0x%x}\n",
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.h b/drivers/net/wireless/realtek/rtw89/phy.h
+index 7e335c02ee6fbf..9bb9c9c8e7a1b0 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.h
++++ b/drivers/net/wireless/realtek/rtw89/phy.h
+@@ -57,7 +57,7 @@
+ #define CFO_TRK_STOP_TH_4 (30 << 2)
+ #define CFO_TRK_STOP_TH_3 (20 << 2)
+ #define CFO_TRK_STOP_TH_2 (10 << 2)
+-#define CFO_TRK_STOP_TH_1 (00 << 2)
++#define CFO_TRK_STOP_TH_1 (03 << 2)
+ #define CFO_TRK_STOP_TH (2 << 2)
+ #define CFO_SW_COMP_FINE_TUNE (2 << 2)
+ #define CFO_PERIOD_CNT 15
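Taken together, the rtw89 phy.c and phy.h hunks shrink the crystal-cap adjustment steps and clamp to 0..127 after the adjustment, computed in a plain int. Previously the step was added to an s8 already holding values near the hardware cap of 127, which can wrap negative. A userspace illustration of the difference:

#include <stdint.h>
#include <stdio.h>

static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : v > hi ? hi : v;
}

int main(void)
{
	int8_t old_cap = 125; /* pre-fix arithmetic type, near the cap */
	int new_cap = 125;    /* post-fix arithmetic type */

	old_cap += 7; /* old largest step: wraps to -124 on typical ABIs */
	new_cap = clamp_int(new_cap + 3, 0, 127); /* new largest step: 127 */

	printf("s8 result: %d, clamped int result: %d\n", old_cap, new_cap);
	return 0;
}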
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.c b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
+index 04517bd3325a2a..a066977af0be5c 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_pcie.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
+@@ -6,6 +6,7 @@
+ #include <linux/acpi.h>
+ #include <linux/bitfield.h>
+ #include <linux/module.h>
++#include <linux/suspend.h>
+ #include <net/rtnetlink.h>
+
+ #include "iosm_ipc_imem.h"
+@@ -18,6 +19,7 @@ MODULE_LICENSE("GPL v2");
+ /* WWAN GUID */
+ static guid_t wwan_acpi_guid = GUID_INIT(0xbad01b75, 0x22a8, 0x4f48, 0x87, 0x92,
+ 0xbd, 0xde, 0x94, 0x67, 0x74, 0x7d);
++static bool pci_registered;
+
+ static void ipc_pcie_resources_release(struct iosm_pcie *ipc_pcie)
+ {
+@@ -448,7 +450,6 @@ static struct pci_driver iosm_ipc_driver = {
+ },
+ .id_table = iosm_ipc_ids,
+ };
+-module_pci_driver(iosm_ipc_driver);
+
+ int ipc_pcie_addr_map(struct iosm_pcie *ipc_pcie, unsigned char *data,
+ size_t size, dma_addr_t *mapping, int direction)
+@@ -530,3 +531,56 @@ void ipc_pcie_kfree_skb(struct iosm_pcie *ipc_pcie, struct sk_buff *skb)
+ IPC_CB(skb)->mapping = 0;
+ dev_kfree_skb(skb);
+ }
++
++static int pm_notify(struct notifier_block *nb, unsigned long mode, void *_unused)
++{
++ if (mode == PM_HIBERNATION_PREPARE || mode == PM_RESTORE_PREPARE) {
++ if (pci_registered) {
++ pci_unregister_driver(&iosm_ipc_driver);
++ pci_registered = false;
++ }
++ } else if (mode == PM_POST_HIBERNATION || mode == PM_POST_RESTORE) {
++ if (!pci_registered) {
++ int ret;
++
++ ret = pci_register_driver(&iosm_ipc_driver);
++ if (ret) {
++ pr_err(KBUILD_MODNAME ": unable to re-register PCI driver: %d\n",
++ ret);
++ } else {
++ pci_registered = true;
++ }
++ }
++ }
++
++ return 0;
++}
++
++static struct notifier_block pm_notifier = {
++ .notifier_call = pm_notify,
++};
++
++static int __init iosm_ipc_driver_init(void)
++{
++ int ret;
++
++ ret = pci_register_driver(&iosm_ipc_driver);
++ if (ret)
++ return ret;
++
++ pci_registered = true;
++
++ register_pm_notifier(&pm_notifier);
++
++ return 0;
++}
++module_init(iosm_ipc_driver_init);
++
++static void __exit iosm_ipc_driver_exit(void)
++{
++ unregister_pm_notifier(&pm_notifier);
++
++ if (pci_registered)
++ pci_unregister_driver(&iosm_ipc_driver);
++}
++module_exit(iosm_ipc_driver_exit);
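The net effect of the iosm change: module_pci_driver() is replaced with explicit init/exit so a PM notifier can unregister the PCI driver before a hibernation image is created or restored (PM_HIBERNATION_PREPARE/PM_RESTORE_PREPARE) and re-register it afterwards. The modem is then re-probed from scratch on resume instead of having state restored from the image, which is presumably why ordinary suspend/resume callbacks were not sufficient here.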
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 4c409efd8cec17..8da50df56b0795 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1691,7 +1691,13 @@ int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count)
+
+ status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count, NULL, 0,
+ &result);
+- if (status < 0)
++
++ /*
++ * It's either a kernel error or the host observed a connection
++ * lost. In either case it's not possible to communicate with the
++ * controller, so enter the error code path.
++ */
++ if (status < 0 || status == NVME_SC_HOST_PATH_ERROR)
+ return status;
+
+ /*
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index b81af7919e94c4..682234da2fabe0 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2080,7 +2080,8 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
+ nvme_fc_complete_rq(rq);
+
+ check_error:
+- if (terminate_assoc && ctrl->ctrl.state != NVME_CTRL_RESETTING)
++ if (terminate_assoc &&
++ nvme_ctrl_state(&ctrl->ctrl) != NVME_CTRL_RESETTING)
+ queue_work(nvme_reset_wq, &ctrl->ioerr_work);
+ }
+
+@@ -2534,6 +2535,8 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ static void
+ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ {
++ enum nvme_ctrl_state state = nvme_ctrl_state(&ctrl->ctrl);
++
+ /*
+ * if an error (io timeout, etc) while (re)connecting, the remote
+ * port requested terminating of the association (disconnect_ls)
+@@ -2541,7 +2544,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ * the controller. Abort any ios on the association and let the
+ * create_association error path resolve things.
+ */
+- if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
++ if (state == NVME_CTRL_CONNECTING) {
+ __nvme_fc_abort_outstanding_ios(ctrl, true);
+ set_bit(ASSOC_FAILED, &ctrl->flags);
+ dev_warn(ctrl->ctrl.device,
+@@ -2551,7 +2554,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ }
+
+ /* Otherwise, only proceed if in LIVE state - e.g. on first error */
+- if (ctrl->ctrl.state != NVME_CTRL_LIVE)
++ if (state != NVME_CTRL_LIVE)
+ return;
+
+ dev_warn(ctrl->ctrl.device,
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 76b3f7b396c86b..cc74682dc0d4e9 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2987,7 +2987,9 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ * because of high power consumption (> 2 Watt) in s2idle
+ * sleep. Only some boards with Intel CPU are affected.
+ */
+- if (dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
++ if (dmi_match(DMI_BOARD_NAME, "DN50Z-140HC-YD") ||
++ dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
++ dmi_match(DMI_BOARD_NAME, "GXxMRXx") ||
+ dmi_match(DMI_BOARD_NAME, "PH4PG31") ||
+ dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1") ||
+ dmi_match(DMI_BOARD_NAME, "PH6PG01_PH6PG71"))
+diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
+index b68a9e5f1ea395..3a41b9ab0f13c4 100644
+--- a/drivers/nvme/host/sysfs.c
++++ b/drivers/nvme/host/sysfs.c
+@@ -792,7 +792,7 @@ static umode_t nvme_tls_attrs_are_visible(struct kobject *kobj,
+ return a->mode;
+ }
+
+-const struct attribute_group nvme_tls_attrs_group = {
++static const struct attribute_group nvme_tls_attrs_group = {
+ .attrs = nvme_tls_attrs,
+ .is_visible = nvme_tls_attrs_are_visible,
+ };
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index e1a15fbc6ad025..d00a3b015635c2 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1780,6 +1780,8 @@ static int __nvmem_cell_entry_write(struct nvmem_cell_entry *cell, void *buf, si
+ return -EINVAL;
+
+ if (cell->bit_offset || cell->nbits) {
++ if (len != BITS_TO_BYTES(cell->nbits) && len != cell->bytes)
++ return -EINVAL;
+ buf = nvmem_cell_prepare_write_buffer(cell, buf, len);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+diff --git a/drivers/nvmem/imx-ocotp-ele.c b/drivers/nvmem/imx-ocotp-ele.c
+index 1ba49449769874..ca6dd71d8a2e29 100644
+--- a/drivers/nvmem/imx-ocotp-ele.c
++++ b/drivers/nvmem/imx-ocotp-ele.c
+@@ -71,13 +71,15 @@ static int imx_ocotp_reg_read(void *context, unsigned int offset, void *val, siz
+ u32 *buf;
+ void *p;
+ int i;
++ u8 skipbytes;
+
+- index = offset;
+- num_bytes = round_up(bytes, 4);
+- count = num_bytes >> 2;
++ if (offset + bytes > priv->data->size)
++ bytes = priv->data->size - offset;
+
+- if (count > ((priv->data->size >> 2) - index))
+- count = (priv->data->size >> 2) - index;
++ index = offset >> 2;
++ skipbytes = offset - (index << 2);
++ num_bytes = round_up(bytes + skipbytes, 4);
++ count = num_bytes >> 2;
+
+ p = kzalloc(num_bytes, GFP_KERNEL);
+ if (!p)
+@@ -100,7 +102,7 @@ static int imx_ocotp_reg_read(void *context, unsigned int offset, void *val, siz
+ *buf++ = readl_relaxed(reg + (i << 2));
+ }
+
+- memcpy(val, (u8 *)p, bytes);
++ memcpy(val, ((u8 *)p) + skipbytes, bytes);
+
+ mutex_unlock(&priv->lock);
+
+@@ -109,6 +111,26 @@ static int imx_ocotp_reg_read(void *context, unsigned int offset, void *val, siz
+ return 0;
+ };
+
++static int imx_ocotp_cell_pp(void *context, const char *id, int index,
++ unsigned int offset, void *data, size_t bytes)
++{
++ u8 *buf = data;
++ int i;
++
++ /* Deal with some post processing of nvmem cell data */
++ if (id && !strcmp(id, "mac-address"))
++ for (i = 0; i < bytes / 2; i++)
++ swap(buf[i], buf[bytes - i - 1]);
++
++ return 0;
++}
++
++static void imx_ocotp_fixup_dt_cell_info(struct nvmem_device *nvmem,
++ struct nvmem_cell_info *cell)
++{
++ cell->read_post_process = imx_ocotp_cell_pp;
++}
++
+ static int imx_ele_ocotp_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -131,10 +153,12 @@ static int imx_ele_ocotp_probe(struct platform_device *pdev)
+ priv->config.owner = THIS_MODULE;
+ priv->config.size = priv->data->size;
+ priv->config.reg_read = priv->data->reg_read;
+- priv->config.word_size = 4;
++ priv->config.word_size = 1;
+ priv->config.stride = 1;
+ priv->config.priv = priv;
+ priv->config.read_only = true;
++ priv->config.add_legacy_fixed_of_cells = true;
++ priv->config.fixup_dt_cell_info = imx_ocotp_fixup_dt_cell_info;
+ mutex_init(&priv->lock);
+
+ nvmem = devm_nvmem_register(dev, &priv->config);
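The rewritten imx_ocotp_reg_read() supports byte-granular offsets (word_size drops from 4 to 1): it rounds the request out to whole 32-bit fuse words, reads those, and copies the result back out at the intra-word offset. A worked userspace example of the same index math, assuming 4-byte fuse words:

#include <stdio.h>

#define round_up(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned int offset = 5, bytes = 6; /* an unaligned 6-byte read */
	unsigned int index = offset >> 2;                 /* word 1 */
	unsigned int skipbytes = offset - (index << 2);   /* 1 byte in */
	unsigned int num_bytes = round_up(bytes + skipbytes, 4); /* 8 */

	printf("read %u words starting at word %u, copy out at +%u\n",
	       num_bytes >> 2, index, skipbytes);
	return 0;
}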
+diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c
+index 9aa8f42faa4c93..4f1cca6eab71e1 100644
+--- a/drivers/nvmem/qcom-spmi-sdam.c
++++ b/drivers/nvmem/qcom-spmi-sdam.c
+@@ -144,6 +144,7 @@ static int sdam_probe(struct platform_device *pdev)
+ sdam->sdam_config.owner = THIS_MODULE;
+ sdam->sdam_config.add_legacy_fixed_of_cells = true;
+ sdam->sdam_config.stride = 1;
++ sdam->sdam_config.size = sdam->size;
+ sdam->sdam_config.word_size = 1;
+ sdam->sdam_config.reg_read = sdam_read;
+ sdam->sdam_config.reg_write = sdam_write;
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index a565b8c91da593..0e708a863e4aa3 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -200,17 +200,15 @@ static u64 of_bus_pci_map(__be32 *addr, const __be32 *range, int na, int ns,
+
+ static int __of_address_resource_bounds(struct resource *r, u64 start, u64 size)
+ {
+- u64 end = start;
+-
+ if (overflows_type(start, r->start))
+ return -EOVERFLOW;
+- if (size && check_add_overflow(end, size - 1, &end))
+- return -EOVERFLOW;
+- if (overflows_type(end, r->end))
+- return -EOVERFLOW;
+
+ r->start = start;
+- r->end = end;
++
++ if (!size)
++ r->end = wrapping_sub(typeof(r->end), r->start, 1);
++ else if (size && check_add_overflow(r->start, size - 1, &r->end))
++ return -EOVERFLOW;
+
+ return 0;
+ }
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 63161d0f72b4e8..4bb87e0cbaf179 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -841,10 +841,10 @@ struct device_node *of_find_node_opts_by_path(const char *path, const char **opt
+ /* The path could begin with an alias */
+ if (*path != '/') {
+ int len;
+- const char *p = separator;
++ const char *p = strchrnul(path, '/');
+
+- if (!p)
+- p = strchrnul(path, '/');
++ if (separator && separator < p)
++ p = separator;
+ len = p - path;
+
+ /* of_aliases must not be NULL */
+@@ -1493,7 +1493,6 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ * specifier into the out_args structure, keeping the
+ * bits specified in <list>-map-pass-thru.
+ */
+- match_array = map - new_size;
+ for (i = 0; i < new_size; i++) {
+ __be32 val = *(map - new_size + i);
+
+@@ -1502,6 +1501,7 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ val |= cpu_to_be32(out_args->args[i]) & pass[i];
+ }
+
++ initial_match_array[i] = val;
+ out_args->args[i] = be32_to_cpu(val);
+ }
+ out_args->args_count = list_size = new_size;
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 45445a1600a968..e45d6d3a8dc678 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -360,12 +360,12 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
+
+ prop = of_get_flat_dt_prop(node, "alignment", &len);
+ if (prop) {
+- if (len != dt_root_addr_cells * sizeof(__be32)) {
++ if (len != dt_root_size_cells * sizeof(__be32)) {
+ pr_err("invalid alignment property in '%s' node.\n",
+ uname);
+ return -EINVAL;
+ }
+- align = dt_mem_next_cell(dt_root_addr_cells, &prop);
++ align = dt_mem_next_cell(dt_root_size_cells, &prop);
+ }
+
+ nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index cc8ff4a014368c..b58e89ea566b8d 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -222,19 +222,30 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ if ((flags & PCI_BASE_ADDRESS_MEM_TYPE_64) && (bar & 1))
+ return -EINVAL;
+
+- reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+-
+- if (!(flags & PCI_BASE_ADDRESS_SPACE))
+- type = PCIE_ATU_TYPE_MEM;
+- else
+- type = PCIE_ATU_TYPE_IO;
++ /*
++ * Certain EPF drivers dynamically change the physical address of a BAR
++ * (i.e. they call set_bar() twice, without ever calling clear_bar(), as
++ * calling clear_bar() would clear the BAR's PCI address assigned by the
++ * host).
++ */
++ if (ep->epf_bar[bar]) {
++ /*
++ * We can only dynamically change a BAR if the new BAR size and
++ * BAR flags do not differ from the existing configuration.
++ */
++ if (ep->epf_bar[bar]->barno != bar ||
++ ep->epf_bar[bar]->size != size ||
++ ep->epf_bar[bar]->flags != flags)
++ return -EINVAL;
+
+- ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar);
+- if (ret)
+- return ret;
++ /*
++ * When dynamically changing a BAR, skip writing the BAR reg, as
++ * that would clear the BAR's PCI address assigned by the host.
++ */
++ goto config_atu;
++ }
+
+- if (ep->epf_bar[bar])
+- return 0;
++ reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+
+ dw_pcie_dbi_ro_wr_en(pci);
+
+@@ -246,9 +257,20 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0);
+ }
+
+- ep->epf_bar[bar] = epf_bar;
+ dw_pcie_dbi_ro_wr_dis(pci);
+
++config_atu:
++ if (!(flags & PCI_BASE_ADDRESS_SPACE))
++ type = PCIE_ATU_TYPE_MEM;
++ else
++ type = PCIE_ATU_TYPE_IO;
++
++ ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar);
++ if (ret)
++ return ret;
++
++ ep->epf_bar[bar] = epf_bar;
++
+ return 0;
+ }
+
+diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
+index 8fa2797d4169a9..50bc2892a36c54 100644
+--- a/drivers/pci/endpoint/pci-epf-core.c
++++ b/drivers/pci/endpoint/pci-epf-core.c
+@@ -202,6 +202,7 @@ void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
+
+ mutex_lock(&epf_pf->lock);
+ clear_bit(epf_vf->vfunc_no, &epf_pf->vfunction_num_map);
++ epf_vf->epf_pf = NULL;
+ list_del(&epf_vf->list);
+ mutex_unlock(&epf_pf->lock);
+ }
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 3a81837b5e623b..5081c7d8064fae 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -155,7 +155,7 @@
+ #define PWPR_REGWE_B BIT(5) /* OEN Register Write Enable, known only in RZ/V2H(P) */
+
+ #define PM_MASK 0x03
+-#define PFC_MASK 0x07
++#define PFC_MASK 0x0f
+ #define IEN_MASK 0x01
+ #define IOLH_MASK 0x03
+ #define SR_MASK 0x01
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 675efa5d86a9af..c142cd7920307f 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1272,7 +1272,7 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+
+ ret = platform_get_irq_optional(pdev, 0);
+ if (ret < 0 && ret != -ENXIO)
+- return ret;
++ goto err_put_banks;
+ if (ret > 0)
+ drvdata->irq = ret;
+
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 7169b84ccdb6e2..c5679e4a58a76e 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -95,6 +95,7 @@ enum acer_wmi_event_ids {
+ WMID_HOTKEY_EVENT = 0x1,
+ WMID_ACCEL_OR_KBD_DOCK_EVENT = 0x5,
+ WMID_GAMING_TURBO_KEY_EVENT = 0x7,
++ WMID_AC_EVENT = 0x8,
+ };
+
+ enum acer_wmi_predator_v4_sys_info_command {
+@@ -398,6 +399,20 @@ static struct quirk_entry quirk_acer_predator_ph315_53 = {
+ .gpu_fans = 1,
+ };
+
++static struct quirk_entry quirk_acer_predator_ph16_72 = {
++ .turbo = 1,
++ .cpu_fans = 1,
++ .gpu_fans = 1,
++ .predator_v4 = 1,
++};
++
++static struct quirk_entry quirk_acer_predator_pt14_51 = {
++ .turbo = 1,
++ .cpu_fans = 1,
++ .gpu_fans = 1,
++ .predator_v4 = 1,
++};
++
+ static struct quirk_entry quirk_acer_predator_v4 = {
+ .predator_v4 = 1,
+ };
+@@ -569,6 +584,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_acer_travelmate_2490,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Acer Nitro AN515-58",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Nitro AN515-58"),
++ },
++ .driver_data = &quirk_acer_predator_v4,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Acer Predator PH315-53",
+@@ -596,6 +620,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_acer_predator_v4,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Acer Predator PH16-72",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Predator PH16-72"),
++ },
++ .driver_data = &quirk_acer_predator_ph16_72,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Acer Predator PH18-71",
+@@ -605,6 +638,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_acer_predator_v4,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Acer Predator PT14-51",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Predator PT14-51"),
++ },
++ .driver_data = &quirk_acer_predator_pt14_51,
++ },
+ {
+ .callback = set_force_caps,
+ .ident = "Acer Aspire Switch 10E SW3-016",
+@@ -2285,6 +2327,9 @@ static void acer_wmi_notify(union acpi_object *obj, void *context)
+ if (return_value.key_num == 0x5 && has_cap(ACER_CAP_PLATFORM_PROFILE))
+ acer_thermal_profile_change();
+ break;
++ case WMID_AC_EVENT:
++ /* We ignore AC events here */
++ break;
+ default:
+ pr_warn("Unknown function number - %d - %d\n",
+ return_value.function, return_value.key_num);
+diff --git a/drivers/platform/x86/intel/int3472/discrete.c b/drivers/platform/x86/intel/int3472/discrete.c
+index 3de463c3d13b8e..15678508ee5019 100644
+--- a/drivers/platform/x86/intel/int3472/discrete.c
++++ b/drivers/platform/x86/intel/int3472/discrete.c
+@@ -336,6 +336,9 @@ static int skl_int3472_discrete_probe(struct platform_device *pdev)
+ struct int3472_cldb cldb;
+ int ret;
+
++ if (!adev)
++ return -ENODEV;
++
+ ret = skl_int3472_fill_cldb(adev, &cldb);
+ if (ret) {
+ dev_err(&pdev->dev, "Couldn't fill CLDB structure\n");
+diff --git a/drivers/platform/x86/intel/int3472/tps68470.c b/drivers/platform/x86/intel/int3472/tps68470.c
+index 1e107fd49f828c..81ac4c69196309 100644
+--- a/drivers/platform/x86/intel/int3472/tps68470.c
++++ b/drivers/platform/x86/intel/int3472/tps68470.c
+@@ -152,6 +152,9 @@ static int skl_int3472_tps68470_probe(struct i2c_client *client)
+ int ret;
+ int i;
+
++ if (!adev)
++ return -ENODEV;
++
+ n_consumers = skl_int3472_fill_clk_pdata(&client->dev, &clk_pdata);
+ if (n_consumers < 0)
+ return n_consumers;
+diff --git a/drivers/platform/x86/serdev_helpers.h b/drivers/platform/x86/serdev_helpers.h
+index bcf3a0c356ea1b..3bc7fd8e1e1972 100644
+--- a/drivers/platform/x86/serdev_helpers.h
++++ b/drivers/platform/x86/serdev_helpers.h
+@@ -35,7 +35,7 @@ get_serdev_controller(const char *serial_ctrl_hid,
+ ctrl_adev = acpi_dev_get_first_match_dev(serial_ctrl_hid, serial_ctrl_uid, -1);
+ if (!ctrl_adev) {
+ pr_err("error could not get %s/%s serial-ctrl adev\n",
+- serial_ctrl_hid, serial_ctrl_uid);
++ serial_ctrl_hid, serial_ctrl_uid ?: "*");
+ return ERR_PTR(-ENODEV);
+ }
+
+@@ -43,7 +43,7 @@ get_serdev_controller(const char *serial_ctrl_hid,
+ ctrl_dev = get_device(acpi_get_first_physical_node(ctrl_adev));
+ if (!ctrl_dev) {
+ pr_err("error could not get %s/%s serial-ctrl physical node\n",
+- serial_ctrl_hid, serial_ctrl_uid);
++ serial_ctrl_hid, serial_ctrl_uid ?: "*");
+ ctrl_dev = ERR_PTR(-ENODEV);
+ goto put_ctrl_adev;
+ }
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 77a36e7bddd54e..1a1edd87122d3d 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -217,6 +217,11 @@ static int ptp_getcycles64(struct ptp_clock_info *info, struct timespec64 *ts)
+ return info->gettime64(info, ts);
+ }
+
++static int ptp_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *request, int on)
++{
++ return -EOPNOTSUPP;
++}
++
+ static void ptp_aux_kworker(struct kthread_work *work)
+ {
+ struct ptp_clock *ptp = container_of(work, struct ptp_clock,
+@@ -294,6 +299,9 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
+ ptp->info->getcrosscycles = ptp->info->getcrosststamp;
+ }
+
++ if (!ptp->info->enable)
++ ptp->info->enable = ptp_enable;
++
+ if (ptp->info->do_aux_work) {
+ kthread_init_delayed_work(&ptp->aux_work, ptp_aux_kworker);
+ ptp->kworker = kthread_create_worker(0, "ptp%d", ptp->index);
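The ptp_enable() stub registered above means drivers no longer have to supply an .enable callback just to reject requests: the chardev ioctl path (PTP_EXTTS_REQUEST, PTP_PEROUT_REQUEST and friends) invokes info->enable unconditionally, so a driver that left it NULL could previously be tripped into a NULL dereference from user space; with the default in place those ioctls simply fail with -EOPNOTSUPP.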
+diff --git a/drivers/pwm/pwm-microchip-core.c b/drivers/pwm/pwm-microchip-core.c
+index c1f2287b8e9748..12821b4bbf9756 100644
+--- a/drivers/pwm/pwm-microchip-core.c
++++ b/drivers/pwm/pwm-microchip-core.c
+@@ -327,7 +327,7 @@ static int mchp_core_pwm_apply_locked(struct pwm_chip *chip, struct pwm_device *
+ * mchp_core_pwm_calc_period().
+ * The period is locked and we cannot change this, so we abort.
+ */
+- if (hw_period_steps == MCHPCOREPWM_PERIOD_STEPS_MAX)
++ if (hw_period_steps > MCHPCOREPWM_PERIOD_STEPS_MAX)
+ return -EINVAL;
+
+ prescale = hw_prescale;
+diff --git a/drivers/remoteproc/omap_remoteproc.c b/drivers/remoteproc/omap_remoteproc.c
+index 9ae2e831456d57..3260dd512491e8 100644
+--- a/drivers/remoteproc/omap_remoteproc.c
++++ b/drivers/remoteproc/omap_remoteproc.c
+@@ -37,6 +37,10 @@
+
+ #include <linux/platform_data/dmtimer-omap.h>
+
++#ifdef CONFIG_ARM_DMA_USE_IOMMU
++#include <asm/dma-iommu.h>
++#endif
++
+ #include "omap_remoteproc.h"
+ #include "remoteproc_internal.h"
+
+@@ -1323,6 +1327,19 @@ static int omap_rproc_probe(struct platform_device *pdev)
+ /* All existing OMAP IPU and DSP processors have an MMU */
+ rproc->has_iommu = true;
+
++#ifdef CONFIG_ARM_DMA_USE_IOMMU
++ /*
++ * Throw away the ARM DMA mapping that we'll never use, so it doesn't
++ * interfere with the core rproc->domain and we get the right DMA ops.
++ */
++ if (pdev->dev.archdata.mapping) {
++ struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(&pdev->dev);
++
++ arm_iommu_detach_device(&pdev->dev);
++ arm_iommu_release_mapping(mapping);
++ }
++#endif
++
+ ret = omap_rproc_of_get_internal_memories(pdev, rproc);
+ if (ret)
+ return ret;
+diff --git a/drivers/rtc/rtc-zynqmp.c b/drivers/rtc/rtc-zynqmp.c
+index 08ed171bdab43a..b6f96c10196ae3 100644
+--- a/drivers/rtc/rtc-zynqmp.c
++++ b/drivers/rtc/rtc-zynqmp.c
+@@ -318,8 +318,8 @@ static int xlnx_rtc_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- /* Getting the rtc_clk info */
+- xrtcdev->rtc_clk = devm_clk_get_optional(&pdev->dev, "rtc_clk");
++ /* Getting the rtc info */
++ xrtcdev->rtc_clk = devm_clk_get_optional(&pdev->dev, "rtc");
+ if (IS_ERR(xrtcdev->rtc_clk)) {
+ if (PTR_ERR(xrtcdev->rtc_clk) != -EPROBE_DEFER)
+ dev_warn(&pdev->dev, "Device clock not found.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 15066c112817a8..cb95b7b12051da 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -4098,6 +4098,8 @@ struct qla_hw_data {
+ uint32_t npiv_supported :1;
+ uint32_t pci_channel_io_perm_failure :1;
+ uint32_t fce_enabled :1;
++ uint32_t user_enabled_fce :1;
++ uint32_t fce_dump_buf_alloced :1;
+ uint32_t fac_supported :1;
+
+ uint32_t chip_reset_done :1;
+diff --git a/drivers/scsi/qla2xxx/qla_dfs.c b/drivers/scsi/qla2xxx/qla_dfs.c
+index a1545dad0c0ce2..08273520c77793 100644
+--- a/drivers/scsi/qla2xxx/qla_dfs.c
++++ b/drivers/scsi/qla2xxx/qla_dfs.c
+@@ -409,26 +409,31 @@ qla2x00_dfs_fce_show(struct seq_file *s, void *unused)
+
+ mutex_lock(&ha->fce_mutex);
+
+- seq_puts(s, "FCE Trace Buffer\n");
+- seq_printf(s, "In Pointer = %llx\n\n", (unsigned long long)ha->fce_wr);
+- seq_printf(s, "Base = %llx\n\n", (unsigned long long) ha->fce_dma);
+- seq_puts(s, "FCE Enable Registers\n");
+- seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
+- ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
+- ha->fce_mb[5], ha->fce_mb[6]);
+-
+- fce = (uint32_t *) ha->fce;
+- fce_start = (unsigned long long) ha->fce_dma;
+- for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
+- if (cnt % 8 == 0)
+- seq_printf(s, "\n%llx: ",
+- (unsigned long long)((cnt * 4) + fce_start));
+- else
+- seq_putc(s, ' ');
+- seq_printf(s, "%08x", *fce++);
+- }
++ if (ha->flags.user_enabled_fce) {
++ seq_puts(s, "FCE Trace Buffer\n");
++ seq_printf(s, "In Pointer = %llx\n\n", (unsigned long long)ha->fce_wr);
++ seq_printf(s, "Base = %llx\n\n", (unsigned long long)ha->fce_dma);
++ seq_puts(s, "FCE Enable Registers\n");
++ seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
++ ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
++ ha->fce_mb[5], ha->fce_mb[6]);
++
++ fce = (uint32_t *)ha->fce;
++ fce_start = (unsigned long long)ha->fce_dma;
++ for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
++ if (cnt % 8 == 0)
++ seq_printf(s, "\n%llx: ",
++ (unsigned long long)((cnt * 4) + fce_start));
++ else
++ seq_putc(s, ' ');
++ seq_printf(s, "%08x", *fce++);
++ }
+
+- seq_puts(s, "\nEnd\n");
++ seq_puts(s, "\nEnd\n");
++ } else {
++ seq_puts(s, "FCE Trace is currently not enabled\n");
++ seq_puts(s, "\techo [ 1 | 0 ] > fce\n");
++ }
+
+ mutex_unlock(&ha->fce_mutex);
+
+@@ -467,7 +472,7 @@ qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
+ struct qla_hw_data *ha = vha->hw;
+ int rval;
+
+- if (ha->flags.fce_enabled)
++ if (ha->flags.fce_enabled || !ha->fce)
+ goto out;
+
+ mutex_lock(&ha->fce_mutex);
+@@ -488,11 +493,88 @@ qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
+ return single_release(inode, file);
+ }
+
++static ssize_t
++qla2x00_dfs_fce_write(struct file *file, const char __user *buffer,
++ size_t count, loff_t *pos)
++{
++ struct seq_file *s = file->private_data;
++ struct scsi_qla_host *vha = s->private;
++ struct qla_hw_data *ha = vha->hw;
++ char *buf;
++ int rc = 0;
++ unsigned long enable;
++
++ if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
++ !IS_QLA27XX(ha) && !IS_QLA28XX(ha)) {
++ ql_dbg(ql_dbg_user, vha, 0xd034,
++ "this adapter does not support FCE.");
++ return -EINVAL;
++ }
++
++ buf = memdup_user_nul(buffer, count);
++ if (IS_ERR(buf)) {
++ ql_dbg(ql_dbg_user, vha, 0xd037,
++ "fail to copy user buffer.");
++ return PTR_ERR(buf);
++ }
++
++ rc = kstrtoul(buf, 0, &enable);
++ if (rc)
++ goto out_free;
++ rc = count;
++
++ mutex_lock(&ha->fce_mutex);
++
++ if (enable) {
++ if (ha->flags.user_enabled_fce) {
++ mutex_unlock(&ha->fce_mutex);
++ goto out_free;
++ }
++ ha->flags.user_enabled_fce = 1;
++		if (!ha->fce) {
++			if (qla2x00_alloc_fce_trace(vha)) {
++				rc = -ENOMEM;
++				ha->flags.user_enabled_fce = 0;
++				mutex_unlock(&ha->fce_mutex);
++				goto out_free;
++			}
++
++			/* adjust the fw dump buffer to account for this feature */
++			if (!ha->flags.fce_dump_buf_alloced)
++				qla2x00_alloc_fw_dump(vha);
++ }
++
++ if (!ha->flags.fce_enabled)
++ qla_enable_fce_trace(vha);
++
++		ql_dbg(ql_dbg_user, vha, 0xd045, "User enabled FCE.\n");
++ } else {
++ if (!ha->flags.user_enabled_fce) {
++ mutex_unlock(&ha->fce_mutex);
++ goto out_free;
++ }
++ ha->flags.user_enabled_fce = 0;
++ if (ha->flags.fce_enabled) {
++ qla2x00_disable_fce_trace(vha, NULL, NULL);
++ ha->flags.fce_enabled = 0;
++ }
++
++ qla2x00_free_fce_trace(ha);
++ /* no need to re-adjust fw dump buffer */
++
++		ql_dbg(ql_dbg_user, vha, 0xd04f, "User disabled FCE.\n");
++ }
++
++ mutex_unlock(&ha->fce_mutex);
++out_free:
++ kfree(buf);
++ return rc;
++}
++
+ static const struct file_operations dfs_fce_ops = {
+ .open = qla2x00_dfs_fce_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = qla2x00_dfs_fce_release,
++ .write = qla2x00_dfs_fce_write,
+ };
+
+ static int
+@@ -626,8 +710,6 @@ qla2x00_dfs_setup(scsi_qla_host_t *vha)
+ if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
+ !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+ goto out;
+- if (!ha->fce)
+- goto out;
+
+ if (qla2x00_dfs_root)
+ goto create_dir;
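
A note on the parsing convention used by the new write handler: kstrtoul() reports success through its return value and stores the parsed number through its third argument, so the result pointer must be a real variable. A minimal userspace sketch of that contract, with parse_enable() as a hypothetical stand-in for the kernel helper:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* parse_enable() is a hypothetical userspace stand-in for kstrtoul():
 * it returns 0 on success or a negative errno, and writes the parsed
 * value only through its output pointer. */
static int parse_enable(const char *s, unsigned long *res)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(s, &end, 0);
	if (errno || end == s || (*end != '\0' && *end != '\n'))
		return -EINVAL;
	*res = val;
	return 0;
}

int main(void)
{
	const char *inputs[] = { "1\n", "0\n", "junk" };

	for (int i = 0; i < 3; i++) {
		unsigned long enable;
		int rc = parse_enable(inputs[i], &enable);

		if (rc)
			printf("input %d: error %d\n", i, rc);
		else
			printf("input %d: enable=%lu\n", i, enable);
	}
	return 0;
}
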
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index cededfda9d0e31..e556f57c91af62 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -11,6 +11,9 @@
+ /*
+ * Global Function Prototypes in qla_init.c source file.
+ */
++int qla2x00_alloc_fce_trace(scsi_qla_host_t *);
++void qla2x00_free_fce_trace(struct qla_hw_data *ha);
++void qla_enable_fce_trace(scsi_qla_host_t *);
+ extern int qla2x00_initialize_adapter(scsi_qla_host_t *);
+ extern int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport);
+
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 31fc6a0eca3e80..79cdfec2bca356 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -2681,7 +2681,7 @@ qla83xx_nic_core_fw_load(scsi_qla_host_t *vha)
+ return rval;
+ }
+
+-static void qla_enable_fce_trace(scsi_qla_host_t *vha)
++void qla_enable_fce_trace(scsi_qla_host_t *vha)
+ {
+ int rval;
+ struct qla_hw_data *ha = vha->hw;
+@@ -3717,25 +3717,24 @@ qla24xx_chip_diag(scsi_qla_host_t *vha)
+ return rval;
+ }
+
+-static void
+-qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
++int qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ {
+ dma_addr_t tc_dma;
+ void *tc;
+ struct qla_hw_data *ha = vha->hw;
+
+ if (!IS_FWI2_CAPABLE(ha))
+- return;
++ return -EINVAL;
+
+ if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
+ !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+- return;
++ return -EINVAL;
+
+ if (ha->fce) {
+ ql_dbg(ql_dbg_init, vha, 0x00bd,
+ "%s: FCE Mem is already allocated.\n",
+ __func__);
+- return;
++ return -EIO;
+ }
+
+ /* Allocate memory for Fibre Channel Event Buffer. */
+@@ -3745,7 +3744,7 @@ qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ ql_log(ql_log_warn, vha, 0x00be,
+ "Unable to allocate (%d KB) for FCE.\n",
+ FCE_SIZE / 1024);
+- return;
++ return -ENOMEM;
+ }
+
+ ql_dbg(ql_dbg_init, vha, 0x00c0,
+@@ -3754,6 +3753,16 @@ qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ ha->fce_dma = tc_dma;
+ ha->fce = tc;
+ ha->fce_bufs = FCE_NUM_BUFFERS;
++ return 0;
++}
++
++void qla2x00_free_fce_trace(struct qla_hw_data *ha)
++{
++ if (!ha->fce)
++ return;
++ dma_free_coherent(&ha->pdev->dev, FCE_SIZE, ha->fce, ha->fce_dma);
++ ha->fce = NULL;
++ ha->fce_dma = 0;
+ }
+
+ static void
+@@ -3844,9 +3853,10 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ if (ha->tgt.atio_ring)
+ mq_size += ha->tgt.atio_q_length * sizeof(request_t);
+
+- qla2x00_alloc_fce_trace(vha);
+- if (ha->fce)
++ if (ha->fce) {
+ fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE;
++ ha->flags.fce_dump_buf_alloced = 1;
++ }
+ qla2x00_alloc_eft_trace(vha);
+ if (ha->eft)
+ eft_size = EFT_SIZE;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index adee6f60c96655..c9dde1ac9523e8 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -865,13 +865,18 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ case 0x1a: /* start stop unit in progress */
+ case 0x1b: /* sanitize in progress */
+ case 0x1d: /* configuration in progress */
+- case 0x24: /* depopulation in progress */
+- case 0x25: /* depopulation restore in progress */
+ action = ACTION_DELAYED_RETRY;
+ break;
+ case 0x0a: /* ALUA state transition */
+ action = ACTION_DELAYED_REPREP;
+ break;
++ /*
++ * Depopulation might take many hours,
++ * thus it is not worthwhile to retry.
++ */
++ case 0x24: /* depopulation in progress */
++ case 0x25: /* depopulation restore in progress */
++ fallthrough;
+ default:
+ action = ACTION_FAIL;
+ break;
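
The scsi_lib.c hunk above reroutes ASC 0x24/0x25 from the delayed-retry bucket to the failure path, since depopulation can run for hours. The dispatch reads naturally as a table from additional sense code to retry policy; a self-contained sketch (the enum and the main() harness are illustrative, only the case values come from the driver):

#include <stdio.h>

enum action { ACTION_FAIL, ACTION_DELAYED_RETRY, ACTION_DELAYED_REPREP };

/* Map an "operation in progress" additional sense code to a policy. */
static enum action classify(unsigned char ascq)
{
	switch (ascq) {
	case 0x1a: /* start stop unit in progress */
	case 0x1b: /* sanitize in progress */
	case 0x1d: /* configuration in progress */
		return ACTION_DELAYED_RETRY;
	case 0x0a: /* ALUA state transition */
		return ACTION_DELAYED_REPREP;
	/* Depopulation may run for hours, so retrying is pointless. */
	case 0x24: /* depopulation in progress */
	case 0x25: /* depopulation restore in progress */
	default:
		return ACTION_FAIL;
	}
}

int main(void)
{
	const unsigned char codes[] = { 0x1a, 0x0a, 0x24, 0x25 };

	for (unsigned i = 0; i < sizeof(codes); i++)
		printf("0x%02x -> %d\n", codes[i], classify(codes[i]));
	return 0;
}
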
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index c9038284bc893d..0dc37fc6f23678 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -1027,6 +1027,11 @@ static int test_ready(struct scsi_tape *STp, int do_wait)
+ retval = new_session ? CHKRES_NEW_SESSION : CHKRES_READY;
+ break;
+ }
++ if (STp->first_tur) {
++ /* Don't set pos_unknown right after device recognition */
++ STp->pos_unknown = 0;
++ STp->first_tur = 0;
++ }
+
+ if (SRpnt != NULL)
+ st_release_request(SRpnt);
+@@ -4325,6 +4330,7 @@ static int st_probe(struct device *dev)
+ blk_queue_rq_timeout(tpnt->device->request_queue, ST_TIMEOUT);
+ tpnt->long_timeout = ST_LONG_TIMEOUT;
+ tpnt->try_dio = try_direct_io;
++ tpnt->first_tur = 1;
+
+ for (i = 0; i < ST_NBR_MODES; i++) {
+ STm = &(tpnt->modes[i]);
+diff --git a/drivers/scsi/st.h b/drivers/scsi/st.h
+index 7a68eaba7e810c..1aaaf5369a40fc 100644
+--- a/drivers/scsi/st.h
++++ b/drivers/scsi/st.h
+@@ -170,6 +170,7 @@ struct scsi_tape {
+ unsigned char rew_at_close; /* rewind necessary at close */
+ unsigned char inited;
+ unsigned char cleaning_req; /* cleaning requested? */
++ unsigned char first_tur; /* first TEST UNIT READY */
+ int block_size;
+ int min_block;
+ int max_block;
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index b3c588b102d900..b8186feccdf5aa 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1800,6 +1800,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+
+ length = scsi_bufflen(scmnd);
+ payload = (struct vmbus_packet_mpb_array *)&cmd_request->mpb;
++ payload->range.len = 0;
+ payload_sz = 0;
+
+ if (scsi_sg_count(scmnd)) {
+diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
+index 56cc345552a430..d83a46334adbbe 100644
+--- a/drivers/soc/mediatek/mtk-devapc.c
++++ b/drivers/soc/mediatek/mtk-devapc.c
+@@ -273,23 +273,31 @@ static int mtk_devapc_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ devapc_irq = irq_of_parse_and_map(node, 0);
+- if (!devapc_irq)
+- return -EINVAL;
++ if (!devapc_irq) {
++ ret = -EINVAL;
++ goto err;
++ }
+
+ ctx->infra_clk = devm_clk_get_enabled(&pdev->dev, "devapc-infra-clock");
+- if (IS_ERR(ctx->infra_clk))
+- return -EINVAL;
++ if (IS_ERR(ctx->infra_clk)) {
++ ret = -EINVAL;
++ goto err;
++ }
+
+ ret = devm_request_irq(&pdev->dev, devapc_irq, devapc_violation_irq,
+ IRQF_TRIGGER_NONE, "devapc", ctx);
+ if (ret)
+- return ret;
++ goto err;
+
+ platform_set_drvdata(pdev, ctx);
+
+ start_devapc(ctx);
+
+ return 0;
++
++err:
++ iounmap(ctx->infra_base);
++ return ret;
+ }
+
+ static void mtk_devapc_remove(struct platform_device *pdev)
+@@ -297,6 +305,7 @@ static void mtk_devapc_remove(struct platform_device *pdev)
+ struct mtk_devapc_context *ctx = platform_get_drvdata(pdev);
+
+ stop_devapc(ctx);
++ iounmap(ctx->infra_base);
+ }
+
+ static struct platform_driver mtk_devapc_driver = {
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index a470285f54a875..133dc483331352 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -2511,6 +2511,7 @@ static const struct llcc_slice_config x1e80100_data[] = {
+ .fixed_size = true,
+ .bonus_ways = 0xfff,
+ .cache_mode = 0,
++ .activate_on_init = true,
+ }, {
+ .usecase_id = LLCC_CAMEXP0,
+ .slice_id = 4,
+diff --git a/drivers/soc/qcom/smem_state.c b/drivers/soc/qcom/smem_state.c
+index e848cc9a3cf801..a8be3a2f33824f 100644
+--- a/drivers/soc/qcom/smem_state.c
++++ b/drivers/soc/qcom/smem_state.c
+@@ -116,7 +116,8 @@ struct qcom_smem_state *qcom_smem_state_get(struct device *dev,
+
+ if (args.args_count != 1) {
+ dev_err(dev, "invalid #qcom,smem-state-cells\n");
+- return ERR_PTR(-EINVAL);
++ state = ERR_PTR(-EINVAL);
++ goto put;
+ }
+
+ state = of_node_to_state(args.np);
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index ecfd3da9d5e877..c2f2a1ce4194b3 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -789,7 +789,7 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
+ if (!qs->attr.soc_id || !qs->attr.revision)
+ return -ENOMEM;
+
+- if (offsetof(struct socinfo, serial_num) <= item_size) {
++ if (offsetofend(struct socinfo, serial_num) <= item_size) {
+ qs->attr.serial_number = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%u",
+ le32_to_cpu(info->serial_num));
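
offsetof() names the first byte of a member, while the kernel's offsetofend() names the first byte past it, which is what a "does this item fully contain the field?" test needs. A runnable illustration with a simplified stand-in for struct socinfo (not the real layout):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Same definition the kernel uses in <linux/stddef.h>. */
#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

struct socinfo {		/* simplified stand-in layout */
	uint32_t fmt;
	uint32_t id;
	uint32_t ver;
	uint32_t serial_num;
};

int main(void)
{
	/* An SMEM item that ends exactly where serial_num begins. */
	size_t item_size = offsetof(struct socinfo, serial_num);

	int old_check = offsetof(struct socinfo, serial_num) <= item_size;
	int new_check = offsetofend(struct socinfo, serial_num) <= item_size;

	/* The old check wrongly accepts the item and reads past its end. */
	printf("offsetof    accepts truncated item: %s\n", old_check ? "yes" : "no");
	printf("offsetofend accepts truncated item: %s\n", new_check ? "yes" : "no");
	return 0;
}
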
+diff --git a/drivers/soc/samsung/exynos-pmu.c b/drivers/soc/samsung/exynos-pmu.c
+index d8c53cec7f37ad..dd5256e5aae1ae 100644
+--- a/drivers/soc/samsung/exynos-pmu.c
++++ b/drivers/soc/samsung/exynos-pmu.c
+@@ -126,7 +126,7 @@ static int tensor_set_bits_atomic(void *ctx, unsigned int offset, u32 val,
+ if (ret)
+ return ret;
+ }
+- return ret;
++ return 0;
+ }
+
+ static bool tensor_is_atomic(unsigned int reg)
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index caecb2ad2a150d..02d83e2956cdf7 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -138,11 +138,15 @@
+ #define QSPI_WPSR_WPVSRC_MASK GENMASK(15, 8)
+ #define QSPI_WPSR_WPVSRC(src) (((src) << 8) & QSPI_WPSR_WPVSRC)
+
++#define ATMEL_QSPI_TIMEOUT 1000 /* ms */
++
+ struct atmel_qspi_caps {
+ bool has_qspick;
+ bool has_ricr;
+ };
+
++struct atmel_qspi_ops;
++
+ struct atmel_qspi {
+ void __iomem *regs;
+ void __iomem *mem;
+@@ -150,13 +154,22 @@ struct atmel_qspi {
+ struct clk *qspick;
+ struct platform_device *pdev;
+ const struct atmel_qspi_caps *caps;
++ const struct atmel_qspi_ops *ops;
+ resource_size_t mmap_size;
+ u32 pending;
++ u32 irq_mask;
+ u32 mr;
+ u32 scr;
+ struct completion cmd_completion;
+ };
+
++struct atmel_qspi_ops {
++ int (*set_cfg)(struct atmel_qspi *aq, const struct spi_mem_op *op,
++ u32 *offset);
++ int (*transfer)(struct spi_mem *mem, const struct spi_mem_op *op,
++ u32 offset);
++};
++
+ struct atmel_qspi_mode {
+ u8 cmd_buswidth;
+ u8 addr_buswidth;
+@@ -404,10 +417,67 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ return 0;
+ }
+
++static int atmel_qspi_wait_for_completion(struct atmel_qspi *aq, u32 irq_mask)
++{
++ int err = 0;
++ u32 sr;
++
++ /* Poll INSTRuction End status */
++ sr = atmel_qspi_read(aq, QSPI_SR);
++ if ((sr & irq_mask) == irq_mask)
++ return 0;
++
++ /* Wait for INSTRuction End interrupt */
++ reinit_completion(&aq->cmd_completion);
++ aq->pending = sr & irq_mask;
++ aq->irq_mask = irq_mask;
++ atmel_qspi_write(irq_mask, aq, QSPI_IER);
++ if (!wait_for_completion_timeout(&aq->cmd_completion,
++ msecs_to_jiffies(ATMEL_QSPI_TIMEOUT)))
++ err = -ETIMEDOUT;
++ atmel_qspi_write(irq_mask, aq, QSPI_IDR);
++
++ return err;
++}
++
++static int atmel_qspi_transfer(struct spi_mem *mem,
++ const struct spi_mem_op *op, u32 offset)
++{
++ struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
++
++ /* Skip to the final steps if there is no data */
++ if (!op->data.nbytes)
++ return atmel_qspi_wait_for_completion(aq,
++ QSPI_SR_CMD_COMPLETED);
++
++ /* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
++ (void)atmel_qspi_read(aq, QSPI_IFR);
++
++ /* Send/Receive data */
++ if (op->data.dir == SPI_MEM_DATA_IN) {
++ memcpy_fromio(op->data.buf.in, aq->mem + offset,
++ op->data.nbytes);
++
++ /* Synchronize AHB and APB accesses again */
++ rmb();
++ } else {
++ memcpy_toio(aq->mem + offset, op->data.buf.out,
++ op->data.nbytes);
++
++ /* Synchronize AHB and APB accesses again */
++ wmb();
++ }
++
++ /* Release the chip-select */
++ atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
++
++ return atmel_qspi_wait_for_completion(aq, QSPI_SR_CMD_COMPLETED);
++}
++
+ static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ {
+ struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
+- u32 sr, offset;
++ u32 offset;
+ int err;
+
+ /*
+@@ -416,46 +486,20 @@ static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ * when the flash memories overrun the controller's memory space.
+ */
+ if (op->addr.val + op->data.nbytes > aq->mmap_size)
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
++
++ if (op->addr.nbytes > 4)
++ return -EOPNOTSUPP;
+
+ err = pm_runtime_resume_and_get(&aq->pdev->dev);
+ if (err < 0)
+ return err;
+
+- err = atmel_qspi_set_cfg(aq, op, &offset);
++ err = aq->ops->set_cfg(aq, op, &offset);
+ if (err)
+ goto pm_runtime_put;
+
+- /* Skip to the final steps if there is no data */
+- if (op->data.nbytes) {
+- /* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
+- (void)atmel_qspi_read(aq, QSPI_IFR);
+-
+- /* Send/Receive data */
+- if (op->data.dir == SPI_MEM_DATA_IN)
+- memcpy_fromio(op->data.buf.in, aq->mem + offset,
+- op->data.nbytes);
+- else
+- memcpy_toio(aq->mem + offset, op->data.buf.out,
+- op->data.nbytes);
+-
+- /* Release the chip-select */
+- atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
+- }
+-
+- /* Poll INSTRuction End status */
+- sr = atmel_qspi_read(aq, QSPI_SR);
+- if ((sr & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
+- goto pm_runtime_put;
+-
+- /* Wait for INSTRuction End interrupt */
+- reinit_completion(&aq->cmd_completion);
+- aq->pending = sr & QSPI_SR_CMD_COMPLETED;
+- atmel_qspi_write(QSPI_SR_CMD_COMPLETED, aq, QSPI_IER);
+- if (!wait_for_completion_timeout(&aq->cmd_completion,
+- msecs_to_jiffies(1000)))
+- err = -ETIMEDOUT;
+- atmel_qspi_write(QSPI_SR_CMD_COMPLETED, aq, QSPI_IDR);
++ err = aq->ops->transfer(mem, op, offset);
+
+ pm_runtime_put:
+ pm_runtime_mark_last_busy(&aq->pdev->dev);
+@@ -571,12 +615,17 @@ static irqreturn_t atmel_qspi_interrupt(int irq, void *dev_id)
+ return IRQ_NONE;
+
+ aq->pending |= pending;
+- if ((aq->pending & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
++ if ((aq->pending & aq->irq_mask) == aq->irq_mask)
+ complete(&aq->cmd_completion);
+
+ return IRQ_HANDLED;
+ }
+
++static const struct atmel_qspi_ops atmel_qspi_ops = {
++ .set_cfg = atmel_qspi_set_cfg,
++ .transfer = atmel_qspi_transfer,
++};
++
+ static int atmel_qspi_probe(struct platform_device *pdev)
+ {
+ struct spi_controller *ctrl;
+@@ -601,6 +650,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+
+ init_completion(&aq->cmd_completion);
+ aq->pdev = pdev;
++ aq->ops = &atmel_qspi_ops;
+
+ /* Map the registers */
+ aq->regs = devm_platform_ioremap_resource_byname(pdev, "qspi_base");
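
The atmel-quadspi rework routes exec_op() through an ops table (set_cfg plus transfer) so later controller generations can supply their own handlers. A toy model of that indirection, with hypothetical names mirroring struct atmel_qspi_ops:

#include <stdio.h>

struct qspi;			/* forward declaration, as in the driver */

struct qspi_ops {		/* mirrors struct atmel_qspi_ops */
	int (*set_cfg)(struct qspi *q, unsigned int addr, unsigned int *offset);
	int (*transfer)(struct qspi *q, unsigned int offset);
};

struct qspi {
	const struct qspi_ops *ops;
};

static int pio_set_cfg(struct qspi *q, unsigned int addr, unsigned int *offset)
{
	(void)q;
	*offset = addr;		/* trivial stand-in for register setup */
	return 0;
}

static int pio_transfer(struct qspi *q, unsigned int offset)
{
	(void)q;
	printf("PIO transfer at offset %u\n", offset);
	return 0;
}

static const struct qspi_ops pio_ops = {
	.set_cfg  = pio_set_cfg,
	.transfer = pio_transfer,
};

/* exec_op() only talks to the ops table, never to a specific backend. */
static int exec_op(struct qspi *q, unsigned int addr)
{
	unsigned int offset;
	int err = q->ops->set_cfg(q, addr, &offset);

	if (err)
		return err;
	return q->ops->transfer(q, offset);
}

int main(void)
{
	struct qspi q = { .ops = &pio_ops };

	return exec_op(&q, 0x100);
}
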
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index bdf17eafd3598d..f43059e1b5c28e 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -165,6 +165,7 @@ struct sci_port {
+ static struct sci_port sci_ports[SCI_NPORTS];
+ static unsigned long sci_ports_in_use;
+ static struct uart_driver sci_uart_driver;
++static bool sci_uart_earlycon;
+
+ static inline struct sci_port *
+ to_sci_port(struct uart_port *uart)
+@@ -3450,6 +3451,7 @@ static int sci_probe_single(struct platform_device *dev,
+ static int sci_probe(struct platform_device *dev)
+ {
+ struct plat_sci_port *p;
++ struct resource *res;
+ struct sci_port *sp;
+ unsigned int dev_id;
+ int ret;
+@@ -3479,6 +3481,26 @@ static int sci_probe(struct platform_device *dev)
+ }
+
+ sp = &sci_ports[dev_id];
++
++ /*
++ * In case:
++ * - the probed port alias is zero (as the one used by earlycon), and
++ * - the earlycon is still active (e.g., "earlycon keep_bootcon" in
++ * bootargs)
++ *
++	 * defer probing this serial port. This is a debug scenario and the user
++ * must be aware of it.
++ *
++ * Except when the probed port is the same as the earlycon port.
++ */
++
++ res = platform_get_resource(dev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -ENODEV;
++
++ if (sci_uart_earlycon && sp == &sci_ports[0] && sp->port.mapbase != res->start)
++ return dev_err_probe(&dev->dev, -EBUSY, "sci_port[0] is used by earlycon!\n");
++
+ platform_set_drvdata(dev, sp);
+
+ ret = sci_probe_single(dev, dev_id, p, sp);
+@@ -3562,7 +3584,7 @@ sh_early_platform_init_buffer("earlyprintk", &sci_driver,
+ early_serial_buf, ARRAY_SIZE(early_serial_buf));
+ #endif
+ #ifdef CONFIG_SERIAL_SH_SCI_EARLYCON
+-static struct plat_sci_port port_cfg __initdata;
++static struct plat_sci_port port_cfg;
+
+ static int __init early_console_setup(struct earlycon_device *device,
+ int type)
+@@ -3575,6 +3597,7 @@ static int __init early_console_setup(struct earlycon_device *device,
+ port_cfg.type = type;
+ sci_ports[0].cfg = &port_cfg;
+ sci_ports[0].params = sci_probe_regmap(&port_cfg);
++ sci_uart_earlycon = true;
+ port_cfg.scscr = sci_serial_in(&sci_ports[0].port, SCSCR);
+ sci_serial_out(&sci_ports[0].port, SCSCR,
+ SCSCR_RE | SCSCR_TE | port_cfg.scscr);
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 777392914819d7..1d636578c1efc5 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -287,7 +287,7 @@ static void cdns_uart_handle_rx(void *dev_id, unsigned int isrstatus)
+ continue;
+ }
+
+- if (uart_handle_sysrq_char(port, data))
++ if (uart_prepare_sysrq_char(port, data))
+ continue;
+
+ if (is_rxbs_support) {
+@@ -495,7 +495,7 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
+ cdns_uart_handle_rx(dev_id, isrstatus);
+
+- uart_port_unlock(port);
++ uart_unlock_and_check_sysrq(port);
+ return IRQ_HANDLED;
+ }
+
+@@ -1380,9 +1380,7 @@ static void cdns_uart_console_write(struct console *co, const char *s,
+ unsigned int imr, ctrl;
+ int locked = 1;
+
+- if (port->sysrq)
+- locked = 0;
+- else if (oops_in_progress)
++ if (oops_in_progress)
+ locked = uart_port_trylock_irqsave(port, &flags);
+ else
+ uart_port_lock_irqsave(port, &flags);
+diff --git a/drivers/tty/vt/selection.c b/drivers/tty/vt/selection.c
+index 564341f1a74f3f..0bd6544e30a6b3 100644
+--- a/drivers/tty/vt/selection.c
++++ b/drivers/tty/vt/selection.c
+@@ -192,6 +192,20 @@ int set_selection_user(const struct tiocl_selection __user *sel,
+ if (copy_from_user(&v, sel, sizeof(*sel)))
+ return -EFAULT;
+
++ /*
++ * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to
++ * use without CAP_SYS_ADMIN as they do not modify the selection.
++ */
++ switch (v.sel_mode) {
++ case TIOCL_SELCLEAR:
++ case TIOCL_SELPOINTER:
++ case TIOCL_SELMOUSEREPORT:
++ break;
++ default:
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++ }
++
+ return set_selection_kernel(&v, tty);
+ }
+
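
The new switch in set_selection_user() lets the modes that cannot modify the selection through without privilege and funnels everything else into the CAP_SYS_ADMIN check that used to live in tioclinux(). The same gate as a runnable predicate (the enum values are illustrative, not the real TIOCL_* constants):

#include <stdbool.h>
#include <stdio.h>

enum sel_mode {		/* illustrative stand-ins for TIOCL_* values */
	SEL_CLEAR,
	SEL_POINTER,
	SEL_MOUSEREPORT,
	SEL_SETSEL,
	SEL_PASTESEL,
};

/* Modes that cannot modify the selection need no extra privilege. */
static bool needs_admin(enum sel_mode mode)
{
	switch (mode) {
	case SEL_CLEAR:
	case SEL_POINTER:
	case SEL_MOUSEREPORT:
		return false;
	default:
		return true;
	}
}

int main(void)
{
	for (enum sel_mode m = SEL_CLEAR; m <= SEL_PASTESEL; m++)
		printf("mode %d needs CAP_SYS_ADMIN: %s\n",
		       (int)m, needs_admin(m) ? "yes" : "no");
	return 0;
}
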
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 96842ce817af47..be5564ed8c018a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -3345,8 +3345,6 @@ int tioclinux(struct tty_struct *tty, unsigned long arg)
+
+ switch (type) {
+ case TIOCL_SETSEL:
+- if (!capable(CAP_SYS_ADMIN))
+- return -EPERM;
+ return set_selection_user(param, tty);
+ case TIOCL_PASTESEL:
+ if (!capable(CAP_SYS_ADMIN))
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 6cc9e61cca07de..b786cba9a270f4 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -10323,16 +10323,6 @@ int ufshcd_system_thaw(struct device *dev)
+ EXPORT_SYMBOL_GPL(ufshcd_system_thaw);
+ #endif /* CONFIG_PM_SLEEP */
+
+-/**
+- * ufshcd_dealloc_host - deallocate Host Bus Adapter (HBA)
+- * @hba: pointer to Host Bus Adapter (HBA)
+- */
+-void ufshcd_dealloc_host(struct ufs_hba *hba)
+-{
+- scsi_host_put(hba->host);
+-}
+-EXPORT_SYMBOL_GPL(ufshcd_dealloc_host);
+-
+ /**
+ * ufshcd_set_dma_mask - Set dma mask based on the controller
+ * addressing capability
+@@ -10351,12 +10341,26 @@ static int ufshcd_set_dma_mask(struct ufs_hba *hba)
+ return dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(32));
+ }
+
++/**
++ * ufshcd_devres_release - devres cleanup handler, invoked during release of
++ * hba->dev
++ * @host: pointer to SCSI host
++ */
++static void ufshcd_devres_release(void *host)
++{
++ scsi_host_put(host);
++}
++
+ /**
+ * ufshcd_alloc_host - allocate Host Bus Adapter (HBA)
+ * @dev: pointer to device handle
+ * @hba_handle: driver private handle
+ *
+ * Return: 0 on success, non-zero value on failure.
++ *
++ * NOTE: There is no corresponding ufshcd_dealloc_host() because this function
++ * keeps track of its allocations using devres and deallocates everything on
++ * device removal automatically.
+ */
+ int ufshcd_alloc_host(struct device *dev, struct ufs_hba **hba_handle)
+ {
+@@ -10378,6 +10382,13 @@ int ufshcd_alloc_host(struct device *dev, struct ufs_hba **hba_handle)
+ err = -ENOMEM;
+ goto out_error;
+ }
++
++ err = devm_add_action_or_reset(dev, ufshcd_devres_release,
++ host);
++ if (err)
++ return dev_err_probe(dev, err,
++ "failed to add ufshcd dealloc action\n");
++
+ host->nr_maps = HCTX_TYPE_POLL + 1;
+ hba = shost_priv(host);
+ hba->host = host;
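
devm_add_action_or_reset() registers a callback that the driver core invokes automatically when the device is released, which is why ufshcd_dealloc_host() and every call site of it can be deleted. A userspace analog of that ownership model, with a toy action list standing in for devres:

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for the devres action list attached to a struct device. */
struct device {
	void (*action[4])(void *);
	void *data[4];
	int nr;
};

static int devm_add_action(struct device *dev, void (*fn)(void *), void *data)
{
	if (dev->nr >= 4)
		return -1;
	dev->action[dev->nr] = fn;
	dev->data[dev->nr] = data;
	dev->nr++;
	return 0;
}

/* Runs on device release, in reverse registration order. */
static void device_release(struct device *dev)
{
	while (dev->nr-- > 0)
		dev->action[dev->nr](dev->data[dev->nr]);
}

static void host_put(void *host)
{
	printf("releasing host %p\n", host);
	free(host);
}

int main(void)
{
	struct device dev = { .nr = 0 };
	void *host = malloc(64);

	if (!host || devm_add_action(&dev, host_put, host))
		return 1;
	/* ... probe continues; no explicit dealloc needed on any path ... */
	device_release(&dev);	/* the driver core does this for us */
	return 0;
}
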
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 989692fb91083f..e12c5f9f795638 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -155,8 +155,9 @@ static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
+ {
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ union ufs_crypto_cap_entry cap;
+- bool config_enable =
+- cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE;
++
++ if (!(cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE))
++ return qcom_ice_evict_key(host->ice, slot);
+
+ /* Only AES-256-XTS has been tested so far. */
+ cap = hba->crypto_cap_array[cfg->crypto_cap_idx];
+@@ -164,14 +165,11 @@ static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
+ cap.key_size != UFS_CRYPTO_KEY_SIZE_256)
+ return -EOPNOTSUPP;
+
+- if (config_enable)
+- return qcom_ice_program_key(host->ice,
+- QCOM_ICE_CRYPTO_ALG_AES_XTS,
+- QCOM_ICE_CRYPTO_KEY_SIZE_256,
+- cfg->crypto_key,
+- cfg->data_unit_size, slot);
+- else
+- return qcom_ice_evict_key(host->ice, slot);
++ return qcom_ice_program_key(host->ice,
++ QCOM_ICE_CRYPTO_ALG_AES_XTS,
++ QCOM_ICE_CRYPTO_KEY_SIZE_256,
++ cfg->crypto_key,
++ cfg->data_unit_size, slot);
+ }
+
+ #else
+diff --git a/drivers/ufs/host/ufshcd-pci.c b/drivers/ufs/host/ufshcd-pci.c
+index 54e0cc0653a247..850ff71130d5e4 100644
+--- a/drivers/ufs/host/ufshcd-pci.c
++++ b/drivers/ufs/host/ufshcd-pci.c
+@@ -562,7 +562,6 @@ static void ufshcd_pci_remove(struct pci_dev *pdev)
+ pm_runtime_forbid(&pdev->dev);
+ pm_runtime_get_noresume(&pdev->dev);
+ ufshcd_remove(hba);
+- ufshcd_dealloc_host(hba);
+ }
+
+ /**
+@@ -607,7 +606,6 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ err = ufshcd_init(hba, mmio_base, pdev->irq);
+ if (err) {
+ dev_err(&pdev->dev, "Initialization failed\n");
+- ufshcd_dealloc_host(hba);
+ return err;
+ }
+
+diff --git a/drivers/ufs/host/ufshcd-pltfrm.c b/drivers/ufs/host/ufshcd-pltfrm.c
+index 505572d4fa878c..ffe5d1d2b21588 100644
+--- a/drivers/ufs/host/ufshcd-pltfrm.c
++++ b/drivers/ufs/host/ufshcd-pltfrm.c
+@@ -465,21 +465,17 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ struct device *dev = &pdev->dev;
+
+ mmio_base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(mmio_base)) {
+- err = PTR_ERR(mmio_base);
+- goto out;
+- }
++ if (IS_ERR(mmio_base))
++ return PTR_ERR(mmio_base);
+
+ irq = platform_get_irq(pdev, 0);
+- if (irq < 0) {
+- err = irq;
+- goto out;
+- }
++ if (irq < 0)
++ return irq;
+
+ err = ufshcd_alloc_host(dev, &hba);
+ if (err) {
+ dev_err(dev, "Allocation failed\n");
+- goto out;
++ return err;
+ }
+
+ hba->vops = vops;
+@@ -488,13 +484,13 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ if (err) {
+ dev_err(dev, "%s: clock parse failed %d\n",
+ __func__, err);
+- goto dealloc_host;
++ return err;
+ }
+ err = ufshcd_parse_regulator_info(hba);
+ if (err) {
+ dev_err(dev, "%s: regulator init failed %d\n",
+ __func__, err);
+- goto dealloc_host;
++ return err;
+ }
+
+ ufshcd_init_lanes_per_dir(hba);
+@@ -502,25 +498,20 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ err = ufshcd_parse_operating_points(hba);
+ if (err) {
+ dev_err(dev, "%s: OPP parse failed %d\n", __func__, err);
+- goto dealloc_host;
++ return err;
+ }
+
+ err = ufshcd_init(hba, mmio_base, irq);
+ if (err) {
+ dev_err_probe(dev, err, "Initialization failed with error %d\n",
+ err);
+- goto dealloc_host;
++ return err;
+ }
+
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
+ return 0;
+-
+-dealloc_host:
+- ufshcd_dealloc_host(hba);
+-out:
+- return err;
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_pltfrm_init);
+
+@@ -534,7 +525,6 @@ void ufshcd_pltfrm_remove(struct platform_device *pdev)
+
+ pm_runtime_get_sync(&pdev->dev);
+ ufshcd_remove(hba);
+- ufshcd_dealloc_host(hba);
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ }
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 48dee166e5d89c..7b23631f47449b 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -245,7 +245,6 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
+ {
+ struct f_uas *fu = cmd->fu;
+ struct se_cmd *se_cmd = &cmd->se_cmd;
+- struct usb_gadget *gadget = fuas_to_gadget(fu);
+ int ret;
+
+ init_completion(&cmd->write_complete);
+@@ -256,22 +255,6 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
+ return -EINVAL;
+ }
+
+- if (!gadget->sg_supported) {
+- cmd->data_buf = kmalloc(se_cmd->data_length, GFP_KERNEL);
+- if (!cmd->data_buf)
+- return -ENOMEM;
+-
+- fu->bot_req_out->buf = cmd->data_buf;
+- } else {
+- fu->bot_req_out->buf = NULL;
+- fu->bot_req_out->num_sgs = se_cmd->t_data_nents;
+- fu->bot_req_out->sg = se_cmd->t_data_sg;
+- }
+-
+- fu->bot_req_out->complete = usbg_data_write_cmpl;
+- fu->bot_req_out->length = se_cmd->data_length;
+- fu->bot_req_out->context = cmd;
+-
+ ret = usbg_prepare_w_request(cmd, fu->bot_req_out);
+ if (ret)
+ goto cleanup;
+@@ -973,6 +956,7 @@ static void usbg_data_write_cmpl(struct usb_ep *ep, struct usb_request *req)
+ return;
+
+ cleanup:
++ target_put_sess_cmd(se_cmd);
+ transport_generic_free_cmd(&cmd->se_cmd, 0);
+ }
+
+@@ -1065,7 +1049,7 @@ static void usbg_cmd_work(struct work_struct *work)
+
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+- TCM_UNSUPPORTED_SCSI_OPCODE, 1);
++ TCM_UNSUPPORTED_SCSI_OPCODE, 0);
+ }
+
+ static struct usbg_cmd *usbg_get_cmd(struct f_uas *fu,
+@@ -1193,7 +1177,7 @@ static void bot_cmd_work(struct work_struct *work)
+
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+- TCM_UNSUPPORTED_SCSI_OPCODE, 1);
++ TCM_UNSUPPORTED_SCSI_OPCODE, 0);
+ }
+
+ static int bot_submit_command(struct f_uas *fu,
+@@ -1969,43 +1953,39 @@ static int tcm_bind(struct usb_configuration *c, struct usb_function *f)
+ bot_intf_desc.bInterfaceNumber = iface;
+ uasp_intf_desc.bInterfaceNumber = iface;
+ fu->iface = iface;
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_bi_desc,
+- &uasp_bi_ep_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_bi_desc);
+ if (!ep)
+ goto ep_fail;
+
+ fu->ep_in = ep;
+
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_bo_desc,
+- &uasp_bo_ep_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_bo_desc);
+ if (!ep)
+ goto ep_fail;
+ fu->ep_out = ep;
+
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_status_desc,
+- &uasp_status_in_ep_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_status_desc);
+ if (!ep)
+ goto ep_fail;
+ fu->ep_status = ep;
+
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_cmd_desc,
+- &uasp_cmd_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_cmd_desc);
+ if (!ep)
+ goto ep_fail;
+ fu->ep_cmd = ep;
+
+ /* Assume endpoint addresses are the same for both speeds */
+- uasp_bi_desc.bEndpointAddress = uasp_ss_bi_desc.bEndpointAddress;
+- uasp_bo_desc.bEndpointAddress = uasp_ss_bo_desc.bEndpointAddress;
++ uasp_bi_desc.bEndpointAddress = uasp_fs_bi_desc.bEndpointAddress;
++ uasp_bo_desc.bEndpointAddress = uasp_fs_bo_desc.bEndpointAddress;
+ uasp_status_desc.bEndpointAddress =
+- uasp_ss_status_desc.bEndpointAddress;
+- uasp_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress;
+-
+- uasp_fs_bi_desc.bEndpointAddress = uasp_ss_bi_desc.bEndpointAddress;
+- uasp_fs_bo_desc.bEndpointAddress = uasp_ss_bo_desc.bEndpointAddress;
+- uasp_fs_status_desc.bEndpointAddress =
+- uasp_ss_status_desc.bEndpointAddress;
+- uasp_fs_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress;
++ uasp_fs_status_desc.bEndpointAddress;
++ uasp_cmd_desc.bEndpointAddress = uasp_fs_cmd_desc.bEndpointAddress;
++
++ uasp_ss_bi_desc.bEndpointAddress = uasp_fs_bi_desc.bEndpointAddress;
++ uasp_ss_bo_desc.bEndpointAddress = uasp_fs_bo_desc.bEndpointAddress;
++ uasp_ss_status_desc.bEndpointAddress =
++ uasp_fs_status_desc.bEndpointAddress;
++ uasp_ss_cmd_desc.bEndpointAddress = uasp_fs_cmd_desc.bEndpointAddress;
+
+ ret = usb_assign_descriptors(f, uasp_fs_function_desc,
+ uasp_hs_function_desc, uasp_ss_function_desc,
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index 3bf1043cd7957c..d63c2d266d0735 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -393,6 +393,11 @@ static ssize_t vfio_platform_read_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+@@ -477,6 +482,11 @@ static ssize_t vfio_platform_write_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
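
The added guard in the vfio mmio paths matters because off and count are unsigned: when off exceeds reg->size, the clamp expression reg->size - off wraps to a huge value rather than going negative, so the range check must come before any arithmetic on the offset. A runnable demonstration of the wraparound:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
	size_t size = 0x1000;	/* region size */
	size_t off = 0x2000;	/* offset past the end, from userspace */
	size_t count = 16;

	/* Unsigned subtraction wraps instead of going negative. */
	printf("size - off = %zu\n", size - off);

	/* Clamping alone leaves count unchanged, so access would go OOB. */
	size_t clamped = count < size - off ? count : size - off;
	printf("clamped count without range check: %zu\n", clamped);

	/* The fix: validate the offset before deriving anything from it. */
	if (off >= size)
		puts("off >= size: rejected with -EINVAL");
	return 0;
}
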
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index 390808ce935d50..b5b5ca1a44f70b 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -478,7 +478,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ * 28 bits (256 MB) is way more than reasonable in this case.
+ * If some top bits are set we have probable binary corruption.
+ */
+- if ((text_len | data_len | bss_len | stack_len | full_data) >> 28) {
++ if ((text_len | data_len | bss_len | stack_len | relocs | full_data) >> 28) {
+ pr_err("bad header\n");
+ ret = -ENOEXEC;
+ goto err;
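
OR-ing the header fields together and shifting right by 28 tests all of them against the 256 MB bound in one comparison: the result is nonzero exactly when some field has a bit set at position 28 or above, and the fix adds relocs to that chain. A runnable check of the idiom:

#include <stdint.h>
#include <stdio.h>

/* Nonzero iff any field has bits set at or above bit 28 (>= 256 MB). */
static int bad_header(uint32_t text_len, uint32_t data_len, uint32_t bss_len,
		      uint32_t stack_len, uint32_t relocs, uint32_t full_data)
{
	return (text_len | data_len | bss_len | stack_len |
		relocs | full_data) >> 28;
}

int main(void)
{
	/* Sane header: everything well under 256 MB. */
	printf("sane:    %s\n",
	       bad_header(0x1000, 0x2000, 0x100, 0x800, 12, 0x3000)
	       ? "bad" : "ok");

	/* Corrupt relocation count, which the old check let through. */
	printf("corrupt: %s\n",
	       bad_header(0x1000, 0x2000, 0x100, 0x800, 0xffffffff, 0x3000)
	       ? "bad" : "ok");
	return 0;
}
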
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 4fb521d91b0612..559c177456e6a0 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -242,7 +242,7 @@ int btrfs_drop_extents(struct btrfs_trans_handle *trans,
+ if (args->drop_cache)
+ btrfs_drop_extent_map_range(inode, args->start, args->end - 1, false);
+
+- if (args->start >= inode->disk_i_size && !args->replace_extent)
++ if (data_race(args->start >= inode->disk_i_size) && !args->replace_extent)
+ modify_tree = 0;
+
+ update_refs = (btrfs_root_id(root) != BTRFS_TREE_LOG_OBJECTID);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 9d9ce308488dd3..f7e7d864f41440 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7200,8 +7200,6 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ ret = -EAGAIN;
+ goto out;
+ }
+-
+- cond_resched();
+ }
+
+ if (file_extent)
+@@ -10144,6 +10142,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ ret = -EINTR;
+ goto out;
+ }
++
++ cond_resched();
+ }
+
+ if (bsi.block_len)
+diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
+index 2104d60c216166..4ed11b089ea95a 100644
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -1229,6 +1229,18 @@ struct btrfs_ordered_extent *btrfs_split_ordered_extent(
+ */
+ if (WARN_ON_ONCE(len >= ordered->num_bytes))
+ return ERR_PTR(-EINVAL);
++ /*
++ * If our ordered extent had an error there's no point in continuing.
++ * The error may have come from a transaction abort done either by this
++ * task or some other concurrent task, and the transaction abort path
++ * iterates over all existing ordered extents and sets the flag
++ * BTRFS_ORDERED_IOERR on them.
++ */
++ if (unlikely(flags & (1U << BTRFS_ORDERED_IOERR))) {
++ const int fs_error = BTRFS_FS_ERROR(fs_info);
++
++ return fs_error ? ERR_PTR(fs_error) : ERR_PTR(-EIO);
++ }
+ /* We cannot split partially completed ordered extents. */
+ if (ordered->bytes_left) {
+ ASSERT(!(flags & ~BTRFS_ORDERED_TYPE_FLAGS));
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 4fcd6cd4c1c244..fa9025c05d4e29 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1916,8 +1916,11 @@ int btrfs_qgroup_cleanup_dropped_subvolume(struct btrfs_fs_info *fs_info, u64 su
+ /*
+ * It's squota and the subvolume still has numbers needed for future
+ * accounting, in this case we can not delete it. Just skip it.
++ *
++	 * Or the qgroup may already have been removed by a qgroup rescan. In
++	 * both cases it is safe to ignore the error.
+ */
+- if (ret == -EBUSY)
++ if (ret == -EBUSY || ret == -ENOENT)
+ ret = 0;
+ return ret;
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index adcbdc970f9ea4..f24a80857cd600 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -4405,8 +4405,18 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ WARN_ON(!first_cow && level == 0);
+
+ node = rc->backref_cache.path[level];
+- BUG_ON(node->bytenr != buf->start &&
+- node->new_bytenr != buf->start);
++
++ /*
++ * If node->bytenr != buf->start and node->new_bytenr !=
++ * buf->start then we've got the wrong backref node for what we
++ * expected to see here and the cache is incorrect.
++ */
++ if (unlikely(node->bytenr != buf->start && node->new_bytenr != buf->start)) {
++ btrfs_err(fs_info,
++"bytenr %llu was found but our backref cache was expecting %llu or %llu",
++ buf->start, node->bytenr, node->new_bytenr);
++ return -EUCLEAN;
++ }
+
+ btrfs_backref_drop_node_buffer(node);
+ atomic_inc(&cow->refs);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 0fc873af891f65..82dd9ee89fbc5b 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -275,8 +275,10 @@ static noinline int join_transaction(struct btrfs_fs_info *fs_info,
+ cur_trans = fs_info->running_transaction;
+ if (cur_trans) {
+ if (TRANS_ABORTED(cur_trans)) {
++ const int abort_error = cur_trans->aborted;
++
+ spin_unlock(&fs_info->trans_lock);
+- return cur_trans->aborted;
++ return abort_error;
+ }
+ if (btrfs_blocked_trans_types[cur_trans->state] & type) {
+ spin_unlock(&fs_info->trans_lock);
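
Copying cur_trans->aborted into a local before dropping trans_lock matters because nothing pins the transaction once the lock is released, so a later dereference could be a use-after-free. The pattern in miniature, with a pthread mutex standing in for the spinlock:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

struct transaction {
	int aborted;
};

/* The object is only guaranteed to stay alive while the lock is held. */
static int check_aborted(struct transaction *trans)
{
	pthread_mutex_lock(&lock);
	int err = trans->aborted;	/* copy while still protected */
	pthread_mutex_unlock(&lock);
	return err;			/* trans may be freed by now */
}

int main(void)
{
	struct transaction *t = calloc(1, sizeof(*t));

	if (!t)
		return 1;
	t->aborted = -5;		/* e.g. -EIO */
	printf("aborted = %d\n", check_aborted(t));
	free(t);
	return 0;
}
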
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index f48242262b2177..f3af6330d74a7d 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -5698,18 +5698,18 @@ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+ *
+ * All the other cases --> mismatch
+ */
++ bool path_matched = true;
+ char *first = strstr(_tpath, auth->match.path);
+- if (first != _tpath) {
+- if (free_tpath)
+- kfree(_tpath);
+- return 0;
++ if (first != _tpath ||
++ (tlen > len && _tpath[len] != '/')) {
++ path_matched = false;
+ }
+
+- if (tlen > len && _tpath[len] != '/') {
+- if (free_tpath)
+- kfree(_tpath);
++ if (free_tpath)
++ kfree(_tpath);
++
++ if (!path_matched)
+ return 0;
+- }
+ }
+ }
+
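
The reworked ceph match reduces to: the configured path must be a prefix of the client path and must end on a path-component boundary, so "foo" matches "foo" and "foo/bar" but not "foobar". The same predicate as a runnable function, simplified to strncmp() rather than the driver's strstr() form:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* True iff prefix matches path up to a '/' or end-of-string boundary. */
static bool path_is_prefix(const char *path, const char *prefix)
{
	size_t len = strlen(prefix);

	if (strncmp(path, prefix, len) != 0)
		return false;
	return path[len] == '\0' || path[len] == '/';
}

int main(void)
{
	const char *cases[] = { "foo", "foo/bar", "foobar", "bar/foo" };

	for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-8s vs \"foo\": %s\n", cases[i],
		       path_is_prefix(cases[i], "foo") ? "match" : "mismatch");
	return 0;
}
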
+diff --git a/fs/exec.c b/fs/exec.c
+index 9c349a74f38589..67513bd606c249 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1351,7 +1351,28 @@ int begin_new_exec(struct linux_binprm * bprm)
+ set_dumpable(current->mm, SUID_DUMP_USER);
+
+ perf_event_exec();
+- __set_task_comm(me, kbasename(bprm->filename), true);
++
++ /*
++ * If the original filename was empty, alloc_bprm() made up a path
++ * that will probably not be useful to admins running ps or similar.
++ * Let's fix it up to be something reasonable.
++ */
++ if (bprm->comm_from_dentry) {
++ /*
++ * Hold RCU lock to keep the name from being freed behind our back.
++ * Use acquire semantics to make sure the terminating NUL from
++ * __d_alloc() is seen.
++ *
++ * Note, we're deliberately sloppy here. We don't need to care about
++ * detecting a concurrent rename and just want a terminated name.
++ */
++ rcu_read_lock();
++ __set_task_comm(me, smp_load_acquire(&bprm->file->f_path.dentry->d_name.name),
++ true);
++ rcu_read_unlock();
++ } else {
++ __set_task_comm(me, kbasename(bprm->filename), true);
++ }
+
+ /* An exec changes our domain. We are no longer part of the thread
+ group */
+@@ -1527,11 +1548,13 @@ static struct linux_binprm *alloc_bprm(int fd, struct filename *filename, int fl
+ if (fd == AT_FDCWD || filename->name[0] == '/') {
+ bprm->filename = filename->name;
+ } else {
+- if (filename->name[0] == '\0')
++ if (filename->name[0] == '\0') {
+ bprm->fdpath = kasprintf(GFP_KERNEL, "/dev/fd/%d", fd);
+- else
++ bprm->comm_from_dentry = 1;
++ } else {
+ bprm->fdpath = kasprintf(GFP_KERNEL, "/dev/fd/%d/%s",
+ fd, filename->name);
++ }
+ if (!bprm->fdpath)
+ goto out_free;
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 5ea644b679add5..73da51ac5a0349 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -5020,26 +5020,29 @@ static int statmount_mnt_opts(struct kstatmount *s, struct seq_file *seq)
+ {
+ struct vfsmount *mnt = s->mnt;
+ struct super_block *sb = mnt->mnt_sb;
++ size_t start = seq->count;
+ int err;
+
+- if (sb->s_op->show_options) {
+- size_t start = seq->count;
++ err = security_sb_show_options(seq, sb);
++ if (err)
++ return err;
+
++ if (sb->s_op->show_options) {
+ err = sb->s_op->show_options(seq, mnt->mnt_root);
+ if (err)
+ return err;
++ }
+
+- if (unlikely(seq_has_overflowed(seq)))
+- return -EAGAIN;
++ if (unlikely(seq_has_overflowed(seq)))
++ return -EAGAIN;
+
+- if (seq->count == start)
+- return 0;
++ if (seq->count == start)
++ return 0;
+
+- /* skip leading comma */
+- memmove(seq->buf + start, seq->buf + start + 1,
+- seq->count - start - 1);
+- seq->count--;
+- }
++ /* skip leading comma */
++ memmove(seq->buf + start, seq->buf + start + 1,
++ seq->count - start - 1);
++ seq->count--;
+
+ return 0;
+ }
+@@ -5050,22 +5053,29 @@ static int statmount_string(struct kstatmount *s, u64 flag)
+ size_t kbufsize;
+ struct seq_file *seq = &s->seq;
+ struct statmount *sm = &s->sm;
++ u32 start, *offp;
++
++ /* Reserve an empty string at the beginning for any unset offsets */
++ if (!seq->count)
++ seq_putc(seq, 0);
++
++ start = seq->count;
+
+ switch (flag) {
+ case STATMOUNT_FS_TYPE:
+- sm->fs_type = seq->count;
++ offp = &sm->fs_type;
+ ret = statmount_fs_type(s, seq);
+ break;
+ case STATMOUNT_MNT_ROOT:
+- sm->mnt_root = seq->count;
++ offp = &sm->mnt_root;
+ ret = statmount_mnt_root(s, seq);
+ break;
+ case STATMOUNT_MNT_POINT:
+- sm->mnt_point = seq->count;
++ offp = &sm->mnt_point;
+ ret = statmount_mnt_point(s, seq);
+ break;
+ case STATMOUNT_MNT_OPTS:
+- sm->mnt_opts = seq->count;
++ offp = &sm->mnt_opts;
+ ret = statmount_mnt_opts(s, seq);
+ break;
+ default:
+@@ -5087,6 +5097,7 @@ static int statmount_string(struct kstatmount *s, u64 flag)
+
+ seq->buf[seq->count++] = '\0';
+ sm->mask |= flag;
++ *offp = start;
+ return 0;
+ }
+
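
The statmount change reserves a NUL byte at offset 0 so that any string offset the kernel never fills in still points at a valid empty string, and it commits an offset (*offp = start) only after the string was emitted successfully. A compact userspace model of that string-table discipline (struct and function names are illustrative):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct strtab {
	char buf[256];
	uint32_t len;
};

/* Append s; return its offset, or 0 (the shared empty string) on failure. */
static uint32_t strtab_add(struct strtab *t, const char *s)
{
	/* Reserve an empty string at offset 0 for unset entries. */
	if (t->len == 0)
		t->buf[t->len++] = '\0';

	uint32_t start = t->len;
	size_t n = strlen(s) + 1;

	if (t->len + n > sizeof(t->buf))
		return 0;	/* offset keeps pointing at "" */
	memcpy(t->buf + t->len, s, n);
	t->len += (uint32_t)n;
	return start;		/* commit the offset only on success */
}

int main(void)
{
	struct strtab t = { .len = 0 };
	uint32_t fs_type = strtab_add(&t, "btrfs");
	uint32_t mnt_opts = 0;	/* never filled in */

	printf("fs_type  -> \"%s\"\n", t.buf + fs_type);
	printf("mnt_opts -> \"%s\"\n", t.buf + mnt_opts);	/* empty, not garbage */
	return 0;
}
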
+diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
+index 0eb20012792f07..d3f76101ad4b91 100644
+--- a/fs/nfs/Kconfig
++++ b/fs/nfs/Kconfig
+@@ -170,7 +170,8 @@ config ROOT_NFS
+
+ config NFS_FSCACHE
+ bool "Provide NFS client caching support"
+- depends on NFS_FS=m && NETFS_SUPPORT || NFS_FS=y && NETFS_SUPPORT=y
++ depends on NFS_FS
++ select NETFS_SUPPORT
+ select FSCACHE
+ help
+ Say Y here if you want NFS data to be cached locally on disc through
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index f78115c6c2c12a..a1cfe4cc60c4b1 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -847,6 +847,9 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ struct nfs4_pnfs_ds *ds;
+ u32 ds_idx;
+
++ if (NFS_SERVER(pgio->pg_inode)->flags &
++ (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
++ pgio->pg_maxretrans = io_maxretrans;
+ retry:
+ pnfs_generic_pg_check_layout(pgio, req);
+ /* Use full layout for now */
+@@ -860,6 +863,8 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ if (!pgio->pg_lseg)
+ goto out_nolseg;
+ }
++ /* Reset wb_nio, since getting layout segment was successful */
++ req->wb_nio = 0;
+
+ ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+ if (!ds) {
+@@ -876,14 +881,24 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
+
+ pgio->pg_mirror_idx = ds_idx;
+-
+- if (NFS_SERVER(pgio->pg_inode)->flags &
+- (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
+- pgio->pg_maxretrans = io_maxretrans;
+ return;
+ out_nolseg:
+- if (pgio->pg_error < 0)
+- return;
++ if (pgio->pg_error < 0) {
++ if (pgio->pg_error != -EAGAIN)
++ return;
++ /* Retry getting layout segment if lower layer returned -EAGAIN */
++ if (pgio->pg_maxretrans && req->wb_nio++ > pgio->pg_maxretrans) {
++ if (NFS_SERVER(pgio->pg_inode)->flags & NFS_MOUNT_SOFTERR)
++ pgio->pg_error = -ETIMEDOUT;
++ else
++ pgio->pg_error = -EIO;
++ return;
++ }
++ pgio->pg_error = 0;
++ /* Sleep for 1 second before retrying */
++ ssleep(1);
++ goto retry;
++ }
+ out_mds:
+ trace_pnfs_mds_fallback_pg_init_read(pgio->pg_inode,
+ 0, NFS4_MAX_UINT64, IOMODE_READ,
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 8d25aef51ad150..2fc1919dd3c09f 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -5747,15 +5747,14 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ struct nfs4_stateowner *so = resp->cstate.replay_owner;
+ struct svc_rqst *rqstp = resp->rqstp;
+ const struct nfsd4_operation *opdesc = op->opdesc;
+- int post_err_offset;
++ unsigned int op_status_offset;
+ nfsd4_enc encoder;
+- __be32 *p;
+
+- p = xdr_reserve_space(xdr, 8);
+- if (!p)
++ if (xdr_stream_encode_u32(xdr, op->opnum) != XDR_UNIT)
++ goto release;
++ op_status_offset = xdr->buf->len;
++ if (!xdr_reserve_space(xdr, XDR_UNIT))
+ goto release;
+- *p++ = cpu_to_be32(op->opnum);
+- post_err_offset = xdr->buf->len;
+
+ if (op->opnum == OP_ILLEGAL)
+ goto status;
+@@ -5796,20 +5795,21 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ * bug if we had to do this on a non-idempotent op:
+ */
+ warn_on_nonidempotent_op(op);
+- xdr_truncate_encode(xdr, post_err_offset);
++ xdr_truncate_encode(xdr, op_status_offset + XDR_UNIT);
+ }
+ if (so) {
+- int len = xdr->buf->len - post_err_offset;
++ int len = xdr->buf->len - (op_status_offset + XDR_UNIT);
+
+ so->so_replay.rp_status = op->status;
+ so->so_replay.rp_buflen = len;
+- read_bytes_from_xdr_buf(xdr->buf, post_err_offset,
++ read_bytes_from_xdr_buf(xdr->buf, op_status_offset + XDR_UNIT,
+ so->so_replay.rp_buf, len);
+ }
+ status:
+ op->status = nfsd4_map_status(op->status,
+ resp->cstate.minorversion);
+- *p = op->status;
++ write_bytes_to_xdr_buf(xdr->buf, op_status_offset,
++ &op->status, XDR_UNIT);
+ release:
+ if (opdesc && opdesc->op_release)
+ opdesc->op_release(&op->u);
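
The nfsd encoder now writes the opnum, reserves one XDR word for the status, encodes the operation body, and finally backfills the status at the remembered offset. The reserve-then-backfill idiom in self-contained form, with a plain byte buffer standing in for the xdr_stream:

#include <arpa/inet.h>	/* htonl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define XDR_UNIT 4

static unsigned char buf[64];
static size_t buflen;

static size_t encode_u32(uint32_t v)
{
	uint32_t be = htonl(v);
	size_t off = buflen;

	memcpy(buf + buflen, &be, XDR_UNIT);
	buflen += XDR_UNIT;
	return off;
}

int main(void)
{
	encode_u32(53);				/* opnum */
	size_t status_offset = buflen;		/* remember where status goes */
	encode_u32(0);				/* reserve one word */

	encode_u32(0xabcd);			/* encode the op body ... */

	/* ... and backfill the real status once it is known. */
	uint32_t status = htonl(10005);
	memcpy(buf + status_offset, &status, XDR_UNIT);

	printf("%zu bytes encoded, status at offset %zu\n",
	       buflen, status_offset);
	return 0;
}
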
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index aaca34ec678f26..fd2de4b2bef1a8 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -1219,7 +1219,7 @@ int nilfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (size) {
+ if (phys && blkphy << blkbits == phys + size) {
+ /* The current extent goes on */
+- size += n << blkbits;
++ size += (u64)n << blkbits;
+ } else {
+ /* Terminate the current extent */
+ ret = fiemap_fill_next_extent(
+@@ -1232,14 +1232,14 @@ int nilfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ flags = FIEMAP_EXTENT_MERGED;
+ logical = blkoff << blkbits;
+ phys = blkphy << blkbits;
+- size = n << blkbits;
++ size = (u64)n << blkbits;
+ }
+ } else {
+ /* Start a new extent */
+ flags = FIEMAP_EXTENT_MERGED;
+ logical = blkoff << blkbits;
+ phys = blkphy << blkbits;
+- size = n << blkbits;
++ size = (u64)n << blkbits;
+ }
+ blkoff += n;
+ }
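
The nilfs2 fix works because n << blkbits is evaluated in the type of n: with a 32-bit n the result truncates before the assignment widens it, so the cast has to happen before the shift. A short demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t n = 1u << 20;		/* 1Mi blocks */
	unsigned int blkbits = 12;	/* 4 KiB block size */

	uint64_t wrong = n << blkbits;		  /* shifted as 32-bit: wraps to 0 */
	uint64_t right = (uint64_t)n << blkbits;  /* widen first: 4 GiB */

	printf("wrong = %llu\nright = %llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}
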
+diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
+index 213206ebdd5810..7799f4d16ce999 100644
+--- a/fs/ocfs2/dir.c
++++ b/fs/ocfs2/dir.c
+@@ -1065,26 +1065,39 @@ int ocfs2_find_entry(const char *name, int namelen,
+ {
+ struct buffer_head *bh;
+ struct ocfs2_dir_entry *res_dir = NULL;
++ int ret = 0;
+
+ if (ocfs2_dir_indexed(dir))
+ return ocfs2_find_entry_dx(name, namelen, dir, lookup);
+
++ if (unlikely(i_size_read(dir) <= 0)) {
++ ret = -EFSCORRUPTED;
++ mlog_errno(ret);
++ goto out;
++ }
+ /*
+ * The unindexed dir code only uses part of the lookup
+ * structure, so there's no reason to push it down further
+ * than this.
+ */
+- if (OCFS2_I(dir)->ip_dyn_features & OCFS2_INLINE_DATA_FL)
++ if (OCFS2_I(dir)->ip_dyn_features & OCFS2_INLINE_DATA_FL) {
++ if (unlikely(i_size_read(dir) > dir->i_sb->s_blocksize)) {
++ ret = -EFSCORRUPTED;
++ mlog_errno(ret);
++ goto out;
++ }
+ bh = ocfs2_find_entry_id(name, namelen, dir, &res_dir);
+- else
++ } else {
+ bh = ocfs2_find_entry_el(name, namelen, dir, &res_dir);
++ }
+
+ if (bh == NULL)
+ return -ENOENT;
+
+ lookup->dl_leaf_bh = bh;
+ lookup->dl_entry = res_dir;
+- return 0;
++out:
++ return ret;
+ }
+
+ /*
+@@ -2010,6 +2023,7 @@ int ocfs2_lookup_ino_from_name(struct inode *dir, const char *name,
+ *
+ * Return 0 if the name does not exist
+ * Return -EEXIST if the directory contains the name
++ * Return -EFSCORRUPTED if corruption is found
+ *
+ * Callers should have i_rwsem + a cluster lock on dir
+ */
+@@ -2023,9 +2037,12 @@ int ocfs2_check_dir_for_entry(struct inode *dir,
+ trace_ocfs2_check_dir_for_entry(
+ (unsigned long long)OCFS2_I(dir)->ip_blkno, namelen, name);
+
+- if (ocfs2_find_entry(name, namelen, dir, &lookup) == 0) {
++ ret = ocfs2_find_entry(name, namelen, dir, &lookup);
++ if (ret == 0) {
+ ret = -EEXIST;
+ mlog_errno(ret);
++ } else if (ret == -ENOENT) {
++ ret = 0;
+ }
+
+ ocfs2_free_dir_lookup_result(&lookup);
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index c79b4291777f63..1e87554f6f4104 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2340,7 +2340,7 @@ static int ocfs2_verify_volume(struct ocfs2_dinode *di,
+ mlog(ML_ERROR, "found superblock with incorrect block "
+ "size bits: found %u, should be 9, 10, 11, or 12\n",
+ blksz_bits);
+- } else if ((1 << le32_to_cpu(blksz_bits)) != blksz) {
++ } else if ((1 << blksz_bits) != blksz) {
+ mlog(ML_ERROR, "found superblock with incorrect block "
+ "size: found %u, should be %u\n", 1 << blksz_bits, blksz);
+ } else if (le16_to_cpu(di->id2.i_super.s_major_rev_level) !=
+diff --git a/fs/ocfs2/symlink.c b/fs/ocfs2/symlink.c
+index d4c5fdcfa1e464..f5cf2255dc0972 100644
+--- a/fs/ocfs2/symlink.c
++++ b/fs/ocfs2/symlink.c
+@@ -65,7 +65,7 @@ static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio)
+
+ if (status < 0) {
+ mlog_errno(status);
+- return status;
++ goto out;
+ }
+
+ fe = (struct ocfs2_dinode *) bh->b_data;
+@@ -76,9 +76,10 @@ static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio)
+ memcpy(kaddr, link, len + 1);
+ kunmap_atomic(kaddr);
+ SetPageUptodate(page);
++out:
+ unlock_page(page);
+ brelse(bh);
+- return 0;
++ return status;
+ }
+
+ const struct address_space_operations ocfs2_fast_symlink_aops = {
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index 34a47fb0c57f25..5e4f7b411fbdb9 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -500,7 +500,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ * a program is not able to use ptrace(2) in that case. It is
+ * safe because the task has stopped executing permanently.
+ */
+- if (permitted && (task->flags & (PF_EXITING|PF_DUMPCORE))) {
++ if (permitted && (task->flags & (PF_EXITING|PF_DUMPCORE|PF_POSTCOREDUMP))) {
+ if (try_get_task_stack(task)) {
+ eip = KSTK_EIP(task);
+ esp = KSTK_ESP(task);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 9a4b3608b7d6f3..94785abc9b1b2d 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -320,7 +320,7 @@ struct smb_version_operations {
+ int (*handle_cancelled_mid)(struct mid_q_entry *, struct TCP_Server_Info *);
+ void (*downgrade_oplock)(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache);
++ __u16 epoch, bool *purge_cache);
+ /* process transaction2 response */
+ bool (*check_trans2)(struct mid_q_entry *, struct TCP_Server_Info *,
+ char *, int);
+@@ -515,12 +515,12 @@ struct smb_version_operations {
+ /* if we can do cache read operations */
+ bool (*is_read_op)(__u32);
+ /* set oplock level for the inode */
+- void (*set_oplock_level)(struct cifsInodeInfo *, __u32, unsigned int,
+- bool *);
++ void (*set_oplock_level)(struct cifsInodeInfo *cinode, __u32 oplock, __u16 epoch,
++ bool *purge_cache);
+ /* create lease context buffer for CREATE request */
+ char * (*create_lease_buf)(u8 *lease_key, u8 oplock);
+ /* parse lease context buffer and return oplock/epoch info */
+- __u8 (*parse_lease_buf)(void *buf, unsigned int *epoch, char *lkey);
++ __u8 (*parse_lease_buf)(void *buf, __u16 *epoch, char *lkey);
+ ssize_t (*copychunk_range)(const unsigned int,
+ struct cifsFileInfo *src_file,
+ struct cifsFileInfo *target_file,
+@@ -1415,7 +1415,7 @@ struct cifs_fid {
+ __u8 create_guid[16];
+ __u32 access;
+ struct cifs_pending_open *pending_open;
+- unsigned int epoch;
++ __u16 epoch;
+ #ifdef CONFIG_CIFS_DEBUG2
+ __u64 mid;
+ #endif /* CIFS_DEBUG2 */
+@@ -1448,7 +1448,7 @@ struct cifsFileInfo {
+ bool oplock_break_cancelled:1;
+ bool status_file_deleted:1; /* file has been deleted */
+ bool offload:1; /* offload final part of _put to a wq */
+- unsigned int oplock_epoch; /* epoch from the lease break */
++ __u16 oplock_epoch; /* epoch from the lease break */
+ __u32 oplock_level; /* oplock/lease level from the lease break */
+ int count;
+ spinlock_t file_info_lock; /* protects four flag/count fields above */
+@@ -1545,7 +1545,7 @@ struct cifsInodeInfo {
+ spinlock_t open_file_lock; /* protects openFileList */
+ __u32 cifsAttrs; /* e.g. DOS archive bit, sparse, compressed, system */
+ unsigned int oplock; /* oplock/lease level we have */
+- unsigned int epoch; /* used to track lease state changes */
++ __u16 epoch; /* used to track lease state changes */
+ #define CIFS_INODE_PENDING_OPLOCK_BREAK (0) /* oplock break in progress */
+ #define CIFS_INODE_PENDING_WRITERS (1) /* Writes in progress */
+ #define CIFS_INODE_FLAG_UNUSED (2) /* Unused flag */
+diff --git a/fs/smb/client/dir.c b/fs/smb/client/dir.c
+index 864b194dbaa0a0..1822493dd0842e 100644
+--- a/fs/smb/client/dir.c
++++ b/fs/smb/client/dir.c
+@@ -627,7 +627,7 @@ int cifs_mknod(struct mnt_idmap *idmap, struct inode *inode,
+ goto mknod_out;
+ }
+
+- trace_smb3_mknod_enter(xid, tcon->ses->Suid, tcon->tid, full_path);
++ trace_smb3_mknod_enter(xid, tcon->tid, tcon->ses->Suid, full_path);
+
+ rc = tcon->ses->server->ops->make_node(xid, inode, direntry, tcon,
+ full_path, mode,
+@@ -635,9 +635,9 @@ int cifs_mknod(struct mnt_idmap *idmap, struct inode *inode,
+
+ mknod_out:
+ if (rc)
+- trace_smb3_mknod_err(xid, tcon->ses->Suid, tcon->tid, rc);
++ trace_smb3_mknod_err(xid, tcon->tid, tcon->ses->Suid, rc);
+ else
+- trace_smb3_mknod_done(xid, tcon->ses->Suid, tcon->tid);
++ trace_smb3_mknod_done(xid, tcon->tid, tcon->ses->Suid);
+
+ free_dentry_path(page);
+ free_xid(xid);
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index db3695eddcf9d5..c70f4961c4eb78 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -377,7 +377,7 @@ coalesce_t2(char *second_buf, struct smb_hdr *target_hdr)
+ static void
+ cifs_downgrade_oplock(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ cifs_set_oplock_level(cinode, oplock);
+ }
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index b935c1a62d10cf..7dfd3eb3847b33 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -298,8 +298,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_query_info_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_query_info_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_POSIX_QUERY_INFO:
+ rqst[num_rqst].rq_iov = &vars->qi_iov;
+@@ -334,18 +334,18 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_posix_query_info_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_posix_query_info_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_DELETE:
+- trace_smb3_delete_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_delete_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_MKDIR:
+ /*
+ * Directories are created through parameters in the
+ * SMB2_open() call.
+ */
+- trace_smb3_mkdir_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_mkdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_RMDIR:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -363,7 +363,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ smb2_set_next_command(tcon, &rqst[num_rqst]);
+ smb2_set_related(&rqst[num_rqst++]);
+- trace_smb3_rmdir_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_rmdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_EOF:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -398,7 +398,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_set_eof_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_set_eof_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_INFO:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -429,8 +429,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_set_info_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_set_info_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_RENAME:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -469,7 +469,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_rename_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_rename_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_HARDLINK:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -496,7 +496,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ smb2_set_next_command(tcon, &rqst[num_rqst]);
+ smb2_set_related(&rqst[num_rqst++]);
+- trace_smb3_hardlink_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_hardlink_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_REPARSE:
+ rqst[num_rqst].rq_iov = vars->io_iov;
+@@ -523,8 +523,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_set_reparse_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_set_reparse_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_GET_REPARSE:
+ rqst[num_rqst].rq_iov = vars->io_iov;
+@@ -549,8 +549,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_get_reparse_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_get_reparse_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_QUERY_WSL_EA:
+ rqst[num_rqst].rq_iov = &vars->ea_iov;
+@@ -663,11 +663,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ SMB2_query_info_free(&rqst[num_rqst++]);
+ if (rc)
+- trace_smb3_query_info_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_query_info_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ else
+- trace_smb3_query_info_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_query_info_compound_done(xid, tcon->tid,
++ ses->Suid);
+ break;
+ case SMB2_OP_POSIX_QUERY_INFO:
+ idata = in_iov[i].iov_base;
+@@ -690,15 +690,15 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+
+ SMB2_query_info_free(&rqst[num_rqst++]);
+ if (rc)
+- trace_smb3_posix_query_info_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_posix_query_info_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ else
+- trace_smb3_posix_query_info_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_posix_query_info_compound_done(xid, tcon->tid,
++ ses->Suid);
+ break;
+ case SMB2_OP_DELETE:
+ if (rc)
+- trace_smb3_delete_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_delete_err(xid, tcon->tid, ses->Suid, rc);
+ else {
+ /*
+ * If dentry (hence, inode) is NULL, lease break is going to
+@@ -706,59 +706,59 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ */
+ if (inode)
+ cifs_mark_open_handles_for_deleted_file(inode, full_path);
+- trace_smb3_delete_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_delete_done(xid, tcon->tid, ses->Suid);
+ }
+ break;
+ case SMB2_OP_MKDIR:
+ if (rc)
+- trace_smb3_mkdir_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_mkdir_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_mkdir_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_mkdir_done(xid, tcon->tid, ses->Suid);
+ break;
+ case SMB2_OP_HARDLINK:
+ if (rc)
+- trace_smb3_hardlink_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_hardlink_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_hardlink_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_hardlink_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_RENAME:
+ if (rc)
+- trace_smb3_rename_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_rename_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_rename_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_rename_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_RMDIR:
+ if (rc)
+- trace_smb3_rmdir_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_rmdir_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_rmdir_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_rmdir_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_EOF:
+ if (rc)
+- trace_smb3_set_eof_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_set_eof_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_set_eof_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_set_eof_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_INFO:
+ if (rc)
+- trace_smb3_set_info_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_set_info_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ else
+- trace_smb3_set_info_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_set_info_compound_done(xid, tcon->tid,
++ ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_REPARSE:
+ if (rc) {
+- trace_smb3_set_reparse_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_set_reparse_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ } else {
+- trace_smb3_set_reparse_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_set_reparse_compound_done(xid, tcon->tid,
++ ses->Suid);
+ }
+ SMB2_ioctl_free(&rqst[num_rqst++]);
+ break;
+@@ -771,18 +771,18 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ rbuf = reparse_buf_ptr(iov);
+ if (IS_ERR(rbuf)) {
+ rc = PTR_ERR(rbuf);
+- trace_smb3_set_reparse_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_get_reparse_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ } else {
+ idata->reparse.tag = le32_to_cpu(rbuf->ReparseTag);
+- trace_smb3_set_reparse_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_get_reparse_compound_done(xid, tcon->tid,
++ ses->Suid);
+ }
+ memset(iov, 0, sizeof(*iov));
+ resp_buftype[i + 1] = CIFS_NO_BUFFER;
+ } else {
+- trace_smb3_set_reparse_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_get_reparse_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ }
+ SMB2_ioctl_free(&rqst[num_rqst++]);
+ break;
+@@ -799,11 +799,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ if (!rc) {
+- trace_smb3_query_wsl_ea_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_query_wsl_ea_compound_done(xid, tcon->tid,
++ ses->Suid);
+ } else {
+- trace_smb3_query_wsl_ea_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_query_wsl_ea_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ }
+ SMB2_query_info_free(&rqst[num_rqst++]);
+ break;
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 6bacf754b57efd..44952727fef9ef 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -3931,22 +3931,22 @@ static long smb3_fallocate(struct file *file, struct cifs_tcon *tcon, int mode,
+ static void
+ smb2_downgrade_oplock(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ server->ops->set_oplock_level(cinode, oplock, 0, NULL);
+ }
+
+ static void
+ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache);
++ __u16 epoch, bool *purge_cache);
+
+ static void
+ smb3_downgrade_oplock(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ unsigned int old_state = cinode->oplock;
+- unsigned int old_epoch = cinode->epoch;
++ __u16 old_epoch = cinode->epoch;
+ unsigned int new_state;
+
+ if (epoch > old_epoch) {
+@@ -3966,7 +3966,7 @@ smb3_downgrade_oplock(struct TCP_Server_Info *server,
+
+ static void
+ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ oplock &= 0xFF;
+ cinode->lease_granted = false;
+@@ -3990,7 +3990,7 @@ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+
+ static void
+ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ char message[5] = {0};
+ unsigned int new_oplock = 0;
+@@ -4027,7 +4027,7 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+
+ static void
+ smb3_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ unsigned int old_oplock = cinode->oplock;
+
+@@ -4141,7 +4141,7 @@ smb3_create_lease_buf(u8 *lease_key, u8 oplock)
+ }
+
+ static __u8
+-smb2_parse_lease_buf(void *buf, unsigned int *epoch, char *lease_key)
++smb2_parse_lease_buf(void *buf, __u16 *epoch, char *lease_key)
+ {
+ struct create_lease *lc = (struct create_lease *)buf;
+
+@@ -4152,7 +4152,7 @@ smb2_parse_lease_buf(void *buf, unsigned int *epoch, char *lease_key)
+ }
+
+ static __u8
+-smb3_parse_lease_buf(void *buf, unsigned int *epoch, char *lease_key)
++smb3_parse_lease_buf(void *buf, __u16 *epoch, char *lease_key)
+ {
+ struct create_lease_v2 *lc = (struct create_lease_v2 *)buf;
+
+@@ -5104,6 +5104,7 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ {
+ struct TCP_Server_Info *server = tcon->ses->server;
+ struct cifs_open_parms oparms;
++ struct cifs_open_info_data idata;
+ struct cifs_io_parms io_parms = {};
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct cifs_fid fid;
+@@ -5173,10 +5174,20 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ CREATE_OPTION_SPECIAL, ACL_NO_MODE);
+ oparms.fid = &fid;
+
+- rc = server->ops->open(xid, &oparms, &oplock, NULL);
++ rc = server->ops->open(xid, &oparms, &oplock, &idata);
+ if (rc)
+ goto out;
+
++ /*
++	 * Check if the server honored the ATTR_SYSTEM flag set by the CREATE_OPTION_SPECIAL
++	 * option. If not, the server does not support ATTR_SYSTEM and the newly
++	 * created file is not SFU compatible, which means that the call failed.
++ */
++ if (!(le32_to_cpu(idata.fi.Attributes) & ATTR_SYSTEM)) {
++ rc = -EOPNOTSUPP;
++ goto out_close;
++ }
++
+ if (type_len + data_len > 0) {
+ io_parms.pid = current->tgid;
+ io_parms.tcon = tcon;
+@@ -5191,8 +5202,18 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ iov, ARRAY_SIZE(iov)-1);
+ }
+
++out_close:
+ server->ops->close(xid, tcon, &fid);
+
++ /*
++ * If CREATE was successful but either setting ATTR_SYSTEM failed or
++	 * writing type/data information failed, then remove the intermediate
++	 * object created by CREATE. Otherwise the intermediate empty object
++	 * stays on the server.
++ */
++ if (rc)
++ server->ops->unlink(xid, tcon, full_path, cifs_sb, NULL);
++
+ out:
+ kfree(symname_utf16);
+ return rc;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 4750505465ae63..2e3f78fe9210ff 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -2335,7 +2335,7 @@ parse_posix_ctxt(struct create_context *cc, struct smb2_file_all_info *info,
+
+ int smb2_parse_contexts(struct TCP_Server_Info *server,
+ struct kvec *rsp_iov,
+- unsigned int *epoch,
++ __u16 *epoch,
+ char *lease_key, __u8 *oplock,
+ struct smb2_file_all_info *buf,
+ struct create_posix_rsp *posix)
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 09349fa8da039a..51d890f74e36f3 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -282,7 +282,7 @@ extern enum securityEnum smb2_select_sectype(struct TCP_Server_Info *,
+ enum securityEnum);
+ int smb2_parse_contexts(struct TCP_Server_Info *server,
+ struct kvec *rsp_iov,
+- unsigned int *epoch,
++ __u16 *epoch,
+ char *lease_key, __u8 *oplock,
+ struct smb2_file_all_info *buf,
+ struct create_posix_rsp *posix);
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 6de351cc2b60e0..69bac122adbe06 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -626,6 +626,9 @@ ksmbd_ipc_spnego_authen_request(const char *spnego_blob, int blob_len)
+ struct ksmbd_spnego_authen_request *req;
+ struct ksmbd_spnego_authen_response *resp;
+
++ if (blob_len > KSMBD_IPC_MAX_PAYLOAD)
++ return NULL;
++
+ msg = ipc_msg_alloc(sizeof(struct ksmbd_spnego_authen_request) +
+ blob_len + 1);
+ if (!msg)
+@@ -805,6 +808,9 @@ struct ksmbd_rpc_command *ksmbd_rpc_write(struct ksmbd_session *sess, int handle
+ struct ksmbd_rpc_command *req;
+ struct ksmbd_rpc_command *resp;
+
++ if (payload_sz > KSMBD_IPC_MAX_PAYLOAD)
++ return NULL;
++
+ msg = ipc_msg_alloc(sizeof(struct ksmbd_rpc_command) + payload_sz + 1);
+ if (!msg)
+ return NULL;
+@@ -853,6 +859,9 @@ struct ksmbd_rpc_command *ksmbd_rpc_ioctl(struct ksmbd_session *sess, int handle
+ struct ksmbd_rpc_command *req;
+ struct ksmbd_rpc_command *resp;
+
++ if (payload_sz > KSMBD_IPC_MAX_PAYLOAD)
++ return NULL;
++
+ msg = ipc_msg_alloc(sizeof(struct ksmbd_rpc_command) + payload_sz + 1);
+ if (!msg)
+ return NULL;
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index 5180cbf5a90b4b..0185c92df8c2ea 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -1036,11 +1036,20 @@ xlog_recover_buf_commit_pass2(
+ error = xlog_recover_do_primary_sb_buffer(mp, item, bp, buf_f,
+ current_lsn);
+ if (error)
+- goto out_release;
++ goto out_writebuf;
+ } else {
+ xlog_recover_do_reg_buffer(mp, item, bp, buf_f, current_lsn);
+ }
+
++ /*
++	 * A buffer held by a buf log item during 'normal' buffer recovery must
++	 * be committed through the buffer I/O submission path to ensure proper
++	 * release. When an error occurs during sb buffer recovery, the log is
++	 * shut down before the buffer list is submitted so that buffers can be
++	 * released correctly through the ioend failure path.
++ */
++out_writebuf:
++
+ /*
+ * Perform delayed write on the buffer. Asynchronous writes will be
+ * slower when taking into account all the buffers to be flushed.
+diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
+index c1b211c260a9d5..0d73b59f1c9e57 100644
+--- a/fs/xfs/xfs_dquot.c
++++ b/fs/xfs/xfs_dquot.c
+@@ -68,6 +68,31 @@ xfs_dquot_mark_sick(
+ }
+ }
+
++/*
++ * Detach the dquot buffer if it's still attached, because we can get called
++ * through dqpurge after a log shutdown. Caller must hold the dqflock or have
++ * otherwise isolated the dquot.
++ */
++void
++xfs_dquot_detach_buf(
++ struct xfs_dquot *dqp)
++{
++ struct xfs_dq_logitem *qlip = &dqp->q_logitem;
++ struct xfs_buf *bp = NULL;
++
++ spin_lock(&qlip->qli_lock);
++ if (qlip->qli_item.li_buf) {
++ bp = qlip->qli_item.li_buf;
++ qlip->qli_item.li_buf = NULL;
++ }
++ spin_unlock(&qlip->qli_lock);
++ if (bp) {
++ xfs_buf_lock(bp);
++ list_del_init(&qlip->qli_item.li_bio_list);
++ xfs_buf_relse(bp);
++ }
++}
++
+ /*
+ * This is called to free all the memory associated with a dquot
+ */
+@@ -76,6 +101,7 @@ xfs_qm_dqdestroy(
+ struct xfs_dquot *dqp)
+ {
+ ASSERT(list_empty(&dqp->q_lru));
++ ASSERT(dqp->q_logitem.qli_item.li_buf == NULL);
+
+ kvfree(dqp->q_logitem.qli_item.li_lv_shadow);
+ mutex_destroy(&dqp->q_qlock);
+@@ -1136,9 +1162,11 @@ static void
+ xfs_qm_dqflush_done(
+ struct xfs_log_item *lip)
+ {
+- struct xfs_dq_logitem *qip = (struct xfs_dq_logitem *)lip;
+- struct xfs_dquot *dqp = qip->qli_dquot;
++ struct xfs_dq_logitem *qlip =
++ container_of(lip, struct xfs_dq_logitem, qli_item);
++ struct xfs_dquot *dqp = qlip->qli_dquot;
+ struct xfs_ail *ailp = lip->li_ailp;
++ struct xfs_buf *bp = NULL;
+ xfs_lsn_t tail_lsn;
+
+ /*
+@@ -1150,12 +1178,12 @@ xfs_qm_dqflush_done(
+ * holding the lock before removing the dquot from the AIL.
+ */
+ if (test_bit(XFS_LI_IN_AIL, &lip->li_flags) &&
+- ((lip->li_lsn == qip->qli_flush_lsn) ||
++ ((lip->li_lsn == qlip->qli_flush_lsn) ||
+ test_bit(XFS_LI_FAILED, &lip->li_flags))) {
+
+ spin_lock(&ailp->ail_lock);
+ xfs_clear_li_failed(lip);
+- if (lip->li_lsn == qip->qli_flush_lsn) {
++ if (lip->li_lsn == qlip->qli_flush_lsn) {
+ /* xfs_ail_update_finish() drops the AIL lock */
+ tail_lsn = xfs_ail_delete_one(ailp, lip);
+ xfs_ail_update_finish(ailp, tail_lsn);
+@@ -1168,6 +1196,19 @@ xfs_qm_dqflush_done(
+ * Release the dq's flush lock since we're done with it.
+ */
+ xfs_dqfunlock(dqp);
++
++ /*
++ * If this dquot hasn't been dirtied since initiating the last dqflush,
++ * release the buffer reference.
++ */
++ spin_lock(&qlip->qli_lock);
++ if (!qlip->qli_dirty) {
++ bp = lip->li_buf;
++ lip->li_buf = NULL;
++ }
++ spin_unlock(&qlip->qli_lock);
++ if (bp)
++ xfs_buf_rele(bp);
+ }
+
+ void
+@@ -1190,7 +1231,7 @@ xfs_buf_dquot_io_fail(
+
+ spin_lock(&bp->b_mount->m_ail->ail_lock);
+ list_for_each_entry(lip, &bp->b_li_list, li_bio_list)
+- xfs_set_li_failed(lip, bp);
++ set_bit(XFS_LI_FAILED, &lip->li_flags);
+ spin_unlock(&bp->b_mount->m_ail->ail_lock);
+ }
+
+@@ -1232,6 +1273,115 @@ xfs_qm_dqflush_check(
+ return NULL;
+ }
+
++/*
++ * Get the buffer containing the on-disk dquot.
++ *
++ * Requires the dquot flush lock. On failure, clears the dirty flag, deletes
++ * the quota log item from the AIL, and shuts down the system.
++ */
++static int
++xfs_dquot_read_buf(
++ struct xfs_trans *tp,
++ struct xfs_dquot *dqp,
++ struct xfs_buf **bpp)
++{
++ struct xfs_mount *mp = dqp->q_mount;
++ struct xfs_buf *bp = NULL;
++ int error;
++
++ error = xfs_trans_read_buf(mp, tp, mp->m_ddev_targp, dqp->q_blkno,
++ mp->m_quotainfo->qi_dqchunklen, 0,
++ &bp, &xfs_dquot_buf_ops);
++ if (xfs_metadata_is_sick(error))
++ xfs_dquot_mark_sick(dqp);
++ if (error)
++ goto out_abort;
++
++ *bpp = bp;
++ return 0;
++
++out_abort:
++ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
++ xfs_trans_ail_delete(&dqp->q_logitem.qli_item, 0);
++ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
++ return error;
++}
++
++/*
++ * Attach a dquot buffer to this dquot to avoid allocating a buffer during a
++ * dqflush, since dqflush can be called from reclaim context. Caller must hold
++ * the dqlock.
++ */
++int
++xfs_dquot_attach_buf(
++ struct xfs_trans *tp,
++ struct xfs_dquot *dqp)
++{
++ struct xfs_dq_logitem *qlip = &dqp->q_logitem;
++ struct xfs_log_item *lip = &qlip->qli_item;
++ int error;
++
++ spin_lock(&qlip->qli_lock);
++ if (!lip->li_buf) {
++ struct xfs_buf *bp = NULL;
++
++ spin_unlock(&qlip->qli_lock);
++ error = xfs_dquot_read_buf(tp, dqp, &bp);
++ if (error)
++ return error;
++
++ /*
++ * Hold the dquot buffer so that we retain our ref to it after
++ * detaching it from the transaction, then give that ref to the
++ * dquot log item so that the AIL does not have to read the
++ * dquot buffer to push this item.
++ */
++ xfs_buf_hold(bp);
++ xfs_trans_brelse(tp, bp);
++
++ spin_lock(&qlip->qli_lock);
++ lip->li_buf = bp;
++ }
++ qlip->qli_dirty = true;
++ spin_unlock(&qlip->qli_lock);
++
++ return 0;
++}
++
++/*
++ * Get a new reference to the dquot buffer attached to this dquot for a dqflush
++ * operation.
++ *
++ * Returns 0 and a NULL bp if none was attached to the dquot; 0 and a locked
++ * bp; or -EAGAIN if the buffer could not be locked.
++ */
++int
++xfs_dquot_use_attached_buf(
++ struct xfs_dquot *dqp,
++ struct xfs_buf **bpp)
++{
++ struct xfs_buf *bp = dqp->q_logitem.qli_item.li_buf;
++
++ /*
++ * A NULL buffer can happen if the dquot dirty flag was set but the
++ * filesystem shut down before transaction commit happened. In that
++ * case we're not going to flush anyway.
++ */
++ if (!bp) {
++ ASSERT(xfs_is_shutdown(dqp->q_mount));
++
++ *bpp = NULL;
++ return 0;
++ }
++
++ if (!xfs_buf_trylock(bp))
++ return -EAGAIN;
++
++ xfs_buf_hold(bp);
++ *bpp = bp;
++ return 0;
++}
++
+ /*
+ * Write a modified dquot to disk.
+ * The dquot must be locked and the flush lock too taken by caller.
+@@ -1243,11 +1393,11 @@ xfs_qm_dqflush_check(
+ int
+ xfs_qm_dqflush(
+ struct xfs_dquot *dqp,
+- struct xfs_buf **bpp)
++ struct xfs_buf *bp)
+ {
+ struct xfs_mount *mp = dqp->q_mount;
+- struct xfs_log_item *lip = &dqp->q_logitem.qli_item;
+- struct xfs_buf *bp;
++ struct xfs_dq_logitem *qlip = &dqp->q_logitem;
++ struct xfs_log_item *lip = &qlip->qli_item;
+ struct xfs_dqblk *dqblk;
+ xfs_failaddr_t fa;
+ int error;
+@@ -1257,28 +1407,12 @@ xfs_qm_dqflush(
+
+ trace_xfs_dqflush(dqp);
+
+- *bpp = NULL;
+-
+ xfs_qm_dqunpin_wait(dqp);
+
+- /*
+- * Get the buffer containing the on-disk dquot
+- */
+- error = xfs_trans_read_buf(mp, NULL, mp->m_ddev_targp, dqp->q_blkno,
+- mp->m_quotainfo->qi_dqchunklen, XBF_TRYLOCK,
+- &bp, &xfs_dquot_buf_ops);
+- if (error == -EAGAIN)
+- goto out_unlock;
+- if (xfs_metadata_is_sick(error))
+- xfs_dquot_mark_sick(dqp);
+- if (error)
+- goto out_abort;
+-
+ fa = xfs_qm_dqflush_check(dqp);
+ if (fa) {
+ xfs_alert(mp, "corrupt dquot ID 0x%x in memory at %pS",
+ dqp->q_id, fa);
+- xfs_buf_relse(bp);
+ xfs_dquot_mark_sick(dqp);
+ error = -EFSCORRUPTED;
+ goto out_abort;
+@@ -1293,8 +1427,15 @@ xfs_qm_dqflush(
+ */
+ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
+
+- xfs_trans_ail_copy_lsn(mp->m_ail, &dqp->q_logitem.qli_flush_lsn,
+- &dqp->q_logitem.qli_item.li_lsn);
++ /*
++ * We hold the dquot lock, so nobody can dirty it while we're
++ * scheduling the write out. Clear the dirty-since-flush flag.
++ */
++ spin_lock(&qlip->qli_lock);
++ qlip->qli_dirty = false;
++ spin_unlock(&qlip->qli_lock);
++
++ xfs_trans_ail_copy_lsn(mp->m_ail, &qlip->qli_flush_lsn, &lip->li_lsn);
+
+ /*
+ * copy the lsn into the on-disk dquot now while we have the in memory
+@@ -1306,7 +1447,7 @@ xfs_qm_dqflush(
+ * of a dquot without an up-to-date CRC getting to disk.
+ */
+ if (xfs_has_crc(mp)) {
+- dqblk->dd_lsn = cpu_to_be64(dqp->q_logitem.qli_item.li_lsn);
++ dqblk->dd_lsn = cpu_to_be64(lip->li_lsn);
+ xfs_update_cksum((char *)dqblk, sizeof(struct xfs_dqblk),
+ XFS_DQUOT_CRC_OFF);
+ }
+@@ -1316,7 +1457,7 @@ xfs_qm_dqflush(
+ * the AIL and release the flush lock once the dquot is synced to disk.
+ */
+ bp->b_flags |= _XBF_DQUOTS;
+- list_add_tail(&dqp->q_logitem.qli_item.li_bio_list, &bp->b_li_list);
++ list_add_tail(&lip->li_bio_list, &bp->b_li_list);
+
+ /*
+ * If the buffer is pinned then push on the log so we won't
+@@ -1328,14 +1469,12 @@ xfs_qm_dqflush(
+ }
+
+ trace_xfs_dqflush_done(dqp);
+- *bpp = bp;
+ return 0;
+
+ out_abort:
+ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
+ xfs_trans_ail_delete(lip, 0);
+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+-out_unlock:
+ xfs_dqfunlock(dqp);
+ return error;
+ }
+diff --git a/fs/xfs/xfs_dquot.h b/fs/xfs/xfs_dquot.h
+index 677bb2dc9ac913..bd7bfd9e402e5b 100644
+--- a/fs/xfs/xfs_dquot.h
++++ b/fs/xfs/xfs_dquot.h
+@@ -204,7 +204,7 @@ void xfs_dquot_to_disk(struct xfs_disk_dquot *ddqp, struct xfs_dquot *dqp);
+ #define XFS_DQ_IS_DIRTY(dqp) ((dqp)->q_flags & XFS_DQFLAG_DIRTY)
+
+ void xfs_qm_dqdestroy(struct xfs_dquot *dqp);
+-int xfs_qm_dqflush(struct xfs_dquot *dqp, struct xfs_buf **bpp);
++int xfs_qm_dqflush(struct xfs_dquot *dqp, struct xfs_buf *bp);
+ void xfs_qm_dqunpin_wait(struct xfs_dquot *dqp);
+ void xfs_qm_adjust_dqtimers(struct xfs_dquot *d);
+ void xfs_qm_adjust_dqlimits(struct xfs_dquot *d);
+@@ -227,6 +227,10 @@ void xfs_dqlockn(struct xfs_dqtrx *q);
+
+ void xfs_dquot_set_prealloc_limits(struct xfs_dquot *);
+
++int xfs_dquot_attach_buf(struct xfs_trans *tp, struct xfs_dquot *dqp);
++int xfs_dquot_use_attached_buf(struct xfs_dquot *dqp, struct xfs_buf **bpp);
++void xfs_dquot_detach_buf(struct xfs_dquot *dqp);
++
+ static inline struct xfs_dquot *xfs_qm_dqhold(struct xfs_dquot *dqp)
+ {
+ xfs_dqlock(dqp);
+diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c
+index 7d19091215b080..271b195ebb9326 100644
+--- a/fs/xfs/xfs_dquot_item.c
++++ b/fs/xfs/xfs_dquot_item.c
+@@ -123,8 +123,9 @@ xfs_qm_dquot_logitem_push(
+ __releases(&lip->li_ailp->ail_lock)
+ __acquires(&lip->li_ailp->ail_lock)
+ {
+- struct xfs_dquot *dqp = DQUOT_ITEM(lip)->qli_dquot;
+- struct xfs_buf *bp = lip->li_buf;
++ struct xfs_dq_logitem *qlip = DQUOT_ITEM(lip);
++ struct xfs_dquot *dqp = qlip->qli_dquot;
++ struct xfs_buf *bp;
+ uint rval = XFS_ITEM_SUCCESS;
+ int error;
+
+@@ -155,14 +156,25 @@ xfs_qm_dquot_logitem_push(
+
+ spin_unlock(&lip->li_ailp->ail_lock);
+
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
++ if (error == -EAGAIN) {
++ xfs_dqfunlock(dqp);
++ rval = XFS_ITEM_LOCKED;
++ goto out_relock_ail;
++ }
++
++ /*
++ * dqflush completes dqflock on error, and the delwri ioend does it on
++ * success.
++ */
++ error = xfs_qm_dqflush(dqp, bp);
+ if (!error) {
+ if (!xfs_buf_delwri_queue(bp, buffer_list))
+ rval = XFS_ITEM_FLUSHING;
+- xfs_buf_relse(bp);
+- } else if (error == -EAGAIN)
+- rval = XFS_ITEM_LOCKED;
++ }
++ xfs_buf_relse(bp);
+
++out_relock_ail:
+ spin_lock(&lip->li_ailp->ail_lock);
+ out_unlock:
+ xfs_dqunlock(dqp);
+@@ -195,12 +207,10 @@ xfs_qm_dquot_logitem_committing(
+ }
+
+ #ifdef DEBUG_EXPENSIVE
+-static int
+-xfs_qm_dquot_logitem_precommit(
+- struct xfs_trans *tp,
+- struct xfs_log_item *lip)
++static void
++xfs_qm_dquot_logitem_precommit_check(
++ struct xfs_dquot *dqp)
+ {
+- struct xfs_dquot *dqp = DQUOT_ITEM(lip)->qli_dquot;
+ struct xfs_mount *mp = dqp->q_mount;
+ struct xfs_disk_dquot ddq = { };
+ xfs_failaddr_t fa;
+@@ -216,13 +226,24 @@ xfs_qm_dquot_logitem_precommit(
+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ ASSERT(fa == NULL);
+ }
+-
+- return 0;
+ }
+ #else
+-# define xfs_qm_dquot_logitem_precommit NULL
++# define xfs_qm_dquot_logitem_precommit_check(...) ((void)0)
+ #endif
+
++static int
++xfs_qm_dquot_logitem_precommit(
++ struct xfs_trans *tp,
++ struct xfs_log_item *lip)
++{
++ struct xfs_dq_logitem *qlip = DQUOT_ITEM(lip);
++ struct xfs_dquot *dqp = qlip->qli_dquot;
++
++ xfs_qm_dquot_logitem_precommit_check(dqp);
++
++ return xfs_dquot_attach_buf(tp, dqp);
++}
++
+ static const struct xfs_item_ops xfs_dquot_item_ops = {
+ .iop_size = xfs_qm_dquot_logitem_size,
+ .iop_precommit = xfs_qm_dquot_logitem_precommit,
+@@ -247,5 +268,7 @@ xfs_qm_dquot_logitem_init(
+
+ xfs_log_item_init(dqp->q_mount, &lp->qli_item, XFS_LI_DQUOT,
+ &xfs_dquot_item_ops);
++ spin_lock_init(&lp->qli_lock);
+ lp->qli_dquot = dqp;
++ lp->qli_dirty = false;
+ }
+diff --git a/fs/xfs/xfs_dquot_item.h b/fs/xfs/xfs_dquot_item.h
+index 794710c2447493..d66e52807d76d5 100644
+--- a/fs/xfs/xfs_dquot_item.h
++++ b/fs/xfs/xfs_dquot_item.h
+@@ -14,6 +14,13 @@ struct xfs_dq_logitem {
+ struct xfs_log_item qli_item; /* common portion */
+ struct xfs_dquot *qli_dquot; /* dquot ptr */
+ xfs_lsn_t qli_flush_lsn; /* lsn at last flush */
++
++ /*
++ * We use this spinlock to coordinate access to the li_buf pointer in
++ * the log item and the qli_dirty flag.
++ */
++ spinlock_t qli_lock;
++ bool qli_dirty; /* dirtied since last flush? */
+ };
+
+ void xfs_qm_dquot_logitem_init(struct xfs_dquot *dqp);
+diff --git a/fs/xfs/xfs_exchrange.c b/fs/xfs/xfs_exchrange.c
+index 75cb53f090d1f7..7c8195895a734e 100644
+--- a/fs/xfs/xfs_exchrange.c
++++ b/fs/xfs/xfs_exchrange.c
+@@ -326,22 +326,6 @@ xfs_exchrange_mappings(
+ * successfully but before locks are dropped.
+ */
+
+-/* Verify that we have security clearance to perform this operation. */
+-static int
+-xfs_exchange_range_verify_area(
+- struct xfs_exchrange *fxr)
+-{
+- int ret;
+-
+- ret = remap_verify_area(fxr->file1, fxr->file1_offset, fxr->length,
+- true);
+- if (ret)
+- return ret;
+-
+- return remap_verify_area(fxr->file2, fxr->file2_offset, fxr->length,
+- true);
+-}
+-
+ /*
+ * Performs necessary checks before doing a range exchange, having stabilized
+ * mutable inode attributes via i_rwsem.
+@@ -352,11 +336,13 @@ xfs_exchange_range_checks(
+ unsigned int alloc_unit)
+ {
+ struct inode *inode1 = file_inode(fxr->file1);
++ loff_t size1 = i_size_read(inode1);
+ struct inode *inode2 = file_inode(fxr->file2);
++ loff_t size2 = i_size_read(inode2);
+ uint64_t allocmask = alloc_unit - 1;
+ int64_t test_len;
+ uint64_t blen;
+- loff_t size1, size2, tmp;
++ loff_t tmp;
+ int error;
+
+ /* Don't touch certain kinds of inodes */
+@@ -365,24 +351,25 @@ xfs_exchange_range_checks(
+ if (IS_SWAPFILE(inode1) || IS_SWAPFILE(inode2))
+ return -ETXTBSY;
+
+- size1 = i_size_read(inode1);
+- size2 = i_size_read(inode2);
+-
+ /* Ranges cannot start after EOF. */
+ if (fxr->file1_offset > size1 || fxr->file2_offset > size2)
+ return -EINVAL;
+
+- /*
+- * If the caller said to exchange to EOF, we set the length of the
+- * request large enough to cover everything to the end of both files.
+- */
+ if (fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF) {
++ /*
++ * If the caller said to exchange to EOF, we set the length of
++ * the request large enough to cover everything to the end of
++ * both files.
++ */
+ fxr->length = max_t(int64_t, size1 - fxr->file1_offset,
+ size2 - fxr->file2_offset);
+-
+- error = xfs_exchange_range_verify_area(fxr);
+- if (error)
+- return error;
++ } else {
++ /*
++ * Otherwise we require both ranges to end within EOF.
++ */
++ if (fxr->file1_offset + fxr->length > size1 ||
++ fxr->file2_offset + fxr->length > size2)
++ return -EINVAL;
+ }
+
+ /*
+@@ -398,15 +385,6 @@ xfs_exchange_range_checks(
+ check_add_overflow(fxr->file2_offset, fxr->length, &tmp))
+ return -EINVAL;
+
+- /*
+- * We require both ranges to end within EOF, unless we're exchanging
+- * to EOF.
+- */
+- if (!(fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF) &&
+- (fxr->file1_offset + fxr->length > size1 ||
+- fxr->file2_offset + fxr->length > size2))
+- return -EINVAL;
+-
+ /*
+ * Make sure we don't hit any file size limits. If we hit any size
+ * limits such that test_length was adjusted, we abort the whole
+@@ -744,6 +722,7 @@ xfs_exchange_range(
+ {
+ struct inode *inode1 = file_inode(fxr->file1);
+ struct inode *inode2 = file_inode(fxr->file2);
++ loff_t check_len = fxr->length;
+ int ret;
+
+ BUILD_BUG_ON(XFS_EXCHANGE_RANGE_ALL_FLAGS &
+@@ -776,14 +755,18 @@ xfs_exchange_range(
+ return -EBADF;
+
+ /*
+- * If we're not exchanging to EOF, we can check the areas before
+- * stabilizing both files' i_size.
++ * If we're exchanging to EOF we can't calculate the length until taking
++	 * the iolock. Pass a 0 length to remap_verify_area, similar to the
++	 * FICLONE and FICLONERANGE ioctls, which support cloning to EOF as well.
+ */
+- if (!(fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF)) {
+- ret = xfs_exchange_range_verify_area(fxr);
+- if (ret)
+- return ret;
+- }
++ if (fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF)
++ check_len = 0;
++ ret = remap_verify_area(fxr->file1, fxr->file1_offset, check_len, true);
++ if (ret)
++ return ret;
++ ret = remap_verify_area(fxr->file2, fxr->file2_offset, check_len, true);
++ if (ret)
++ return ret;
+
+ /* Update cmtime if the fd/inode don't forbid it. */
+ if (!(fxr->file1->f_mode & FMODE_NOCMTIME) && !IS_NOCMTIME(inode1))
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 19dcb569a3e7f8..ed09b4a3084e1c 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -1392,8 +1392,11 @@ xfs_inactive(
+ goto out;
+
+ /* Try to clean out the cow blocks if there are any. */
+- if (xfs_inode_has_cow_data(ip))
+- xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);
++ if (xfs_inode_has_cow_data(ip)) {
++ error = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);
++ if (error)
++ goto out;
++ }
+
+ if (VFS_I(ip)->i_nlink != 0) {
+ /*
+diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
+index 86da16f54be9d7..6335b122486fee 100644
+--- a/fs/xfs/xfs_iomap.c
++++ b/fs/xfs/xfs_iomap.c
+@@ -942,10 +942,8 @@ xfs_dax_write_iomap_end(
+ if (!xfs_is_cow_inode(ip))
+ return 0;
+
+- if (!written) {
+- xfs_reflink_cancel_cow_range(ip, pos, length, true);
+- return 0;
+- }
++ if (!written)
++ return xfs_reflink_cancel_cow_range(ip, pos, length, true);
+
+ return xfs_reflink_end_cow(ip, pos, written);
+ }
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index 7e2307921deb2f..3212b5bf3fb3c6 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -146,17 +146,29 @@ xfs_qm_dqpurge(
+ * We don't care about getting disk errors here. We need
+ * to purge this dquot anyway, so we go ahead regardless.
+ */
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
++ if (error == -EAGAIN) {
++ xfs_dqfunlock(dqp);
++ dqp->q_flags &= ~XFS_DQFLAG_FREEING;
++ goto out_unlock;
++ }
++ if (!bp)
++ goto out_funlock;
++
++ /*
++ * dqflush completes dqflock on error, and the bwrite ioend
++ * does it on success.
++ */
++ error = xfs_qm_dqflush(dqp, bp);
+ if (!error) {
+ error = xfs_bwrite(bp);
+ xfs_buf_relse(bp);
+- } else if (error == -EAGAIN) {
+- dqp->q_flags &= ~XFS_DQFLAG_FREEING;
+- goto out_unlock;
+ }
+ xfs_dqflock(dqp);
+ }
++ xfs_dquot_detach_buf(dqp);
+
++out_funlock:
+ ASSERT(atomic_read(&dqp->q_pincount) == 0);
+ ASSERT(xlog_is_shutdown(dqp->q_logitem.qli_item.li_log) ||
+ !test_bit(XFS_LI_IN_AIL, &dqp->q_logitem.qli_item.li_flags));
+@@ -462,7 +474,17 @@ xfs_qm_dquot_isolate(
+ /* we have to drop the LRU lock to flush the dquot */
+ spin_unlock(lru_lock);
+
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
++ if (!bp || error == -EAGAIN) {
++ xfs_dqfunlock(dqp);
++ goto out_unlock_dirty;
++ }
++
++ /*
++ * dqflush completes dqflock on error, and the delwri ioend
++ * does it on success.
++ */
++ error = xfs_qm_dqflush(dqp, bp);
+ if (error)
+ goto out_unlock_dirty;
+
+@@ -470,6 +492,8 @@ xfs_qm_dquot_isolate(
+ xfs_buf_relse(bp);
+ goto out_unlock_dirty;
+ }
++
++ xfs_dquot_detach_buf(dqp);
+ xfs_dqfunlock(dqp);
+
+ /*
+@@ -1108,6 +1132,10 @@ xfs_qm_quotacheck_dqadjust(
+ return error;
+ }
+
++ error = xfs_dquot_attach_buf(NULL, dqp);
++ if (error)
++ return error;
++
+ trace_xfs_dqadjust(dqp);
+
+ /*
+@@ -1287,11 +1315,17 @@ xfs_qm_flush_one(
+ goto out_unlock;
+ }
+
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
+ if (error)
+ goto out_unlock;
++ if (!bp) {
++ error = -EFSCORRUPTED;
++ goto out_unlock;
++ }
+
+- xfs_buf_delwri_queue(bp, buffer_list);
++ error = xfs_qm_dqflush(dqp, bp);
++ if (!error)
++ xfs_buf_delwri_queue(bp, buffer_list);
+ xfs_buf_relse(bp);
+ out_unlock:
+ xfs_dqunlock(dqp);
+diff --git a/fs/xfs/xfs_qm_bhv.c b/fs/xfs/xfs_qm_bhv.c
+index a11436579877d5..ed1d597c30ca25 100644
+--- a/fs/xfs/xfs_qm_bhv.c
++++ b/fs/xfs/xfs_qm_bhv.c
+@@ -19,28 +19,41 @@
+ STATIC void
+ xfs_fill_statvfs_from_dquot(
+ struct kstatfs *statp,
++ struct xfs_inode *ip,
+ struct xfs_dquot *dqp)
+ {
++ struct xfs_dquot_res *blkres = &dqp->q_blk;
+ uint64_t limit;
+
+- limit = dqp->q_blk.softlimit ?
+- dqp->q_blk.softlimit :
+- dqp->q_blk.hardlimit;
+- if (limit && statp->f_blocks > limit) {
+- statp->f_blocks = limit;
+- statp->f_bfree = statp->f_bavail =
+- (statp->f_blocks > dqp->q_blk.reserved) ?
+- (statp->f_blocks - dqp->q_blk.reserved) : 0;
++ if (XFS_IS_REALTIME_MOUNT(ip->i_mount) &&
++ (ip->i_diflags & (XFS_DIFLAG_RTINHERIT | XFS_DIFLAG_REALTIME)))
++ blkres = &dqp->q_rtb;
++
++ limit = blkres->softlimit ?
++ blkres->softlimit :
++ blkres->hardlimit;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > blkres->reserved)
++ remaining = limit - blkres->reserved;
++
++ statp->f_blocks = min(statp->f_blocks, limit);
++ statp->f_bfree = min(statp->f_bfree, remaining);
++ statp->f_bavail = min(statp->f_bavail, remaining);
+ }
+
+ limit = dqp->q_ino.softlimit ?
+ dqp->q_ino.softlimit :
+ dqp->q_ino.hardlimit;
+- if (limit && statp->f_files > limit) {
+- statp->f_files = limit;
+- statp->f_ffree =
+- (statp->f_files > dqp->q_ino.reserved) ?
+- (statp->f_files - dqp->q_ino.reserved) : 0;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > dqp->q_ino.reserved)
++ remaining = limit - dqp->q_ino.reserved;
++
++ statp->f_files = min(statp->f_files, limit);
++ statp->f_ffree = min(statp->f_ffree, remaining);
+ }
+ }
+
+@@ -61,7 +74,7 @@ xfs_qm_statvfs(
+ struct xfs_dquot *dqp;
+
+ if (!xfs_qm_dqget(mp, ip->i_projid, XFS_DQTYPE_PROJ, false, &dqp)) {
+- xfs_fill_statvfs_from_dquot(statp, dqp);
++ xfs_fill_statvfs_from_dquot(statp, ip, dqp);
+ xfs_qm_dqput(dqp);
+ }
+ }
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index fbb3a1594c0dcc..8f7c9eaeb36090 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -873,12 +873,6 @@ xfs_fs_statfs(
+ ffree = statp->f_files - (icount - ifree);
+ statp->f_ffree = max_t(int64_t, ffree, 0);
+
+-
+- if ((ip->i_diflags & XFS_DIFLAG_PROJINHERIT) &&
+- ((mp->m_qflags & (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))) ==
+- (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))
+- xfs_qm_statvfs(ip, statp);
+-
+ if (XFS_IS_REALTIME_MOUNT(mp) &&
+ (ip->i_diflags & (XFS_DIFLAG_RTINHERIT | XFS_DIFLAG_REALTIME))) {
+ s64 freertx;
+@@ -888,6 +882,11 @@ xfs_fs_statfs(
+ statp->f_bavail = statp->f_bfree = xfs_rtx_to_rtb(mp, freertx);
+ }
+
++ if ((ip->i_diflags & XFS_DIFLAG_PROJINHERIT) &&
++ ((mp->m_qflags & (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))) ==
++ (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))
++ xfs_qm_statvfs(ip, statp);
++
+ return 0;
+ }
+
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index 30e03342287a94..ee46051db12dde 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -835,16 +835,11 @@ __xfs_trans_commit(
+ trace_xfs_trans_commit(tp, _RET_IP_);
+
+ /*
+- * Finish deferred items on final commit. Only permanent transactions
+- * should ever have deferred ops.
++ * Commit per-transaction changes that are not already tracked through
++ * log items. This can add dirty log items to the transaction.
+ */
+- WARN_ON_ONCE(!list_empty(&tp->t_dfops) &&
+- !(tp->t_flags & XFS_TRANS_PERM_LOG_RES));
+- if (!regrant && (tp->t_flags & XFS_TRANS_PERM_LOG_RES)) {
+- error = xfs_defer_finish_noroll(&tp);
+- if (error)
+- goto out_unreserve;
+- }
++ if (tp->t_flags & XFS_TRANS_SB_DIRTY)
++ xfs_trans_apply_sb_deltas(tp);
+
+ error = xfs_trans_run_precommits(tp);
+ if (error)
+@@ -876,8 +871,6 @@ __xfs_trans_commit(
+ /*
+ * If we need to update the superblock, then do it now.
+ */
+- if (tp->t_flags & XFS_TRANS_SB_DIRTY)
+- xfs_trans_apply_sb_deltas(tp);
+ xfs_trans_apply_dquot_deltas(tp);
+
+ xlog_cil_commit(log, tp, &commit_seq, regrant);
+@@ -924,6 +917,20 @@ int
+ xfs_trans_commit(
+ struct xfs_trans *tp)
+ {
++ /*
++ * Finish deferred items on final commit. Only permanent transactions
++ * should ever have deferred ops.
++ */
++ WARN_ON_ONCE(!list_empty(&tp->t_dfops) &&
++ !(tp->t_flags & XFS_TRANS_PERM_LOG_RES));
++ if (tp->t_flags & XFS_TRANS_PERM_LOG_RES) {
++ int error = xfs_defer_finish_noroll(&tp);
++ if (error) {
++ xfs_trans_cancel(tp);
++ return error;
++ }
++ }
++
+ return __xfs_trans_commit(tp, false);
+ }
+
+diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
+index 8ede9d099d1fea..f56d62dced97b1 100644
+--- a/fs/xfs/xfs_trans_ail.c
++++ b/fs/xfs/xfs_trans_ail.c
+@@ -360,7 +360,7 @@ xfsaild_resubmit_item(
+
+ /* protected by ail_lock */
+ list_for_each_entry(lip, &bp->b_li_list, li_bio_list) {
+- if (bp->b_flags & _XBF_INODES)
++ if (bp->b_flags & (_XBF_INODES | _XBF_DQUOTS))
+ clear_bit(XFS_LI_FAILED, &lip->li_flags);
+ else
+ xfs_clear_li_failed(lip);
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index e3fa43291f449d..1e2b25e204cb52 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -2001,8 +2001,11 @@ struct drm_connector {
+ struct drm_encoder *encoder;
+
+ #define MAX_ELD_BYTES 128
+- /** @eld: EDID-like data, if present */
++ /** @eld: EDID-like data, if present, protected by @eld_mutex */
+ uint8_t eld[MAX_ELD_BYTES];
++	/** @eld_mutex: protection for concurrent access to @eld */
++ struct mutex eld_mutex;
++
+ /** @latency_present: AV delay info from ELD, if found */
+ bool latency_present[2];
+ /**
+diff --git a/include/drm/drm_utils.h b/include/drm/drm_utils.h
+index 70775748d243b0..15fa9b6865f448 100644
+--- a/include/drm/drm_utils.h
++++ b/include/drm/drm_utils.h
+@@ -12,8 +12,12 @@
+
+ #include <linux/types.h>
+
++struct drm_edid;
++
+ int drm_get_panel_orientation_quirk(int width, int height);
+
++int drm_get_panel_min_brightness_quirk(const struct drm_edid *edid);
++
+ signed long drm_timeout_abs_to_jiffies(int64_t timeout_nsec);
+
+ #endif
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index e6c00e860951ae..3305c849abd66a 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -42,7 +42,9 @@ struct linux_binprm {
+ * Set when errors can no longer be returned to the
+ * original userspace.
+ */
+- point_of_no_return:1;
++ point_of_no_return:1,
++ /* Set when "comm" must come from the dentry. */
++ comm_from_dentry:1;
+ struct file *executable; /* Executable to pass to the interpreter */
+ struct file *interpreter;
+ struct file *file;
+diff --git a/include/linux/call_once.h b/include/linux/call_once.h
+new file mode 100644
+index 00000000000000..6261aa0b3fb00d
+--- /dev/null
++++ b/include/linux/call_once.h
+@@ -0,0 +1,45 @@
++#ifndef _LINUX_CALL_ONCE_H
++#define _LINUX_CALL_ONCE_H
++
++#include <linux/types.h>
++#include <linux/mutex.h>
++
++#define ONCE_NOT_STARTED 0
++#define ONCE_RUNNING 1
++#define ONCE_COMPLETED 2
++
++struct once {
++ atomic_t state;
++ struct mutex lock;
++};
++
++static inline void __once_init(struct once *once, const char *name,
++ struct lock_class_key *key)
++{
++ atomic_set(&once->state, ONCE_NOT_STARTED);
++ __mutex_init(&once->lock, name, key);
++}
++
++#define once_init(once) \
++do { \
++ static struct lock_class_key __key; \
++ __once_init((once), #once, &__key); \
++} while (0)
++
++static inline void call_once(struct once *once, void (*cb)(struct once *))
++{
++ /* Pairs with atomic_set_release() below. */
++ if (atomic_read_acquire(&once->state) == ONCE_COMPLETED)
++ return;
++
++ guard(mutex)(&once->lock);
++ WARN_ON(atomic_read(&once->state) == ONCE_RUNNING);
++ if (atomic_read(&once->state) != ONCE_NOT_STARTED)
++ return;
++
++ atomic_set(&once->state, ONCE_RUNNING);
++ cb(once);
++ atomic_set_release(&once->state, ONCE_COMPLETED);
++}
++
++#endif /* _LINUX_CALL_ONCE_H */
+diff --git a/include/linux/hrtimer_defs.h b/include/linux/hrtimer_defs.h
+index c3b4b7ed7c163f..84a5045f80f36f 100644
+--- a/include/linux/hrtimer_defs.h
++++ b/include/linux/hrtimer_defs.h
+@@ -125,6 +125,7 @@ struct hrtimer_cpu_base {
+ ktime_t softirq_expires_next;
+ struct hrtimer *softirq_next_timer;
+ struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
++ call_single_data_t csd;
+ } ____cacheline_aligned;
+
+
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 85fe9d0ebb9152..2c66ca21801c17 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -969,6 +969,15 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
+ static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
+ {
+ int num_vcpus = atomic_read(&kvm->online_vcpus);
++
++ /*
++ * Explicitly verify the target vCPU is online, as the anti-speculation
++ * logic only limits the CPU's ability to speculate, e.g. given a "bad"
++ * index, clamping the index to 0 would return vCPU0, not NULL.
++ */
++ if (i >= num_vcpus)
++ return NULL;
++
+ i = array_index_nospec(i, num_vcpus);
+
+ /* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu. */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 82c7056e27599e..d4b2c09cd5fec4 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -722,7 +722,6 @@ struct mlx5_timer {
+ struct timecounter tc;
+ u32 nominal_c_mult;
+ unsigned long overflow_period;
+- struct delayed_work overflow_work;
+ };
+
+ struct mlx5_clock {
+diff --git a/include/linux/platform_data/x86/asus-wmi.h b/include/linux/platform_data/x86/asus-wmi.h
+index 365e119bebaa23..783e2a336861b7 100644
+--- a/include/linux/platform_data/x86/asus-wmi.h
++++ b/include/linux/platform_data/x86/asus-wmi.h
+@@ -184,6 +184,11 @@ static const struct dmi_system_id asus_use_hid_led_dmi_ids[] = {
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "ROG Flow"),
+ },
+ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt P16"),
++ },
++ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "GA403U"),
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 1e6324f0d4efda..24e48af7e8f74a 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -851,7 +851,7 @@ static inline int qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ }
+
+ static inline void _bstats_update(struct gnet_stats_basic_sync *bstats,
+- __u64 bytes, __u32 packets)
++ __u64 bytes, __u64 packets)
+ {
+ u64_stats_update_begin(&bstats->syncp);
+ u64_stats_add(&bstats->bytes, bytes);
+diff --git a/include/rv/da_monitor.h b/include/rv/da_monitor.h
+index 9705b2a98e49e1..510c88bfabd433 100644
+--- a/include/rv/da_monitor.h
++++ b/include/rv/da_monitor.h
+@@ -14,6 +14,7 @@
+ #include <rv/automata.h>
+ #include <linux/rv.h>
+ #include <linux/bug.h>
++#include <linux/sched.h>
+
+ #ifdef CONFIG_RV_REACTORS
+
+@@ -324,10 +325,13 @@ static inline struct da_monitor *da_get_monitor_##name(struct task_struct *tsk)
+ static void da_monitor_reset_all_##name(void) \
+ { \
+ struct task_struct *g, *p; \
++ int cpu; \
+ \
+ read_lock(&tasklist_lock); \
+ for_each_process_thread(g, p) \
+ da_monitor_reset_##name(da_get_monitor_##name(p)); \
++ for_each_present_cpu(cpu) \
++ da_monitor_reset_##name(da_get_monitor_##name(idle_task(cpu))); \
+ read_unlock(&tasklist_lock); \
+ } \
+ \
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 666fe1779ccc63..e1a37e9c2d42d5 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -218,6 +218,7 @@
+ EM(rxrpc_conn_get_conn_input, "GET inp-conn") \
+ EM(rxrpc_conn_get_idle, "GET idle ") \
+ EM(rxrpc_conn_get_poke_abort, "GET pk-abort") \
++ EM(rxrpc_conn_get_poke_secured, "GET secured ") \
+ EM(rxrpc_conn_get_poke_timer, "GET poke ") \
+ EM(rxrpc_conn_get_service_conn, "GET svc-conn") \
+ EM(rxrpc_conn_new_client, "NEW client ") \
+diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
+index efe5de6ce208a1..aaa4f3bc688b57 100644
+--- a/include/uapi/drm/amdgpu_drm.h
++++ b/include/uapi/drm/amdgpu_drm.h
+@@ -411,13 +411,20 @@ struct drm_amdgpu_gem_userptr {
+ /* GFX12 and later: */
+ #define AMDGPU_TILING_GFX12_SWIZZLE_MODE_SHIFT 0
+ #define AMDGPU_TILING_GFX12_SWIZZLE_MODE_MASK 0x7
+-/* These are DCC recompression setting for memory management: */
++/* These are DCC recompression settings for memory management: */
+ #define AMDGPU_TILING_GFX12_DCC_MAX_COMPRESSED_BLOCK_SHIFT 3
+ #define AMDGPU_TILING_GFX12_DCC_MAX_COMPRESSED_BLOCK_MASK 0x3 /* 0:64B, 1:128B, 2:256B */
+ #define AMDGPU_TILING_GFX12_DCC_NUMBER_TYPE_SHIFT 5
+ #define AMDGPU_TILING_GFX12_DCC_NUMBER_TYPE_MASK 0x7 /* CB_COLOR0_INFO.NUMBER_TYPE */
+ #define AMDGPU_TILING_GFX12_DCC_DATA_FORMAT_SHIFT 8
+ #define AMDGPU_TILING_GFX12_DCC_DATA_FORMAT_MASK 0x3f /* [0:4]:CB_COLOR0_INFO.FORMAT, [5]:MM */
++/* When clearing the buffer or moving it from VRAM to GTT, don't compress, and set the DCC
++ * metadata to uncompressed. Set when parts of an allocation bypass DCC and read raw data. */
++#define AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE_SHIFT 14
++#define AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE_MASK 0x1
++/* bit gap */
++#define AMDGPU_TILING_GFX12_SCANOUT_SHIFT 63
++#define AMDGPU_TILING_GFX12_SCANOUT_MASK 0x1
+
+ /* Set/Get helpers for tiling flags. */
+ #define AMDGPU_TILING_SET(field, value) \
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index a4206723f50333..5a199f3d4a26a2 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -519,6 +519,7 @@
+ #define KEY_NOTIFICATION_CENTER 0x1bc /* Show/hide the notification center */
+ #define KEY_PICKUP_PHONE 0x1bd /* Answer incoming call */
+ #define KEY_HANGUP_PHONE 0x1be /* Decline incoming call */
++#define KEY_LINK_PHONE 0x1bf /* AL Phone Syncing */
+
+ #define KEY_DEL_EOL 0x1c0
+ #define KEY_DEL_EOS 0x1c1
+diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
+index 72010f71c5e479..8c4470742dcd99 100644
+--- a/include/uapi/linux/iommufd.h
++++ b/include/uapi/linux/iommufd.h
+@@ -737,6 +737,7 @@ enum iommu_hwpt_pgfault_perm {
+ * @pasid: Process Address Space ID
+ * @grpid: Page Request Group Index
+ * @perm: Combination of enum iommu_hwpt_pgfault_perm
++ * @__reserved: Must be 0.
+ * @addr: Fault address
+ * @length: a hint of how much data the requestor is expecting to fetch. For
+ * example, if the PRI initiator knows it is going to do a 10MB
+@@ -752,7 +753,8 @@ struct iommu_hwpt_pgfault {
+ __u32 pasid;
+ __u32 grpid;
+ __u32 perm;
+- __u64 addr;
++ __u32 __reserved;
++ __aligned_u64 addr;
+ __u32 length;
+ __u32 cookie;
+ };
+diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h
+index 5a43c23f53bfbf..ff47b6f0ba0f5f 100644
+--- a/include/uapi/linux/raid/md_p.h
++++ b/include/uapi/linux/raid/md_p.h
+@@ -233,7 +233,7 @@ struct mdp_superblock_1 {
+ char set_name[32]; /* set and interpreted by user-space */
+
+ __le64 ctime; /* lo 40 bits are seconds, top 24 are microseconds or 0*/
+- __le32 level; /* 0,1,4,5 */
++ __le32 level; /* 0,1,4,5, -1 (linear) */
+ __le32 layout; /* only for raid5 and raid10 currently */
+ __le64 size; /* used size of component devices, in 512byte sectors */
+
+diff --git a/include/uapi/linux/raid/md_u.h b/include/uapi/linux/raid/md_u.h
+index 7be89a4906e73e..a893010735fbad 100644
+--- a/include/uapi/linux/raid/md_u.h
++++ b/include/uapi/linux/raid/md_u.h
+@@ -103,6 +103,8 @@ typedef struct mdu_array_info_s {
+
+ } mdu_array_info_t;
+
++#define LEVEL_LINEAR (-1)
++
+ /* we need a value for 'no level specified' and 0
+ * means 'raid0', so we need something else. This is
+ * for internal use only
+diff --git a/include/ufs/ufs.h b/include/ufs/ufs.h
+index e594abe5d05fed..f0c6111160e7af 100644
+--- a/include/ufs/ufs.h
++++ b/include/ufs/ufs.h
+@@ -386,8 +386,8 @@ enum {
+
+ /* Possible values for dExtendedUFSFeaturesSupport */
+ enum {
+- UFS_DEV_LOW_TEMP_NOTIF = BIT(4),
+- UFS_DEV_HIGH_TEMP_NOTIF = BIT(5),
++ UFS_DEV_HIGH_TEMP_NOTIF = BIT(4),
++ UFS_DEV_LOW_TEMP_NOTIF = BIT(5),
+ UFS_DEV_EXT_TEMP_NOTIF = BIT(6),
+ UFS_DEV_HPB_SUPPORT = BIT(7),
+ UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 20c5374e922ef5..d5e43a1dcff226 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -1297,7 +1297,6 @@ static inline void ufshcd_rmwl(struct ufs_hba *hba, u32 mask, u32 val, u32 reg)
+ void ufshcd_enable_irq(struct ufs_hba *hba);
+ void ufshcd_disable_irq(struct ufs_hba *hba);
+ int ufshcd_alloc_host(struct device *, struct ufs_hba **);
+-void ufshcd_dealloc_host(struct ufs_hba *);
+ int ufshcd_hba_enable(struct ufs_hba *hba);
+ int ufshcd_init(struct ufs_hba *, void __iomem *, unsigned int);
+ int ufshcd_link_recovery(struct ufs_hba *hba);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 7f549be9abd1e6..3974c417fe2644 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -1697,6 +1697,11 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
+ int ret;
+ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+
++ if (unlikely(req->flags & REQ_F_FAIL)) {
++ ret = -ECONNRESET;
++ goto out;
++ }
++
+ file_flags = force_nonblock ? O_NONBLOCK : 0;
+
+ ret = __sys_connect_file(req->file, &io->addr, connect->addr_len,
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 1f63b60e85e7c0..b93e9ebdd87c8f 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -315,6 +315,8 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
+ return IOU_POLL_REISSUE;
+ }
+ }
++ if (unlikely(req->cqe.res & EPOLLERR))
++ req_set_fail(req);
+ if (req->apoll_events & EPOLLONESHOT)
+ return IOU_POLL_DONE;
+
+@@ -357,8 +359,10 @@ void io_poll_task_func(struct io_kiocb *req, struct io_tw_state *ts)
+
+ ret = io_poll_check_events(req, ts);
+ if (ret == IOU_POLL_NO_ACTION) {
++ io_kbuf_recycle(req, 0);
+ return;
+ } else if (ret == IOU_POLL_REQUEUE) {
++ io_kbuf_recycle(req, 0);
+ __io_poll_execute(req, 0);
+ return;
+ }
+diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
+index 10a5736a21c222..b5c2a2de457888 100644
+--- a/kernel/locking/test-ww_mutex.c
++++ b/kernel/locking/test-ww_mutex.c
+@@ -402,7 +402,7 @@ static inline u32 prandom_u32_below(u32 ceil)
+ static int *get_random_order(int count)
+ {
+ int *order;
+- int n, r, tmp;
++ int n, r;
+
+ order = kmalloc_array(count, sizeof(*order), GFP_KERNEL);
+ if (!order)
+@@ -413,11 +413,8 @@ static int *get_random_order(int count)
+
+ for (n = count - 1; n > 1; n--) {
+ r = prandom_u32_below(n + 1);
+- if (r != n) {
+- tmp = order[n];
+- order[n] = order[r];
+- order[r] = tmp;
+- }
++ if (r != n)
++ swap(order[n], order[r]);
+ }
+
+ return order;
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 7530df62ff7cbc..3b75f6e8410b9d 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -523,7 +523,7 @@ static struct latched_seq clear_seq = {
+ /* record buffer */
+ #define LOG_ALIGN __alignof__(unsigned long)
+ #define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
+-#define LOG_BUF_LEN_MAX (u32)(1 << 31)
++#define LOG_BUF_LEN_MAX ((u32)1 << 31)
+ static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);
+ static char *log_buf = __log_buf;
+ static u32 log_buf_len = __LOG_BUF_LEN;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index aba41c69f09c42..5d67f41d05d40b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -766,13 +766,15 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
+ #endif
+ #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+ if (static_key_false((¶virt_steal_rq_enabled))) {
+- steal = paravirt_steal_clock(cpu_of(rq));
++ u64 prev_steal;
++
++ steal = prev_steal = paravirt_steal_clock(cpu_of(rq));
+ steal -= rq->prev_steal_time_rq;
+
+ if (unlikely(steal > delta))
+ steal = delta;
+
+- rq->prev_steal_time_rq += steal;
++ rq->prev_steal_time_rq = prev_steal;
+ delta -= steal;
+ }
+ #endif
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 65e7be64487202..ddc096d6b0c203 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5481,6 +5481,15 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
+ static void set_delayed(struct sched_entity *se)
+ {
+ se->sched_delayed = 1;
++
++ /*
++	 * Delayed ses of a cfs_rq have no tasks queued on them.
++ * Do not adjust h_nr_runnable since dequeue_entities()
++ * will account it for blocked tasks.
++ */
++ if (!entity_is_task(se))
++ return;
++
+ for_each_sched_entity(se) {
+ struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+@@ -5493,6 +5502,16 @@ static void set_delayed(struct sched_entity *se)
+ static void clear_delayed(struct sched_entity *se)
+ {
+ se->sched_delayed = 0;
++
++ /*
++	 * Delayed ses of a cfs_rq have no tasks queued on them.
++ * Do not adjust h_nr_runnable since a dequeue has
++ * already accounted for it or an enqueue of a task
++ * below it will account for it in enqueue_task_fair().
++ */
++ if (!entity_is_task(se))
++ return;
++
+ for_each_sched_entity(se) {
+ struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 385d48293a5fa1..0cd1f8b5a102ee 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -749,6 +749,15 @@ static bool seccomp_is_const_allow(struct sock_fprog_kern *fprog,
+ if (WARN_ON_ONCE(!fprog))
+ return false;
+
++ /* Our single exception to filtering. */
++#ifdef __NR_uretprobe
++#ifdef SECCOMP_ARCH_COMPAT
++ if (sd->arch == SECCOMP_ARCH_NATIVE)
++#endif
++ if (sd->nr == __NR_uretprobe)
++ return true;
++#endif
++
+ for (pc = 0; pc < fprog->len; pc++) {
+ struct sock_filter *insn = &fprog->filter[pc];
+ u16 code = insn->code;
+@@ -1023,6 +1032,9 @@ static inline void seccomp_log(unsigned long syscall, long signr, u32 action,
+ */
+ static const int mode1_syscalls[] = {
+ __NR_seccomp_read, __NR_seccomp_write, __NR_seccomp_exit, __NR_seccomp_sigreturn,
++#ifdef __NR_uretprobe
++ __NR_uretprobe,
++#endif
+ -1, /* negative terminated */
+ };
+
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index ee20f5032a0366..d116c28564f26c 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -58,6 +58,8 @@
+ #define HRTIMER_ACTIVE_SOFT (HRTIMER_ACTIVE_HARD << MASK_SHIFT)
+ #define HRTIMER_ACTIVE_ALL (HRTIMER_ACTIVE_SOFT | HRTIMER_ACTIVE_HARD)
+
++static void retrigger_next_event(void *arg);
++
+ /*
+ * The timer bases:
+ *
+@@ -111,7 +113,8 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
+ .clockid = CLOCK_TAI,
+ .get_time = &ktime_get_clocktai,
+ },
+- }
++ },
++ .csd = CSD_INIT(retrigger_next_event, NULL)
+ };
+
+ static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
+@@ -124,6 +127,14 @@ static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
+ [CLOCK_TAI] = HRTIMER_BASE_TAI,
+ };
+
++static inline bool hrtimer_base_is_online(struct hrtimer_cpu_base *base)
++{
++ if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
++ return true;
++ else
++ return likely(base->online);
++}
++
+ /*
+ * Functions and macros which are different for UP/SMP systems are kept in a
+ * single place
+@@ -183,27 +194,54 @@ struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer,
+ }
+
+ /*
+- * We do not migrate the timer when it is expiring before the next
+- * event on the target cpu. When high resolution is enabled, we cannot
+- * reprogram the target cpu hardware and we would cause it to fire
+- * late. To keep it simple, we handle the high resolution enabled and
+- * disabled case similar.
++ * Check if the elected target is suitable considering its next
++ * event and the hotplug state of the current CPU.
++ *
++ * If the elected target is remote and its next event is after the timer
++ * to queue, then a remote reprogram is necessary. However there is no
++ * guarantee the IPI handling the operation would arrive in time to meet
++ * the high resolution deadline. In this case the local CPU becomes a
++ * preferred target, unless it is offline.
++ *
++ * High and low resolution modes are handled the same way for simplicity.
+ *
+ * Called with cpu_base->lock of target cpu held.
+ */
+-static int
+-hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base)
++static bool hrtimer_suitable_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base,
++ struct hrtimer_cpu_base *new_cpu_base,
++ struct hrtimer_cpu_base *this_cpu_base)
+ {
+ ktime_t expires;
+
++ /*
++ * The local CPU clockevent can be reprogrammed. Also get_target_base()
++ * guarantees it is online.
++ */
++ if (new_cpu_base == this_cpu_base)
++ return true;
++
++ /*
++ * The offline local CPU can't be the default target if the
++ * next remote target event is after this timer. Keep the
++ * elected new base. An IPI will we issued to reprogram
++ * it as a last resort.
++ */
++ if (!hrtimer_base_is_online(this_cpu_base))
++ return true;
++
+ expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset);
+- return expires < new_base->cpu_base->expires_next;
++
++ return expires >= new_base->cpu_base->expires_next;
+ }
+
+-static inline
+-struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base,
+- int pinned)
++static inline struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base, int pinned)
+ {
++ if (!hrtimer_base_is_online(base)) {
++ int cpu = cpumask_any_and(cpu_online_mask, housekeeping_cpumask(HK_TYPE_TIMER));
++
++ return &per_cpu(hrtimer_bases, cpu);
++ }
++
+ #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+ if (static_branch_likely(&timers_migration_enabled) && !pinned)
+ return &per_cpu(hrtimer_bases, get_nohz_timer_target());
+@@ -254,8 +292,8 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ raw_spin_unlock(&base->cpu_base->lock);
+ raw_spin_lock(&new_base->cpu_base->lock);
+
+- if (new_cpu_base != this_cpu_base &&
+- hrtimer_check_target(timer, new_base)) {
++ if (!hrtimer_suitable_target(timer, new_base, new_cpu_base,
++ this_cpu_base)) {
+ raw_spin_unlock(&new_base->cpu_base->lock);
+ raw_spin_lock(&base->cpu_base->lock);
+ new_cpu_base = this_cpu_base;
+@@ -264,8 +302,7 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ }
+ WRITE_ONCE(timer->base, new_base);
+ } else {
+- if (new_cpu_base != this_cpu_base &&
+- hrtimer_check_target(timer, new_base)) {
++ if (!hrtimer_suitable_target(timer, new_base, new_cpu_base, this_cpu_base)) {
+ new_cpu_base = this_cpu_base;
+ goto again;
+ }
+@@ -725,8 +762,6 @@ static inline int hrtimer_is_hres_enabled(void)
+ return hrtimer_hres_enabled;
+ }
+
+-static void retrigger_next_event(void *arg);
+-
+ /*
+ * Switch to high resolution mode
+ */
+@@ -1215,6 +1250,7 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ u64 delta_ns, const enum hrtimer_mode mode,
+ struct hrtimer_clock_base *base)
+ {
++ struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases);
+ struct hrtimer_clock_base *new_base;
+ bool force_local, first;
+
+@@ -1226,9 +1262,15 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ * and enforce reprogramming after it is queued no matter whether
+ * it is the new first expiring timer again or not.
+ */
+- force_local = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
++ force_local = base->cpu_base == this_cpu_base;
+ force_local &= base->cpu_base->next_timer == timer;
+
++ /*
++ * Don't force local queuing if this enqueue happens on an unplugged
++ * CPU after hrtimer_cpu_dying() has been invoked.
++ */
++ force_local &= this_cpu_base->online;
++
+ /*
+ * Remove an active timer from the queue. In case it is not queued
+ * on the current CPU, make sure that remove_hrtimer() updates the
+@@ -1258,8 +1300,27 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ }
+
+ first = enqueue_hrtimer(timer, new_base, mode);
+- if (!force_local)
+- return first;
++ if (!force_local) {
++ /*
++ * If the current CPU base is online, then the timer is
++ * never queued on a remote CPU if it would be the first
++ * expiring timer there.
++ */
++ if (hrtimer_base_is_online(this_cpu_base))
++ return first;
++
++ /*
++ * Timer was enqueued remote because the current base is
++ * already offline. If the timer is the first to expire,
++ * kick the remote CPU to reprogram the clock event.
++ */
++ if (first) {
++ struct hrtimer_cpu_base *new_cpu_base = new_base->cpu_base;
++
++ smp_call_function_single_async(new_cpu_base->cpu, &new_cpu_base->csd);
++ }
++ return 0;
++ }
+
+ /*
+ * Timer was forced to stay on the current CPU to avoid
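
The reworked hrtimer target check above reduces to three ordered rules:
the local base is always reprogrammable; an offline local CPU keeps
whatever remote base was elected (an IPI reprograms it as a last resort);
otherwise a remote base is only suitable if the new timer does not expire
before that CPU's next programmed event. A minimal standalone sketch of
that decision order, using simplified stand-in types rather than the
kernel's hrtimer_cpu_base:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in for the relevant cpu_base state. */
    struct base {
            bool online;            /* backing CPU is online */
            long long expires_next; /* next programmed event, ns */
    };

    /* Mirrors the order of checks in hrtimer_suitable_target(). */
    static bool suitable_target(long long timer_expires,
                                const struct base *new_base,
                                const struct base *this_base)
    {
            /* 1: the local clockevent can always be reprogrammed. */
            if (new_base == this_base)
                    return true;

            /* 2: an offline local CPU cannot serve as the fallback,
             * so keep the elected remote base regardless. */
            if (!this_base->online)
                    return true;

            /* 3: a remote base is fine only if the timer does not
             * expire before that CPU's next event. */
            return timer_expires >= new_base->expires_next;
    }

    int main(void)
    {
            struct base local  = { .online = true, .expires_next = 100 };
            struct base remote = { .online = true, .expires_next = 50 };

            /* t=40 fires before remote's next event (50): unsuitable,
             * the caller falls back to the local base. */
            printf("%d\n", suitable_target(40, &remote, &local)); /* 0 */
            printf("%d\n", suitable_target(60, &remote, &local)); /* 1 */
            return 0;
    }
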
+diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
+index 371a62a749aad3..72538baa7a1fb0 100644
+--- a/kernel/time/timer_migration.c
++++ b/kernel/time/timer_migration.c
+@@ -1668,6 +1668,9 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node)
+
+ } while (i < tmigr_hierarchy_levels);
+
++ /* Assert single root */
++ WARN_ON_ONCE(!err && !group->parent && !list_is_singular(&tmigr_level_list[top]));
++
+ while (i > 0) {
+ group = stack[--i];
+
+@@ -1709,7 +1712,12 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node)
+ WARN_ON_ONCE(top == 0);
+
+ lvllist = &tmigr_level_list[top];
+- if (group->num_children == 1 && list_is_singular(lvllist)) {
++
++ /*
++ * Newly created root level should have accounted the upcoming
++ * CPU's child group and pre-accounted the old root.
++ */
++ if (group->num_children == 2 && list_is_singular(lvllist)) {
+ /*
+ * The target CPU must never do the prepare work, except
+ * on early boot when the boot CPU is the target. Otherwise
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 703978b2d557d7..0f8f3ffc6f0904 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4398,8 +4398,13 @@ rb_reserve_next_event(struct trace_buffer *buffer,
+ int nr_loops = 0;
+ int add_ts_default;
+
+- /* ring buffer does cmpxchg, make sure it is safe in NMI context */
+- if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
++ /*
++ * ring buffer does cmpxchg as well as atomic64 operations
++ * (for which some archs fall back to locking); make sure this
++ * is safe in NMI context
++ */
++ if ((!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) ||
++ IS_ENABLED(CONFIG_GENERIC_ATOMIC64)) &&
+ (unlikely(in_nmi()))) {
+ return NULL;
+ }
+@@ -7059,7 +7064,7 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ }
+
+ while (p < nr_pages) {
+- struct page *page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
++ struct page *page;
+ int off = 0;
+
+ if (WARN_ON_ONCE(s >= nr_subbufs)) {
+@@ -7067,6 +7072,8 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ goto out;
+ }
+
++ page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
++
+ for (; off < (1 << (subbuf_order)); off++, page++) {
+ if (p >= nr_pages)
+ break;
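
The __rb_map_vma() hunk is the standard "validate before use" reordering:
virt_to_page() is now called only after the s >= nr_subbufs bound check,
so a bad index can no longer be turned into a page pointer first. A
generic userspace sketch of the same ordering (array and names are
hypothetical):

    #include <stdio.h>

    #define N 4
    static const int table[N] = { 10, 20, 30, 40 };

    /* Reject the index before anything is derived from it. */
    static int lookup(int idx, int *out)
    {
            if (idx < 0 || idx >= N)
                    return -1;      /* no out-of-bounds access */
            *out = table[idx];      /* safe: idx already validated */
            return 0;
    }

    int main(void)
    {
            int v;

            printf("%d\n", lookup(2, &v) == 0 ? v : -1); /* 30 */
            printf("%d\n", lookup(9, &v));               /* -1 */
            return 0;
    }
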
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index a569daaac4c4ff..ebb61ddca749d8 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -150,7 +150,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
+ * returning from the function.
+ */
+ if (ftrace_graph_notrace_addr(trace->func)) {
+- *task_var |= TRACE_GRAPH_NOTRACE_BIT;
++ *task_var |= TRACE_GRAPH_NOTRACE;
+ /*
+ * Need to return 1 to have the return called
+ * that will clear the NOTRACE bit.
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index a50ed23bee777b..032fdeba37d350 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1235,6 +1235,8 @@ static void trace_sched_migrate_callback(void *data, struct task_struct *p, int
+ }
+ }
+
++static bool monitor_enabled;
++
+ static int register_migration_monitor(void)
+ {
+ int ret = 0;
+@@ -1243,16 +1245,25 @@ static int register_migration_monitor(void)
+ * Timerlat thread migration check is only required when running timerlat in user-space.
+ * Thus, enable callback only if timerlat is set with no workload.
+ */
+- if (timerlat_enabled() && !test_bit(OSN_WORKLOAD, &osnoise_options))
++ if (timerlat_enabled() && !test_bit(OSN_WORKLOAD, &osnoise_options)) {
++ if (WARN_ON_ONCE(monitor_enabled))
++ return 0;
++
+ ret = register_trace_sched_migrate_task(trace_sched_migrate_callback, NULL);
++ if (!ret)
++ monitor_enabled = true;
++ }
+
+ return ret;
+ }
+
+ static void unregister_migration_monitor(void)
+ {
+- if (timerlat_enabled() && !test_bit(OSN_WORKLOAD, &osnoise_options))
+- unregister_trace_sched_migrate_task(trace_sched_migrate_callback, NULL);
++ if (!monitor_enabled)
++ return;
++
++ unregister_trace_sched_migrate_task(trace_sched_migrate_callback, NULL);
++ monitor_enabled = false;
+ }
+ #else
+ static int register_migration_monitor(void)
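
The monitor_enabled flag added above makes the register/unregister pair
idempotent: a failed or skipped registration leaves the flag clear, and
unregistering then becomes a harmless no-op. A small sketch of that
pattern (the registration itself is stubbed out):

    #include <stdbool.h>
    #include <stdio.h>

    static bool monitor_enabled;

    static int register_monitor(bool wanted)
    {
            if (!wanted || monitor_enabled)
                    return 0;
            /* ... real registration would happen here ... */
            monitor_enabled = true;
            return 0;
    }

    static void unregister_monitor(void)
    {
            if (!monitor_enabled)
                    return;         /* nothing was registered */
            /* ... real unregistration ... */
            monitor_enabled = false;
    }

    int main(void)
    {
            unregister_monitor();   /* safe no-op */
            register_monitor(true);
            unregister_monitor();
            puts("balanced");
            return 0;
    }
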
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 3f9c238bb58ea3..e48375fe5a50ce 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1511,7 +1511,7 @@ config LOCKDEP_SMALL
+ config LOCKDEP_BITS
+ int "Bitsize for MAX_LOCKDEP_ENTRIES"
+ depends on LOCKDEP && !LOCKDEP_SMALL
+- range 10 30
++ range 10 24
+ default 15
+ help
+ Try increasing this value if you hit "BUG: MAX_LOCKDEP_ENTRIES too low!" message.
+@@ -1527,7 +1527,7 @@ config LOCKDEP_CHAINS_BITS
+ config LOCKDEP_STACK_TRACE_BITS
+ int "Bitsize for MAX_STACK_TRACE_ENTRIES"
+ depends on LOCKDEP && !LOCKDEP_SMALL
+- range 10 30
++ range 10 26
+ default 19
+ help
+ Try increasing this value if you hit "BUG: MAX_STACK_TRACE_ENTRIES too low!" message.
+@@ -1535,7 +1535,7 @@ config LOCKDEP_STACK_TRACE_BITS
+ config LOCKDEP_STACK_TRACE_HASH_BITS
+ int "Bitsize for STACK_TRACE_HASH_SIZE"
+ depends on LOCKDEP && !LOCKDEP_SMALL
+- range 10 30
++ range 10 26
+ default 14
+ help
+ Try increasing this value if you need large STACK_TRACE_HASH_SIZE.
+@@ -1543,7 +1543,7 @@ config LOCKDEP_STACK_TRACE_HASH_BITS
+ config LOCKDEP_CIRCULAR_QUEUE_BITS
+ int "Bitsize for elements in circular_queue struct"
+ depends on LOCKDEP
+- range 10 30
++ range 10 26
+ default 12
+ help
+ Try increasing this value if you hit "lockdep bfs error:-1" warning due to __cq_enqueue() failure.
+diff --git a/lib/atomic64.c b/lib/atomic64.c
+index caf895789a1ee6..1a72bba36d2430 100644
+--- a/lib/atomic64.c
++++ b/lib/atomic64.c
+@@ -25,15 +25,15 @@
+ * Ensure each lock is in a separate cacheline.
+ */
+ static union {
+- raw_spinlock_t lock;
++ arch_spinlock_t lock;
+ char pad[L1_CACHE_BYTES];
+ } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
+ [0 ... (NR_LOCKS - 1)] = {
+- .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
++ .lock = __ARCH_SPIN_LOCK_UNLOCKED,
+ },
+ };
+
+-static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
++static inline arch_spinlock_t *lock_addr(const atomic64_t *v)
+ {
+ unsigned long addr = (unsigned long) v;
+
+@@ -45,12 +45,14 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
+ s64 generic_atomic64_read(const atomic64_t *v)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_read);
+@@ -58,11 +60,13 @@ EXPORT_SYMBOL(generic_atomic64_read);
+ void generic_atomic64_set(atomic64_t *v, s64 i)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ v->counter = i;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL(generic_atomic64_set);
+
+@@ -70,11 +74,13 @@ EXPORT_SYMBOL(generic_atomic64_set);
+ void generic_atomic64_##op(s64 a, atomic64_t *v) \
+ { \
+ unsigned long flags; \
+- raw_spinlock_t *lock = lock_addr(v); \
++ arch_spinlock_t *lock = lock_addr(v); \
+ \
+- raw_spin_lock_irqsave(lock, flags); \
++ local_irq_save(flags); \
++ arch_spin_lock(lock); \
+ v->counter c_op a; \
+- raw_spin_unlock_irqrestore(lock, flags); \
++ arch_spin_unlock(lock); \
++ local_irq_restore(flags); \
+ } \
+ EXPORT_SYMBOL(generic_atomic64_##op);
+
+@@ -82,12 +88,14 @@ EXPORT_SYMBOL(generic_atomic64_##op);
+ s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v) \
+ { \
+ unsigned long flags; \
+- raw_spinlock_t *lock = lock_addr(v); \
++ arch_spinlock_t *lock = lock_addr(v); \
+ s64 val; \
+ \
+- raw_spin_lock_irqsave(lock, flags); \
++ local_irq_save(flags); \
++ arch_spin_lock(lock); \
+ val = (v->counter c_op a); \
+- raw_spin_unlock_irqrestore(lock, flags); \
++ arch_spin_unlock(lock); \
++ local_irq_restore(flags); \
+ return val; \
+ } \
+ EXPORT_SYMBOL(generic_atomic64_##op##_return);
+@@ -96,13 +104,15 @@ EXPORT_SYMBOL(generic_atomic64_##op##_return);
+ s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v) \
+ { \
+ unsigned long flags; \
+- raw_spinlock_t *lock = lock_addr(v); \
++ arch_spinlock_t *lock = lock_addr(v); \
+ s64 val; \
+ \
+- raw_spin_lock_irqsave(lock, flags); \
++ local_irq_save(flags); \
++ arch_spin_lock(lock); \
+ val = v->counter; \
+ v->counter c_op a; \
+- raw_spin_unlock_irqrestore(lock, flags); \
++ arch_spin_unlock(lock); \
++ local_irq_restore(flags); \
+ return val; \
+ } \
+ EXPORT_SYMBOL(generic_atomic64_fetch_##op);
+@@ -131,14 +141,16 @@ ATOMIC64_OPS(xor, ^=)
+ s64 generic_atomic64_dec_if_positive(atomic64_t *v)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter - 1;
+ if (val >= 0)
+ v->counter = val;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
+@@ -146,14 +158,16 @@ EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
+ s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+ if (val == o)
+ v->counter = n;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_cmpxchg);
+@@ -161,13 +175,15 @@ EXPORT_SYMBOL(generic_atomic64_cmpxchg);
+ s64 generic_atomic64_xchg(atomic64_t *v, s64 new)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+ v->counter = new;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_xchg);
+@@ -175,14 +191,16 @@ EXPORT_SYMBOL(generic_atomic64_xchg);
+ s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+ if (val != u)
+ v->counter += a;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+
+ return val;
+ }
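
lib/atomic64.c keeps its long-standing scheme of a small, cache-line-padded
array of spinlocks indexed by a hash of the atomic variable's address; the
patch only swaps the lock type so the fallback no longer goes through the
lockdep-instrumented raw_spinlock path. A userspace analogue of the hashed
lock array, using a C11 atomic_flag as the spinlock (sizes and hash are
illustrative, not the kernel's exact constants):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_LOCKS 16
    #define PAD      64     /* assume 64-byte cache lines */

    /* One spinlock per cache line, like atomic64_lock[]. */
    static union {
            atomic_flag lock;
            char pad[PAD];
    } locks[NR_LOCKS];

    /* Hash the variable's address onto one of the locks. */
    static atomic_flag *lock_addr(const void *v)
    {
            uintptr_t a = (uintptr_t)v;

            return &locks[(a >> 6) % NR_LOCKS].lock;
    }

    static int64_t add_return(int64_t *v, int64_t a)
    {
            atomic_flag *l = lock_addr(v);
            int64_t val;

            while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
                    ;               /* spin */
            val = (*v += a);        /* the "atomic" 64-bit op, under the lock */
            atomic_flag_clear_explicit(l, memory_order_release);
            return val;
    }

    int main(void)
    {
            int64_t v = 40;

            for (int i = 0; i < NR_LOCKS; i++)      /* start cleared */
                    atomic_flag_clear(&locks[i].lock);

            printf("%lld\n", (long long)add_return(&v, 2)); /* 42 */
            return 0;
    }
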
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 0cbe913634be4b..8d73ccf66f3aa0 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -1849,11 +1849,11 @@ static inline int mab_no_null_split(struct maple_big_node *b_node,
+ * Return: The first split location. The middle split is set in @mid_split.
+ */
+ static inline int mab_calc_split(struct ma_state *mas,
+- struct maple_big_node *bn, unsigned char *mid_split, unsigned long min)
++ struct maple_big_node *bn, unsigned char *mid_split)
+ {
+ unsigned char b_end = bn->b_end;
+ int split = b_end / 2; /* Assume equal split. */
+- unsigned char slot_min, slot_count = mt_slots[bn->type];
++ unsigned char slot_count = mt_slots[bn->type];
+
+ /*
+ * To support gap tracking, all NULL entries are kept together and a node cannot
+@@ -1886,18 +1886,7 @@ static inline int mab_calc_split(struct ma_state *mas,
+ split = b_end / 3;
+ *mid_split = split * 2;
+ } else {
+- slot_min = mt_min_slots[bn->type];
+-
+ *mid_split = 0;
+- /*
+- * Avoid having a range less than the slot count unless it
+- * causes one node to be deficient.
+- * NOTE: mt_min_slots is 1 based, b_end and split are zero.
+- */
+- while ((split < slot_count - 1) &&
+- ((bn->pivot[split] - min) < slot_count - 1) &&
+- (b_end - split > slot_min))
+- split++;
+ }
+
+ /* Avoid ending a node on a NULL entry */
+@@ -2366,7 +2355,7 @@ static inline struct maple_enode
+ static inline unsigned char mas_mab_to_node(struct ma_state *mas,
+ struct maple_big_node *b_node, struct maple_enode **left,
+ struct maple_enode **right, struct maple_enode **middle,
+- unsigned char *mid_split, unsigned long min)
++ unsigned char *mid_split)
+ {
+ unsigned char split = 0;
+ unsigned char slot_count = mt_slots[b_node->type];
+@@ -2379,7 +2368,7 @@ static inline unsigned char mas_mab_to_node(struct ma_state *mas,
+ if (b_node->b_end < slot_count) {
+ split = b_node->b_end;
+ } else {
+- split = mab_calc_split(mas, b_node, mid_split, min);
++ split = mab_calc_split(mas, b_node, mid_split);
+ *right = mas_new_ma_node(mas, b_node);
+ }
+
+@@ -2866,7 +2855,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
+ mast->bn->b_end--;
+ mast->bn->type = mte_node_type(mast->orig_l->node);
+ split = mas_mab_to_node(mas, mast->bn, &left, &right, &middle,
+- &mid_split, mast->orig_l->min);
++ &mid_split);
+ mast_set_split_parents(mast, left, middle, right, split,
+ mid_split);
+ mast_cp_to_nodes(mast, left, middle, right, split, mid_split);
+@@ -3357,7 +3346,7 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
+ if (mas_push_data(mas, height, &mast, false))
+ break;
+
+- split = mab_calc_split(mas, b_node, &mid_split, prev_l_mas.min);
++ split = mab_calc_split(mas, b_node, &mid_split);
+ mast_split_data(&mast, mas, split);
+ /*
+ * Usually correct, mab_mas_cp in the above call overwrites
+diff --git a/mm/compaction.c b/mm/compaction.c
+index a2b16b08cbbff7..384e4672998e55 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -630,7 +630,8 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
+ if (PageCompound(page)) {
+ const unsigned int order = compound_order(page);
+
+- if (blockpfn + (1UL << order) <= end_pfn) {
++ if ((order <= MAX_PAGE_ORDER) &&
++ (blockpfn + (1UL << order) <= end_pfn)) {
+ blockpfn += (1UL << order) - 1;
+ page += (1UL << order) - 1;
+ nr_scanned += (1UL << order) - 1;
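
The extra order <= MAX_PAGE_ORDER test above guards the shift: a racy
read of a compound page's order can return garbage, and using it
unchecked as a shift count is undefined behaviour. A tiny sketch of
bounding a shift amount before use (MAX_ORDER is a stand-in):

    #include <stdio.h>

    #define MAX_ORDER 10    /* stand-in for MAX_PAGE_ORDER */

    static unsigned long pages_for_order(unsigned int order)
    {
            if (order > MAX_ORDER)  /* bogus value: don't shift by it */
                    return 1;       /* fall back to a single page */
            return 1UL << order;
    }

    int main(void)
    {
            printf("%lu\n", pages_for_order(3));    /* 8 */
            printf("%lu\n", pages_for_order(255));  /* 1, not UB */
            return 0;
    }
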
+diff --git a/mm/gup.c b/mm/gup.c
+index 7053f8114e0127..44c536904a83bb 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2323,13 +2323,13 @@ static void pofs_unpin(struct pages_or_folios *pofs)
+ /*
+ * Returns the number of collected folios. Return value is always >= 0.
+ */
+-static unsigned long collect_longterm_unpinnable_folios(
++static void collect_longterm_unpinnable_folios(
+ struct list_head *movable_folio_list,
+ struct pages_or_folios *pofs)
+ {
+- unsigned long i, collected = 0;
+ struct folio *prev_folio = NULL;
+ bool drain_allow = true;
++ unsigned long i;
+
+ for (i = 0; i < pofs->nr_entries; i++) {
+ struct folio *folio = pofs_get_folio(pofs, i);
+@@ -2341,8 +2341,6 @@ static unsigned long collect_longterm_unpinnable_folios(
+ if (folio_is_longterm_pinnable(folio))
+ continue;
+
+- collected++;
+-
+ if (folio_is_device_coherent(folio))
+ continue;
+
+@@ -2364,8 +2362,6 @@ static unsigned long collect_longterm_unpinnable_folios(
+ NR_ISOLATED_ANON + folio_is_file_lru(folio),
+ folio_nr_pages(folio));
+ }
+-
+- return collected;
+ }
+
+ /*
+@@ -2442,11 +2438,9 @@ static long
+ check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs)
+ {
+ LIST_HEAD(movable_folio_list);
+- unsigned long collected;
+
+- collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+- pofs);
+- if (!collected)
++ collect_longterm_unpinnable_folios(&movable_folio_list, pofs);
++ if (list_empty(&movable_folio_list))
+ return 0;
+
+ return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 4a8a4f3535caf7..bdee6d3ab0e7e3 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1394,8 +1394,7 @@ static unsigned long available_huge_pages(struct hstate *h)
+
+ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
+ struct vm_area_struct *vma,
+- unsigned long address, int avoid_reserve,
+- long chg)
++ unsigned long address, long chg)
+ {
+ struct folio *folio = NULL;
+ struct mempolicy *mpol;
+@@ -1411,10 +1410,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
+ if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
+ goto err;
+
+- /* If reserves cannot be used, ensure enough pages are in the pool */
+- if (avoid_reserve && !available_huge_pages(h))
+- goto err;
+-
+ gfp_mask = htlb_alloc_mask(h);
+ nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+
+@@ -1430,7 +1425,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
+ folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
+ nid, nodemask);
+
+- if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
++ if (folio && vma_has_reserves(vma, chg)) {
+ folio_set_hugetlb_restore_reserve(folio);
+ h->resv_huge_pages--;
+ }
+@@ -3006,17 +3001,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ gbl_chg = hugepage_subpool_get_pages(spool, 1);
+ if (gbl_chg < 0)
+ goto out_end_reservation;
+-
+- /*
+- * Even though there was no reservation in the region/reserve
+- * map, there could be reservations associated with the
+- * subpool that can be used. This would be indicated if the
+- * return value of hugepage_subpool_get_pages() is zero.
+- * However, if avoid_reserve is specified we still avoid even
+- * the subpool reservations.
+- */
+- if (avoid_reserve)
+- gbl_chg = 1;
+ }
+
+ /* If this allocation is not consuming a reservation, charge it now.
+@@ -3039,7 +3023,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ * from the global free pool (global change). gbl_chg == 0 indicates
+ * a reservation exists for the allocation.
+ */
+- folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
++ folio = dequeue_hugetlb_folio_vma(h, vma, addr, gbl_chg);
+ if (!folio) {
+ spin_unlock_irq(&hugetlb_lock);
+ folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
+@@ -3287,7 +3271,7 @@ static void __init gather_bootmem_prealloc(void)
+ .thread_fn = gather_bootmem_prealloc_parallel,
+ .fn_arg = NULL,
+ .start = 0,
+- .size = num_node_state(N_MEMORY),
++ .size = nr_node_ids,
+ .align = 1,
+ .min_chunk = 1,
+ .max_threads = num_node_state(N_MEMORY),
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index 67fc321db79b7e..102048821c222a 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -21,6 +21,7 @@
+ #include <linux/log2.h>
+ #include <linux/memblock.h>
+ #include <linux/moduleparam.h>
++#include <linux/nodemask.h>
+ #include <linux/notifier.h>
+ #include <linux/panic_notifier.h>
+ #include <linux/random.h>
+@@ -1084,6 +1085,7 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
+ * properties (e.g. reside in DMAable memory).
+ */
+ if ((flags & GFP_ZONEMASK) ||
++ ((flags & __GFP_THISNODE) && num_online_nodes() > 1) ||
+ (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32))) {
+ atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]);
+ return NULL;
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 5f878ee05ff80b..44bb798423dd39 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1650,7 +1650,7 @@ static void kmemleak_scan(void)
+ unsigned long phys = object->pointer;
+
+ if (PHYS_PFN(phys) < min_low_pfn ||
+- PHYS_PFN(phys + object->size) >= max_low_pfn)
++ PHYS_PFN(phys + object->size) > max_low_pfn)
+ __paint_it(object, KMEMLEAK_BLACK);
+ }
+
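The kmemleak change is an off-by-one in a half-open range test: an object
occupying [phys, phys + size) is still within lowmem when its end equals
max_low_pfn, so the exclusive end must be compared with ">" rather than
">=". Expressed generically:

    #include <stdbool.h>
    #include <stdio.h>

    /* [start, start + size) fits in [lo, hi) iff
     * start >= lo and start + size <= hi. */
    static bool in_bounds(unsigned long start, unsigned long size,
                          unsigned long lo, unsigned long hi)
    {
            return start >= lo && start + size <= hi;
    }

    int main(void)
    {
            /* Ends exactly at hi: valid under the corrected check,
             * wrongly rejected by the old ">=" comparison. */
            printf("%d\n", in_bounds(90, 10, 0, 100)); /* 1 */
            printf("%d\n", in_bounds(95, 10, 0, 100)); /* 0 */
            return 0;
    }
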
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index d81d667907448c..77d015d5db0c5b 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1053,7 +1053,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ struct folio_batch free_folios;
+ LIST_HEAD(ret_folios);
+ LIST_HEAD(demote_folios);
+- unsigned int nr_reclaimed = 0;
++ unsigned int nr_reclaimed = 0, nr_demoted = 0;
+ unsigned int pgactivate = 0;
+ bool do_demote_pass;
+ struct swap_iocb *plug = NULL;
+@@ -1522,8 +1522,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ /* 'folio_list' is always empty here */
+
+ /* Migrate folios selected for demotion */
+- stat->nr_demoted = demote_folio_list(&demote_folios, pgdat);
+- nr_reclaimed += stat->nr_demoted;
++ nr_demoted = demote_folio_list(&demote_folios, pgdat);
++ nr_reclaimed += nr_demoted;
++ stat->nr_demoted += nr_demoted;
+ /* Folios that could not be demoted are still in @demote_folios */
+ if (!list_empty(&demote_folios)) {
+ /* Folios which weren't demoted go back on @folio_list */
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 3d2553dcdb1b3c..46ea0bee2259f8 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -710,12 +710,12 @@ static bool l2cap_valid_mtu(struct l2cap_chan *chan, u16 mtu)
+ {
+ switch (chan->scid) {
+ case L2CAP_CID_ATT:
+- if (mtu < L2CAP_LE_MIN_MTU)
++ if (mtu && mtu < L2CAP_LE_MIN_MTU)
+ return false;
+ break;
+
+ default:
+- if (mtu < L2CAP_DEFAULT_MIN_MTU)
++ if (mtu && mtu < L2CAP_DEFAULT_MIN_MTU)
+ return false;
+ }
+
+@@ -1888,7 +1888,8 @@ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
+ chan = l2cap_chan_create();
+ if (!chan) {
+ sk_free(sk);
+- sock->sk = NULL;
++ if (sock)
++ sock->sk = NULL;
+ return NULL;
+ }
+
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 7dc315c1658e7d..90c21b3edcd80e 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -5460,10 +5460,16 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ {
+ struct mgmt_rp_remove_adv_monitor rp;
+ struct mgmt_pending_cmd *cmd = data;
+- struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
++ struct mgmt_cp_remove_adv_monitor *cp;
++
++ if (status == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
++ return;
+
+ hci_dev_lock(hdev);
+
++ cp = cmd->param;
++
+ rp.monitor_handle = cp->monitor_handle;
+
+ if (!status)
+@@ -5481,6 +5487,10 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ struct mgmt_pending_cmd *cmd = data;
++
++ if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
++ return -ECANCELED;
++
+ struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
+ u16 handle = __le16_to_cpu(cp->monitor_handle);
+
+diff --git a/net/ethtool/rss.c b/net/ethtool/rss.c
+index e07386275e142d..8aa45f3fdfdf08 100644
+--- a/net/ethtool/rss.c
++++ b/net/ethtool/rss.c
+@@ -107,6 +107,8 @@ rss_prepare_ctx(const struct rss_req_info *request, struct net_device *dev,
+ u32 total_size, indir_bytes;
+ u8 *rss_config;
+
++ data->no_key_fields = !dev->ethtool_ops->rxfh_per_ctx_key;
++
+ ctx = xa_load(&dev->ethtool->rss_ctx, request->rss_context);
+ if (!ctx)
+ return -ENOENT;
+@@ -153,7 +155,6 @@ rss_prepare_data(const struct ethnl_req_info *req_base,
+ if (!ops->cap_rss_ctx_supported && !ops->create_rxfh_context)
+ return -EOPNOTSUPP;
+
+- data->no_key_fields = !ops->rxfh_per_ctx_key;
+ return rss_prepare_ctx(request, dev, data, info);
+ }
+
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index d2eeb6fc49b382..8da74dc63061c0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -985,9 +985,9 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
+ const int hlen = skb_network_header_len(skb) +
+ sizeof(struct udphdr);
+
+- if (hlen + cork->gso_size > cork->fragsize) {
++ if (hlen + min(datalen, cork->gso_size) > cork->fragsize) {
+ kfree_skb(skb);
+- return -EINVAL;
++ return -EMSGSIZE;
+ }
+ if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
+ kfree_skb(skb);
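
The corrected UDP GSO precondition only requires that each segment that
actually hits the wire fit the cork'd fragment size: when datalen is at
most gso_size the stack sends a single non-GSO datagram, so only
min(datalen, gso_size) matters, and the failure is now reported as the
more accurate -EMSGSIZE. A sketch of the check (header and fragment
numbers are made up):

    #include <stdio.h>

    #define EMSGSIZE 90

    static int gso_check(int hlen, int datalen, int gso_size, int fragsize)
    {
            /* Largest payload any one segment can carry. */
            int seg = datalen < gso_size ? datalen : gso_size;

            if (hlen + seg > fragsize)
                    return -EMSGSIZE;   /* a size problem, not -EINVAL */
            return 0;
    }

    int main(void)
    {
            /* gso_size exceeds the MTU budget, but the single real
             * segment (datalen < gso_size) still fits: allowed. */
            printf("%d\n", gso_check(28, 1000, 1400, 1200)); /*   0 */
            printf("%d\n", gso_check(28, 1400, 1400, 1200)); /* -90 */
            return 0;
    }

The same logic is exercised by the udpgso.c selftest cases added further
below (datalen <= MSS < gso_len falls back to no GSO; MSS < datalen <
gso_len fails).
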
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 896c9c827a288c..197d0ac47592ad 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1294,9 +1294,9 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
+ const int hlen = skb_network_header_len(skb) +
+ sizeof(struct udphdr);
+
+- if (hlen + cork->gso_size > cork->fragsize) {
++ if (hlen + min(datalen, cork->gso_size) > cork->fragsize) {
+ kfree_skb(skb);
+- return -EINVAL;
++ return -EMSGSIZE;
+ }
+ if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
+ kfree_skb(skb);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index fac774825aff39..42b239d9b2b3cf 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -136,6 +136,7 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
+ int delta;
+
+ if (MPTCP_SKB_CB(from)->offset ||
++ ((to->len + from->len) > (sk->sk_rcvbuf >> 3)) ||
+ !skb_try_coalesce(to, from, &fragstolen, &delta))
+ return false;
+
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index bf276eaf933075..7891a537bddd11 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -1385,6 +1385,12 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ nd->state = ncsi_dev_state_probe_package;
+ break;
+ case ncsi_dev_state_probe_package:
++ if (ndp->package_probe_id >= 8) {
++ /* Last package probed, finishing */
++ ndp->flags |= NCSI_DEV_PROBED;
++ break;
++ }
++
+ ndp->pending_req_num = 1;
+
+ nca.type = NCSI_PKT_CMD_SP;
+@@ -1501,13 +1507,8 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ if (ret)
+ goto error;
+
+- /* Probe next package */
++ /* Probe next package after receiving response */
+ ndp->package_probe_id++;
+- if (ndp->package_probe_id >= 8) {
+- /* Probe finished */
+- ndp->flags |= NCSI_DEV_PROBED;
+- break;
+- }
+ nd->state = ncsi_dev_state_probe_package;
+ ndp->active_package = NULL;
+ break;
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index de175318a3a0f3..082ab66f120b73 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -542,6 +542,8 @@ static u8 nci_hci_create_pipe(struct nci_dev *ndev, u8 dest_host,
+
+ pr_debug("pipe created=%d\n", pipe);
+
++ if (pipe >= NCI_HCI_MAX_PIPES)
++ pipe = NCI_HCI_INVALID_PIPE;
+ return pipe;
+ }
+
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 72c65d938a150e..a4a668b88a8f27 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -701,11 +701,9 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ struct net_device *dev;
+ ax25_address *source;
+ ax25_uid_assoc *user;
++ int err = -EINVAL;
+ int n;
+
+- if (!sock_flag(sk, SOCK_ZAPPED))
+- return -EINVAL;
+-
+ if (addr_len != sizeof(struct sockaddr_rose) && addr_len != sizeof(struct full_sockaddr_rose))
+ return -EINVAL;
+
+@@ -718,8 +716,15 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ if ((unsigned int) addr->srose_ndigis > ROSE_MAX_DIGIS)
+ return -EINVAL;
+
+- if ((dev = rose_dev_get(&addr->srose_addr)) == NULL)
+- return -EADDRNOTAVAIL;
++ lock_sock(sk);
++
++ if (!sock_flag(sk, SOCK_ZAPPED))
++ goto out_release;
++
++ err = -EADDRNOTAVAIL;
++ dev = rose_dev_get(&addr->srose_addr);
++ if (!dev)
++ goto out_release;
+
+ source = &addr->srose_call;
+
+@@ -730,7 +735,8 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ } else {
+ if (ax25_uid_policy && !capable(CAP_NET_BIND_SERVICE)) {
+ dev_put(dev);
+- return -EACCES;
++ err = -EACCES;
++ goto out_release;
+ }
+ rose->source_call = *source;
+ }
+@@ -753,8 +759,10 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ rose_insert_socket(sk);
+
+ sock_reset_flag(sk, SOCK_ZAPPED);
+-
+- return 0;
++ err = 0;
++out_release:
++ release_sock(sk);
++ return err;
+ }
+
+ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags)
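
The rose_bind() rework moves every check under lock_sock() and funnels all
exits through a single unlock label, the usual single-exit error pattern
that makes it impossible for an early return to leak the lock. A skeleton
of that shape (a pthread mutex standing in for lock_sock, error values
simplified):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;
    static int zapped = 1;          /* socket not yet bound */

    static int do_bind(int addr_ok, int perm_ok)
    {
            int err = -22;          /* -EINVAL */

            pthread_mutex_lock(&sk_lock);

            if (!zapped)
                    goto out_release;       /* already bound */

            err = -99;              /* -EADDRNOTAVAIL */
            if (!addr_ok)
                    goto out_release;

            err = -13;              /* -EACCES */
            if (!perm_ok)
                    goto out_release;

            zapped = 0;             /* commit the bind */
            err = 0;
    out_release:
            pthread_mutex_unlock(&sk_lock); /* every path unlocks here */
            return err;
    }

    int main(void)
    {
            printf("%d\n", do_bind(1, 1)); /*   0 */
            printf("%d\n", do_bind(1, 1)); /* -22: no longer zapped */
            return 0;
    }
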
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index d0fd37bdcfe9c8..6b036c0564c7a8 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -567,6 +567,7 @@ enum rxrpc_call_flag {
+ RXRPC_CALL_EXCLUSIVE, /* The call uses a once-only connection */
+ RXRPC_CALL_RX_IS_IDLE, /* recvmsg() is idle - send an ACK */
+ RXRPC_CALL_RECVMSG_READ_ALL, /* recvmsg() read all of the received data */
++ RXRPC_CALL_CONN_CHALLENGING, /* The connection is being challenged */
+ };
+
+ /*
+@@ -587,7 +588,6 @@ enum rxrpc_call_state {
+ RXRPC_CALL_CLIENT_AWAIT_REPLY, /* - client awaiting reply */
+ RXRPC_CALL_CLIENT_RECV_REPLY, /* - client receiving reply phase */
+ RXRPC_CALL_SERVER_PREALLOC, /* - service preallocation */
+- RXRPC_CALL_SERVER_SECURING, /* - server securing request connection */
+ RXRPC_CALL_SERVER_RECV_REQUEST, /* - server receiving request */
+ RXRPC_CALL_SERVER_ACK_REQUEST, /* - server pending ACK of request */
+ RXRPC_CALL_SERVER_SEND_REPLY, /* - server sending reply */
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f9e983a12c1492..e379a2a9375ae0 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -22,7 +22,6 @@ const char *const rxrpc_call_states[NR__RXRPC_CALL_STATES] = {
+ [RXRPC_CALL_CLIENT_AWAIT_REPLY] = "ClAwtRpl",
+ [RXRPC_CALL_CLIENT_RECV_REPLY] = "ClRcvRpl",
+ [RXRPC_CALL_SERVER_PREALLOC] = "SvPrealc",
+- [RXRPC_CALL_SERVER_SECURING] = "SvSecure",
+ [RXRPC_CALL_SERVER_RECV_REQUEST] = "SvRcvReq",
+ [RXRPC_CALL_SERVER_ACK_REQUEST] = "SvAckReq",
+ [RXRPC_CALL_SERVER_SEND_REPLY] = "SvSndRpl",
+@@ -453,17 +452,16 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
+ call->cong_tstamp = skb->tstamp;
+
+ __set_bit(RXRPC_CALL_EXPOSED, &call->flags);
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING);
++ rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
+
+ spin_lock(&conn->state_lock);
+
+ switch (conn->state) {
+ case RXRPC_CONN_SERVICE_UNSECURED:
+ case RXRPC_CONN_SERVICE_CHALLENGING:
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING);
++ __set_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags);
+ break;
+ case RXRPC_CONN_SERVICE:
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
+ break;
+
+ case RXRPC_CONN_ABORTED:
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 2a1396cd892f30..c4eb7986efddf8 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -222,10 +222,8 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn)
+ */
+ static void rxrpc_call_is_secure(struct rxrpc_call *call)
+ {
+- if (call && __rxrpc_call_state(call) == RXRPC_CALL_SERVER_SECURING) {
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
++ if (call && __test_and_clear_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags))
+ rxrpc_notify_socket(call);
+- }
+ }
+
+ /*
+@@ -266,6 +264,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
+ * we've already received the packet, put it on the
+ * front of the queue.
+ */
++ sp->conn = rxrpc_get_connection(conn, rxrpc_conn_get_poke_secured);
+ skb->mark = RXRPC_SKB_MARK_SERVICE_CONN_SECURED;
+ rxrpc_get_skb(skb, rxrpc_skb_get_conn_secured);
+ skb_queue_head(&conn->local->rx_queue, skb);
+@@ -431,14 +430,16 @@ void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb)
+ if (test_and_clear_bit(RXRPC_CONN_EV_ABORT_CALLS, &conn->events))
+ rxrpc_abort_calls(conn);
+
+- switch (skb->mark) {
+- case RXRPC_SKB_MARK_SERVICE_CONN_SECURED:
+- if (conn->state != RXRPC_CONN_SERVICE)
+- break;
++ if (skb) {
++ switch (skb->mark) {
++ case RXRPC_SKB_MARK_SERVICE_CONN_SECURED:
++ if (conn->state != RXRPC_CONN_SERVICE)
++ break;
+
+- for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
+- rxrpc_call_is_secure(conn->channels[loop].call);
+- break;
++ for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
++ rxrpc_call_is_secure(conn->channels[loop].call);
++ break;
++ }
+ }
+
+ /* Process delayed ACKs whose time has come. */
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 1539d315afe74a..7bc68135966e24 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -67,6 +67,7 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet,
+ INIT_WORK(&conn->destructor, rxrpc_clean_up_connection);
+ INIT_LIST_HEAD(&conn->proc_link);
+ INIT_LIST_HEAD(&conn->link);
++ INIT_LIST_HEAD(&conn->attend_link);
+ mutex_init(&conn->security_lock);
+ mutex_init(&conn->tx_data_alloc_lock);
+ skb_queue_head_init(&conn->rx_queue);
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 16d49a861dbb58..6a075a7c190db3 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -573,7 +573,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb
+ rxrpc_propose_delay_ACK(call, sp->hdr.serial,
+ rxrpc_propose_ack_input_data);
+ }
+- if (notify) {
++ if (notify && !test_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags)) {
+ trace_rxrpc_notify_socket(call->debug_id, sp->hdr.serial);
+ rxrpc_notify_socket(call);
+ }
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 23d18fe5de9f0d..154f650efb0ab6 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -654,7 +654,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ } else {
+ switch (rxrpc_call_state(call)) {
+ case RXRPC_CALL_CLIENT_AWAIT_CONN:
+- case RXRPC_CALL_SERVER_SECURING:
++ case RXRPC_CALL_SERVER_RECV_REQUEST:
+ if (p.command == RXRPC_CMD_SEND_ABORT)
+ break;
+ fallthrough;
+diff --git a/net/sched/sch_fifo.c b/net/sched/sch_fifo.c
+index b50b2c2cc09bc6..e6bfd39ff33965 100644
+--- a/net/sched/sch_fifo.c
++++ b/net/sched/sch_fifo.c
+@@ -40,6 +40,9 @@ static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ {
+ unsigned int prev_backlog;
+
++ if (unlikely(READ_ONCE(sch->limit) == 0))
++ return qdisc_drop(skb, sch, to_free);
++
+ if (likely(sch->q.qlen < READ_ONCE(sch->limit)))
+ return qdisc_enqueue_tail(skb, sch);
+
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 3b519adc01259f..68a08f6d1fbce2 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -748,9 +748,9 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ if (err != NET_XMIT_SUCCESS) {
+ if (net_xmit_drop_count(err))
+ qdisc_qstats_drop(sch);
+- qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ sch->qstats.backlog -= pkt_len;
+ sch->q.qlen--;
++ qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ }
+ goto tfifo_dequeue;
+ }
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 43c3f1c971b8fd..c524421ec65252 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -2293,8 +2293,8 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
+ keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
+
+ /* Verify the supplied size values */
+- if (unlikely(size != keylen + sizeof(struct tipc_aead_key) ||
+- keylen > TIPC_AEAD_KEY_SIZE_MAX)) {
++ if (unlikely(keylen > TIPC_AEAD_KEY_SIZE_MAX ||
++ size != keylen + sizeof(struct tipc_aead_key))) {
+ pr_debug("%s: invalid MSG_CRYPTO key size\n", rx->name);
+ goto exit;
+ }
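
The reordered TIPC check follows the defensive rule of bounding an
untrusted length before it feeds any arithmetic: once keylen is known to
be at most TIPC_AEAD_KEY_SIZE_MAX, the sum keylen + sizeof(struct
tipc_aead_key) cannot wrap, so the equality test is meaningful. A sketch
using 32-bit arithmetic, where the wrap is easy to show (constants are
stand-ins):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define KEY_SIZE_MAX 64u
    #define HDR_SIZE     16u  /* stand-in for sizeof(struct tipc_aead_key) */

    static bool key_size_ok(uint32_t size, uint32_t keylen)
    {
            if (keylen > KEY_SIZE_MAX)        /* bound first... */
                    return false;
            return size == keylen + HDR_SIZE; /* ...then trust the sum */
    }

    int main(void)
    {
            printf("%d\n", key_size_ok(48 + HDR_SIZE, 48)); /* 1 */
            /* UINT32_MAX + 16 wraps to 15 in 32-bit arithmetic; with the
             * bound checked first the bogus keylen never reaches the sum. */
            printf("%d\n", key_size_ok(15, UINT32_MAX));    /* 0 */
            return 0;
    }
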
+diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
+index a17ac8762d8f9f..789f80f71ca7e1 100644
+--- a/rust/kernel/init.rs
++++ b/rust/kernel/init.rs
+@@ -858,7 +858,7 @@ pub unsafe trait PinInit<T: ?Sized, E = Infallible>: Sized {
+ /// use kernel::{types::Opaque, init::pin_init_from_closure};
+ /// #[repr(C)]
+ /// struct RawFoo([u8; 16]);
+- /// extern {
++ /// extern "C" {
+ /// fn init_foo(_: *mut RawFoo);
+ /// }
+ ///
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 1d13cecc7cc780..04faf15ed316a9 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -130,7 +130,6 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
+ KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+ KBUILD_CFLAGS += -Wno-enum-compare-conditional
+-KBUILD_CFLAGS += -Wno-enum-enum-conversion
+ endif
+
+ endif
+@@ -154,6 +153,10 @@ KBUILD_CFLAGS += -Wno-missing-field-initializers
+ KBUILD_CFLAGS += -Wno-type-limits
+ KBUILD_CFLAGS += -Wno-shift-negative-value
+
++ifdef CONFIG_CC_IS_CLANG
++KBUILD_CFLAGS += -Wno-enum-enum-conversion
++endif
++
+ ifdef CONFIG_CC_IS_GCC
+ KBUILD_CFLAGS += -Wno-maybe-uninitialized
+ endif
+diff --git a/scripts/gdb/linux/cpus.py b/scripts/gdb/linux/cpus.py
+index 2f11c4f9c345a0..13eb8b3901b8fc 100644
+--- a/scripts/gdb/linux/cpus.py
++++ b/scripts/gdb/linux/cpus.py
+@@ -167,7 +167,7 @@ def get_current_task(cpu):
+ var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task")
+ return per_cpu(var_ptr, cpu).dereference()
+ elif utils.is_target_arch("aarch64"):
+- current_task_addr = gdb.parse_and_eval("$SP_EL0")
++ current_task_addr = gdb.parse_and_eval("(unsigned long)$SP_EL0")
+ if (current_task_addr >> 63) != 0:
+ current_task = current_task_addr.cast(task_ptr_type)
+ return current_task.dereference()
+diff --git a/scripts/generate_rust_target.rs b/scripts/generate_rust_target.rs
+index 0d00ac3723b5e5..4fd6b6ab3e329d 100644
+--- a/scripts/generate_rust_target.rs
++++ b/scripts/generate_rust_target.rs
+@@ -165,6 +165,18 @@ fn has(&self, option: &str) -> bool {
+ let option = "CONFIG_".to_owned() + option;
+ self.0.contains_key(&option)
+ }
++
++ /// Is the rustc version at least `major.minor.patch`?
++ fn rustc_version_atleast(&self, major: u32, minor: u32, patch: u32) -> bool {
++ let check_version = 100000 * major + 100 * minor + patch;
++ let actual_version = self
++ .0
++ .get("CONFIG_RUSTC_VERSION")
++ .unwrap()
++ .parse::<u32>()
++ .unwrap();
++ check_version <= actual_version
++ }
+ }
+
+ fn main() {
+@@ -182,6 +194,9 @@ fn main() {
+ }
+ } else if cfg.has("X86_64") {
+ ts.push("arch", "x86_64");
++ if cfg.rustc_version_atleast(1, 86, 0) {
++ ts.push("rustc-abi", "x86-softfloat");
++ }
+ ts.push(
+ "data-layout",
+ "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
+@@ -215,6 +230,9 @@ fn main() {
+ panic!("32-bit x86 only works under UML");
+ }
+ ts.push("arch", "x86");
++ if cfg.rustc_version_atleast(1, 86, 0) {
++ ts.push("rustc-abi", "x86-softfloat");
++ }
+ ts.push(
+ "data-layout",
+ "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128",
+diff --git a/security/keys/trusted-keys/trusted_dcp.c b/security/keys/trusted-keys/trusted_dcp.c
+index e908c53a803c4b..7b6eb655df0cbf 100644
+--- a/security/keys/trusted-keys/trusted_dcp.c
++++ b/security/keys/trusted-keys/trusted_dcp.c
+@@ -201,12 +201,16 @@ static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)
+ {
+ struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;
+ int blen, ret;
+- u8 plain_blob_key[AES_KEYSIZE_128];
++ u8 *plain_blob_key;
+
+ blen = calc_blob_len(p->key_len);
+ if (blen > MAX_BLOB_SIZE)
+ return -E2BIG;
+
++ plain_blob_key = kmalloc(AES_KEYSIZE_128, GFP_KERNEL);
++ if (!plain_blob_key)
++ return -ENOMEM;
++
+ b->fmt_version = DCP_BLOB_VERSION;
+ get_random_bytes(b->nonce, AES_KEYSIZE_128);
+ get_random_bytes(plain_blob_key, AES_KEYSIZE_128);
+@@ -229,7 +233,8 @@ static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)
+ ret = 0;
+
+ out:
+- memzero_explicit(plain_blob_key, sizeof(plain_blob_key));
++ memzero_explicit(plain_blob_key, AES_KEYSIZE_128);
++ kfree(plain_blob_key);
+
+ return ret;
+ }
+@@ -238,7 +243,7 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ {
+ struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;
+ int blen, ret;
+- u8 plain_blob_key[AES_KEYSIZE_128];
++ u8 *plain_blob_key = NULL;
+
+ if (b->fmt_version != DCP_BLOB_VERSION) {
+ pr_err("DCP blob has bad version: %i, expected %i\n",
+@@ -256,6 +261,12 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ goto out;
+ }
+
++ plain_blob_key = kmalloc(AES_KEYSIZE_128, GFP_KERNEL);
++ if (!plain_blob_key) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
+ ret = decrypt_blob_key(b->blob_key, plain_blob_key);
+ if (ret) {
+ pr_err("Unable to decrypt blob key: %i\n", ret);
+@@ -271,7 +282,10 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+
+ ret = 0;
+ out:
+- memzero_explicit(plain_blob_key, sizeof(plain_blob_key));
++ if (plain_blob_key) {
++ memzero_explicit(plain_blob_key, AES_KEYSIZE_128);
++ kfree(plain_blob_key);
++ }
+
+ return ret;
+ }
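
The trusted_dcp change moves the key material from the stack to the heap
and keeps the "zero, then free" discipline, using memzero_explicit() so
the compiler cannot optimize the wipe away as a dead store. A userspace
analogue that gets the same non-elidable wipe through a volatile pointer
(some libcs offer explicit_bzero(); this sketch avoids assuming it):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A wipe the optimizer may not drop: write through volatile. */
    static void secure_zero(void *p, size_t n)
    {
            volatile unsigned char *v = p;

            while (n--)
                    *v++ = 0;
    }

    int main(void)
    {
            size_t len = 16;
            unsigned char *key = malloc(len);

            if (!key)
                    return 1;       /* -ENOMEM analogue */

            memset(key, 0xAB, len); /* stand-in for key material */
            /* ... use the key ... */

            secure_zero(key, len);  /* like memzero_explicit() + kfree() */
            free(key);
            puts("key wiped and freed");
            return 0;
    }
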
+diff --git a/security/safesetid/securityfs.c b/security/safesetid/securityfs.c
+index 25310468bcddff..8e1ffd70b18ab4 100644
+--- a/security/safesetid/securityfs.c
++++ b/security/safesetid/securityfs.c
+@@ -143,6 +143,9 @@ static ssize_t handle_policy_update(struct file *file,
+ char *buf, *p, *end;
+ int err;
+
++ if (len >= KMALLOC_MAX_SIZE)
++ return -EINVAL;
++
+ pol = kmalloc(sizeof(struct setid_ruleset), GFP_KERNEL);
+ if (!pol)
+ return -ENOMEM;
+diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c
+index 5c7b059a332aac..972664962e8f67 100644
+--- a/security/tomoyo/common.c
++++ b/security/tomoyo/common.c
+@@ -2665,7 +2665,7 @@ ssize_t tomoyo_write_control(struct tomoyo_io_buffer *head,
+
+ if (head->w.avail >= head->writebuf_size - 1) {
+ const int len = head->writebuf_size * 2;
+- char *cp = kzalloc(len, GFP_NOFS);
++ char *cp = kzalloc(len, GFP_NOFS | __GFP_NOWARN);
+
+ if (!cp) {
+ error = -ENOMEM;
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 8e74be038b0fad..0091ab3f2bd56b 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -80,7 +80,11 @@ static int compare_input_type(const void *ap, const void *bp)
+
+ /* In case one has boost and the other one has not,
+ pick the one with boost first. */
+- return (int)(b->has_boost_on_pin - a->has_boost_on_pin);
++ if (a->has_boost_on_pin != b->has_boost_on_pin)
++ return (int)(b->has_boost_on_pin - a->has_boost_on_pin);
++
++ /* Keep the original order */
++ return a->order - b->order;
+ }
+
+ /* Reorder the surround channels
+@@ -400,6 +404,8 @@ int snd_hda_parse_pin_defcfg(struct hda_codec *codec,
+ reorder_outputs(cfg->speaker_outs, cfg->speaker_pins);
+
+ /* sort inputs in the order of AUTO_PIN_* type */
++ for (i = 0; i < cfg->num_inputs; i++)
++ cfg->inputs[i].order = i;
+ sort(cfg->inputs, cfg->num_inputs, sizeof(cfg->inputs[0]),
+ compare_input_type, NULL);
+
+diff --git a/sound/pci/hda/hda_auto_parser.h b/sound/pci/hda/hda_auto_parser.h
+index 579b11beac718e..87af3d8c02f7f6 100644
+--- a/sound/pci/hda/hda_auto_parser.h
++++ b/sound/pci/hda/hda_auto_parser.h
+@@ -37,6 +37,7 @@ struct auto_pin_cfg_item {
+ unsigned int is_headset_mic:1;
+ unsigned int is_headphone_mic:1; /* Mic-only in headphone jack */
+ unsigned int has_boost_on_pin:1;
++ int order;
+ };
+
+ struct auto_pin_cfg;
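
The new "order" field exists because sort() (like qsort()) is not stable:
inputs that compare equal may come out in any order. Recording each
element's original index and using it as the final tie-breaker makes any
comparison sort behave stably. A compact demonstration:

    #include <stdio.h>
    #include <stdlib.h>

    struct item {
            int type;       /* primary key, like the AUTO_PIN_* type */
            int order;      /* original index: the tie-breaker */
            char name;
    };

    /* Equal-type items keep their input order under any qsort(). */
    static int cmp(const void *ap, const void *bp)
    {
            const struct item *a = ap, *b = bp;

            if (a->type != b->type)
                    return a->type - b->type; /* small values: no overflow */
            return a->order - b->order;
    }

    int main(void)
    {
            struct item v[] = {
                    { 2, 0, 'x' }, { 1, 1, 'a' }, { 1, 2, 'b' }, { 1, 3, 'c' },
            };
            int n = sizeof(v) / sizeof(v[0]);

            qsort(v, n, sizeof(v[0]), cmp);
            for (int i = 0; i < n; i++)
                    putchar(v[i].name);     /* "abcx": ties kept in order */
            putchar('\n');
            return 0;
    }
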
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 5d99a4ea176a15..f3f849b96402d1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10374,6 +10374,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x887a, "HP Laptop 15s-eq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++ SND_PCI_QUIRK(0x103c, 0x887c, "HP Laptop 14s-fq1xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x888a, "HP ENVY x360 Convertible 15-eu0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED),
+@@ -10873,7 +10874,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ HDA_CODEC_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x386e, "Yoga Pro 7 14ARP8", ALC285_FIXUP_SPEAKER2_TO_DAC1),
+- HDA_CODEC_QUIRK(0x17aa, 0x386f, "Legion Pro 7 16ARX8H", ALC287_FIXUP_TAS2781_I2C),
++ HDA_CODEC_QUIRK(0x17aa, 0x38a8, "Legion Pro 7 16ARX8H", ALC287_FIXUP_TAS2781_I2C), /* this must match before PCI SSID 17aa:386f below */
+ SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7i 16IAX7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3877, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10948,6 +10949,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x17aa, 0x9e56, "Lenovo ZhaoYang CF4620Z", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1849, 0x0269, "Positivo Master C6400", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
+ SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1854, 0x0440, "LG CQ6", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+diff --git a/sound/soc/amd/Kconfig b/sound/soc/amd/Kconfig
+index 6dec44f516c13f..c2a5671ba96b07 100644
+--- a/sound/soc/amd/Kconfig
++++ b/sound/soc/amd/Kconfig
+@@ -105,7 +105,7 @@ config SND_SOC_AMD_ACP6x
+ config SND_SOC_AMD_YC_MACH
+ tristate "AMD YC support for DMIC"
+ select SND_SOC_DMIC
+- depends on SND_SOC_AMD_ACP6x
++ depends on SND_SOC_AMD_ACP6x && ACPI
+ help
+ This option enables machine driver for Yellow Carp platform
+ using dmic. ACP IP has PDM Decoder block with DMA controller.
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index ecf57a6cb7c37d..b16587d8f97a89 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -304,6 +304,34 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "83AS"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83L3"),
++ }
++ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83N6"),
++ }
++ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83Q2"),
++ }
++ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 9f2dc24d44cb54..84fc35d88b9267 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -617,9 +617,10 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "380E")
++ DMI_MATCH(DMI_PRODUCT_NAME, "83HM")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS |
++ SOC_SDW_CODEC_MIC),
+ },
+ {
+ .callback = sof_sdw_quirk_cb,
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 7a59121fc323c3..1102599403c534 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -38,7 +38,6 @@ static inline int _soc_pcm_ret(struct snd_soc_pcm_runtime *rtd,
+ switch (ret) {
+ case -EPROBE_DEFER:
+ case -ENOTSUPP:
+- case -EINVAL:
+ break;
+ default:
+ dev_err(rtd->dev,
+@@ -1001,7 +1000,13 @@ static int __soc_pcm_prepare(struct snd_soc_pcm_runtime *rtd,
+ }
+
+ out:
+- return soc_pcm_ret(rtd, ret);
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
++ return ret;
+ }
+
+ /* PCM prepare ops for non-DPCM streams */
+@@ -1013,6 +1018,13 @@ static int soc_pcm_prepare(struct snd_pcm_substream *substream)
+ snd_soc_dpcm_mutex_lock(rtd);
+ ret = __soc_pcm_prepare(rtd, substream);
+ snd_soc_dpcm_mutex_unlock(rtd);
++
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
+ return ret;
+ }
+
+@@ -2554,7 +2566,13 @@ int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
+ be->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;
+ }
+
+- return soc_pcm_ret(fe, ret);
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
++ return ret;
+ }
+
+ static int dpcm_fe_dai_prepare(struct snd_pcm_substream *substream)
+@@ -2594,7 +2612,13 @@ static int dpcm_fe_dai_prepare(struct snd_pcm_substream *substream)
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
+ snd_soc_dpcm_mutex_unlock(fe);
+
+- return soc_pcm_ret(fe, ret);
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
++ return ret;
+ }
+
+ static int dpcm_run_update_shutdown(struct snd_soc_pcm_runtime *fe, int stream)
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index 82f46ecd94301e..2e58a264da5566 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -503,6 +503,12 @@ int sdw_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ int ret;
+ int i;
+
++ if (!w) {
++ dev_err(cpu_dai->dev, "%s widget not found, check amp link num in the topology\n",
++ cpu_dai->name);
++ return -EINVAL;
++ }
++
+ ops = hda_dai_get_ops(substream, cpu_dai);
+ if (!ops) {
+ dev_err(cpu_dai->dev, "DAI widget ops not set\n");
+@@ -582,6 +588,12 @@ int sdw_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ */
+ for_each_rtd_cpu_dais(rtd, i, dai) {
+ w = snd_soc_dai_get_widget(dai, substream->stream);
++ if (!w) {
++ dev_err(cpu_dai->dev,
++ "%s widget not found, check amp link num in the topology\n",
++ dai->name);
++ return -EINVAL;
++ }
+ ipc4_copier = widget_to_copier(w);
+ memcpy(&ipc4_copier->dma_config_tlv[cpu_dai_id], dma_config_tlv,
+ sizeof(*dma_config_tlv));
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 70fc08c8fc99e2..f10ed4d1025016 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -63,6 +63,11 @@ static int sdw_params_stream(struct device *dev,
+ struct snd_soc_dapm_widget *w = snd_soc_dai_get_widget(d, params_data->substream->stream);
+ struct snd_sof_dai_config_data data = { 0 };
+
++ if (!w) {
++ dev_err(dev, "%s widget not found, check amp link num in the topology\n",
++ d->name);
++ return -EINVAL;
++ }
+ data.dai_index = (params_data->link_id << 8) | d->id;
+ data.dai_data = params_data->alh_stream_id;
+ data.dai_node_id = data.dai_data;
+diff --git a/tools/perf/bench/epoll-wait.c b/tools/perf/bench/epoll-wait.c
+index ef5c4257844d13..20fe4f72b4afcc 100644
+--- a/tools/perf/bench/epoll-wait.c
++++ b/tools/perf/bench/epoll-wait.c
+@@ -420,7 +420,12 @@ static int cmpworker(const void *p1, const void *p2)
+
+ struct worker *w1 = (struct worker *) p1;
+ struct worker *w2 = (struct worker *) p2;
+- return w1->tid > w2->tid;
++
++ if (w1->tid > w2->tid)
++ return 1;
++ if (w1->tid < w2->tid)
++ return -1;
++ return 0;
+ }
+
+ int bench_epoll_wait(int argc, const char **argv)
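
The cmpworker() fix addresses the classic qsort() comparator contract:
the function must return negative, zero, or positive, but "w1->tid >
w2->tid" only ever yields 0 or 1, so less-than and equal were
indistinguishable and the resulting order undefined. The overflow-safe
way to write a three-way integer compare:

    #include <stdio.h>
    #include <stdlib.h>

    /* Never subtract (it can overflow); return one of {-1, 0, 1}. */
    static int cmp_int(const void *ap, const void *bp)
    {
            int a = *(const int *)ap, b = *(const int *)bp;

            if (a > b)
                    return 1;
            if (a < b)
                    return -1;
            return 0;
    }

    int main(void)
    {
            int tids[] = { 42, -7, 0, 19 };

            qsort(tids, 4, sizeof(int), cmp_int);
            for (int i = 0; i < 4; i++)
                    printf("%d ", tids[i]); /* -7 0 19 42 */
            printf("\n");
            return 0;
    }
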
+diff --git a/tools/testing/selftests/net/ipsec.c b/tools/testing/selftests/net/ipsec.c
+index be4a30a0d02aef..9b44a091802cbb 100644
+--- a/tools/testing/selftests/net/ipsec.c
++++ b/tools/testing/selftests/net/ipsec.c
+@@ -227,7 +227,8 @@ static int rtattr_pack(struct nlmsghdr *nh, size_t req_sz,
+
+ attr->rta_len = RTA_LENGTH(size);
+ attr->rta_type = rta_type;
+- memcpy(RTA_DATA(attr), payload, size);
++ if (payload)
++ memcpy(RTA_DATA(attr), payload, size);
+
+ return 0;
+ }
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index 414addef9a4514..d240d02fa443a1 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -1302,7 +1302,7 @@ int main_loop(void)
+ return ret;
+
+ if (cfg_truncate > 0) {
+- xdisconnect(fd);
++ shutdown(fd, SHUT_WR);
+ } else if (--cfg_repeat > 0) {
+ xdisconnect(fd);
+
+diff --git a/tools/testing/selftests/net/udpgso.c b/tools/testing/selftests/net/udpgso.c
+index 3f2fca02fec53f..36ff28af4b1905 100644
+--- a/tools/testing/selftests/net/udpgso.c
++++ b/tools/testing/selftests/net/udpgso.c
+@@ -102,6 +102,19 @@ struct testcase testcases_v4[] = {
+ .gso_len = CONST_MSS_V4,
+ .r_num_mss = 1,
+ },
++ {
++ /* datalen <= MSS < gso_len: will fall back to no GSO */
++ .tlen = CONST_MSS_V4,
++ .gso_len = CONST_MSS_V4 + 1,
++ .r_num_mss = 0,
++ .r_len_last = CONST_MSS_V4,
++ },
++ {
++ /* MSS < datalen < gso_len: fail */
++ .tlen = CONST_MSS_V4 + 1,
++ .gso_len = CONST_MSS_V4 + 2,
++ .tfail = true,
++ },
+ {
+ /* send a single MSS + 1B */
+ .tlen = CONST_MSS_V4 + 1,
+@@ -205,6 +218,19 @@ struct testcase testcases_v6[] = {
+ .gso_len = CONST_MSS_V6,
+ .r_num_mss = 1,
+ },
++ {
++ /* datalen <= MSS < gso_len: will fall back to no GSO */
++ .tlen = CONST_MSS_V6,
++ .gso_len = CONST_MSS_V6 + 1,
++ .r_num_mss = 0,
++ .r_len_last = CONST_MSS_V6,
++ },
++ {
++ /* MSS < datalen < gso_len: fail */
++ .tlen = CONST_MSS_V6 + 1,
++ .gso_len = CONST_MSS_V6 + 2,
++ .tfail = true
++ },
+ {
+ /* send a single MSS + 1B */
+ .tlen = CONST_MSS_V6 + 1,
+diff --git a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+index 6f4c3f5a1c5d99..37d9bf6fb7458d 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+@@ -20,7 +20,7 @@ s32 BPF_STRUCT_OPS(ddsp_bogus_dsq_fail_select_cpu, struct task_struct *p,
+ * If we dispatch to a bogus DSQ that will fall back to the
+ * builtin global DSQ, we fail gracefully.
+ */
+- scx_bpf_dsq_insert_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
++ scx_bpf_dispatch_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
+ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+index e4a55027778fd0..dffc97d9cdf141 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+@@ -17,8 +17,8 @@ s32 BPF_STRUCT_OPS(ddsp_vtimelocal_fail_select_cpu, struct task_struct *p,
+
+ if (cpu >= 0) {
+ /* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */
+- scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
+- p->scx.dsq_vtime, 0);
++ scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
++ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+index fbda6bf5467128..c9a2da0575a0fa 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+@@ -48,7 +48,7 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev)
+ else
+ target = scx_bpf_task_cpu(p);
+
+- scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
+ bpf_task_release(p);
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+index a7cf868d5e311d..1efb50d61040ad 100644
+--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
++++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+@@ -31,7 +31,7 @@ void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p,
+ /* Can only call from ops.select_cpu() */
+ scx_bpf_select_cpu_dfl(p, 0, 0, &found);
+
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c
+index 4bc36182d3ffc2..d75d4faf07f6d5 100644
+--- a/tools/testing/selftests/sched_ext/exit.bpf.c
++++ b/tools/testing/selftests/sched_ext/exit.bpf.c
+@@ -33,7 +33,7 @@ void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags)
+ if (exit_point == EXIT_ENQUEUE)
+ EXIT_CLEANLY();
+
+- scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+@@ -41,7 +41,7 @@ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+ if (exit_point == EXIT_DISPATCH)
+ EXIT_CLEANLY();
+
+- scx_bpf_dsq_move_to_local(DSQ_ID);
++ scx_bpf_consume(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(exit_enable, struct task_struct *p)
+diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
+index 430f5e13bf5544..361797e10ed5d5 100644
+--- a/tools/testing/selftests/sched_ext/maximal.bpf.c
++++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
+@@ -22,7 +22,7 @@ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
+
+ void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags)
+ {
+- scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+
+ void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev)
+ {
+- scx_bpf_dsq_move_to_local(DSQ_ID);
++ scx_bpf_consume(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags)
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+index 13d0f5be788d12..f171ac47097060 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_enqueue, struct task_struct *p,
+ }
+ scx_bpf_put_idle_cpumask(idle_mask);
+
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+index 815f1d5d61ac43..9efdbb7da92887 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+@@ -67,7 +67,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p,
+ saw_local = true;
+ }
+
+- scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, enq_flags);
+ }
+
+ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_init_task,
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+index 4bb99699e9209c..59bfc4f36167a7 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+@@ -29,7 +29,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+
+ dispatch:
+- scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+index 2a75de11b2cfd5..3bbd5fcdfb18e0 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+@@ -18,7 +18,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_bad_dsq_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching to a random DSQ should fail. */
+- scx_bpf_dsq_insert(p, 0xcafef00d, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, 0xcafef00d, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+index 99d075695c9743..0fda57fe0ecfae 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+@@ -18,8 +18,8 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_dbl_dsp_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching twice in a row is disallowed. */
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+index bfcb96cd4954bd..e6c67bcf5e6e35 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+@@ -2,8 +2,8 @@
+ /*
+ * A scheduler that validates that enqueue flags are properly stored and
+ * applied at dispatch time when a task is directly dispatched from
+- * ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(),
+- * and making the test a very basic vtime scheduler.
++ * ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and
++ * making the test a very basic vtime scheduler.
+ *
+ * Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
+ * Copyright (c) 2024 David Vernet <dvernet@meta.com>
+@@ -47,13 +47,13 @@ s32 BPF_STRUCT_OPS(select_cpu_vtime_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+ scx_bpf_test_and_clear_cpu_idle(cpu);
+ ddsp:
+- scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
++ scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
+ return cpu;
+ }
+
+ void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p)
+ {
+- if (scx_bpf_dsq_move_to_local(VTIME_DSQ))
++ if (scx_bpf_consume(VTIME_DSQ))
+ consumed = true;
+ }
+
+diff --git a/tools/tracing/rtla/src/osnoise.c b/tools/tracing/rtla/src/osnoise.c
+index 245e9344932bc4..699a83f538a8e8 100644
+--- a/tools/tracing/rtla/src/osnoise.c
++++ b/tools/tracing/rtla/src/osnoise.c
+@@ -867,7 +867,7 @@ int osnoise_set_workload(struct osnoise_context *context, bool onoff)
+
+ retval = osnoise_options_set_option("OSNOISE_WORKLOAD", onoff);
+ if (retval < 0)
+- return -1;
++ return -2;
+
+ context->opt_workload = onoff;
+
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 2cc3ffcbc983d3..4cbd2d8ebb0461 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1091,12 +1091,15 @@ timerlat_hist_apply_config(struct osnoise_tool *tool, struct timerlat_hist_param
+ }
+ }
+
+- if (params->user_hist) {
+- retval = osnoise_set_workload(tool->context, 0);
+- if (retval) {
+- err_msg("Failed to set OSNOISE_WORKLOAD option\n");
+- goto out_err;
+- }
++ /*
++ * Set workload according to type of thread if the kernel supports it.
++ * On kernels without support, user threads will have already failed
++ * on missing timerlat_fd, and kernel threads do not need it.
++ */
++ retval = osnoise_set_workload(tool->context, params->kernel_workload);
++ if (retval < -1) {
++ err_msg("Failed to set OSNOISE_WORKLOAD option\n");
++ goto out_err;
+ }
+
+ return 0;
+@@ -1137,9 +1140,12 @@ static struct osnoise_tool
+ }
+
+ static int stop_tracing;
++static struct trace_instance *hist_inst = NULL;
+ static void stop_hist(int sig)
+ {
+ stop_tracing = 1;
++ if (hist_inst)
++ trace_instance_stop(hist_inst);
+ }
+
+ /*
+@@ -1185,6 +1191,12 @@ int timerlat_hist_main(int argc, char *argv[])
+ }
+
+ trace = &tool->trace;
++ /*
++ * Save trace instance into global variable so that SIGINT can stop
++ * the timerlat tracer.
++ * Otherwise, rtla could loop indefinitely when overloaded.
++ */
++ hist_inst = trace;
+
+ retval = enable_timerlat(trace);
+ if (retval) {
+@@ -1331,7 +1343,7 @@ int timerlat_hist_main(int argc, char *argv[])
+
+ return_value = 0;
+
+- if (trace_is_off(&tool->trace, &record->trace)) {
++ if (trace_is_off(&tool->trace, &record->trace) && !stop_tracing) {
+ printf("rtla timerlat hit stop tracing\n");
+
+ if (!params->no_aa)
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index ac2ff38a57ee55..d13be28dacd599 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -842,12 +842,15 @@ timerlat_top_apply_config(struct osnoise_tool *top, struct timerlat_top_params *
+ }
+ }
+
+- if (params->user_top) {
+- retval = osnoise_set_workload(top->context, 0);
+- if (retval) {
+- err_msg("Failed to set OSNOISE_WORKLOAD option\n");
+- goto out_err;
+- }
++ /*
++ * Set workload according to type of thread if the kernel supports it.
++ * On kernels without support, user threads will have already failed
++ * on missing timerlat_fd, and kernel threads do not need it.
++ */
++ retval = osnoise_set_workload(top->context, params->kernel_workload);
++ if (retval < -1) {
++ err_msg("Failed to set OSNOISE_WORKLOAD option\n");
++ goto out_err;
+ }
+
+ if (isatty(1) && !params->quiet)
+@@ -891,9 +894,12 @@ static struct osnoise_tool
+ }
+
+ static int stop_tracing;
++static struct trace_instance *top_inst = NULL;
+ static void stop_top(int sig)
+ {
+ stop_tracing = 1;
++ if (top_inst)
++ trace_instance_stop(top_inst);
+ }
+
+ /*
+@@ -940,6 +946,13 @@ int timerlat_top_main(int argc, char *argv[])
+ }
+
+ trace = &top->trace;
++ /*
++ * Save trace instance into global variable so that SIGINT can stop
++ * the timerlat tracer.
++ * Otherwise, rtla could loop indefinitely when overloaded.
++ */
++ top_inst = trace;
++
+
+ retval = enable_timerlat(trace);
+ if (retval) {
+@@ -1099,7 +1112,7 @@ int timerlat_top_main(int argc, char *argv[])
+
+ return_value = 0;
+
+- if (trace_is_off(&top->trace, &record->trace)) {
++ if (trace_is_off(&top->trace, &record->trace) && !stop_tracing) {
+ printf("rtla timerlat hit stop tracing\n");
+
+ if (!params->no_aa)
+diff --git a/tools/tracing/rtla/src/trace.c b/tools/tracing/rtla/src/trace.c
+index 170a706248abff..440323a997c621 100644
+--- a/tools/tracing/rtla/src/trace.c
++++ b/tools/tracing/rtla/src/trace.c
+@@ -196,6 +196,14 @@ int trace_instance_start(struct trace_instance *trace)
+ return tracefs_trace_on(trace->inst);
+ }
+
++/*
++ * trace_instance_stop - stop tracing a given rtla instance
++ */
++int trace_instance_stop(struct trace_instance *trace)
++{
++ return tracefs_trace_off(trace->inst);
++}
++
+ /*
+ * trace_events_free - free a list of trace events
+ */
+diff --git a/tools/tracing/rtla/src/trace.h b/tools/tracing/rtla/src/trace.h
+index c7c92dc9a18a61..76e1b77291ba2a 100644
+--- a/tools/tracing/rtla/src/trace.h
++++ b/tools/tracing/rtla/src/trace.h
+@@ -21,6 +21,7 @@ struct trace_instance {
+
+ int trace_instance_init(struct trace_instance *trace, char *tool_name);
+ int trace_instance_start(struct trace_instance *trace);
++int trace_instance_stop(struct trace_instance *trace);
+ void trace_instance_destroy(struct trace_instance *trace);
+
+ struct trace_seq *get_trace_seq(void);
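A note on the perf bench epoll-wait hunk above: qsort(3) comparators must return a negative, zero, or positive value, but the old "return w1->tid > w2->tid;" can only produce 0 or 1, which is not a valid ordering. A minimal standalone sketch of the corrected three-way comparator follows; the struct here is a hypothetical stand-in, not the perf sources.

#include <stdio.h>
#include <stdlib.h>

struct worker { int tid; };     /* stand-in for perf's struct worker */

/* qsort needs <0, 0 or >0; a bare "a > b" can never report "less than" */
static int cmpworker(const void *p1, const void *p2)
{
        const struct worker *w1 = p1;
        const struct worker *w2 = p2;

        if (w1->tid > w2->tid)
                return 1;
        if (w1->tid < w2->tid)
                return -1;
        return 0;
}

int main(void)
{
        struct worker w[] = { { 3 }, { 1 }, { 2 } };

        qsort(w, 3, sizeof(w[0]), cmpworker);
        for (int i = 0; i < 3; i++)
                printf("%d\n", w[i].tid);       /* prints 1 2 3 */
        return 0;
}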
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-17 11:25 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-17 11:25 UTC (permalink / raw
To: gentoo-commits
commit: e35e34f8774b096f3213e24cfcbf45e4240ae613
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 17 11:24:59 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 17 11:24:59 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e35e34f8
Removed redundant patch
Removed
2980_GCC15-gnu23-to-gnu11-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
2980_GCC15-gnu23-to-gnu11-fix.patch | 105 ------------------------------------
2 files changed, 109 deletions(-)
diff --git a/0000_README b/0000_README
index c6c607fe..54f48e7e 100644
--- a/0000_README
+++ b/0000_README
@@ -131,10 +131,6 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
-Patch: 2980_GCC15-gnu23-to-gnu11-fix.patch
-From: https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
-Desc: GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere.
-
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_GCC15-gnu23-to-gnu11-fix.patch b/2980_GCC15-gnu23-to-gnu11-fix.patch
deleted file mode 100644
index c74b6180..00000000
--- a/2980_GCC15-gnu23-to-gnu11-fix.patch
+++ /dev/null
@@ -1,105 +0,0 @@
-GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
-some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
-everywhere.
-
-https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
---- a/Makefile
-+++ b/Makefile
-@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
- # SHELL used by kbuild
- CONFIG_SHELL := sh
-
-+CSTD_FLAG := -std=gnu11
-+
- HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
- HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
- HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
-@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
- HOSTPKG_CONFIG = pkg-config
-
- KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
-- -O2 -fomit-frame-pointer -std=gnu11
-+ -O2 -fomit-frame-pointer $(CSTD_FLAG)
- KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
- KBUILD_USERLDFLAGS := $(USERLDFLAGS)
-
-@@ -545,7 +547,7 @@ LINUXINCLUDE := \
- KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
-
- KBUILD_CFLAGS :=
--KBUILD_CFLAGS += -std=gnu11
-+KBUILD_CFLAGS += $(CSTD_FLAG)
- KBUILD_CFLAGS += -fshort-wchar
- KBUILD_CFLAGS += -funsigned-char
- KBUILD_CFLAGS += -fno-common
-@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
- export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
- export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
- export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
--export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
-+export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
-
- export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
- export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
---- a/arch/arm64/kernel/vdso32/Makefile
-+++ b/arch/arm64/kernel/vdso32/Makefile
-@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
- -fno-strict-aliasing -fno-common \
- -Werror-implicit-function-declaration \
- -Wno-format-security \
-- -std=gnu11
-+ $(CSTD_FLAG)
- VDSO_CFLAGS += -O2
- # Some useful compiler-dependent flags from top-level Makefile
- VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
---- a/arch/x86/Makefile
-+++ b/arch/x86/Makefile
-@@ -47,7 +47,7 @@ endif
-
- # How to compile the 16-bit code. Note we always compile for -march=i386;
- # that way we can complain to the user if the CPU is insufficient.
--REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
-+REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
- -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
- -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
- -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
---- a/drivers/firmware/efi/libstub/Makefile
-+++ b/drivers/firmware/efi/libstub/Makefile
-@@ -7,7 +7,7 @@
- #
-
- # non-x86 reuses KBUILD_CFLAGS, x86 does not
--cflags-y := $(KBUILD_CFLAGS)
-+cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
-
- cflags-$(CONFIG_X86_32) := -march=i386
- cflags-$(CONFIG_X86_64) := -mcmodel=small
-@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
- $(call cc-disable-warning, address-of-packed-member) \
- $(call cc-disable-warning, gnu) \
- -fno-asynchronous-unwind-tables \
-- $(CLANG_FLAGS)
-+ $(CLANG_FLAGS) $(CSTD_FLAG)
-
- # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
- # disable the stackleak plugin
-@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
- -ffreestanding \
- -fno-stack-protector \
- $(call cc-option,-fno-addrsig) \
-- -D__DISABLE_EXPORTS
-+ -D__DISABLE_EXPORTS $(CSTD_FLAG)
-
- #
- # struct randomization only makes sense for Linux internal types, which the EFI
---- a/arch/x86/boot/compressed/Makefile
-+++ b/arch/x86/boot/compressed/Makefile
-@@ -24,7 +24,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
- # case of cross compiling, as it has the '--target=' flag, which is needed to
- # avoid errors with '-march=i386', and future flags may depend on the target to
- # be valid.
--KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
-+KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS) $(CSTD_FLAG)
- KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
- KBUILD_CFLAGS += -Wundef
- KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-17 15:44 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-17 15:44 UTC (permalink / raw
To: gentoo-commits
commit: 86cb22103e9d9da37a098e62cada3e3a279169a4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 17 15:44:27 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 17 15:44:27 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=86cb2210
kbuild gcc15 fixes, thanks to holgar
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch | 94 ++++++++++++++++++++++++++++++
2 files changed, 98 insertions(+)
diff --git a/0000_README b/0000_README
index 54f48e7e..8a136823 100644
--- a/0000_README
+++ b/0000_README
@@ -131,6 +131,10 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
+Patch: 2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
+From: https://github.com/hhoffstaette/kernel-patches/
+Desc: gcc 15 kbuild fixes
+
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch b/2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
new file mode 100644
index 00000000..e55dc3ed
--- /dev/null
+++ b/2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
@@ -0,0 +1,94 @@
+GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
+some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
+everywhere.
+
+https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+--- a/Makefile
++++ b/Makefile
+@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
+ # SHELL used by kbuild
+ CONFIG_SHELL := sh
+
++CSTD_FLAG := -std=gnu11
++
+ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+ HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+ HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
+ HOSTPKG_CONFIG = pkg-config
+
+ KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
+- -O2 -fomit-frame-pointer -std=gnu11
++ -O2 -fomit-frame-pointer $(CSTD_FLAG)
+ KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
+ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
+
+@@ -545,7 +547,7 @@ LINUXINCLUDE := \
+ KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
+
+ KBUILD_CFLAGS :=
+-KBUILD_CFLAGS += -std=gnu11
++KBUILD_CFLAGS += $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fshort-wchar
+ KBUILD_CFLAGS += -funsigned-char
+ KBUILD_CFLAGS += -fno-common
+@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
+ export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+ export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+-export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
++export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
+ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -fno-common \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+- -std=gnu11
++ $(CSTD_FLAG)
+ VDSO_CFLAGS += -O2
+ # Some useful compiler-dependent flags from top-level Makefile
+ VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ endif
+
+ # How to compile the 16-bit code. Note we always compile for -march=i386;
+ # that way we can complain to the user if the CPU is insufficient.
+-REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
++REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
+ -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -7,7 +7,7 @@
+ #
+
+ # non-x86 reuses KBUILD_CFLAGS, x86 does not
+-cflags-y := $(KBUILD_CFLAGS)
++cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
+ $(call cc-disable-warning, address-of-packed-member) \
+ $(call cc-disable-warning, gnu) \
+ -fno-asynchronous-unwind-tables \
+- $(CLANG_FLAGS)
++ $(CLANG_FLAGS) $(CSTD_FLAG)
+
+ # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
+ # disable the stackleak plugin
+@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
+ -ffreestanding \
+ -fno-stack-protector \
+ $(call cc-option,-fno-addrsig) \
+- -D__DISABLE_EXPORTS
++ -D__DISABLE_EXPORTS $(CSTD_FLAG)
+
+ #
+ # struct randomization only makes sense for Linux internal types, which the EFI
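For background on what this patch works around: GCC 15 compiles C as -std=gnu23 by default, and C23 turns bool, true and false into keywords, so the pre-C23 fallback definitions that older code still carries stop building; pinning -std=gnu11 through CSTD_FLAG sidesteps that. A contrived sketch of the breakage, illustrative only and not taken from the kernel sources:

/* gcc -std=gnu11 -c sketch.c  -> builds
 * gcc -std=gnu23 -c sketch.c  -> error, bool is a keyword in C23
 */
typedef int bool;       /* accepted under gnu11, a hard error under gnu23 */
#define true  1
#define false 0

int is_set(int x)
{
        bool ok = x ? true : false;
        return ok;
}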
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-18 11:26 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-18 11:26 UTC (permalink / raw
To: gentoo-commits
commit: 74d0366e3c6bc166d44d782e2f233740da1a9a16
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb 18 11:26:15 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb 18 11:26:15 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=74d0366e
Linux patch 6.12.15
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
1014_linux-6.12.15.patch | 142 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 146 insertions(+)
diff --git a/0000_README b/0000_README
index 8a136823..f6cd3204 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-6.12.14.patch
From: https://www.kernel.org
Desc: Linux 6.12.14
+Patch: 1014_linux-6.12.15.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.15
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1014_linux-6.12.15.patch b/1014_linux-6.12.15.patch
new file mode 100644
index 00000000..8fb3146b
--- /dev/null
+++ b/1014_linux-6.12.15.patch
@@ -0,0 +1,142 @@
+diff --git a/Makefile b/Makefile
+index 26a471dbed62a5..c6918c620bc368 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/fs/xfs/xfs_quota.h b/fs/xfs/xfs_quota.h
+index 23d71a55bbc006..032f3a70f21ddd 100644
+--- a/fs/xfs/xfs_quota.h
++++ b/fs/xfs/xfs_quota.h
+@@ -96,7 +96,8 @@ extern void xfs_trans_free_dqinfo(struct xfs_trans *);
+ extern void xfs_trans_mod_dquot_byino(struct xfs_trans *, struct xfs_inode *,
+ uint, int64_t);
+ extern void xfs_trans_apply_dquot_deltas(struct xfs_trans *);
+-extern void xfs_trans_unreserve_and_mod_dquots(struct xfs_trans *);
++void xfs_trans_unreserve_and_mod_dquots(struct xfs_trans *tp,
++ bool already_locked);
+ int xfs_trans_reserve_quota_nblks(struct xfs_trans *tp, struct xfs_inode *ip,
+ int64_t dblocks, int64_t rblocks, bool force);
+ extern int xfs_trans_reserve_quota_bydquots(struct xfs_trans *,
+@@ -166,7 +167,7 @@ static inline void xfs_trans_mod_dquot_byino(struct xfs_trans *tp,
+ {
+ }
+ #define xfs_trans_apply_dquot_deltas(tp)
+-#define xfs_trans_unreserve_and_mod_dquots(tp)
++#define xfs_trans_unreserve_and_mod_dquots(tp, a)
+ static inline int xfs_trans_reserve_quota_nblks(struct xfs_trans *tp,
+ struct xfs_inode *ip, int64_t dblocks, int64_t rblocks,
+ bool force)
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index ee46051db12dde..39cd11cbe21fcb 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -840,6 +840,7 @@ __xfs_trans_commit(
+ */
+ if (tp->t_flags & XFS_TRANS_SB_DIRTY)
+ xfs_trans_apply_sb_deltas(tp);
++ xfs_trans_apply_dquot_deltas(tp);
+
+ error = xfs_trans_run_precommits(tp);
+ if (error)
+@@ -868,11 +869,6 @@ __xfs_trans_commit(
+
+ ASSERT(tp->t_ticket != NULL);
+
+- /*
+- * If we need to update the superblock, then do it now.
+- */
+- xfs_trans_apply_dquot_deltas(tp);
+-
+ xlog_cil_commit(log, tp, &commit_seq, regrant);
+
+ xfs_trans_free(tp);
+@@ -898,7 +894,7 @@ __xfs_trans_commit(
+ * the dqinfo portion to be. All that means is that we have some
+ * (non-persistent) quota reservations that need to be unreserved.
+ */
+- xfs_trans_unreserve_and_mod_dquots(tp);
++ xfs_trans_unreserve_and_mod_dquots(tp, true);
+ if (tp->t_ticket) {
+ if (regrant && !xlog_is_shutdown(log))
+ xfs_log_ticket_regrant(log, tp->t_ticket);
+@@ -992,7 +988,7 @@ xfs_trans_cancel(
+ }
+ #endif
+ xfs_trans_unreserve_and_mod_sb(tp);
+- xfs_trans_unreserve_and_mod_dquots(tp);
++ xfs_trans_unreserve_and_mod_dquots(tp, false);
+
+ if (tp->t_ticket) {
+ xfs_log_ticket_ungrant(log, tp->t_ticket);
+diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
+index b368e13424c4f4..b92eeaa1a2a9e7 100644
+--- a/fs/xfs/xfs_trans_dquot.c
++++ b/fs/xfs/xfs_trans_dquot.c
+@@ -602,6 +602,24 @@ xfs_trans_apply_dquot_deltas(
+ ASSERT(dqp->q_blk.reserved >= dqp->q_blk.count);
+ ASSERT(dqp->q_ino.reserved >= dqp->q_ino.count);
+ ASSERT(dqp->q_rtb.reserved >= dqp->q_rtb.count);
++
++ /*
++ * We've applied the count changes and given back
++ * whatever reservation we didn't use. Zero out the
++ * dqtrx fields.
++ */
++ qtrx->qt_blk_res = 0;
++ qtrx->qt_bcount_delta = 0;
++ qtrx->qt_delbcnt_delta = 0;
++
++ qtrx->qt_rtblk_res = 0;
++ qtrx->qt_rtblk_res_used = 0;
++ qtrx->qt_rtbcount_delta = 0;
++ qtrx->qt_delrtb_delta = 0;
++
++ qtrx->qt_ino_res = 0;
++ qtrx->qt_ino_res_used = 0;
++ qtrx->qt_icount_delta = 0;
+ }
+ }
+ }
+@@ -638,7 +656,8 @@ xfs_trans_unreserve_and_mod_dquots_hook(
+ */
+ void
+ xfs_trans_unreserve_and_mod_dquots(
+- struct xfs_trans *tp)
++ struct xfs_trans *tp,
++ bool already_locked)
+ {
+ int i, j;
+ struct xfs_dquot *dqp;
+@@ -667,10 +686,12 @@ xfs_trans_unreserve_and_mod_dquots(
+ * about the number of blocks used field, or deltas.
+ * Also we don't bother to zero the fields.
+ */
+- locked = false;
++ locked = already_locked;
+ if (qtrx->qt_blk_res) {
+- xfs_dqlock(dqp);
+- locked = true;
++ if (!locked) {
++ xfs_dqlock(dqp);
++ locked = true;
++ }
+ dqp->q_blk.reserved -=
+ (xfs_qcnt_t)qtrx->qt_blk_res;
+ }
+@@ -691,7 +712,7 @@ xfs_trans_unreserve_and_mod_dquots(
+ dqp->q_rtb.reserved -=
+ (xfs_qcnt_t)qtrx->qt_rtblk_res;
+ }
+- if (locked)
++ if (locked && !already_locked)
+ xfs_dqunlock(dqp);
+
+ }
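The already_locked flag threaded through xfs_trans_unreserve_and_mod_dquots() above is a common conditional-locking shape: one helper serves both callers that already hold the lock and callers that do not, taking and releasing it only in the latter case. A generic pthreads sketch of the same shape, using hypothetical names rather than the XFS code:

#include <pthread.h>

struct dquot {
        pthread_mutex_t lock;
        long blk_reserved;
};

/* Give back an unused block reservation; lock only if the caller hasn't. */
static void unreserve_blocks(struct dquot *dq, long res, int already_locked)
{
        int locked = already_locked;

        if (res) {
                if (!locked) {
                        pthread_mutex_lock(&dq->lock);
                        locked = 1;
                }
                dq->blk_reserved -= res;
        }
        if (locked && !already_locked)
                pthread_mutex_unlock(&dq->lock);
}

int main(void)
{
        struct dquot dq = { PTHREAD_MUTEX_INITIALIZER, 8 };

        unreserve_blocks(&dq, 8, 0);    /* caller does not hold dq.lock */
        return (int)dq.blk_reserved;    /* 0 */
}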
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-21 13:31 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-21 13:31 UTC (permalink / raw
To: gentoo-commits
commit: ce2243f5071849f131d7ebfffb21858a8b0fb12a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 21 13:30:53 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 21 13:30:53 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ce2243f5
Linux patch 6.12.16
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1015_linux-6.12.16.patch | 9009 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9013 insertions(+)
diff --git a/0000_README b/0000_README
index f6cd3204..9f0c3a67 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-6.12.15.patch
From: https://www.kernel.org
Desc: Linux 6.12.15
+Patch: 1015_linux-6.12.16.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.16
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1015_linux-6.12.16.patch b/1015_linux-6.12.16.patch
new file mode 100644
index 00000000..6d524d6b
--- /dev/null
+++ b/1015_linux-6.12.16.patch
@@ -0,0 +1,9009 @@
+diff --git a/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml b/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
+index f2fd2df68a9ed9..b7241ce975b961 100644
+--- a/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
+@@ -22,7 +22,7 @@ description:
+ Each sub-node is identified using the node's name, with valid values listed
+ for each of the pmics below.
+
+- For mp5496, s1, s2
++ For mp5496, s1, s2, l2, l5
+
+ For pm2250, s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11,
+ l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22
+diff --git a/Documentation/networking/iso15765-2.rst b/Documentation/networking/iso15765-2.rst
+index 0e9d960741783b..37ebb2c417cb44 100644
+--- a/Documentation/networking/iso15765-2.rst
++++ b/Documentation/networking/iso15765-2.rst
+@@ -369,8 +369,8 @@ to their default.
+
+ addr.can_family = AF_CAN;
+ addr.can_ifindex = if_nametoindex("can0");
+- addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
+- addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
++ addr.can_addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
++ addr.can_addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
+
+ ret = bind(s, (struct sockaddr *)&addr, sizeof(addr));
+ if (ret < 0)
+diff --git a/Makefile b/Makefile
+index c6918c620bc368..340da922fa4f2c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1057,8 +1057,8 @@ LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
+ endif
+
+ # Align the bit size of userspace programs with the kernel
+-KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
+-KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
++KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
++KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+
+ # make the checker run with the right architecture
+ CHECKFLAGS += --arch=$(ARCH)
+@@ -1357,18 +1357,13 @@ ifneq ($(wildcard $(resolve_btfids_O)),)
+ $(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
+ endif
+
+-# Clear a bunch of variables before executing the submake
+-ifeq ($(quiet),silent_)
+-tools_silent=s
+-endif
+-
+ tools/: FORCE
+ $(Q)mkdir -p $(objtree)/tools
+- $(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
++ $(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
+
+ tools/%: FORCE
+ $(Q)mkdir -p $(objtree)/tools
+- $(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
++ $(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
+
+ # ---------------------------------------------------------------------------
+ # Kernel selftest
+diff --git a/arch/alpha/include/uapi/asm/ptrace.h b/arch/alpha/include/uapi/asm/ptrace.h
+index 5ca45934fcbb82..72ed913a910f25 100644
+--- a/arch/alpha/include/uapi/asm/ptrace.h
++++ b/arch/alpha/include/uapi/asm/ptrace.h
+@@ -42,6 +42,8 @@ struct pt_regs {
+ unsigned long trap_a0;
+ unsigned long trap_a1;
+ unsigned long trap_a2;
++/* This makes the stack 16-byte aligned as GCC expects */
++ unsigned long __pad0;
+ /* These are saved by PAL-code: */
+ unsigned long ps;
+ unsigned long pc;
+diff --git a/arch/alpha/kernel/asm-offsets.c b/arch/alpha/kernel/asm-offsets.c
+index 4cfeae42c79ac7..e9dad60b147f33 100644
+--- a/arch/alpha/kernel/asm-offsets.c
++++ b/arch/alpha/kernel/asm-offsets.c
+@@ -19,9 +19,13 @@ static void __used foo(void)
+ DEFINE(TI_STATUS, offsetof(struct thread_info, status));
+ BLANK();
+
++ DEFINE(SP_OFF, offsetof(struct pt_regs, ps));
+ DEFINE(SIZEOF_PT_REGS, sizeof(struct pt_regs));
+ BLANK();
+
++ DEFINE(SWITCH_STACK_SIZE, sizeof(struct switch_stack));
++ BLANK();
++
+ DEFINE(HAE_CACHE, offsetof(struct alpha_machine_vector, hae_cache));
+ DEFINE(HAE_REG, offsetof(struct alpha_machine_vector, hae_register));
+ }
+diff --git a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
+index dd26062d75b3c5..f4d41b4538c2e8 100644
+--- a/arch/alpha/kernel/entry.S
++++ b/arch/alpha/kernel/entry.S
+@@ -15,10 +15,6 @@
+ .set noat
+ .cfi_sections .debug_frame
+
+-/* Stack offsets. */
+-#define SP_OFF 184
+-#define SWITCH_STACK_SIZE 64
+-
+ .macro CFI_START_OSF_FRAME func
+ .align 4
+ .globl \func
+@@ -198,8 +194,8 @@ CFI_END_OSF_FRAME entArith
+ CFI_START_OSF_FRAME entMM
+ SAVE_ALL
+ /* save $9 - $15 so the inline exception code can manipulate them. */
+- subq $sp, 56, $sp
+- .cfi_adjust_cfa_offset 56
++ subq $sp, 64, $sp
++ .cfi_adjust_cfa_offset 64
+ stq $9, 0($sp)
+ stq $10, 8($sp)
+ stq $11, 16($sp)
+@@ -214,7 +210,7 @@ CFI_START_OSF_FRAME entMM
+ .cfi_rel_offset $13, 32
+ .cfi_rel_offset $14, 40
+ .cfi_rel_offset $15, 48
+- addq $sp, 56, $19
++ addq $sp, 64, $19
+ /* handle the fault */
+ lda $8, 0x3fff
+ bic $sp, $8, $8
+@@ -227,7 +223,7 @@ CFI_START_OSF_FRAME entMM
+ ldq $13, 32($sp)
+ ldq $14, 40($sp)
+ ldq $15, 48($sp)
+- addq $sp, 56, $sp
++ addq $sp, 64, $sp
+ .cfi_restore $9
+ .cfi_restore $10
+ .cfi_restore $11
+@@ -235,7 +231,7 @@ CFI_START_OSF_FRAME entMM
+ .cfi_restore $13
+ .cfi_restore $14
+ .cfi_restore $15
+- .cfi_adjust_cfa_offset -56
++ .cfi_adjust_cfa_offset -64
+ /* finish up the syscall as normal. */
+ br ret_from_sys_call
+ CFI_END_OSF_FRAME entMM
+@@ -382,8 +378,8 @@ entUnaUser:
+ .cfi_restore $0
+ .cfi_adjust_cfa_offset -256
+ SAVE_ALL /* setup normal kernel stack */
+- lda $sp, -56($sp)
+- .cfi_adjust_cfa_offset 56
++ lda $sp, -64($sp)
++ .cfi_adjust_cfa_offset 64
+ stq $9, 0($sp)
+ stq $10, 8($sp)
+ stq $11, 16($sp)
+@@ -399,7 +395,7 @@ entUnaUser:
+ .cfi_rel_offset $14, 40
+ .cfi_rel_offset $15, 48
+ lda $8, 0x3fff
+- addq $sp, 56, $19
++ addq $sp, 64, $19
+ bic $sp, $8, $8
+ jsr $26, do_entUnaUser
+ ldq $9, 0($sp)
+@@ -409,7 +405,7 @@ entUnaUser:
+ ldq $13, 32($sp)
+ ldq $14, 40($sp)
+ ldq $15, 48($sp)
+- lda $sp, 56($sp)
++ lda $sp, 64($sp)
+ .cfi_restore $9
+ .cfi_restore $10
+ .cfi_restore $11
+@@ -417,7 +413,7 @@ entUnaUser:
+ .cfi_restore $13
+ .cfi_restore $14
+ .cfi_restore $15
+- .cfi_adjust_cfa_offset -56
++ .cfi_adjust_cfa_offset -64
+ br ret_from_sys_call
+ CFI_END_OSF_FRAME entUna
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index a9a38c80c4a7af..7004397937cfda 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -649,7 +649,7 @@ s_reg_to_mem (unsigned long s_reg)
+ static int unauser_reg_offsets[32] = {
+ R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8),
+ /* r9 ... r15 are stored in front of regs. */
+- -56, -48, -40, -32, -24, -16, -8,
++ -64, -56, -48, -40, -32, -24, -16, /* padding at -8 */
+ R(r16), R(r17), R(r18),
+ R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26),
+ R(r27), R(r28), R(gp),
+diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
+index 8c9850437e6744..a9816bbc9f34d3 100644
+--- a/arch/alpha/mm/fault.c
++++ b/arch/alpha/mm/fault.c
+@@ -78,8 +78,8 @@ __load_new_mm_context(struct mm_struct *next_mm)
+
+ /* Macro for exception fixup code to access integer registers. */
+ #define dpf_reg(r) \
+- (((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-16 : \
+- (r) <= 18 ? (r)+10 : (r)-10])
++ (((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-17 : \
++ (r) <= 18 ? (r)+11 : (r)-10])
+
+ asmlinkage void
+ do_page_fault(unsigned long address, unsigned long mmcsr,
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 9efd3f37c2fd9d..19a4988621ac9a 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -48,7 +48,11 @@ KBUILD_CFLAGS += $(CC_FLAGS_NO_FPU) \
+ KBUILD_CFLAGS += $(call cc-disable-warning, psabi)
+ KBUILD_AFLAGS += $(compat_vdso)
+
++ifeq ($(call test-ge, $(CONFIG_RUSTC_VERSION), 108500),y)
++KBUILD_RUSTFLAGS += --target=aarch64-unknown-none-softfloat
++else
+ KBUILD_RUSTFLAGS += --target=aarch64-unknown-none -Ctarget-feature="-neon"
++endif
+
+ KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
+ KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)
+diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
+index d9c9218fa1fddc..309942b06c5bc2 100644
+--- a/arch/arm64/kernel/cacheinfo.c
++++ b/arch/arm64/kernel/cacheinfo.c
+@@ -101,16 +101,18 @@ int populate_cache_leaves(unsigned int cpu)
+ unsigned int level, idx;
+ enum cache_type type;
+ struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+- struct cacheinfo *this_leaf = this_cpu_ci->info_list;
++ struct cacheinfo *infos = this_cpu_ci->info_list;
+
+ for (idx = 0, level = 1; level <= this_cpu_ci->num_levels &&
+- idx < this_cpu_ci->num_leaves; idx++, level++) {
++ idx < this_cpu_ci->num_leaves; level++) {
+ type = get_cache_type(level);
+ if (type == CACHE_TYPE_SEPARATE) {
+- ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+- ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
++ if (idx + 1 >= this_cpu_ci->num_leaves)
++ break;
++ ci_leaf_init(&infos[idx++], CACHE_TYPE_DATA, level);
++ ci_leaf_init(&infos[idx++], CACHE_TYPE_INST, level);
+ } else {
+- ci_leaf_init(this_leaf++, type, level);
++ ci_leaf_init(&infos[idx++], type, level);
+ }
+ }
+ return 0;
+diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S
+index f204a9ddc83359..a3f1e895e2a670 100644
+--- a/arch/arm64/kernel/vdso/vdso.lds.S
++++ b/arch/arm64/kernel/vdso/vdso.lds.S
+@@ -41,6 +41,7 @@ SECTIONS
+ */
+ /DISCARD/ : {
+ *(.note.GNU-stack .note.gnu.property)
++ *(.ARM.attributes)
+ }
+ .note : { *(.note.*) } :text :note
+
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index f84c71f04d9ea9..e73326bd3ff7e9 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -162,6 +162,7 @@ SECTIONS
+ /DISCARD/ : {
+ *(.interp .dynamic)
+ *(.dynsym .dynstr .hash .gnu.hash)
++ *(.ARM.attributes)
+ }
+
+ . = KIMAGE_VADDR;
+diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
+index 86d5d90ebefe5b..4f09121417818d 100644
+--- a/arch/loongarch/kernel/genex.S
++++ b/arch/loongarch/kernel/genex.S
+@@ -18,16 +18,19 @@
+
+ .align 5
+ SYM_FUNC_START(__arch_cpu_idle)
+- /* start of rollback region */
+- LONG_L t0, tp, TI_FLAGS
+- nop
+- andi t0, t0, _TIF_NEED_RESCHED
+- bnez t0, 1f
+- nop
+- nop
+- nop
++ /* start of idle interrupt region */
++ ori t0, zero, CSR_CRMD_IE
++ /* idle instruction needs irq enabled */
++ csrxchg t0, t0, LOONGARCH_CSR_CRMD
++ /*
++ * If an interrupt lands here, between enabling interrupts above and
++ * going idle on the next instruction, we must *NOT* go idle since the
++ * interrupt could have set TIF_NEED_RESCHED or caused a timer to need
++ * reprogramming. Fall through -- see handle_vint() below -- and have
++ * the idle loop take care of things.
++ */
+ idle 0
+- /* end of rollback region */
++ /* end of idle interrupt region */
+ 1: jr ra
+ SYM_FUNC_END(__arch_cpu_idle)
+
+@@ -35,11 +38,10 @@ SYM_CODE_START(handle_vint)
+ UNWIND_HINT_UNDEFINED
+ BACKUP_T0T1
+ SAVE_ALL
+- la_abs t1, __arch_cpu_idle
++ la_abs t1, 1b
+ LONG_L t0, sp, PT_ERA
+- /* 32 byte rollback region */
+- ori t0, t0, 0x1f
+- xori t0, t0, 0x1f
++ /* 3 instructions idle interrupt region */
++ ori t0, t0, 0b1100
+ bne t0, t1, 1f
+ LONG_S t0, sp, PT_ERA
+ 1: move a0, sp
+diff --git a/arch/loongarch/kernel/idle.c b/arch/loongarch/kernel/idle.c
+index 0b5dd2faeb90b8..54b247d8cdb695 100644
+--- a/arch/loongarch/kernel/idle.c
++++ b/arch/loongarch/kernel/idle.c
+@@ -11,7 +11,6 @@
+
+ void __cpuidle arch_cpu_idle(void)
+ {
+- raw_local_irq_enable();
+- __arch_cpu_idle(); /* idle instruction needs irq enabled */
++ __arch_cpu_idle();
+ raw_local_irq_disable();
+ }
+diff --git a/arch/loongarch/kernel/reset.c b/arch/loongarch/kernel/reset.c
+index 1ef8c63835351b..de8fa5a8a825cd 100644
+--- a/arch/loongarch/kernel/reset.c
++++ b/arch/loongarch/kernel/reset.c
+@@ -33,7 +33,7 @@ void machine_halt(void)
+ console_flush_on_panic(CONSOLE_FLUSH_PENDING);
+
+ while (true) {
+- __arch_cpu_idle();
++ __asm__ __volatile__("idle 0" : : : "memory");
+ }
+ }
+
+@@ -53,7 +53,7 @@ void machine_power_off(void)
+ #endif
+
+ while (true) {
+- __arch_cpu_idle();
++ __asm__ __volatile__("idle 0" : : : "memory");
+ }
+ }
+
+@@ -74,6 +74,6 @@ void machine_restart(char *command)
+ acpi_reboot();
+
+ while (true) {
+- __arch_cpu_idle();
++ __asm__ __volatile__("idle 0" : : : "memory");
+ }
+ }
+diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
+index 27e9b94c0a0b6e..7e8f5d6829ef0c 100644
+--- a/arch/loongarch/kvm/main.c
++++ b/arch/loongarch/kvm/main.c
+@@ -283,9 +283,9 @@ int kvm_arch_enable_virtualization_cpu(void)
+ * TOE=0: Trap on Exception.
+ * TIT=0: Trap on Timer.
+ */
+- if (env & CSR_GCFG_GCIP_ALL)
++ if (env & CSR_GCFG_GCIP_SECURE)
+ gcfg |= CSR_GCFG_GCI_SECURE;
+- if (env & CSR_GCFG_MATC_ROOT)
++ if (env & CSR_GCFG_MATP_ROOT)
+ gcfg |= CSR_GCFG_MATC_ROOT;
+
+ write_csr_gcfg(gcfg);
+diff --git a/arch/loongarch/lib/csum.c b/arch/loongarch/lib/csum.c
+index a5e84b403c3b34..df309ae4045dee 100644
+--- a/arch/loongarch/lib/csum.c
++++ b/arch/loongarch/lib/csum.c
+@@ -25,7 +25,7 @@ unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len)
+ const u64 *ptr;
+ u64 data, sum64 = 0;
+
+- if (unlikely(len == 0))
++ if (unlikely(len <= 0))
+ return 0;
+
+ offset = (unsigned long)buff & 7;
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 56a786ca7354b9..c3854682934557 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -331,6 +331,17 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ return rc;
+ }
+
++static bool zpci_bus_is_isolated_vf(struct zpci_bus *zbus, struct zpci_dev *zdev)
++{
++ struct pci_dev *pdev;
++
++ pdev = zpci_iov_find_parent_pf(zbus, zdev);
++ if (!pdev)
++ return true;
++ pci_dev_put(pdev);
++ return false;
++}
++
+ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+ {
+ bool topo_is_tid = zdev->tid_avail;
+@@ -345,6 +356,15 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+
+ topo = topo_is_tid ? zdev->tid : zdev->pchid;
+ zbus = zpci_bus_get(topo, topo_is_tid);
++ /*
++ * An isolated VF gets its own domain/bus even if there exists
++ * a matching domain/bus already
++ */
++ if (zbus && zpci_bus_is_isolated_vf(zbus, zdev)) {
++ zpci_bus_put(zbus);
++ zbus = NULL;
++ }
++
+ if (!zbus) {
+ zbus = zpci_bus_alloc(topo, topo_is_tid);
+ if (!zbus)
+diff --git a/arch/s390/pci/pci_iov.c b/arch/s390/pci/pci_iov.c
+index ead062bf2b41cc..191e56a623f62c 100644
+--- a/arch/s390/pci/pci_iov.c
++++ b/arch/s390/pci/pci_iov.c
+@@ -60,18 +60,35 @@ static int zpci_iov_link_virtfn(struct pci_dev *pdev, struct pci_dev *virtfn, in
+ return 0;
+ }
+
+-int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn)
++/**
++ * zpci_iov_find_parent_pf - Find the parent PF, if any, of the given function
++ * @zbus: The bus that the PCI function is on, or would be added on
++ * @zdev: The PCI function
++ *
++ * Finds the parent PF, if it exists and is configured, of the given PCI function
++ * and increments its refcount. The PF is searched for on the provided bus so the
++ * caller has to ensure that this is the correct bus to search. This function may
++ * be used before adding the PCI function to a zbus.
++ *
++ * Return: Pointer to the struct pci_dev of the parent PF or NULL if it is not
++ * found. If the function is not a VF or has no RequesterID information,
++ * NULL is returned as well.
++ */
++struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ {
+- int i, cand_devfn;
+- struct zpci_dev *zdev;
++ int i, vfid, devfn, cand_devfn;
+ struct pci_dev *pdev;
+- int vfid = vfn - 1; /* Linux' vfid's start at 0 vfn at 1*/
+- int rc = 0;
+
+ if (!zbus->multifunction)
+- return 0;
+-
+- /* If the parent PF for the given VF is also configured in the
++ return NULL;
++ /* Non-VFs and VFs without RID available don't have a parent */
++ if (!zdev->vfn || !zdev->rid_available)
++ return NULL;
++ /* Linux vfid starts at 0 vfn at 1 */
++ vfid = zdev->vfn - 1;
++ devfn = zdev->rid & ZPCI_RID_MASK_DEVFN;
++ /*
++ * If the parent PF for the given VF is also configured in the
+ * instance, it must be on the same zbus.
+ * We can then identify the parent PF by checking what
+ * devfn the VF would have if it belonged to that PF using the PF's
+@@ -85,15 +102,26 @@ int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn
+ if (!pdev)
+ continue;
+ cand_devfn = pci_iov_virtfn_devfn(pdev, vfid);
+- if (cand_devfn == virtfn->devfn) {
+- rc = zpci_iov_link_virtfn(pdev, virtfn, vfid);
+- /* balance pci_get_slot() */
+- pci_dev_put(pdev);
+- break;
+- }
++ if (cand_devfn == devfn)
++ return pdev;
+ /* balance pci_get_slot() */
+ pci_dev_put(pdev);
+ }
+ }
++ return NULL;
++}
++
++int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn)
++{
++ struct zpci_dev *zdev = to_zpci(virtfn);
++ struct pci_dev *pdev_pf;
++ int rc = 0;
++
++ pdev_pf = zpci_iov_find_parent_pf(zbus, zdev);
++ if (pdev_pf) {
++ /* Linux' vfids start at 0 while zdev->vfn starts at 1 */
++ rc = zpci_iov_link_virtfn(pdev_pf, virtfn, zdev->vfn - 1);
++ pci_dev_put(pdev_pf);
++ }
+ return rc;
+ }
+diff --git a/arch/s390/pci/pci_iov.h b/arch/s390/pci/pci_iov.h
+index b2c828003bad0a..05df728f980ca4 100644
+--- a/arch/s390/pci/pci_iov.h
++++ b/arch/s390/pci/pci_iov.h
+@@ -17,6 +17,8 @@ void zpci_iov_map_resources(struct pci_dev *pdev);
+
+ int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn);
+
++struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev);
++
+ #else /* CONFIG_PCI_IOV */
+ static inline void zpci_iov_remove_virtfn(struct pci_dev *pdev, int vfn) {}
+
+@@ -26,5 +28,10 @@ static inline int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *v
+ {
+ return 0;
+ }
++
++static inline struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev)
++{
++ return NULL;
++}
+ #endif /* CONFIG_PCI_IOV */
+ #endif /* __S390_PCI_IOV_h */
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 171be04eca1f5d..1b0c2397d65753 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2582,7 +2582,8 @@ config MITIGATION_IBPB_ENTRY
+ depends on CPU_SUP_AMD && X86_64
+ default y
+ help
+- Compile the kernel with support for the retbleed=ibpb mitigation.
++ Compile the kernel with support for the retbleed=ibpb and
++ spec_rstack_overflow={ibpb,ibpb-vmexit} mitigations.
+
+ config MITIGATION_IBRS_ENTRY
+ bool "Enable IBRS on kernel entry"
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index f558be868a50b6..f5bf400f6a2833 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4865,20 +4865,22 @@ static inline bool intel_pmu_broken_perf_cap(void)
+
+ static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
+ {
+- unsigned int sub_bitmaps, eax, ebx, ecx, edx;
++ unsigned int cntr, fixed_cntr, ecx, edx;
++ union cpuid35_eax eax;
++ union cpuid35_ebx ebx;
+
+- cpuid(ARCH_PERFMON_EXT_LEAF, &sub_bitmaps, &ebx, &ecx, &edx);
++ cpuid(ARCH_PERFMON_EXT_LEAF, &eax.full, &ebx.full, &ecx, &edx);
+
+- if (ebx & ARCH_PERFMON_EXT_UMASK2)
++ if (ebx.split.umask2)
+ pmu->config_mask |= ARCH_PERFMON_EVENTSEL_UMASK2;
+- if (ebx & ARCH_PERFMON_EXT_EQ)
++ if (ebx.split.eq)
+ pmu->config_mask |= ARCH_PERFMON_EVENTSEL_EQ;
+
+- if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) {
++ if (eax.split.cntr_subleaf) {
+ cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF,
+- &eax, &ebx, &ecx, &edx);
+- pmu->cntr_mask64 = eax;
+- pmu->fixed_cntr_mask64 = ebx;
++ &cntr, &fixed_cntr, &ecx, &edx);
++ pmu->cntr_mask64 = cntr;
++ pmu->fixed_cntr_mask64 = fixed_cntr;
+ }
+
+ if (!intel_pmu_broken_perf_cap()) {
+@@ -4901,11 +4903,6 @@ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
+ else
+ pmu->intel_ctrl &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
+
+- if (pmu->intel_cap.pebs_output_pt_available)
+- pmu->pmu.capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
+- else
+- pmu->pmu.capabilities &= ~PERF_PMU_CAP_AUX_OUTPUT;
+-
+ intel_pmu_check_event_constraints(pmu->event_constraints,
+ pmu->cntr_mask64,
+ pmu->fixed_cntr_mask64,
+@@ -4974,9 +4971,6 @@ static bool init_hybrid_pmu(int cpu)
+
+ pr_info("%s PMU driver: ", pmu->name);
+
+- if (pmu->intel_cap.pebs_output_pt_available)
+- pr_cont("PEBS-via-PT ");
+-
+ pr_cont("\n");
+
+ x86_pmu_show_pmu_cap(&pmu->pmu);
+@@ -4999,8 +4993,11 @@ static void intel_pmu_cpu_starting(int cpu)
+
+ init_debug_store_on_cpu(cpu);
+ /*
+- * Deal with CPUs that don't clear their LBRs on power-up.
++ * Deal with CPUs that don't clear their LBRs on power-up, and that may
++ * even boot with LBRs enabled.
+ */
++ if (!static_cpu_has(X86_FEATURE_ARCH_LBR) && x86_pmu.lbr_nr)
++ msr_clear_bit(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR_BIT);
+ intel_pmu_lbr_reset();
+
+ cpuc->lbr_sel = NULL;
+@@ -6284,11 +6281,9 @@ static __always_inline int intel_pmu_init_hybrid(enum hybrid_pmu_type pmus)
+ pmu->intel_cap.capabilities = x86_pmu.intel_cap.capabilities;
+ if (pmu->pmu_type & hybrid_small) {
+ pmu->intel_cap.perf_metrics = 0;
+- pmu->intel_cap.pebs_output_pt_available = 1;
+ pmu->mid_ack = true;
+ } else if (pmu->pmu_type & hybrid_big) {
+ pmu->intel_cap.perf_metrics = 1;
+- pmu->intel_cap.pebs_output_pt_available = 0;
+ pmu->late_ack = true;
+ }
+ }
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 19a9fd974e3e1d..b6303b0224531b 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2523,7 +2523,15 @@ void __init intel_ds_init(void)
+ }
+ pr_cont("PEBS fmt4%c%s, ", pebs_type, pebs_qual);
+
+- if (!is_hybrid() && x86_pmu.intel_cap.pebs_output_pt_available) {
++ /*
++ * The PEBS-via-PT is not supported on hybrid platforms,
++ * because not all CPUs of a hybrid machine support it.
++ * The global x86_pmu.intel_cap, which only contains the
++ * common capabilities, is used to check the availability
++ * of the feature. The per-PMU pebs_output_pt_available
++ * in a hybrid machine should be ignored.
++ */
++ if (x86_pmu.intel_cap.pebs_output_pt_available) {
+ pr_cont("PEBS-via-PT, ");
+ x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
+ }
+diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
+index 861d080ed4c6ab..cfb22f8c451a7f 100644
+--- a/arch/x86/include/asm/kvm-x86-ops.h
++++ b/arch/x86/include/asm/kvm-x86-ops.h
+@@ -47,6 +47,7 @@ KVM_X86_OP(set_idt)
+ KVM_X86_OP(get_gdt)
+ KVM_X86_OP(set_gdt)
+ KVM_X86_OP(sync_dirty_debug_regs)
++KVM_X86_OP(set_dr6)
+ KVM_X86_OP(set_dr7)
+ KVM_X86_OP(cache_reg)
+ KVM_X86_OP(get_rflags)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 5da67e5c00401b..8499b9cb9c8263 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1674,6 +1674,7 @@ struct kvm_x86_ops {
+ void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
++ void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value);
+ void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
+ void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+ unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
+index ce4677b8b7356c..3b496cdcb74b3c 100644
+--- a/arch/x86/include/asm/mmu.h
++++ b/arch/x86/include/asm/mmu.h
+@@ -37,6 +37,8 @@ typedef struct {
+ */
+ atomic64_t tlb_gen;
+
++ unsigned long next_trim_cpumask;
++
+ #ifdef CONFIG_MODIFY_LDT_SYSCALL
+ struct rw_semaphore ldt_usr_sem;
+ struct ldt_struct *ldt;
+diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
+index 2886cb668d7fae..795fdd53bd0a6d 100644
+--- a/arch/x86/include/asm/mmu_context.h
++++ b/arch/x86/include/asm/mmu_context.h
+@@ -151,6 +151,7 @@ static inline int init_new_context(struct task_struct *tsk,
+
+ mm->context.ctx_id = atomic64_inc_return(&last_mm_ctx_id);
+ atomic64_set(&mm->context.tlb_gen, 0);
++ mm->context.next_trim_cpumask = jiffies + HZ;
+
+ #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+ if (cpu_feature_enabled(X86_FEATURE_OSPKE)) {
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 3ae84c3b8e6dba..61e991507353eb 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -395,7 +395,8 @@
+ #define MSR_IA32_PASID_VALID BIT_ULL(31)
+
+ /* DEBUGCTLMSR bits (others vary by model): */
+-#define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */
++#define DEBUGCTLMSR_LBR_BIT 0 /* last branch recording */
++#define DEBUGCTLMSR_LBR (1UL << DEBUGCTLMSR_LBR_BIT)
+ #define DEBUGCTLMSR_BTF_SHIFT 1
+ #define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */
+ #define DEBUGCTLMSR_BUS_LOCK_DETECT (1UL << 2)
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 91b73571412f16..7505bb5d260ab4 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -187,11 +187,33 @@ union cpuid10_edx {
+ * detection/enumeration details:
+ */
+ #define ARCH_PERFMON_EXT_LEAF 0x00000023
+-#define ARCH_PERFMON_EXT_UMASK2 0x1
+-#define ARCH_PERFMON_EXT_EQ 0x2
+-#define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT 0x1
+ #define ARCH_PERFMON_NUM_COUNTER_LEAF 0x1
+
++union cpuid35_eax {
++ struct {
++ unsigned int leaf0:1;
++ /* Counters Sub-Leaf */
++ unsigned int cntr_subleaf:1;
++ /* Auto Counter Reload Sub-Leaf */
++ unsigned int acr_subleaf:1;
++ /* Events Sub-Leaf */
++ unsigned int events_subleaf:1;
++ unsigned int reserved:28;
++ } split;
++ unsigned int full;
++};
++
++union cpuid35_ebx {
++ struct {
++ /* UnitMask2 Supported */
++ unsigned int umask2:1;
++ /* EQ-bit Supported */
++ unsigned int eq:1;
++ unsigned int reserved:30;
++ } split;
++ unsigned int full;
++};
++
+ /*
+ * Intel Architectural LBR CPUID detection/enumeration details:
+ */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 69e79fff41b800..02fc2aa06e9e0e 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -222,6 +222,7 @@ struct flush_tlb_info {
+ unsigned int initiating_cpu;
+ u8 stride_shift;
+ u8 freed_tables;
++ u8 trim_cpumask;
+ };
+
+ void flush_tlb_local(void);
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 47a01d4028f60e..5fba44a4f988c0 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1115,6 +1115,8 @@ static void __init retbleed_select_mitigation(void)
+
+ case RETBLEED_MITIGATION_IBPB:
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
++ mitigate_smt = true;
+
+ /*
+ * IBPB on entry already obviates the need for
+@@ -1124,9 +1126,6 @@ static void __init retbleed_select_mitigation(void)
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+
+- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+- mitigate_smt = true;
+-
+ /*
+ * There is no need for RSB filling: entry_ibpb() ensures
+ * all predictions, including the RSB, are invalidated,
+@@ -2643,6 +2642,7 @@ static void __init srso_select_mitigation(void)
+ if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
+ if (has_microcode) {
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ srso_mitigation = SRSO_MITIGATION_IBPB;
+
+ /*
+@@ -2652,6 +2652,13 @@ static void __init srso_select_mitigation(void)
+ */
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
++
++ /*
++ * There is no need for RSB filling: entry_ibpb() ensures
++ * all predictions, including the RSB, are invalidated,
++ * regardless of IBPB implementation.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ }
+ } else {
+ pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+@@ -2659,8 +2666,8 @@ static void __init srso_select_mitigation(void)
+ break;
+
+ case SRSO_CMD_IBPB_ON_VMEXIT:
+- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
+- if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
++ if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
++ if (has_microcode) {
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
+
+@@ -2672,8 +2679,8 @@ static void __init srso_select_mitigation(void)
+ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ }
+ } else {
+- pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+- }
++ pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
++ }
+ break;
+ default:
+ break;
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 9eed0c144dad51..9e51242ed125ee 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -175,7 +175,6 @@ EXPORT_SYMBOL_GPL(arch_static_call_transform);
+ noinstr void __static_call_update_early(void *tramp, void *func)
+ {
+ BUG_ON(system_state != SYSTEM_BOOTING);
+- BUG_ON(!early_boot_irqs_disabled);
+ BUG_ON(static_call_initialized);
+ __text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE);
+ sync_core();
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 4f0a94346d0094..44c88537448c74 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -2226,6 +2226,9 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
+ u32 vector;
+ bool all_cpus;
+
++ if (!lapic_in_kernel(vcpu))
++ return HV_STATUS_INVALID_HYPERCALL_INPUT;
++
+ if (hc->code == HVCALL_SEND_IPI) {
+ if (!hc->fast) {
+ if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,
+@@ -2852,7 +2855,8 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
+ ent->eax |= HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED;
+ ent->eax |= HV_X64_APIC_ACCESS_RECOMMENDED;
+ ent->eax |= HV_X64_RELAXED_TIMING_RECOMMENDED;
+- ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;
++ if (!vcpu || lapic_in_kernel(vcpu))
++ ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;
+ ent->eax |= HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED;
+ if (evmcs_ver)
+ ent->eax |= HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 9dd3796d075a56..19c96278ba755d 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5591,7 +5591,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+ union kvm_mmu_page_role root_role;
+
+ /* NPT requires CR0.PG=1. */
+- WARN_ON_ONCE(cpu_role.base.direct);
++ WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
+
+ root_role = cpu_role.base;
+ root_role.level = kvm_mmu_get_tdp_level(vcpu);
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index cf84103ce38b97..2dcb9c870d5a22 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -646,6 +646,11 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
+ u32 pause_count12;
+ u32 pause_thresh12;
+
++ nested_svm_transition_tlb_flush(vcpu);
++
++ /* Enter Guest-Mode */
++ enter_guest_mode(vcpu);
++
+ /*
+ * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
+ * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes.
+@@ -762,11 +767,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
+ }
+ }
+
+- nested_svm_transition_tlb_flush(vcpu);
+-
+- /* Enter Guest-Mode */
+- enter_guest_mode(vcpu);
+-
+ /*
+ * Merge guest and host intercepts - must be called with vcpu in
+ * guest-mode to take effect.
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 4543dd6bcab2cb..a7cb7c82b38e39 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1993,11 +1993,11 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
+ svm->asid = sd->next_asid++;
+ }
+
+-static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value)
++static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
+ {
+- struct vmcb *vmcb = svm->vmcb;
++ struct vmcb *vmcb = to_svm(vcpu)->vmcb;
+
+- if (svm->vcpu.arch.guest_state_protected)
++ if (vcpu->arch.guest_state_protected)
+ return;
+
+ if (unlikely(value != vmcb->save.dr6)) {
+@@ -4234,10 +4234,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ * Run with all-zero DR6 unless needed, so that we can get the exact cause
+ * of a #DB.
+ */
+- if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+- svm_set_dr6(svm, vcpu->arch.dr6);
+- else
+- svm_set_dr6(svm, DR6_ACTIVE_LOW);
++ if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
++ svm_set_dr6(vcpu, DR6_ACTIVE_LOW);
+
+ clgi();
+ kvm_load_guest_xsave_state(vcpu);
+@@ -5033,6 +5031,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ .set_idt = svm_set_idt,
+ .get_gdt = svm_get_gdt,
+ .set_gdt = svm_set_gdt,
++ .set_dr6 = svm_set_dr6,
+ .set_dr7 = svm_set_dr7,
+ .sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
+ .cache_reg = svm_cache_reg,
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index 7668e2fb8043ef..47476fcc179a52 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -60,6 +60,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ .set_idt = vmx_set_idt,
+ .get_gdt = vmx_get_gdt,
+ .set_gdt = vmx_set_gdt,
++ .set_dr6 = vmx_set_dr6,
+ .set_dr7 = vmx_set_dr7,
+ .sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
+ .cache_reg = vmx_cache_reg,
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 968ddf71405446..f06d443ec3c68d 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -5631,6 +5631,12 @@ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+ set_debugreg(DR6_RESERVED, 6);
+ }
+
++void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
++{
++ lockdep_assert_irqs_disabled();
++ set_debugreg(vcpu->arch.dr6, 6);
++}
++
+ void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+ {
+ vmcs_writel(GUEST_DR7, val);
+@@ -7392,10 +7398,6 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+ vmx->loaded_vmcs->host_state.cr4 = cr4;
+ }
+
+- /* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
+- if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+- set_debugreg(vcpu->arch.dr6, 6);
+-
+ /* When single-stepping over STI and MOV SS, we must clear the
+ * corresponding interruptibility bits in the guest state. Otherwise
+ * vmentry fails as it then expects bit 14 (BS) in pending debug
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index 48dc76bf0ec03a..4aba200f435d42 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -74,6 +74,7 @@ void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
++void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val);
+ void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val);
+ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu);
+ void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d760b19d1e513e..0846e3af5f6c5a 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10968,6 +10968,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ set_debugreg(vcpu->arch.eff_db[1], 1);
+ set_debugreg(vcpu->arch.eff_db[2], 2);
+ set_debugreg(vcpu->arch.eff_db[3], 3);
++ /* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
++ if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
++ kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);
+ } else if (unlikely(hw_breakpoint_active())) {
+ set_debugreg(0, 7);
+ }
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index b0678d59ebdb4a..00ffa74d0dd0bf 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -893,9 +893,36 @@ static void flush_tlb_func(void *info)
+ nr_invalidate);
+ }
+
+-static bool tlb_is_not_lazy(int cpu, void *data)
++static bool should_flush_tlb(int cpu, void *data)
+ {
+- return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
++ struct flush_tlb_info *info = data;
++
++ /* Lazy TLB will get flushed at the next context switch. */
++ if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
++ return false;
++
++ /* No mm means kernel memory flush. */
++ if (!info->mm)
++ return true;
++
++ /* The target mm is loaded, and the CPU is not lazy. */
++ if (per_cpu(cpu_tlbstate.loaded_mm, cpu) == info->mm)
++ return true;
++
++ /* In cpumask, but not the loaded mm? Periodically remove by flushing. */
++ if (info->trim_cpumask)
++ return true;
++
++ return false;
++}
++
++static bool should_trim_cpumask(struct mm_struct *mm)
++{
++ if (time_after(jiffies, READ_ONCE(mm->context.next_trim_cpumask))) {
++ WRITE_ONCE(mm->context.next_trim_cpumask, jiffies + HZ);
++ return true;
++ }
++ return false;
+ }
+
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
+@@ -929,7 +956,7 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
+ if (info->freed_tables)
+ on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
+ else
+- on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
++ on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
+ (void *)info, 1, cpumask);
+ }
+
+@@ -980,6 +1007,7 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
+ info->freed_tables = freed_tables;
+ info->new_tlb_gen = new_tlb_gen;
+ info->initiating_cpu = smp_processor_id();
++ info->trim_cpumask = 0;
+
+ return info;
+ }
+@@ -1022,6 +1050,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+ * flush_tlb_func_local() directly in this case.
+ */
+ if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
++ info->trim_cpumask = should_trim_cpumask(mm);
+ flush_tlb_multi(mm_cpumask(mm), info);
+ } else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
+ lockdep_assert_irqs_enabled();
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 55a4996d0c04f1..d078de2c952b37 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -111,6 +111,51 @@ static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
+ */
+ static DEFINE_SPINLOCK(xen_reservation_lock);
+
++/* Protected by xen_reservation_lock. */
++#define MIN_CONTIG_ORDER 9 /* 2MB */
++static unsigned int discontig_frames_order = MIN_CONTIG_ORDER;
++static unsigned long discontig_frames_early[1UL << MIN_CONTIG_ORDER] __initdata;
++static unsigned long *discontig_frames __refdata = discontig_frames_early;
++static bool discontig_frames_dyn;
++
++static int alloc_discontig_frames(unsigned int order)
++{
++ unsigned long *new_array, *old_array;
++ unsigned int old_order;
++ unsigned long flags;
++
++ BUG_ON(order < MIN_CONTIG_ORDER);
++ BUILD_BUG_ON(sizeof(discontig_frames_early) != PAGE_SIZE);
++
++ new_array = (unsigned long *)__get_free_pages(GFP_KERNEL,
++ order - MIN_CONTIG_ORDER);
++ if (!new_array)
++ return -ENOMEM;
++
++ spin_lock_irqsave(&xen_reservation_lock, flags);
++
++ old_order = discontig_frames_order;
++
++ if (order > discontig_frames_order || !discontig_frames_dyn) {
++ if (!discontig_frames_dyn)
++ old_array = NULL;
++ else
++ old_array = discontig_frames;
++
++ discontig_frames = new_array;
++ discontig_frames_order = order;
++ discontig_frames_dyn = true;
++ } else {
++ old_array = new_array;
++ }
++
++ spin_unlock_irqrestore(&xen_reservation_lock, flags);
++
++ free_pages((unsigned long)old_array, old_order - MIN_CONTIG_ORDER);
++
++ return 0;
++}
++
+ /*
+ * Note about cr3 (pagetable base) values:
+ *
+@@ -781,6 +826,7 @@ void xen_mm_pin_all(void)
+ {
+ struct page *page;
+
++ spin_lock(&init_mm.page_table_lock);
+ spin_lock(&pgd_lock);
+
+ list_for_each_entry(page, &pgd_list, lru) {
+@@ -791,6 +837,7 @@ void xen_mm_pin_all(void)
+ }
+
+ spin_unlock(&pgd_lock);
++ spin_unlock(&init_mm.page_table_lock);
+ }
+
+ static void __init xen_mark_pinned(struct mm_struct *mm, struct page *page,
+@@ -812,6 +859,9 @@ static void __init xen_after_bootmem(void)
+ SetPagePinned(virt_to_page(level3_user_vsyscall));
+ #endif
+ xen_pgd_walk(&init_mm, xen_mark_pinned, FIXADDR_TOP);
++
++ if (alloc_discontig_frames(MIN_CONTIG_ORDER))
++ BUG();
+ }
+
+ static void xen_unpin_page(struct mm_struct *mm, struct page *page,
+@@ -887,6 +937,7 @@ void xen_mm_unpin_all(void)
+ {
+ struct page *page;
+
++ spin_lock(&init_mm.page_table_lock);
+ spin_lock(&pgd_lock);
+
+ list_for_each_entry(page, &pgd_list, lru) {
+@@ -898,6 +949,7 @@ void xen_mm_unpin_all(void)
+ }
+
+ spin_unlock(&pgd_lock);
++ spin_unlock(&init_mm.page_table_lock);
+ }
+
+ static void xen_enter_mmap(struct mm_struct *mm)
+@@ -2199,10 +2251,6 @@ void __init xen_init_mmu_ops(void)
+ memset(dummy_mapping, 0xff, PAGE_SIZE);
+ }
+
+-/* Protected by xen_reservation_lock. */
+-#define MAX_CONTIG_ORDER 9 /* 2MB */
+-static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];
+-
+ #define VOID_PTE (mfn_pte(0, __pgprot(0)))
+ static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
+ unsigned long *in_frames,
+@@ -2319,18 +2367,25 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+ unsigned int address_bits,
+ dma_addr_t *dma_handle)
+ {
+- unsigned long *in_frames = discontig_frames, out_frame;
++ unsigned long *in_frames, out_frame;
+ unsigned long flags;
+ int success;
+ unsigned long vstart = (unsigned long)phys_to_virt(pstart);
+
+- if (unlikely(order > MAX_CONTIG_ORDER))
+- return -ENOMEM;
++ if (unlikely(order > discontig_frames_order)) {
++ if (!discontig_frames_dyn)
++ return -ENOMEM;
++
++ if (alloc_discontig_frames(order))
++ return -ENOMEM;
++ }
+
+ memset((void *) vstart, 0, PAGE_SIZE << order);
+
+ spin_lock_irqsave(&xen_reservation_lock, flags);
+
++ in_frames = discontig_frames;
++
+ /* 1. Zap current PTEs, remembering MFNs. */
+ xen_zap_pfn_range(vstart, order, in_frames, NULL);
+
+@@ -2354,12 +2409,12 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+
+ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
+ {
+- unsigned long *out_frames = discontig_frames, in_frame;
++ unsigned long *out_frames, in_frame;
+ unsigned long flags;
+ int success;
+ unsigned long vstart;
+
+- if (unlikely(order > MAX_CONTIG_ORDER))
++ if (unlikely(order > discontig_frames_order))
+ return;
+
+ vstart = (unsigned long)phys_to_virt(pstart);
+@@ -2367,6 +2422,8 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
+
+ spin_lock_irqsave(&xen_reservation_lock, flags);
+
++ out_frames = discontig_frames;
++
+ /* 1. Find start MFN of contiguous extent. */
+ in_frame = virt_to_mfn((void *)vstart);
+
+diff --git a/block/partitions/mac.c b/block/partitions/mac.c
+index c80183156d6802..b02530d9862970 100644
+--- a/block/partitions/mac.c
++++ b/block/partitions/mac.c
+@@ -53,13 +53,25 @@ int mac_partition(struct parsed_partitions *state)
+ }
+ secsize = be16_to_cpu(md->block_size);
+ put_dev_sector(sect);
++
++ /*
++ * If the "block size" is not a power of 2, things get weird - we might
++ * end up with a partition straddling a sector boundary, so we wouldn't
++ * be able to read a partition entry with read_part_sector().
++ * Real block sizes are probably (?) powers of two, so just require
++ * that.
++ */
++ if (!is_power_of_2(secsize))
++ return -1;
+ datasize = round_down(secsize, 512);
+ data = read_part_sector(state, datasize / 512, &sect);
+ if (!data)
+ return -1;
+ partoffset = secsize % 512;
+- if (partoffset + sizeof(*part) > datasize)
++ if (partoffset + sizeof(*part) > datasize) {
++ put_dev_sector(sect);
+ return -1;
++ }
+ part = (struct mac_partition *) (data + partoffset);
+ if (be16_to_cpu(part->signature) != MAC_PARTITION_MAGIC) {
+ put_dev_sector(sect);
+@@ -112,8 +124,8 @@ int mac_partition(struct parsed_partitions *state)
+ int i, l;
+
+ goodness++;
+- l = strlen(part->name);
+- if (strcmp(part->name, "/") == 0)
++ l = strnlen(part->name, sizeof(part->name));
++ if (strncmp(part->name, "/", sizeof(part->name)) == 0)
+ goodness++;
+ for (i = 0; i <= l - 4; ++i) {
+ if (strncasecmp(part->name + i, "root",
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index cb45ef5240dab6..068c1612660bc0 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -407,6 +407,19 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+ },
++ {
++ /* Vexia Edu Atla 10 tablet 5V version */
++ .matches = {
++ /* Having all 3 of these not set is somewhat unique */
++ DMI_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "05/14/2015"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++ },
+ {
+ /* Vexia Edu Atla 10 tablet 9V version */
+ .matches = {
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index 6981e5f974e9a4..ff7d0b14a6468b 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -909,6 +909,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ kfree(d->wake_buf);
+ kfree(d->mask_buf_def);
+ kfree(d->mask_buf);
++ kfree(d->main_status_buf);
+ kfree(d->status_buf);
+ kfree(d->status_reg_buf);
+ if (d->config_buf) {
+@@ -984,6 +985,7 @@ void regmap_del_irq_chip(int irq, struct regmap_irq_chip_data *d)
+ kfree(d->wake_buf);
+ kfree(d->mask_buf_def);
+ kfree(d->mask_buf);
++ kfree(d->main_status_buf);
+ kfree(d->status_reg_buf);
+ kfree(d->status_buf);
+ if (d->config_buf) {
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 8bd663f4bac1b7..53f6b4f76bccdd 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1312,6 +1312,10 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ if (opcode == 0xfc01)
+ btintel_pcie_inject_cmd_complete(hdev, opcode);
+ }
++ /* Firmware raises alive interrupt on HCI_OP_RESET */
++ if (opcode == HCI_OP_RESET)
++ data->gp0_received = false;
++
+ hdev->stat.cmd_tx++;
+ break;
+ case HCI_ACLDATA_PKT:
+@@ -1349,7 +1353,6 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
+ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+ if (opcode == HCI_OP_RESET) {
+- data->gp0_received = false;
+ ret = wait_event_timeout(data->gp0_wait_q,
+ data->gp0_received,
+ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 91d3c3b1c2d3bf..9db5354fdb0271 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -696,12 +696,12 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
+ pr_err("Boost mode is not supported by this processor or SBIOS\n");
+ return -EOPNOTSUPP;
+ }
+- mutex_lock(&amd_pstate_driver_lock);
++ guard(mutex)(&amd_pstate_driver_lock);
++
+ ret = amd_pstate_cpu_boost_update(policy, state);
+ WRITE_ONCE(cpudata->boost_state, !ret ? state : false);
+ policy->boost_enabled = !ret ? state : false;
+ refresh_frequency_limits(policy);
+- mutex_unlock(&amd_pstate_driver_lock);
+
+ return ret;
+ }
+@@ -778,24 +778,28 @@ static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
+
+ static void amd_pstate_update_limits(unsigned int cpu)
+ {
+- struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
++ struct cpufreq_policy *policy = NULL;
+ struct amd_cpudata *cpudata;
+ u32 prev_high = 0, cur_high = 0;
+ int ret;
+ bool highest_perf_changed = false;
+
++ if (!amd_pstate_prefcore)
++ return;
++
++ policy = cpufreq_cpu_get(cpu);
+ if (!policy)
+ return;
+
+ cpudata = policy->driver_data;
+
+- if (!amd_pstate_prefcore)
+- return;
++ guard(mutex)(&amd_pstate_driver_lock);
+
+- mutex_lock(&amd_pstate_driver_lock);
+ ret = amd_get_highest_perf(cpu, &cur_high);
+- if (ret)
+- goto free_cpufreq_put;
++ if (ret) {
++ cpufreq_cpu_put(policy);
++ return;
++ }
+
+ prev_high = READ_ONCE(cpudata->prefcore_ranking);
+ highest_perf_changed = (prev_high != cur_high);
+@@ -805,14 +809,11 @@ static void amd_pstate_update_limits(unsigned int cpu)
+ if (cur_high < CPPC_MAX_PERF)
+ sched_set_itmt_core_prio((int)cur_high, cpu);
+ }
+-
+-free_cpufreq_put:
+ cpufreq_cpu_put(policy);
+
+ if (!highest_perf_changed)
+ cpufreq_update_policy(cpu);
+
+- mutex_unlock(&amd_pstate_driver_lock);
+ }
+
+ /*
+@@ -1145,11 +1146,11 @@ static ssize_t store_energy_performance_preference(
+ if (ret < 0)
+ return -EINVAL;
+
+- mutex_lock(&amd_pstate_limits_lock);
++ guard(mutex)(&amd_pstate_limits_lock);
++
+ ret = amd_pstate_set_energy_pref_index(cpudata, ret);
+- mutex_unlock(&amd_pstate_limits_lock);
+
+- return ret ?: count;
++ return ret ? ret : count;
+ }
+
+ static ssize_t show_energy_performance_preference(
+@@ -1297,13 +1298,10 @@ EXPORT_SYMBOL_GPL(amd_pstate_update_status);
+ static ssize_t status_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+- ssize_t ret;
+
+- mutex_lock(&amd_pstate_driver_lock);
+- ret = amd_pstate_show_status(buf);
+- mutex_unlock(&amd_pstate_driver_lock);
++ guard(mutex)(&amd_pstate_driver_lock);
+
+- return ret;
++ return amd_pstate_show_status(buf);
+ }
+
+ static ssize_t status_store(struct device *a, struct device_attribute *b,
+@@ -1312,9 +1310,8 @@ static ssize_t status_store(struct device *a, struct device_attribute *b,
+ char *p = memchr(buf, '\n', count);
+ int ret;
+
+- mutex_lock(&amd_pstate_driver_lock);
++ guard(mutex)(&amd_pstate_driver_lock);
+ ret = amd_pstate_update_status(buf, p ? p - buf : count);
+- mutex_unlock(&amd_pstate_driver_lock);
+
+ return ret < 0 ? ret : count;
+ }
+@@ -1579,24 +1576,17 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+
+ static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata)
+ {
+- struct cppc_perf_ctrls perf_ctrls;
+- u64 value, max_perf;
++ u64 max_perf;
+ int ret;
+
+ ret = amd_pstate_enable(true);
+ if (ret)
+ pr_err("failed to enable amd pstate during resume, return %d\n", ret);
+
+- value = READ_ONCE(cpudata->cppc_req_cached);
+ max_perf = READ_ONCE(cpudata->highest_perf);
+
+- if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.max_perf = max_perf;
+- perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(cpudata->epp_cached);
+- cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- }
++ amd_pstate_update_perf(cpudata, 0, 0, max_perf, false);
++ amd_pstate_set_epp(cpudata, cpudata->epp_cached);
+ }
+
+ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+@@ -1605,54 +1595,26 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+
+ pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
+
+- if (cppc_state == AMD_PSTATE_ACTIVE) {
+- amd_pstate_epp_reenable(cpudata);
+- cpudata->suspended = false;
+- }
++ amd_pstate_epp_reenable(cpudata);
++ cpudata->suspended = false;
+
+ return 0;
+ }
+
+-static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
+-{
+- struct amd_cpudata *cpudata = policy->driver_data;
+- struct cppc_perf_ctrls perf_ctrls;
+- int min_perf;
+- u64 value;
+-
+- min_perf = READ_ONCE(cpudata->lowest_perf);
+- value = READ_ONCE(cpudata->cppc_req_cached);
+-
+- mutex_lock(&amd_pstate_limits_lock);
+- if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
+- cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
+-
+- /* Set max perf same as min perf */
+- value &= ~AMD_CPPC_MAX_PERF(~0L);
+- value |= AMD_CPPC_MAX_PERF(min_perf);
+- value &= ~AMD_CPPC_MIN_PERF(~0L);
+- value |= AMD_CPPC_MIN_PERF(min_perf);
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.desired_perf = 0;
+- perf_ctrls.max_perf = min_perf;
+- perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(HWP_EPP_BALANCE_POWERSAVE);
+- cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- }
+- mutex_unlock(&amd_pstate_limits_lock);
+-}
+-
+ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+-
+- pr_debug("AMD CPU Core %d going offline\n", cpudata->cpu);
++ int min_perf;
+
+ if (cpudata->suspended)
+ return 0;
+
+- if (cppc_state == AMD_PSTATE_ACTIVE)
+- amd_pstate_epp_offline(policy);
++ min_perf = READ_ONCE(cpudata->lowest_perf);
++
++ guard(mutex)(&amd_pstate_limits_lock);
++
++ amd_pstate_update_perf(cpudata, min_perf, 0, min_perf, false);
++ amd_pstate_set_epp(cpudata, AMD_CPPC_EPP_BALANCE_POWERSAVE);
+
+ return 0;
+ }
+@@ -1689,13 +1651,11 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
+ struct amd_cpudata *cpudata = policy->driver_data;
+
+ if (cpudata->suspended) {
+- mutex_lock(&amd_pstate_limits_lock);
++ guard(mutex)(&amd_pstate_limits_lock);
+
+ /* enable amd pstate from suspend state*/
+ amd_pstate_epp_reenable(cpudata);
+
+- mutex_unlock(&amd_pstate_limits_lock);
+-
+ cpudata->suspended = false;
+ }
+
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 70490bf2697b16..acabc856fe8a58 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -922,13 +922,15 @@ char * __init efi_md_typeattr_format(char *buf, size_t size,
+ EFI_MEMORY_WB | EFI_MEMORY_UCE | EFI_MEMORY_RO |
+ EFI_MEMORY_WP | EFI_MEMORY_RP | EFI_MEMORY_XP |
+ EFI_MEMORY_NV | EFI_MEMORY_SP | EFI_MEMORY_CPU_CRYPTO |
+- EFI_MEMORY_RUNTIME | EFI_MEMORY_MORE_RELIABLE))
++ EFI_MEMORY_MORE_RELIABLE | EFI_MEMORY_HOT_PLUGGABLE |
++ EFI_MEMORY_RUNTIME))
+ snprintf(pos, size, "|attr=0x%016llx]",
+ (unsigned long long)attr);
+ else
+ snprintf(pos, size,
+- "|%3s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
++ "|%3s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
+ attr & EFI_MEMORY_RUNTIME ? "RUN" : "",
++ attr & EFI_MEMORY_HOT_PLUGGABLE ? "HP" : "",
+ attr & EFI_MEMORY_MORE_RELIABLE ? "MR" : "",
+ attr & EFI_MEMORY_CPU_CRYPTO ? "CC" : "",
+ attr & EFI_MEMORY_SP ? "SP" : "",
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index c41e7b2091cdd1..8ad3efb9b1ff16 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -25,6 +25,9 @@ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
+ if (md->type != EFI_CONVENTIONAL_MEMORY)
+ return 0;
+
++ if (md->attribute & EFI_MEMORY_HOT_PLUGGABLE)
++ return 0;
++
+ if (efi_soft_reserve_enabled() &&
+ (md->attribute & EFI_MEMORY_SP))
+ return 0;
+diff --git a/drivers/firmware/efi/libstub/relocate.c b/drivers/firmware/efi/libstub/relocate.c
+index d694bcfa1074e9..bf676dd127a143 100644
+--- a/drivers/firmware/efi/libstub/relocate.c
++++ b/drivers/firmware/efi/libstub/relocate.c
+@@ -53,6 +53,9 @@ efi_status_t efi_low_alloc_above(unsigned long size, unsigned long align,
+ if (desc->type != EFI_CONVENTIONAL_MEMORY)
+ continue;
+
++ if (desc->attribute & EFI_MEMORY_HOT_PLUGGABLE)
++ continue;
++
+ if (efi_soft_reserve_enabled() &&
+ (desc->attribute & EFI_MEMORY_SP))
+ continue;
+diff --git a/drivers/firmware/qcom/qcom_scm-smc.c b/drivers/firmware/qcom/qcom_scm-smc.c
+index 2b4c2826f57251..3f10b23ec941b5 100644
+--- a/drivers/firmware/qcom/qcom_scm-smc.c
++++ b/drivers/firmware/qcom/qcom_scm-smc.c
+@@ -173,6 +173,9 @@ int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ smc.args[i + SCM_SMC_FIRST_REG_IDX] = desc->args[i];
+
+ if (unlikely(arglen > SCM_SMC_N_REG_ARGS)) {
++ if (!mempool)
++ return -EINVAL;
++
+ args_virt = qcom_tzmem_alloc(mempool,
+ SCM_SMC_N_EXT_ARGS * sizeof(u64),
+ flag);
+diff --git a/drivers/gpio/gpio-bcm-kona.c b/drivers/gpio/gpio-bcm-kona.c
+index 5321ef98f4427d..64908f1a5e7f9b 100644
+--- a/drivers/gpio/gpio-bcm-kona.c
++++ b/drivers/gpio/gpio-bcm-kona.c
+@@ -69,6 +69,22 @@ struct bcm_kona_gpio {
+ struct bcm_kona_gpio_bank {
+ int id;
+ int irq;
++ /*
++ * Used to keep track of lock/unlock operations for each GPIO in the
++ * bank.
++ *
++ * All GPIOs are locked by default (see bcm_kona_gpio_reset), and the
++ * unlock count for all GPIOs is 0 by default. Each unlock increments
++ * the counter, and each lock decrements the counter.
++ *
++ * The lock function only locks the GPIO once its unlock counter is
++ * down to 0. This is necessary because the GPIO is unlocked in two
++ * places in this driver: once for requested GPIOs, and once for
++ * requested IRQs. Since it is possible for a GPIO to be requested
++ * as both a GPIO and an IRQ, we need to ensure that we don't lock it
++ * too early.
++ */
++ u8 gpio_unlock_count[GPIO_PER_BANK];
+ /* Used in the interrupt handler */
+ struct bcm_kona_gpio *kona_gpio;
+ };
+@@ -86,14 +102,24 @@ static void bcm_kona_gpio_lock_gpio(struct bcm_kona_gpio *kona_gpio,
+ u32 val;
+ unsigned long flags;
+ int bank_id = GPIO_BANK(gpio);
++ int bit = GPIO_BIT(gpio);
++ struct bcm_kona_gpio_bank *bank = &kona_gpio->banks[bank_id];
+
+- raw_spin_lock_irqsave(&kona_gpio->lock, flags);
++ if (bank->gpio_unlock_count[bit] == 0) {
++ dev_err(kona_gpio->gpio_chip.parent,
++ "Unbalanced locks for GPIO %u\n", gpio);
++ return;
++ }
+
+- val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
+- val |= BIT(gpio);
+- bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
++ if (--bank->gpio_unlock_count[bit] == 0) {
++ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+- raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
++ val |= BIT(bit);
++ bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
++
++ raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ }
+ }
+
+ static void bcm_kona_gpio_unlock_gpio(struct bcm_kona_gpio *kona_gpio,
+@@ -102,14 +128,20 @@ static void bcm_kona_gpio_unlock_gpio(struct bcm_kona_gpio *kona_gpio,
+ u32 val;
+ unsigned long flags;
+ int bank_id = GPIO_BANK(gpio);
++ int bit = GPIO_BIT(gpio);
++ struct bcm_kona_gpio_bank *bank = &kona_gpio->banks[bank_id];
+
+- raw_spin_lock_irqsave(&kona_gpio->lock, flags);
++ if (bank->gpio_unlock_count[bit] == 0) {
++ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+- val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
+- val &= ~BIT(gpio);
+- bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
++ val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
++ val &= ~BIT(bit);
++ bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
+
+- raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ }
++
++ ++bank->gpio_unlock_count[bit];
+ }
+
+ static int bcm_kona_gpio_get_dir(struct gpio_chip *chip, unsigned gpio)
+@@ -360,6 +392,7 @@ static void bcm_kona_gpio_irq_mask(struct irq_data *d)
+
+ kona_gpio = irq_data_get_irq_chip_data(d);
+ reg_base = kona_gpio->reg_base;
++
+ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+ val = readl(reg_base + GPIO_INT_MASK(bank_id));
+@@ -382,6 +415,7 @@ static void bcm_kona_gpio_irq_unmask(struct irq_data *d)
+
+ kona_gpio = irq_data_get_irq_chip_data(d);
+ reg_base = kona_gpio->reg_base;
++
+ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+ val = readl(reg_base + GPIO_INT_MSKCLR(bank_id));
+@@ -477,15 +511,26 @@ static void bcm_kona_gpio_irq_handler(struct irq_desc *desc)
+ static int bcm_kona_gpio_irq_reqres(struct irq_data *d)
+ {
+ struct bcm_kona_gpio *kona_gpio = irq_data_get_irq_chip_data(d);
++ unsigned int gpio = d->hwirq;
+
+- return gpiochip_reqres_irq(&kona_gpio->gpio_chip, d->hwirq);
++ /*
++ * We need to unlock the GPIO before any other operations are performed
++ * on the relevant GPIO configuration registers
++ */
++ bcm_kona_gpio_unlock_gpio(kona_gpio, gpio);
++
++ return gpiochip_reqres_irq(&kona_gpio->gpio_chip, gpio);
+ }
+
+ static void bcm_kona_gpio_irq_relres(struct irq_data *d)
+ {
+ struct bcm_kona_gpio *kona_gpio = irq_data_get_irq_chip_data(d);
++ unsigned int gpio = d->hwirq;
++
++ /* Once we no longer use it, lock the GPIO again */
++ bcm_kona_gpio_lock_gpio(kona_gpio, gpio);
+
+- gpiochip_relres_irq(&kona_gpio->gpio_chip, d->hwirq);
++ gpiochip_relres_irq(&kona_gpio->gpio_chip, gpio);
+ }
+
+ static struct irq_chip bcm_gpio_irq_chip = {
+@@ -614,7 +659,7 @@ static int bcm_kona_gpio_probe(struct platform_device *pdev)
+ bank->irq = platform_get_irq(pdev, i);
+ bank->kona_gpio = kona_gpio;
+ if (bank->irq < 0) {
+- dev_err(dev, "Couldn't get IRQ for bank %d", i);
++ dev_err(dev, "Couldn't get IRQ for bank %d\n", i);
+ ret = -ENOENT;
+ goto err_irq_domain;
+ }
+diff --git a/drivers/gpio/gpio-stmpe.c b/drivers/gpio/gpio-stmpe.c
+index 75a3633ceddbb8..222279a9d82b2d 100644
+--- a/drivers/gpio/gpio-stmpe.c
++++ b/drivers/gpio/gpio-stmpe.c
+@@ -191,7 +191,7 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ [REG_IE][CSB] = STMPE_IDX_IEGPIOR_CSB,
+ [REG_IE][MSB] = STMPE_IDX_IEGPIOR_MSB,
+ };
+- int i, j;
++ int ret, i, j;
+
+ /*
+ * STMPE1600: to be able to get IRQ from pins,
+@@ -199,8 +199,16 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ * GPSR or GPCR registers
+ */
+ if (stmpe->partnum == STMPE1600) {
+- stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);
+- stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);
++ ret = stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);
++ if (ret < 0) {
++ dev_err(stmpe->dev, "Failed to read GPMR_LSB: %d\n", ret);
++ goto err;
++ }
++ ret = stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);
++ if (ret < 0) {
++ dev_err(stmpe->dev, "Failed to read GPMR_CSB: %d\n", ret);
++ goto err;
++ }
+ }
+
+ for (i = 0; i < CACHE_NR_REGS; i++) {
+@@ -222,6 +230,7 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ }
+ }
+
++err:
+ mutex_unlock(&stmpe_gpio->irq_lock);
+ }
+
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 78ecd56123a3b6..148b4d1788a219 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1691,6 +1691,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ .ignore_wake = "PNP0C50:00@8",
+ },
+ },
++ {
++ /*
++ * Spurious wakeups from GPIO 11
++ * Found in BIOS 1.04
++ * https://gitlab.freedesktop.org/drm/amd/-/issues/3954
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "Acer Nitro V 14"),
++ },
++ .driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++ .ignore_interrupt = "AMDI0030:00@11",
++ },
++ },
+ {} /* Terminating entry */
+ };
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 44372f8647d51a..1e8f0bdb6ae3b4 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -905,13 +905,13 @@ int gpiochip_get_ngpios(struct gpio_chip *gc, struct device *dev)
+ }
+
+ if (gc->ngpio == 0) {
+- chip_err(gc, "tried to insert a GPIO chip with zero lines\n");
++ dev_err(dev, "tried to insert a GPIO chip with zero lines\n");
+ return -EINVAL;
+ }
+
+ if (gc->ngpio > FASTPATH_NGPIO)
+- chip_warn(gc, "line cnt %u is greater than fast path cnt %u\n",
+- gc->ngpio, FASTPATH_NGPIO);
++ dev_warn(dev, "line cnt %u is greater than fast path cnt %u\n",
++ gc->ngpio, FASTPATH_NGPIO);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 0b28b2cf1517d1..d70855d7c61c1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -3713,9 +3713,10 @@ int psp_init_cap_microcode(struct psp_context *psp, const char *chip_name)
+ if (err == -ENODEV) {
+ dev_warn(adev->dev, "cap microcode does not exist, skip\n");
+ err = 0;
+- goto out;
++ } else {
++ dev_err(adev->dev, "fail to initialize cap microcode\n");
+ }
+- dev_err(adev->dev, "fail to initialize cap microcode\n");
++ goto out;
+ }
+
+ info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CAP];
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index dbb63ce316f11e..42fd7669ac7d37 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -298,7 +298,7 @@ static int init_user_queue(struct process_queue_manager *pqm,
+ return 0;
+
+ free_gang_ctx_bo:
+- amdgpu_amdkfd_free_gtt_mem(dev->adev, (*q)->gang_ctx_bo);
++ amdgpu_amdkfd_free_gtt_mem(dev->adev, &(*q)->gang_ctx_bo);
+ cleanup:
+ uninit_queue(*q);
+ *q = NULL;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 0c0b9aa44dfa3a..99d2d3092ea540 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -607,7 +607,8 @@ static int smu_sys_set_pp_table(void *handle,
+ return -EIO;
+ }
+
+- if (!smu_table->hardcode_pptable) {
++ if (!smu_table->hardcode_pptable || smu_table->power_play_table_size < size) {
++ kfree(smu_table->hardcode_pptable);
+ smu_table->hardcode_pptable = kzalloc(size, GFP_KERNEL);
+ if (!smu_table->hardcode_pptable)
+ return -ENOMEM;
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index 6ee51003de3ce6..9fa13da513d24e 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -2421,7 +2421,7 @@ u8 drm_dp_dsc_sink_bpp_incr(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
+ {
+ u8 bpp_increment_dpcd = dsc_dpcd[DP_DSC_BITS_PER_PIXEL_INC - DP_DSC_SUPPORT];
+
+- switch (bpp_increment_dpcd) {
++ switch (bpp_increment_dpcd & DP_DSC_BITS_PER_PIXEL_MASK) {
+ case DP_DSC_BITS_PER_PIXEL_1_16:
+ return 16;
+ case DP_DSC_BITS_PER_PIXEL_1_8:
+diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+index 5c397a2df70e28..5d27e1c733c527 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+@@ -168,7 +168,7 @@ static int igt_ppgtt_alloc(void *arg)
+ return PTR_ERR(ppgtt);
+
+ if (!ppgtt->vm.allocate_va_range)
+- goto err_ppgtt_cleanup;
++ goto ppgtt_vm_put;
+
+ /*
+ * While we only allocate the page tables here and so we could
+@@ -236,7 +236,7 @@ static int igt_ppgtt_alloc(void *arg)
+ goto retry;
+ }
+ i915_gem_ww_ctx_fini(&ww);
+-
++ppgtt_vm_put:
+ i915_vm_put(&ppgtt->vm);
+ return err;
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+index e084406ebb0711..4f110be6b750d3 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+@@ -391,8 +391,8 @@ static const struct dpu_intf_cfg x1e80100_intf[] = {
+ .type = INTF_DP,
+ .controller_id = MSM_DP_CONTROLLER_2,
+ .prog_fetch_lines_worst_case = 24,
+- .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 17),
+- .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16),
++ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16),
++ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 17),
+ }, {
+ .name = "intf_7", .id = INTF_7,
+ .base = 0x3b000, .len = 0x280,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
+index 16f144cbc0c986..8ff496082902b1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
+@@ -42,9 +42,6 @@ static int dpu_wb_conn_atomic_check(struct drm_connector *connector,
+ if (!conn_state || !conn_state->connector) {
+ DPU_ERROR("invalid connector state\n");
+ return -EINVAL;
+- } else if (conn_state->connector->status != connector_status_connected) {
+- DPU_ERROR("connector not connected %d\n", conn_state->connector->status);
+- return -EINVAL;
+ }
+
+ crtc = conn_state->crtc;
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index fba78193127dee..f775638d239a5c 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -787,8 +787,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ goto out;
+
+ if (!submit->cmd[i].size ||
+- ((submit->cmd[i].size + submit->cmd[i].offset) >
+- obj->size / 4)) {
++ (size_add(submit->cmd[i].size, submit->cmd[i].offset) > obj->size / 4)) {
+ SUBMIT_ERROR(submit, "invalid cmdstream size: %u\n", submit->cmd[i].size * 4);
+ ret = -EINVAL;
+ goto out;
+diff --git a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
+index 2dba7c5ffd2c62..92f4261305bd9d 100644
+--- a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
++++ b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
+@@ -587,7 +587,7 @@ static int rcar_mipi_dsi_startup(struct rcar_mipi_dsi *dsi,
+ for (timeout = 10; timeout > 0; --timeout) {
+ if ((rcar_mipi_dsi_read(dsi, PPICLSR) & PPICLSR_STPST) &&
+ (rcar_mipi_dsi_read(dsi, PPIDLSR) & PPIDLSR_STPST) &&
+- (rcar_mipi_dsi_read(dsi, CLOCKSET1) & CLOCKSET1_LOCK))
++ (rcar_mipi_dsi_read(dsi, CLOCKSET1) & CLOCKSET1_LOCK_PHY))
+ break;
+
+ usleep_range(1000, 2000);
+diff --git a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
+index f8114d11f2d158..a6b276f1d6ee15 100644
+--- a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
++++ b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
+@@ -142,7 +142,6 @@
+
+ #define CLOCKSET1 0x101c
+ #define CLOCKSET1_LOCK_PHY (1 << 17)
+-#define CLOCKSET1_LOCK (1 << 16)
+ #define CLOCKSET1_CLKSEL (1 << 8)
+ #define CLOCKSET1_CLKINSEL_EXTAL (0 << 2)
+ #define CLOCKSET1_CLKINSEL_DIG (1 << 2)
+diff --git a/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c b/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c
+index b99217b4e05d7d..90c6269ccd2920 100644
+--- a/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c
++++ b/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c
+@@ -311,11 +311,11 @@ int rzg2l_du_modeset_init(struct rzg2l_du_device *rcdu)
+ dev->mode_config.helper_private = &rzg2l_du_mode_config_helper;
+
+ /*
+- * The RZ DU uses the VSP1 for memory access, and is limited
+- * to frame sizes of 1920x1080.
++ * The RZ DU was designed to support a frame size of 1920x1200 (landscape)
++ * or 1200x1920 (portrait).
+ */
+ dev->mode_config.max_width = 1920;
+- dev->mode_config.max_height = 1080;
++ dev->mode_config.max_height = 1920;
+
+ rcdu->num_crtcs = hweight8(rcdu->info->channels_mask);
+
+diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+index 4ba869e0e794c7..cbd9584af32995 100644
+--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+@@ -70,10 +70,17 @@ static int light_up_connector(struct kunit *test,
+ state = drm_kunit_helper_atomic_state_alloc(test, drm, ctx);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+
++retry:
+ conn_state = drm_atomic_get_connector_state(state, connector);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, conn_state);
+
+ ret = drm_atomic_set_crtc_for_connector(conn_state, crtc);
++ if (ret == -EDEADLK) {
++ drm_atomic_state_clear(state);
++ ret = drm_modeset_backoff(ctx);
++ if (!ret)
++ goto retry;
++ }
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
+diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
+index 1ad711f8d2a8bf..45f22ead3e61d3 100644
+--- a/drivers/gpu/drm/tidss/tidss_dispc.c
++++ b/drivers/gpu/drm/tidss/tidss_dispc.c
+@@ -700,7 +700,7 @@ void dispc_k2g_set_irqenable(struct dispc_device *dispc, dispc_irq_t mask)
+ {
+ dispc_irq_t old_mask = dispc_k2g_read_irqenable(dispc);
+
+- /* clear the irqstatus for newly enabled irqs */
++ /* clear the irqstatus for irqs that will be enabled */
+ dispc_k2g_clear_irqstatus(dispc, (mask ^ old_mask) & mask);
+
+ dispc_k2g_vp_set_irqenable(dispc, 0, mask);
+@@ -708,6 +708,9 @@ void dispc_k2g_set_irqenable(struct dispc_device *dispc, dispc_irq_t mask)
+
+ dispc_write(dispc, DISPC_IRQENABLE_SET, (1 << 0) | (1 << 7));
+
++ /* clear the irqstatus for irqs that were disabled */
++ dispc_k2g_clear_irqstatus(dispc, (mask ^ old_mask) & old_mask);
++
+ /* flush posted write */
+ dispc_k2g_read_irqenable(dispc);
+ }
+@@ -780,24 +783,20 @@ static
+ void dispc_k3_clear_irqstatus(struct dispc_device *dispc, dispc_irq_t clearmask)
+ {
+ unsigned int i;
+- u32 top_clear = 0;
+
+ for (i = 0; i < dispc->feat->num_vps; ++i) {
+- if (clearmask & DSS_IRQ_VP_MASK(i)) {
++ if (clearmask & DSS_IRQ_VP_MASK(i))
+ dispc_k3_vp_write_irqstatus(dispc, i, clearmask);
+- top_clear |= BIT(i);
+- }
+ }
+ for (i = 0; i < dispc->feat->num_planes; ++i) {
+- if (clearmask & DSS_IRQ_PLANE_MASK(i)) {
++ if (clearmask & DSS_IRQ_PLANE_MASK(i))
+ dispc_k3_vid_write_irqstatus(dispc, i, clearmask);
+- top_clear |= BIT(4 + i);
+- }
+ }
+ if (dispc->feat->subrev == DISPC_K2G)
+ return;
+
+- dispc_write(dispc, DISPC_IRQSTATUS, top_clear);
++ /* always clear the top level irqstatus */
++ dispc_write(dispc, DISPC_IRQSTATUS, dispc_read(dispc, DISPC_IRQSTATUS));
+
+ /* Flush posted writes */
+ dispc_read(dispc, DISPC_IRQSTATUS);
+@@ -843,7 +842,7 @@ static void dispc_k3_set_irqenable(struct dispc_device *dispc,
+
+ old_mask = dispc_k3_read_irqenable(dispc);
+
+- /* clear the irqstatus for newly enabled irqs */
++ /* clear the irqstatus for irqs that will be enabled */
+ dispc_k3_clear_irqstatus(dispc, (old_mask ^ mask) & mask);
+
+ for (i = 0; i < dispc->feat->num_vps; ++i) {
+@@ -868,6 +867,9 @@ static void dispc_k3_set_irqenable(struct dispc_device *dispc,
+ if (main_disable)
+ dispc_write(dispc, DISPC_IRQENABLE_CLR, main_disable);
+
++ /* clear the irqstatus for irqs that were disabled */
++ dispc_k3_clear_irqstatus(dispc, (old_mask ^ mask) & old_mask);
++
+ /* Flush posted writes */
+ dispc_read(dispc, DISPC_IRQENABLE_SET);
+ }
+@@ -2767,8 +2769,12 @@ static void dispc_init_errata(struct dispc_device *dispc)
+ */
+ static void dispc_softreset_k2g(struct dispc_device *dispc)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&dispc->tidss->wait_lock, flags);
+ dispc_set_irqenable(dispc, 0);
+ dispc_read_and_clear_irqstatus(dispc);
++ spin_unlock_irqrestore(&dispc->tidss->wait_lock, flags);
+
+ for (unsigned int vp_idx = 0; vp_idx < dispc->feat->num_vps; ++vp_idx)
+ VP_REG_FLD_MOD(dispc, vp_idx, DISPC_VP_CONTROL, 0, 0, 0);
+diff --git a/drivers/gpu/drm/tidss/tidss_irq.c b/drivers/gpu/drm/tidss/tidss_irq.c
+index 604334ef526a04..d053dbb9d28c5d 100644
+--- a/drivers/gpu/drm/tidss/tidss_irq.c
++++ b/drivers/gpu/drm/tidss/tidss_irq.c
+@@ -60,7 +60,9 @@ static irqreturn_t tidss_irq_handler(int irq, void *arg)
+ unsigned int id;
+ dispc_irq_t irqstatus;
+
++ spin_lock(&tidss->wait_lock);
+ irqstatus = dispc_read_and_clear_irqstatus(tidss->dispc);
++ spin_unlock(&tidss->wait_lock);
+
+ for (id = 0; id < tidss->num_crtcs; id++) {
+ struct drm_crtc *crtc = tidss->crtcs[id];
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index e3013ac3a5c2a6..1abfd738a6017d 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -384,6 +384,7 @@ int v3d_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ {
+ struct v3d_file_priv *v3d_priv = file_priv->driver_priv;
+ struct drm_v3d_perfmon_destroy *req = data;
++ struct v3d_dev *v3d = v3d_priv->v3d;
+ struct v3d_perfmon *perfmon;
+
+ mutex_lock(&v3d_priv->perfmon.lock);
+@@ -393,6 +394,10 @@ int v3d_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ if (!perfmon)
+ return -EINVAL;
+
++ /* If the active perfmon is being destroyed, stop it first */
++ if (perfmon == v3d->active_perfmon)
++ v3d_perfmon_stop(v3d, perfmon, false);
++
+ v3d_perfmon_put(perfmon);
+
+ return 0;
+diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
+index fb52a23e28f84e..a89fbfbdab329f 100644
+--- a/drivers/gpu/drm/xe/xe_drm_client.c
++++ b/drivers/gpu/drm/xe/xe_drm_client.c
+@@ -135,8 +135,8 @@ void xe_drm_client_add_bo(struct xe_drm_client *client,
+ XE_WARN_ON(bo->client);
+ XE_WARN_ON(!list_empty(&bo->client_link));
+
+- spin_lock(&client->bos_lock);
+ bo->client = xe_drm_client_get(client);
++ spin_lock(&client->bos_lock);
+ list_add_tail(&bo->client_link, &client->bos_list);
+ spin_unlock(&client->bos_lock);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_trace_bo.h b/drivers/gpu/drm/xe/xe_trace_bo.h
+index 9b1a1d4304ae18..ba0f61e7d2d6b9 100644
+--- a/drivers/gpu/drm/xe/xe_trace_bo.h
++++ b/drivers/gpu/drm/xe/xe_trace_bo.h
+@@ -55,8 +55,8 @@ TRACE_EVENT(xe_bo_move,
+ TP_STRUCT__entry(
+ __field(struct xe_bo *, bo)
+ __field(size_t, size)
+- __field(u32, new_placement)
+- __field(u32, old_placement)
++ __string(new_placement_name, xe_mem_type_to_name[new_placement])
++ __string(old_placement_name, xe_mem_type_to_name[old_placement])
+ __string(device_id, __dev_name_bo(bo))
+ __field(bool, move_lacks_source)
+ ),
+@@ -64,15 +64,15 @@ TRACE_EVENT(xe_bo_move,
+ TP_fast_assign(
+ __entry->bo = bo;
+ __entry->size = bo->size;
+- __entry->new_placement = new_placement;
+- __entry->old_placement = old_placement;
++ __assign_str(new_placement_name);
++ __assign_str(old_placement_name);
+ __assign_str(device_id);
+ __entry->move_lacks_source = move_lacks_source;
+ ),
+ TP_printk("move_lacks_source:%s, migrate object %p [size %zu] from %s to %s device_id:%s",
+ __entry->move_lacks_source ? "yes" : "no", __entry->bo, __entry->size,
+- xe_mem_type_to_name[__entry->old_placement],
+- xe_mem_type_to_name[__entry->new_placement], __get_str(device_id))
++ __get_str(old_placement_name),
++ __get_str(new_placement_name), __get_str(device_id))
+ );
+
+ DECLARE_EVENT_CLASS(xe_vma,
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index e98528777faaec..710674ef40a973 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -625,6 +625,8 @@ static int host1x_probe(struct platform_device *pdev)
+ goto free_contexts;
+ }
+
++ mutex_init(&host->intr_mutex);
++
+ pm_runtime_enable(&pdev->dev);
+
+ err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev);
+diff --git a/drivers/gpu/host1x/intr.c b/drivers/gpu/host1x/intr.c
+index b3285dd101804c..f77a678949e96b 100644
+--- a/drivers/gpu/host1x/intr.c
++++ b/drivers/gpu/host1x/intr.c
+@@ -104,8 +104,6 @@ int host1x_intr_init(struct host1x *host)
+ unsigned int id;
+ int i, err;
+
+- mutex_init(&host->intr_mutex);
+-
+ for (id = 0; id < host1x_syncpt_nb_pts(host); ++id) {
+ struct host1x_syncpt *syncpt = &host->syncpt[id];
+
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 369414c92fccbe..93b5c648ef82c9 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1673,9 +1673,12 @@ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ break;
+ }
+
+- if (suffix)
++ if (suffix) {
+ hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
+ "%s %s", hdev->name, suffix);
++ if (!hi->input->name)
++ return -ENOMEM;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index bf8b633114be6a..7b359668987854 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -313,6 +313,7 @@ struct steam_device {
+ u16 rumble_left;
+ u16 rumble_right;
+ unsigned int sensor_timestamp_us;
++ struct work_struct unregister_work;
+ };
+
+ static int steam_recv_report(struct steam_device *steam,
+@@ -1072,6 +1073,31 @@ static void steam_mode_switch_cb(struct work_struct *work)
+ }
+ }
+
++static void steam_work_unregister_cb(struct work_struct *work)
++{
++ struct steam_device *steam = container_of(work, struct steam_device,
++ unregister_work);
++ unsigned long flags;
++ bool connected;
++ bool opened;
++
++ spin_lock_irqsave(&steam->lock, flags);
++ opened = steam->client_opened;
++ connected = steam->connected;
++ spin_unlock_irqrestore(&steam->lock, flags);
++
++ if (connected) {
++ if (opened) {
++ steam_sensors_unregister(steam);
++ steam_input_unregister(steam);
++ } else {
++ steam_set_lizard_mode(steam, lizard_mode);
++ steam_input_register(steam);
++ steam_sensors_register(steam);
++ }
++ }
++}
++
+ static bool steam_is_valve_interface(struct hid_device *hdev)
+ {
+ struct hid_report_enum *rep_enum;
+@@ -1117,8 +1143,7 @@ static int steam_client_ll_open(struct hid_device *hdev)
+ steam->client_opened++;
+ spin_unlock_irqrestore(&steam->lock, flags);
+
+- steam_sensors_unregister(steam);
+- steam_input_unregister(steam);
++ schedule_work(&steam->unregister_work);
+
+ return 0;
+ }
+@@ -1135,11 +1160,7 @@ static void steam_client_ll_close(struct hid_device *hdev)
+ connected = steam->connected && !steam->client_opened;
+ spin_unlock_irqrestore(&steam->lock, flags);
+
+- if (connected) {
+- steam_set_lizard_mode(steam, lizard_mode);
+- steam_input_register(steam);
+- steam_sensors_register(steam);
+- }
++ schedule_work(&steam->unregister_work);
+ }
+
+ static int steam_client_ll_raw_request(struct hid_device *hdev,
+@@ -1231,6 +1252,7 @@ static int steam_probe(struct hid_device *hdev,
+ INIT_LIST_HEAD(&steam->list);
+ INIT_WORK(&steam->rumble_work, steam_haptic_rumble_cb);
+ steam->sensor_timestamp_us = 0;
++ INIT_WORK(&steam->unregister_work, steam_work_unregister_cb);
+
+ /*
+ * With the real steam controller interface, do not connect hidraw.
+@@ -1291,6 +1313,7 @@ static int steam_probe(struct hid_device *hdev,
+ cancel_work_sync(&steam->work_connect);
+ cancel_delayed_work_sync(&steam->mode_switch);
+ cancel_work_sync(&steam->rumble_work);
++ cancel_work_sync(&steam->unregister_work);
+
+ return ret;
+ }
+@@ -1306,6 +1329,8 @@ static void steam_remove(struct hid_device *hdev)
+
+ cancel_delayed_work_sync(&steam->mode_switch);
+ cancel_work_sync(&steam->work_connect);
++ cancel_work_sync(&steam->rumble_work);
++ cancel_work_sync(&steam->unregister_work);
+ hid_destroy_device(steam->client_hdev);
+ steam->client_hdev = NULL;
+ steam->client_opened = 0;
+@@ -1592,7 +1617,7 @@ static void steam_do_deck_input_event(struct steam_device *steam,
+
+ if (!(b9 & BIT(6)) && steam->did_mode_switch) {
+ steam->did_mode_switch = false;
+- cancel_delayed_work_sync(&steam->mode_switch);
++ cancel_delayed_work(&steam->mode_switch);
+ } else if (!steam->client_opened && (b9 & BIT(6)) && !steam->did_mode_switch) {
+ steam->did_mode_switch = true;
+ schedule_delayed_work(&steam->mode_switch, 45 * HZ / 100);
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index 6c3e758bbb09e3..3b81468a1df297 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -171,7 +171,7 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
+ b_ep = ep->desc.bEndpointAddress;
+
+ /* Are the expected endpoints present? */
+- u8 ep_addr[1] = {b_ep};
++ u8 ep_addr[2] = {b_ep, 0};
+
+ if (!usb_check_int_endpoints(usbif, ep_addr)) {
+ hid_err(hdev, "Unexpected non-int endpoint\n");
+diff --git a/drivers/hid/hid-winwing.c b/drivers/hid/hid-winwing.c
+index 831b760c66ea72..d4afbbd2780797 100644
+--- a/drivers/hid/hid-winwing.c
++++ b/drivers/hid/hid-winwing.c
+@@ -106,6 +106,8 @@ static int winwing_init_led(struct hid_device *hdev,
+ "%s::%s",
+ dev_name(&input->dev),
+ info->led_name);
++ if (!led->cdev.name)
++ return -ENOMEM;
+
+ ret = devm_led_classdev_register(&hdev->dev, &led->cdev);
+ if (ret)
+diff --git a/drivers/i3c/master/Kconfig b/drivers/i3c/master/Kconfig
+index 90dee3ec552097..77da199c7413e6 100644
+--- a/drivers/i3c/master/Kconfig
++++ b/drivers/i3c/master/Kconfig
+@@ -57,3 +57,14 @@ config MIPI_I3C_HCI
+
+ This driver can also be built as a module. If so, the module will be
+ called mipi-i3c-hci.
++
++config MIPI_I3C_HCI_PCI
++ tristate "MIPI I3C Host Controller Interface PCI support"
++ depends on MIPI_I3C_HCI
++ depends on PCI
++ help
++ Support for MIPI I3C Host Controller Interface compatible hardware
++ on the PCI bus.
++
++ This driver can also be built as a module. If so, the module will be
++ called mipi-i3c-hci-pci.
+diff --git a/drivers/i3c/master/mipi-i3c-hci/Makefile b/drivers/i3c/master/mipi-i3c-hci/Makefile
+index 1f8cd5c48fdef3..e3d3ef757035f0 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/Makefile
++++ b/drivers/i3c/master/mipi-i3c-hci/Makefile
+@@ -5,3 +5,4 @@ mipi-i3c-hci-y := core.o ext_caps.o pio.o dma.o \
+ cmd_v1.o cmd_v2.o \
+ dat_v1.o dct_v1.o \
+ hci_quirks.o
++obj-$(CONFIG_MIPI_I3C_HCI_PCI) += mipi-i3c-hci-pci.o
+diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
+index 13adc584009429..fe955703e59b58 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
++++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
+@@ -762,9 +762,26 @@ static bool hci_dma_irq_handler(struct i3c_hci *hci, unsigned int mask)
+ complete(&rh->op_done);
+
+ if (status & INTR_TRANSFER_ABORT) {
++ u32 ring_status;
++
+ dev_notice_ratelimited(&hci->master.dev,
+ "ring %d: Transfer Aborted\n", i);
+ mipi_i3c_hci_resume(hci);
++ ring_status = rh_reg_read(RING_STATUS);
++ if (!(ring_status & RING_STATUS_RUNNING) &&
++ status & INTR_TRANSFER_COMPLETION &&
++ status & INTR_TRANSFER_ERR) {
++ /*
++ * A ring stop followed by a run is an
++ * Intel-specific quirk required after
++ * resuming the halted controller. Do it only
++ * when the ring is not in the running state
++ * after a transfer error.
++ */
++ rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE);
++ rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE |
++ RING_CTRL_RUN_STOP);
++ }
+ }
+ if (status & INTR_WARN_INS_STOP_MODE)
+ dev_warn_ratelimited(&hci->master.dev,
+diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+new file mode 100644
+index 00000000000000..c6c3a3ec11eae3
+--- /dev/null
++++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+@@ -0,0 +1,148 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * PCI glue code for MIPI I3C HCI driver
++ *
++ * Copyright (C) 2024 Intel Corporation
++ *
++ * Author: Jarkko Nikula <jarkko.nikula@linux.intel.com>
++ */
++#include <linux/acpi.h>
++#include <linux/idr.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++#include <linux/platform_device.h>
++
++struct mipi_i3c_hci_pci_info {
++ int (*init)(struct pci_dev *pci);
++};
++
++#define INTEL_PRIV_OFFSET 0x2b0
++#define INTEL_PRIV_SIZE 0x28
++#define INTEL_PRIV_RESETS 0x04
++#define INTEL_PRIV_RESETS_RESET BIT(0)
++#define INTEL_PRIV_RESETS_RESET_DONE BIT(1)
++
++static DEFINE_IDA(mipi_i3c_hci_pci_ida);
++
++static int mipi_i3c_hci_pci_intel_init(struct pci_dev *pci)
++{
++ unsigned long timeout;
++ void __iomem *priv;
++
++ priv = devm_ioremap(&pci->dev,
++ pci_resource_start(pci, 0) + INTEL_PRIV_OFFSET,
++ INTEL_PRIV_SIZE);
++ if (!priv)
++ return -ENOMEM;
++
++ /* Assert reset, wait for completion and release reset */
++ writel(0, priv + INTEL_PRIV_RESETS);
++ timeout = jiffies + msecs_to_jiffies(10);
++ while (!(readl(priv + INTEL_PRIV_RESETS) &
++ INTEL_PRIV_RESETS_RESET_DONE)) {
++ if (time_after(jiffies, timeout))
++ break;
++ cpu_relax();
++ }
++ writel(INTEL_PRIV_RESETS_RESET, priv + INTEL_PRIV_RESETS);
++
++ return 0;
++}
++
++static struct mipi_i3c_hci_pci_info intel_info = {
++ .init = mipi_i3c_hci_pci_intel_init,
++};
++
++static int mipi_i3c_hci_pci_probe(struct pci_dev *pci,
++ const struct pci_device_id *id)
++{
++ struct mipi_i3c_hci_pci_info *info;
++ struct platform_device *pdev;
++ struct resource res[2];
++ int dev_id, ret;
++
++ ret = pcim_enable_device(pci);
++ if (ret)
++ return ret;
++
++ pci_set_master(pci);
++
++ memset(&res, 0, sizeof(res));
++
++ res[0].flags = IORESOURCE_MEM;
++ res[0].start = pci_resource_start(pci, 0);
++ res[0].end = pci_resource_end(pci, 0);
++
++ res[1].flags = IORESOURCE_IRQ;
++ res[1].start = pci->irq;
++ res[1].end = pci->irq;
++
++ dev_id = ida_alloc(&mipi_i3c_hci_pci_ida, GFP_KERNEL);
++ if (dev_id < 0)
++ return dev_id;
++
++ pdev = platform_device_alloc("mipi-i3c-hci", dev_id);
++ if (!pdev)
++ return -ENOMEM;
++
++ pdev->dev.parent = &pci->dev;
++ device_set_node(&pdev->dev, dev_fwnode(&pci->dev));
++
++ ret = platform_device_add_resources(pdev, res, ARRAY_SIZE(res));
++ if (ret)
++ goto err;
++
++ info = (struct mipi_i3c_hci_pci_info *)id->driver_data;
++ if (info && info->init) {
++ ret = info->init(pci);
++ if (ret)
++ goto err;
++ }
++
++ ret = platform_device_add(pdev);
++ if (ret)
++ goto err;
++
++ pci_set_drvdata(pci, pdev);
++
++ return 0;
++
++err:
++ platform_device_put(pdev);
++ ida_free(&mipi_i3c_hci_pci_ida, dev_id);
++ return ret;
++}
++
++static void mipi_i3c_hci_pci_remove(struct pci_dev *pci)
++{
++ struct platform_device *pdev = pci_get_drvdata(pci);
++ int dev_id = pdev->id;
++
++ platform_device_unregister(pdev);
++ ida_free(&mipi_i3c_hci_pci_ida, dev_id);
++}
++
++static const struct pci_device_id mipi_i3c_hci_pci_devices[] = {
++ /* Panther Lake-H */
++ { PCI_VDEVICE(INTEL, 0xe37c), (kernel_ulong_t)&intel_info},
++ { PCI_VDEVICE(INTEL, 0xe36f), (kernel_ulong_t)&intel_info},
++ /* Panther Lake-P */
++ { PCI_VDEVICE(INTEL, 0xe47c), (kernel_ulong_t)&intel_info},
++ { PCI_VDEVICE(INTEL, 0xe46f), (kernel_ulong_t)&intel_info},
++ { },
++};
++MODULE_DEVICE_TABLE(pci, mipi_i3c_hci_pci_devices);
++
++static struct pci_driver mipi_i3c_hci_pci_driver = {
++ .name = "mipi_i3c_hci_pci",
++ .id_table = mipi_i3c_hci_pci_devices,
++ .probe = mipi_i3c_hci_pci_probe,
++ .remove = mipi_i3c_hci_pci_remove,
++};
++
++module_pci_driver(mipi_i3c_hci_pci_driver);
++
++MODULE_AUTHOR("Jarkko Nikula <jarkko.nikula@intel.com>");
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("MIPI I3C HCI driver on PCI bus");
+diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
+index ad225823e6f2fe..45a4564c670c01 100644
+--- a/drivers/infiniband/hw/efa/efa_main.c
++++ b/drivers/infiniband/hw/efa/efa_main.c
+@@ -470,7 +470,6 @@ static void efa_ib_device_remove(struct efa_dev *dev)
+ ibdev_info(&dev->ibdev, "Unregister ib device\n");
+ ib_unregister_device(&dev->ibdev);
+ efa_destroy_eqs(dev);
+- efa_com_dev_reset(&dev->edev, EFA_REGS_RESET_NORMAL);
+ efa_release_doorbell_bar(dev);
+ }
+
+@@ -643,12 +642,14 @@ static struct efa_dev *efa_probe_device(struct pci_dev *pdev)
+ return ERR_PTR(err);
+ }
+
+-static void efa_remove_device(struct pci_dev *pdev)
++static void efa_remove_device(struct pci_dev *pdev,
++ enum efa_regs_reset_reason_types reset_reason)
+ {
+ struct efa_dev *dev = pci_get_drvdata(pdev);
+ struct efa_com_dev *edev;
+
+ edev = &dev->edev;
++ efa_com_dev_reset(edev, reset_reason);
+ efa_com_admin_destroy(edev);
+ efa_free_irq(dev, &dev->admin_irq);
+ efa_disable_msix(dev);
+@@ -676,7 +677,7 @@ static int efa_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return 0;
+
+ err_remove_device:
+- efa_remove_device(pdev);
++ efa_remove_device(pdev, EFA_REGS_RESET_INIT_ERR);
+ return err;
+ }
+
+@@ -685,7 +686,7 @@ static void efa_remove(struct pci_dev *pdev)
+ struct efa_dev *dev = pci_get_drvdata(pdev);
+
+ efa_ib_device_remove(dev);
+- efa_remove_device(pdev);
++ efa_remove_device(pdev, EFA_REGS_RESET_NORMAL);
+ }
+
+ static void efa_shutdown(struct pci_dev *pdev)
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 601fb4ee69009e..6fb2f2919ab1ff 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -175,6 +175,7 @@
+ #define CONTROL_GAM_EN 25
+ #define CONTROL_GALOG_EN 28
+ #define CONTROL_GAINT_EN 29
++#define CONTROL_EPH_EN 45
+ #define CONTROL_XT_EN 50
+ #define CONTROL_INTCAPXT_EN 51
+ #define CONTROL_IRTCACHEDIS 59
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 43131c3a21726f..dbe2d13972feff 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -2647,6 +2647,10 @@ static void iommu_init_flags(struct amd_iommu *iommu)
+
+ /* Set IOTLB invalidation timeout to 1s */
+ iommu_set_inv_tlb_timeout(iommu, CTRL_INV_TO_1S);
++
++ /* Enable Enhanced Peripheral Page Request Handling */
++ if (check_feature(FEATURE_EPHSUP))
++ iommu_feature_enable(iommu, CONTROL_EPH_EN);
+ }
+
+ static void iommu_apply_resume_quirks(struct amd_iommu *iommu)
+diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
+index 4674e618797c15..8b5926c1452edb 100644
+--- a/drivers/iommu/io-pgfault.c
++++ b/drivers/iommu/io-pgfault.c
+@@ -478,6 +478,7 @@ void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev)
+
+ ops->page_response(dev, iopf, &resp);
+ list_del_init(&group->pending_node);
++ iopf_free_group(group);
+ }
+ mutex_unlock(&fault_param->lock);
+
+diff --git a/drivers/media/dvb-frontends/cxd2841er.c b/drivers/media/dvb-frontends/cxd2841er.c
+index d925ca24183b50..415f1f91cc3072 100644
+--- a/drivers/media/dvb-frontends/cxd2841er.c
++++ b/drivers/media/dvb-frontends/cxd2841er.c
+@@ -311,12 +311,8 @@ static int cxd2841er_set_reg_bits(struct cxd2841er_priv *priv,
+
+ static u32 cxd2841er_calc_iffreq_xtal(enum cxd2841er_xtal xtal, u32 ifhz)
+ {
+- u64 tmp;
+-
+- tmp = (u64) ifhz * 16777216;
+- do_div(tmp, ((xtal == SONY_XTAL_24000) ? 48000000 : 41000000));
+-
+- return (u32) tmp;
++ return div_u64(ifhz * 16777216ull,
++ (xtal == SONY_XTAL_24000) ? 48000000 : 41000000);
+ }
+
+ static u32 cxd2841er_calc_iffreq(u32 ifhz)
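
The cxd2841er hunk replaces do_div() on a manually widened value with div_u64(). The widening is the point: ifhz * 16777216 (2^24) overflows 32 bits for any IF above 255 Hz, so the multiply must happen in 64 bits, and the 64-by-32 division needs a helper on 32-bit kernels where native 64-bit division is unavailable. The arithmetic, checked standalone (plain C with stdint types instead of the kernel's):

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's div_u64(): 64-bit / 32-bit. */
static uint32_t div_u64(uint64_t dividend, uint32_t divisor)
{
    return (uint32_t)(dividend / divisor);
}

static uint32_t calc_iffreq_xtal(int is_24mhz, uint32_t ifhz)
{
    /* The 'ull' suffix forces a 64-bit multiply; a 32-bit one
     * would wrap for any ifhz >= 256 (2^32 / 2^24). */
    return div_u64(ifhz * 16777216ull,
                   is_24mhz ? 48000000 : 41000000);
}

int main(void)
{
    /* 4.8 MHz IF, 24 MHz crystal: 4800000 * 2^24 is about 8.05e13. */
    printf("%u\n", calc_iffreq_xtal(1, 4800000));
    return 0;
}
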
+diff --git a/drivers/media/i2c/ds90ub913.c b/drivers/media/i2c/ds90ub913.c
+index b5375d73662996..7670d6c82d923e 100644
+--- a/drivers/media/i2c/ds90ub913.c
++++ b/drivers/media/i2c/ds90ub913.c
+@@ -8,6 +8,7 @@
+ * Copyright (c) 2023 Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/clk-provider.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -146,6 +147,19 @@ static int ub913_write(const struct ub913_data *priv, u8 reg, u8 val)
+ return ret;
+ }
+
++static int ub913_update_bits(const struct ub913_data *priv, u8 reg, u8 mask,
++ u8 val)
++{
++ int ret;
++
++ ret = regmap_update_bits(priv->regmap, reg, mask, val);
++ if (ret < 0)
++ dev_err(&priv->client->dev,
++ "Cannot update register 0x%02x %d!\n", reg, ret);
++
++ return ret;
++}
++
+ /*
+ * GPIO chip
+ */
+@@ -733,10 +747,13 @@ static int ub913_hw_init(struct ub913_data *priv)
+ if (ret)
+ return dev_err_probe(dev, ret, "i2c master init failed\n");
+
+- ub913_read(priv, UB913_REG_GENERAL_CFG, &v);
+- v &= ~UB913_REG_GENERAL_CFG_PCLK_RISING;
+- v |= priv->pclk_polarity_rising ? UB913_REG_GENERAL_CFG_PCLK_RISING : 0;
+- ub913_write(priv, UB913_REG_GENERAL_CFG, v);
++ ret = ub913_update_bits(priv, UB913_REG_GENERAL_CFG,
++ UB913_REG_GENERAL_CFG_PCLK_RISING,
++ FIELD_PREP(UB913_REG_GENERAL_CFG_PCLK_RISING,
++ priv->pclk_polarity_rising));
++
++ if (ret)
++ return ret;
+
+ return 0;
+ }
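
The ds90ub913 hunk drops the hand-rolled read/mask/write on GENERAL_CFG (which ignored I/O errors) in favor of regmap_update_bits() plus FIELD_PREP(), so the read-modify-write is done inside regmap and failures propagate. The semantics of those two helpers, reduced to plain C (the macros below are hand-rolled stand-ins, not the kernel headers):

#include <stdint.h>
#include <stdio.h>

#define PCLK_RISING (1u << 0)   /* single-bit field, as in the hunk */

/* Stand-in for FIELD_PREP(): place 'val' into the field given by 'mask'
 * by shifting it up to the mask's lowest set bit. */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
    return (val * (mask & -mask)) & mask;
}

/* Stand-in for the read-modify-write regmap_update_bits() performs. */
static uint32_t update_bits(uint32_t reg, uint32_t mask, uint32_t val)
{
    return (reg & ~mask) | (val & mask);
}

int main(void)
{
    uint32_t cfg = 0xa5;
    int pclk_polarity_rising = 1;

    cfg = update_bits(cfg, PCLK_RISING,
                      field_prep(PCLK_RISING, pclk_polarity_rising));
    printf("cfg=0x%02x\n", cfg);
    return 0;
}
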
+diff --git a/drivers/media/i2c/ds90ub953.c b/drivers/media/i2c/ds90ub953.c
+index 10daecf6f45798..f0bad3e64f23dc 100644
+--- a/drivers/media/i2c/ds90ub953.c
++++ b/drivers/media/i2c/ds90ub953.c
+@@ -398,8 +398,13 @@ static int ub953_gpiochip_probe(struct ub953_data *priv)
+ int ret;
+
+ /* Set all GPIOs to local input mode */
+- ub953_write(priv, UB953_REG_LOCAL_GPIO_DATA, 0);
+- ub953_write(priv, UB953_REG_GPIO_INPUT_CTRL, 0xf);
++ ret = ub953_write(priv, UB953_REG_LOCAL_GPIO_DATA, 0);
++ if (ret)
++ return ret;
++
++ ret = ub953_write(priv, UB953_REG_GPIO_INPUT_CTRL, 0xf);
++ if (ret)
++ return ret;
+
+ gc->label = dev_name(dev);
+ gc->parent = dev;
+@@ -961,10 +966,11 @@ static void ub953_calc_clkout_params(struct ub953_data *priv,
+ clkout_data->rate = clkout_rate;
+ }
+
+-static void ub953_write_clkout_regs(struct ub953_data *priv,
+- const struct ub953_clkout_data *clkout_data)
++static int ub953_write_clkout_regs(struct ub953_data *priv,
++ const struct ub953_clkout_data *clkout_data)
+ {
+ u8 clkout_ctrl0, clkout_ctrl1;
++ int ret;
+
+ if (priv->hw_data->is_ub971)
+ clkout_ctrl0 = clkout_data->m;
+@@ -974,8 +980,15 @@ static void ub953_write_clkout_regs(struct ub953_data *priv,
+
+ clkout_ctrl1 = clkout_data->n;
+
+- ub953_write(priv, UB953_REG_CLKOUT_CTRL0, clkout_ctrl0);
+- ub953_write(priv, UB953_REG_CLKOUT_CTRL1, clkout_ctrl1);
++ ret = ub953_write(priv, UB953_REG_CLKOUT_CTRL0, clkout_ctrl0);
++ if (ret)
++ return ret;
++
++ ret = ub953_write(priv, UB953_REG_CLKOUT_CTRL1, clkout_ctrl1);
++ if (ret)
++ return ret;
++
++ return 0;
+ }
+
+ static unsigned long ub953_clkout_recalc_rate(struct clk_hw *hw,
+@@ -1055,9 +1068,7 @@ static int ub953_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+ dev_dbg(&priv->client->dev, "%s %lu (requested %lu)\n", __func__,
+ clkout_data.rate, rate);
+
+- ub953_write_clkout_regs(priv, &clkout_data);
+-
+- return 0;
++ return ub953_write_clkout_regs(priv, &clkout_data);
+ }
+
+ static const struct clk_ops ub953_clkout_ops = {
+@@ -1082,7 +1093,9 @@ static int ub953_register_clkout(struct ub953_data *priv)
+
+ /* Initialize clkout to 25MHz by default */
+ ub953_calc_clkout_params(priv, UB953_DEFAULT_CLKOUT_RATE, &clkout_data);
+- ub953_write_clkout_regs(priv, &clkout_data);
++ ret = ub953_write_clkout_regs(priv, &clkout_data);
++ if (ret)
++ return ret;
+
+ priv->clkout_clk_hw.init = &init;
+
+@@ -1229,10 +1242,15 @@ static int ub953_hw_init(struct ub953_data *priv)
+ if (ret)
+ return dev_err_probe(dev, ret, "i2c init failed\n");
+
+- ub953_write(priv, UB953_REG_GENERAL_CFG,
+- (priv->non_continous_clk ? 0 : UB953_REG_GENERAL_CFG_CONT_CLK) |
+- ((priv->num_data_lanes - 1) << UB953_REG_GENERAL_CFG_CSI_LANE_SEL_SHIFT) |
+- UB953_REG_GENERAL_CFG_CRC_TX_GEN_ENABLE);
++ v = 0;
++ v |= priv->non_continous_clk ? 0 : UB953_REG_GENERAL_CFG_CONT_CLK;
++ v |= (priv->num_data_lanes - 1) <<
++ UB953_REG_GENERAL_CFG_CSI_LANE_SEL_SHIFT;
++ v |= UB953_REG_GENERAL_CFG_CRC_TX_GEN_ENABLE;
++
++ ret = ub953_write(priv, UB953_REG_GENERAL_CFG, v);
++ if (ret)
++ return ret;
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/broadcom/bcm2835-unicam.c b/drivers/media/platform/broadcom/bcm2835-unicam.c
+index a1d93c14553d80..9f81e1582a3005 100644
+--- a/drivers/media/platform/broadcom/bcm2835-unicam.c
++++ b/drivers/media/platform/broadcom/bcm2835-unicam.c
+@@ -816,11 +816,6 @@ static irqreturn_t unicam_isr(int irq, void *dev)
+ }
+ }
+
+- if (unicam_reg_read(unicam, UNICAM_ICTL) & UNICAM_FCM) {
+- /* Switch out of trigger mode if selected */
+- unicam_reg_write_field(unicam, UNICAM_ICTL, 1, UNICAM_TFC);
+- unicam_reg_write_field(unicam, UNICAM_ICTL, 0, UNICAM_FCM);
+- }
+ return IRQ_HANDLED;
+ }
+
+@@ -984,8 +979,7 @@ static void unicam_start_rx(struct unicam_device *unicam,
+
+ unicam_reg_write_field(unicam, UNICAM_ANA, 0, UNICAM_DDL);
+
+- /* Always start in trigger frame capture mode (UNICAM_FCM set) */
+- val = UNICAM_FSIE | UNICAM_FEIE | UNICAM_FCM | UNICAM_IBOB;
++ val = UNICAM_FSIE | UNICAM_FEIE | UNICAM_IBOB;
+ line_int_freq = max(fmt->height >> 2, 128);
+ unicam_set_field(&val, line_int_freq, UNICAM_LCIE_MASK);
+ unicam_reg_write(unicam, UNICAM_ICTL, val);
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_bridge.c b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+index 613949df897d34..6d964e392d3130 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_bridge.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+@@ -191,10 +191,11 @@ static int vidtv_start_streaming(struct vidtv_dvb *dvb)
+
+ mux_args.mux_buf_sz = mux_buf_sz;
+
+- dvb->streaming = true;
+ dvb->mux = vidtv_mux_init(dvb->fe[0], dev, &mux_args);
+ if (!dvb->mux)
+ return -ENOMEM;
++
++ dvb->streaming = true;
+ vidtv_mux_start_thread(dvb->mux);
+
+ dev_dbg_ratelimited(dev, "Started streaming\n");
+@@ -205,6 +206,11 @@ static int vidtv_stop_streaming(struct vidtv_dvb *dvb)
+ {
+ struct device *dev = &dvb->pdev->dev;
+
++ if (!dvb->streaming) {
++ dev_warn_ratelimited(dev, "No streaming. Skipping.\n");
++ return 0;
++ }
++
+ dvb->streaming = false;
+ vidtv_mux_stop_thread(dvb->mux);
+ vidtv_mux_destroy(dvb->mux);
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index d832aa55056f39..4d8e00b425f443 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2809,6 +2809,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
++ /* Sonix Technology Co. Ltd. - 292A IPC AR0330 */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x0c45,
++ .idProduct = 0x6366,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_MJPEG_NO_EOF) },
+ /* MT6227 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+@@ -2837,6 +2846,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
++ /* Kurokesu C1 PRO */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x16d0,
++ .idProduct = 0x0ed1,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_MJPEG_NO_EOF) },
+ /* Syntek (HP Spartan) */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index d2fe01bcd209e5..eab7b8f5573057 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -20,6 +20,7 @@
+ #include <linux/atomic.h>
+ #include <linux/unaligned.h>
+
++#include <media/jpeg.h>
+ #include <media/v4l2-common.h>
+
+ #include "uvcvideo.h"
+@@ -1137,6 +1138,7 @@ static void uvc_video_stats_stop(struct uvc_streaming *stream)
+ static int uvc_video_decode_start(struct uvc_streaming *stream,
+ struct uvc_buffer *buf, const u8 *data, int len)
+ {
++ u8 header_len;
+ u8 fid;
+
+ /*
+@@ -1150,6 +1152,7 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
+ return -EINVAL;
+ }
+
++ header_len = data[0];
+ fid = data[1] & UVC_STREAM_FID;
+
+ /*
+@@ -1231,9 +1234,31 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
+ return -EAGAIN;
+ }
+
++ /*
++ * Some cameras, when running two parallel streams (one MJPEG alongside
++ * another non-MJPEG stream), are known to lose the EOF packet for a frame.
++ * We can detect the end of a frame by checking for a new SOI marker, as
++ * the SOI always lies on the packet boundary between two frames for
++ * these devices.
++ */
++ if (stream->dev->quirks & UVC_QUIRK_MJPEG_NO_EOF &&
++ (stream->cur_format->fcc == V4L2_PIX_FMT_MJPEG ||
++ stream->cur_format->fcc == V4L2_PIX_FMT_JPEG)) {
++ const u8 *packet = data + header_len;
++
++ if (len >= header_len + 2 &&
++ packet[0] == 0xff && packet[1] == JPEG_MARKER_SOI &&
++ buf->bytesused != 0) {
++ buf->state = UVC_BUF_STATE_READY;
++ buf->error = 1;
++ stream->last_fid ^= UVC_STREAM_FID;
++ return -EAGAIN;
++ }
++ }
++
+ stream->last_fid = fid;
+
+- return data[0];
++ return header_len;
+ }
+
+ static inline enum dma_data_direction uvc_stream_dir(
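
The UVC_QUIRK_MJPEG_NO_EOF path above declares the previous frame complete when the first payload bytes after the UVC header form a JPEG SOI marker (0xff 0xd8) and the buffer already holds data. That check, isolated into a compilable helper (buffer layout assumed as in the hunk: one header of header_len bytes, then JPEG payload):

#include <stddef.h>
#include <stdio.h>

#define JPEG_MARKER_SOI 0xd8    /* second byte of the ff d8 start-of-image */

/* Returns 1 when 'data' (one UVC payload packet) starts a new JPEG frame
 * while 'bytesused' says the previous frame is still being assembled. */
static int frame_restarts(const unsigned char *data, size_t len,
                          size_t header_len, size_t bytesused)
{
    const unsigned char *packet = data + header_len;

    return len >= header_len + 2 &&
           packet[0] == 0xff && packet[1] == JPEG_MARKER_SOI &&
           bytesused != 0;
}

int main(void)
{
    /* 2-byte UVC header, then an SOI marker arriving mid-frame. */
    const unsigned char pkt[] = { 0x02, 0x8c, 0xff, 0xd8 };

    printf("%d\n", frame_restarts(pkt, sizeof(pkt), 2, 4096));
    return 0;
}
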
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index 272dc9cf01ee7d..74ac2106f08e2c 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -76,6 +76,7 @@
+ #define UVC_QUIRK_NO_RESET_RESUME 0x00004000
+ #define UVC_QUIRK_DISABLE_AUTOSUSPEND 0x00008000
+ #define UVC_QUIRK_INVALID_DEVICE_SOF 0x00010000
++#define UVC_QUIRK_MJPEG_NO_EOF 0x00020000
+
+ /* Format flags */
+ #define UVC_FMT_FLAG_COMPRESSED 0x00000001
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 6e62415de2e5ec..d5d868cb4edc7b 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -263,6 +263,7 @@
+ #define MSDC_PAD_TUNE_CMD2_SEL BIT(21) /* RW */
+
+ #define PAD_DS_TUNE_DLY_SEL BIT(0) /* RW */
++#define PAD_DS_TUNE_DLY2_SEL BIT(1) /* RW */
+ #define PAD_DS_TUNE_DLY1 GENMASK(6, 2) /* RW */
+ #define PAD_DS_TUNE_DLY2 GENMASK(11, 7) /* RW */
+ #define PAD_DS_TUNE_DLY3 GENMASK(16, 12) /* RW */
+@@ -308,6 +309,7 @@
+
+ /* EMMC50_PAD_DS_TUNE mask */
+ #define PAD_DS_DLY_SEL BIT(16) /* RW */
++#define PAD_DS_DLY2_SEL BIT(15) /* RW */
+ #define PAD_DS_DLY1 GENMASK(14, 10) /* RW */
+ #define PAD_DS_DLY3 GENMASK(4, 0) /* RW */
+
+@@ -2361,13 +2363,23 @@ static int msdc_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ static int msdc_prepare_hs400_tuning(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ struct msdc_host *host = mmc_priv(mmc);
++
+ host->hs400_mode = true;
+
+- if (host->top_base)
+- writel(host->hs400_ds_delay,
+- host->top_base + EMMC50_PAD_DS_TUNE);
+- else
+- writel(host->hs400_ds_delay, host->base + PAD_DS_TUNE);
++ if (host->top_base) {
++ if (host->hs400_ds_dly3)
++ sdr_set_field(host->top_base + EMMC50_PAD_DS_TUNE,
++ PAD_DS_DLY3, host->hs400_ds_dly3);
++ if (host->hs400_ds_delay)
++ writel(host->hs400_ds_delay,
++ host->top_base + EMMC50_PAD_DS_TUNE);
++ } else {
++ if (host->hs400_ds_dly3)
++ sdr_set_field(host->base + PAD_DS_TUNE,
++ PAD_DS_TUNE_DLY3, host->hs400_ds_dly3);
++ if (host->hs400_ds_delay)
++ writel(host->hs400_ds_delay, host->base + PAD_DS_TUNE);
++ }
+ /* hs400 mode must set it to 0 */
+ sdr_clr_bits(host->base + MSDC_PATCH_BIT2, MSDC_PATCH_BIT2_CFGCRCSTS);
+ /* to improve read performance, set outstanding to 2 */
+@@ -2387,14 +2399,11 @@ static int msdc_execute_hs400_tuning(struct mmc_host *mmc, struct mmc_card *card
+ if (host->top_base) {
+ sdr_set_bits(host->top_base + EMMC50_PAD_DS_TUNE,
+ PAD_DS_DLY_SEL);
+- if (host->hs400_ds_dly3)
+- sdr_set_field(host->top_base + EMMC50_PAD_DS_TUNE,
+- PAD_DS_DLY3, host->hs400_ds_dly3);
++ sdr_clr_bits(host->top_base + EMMC50_PAD_DS_TUNE,
++ PAD_DS_DLY2_SEL);
+ } else {
+ sdr_set_bits(host->base + PAD_DS_TUNE, PAD_DS_TUNE_DLY_SEL);
+- if (host->hs400_ds_dly3)
+- sdr_set_field(host->base + PAD_DS_TUNE,
+- PAD_DS_TUNE_DLY3, host->hs400_ds_dly3);
++ sdr_clr_bits(host->base + PAD_DS_TUNE, PAD_DS_TUNE_DLY2_SEL);
+ }
+
+ host->hs400_tuning = true;
+diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
+index 6cba9717a6d87d..399844809bbeaa 100644
+--- a/drivers/net/can/c_can/c_can_platform.c
++++ b/drivers/net/can/c_can/c_can_platform.c
+@@ -385,15 +385,16 @@ static int c_can_plat_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
+ KBUILD_MODNAME, ret);
+- goto exit_free_device;
++ goto exit_pm_runtime;
+ }
+
+ dev_info(&pdev->dev, "%s device registered (regs=%p, irq=%d)\n",
+ KBUILD_MODNAME, priv->base, dev->irq);
+ return 0;
+
+-exit_free_device:
++exit_pm_runtime:
+ pm_runtime_disable(priv->device);
++exit_free_device:
+ free_c_can_dev(dev);
+ exit:
+ dev_err(&pdev->dev, "probe failed\n");
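
The c_can fix adds a dedicated exit_pm_runtime label because register_c_can_dev() fails after pm_runtime has already been enabled: error labels must unwind setup steps in reverse order. The shape of that pattern as a hedged skeleton (the stub functions stand in for the driver calls):

#include <stdio.h>

static int  setup_a(void)      { puts("setup a"); return 0; }
static void undo_a(void)       { puts("undo a"); }
static int  setup_b(void)      { puts("setup b"); return 0; }
static void undo_b(void)       { puts("undo b"); }
static int  register_dev(void) { puts("register"); return -1; /* fails */ }

static int probe(void)
{
    int ret;

    ret = setup_a();
    if (ret)
        goto exit;

    ret = setup_b();            /* e.g. pm_runtime_enable() */
    if (ret)
        goto exit_undo_a;

    ret = register_dev();
    if (ret)
        goto exit_undo_b;       /* NOT exit_undo_a: b must be undone too */

    return 0;

exit_undo_b:
    undo_b();                   /* labels unwind in reverse order */
exit_undo_a:
    undo_a();
exit:
    return ret;
}

int main(void)
{
    return probe() ? 1 : 0;
}
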
+diff --git a/drivers/net/can/ctucanfd/ctucanfd_base.c b/drivers/net/can/ctucanfd/ctucanfd_base.c
+index 64c349fd46007f..f65c1a1e05ccdf 100644
+--- a/drivers/net/can/ctucanfd/ctucanfd_base.c
++++ b/drivers/net/can/ctucanfd/ctucanfd_base.c
+@@ -867,10 +867,12 @@ static void ctucan_err_interrupt(struct net_device *ndev, u32 isr)
+ }
+ break;
+ case CAN_STATE_ERROR_ACTIVE:
+- cf->can_id |= CAN_ERR_CNT;
+- cf->data[1] = CAN_ERR_CRTL_ACTIVE;
+- cf->data[6] = bec.txerr;
+- cf->data[7] = bec.rxerr;
++ if (skb) {
++ cf->can_id |= CAN_ERR_CNT;
++ cf->data[1] = CAN_ERR_CRTL_ACTIVE;
++ cf->data[6] = bec.txerr;
++ cf->data[7] = bec.rxerr;
++ }
+ break;
+ default:
+ netdev_warn(ndev, "unhandled error state (%d:%s)!\n",
+diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c
+index df18c85fc07841..d9a937ba126c3c 100644
+--- a/drivers/net/can/rockchip/rockchip_canfd-core.c
++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c
+@@ -622,7 +622,7 @@ rkcanfd_handle_rx_fifo_overflow_int(struct rkcanfd_priv *priv)
+ netdev_dbg(priv->ndev, "RX-FIFO overflow\n");
+
+ skb = rkcanfd_alloc_can_err_skb(priv, &cf, &timestamp);
+- if (skb)
++ if (!skb)
+ return 0;
+
+ rkcanfd_get_berr_counter_corrected(priv, &bec);
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_devlink.c b/drivers/net/can/usb/etas_es58x/es58x_devlink.c
+index eee20839d96fd4..0d155eb1b9e999 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_devlink.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_devlink.c
+@@ -248,7 +248,11 @@ static int es58x_devlink_info_get(struct devlink *devlink,
+ return ret;
+ }
+
+- return devlink_info_serial_number_put(req, es58x_dev->udev->serial);
++ if (es58x_dev->udev->serial)
++ ret = devlink_info_serial_number_put(req,
++ es58x_dev->udev->serial);
++
++ return ret;
+ }
+
+ const struct devlink_ops es58x_dl_ops = {
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index b4fbb99bfad208..a3d6b8f198a86a 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -2159,8 +2159,13 @@ static int idpf_open(struct net_device *netdev)
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
++ err = idpf_set_real_num_queues(vport);
++ if (err)
++ goto unlock;
++
+ err = idpf_vport_open(vport);
+
++unlock:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 60d15b3e6e2faa..1e0d1f9b07fbcf 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -3008,8 +3008,6 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ return -EINVAL;
+
+ rsc_segments = DIV_ROUND_UP(skb->data_len, rsc_seg_len);
+- if (unlikely(rsc_segments == 1))
+- return 0;
+
+ NAPI_GRO_CB(skb)->count = rsc_segments;
+ skb_shinfo(skb)->gso_size = rsc_seg_len;
+@@ -3072,6 +3070,7 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ idpf_rx_hash(rxq, skb, rx_desc, decoded);
+
+ skb->protocol = eth_type_trans(skb, rxq->netdev);
++ skb_record_rx_queue(skb, rxq->idx);
+
+ if (le16_get_bits(rx_desc->hdrlen_flags,
+ VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
+@@ -3080,8 +3079,6 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ csum_bits = idpf_rx_splitq_extract_csum_bits(rx_desc);
+ idpf_rx_csum(rxq, skb, csum_bits, decoded);
+
+- skb_record_rx_queue(skb, rxq->idx);
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 6e70bca15db1d8..1ec9e8cc99d947 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -1096,6 +1096,7 @@ static int igc_init_empty_frame(struct igc_ring *ring,
+ return -ENOMEM;
+ }
+
++ buffer->type = IGC_TX_BUFFER_TYPE_SKB;
+ buffer->skb = skb;
+ buffer->protocol = 0;
+ buffer->bytecount = skb->len;
+@@ -2707,8 +2708,9 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
+ }
+
+ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+- struct xdp_buff *xdp)
++ struct igc_xdp_buff *ctx)
+ {
++ struct xdp_buff *xdp = &ctx->xdp;
+ unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ unsigned int metasize = xdp->data - xdp->data_meta;
+ struct sk_buff *skb;
+@@ -2727,27 +2729,28 @@ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+ __skb_pull(skb, metasize);
+ }
+
++ if (ctx->rx_ts) {
++ skb_shinfo(skb)->tx_flags |= SKBTX_HW_TSTAMP_NETDEV;
++ skb_hwtstamps(skb)->netdev_data = ctx->rx_ts;
++ }
++
+ return skb;
+ }
+
+ static void igc_dispatch_skb_zc(struct igc_q_vector *q_vector,
+ union igc_adv_rx_desc *desc,
+- struct xdp_buff *xdp,
+- ktime_t timestamp)
++ struct igc_xdp_buff *ctx)
+ {
+ struct igc_ring *ring = q_vector->rx.ring;
+ struct sk_buff *skb;
+
+- skb = igc_construct_skb_zc(ring, xdp);
++ skb = igc_construct_skb_zc(ring, ctx);
+ if (!skb) {
+ ring->rx_stats.alloc_failed++;
+ set_bit(IGC_RING_FLAG_RX_ALLOC_FAILED, &ring->flags);
+ return;
+ }
+
+- if (timestamp)
+- skb_hwtstamps(skb)->hwtstamp = timestamp;
+-
+ if (igc_cleanup_headers(ring, desc, skb))
+ return;
+
+@@ -2783,7 +2786,6 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+ union igc_adv_rx_desc *desc;
+ struct igc_rx_buffer *bi;
+ struct igc_xdp_buff *ctx;
+- ktime_t timestamp = 0;
+ unsigned int size;
+ int res;
+
+@@ -2813,6 +2815,8 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+ */
+ bi->xdp->data_meta += IGC_TS_HDR_LEN;
+ size -= IGC_TS_HDR_LEN;
++ } else {
++ ctx->rx_ts = NULL;
+ }
+
+ bi->xdp->data_end = bi->xdp->data + size;
+@@ -2821,7 +2825,7 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+ res = __igc_xdp_run_prog(adapter, prog, bi->xdp);
+ switch (res) {
+ case IGC_XDP_PASS:
+- igc_dispatch_skb_zc(q_vector, desc, bi->xdp, timestamp);
++ igc_dispatch_skb_zc(q_vector, desc, ctx);
+ fallthrough;
+ case IGC_XDP_CONSUMED:
+ xsk_buff_free(bi->xdp);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+index 2bed8c86b7cfc5..3f64cdbabfa3c1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+@@ -768,7 +768,9 @@ static void __mlxsw_sp_port_get_stats(struct net_device *dev,
+ err = mlxsw_sp_get_hw_stats_by_group(&hw_stats, &len, grp);
+ if (err)
+ return;
+- mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl);
++ err = mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl);
++ if (err)
++ return;
+ for (i = 0; i < len; i++) {
+ data[data_index + i] = hw_stats[i].getter(ppcnt_pl);
+ if (!hw_stats[i].cells_bytes)
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 3c0d067c360992..3e090f87f97ebd 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -585,21 +585,30 @@ static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
+ static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
+ {
+ struct am65_cpsw_tx_chn *tx_chn = data;
++ enum am65_cpsw_tx_buf_type buf_type;
+ struct cppi5_host_desc_t *desc_tx;
++ struct xdp_frame *xdpf;
+ struct sk_buff *skb;
+ void **swdata;
+
+ desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
+ swdata = cppi5_hdesc_get_swdata(desc_tx);
+- skb = *(swdata);
+- am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
++ buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
++ if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
++ skb = *(swdata);
++ dev_kfree_skb_any(skb);
++ } else {
++ xdpf = *(swdata);
++ xdp_return_frame(xdpf);
++ }
+
+- dev_kfree_skb_any(skb);
++ am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+ }
+
+ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
+ struct net_device *ndev,
+- unsigned int len)
++ unsigned int len,
++ unsigned int headroom)
+ {
+ struct sk_buff *skb;
+
+@@ -609,7 +618,7 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
+ if (unlikely(!skb))
+ return NULL;
+
+- skb_reserve(skb, AM65_CPSW_HEADROOM);
++ skb_reserve(skb, headroom);
+ skb->dev = ndev;
+
+ return skb;
+@@ -1191,16 +1200,8 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
+ dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info);
+
+ dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
+-
+ k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
+
+- skb = am65_cpsw_build_skb(page_addr, ndev,
+- AM65_CPSW_MAX_PACKET_SIZE);
+- if (unlikely(!skb)) {
+- new_page = page;
+- goto requeue;
+- }
+-
+ if (port->xdp_prog) {
+ xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]);
+ xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
+@@ -1210,9 +1211,16 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
+ if (*xdp_state != AM65_CPSW_XDP_PASS)
+ goto allocate;
+
+- /* Compute additional headroom to be reserved */
+- headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
+- skb_reserve(skb, headroom);
++ headroom = xdp.data - xdp.data_hard_start;
++ } else {
++ headroom = AM65_CPSW_HEADROOM;
++ }
++
++ skb = am65_cpsw_build_skb(page_addr, ndev,
++ AM65_CPSW_MAX_PACKET_SIZE, headroom);
++ if (unlikely(!skb)) {
++ new_page = page;
++ goto requeue;
+ }
+
+ ndev_priv = netdev_priv(ndev);
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index 3612b0633bd177..88187dd4eb2d40 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -39,10 +39,14 @@ static ssize_t nsim_dbg_netdev_ops_read(struct file *filp,
+ if (!sap->used)
+ continue;
+
+- p += scnprintf(p, bufsize - (p - buf),
+- "sa[%i] %cx ipaddr=0x%08x %08x %08x %08x\n",
+- i, (sap->rx ? 'r' : 't'), sap->ipaddr[0],
+- sap->ipaddr[1], sap->ipaddr[2], sap->ipaddr[3]);
++ if (sap->xs->props.family == AF_INET6)
++ p += scnprintf(p, bufsize - (p - buf),
++ "sa[%i] %cx ipaddr=%pI6c\n",
++ i, (sap->rx ? 'r' : 't'), &sap->ipaddr);
++ else
++ p += scnprintf(p, bufsize - (p - buf),
++ "sa[%i] %cx ipaddr=%pI4\n",
++ i, (sap->rx ? 'r' : 't'), &sap->ipaddr[3]);
+ p += scnprintf(p, bufsize - (p - buf),
+ "sa[%i] spi=0x%08x proto=0x%x salt=0x%08x crypt=%d\n",
+ i, be32_to_cpu(sap->xs->id.spi),
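
The netdevsim hunk switches from dumping four raw u32 words to the kernel's %pI6c/%pI4 printk extensions, selected by xs->props.family, so addresses print in standard notation (and IPv4 prints only the last word of the 4-word buffer, &ipaddr[3]). Userspace has the same facility in inet_ntop(); a sketch of the family switch:

#include <arpa/inet.h>
#include <stdio.h>

static void print_sa(int family, const unsigned char *ipaddr /* 16 bytes */)
{
    char buf[INET6_ADDRSTRLEN];

    if (family == AF_INET6)
        /* kernel: %pI6c on the whole 128-bit buffer */
        inet_ntop(AF_INET6, ipaddr, buf, sizeof(buf));
    else
        /* kernel: %pI4 on the last 32-bit word (offset 12) */
        inet_ntop(AF_INET, ipaddr + 12, buf, sizeof(buf));

    printf("ipaddr=%s\n", buf);
}

int main(void)
{
    unsigned char v4mapped[16] = { 0 };

    v4mapped[12] = 192; v4mapped[13] = 0; v4mapped[14] = 2; v4mapped[15] = 1;
    print_sa(AF_INET, v4mapped);
    return 0;
}
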
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 7f4ef219eee44f..6cfafaac1b4fb6 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -2640,7 +2640,9 @@ int team_nl_options_set_doit(struct sk_buff *skb, struct genl_info *info)
+ ctx.data.u32_val = nla_get_u32(attr_data);
+ break;
+ case TEAM_OPTION_TYPE_STRING:
+- if (nla_len(attr_data) > TEAM_STRING_MAX_LEN) {
++ if (nla_len(attr_data) > TEAM_STRING_MAX_LEN ||
++ !memchr(nla_data(attr_data), '\0',
++ nla_len(attr_data))) {
+ err = -EINVAL;
+ goto team_put;
+ }
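
The team_core string option arrives from userspace via netlink, so its contents are untrusted; the added memchr() call rejects buffers that carry no NUL terminator before the kernel treats them as C strings. The validation, standalone:

#include <stdio.h>
#include <string.h>

#define TEAM_STRING_MAX_LEN 32

/* Returns 0 when 'data' is a safely NUL-terminated string attribute. */
static int validate_string_attr(const void *data, size_t len)
{
    if (len > TEAM_STRING_MAX_LEN || !memchr(data, '\0', len))
        return -1;              /* would be -EINVAL in the kernel */
    return 0;
}

int main(void)
{
    const char good[] = "roundrobin";             /* includes the NUL */
    const char bad[4] = { 'a', 'b', 'c', 'd' };   /* no terminator */

    printf("%d %d\n", validate_string_attr(good, sizeof(good)),
           validate_string_attr(bad, sizeof(bad)));
    return 0;
}
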
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 6e9a3795846aa3..5e7cdd1b806fbd 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -2871,8 +2871,11 @@ static int vxlan_init(struct net_device *dev)
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ int err;
+
+- if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+- vxlan_vnigroup_init(vxlan);
++ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) {
++ err = vxlan_vnigroup_init(vxlan);
++ if (err)
++ return err;
++ }
+
+ err = gro_cells_init(&vxlan->gro_cells, dev);
+ if (err)
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 2cd3ff9b0164c8..a6ba97949440e4 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -4681,6 +4681,22 @@ static struct ath12k_reg_rule
+ return reg_rule_ptr;
+ }
+
++static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_params *rule,
++ u32 num_reg_rules)
++{
++ u8 num_invalid_5ghz_rules = 0;
++ u32 count, start_freq;
++
++ for (count = 0; count < num_reg_rules; count++) {
++ start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ);
++
++ if (start_freq >= ATH12K_MIN_6G_FREQ)
++ num_invalid_5ghz_rules++;
++ }
++
++ return num_invalid_5ghz_rules;
++}
++
+ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ struct sk_buff *skb,
+ struct ath12k_reg_info *reg_info)
+@@ -4691,6 +4707,7 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ u32 num_2g_reg_rules, num_5g_reg_rules;
+ u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
++ u8 num_invalid_5ghz_ext_rules;
+ u32 total_reg_rules = 0;
+ int ret, i, j;
+
+@@ -4784,20 +4801,6 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+
+ memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN);
+
+- /* FIXME: Currently FW includes 6G reg rule also in 5G rule
+- * list for country US.
+- * Having same 6G reg rule in 5G and 6G rules list causes
+- * intersect check to be true, and same rules will be shown
+- * multiple times in iw cmd. So added hack below to avoid
+- * parsing 6G rule from 5G reg rule list, and this can be
+- * removed later, after FW updates to remove 6G reg rule
+- * from 5G rules list.
+- */
+- if (memcmp(reg_info->alpha2, "US", 2) == 0) {
+- reg_info->num_5g_reg_rules = REG_US_5G_NUM_REG_RULES;
+- num_5g_reg_rules = reg_info->num_5g_reg_rules;
+- }
+-
+ reg_info->dfs_region = le32_to_cpu(ev->dfs_region);
+ reg_info->phybitmap = le32_to_cpu(ev->phybitmap);
+ reg_info->num_phy = le32_to_cpu(ev->num_phy);
+@@ -4900,8 +4903,29 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ }
+ }
+
++ ext_wmi_reg_rule += num_2g_reg_rules;
++
++ /* Firmware might include a 6 GHz reg rule in the 5 GHz rule
++ * list for a few countries, along with a separate 6 GHz rule.
++ * Having the same 6 GHz reg rule in both the 5 GHz and 6 GHz
++ * rule lists causes the intersect check to be true, and the
++ * same rules will be shown multiple times in the iw cmd.
++ * Hence, avoid parsing 6 GHz rules from the 5 GHz reg rule list.
++ */
++ num_invalid_5ghz_ext_rules = ath12k_wmi_ignore_num_extra_rules(ext_wmi_reg_rule,
++ num_5g_reg_rules);
++
++ if (num_invalid_5ghz_ext_rules) {
++ ath12k_dbg(ab, ATH12K_DBG_WMI,
++ "CC: %s 5 GHz reg rules number %d from fw, %d number of invalid 5 GHz rules",
++ reg_info->alpha2, reg_info->num_5g_reg_rules,
++ num_invalid_5ghz_ext_rules);
++
++ num_5g_reg_rules = num_5g_reg_rules - num_invalid_5ghz_ext_rules;
++ reg_info->num_5g_reg_rules = num_5g_reg_rules;
++ }
++
+ if (num_5g_reg_rules) {
+- ext_wmi_reg_rule += num_2g_reg_rules;
+ reg_info->reg_rules_5g_ptr =
+ create_ext_reg_rules_from_wmi(num_5g_reg_rules,
+ ext_wmi_reg_rule);
+@@ -4913,7 +4937,12 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ }
+ }
+
+- ext_wmi_reg_rule += num_5g_reg_rules;
++ /* We have adjusted the number of 5 GHz reg rules above, but the
++ * pointer into ext_wmi_reg_rule still has to skip that many rules.
++ *
++ * NOTE: num_invalid_5ghz_ext_rules will be 0 in all other cases.
++ */
++ ext_wmi_reg_rule += (num_5g_reg_rules + num_invalid_5ghz_ext_rules);
+
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ reg_info->reg_rules_6g_ap_ptr[i] =
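
The ath12k change replaces the old US-only hack with a generic pass: count the 5 GHz entries whose start frequency really belongs to the 6 GHz band, shrink num_5g_reg_rules accordingly, but advance the rule pointer by the original, unshrunk count so parsing of the following bands stays aligned. The counting-and-skip bookkeeping on a plain array (ATH12K_MIN_6G_FREQ stand-in value assumed here):

#include <stdio.h>

#define MIN_6G_FREQ 5925        /* MHz; stands in for ATH12K_MIN_6G_FREQ */

struct rule { unsigned int start_freq; };

static unsigned int count_invalid_5ghz(const struct rule *r, unsigned int n)
{
    unsigned int i, invalid = 0;

    for (i = 0; i < n; i++)
        if (r[i].start_freq >= MIN_6G_FREQ)
            invalid++;
    return invalid;
}

int main(void)
{
    struct rule rules[] = { {5170}, {5490}, {5925}, {6425} };
    unsigned int n = 4;
    unsigned int invalid = count_invalid_5ghz(rules, n);
    const struct rule *p = rules;

    /* parse only the valid ones... */
    printf("valid 5 GHz rules: %u\n", n - invalid);
    /* ...but advance past ALL of them, or the next band misparses */
    p += (n - invalid) + invalid;
    printf("skipped %td entries\n", p - rules);
    return 0;
}
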
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 6a913f9b831580..b495cdea7111c3 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3943,7 +3943,6 @@ struct ath12k_wmi_eht_rate_set_params {
+ #define MAX_REG_RULES 10
+ #define REG_ALPHA2_LEN 2
+ #define MAX_6G_REG_RULES 5
+-#define REG_US_5G_NUM_REG_RULES 4
+
+ enum wmi_start_event_param {
+ WMI_VDEV_START_RESP_EVENT = 0,
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index 5aef7fa378788c..0ac84f968994b4 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -2492,7 +2492,7 @@ static int rtw89_pci_dphy_delay(struct rtw89_dev *rtwdev)
+ PCIE_DPHY_DLY_25US, PCIE_PHY_GEN1);
+ }
+
+-static void rtw89_pci_power_wake(struct rtw89_dev *rtwdev, bool pwr_up)
++static void rtw89_pci_power_wake_ax(struct rtw89_dev *rtwdev, bool pwr_up)
+ {
+ if (pwr_up)
+ rtw89_write32_set(rtwdev, R_AX_HCI_OPT_CTRL, BIT_WAKE_CTRL);
+@@ -2799,6 +2799,8 @@ static int rtw89_pci_ops_deinit(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_pci_info *info = rtwdev->pci_info;
+
++ rtw89_pci_power_wake(rtwdev, false);
++
+ if (rtwdev->chip->chip_id == RTL8852A) {
+ /* ltr sw trigger */
+ rtw89_write32_set(rtwdev, R_AX_LTR_CTRL_0, B_AX_APP_LTR_IDLE);
+@@ -2841,7 +2843,7 @@ static int rtw89_pci_ops_mac_pre_init_ax(struct rtw89_dev *rtwdev)
+ return ret;
+ }
+
+- rtw89_pci_power_wake(rtwdev, true);
++ rtw89_pci_power_wake_ax(rtwdev, true);
+ rtw89_pci_autoload_hang(rtwdev);
+ rtw89_pci_l12_vmain(rtwdev);
+ rtw89_pci_gen2_force_ib(rtwdev);
+@@ -2886,6 +2888,13 @@ static int rtw89_pci_ops_mac_pre_init_ax(struct rtw89_dev *rtwdev)
+ return 0;
+ }
+
++static int rtw89_pci_ops_mac_pre_deinit_ax(struct rtw89_dev *rtwdev)
++{
++ rtw89_pci_power_wake_ax(rtwdev, false);
++
++ return 0;
++}
++
+ int rtw89_pci_ltr_set(struct rtw89_dev *rtwdev, bool en)
+ {
+ u32 val;
+@@ -4264,7 +4273,7 @@ const struct rtw89_pci_gen_def rtw89_pci_gen_ax = {
+ B_AX_RDU_INT},
+
+ .mac_pre_init = rtw89_pci_ops_mac_pre_init_ax,
+- .mac_pre_deinit = NULL,
++ .mac_pre_deinit = rtw89_pci_ops_mac_pre_deinit_ax,
+ .mac_post_init = rtw89_pci_ops_mac_post_init_ax,
+
+ .clr_idx_all = rtw89_pci_clr_idx_all_ax,
+@@ -4280,6 +4289,8 @@ const struct rtw89_pci_gen_def rtw89_pci_gen_ax = {
+ .aspm_set = rtw89_pci_aspm_set_ax,
+ .clkreq_set = rtw89_pci_clkreq_set_ax,
+ .l1ss_set = rtw89_pci_l1ss_set_ax,
++
++ .power_wake = rtw89_pci_power_wake_ax,
+ };
+ EXPORT_SYMBOL(rtw89_pci_gen_ax);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.h b/drivers/net/wireless/realtek/rtw89/pci.h
+index 48c3ab735db2a7..0ea4dcb84dd862 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.h
++++ b/drivers/net/wireless/realtek/rtw89/pci.h
+@@ -1276,6 +1276,8 @@ struct rtw89_pci_gen_def {
+ void (*aspm_set)(struct rtw89_dev *rtwdev, bool enable);
+ void (*clkreq_set)(struct rtw89_dev *rtwdev, bool enable);
+ void (*l1ss_set)(struct rtw89_dev *rtwdev, bool enable);
++
++ void (*power_wake)(struct rtw89_dev *rtwdev, bool pwr_up);
+ };
+
+ struct rtw89_pci_info {
+@@ -1766,4 +1768,13 @@ static inline int rtw89_pci_poll_txdma_ch_idle(struct rtw89_dev *rtwdev)
+
+ return gen_def->poll_txdma_ch_idle(rtwdev);
+ }
++
++static inline void rtw89_pci_power_wake(struct rtw89_dev *rtwdev, bool pwr_up)
++{
++ const struct rtw89_pci_info *info = rtwdev->pci_info;
++ const struct rtw89_pci_gen_def *gen_def = info->gen_def;
++
++ gen_def->power_wake(rtwdev, pwr_up);
++}
++
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/pci_be.c b/drivers/net/wireless/realtek/rtw89/pci_be.c
+index 7cc32822296528..2f0d9ff25ba520 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci_be.c
++++ b/drivers/net/wireless/realtek/rtw89/pci_be.c
+@@ -614,5 +614,7 @@ const struct rtw89_pci_gen_def rtw89_pci_gen_be = {
+ .aspm_set = rtw89_pci_aspm_set_be,
+ .clkreq_set = rtw89_pci_clkreq_set_be,
+ .l1ss_set = rtw89_pci_l1ss_set_be,
++
++ .power_wake = _patch_pcie_power_wake_be,
+ };
+ EXPORT_SYMBOL(rtw89_pci_gen_be);
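
With this change, power_wake joins the rtw89_pci_gen_def ops table, so AX and BE chips each supply their own implementation behind one inline wrapper instead of the AX routine being called unconditionally. The per-generation dispatch pattern in miniature:

#include <stdio.h>

struct dev;

struct gen_def {
    void (*power_wake)(struct dev *d, int pwr_up);
};

struct dev {
    const struct gen_def *gen;  /* selected per chip generation */
};

static void power_wake_ax(struct dev *d, int up) { printf("ax: %d\n", up); }
static void power_wake_be(struct dev *d, int up) { printf("be: %d\n", up); }

static const struct gen_def gen_ax = { .power_wake = power_wake_ax };
static const struct gen_def gen_be = { .power_wake = power_wake_be };

/* Mirrors the inline wrapper the hunk adds to pci.h. */
static void power_wake(struct dev *d, int pwr_up)
{
    d->gen->power_wake(d, pwr_up);
}

int main(void)
{
    struct dev a = { &gen_ax }, b = { &gen_be };

    power_wake(&a, 1);
    power_wake(&b, 0);
    return 0;
}
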
+diff --git a/drivers/parport/parport_serial.c b/drivers/parport/parport_serial.c
+index 3644997a834255..24d4f3a3ec3d0e 100644
+--- a/drivers/parport/parport_serial.c
++++ b/drivers/parport/parport_serial.c
+@@ -266,10 +266,14 @@ static struct pci_device_id parport_serial_pci_tbl[] = {
+ { 0x1409, 0x7168, 0x1409, 0xd079, 0, 0, timedia_9079c },
+
+ /* WCH CARDS */
+- { 0x4348, 0x5053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, wch_ch353_1s1p},
+- { 0x4348, 0x7053, 0x4348, 0x3253, 0, 0, wch_ch353_2s1p},
+- { 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382_0s1p},
+- { 0x1c00, 0x3250, 0x1c00, 0x3250, 0, 0, wch_ch382_2s1p},
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_1S1P,
++ PCI_ANY_ID, PCI_ANY_ID, 0, 0, wch_ch353_1s1p },
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_2S1P,
++ 0x4348, 0x3253, 0, 0, wch_ch353_2s1p },
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH382_0S1P,
++ 0x1c00, 0x3050, 0, 0, wch_ch382_0s1p },
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH382_2S1P,
++ 0x1c00, 0x3250, 0, 0, wch_ch382_2s1p },
+
+ /* BrainBoxes PX272/PX306 MIO card */
+ { PCI_VENDOR_ID_INTASHIELD, 0x4100,
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 8103bc24a54ea4..064067d9c8b529 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5522,7 +5522,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap);
+ * AMD Matisse USB 3.0 Host Controller 0x149c
+ * Intel 82579LM Gigabit Ethernet Controller 0x1502
+ * Intel 82579V Gigabit Ethernet Controller 0x1503
+- *
++ * Mediatek MT7922 802.11ax PCI Express Wireless Network Adapter
+ */
+ static void quirk_no_flr(struct pci_dev *dev)
+ {
+@@ -5534,6 +5534,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_MEDIATEK, 0x0616, quirk_no_flr);
+
+ /* FLR may cause the SolidRun SNET DPU (rev 0x1) to hang */
+ static void quirk_no_flr_snet(struct pci_dev *dev)
+@@ -5985,6 +5986,17 @@ SWITCHTEC_QUIRK(0x5552); /* PAXA 52XG5 */
+ SWITCHTEC_QUIRK(0x5536); /* PAXA 36XG5 */
+ SWITCHTEC_QUIRK(0x5528); /* PAXA 28XG5 */
+
++#define SWITCHTEC_PCI100X_QUIRK(vid) \
++ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_EFAR, vid, \
++ PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias)
++SWITCHTEC_PCI100X_QUIRK(0x1001); /* PCI1001XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1002); /* PCI1002XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1003); /* PCI1003XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1004); /* PCI1004XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1005); /* PCI1005XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1006); /* PCI1006XG4 */
++
+ /*
+ * The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
+ * These IDs are used to forward responses to the originator on the other
+@@ -6254,6 +6266,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2b, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2d, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa72f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa73f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
+ #endif
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index c7e1089ffdafcb..b14dfab04d846c 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1739,6 +1739,26 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
+ .driver_data = gen, \
+ }
+
++#define SWITCHTEC_PCI100X_DEVICE(device_id, gen) \
++ { \
++ .vendor = PCI_VENDOR_ID_EFAR, \
++ .device = device_id, \
++ .subvendor = PCI_ANY_ID, \
++ .subdevice = PCI_ANY_ID, \
++ .class = (PCI_CLASS_MEMORY_OTHER << 8), \
++ .class_mask = 0xFFFFFFFF, \
++ .driver_data = gen, \
++ }, \
++ { \
++ .vendor = PCI_VENDOR_ID_EFAR, \
++ .device = device_id, \
++ .subvendor = PCI_ANY_ID, \
++ .subdevice = PCI_ANY_ID, \
++ .class = (PCI_CLASS_BRIDGE_OTHER << 8), \
++ .class_mask = 0xFFFFFFFF, \
++ .driver_data = gen, \
++ }
++
+ static const struct pci_device_id switchtec_pci_tbl[] = {
+ SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3), /* PFX 24xG3 */
+ SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3), /* PFX 32xG3 */
+@@ -1833,6 +1853,12 @@ static const struct pci_device_id switchtec_pci_tbl[] = {
+ SWITCHTEC_PCI_DEVICE(0x5552, SWITCHTEC_GEN5), /* PAXA 52XG5 */
+ SWITCHTEC_PCI_DEVICE(0x5536, SWITCHTEC_GEN5), /* PAXA 36XG5 */
+ SWITCHTEC_PCI_DEVICE(0x5528, SWITCHTEC_GEN5), /* PAXA 28XG5 */
++ SWITCHTEC_PCI100X_DEVICE(0x1001, SWITCHTEC_GEN4), /* PCI1001 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1002, SWITCHTEC_GEN4), /* PCI1002 12XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1003, SWITCHTEC_GEN4), /* PCI1003 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1004, SWITCHTEC_GEN4), /* PCI1004 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1005, SWITCHTEC_GEN4), /* PCI1005 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1006, SWITCHTEC_GEN4), /* PCI1006 16XG4 */
+ {0}
+ };
+ MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index 0b13d7f17b3256..42547f64453e85 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -89,12 +89,12 @@ static void pinconf_generic_dump_one(struct pinctrl_dev *pctldev,
+ seq_puts(s, items[i].display);
+ /* Print unit if available */
+ if (items[i].has_arg) {
+- seq_printf(s, " (0x%x",
+- pinconf_to_config_argument(config));
++ u32 val = pinconf_to_config_argument(config);
++
+ if (items[i].format)
+- seq_printf(s, " %s)", items[i].format);
++ seq_printf(s, " (%u %s)", val, items[i].format);
+ else
+- seq_puts(s, ")");
++ seq_printf(s, " (0x%x)", val);
+ }
+ }
+ }
+diff --git a/drivers/pinctrl/pinctrl-cy8c95x0.c b/drivers/pinctrl/pinctrl-cy8c95x0.c
+index 5096ccdd459ea4..7a6a1434ae7f4b 100644
+--- a/drivers/pinctrl/pinctrl-cy8c95x0.c
++++ b/drivers/pinctrl/pinctrl-cy8c95x0.c
+@@ -42,7 +42,7 @@
+ #define CY8C95X0_PORTSEL 0x18
+ /* Port settings, write PORTSEL first */
+ #define CY8C95X0_INTMASK 0x19
+-#define CY8C95X0_PWMSEL 0x1A
++#define CY8C95X0_SELPWM 0x1A
+ #define CY8C95X0_INVERT 0x1B
+ #define CY8C95X0_DIRECTION 0x1C
+ /* Drive mode register change state on writing '1' */
+@@ -330,14 +330,14 @@ static int cypress_get_pin_mask(struct cy8c95x0_pinctrl *chip, unsigned int pin)
+ static bool cy8c95x0_readable_register(struct device *dev, unsigned int reg)
+ {
+ /*
+- * Only 12 registers are present per port (see Table 6 in the
+- * datasheet).
++ * Only 12 registers are present per port (see Table 6 in the datasheet).
+ */
+- if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) < 12)
+- return true;
++ if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) >= 12)
++ return false;
+
+ switch (reg) {
+ case 0x24 ... 0x27:
++ case 0x31 ... 0x3f:
+ return false;
+ default:
+ return true;
+@@ -346,8 +346,11 @@ static bool cy8c95x0_readable_register(struct device *dev, unsigned int reg)
+
+ static bool cy8c95x0_writeable_register(struct device *dev, unsigned int reg)
+ {
+- if (reg >= CY8C95X0_VIRTUAL)
+- return true;
++ /*
++ * Only 12 registers are present per port (see Table 6 in the datasheet).
++ */
++ if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) >= 12)
++ return false;
+
+ switch (reg) {
+ case CY8C95X0_INPUT_(0) ... CY8C95X0_INPUT_(7):
+@@ -355,6 +358,7 @@ static bool cy8c95x0_writeable_register(struct device *dev, unsigned int reg)
+ case CY8C95X0_DEVID:
+ return false;
+ case 0x24 ... 0x27:
++ case 0x31 ... 0x3f:
+ return false;
+ default:
+ return true;
+@@ -367,8 +371,8 @@ static bool cy8c95x0_volatile_register(struct device *dev, unsigned int reg)
+ case CY8C95X0_INPUT_(0) ... CY8C95X0_INPUT_(7):
+ case CY8C95X0_INTSTATUS_(0) ... CY8C95X0_INTSTATUS_(7):
+ case CY8C95X0_INTMASK:
++ case CY8C95X0_SELPWM:
+ case CY8C95X0_INVERT:
+- case CY8C95X0_PWMSEL:
+ case CY8C95X0_DIRECTION:
+ case CY8C95X0_DRV_PU:
+ case CY8C95X0_DRV_PD:
+@@ -397,7 +401,7 @@ static bool cy8c95x0_muxed_register(unsigned int reg)
+ {
+ switch (reg) {
+ case CY8C95X0_INTMASK:
+- case CY8C95X0_PWMSEL:
++ case CY8C95X0_SELPWM:
+ case CY8C95X0_INVERT:
+ case CY8C95X0_DIRECTION:
+ case CY8C95X0_DRV_PU:
+@@ -468,7 +472,11 @@ static const struct regmap_config cy8c9520_i2c_regmap = {
+ .max_register = 0, /* Updated at runtime */
+ .num_reg_defaults_raw = 0, /* Updated at runtime */
+ .use_single_read = true, /* Workaround for regcache bug */
++#if IS_ENABLED(CONFIG_DEBUG_PINCTRL)
++ .disable_locking = false,
++#else
+ .disable_locking = true,
++#endif
+ };
+
+ static inline int cy8c95x0_regmap_update_bits_base(struct cy8c95x0_pinctrl *chip,
+@@ -799,7 +807,7 @@ static int cy8c95x0_gpio_get_pincfg(struct cy8c95x0_pinctrl *chip,
+ reg = CY8C95X0_DIRECTION;
+ break;
+ case PIN_CONFIG_MODE_PWM:
+- reg = CY8C95X0_PWMSEL;
++ reg = CY8C95X0_SELPWM;
+ break;
+ case PIN_CONFIG_OUTPUT:
+ reg = CY8C95X0_OUTPUT;
+@@ -881,7 +889,7 @@ static int cy8c95x0_gpio_set_pincfg(struct cy8c95x0_pinctrl *chip,
+ reg = CY8C95X0_DRV_PP_FAST;
+ break;
+ case PIN_CONFIG_MODE_PWM:
+- reg = CY8C95X0_PWMSEL;
++ reg = CY8C95X0_SELPWM;
+ break;
+ case PIN_CONFIG_OUTPUT_ENABLE:
+ ret = cy8c95x0_pinmux_direction(chip, off, !arg);
+@@ -1171,7 +1179,7 @@ static void cy8c95x0_pin_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *
+ bitmap_zero(mask, MAX_LINE);
+ __set_bit(pin, mask);
+
+- if (cy8c95x0_read_regs_mask(chip, CY8C95X0_PWMSEL, pwm, mask)) {
++ if (cy8c95x0_read_regs_mask(chip, CY8C95X0_SELPWM, pwm, mask)) {
+ seq_puts(s, "not available");
+ return;
+ }
+@@ -1216,7 +1224,7 @@ static int cy8c95x0_set_mode(struct cy8c95x0_pinctrl *chip, unsigned int off, bo
+ u8 port = cypress_get_port(chip, off);
+ u8 bit = cypress_get_pin_mask(chip, off);
+
+- return cy8c95x0_regmap_write_bits(chip, CY8C95X0_PWMSEL, port, bit, mode ? bit : 0);
++ return cy8c95x0_regmap_write_bits(chip, CY8C95X0_SELPWM, port, bit, mode ? bit : 0);
+ }
+
+ static int cy8c95x0_pinmux_mode(struct cy8c95x0_pinctrl *chip,
+@@ -1365,7 +1373,7 @@ static int cy8c95x0_irq_setup(struct cy8c95x0_pinctrl *chip, int irq)
+
+ ret = devm_request_threaded_irq(chip->dev, irq,
+ NULL, cy8c95x0_irq_handler,
+- IRQF_ONESHOT | IRQF_SHARED | IRQF_TRIGGER_HIGH,
++ IRQF_ONESHOT | IRQF_SHARED,
+ dev_name(chip->dev), chip);
+ if (ret) {
+ dev_err(chip->dev, "failed to request irq %d\n", irq);
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra30.c b/drivers/soc/tegra/fuse/fuse-tegra30.c
+index eb14e5ff5a0aa8..e24ab5f7d2bf10 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra30.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra30.c
+@@ -647,15 +647,20 @@ static const struct nvmem_cell_lookup tegra234_fuse_lookups[] = {
+ };
+
+ static const struct nvmem_keepout tegra234_fuse_keepouts[] = {
+- { .start = 0x01c, .end = 0x0c8 },
+- { .start = 0x12c, .end = 0x184 },
++ { .start = 0x01c, .end = 0x064 },
++ { .start = 0x084, .end = 0x0a0 },
++ { .start = 0x0a4, .end = 0x0c8 },
++ { .start = 0x12c, .end = 0x164 },
++ { .start = 0x16c, .end = 0x184 },
+ { .start = 0x190, .end = 0x198 },
+ { .start = 0x1a0, .end = 0x204 },
+- { .start = 0x21c, .end = 0x250 },
+- { .start = 0x25c, .end = 0x2f0 },
++ { .start = 0x21c, .end = 0x2f0 },
+ { .start = 0x310, .end = 0x3d8 },
+- { .start = 0x400, .end = 0x4f0 },
+- { .start = 0x4f8, .end = 0x7e8 },
++ { .start = 0x400, .end = 0x420 },
++ { .start = 0x444, .end = 0x490 },
++ { .start = 0x4bc, .end = 0x4f0 },
++ { .start = 0x4f8, .end = 0x54c },
++ { .start = 0x57c, .end = 0x7e8 },
+ { .start = 0x8d0, .end = 0x8d8 },
+ { .start = 0xacc, .end = 0xf00 }
+ };
+diff --git a/drivers/spi/spi-sn-f-ospi.c b/drivers/spi/spi-sn-f-ospi.c
+index a7c3b3923b4af7..fd8c8eb37d01d6 100644
+--- a/drivers/spi/spi-sn-f-ospi.c
++++ b/drivers/spi/spi-sn-f-ospi.c
+@@ -116,6 +116,9 @@ struct f_ospi {
+
+ static u32 f_ospi_get_dummy_cycle(const struct spi_mem_op *op)
+ {
++ if (!op->dummy.nbytes)
++ return 0;
++
+ return (op->dummy.nbytes * 8) / op->dummy.buswidth;
+ }
+
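As an aside, the early return in this hunk guards a division: when an op has no dummy phase, op->dummy.buswidth can legitimately be zero, so evaluating (nbytes * 8) / buswidth unconditionally would divide by zero. A minimal standalone sketch of the same computation (hypothetical struct and values, not the driver's types):

#include <stdio.h>

/* Hypothetical stand-in for the dummy phase of a struct spi_mem_op. */
struct dummy_phase {
	unsigned int nbytes;   /* 0 when the op has no dummy phase */
	unsigned int buswidth; /* may also be 0 in that case */
};

static unsigned int get_dummy_cycles(const struct dummy_phase *d)
{
	if (!d->nbytes) /* guard first: avoids a divide-by-zero */
		return 0;

	return (d->nbytes * 8) / d->buswidth;
}

int main(void)
{
	struct dummy_phase none = { 0, 0 };
	struct dummy_phase quad = { 4, 4 }; /* 4 dummy bytes on 4 lines */

	/* Prints "0 8": no dummy phase vs. 8 dummy clock cycles. */
	printf("%u %u\n", get_dummy_cycles(&none), get_dummy_cycles(&quad));
	return 0;
}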
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index e5310c65cf52b3..10a706fe4b247d 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -374,6 +374,7 @@ static inline int is_omap1510_8250(struct uart_8250_port *pt)
+
+ #ifdef CONFIG_SERIAL_8250_DMA
+ extern int serial8250_tx_dma(struct uart_8250_port *);
++extern void serial8250_tx_dma_flush(struct uart_8250_port *);
+ extern int serial8250_rx_dma(struct uart_8250_port *);
+ extern void serial8250_rx_dma_flush(struct uart_8250_port *);
+ extern int serial8250_request_dma(struct uart_8250_port *);
+@@ -406,6 +407,7 @@ static inline int serial8250_tx_dma(struct uart_8250_port *p)
+ {
+ return -1;
+ }
++static inline void serial8250_tx_dma_flush(struct uart_8250_port *p) { }
+ static inline int serial8250_rx_dma(struct uart_8250_port *p)
+ {
+ return -1;
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index d215c494ee24c1..f245a84f4a508d 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -149,6 +149,22 @@ int serial8250_tx_dma(struct uart_8250_port *p)
+ return ret;
+ }
+
++void serial8250_tx_dma_flush(struct uart_8250_port *p)
++{
++ struct uart_8250_dma *dma = p->dma;
++
++ if (!dma->tx_running)
++ return;
++
++ /*
++ * kfifo_reset() has been called by the serial core, avoid
++ * advancing and underflowing in __dma_tx_complete().
++ */
++ dma->tx_size = 0;
++
++	dmaengine_terminate_async(dma->txchan);
++}
++
+ int serial8250_rx_dma(struct uart_8250_port *p)
+ {
+ struct uart_8250_dma *dma = p->dma;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 6709b6a5f3011d..de6d90bf0d70a2 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -64,23 +64,17 @@
+ #define PCIE_DEVICE_ID_NEO_2_OX_IBM 0x00F6
+ #define PCI_DEVICE_ID_PLX_CRONYX_OMEGA 0xc001
+ #define PCI_DEVICE_ID_INTEL_PATSBURG_KT 0x1d3d
+-#define PCI_VENDOR_ID_WCH 0x4348
+-#define PCI_DEVICE_ID_WCH_CH352_2S 0x3253
+-#define PCI_DEVICE_ID_WCH_CH353_4S 0x3453
+-#define PCI_DEVICE_ID_WCH_CH353_2S1PF 0x5046
+-#define PCI_DEVICE_ID_WCH_CH353_1S1P 0x5053
+-#define PCI_DEVICE_ID_WCH_CH353_2S1P 0x7053
+-#define PCI_DEVICE_ID_WCH_CH355_4S 0x7173
++
++#define PCI_DEVICE_ID_WCHCN_CH352_2S 0x3253
++#define PCI_DEVICE_ID_WCHCN_CH355_4S 0x7173
++
+ #define PCI_VENDOR_ID_AGESTAR 0x5372
+ #define PCI_DEVICE_ID_AGESTAR_9375 0x6872
+ #define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a
+ #define PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800 0x818e
+
+-#define PCIE_VENDOR_ID_WCH 0x1c00
+-#define PCIE_DEVICE_ID_WCH_CH382_2S1P 0x3250
+-#define PCIE_DEVICE_ID_WCH_CH384_4S 0x3470
+-#define PCIE_DEVICE_ID_WCH_CH384_8S 0x3853
+-#define PCIE_DEVICE_ID_WCH_CH382_2S 0x3253
++#define PCI_DEVICE_ID_WCHIC_CH384_4S 0x3470
++#define PCI_DEVICE_ID_WCHIC_CH384_8S 0x3853
+
+ #define PCI_DEVICE_ID_MOXA_CP102E 0x1024
+ #define PCI_DEVICE_ID_MOXA_CP102EL 0x1025
+@@ -2777,80 +2771,80 @@ static struct pci_serial_quirk pci_serial_quirks[] = {
+ },
+ /* WCH CH353 1S1P card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_1S1P,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_1S1P,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH353 2S1P card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_2S1P,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_2S1P,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH353 4S card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_4S,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_4S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH353 2S1PF card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_2S1PF,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_2S1PF,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH352 2S card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH352_2S,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH352_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH355 4S card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH355_4S,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH355_4S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch355_setup,
+ },
+ /* WCH CH382 2S card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH382_2S,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH382_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
+ /* WCH CH382 2S1P card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH382_2S1P,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH382_2S1P,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
+ /* WCH CH384 4S card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH384_4S,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH384_4S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
+ /* WCH CH384 8S card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH384_8S,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH384_8S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .init = pci_wch_ch38x_init,
+@@ -3927,11 +3921,11 @@ static const struct pci_device_id blacklist[] = {
+
+ /* multi-io cards handled by parport_serial */
+ /* WCH CH353 2S1P */
+- { PCI_DEVICE(0x4348, 0x7053), 0, 0, REPORT_CONFIG(PARPORT_SERIAL), },
++ { PCI_VDEVICE(WCHCN, 0x7053), REPORT_CONFIG(PARPORT_SERIAL), },
+ /* WCH CH353 1S1P */
+- { PCI_DEVICE(0x4348, 0x5053), 0, 0, REPORT_CONFIG(PARPORT_SERIAL), },
++ { PCI_VDEVICE(WCHCN, 0x5053), REPORT_CONFIG(PARPORT_SERIAL), },
+ /* WCH CH382 2S1P */
+- { PCI_DEVICE(0x1c00, 0x3250), 0, 0, REPORT_CONFIG(PARPORT_SERIAL), },
++ { PCI_VDEVICE(WCHIC, 0x3250), REPORT_CONFIG(PARPORT_SERIAL), },
+
+ /* Intel platforms with MID UART */
+ { PCI_VDEVICE(INTEL, 0x081b), REPORT_8250_CONFIG(MID), },
+@@ -6004,27 +5998,27 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ * WCH CH353 series devices: The 2S1P is handled by parport_serial
+ * so not listed here.
+ */
+- { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH353_4S,
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_4S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_4_115200 },
+
+- { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH353_2S1PF,
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_2S1PF,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_2_115200 },
+
+- { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH355_4S,
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH355_4S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_4_115200 },
+
+- { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH382_2S,
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH382_2S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch382_2 },
+
+- { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH384_4S,
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH384_4S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch384_4 },
+
+- { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH384_8S,
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH384_8S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch384_8 },
+ /*
+diff --git a/drivers/tty/serial/8250/8250_pci1xxxx.c b/drivers/tty/serial/8250/8250_pci1xxxx.c
+index d3930bf32fe4c4..f462b3d1c104ce 100644
+--- a/drivers/tty/serial/8250/8250_pci1xxxx.c
++++ b/drivers/tty/serial/8250/8250_pci1xxxx.c
+@@ -78,6 +78,12 @@
+ #define UART_TX_BYTE_FIFO 0x00
+ #define UART_FIFO_CTL 0x02
+
++#define UART_MODEM_CTL_REG 0x04
++#define UART_MODEM_CTL_RTS_SET BIT(1)
++
++#define UART_LINE_STAT_REG 0x05
++#define UART_LINE_XMIT_CHECK_MASK GENMASK(6, 5)
++
+ #define UART_ACTV_REG 0x11
+ #define UART_BLOCK_SET_ACTIVE BIT(0)
+
+@@ -94,6 +100,7 @@
+ #define UART_BIT_SAMPLE_CNT_16 16
+ #define BAUD_CLOCK_DIV_INT_MSK GENMASK(31, 8)
+ #define ADCL_CFG_RTS_DELAY_MASK GENMASK(11, 8)
++#define FRAC_DIV_TX_END_POINT_MASK GENMASK(23, 20)
+
+ #define UART_WAKE_REG 0x8C
+ #define UART_WAKE_MASK_REG 0x90
+@@ -134,6 +141,11 @@
+ #define UART_BST_STAT_LSR_FRAME_ERR 0x8000000
+ #define UART_BST_STAT_LSR_THRE 0x20000000
+
++#define GET_MODEM_CTL_RTS_STATUS(reg) ((reg) & UART_MODEM_CTL_RTS_SET)
++#define GET_RTS_PIN_STATUS(val) (((val) & TIOCM_RTS) >> 1)
++#define RTS_TOGGLE_STATUS_MASK(val, reg) (GET_MODEM_CTL_RTS_STATUS(reg) \
++ != GET_RTS_PIN_STATUS(val))
++
+ struct pci1xxxx_8250 {
+ unsigned int nr;
+ u8 dev_rev;
+@@ -254,6 +266,47 @@ static void pci1xxxx_set_divisor(struct uart_port *port, unsigned int baud,
+ port->membase + UART_BAUD_CLK_DIVISOR_REG);
+ }
+
++static void pci1xxxx_set_mctrl(struct uart_port *port, unsigned int mctrl)
++{
++ u32 fract_div_cfg_reg;
++ u32 line_stat_reg;
++ u32 modem_ctl_reg;
++ u32 adcl_cfg_reg;
++
++ adcl_cfg_reg = readl(port->membase + ADCL_CFG_REG);
++
++ /* HW is responsible in ADCL_EN case */
++ if ((adcl_cfg_reg & (ADCL_CFG_EN | ADCL_CFG_PIN_SEL)))
++ return;
++
++ modem_ctl_reg = readl(port->membase + UART_MODEM_CTL_REG);
++
++ serial8250_do_set_mctrl(port, mctrl);
++
++ if (RTS_TOGGLE_STATUS_MASK(mctrl, modem_ctl_reg)) {
++ line_stat_reg = readl(port->membase + UART_LINE_STAT_REG);
++ if (line_stat_reg & UART_LINE_XMIT_CHECK_MASK) {
++ fract_div_cfg_reg = readl(port->membase +
++ FRAC_DIV_CFG_REG);
++
++ writel((fract_div_cfg_reg &
++ ~(FRAC_DIV_TX_END_POINT_MASK)),
++ port->membase + FRAC_DIV_CFG_REG);
++
++ /* Enable ADC and set the nRTS pin */
++ writel((adcl_cfg_reg | (ADCL_CFG_EN |
++ ADCL_CFG_PIN_SEL)),
++ port->membase + ADCL_CFG_REG);
++
++ /* Revert to the original settings */
++ writel(adcl_cfg_reg, port->membase + ADCL_CFG_REG);
++
++ writel(fract_div_cfg_reg, port->membase +
++ FRAC_DIV_CFG_REG);
++ }
++ }
++}
++
+ static int pci1xxxx_rs485_config(struct uart_port *port,
+ struct ktermios *termios,
+ struct serial_rs485 *rs485)
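The RTS-toggle test added above is pure bit arithmetic: TIOCM_RTS is bit 2 of the mctrl word while this register keeps RTS in bit 1, so GET_RTS_PIN_STATUS() shifts the former down to line the two up before comparing. A small standalone sketch (register layout taken from the macros in the hunk):

#include <stdio.h>

#define BIT(n)                 (1u << (n))
#define TIOCM_RTS              0x004 /* as in the termios ioctl ABI */
#define UART_MODEM_CTL_RTS_SET BIT(1)

/* Both sides are normalized to bit 1, so they compare directly. */
#define GET_MODEM_CTL_RTS_STATUS(reg) ((reg) & UART_MODEM_CTL_RTS_SET)
#define GET_RTS_PIN_STATUS(val)       (((val) & TIOCM_RTS) >> 1)
#define RTS_TOGGLED(val, reg) \
	(GET_MODEM_CTL_RTS_STATUS(reg) != GET_RTS_PIN_STATUS(val))

int main(void)
{
	unsigned int modem_ctl = UART_MODEM_CTL_RTS_SET; /* RTS currently set */

	printf("%d\n", RTS_TOGGLED(TIOCM_RTS, modem_ctl)); /* 0: unchanged  */
	printf("%d\n", RTS_TOGGLED(0, modem_ctl));         /* 1: deasserted */
	return 0;
}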
+@@ -631,9 +684,14 @@ static int pci1xxxx_setup(struct pci_dev *pdev,
+ port->port.rs485_config = pci1xxxx_rs485_config;
+ port->port.rs485_supported = pci1xxxx_rs485_supported;
+
+- /* From C0 rev Burst operation is supported */
++ /*
++ * C0 and later revisions support Burst operation.
++ * RTS workaround in mctrl is applicable only to B0.
++ */
+ if (rev >= 0xC0)
+ port->port.handle_irq = pci1xxxx_handle_irq;
++ else if (rev == 0xB0)
++ port->port.set_mctrl = pci1xxxx_set_mctrl;
+
+ ret = serial8250_pci_setup_port(pdev, port, 0, PORT_OFFSET * port_idx, 0);
+ if (ret < 0)
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 11519aa2598a01..c1376727642a71 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2524,6 +2524,14 @@ static void serial8250_shutdown(struct uart_port *port)
+ serial8250_do_shutdown(port);
+ }
+
++static void serial8250_flush_buffer(struct uart_port *port)
++{
++ struct uart_8250_port *up = up_to_u8250p(port);
++
++ if (up->dma)
++ serial8250_tx_dma_flush(up);
++}
++
+ static unsigned int serial8250_do_get_divisor(struct uart_port *port,
+ unsigned int baud,
+ unsigned int *frac)
+@@ -3207,6 +3215,7 @@ static const struct uart_ops serial8250_pops = {
+ .break_ctl = serial8250_break_ctl,
+ .startup = serial8250_startup,
+ .shutdown = serial8250_shutdown,
++ .flush_buffer = serial8250_flush_buffer,
+ .set_termios = serial8250_set_termios,
+ .set_ldisc = serial8250_set_ldisc,
+ .pm = serial8250_pm,
+diff --git a/drivers/tty/serial/serial_port.c b/drivers/tty/serial/serial_port.c
+index d35f1d24156c22..85285c56fabff4 100644
+--- a/drivers/tty/serial/serial_port.c
++++ b/drivers/tty/serial/serial_port.c
+@@ -173,6 +173,7 @@ EXPORT_SYMBOL(uart_remove_one_port);
+ * The caller is responsible to initialize the following fields of the @port
+ * ->dev (must be valid)
+ * ->flags
++ * ->iobase
+ * ->mapbase
+ * ->mapsize
+ * ->regshift (if @use_defaults is false)
+@@ -214,7 +215,7 @@ static int __uart_read_properties(struct uart_port *port, bool use_defaults)
+ /* Read the registers I/O access type (default: MMIO 8-bit) */
+ ret = device_property_read_u32(dev, "reg-io-width", &value);
+ if (ret) {
+- port->iotype = UPIO_MEM;
++ port->iotype = port->iobase ? UPIO_PORT : UPIO_MEM;
+ } else {
+ switch (value) {
+ case 1:
+@@ -227,11 +228,11 @@ static int __uart_read_properties(struct uart_port *port, bool use_defaults)
+ port->iotype = device_is_big_endian(dev) ? UPIO_MEM32BE : UPIO_MEM32;
+ break;
+ default:
++ port->iotype = UPIO_UNKNOWN;
+ if (!use_defaults) {
+ dev_err(dev, "Unsupported reg-io-width (%u)\n", value);
+ return -EINVAL;
+ }
+- port->iotype = UPIO_UNKNOWN;
+ break;
+ }
+ }
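The net effect of this hunk is a fallback chain for the I/O access type: an explicit reg-io-width property wins; otherwise a populated iobase selects port I/O, else 8-bit MMIO; and an unsupported width now records UPIO_UNKNOWN before the error path. A hypothetical condensation (enum values are illustrative, not the kernel's):

#include <stdio.h>

enum iotype { UPIO_UNKNOWN = -1, UPIO_PORT, UPIO_MEM, UPIO_MEM16, UPIO_MEM32 };

/*
 * No "reg-io-width" property means port I/O if an I/O-port base was
 * set up by the caller, else 8-bit MMIO.
 */
static enum iotype pick_iotype(int have_prop, unsigned int width,
			       unsigned long iobase)
{
	if (!have_prop)
		return iobase ? UPIO_PORT : UPIO_MEM;

	switch (width) {
	case 1: return UPIO_MEM;
	case 2: return UPIO_MEM16;
	case 4: return UPIO_MEM32;
	default: return UPIO_UNKNOWN; /* strict callers then see -EINVAL */
	}
}

int main(void)
{
	printf("%d\n", pick_iotype(0, 0, 0x3f8)); /*  0: UPIO_PORT    */
	printf("%d\n", pick_iotype(0, 0, 0));     /*  1: UPIO_MEM     */
	printf("%d\n", pick_iotype(1, 8, 0));     /* -1: UPIO_UNKNOWN */
	return 0;
}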
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 58023f735c195f..8d4ad0a3f2cf02 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -216,6 +216,7 @@ void ufs_bsg_remove(struct ufs_hba *hba)
+ return;
+
+ bsg_remove_queue(hba->bsg_queue);
++ hba->bsg_queue = NULL;
+
+ device_del(bsg_dev);
+ put_device(bsg_dev);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index b786cba9a270f4..67410c4cebee6d 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -258,10 +258,15 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
+ return UFS_PM_LVL_0;
+ }
+
++static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
++{
++ return hba->outstanding_tasks || hba->active_uic_cmd ||
++ hba->uic_async_done;
++}
++
+ static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
+ {
+- return (hba->clk_gating.active_reqs || hba->outstanding_reqs || hba->outstanding_tasks ||
+- hba->active_uic_cmd || hba->uic_async_done);
++ return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
+ }
+
+ static const struct ufs_dev_quirk ufs_fixups[] = {
+@@ -1835,19 +1840,16 @@ static void ufshcd_exit_clk_scaling(struct ufs_hba *hba)
+ static void ufshcd_ungate_work(struct work_struct *work)
+ {
+ int ret;
+- unsigned long flags;
+ struct ufs_hba *hba = container_of(work, struct ufs_hba,
+ clk_gating.ungate_work);
+
+ cancel_delayed_work_sync(&hba->clk_gating.gate_work);
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- if (hba->clk_gating.state == CLKS_ON) {
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+- return;
++ scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
++ if (hba->clk_gating.state == CLKS_ON)
++ return;
+ }
+
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ ufshcd_hba_vreg_set_hpm(hba);
+ ufshcd_setup_clocks(hba, true);
+
+@@ -1882,7 +1884,7 @@ void ufshcd_hold(struct ufs_hba *hba)
+ if (!ufshcd_is_clkgating_allowed(hba) ||
+ !hba->clk_gating.is_initialized)
+ return;
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ spin_lock_irqsave(&hba->clk_gating.lock, flags);
+ hba->clk_gating.active_reqs++;
+
+ start:
+@@ -1898,11 +1900,11 @@ void ufshcd_hold(struct ufs_hba *hba)
+ */
+ if (ufshcd_can_hibern8_during_gating(hba) &&
+ ufshcd_is_link_hibern8(hba)) {
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
+ flush_result = flush_work(&hba->clk_gating.ungate_work);
+ if (hba->clk_gating.is_suspended && !flush_result)
+ return;
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ spin_lock_irqsave(&hba->clk_gating.lock, flags);
+ goto start;
+ }
+ break;
+@@ -1931,17 +1933,17 @@ void ufshcd_hold(struct ufs_hba *hba)
+ */
+ fallthrough;
+ case REQ_CLKS_ON:
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
+ flush_work(&hba->clk_gating.ungate_work);
+ /* Make sure state is CLKS_ON before returning */
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ spin_lock_irqsave(&hba->clk_gating.lock, flags);
+ goto start;
+ default:
+ dev_err(hba->dev, "%s: clk gating is in invalid state %d\n",
+ __func__, hba->clk_gating.state);
+ break;
+ }
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_hold);
+
+@@ -1949,28 +1951,32 @@ static void ufshcd_gate_work(struct work_struct *work)
+ {
+ struct ufs_hba *hba = container_of(work, struct ufs_hba,
+ clk_gating.gate_work.work);
+- unsigned long flags;
+ int ret;
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- /*
+- * In case you are here to cancel this work the gating state
+- * would be marked as REQ_CLKS_ON. In this case save time by
+- * skipping the gating work and exit after changing the clock
+- * state to CLKS_ON.
+- */
+- if (hba->clk_gating.is_suspended ||
+- (hba->clk_gating.state != REQ_CLKS_OFF)) {
+- hba->clk_gating.state = CLKS_ON;
+- trace_ufshcd_clk_gating(dev_name(hba->dev),
+- hba->clk_gating.state);
+- goto rel_lock;
+- }
++ scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
++ /*
++ * In case you are here to cancel this work the gating state
++ * would be marked as REQ_CLKS_ON. In this case save time by
++ * skipping the gating work and exit after changing the clock
++ * state to CLKS_ON.
++ */
++ if (hba->clk_gating.is_suspended ||
++ hba->clk_gating.state != REQ_CLKS_OFF) {
++ hba->clk_gating.state = CLKS_ON;
++ trace_ufshcd_clk_gating(dev_name(hba->dev),
++ hba->clk_gating.state);
++ return;
++ }
+
+- if (ufshcd_is_ufs_dev_busy(hba) || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
+- goto rel_lock;
++ if (hba->clk_gating.active_reqs)
++ return;
++ }
+
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ scoped_guard(spinlock_irqsave, hba->host->host_lock) {
++ if (ufshcd_is_ufs_dev_busy(hba) ||
++ hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
++ return;
++ }
+
+ /* put the link into hibern8 mode before turning off clocks */
+ if (ufshcd_can_hibern8_during_gating(hba)) {
+@@ -1981,7 +1987,7 @@ static void ufshcd_gate_work(struct work_struct *work)
+ __func__, ret);
+ trace_ufshcd_clk_gating(dev_name(hba->dev),
+ hba->clk_gating.state);
+- goto out;
++ return;
+ }
+ ufshcd_set_link_hibern8(hba);
+ }
+@@ -2001,33 +2007,34 @@ static void ufshcd_gate_work(struct work_struct *work)
+ * prevent from doing cancel work multiple times when there are
+ * new requests arriving before the current cancel work is done.
+ */
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
+ if (hba->clk_gating.state == REQ_CLKS_OFF) {
+ hba->clk_gating.state = CLKS_OFF;
+ trace_ufshcd_clk_gating(dev_name(hba->dev),
+ hba->clk_gating.state);
+ }
+-rel_lock:
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+-out:
+- return;
+ }
+
+-/* host lock must be held before calling this variant */
+ static void __ufshcd_release(struct ufs_hba *hba)
+ {
++ lockdep_assert_held(&hba->clk_gating.lock);
++
+ if (!ufshcd_is_clkgating_allowed(hba))
+ return;
+
+ hba->clk_gating.active_reqs--;
+
+ if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
+- hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
+- hba->outstanding_tasks || !hba->clk_gating.is_initialized ||
+- hba->active_uic_cmd || hba->uic_async_done ||
++ !hba->clk_gating.is_initialized ||
+ hba->clk_gating.state == CLKS_OFF)
+ return;
+
++ scoped_guard(spinlock_irqsave, hba->host->host_lock) {
++ if (ufshcd_has_pending_tasks(hba) ||
++ hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
++ return;
++ }
++
+ hba->clk_gating.state = REQ_CLKS_OFF;
+ trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+ queue_delayed_work(hba->clk_gating.clk_gating_workq,
+@@ -2037,11 +2044,8 @@ static void __ufshcd_release(struct ufs_hba *hba)
+
+ void ufshcd_release(struct ufs_hba *hba)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
+ __ufshcd_release(hba);
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_release);
+
+@@ -2056,11 +2060,9 @@ static ssize_t ufshcd_clkgate_delay_show(struct device *dev,
+ void ufshcd_clkgate_delay_set(struct device *dev, unsigned long value)
+ {
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+- unsigned long flags;
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
+ hba->clk_gating.delay_ms = value;
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_clkgate_delay_set);
+
+@@ -2088,7 +2090,6 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+ {
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+- unsigned long flags;
+ u32 value;
+
+ if (kstrtou32(buf, 0, &value))
+@@ -2096,9 +2097,10 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
+
+ value = !!value;
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
++
+ if (value == hba->clk_gating.is_enabled)
+- goto out;
++ return count;
+
+ if (value)
+ __ufshcd_release(hba);
+@@ -2106,8 +2108,7 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
+ hba->clk_gating.active_reqs++;
+
+ hba->clk_gating.is_enabled = value;
+-out:
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++
+ return count;
+ }
+
+@@ -8267,7 +8268,9 @@ static void ufshcd_rtc_work(struct work_struct *work)
+ hba = container_of(to_delayed_work(work), struct ufs_hba, ufs_rtc_update_work);
+
+ /* Update RTC only when there are no requests in progress and UFSHCI is operational */
+- if (!ufshcd_is_ufs_dev_busy(hba) && hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL)
++ if (!ufshcd_is_ufs_dev_busy(hba) &&
++ hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL &&
++ !hba->clk_gating.active_reqs)
+ ufshcd_update_rtc(hba);
+
+ if (ufshcd_is_ufs_dev_active(hba) && hba->dev_info.rtc_update_period)
+@@ -9186,7 +9189,6 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
+ int ret = 0;
+ struct ufs_clk_info *clki;
+ struct list_head *head = &hba->clk_list_head;
+- unsigned long flags;
+ ktime_t start = ktime_get();
+ bool clk_state_changed = false;
+
+@@ -9236,12 +9238,11 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
+ if (!IS_ERR_OR_NULL(clki->clk) && clki->enabled)
+ clk_disable_unprepare(clki->clk);
+ }
+- } else if (!ret && on) {
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- hba->clk_gating.state = CLKS_ON;
++ } else if (!ret && on && hba->clk_gating.is_initialized) {
++ scoped_guard(spinlock_irqsave, &hba->clk_gating.lock)
++ hba->clk_gating.state = CLKS_ON;
+ trace_ufshcd_clk_gating(dev_name(hba->dev),
+ hba->clk_gating.state);
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
+
+ if (clk_state_changed)
+@@ -10450,6 +10451,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ hba->irq = irq;
+ hba->vps = &ufs_hba_vps;
+
++ /*
++ * Initialize clk_gating.lock early since it is being used in
++ * ufshcd_setup_clocks()
++ */
++ spin_lock_init(&hba->clk_gating.lock);
++
+ err = ufshcd_hba_init(hba);
+ if (err)
+ goto out_error;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 6b37d1c47fce13..c2ecfa3c83496f 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -371,7 +371,7 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ static void acm_ctrl_irq(struct urb *urb)
+ {
+ struct acm *acm = urb->context;
+- struct usb_cdc_notification *dr = urb->transfer_buffer;
++ struct usb_cdc_notification *dr;
+ unsigned int current_size = urb->actual_length;
+ unsigned int expected_size, copy_size, alloc_size;
+ int retval;
+@@ -398,14 +398,25 @@ static void acm_ctrl_irq(struct urb *urb)
+
+ usb_mark_last_busy(acm->dev);
+
+- if (acm->nb_index)
++ if (acm->nb_index == 0) {
++ /*
++ * The first chunk of a message must contain at least the
++ * notification header with the length field, otherwise we
++ * can't get an expected_size.
++ */
++ if (current_size < sizeof(struct usb_cdc_notification)) {
++ dev_dbg(&acm->control->dev, "urb too short\n");
++ goto exit;
++ }
++ dr = urb->transfer_buffer;
++ } else {
+ dr = (struct usb_cdc_notification *)acm->notification_buffer;
+-
++ }
+ /* size = notification-header + (optional) data */
+ expected_size = sizeof(struct usb_cdc_notification) +
+ le16_to_cpu(dr->wLength);
+
+- if (current_size < expected_size) {
++ if (acm->nb_index != 0 || current_size < expected_size) {
+ /* notification is transmitted fragmented, reassemble */
+ if (acm->nb_size < expected_size) {
+ u8 *new_buffer;
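The reassembly logic hinges on expected_size = header + wLength, which is only knowable once a complete usb_cdc_notification header has arrived; hence the new minimum-length check on the first chunk. A standalone sketch of that guard (hypothetical header struct mirroring the 8-byte CDC layout; assumes a little-endian host for brevity):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical 8-byte header mirroring struct usb_cdc_notification. */
struct notif_hdr {
	uint8_t  bmRequestType;
	uint8_t  bNotificationType;
	uint16_t wValue;
	uint16_t wIndex;
	uint16_t wLength; /* payload bytes that follow the header */
};

/* Total expected message size, or 0 while the header is still incomplete. */
static size_t expected_size(const uint8_t *chunk, size_t len)
{
	struct notif_hdr hdr;

	if (len < sizeof(hdr)) /* first fragment too short: no wLength yet */
		return 0;

	memcpy(&hdr, chunk, sizeof(hdr)); /* assumes a little-endian host */
	return sizeof(hdr) + hdr.wLength;
}

int main(void)
{
	uint8_t msg[8] = { 0xa1, 0x20, 0, 0, 0, 0, 4, 0 }; /* wLength = 4 */

	printf("%zu\n", expected_size(msg, sizeof(msg))); /* 12              */
	printf("%zu\n", expected_size(msg, 3));           /* 0: keep waiting */
	return 0;
}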
+@@ -1727,13 +1738,16 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
+ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ },
+- { USB_DEVICE(0x045b, 0x023c), /* Renesas USB Download mode */
++ { USB_DEVICE(0x045b, 0x023c), /* Renesas R-Car H3 USB Download mode */
++ .driver_info = DISABLE_ECHO, /* Don't echo banner */
++ },
++ { USB_DEVICE(0x045b, 0x0247), /* Renesas R-Car D3 USB Download mode */
+ .driver_info = DISABLE_ECHO, /* Don't echo banner */
+ },
+- { USB_DEVICE(0x045b, 0x0248), /* Renesas USB Download mode */
++ { USB_DEVICE(0x045b, 0x0248), /* Renesas R-Car M3-N USB Download mode */
+ .driver_info = DISABLE_ECHO, /* Don't echo banner */
+ },
+- { USB_DEVICE(0x045b, 0x024D), /* Renesas USB Download mode */
++ { USB_DEVICE(0x045b, 0x024D), /* Renesas R-Car E3 USB Download mode */
+ .driver_info = DISABLE_ECHO, /* Don't echo banner */
+ },
+ { USB_DEVICE(0x0e8d, 0x0003), /* FIREFLY, MediaTek Inc; andrey.arapov@gmail.com */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 21ac9b464696f5..906daf423cb02b 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1847,6 +1847,17 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ desc = intf->cur_altsetting;
+ hdev = interface_to_usbdev(intf);
+
++ /*
++ * The USB 2.0 spec prohibits hubs from having more than one
++ * configuration or interface, and we rely on this prohibition.
++ * Refuse to accept a device that violates it.
++ */
++ if (hdev->descriptor.bNumConfigurations > 1 ||
++ hdev->actconfig->desc.bNumInterfaces > 1) {
++ dev_err(&intf->dev, "Invalid hub with more than one config or interface\n");
++ return -EINVAL;
++ }
++
+ /*
+ * Set default autosuspend delay as 0 to speedup bus suspend,
+ * based on the below considerations:
+@@ -4698,7 +4709,6 @@ void usb_ep0_reinit(struct usb_device *udev)
+ EXPORT_SYMBOL_GPL(usb_ep0_reinit);
+
+ #define usb_sndaddr0pipe() (PIPE_CONTROL << 30)
+-#define usb_rcvaddr0pipe() ((PIPE_CONTROL << 30) | USB_DIR_IN)
+
+ static int hub_set_address(struct usb_device *udev, int devnum)
+ {
+@@ -4804,7 +4814,7 @@ static int get_bMaxPacketSize0(struct usb_device *udev,
+ for (i = 0; i < GET_MAXPACKET0_TRIES; ++i) {
+ /* Start with invalid values in case the transfer fails */
+ buf->bDescriptorType = buf->bMaxPacketSize0 = 0;
+- rc = usb_control_msg(udev, usb_rcvaddr0pipe(),
++ rc = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
+ USB_DT_DEVICE << 8, 0,
+ buf, size,
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 13171454f9591a..027479179f09e9 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -432,6 +432,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0c45, 0x7056), .driver_info =
+ USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+
++ /* Sony Xperia XZ1 Compact (lilac) smartphone in fastboot mode */
++ { USB_DEVICE(0x0fce, 0x0dde), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* Action Semiconductor flash disk */
+ { USB_DEVICE(0x10d6, 0x2200), .driver_info =
+ USB_QUIRK_STRING_FETCH_255 },
+@@ -522,6 +525,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* Blackmagic Design UltraStudio SDI */
+ { USB_DEVICE(0x1edb, 0xbd4f), .driver_info = USB_QUIRK_NO_LPM },
+
++ /* Teclast disk */
++ { USB_DEVICE(0x1f75, 0x0917), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* Hauppauge HVR-950q */
+ { USB_DEVICE(0x2040, 0x7200), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index e7bf9cc635be6f..bd4c788f03bc14 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4615,6 +4615,7 @@ static int dwc2_hsotg_udc_stop(struct usb_gadget *gadget)
+ spin_lock_irqsave(&hsotg->lock, flags);
+
+ hsotg->driver = NULL;
++ hsotg->gadget.dev.of_node = NULL;
+ hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+ hsotg->enabled = 0;
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a5d75d7d0a8707..8c80bb4a467bff 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2618,10 +2618,38 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+ {
+ u32 reg;
+ u32 timeout = 2000;
++ u32 saved_config = 0;
+
+ if (pm_runtime_suspended(dwc->dev))
+ return 0;
+
++ /*
++	 * When operating at USB 2.0 speeds (HS/FS), ensure that
++ * GUSB2PHYCFG.ENBLSLPM and GUSB2PHYCFG.SUSPHY are cleared before starting
++ * or stopping the controller. This resolves timeout issues that occur
++ * during frequent role switches between host and device modes.
++ *
++ * Save and clear these settings, then restore them after completing the
++ * controller start or stop sequence.
++ *
++ * This solution was discovered through experimentation as it is not
++	 * mentioned in the dwc3 programming guide. It has been tested on
++	 * Exynos platforms.
++ */
++ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++ if (reg & DWC3_GUSB2PHYCFG_SUSPHY) {
++ saved_config |= DWC3_GUSB2PHYCFG_SUSPHY;
++ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
++ }
++
++ if (reg & DWC3_GUSB2PHYCFG_ENBLSLPM) {
++ saved_config |= DWC3_GUSB2PHYCFG_ENBLSLPM;
++ reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;
++ }
++
++ if (saved_config)
++ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
++
+ reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ if (is_on) {
+ if (DWC3_VER_IS_WITHIN(DWC3, ANY, 187A)) {
+@@ -2649,6 +2677,12 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+ reg &= DWC3_DSTS_DEVCTRLHLT;
+ } while (--timeout && !(!is_on ^ !reg));
+
++ if (saved_config) {
++ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++ reg |= saved_config;
++ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
++ }
++
+ if (!timeout)
+ return -ETIMEDOUT;
+
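The save/clear/restore pattern used for GUSB2PHYCFG generalizes to any "park these bits around a critical sequence" situation: remember only the bits that were actually set, clear them, and OR them back afterwards so unrelated bits are never disturbed. A minimal sketch with made-up bit positions and a fake register:

#include <stdint.h>
#include <stdio.h>

/* Bit positions are illustrative, not the real DWC3_GUSB2PHYCFG_* values. */
#define SUSPHY   (1u << 6)
#define ENBLSLPM (1u << 8)

static uint32_t phycfg = SUSPHY | ENBLSLPM | 0x5; /* fake device register */

static uint32_t rd(void)       { return phycfg; }
static void     wr(uint32_t v) { phycfg = v; }

int main(void)
{
	uint32_t reg = rd(), saved = 0;

	/* Save and clear only the bits that were actually set... */
	if (reg & SUSPHY)   { saved |= SUSPHY;   reg &= ~SUSPHY; }
	if (reg & ENBLSLPM) { saved |= ENBLSLPM; reg &= ~ENBLSLPM; }
	if (saved)
		wr(reg);

	/* ...run the timing-sensitive sequence here... */

	/* ...then OR them back, leaving unrelated bits untouched. */
	if (saved)
		wr(rd() | saved);

	printf("0x%x\n", (unsigned)phycfg); /* 0x145: original value restored */
	return 0;
}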
+diff --git a/drivers/usb/gadget/function/f_midi.c b/drivers/usb/gadget/function/f_midi.c
+index 1067847cc07995..4153643c67dcec 100644
+--- a/drivers/usb/gadget/function/f_midi.c
++++ b/drivers/usb/gadget/function/f_midi.c
+@@ -907,6 +907,15 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
+
+ status = -ENODEV;
+
++ /*
++	 * Reset wMaxPacketSize to the maximum packet size of an FS bulk transfer
++	 * before claiming the endpoints. This ensures that wMaxPacketSize does not
++	 * exceed the limit during bind retries on controllers such as dwc3, whose
++	 * TX/RX FIFOs are sized for a 512-byte maxpacket on HS-only IN/OUT endpoints.
++ */
++ bulk_in_desc.wMaxPacketSize = cpu_to_le16(64);
++ bulk_out_desc.wMaxPacketSize = cpu_to_le16(64);
++
+ /* allocate instance-specific endpoints */
+ midi->in_ep = usb_ep_autoconfig(cdev->gadget, &bulk_in_desc);
+ if (!midi->in_ep)
+@@ -1000,11 +1009,11 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
+ }
+
+ /* configure the endpoint descriptors ... */
+- ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports);
+- ms_out_desc.bNumEmbMIDIJack = midi->in_ports;
++ ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports);
++ ms_out_desc.bNumEmbMIDIJack = midi->out_ports;
+
+- ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports);
+- ms_in_desc.bNumEmbMIDIJack = midi->out_ports;
++ ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports);
++ ms_in_desc.bNumEmbMIDIJack = midi->in_ports;
+
+ /* ... and add them to the list */
+ endpoint_descriptor_index = i;
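The swap in this hunk matters because each class-specific MIDIStreaming endpoint descriptor is variable-length: 4 fixed bytes plus one associated-jack ID per embedded jack, so bLength must be computed from that endpoint's own port count. A toy illustration of the sizing (USB_DT_MS_ENDPOINT_SIZE reduces to 4 + n):

#include <stdio.h>

/*
 * A class-specific MIDIStreaming endpoint descriptor carries 4 fixed
 * bytes (bLength, bDescriptorType, bDescriptorSubtype, bNumEmbMIDIJack)
 * plus one baAssocJackID byte per embedded jack, i.e. 4 + n bytes.
 */
#define MS_ENDPOINT_SIZE(n) (4 + (n))

int main(void)
{
	unsigned int in_ports = 2, out_ports = 1;

	/* Each descriptor must be sized from its own port count. */
	printf("in:  bLength=%u jacks=%u\n", MS_ENDPOINT_SIZE(in_ports), in_ports);
	printf("out: bLength=%u jacks=%u\n", MS_ENDPOINT_SIZE(out_ports), out_ports);
	return 0; /* prints bLength 6 and 5 respectively */
}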
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index a6f46364be65f0..4b3d5075621aa0 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1543,8 +1543,8 @@ void usb_del_gadget(struct usb_gadget *gadget)
+
+ kobject_uevent(&udc->dev.kobj, KOBJ_REMOVE);
+ sysfs_remove_link(&udc->dev.kobj, "gadget");
+- flush_work(&gadget->work);
+ device_del(&gadget->dev);
++ flush_work(&gadget->work);
+ ida_free(&gadget_id_numbers, gadget->id_number);
+ cancel_work_sync(&udc->vbus_work);
+ device_unregister(&udc->dev);
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 3b01734ce1b7e5..a93ad93390ba17 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -310,7 +310,7 @@ struct renesas_usb3_request {
+ struct list_head queue;
+ };
+
+-#define USB3_EP_NAME_SIZE 8
++#define USB3_EP_NAME_SIZE 16
+ struct renesas_usb3_ep {
+ struct usb_ep ep;
+ struct renesas_usb3 *usb3;
+diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
+index 1f9c1b1435d862..0404489c2f6a9c 100644
+--- a/drivers/usb/host/pci-quirks.c
++++ b/drivers/usb/host/pci-quirks.c
+@@ -958,6 +958,15 @@ static void quirk_usb_disable_ehci(struct pci_dev *pdev)
+ * booting from USB disk or using a usb keyboard
+ */
+ hcc_params = readl(base + EHCI_HCC_PARAMS);
++
++	/* The LS7A EHCI controller doesn't have extended capabilities; the
++	 * EECP (EHCI Extended Capabilities Pointer) field of the HCCPARAMS
++	 * register should be 0x0 but reads as 0xa0. So clear it to
++ * avoid error messages on boot.
++ */
++ if (pdev->vendor == PCI_VENDOR_ID_LOONGSON && pdev->device == 0x7a14)
++ hcc_params &= ~(0xffL << 8);
++
+ offset = (hcc_params >> 8) & 0xff;
+ while (offset && --count) {
+ pci_read_config_dword(pdev, offset, &cap);
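For reference, the quirk simply blanks the EECP byte before the capability walk starts; the field occupies bits 15:8 of HCCPARAMS. A standalone sketch with a hypothetical register value:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t hcc_params = 0x0000a016; /* hypothetical HCCPARAMS readout */

	/* EECP is bits 15:8; a bogus 0xa0 would start a capability walk. */
	printf("eecp=0x%02x\n", (unsigned)((hcc_params >> 8) & 0xff)); /* 0xa0 */

	hcc_params &= ~(0xffUL << 8); /* the quirk: clear the field */
	printf("eecp=0x%02x\n", (unsigned)((hcc_params >> 8) & 0xff)); /* 0x00 */
	return 0;
}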
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 3ba9902dd2093c..deb3c98c9beaf6 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -656,8 +656,8 @@ int xhci_pci_common_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ }
+ EXPORT_SYMBOL_NS_GPL(xhci_pci_common_probe, xhci);
+
+-static const struct pci_device_id pci_ids_reject[] = {
+- /* handled by xhci-pci-renesas */
++/* handled by xhci-pci-renesas if enabled */
++static const struct pci_device_id pci_ids_renesas[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0014) },
+ { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0015) },
+ { /* end: all zeroes */ }
+@@ -665,7 +665,8 @@ static const struct pci_device_id pci_ids_reject[] = {
+
+ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ {
+- if (pci_match_id(pci_ids_reject, dev))
++ if (IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS) &&
++ pci_match_id(pci_ids_renesas, dev))
+ return -ENODEV;
+
+ return xhci_pci_common_probe(dev, id);
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index c58a12c147f451..30482d4cf82678 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -387,8 +387,11 @@ usb_role_switch_register(struct device *parent,
+ dev_set_name(&sw->dev, "%s-role-switch",
+ desc->name ? desc->name : dev_name(parent));
+
++ sw->registered = true;
++
+ ret = device_register(&sw->dev);
+ if (ret) {
++ sw->registered = false;
+ put_device(&sw->dev);
+ return ERR_PTR(ret);
+ }
+@@ -399,8 +402,6 @@ usb_role_switch_register(struct device *parent,
+ dev_warn(&sw->dev, "failed to add component\n");
+ }
+
+- sw->registered = true;
+-
+ /* TODO: Symlinks for the host port and the device controller. */
+
+ return sw;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 1e2ae0c6c41c79..58bd54e8c483a2 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -619,15 +619,6 @@ static void option_instat_callback(struct urb *urb);
+ /* Luat Air72*U series based on UNISOC UIS8910 uses UNISOC's vendor ID */
+ #define LUAT_PRODUCT_AIR720U 0x4e00
+
+-/* MeiG Smart Technology products */
+-#define MEIGSMART_VENDOR_ID 0x2dee
+-/* MeiG Smart SRM815/SRM825L based on Qualcomm 315 */
+-#define MEIGSMART_PRODUCT_SRM825L 0x4d22
+-/* MeiG Smart SLM320 based on UNISOC UIS8910 */
+-#define MEIGSMART_PRODUCT_SLM320 0x4d41
+-/* MeiG Smart SLM770A based on ASR1803 */
+-#define MEIGSMART_PRODUCT_SLM770A 0x4d57
+-
+ /* Device flags */
+
+ /* Highest interface number which can be used with NCTRL() and RSVD() */
+@@ -1367,15 +1358,15 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990 (rmnet) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990A (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990 (MBIM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990A (MBIM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990 (RNDIS) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990A (RNDIS) */
+ .driver_info = NCTRL(2) | RSVD(3) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990A (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990 (PCIe) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */
+ .driver_info = RSVD(0) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+@@ -1403,6 +1394,22 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(0) | NCTRL(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30),
++ .driver_info = NCTRL(5) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30),
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30),
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30),
++ .driver_info = NCTRL(6) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -2347,6 +2354,14 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a05, 0xff) }, /* Fibocom FM650-CN (NCM mode) */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a06, 0xff) }, /* Fibocom FM650-CN (RNDIS mode) */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a07, 0xff) }, /* Fibocom FM650-CN (MBIM mode) */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d41, 0xff, 0, 0) }, /* MeiG Smart SLM320 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d57, 0xff, 0, 0) }, /* MeiG Smart SLM770A */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0, 0) }, /* MeiG Smart SRM815 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0x10, 0x02) }, /* MeiG Smart SLM828 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0x10, 0x03) }, /* MeiG Smart SLM828 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x30) }, /* MeiG Smart SRM815 and SRM825L */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x40) }, /* MeiG Smart SRM825L */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x60) }, /* MeiG Smart SRM825L */
+ { USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) }, /* LongSung M5710 */
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */
+@@ -2403,12 +2418,6 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
+ .driver_info = NCTRL(1) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */
+diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
+index a7fd018aa54836..9e1c57baab64a2 100644
+--- a/drivers/vfio/pci/nvgrace-gpu/main.c
++++ b/drivers/vfio/pci/nvgrace-gpu/main.c
+@@ -17,12 +17,14 @@
+ #define RESMEM_REGION_INDEX VFIO_PCI_BAR2_REGION_INDEX
+ #define USEMEM_REGION_INDEX VFIO_PCI_BAR4_REGION_INDEX
+
+-/* Memory size expected as non cached and reserved by the VM driver */
+-#define RESMEM_SIZE SZ_1G
+-
+ /* A hardwired and constant ABI value between the GPU FW and VFIO driver. */
+ #define MEMBLK_SIZE SZ_512M
+
++#define DVSEC_BITMAP_OFFSET 0xA
++#define MIG_SUPPORTED_WITH_CACHED_RESMEM BIT(0)
++
++#define GPU_CAP_DVSEC_REGISTER 3
++
+ /*
+ * The state of the two device memory region - resmem and usemem - is
+ * saved as struct mem_region.
+@@ -46,6 +48,7 @@ struct nvgrace_gpu_pci_core_device {
+ struct mem_region resmem;
+ /* Lock to control device memory kernel mapping */
+ struct mutex remap_lock;
++ bool has_mig_hw_bug;
+ };
+
+ static void nvgrace_gpu_init_fake_bar_emu_regs(struct vfio_device *core_vdev)
+@@ -66,7 +69,7 @@ nvgrace_gpu_memregion(int index,
+ if (index == USEMEM_REGION_INDEX)
+ return &nvdev->usemem;
+
+- if (index == RESMEM_REGION_INDEX)
++ if (nvdev->resmem.memlength && index == RESMEM_REGION_INDEX)
+ return &nvdev->resmem;
+
+ return NULL;
+@@ -751,40 +754,67 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
+ u64 memphys, u64 memlength)
+ {
+ int ret = 0;
++ u64 resmem_size = 0;
+
+ /*
+- * The VM GPU device driver needs a non-cacheable region to support
+- * the MIG feature. Since the device memory is mapped as NORMAL cached,
+- * carve out a region from the end with a different NORMAL_NC
+- * property (called as reserved memory and represented as resmem). This
+- * region then is exposed as a 64b BAR (region 2 and 3) to the VM, while
+- * exposing the rest (termed as usable memory and represented using usemem)
+- * as cacheable 64b BAR (region 4 and 5).
++ * On Grace Hopper systems, the VM GPU device driver needs a non-cacheable
++ * region to support the MIG feature owing to a hardware bug. Since the
++ * device memory is mapped as NORMAL cached, carve out a region from the end
++	 * with a different NORMAL_NC property (called reserved memory and
++	 * represented as resmem). This region is then exposed as a 64b BAR
++	 * (region 2 and 3) to the VM, while exposing the rest (termed usable
++ * memory and represented using usemem) as cacheable 64b BAR (region 4 and 5).
+ *
+ * devmem (memlength)
+ * |-------------------------------------------------|
+ * | |
+ * usemem.memphys resmem.memphys
++ *
++ * This hardware bug is fixed on the Grace Blackwell platforms and the
++ * presence of the bug can be determined through nvdev->has_mig_hw_bug.
++ * Thus on systems with the hardware fix, there is no need to partition
++ * the GPU device memory and the entire memory is usable and mapped as
++ * NORMAL cached (i.e. resmem size is 0).
+ */
++ if (nvdev->has_mig_hw_bug)
++ resmem_size = SZ_1G;
++
+ nvdev->usemem.memphys = memphys;
+
+ /*
+ * The device memory exposed to the VM is added to the kernel by the
+- * VM driver module in chunks of memory block size. Only the usable
+- * memory (usemem) is added to the kernel for usage by the VM
+- * workloads. Make the usable memory size memblock aligned.
++ * VM driver module in chunks of memory block size. Note that only the
++ * usable memory (usemem) is added to the kernel for usage by the VM
++ * workloads.
+ */
+- if (check_sub_overflow(memlength, RESMEM_SIZE,
++ if (check_sub_overflow(memlength, resmem_size,
+ &nvdev->usemem.memlength)) {
+ ret = -EOVERFLOW;
+ goto done;
+ }
+
+ /*
+- * The USEMEM part of the device memory has to be MEMBLK_SIZE
+- * aligned. This is a hardwired ABI value between the GPU FW and
+- * VFIO driver. The VM device driver is also aware of it and make
+- * use of the value for its calculation to determine USEMEM size.
++	 * The usemem region is exposed as a 64b BAR composed of region 4 and 5.
++ * Calculate and save the BAR size for the region.
++ */
++ nvdev->usemem.bar_size = roundup_pow_of_two(nvdev->usemem.memlength);
++
++ /*
++ * If the hardware has the fix for MIG, there is no requirement
++ * for splitting the device memory to create RESMEM. The entire
++	 * device memory is usable and will be USEMEM. Return here in
++	 * that case.
++ */
++ if (!nvdev->has_mig_hw_bug)
++ goto done;
++
++ /*
++	 * When the device memory is split to work around the MIG bug on
++	 * Grace Hopper, the USEMEM part of the device memory has to be
++	 * MEMBLK_SIZE aligned. This is a hardwired ABI value between the
++	 * GPU FW and VFIO driver. The VM device driver is also aware of it
++	 * and makes use of the value in its calculation to determine USEMEM
++ * size. Note that the device memory may not be 512M aligned.
+ */
+ nvdev->usemem.memlength = round_down(nvdev->usemem.memlength,
+ MEMBLK_SIZE);
+@@ -803,15 +833,34 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
+ }
+
+ /*
+- * The memory regions are exposed as BARs. Calculate and save
+- * the BAR size for them.
++ * The resmem region is exposed as a 64b BAR composed of region 2 and 3
++ * for Grace Hopper. Calculate and save the BAR size for the region.
+ */
+- nvdev->usemem.bar_size = roundup_pow_of_two(nvdev->usemem.memlength);
+ nvdev->resmem.bar_size = roundup_pow_of_two(nvdev->resmem.memlength);
+ done:
+ return ret;
+ }
+
++static bool nvgrace_gpu_has_mig_hw_bug(struct pci_dev *pdev)
++{
++ int pcie_dvsec;
++ u16 dvsec_ctrl16;
++
++ pcie_dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_NVIDIA,
++ GPU_CAP_DVSEC_REGISTER);
++
++ if (pcie_dvsec) {
++ pci_read_config_word(pdev,
++ pcie_dvsec + DVSEC_BITMAP_OFFSET,
++ &dvsec_ctrl16);
++
++ if (dvsec_ctrl16 & MIG_SUPPORTED_WITH_CACHED_RESMEM)
++ return false;
++ }
++
++ return true;
++}
++
+ static int nvgrace_gpu_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+ {
+@@ -832,6 +881,8 @@ static int nvgrace_gpu_probe(struct pci_dev *pdev,
+ dev_set_drvdata(&pdev->dev, &nvdev->core_device);
+
+ if (ops == &nvgrace_gpu_pci_ops) {
++ nvdev->has_mig_hw_bug = nvgrace_gpu_has_mig_hw_bug(pdev);
++
+ /*
+ * Device memory properties are identified in the host ACPI
+ * table. Set the nvgrace_gpu_pci_core_device structure.
+diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
+index 66b72c2892841d..a0595c745732a3 100644
+--- a/drivers/vfio/pci/vfio_pci_rdwr.c
++++ b/drivers/vfio/pci/vfio_pci_rdwr.c
+@@ -16,6 +16,7 @@
+ #include <linux/io.h>
+ #include <linux/vfio.h>
+ #include <linux/vgaarb.h>
++#include <linux/io-64-nonatomic-lo-hi.h>
+
+ #include "vfio_pci_priv.h"
+
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index d63c2d266d0735..3bf1043cd7957c 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -393,11 +393,6 @@ static ssize_t vfio_platform_read_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
+- if (off >= reg->size)
+- return -EINVAL;
+-
+- count = min_t(size_t, count, reg->size - off);
+-
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+@@ -482,11 +477,6 @@ static ssize_t vfio_platform_write_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
+- if (off >= reg->size)
+- return -EINVAL;
+-
+- count = min_t(size_t, count, reg->size - off);
+-
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+diff --git a/drivers/video/fbdev/omap/lcd_dma.c b/drivers/video/fbdev/omap/lcd_dma.c
+index f85817635a8c2c..0da23c57e4757e 100644
+--- a/drivers/video/fbdev/omap/lcd_dma.c
++++ b/drivers/video/fbdev/omap/lcd_dma.c
+@@ -432,8 +432,8 @@ static int __init omap_init_lcd_dma(void)
+
+ spin_lock_init(&lcd_dma.lock);
+
+- r = request_irq(INT_DMA_LCD, lcd_dma_irq_handler, 0,
+- "LCD DMA", NULL);
++ r = request_threaded_irq(INT_DMA_LCD, NULL, lcd_dma_irq_handler,
++ IRQF_ONESHOT, "LCD DMA", NULL);
+ if (r != 0)
+ pr_err("unable to request IRQ for LCD DMA (error %d)\n", r);
+
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index a337edcf8faf71..26c62e0d34e98b 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -74,19 +74,21 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev,
+ return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
+ }
+
++static inline bool range_requires_alignment(phys_addr_t p, size_t size)
++{
++ phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
++ phys_addr_t bus_addr = pfn_to_bfn(XEN_PFN_DOWN(p)) << XEN_PAGE_SHIFT;
++
++ return IS_ALIGNED(p, algn) && !IS_ALIGNED(bus_addr, algn);
++}
++
+ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
+ {
+ unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
+ unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
+- phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
+
+ next_bfn = pfn_to_bfn(xen_pfn);
+
+- /* If buffer is physically aligned, ensure DMA alignment. */
+- if (IS_ALIGNED(p, algn) &&
+- !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))
+- return 1;
+-
+ for (i = 1; i < nr_pages; i++)
+ if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
+ return 1;
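The new helper asks a narrow question: is the buffer physically aligned to its natural DMA alignment while its machine (bus) address is not, meaning a contiguous region must be created? A standalone sketch of the same check, assuming 4 KiB pages and a simplified get_order():

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12 /* assuming 4 KiB pages */

/* Simplified get_order(): pages needed, rounded up to a power of two. */
static int get_order(uint64_t size)
{
	int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

static int is_aligned(uint64_t x, uint64_t a) { return (x & (a - 1)) == 0; }

int main(void)
{
	uint64_t size = 24 * 1024; /* 24 KiB allocation */
	uint64_t algn = 1ull << (get_order(size) + PAGE_SHIFT); /* 32 KiB */
	uint64_t phys = 0x40008000; /* 32 KiB aligned physical address */
	uint64_t bus  = 0x7000a000; /* hypothetical bus address: misaligned */

	/* 1: physically aligned but the bus address is not -> remap needed */
	printf("algn=%lluK realign=%d\n", (unsigned long long)(algn >> 10),
	       is_aligned(phys, algn) && !is_aligned(bus, algn));
	return 0;
}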
+@@ -156,7 +158,8 @@ xen_swiotlb_alloc_coherent(struct device *dev, size_t size,
+
+ *dma_handle = xen_phys_to_dma(dev, phys);
+ if (*dma_handle + size - 1 > dma_mask ||
+- range_straddles_page_boundary(phys, size)) {
++ range_straddles_page_boundary(phys, size) ||
++ range_requires_alignment(phys, size)) {
+ if (xen_create_contiguous_region(phys, order, fls64(dma_mask),
+ dma_handle) != 0)
+ goto out_free_pages;
+@@ -182,7 +185,8 @@ xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
+ size = ALIGN(size, XEN_PAGE_SIZE);
+
+ if (WARN_ON_ONCE(dma_handle + size - 1 > dev->coherent_dma_mask) ||
+- WARN_ON_ONCE(range_straddles_page_boundary(phys, size)))
++ WARN_ON_ONCE(range_straddles_page_boundary(phys, size) ||
++ range_requires_alignment(phys, size)))
+ return;
+
+ if (TestClearPageXenRemapped(virt_to_page(vaddr)))
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 42c9899d9241c9..fe08c983d5bb4b 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -901,12 +901,11 @@ void clear_folio_extent_mapped(struct folio *folio)
+ folio_detach_private(folio);
+ }
+
+-static struct extent_map *__get_extent_map(struct inode *inode,
+- struct folio *folio, u64 start,
+- u64 len, struct extent_map **em_cached)
++static struct extent_map *get_extent_map(struct btrfs_inode *inode,
++ struct folio *folio, u64 start,
++ u64 len, struct extent_map **em_cached)
+ {
+ struct extent_map *em;
+- struct extent_state *cached_state = NULL;
+
+ ASSERT(em_cached);
+
+@@ -922,14 +921,12 @@ static struct extent_map *__get_extent_map(struct inode *inode,
+ *em_cached = NULL;
+ }
+
+- btrfs_lock_and_flush_ordered_range(BTRFS_I(inode), start, start + len - 1, &cached_state);
+- em = btrfs_get_extent(BTRFS_I(inode), folio, start, len);
++ em = btrfs_get_extent(inode, folio, start, len);
+ if (!IS_ERR(em)) {
+ BUG_ON(*em_cached);
+ refcount_inc(&em->refs);
+ *em_cached = em;
+ }
+- unlock_extent(&BTRFS_I(inode)->io_tree, start, start + len - 1, &cached_state);
+
+ return em;
+ }
+@@ -985,8 +982,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ end_folio_read(folio, true, cur, iosize);
+ break;
+ }
+- em = __get_extent_map(inode, folio, cur, end - cur + 1,
+- em_cached);
++ em = get_extent_map(BTRFS_I(inode), folio, cur, end - cur + 1, em_cached);
+ if (IS_ERR(em)) {
+ end_folio_read(folio, false, cur, end + 1 - cur);
+ return PTR_ERR(em);
+@@ -1087,11 +1083,18 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+
+ int btrfs_read_folio(struct file *file, struct folio *folio)
+ {
++ struct btrfs_inode *inode = folio_to_inode(folio);
++ const u64 start = folio_pos(folio);
++ const u64 end = start + folio_size(folio) - 1;
++ struct extent_state *cached_state = NULL;
+ struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
+ struct extent_map *em_cached = NULL;
+ int ret;
+
++ btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
+ ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
++ unlock_extent(&inode->io_tree, start, end, &cached_state);
++
+ free_extent_map(em_cached);
+
+ /*
+@@ -2268,12 +2271,20 @@ void btrfs_readahead(struct readahead_control *rac)
+ {
+ struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
+ struct folio *folio;
++ struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
++ const u64 start = readahead_pos(rac);
++ const u64 end = start + readahead_length(rac) - 1;
++ struct extent_state *cached_state = NULL;
+ struct extent_map *em_cached = NULL;
+ u64 prev_em_start = (u64)-1;
+
++ btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
++
+ while ((folio = readahead_folio(rac)) != NULL)
+ btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
+
++ unlock_extent(&inode->io_tree, start, end, &cached_state);
++
+ if (em_cached)
+ free_extent_map(em_cached);
+ submit_one_bio(&bio_ctrl);
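Structurally, the change hoists the ordered-range lock out of the per-folio loop: lock once for the whole readahead window, run every btrfs_do_readpage() under it, unlock after. A schematic sketch of that shape (a generic mutex standing in for the extent io_tree range lock; not btrfs code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for btrfs_do_readpage() on one folio of the window. */
static void read_one(int i) { printf("folio %d\n", i); }

static void readahead_window(int nfolios)
{
	pthread_mutex_lock(&range_lock);   /* was: taken per extent lookup */
	for (int i = 0; i < nfolios; i++)
		read_one(i);
	pthread_mutex_unlock(&range_lock); /* one unlock for the window */
}

int main(void)
{
	readahead_window(3);
	return 0;
}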
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 559c177456e6a0..848cb2c3d9ddeb 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1148,7 +1148,6 @@ int btrfs_write_check(struct kiocb *iocb, struct iov_iter *from, size_t count)
+ loff_t pos = iocb->ki_pos;
+ int ret;
+ loff_t oldsize;
+- loff_t start_pos;
+
+ /*
+ * Quickly bail out on NOWAIT writes if we don't have the nodatacow or
+@@ -1172,9 +1171,8 @@ int btrfs_write_check(struct kiocb *iocb, struct iov_iter *from, size_t count)
+ */
+ update_time_for_write(inode);
+
+- start_pos = round_down(pos, fs_info->sectorsize);
+ oldsize = i_size_read(inode);
+- if (start_pos > oldsize) {
++ if (pos > oldsize) {
+ /* Expand hole size to cover write data, preventing empty gap */
+ loff_t end_pos = round_up(pos + count, fs_info->sectorsize);
+
+diff --git a/fs/nfs/sysfs.c b/fs/nfs/sysfs.c
+index bf378ecd5d9fdd..7b59a40d40c061 100644
+--- a/fs/nfs/sysfs.c
++++ b/fs/nfs/sysfs.c
+@@ -280,9 +280,9 @@ void nfs_sysfs_link_rpc_client(struct nfs_server *server,
+ char name[RPC_CLIENT_NAME_SIZE];
+ int ret;
+
+- strcpy(name, clnt->cl_program->name);
+- strcat(name, uniq ? uniq : "");
+- strcat(name, "_client");
++ strscpy(name, clnt->cl_program->name, sizeof(name));
++ strncat(name, uniq ? uniq : "", sizeof(name) - strlen(name) - 1);
++ strncat(name, "_client", sizeof(name) - strlen(name) - 1);
+
+ ret = sysfs_create_link_nowarn(&server->kobj,
+ &clnt->cl_sysfs->kobject, name);
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 146a9463c3c230..d199688818557d 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -445,11 +445,20 @@ nfsd_file_dispose_list_delayed(struct list_head *dispose)
+ struct nfsd_file, nf_gc);
+ struct nfsd_net *nn = net_generic(nf->nf_net, nfsd_net_id);
+ struct nfsd_fcache_disposal *l = nn->fcache_disposal;
++ struct svc_serv *serv;
+
+ spin_lock(&l->lock);
+ list_move_tail(&nf->nf_gc, &l->freeme);
+ spin_unlock(&l->lock);
+- svc_wake_up(nn->nfsd_serv);
++
++ /*
++ * The filecache laundrette is shut down after the
++ * nn->nfsd_serv pointer is cleared, but before the
++ * svc_serv is freed.
++ */
++ serv = nn->nfsd_serv;
++ if (serv)
++ svc_wake_up(serv);
+ }
+ }
+
+diff --git a/fs/nfsd/nfs2acl.c b/fs/nfsd/nfs2acl.c
+index 4e3be7201b1c43..5fb202acb0fd00 100644
+--- a/fs/nfsd/nfs2acl.c
++++ b/fs/nfsd/nfs2acl.c
+@@ -84,6 +84,8 @@ static __be32 nfsacld_proc_getacl(struct svc_rqst *rqstp)
+ fail:
+ posix_acl_release(resp->acl_access);
+ posix_acl_release(resp->acl_default);
++ resp->acl_access = NULL;
++ resp->acl_default = NULL;
+ goto out;
+ }
+
+diff --git a/fs/nfsd/nfs3acl.c b/fs/nfsd/nfs3acl.c
+index 5e34e98db969db..7b5433bd301974 100644
+--- a/fs/nfsd/nfs3acl.c
++++ b/fs/nfsd/nfs3acl.c
+@@ -76,6 +76,8 @@ static __be32 nfsd3_proc_getacl(struct svc_rqst *rqstp)
+ fail:
+ posix_acl_release(resp->acl_access);
+ posix_acl_release(resp->acl_default);
++ resp->acl_access = NULL;
++ resp->acl_default = NULL;
+ goto out;
+ }
+
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index de076365254978..88c03e18257323 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1486,8 +1486,11 @@ nfsd4_run_cb_work(struct work_struct *work)
+ nfsd4_process_cb_update(cb);
+
+ clnt = clp->cl_cb_client;
+- if (!clnt) {
+- /* Callback channel broken, or client killed; give up: */
++ if (!clnt || clp->cl_state == NFSD4_COURTESY) {
++ /*
++ * Callback channel broken, client killed or
++ * nfs4_client in courtesy state; give up.
++ */
+ nfsd41_destroy_cb(cb);
+ return;
+ }
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 8d789b017fa9b6..da1a9312e61a0e 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -1406,7 +1406,7 @@ int attr_wof_frame_info(struct ntfs_inode *ni, struct ATTRIB *attr,
+ */
+ if (!attr->non_res) {
+ if (vbo[1] + bytes_per_off > le32_to_cpu(attr->res.data_size)) {
+- ntfs_inode_err(&ni->vfs_inode, "is corrupted");
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return -EINVAL;
+ }
+ addr = resident_data(attr);
+@@ -2587,7 +2587,7 @@ int attr_force_nonresident(struct ntfs_inode *ni)
+
+ attr = ni_find_attr(ni, NULL, &le, ATTR_DATA, NULL, 0, NULL, &mi);
+ if (!attr) {
+- ntfs_bad_inode(&ni->vfs_inode, "no data attribute");
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return -ENOENT;
+ }
+
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index fc6a8aa29e3afe..b6da80c69ca634 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -512,7 +512,7 @@ static int ntfs_readdir(struct file *file, struct dir_context *ctx)
+ ctx->pos = pos;
+ } else if (err < 0) {
+ if (err == -EINVAL)
+- ntfs_inode_err(dir, "directory corrupted");
++ _ntfs_bad_inode(dir);
+ ctx->pos = eod;
+ }
+
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index c33e818b3164cd..175662acd5eaf0 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -148,8 +148,10 @@ int ni_load_mi_ex(struct ntfs_inode *ni, CLST rno, struct mft_inode **mi)
+ goto out;
+
+ err = mi_get(ni->mi.sbi, rno, &r);
+- if (err)
++ if (err) {
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return err;
++ }
+
+ ni_add_mi(ni, r);
+
+@@ -238,8 +240,7 @@ struct ATTRIB *ni_find_attr(struct ntfs_inode *ni, struct ATTRIB *attr,
+ return attr;
+
+ out:
+- ntfs_inode_err(&ni->vfs_inode, "failed to parse mft record");
+- ntfs_set_state(ni->mi.sbi, NTFS_DIRTY_ERROR);
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return NULL;
+ }
+
+@@ -330,6 +331,7 @@ struct ATTRIB *ni_load_attr(struct ntfs_inode *ni, enum ATTR_TYPE type,
+ vcn <= le64_to_cpu(attr->nres.evcn))
+ return attr;
+
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return NULL;
+ }
+
+@@ -1604,8 +1606,8 @@ int ni_delete_all(struct ntfs_inode *ni)
+ roff = le16_to_cpu(attr->nres.run_off);
+
+ if (roff > asize) {
+- _ntfs_bad_inode(&ni->vfs_inode);
+- return -EINVAL;
++ /* ni_enum_attr_ex checks this case. */
++ continue;
+ }
+
+ /* run==1 means unpack and deallocate. */
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index 0fa636038b4e4d..6c73e93afb478c 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -908,7 +908,11 @@ void ntfs_bad_inode(struct inode *inode, const char *hint)
+
+ ntfs_inode_err(inode, "%s", hint);
+ make_bad_inode(inode);
+- ntfs_set_state(sbi, NTFS_DIRTY_ERROR);
++ /* Avoid recursion if bad inode is $Volume. */
++ if (inode->i_ino != MFT_REC_VOL &&
++ !(sbi->flags & NTFS_FLAGS_LOG_REPLAYING)) {
++ ntfs_set_state(sbi, NTFS_DIRTY_ERROR);
++ }
+ }
+
+ /*
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 9089c58a005ce1..7eb9fae22f8da6 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -1094,8 +1094,7 @@ int indx_read(struct ntfs_index *indx, struct ntfs_inode *ni, CLST vbn,
+
+ ok:
+ if (!index_buf_check(ib, bytes, &vbn)) {
+- ntfs_inode_err(&ni->vfs_inode, "directory corrupted");
+- ntfs_set_state(ni->mi.sbi, NTFS_DIRTY_ERROR);
++ _ntfs_bad_inode(&ni->vfs_inode);
+ err = -EINVAL;
+ goto out;
+ }
+@@ -1117,8 +1116,7 @@ int indx_read(struct ntfs_index *indx, struct ntfs_inode *ni, CLST vbn,
+
+ out:
+ if (err == -E_NTFS_CORRUPT) {
+- ntfs_inode_err(&ni->vfs_inode, "directory corrupted");
+- ntfs_set_state(ni->mi.sbi, NTFS_DIRTY_ERROR);
++ _ntfs_bad_inode(&ni->vfs_inode);
+ err = -EINVAL;
+ }
+
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index be04d2845bb7bc..a1e11228dafd02 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -410,6 +410,9 @@ static struct inode *ntfs_read_mft(struct inode *inode,
+ if (!std5)
+ goto out;
+
++ if (is_bad_inode(inode))
++ goto out;
++
+ if (!is_match && name) {
+ err = -ENOENT;
+ goto out;
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index 1b508f5433846e..fa41db08848802 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -393,9 +393,9 @@ static ssize_t orangefs_debug_write(struct file *file,
+ * Thwart users who try to jamb a ridiculous number
+ * of bytes into the debug file...
+ */
+- if (count > ORANGEFS_MAX_DEBUG_STRING_LEN + 1) {
++ if (count > ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ silly = count;
+- count = ORANGEFS_MAX_DEBUG_STRING_LEN + 1;
++ count = ORANGEFS_MAX_DEBUG_STRING_LEN;
+ }
+
+ buf = kzalloc(ORANGEFS_MAX_DEBUG_STRING_LEN, GFP_KERNEL);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 94785abc9b1b2d..05274121e46f04 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1476,7 +1476,6 @@ struct cifs_io_parms {
+ struct cifs_io_request {
+ struct netfs_io_request rreq;
+ struct cifsFileInfo *cfile;
+- struct TCP_Server_Info *server;
+ pid_t pid;
+ };
+
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index a58a3333ecc300..313c851fc1c122 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -147,7 +147,7 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
+ struct netfs_io_request *rreq = subreq->rreq;
+ struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+ struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+- struct TCP_Server_Info *server = req->server;
++ struct TCP_Server_Info *server;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+ size_t size;
+ int rc = 0;
+@@ -156,6 +156,8 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
+ rdata->xid = get_xid();
+ rdata->have_xid = true;
+ }
++
++ server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses);
+ rdata->server = server;
+
+ if (cifs_sb->ctx->rsize == 0)
+@@ -198,7 +200,7 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
+ struct netfs_io_request *rreq = subreq->rreq;
+ struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+ struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+- struct TCP_Server_Info *server = req->server;
++ struct TCP_Server_Info *server = rdata->server;
+ int rc = 0;
+
+ cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
+@@ -265,7 +267,6 @@ static int cifs_init_request(struct netfs_io_request *rreq, struct file *file)
+ open_file = file->private_data;
+ rreq->netfs_priv = file->private_data;
+ req->cfile = cifsFileInfo_get(open_file);
+- req->server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses);
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
+ req->pid = req->cfile->pid;
+ } else if (rreq->origin != NETFS_WRITEBACK) {
+diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
+index a6f8b098c56f14..3bd9f482f0c3e6 100644
+--- a/include/drm/display/drm_dp.h
++++ b/include/drm/display/drm_dp.h
+@@ -359,6 +359,7 @@
+ # define DP_DSC_BITS_PER_PIXEL_1_4 0x2
+ # define DP_DSC_BITS_PER_PIXEL_1_2 0x3
+ # define DP_DSC_BITS_PER_PIXEL_1_1 0x4
++# define DP_DSC_BITS_PER_PIXEL_MASK 0x7
+
+ #define DP_PSR_SUPPORT 0x070 /* XXX 1.2? */
+ # define DP_PSR_IS_SUPPORTED 1
+diff --git a/include/kunit/platform_device.h b/include/kunit/platform_device.h
+index 0fc0999d2420aa..f8236a8536f7eb 100644
+--- a/include/kunit/platform_device.h
++++ b/include/kunit/platform_device.h
+@@ -2,6 +2,7 @@
+ #ifndef _KUNIT_PLATFORM_DRIVER_H
+ #define _KUNIT_PLATFORM_DRIVER_H
+
++struct completion;
+ struct kunit;
+ struct platform_device;
+ struct platform_driver;
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index c5063e0a38a058..a53cbe25691043 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -880,12 +880,22 @@ static inline bool blk_mq_add_to_batch(struct request *req,
+ void (*complete)(struct io_comp_batch *))
+ {
+ /*
+- * blk_mq_end_request_batch() can't end request allocated from
+- * sched tags
++ * Check various conditions that exclude batch processing:
++ * 1) No batch container
++ * 2) Has scheduler data attached
++ * 3) Not a passthrough request and end_io set
++ * 4) Not a passthrough request and an ioerror
+ */
+- if (!iob || (req->rq_flags & RQF_SCHED_TAGS) || ioerror ||
+- (req->end_io && !blk_rq_is_passthrough(req)))
++ if (!iob)
+ return false;
++ if (req->rq_flags & RQF_SCHED_TAGS)
++ return false;
++ if (!blk_rq_is_passthrough(req)) {
++ if (req->end_io)
++ return false;
++ if (ioerror < 0)
++ return false;
++ }
+
+ if (!iob->complete)
+ iob->complete = complete;
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 47ae4c4d924c28..a32eebcd23da47 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -71,9 +71,6 @@ enum {
+
+ /* Cgroup is frozen. */
+ CGRP_FROZEN,
+-
+- /* Control group has to be killed. */
+- CGRP_KILL,
+ };
+
+ /* cgroup_root->flags */
+@@ -460,6 +457,9 @@ struct cgroup {
+
+ int nr_threaded_children; /* # of live threaded child cgroups */
+
++ /* sequence number for cgroup.kill, serialized by css_set_lock. */
++ unsigned int kill_seq;
++
+ struct kernfs_node *kn; /* cgroup kernfs entry */
+ struct cgroup_file procs_file; /* handle for "cgroup.procs" */
+ struct cgroup_file events_file; /* handle for "cgroup.events" */
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index e28d8806603376..2f1bfd7562eb2b 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -128,6 +128,7 @@ typedef struct {
+ #define EFI_MEMORY_RO ((u64)0x0000000000020000ULL) /* read-only */
+ #define EFI_MEMORY_SP ((u64)0x0000000000040000ULL) /* soft reserved */
+ #define EFI_MEMORY_CPU_CRYPTO ((u64)0x0000000000080000ULL) /* supports encryption */
++#define EFI_MEMORY_HOT_PLUGGABLE BIT_ULL(20) /* supports unplugging at runtime */
+ #define EFI_MEMORY_RUNTIME ((u64)0x8000000000000000ULL) /* range requires runtime mapping */
+ #define EFI_MEMORY_DESCRIPTOR_VERSION 1
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 02d3bafebbe77c..4f17b786828af7 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2577,6 +2577,12 @@ struct net *dev_net(const struct net_device *dev)
+ return read_pnet(&dev->nd_net);
+ }
+
++static inline
++struct net *dev_net_rcu(const struct net_device *dev)
++{
++ return read_pnet_rcu(&dev->nd_net);
++}
++
+ static inline
+ void dev_net_set(struct net_device *dev, struct net *net)
+ {
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 4cf6aaed5f35db..22f6b018cff8de 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2589,6 +2589,11 @@
+
+ #define PCI_VENDOR_ID_REDHAT 0x1b36
+
++#define PCI_VENDOR_ID_WCHIC 0x1c00
++#define PCI_DEVICE_ID_WCHIC_CH382_0S1P 0x3050
++#define PCI_DEVICE_ID_WCHIC_CH382_2S1P 0x3250
++#define PCI_DEVICE_ID_WCHIC_CH382_2S 0x3253
++
+ #define PCI_VENDOR_ID_SILICOM_DENMARK 0x1c2c
+
+ #define PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS 0x1c36
+@@ -2643,6 +2648,12 @@
+ #define PCI_VENDOR_ID_AKS 0x416c
+ #define PCI_DEVICE_ID_AKS_ALADDINCARD 0x0100
+
++#define PCI_VENDOR_ID_WCHCN 0x4348
++#define PCI_DEVICE_ID_WCHCN_CH353_4S 0x3453
++#define PCI_DEVICE_ID_WCHCN_CH353_2S1PF 0x5046
++#define PCI_DEVICE_ID_WCHCN_CH353_1S1P 0x5053
++#define PCI_DEVICE_ID_WCHCN_CH353_2S1P 0x7053
++
+ #define PCI_VENDOR_ID_ACCESSIO 0x494f
+ #define PCI_DEVICE_ID_ACCESSIO_WDG_CSM 0x22c0
+
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 0f2aeb37bbb047..ca1db4b92c3244 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -43,6 +43,7 @@ struct kernel_clone_args {
+ void *fn_arg;
+ struct cgroup *cgrp;
+ struct css_set *cset;
++ unsigned int kill_seq;
+ };
+
+ /*
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 0f303cc602520e..08647c99d79c9a 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -440,6 +440,15 @@ static inline void dst_set_expires(struct dst_entry *dst, int timeout)
+ dst->expires = expires;
+ }
+
++static inline unsigned int dst_dev_overhead(struct dst_entry *dst,
++ struct sk_buff *skb)
++{
++ if (likely(dst))
++ return LL_RESERVED_SPACE(dst->dev);
++
++ return skb->mac_len;
++}
++
+ INDIRECT_CALLABLE_DECLARE(int ip6_output(struct net *, struct sock *,
+ struct sk_buff *));
+ INDIRECT_CALLABLE_DECLARE(int ip_output(struct net *, struct sock *,
+diff --git a/include/net/ip.h b/include/net/ip.h
+index d92d3bc3ec0e25..fe4f8543811433 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -465,9 +465,12 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ bool forwarding)
+ {
+ const struct rtable *rt = dst_rtable(dst);
+- struct net *net = dev_net(dst->dev);
+- unsigned int mtu;
++ unsigned int mtu, res;
++ struct net *net;
++
++ rcu_read_lock();
+
++ net = dev_net_rcu(dst->dev);
+ if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) ||
+ ip_mtu_locked(dst) ||
+ !forwarding) {
+@@ -491,7 +494,11 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ out:
+ mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
+
+- return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
++ res = mtu - lwtunnel_headroom(dst->lwtstate, mtu);
++
++ rcu_read_unlock();
++
++ return res;
+ }
+
+ static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
+diff --git a/include/net/l3mdev.h b/include/net/l3mdev.h
+index 031c661aa14df7..bdfa9d414360c7 100644
+--- a/include/net/l3mdev.h
++++ b/include/net/l3mdev.h
+@@ -198,10 +198,12 @@ struct sk_buff *l3mdev_l3_out(struct sock *sk, struct sk_buff *skb, u16 proto)
+ if (netif_is_l3_slave(dev)) {
+ struct net_device *master;
+
++ rcu_read_lock();
+ master = netdev_master_upper_dev_get_rcu(dev);
+ if (master && master->l3mdev_ops->l3mdev_l3_out)
+ skb = master->l3mdev_ops->l3mdev_l3_out(master, sk,
+ skb, proto);
++ rcu_read_unlock();
+ }
+
+ return skb;
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 9398c8f4995368..da93873df4dbd7 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -387,7 +387,7 @@ static inline struct net *read_pnet(const possible_net_t *pnet)
+ #endif
+ }
+
+-static inline struct net *read_pnet_rcu(possible_net_t *pnet)
++static inline struct net *read_pnet_rcu(const possible_net_t *pnet)
+ {
+ #ifdef CONFIG_NET_NS
+ return rcu_dereference(pnet->net);
+diff --git a/include/net/route.h b/include/net/route.h
+index 1789f1e6640b46..da34b6fa9862dc 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -363,10 +363,15 @@ static inline int inet_iif(const struct sk_buff *skb)
+ static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
+ {
+ int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT);
+- struct net *net = dev_net(dst->dev);
+
+- if (hoplimit == 0)
++ if (hoplimit == 0) {
++ const struct net *net;
++
++ rcu_read_lock();
++ net = dev_net_rcu(dst->dev);
+ hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);
++ rcu_read_unlock();
++ }
+ return hoplimit;
+ }
+
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index d5e43a1dcff226..47cba116f87b81 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -402,6 +402,9 @@ enum clk_gating_state {
+ * delay_ms
+ * @ungate_work: worker to turn on clocks that will be used in case of
+ * interrupt context
++ * @clk_gating_workq: workqueue for clock gating work.
++ * @lock: serialize access to some struct ufs_clk_gating members. An outer lock
++ * relative to the host lock
+ * @state: the current clocks state
+ * @delay_ms: gating delay in ms
+ * @is_suspended: clk gating is suspended when set to 1 which can be used
+@@ -412,11 +415,14 @@ enum clk_gating_state {
+ * @is_initialized: Indicates whether clock gating is initialized or not
+ * @active_reqs: number of requests that are pending and should be waited for
+ * completion before gating clocks.
+- * @clk_gating_workq: workqueue for clock gating work.
+ */
+ struct ufs_clk_gating {
+ struct delayed_work gate_work;
+ struct work_struct ungate_work;
++ struct workqueue_struct *clk_gating_workq;
++
++ spinlock_t lock;
++
+ enum clk_gating_state state;
+ unsigned long delay_ms;
+ bool is_suspended;
+@@ -425,7 +431,6 @@ struct ufs_clk_gating {
+ bool is_enabled;
+ bool is_initialized;
+ int active_reqs;
+- struct workqueue_struct *clk_gating_workq;
+ };
+
+ /**
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index eec5eb7de8430e..e1895952066eeb 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -420,6 +420,12 @@ void io_destroy_buffers(struct io_ring_ctx *ctx)
+ }
+ }
+
++static void io_destroy_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl)
++{
++ xa_erase(&ctx->io_bl_xa, bl->bgid);
++ io_put_bl(ctx, bl);
++}
++
+ int io_remove_buffers_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ struct io_provide_buf *p = io_kiocb_to_cmd(req, struct io_provide_buf);
+@@ -717,12 +723,13 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+ /* if mapped buffer ring OR classic exists, don't allow */
+ if (bl->flags & IOBL_BUF_RING || !list_empty(&bl->buf_list))
+ return -EEXIST;
+- } else {
+- free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
+- if (!bl)
+- return -ENOMEM;
++ io_destroy_bl(ctx, bl);
+ }
+
++ free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
++ if (!bl)
++ return -ENOMEM;
++
+ if (!(reg.flags & IOU_PBUF_RING_MMAP))
+ ret = io_pin_pbuf_ring(&reg, bl);
+ else
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 874f9e2defd583..b2ce4b56100271 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -65,9 +65,6 @@ bool io_uring_try_cancel_uring_cmd(struct io_ring_ctx *ctx,
+ continue;
+
+ if (cmd->flags & IORING_URING_CMD_CANCELABLE) {
+- /* ->sqe isn't available if no async data */
+- if (!req_has_async_data(req))
+- cmd->sqe = NULL;
+ file->f_op->uring_cmd(cmd, IO_URING_F_CANCEL |
+ IO_URING_F_COMPLETE_DEFER);
+ ret = true;
+diff --git a/io_uring/waitid.c b/io_uring/waitid.c
+index 6362ec20abc0cf..2f7b5eeab845e9 100644
+--- a/io_uring/waitid.c
++++ b/io_uring/waitid.c
+@@ -118,7 +118,6 @@ static int io_waitid_finish(struct io_kiocb *req, int ret)
+ static void io_waitid_complete(struct io_kiocb *req, int ret)
+ {
+ struct io_waitid *iw = io_kiocb_to_cmd(req, struct io_waitid);
+- struct io_tw_state ts = {};
+
+ /* anyone completing better be holding a reference */
+ WARN_ON_ONCE(!(atomic_read(&iw->refs) & IO_WAITID_REF_MASK));
+@@ -131,7 +130,6 @@ static void io_waitid_complete(struct io_kiocb *req, int ret)
+ if (ret < 0)
+ req_set_fail(req);
+ io_req_set_res(req, ret, 0);
+- io_req_task_complete(req, &ts);
+ }
+
+ static bool __io_waitid_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+@@ -153,6 +151,7 @@ static bool __io_waitid_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+ list_del_init(&iwa->wo.child_wait.entry);
+ spin_unlock_irq(&iw->head->lock);
+ io_waitid_complete(req, -ECANCELED);
++ io_req_queue_tw_complete(req, -ECANCELED);
+ return true;
+ }
+
+@@ -258,6 +257,7 @@ static void io_waitid_cb(struct io_kiocb *req, struct io_tw_state *ts)
+ }
+
+ io_waitid_complete(req, ret);
++ io_req_task_complete(req, ts);
+ }
+
+ static int io_waitid_wait(struct wait_queue_entry *wait, unsigned mode,
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index e275eaf2de7f8f..216535e055e112 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -4013,7 +4013,7 @@ static void __cgroup_kill(struct cgroup *cgrp)
+ lockdep_assert_held(&cgroup_mutex);
+
+ spin_lock_irq(&css_set_lock);
+- set_bit(CGRP_KILL, &cgrp->flags);
++ cgrp->kill_seq++;
+ spin_unlock_irq(&css_set_lock);
+
+ css_task_iter_start(&cgrp->self, CSS_TASK_ITER_PROCS | CSS_TASK_ITER_THREADED, &it);
+@@ -4029,10 +4029,6 @@ static void __cgroup_kill(struct cgroup *cgrp)
+ send_sig(SIGKILL, task, 0);
+ }
+ css_task_iter_end(&it);
+-
+- spin_lock_irq(&css_set_lock);
+- clear_bit(CGRP_KILL, &cgrp->flags);
+- spin_unlock_irq(&css_set_lock);
+ }
+
+ static void cgroup_kill(struct cgroup *cgrp)
+@@ -6489,6 +6485,10 @@ static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
+ spin_lock_irq(&css_set_lock);
+ cset = task_css_set(current);
+ get_css_set(cset);
++ if (kargs->cgrp)
++ kargs->kill_seq = kargs->cgrp->kill_seq;
++ else
++ kargs->kill_seq = cset->dfl_cgrp->kill_seq;
+ spin_unlock_irq(&css_set_lock);
+
+ if (!(kargs->flags & CLONE_INTO_CGROUP)) {
+@@ -6672,6 +6672,7 @@ void cgroup_post_fork(struct task_struct *child,
+ struct kernel_clone_args *kargs)
+ __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
+ {
++ unsigned int cgrp_kill_seq = 0;
+ unsigned long cgrp_flags = 0;
+ bool kill = false;
+ struct cgroup_subsys *ss;
+@@ -6685,10 +6686,13 @@ void cgroup_post_fork(struct task_struct *child,
+
+ /* init tasks are special, only link regular threads */
+ if (likely(child->pid)) {
+- if (kargs->cgrp)
++ if (kargs->cgrp) {
+ cgrp_flags = kargs->cgrp->flags;
+- else
++ cgrp_kill_seq = kargs->cgrp->kill_seq;
++ } else {
+ cgrp_flags = cset->dfl_cgrp->flags;
++ cgrp_kill_seq = cset->dfl_cgrp->kill_seq;
++ }
+
+ WARN_ON_ONCE(!list_empty(&child->cg_list));
+ cset->nr_tasks++;
+@@ -6723,7 +6727,7 @@ void cgroup_post_fork(struct task_struct *child,
+ * child down right after we finished preparing it for
+ * userspace.
+ */
+- kill = test_bit(CGRP_KILL, &cgrp_flags);
++ kill = kargs->kill_seq != cgrp_kill_seq;
+ }
+
+ spin_unlock_irq(&css_set_lock);
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index a06b452724118a..ce295b73c0a366 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -586,7 +586,6 @@ static void root_cgroup_cputime(struct cgroup_base_stat *bstat)
+
+ cputime->sum_exec_runtime += user;
+ cputime->sum_exec_runtime += sys;
+- cputime->sum_exec_runtime += cpustat[CPUTIME_STEAL];
+
+ #ifdef CONFIG_SCHED_CORE
+ bstat->forceidle_sum += cpustat[CPUTIME_FORCEIDLE];
+diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
+index db68a964e34e26..c4a3ccf6a8ace4 100644
+--- a/kernel/sched/autogroup.c
++++ b/kernel/sched/autogroup.c
+@@ -150,7 +150,7 @@ void sched_autogroup_exit_task(struct task_struct *p)
+ * see this thread after that: we can no longer use signal->autogroup.
+ * See the PF_EXITING check in task_wants_autogroup().
+ */
+- sched_move_task(p);
++ sched_move_task(p, true);
+ }
+
+ static void
+@@ -182,7 +182,7 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
+ * sched_autogroup_exit_task().
+ */
+ for_each_thread(p, t)
+- sched_move_task(t);
++ sched_move_task(t, true);
+
+ unlock_task_sighand(p, &flags);
+ autogroup_kref_put(prev);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5d67f41d05d40b..c72356836eb628 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8953,7 +8953,7 @@ static void sched_change_group(struct task_struct *tsk, struct task_group *group
+ * now. This function just updates tsk->se.cfs_rq and tsk->se.parent to reflect
+ * its new group.
+ */
+-void sched_move_task(struct task_struct *tsk)
++void sched_move_task(struct task_struct *tsk, bool for_autogroup)
+ {
+ int queued, running, queue_flags =
+ DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
+@@ -8982,7 +8982,8 @@ void sched_move_task(struct task_struct *tsk)
+ put_prev_task(rq, tsk);
+
+ sched_change_group(tsk, group);
+- scx_move_task(tsk);
++ if (!for_autogroup)
++ scx_cgroup_move_task(tsk);
+
+ if (queued)
+ enqueue_task(rq, tsk, queue_flags);
+@@ -9083,7 +9084,7 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
+ struct cgroup_subsys_state *css;
+
+ cgroup_taskset_for_each(task, css, tset)
+- sched_move_task(task);
++ sched_move_task(task, false);
+
+ scx_cgroup_finish_attach();
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 4c4681cb9337b4..689f7e8f69f54d 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2458,6 +2458,9 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ {
+ struct rq *src_rq = task_rq(p);
+ struct rq *dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
++#ifdef CONFIG_SMP
++ struct rq *locked_rq = rq;
++#endif
+
+ /*
+ * We're synchronized against dequeue through DISPATCHING. As @p can't
+@@ -2494,8 +2497,9 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ atomic_long_set_release(&p->scx.ops_state, SCX_OPSS_NONE);
+
+ /* switch to @src_rq lock */
+- if (rq != src_rq) {
+- raw_spin_rq_unlock(rq);
++ if (locked_rq != src_rq) {
++ raw_spin_rq_unlock(locked_rq);
++ locked_rq = src_rq;
+ raw_spin_rq_lock(src_rq);
+ }
+
+@@ -2513,6 +2517,8 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ } else {
+ move_remote_task_to_local_dsq(p, enq_flags,
+ src_rq, dst_rq);
++ /* task has been moved to dst_rq, which is now locked */
++ locked_rq = dst_rq;
+ }
+
+ /* if the destination CPU is idle, wake it up */
+@@ -2521,8 +2527,8 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ }
+
+ /* switch back to @rq lock */
+- if (rq != dst_rq) {
+- raw_spin_rq_unlock(dst_rq);
++ if (locked_rq != rq) {
++ raw_spin_rq_unlock(locked_rq);
+ raw_spin_rq_lock(rq);
+ }
+ #else /* CONFIG_SMP */
+@@ -3441,7 +3447,7 @@ static void task_tick_scx(struct rq *rq, struct task_struct *curr, int queued)
+ curr->scx.slice = 0;
+ touch_core_sched(rq, curr);
+ } else if (SCX_HAS_OP(tick)) {
+- SCX_CALL_OP(SCX_KF_REST, tick, curr);
++ SCX_CALL_OP_TASK(SCX_KF_REST, tick, curr);
+ }
+
+ if (!curr->scx.slice)
+@@ -3588,7 +3594,7 @@ static void scx_ops_disable_task(struct task_struct *p)
+ WARN_ON_ONCE(scx_get_task_state(p) != SCX_TASK_ENABLED);
+
+ if (SCX_HAS_OP(disable))
+- SCX_CALL_OP(SCX_KF_REST, disable, p);
++ SCX_CALL_OP_TASK(SCX_KF_REST, disable, p);
+ scx_set_task_state(p, SCX_TASK_READY);
+ }
+
+@@ -3617,7 +3623,7 @@ static void scx_ops_exit_task(struct task_struct *p)
+ }
+
+ if (SCX_HAS_OP(exit_task))
+- SCX_CALL_OP(SCX_KF_REST, exit_task, p, &args);
++ SCX_CALL_OP_TASK(SCX_KF_REST, exit_task, p, &args);
+ scx_set_task_state(p, SCX_TASK_NONE);
+ }
+
+@@ -3913,24 +3919,11 @@ int scx_cgroup_can_attach(struct cgroup_taskset *tset)
+ return ops_sanitize_err("cgroup_prep_move", ret);
+ }
+
+-void scx_move_task(struct task_struct *p)
++void scx_cgroup_move_task(struct task_struct *p)
+ {
+ if (!scx_cgroup_enabled)
+ return;
+
+- /*
+- * We're called from sched_move_task() which handles both cgroup and
+- * autogroup moves. Ignore the latter.
+- *
+- * Also ignore exiting tasks, because in the exit path tasks transition
+- * from the autogroup to the root group, so task_group_is_autogroup()
+- * alone isn't able to catch exiting autogroup tasks. This is safe for
+- * cgroup_move(), because cgroup migrations never happen for PF_EXITING
+- * tasks.
+- */
+- if (task_group_is_autogroup(task_group(p)) || (p->flags & PF_EXITING))
+- return;
+-
+ /*
+ * @p must have ops.cgroup_prep_move() called on it and thus
+ * cgrp_moving_from set.
+diff --git a/kernel/sched/ext.h b/kernel/sched/ext.h
+index 4d022d17ac7dd6..1079b56b0f7aea 100644
+--- a/kernel/sched/ext.h
++++ b/kernel/sched/ext.h
+@@ -73,7 +73,7 @@ static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {}
+ int scx_tg_online(struct task_group *tg);
+ void scx_tg_offline(struct task_group *tg);
+ int scx_cgroup_can_attach(struct cgroup_taskset *tset);
+-void scx_move_task(struct task_struct *p);
++void scx_cgroup_move_task(struct task_struct *p);
+ void scx_cgroup_finish_attach(void);
+ void scx_cgroup_cancel_attach(struct cgroup_taskset *tset);
+ void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight);
+@@ -82,7 +82,7 @@ void scx_group_set_idle(struct task_group *tg, bool idle);
+ static inline int scx_tg_online(struct task_group *tg) { return 0; }
+ static inline void scx_tg_offline(struct task_group *tg) {}
+ static inline int scx_cgroup_can_attach(struct cgroup_taskset *tset) { return 0; }
+-static inline void scx_move_task(struct task_struct *p) {}
++static inline void scx_cgroup_move_task(struct task_struct *p) {}
+ static inline void scx_cgroup_finish_attach(void) {}
+ static inline void scx_cgroup_cancel_attach(struct cgroup_taskset *tset) {}
+ static inline void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight) {}
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 5426969cf478a0..d79de755c1c269 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -572,7 +572,7 @@ extern void sched_online_group(struct task_group *tg,
+ extern void sched_destroy_group(struct task_group *tg);
+ extern void sched_release_group(struct task_group *tg);
+
+-extern void sched_move_task(struct task_struct *tsk);
++extern void sched_move_task(struct task_struct *tsk, bool for_autogroup);
+
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 8a40a616288b81..58fb7280cabbe6 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -365,16 +365,18 @@ void clocksource_verify_percpu(struct clocksource *cs)
+ cpumask_clear(&cpus_ahead);
+ cpumask_clear(&cpus_behind);
+ cpus_read_lock();
+- preempt_disable();
++ migrate_disable();
+ clocksource_verify_choose_cpus();
+ if (cpumask_empty(&cpus_chosen)) {
+- preempt_enable();
++ migrate_enable();
+ cpus_read_unlock();
+ pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name);
+ return;
+ }
+ testcpu = smp_processor_id();
+- pr_warn("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n", cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
++ pr_info("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n",
++ cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
++ preempt_disable();
+ for_each_cpu(cpu, &cpus_chosen) {
+ if (cpu == testcpu)
+ continue;
+@@ -394,6 +396,7 @@ void clocksource_verify_percpu(struct clocksource *cs)
+ cs_nsec_min = cs_nsec;
+ }
+ preempt_enable();
++ migrate_enable();
+ cpus_read_unlock();
+ if (!cpumask_empty(&cpus_ahead))
+ pr_warn(" CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 0f8f3ffc6f0904..ea8ad5480e286d 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1672,7 +1672,8 @@ static void *rb_range_buffer(struct ring_buffer_per_cpu *cpu_buffer, int idx)
+ * must be the same.
+ */
+ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+- struct trace_buffer *buffer, int nr_pages)
++ struct trace_buffer *buffer, int nr_pages,
++ unsigned long *subbuf_mask)
+ {
+ int subbuf_size = PAGE_SIZE;
+ struct buffer_data_page *subbuf;
+@@ -1680,6 +1681,9 @@ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+ unsigned long buffers_end;
+ int i;
+
++ if (!subbuf_mask)
++ return false;
++
+ /* Check the meta magic and meta struct size */
+ if (meta->magic != RING_BUFFER_META_MAGIC ||
+ meta->struct_size != sizeof(*meta)) {
+@@ -1712,6 +1716,8 @@ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+
+ subbuf = rb_subbufs_from_meta(meta);
+
++ bitmap_clear(subbuf_mask, 0, meta->nr_subbufs);
++
+ /* Is the meta buffers and the subbufs themselves have correct data? */
+ for (i = 0; i < meta->nr_subbufs; i++) {
+ if (meta->buffers[i] < 0 ||
+@@ -1725,6 +1731,12 @@ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+ return false;
+ }
+
++ if (test_bit(meta->buffers[i], subbuf_mask)) {
++ pr_info("Ring buffer boot meta [%d] array has duplicates\n", cpu);
++ return false;
++ }
++
++ set_bit(meta->buffers[i], subbuf_mask);
+ subbuf = (void *)subbuf + subbuf_size;
+ }
+
+@@ -1838,6 +1850,11 @@ static void rb_meta_validate_events(struct ring_buffer_per_cpu *cpu_buffer)
+ cpu_buffer->cpu);
+ goto invalid;
+ }
++
++ /* If the buffer has content, update pages_touched */
++ if (ret)
++ local_inc(&cpu_buffer->pages_touched);
++
+ entries += ret;
+ entry_bytes += local_read(&head_page->page->commit);
+ local_set(&cpu_buffer->head_page->entries, ret);
+@@ -1889,17 +1906,22 @@ static void rb_meta_init_text_addr(struct ring_buffer_meta *meta)
+ static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)
+ {
+ struct ring_buffer_meta *meta;
++ unsigned long *subbuf_mask;
+ unsigned long delta;
+ void *subbuf;
+ int cpu;
+ int i;
+
++ /* Create a mask to test the subbuf array */
++ subbuf_mask = bitmap_alloc(nr_pages + 1, GFP_KERNEL);
++ /* If subbuf_mask fails to allocate, then rb_meta_valid() will return false */
++
+ for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
+ void *next_meta;
+
+ meta = rb_range_meta(buffer, nr_pages, cpu);
+
+- if (rb_meta_valid(meta, cpu, buffer, nr_pages)) {
++ if (rb_meta_valid(meta, cpu, buffer, nr_pages, subbuf_mask)) {
+ /* Make the mappings match the current address */
+ subbuf = rb_subbufs_from_meta(meta);
+ delta = (unsigned long)subbuf - meta->first_buffer;
+@@ -1943,6 +1965,7 @@ static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)
+ subbuf += meta->subbuf_size;
+ }
+ }
++ bitmap_free(subbuf_mask);
+ }
+
+ static void *rbm_start(struct seq_file *m, loff_t *pos)
+@@ -7157,6 +7180,7 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
+ kfree(cpu_buffer->subbuf_ids);
+ cpu_buffer->subbuf_ids = NULL;
+ rb_free_meta_page(cpu_buffer);
++ atomic_dec(&cpu_buffer->resize_disabled);
+ }
+
+ unlock:
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b04990385a6a87..bfc4ac265c2c33 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8364,6 +8364,10 @@ static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+ struct trace_iterator *iter = &info->iter;
+ int ret = 0;
+
++ /* Currently the boot mapped buffer is not supported for mmap */
++ if (iter->tr->flags & TRACE_ARRAY_FL_BOOT)
++ return -ENODEV;
++
+ ret = get_snapshot_map(iter->tr);
+ if (ret)
+ return ret;
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index cee65cb4310816..a9d64e08dffc7c 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3509,12 +3509,6 @@ static int rescuer_thread(void *__rescuer)
+ }
+ }
+
+- /*
+- * Put the reference grabbed by send_mayday(). @pool won't
+- * go away while we're still attached to it.
+- */
+- put_pwq(pwq);
+-
+ /*
+ * Leave this pool. Notify regular workers; otherwise, we end up
+ * with 0 concurrency and stalling the execution.
+@@ -3525,6 +3519,12 @@ static int rescuer_thread(void *__rescuer)
+
+ worker_detach_from_pool(rescuer);
+
++ /*
++ * Put the reference grabbed by send_mayday(). @pool might
++ * go away any time after it.
++ */
++ put_pwq_unlocked(pwq);
++
+ raw_spin_lock_irq(&wq_mayday_lock);
+ }
+
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index aa6c714892ec9d..9f3b8b682adb29 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -685,6 +685,15 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
++ if (ax25->ax25_dev) {
++ if (dev == ax25->ax25_dev->dev) {
++ rcu_read_unlock();
++ break;
++ }
++ netdev_put(ax25->ax25_dev->dev, &ax25->dev_tracker);
++ ax25_dev_put(ax25->ax25_dev);
++ }
++
+ ax25->ax25_dev = ax25_dev_ax25dev(dev);
+ if (!ax25->ax25_dev) {
+ rcu_read_unlock();
+@@ -692,6 +701,8 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+ ax25_fillin_cb(ax25, ax25->ax25_dev);
++ netdev_hold(dev, &ax25->dev_tracker, GFP_ATOMIC);
++ ax25_dev_hold(ax25->ax25_dev);
+ rcu_read_unlock();
+ break;
+
+diff --git a/net/batman-adv/bat_v.c b/net/batman-adv/bat_v.c
+index ac11f1f08db0f9..d35479c465e2c4 100644
+--- a/net/batman-adv/bat_v.c
++++ b/net/batman-adv/bat_v.c
+@@ -113,8 +113,6 @@ static void
+ batadv_v_hardif_neigh_init(struct batadv_hardif_neigh_node *hardif_neigh)
+ {
+ ewma_throughput_init(&hardif_neigh->bat_v.throughput);
+- INIT_WORK(&hardif_neigh->bat_v.metric_work,
+- batadv_v_elp_throughput_metric_update);
+ }
+
+ /**
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 1d704574e6bf54..b065578b4436ee 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -18,6 +18,7 @@
+ #include <linux/if_ether.h>
+ #include <linux/jiffies.h>
+ #include <linux/kref.h>
++#include <linux/list.h>
+ #include <linux/minmax.h>
+ #include <linux/netdevice.h>
+ #include <linux/nl80211.h>
+@@ -26,6 +27,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/skbuff.h>
++#include <linux/slab.h>
+ #include <linux/stddef.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+@@ -41,6 +43,18 @@
+ #include "routing.h"
+ #include "send.h"
+
++/**
++ * struct batadv_v_metric_queue_entry - list of hardif neighbors which require
++ * a metric update
++ */
++struct batadv_v_metric_queue_entry {
++ /** @hardif_neigh: hardif neighbor scheduled for metric update */
++ struct batadv_hardif_neigh_node *hardif_neigh;
++
++ /** @list: list node for metric_queue */
++ struct list_head list;
++};
++
+ /**
+ * batadv_v_elp_start_timer() - restart timer for ELP periodic work
+ * @hard_iface: the interface for which the timer has to be reset
+@@ -59,25 +73,36 @@ static void batadv_v_elp_start_timer(struct batadv_hard_iface *hard_iface)
+ /**
+ * batadv_v_elp_get_throughput() - get the throughput towards a neighbour
+ * @neigh: the neighbour for which the throughput has to be obtained
++ * @pthroughput: calculated throughput towards the given neighbour in multiples
++ * of 100kbps (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).
+ *
+- * Return: The throughput towards the given neighbour in multiples of 100kpbs
+- * (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).
++ * Return: true when value behind @pthroughput was set
+ */
+-static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
++static bool batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh,
++ u32 *pthroughput)
+ {
+ struct batadv_hard_iface *hard_iface = neigh->if_incoming;
++ struct net_device *soft_iface = hard_iface->soft_iface;
+ struct ethtool_link_ksettings link_settings;
+ struct net_device *real_netdev;
+ struct station_info sinfo;
+ u32 throughput;
+ int ret;
+
++ /* don't query throughput when no longer associated with any
++ * batman-adv interface
++ */
++ if (!soft_iface)
++ return false;
++
+ /* if the user specified a customised value for this interface, then
+ * return it directly
+ */
+ throughput = atomic_read(&hard_iface->bat_v.throughput_override);
+- if (throughput != 0)
+- return throughput;
++ if (throughput != 0) {
++ *pthroughput = throughput;
++ return true;
++ }
+
+ /* if this is a wireless device, then ask its throughput through
+ * cfg80211 API
+@@ -104,27 +129,39 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ * possible to delete this neighbor. For now set
+ * the throughput metric to 0.
+ */
+- return 0;
++ *pthroughput = 0;
++ return true;
+ }
+ if (ret)
+ goto default_throughput;
+
+- if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT))
+- return sinfo.expected_throughput / 100;
++ if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)) {
++ *pthroughput = sinfo.expected_throughput / 100;
++ return true;
++ }
+
+ /* try to estimate the expected throughput based on reported tx
+ * rates
+ */
+- if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE))
+- return cfg80211_calculate_bitrate(&sinfo.txrate) / 3;
++ if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE)) {
++ *pthroughput = cfg80211_calculate_bitrate(&sinfo.txrate) / 3;
++ return true;
++ }
+
+ goto default_throughput;
+ }
+
++ /* only use rtnl_trylock because the elp worker will be cancelled while
++ * the rtnl_lock is held. the cancel_delayed_work_sync() would otherwise
++ * wait forever when the elp work_item was started and it is then also
++ * trying to rtnl_lock
++ */
++ if (!rtnl_trylock())
++ return false;
++
+ /* if not a wifi interface, check if this device provides data via
+ * ethtool (e.g. an Ethernet adapter)
+ */
+- rtnl_lock();
+ ret = __ethtool_get_link_ksettings(hard_iface->net_dev, &link_settings);
+ rtnl_unlock();
+ if (ret == 0) {
+@@ -135,13 +172,15 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ hard_iface->bat_v.flags &= ~BATADV_FULL_DUPLEX;
+
+ throughput = link_settings.base.speed;
+- if (throughput && throughput != SPEED_UNKNOWN)
+- return throughput * 10;
++ if (throughput && throughput != SPEED_UNKNOWN) {
++ *pthroughput = throughput * 10;
++ return true;
++ }
+ }
+
+ default_throughput:
+ if (!(hard_iface->bat_v.flags & BATADV_WARNING_DEFAULT)) {
+- batadv_info(hard_iface->soft_iface,
++ batadv_info(soft_iface,
+ "WiFi driver or ethtool info does not provide information about link speeds on interface %s, therefore defaulting to hardcoded throughput values of %u.%1u Mbps. Consider overriding the throughput manually or checking your driver.\n",
+ hard_iface->net_dev->name,
+ BATADV_THROUGHPUT_DEFAULT_VALUE / 10,
+@@ -150,31 +189,26 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ }
+
+ /* if none of the above cases apply, return the base_throughput */
+- return BATADV_THROUGHPUT_DEFAULT_VALUE;
++ *pthroughput = BATADV_THROUGHPUT_DEFAULT_VALUE;
++ return true;
+ }
+
+ /**
+ * batadv_v_elp_throughput_metric_update() - worker updating the throughput
+ * metric of a single hop neighbour
+- * @work: the work queue item
++ * @neigh: the neighbour to probe
+ */
+-void batadv_v_elp_throughput_metric_update(struct work_struct *work)
++static void
++batadv_v_elp_throughput_metric_update(struct batadv_hardif_neigh_node *neigh)
+ {
+- struct batadv_hardif_neigh_node_bat_v *neigh_bat_v;
+- struct batadv_hardif_neigh_node *neigh;
+-
+- neigh_bat_v = container_of(work, struct batadv_hardif_neigh_node_bat_v,
+- metric_work);
+- neigh = container_of(neigh_bat_v, struct batadv_hardif_neigh_node,
+- bat_v);
++ u32 throughput;
++ bool valid;
+
+- ewma_throughput_add(&neigh->bat_v.throughput,
+- batadv_v_elp_get_throughput(neigh));
++ valid = batadv_v_elp_get_throughput(neigh, &throughput);
++ if (!valid)
++ return;
+
+- /* decrement refcounter to balance increment performed before scheduling
+- * this task
+- */
+- batadv_hardif_neigh_put(neigh);
++ ewma_throughput_add(&neigh->bat_v.throughput, throughput);
+ }
+
+ /**
+@@ -248,14 +282,16 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ */
+ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ {
++ struct batadv_v_metric_queue_entry *metric_entry;
++ struct batadv_v_metric_queue_entry *metric_safe;
+ struct batadv_hardif_neigh_node *hardif_neigh;
+ struct batadv_hard_iface *hard_iface;
+ struct batadv_hard_iface_bat_v *bat_v;
+ struct batadv_elp_packet *elp_packet;
++ struct list_head metric_queue;
+ struct batadv_priv *bat_priv;
+ struct sk_buff *skb;
+ u32 elp_interval;
+- bool ret;
+
+ bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);
+ hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);
+@@ -291,6 +327,8 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+
+ atomic_inc(&hard_iface->bat_v.elp_seqno);
+
++ INIT_LIST_HEAD(&metric_queue);
++
+ /* The throughput metric is updated on each sent packet. This way, if a
+ * node is dead and no longer sends packets, batman-adv is still able to
+ * react timely to its death.
+@@ -315,16 +353,28 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+
+ /* Reading the estimated throughput from cfg80211 is a task that
+ * may sleep and that is not allowed in an rcu protected
+- * context. Therefore schedule a task for that.
++ * context. Therefore add it to metric_queue and process it
++ * outside rcu protected context.
+ */
+- ret = queue_work(batadv_event_workqueue,
+- &hardif_neigh->bat_v.metric_work);
+-
+- if (!ret)
++ metric_entry = kzalloc(sizeof(*metric_entry), GFP_ATOMIC);
++ if (!metric_entry) {
+ batadv_hardif_neigh_put(hardif_neigh);
++ continue;
++ }
++
++ metric_entry->hardif_neigh = hardif_neigh;
++ list_add(&metric_entry->list, &metric_queue);
+ }
+ rcu_read_unlock();
+
++ list_for_each_entry_safe(metric_entry, metric_safe, &metric_queue, list) {
++ batadv_v_elp_throughput_metric_update(metric_entry->hardif_neigh);
++
++ batadv_hardif_neigh_put(metric_entry->hardif_neigh);
++ list_del(&metric_entry->list);
++ kfree(metric_entry);
++ }
++
+ restart_timer:
+ batadv_v_elp_start_timer(hard_iface);
+ out:
+diff --git a/net/batman-adv/bat_v_elp.h b/net/batman-adv/bat_v_elp.h
+index 9e2740195fa2d4..c9cb0a30710045 100644
+--- a/net/batman-adv/bat_v_elp.h
++++ b/net/batman-adv/bat_v_elp.h
+@@ -10,7 +10,6 @@
+ #include "main.h"
+
+ #include <linux/skbuff.h>
+-#include <linux/workqueue.h>
+
+ int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface);
+ void batadv_v_elp_iface_disable(struct batadv_hard_iface *hard_iface);
+@@ -19,6 +18,5 @@ void batadv_v_elp_iface_activate(struct batadv_hard_iface *primary_iface,
+ void batadv_v_elp_primary_iface_set(struct batadv_hard_iface *primary_iface);
+ int batadv_v_elp_packet_recv(struct sk_buff *skb,
+ struct batadv_hard_iface *if_incoming);
+-void batadv_v_elp_throughput_metric_update(struct work_struct *work);
+
+ #endif /* _NET_BATMAN_ADV_BAT_V_ELP_H_ */
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index 04f6398b3a40e8..85a50096f5b24d 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -596,9 +596,6 @@ struct batadv_hardif_neigh_node_bat_v {
+ * neighbor
+ */
+ unsigned long last_unicast_tx;
+-
+- /** @metric_work: work queue callback item for metric update */
+- struct work_struct metric_work;
+ };
+
+ /**
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 305dd72c844c70..17226b2341d03d 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -1132,7 +1132,7 @@ static int j1939_sk_send_loop(struct j1939_priv *priv, struct sock *sk,
+
+ todo_size = size;
+
+- while (todo_size) {
++ do {
+ struct j1939_sk_buff_cb *skcb;
+
+ segment_size = min_t(size_t, J1939_MAX_TP_PACKET_SIZE,
+@@ -1177,7 +1177,7 @@ static int j1939_sk_send_loop(struct j1939_priv *priv, struct sock *sk,
+
+ todo_size -= segment_size;
+ session->total_queued_size += segment_size;
+- }
++ } while (todo_size);
+
+ switch (ret) {
+ case 0: /* OK */
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 95f7a7e65a73fa..9b72d118d756dd 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -382,8 +382,9 @@ sk_buff *j1939_session_skb_get_by_offset(struct j1939_session *session,
+ skb_queue_walk(&session->skb_queue, do_skb) {
+ do_skcb = j1939_skb_to_cb(do_skb);
+
+- if (offset_start >= do_skcb->offset &&
+- offset_start < (do_skcb->offset + do_skb->len)) {
++ if ((offset_start >= do_skcb->offset &&
++ offset_start < (do_skcb->offset + do_skb->len)) ||
++ (offset_start == 0 && do_skcb->offset == 0 && do_skb->len == 0)) {
+ skb = do_skb;
+ }
+ }
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index 154a2681f55cc6..388ff1d6d86b7a 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -37,8 +37,8 @@ static const struct fib_kuid_range fib_kuid_range_unset = {
+
+ bool fib_rule_matchall(const struct fib_rule *rule)
+ {
+- if (rule->iifindex || rule->oifindex || rule->mark || rule->tun_id ||
+- rule->flags)
++ if (READ_ONCE(rule->iifindex) || READ_ONCE(rule->oifindex) ||
++ rule->mark || rule->tun_id || rule->flags)
+ return false;
+ if (rule->suppress_ifgroup != -1 || rule->suppress_prefixlen != -1)
+ return false;
+@@ -260,12 +260,14 @@ static int fib_rule_match(struct fib_rule *rule, struct fib_rules_ops *ops,
+ struct flowi *fl, int flags,
+ struct fib_lookup_arg *arg)
+ {
+- int ret = 0;
++ int iifindex, oifindex, ret = 0;
+
+- if (rule->iifindex && (rule->iifindex != fl->flowi_iif))
++ iifindex = READ_ONCE(rule->iifindex);
++ if (iifindex && (iifindex != fl->flowi_iif))
+ goto out;
+
+- if (rule->oifindex && (rule->oifindex != fl->flowi_oif))
++ oifindex = READ_ONCE(rule->oifindex);
++ if (oifindex && (oifindex != fl->flowi_oif))
+ goto out;
+
+ if ((rule->mark ^ fl->flowi_mark) & rule->mark_mask)
+@@ -1038,14 +1040,14 @@ static int fib_nl_fill_rule(struct sk_buff *skb, struct fib_rule *rule,
+ if (rule->iifname[0]) {
+ if (nla_put_string(skb, FRA_IIFNAME, rule->iifname))
+ goto nla_put_failure;
+- if (rule->iifindex == -1)
++ if (READ_ONCE(rule->iifindex) == -1)
+ frh->flags |= FIB_RULE_IIF_DETACHED;
+ }
+
+ if (rule->oifname[0]) {
+ if (nla_put_string(skb, FRA_OIFNAME, rule->oifname))
+ goto nla_put_failure;
+- if (rule->oifindex == -1)
++ if (READ_ONCE(rule->oifindex) == -1)
+ frh->flags |= FIB_RULE_OIF_DETACHED;
+ }
+
+@@ -1217,10 +1219,10 @@ static void attach_rules(struct list_head *rules, struct net_device *dev)
+ list_for_each_entry(rule, rules, list) {
+ if (rule->iifindex == -1 &&
+ strcmp(dev->name, rule->iifname) == 0)
+- rule->iifindex = dev->ifindex;
++ WRITE_ONCE(rule->iifindex, dev->ifindex);
+ if (rule->oifindex == -1 &&
+ strcmp(dev->name, rule->oifname) == 0)
+- rule->oifindex = dev->ifindex;
++ WRITE_ONCE(rule->oifindex, dev->ifindex);
+ }
+ }
+
+@@ -1230,9 +1232,9 @@ static void detach_rules(struct list_head *rules, struct net_device *dev)
+
+ list_for_each_entry(rule, rules, list) {
+ if (rule->iifindex == dev->ifindex)
+- rule->iifindex = -1;
++ WRITE_ONCE(rule->iifindex, -1);
+ if (rule->oifindex == dev->ifindex)
+- rule->oifindex = -1;
++ WRITE_ONCE(rule->oifindex, -1);
+ }
+ }
+
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 0e638a37aa0961..5db41bf2ed93e0 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1108,10 +1108,12 @@ bool __skb_flow_dissect(const struct net *net,
+ FLOW_DISSECTOR_KEY_BASIC,
+ target_container);
+
++ rcu_read_lock();
++
+ if (skb) {
+ if (!net) {
+ if (skb->dev)
+- net = dev_net(skb->dev);
++ net = dev_net_rcu(skb->dev);
+ else if (skb->sk)
+ net = sock_net(skb->sk);
+ }
+@@ -1122,7 +1124,6 @@ bool __skb_flow_dissect(const struct net *net,
+ enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR;
+ struct bpf_prog_array *run_array;
+
+- rcu_read_lock();
+ run_array = rcu_dereference(init_net.bpf.run_array[type]);
+ if (!run_array)
+ run_array = rcu_dereference(net->bpf.run_array[type]);
+@@ -1150,17 +1151,17 @@ bool __skb_flow_dissect(const struct net *net,
+ prog = READ_ONCE(run_array->items[0].prog);
+ result = bpf_flow_dissect(prog, &ctx, n_proto, nhoff,
+ hlen, flags);
+- if (result == BPF_FLOW_DISSECTOR_CONTINUE)
+- goto dissect_continue;
+- __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
+- target_container);
+- rcu_read_unlock();
+- return result == BPF_OK;
++ if (result != BPF_FLOW_DISSECTOR_CONTINUE) {
++ __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
++ target_container);
++ rcu_read_unlock();
++ return result == BPF_OK;
++ }
+ }
+-dissect_continue:
+- rcu_read_unlock();
+ }
+
++ rcu_read_unlock();
++
+ if (dissector_uses_key(flow_dissector,
+ FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ struct ethhdr *eth = eth_hdr(skb);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index cc58315a40a79c..c7f7ea61b524a2 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -3513,10 +3513,12 @@ static const struct seq_operations neigh_stat_seq_ops = {
+ static void __neigh_notify(struct neighbour *n, int type, int flags,
+ u32 pid)
+ {
+- struct net *net = dev_net(n->dev);
+ struct sk_buff *skb;
+ int err = -ENOBUFS;
++ struct net *net;
+
++ rcu_read_lock();
++ net = dev_net_rcu(n->dev);
+ skb = nlmsg_new(neigh_nlmsg_size(), GFP_ATOMIC);
+ if (skb == NULL)
+ goto errout;
+@@ -3529,9 +3531,11 @@ static void __neigh_notify(struct neighbour *n, int type, int flags,
+ goto errout;
+ }
+ rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
+- return;
++ goto out;
+ errout:
+ rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
++out:
++ rcu_read_unlock();
+ }
+
+ void neigh_app_ns(struct neighbour *n)
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 11c1519b36993d..59ffaa89d7b05f 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -659,10 +659,12 @@ static int arp_xmit_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ */
+ void arp_xmit(struct sk_buff *skb)
+ {
++ rcu_read_lock();
+ /* Send it off, maybe filter it using firewalling first. */
+ NF_HOOK(NFPROTO_ARP, NF_ARP_OUT,
+- dev_net(skb->dev), NULL, skb, NULL, skb->dev,
++ dev_net_rcu(skb->dev), NULL, skb, NULL, skb->dev,
+ arp_xmit_finish);
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(arp_xmit);
+
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 7cf5f7d0d0de23..a55e95046984da 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1351,10 +1351,11 @@ __be32 inet_select_addr(const struct net_device *dev, __be32 dst, int scope)
+ __be32 addr = 0;
+ unsigned char localnet_scope = RT_SCOPE_HOST;
+ struct in_device *in_dev;
+- struct net *net = dev_net(dev);
++ struct net *net;
+ int master_idx;
+
+ rcu_read_lock();
++ net = dev_net_rcu(dev);
+ in_dev = __in_dev_get_rcu(dev);
+ if (!in_dev)
+ goto no_in_dev;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 932bd775fc2682..f45bc187a92a7e 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -399,10 +399,10 @@ static void icmp_push_reply(struct sock *sk,
+
+ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ {
+- struct ipcm_cookie ipc;
+ struct rtable *rt = skb_rtable(skb);
+- struct net *net = dev_net(rt->dst.dev);
++ struct net *net = dev_net_rcu(rt->dst.dev);
+ bool apply_ratelimit = false;
++ struct ipcm_cookie ipc;
+ struct flowi4 fl4;
+ struct sock *sk;
+ struct inet_sock *inet;
+@@ -610,12 +610,14 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ struct sock *sk;
+
+ if (!rt)
+- goto out;
++ return;
++
++ rcu_read_lock();
+
+ if (rt->dst.dev)
+- net = dev_net(rt->dst.dev);
++ net = dev_net_rcu(rt->dst.dev);
+ else if (skb_in->dev)
+- net = dev_net(skb_in->dev);
++ net = dev_net_rcu(skb_in->dev);
+ else
+ goto out;
+
+@@ -786,7 +788,8 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ icmp_xmit_unlock(sk);
+ out_bh_enable:
+ local_bh_enable();
+-out:;
++out:
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(__icmp_send);
+
+@@ -835,7 +838,7 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
+ * avoid additional coding at protocol handlers.
+ */
+ if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) {
+- __ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
++ __ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
+ return;
+ }
+
+@@ -869,7 +872,7 @@ static enum skb_drop_reason icmp_unreach(struct sk_buff *skb)
+ struct net *net;
+ u32 info = 0;
+
+- net = dev_net(skb_dst(skb)->dev);
++ net = dev_net_rcu(skb_dst(skb)->dev);
+
+ /*
+ * Incomplete header ?
+@@ -980,7 +983,7 @@ static enum skb_drop_reason icmp_unreach(struct sk_buff *skb)
+ static enum skb_drop_reason icmp_redirect(struct sk_buff *skb)
+ {
+ if (skb->len < sizeof(struct iphdr)) {
+- __ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
++ __ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
+ return SKB_DROP_REASON_PKT_TOO_SMALL;
+ }
+
+@@ -1012,7 +1015,7 @@ static enum skb_drop_reason icmp_echo(struct sk_buff *skb)
+ struct icmp_bxm icmp_param;
+ struct net *net;
+
+- net = dev_net(skb_dst(skb)->dev);
++ net = dev_net_rcu(skb_dst(skb)->dev);
+ /* should there be an ICMP stat for ignored echos? */
+ if (READ_ONCE(net->ipv4.sysctl_icmp_echo_ignore_all))
+ return SKB_NOT_DROPPED_YET;
+@@ -1041,9 +1044,9 @@ static enum skb_drop_reason icmp_echo(struct sk_buff *skb)
+
+ bool icmp_build_probe(struct sk_buff *skb, struct icmphdr *icmphdr)
+ {
++ struct net *net = dev_net_rcu(skb->dev);
+ struct icmp_ext_hdr *ext_hdr, _ext_hdr;
+ struct icmp_ext_echo_iio *iio, _iio;
+- struct net *net = dev_net(skb->dev);
+ struct inet6_dev *in6_dev;
+ struct in_device *in_dev;
+ struct net_device *dev;
+@@ -1182,7 +1185,7 @@ static enum skb_drop_reason icmp_timestamp(struct sk_buff *skb)
+ return SKB_NOT_DROPPED_YET;
+
+ out_err:
+- __ICMP_INC_STATS(dev_net(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
++ __ICMP_INC_STATS(dev_net_rcu(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
+ return SKB_DROP_REASON_PKT_TOO_SMALL;
+ }
+
+@@ -1199,7 +1202,7 @@ int icmp_rcv(struct sk_buff *skb)
+ {
+ enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
+ struct rtable *rt = skb_rtable(skb);
+- struct net *net = dev_net(rt->dst.dev);
++ struct net *net = dev_net_rcu(rt->dst.dev);
+ struct icmphdr *icmph;
+
+ if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+@@ -1372,9 +1375,9 @@ int icmp_err(struct sk_buff *skb, u32 info)
+ struct iphdr *iph = (struct iphdr *)skb->data;
+ int offset = iph->ihl<<2;
+ struct icmphdr *icmph = (struct icmphdr *)(skb->data + offset);
++ struct net *net = dev_net_rcu(skb->dev);
+ int type = icmp_hdr(skb)->type;
+ int code = icmp_hdr(skb)->code;
+- struct net *net = dev_net(skb->dev);
+
+ /*
+ * Use ping_err to handle all icmp errors except those
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 2a27913588d05a..41b320f0c20ebf 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -390,7 +390,13 @@ static inline int ip_rt_proc_init(void)
+
+ static inline bool rt_is_expired(const struct rtable *rth)
+ {
+- return rth->rt_genid != rt_genid_ipv4(dev_net(rth->dst.dev));
++ bool res;
++
++ rcu_read_lock();
++ res = rth->rt_genid != rt_genid_ipv4(dev_net_rcu(rth->dst.dev));
++ rcu_read_unlock();
++
++ return res;
+ }
+
+ void rt_cache_flush(struct net *net)
+@@ -1002,9 +1008,9 @@ out: kfree_skb_reason(skb, reason);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ struct dst_entry *dst = &rt->dst;
+- struct net *net = dev_net(dst->dev);
+ struct fib_result res;
+ bool lock = false;
++ struct net *net;
+ u32 old_mtu;
+
+ if (ip_mtu_locked(dst))
+@@ -1014,6 +1020,8 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ if (old_mtu < mtu)
+ return;
+
++ rcu_read_lock();
++ net = dev_net_rcu(dst->dev);
+ if (mtu < net->ipv4.ip_rt_min_pmtu) {
+ lock = true;
+ mtu = min(old_mtu, net->ipv4.ip_rt_min_pmtu);
+@@ -1021,17 +1029,29 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+
+ if (rt->rt_pmtu == mtu && !lock &&
+ time_before(jiffies, dst->expires - net->ipv4.ip_rt_mtu_expires / 2))
+- return;
++ goto out;
+
+- rcu_read_lock();
+ if (fib_lookup(net, fl4, &res, 0) == 0) {
+ struct fib_nh_common *nhc;
+
+ fib_select_path(net, &res, fl4, NULL);
++#ifdef CONFIG_IP_ROUTE_MULTIPATH
++ if (fib_info_num_path(res.fi) > 1) {
++ int nhsel;
++
++ for (nhsel = 0; nhsel < fib_info_num_path(res.fi); nhsel++) {
++ nhc = fib_info_nhc(res.fi, nhsel);
++ update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
++ jiffies + net->ipv4.ip_rt_mtu_expires);
++ }
++ goto out;
++ }
++#endif /* CONFIG_IP_ROUTE_MULTIPATH */
+ nhc = FIB_RES_NHC(res);
+ update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
+ jiffies + net->ipv4.ip_rt_mtu_expires);
+ }
++out:
+ rcu_read_unlock();
+ }
+
+@@ -1294,10 +1314,15 @@ static void set_class_tag(struct rtable *rt, u32 tag)
+
+ static unsigned int ipv4_default_advmss(const struct dst_entry *dst)
+ {
+- struct net *net = dev_net(dst->dev);
+ unsigned int header_size = sizeof(struct tcphdr) + sizeof(struct iphdr);
+- unsigned int advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
+- net->ipv4.ip_rt_min_advmss);
++ unsigned int advmss;
++ struct net *net;
++
++ rcu_read_lock();
++ net = dev_net_rcu(dst->dev);
++ advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
++ net->ipv4.ip_rt_min_advmss);
++ rcu_read_unlock();
+
+ return min(advmss, IPV4_MAX_PMTU - header_size);
+ }
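
Beyond the RCU conversion, the __ip_rt_update_pmtu() hunk closes a multipath gap: under CONFIG_IP_ROUTE_MULTIPATH, a next-hop exception is now created for every member of the route via fib_info_nhc(), not only for the leg fib_select_path() happened to pick, so the learned PMTU applies whichever path later carries the flow. The pmtu_ipv4_mp_exceptions selftest added to pmtu.sh near the end of this patch exercises exactly this. A toy sketch of the loop shape (types and helpers are hypothetical):

    /* Toy shape of the multipath fix; hypothetical types. The real code
     * walks fib_info_nhc(res.fi, nhsel) and calls update_or_create_fnhe(). */
    #include <stdio.h>

    struct nexthop { const char *name; unsigned int pmtu; };

    static void update_or_create_exception(struct nexthop *nh, unsigned int mtu)
    {
        if (mtu < nh->pmtu)
            nh->pmtu = mtu;             /* record the learned path MTU */
        printf("%s: pmtu %u\n", nh->name, nh->pmtu);
    }

    int main(void)
    {
        struct nexthop paths[] = { { "via R1", 2000 }, { "via R2", 2000 } };
        unsigned int learned = 1500;    /* e.g. from an ICMP frag-needed */
        unsigned int i;

        /* Previously only the selected path got an exception; now every
         * member of the multipath route does. */
        for (i = 0; i < sizeof(paths) / sizeof(paths[0]); i++)
            update_or_create_exception(&paths[i], learned);
        return 0;
    }
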
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index a6984a29fdb9dd..4d14ab7f7e99f1 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -76,7 +76,7 @@ static int icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ {
+ /* icmpv6_notify checks 8 bytes can be pulled, icmp6hdr is 8 bytes */
+ struct icmp6hdr *icmp6 = (struct icmp6hdr *) (skb->data + offset);
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+
+ if (type == ICMPV6_PKT_TOOBIG)
+ ip6_update_pmtu(skb, net, info, skb->dev->ifindex, 0, sock_net_uid(net, NULL));
+@@ -473,7 +473,10 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+
+ if (!skb->dev)
+ return;
+- net = dev_net(skb->dev);
++
++ rcu_read_lock();
++
++ net = dev_net_rcu(skb->dev);
+ mark = IP6_REPLY_MARK(net, skb->mark);
+ /*
+ * Make sure we respect the rules
+@@ -496,7 +499,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ !(type == ICMPV6_PARAMPROB &&
+ code == ICMPV6_UNK_OPTION &&
+ (opt_unrec(skb, info))))
+- return;
++ goto out;
+
+ saddr = NULL;
+ }
+@@ -526,7 +529,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ if ((addr_type == IPV6_ADDR_ANY) || (addr_type & IPV6_ADDR_MULTICAST)) {
+ net_dbg_ratelimited("icmp6_send: addr_any/mcast source [%pI6c > %pI6c]\n",
+ &hdr->saddr, &hdr->daddr);
+- return;
++ goto out;
+ }
+
+ /*
+@@ -535,7 +538,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ if (is_ineligible(skb)) {
+ net_dbg_ratelimited("icmp6_send: no reply to icmp error [%pI6c > %pI6c]\n",
+ &hdr->saddr, &hdr->daddr);
+- return;
++ goto out;
+ }
+
+ /* Needed by both icmpv6_global_allow and icmpv6_xmit_lock */
+@@ -582,7 +585,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ np = inet6_sk(sk);
+
+ if (!icmpv6_xrlim_allow(sk, type, &fl6, apply_ratelimit))
+- goto out;
++ goto out_unlock;
+
+ tmp_hdr.icmp6_type = type;
+ tmp_hdr.icmp6_code = code;
+@@ -600,7 +603,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+
+ dst = icmpv6_route_lookup(net, skb, sk, &fl6);
+ if (IS_ERR(dst))
+- goto out;
++ goto out_unlock;
+
+ ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+
+@@ -616,7 +619,6 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ goto out_dst_release;
+ }
+
+- rcu_read_lock();
+ idev = __in6_dev_get(skb->dev);
+
+ if (ip6_append_data(sk, icmpv6_getfrag, &msg,
+@@ -630,13 +632,15 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ icmpv6_push_pending_frames(sk, &fl6, &tmp_hdr,
+ len + sizeof(struct icmp6hdr));
+ }
+- rcu_read_unlock();
++
+ out_dst_release:
+ dst_release(dst);
+-out:
++out_unlock:
+ icmpv6_xmit_unlock(sk);
+ out_bh_enable:
+ local_bh_enable();
++out:
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(icmp6_send);
+
+@@ -679,8 +683,8 @@ int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
+ skb_pull(skb2, nhs);
+ skb_reset_network_header(skb2);
+
+- rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0,
+- skb, 0);
++ rt = rt6_lookup(dev_net_rcu(skb->dev), &ipv6_hdr(skb2)->saddr,
++ NULL, 0, skb, 0);
+
+ if (rt && rt->dst.dev)
+ skb2->dev = rt->dst.dev;
+@@ -717,7 +721,7 @@ EXPORT_SYMBOL(ip6_err_gen_icmpv6_unreach);
+
+ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
+ {
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+ struct sock *sk;
+ struct inet6_dev *idev;
+ struct ipv6_pinfo *np;
+@@ -832,7 +836,7 @@ enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
+ u8 code, __be32 info)
+ {
+ struct inet6_skb_parm *opt = IP6CB(skb);
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+ const struct inet6_protocol *ipprot;
+ enum skb_drop_reason reason;
+ int inner_offset;
+@@ -889,7 +893,7 @@ enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
+ static int icmpv6_rcv(struct sk_buff *skb)
+ {
+ enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+ struct net_device *dev = icmp6_dev(skb);
+ struct inet6_dev *idev = __in6_dev_get(dev);
+ const struct in6_addr *saddr, *daddr;
+@@ -921,7 +925,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
+ skb_set_network_header(skb, nh);
+ }
+
+- __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INMSGS);
++ __ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INMSGS);
+
+ saddr = &ipv6_hdr(skb)->saddr;
+ daddr = &ipv6_hdr(skb)->daddr;
+@@ -939,7 +943,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
+
+ type = hdr->icmp6_type;
+
+- ICMP6MSGIN_INC_STATS(dev_net(dev), idev, type);
++ ICMP6MSGIN_INC_STATS(dev_net_rcu(dev), idev, type);
+
+ switch (type) {
+ case ICMPV6_ECHO_REQUEST:
+@@ -1034,9 +1038,9 @@ static int icmpv6_rcv(struct sk_buff *skb)
+
+ csum_error:
+ reason = SKB_DROP_REASON_ICMP_CSUM;
+- __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_CSUMERRORS);
++ __ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_CSUMERRORS);
+ discard_it:
+- __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INERRORS);
++ __ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INERRORS);
+ drop_no_count:
+ kfree_skb_reason(skb, reason);
+ return 0;
+diff --git a/net/ipv6/ioam6_iptunnel.c b/net/ipv6/ioam6_iptunnel.c
+index beb6b4cfc551cf..4215cebe7d85a9 100644
+--- a/net/ipv6/ioam6_iptunnel.c
++++ b/net/ipv6/ioam6_iptunnel.c
+@@ -255,14 +255,15 @@ static int ioam6_do_fill(struct net *net, struct sk_buff *skb)
+ }
+
+ static int ioam6_do_inline(struct net *net, struct sk_buff *skb,
+- struct ioam6_lwt_encap *tuninfo)
++ struct ioam6_lwt_encap *tuninfo,
++ struct dst_entry *cache_dst)
+ {
+ struct ipv6hdr *oldhdr, *hdr;
+ int hdrlen, err;
+
+ hdrlen = (tuninfo->eh.hdrlen + 1) << 3;
+
+- err = skb_cow_head(skb, hdrlen + skb->mac_len);
++ err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -293,7 +294,8 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+ struct ioam6_lwt_encap *tuninfo,
+ bool has_tunsrc,
+ struct in6_addr *tunsrc,
+- struct in6_addr *tundst)
++ struct in6_addr *tundst,
++ struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct ipv6hdr *hdr, *inner_hdr;
+@@ -302,7 +304,7 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+ hdrlen = (tuninfo->eh.hdrlen + 1) << 3;
+ len = sizeof(*hdr) + hdrlen;
+
+- err = skb_cow_head(skb, len + skb->mac_len);
++ err = skb_cow_head(skb, len + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -336,7 +338,7 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+
+ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+- struct dst_entry *dst = skb_dst(skb);
++ struct dst_entry *dst = skb_dst(skb), *cache_dst = NULL;
+ struct in6_addr orig_daddr;
+ struct ioam6_lwt *ilwt;
+ int err = -EINVAL;
+@@ -354,6 +356,10 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+
+ orig_daddr = ipv6_hdr(skb)->daddr;
+
++ local_bh_disable();
++ cache_dst = dst_cache_get(&ilwt->cache);
++ local_bh_enable();
++
+ switch (ilwt->mode) {
+ case IOAM6_IPTUNNEL_MODE_INLINE:
+ do_inline:
+@@ -361,7 +367,7 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ if (ipv6_hdr(skb)->nexthdr == NEXTHDR_HOP)
+ goto out;
+
+- err = ioam6_do_inline(net, skb, &ilwt->tuninfo);
++ err = ioam6_do_inline(net, skb, &ilwt->tuninfo, cache_dst);
+ if (unlikely(err))
+ goto drop;
+
+@@ -371,7 +377,7 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ /* Encapsulation (ip6ip6) */
+ err = ioam6_do_encap(net, skb, &ilwt->tuninfo,
+ ilwt->has_tunsrc, &ilwt->tunsrc,
+- &ilwt->tundst);
++ &ilwt->tundst, cache_dst);
+ if (unlikely(err))
+ goto drop;
+
+@@ -389,46 +395,45 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ goto drop;
+ }
+
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
++ if (unlikely(!cache_dst)) {
++ struct ipv6hdr *hdr = ipv6_hdr(skb);
++ struct flowi6 fl6;
+
+- if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) {
+- local_bh_disable();
+- dst = dst_cache_get(&ilwt->cache);
+- local_bh_enable();
+-
+- if (unlikely(!dst)) {
+- struct ipv6hdr *hdr = ipv6_hdr(skb);
+- struct flowi6 fl6;
+-
+- memset(&fl6, 0, sizeof(fl6));
+- fl6.daddr = hdr->daddr;
+- fl6.saddr = hdr->saddr;
+- fl6.flowlabel = ip6_flowinfo(hdr);
+- fl6.flowi6_mark = skb->mark;
+- fl6.flowi6_proto = hdr->nexthdr;
+-
+- dst = ip6_route_output(net, NULL, &fl6);
+- if (dst->error) {
+- err = dst->error;
+- dst_release(dst);
+- goto drop;
+- }
++ memset(&fl6, 0, sizeof(fl6));
++ fl6.daddr = hdr->daddr;
++ fl6.saddr = hdr->saddr;
++ fl6.flowlabel = ip6_flowinfo(hdr);
++ fl6.flowi6_mark = skb->mark;
++ fl6.flowi6_proto = hdr->nexthdr;
++
++ cache_dst = ip6_route_output(net, NULL, &fl6);
++ if (cache_dst->error) {
++ err = cache_dst->error;
++ goto drop;
++ }
+
++ /* cache only if we don't create a dst reference loop */
++ if (dst->lwtstate != cache_dst->lwtstate) {
+ local_bh_disable();
+- dst_cache_set_ip6(&ilwt->cache, dst, &fl6.saddr);
++ dst_cache_set_ip6(&ilwt->cache, cache_dst, &fl6.saddr);
+ local_bh_enable();
+ }
+
+- skb_dst_drop(skb);
+- skb_dst_set(skb, dst);
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(cache_dst->dev));
++ if (unlikely(err))
++ goto drop;
++ }
+
++ if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) {
++ skb_dst_drop(skb);
++ skb_dst_set(skb, cache_dst);
+ return dst_output(net, sk, skb);
+ }
+ out:
++ dst_release(cache_dst);
+ return dst->lwtstate->orig_output(net, sk, skb);
+ drop:
++ dst_release(cache_dst);
+ kfree_skb(skb);
+ return err;
+ }
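
The ioam6 output path, and the rpl and seg6 hunks further down that follow the same template, reorder the work: the cached dst is fetched before headers are built, so skb_cow_head() can size headroom from the cached device via dst_dev_overhead() instead of skb->mac_len; a freshly looked-up route is cached only when its lwtstate differs from the current one, since otherwise the cache would hold a reference loop back into the same tunnel state; and cache_dst is released on every exit path. A compressed sketch of that ordering (hypothetical types, not the kernel's dst/dst_cache API):

    /* Ordering sketch for the lwtunnel output paths; names are stand-ins. */
    #include <stdio.h>

    struct dst_entry { int lwtstate; int refcnt; };

    static struct dst_entry *dst_cache_get(struct dst_entry *cached)
    {
        if (cached)
            cached->refcnt++;           /* cache lookup returns a reference */
        return cached;
    }

    static void dst_release(struct dst_entry *dst)
    {
        if (dst)
            dst->refcnt--;
    }

    static int lwt_output(struct dst_entry *orig, struct dst_entry *cached,
                          struct dst_entry *fresh)
    {
        /* 1. Consult the cache before building headers, so headroom can be
         *    sized for the cached device (dst_dev_overhead()). */
        struct dst_entry *cache_dst = dst_cache_get(cached);

        if (!cache_dst) {
            cache_dst = fresh;          /* 2. full route lookup */
            cache_dst->refcnt++;

            /* 3. Cache only if it would not loop back into our own state. */
            if (orig->lwtstate != fresh->lwtstate)
                puts("cached fresh dst");
        }

        dst_release(cache_dst);         /* 4. balanced on every exit path */
        return 0;
    }

    int main(void)
    {
        struct dst_entry orig  = { .lwtstate = 1, .refcnt = 1 };
        struct dst_entry fresh = { .lwtstate = 2, .refcnt = 0 };

        return lwt_output(&orig, NULL, &fresh);
    }
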
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index b244dbf61d5f39..b7b62e5a562e5d 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1730,21 +1730,19 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ struct net_device *dev = idev->dev;
+ int hlen = LL_RESERVED_SPACE(dev);
+ int tlen = dev->needed_tailroom;
+- struct net *net = dev_net(dev);
+ const struct in6_addr *saddr;
+ struct in6_addr addr_buf;
+ struct mld2_report *pmr;
+ struct sk_buff *skb;
+ unsigned int size;
+ struct sock *sk;
+- int err;
++ struct net *net;
+
+- sk = net->ipv6.igmp_sk;
+ /* we assume size > sizeof(ra) here
+ * Also try to not allocate high-order pages for big MTU
+ */
+ size = min_t(int, mtu, PAGE_SIZE / 2) + hlen + tlen;
+- skb = sock_alloc_send_skb(sk, size, 1, &err);
++ skb = alloc_skb(size, GFP_KERNEL);
+ if (!skb)
+ return NULL;
+
+@@ -1752,6 +1750,12 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ skb_reserve(skb, hlen);
+ skb_tailroom_reserve(skb, mtu, tlen);
+
++ rcu_read_lock();
++
++ net = dev_net_rcu(dev);
++ sk = net->ipv6.igmp_sk;
++ skb_set_owner_w(skb, sk);
++
+ if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) {
+ /* <draft-ietf-magma-mld-source-05.txt>:
+ * use unspecified address as the source address
+@@ -1763,6 +1767,8 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+
+ ip6_mc_hdr(sk, skb, dev, saddr, &mld2_all_mcr, NEXTHDR_HOP, 0);
+
++ rcu_read_unlock();
++
+ skb_put_data(skb, ra, sizeof(ra));
+
+ skb_set_transport_header(skb, skb_tail_pointer(skb) - skb->data);
+@@ -2122,21 +2128,21 @@ static void mld_send_cr(struct inet6_dev *idev)
+
+ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
+ {
+- struct net *net = dev_net(dev);
+- struct sock *sk = net->ipv6.igmp_sk;
++ const struct in6_addr *snd_addr, *saddr;
++ int err, len, payload_len, full_len;
++ struct in6_addr addr_buf;
+ struct inet6_dev *idev;
+ struct sk_buff *skb;
+ struct mld_msg *hdr;
+- const struct in6_addr *snd_addr, *saddr;
+- struct in6_addr addr_buf;
+ int hlen = LL_RESERVED_SPACE(dev);
+ int tlen = dev->needed_tailroom;
+- int err, len, payload_len, full_len;
+ u8 ra[8] = { IPPROTO_ICMPV6, 0,
+ IPV6_TLV_ROUTERALERT, 2, 0, 0,
+ IPV6_TLV_PADN, 0 };
+- struct flowi6 fl6;
+ struct dst_entry *dst;
++ struct flowi6 fl6;
++ struct net *net;
++ struct sock *sk;
+
+ if (type == ICMPV6_MGM_REDUCTION)
+ snd_addr = &in6addr_linklocal_allrouters;
+@@ -2147,19 +2153,21 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
+ payload_len = len + sizeof(ra);
+ full_len = sizeof(struct ipv6hdr) + payload_len;
+
+- rcu_read_lock();
+- IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_OUTREQUESTS);
+- rcu_read_unlock();
++ skb = alloc_skb(hlen + tlen + full_len, GFP_KERNEL);
+
+- skb = sock_alloc_send_skb(sk, hlen + tlen + full_len, 1, &err);
++ rcu_read_lock();
+
++ net = dev_net_rcu(dev);
++ idev = __in6_dev_get(dev);
++ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
+ if (!skb) {
+- rcu_read_lock();
+- IP6_INC_STATS(net, __in6_dev_get(dev),
+- IPSTATS_MIB_OUTDISCARDS);
++ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
+ rcu_read_unlock();
+ return;
+ }
++ sk = net->ipv6.igmp_sk;
++ skb_set_owner_w(skb, sk);
++
+ skb->priority = TC_PRIO_CONTROL;
+ skb_reserve(skb, hlen);
+
+@@ -2184,9 +2192,6 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
+ IPPROTO_ICMPV6,
+ csum_partial(hdr, len, 0));
+
+- rcu_read_lock();
+- idev = __in6_dev_get(skb->dev);
+-
+ icmpv6_flow_init(sk, &fl6, type,
+ &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr,
+ skb->dev->ifindex);
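
In mld_newpack() and igmp6_send(), allocation is split from socket ownership: the skb comes from plain alloc_skb(), which may sleep and therefore happens before rcu_read_lock(), and only inside the read-side section is the per-netns socket looked up and attached with skb_set_owner_w(). A small user-space model of the split (all names are stand-ins):

    /* User-space model of the alloc-then-own split; hypothetical names. */
    #include <stdio.h>
    #include <stdlib.h>

    struct sock { const char *name; };
    struct sk_buff { struct sock *owner; };

    static void rcu_read_lock(void)   {}
    static void rcu_read_unlock(void) {}

    int main(void)
    {
        static struct sock igmp_sk = { "igmp_sk" };

        /* 1. The may-sleep allocation happens outside the RCU section. */
        struct sk_buff *skb = calloc(1, sizeof(*skb));
        if (!skb)
            return 1;

        /* 2. Netns and socket are resolved inside the read-side section,
         *    then ownership is attached (skb_set_owner_w() equivalent). */
        rcu_read_lock();
        skb->owner = &igmp_sk;
        rcu_read_unlock();

        printf("owner: %s\n", skb->owner->name);
        free(skb);
        return 0;
    }
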
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index d044c67019de6d..8699d1a188dc4a 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -418,15 +418,11 @@ static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
+ {
+ int hlen = LL_RESERVED_SPACE(dev);
+ int tlen = dev->needed_tailroom;
+- struct sock *sk = dev_net(dev)->ipv6.ndisc_sk;
+ struct sk_buff *skb;
+
+ skb = alloc_skb(hlen + sizeof(struct ipv6hdr) + len + tlen, GFP_ATOMIC);
+- if (!skb) {
+- ND_PRINTK(0, err, "ndisc: %s failed to allocate an skb\n",
+- __func__);
++ if (!skb)
+ return NULL;
+- }
+
+ skb->protocol = htons(ETH_P_IPV6);
+ skb->dev = dev;
+@@ -437,7 +433,9 @@ static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
+ /* Manually assign socket ownership as we avoid calling
+ * sock_alloc_send_pskb() to bypass wmem buffer limits
+ */
+- skb_set_owner_w(skb, sk);
++ rcu_read_lock();
++ skb_set_owner_w(skb, dev_net_rcu(dev)->ipv6.ndisc_sk);
++ rcu_read_unlock();
+
+ return skb;
+ }
+@@ -473,16 +471,20 @@ static void ip6_nd_hdr(struct sk_buff *skb,
+ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
+ const struct in6_addr *saddr)
+ {
++ struct icmp6hdr *icmp6h = icmp6_hdr(skb);
+ struct dst_entry *dst = skb_dst(skb);
+- struct net *net = dev_net(skb->dev);
+- struct sock *sk = net->ipv6.ndisc_sk;
+ struct inet6_dev *idev;
++ struct net *net;
++ struct sock *sk;
+ int err;
+- struct icmp6hdr *icmp6h = icmp6_hdr(skb);
+ u8 type;
+
+ type = icmp6h->icmp6_type;
+
++ rcu_read_lock();
++
++ net = dev_net_rcu(skb->dev);
++ sk = net->ipv6.ndisc_sk;
+ if (!dst) {
+ struct flowi6 fl6;
+ int oif = skb->dev->ifindex;
+@@ -490,6 +492,7 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
+ icmpv6_flow_init(sk, &fl6, type, saddr, daddr, oif);
+ dst = icmp6_dst_alloc(skb->dev, &fl6);
+ if (IS_ERR(dst)) {
++ rcu_read_unlock();
+ kfree_skb(skb);
+ return;
+ }
+@@ -504,7 +507,6 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
+
+ ip6_nd_hdr(skb, saddr, daddr, READ_ONCE(inet6_sk(sk)->hop_limit), skb->len);
+
+- rcu_read_lock();
+ idev = __in6_dev_get(dst->dev);
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
+
+@@ -1694,7 +1696,7 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
+ bool ret;
+
+ if (netif_is_l3_master(skb->dev)) {
+- dev = __dev_get_by_index(dev_net(skb->dev), IPCB(skb)->iif);
++ dev = dev_get_by_index_rcu(dev_net(skb->dev), IPCB(skb)->iif);
+ if (!dev)
+ return;
+ }
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 8ebfed5d63232e..2736dea77575b5 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3196,13 +3196,18 @@ static unsigned int ip6_default_advmss(const struct dst_entry *dst)
+ {
+ struct net_device *dev = dst->dev;
+ unsigned int mtu = dst_mtu(dst);
+- struct net *net = dev_net(dev);
++ struct net *net;
+
+ mtu -= sizeof(struct ipv6hdr) + sizeof(struct tcphdr);
+
++ rcu_read_lock();
++
++ net = dev_net_rcu(dev);
+ if (mtu < net->ipv6.sysctl.ip6_rt_min_advmss)
+ mtu = net->ipv6.sysctl.ip6_rt_min_advmss;
+
++ rcu_read_unlock();
++
+ /*
+ * Maximal non-jumbo IPv6 payload is IPV6_MAXPLEN and
+ * corresponding MSS is IPV6_MAXPLEN - tcp_header_size.
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index db3c19a42e1ca7..0ac4283acdf20c 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -125,7 +125,8 @@ static void rpl_destroy_state(struct lwtunnel_state *lwt)
+ }
+
+ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+- const struct ipv6_rpl_sr_hdr *srh)
++ const struct ipv6_rpl_sr_hdr *srh,
++ struct dst_entry *cache_dst)
+ {
+ struct ipv6_rpl_sr_hdr *isrh, *csrh;
+ const struct ipv6hdr *oldhdr;
+@@ -153,7 +154,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+
+ hdrlen = ((csrh->hdrlen + 1) << 3);
+
+- err = skb_cow_head(skb, hdrlen + skb->mac_len);
++ err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err)) {
+ kfree(buf);
+ return err;
+@@ -186,7 +187,8 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ return 0;
+ }
+
+-static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt)
++static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt,
++ struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct rpl_iptunnel_encap *tinfo;
+@@ -196,7 +198,7 @@ static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt)
+
+ tinfo = rpl_encap_lwtunnel(dst->lwtstate);
+
+- return rpl_do_srh_inline(skb, rlwt, tinfo->srh);
++ return rpl_do_srh_inline(skb, rlwt, tinfo->srh, cache_dst);
+ }
+
+ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+@@ -208,14 +210,14 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+
+ rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+
+- err = rpl_do_srh(skb, rlwt);
+- if (unlikely(err))
+- goto drop;
+-
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
+ local_bh_enable();
+
++ err = rpl_do_srh(skb, rlwt, dst);
++ if (unlikely(err))
++ goto drop;
++
+ if (unlikely(!dst)) {
+ struct ipv6hdr *hdr = ipv6_hdr(skb);
+ struct flowi6 fl6;
+@@ -230,25 +232,28 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ dst = ip6_route_output(net, NULL, &fl6);
+ if (dst->error) {
+ err = dst->error;
+- dst_release(dst);
+ goto drop;
+ }
+
+- local_bh_disable();
+- dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
+- local_bh_enable();
++ /* cache only if we don't create a dst reference loop */
++ if (orig_dst->lwtstate != dst->lwtstate) {
++ local_bh_disable();
++ dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
++ local_bh_enable();
++ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ }
+
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+-
+ return dst_output(net, sk, skb);
+
+ drop:
++ dst_release(dst);
+ kfree_skb(skb);
+ return err;
+ }
+@@ -262,29 +267,33 @@ static int rpl_input(struct sk_buff *skb)
+
+ rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+
+- err = rpl_do_srh(skb, rlwt);
+- if (unlikely(err))
+- goto drop;
+-
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
++ local_bh_enable();
++
++ err = rpl_do_srh(skb, rlwt, dst);
++ if (unlikely(err)) {
++ dst_release(dst);
++ goto drop;
++ }
+
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+ if (!dst->error) {
++ local_bh_disable();
+ dst_cache_set_ip6(&rlwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
++ local_bh_enable();
+ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ } else {
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+ }
+- local_bh_enable();
+-
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+
+ return dst_input(skb);
+
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 098632adc9b5af..33833b2064c072 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -124,8 +124,8 @@ static __be32 seg6_make_flowlabel(struct net *net, struct sk_buff *skb,
+ return flowlabel;
+ }
+
+-/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */
+-int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
++static int __seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh,
++ int proto, struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct net *net = dev_net(dst->dev);
+@@ -137,7 +137,7 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+ hdrlen = (osrh->hdrlen + 1) << 3;
+ tot_len = hdrlen + sizeof(*hdr);
+
+- err = skb_cow_head(skb, tot_len + skb->mac_len);
++ err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -197,11 +197,18 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+
+ return 0;
+ }
++
++/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */
++int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
++{
++ return __seg6_do_srh_encap(skb, osrh, proto, NULL);
++}
+ EXPORT_SYMBOL_GPL(seg6_do_srh_encap);
+
+ /* encapsulate an IPv6 packet within an outer IPv6 header with reduced SRH */
+ static int seg6_do_srh_encap_red(struct sk_buff *skb,
+- struct ipv6_sr_hdr *osrh, int proto)
++ struct ipv6_sr_hdr *osrh, int proto,
++ struct dst_entry *cache_dst)
+ {
+ __u8 first_seg = osrh->first_segment;
+ struct dst_entry *dst = skb_dst(skb);
+@@ -230,7 +237,7 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb,
+
+ tot_len = red_hdrlen + sizeof(struct ipv6hdr);
+
+- err = skb_cow_head(skb, tot_len + skb->mac_len);
++ err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -317,8 +324,8 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb,
+ return 0;
+ }
+
+-/* insert an SRH within an IPv6 packet, just after the IPv6 header */
+-int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
++static int __seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh,
++ struct dst_entry *cache_dst)
+ {
+ struct ipv6hdr *hdr, *oldhdr;
+ struct ipv6_sr_hdr *isrh;
+@@ -326,7 +333,7 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
+
+ hdrlen = (osrh->hdrlen + 1) << 3;
+
+- err = skb_cow_head(skb, hdrlen + skb->mac_len);
++ err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -369,9 +376,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
+
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(seg6_do_srh_inline);
+
+-static int seg6_do_srh(struct sk_buff *skb)
++static int seg6_do_srh(struct sk_buff *skb, struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct seg6_iptunnel_encap *tinfo;
+@@ -384,7 +390,7 @@ static int seg6_do_srh(struct sk_buff *skb)
+ if (skb->protocol != htons(ETH_P_IPV6))
+ return -EINVAL;
+
+- err = seg6_do_srh_inline(skb, tinfo->srh);
++ err = __seg6_do_srh_inline(skb, tinfo->srh, cache_dst);
+ if (err)
+ return err;
+ break;
+@@ -402,9 +408,11 @@ static int seg6_do_srh(struct sk_buff *skb)
+ return -EINVAL;
+
+ if (tinfo->mode == SEG6_IPTUN_MODE_ENCAP)
+- err = seg6_do_srh_encap(skb, tinfo->srh, proto);
++ err = __seg6_do_srh_encap(skb, tinfo->srh,
++ proto, cache_dst);
+ else
+- err = seg6_do_srh_encap_red(skb, tinfo->srh, proto);
++ err = seg6_do_srh_encap_red(skb, tinfo->srh,
++ proto, cache_dst);
+
+ if (err)
+ return err;
+@@ -425,11 +433,13 @@ static int seg6_do_srh(struct sk_buff *skb)
+ skb_push(skb, skb->mac_len);
+
+ if (tinfo->mode == SEG6_IPTUN_MODE_L2ENCAP)
+- err = seg6_do_srh_encap(skb, tinfo->srh,
+- IPPROTO_ETHERNET);
++ err = __seg6_do_srh_encap(skb, tinfo->srh,
++ IPPROTO_ETHERNET,
++ cache_dst);
+ else
+ err = seg6_do_srh_encap_red(skb, tinfo->srh,
+- IPPROTO_ETHERNET);
++ IPPROTO_ETHERNET,
++ cache_dst);
+
+ if (err)
+ return err;
+@@ -444,6 +454,13 @@ static int seg6_do_srh(struct sk_buff *skb)
+ return 0;
+ }
+
++/* insert an SRH within an IPv6 packet, just after the IPv6 header */
++int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
++{
++ return __seg6_do_srh_inline(skb, osrh, NULL);
++}
++EXPORT_SYMBOL_GPL(seg6_do_srh_inline);
++
+ static int seg6_input_finish(struct net *net, struct sock *sk,
+ struct sk_buff *skb)
+ {
+@@ -458,31 +475,35 @@ static int seg6_input_core(struct net *net, struct sock *sk,
+ struct seg6_lwt *slwt;
+ int err;
+
+- err = seg6_do_srh(skb);
+- if (unlikely(err))
+- goto drop;
+-
+ slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+
+ local_bh_disable();
+ dst = dst_cache_get(&slwt->cache);
++ local_bh_enable();
++
++ err = seg6_do_srh(skb, dst);
++ if (unlikely(err)) {
++ dst_release(dst);
++ goto drop;
++ }
+
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+ if (!dst->error) {
++ local_bh_disable();
+ dst_cache_set_ip6(&slwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
++ local_bh_enable();
+ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ } else {
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+ }
+- local_bh_enable();
+-
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+
+ if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled))
+ return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
+@@ -528,16 +549,16 @@ static int seg6_output_core(struct net *net, struct sock *sk,
+ struct seg6_lwt *slwt;
+ int err;
+
+- err = seg6_do_srh(skb);
+- if (unlikely(err))
+- goto drop;
+-
+ slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+
+ local_bh_disable();
+ dst = dst_cache_get(&slwt->cache);
+ local_bh_enable();
+
++ err = seg6_do_srh(skb, dst);
++ if (unlikely(err))
++ goto drop;
++
+ if (unlikely(!dst)) {
+ struct ipv6hdr *hdr = ipv6_hdr(skb);
+ struct flowi6 fl6;
+@@ -552,28 +573,31 @@ static int seg6_output_core(struct net *net, struct sock *sk,
+ dst = ip6_route_output(net, NULL, &fl6);
+ if (dst->error) {
+ err = dst->error;
+- dst_release(dst);
+ goto drop;
+ }
+
+- local_bh_disable();
+- dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
+- local_bh_enable();
++ /* cache only if we don't create a dst reference loop */
++ if (orig_dst->lwtstate != dst->lwtstate) {
++ local_bh_disable();
++ dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
++ local_bh_enable();
++ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ }
+
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+-
+ if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled))
+ return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb,
+ NULL, skb_dst(skb)->dev, dst_output);
+
+ return dst_output(net, sk, skb);
+ drop:
++ dst_release(dst);
+ kfree_skb(skb);
+ return err;
+ }
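
seg6 keeps its exported API stable in the conventional way: the workers gain the new cache_dst parameter under double-underscore names, while the EXPORT_SYMBOL'd entry points become thin wrappers passing NULL. A generic sketch of that wrapper pattern (names are hypothetical):

    /* Stable-wrapper sketch; names are hypothetical. */
    #include <stdio.h>

    struct dst_entry { int dev_overhead; };

    static int __do_encap(int hdrlen, const struct dst_entry *cache_dst)
    {
        /* Reserve headroom for the cached device if known, else fall back
         * to a generic amount (skb->mac_len in the original code). */
        int overhead = cache_dst ? cache_dst->dev_overhead : 14;

        printf("skb_cow_head(%d)\n", hdrlen + overhead);
        return 0;
    }

    /* Exported signature unchanged for out-of-file callers. */
    int do_encap(int hdrlen)
    {
        return __do_encap(hdrlen, NULL);
    }

    int main(void)
    {
        return do_encap(40);
    }
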
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 78d9961fcd446d..8d3c01f0e2aa19 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -2102,6 +2102,7 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ {
+ struct ovs_header *ovs_header;
+ struct ovs_vport_stats vport_stats;
++ struct net *net_vport;
+ int err;
+
+ ovs_header = genlmsg_put(skb, portid, seq, &dp_vport_genl_family,
+@@ -2118,12 +2119,15 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ nla_put_u32(skb, OVS_VPORT_ATTR_IFINDEX, vport->dev->ifindex))
+ goto nla_put_failure;
+
+- if (!net_eq(net, dev_net(vport->dev))) {
+- int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);
++ rcu_read_lock();
++ net_vport = dev_net_rcu(vport->dev);
++ if (!net_eq(net, net_vport)) {
++ int id = peernet2id_alloc(net, net_vport, GFP_ATOMIC);
+
+ if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))
+- goto nla_put_failure;
++ goto nla_put_failure_unlock;
+ }
++ rcu_read_unlock();
+
+ ovs_vport_get_stats(vport, &vport_stats);
+ if (nla_put_64bit(skb, OVS_VPORT_ATTR_STATS,
+@@ -2144,6 +2148,8 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ genlmsg_end(skb, ovs_header);
+ return 0;
+
++nla_put_failure_unlock:
++ rcu_read_unlock();
+ nla_put_failure:
+ err = -EMSGSIZE;
+ error:
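
The openvswitch hunk shows the flip side of running under rcu_read_lock(): nothing in the section may sleep, so the allocation inside peernet2id_alloc() is pinned to GFP_ATOMIC, and the error path grows a dedicated nla_put_failure_unlock label that drops the read lock before falling through to the shared cleanup. The label-per-lock unwinding idiom, sketched with stub helpers:

    /* Label-per-lock error unwinding, sketched with stub helpers. */
    #include <stdio.h>

    static void rcu_read_lock(void)   {}
    static void rcu_read_unlock(void) {}

    static int nla_put(int ok)
    {
        return ok ? 0 : -1;             /* pretend netlink attribute write */
    }

    static int fill_info(int ok)
    {
        rcu_read_lock();
        if (nla_put(ok))
            goto nla_put_failure_unlock;
        rcu_read_unlock();
        return 0;

    nla_put_failure_unlock:
        rcu_read_unlock();              /* drop the lock, then shared path */
        return -90;                     /* stands in for -EMSGSIZE */
    }

    int main(void)
    {
        return fill_info(1);
    }
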
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index f5d116a1bdea1a..37299a7ca1876e 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -337,7 +337,10 @@ EXPORT_SYMBOL_GPL(vsock_find_connected_socket);
+
+ void vsock_remove_sock(struct vsock_sock *vsk)
+ {
+- vsock_remove_bound(vsk);
++ /* Transport reassignment must not remove the binding. */
++ if (sock_flag(sk_vsock(vsk), SOCK_DEAD))
++ vsock_remove_bound(vsk);
++
+ vsock_remove_connected(vsk);
+ }
+ EXPORT_SYMBOL_GPL(vsock_remove_sock);
+@@ -821,6 +824,13 @@ static void __vsock_release(struct sock *sk, int level)
+ */
+ lock_sock_nested(sk, level);
+
++ /* Indicate to vsock_remove_sock() that the socket is being released and
++	 * can be removed from the bound_table. Unlike the transport reassignment
++ * case, where the socket must remain bound despite vsock_remove_sock()
++ * being called from the transport release() callback.
++ */
++ sock_set_flag(sk, SOCK_DEAD);
++
+ if (vsk->transport)
+ vsk->transport->release(vsk);
+ else if (sock_type_connectible(sk->sk_type))
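
The vsock change distinguishes the two callers of vsock_remove_sock(): a genuine release, where __vsock_release() sets SOCK_DEAD under the socket lock before invoking the transport's release() callback, and a transport reassignment, where the binding has to survive. A boiled-down sketch of the flag-guarded removal (hypothetical struct, mirroring the SOCK_DEAD test):

    /* Flag-guarded removal sketch; hypothetical struct. */
    #include <stdbool.h>
    #include <stdio.h>

    struct vsock_sock { bool dead; bool bound; };

    static void vsock_remove_sock(struct vsock_sock *vsk)
    {
        /* Transport reassignment must not remove the binding. */
        if (vsk->dead)
            vsk->bound = false;
    }

    int main(void)
    {
        struct vsock_sock reassigned = { .dead = false, .bound = true };
        struct vsock_sock released   = { .dead = true,  .bound = true };

        vsock_remove_sock(&reassigned);     /* binding survives */
        vsock_remove_sock(&released);       /* binding dropped */
        printf("reassigned=%d released=%d\n",
               reassigned.bound, released.bound);
        return 0;
    }
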
+diff --git a/rust/Makefile b/rust/Makefile
+index 9f59baacaf7730..45779a064fa4f4 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -229,6 +229,7 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ -fzero-call-used-regs=% -fno-stack-clash-protection \
+ -fno-inline-functions-called-once -fsanitize=bounds-strict \
+ -fstrict-flex-arrays=% -fmin-function-alignment=% \
++ -fzero-init-padding-bits=% \
+ --param=% --param asan-%
+
+ # Derived from `scripts/Makefile.clang`.
+diff --git a/rust/kernel/rbtree.rs b/rust/kernel/rbtree.rs
+index d03e4aa1f4812b..7543378d372927 100644
+--- a/rust/kernel/rbtree.rs
++++ b/rust/kernel/rbtree.rs
+@@ -1147,7 +1147,7 @@ pub struct VacantEntry<'a, K, V> {
+ /// # Invariants
+ /// - `parent` may be null if the new node becomes the root.
+ /// - `child_field_of_parent` is a valid pointer to the left-child or right-child of `parent`. If `parent` is
+-/// null, it is a pointer to the root of the [`RBTree`].
++/// null, it is a pointer to the root of the [`RBTree`].
+ struct RawVacantEntry<'a, K, V> {
+ rbtree: *mut RBTree<K, V>,
+ /// The node that will become the parent of the new node if we insert one.
+diff --git a/scripts/Makefile.defconf b/scripts/Makefile.defconf
+index 226ea3df3b4b4c..a44307f08e9d68 100644
+--- a/scripts/Makefile.defconf
++++ b/scripts/Makefile.defconf
+@@ -1,6 +1,11 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Configuration helpers
+
++cmd_merge_fragments = \
++ $(srctree)/scripts/kconfig/merge_config.sh \
++ $4 -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$2 \
++ $(foreach config,$3,$(srctree)/arch/$(SRCARCH)/configs/$(config).config)
++
+ # Creates 'merged defconfigs'
+ # ---------------------------------------------------------------------------
+ # Usage:
+@@ -8,9 +13,7 @@
+ #
+ # Input config fragments without '.config' suffix
+ define merge_into_defconfig
+- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \
+- -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$(1) \
+- $(foreach config,$(2),$(srctree)/arch/$(SRCARCH)/configs/$(config).config)
++ $(call cmd,merge_fragments,$1,$2)
+ +$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+ endef
+
+@@ -22,8 +25,6 @@ endef
+ #
+ # Input config fragments without '.config' suffix
+ define merge_into_defconfig_override
+- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \
+- -Q -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$(1) \
+- $(foreach config,$(2),$(srctree)/arch/$(SRCARCH)/configs/$(config).config)
++ $(call cmd,merge_fragments,$1,$2,-Q)
+ +$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+ endef
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 04faf15ed316a9..dc081cf46d211c 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -31,6 +31,11 @@ KBUILD_CFLAGS-$(CONFIG_CC_NO_ARRAY_BOUNDS) += -Wno-array-bounds
+ ifdef CONFIG_CC_IS_CLANG
+ # The kernel builds with '-std=gnu11' so use of GNU extensions is acceptable.
+ KBUILD_CFLAGS += -Wno-gnu
++
++# Clang checks for overflow/truncation with '%p', while GCC does not:
++# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111219
++KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow-non-kprintf)
++KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation-non-kprintf)
+ else
+
+ # gcc inanely warns about local variables called 'main'
+@@ -77,6 +82,9 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=designated-init)
+ # Warn if there is an enum types mismatch
+ KBUILD_CFLAGS += $(call cc-option,-Wenum-conversion)
+
++# Explicitly clear padding bits during variable initialization
++KBUILD_CFLAGS += $(call cc-option,-fzero-init-padding-bits=all)
++
+ KBUILD_CFLAGS += -Wextra
+ KBUILD_CFLAGS += -Wunused
+
+@@ -102,11 +110,6 @@ KBUILD_CFLAGS += $(call cc-disable-warning, packed-not-aligned)
+ KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow)
+ ifdef CONFIG_CC_IS_GCC
+ KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation)
+-else
+-# Clang checks for overflow/truncation with '%p', while GCC does not:
+-# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111219
+-KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow-non-kprintf)
+-KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation-non-kprintf)
+ endif
+ KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
+
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index a0a0be38cbdc14..fb50bd4f4103f2 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -105,9 +105,11 @@ configfiles = $(wildcard $(srctree)/kernel/configs/$(1) $(srctree)/arch/$(SRCARC
+ all-config-fragments = $(call configfiles,*.config)
+ config-fragments = $(call configfiles,$@)
+
++cmd_merge_fragments = $(srctree)/scripts/kconfig/merge_config.sh -m $(KCONFIG_CONFIG) $(config-fragments)
++
+ %.config: $(obj)/conf
+ 	$(if $(config-fragments),, $(error $@ fragment does not exist on this architecture))
+- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh -m $(KCONFIG_CONFIG) $(config-fragments)
++ $(call cmd,merge_fragments)
+ $(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+
+ PHONY += tinyconfig
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 54f77f57ec8e25..1148e9498d8e83 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -1132,7 +1132,22 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF2 |
+ BYT_RT5640_MCLK_EN),
+ },
+- { /* Vexia Edu Atla 10 tablet */
++ {
++ /* Vexia Edu Atla 10 tablet 5V version */
++ .matches = {
++ /* Having all 3 of these not set is somewhat unique */
++ DMI_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "05/14/2015"),
++ },
++ .driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++ BYT_RT5640_JD_NOT_INV |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
++ { /* Vexia Edu Atla 10 tablet 9V version */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index f0d8796b984a80..8e02db7e83323b 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -218,6 +218,7 @@ static bool is_rust_noreturn(const struct symbol *func)
+ str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||
+ str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
+ str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
++ strstr(func->name, "_4core9panicking13assert_failed") ||
+ strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+ (strstr(func->name, "_4core5slice5index24slice_") &&
+ str_ends_with(func->name, "_fail"));
+diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
+index 248ab790d143ed..f7206374a73dd8 100644
+--- a/tools/sched_ext/include/scx/common.bpf.h
++++ b/tools/sched_ext/include/scx/common.bpf.h
+@@ -251,8 +251,16 @@ void bpf_obj_drop_impl(void *kptr, void *meta) __ksym;
+ #define bpf_obj_new(type) ((type *)bpf_obj_new_impl(bpf_core_type_id_local(type), NULL))
+ #define bpf_obj_drop(kptr) bpf_obj_drop_impl(kptr, NULL)
+
+-void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
+-void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
++int bpf_list_push_front_impl(struct bpf_list_head *head,
++ struct bpf_list_node *node,
++ void *meta, __u64 off) __ksym;
++#define bpf_list_push_front(head, node) bpf_list_push_front_impl(head, node, NULL, 0)
++
++int bpf_list_push_back_impl(struct bpf_list_head *head,
++ struct bpf_list_node *node,
++ void *meta, __u64 off) __ksym;
++#define bpf_list_push_back(head, node) bpf_list_push_back_impl(head, node, NULL, 0)
++
+ struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) __ksym;
+ struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksym;
+ struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
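
The sched_ext header follows the kernel's move of the list-push kfuncs to _impl variants that take extra meta/off arguments and return int; wrapper macros keep every existing call site source-compatible. The same shim pattern in plain C, as an illustration rather than the actual BPF declarations:

    /* Macro shim preserving an old call signature; illustrative only. */
    #include <stdio.h>

    struct list_head { int count; };
    struct list_node { int value; };

    /* New implementation: extra parameters, and it now reports failure. */
    static int list_push_front_impl(struct list_head *head,
                                    struct list_node *node,
                                    void *meta, unsigned long long off)
    {
        (void)meta;
        (void)off;
        head->count++;
        printf("pushed %d\n", node->value);
        return 0;
    }

    /* The old spelling keeps working at every existing call site. */
    #define list_push_front(head, node) \
        list_push_front_impl(head, node, NULL, 0)

    int main(void)
    {
        struct list_head h = { 0 };
        struct list_node n = { 42 };

        return list_push_front(&h, &n);
    }
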
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+index f0a3a9c18e9ef5..9006549a12945f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+@@ -226,7 +226,7 @@ static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
+ ASSERT_OK(pthread_create(&thread_id, NULL, &do_nothing_wait, NULL),
+ "pthread_create");
+
+- skel->bss->tid = gettid();
++ skel->bss->tid = syscall(SYS_gettid);
+
+ do_dummy_read_opts(skel->progs.dump_task, opts);
+
+@@ -255,10 +255,10 @@ static void *run_test_task_tid(void *arg)
+ union bpf_iter_link_info linfo;
+ int num_unknown_tid, num_known_tid;
+
+- ASSERT_NEQ(getpid(), gettid(), "check_new_thread_id");
++ ASSERT_NEQ(getpid(), syscall(SYS_gettid), "check_new_thread_id");
+
+ memset(&linfo, 0, sizeof(linfo));
+- linfo.task.tid = gettid();
++ linfo.task.tid = syscall(SYS_gettid);
+ opts.link_info = &linfo;
+ opts.link_info_len = sizeof(linfo);
+ test_task_common(&opts, 0, 1);
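
The bpf_iter selftest swaps gettid() for syscall(SYS_gettid) because glibc only gained a gettid() wrapper in version 2.30, so going through syscall(2) keeps the test building against older C libraries. A self-contained example of the portable spelling:

    /* Portable thread-id lookup that does not require glibc >= 2.30. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t tid = (pid_t)syscall(SYS_gettid);

        /* In a single-threaded process the TID equals the PID. */
        printf("pid=%d tid=%d\n", (int)getpid(), (int)tid);
        return 0;
    }
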
+diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+index 844f6fc8487b67..c1ac813ff9bae3 100644
+--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
++++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+@@ -869,21 +869,14 @@ static void consumer_test(struct uprobe_multi_consumers *skel,
+ fmt = "prog 0/1: uprobe";
+ } else {
+ /*
+- * uprobe return is tricky ;-)
+- *
+ * to trigger uretprobe consumer, the uretprobe needs to be installed,
+ * which means one of the 'return' uprobes was alive when probe was hit:
+ *
+ * idxs: 2/3 uprobe return in 'installed' mask
+- *
+- * in addition if 'after' state removes everything that was installed in
+- * 'before' state, then uprobe kernel object goes away and return uprobe
+- * is not installed and we won't hit it even if it's in 'after' state.
+ */
+ unsigned long had_uretprobes = before & 0b1100; /* is uretprobe installed */
+- unsigned long probe_preserved = before & after; /* did uprobe go away */
+
+- if (had_uretprobes && probe_preserved && test_bit(idx, after))
++ if (had_uretprobes && test_bit(idx, after))
+ val++;
+ fmt = "idx 2/3: uretprobe";
+ }
+diff --git a/tools/testing/selftests/gpio/gpio-sim.sh b/tools/testing/selftests/gpio/gpio-sim.sh
+index 6fb66a687f1737..bbc29ed9c60a91 100755
+--- a/tools/testing/selftests/gpio/gpio-sim.sh
++++ b/tools/testing/selftests/gpio/gpio-sim.sh
+@@ -46,12 +46,6 @@ remove_chip() {
+ rmdir $CONFIGFS_DIR/$CHIP || fail "Unable to remove the chip"
+ }
+
+-configfs_cleanup() {
+- for CHIP in `ls $CONFIGFS_DIR/`; do
+- remove_chip $CHIP
+- done
+-}
+-
+ create_chip() {
+ local CHIP=$1
+
+@@ -105,6 +99,13 @@ disable_chip() {
+ echo 0 > $CONFIGFS_DIR/$CHIP/live || fail "Unable to disable the chip"
+ }
+
++configfs_cleanup() {
++ for CHIP in `ls $CONFIGFS_DIR/`; do
++ disable_chip $CHIP
++ remove_chip $CHIP
++ done
++}
++
+ configfs_chip_name() {
+ local CHIP=$1
+ local BANK=$2
+@@ -181,6 +182,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test -n `cat $CONFIGFS_DIR/chip/bank/chip_name` || fail "chip_name doesn't work"
++disable_chip chip
+ remove_chip chip
+
+ echo "1.2. chip_name returns 'none' if the chip is still pending"
+@@ -195,6 +197,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test -n `cat $CONFIGFS_DIR/chip/dev_name` || fail "dev_name doesn't work"
++disable_chip chip
+ remove_chip chip
+
+ echo "2. Creating and configuring simulated chips"
+@@ -204,6 +207,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test "`get_chip_num_lines chip bank`" = "1" || fail "default number of lines is not 1"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.2. Number of lines can be specified"
+@@ -212,6 +216,7 @@ create_bank chip bank
+ set_num_lines chip bank 16
+ enable_chip chip
+ test "`get_chip_num_lines chip bank`" = "16" || fail "number of lines is not 16"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.3. Label can be set"
+@@ -220,6 +225,7 @@ create_bank chip bank
+ set_label chip bank foobar
+ enable_chip chip
+ test "`get_chip_label chip bank`" = "foobar" || fail "label is incorrect"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.4. Label can be left empty"
+@@ -227,6 +233,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test -z "`cat $CONFIGFS_DIR/chip/bank/label`" || fail "label is not empty"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.5. Line names can be configured"
+@@ -238,6 +245,7 @@ set_line_name chip bank 2 bar
+ enable_chip chip
+ test "`get_line_name chip bank 0`" = "foo" || fail "line name is incorrect"
+ test "`get_line_name chip bank 2`" = "bar" || fail "line name is incorrect"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.6. Line config can remain unused if offset is greater than number of lines"
+@@ -248,6 +256,7 @@ set_line_name chip bank 5 foobar
+ enable_chip chip
+ test "`get_line_name chip bank 0`" = "" || fail "line name is incorrect"
+ test "`get_line_name chip bank 1`" = "" || fail "line name is incorrect"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.7. Line configfs directory names are sanitized"
+@@ -267,6 +276,7 @@ for CHIP in $CHIPS; do
+ enable_chip $CHIP
+ done
+ for CHIP in $CHIPS; do
++ disable_chip $CHIP
+ remove_chip $CHIP
+ done
+
+@@ -278,6 +288,7 @@ echo foobar > $CONFIGFS_DIR/chip/bank/label 2> /dev/null && \
+ fail "Setting label of a live chip should fail"
+ echo 8 > $CONFIGFS_DIR/chip/bank/num_lines 2> /dev/null && \
+ fail "Setting number of lines of a live chip should fail"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.10. Can't create line items when chip is live"
+@@ -285,6 +296,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ mkdir $CONFIGFS_DIR/chip/bank/line0 2> /dev/null && fail "Creating line item should fail"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.11. Probe errors are propagated to user-space"
+@@ -316,6 +328,7 @@ mkdir -p $CONFIGFS_DIR/chip/bank/line4/hog
+ enable_chip chip
+ $BASE_DIR/gpio-mockup-cdev -s 1 /dev/`configfs_chip_name chip bank` 4 2> /dev/null && \
+ fail "Setting the value of a hogged line shouldn't succeed"
++disable_chip chip
+ remove_chip chip
+
+ echo "3. Controlling simulated chips"
+@@ -331,6 +344,7 @@ test "$?" = "1" || fail "pull set incorrectly"
+ sysfs_set_pull chip bank 0 pull-down
+ $BASE_DIR/gpio-mockup-cdev /dev/`configfs_chip_name chip bank` 1
+ test "$?" = "0" || fail "pull set incorrectly"
++disable_chip chip
+ remove_chip chip
+
+ echo "3.2. Pull can be read from sysfs"
+@@ -344,6 +358,7 @@ SYSFS_PATH=/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/pull
+ test `cat $SYSFS_PATH` = "pull-down" || fail "reading the pull failed"
+ sysfs_set_pull chip bank 0 pull-up
+ test `cat $SYSFS_PATH` = "pull-up" || fail "reading the pull failed"
++disable_chip chip
+ remove_chip chip
+
+ echo "3.3. Incorrect input in sysfs is rejected"
+@@ -355,6 +370,7 @@ DEVNAME=`configfs_dev_name chip`
+ CHIPNAME=`configfs_chip_name chip bank`
+ SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/pull"
+ echo foobar > $SYSFS_PATH 2> /dev/null && fail "invalid input not detected"
++disable_chip chip
+ remove_chip chip
+
+ echo "3.4. Can't write to value"
+@@ -365,6 +381,7 @@ DEVNAME=`configfs_dev_name chip`
+ CHIPNAME=`configfs_chip_name chip bank`
+ SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
+ echo 1 > $SYSFS_PATH 2> /dev/null && fail "writing to 'value' succeeded unexpectedly"
++disable_chip chip
+ remove_chip chip
+
+ echo "4. Simulated GPIO chips are functional"
+@@ -382,6 +399,7 @@ $BASE_DIR/gpio-mockup-cdev -s 1 /dev/`configfs_chip_name chip bank` 0 &
+ sleep 0.1 # FIXME Any better way?
+ test `cat $SYSFS_PATH` = "1" || fail "incorrect value read from sysfs"
+ kill $!
++disable_chip chip
+ remove_chip chip
+
+ echo "4.2. Bias settings work correctly"
+@@ -394,6 +412,7 @@ CHIPNAME=`configfs_chip_name chip bank`
+ SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
+ $BASE_DIR/gpio-mockup-cdev -b pull-up /dev/`configfs_chip_name chip bank` 0
+ test `cat $SYSFS_PATH` = "1" || fail "bias setting does not work"
++disable_chip chip
+ remove_chip chip
+
+ echo "GPIO $MODULE test PASS"
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 6c651c880fe83d..66be7699c72c9a 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -197,6 +197,12 @@
+ #
+ # - pmtu_ipv6_route_change
+ # Same as above but with IPv6
++#
++# - pmtu_ipv4_mp_exceptions
++# Use the same topology as in pmtu_ipv4, but add routeable addresses
++#	Use the same topology as in pmtu_ipv4, but add routable addresses
++# addresses have multipath routes to each other, b_r1 mtu = 1500.
++# Check that PMTU exceptions are created for both paths.
+
+ source lib.sh
+ source net_helper.sh
+@@ -266,7 +272,8 @@ tests="
+ list_flush_ipv4_exception ipv4: list and flush cached exceptions 1
+ list_flush_ipv6_exception ipv6: list and flush cached exceptions 1
+ pmtu_ipv4_route_change ipv4: PMTU exception w/route replace 1
+- pmtu_ipv6_route_change ipv6: PMTU exception w/route replace 1"
++ pmtu_ipv6_route_change ipv6: PMTU exception w/route replace 1
++ pmtu_ipv4_mp_exceptions ipv4: PMTU multipath nh exceptions 1"
+
+ # Addressing and routing for tests with routers: four network segments, with
+ # index SEGMENT between 1 and 4, a common prefix (PREFIX4 or PREFIX6) and an
+@@ -343,6 +350,9 @@ tunnel6_a_addr="fd00:2::a"
+ tunnel6_b_addr="fd00:2::b"
+ tunnel6_mask="64"
+
++host4_a_addr="192.168.99.99"
++host4_b_addr="192.168.88.88"
++
+ dummy6_0_prefix="fc00:1000::"
+ dummy6_1_prefix="fc00:1001::"
+ dummy6_mask="64"
+@@ -984,6 +994,52 @@ setup_ovs_bridge() {
+ run_cmd ip route add ${prefix6}:${b_r1}::1 via ${prefix6}:${a_r1}::2
+ }
+
++setup_multipath_new() {
++ # Set up host A with multipath routes to host B host4_b_addr
++ run_cmd ${ns_a} ip addr add ${host4_a_addr} dev lo
++ run_cmd ${ns_a} ip nexthop add id 401 via ${prefix4}.${a_r1}.2 dev veth_A-R1
++ run_cmd ${ns_a} ip nexthop add id 402 via ${prefix4}.${a_r2}.2 dev veth_A-R2
++ run_cmd ${ns_a} ip nexthop add id 403 group 401/402
++ run_cmd ${ns_a} ip route add ${host4_b_addr} src ${host4_a_addr} nhid 403
++
++ # Set up host B with multipath routes to host A host4_a_addr
++ run_cmd ${ns_b} ip addr add ${host4_b_addr} dev lo
++ run_cmd ${ns_b} ip nexthop add id 401 via ${prefix4}.${b_r1}.2 dev veth_B-R1
++ run_cmd ${ns_b} ip nexthop add id 402 via ${prefix4}.${b_r2}.2 dev veth_B-R2
++ run_cmd ${ns_b} ip nexthop add id 403 group 401/402
++ run_cmd ${ns_b} ip route add ${host4_a_addr} src ${host4_b_addr} nhid 403
++}
++
++setup_multipath_old() {
++ # Set up host A with multipath routes to host B host4_b_addr
++ run_cmd ${ns_a} ip addr add ${host4_a_addr} dev lo
++ run_cmd ${ns_a} ip route add ${host4_b_addr} \
++ src ${host4_a_addr} \
++ nexthop via ${prefix4}.${a_r1}.2 weight 1 \
++ nexthop via ${prefix4}.${a_r2}.2 weight 1
++
++ # Set up host B with multipath routes to host A host4_a_addr
++ run_cmd ${ns_b} ip addr add ${host4_b_addr} dev lo
++ run_cmd ${ns_b} ip route add ${host4_a_addr} \
++ src ${host4_b_addr} \
++ nexthop via ${prefix4}.${b_r1}.2 weight 1 \
++ nexthop via ${prefix4}.${b_r2}.2 weight 1
++}
++
++setup_multipath() {
++ if [ "$USE_NH" = "yes" ]; then
++ setup_multipath_new
++ else
++ setup_multipath_old
++ fi
++
++ # Set up routers with routes to dummies
++ run_cmd ${ns_r1} ip route add ${host4_a_addr} via ${prefix4}.${a_r1}.1
++ run_cmd ${ns_r2} ip route add ${host4_a_addr} via ${prefix4}.${a_r2}.1
++ run_cmd ${ns_r1} ip route add ${host4_b_addr} via ${prefix4}.${b_r1}.1
++ run_cmd ${ns_r2} ip route add ${host4_b_addr} via ${prefix4}.${b_r2}.1
++}
++
+ setup() {
+ [ "$(id -u)" -ne 0 ] && echo " need to run as root" && return $ksft_skip
+
+@@ -1076,23 +1132,15 @@ link_get_mtu() {
+ }
+
+ route_get_dst_exception() {
+- ns_cmd="${1}"
+- dst="${2}"
+- dsfield="${3}"
++ ns_cmd="${1}"; shift
+
+- if [ -z "${dsfield}" ]; then
+- dsfield=0
+- fi
+-
+- ${ns_cmd} ip route get "${dst}" dsfield "${dsfield}"
++ ${ns_cmd} ip route get "$@"
+ }
+
+ route_get_dst_pmtu_from_exception() {
+- ns_cmd="${1}"
+- dst="${2}"
+- dsfield="${3}"
++ ns_cmd="${1}"; shift
+
+- mtu_parse "$(route_get_dst_exception "${ns_cmd}" "${dst}" "${dsfield}")"
++ mtu_parse "$(route_get_dst_exception "${ns_cmd}" "$@")"
+ }
+
+ check_pmtu_value() {
+@@ -1235,10 +1283,10 @@ test_pmtu_ipv4_dscp_icmp_exception() {
+ run_cmd "${ns_a}" ping -q -M want -Q "${dsfield}" -c 1 -w 1 -s "${len}" "${dst2}"
+
+ # Check that exceptions have been created with the correct PMTU
+- pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" "${policy_mark}")"
++ pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" dsfield "${policy_mark}")"
+ check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
+
+- pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" "${policy_mark}")"
++ pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" dsfield "${policy_mark}")"
+ check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
+ }
+
+@@ -1285,9 +1333,9 @@ test_pmtu_ipv4_dscp_udp_exception() {
+ UDP:"${dst2}":50000,tos="${dsfield}"
+
+ # Check that exceptions have been created with the correct PMTU
+- pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" "${policy_mark}")"
++ pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" dsfield "${policy_mark}")"
+ check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
+- pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" "${policy_mark}")"
++ pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" dsfield "${policy_mark}")"
+ check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
+ }
+
+@@ -2329,6 +2377,36 @@ test_pmtu_ipv6_route_change() {
+ test_pmtu_ipvX_route_change 6
+ }
+
++test_pmtu_ipv4_mp_exceptions() {
++ setup namespaces routing multipath || return $ksft_skip
++
++ trace "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \
++ "${ns_r1}" veth_R1-B "${ns_b}" veth_B-R1 \
++ "${ns_a}" veth_A-R2 "${ns_r2}" veth_R2-A \
++ "${ns_r2}" veth_R2-B "${ns_b}" veth_B-R2
++
++ # Set up initial MTU values
++ mtu "${ns_a}" veth_A-R1 2000
++ mtu "${ns_r1}" veth_R1-A 2000
++ mtu "${ns_r1}" veth_R1-B 1500
++ mtu "${ns_b}" veth_B-R1 1500
++
++ mtu "${ns_a}" veth_A-R2 2000
++ mtu "${ns_r2}" veth_R2-A 2000
++ mtu "${ns_r2}" veth_R2-B 1500
++ mtu "${ns_b}" veth_B-R2 1500
++
++ # Ping and expect two nexthop exceptions for two routes
++ run_cmd ${ns_a} ping -q -M want -i 0.1 -c 1 -s 1800 "${host4_b_addr}"
++
++ # Check that exceptions have been created with the correct PMTU
++ pmtu_a_R1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${host4_b_addr}" oif veth_A-R1)"
++ pmtu_a_R2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${host4_b_addr}" oif veth_A-R2)"
++
++ check_pmtu_value "1500" "${pmtu_a_R1}" "exceeding MTU (veth_A-R1)" || return 1
++ check_pmtu_value "1500" "${pmtu_a_R2}" "exceeding MTU (veth_A-R2)" || return 1
++}
++
+ usage() {
+ echo
+ echo "$0 [OPTIONS] [TEST]..."
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index bdf6f10d055891..87dce3efe31e4a 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -809,10 +809,10 @@ kci_test_ipsec_offload()
+ # does driver have correct offload info
+ run_cmd diff $sysfsf - << EOF
+ SA count=2 tx=3
+-sa[0] tx ipaddr=0x00000000 00000000 00000000 00000000
++sa[0] tx ipaddr=$dstip
+ sa[0] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1
+ sa[0] key=0x34333231 38373635 32313039 36353433
+-sa[1] rx ipaddr=0x00000000 00000000 00000000 037ba8c0
++sa[1] rx ipaddr=$srcip
+ sa[1] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1
+ sa[1] key=0x34333231 38373635 32313039 36353433
+ EOF
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 4cbd2d8ebb0461..397dc962f5e2ad 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1143,6 +1143,14 @@ static int stop_tracing;
+ static struct trace_instance *hist_inst = NULL;
+ static void stop_hist(int sig)
+ {
++ if (stop_tracing) {
++ /*
++ * Stop requested twice in a row; abort event processing and
++ * exit immediately
++ */
++ tracefs_iterate_stop(hist_inst->inst);
++ return;
++ }
+ stop_tracing = 1;
+ if (hist_inst)
+ trace_instance_stop(hist_inst);
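This hunk (and the identical one in timerlat_top.c just below) applies a common CLI convention: the first SIGINT asks the tool to stop collecting and print its report, while a second SIGINT aborts event processing outright via tracefs_iterate_stop(). A minimal stand-alone sketch of the same two-stage pattern, with _exit() standing in for the tracefs abort:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static volatile sig_atomic_t stop_requested;

    static void on_sigint(int sig)
    {
            (void)sig;
            if (stop_requested)
                    _exit(EXIT_FAILURE);    /* second Ctrl-C: abort immediately */
            stop_requested = 1;             /* first Ctrl-C: finish up gracefully */
    }

    int main(void)
    {
            signal(SIGINT, on_sigint);
            while (!stop_requested)
                    pause();                /* stand-in for the event-processing loop */
            puts("stopping: draining buffered events, Ctrl-C again to abort");
            return 0;
    }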
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index d13be28dacd599..0def5fec51ed7a 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -897,6 +897,14 @@ static int stop_tracing;
+ static struct trace_instance *top_inst = NULL;
+ static void stop_top(int sig)
+ {
++ if (stop_tracing) {
++ /*
++ * Stop requested twice in a row; abort event processing and
++ * exit immediately
++ */
++ tracefs_iterate_stop(top_inst->inst);
++ return;
++ }
+ stop_tracing = 1;
+ if (top_inst)
+ trace_instance_stop(top_inst);
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-27 13:22 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-02-27 13:22 UTC (permalink / raw
To: gentoo-commits
commit: 43affa7d97bc920177a436c3997d9b1fb0cf1521
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 27 13:22:00 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 27 13:22:00 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43affa7d
Linux patch 6.12.17
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1016_linux-6.12.17.patch | 9620 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9624 insertions(+)
diff --git a/0000_README b/0000_README
index 9f0c3a67..8efc8938 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-6.12.16.patch
From: https://www.kernel.org
Desc: Linux 6.12.16
+Patch: 1016_linux-6.12.17.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.17
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1016_linux-6.12.17.patch b/1016_linux-6.12.17.patch
new file mode 100644
index 00000000..cebbb158
--- /dev/null
+++ b/1016_linux-6.12.17.patch
@@ -0,0 +1,9620 @@
+diff --git a/Documentation/networking/strparser.rst b/Documentation/networking/strparser.rst
+index 6cab1f74ae05a3..7f623d1db72aae 100644
+--- a/Documentation/networking/strparser.rst
++++ b/Documentation/networking/strparser.rst
+@@ -112,7 +112,7 @@ Functions
+ Callbacks
+ =========
+
+-There are six callbacks:
++There are seven callbacks:
+
+ ::
+
+@@ -182,6 +182,13 @@ There are six callbacks:
+ the length of the message. skb->len - offset may be greater
+ than full_len since strparser does not trim the skb.
+
++ ::
++
++ int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor);
++
++ The read_sock callback is used by strparser instead of
++ sock->ops->read_sock, if provided.
+ ::
+
+ int (*read_sock_done)(struct strparser *strp, int err);
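As a concrete, hedged illustration of the new hook: a strparser user could route parsing through its own receive queue by filling in read_sock. The signature below is the one documented above; the queue, the my_* helpers, and the strp_callbacks wiring are illustrative assumptions, not kernel code.

    #include <linux/skbuff.h>
    #include <net/strparser.h>

    static struct sk_buff_head my_rx_queue;     /* assumed: protocol-private queue */

    static void my_rcv_msg(struct strparser *strp, struct sk_buff *skb);   /* assumed */
    static int my_parse_msg(struct strparser *strp, struct sk_buff *skb);  /* assumed */

    static int my_read_sock(struct strparser *strp, read_descriptor_t *desc,
                            sk_read_actor_t recv_actor)
    {
            struct sk_buff *skb;
            int used, copied = 0;

            /* Feed queued skbs to strparser instead of sock->ops->read_sock. */
            while (desc->count && (skb = skb_peek(&my_rx_queue)) != NULL) {
                    used = recv_actor(desc, skb, 0, skb->len);
                    if (used <= 0)
                            break;
                    copied += used;
                    skb_unlink(skb, &my_rx_queue);
                    kfree_skb(skb);
            }
            return copied;
    }

    static const struct strp_callbacks my_cbs = {
            .rcv_msg   = my_rcv_msg,
            .parse_msg = my_parse_msg,
            .read_sock = my_read_sock,      /* the callback added above */
    };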
+diff --git a/Makefile b/Makefile
+index 340da922fa4f2c..e8b8c5b3840505 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts b/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts
+index 1aa668c3ccf928..dbdee604edab43 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts
+@@ -63,6 +63,18 @@ thermistor {
+ pulldown-ohm = <0>;
+ io-channels = <&auxadc 0>;
+ };
++
++ connector {
++ compatible = "hdmi-connector";
++ label = "hdmi";
++ type = "d";
++
++ port {
++ hdmi_connector_in: endpoint {
++ remote-endpoint = <&hdmi_connector_out>;
++ };
++ };
++ };
+ };
+
+ &auxadc {
+@@ -120,6 +132,43 @@ &i2c6 {
+ pinctrl-0 = <&i2c6_pins>;
+ status = "okay";
+ clock-frequency = <100000>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ it66121hdmitx: hdmitx@4c {
++ compatible = "ite,it66121";
++ reg = <0x4c>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&ite_pins>;
++ reset-gpios = <&pio 160 GPIO_ACTIVE_LOW>;
++ interrupt-parent = <&pio>;
++ interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
++ vcn33-supply = <&mt6358_vcn33_reg>;
++ vcn18-supply = <&mt6358_vcn18_reg>;
++ vrf12-supply = <&mt6358_vrf12_reg>;
++
++ ports {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ port@0 {
++ reg = <0>;
++
++ it66121_in: endpoint {
++ bus-width = <12>;
++ remote-endpoint = <&dpi_out>;
++ };
++ };
++
++ port@1 {
++ reg = <1>;
++
++ hdmi_connector_out: endpoint {
++ remote-endpoint = <&hdmi_connector_in>;
++ };
++ };
++ };
++ };
+ };
+
+ &keyboard {
+@@ -362,6 +411,67 @@ pins_clk {
+ input-enable;
+ };
+ };
++
++ ite_pins: ite-pins {
++ pins-irq {
++ pinmux = <PINMUX_GPIO4__FUNC_GPIO4>;
++ input-enable;
++ bias-pull-up;
++ };
++
++ pins-rst {
++ pinmux = <PINMUX_GPIO160__FUNC_GPIO160>;
++ output-high;
++ };
++ };
++
++ dpi_func_pins: dpi-func-pins {
++ pins-dpi {
++ pinmux = <PINMUX_GPIO12__FUNC_I2S5_BCK>,
++ <PINMUX_GPIO46__FUNC_I2S5_LRCK>,
++ <PINMUX_GPIO47__FUNC_I2S5_DO>,
++ <PINMUX_GPIO13__FUNC_DBPI_D0>,
++ <PINMUX_GPIO14__FUNC_DBPI_D1>,
++ <PINMUX_GPIO15__FUNC_DBPI_D2>,
++ <PINMUX_GPIO16__FUNC_DBPI_D3>,
++ <PINMUX_GPIO17__FUNC_DBPI_D4>,
++ <PINMUX_GPIO18__FUNC_DBPI_D5>,
++ <PINMUX_GPIO19__FUNC_DBPI_D6>,
++ <PINMUX_GPIO20__FUNC_DBPI_D7>,
++ <PINMUX_GPIO21__FUNC_DBPI_D8>,
++ <PINMUX_GPIO22__FUNC_DBPI_D9>,
++ <PINMUX_GPIO23__FUNC_DBPI_D10>,
++ <PINMUX_GPIO24__FUNC_DBPI_D11>,
++ <PINMUX_GPIO25__FUNC_DBPI_HSYNC>,
++ <PINMUX_GPIO26__FUNC_DBPI_VSYNC>,
++ <PINMUX_GPIO27__FUNC_DBPI_DE>,
++ <PINMUX_GPIO28__FUNC_DBPI_CK>;
++ };
++ };
++
++ dpi_idle_pins: dpi-idle-pins {
++ pins-idle {
++ pinmux = <PINMUX_GPIO12__FUNC_GPIO12>,
++ <PINMUX_GPIO46__FUNC_GPIO46>,
++ <PINMUX_GPIO47__FUNC_GPIO47>,
++ <PINMUX_GPIO13__FUNC_GPIO13>,
++ <PINMUX_GPIO14__FUNC_GPIO14>,
++ <PINMUX_GPIO15__FUNC_GPIO15>,
++ <PINMUX_GPIO16__FUNC_GPIO16>,
++ <PINMUX_GPIO17__FUNC_GPIO17>,
++ <PINMUX_GPIO18__FUNC_GPIO18>,
++ <PINMUX_GPIO19__FUNC_GPIO19>,
++ <PINMUX_GPIO20__FUNC_GPIO20>,
++ <PINMUX_GPIO21__FUNC_GPIO21>,
++ <PINMUX_GPIO22__FUNC_GPIO22>,
++ <PINMUX_GPIO23__FUNC_GPIO23>,
++ <PINMUX_GPIO24__FUNC_GPIO24>,
++ <PINMUX_GPIO25__FUNC_GPIO25>,
++ <PINMUX_GPIO26__FUNC_GPIO26>,
++ <PINMUX_GPIO27__FUNC_GPIO27>,
++ <PINMUX_GPIO28__FUNC_GPIO28>;
++ };
++ };
+ };
+
+ &pmic {
+@@ -412,6 +522,15 @@ &scp {
+ status = "okay";
+ };
+
+-&dsi0 {
+- status = "disabled";
++&dpi0 {
++ pinctrl-names = "default", "sleep";
++ pinctrl-0 = <&dpi_func_pins>;
++ pinctrl-1 = <&dpi_idle_pins>;
++ status = "okay";
++
++ port {
++ dpi_out: endpoint {
++ remote-endpoint = <&it66121_in>;
++ };
++ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 5cb6bd3c5acbb0..92c41463d10e37 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1835,6 +1835,7 @@ dsi0: dsi@14014000 {
+ resets = <&mmsys MT8183_MMSYS_SW0_RST_B_DISP_DSI0>;
+ phys = <&mipi_tx0>;
+ phy-names = "dphy";
++ status = "disabled";
+ };
+
+ dpi0: dpi@14015000 {
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts b/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
+index ae398acdcf45e6..0905668cbe1f4e 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
+@@ -226,7 +226,6 @@ &uart0 {
+ };
+
+ &uart5 {
+- pinctrl-0 = <&uart5_xfer>;
+ rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+index b7163ed74232d7..f743aaf78359d2 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+@@ -373,6 +373,12 @@ &u2phy_host {
+ status = "okay";
+ };
+
++&uart5 {
++ /delete-property/ dmas;
++ /delete-property/ dma-names;
++ pinctrl-0 = <&uart5_xfer>;
++};
++
+ /* Mule UCAN */
+ &usb_host0_ehci {
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts b/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
+index 4237f2ee8fee33..f57d4acd9807cb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
+@@ -15,9 +15,11 @@ / {
+ };
+
+ &gmac2io {
++ /delete-property/ tx_delay;
++ /delete-property/ rx_delay;
++
+ phy-handle = <&yt8531c>;
+- tx_delay = <0x19>;
+- rx_delay = <0x05>;
++ phy-mode = "rgmii-id";
+
+ mdio {
+ /delete-node/ ethernet-phy@1;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+index fc67585b64b7ba..83e7e0fbe7839e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+@@ -549,10 +549,10 @@ usb_host2_xhci: usb@fcd00000 {
+ mmu600_pcie: iommu@fc900000 {
+ compatible = "arm,smmu-v3";
+ reg = <0x0 0xfc900000 0x0 0x200000>;
+- interrupts = <GIC_SPI 369 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 371 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 367 IRQ_TYPE_LEVEL_HIGH 0>;
++ interrupts = <GIC_SPI 369 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 371 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 374 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 367 IRQ_TYPE_EDGE_RISING 0>;
+ interrupt-names = "eventq", "gerror", "priq", "cmdq-sync";
+ #iommu-cells = <1>;
+ status = "disabled";
+@@ -561,10 +561,10 @@ mmu600_pcie: iommu@fc900000 {
+ mmu600_php: iommu@fcb00000 {
+ compatible = "arm,smmu-v3";
+ reg = <0x0 0xfcb00000 0x0 0x200000>;
+- interrupts = <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 383 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 386 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 379 IRQ_TYPE_LEVEL_HIGH 0>;
++ interrupts = <GIC_SPI 381 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 383 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 386 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 379 IRQ_TYPE_EDGE_RISING 0>;
+ interrupt-names = "eventq", "gerror", "priq", "cmdq-sync";
+ #iommu-cells = <1>;
+ status = "disabled";
+@@ -2626,9 +2626,9 @@ tsadc: tsadc@fec00000 {
+ rockchip,hw-tshut-temp = <120000>;
+ rockchip,hw-tshut-mode = <0>; /* tshut mode 0:CRU 1:GPIO */
+ rockchip,hw-tshut-polarity = <0>; /* tshut polarity 0:LOW 1:HIGH */
+- pinctrl-0 = <&tsadc_gpio_func>;
+- pinctrl-1 = <&tsadc_shut>;
+- pinctrl-names = "gpio", "otpout";
++ pinctrl-0 = <&tsadc_shut_org>;
++ pinctrl-1 = <&tsadc_gpio_func>;
++ pinctrl-names = "default", "sleep";
+ #thermal-sensor-cells = <1>;
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts b/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
+index 6418286efe40d3..762d36ad733ab2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
+@@ -101,7 +101,7 @@ vcc3v3_lcd: vcc3v3-lcd-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc3v3_lcd";
+ enable-active-high;
+- gpio = <&gpio1 RK_PC4 GPIO_ACTIVE_HIGH>;
++ gpio = <&gpio0 RK_PC4 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcdpwr_en>;
+ vin-supply = <&vcc3v3_sys>;
+@@ -207,7 +207,7 @@ &pcie3x4 {
+ &pinctrl {
+ lcd {
+ lcdpwr_en: lcdpwr-en {
+- rockchip,pins = <1 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>;
++ rockchip,pins = <0 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>;
+ };
+
+ bl_en: bl-en {
+diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
+index 798d965760d434..5a280ac7570cdd 100644
+--- a/arch/arm64/include/asm/mman.h
++++ b/arch/arm64/include/asm/mman.h
+@@ -41,9 +41,12 @@ static inline unsigned long arch_calc_vm_flag_bits(struct file *file,
+ * backed by tags-capable memory. The vm_flags may be overridden by a
+ * filesystem supporting MTE (RAM-based).
+ */
+- if (system_supports_mte() &&
+- ((flags & MAP_ANONYMOUS) || shmem_file(file)))
+- return VM_MTE_ALLOWED;
++ if (system_supports_mte()) {
++ if ((flags & MAP_ANONYMOUS) && !(flags & MAP_HUGETLB))
++ return VM_MTE_ALLOWED;
++ if (shmem_file(file))
++ return VM_MTE_ALLOWED;
++ }
+
+ return 0;
+ }
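Seen from user space, the effect of this check is that PROT_MTE is still granted for plain anonymous (and shmem-backed) mappings, but no longer for MAP_HUGETLB ones, since hugetlb pages were not tag-capable at this point. A hedged arm64-only sketch; PROT_MTE may be absent from older libc headers, so the fallback define (0x20, the arch value) is an assumption made to keep the example self-contained:

    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef PROT_MTE
    #define PROT_MTE 0x20                   /* arm64 asm/mman.h value (assumption) */
    #endif

    int main(void)
    {
            /* Allowed by the check above: anonymous, not hugetlb. */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap(PROT_MTE)");   /* no MTE, or disallowed flags */
                    return 1;
            }
            puts("tag-enabled anonymous mapping created");
            munmap(p, 4096);
            return 0;
    }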
+diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+index c3efacab4b9412..aa90a048f319a3 100644
+--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
++++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+@@ -77,9 +77,17 @@
+ /*
+ * With 4K page size the real_pte machinery is all nops.
+ */
+-#define __real_pte(e, p, o) ((real_pte_t){(e)})
++static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep, int offset)
++{
++ return (real_pte_t){pte};
++}
++
+ #define __rpte_to_pte(r) ((r).pte)
+-#define __rpte_to_hidx(r,index) (pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
++
++static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
++{
++ return pte_val(__rpte_to_pte(rpte)) >> H_PAGE_F_GIX_SHIFT;
++}
+
+ #define pte_iterate_hashed_subpages(rpte, psize, va, index, shift) \
+ do { \
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index acdab294b340a8..c1d9b031f0d578 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -108,7 +108,7 @@ static int text_area_cpu_up(unsigned int cpu)
+ unsigned long addr;
+ int err;
+
+- area = get_vm_area(PAGE_SIZE, VM_ALLOC);
++ area = get_vm_area(PAGE_SIZE, 0);
+ if (!area) {
+ WARN_ONCE(1, "Failed to create text area for cpu %d\n",
+ cpu);
+@@ -493,7 +493,9 @@ static int __do_patch_instructions_mm(u32 *addr, u32 *code, size_t len, bool rep
+
+ orig_mm = start_using_temp_mm(patching_mm);
+
++ kasan_disable_current();
+ err = __patch_instructions(patch_addr, code, len, repeat_instr);
++ kasan_enable_current();
+
+ /* context synchronisation performed by __patch_instructions */
+ stop_using_temp_mm(patching_mm, orig_mm);
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index c2ee0745f59edc..7b69be63d5d20a 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -75,7 +75,7 @@ static int cmma_test_essa(void)
+ : [reg1] "=&d" (reg1),
+ [reg2] "=&a" (reg2),
+ [rc] "+&d" (rc),
+- [tmp] "=&d" (tmp),
++ [tmp] "+&d" (tmp),
+ "+Q" (get_lowcore()->program_new_psw),
+ "=Q" (old)
+ : [psw_old] "a" (&old),
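The one-character change above matters because "=&d" declares tmp as a write-only (early-clobber) output: the compiler may then treat its prior value as dead, even though the ESSA sequence appears to rely on the register's initial contents. "+&d" marks the operand read-write, keeping the initialization alive. A generic, hedged illustration of the distinction in plain GCC inline asm, not the s390 code:

    static inline int constraint_demo(void)
    {
            int x = 1;

            /* With "=&r" the compiler could discard the x = 1 above,
             * since a write-only operand's input value is undefined;
             * "+&r" declares that the asm also reads x, so the store stays. */
            asm volatile("" : "+&r" (x));
            return x;
    }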
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index f5bf400f6a2833..9ec3170c18f925 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -397,34 +397,28 @@ static struct event_constraint intel_lnc_event_constraints[] = {
+ METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FETCH_LAT, 6),
+ METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_MEM_BOUND, 7),
+
++ INTEL_EVENT_CONSTRAINT(0x20, 0xf),
++
++ INTEL_UEVENT_CONSTRAINT(0x012a, 0xf),
++ INTEL_UEVENT_CONSTRAINT(0x012b, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x0148, 0x4),
+ INTEL_UEVENT_CONSTRAINT(0x0175, 0x4),
+
+ INTEL_EVENT_CONSTRAINT(0x2e, 0x3ff),
+ INTEL_EVENT_CONSTRAINT(0x3c, 0x3ff),
+- /*
+- * Generally event codes < 0x90 are restricted to counters 0-3.
+- * The 0x2E and 0x3C are exception, which has no restriction.
+- */
+- INTEL_EVENT_CONSTRAINT_RANGE(0x01, 0x8f, 0xf),
+
+- INTEL_UEVENT_CONSTRAINT(0x01a3, 0xf),
+- INTEL_UEVENT_CONSTRAINT(0x02a3, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4),
+ INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4),
+ INTEL_UEVENT_CONSTRAINT(0x04a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x08a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x10a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x01b1, 0x8),
++ INTEL_UEVENT_CONSTRAINT(0x01cd, 0x3fc),
+ INTEL_UEVENT_CONSTRAINT(0x02cd, 0x3),
+- INTEL_EVENT_CONSTRAINT(0xce, 0x1),
+
+ INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xdf, 0xf),
+- /*
+- * Generally event codes >= 0x90 are likely to have no restrictions.
+- * The exception are defined as above.
+- */
+- INTEL_EVENT_CONSTRAINT_RANGE(0x90, 0xfe, 0x3ff),
++
++ INTEL_UEVENT_CONSTRAINT(0x00e0, 0xf),
+
+ EVENT_CONSTRAINT_END
+ };
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index b6303b0224531b..c07ca43e67e7f1 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1178,7 +1178,7 @@ struct event_constraint intel_lnc_pebs_event_constraints[] = {
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x100, 0x100000000ULL), /* INST_RETIRED.PREC_DIST */
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL),
+
+- INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3ff),
++ INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3fc),
+ INTEL_HYBRID_STLAT_CONSTRAINT(0x2cd, 0x3),
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 375bbb9600d3c1..1a8148dec4afe9 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -816,6 +816,17 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
+ }
+ }
+
++void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu)
++{
++ struct kvm_lapic *apic = vcpu->arch.apic;
++
++ if (WARN_ON_ONCE(!lapic_in_kernel(vcpu)) || !apic->apicv_active)
++ return;
++
++ kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
++}
++EXPORT_SYMBOL_GPL(kvm_apic_update_hwapic_isr);
++
+ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
+ {
+ /* This may race with setting of irr in __apic_accept_irq() and
+diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
+index 1b8ef9856422a4..3aa599db779689 100644
+--- a/arch/x86/kvm/lapic.h
++++ b/arch/x86/kvm/lapic.h
+@@ -117,11 +117,10 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
+ struct kvm_lapic_irq *irq, int *r, struct dest_map *dest_map);
+ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high);
+
+-u64 kvm_get_apic_base(struct kvm_vcpu *vcpu);
+ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
+ int kvm_apic_get_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
+ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
+-enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu);
++void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu);
+ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
+
+ u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu);
+@@ -271,6 +270,11 @@ static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
+ return apic_base & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
+ }
+
++static inline enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
++{
++ return kvm_apic_mode(vcpu->arch.apic_base);
++}
++
+ static inline u8 kvm_xapic_id(struct kvm_lapic *apic)
+ {
+ return kvm_lapic_get_reg(apic, APIC_ID) >> 24;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 931a7361c30f2d..22bee8a711442d 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5043,6 +5043,11 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
+ }
+
++ if (vmx->nested.update_vmcs01_hwapic_isr) {
++ vmx->nested.update_vmcs01_hwapic_isr = false;
++ kvm_apic_update_hwapic_isr(vcpu);
++ }
++
+ if ((vm_exit_reason != -1) &&
+ (enable_shadow_vmcs || nested_vmx_is_evmptr12_valid(vmx)))
+ vmx->nested.need_vmcs12_to_shadow_sync = true;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index f06d443ec3c68d..1af30e3472cdd9 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6858,6 +6858,27 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+ u16 status;
+ u8 old;
+
++ /*
++ * If L2 is active, defer the SVI update until vmcs01 is loaded, as SVI
++ * is relevant if and only if Virtual Interrupt Delivery is
++ * enabled in vmcs12, and if VID is enabled then L2 EOIs affect L2's
++ * vAPIC, not L1's vAPIC. KVM must update vmcs01 on the next nested
++ * VM-Exit, otherwise L1 will run with a stale SVI.
++ */
++ if (is_guest_mode(vcpu)) {
++ /*
++ * KVM is supposed to forward intercepted L2 EOIs to L1 if VID
++ * is enabled in vmcs12; as above, the EOIs affect L2's vAPIC.
++ * Note, userspace can stuff state while L2 is active; assert
++ * that VID is disabled if and only if the vCPU is in KVM_RUN
++ * to avoid false positives if userspace is setting APIC state.
++ */
++ WARN_ON_ONCE(vcpu->wants_to_run &&
++ nested_cpu_has_vid(get_vmcs12(vcpu)));
++ to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
++ return;
++ }
++
+ if (max_isr == -1)
+ max_isr = 0;
+
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 2325f773a20be0..41bf59bbc6426c 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -176,6 +176,7 @@ struct nested_vmx {
+ bool reload_vmcs01_apic_access_page;
+ bool update_vmcs01_cpu_dirty_logging;
+ bool update_vmcs01_apicv_status;
++ bool update_vmcs01_hwapic_isr;
+
+ /*
+ * Enlightened VMCS has been enabled. It does not mean that L1 has to
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0846e3af5f6c5a..b67a2f46e40b05 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -667,17 +667,6 @@ static void drop_user_return_notifiers(void)
+ kvm_on_user_return(&msrs->urn);
+ }
+
+-u64 kvm_get_apic_base(struct kvm_vcpu *vcpu)
+-{
+- return vcpu->arch.apic_base;
+-}
+-
+-enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
+-{
+- return kvm_apic_mode(kvm_get_apic_base(vcpu));
+-}
+-EXPORT_SYMBOL_GPL(kvm_get_apic_mode);
+-
+ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ {
+ enum lapic_mode old_mode = kvm_get_apic_mode(vcpu);
+@@ -4314,7 +4303,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ msr_info->data = 1 << 24;
+ break;
+ case MSR_IA32_APICBASE:
+- msr_info->data = kvm_get_apic_base(vcpu);
++ msr_info->data = vcpu->arch.apic_base;
+ break;
+ case APIC_BASE_MSR ... APIC_BASE_MSR + 0xff:
+ return kvm_x2apic_msr_read(vcpu, msr_info->index, &msr_info->data);
+@@ -10159,7 +10148,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
+
+ kvm_run->if_flag = kvm_x86_call(get_if_flag)(vcpu);
+ kvm_run->cr8 = kvm_get_cr8(vcpu);
+- kvm_run->apic_base = kvm_get_apic_base(vcpu);
++ kvm_run->apic_base = vcpu->arch.apic_base;
+
+ kvm_run->ready_for_interrupt_injection =
+ pic_in_kernel(vcpu->kvm) ||
+@@ -11718,7 +11707,7 @@ static void __get_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+ sregs->cr4 = kvm_read_cr4(vcpu);
+ sregs->cr8 = kvm_get_cr8(vcpu);
+ sregs->efer = vcpu->arch.efer;
+- sregs->apic_base = kvm_get_apic_base(vcpu);
++ sregs->apic_base = vcpu->arch.apic_base;
+ }
+
+ static void __get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig
+index 682c532452863e..e4d418b44626ed 100644
+--- a/drivers/accel/ivpu/Kconfig
++++ b/drivers/accel/ivpu/Kconfig
+@@ -8,6 +8,7 @@ config DRM_ACCEL_IVPU
+ select FW_LOADER
+ select DRM_GEM_SHMEM_HELPER
+ select GENERIC_ALLOCATOR
++ select WANT_DEV_COREDUMP
+ help
+ Choose this option if you have a system with a 14th generation
+ Intel CPU (Meteor Lake) or newer. Intel NPU (formerly called Intel VPU)
+diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
+index ebd682a42eb124..232ea6d28c6e25 100644
+--- a/drivers/accel/ivpu/Makefile
++++ b/drivers/accel/ivpu/Makefile
+@@ -19,5 +19,6 @@ intel_vpu-y := \
+ ivpu_sysfs.o
+
+ intel_vpu-$(CONFIG_DEBUG_FS) += ivpu_debugfs.o
++intel_vpu-$(CONFIG_DEV_COREDUMP) += ivpu_coredump.o
+
+ obj-$(CONFIG_DRM_ACCEL_IVPU) += intel_vpu.o
+diff --git a/drivers/accel/ivpu/ivpu_coredump.c b/drivers/accel/ivpu/ivpu_coredump.c
+new file mode 100644
+index 00000000000000..16ad0c30818ccf
+--- /dev/null
++++ b/drivers/accel/ivpu/ivpu_coredump.c
+@@ -0,0 +1,39 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (C) 2020-2024 Intel Corporation
++ */
++
++#include <linux/devcoredump.h>
++#include <linux/firmware.h>
++
++#include "ivpu_coredump.h"
++#include "ivpu_fw.h"
++#include "ivpu_gem.h"
++#include "vpu_boot_api.h"
++
++#define CRASH_DUMP_HEADER "Intel NPU crash dump"
++#define CRASH_DUMP_HEADERS_SIZE SZ_4K
++
++void ivpu_dev_coredump(struct ivpu_device *vdev)
++{
++ struct drm_print_iterator pi = {};
++ struct drm_printer p;
++ size_t coredump_size;
++ char *coredump;
++
++ coredump_size = CRASH_DUMP_HEADERS_SIZE + FW_VERSION_HEADER_SIZE +
++ ivpu_bo_size(vdev->fw->mem_log_crit) + ivpu_bo_size(vdev->fw->mem_log_verb);
++ coredump = vmalloc(coredump_size);
++ if (!coredump)
++ return;
++
++ pi.data = coredump;
++ pi.remain = coredump_size;
++ p = drm_coredump_printer(&pi);
++
++ drm_printf(&p, "%s\n", CRASH_DUMP_HEADER);
++ drm_printf(&p, "FW version: %s\n", vdev->fw->version);
++ ivpu_fw_log_print(vdev, false, &p);
++
++ dev_coredumpv(vdev->drm.dev, coredump, pi.offset, GFP_KERNEL);
++}
+diff --git a/drivers/accel/ivpu/ivpu_coredump.h b/drivers/accel/ivpu/ivpu_coredump.h
+new file mode 100644
+index 00000000000000..8efb09d0244115
+--- /dev/null
++++ b/drivers/accel/ivpu/ivpu_coredump.h
+@@ -0,0 +1,25 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2020-2024 Intel Corporation
++ */
++
++#ifndef __IVPU_COREDUMP_H__
++#define __IVPU_COREDUMP_H__
++
++#include <drm/drm_print.h>
++
++#include "ivpu_drv.h"
++#include "ivpu_fw_log.h"
++
++#ifdef CONFIG_DEV_COREDUMP
++void ivpu_dev_coredump(struct ivpu_device *vdev);
++#else
++static inline void ivpu_dev_coredump(struct ivpu_device *vdev)
++{
++ struct drm_printer p = drm_info_printer(vdev->drm.dev);
++
++ ivpu_fw_log_print(vdev, false, &p);
++}
++#endif
++
++#endif /* __IVPU_COREDUMP_H__ */
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index c91400ecf92651..38b4158f52784b 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -14,7 +14,7 @@
+ #include <drm/drm_ioctl.h>
+ #include <drm/drm_prime.h>
+
+-#include "vpu_boot_api.h"
++#include "ivpu_coredump.h"
+ #include "ivpu_debugfs.h"
+ #include "ivpu_drv.h"
+ #include "ivpu_fw.h"
+@@ -29,6 +29,7 @@
+ #include "ivpu_ms.h"
+ #include "ivpu_pm.h"
+ #include "ivpu_sysfs.h"
++#include "vpu_boot_api.h"
+
+ #ifndef DRIVER_VERSION_STR
+ #define DRIVER_VERSION_STR __stringify(DRM_IVPU_DRIVER_MAJOR) "." \
+@@ -382,7 +383,7 @@ int ivpu_boot(struct ivpu_device *vdev)
+ ivpu_err(vdev, "Failed to boot the firmware: %d\n", ret);
+ ivpu_hw_diagnose_failure(vdev);
+ ivpu_mmu_evtq_dump(vdev);
+- ivpu_fw_log_dump(vdev);
++ ivpu_dev_coredump(vdev);
+ return ret;
+ }
+
+diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
+index 63f13b697eed71..2b30cc2e9272e4 100644
+--- a/drivers/accel/ivpu/ivpu_drv.h
++++ b/drivers/accel/ivpu/ivpu_drv.h
+@@ -152,6 +152,7 @@ struct ivpu_device {
+ int tdr;
+ int autosuspend;
+ int d0i3_entry_msg;
++ int state_dump_msg;
+ } timeout;
+ };
+
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index ede6165e09d90d..b2b6d89f06537f 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -25,7 +25,6 @@
+ #define FW_SHAVE_NN_MAX_SIZE SZ_2M
+ #define FW_RUNTIME_MIN_ADDR (FW_GLOBAL_MEM_START)
+ #define FW_RUNTIME_MAX_ADDR (FW_GLOBAL_MEM_END - FW_SHARED_MEM_SIZE)
+-#define FW_VERSION_HEADER_SIZE SZ_4K
+ #define FW_FILE_IMAGE_OFFSET (VPU_FW_HEADER_SIZE + FW_VERSION_HEADER_SIZE)
+
+ #define WATCHDOG_MSS_REDIRECT 32
+@@ -191,8 +190,10 @@ static int ivpu_fw_parse(struct ivpu_device *vdev)
+ ivpu_dbg(vdev, FW_BOOT, "Header version: 0x%x, format 0x%x\n",
+ fw_hdr->header_version, fw_hdr->image_format);
+
+- ivpu_info(vdev, "Firmware: %s, version: %s", fw->name,
+- (const char *)fw_hdr + VPU_FW_HEADER_SIZE);
++ if (!scnprintf(fw->version, sizeof(fw->version), "%s", fw->file->data + VPU_FW_HEADER_SIZE))
++ ivpu_warn(vdev, "Missing firmware version\n");
++
++ ivpu_info(vdev, "Firmware: %s, version: %s\n", fw->name, fw->version);
+
+ if (IVPU_FW_CHECK_API_COMPAT(vdev, fw_hdr, BOOT, 3))
+ return -EINVAL;
+diff --git a/drivers/accel/ivpu/ivpu_fw.h b/drivers/accel/ivpu/ivpu_fw.h
+index 40d9d17be3f528..5e8eb608b70f1f 100644
+--- a/drivers/accel/ivpu/ivpu_fw.h
++++ b/drivers/accel/ivpu/ivpu_fw.h
+@@ -1,11 +1,14 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (C) 2020-2023 Intel Corporation
++ * Copyright (C) 2020-2024 Intel Corporation
+ */
+
+ #ifndef __IVPU_FW_H__
+ #define __IVPU_FW_H__
+
++#define FW_VERSION_HEADER_SIZE SZ_4K
++#define FW_VERSION_STR_SIZE SZ_256
++
+ struct ivpu_device;
+ struct ivpu_bo;
+ struct vpu_boot_params;
+@@ -13,6 +16,7 @@ struct vpu_boot_params;
+ struct ivpu_fw_info {
+ const struct firmware *file;
+ const char *name;
++ char version[FW_VERSION_STR_SIZE];
+ struct ivpu_bo *mem;
+ struct ivpu_bo *mem_shave_nn;
+ struct ivpu_bo *mem_log_crit;
+diff --git a/drivers/accel/ivpu/ivpu_fw_log.h b/drivers/accel/ivpu/ivpu_fw_log.h
+index 0b2573f6f31519..4b390a99699d66 100644
+--- a/drivers/accel/ivpu/ivpu_fw_log.h
++++ b/drivers/accel/ivpu/ivpu_fw_log.h
+@@ -8,8 +8,6 @@
+
+ #include <linux/types.h>
+
+-#include <drm/drm_print.h>
+-
+ #include "ivpu_drv.h"
+
+ #define IVPU_FW_LOG_DEFAULT 0
+@@ -28,11 +26,5 @@ extern unsigned int ivpu_log_level;
+ void ivpu_fw_log_print(struct ivpu_device *vdev, bool only_new_msgs, struct drm_printer *p);
+ void ivpu_fw_log_clear(struct ivpu_device *vdev);
+
+-static inline void ivpu_fw_log_dump(struct ivpu_device *vdev)
+-{
+- struct drm_printer p = drm_info_printer(vdev->drm.dev);
+-
+- ivpu_fw_log_print(vdev, false, &p);
+-}
+
+ #endif /* __IVPU_FW_LOG_H__ */
+diff --git a/drivers/accel/ivpu/ivpu_hw.c b/drivers/accel/ivpu/ivpu_hw.c
+index e69c0613513f11..08b3cef58fd2d7 100644
+--- a/drivers/accel/ivpu/ivpu_hw.c
++++ b/drivers/accel/ivpu/ivpu_hw.c
+@@ -89,12 +89,14 @@ static void timeouts_init(struct ivpu_device *vdev)
+ vdev->timeout.tdr = 2000000;
+ vdev->timeout.autosuspend = -1;
+ vdev->timeout.d0i3_entry_msg = 500;
++ vdev->timeout.state_dump_msg = 10;
+ } else if (ivpu_is_simics(vdev)) {
+ vdev->timeout.boot = 50;
+ vdev->timeout.jsm = 500;
+ vdev->timeout.tdr = 10000;
+ vdev->timeout.autosuspend = -1;
+ vdev->timeout.d0i3_entry_msg = 100;
++ vdev->timeout.state_dump_msg = 10;
+ } else {
+ vdev->timeout.boot = 1000;
+ vdev->timeout.jsm = 500;
+@@ -104,6 +106,7 @@ static void timeouts_init(struct ivpu_device *vdev)
+ else
+ vdev->timeout.autosuspend = 100;
+ vdev->timeout.d0i3_entry_msg = 5;
++ vdev->timeout.state_dump_msg = 10;
+ }
+ }
+
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 29b723039a3459..13c8a12162e89e 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -353,6 +353,32 @@ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ return ret;
+ }
+
++int ivpu_ipc_send_and_wait(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ u32 channel, unsigned long timeout_ms)
++{
++ struct ivpu_ipc_consumer cons;
++ int ret;
++
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
++ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
++
++ ret = ivpu_ipc_send(vdev, &cons, req);
++ if (ret) {
++ ivpu_warn_ratelimited(vdev, "IPC send failed: %d\n", ret);
++ goto consumer_del;
++ }
++
++ msleep(timeout_ms);
++
++consumer_del:
++ ivpu_ipc_consumer_del(vdev, &cons);
++ ivpu_rpm_put(vdev);
++ return ret;
++}
++
+ static bool
+ ivpu_ipc_match_consumer(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ struct ivpu_ipc_hdr *ipc_hdr, struct vpu_jsm_msg *jsm_msg)
+diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h
+index fb4de7fb8210ea..b4dfb504679bac 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.h
++++ b/drivers/accel/ivpu/ivpu_ipc.h
+@@ -107,5 +107,7 @@ int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg
+ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+ u32 channel, unsigned long timeout_ms);
++int ivpu_ipc_send_and_wait(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ u32 channel, unsigned long timeout_ms);
+
+ #endif /* __IVPU_IPC_H__ */
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
+index 88105963c1b288..f7618b605f0219 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
+@@ -555,3 +555,11 @@ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, &resp,
+ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
++
++int ivpu_jsm_state_dump(struct ivpu_device *vdev)
++{
++ struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_STATE_DUMP };
++
++ return ivpu_ipc_send_and_wait(vdev, &req, VPU_IPC_CHAN_ASYNC_CMD,
++ vdev->timeout.state_dump_msg);
++}
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.h b/drivers/accel/ivpu/ivpu_jsm_msg.h
+index e4e42c0ff6e656..9e84d3526a1463 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.h
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.h
+@@ -43,4 +43,6 @@ int ivpu_jsm_metric_streamer_info(struct ivpu_device *vdev, u64 metric_group_mas
+ u64 buffer_size, u32 *sample_size, u64 *info_size);
+ int ivpu_jsm_dct_enable(struct ivpu_device *vdev, u32 active_us, u32 inactive_us);
+ int ivpu_jsm_dct_disable(struct ivpu_device *vdev);
++int ivpu_jsm_state_dump(struct ivpu_device *vdev);
++
+ #endif
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index ef9a4ba18cb8a8..fbb61a2c3b19ce 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -9,17 +9,18 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/reboot.h>
+
+-#include "vpu_boot_api.h"
++#include "ivpu_coredump.h"
+ #include "ivpu_drv.h"
+-#include "ivpu_hw.h"
+ #include "ivpu_fw.h"
+ #include "ivpu_fw_log.h"
++#include "ivpu_hw.h"
+ #include "ivpu_ipc.h"
+ #include "ivpu_job.h"
+ #include "ivpu_jsm_msg.h"
+ #include "ivpu_mmu.h"
+ #include "ivpu_ms.h"
+ #include "ivpu_pm.h"
++#include "vpu_boot_api.h"
+
+ static bool ivpu_disable_recovery;
+ module_param_named_unsafe(disable_recovery, ivpu_disable_recovery, bool, 0644);
+@@ -110,40 +111,57 @@ static int ivpu_resume(struct ivpu_device *vdev)
+ return ret;
+ }
+
+-static void ivpu_pm_recovery_work(struct work_struct *work)
++static void ivpu_pm_reset_begin(struct ivpu_device *vdev)
+ {
+- struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, recovery_work);
+- struct ivpu_device *vdev = pm->vdev;
+- char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
+- int ret;
+-
+- ivpu_err(vdev, "Recovering the NPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
+-
+- ret = pm_runtime_resume_and_get(vdev->drm.dev);
+- if (ret)
+- ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
+-
+- ivpu_fw_log_dump(vdev);
++ pm_runtime_disable(vdev->drm.dev);
+
+ atomic_inc(&vdev->pm->reset_counter);
+ atomic_set(&vdev->pm->reset_pending, 1);
+ down_write(&vdev->pm->reset_lock);
++}
++
++static void ivpu_pm_reset_complete(struct ivpu_device *vdev)
++{
++ int ret;
+
+- ivpu_suspend(vdev);
+ ivpu_pm_prepare_cold_boot(vdev);
+ ivpu_jobs_abort_all(vdev);
+ ivpu_ms_cleanup_all(vdev);
+
+ ret = ivpu_resume(vdev);
+- if (ret)
++ if (ret) {
+ ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
++ pm_runtime_set_suspended(vdev->drm.dev);
++ } else {
++ pm_runtime_set_active(vdev->drm.dev);
++ }
+
+ up_write(&vdev->pm->reset_lock);
+ atomic_set(&vdev->pm->reset_pending, 0);
+
+- kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
+ pm_runtime_mark_last_busy(vdev->drm.dev);
+- pm_runtime_put_autosuspend(vdev->drm.dev);
++ pm_runtime_enable(vdev->drm.dev);
++}
++
++static void ivpu_pm_recovery_work(struct work_struct *work)
++{
++ struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, recovery_work);
++ struct ivpu_device *vdev = pm->vdev;
++ char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
++
++ ivpu_err(vdev, "Recovering the NPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
++
++ ivpu_pm_reset_begin(vdev);
++
++ if (!pm_runtime_status_suspended(vdev->drm.dev)) {
++ ivpu_jsm_state_dump(vdev);
++ ivpu_dev_coredump(vdev);
++ ivpu_suspend(vdev);
++ }
++
++ ivpu_pm_reset_complete(vdev);
++
++ kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
+ }
+
+ void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason)
+@@ -262,7 +280,7 @@ int ivpu_pm_runtime_suspend_cb(struct device *dev)
+ if (!is_idle || ret_d0i3) {
+ ivpu_err(vdev, "Forcing cold boot due to previous errors\n");
+ atomic_inc(&vdev->pm->reset_counter);
+- ivpu_fw_log_dump(vdev);
++ ivpu_dev_coredump(vdev);
+ ivpu_pm_prepare_cold_boot(vdev);
+ } else {
+ ivpu_pm_prepare_warm_boot(vdev);
+@@ -314,16 +332,13 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
+ struct ivpu_device *vdev = pci_get_drvdata(pdev);
+
+ ivpu_dbg(vdev, PM, "Pre-reset..\n");
+- atomic_inc(&vdev->pm->reset_counter);
+- atomic_set(&vdev->pm->reset_pending, 1);
+
+- pm_runtime_get_sync(vdev->drm.dev);
+- down_write(&vdev->pm->reset_lock);
+- ivpu_prepare_for_reset(vdev);
+- ivpu_hw_reset(vdev);
+- ivpu_pm_prepare_cold_boot(vdev);
+- ivpu_jobs_abort_all(vdev);
+- ivpu_ms_cleanup_all(vdev);
++ ivpu_pm_reset_begin(vdev);
++
++ if (!pm_runtime_status_suspended(vdev->drm.dev)) {
++ ivpu_prepare_for_reset(vdev);
++ ivpu_hw_reset(vdev);
++ }
+
+ ivpu_dbg(vdev, PM, "Pre-reset done.\n");
+ }
+@@ -331,18 +346,12 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
+ void ivpu_pm_reset_done_cb(struct pci_dev *pdev)
+ {
+ struct ivpu_device *vdev = pci_get_drvdata(pdev);
+- int ret;
+
+ ivpu_dbg(vdev, PM, "Post-reset..\n");
+- ret = ivpu_resume(vdev);
+- if (ret)
+- ivpu_err(vdev, "Failed to set RESUME state: %d\n", ret);
+- up_write(&vdev->pm->reset_lock);
+- atomic_set(&vdev->pm->reset_pending, 0);
+- ivpu_dbg(vdev, PM, "Post-reset done.\n");
+
+- pm_runtime_mark_last_busy(vdev->drm.dev);
+- pm_runtime_put_autosuspend(vdev->drm.dev);
++ ivpu_pm_reset_complete(vdev);
++
++ ivpu_dbg(vdev, PM, "Post-reset done.\n");
+ }
+
+ void ivpu_pm_init(struct ivpu_device *vdev)
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index dfbbac92242a84..04d02c746ec0fd 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -272,6 +272,39 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
+ }
+ EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
+
++static bool qca_filename_has_extension(const char *filename)
++{
++ const char *suffix = strrchr(filename, '.');
++
++ /* File extensions require a dot, but not as the first or last character */
++ if (!suffix || suffix == filename || *(suffix + 1) == '\0')
++ return 0;
++
++ /* Avoid matching directories with names that look like files with extensions */
++ return !strchr(suffix, '/');
++}
++
++static bool qca_get_alt_nvm_file(char *filename, size_t max_size)
++{
++ char fwname[64];
++ const char *suffix;
++
++ /* nvm file name has an extension, replace with .bin */
++ if (qca_filename_has_extension(filename)) {
++ suffix = strrchr(filename, '.');
++ strscpy(fwname, filename, suffix - filename + 1);
++ snprintf(fwname + (suffix - filename),
++ sizeof(fwname) - (suffix - filename), ".bin");
++ /* If nvm file is already the default one, return false to skip the retry. */
++ if (strcmp(fwname, filename) == 0)
++ return false;
++
++ snprintf(filename, max_size, "%s", fwname);
++ return true;
++ }
++ return false;
++}
++
+ static int qca_tlv_check_data(struct hci_dev *hdev,
+ struct qca_fw_config *config,
+ u8 *fw_data, size_t fw_size,
+@@ -564,6 +597,19 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ config->fwname, ret);
+ return ret;
+ }
++ }
++ /* If the board-specific file is missing, try loading the default
++ * one, unless that was attempted already.
++ */
++ else if (config->type == TLV_TYPE_NVM &&
++ qca_get_alt_nvm_file(config->fwname, sizeof(config->fwname))) {
++ bt_dev_info(hdev, "QCA Downloading %s", config->fwname);
++ ret = request_firmware(&fw, config->fwname, &hdev->dev);
++ if (ret) {
++ bt_dev_err(hdev, "QCA Failed to request file: %s (%d)",
++ config->fwname, ret);
++ return ret;
++ }
+ } else {
+ bt_dev_err(hdev, "QCA Failed to request file: %s (%d)",
+ config->fwname, ret);
+@@ -700,34 +746,38 @@ static int qca_check_bdaddr(struct hci_dev *hdev, const struct qca_fw_config *co
+ return 0;
+ }
+
+-static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size,
++static void qca_get_nvm_name_by_board(char *fwname, size_t max_size,
++ const char *stem, enum qca_btsoc_type soc_type,
+ struct qca_btsoc_version ver, u8 rom_ver, u16 bid)
+ {
+ const char *variant;
++ const char *prefix;
+
+- /* hsp gf chip */
+- if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
+- variant = "g";
+- else
+- variant = "";
++ /* Set the default values for variant and prefix */
++ variant = "";
++ prefix = "b";
+
+- if (bid == 0x0)
+- snprintf(fwname, max_size, "qca/hpnv%02x%s.bin", rom_ver, variant);
+- else
+- snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid);
+-}
++ if (soc_type == QCA_QCA2066)
++ prefix = "";
+
+-static inline void qca_get_nvm_name_generic(struct qca_fw_config *cfg,
+- const char *stem, u8 rom_ver, u16 bid)
+-{
+- if (bid == 0x0)
+- snprintf(cfg->fwname, sizeof(cfg->fwname), "qca/%snv%02x.bin", stem, rom_ver);
+- else if (bid & 0xff00)
+- snprintf(cfg->fwname, sizeof(cfg->fwname),
+- "qca/%snv%02x.b%x", stem, rom_ver, bid);
+- else
+- snprintf(cfg->fwname, sizeof(cfg->fwname),
+- "qca/%snv%02x.b%02x", stem, rom_ver, bid);
++ if (soc_type == QCA_WCN6855 || soc_type == QCA_QCA2066) {
++ /* If the chip is manufactured by GlobalFoundries */
++ if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
++ variant = "g";
++ }
++
++ if (rom_ver != 0) {
++ if (bid == 0x0 || bid == 0xffff)
++ snprintf(fwname, max_size, "qca/%s%02x%s.bin", stem, rom_ver, variant);
++ else
++ snprintf(fwname, max_size, "qca/%s%02x%s.%s%02x", stem, rom_ver,
++ variant, prefix, bid);
++ } else {
++ if (bid == 0x0 || bid == 0xffff)
++ snprintf(fwname, max_size, "qca/%s%s.bin", stem, variant);
++ else
++ snprintf(fwname, max_size, "qca/%s%s.%s%02x", stem, variant, prefix, bid);
++ }
+ }
+
+ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+@@ -816,8 +866,14 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ /* Download NVM configuration */
+ config.type = TLV_TYPE_NVM;
+ if (firmware_name) {
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/%s", firmware_name);
++ /* The firmware name has an extension, use it directly */
++ if (qca_filename_has_extension(firmware_name)) {
++ snprintf(config.fwname, sizeof(config.fwname), "qca/%s", firmware_name);
++ } else {
++ qca_read_fw_board_id(hdev, &boardid);
++ qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
++ firmware_name, soc_type, ver, 0, boardid);
++ }
+ } else {
+ switch (soc_type) {
+ case QCA_WCN3990:
+@@ -836,8 +892,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ "qca/apnv%02x.bin", rom_ver);
+ break;
+ case QCA_QCA2066:
+- qca_generate_hsp_nvm_name(config.fwname,
+- sizeof(config.fwname), ver, rom_ver, boardid);
++ qca_get_nvm_name_by_board(config.fwname,
++ sizeof(config.fwname), "hpnv", soc_type, ver,
++ rom_ver, boardid);
+ break;
+ case QCA_QCA6390:
+ snprintf(config.fwname, sizeof(config.fwname),
+@@ -848,13 +905,14 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ "qca/msnv%02x.bin", rom_ver);
+ break;
+ case QCA_WCN6855:
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/hpnv%02x.bin", rom_ver);
++ qca_read_fw_board_id(hdev, &boardid);
++ qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
++ "hpnv", soc_type, ver, rom_ver, boardid);
+ break;
+ case QCA_WCN7850:
+- qca_get_nvm_name_generic(&config, "hmt", rom_ver, boardid);
++ qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
++ "hmtnv", soc_type, ver, rom_ver, boardid);
+ break;
+-
+ default:
+ snprintf(config.fwname, sizeof(config.fwname),
+ "qca/nvm_%08x.bin", soc_ver);
+diff --git a/drivers/clocksource/jcore-pit.c b/drivers/clocksource/jcore-pit.c
+index a3fe98cd383820..82815428f8f925 100644
+--- a/drivers/clocksource/jcore-pit.c
++++ b/drivers/clocksource/jcore-pit.c
+@@ -114,6 +114,18 @@ static int jcore_pit_local_init(unsigned cpu)
+ pit->periodic_delta = DIV_ROUND_CLOSEST(NSEC_PER_SEC, HZ * buspd);
+
+ clockevents_config_and_register(&pit->ced, freq, 1, ULONG_MAX);
++ enable_percpu_irq(pit->ced.irq, IRQ_TYPE_NONE);
++
++ return 0;
++}
++
++static int jcore_pit_local_teardown(unsigned cpu)
++{
++ struct jcore_pit *pit = this_cpu_ptr(jcore_pit_percpu);
++
++ pr_info("Local J-Core PIT teardown on cpu %u\n", cpu);
++
++ disable_percpu_irq(pit->ced.irq);
+
+ return 0;
+ }
+@@ -168,6 +180,7 @@ static int __init jcore_pit_init(struct device_node *node)
+ return -ENOMEM;
+ }
+
++ irq_set_percpu_devid(pit_irq);
+ err = request_percpu_irq(pit_irq, jcore_timer_interrupt,
+ "jcore_pit", jcore_pit_percpu);
+ if (err) {
+@@ -237,7 +250,7 @@ static int __init jcore_pit_init(struct device_node *node)
+
+ cpuhp_setup_state(CPUHP_AP_JCORE_TIMER_STARTING,
+ "clockevents/jcore:starting",
+- jcore_pit_local_init, NULL);
++ jcore_pit_local_init, jcore_pit_local_teardown);
+
+ return 0;
+ }
+diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
+index a9a8ba067007a9..0fd7a777fe7d27 100644
+--- a/drivers/edac/qcom_edac.c
++++ b/drivers/edac/qcom_edac.c
+@@ -95,7 +95,7 @@ static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_b
+ * Configure interrupt enable registers such that Tag, Data RAM related
+ * interrupts are propagated to interrupt controller for servicing
+ */
+- ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable,
++ ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable,
+ TRP0_INTERRUPT_ENABLE,
+ TRP0_INTERRUPT_ENABLE);
+ if (ret)
+@@ -113,7 +113,7 @@ static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_b
+ if (ret)
+ return ret;
+
+- ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable,
++ ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable,
+ DRP0_INTERRUPT_ENABLE,
+ DRP0_INTERRUPT_ENABLE);
+ if (ret)
+diff --git a/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c b/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
+index a86ab9b35953f7..2641faa329cdd0 100644
+--- a/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
++++ b/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
+@@ -254,8 +254,8 @@ static int scmi_imx_misc_ctrl_set(const struct scmi_protocol_handle *ph,
+ if (num > max_num)
+ return -EINVAL;
+
+- ret = ph->xops->xfer_get_init(ph, SCMI_IMX_MISC_CTRL_SET, sizeof(*in),
+- 0, &t);
++ ret = ph->xops->xfer_get_init(ph, SCMI_IMX_MISC_CTRL_SET,
++ sizeof(*in) + num * sizeof(__le32), 0, &t);
+ if (ret)
+ return ret;
+
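The fix above sizes the outgoing message as the header plus num trailing 32-bit values instead of the bare header, which is the standard flexible-array sizing rule. A generic sketch of that rule with a hypothetical message struct (the names are illustrative, not the SCMI types):

    #include <stdint.h>
    #include <stdlib.h>

    struct ctrl_set_msg {
            uint32_t ctrl_id;
            uint32_t num;
            uint32_t value[];               /* flexible array member */
    };

    static struct ctrl_set_msg *ctrl_set_msg_alloc(uint32_t num)
    {
            /* sizeof(*msg) covers the header only; the per-item payload
             * must be added explicitly, exactly as the hunk above does. */
            struct ctrl_set_msg *msg =
                    calloc(1, sizeof(*msg) + num * sizeof(msg->value[0]));

            if (msg)
                    msg->num = num;
            return msg;
    }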
+diff --git a/drivers/firmware/imx/Kconfig b/drivers/firmware/imx/Kconfig
+index 907cd149c40a8b..c964f4924359fc 100644
+--- a/drivers/firmware/imx/Kconfig
++++ b/drivers/firmware/imx/Kconfig
+@@ -25,6 +25,7 @@ config IMX_SCU
+
+ config IMX_SCMI_MISC_DRV
+ tristate "IMX SCMI MISC Protocol driver"
++ depends on ARCH_MXC || COMPILE_TEST
+ default y if ARCH_MXC
+ help
+ The System Controller Management Interface firmware (SCMI FW) is
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 1e8f0bdb6ae3b4..209871c219d697 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -3068,6 +3068,8 @@ static int gpiod_get_raw_value_commit(const struct gpio_desc *desc)
+ static int gpio_chip_get_multiple(struct gpio_chip *gc,
+ unsigned long *mask, unsigned long *bits)
+ {
++ lockdep_assert_held(&gc->gpiodev->srcu);
++
+ if (gc->get_multiple)
+ return gc->get_multiple(gc, mask, bits);
+ if (gc->get) {
+@@ -3098,6 +3100,7 @@ int gpiod_get_array_value_complex(bool raw, bool can_sleep,
+ struct gpio_array *array_info,
+ unsigned long *value_bitmap)
+ {
++ struct gpio_chip *gc;
+ int ret, i = 0;
+
+ /*
+@@ -3109,10 +3112,15 @@ int gpiod_get_array_value_complex(bool raw, bool can_sleep,
+ array_size <= array_info->size &&
+ (void *)array_info == desc_array + array_info->size) {
+ if (!can_sleep)
+- WARN_ON(array_info->chip->can_sleep);
++ WARN_ON(array_info->gdev->can_sleep);
++
++ guard(srcu)(&array_info->gdev->srcu);
++ gc = srcu_dereference(array_info->gdev->chip,
++ &array_info->gdev->srcu);
++ if (!gc)
++ return -ENODEV;
+
+- ret = gpio_chip_get_multiple(array_info->chip,
+- array_info->get_mask,
++ ret = gpio_chip_get_multiple(gc, array_info->get_mask,
+ value_bitmap);
+ if (ret)
+ return ret;
+@@ -3393,6 +3401,8 @@ static void gpiod_set_raw_value_commit(struct gpio_desc *desc, bool value)
+ static void gpio_chip_set_multiple(struct gpio_chip *gc,
+ unsigned long *mask, unsigned long *bits)
+ {
++ lockdep_assert_held(&gc->gpiodev->srcu);
++
+ if (gc->set_multiple) {
+ gc->set_multiple(gc, mask, bits);
+ } else {
+@@ -3410,6 +3420,7 @@ int gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ struct gpio_array *array_info,
+ unsigned long *value_bitmap)
+ {
++ struct gpio_chip *gc;
+ int i = 0;
+
+ /*
+@@ -3421,14 +3432,19 @@ int gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ array_size <= array_info->size &&
+ (void *)array_info == desc_array + array_info->size) {
+ if (!can_sleep)
+- WARN_ON(array_info->chip->can_sleep);
++ WARN_ON(array_info->gdev->can_sleep);
++
++ guard(srcu)(&array_info->gdev->srcu);
++ gc = srcu_dereference(array_info->gdev->chip,
++ &array_info->gdev->srcu);
++ if (!gc)
++ return -ENODEV;
+
+ if (!raw && !bitmap_empty(array_info->invert_mask, array_size))
+ bitmap_xor(value_bitmap, value_bitmap,
+ array_info->invert_mask, array_size);
+
+- gpio_chip_set_multiple(array_info->chip, array_info->set_mask,
+- value_bitmap);
++ gpio_chip_set_multiple(gc, array_info->set_mask, value_bitmap);
+
+ i = find_first_zero_bit(array_info->set_mask, array_size);
+ if (i == array_size)
+@@ -4684,9 +4700,10 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ {
+ struct gpio_desc *desc;
+ struct gpio_descs *descs;
++ struct gpio_device *gdev;
+ struct gpio_array *array_info = NULL;
+- struct gpio_chip *gc;
+ int count, bitmap_size;
++ unsigned long dflags;
+ size_t descs_size;
+
+ count = gpiod_count(dev, con_id);
+@@ -4707,7 +4724,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+
+ descs->desc[descs->ndescs] = desc;
+
+- gc = gpiod_to_chip(desc);
++ gdev = gpiod_to_gpio_device(desc);
+ /*
+ * If pin hardware number of array member 0 is also 0, select
+ * its chip as a candidate for fast bitmap processing path.
+@@ -4715,8 +4732,8 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ if (descs->ndescs == 0 && gpio_chip_hwgpio(desc) == 0) {
+ struct gpio_descs *array;
+
+- bitmap_size = BITS_TO_LONGS(gc->ngpio > count ?
+- gc->ngpio : count);
++ bitmap_size = BITS_TO_LONGS(gdev->ngpio > count ?
++ gdev->ngpio : count);
+
+ array = krealloc(descs, descs_size +
+ struct_size(array_info, invert_mask, 3 * bitmap_size),
+@@ -4736,7 +4753,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+
+ array_info->desc = descs->desc;
+ array_info->size = count;
+- array_info->chip = gc;
++ array_info->gdev = gdev;
+ bitmap_set(array_info->get_mask, descs->ndescs,
+ count - descs->ndescs);
+ bitmap_set(array_info->set_mask, descs->ndescs,
+@@ -4749,7 +4766,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ continue;
+
+ /* Unmark array members which don't belong to the 'fast' chip */
+- if (array_info->chip != gc) {
++ if (array_info->gdev != gdev) {
+ __clear_bit(descs->ndescs, array_info->get_mask);
+ __clear_bit(descs->ndescs, array_info->set_mask);
+ }
+@@ -4772,9 +4789,10 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ array_info->set_mask);
+ }
+ } else {
++ dflags = READ_ONCE(desc->flags);
+ /* Exclude open drain or open source from fast output */
+- if (gpiochip_line_is_open_drain(gc, descs->ndescs) ||
+- gpiochip_line_is_open_source(gc, descs->ndescs))
++ if (test_bit(FLAG_OPEN_DRAIN, &dflags) ||
++ test_bit(FLAG_OPEN_SOURCE, &dflags))
+ __clear_bit(descs->ndescs,
+ array_info->set_mask);
+ /* Identify 'fast' pins which require invertion */
+@@ -4786,7 +4804,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ if (array_info)
+ dev_dbg(dev,
+ "GPIO array info: chip=%s, size=%d, get_mask=%lx, set_mask=%lx, invert_mask=%lx\n",
+- array_info->chip->label, array_info->size,
++ array_info->gdev->label, array_info->size,
+ *array_info->get_mask, *array_info->set_mask,
+ *array_info->invert_mask);
+ return descs;
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 067197d61d57e4..87ce3753500e4b 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -110,7 +110,7 @@ extern const char *const gpio_suffixes[];
+ *
+ * @desc: Array of pointers to the GPIO descriptors
+ * @size: Number of elements in desc
+- * @chip: Parent GPIO chip
++ * @gdev: Parent GPIO device
+ * @get_mask: Get mask used in fastpath
+ * @set_mask: Set mask used in fastpath
+ * @invert_mask: Invert mask used in fastpath
+@@ -122,7 +122,7 @@ extern const char *const gpio_suffixes[];
+ struct gpio_array {
+ struct gpio_desc **desc;
+ unsigned int size;
+- struct gpio_chip *chip;
++ struct gpio_device *gdev;
+ unsigned long *get_mask;
+ unsigned long *set_mask;
+ unsigned long invert_mask[];
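+
+The two gpiolib hunks above move the fast-bitmap bookkeeping in struct
+gpio_array from the parent gpio_chip to the longer-lived gpio_device, and read
+descriptor flags via READ_ONCE() rather than through chip helpers. The array
+API itself is unchanged for consumers. A minimal consumer sketch, assuming a
+hypothetical device with an 8-line "led" con_id (the gpiod_* calls are the
+upstream consumer API; the function and con_id here are illustrative):
+
+    #include <linux/bitmap.h>
+    #include <linux/gpio/consumer.h>
+
+    static int demo_set_leds(struct device *dev)
+    {
+            struct gpio_descs *leds;
+            DECLARE_BITMAP(values, 8);
+            int ret;
+
+            leds = gpiod_get_array(dev, "led", GPIOD_OUT_LOW);
+            if (IS_ERR(leds))
+                    return PTR_ERR(leds);
+
+            bitmap_zero(values, 8);
+            __set_bit(0, values);   /* drive line 0 high, the rest low */
+
+            /* leds->info is the struct gpio_array built in the hunk above;
+             * when every line sits on one chip it enables the set_mask
+             * fast path being patched here. */
+            ret = gpiod_set_array_value(leds->ndescs, leds->desc,
+                                        leds->info, values);
+
+            gpiod_put_array(leds);
+            return ret;
+    }
+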
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index c27b4c36a7c0f5..32afcf9485245e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -119,9 +119,10 @@
+ * - 3.58.0 - Add GFX12 DCC support
+ * - 3.59.0 - Cleared VRAM
+ * - 3.60.0 - Add AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE (Vulkan requirement)
++ * - 3.61.0 - Contains fix for RV/PCO compute queues
+ */
+ #define KMS_DRIVER_MAJOR 3
+-#define KMS_DRIVER_MINOR 60
++#define KMS_DRIVER_MINOR 61
+ #define KMS_DRIVER_PATCHLEVEL 0
+
+ /*
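+
+The minor-version bump above is how userspace can discover the RV/PCO compute
+fix at runtime. A hedged userspace sketch using libdrm's drmGetVersion() (the
+render-node path and the >= 3.61 gate are illustrative assumptions, not an
+amdgpu-documented recipe):
+
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <unistd.h>
+    #include <xf86drm.h>            /* libdrm; build with -ldrm */
+
+    int main(void)
+    {
+            int fd = open("/dev/dri/renderD128", O_RDWR); /* example node */
+            if (fd < 0)
+                    return 1;
+
+            drmVersionPtr v = drmGetVersion(fd);
+            if (v) {
+                    int fixed = v->version_major > 3 ||
+                                (v->version_major == 3 &&
+                                 v->version_minor >= 61);
+                    printf("amdgpu %d.%d.%d, RV/PCO compute fix: %s\n",
+                           v->version_major, v->version_minor,
+                           v->version_patchlevel, fixed ? "yes" : "no");
+                    drmFreeVersion(v);
+            }
+            close(fd);
+            return 0;
+    }
+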
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index e2501c98e107d3..05d1ae2ef84b4e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -7415,6 +7415,34 @@ static void gfx_v9_0_ring_emit_cleaner_shader(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, 0); /* RESERVED field, programmed to zero */
+ }
+
++static void gfx_v9_0_ring_begin_use_compute(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++
++ amdgpu_gfx_enforce_isolation_ring_begin_use(ring);
++
++ /* Raven and PCO APUs seem to have stability issues
++ * with compute and gfxoff and gfx pg. Disable gfx pg during
++ * submission and allow again afterwards.
++ */
++ if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 1, 0))
++ gfx_v9_0_set_powergating_state(adev, AMD_PG_STATE_UNGATE);
++}
++
++static void gfx_v9_0_ring_end_use_compute(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++
++ /* Raven and PCO APUs seem to have stability issues
++ * with compute and gfxoff and gfx pg. Disable gfx pg during
++ * submission and allow again afterwards.
++ */
++ if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 1, 0))
++ gfx_v9_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
++
++ amdgpu_gfx_enforce_isolation_ring_end_use(ring);
++}
++
+ static const struct amd_ip_funcs gfx_v9_0_ip_funcs = {
+ .name = "gfx_v9_0",
+ .early_init = gfx_v9_0_early_init,
+@@ -7591,8 +7619,8 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
+ .emit_wave_limit = gfx_v9_0_emit_wave_limit,
+ .reset = gfx_v9_0_reset_kcq,
+ .emit_cleaner_shader = gfx_v9_0_ring_emit_cleaner_shader,
+- .begin_use = amdgpu_gfx_enforce_isolation_ring_begin_use,
+- .end_use = amdgpu_gfx_enforce_isolation_ring_end_use,
++ .begin_use = gfx_v9_0_ring_begin_use_compute,
++ .end_use = gfx_v9_0_ring_end_use_compute,
+ };
+
+ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_kiq = {
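+
+The compute-ring hunk above wraps the existing isolation hooks so that
+Raven/PCO (GC 9.1.0) compute submissions never run with gfx powergating
+enabled. Stripped of the amdgpu specifics, the bracketing idea looks like the
+following self-contained model (all names and main() are illustrative
+stand-ins, not driver API):
+
+    #include <stdbool.h>
+    #include <stdio.h>
+
+    enum pg_state { PG_STATE_GATE, PG_STATE_UNGATE };
+
+    struct ring { bool is_raven_or_pco; };
+
+    static void set_powergating(struct ring *r, enum pg_state s)
+    {
+            (void)r;
+            printf("gfx powergating -> %s\n",
+                   s == PG_STATE_UNGATE ? "ungated" : "gated");
+    }
+
+    static void ring_begin_use(struct ring *r)
+    {
+            /* isolation enforcement (kept from the old hook) runs first */
+            if (r->is_raven_or_pco)
+                    set_powergating(r, PG_STATE_UNGATE);
+    }
+
+    static void ring_end_use(struct ring *r)
+    {
+            if (r->is_raven_or_pco)
+                    set_powergating(r, PG_STATE_GATE);
+            /* ... then isolation end, mirroring the begin order */
+    }
+
+    int main(void)
+    {
+            struct ring r = { .is_raven_or_pco = true };
+
+            ring_begin_use(&r);
+            /* compute submission runs here with powergating disabled */
+            ring_end_use(&r);
+            return 0;
+    }
+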
+diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+index 02f7ba8c93cd45..7062f12b5b7511 100644
+--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
++++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+@@ -4117,7 +4117,8 @@ static const uint32_t cwsr_trap_gfx12_hex[] = {
+ 0x0000ffff, 0x8bfe7e7e,
+ 0x8bea6a6a, 0xb97af804,
+ 0xbe804ec2, 0xbf94fffe,
+- 0xbe804a6c, 0xbfb10000,
++ 0xbe804a6c, 0xbe804ec2,
++ 0xbf94fffe, 0xbfb10000,
+ 0xbf9f0000, 0xbf9f0000,
+ 0xbf9f0000, 0xbf9f0000,
+ 0xbf9f0000, 0x00000000,
+diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
+index 44772eec9ef4df..96fbb16ceb216d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
++++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
+@@ -34,41 +34,24 @@
+ * cpp -DASIC_FAMILY=CHIP_PLUM_BONITO cwsr_trap_handler_gfx10.asm -P -o gfx11.sp3
+ * sp3 gfx11.sp3 -hex gfx11.hex
+ *
+- * gfx12:
+- * cpp -DASIC_FAMILY=CHIP_GFX12 cwsr_trap_handler_gfx10.asm -P -o gfx12.sp3
+- * sp3 gfx12.sp3 -hex gfx12.hex
+ */
+
+ #define CHIP_NAVI10 26
+ #define CHIP_SIENNA_CICHLID 30
+ #define CHIP_PLUM_BONITO 36
+-#define CHIP_GFX12 37
+
+ #define NO_SQC_STORE (ASIC_FAMILY >= CHIP_SIENNA_CICHLID)
+ #define HAVE_XNACK (ASIC_FAMILY < CHIP_SIENNA_CICHLID)
+ #define HAVE_SENDMSG_RTN (ASIC_FAMILY >= CHIP_PLUM_BONITO)
+ #define HAVE_BUFFER_LDS_LOAD (ASIC_FAMILY < CHIP_PLUM_BONITO)
+-#define SW_SA_TRAP (ASIC_FAMILY >= CHIP_PLUM_BONITO && ASIC_FAMILY < CHIP_GFX12)
++#define SW_SA_TRAP (ASIC_FAMILY == CHIP_PLUM_BONITO)
+ #define SAVE_AFTER_XNACK_ERROR (HAVE_XNACK && !NO_SQC_STORE) // workaround for TCP store failure after XNACK error when ALLOW_REPLAY=0, for debugger
+ #define SINGLE_STEP_MISSED_WORKAROUND 1 //workaround for lost MODE.DEBUG_EN exception when SAVECTX raised
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ #define S_COHERENCE glc:1
+ #define V_COHERENCE slc:1 glc:1
+ #define S_WAITCNT_0 s_waitcnt 0
+-#else
+-#define S_COHERENCE scope:SCOPE_SYS
+-#define V_COHERENCE scope:SCOPE_SYS
+-#define S_WAITCNT_0 s_wait_idle
+-
+-#define HW_REG_SHADER_FLAT_SCRATCH_LO HW_REG_WAVE_SCRATCH_BASE_LO
+-#define HW_REG_SHADER_FLAT_SCRATCH_HI HW_REG_WAVE_SCRATCH_BASE_HI
+-#define HW_REG_GPR_ALLOC HW_REG_WAVE_GPR_ALLOC
+-#define HW_REG_LDS_ALLOC HW_REG_WAVE_LDS_ALLOC
+-#define HW_REG_MODE HW_REG_WAVE_MODE
+-#endif
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ var SQ_WAVE_STATUS_SPI_PRIO_MASK = 0x00000006
+ var SQ_WAVE_STATUS_HALT_MASK = 0x2000
+ var SQ_WAVE_STATUS_ECC_ERR_MASK = 0x20000
+@@ -81,21 +64,6 @@ var S_STATUS_ALWAYS_CLEAR_MASK = SQ_WAVE_STATUS_SPI_PRIO_MASK|SQ_WAVE_STATUS_E
+ var S_STATUS_HALT_MASK = SQ_WAVE_STATUS_HALT_MASK
+ var S_SAVE_PC_HI_TRAP_ID_MASK = 0x00FF0000
+ var S_SAVE_PC_HI_HT_MASK = 0x01000000
+-#else
+-var SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK = 0x4
+-var SQ_WAVE_STATE_PRIV_SCC_SHIFT = 9
+-var SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK = 0xC00
+-var SQ_WAVE_STATE_PRIV_HALT_MASK = 0x4000
+-var SQ_WAVE_STATE_PRIV_POISON_ERR_MASK = 0x8000
+-var SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT = 15
+-var SQ_WAVE_STATUS_WAVE64_SHIFT = 29
+-var SQ_WAVE_STATUS_WAVE64_SIZE = 1
+-var SQ_WAVE_LDS_ALLOC_GRANULARITY = 9
+-var S_STATUS_HWREG = HW_REG_WAVE_STATE_PRIV
+-var S_STATUS_ALWAYS_CLEAR_MASK = SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK|SQ_WAVE_STATE_PRIV_POISON_ERR_MASK
+-var S_STATUS_HALT_MASK = SQ_WAVE_STATE_PRIV_HALT_MASK
+-var S_SAVE_PC_HI_TRAP_ID_MASK = 0xF0000000
+-#endif
+
+ var SQ_WAVE_STATUS_NO_VGPRS_SHIFT = 24
+ var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT = 12
+@@ -110,7 +78,6 @@ var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT = 8
+ var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT = 12
+ #endif
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ var SQ_WAVE_TRAPSTS_SAVECTX_MASK = 0x400
+ var SQ_WAVE_TRAPSTS_EXCP_MASK = 0x1FF
+ var SQ_WAVE_TRAPSTS_SAVECTX_SHIFT = 10
+@@ -161,39 +128,6 @@ var S_TRAPSTS_RESTORE_PART_3_SIZE = 32 - S_TRAPSTS_RESTORE_PART_3_SHIFT
+ var S_TRAPSTS_HWREG = HW_REG_TRAPSTS
+ var S_TRAPSTS_SAVE_CONTEXT_MASK = SQ_WAVE_TRAPSTS_SAVECTX_MASK
+ var S_TRAPSTS_SAVE_CONTEXT_SHIFT = SQ_WAVE_TRAPSTS_SAVECTX_SHIFT
+-#else
+-var SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK = 0xF
+-var SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK = 0x10
+-var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT = 5
+-var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK = 0x20
+-var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK = 0x40
+-var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT = 6
+-var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK = 0x80
+-var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT = 7
+-var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK = 0x100
+-var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT = 8
+-var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK = 0x200
+-var SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK = 0x800
+-var SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK = 0x80
+-var SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK = 0x200
+-
+-var S_TRAPSTS_HWREG = HW_REG_WAVE_EXCP_FLAG_PRIV
+-var S_TRAPSTS_SAVE_CONTEXT_MASK = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK
+-var S_TRAPSTS_SAVE_CONTEXT_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT
+-var S_TRAPSTS_NON_MASKABLE_EXCP_MASK = SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK
+-var S_TRAPSTS_RESTORE_PART_1_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT
+-var S_TRAPSTS_RESTORE_PART_2_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
+-var S_TRAPSTS_RESTORE_PART_2_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT - SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
+-var S_TRAPSTS_RESTORE_PART_3_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT
+-var S_TRAPSTS_RESTORE_PART_3_SIZE = 32 - S_TRAPSTS_RESTORE_PART_3_SHIFT
+-var BARRIER_STATE_SIGNAL_OFFSET = 16
+-var BARRIER_STATE_VALID_OFFSET = 0
+-#endif
+
+ // bits [31:24] unused by SPI debug data
+ var TTMP11_SAVE_REPLAY_W64H_SHIFT = 31
+@@ -305,11 +239,7 @@ L_TRAP_NO_BARRIER:
+
+ L_HALTED:
+ // Host trap may occur while wave is halted.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_TRAP_ID_MASK
+-#else
+- s_and_b32 ttmp2, s_save_trapsts, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
+-#endif
+ s_cbranch_scc1 L_FETCH_2ND_TRAP
+
+ L_CHECK_SAVE:
+@@ -336,7 +266,6 @@ L_NOT_HALTED:
+ // Check for maskable exceptions in trapsts.excp and trapsts.excp_hi.
+ // Maskable exceptions only cause the wave to enter the trap handler if
+ // their respective bit in mode.excp_en is set.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_and_b32 ttmp2, s_save_trapsts, SQ_WAVE_TRAPSTS_EXCP_MASK|SQ_WAVE_TRAPSTS_EXCP_HI_MASK
+ s_cbranch_scc0 L_CHECK_TRAP_ID
+
+@@ -349,17 +278,6 @@ L_NOT_ADDR_WATCH:
+ s_lshl_b32 ttmp2, ttmp2, SQ_WAVE_MODE_EXCP_EN_SHIFT
+ s_and_b32 ttmp2, ttmp2, ttmp3
+ s_cbranch_scc1 L_FETCH_2ND_TRAP
+-#else
+- s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
+- s_and_b32 ttmp3, s_save_trapsts, SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK
+- s_cbranch_scc0 L_NOT_ADDR_WATCH
+- s_or_b32 ttmp2, ttmp2, SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK
+-
+-L_NOT_ADDR_WATCH:
+- s_getreg_b32 ttmp3, hwreg(HW_REG_WAVE_TRAP_CTRL)
+- s_and_b32 ttmp2, ttmp3, ttmp2
+- s_cbranch_scc1 L_FETCH_2ND_TRAP
+-#endif
+
+ L_CHECK_TRAP_ID:
+ // Check trap_id != 0
+@@ -369,13 +287,8 @@ L_CHECK_TRAP_ID:
+ #if SINGLE_STEP_MISSED_WORKAROUND
+ // Prioritize single step exception over context save.
+ // Second-level trap will halt wave and RFE, re-entering for SAVECTX.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_getreg_b32 ttmp2, hwreg(HW_REG_MODE)
+ s_and_b32 ttmp2, ttmp2, SQ_WAVE_MODE_DEBUG_EN_MASK
+-#else
+- // WAVE_TRAP_CTRL is already in ttmp3.
+- s_and_b32 ttmp3, ttmp3, SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK
+-#endif
+ s_cbranch_scc1 L_FETCH_2ND_TRAP
+ #endif
+
+@@ -425,12 +338,7 @@ L_NO_NEXT_TRAP:
+ s_cbranch_scc1 L_TRAP_CASE
+
+ // Host trap will not cause trap re-entry.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_HT_MASK
+-#else
+- s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
+- s_and_b32 ttmp2, ttmp2, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
+-#endif
+ s_cbranch_scc1 L_EXIT_TRAP
+ s_or_b32 s_save_status, s_save_status, S_STATUS_HALT_MASK
+
+@@ -457,16 +365,7 @@ L_EXIT_TRAP:
+ s_and_b64 exec, exec, exec // Restore STATUS.EXECZ, not writable by s_setreg_b32
+ s_and_b64 vcc, vcc, vcc // Restore STATUS.VCCZ, not writable by s_setreg_b32
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_setreg_b32 hwreg(S_STATUS_HWREG), s_save_status
+-#else
+- // STATE_PRIV.BARRIER_COMPLETE may have changed since we read it.
+- // Only restore fields which the trap handler changes.
+- s_lshr_b32 s_save_status, s_save_status, SQ_WAVE_STATE_PRIV_SCC_SHIFT
+- s_setreg_b32 hwreg(S_STATUS_HWREG, SQ_WAVE_STATE_PRIV_SCC_SHIFT, \
+- SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT - SQ_WAVE_STATE_PRIV_SCC_SHIFT + 1), s_save_status
+-#endif
+-
+ s_rfe_b64 [ttmp0, ttmp1]
+
+ L_SAVE:
+@@ -478,14 +377,6 @@ L_SAVE:
+ s_endpgm
+ L_HAVE_VGPRS:
+ #endif
+-#if ASIC_FAMILY >= CHIP_GFX12
+- s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
+- s_bitcmp1_b32 s_save_tmp, SQ_WAVE_STATUS_NO_VGPRS_SHIFT
+- s_cbranch_scc0 L_HAVE_VGPRS
+- s_endpgm
+-L_HAVE_VGPRS:
+-#endif
+-
+ s_and_b32 s_save_pc_hi, s_save_pc_hi, 0x0000ffff //pc[47:32]
+ s_mov_b32 s_save_tmp, 0
+ s_setreg_b32 hwreg(S_TRAPSTS_HWREG, S_TRAPSTS_SAVE_CONTEXT_SHIFT, 1), s_save_tmp //clear saveCtx bit
+@@ -671,19 +562,6 @@ L_SAVE_HWREG:
+ s_mov_b32 m0, 0x0 //Next lane of v2 to write to
+ #endif
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Ensure no further changes to barrier or LDS state.
+- // STATE_PRIV.BARRIER_COMPLETE may change up to this point.
+- s_barrier_signal -2
+- s_barrier_wait -2
+-
+- // Re-read final state of BARRIER_COMPLETE field for save.
+- s_getreg_b32 s_save_tmp, hwreg(S_STATUS_HWREG)
+- s_and_b32 s_save_tmp, s_save_tmp, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
+- s_andn2_b32 s_save_status, s_save_status, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
+- s_or_b32 s_save_status, s_save_status, s_save_tmp
+-#endif
+-
+ write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+ write_hwreg_to_mem(s_save_pc_lo, s_save_buf_rsrc0, s_save_mem_offset)
+ s_andn2_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
+@@ -707,21 +585,6 @@ L_SAVE_HWREG:
+ s_getreg_b32 s_save_m0, hwreg(HW_REG_SHADER_FLAT_SCRATCH_HI)
+ write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
+- write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+-
+- s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_TRAP_CTRL)
+- write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+-
+- s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
+- write_hwreg_to_mem(s_save_tmp, s_save_buf_rsrc0, s_save_mem_offset)
+-
+- s_get_barrier_state s_save_tmp, -1
+- s_wait_kmcnt (0)
+- write_hwreg_to_mem(s_save_tmp, s_save_buf_rsrc0, s_save_mem_offset)
+-#endif
+-
+ #if NO_SQC_STORE
+ // Write HWREGs with 16 VGPR lanes. TTMPs occupy space after this.
+ s_mov_b32 exec_lo, 0xFFFF
+@@ -814,9 +677,7 @@ L_SAVE_LDS_NORMAL:
+ s_and_b32 s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF //lds_size is zero?
+ s_cbranch_scc0 L_SAVE_LDS_DONE //no lds used? jump to L_SAVE_DONE
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_barrier //LDS is used? wait for other waves in the same TG
+-#endif
+ s_and_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
+ s_cbranch_scc0 L_SAVE_LDS_DONE
+
+@@ -1081,11 +942,6 @@ L_RESTORE:
+ s_mov_b32 s_restore_buf_rsrc2, 0 //NUM_RECORDS initial value = 0 (in bytes)
+ s_mov_b32 s_restore_buf_rsrc3, S_RESTORE_BUF_RSRC_WORD3_MISC
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Save s_restore_spi_init_hi for later use.
+- s_mov_b32 s_restore_spi_init_hi_save, s_restore_spi_init_hi
+-#endif
+-
+ //determine it is wave32 or wave64
+ get_wave_size2(s_restore_size)
+
+@@ -1320,9 +1176,7 @@ L_RESTORE_SGPR:
+ // s_barrier with MODE.DEBUG_EN=1, STATUS.PRIV=1 incorrectly asserts debug exception.
+ // Clear DEBUG_EN before and restore MODE after the barrier.
+ s_setreg_imm32_b32 hwreg(HW_REG_MODE), 0
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_barrier //barrier to ensure the readiness of LDS before access attempts from any other wave in the same TG
+-#endif
+
+ /* restore HW registers */
+ L_RESTORE_HWREG:
+@@ -1334,11 +1188,6 @@ L_RESTORE_HWREG:
+
+ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Restore s_restore_spi_init_hi before the saved value gets clobbered.
+- s_mov_b32 s_restore_spi_init_hi, s_restore_spi_init_hi_save
+-#endif
+-
+ read_hwreg_from_mem(s_restore_m0, s_restore_buf_rsrc0, s_restore_mem_offset)
+ read_hwreg_from_mem(s_restore_pc_lo, s_restore_buf_rsrc0, s_restore_mem_offset)
+ read_hwreg_from_mem(s_restore_pc_hi, s_restore_buf_rsrc0, s_restore_mem_offset)
+@@ -1358,44 +1207,6 @@ L_RESTORE_HWREG:
+
+ s_setreg_b32 hwreg(HW_REG_SHADER_FLAT_SCRATCH_HI), s_restore_flat_scratch
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
+- S_WAITCNT_0
+- s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_USER), s_restore_tmp
+-
+- read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
+- S_WAITCNT_0
+- s_setreg_b32 hwreg(HW_REG_WAVE_TRAP_CTRL), s_restore_tmp
+-
+- // Only the first wave needs to restore the workgroup barrier.
+- s_and_b32 s_restore_tmp, s_restore_spi_init_hi, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK
+- s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
+-
+- // Skip over WAVE_STATUS, since there is no state to restore from it
+- s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 4
+-
+- read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
+- S_WAITCNT_0
+-
+- s_bitcmp1_b32 s_restore_tmp, BARRIER_STATE_VALID_OFFSET
+- s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
+-
+- // extract the saved signal count from s_restore_tmp
+- s_lshr_b32 s_restore_tmp, s_restore_tmp, BARRIER_STATE_SIGNAL_OFFSET
+-
+- // We need to call s_barrier_signal repeatedly to restore the signal
+- // count of the work group barrier. The member count is already
+- // initialized with the number of waves in the work group.
+-L_BARRIER_RESTORE_LOOP:
+- s_and_b32 s_restore_tmp, s_restore_tmp, s_restore_tmp
+- s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
+- s_barrier_signal -1
+- s_add_i32 s_restore_tmp, s_restore_tmp, -1
+- s_branch L_BARRIER_RESTORE_LOOP
+-
+-L_SKIP_BARRIER_RESTORE:
+-#endif
+-
+ s_mov_b32 m0, s_restore_m0
+ s_mov_b32 exec_lo, s_restore_exec_lo
+ s_mov_b32 exec_hi, s_restore_exec_hi
+@@ -1453,13 +1264,6 @@ L_RETURN_WITHOUT_PRIV:
+
+ s_setreg_b32 hwreg(S_STATUS_HWREG), s_restore_status // SCC is included, which is changed by previous salu
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Make barrier and LDS state visible to all waves in the group.
+- // STATE_PRIV.BARRIER_COMPLETE may change after this point.
+- s_barrier_signal -2
+- s_barrier_wait -2
+-#endif
+-
+ s_rfe_b64 s_restore_pc_lo //Return to the main shader program and resume execution
+
+ L_END_PGM:
+@@ -1598,11 +1402,7 @@ function get_hwreg_size_bytes
+ end
+
+ function get_wave_size2(s_reg)
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_getreg_b32 s_reg, hwreg(HW_REG_IB_STS2,SQ_WAVE_IB_STS2_WAVE64_SHIFT,SQ_WAVE_IB_STS2_WAVE64_SIZE)
+-#else
+- s_getreg_b32 s_reg, hwreg(HW_REG_WAVE_STATUS,SQ_WAVE_STATUS_WAVE64_SHIFT,SQ_WAVE_STATUS_WAVE64_SIZE)
+-#endif
+ s_lshl_b32 s_reg, s_reg, S_WAVE_SIZE
+ end
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx12.asm b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx12.asm
+new file mode 100644
+index 00000000000000..7b9d36e5fa4372
+--- /dev/null
++++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx12.asm
+@@ -0,0 +1,1130 @@
++/*
++ * Copyright 2018 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++
++/* To compile this assembly code:
++ *
++ * gfx12:
++ * cpp -DASIC_FAMILY=CHIP_GFX12 cwsr_trap_handler_gfx12.asm -P -o gfx12.sp3
++ * sp3 gfx12.sp3 -hex gfx12.hex
++ */
++
++#define CHIP_GFX12 37
++
++#define SINGLE_STEP_MISSED_WORKAROUND 1 //workaround for lost TRAP_AFTER_INST exception when SAVECTX raised
++
++var SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK = 0x4
++var SQ_WAVE_STATE_PRIV_SCC_SHIFT = 9
++var SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK = 0xC00
++var SQ_WAVE_STATE_PRIV_HALT_MASK = 0x4000
++var SQ_WAVE_STATE_PRIV_POISON_ERR_MASK = 0x8000
++var SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT = 15
++var SQ_WAVE_STATUS_WAVE64_SHIFT = 29
++var SQ_WAVE_STATUS_WAVE64_SIZE = 1
++var SQ_WAVE_STATUS_NO_VGPRS_SHIFT = 24
++var SQ_WAVE_STATE_PRIV_ALWAYS_CLEAR_MASK = SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK|SQ_WAVE_STATE_PRIV_POISON_ERR_MASK
++var S_SAVE_PC_HI_TRAP_ID_MASK = 0xF0000000
++
++var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT = 12
++var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE = 9
++var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE = 8
++var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT = 12
++var SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT = 24
++var SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE = 4
++var SQ_WAVE_LDS_ALLOC_GRANULARITY = 9
++
++var SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK = 0xF
++var SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK = 0x10
++var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT = 5
++var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK = 0x20
++var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK = 0x40
++var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT = 6
++var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK = 0x80
++var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT = 7
++var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK = 0x100
++var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT = 8
++var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK = 0x200
++var SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK = 0x800
++var SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK = 0x80
++var SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK = 0x200
++
++var SQ_WAVE_EXCP_FLAG_PRIV_NON_MASKABLE_EXCP_MASK= SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_1_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT - SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SIZE = 32 - SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT
++var BARRIER_STATE_SIGNAL_OFFSET = 16
++var BARRIER_STATE_VALID_OFFSET = 0
++
++var TTMP11_DEBUG_TRAP_ENABLED_SHIFT = 23
++var TTMP11_DEBUG_TRAP_ENABLED_MASK = 0x800000
++
++// SQ_SEL_X/Y/Z/W, BUF_NUM_FORMAT_FLOAT, (0 for MUBUF stride[17:14]
++// when ADD_TID_ENABLE and BUF_DATA_FORMAT_32 for MTBUF), ADD_TID_ENABLE
++var S_SAVE_BUF_RSRC_WORD1_STRIDE = 0x00040000
++var S_SAVE_BUF_RSRC_WORD3_MISC = 0x10807FAC
++var S_SAVE_SPI_INIT_FIRST_WAVE_MASK = 0x04000000
++var S_SAVE_SPI_INIT_FIRST_WAVE_SHIFT = 26
++
++var S_SAVE_PC_HI_FIRST_WAVE_MASK = 0x80000000
++var S_SAVE_PC_HI_FIRST_WAVE_SHIFT = 31
++
++var s_sgpr_save_num = 108
++
++var s_save_spi_init_lo = exec_lo
++var s_save_spi_init_hi = exec_hi
++var s_save_pc_lo = ttmp0
++var s_save_pc_hi = ttmp1
++var s_save_exec_lo = ttmp2
++var s_save_exec_hi = ttmp3
++var s_save_state_priv = ttmp12
++var s_save_excp_flag_priv = ttmp15
++var s_save_xnack_mask = s_save_excp_flag_priv
++var s_wave_size = ttmp7
++var s_save_buf_rsrc0 = ttmp8
++var s_save_buf_rsrc1 = ttmp9
++var s_save_buf_rsrc2 = ttmp10
++var s_save_buf_rsrc3 = ttmp11
++var s_save_mem_offset = ttmp4
++var s_save_alloc_size = s_save_excp_flag_priv
++var s_save_tmp = ttmp14
++var s_save_m0 = ttmp5
++var s_save_ttmps_lo = s_save_tmp
++var s_save_ttmps_hi = s_save_excp_flag_priv
++
++var S_RESTORE_BUF_RSRC_WORD1_STRIDE = S_SAVE_BUF_RSRC_WORD1_STRIDE
++var S_RESTORE_BUF_RSRC_WORD3_MISC = S_SAVE_BUF_RSRC_WORD3_MISC
++
++var S_RESTORE_SPI_INIT_FIRST_WAVE_MASK = 0x04000000
++var S_RESTORE_SPI_INIT_FIRST_WAVE_SHIFT = 26
++var S_WAVE_SIZE = 25
++
++var s_restore_spi_init_lo = exec_lo
++var s_restore_spi_init_hi = exec_hi
++var s_restore_mem_offset = ttmp12
++var s_restore_alloc_size = ttmp3
++var s_restore_tmp = ttmp2
++var s_restore_mem_offset_save = s_restore_tmp
++var s_restore_m0 = s_restore_alloc_size
++var s_restore_mode = ttmp7
++var s_restore_flat_scratch = s_restore_tmp
++var s_restore_pc_lo = ttmp0
++var s_restore_pc_hi = ttmp1
++var s_restore_exec_lo = ttmp4
++var s_restore_exec_hi = ttmp5
++var s_restore_state_priv = ttmp14
++var s_restore_excp_flag_priv = ttmp15
++var s_restore_xnack_mask = ttmp13
++var s_restore_buf_rsrc0 = ttmp8
++var s_restore_buf_rsrc1 = ttmp9
++var s_restore_buf_rsrc2 = ttmp10
++var s_restore_buf_rsrc3 = ttmp11
++var s_restore_size = ttmp6
++var s_restore_ttmps_lo = s_restore_tmp
++var s_restore_ttmps_hi = s_restore_alloc_size
++var s_restore_spi_init_hi_save = s_restore_exec_hi
++
++shader main
++ asic(DEFAULT)
++ type(CS)
++ wave_size(32)
++
++ s_branch L_SKIP_RESTORE //NOT restore. might be a regular trap or save
++
++L_JUMP_TO_RESTORE:
++ s_branch L_RESTORE
++
++L_SKIP_RESTORE:
++ s_getreg_b32 s_save_state_priv, hwreg(HW_REG_WAVE_STATE_PRIV) //save STATUS since we will change SCC
++
++ // Clear SPI_PRIO: do not save with elevated priority.
++ // Clear ECC_ERR: prevents SQC store and triggers FATAL_HALT if setreg'd.
++ s_andn2_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_ALWAYS_CLEAR_MASK
++
++ s_getreg_b32 s_save_excp_flag_priv, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++
++ s_and_b32 ttmp2, s_save_state_priv, SQ_WAVE_STATE_PRIV_HALT_MASK
++ s_cbranch_scc0 L_NOT_HALTED
++
++L_HALTED:
++ // Host trap may occur while wave is halted.
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++L_CHECK_SAVE:
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK
++ s_cbranch_scc1 L_SAVE
++
++ // Wave is halted but neither host trap nor SAVECTX is raised.
++ // Caused by instruction fetch memory violation.
++ // Spin wait until context saved to prevent interrupt storm.
++ s_sleep 0x10
++ s_getreg_b32 s_save_excp_flag_priv, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++ s_branch L_CHECK_SAVE
++
++L_NOT_HALTED:
++ // Let second-level handle non-SAVECTX exception or trap.
++ // Any concurrent SAVECTX will be handled upon re-entry once halted.
++
++ // Check non-maskable exceptions. memory_violation, illegal_instruction
++ // and xnack_error exceptions always cause the wave to enter the trap
++ // handler.
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_NON_MASKABLE_EXCP_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++ // Check for maskable exceptions in trapsts.excp and trapsts.excp_hi.
++ // Maskable exceptions only cause the wave to enter the trap handler if
++ // their respective bit in mode.excp_en is set.
++ s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
++ s_and_b32 ttmp3, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK
++ s_cbranch_scc0 L_NOT_ADDR_WATCH
++ s_or_b32 ttmp2, ttmp2, SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK
++
++L_NOT_ADDR_WATCH:
++ s_getreg_b32 ttmp3, hwreg(HW_REG_WAVE_TRAP_CTRL)
++ s_and_b32 ttmp2, ttmp3, ttmp2
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++L_CHECK_TRAP_ID:
++ // Check trap_id != 0
++ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_TRAP_ID_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++#if SINGLE_STEP_MISSED_WORKAROUND
++ // Prioritize single step exception over context save.
++ // Second-level trap will halt wave and RFE, re-entering for SAVECTX.
++ // WAVE_TRAP_CTRL is already in ttmp3.
++ s_and_b32 ttmp3, ttmp3, SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++#endif
++
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK
++ s_cbranch_scc1 L_SAVE
++
++L_FETCH_2ND_TRAP:
++ // Read second-level TBA/TMA from first-level TMA and jump if available.
++ // ttmp[2:5] and ttmp12 can be used (others hold SPI-initialized debug data)
++ // ttmp12 holds SQ_WAVE_STATUS
++ s_sendmsg_rtn_b64 [ttmp14, ttmp15], sendmsg(MSG_RTN_GET_TMA)
++ s_wait_idle
++ s_lshl_b64 [ttmp14, ttmp15], [ttmp14, ttmp15], 0x8
++
++ s_bitcmp1_b32 ttmp15, 0xF
++ s_cbranch_scc0 L_NO_SIGN_EXTEND_TMA
++ s_or_b32 ttmp15, ttmp15, 0xFFFF0000
++L_NO_SIGN_EXTEND_TMA:
++
++ s_load_dword ttmp2, [ttmp14, ttmp15], 0x10 scope:SCOPE_SYS // debug trap enabled flag
++ s_wait_idle
++ s_lshl_b32 ttmp2, ttmp2, TTMP11_DEBUG_TRAP_ENABLED_SHIFT
++ s_andn2_b32 ttmp11, ttmp11, TTMP11_DEBUG_TRAP_ENABLED_MASK
++ s_or_b32 ttmp11, ttmp11, ttmp2
++
++ s_load_dwordx2 [ttmp2, ttmp3], [ttmp14, ttmp15], 0x0 scope:SCOPE_SYS // second-level TBA
++ s_wait_idle
++ s_load_dwordx2 [ttmp14, ttmp15], [ttmp14, ttmp15], 0x8 scope:SCOPE_SYS // second-level TMA
++ s_wait_idle
++
++ s_and_b64 [ttmp2, ttmp3], [ttmp2, ttmp3], [ttmp2, ttmp3]
++ s_cbranch_scc0 L_NO_NEXT_TRAP // second-level trap handler not been set
++ s_setpc_b64 [ttmp2, ttmp3] // jump to second-level trap handler
++
++L_NO_NEXT_TRAP:
++ // If not caused by trap then halt wave to prevent re-entry.
++ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_TRAP_ID_MASK
++ s_cbranch_scc1 L_TRAP_CASE
++
++ // Host trap will not cause trap re-entry.
++ s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++ s_and_b32 ttmp2, ttmp2, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
++ s_cbranch_scc1 L_EXIT_TRAP
++ s_or_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_HALT_MASK
++
++ // If the PC points to S_ENDPGM then context save will fail if STATE_PRIV.HALT is set.
++ // Rewind the PC to prevent this from occurring.
++ s_sub_u32 ttmp0, ttmp0, 0x8
++ s_subb_u32 ttmp1, ttmp1, 0x0
++
++ s_branch L_EXIT_TRAP
++
++L_TRAP_CASE:
++ // Advance past trap instruction to prevent re-entry.
++ s_add_u32 ttmp0, ttmp0, 0x4
++ s_addc_u32 ttmp1, ttmp1, 0x0
++
++L_EXIT_TRAP:
++ s_and_b32 ttmp1, ttmp1, 0xFFFF
++
++ // Restore SQ_WAVE_STATUS.
++ s_and_b64 exec, exec, exec // Restore STATUS.EXECZ, not writable by s_setreg_b32
++ s_and_b64 vcc, vcc, vcc // Restore STATUS.VCCZ, not writable by s_setreg_b32
++
++ // STATE_PRIV.BARRIER_COMPLETE may have changed since we read it.
++ // Only restore fields which the trap handler changes.
++ s_lshr_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_SCC_SHIFT
++ s_setreg_b32 hwreg(HW_REG_WAVE_STATE_PRIV, SQ_WAVE_STATE_PRIV_SCC_SHIFT, \
++ SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT - SQ_WAVE_STATE_PRIV_SCC_SHIFT + 1), s_save_state_priv
++
++ s_rfe_b64 [ttmp0, ttmp1]
++
++L_SAVE:
++ // If VGPRs have been deallocated then terminate the wavefront.
++ // It has no remaining program to run and cannot save without VGPRs.
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
++ s_bitcmp1_b32 s_save_tmp, SQ_WAVE_STATUS_NO_VGPRS_SHIFT
++ s_cbranch_scc0 L_HAVE_VGPRS
++ s_endpgm
++L_HAVE_VGPRS:
++
++ s_and_b32 s_save_pc_hi, s_save_pc_hi, 0x0000ffff //pc[47:32]
++ s_mov_b32 s_save_tmp, 0
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT, 1), s_save_tmp //clear saveCtx bit
++
++ /* inform SPI the readiness and wait for SPI's go signal */
++ s_mov_b32 s_save_exec_lo, exec_lo //save EXEC and use EXEC for the go signal from SPI
++ s_mov_b32 s_save_exec_hi, exec_hi
++ s_mov_b64 exec, 0x0 //clear EXEC to get ready to receive
++
++ s_sendmsg_rtn_b64 [exec_lo, exec_hi], sendmsg(MSG_RTN_SAVE_WAVE)
++ s_wait_idle
++
++ // Save first_wave flag so we can clear high bits of save address.
++ s_and_b32 s_save_tmp, s_save_spi_init_hi, S_SAVE_SPI_INIT_FIRST_WAVE_MASK
++ s_lshl_b32 s_save_tmp, s_save_tmp, (S_SAVE_PC_HI_FIRST_WAVE_SHIFT - S_SAVE_SPI_INIT_FIRST_WAVE_SHIFT)
++ s_or_b32 s_save_pc_hi, s_save_pc_hi, s_save_tmp
++
++ // Trap temporaries must be saved via VGPR but all VGPRs are in use.
++ // There is no ttmp space to hold the resource constant for VGPR save.
++ // Save v0 by itself since it requires only two SGPRs.
++ s_mov_b32 s_save_ttmps_lo, exec_lo
++ s_and_b32 s_save_ttmps_hi, exec_hi, 0xFFFF
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++ global_store_dword_addtid v0, [s_save_ttmps_lo, s_save_ttmps_hi] scope:SCOPE_SYS
++ v_mov_b32 v0, 0x0
++ s_mov_b32 exec_lo, s_save_ttmps_lo
++ s_mov_b32 exec_hi, s_save_ttmps_hi
++
++ // Save trap temporaries 4-11, 13 initialized by SPI debug dispatch logic
++ // ttmp SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)+0x40
++ get_wave_size2(s_save_ttmps_hi)
++ get_vgpr_size_bytes(s_save_ttmps_lo, s_save_ttmps_hi)
++ get_svgpr_size_bytes(s_save_ttmps_hi)
++ s_add_u32 s_save_ttmps_lo, s_save_ttmps_lo, s_save_ttmps_hi
++ s_and_b32 s_save_ttmps_hi, s_save_spi_init_hi, 0xFFFF
++ s_add_u32 s_save_ttmps_lo, s_save_ttmps_lo, get_sgpr_size_bytes()
++ s_add_u32 s_save_ttmps_lo, s_save_ttmps_lo, s_save_spi_init_lo
++ s_addc_u32 s_save_ttmps_hi, s_save_ttmps_hi, 0x0
++
++ v_writelane_b32 v0, ttmp4, 0x4
++ v_writelane_b32 v0, ttmp5, 0x5
++ v_writelane_b32 v0, ttmp6, 0x6
++ v_writelane_b32 v0, ttmp7, 0x7
++ v_writelane_b32 v0, ttmp8, 0x8
++ v_writelane_b32 v0, ttmp9, 0x9
++ v_writelane_b32 v0, ttmp10, 0xA
++ v_writelane_b32 v0, ttmp11, 0xB
++ v_writelane_b32 v0, ttmp13, 0xD
++ v_writelane_b32 v0, exec_lo, 0xE
++ v_writelane_b32 v0, exec_hi, 0xF
++
++ s_mov_b32 exec_lo, 0x3FFF
++ s_mov_b32 exec_hi, 0x0
++ global_store_dword_addtid v0, [s_save_ttmps_lo, s_save_ttmps_hi] offset:0x40 scope:SCOPE_SYS
++ v_readlane_b32 ttmp14, v0, 0xE
++ v_readlane_b32 ttmp15, v0, 0xF
++ s_mov_b32 exec_lo, ttmp14
++ s_mov_b32 exec_hi, ttmp15
++
++ /* setup Resource Constants */
++ s_mov_b32 s_save_buf_rsrc0, s_save_spi_init_lo //base_addr_lo
++ s_and_b32 s_save_buf_rsrc1, s_save_spi_init_hi, 0x0000FFFF //base_addr_hi
++ s_or_b32 s_save_buf_rsrc1, s_save_buf_rsrc1, S_SAVE_BUF_RSRC_WORD1_STRIDE
++ s_mov_b32 s_save_buf_rsrc2, 0 //NUM_RECORDS initial value = 0 (in bytes) although not necessarily initialized
++ s_mov_b32 s_save_buf_rsrc3, S_SAVE_BUF_RSRC_WORD3_MISC
++
++ s_mov_b32 s_save_m0, m0
++
++ /* global mem offset */
++ s_mov_b32 s_save_mem_offset, 0x0
++ get_wave_size2(s_wave_size)
++
++ /* save first 4 VGPRs, needed for SGPR save */
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_SAVE_4VGPR_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_SAVE_4VGPR_WAVE32
++L_ENABLE_SAVE_4VGPR_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++ s_branch L_SAVE_4VGPR_WAVE64
++L_SAVE_4VGPR_WAVE32:
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR Allocated in 4-GPR granularity
++
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*3
++ s_branch L_SAVE_HWREG
++
++L_SAVE_4VGPR_WAVE64:
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR Allocated in 4-GPR granularity
++
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*3
++
++ /* save HW registers */
++
++L_SAVE_HWREG:
++ // HWREG SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)
++ get_vgpr_size_bytes(s_save_mem_offset, s_wave_size)
++ get_svgpr_size_bytes(s_save_tmp)
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s_save_tmp
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, get_sgpr_size_bytes()
++
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ v_mov_b32 v0, 0x0 //Offset[31:0] from buffer resource
++ v_mov_b32 v1, 0x0 //Offset[63:32] from buffer resource
++ v_mov_b32 v2, 0x0 //Set of SGPRs for TCP store
++ s_mov_b32 m0, 0x0 //Next lane of v2 to write to
++
++ // Ensure no further changes to barrier or LDS state.
++ // STATE_PRIV.BARRIER_COMPLETE may change up to this point.
++ s_barrier_signal -2
++ s_barrier_wait -2
++
++ // Re-read final state of BARRIER_COMPLETE field for save.
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATE_PRIV)
++ s_and_b32 s_save_tmp, s_save_tmp, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
++ s_andn2_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
++ s_or_b32 s_save_state_priv, s_save_state_priv, s_save_tmp
++
++ write_hwreg_to_v2(s_save_m0)
++ write_hwreg_to_v2(s_save_pc_lo)
++ s_andn2_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
++ write_hwreg_to_v2(s_save_tmp)
++ write_hwreg_to_v2(s_save_exec_lo)
++ write_hwreg_to_v2(s_save_exec_hi)
++ write_hwreg_to_v2(s_save_state_priv)
++
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++ write_hwreg_to_v2(s_save_tmp)
++
++ write_hwreg_to_v2(s_save_xnack_mask)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_MODE)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_SCRATCH_BASE_LO)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_SCRATCH_BASE_HI)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_TRAP_CTRL)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
++ write_hwreg_to_v2(s_save_tmp)
++
++ s_get_barrier_state s_save_tmp, -1
++ s_wait_kmcnt (0)
++ write_hwreg_to_v2(s_save_tmp)
++
++ // Write HWREGs with 16 VGPR lanes. TTMPs occupy space after this.
++ s_mov_b32 exec_lo, 0xFFFF
++ s_mov_b32 exec_hi, 0x0
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ // Write SGPRs with 32 VGPR lanes. This works in wave32 and wave64 mode.
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++
++ /* save SGPRs */
++ // Save SGPR before LDS save, then the s0 to s4 can be used during LDS save...
++
++ // SGPR SR memory offset : size(VGPR)+size(SVGPR)
++ get_vgpr_size_bytes(s_save_mem_offset, s_wave_size)
++ get_svgpr_size_bytes(s_save_tmp)
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s_save_tmp
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ s_mov_b32 ttmp13, 0x0 //next VGPR lane to copy SGPR into
++
++ s_mov_b32 m0, 0x0 //SGPR initial index value =0
++ s_nop 0x0 //Manually inserted wait states
++L_SAVE_SGPR_LOOP:
++ // SGPR is allocated in 16 SGPR granularity
++ s_movrels_b64 s0, s0 //s0 = s[0+m0], s1 = s[1+m0]
++ s_movrels_b64 s2, s2 //s2 = s[2+m0], s3 = s[3+m0]
++ s_movrels_b64 s4, s4 //s4 = s[4+m0], s5 = s[5+m0]
++ s_movrels_b64 s6, s6 //s6 = s[6+m0], s7 = s[7+m0]
++ s_movrels_b64 s8, s8 //s8 = s[8+m0], s9 = s[9+m0]
++ s_movrels_b64 s10, s10 //s10 = s[10+m0], s11 = s[11+m0]
++ s_movrels_b64 s12, s12 //s12 = s[12+m0], s13 = s[13+m0]
++ s_movrels_b64 s14, s14 //s14 = s[14+m0], s15 = s[15+m0]
++
++ write_16sgpr_to_v2(s0)
++
++ s_cmp_eq_u32 ttmp13, 0x20 //have 32 VGPR lanes filled?
++ s_cbranch_scc0 L_SAVE_SGPR_SKIP_TCP_STORE
++
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 0x80
++ s_mov_b32 ttmp13, 0x0
++ v_mov_b32 v2, 0x0
++L_SAVE_SGPR_SKIP_TCP_STORE:
++
++ s_add_u32 m0, m0, 16 //next sgpr index
++ s_cmp_lt_u32 m0, 96 //scc = (m0 < first 96 SGPR) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_SGPR_LOOP //first 96 SGPR save is complete?
++
++ //save the remaining 12 SGPRs
++ s_movrels_b64 s0, s0 //s0 = s[0+m0], s1 = s[1+m0]
++ s_movrels_b64 s2, s2 //s2 = s[2+m0], s3 = s[3+m0]
++ s_movrels_b64 s4, s4 //s4 = s[4+m0], s5 = s[5+m0]
++ s_movrels_b64 s6, s6 //s6 = s[6+m0], s7 = s[7+m0]
++ s_movrels_b64 s8, s8 //s8 = s[8+m0], s9 = s[9+m0]
++ s_movrels_b64 s10, s10 //s10 = s[10+m0], s11 = s[11+m0]
++ write_12sgpr_to_v2(s0)
++
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ /* save LDS */
++
++L_SAVE_LDS:
++ // Change EXEC to all threads...
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_SAVE_LDS_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_SAVE_LDS_NORMAL
++L_ENABLE_SAVE_LDS_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_SAVE_LDS_NORMAL:
++ s_getreg_b32 s_save_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE)
++ s_and_b32 s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF //lds_size is zero?
++ s_cbranch_scc0 L_SAVE_LDS_DONE //no lds used? jump to L_SAVE_DONE
++
++ s_and_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
++ s_cbranch_scc0 L_SAVE_LDS_DONE
++
++ // first wave do LDS save;
++
++ s_lshl_b32 s_save_alloc_size, s_save_alloc_size, SQ_WAVE_LDS_ALLOC_GRANULARITY
++ s_mov_b32 s_save_buf_rsrc2, s_save_alloc_size //NUM_RECORDS in bytes
++
++ // LDS at offset: size(VGPR)+size(SVGPR)+SIZE(SGPR)+SIZE(HWREG)
++ //
++ get_vgpr_size_bytes(s_save_mem_offset, s_wave_size)
++ get_svgpr_size_bytes(s_save_tmp)
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s_save_tmp
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, get_sgpr_size_bytes()
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, get_hwreg_size_bytes()
++
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ //load 0~63*4(byte address) to vgpr v0
++ v_mbcnt_lo_u32_b32 v0, -1, 0
++ v_mbcnt_hi_u32_b32 v0, -1, v0
++ v_mul_u32_u24 v0, 4, v0
++
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_mov_b32 m0, 0x0
++ s_cbranch_scc1 L_SAVE_LDS_W64
++
++L_SAVE_LDS_W32:
++ s_mov_b32 s3, 128
++ s_nop 0
++ s_nop 0
++ s_nop 0
++L_SAVE_LDS_LOOP_W32:
++ ds_read_b32 v1, v0
++ s_wait_idle
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ s_add_u32 m0, m0, s3 //every buffer_store_lds does 128 bytes
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s3
++ v_add_nc_u32 v0, v0, 128 //mem offset increased by 128 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc=(m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_LDS_LOOP_W32 //LDS save is complete?
++
++ s_branch L_SAVE_LDS_DONE
++
++L_SAVE_LDS_W64:
++ s_mov_b32 s3, 256
++ s_nop 0
++ s_nop 0
++ s_nop 0
++L_SAVE_LDS_LOOP_W64:
++ ds_read_b32 v1, v0
++ s_wait_idle
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ s_add_u32 m0, m0, s3 //every buffer_store_lds does 256 bytes
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s3
++ v_add_nc_u32 v0, v0, 256 //mem offset increased by 256 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc=(m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_LDS_LOOP_W64 //LDS save is complete?
++
++L_SAVE_LDS_DONE:
++ /* save VGPRs - save the rest of the VGPRs */
++L_SAVE_VGPR:
++ // VGPR SR memory offset: 0
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_SAVE_VGPR_EXEC_HI
++ s_mov_b32 s_save_mem_offset, (0+128*4) // for the rest VGPRs
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_SAVE_VGPR_NORMAL
++L_ENABLE_SAVE_VGPR_EXEC_HI:
++ s_mov_b32 s_save_mem_offset, (0+256*4) // for the rest VGPRs
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_SAVE_VGPR_NORMAL:
++ s_getreg_b32 s_save_alloc_size, hwreg(HW_REG_WAVE_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE)
++ s_add_u32 s_save_alloc_size, s_save_alloc_size, 1
++ s_lshl_b32 s_save_alloc_size, s_save_alloc_size, 2 //Number of VGPRs = (vgpr_size + 1) * 4 (non-zero value)
++ //determine it is wave32 or wave64
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_SAVE_VGPR_WAVE64
++
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR Allocated in 4-GPR granularity
++
++ // VGPR store using dw burst
++ s_mov_b32 m0, 0x4 //VGPR initial index value =4
++ s_cmp_lt_u32 m0, s_save_alloc_size
++ s_cbranch_scc0 L_SAVE_VGPR_END
++
++L_SAVE_VGPR_W32_LOOP:
++ v_movrels_b32 v0, v0 //v0 = v[0+m0]
++ v_movrels_b32 v1, v1 //v1 = v[1+m0]
++ v_movrels_b32 v2, v2 //v2 = v[2+m0]
++ v_movrels_b32 v3, v3 //v3 = v[3+m0]
++
++ buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*3
++
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 128*4 //every buffer_store_dword does 128 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc = (m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_VGPR_W32_LOOP //VGPR save is complete?
++
++ s_branch L_SAVE_VGPR_END
++
++L_SAVE_VGPR_WAVE64:
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR store using dw burst
++ s_mov_b32 m0, 0x4 //VGPR initial index value =4
++ s_cmp_lt_u32 m0, s_save_alloc_size
++ s_cbranch_scc0 L_SAVE_SHARED_VGPR
++
++L_SAVE_VGPR_W64_LOOP:
++ v_movrels_b32 v0, v0 //v0 = v[0+m0]
++ v_movrels_b32 v1, v1 //v1 = v[1+m0]
++ v_movrels_b32 v2, v2 //v2 = v[2+m0]
++ v_movrels_b32 v3, v3 //v3 = v[3+m0]
++
++ buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*3
++
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 256*4 //every buffer_store_dword does 256 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc = (m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_VGPR_W64_LOOP //VGPR save is complete?
++
++L_SAVE_SHARED_VGPR:
++ s_getreg_b32 s_save_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE)
++ s_and_b32 s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF //shared_vgpr_size is zero?
++ s_cbranch_scc0 L_SAVE_VGPR_END //no shared_vgpr used? jump to L_SAVE_VGPR_END
++ s_lshl_b32 s_save_alloc_size, s_save_alloc_size, 3 //Number of SHARED_VGPRs = shared_vgpr_size * 8 (non-zero value)
++ //m0 now has the value of normal vgpr count, just add the m0 with shared_vgpr count to get the total count.
++ //save shared_vgpr will start from the index of m0
++ s_add_u32 s_save_alloc_size, s_save_alloc_size, m0
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++ s_mov_b32 exec_hi, 0x00000000
++
++L_SAVE_SHARED_VGPR_WAVE64_LOOP:
++ v_movrels_b32 v0, v0 //v0 = v[0+m0]
++ buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ s_add_u32 m0, m0, 1 //next vgpr index
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 128
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc = (m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_SHARED_VGPR_WAVE64_LOOP //SHARED_VGPR save is complete?
++
++L_SAVE_VGPR_END:
++ s_branch L_END_PGM
++
++L_RESTORE:
++ /* Setup Resource Constants */
++ s_mov_b32 s_restore_buf_rsrc0, s_restore_spi_init_lo //base_addr_lo
++ s_and_b32 s_restore_buf_rsrc1, s_restore_spi_init_hi, 0x0000FFFF //base_addr_hi
++ s_or_b32 s_restore_buf_rsrc1, s_restore_buf_rsrc1, S_RESTORE_BUF_RSRC_WORD1_STRIDE
++ s_mov_b32 s_restore_buf_rsrc2, 0 //NUM_RECORDS initial value = 0 (in bytes)
++ s_mov_b32 s_restore_buf_rsrc3, S_RESTORE_BUF_RSRC_WORD3_MISC
++
++ // Save s_restore_spi_init_hi for later use.
++ s_mov_b32 s_restore_spi_init_hi_save, s_restore_spi_init_hi
++
++ //determine it is wave32 or wave64
++ get_wave_size2(s_restore_size)
++
++ s_and_b32 s_restore_tmp, s_restore_spi_init_hi, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK
++ s_cbranch_scc0 L_RESTORE_VGPR
++
++ /* restore LDS */
++L_RESTORE_LDS:
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_RESTORE_LDS_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_RESTORE_LDS_NORMAL
++L_ENABLE_RESTORE_LDS_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_RESTORE_LDS_NORMAL:
++ s_getreg_b32 s_restore_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE)
++ s_and_b32 s_restore_alloc_size, s_restore_alloc_size, 0xFFFFFFFF //lds_size is zero?
++ s_cbranch_scc0 L_RESTORE_VGPR //no lds used? jump to L_RESTORE_VGPR
++ s_lshl_b32 s_restore_alloc_size, s_restore_alloc_size, SQ_WAVE_LDS_ALLOC_GRANULARITY
++ s_mov_b32 s_restore_buf_rsrc2, s_restore_alloc_size //NUM_RECORDS in bytes
++
++ // LDS at offset: size(VGPR)+size(SVGPR)+SIZE(SGPR)+SIZE(HWREG)
++ //
++ get_vgpr_size_bytes(s_restore_mem_offset, s_restore_size)
++ get_svgpr_size_bytes(s_restore_tmp)
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, s_restore_tmp
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_sgpr_size_bytes()
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_hwreg_size_bytes()
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_mov_b32 m0, 0x0
++ s_cbranch_scc1 L_RESTORE_LDS_LOOP_W64
++
++L_RESTORE_LDS_LOOP_W32:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset
++ s_wait_idle
++ ds_store_addtid_b32 v0
++ s_add_u32 m0, m0, 128 // 128 DW
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128 //mem offset increased by 128DW
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc=(m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_LDS_LOOP_W32 //LDS restore is complete?
++ s_branch L_RESTORE_VGPR
++
++L_RESTORE_LDS_LOOP_W64:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset
++ s_wait_idle
++ ds_store_addtid_b32 v0
++ s_add_u32 m0, m0, 256 // 256 DW
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 256 //mem offset increased by 256DW
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc=(m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_LDS_LOOP_W64 //LDS restore is complete?
++
++ /* restore VGPRs */
++L_RESTORE_VGPR:
++ // VGPR SR memory offset : 0
++ s_mov_b32 s_restore_mem_offset, 0x0
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_RESTORE_VGPR_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_RESTORE_VGPR_NORMAL
++L_ENABLE_RESTORE_VGPR_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_RESTORE_VGPR_NORMAL:
++ s_getreg_b32 s_restore_alloc_size, hwreg(HW_REG_WAVE_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE)
++ s_add_u32 s_restore_alloc_size, s_restore_alloc_size, 1
++ s_lshl_b32 s_restore_alloc_size, s_restore_alloc_size, 2 //Number of VGPRs = (vgpr_size + 1) * 4 (non-zero value)
++ //determine it is wave32 or wave64
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_RESTORE_VGPR_WAVE64
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR load using dw burst
++ s_mov_b32 s_restore_mem_offset_save, s_restore_mem_offset // restore start with v4, v0 will be the last
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128*4
++ s_mov_b32 m0, 4 //VGPR initial index value = 4
++ s_cmp_lt_u32 m0, s_restore_alloc_size
++ s_cbranch_scc0 L_RESTORE_SGPR
++
++L_RESTORE_VGPR_WAVE32_LOOP:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:128
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:128*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:128*3
++ s_wait_idle
++ v_movreld_b32 v0, v0 //v[0+m0] = v0
++ v_movreld_b32 v1, v1
++ v_movreld_b32 v2, v2
++ v_movreld_b32 v3, v3
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128*4 //every buffer_load_dword does 128 bytes
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc = (m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_VGPR_WAVE32_LOOP //VGPR restore (except v0) is complete?
++
++ /* VGPR restore on v0 */
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:128
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:128*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:128*3
++ s_wait_idle
++
++ s_branch L_RESTORE_SGPR
++
++L_RESTORE_VGPR_WAVE64:
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR load using dw burst
++ s_mov_b32 s_restore_mem_offset_save, s_restore_mem_offset // restore start with v4, v0 will be the last
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 256*4
++ s_mov_b32 m0, 4 //VGPR initial index value = 4
++ s_cmp_lt_u32 m0, s_restore_alloc_size
++ s_cbranch_scc0 L_RESTORE_SHARED_VGPR
++
++L_RESTORE_VGPR_WAVE64_LOOP:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:256
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:256*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:256*3
++ s_wait_idle
++ v_movreld_b32 v0, v0 //v[0+m0] = v0
++ v_movreld_b32 v1, v1
++ v_movreld_b32 v2, v2
++ v_movreld_b32 v3, v3
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 256*4 //every buffer_load_dword does 256 bytes
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc = (m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_VGPR_WAVE64_LOOP //VGPR restore (except v0) is complete?
++
++L_RESTORE_SHARED_VGPR:
++ s_getreg_b32 s_restore_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE) //shared_vgpr_size
++ s_and_b32 s_restore_alloc_size, s_restore_alloc_size, 0xFFFFFFFF //shared_vgpr_size is zero?
++ s_cbranch_scc0 L_RESTORE_V0 //no shared_vgpr used?
++ s_lshl_b32 s_restore_alloc_size, s_restore_alloc_size, 3 //Number of SHARED_VGPRs = shared_vgpr_size * 8 (non-zero value)
++ //m0 now has the value of normal vgpr count, just add the m0 with shared_vgpr count to get the total count.
++ //restore shared_vgpr will start from the index of m0
++ s_add_u32 s_restore_alloc_size, s_restore_alloc_size, m0
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++ s_mov_b32 exec_hi, 0x00000000
++L_RESTORE_SHARED_VGPR_WAVE64_LOOP:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS
++ s_wait_idle
++ v_movreld_b32 v0, v0 //v[0+m0] = v0
++ s_add_u32 m0, m0, 1 //next vgpr index
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc = (m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_SHARED_VGPR_WAVE64_LOOP //VGPR restore (except v0) is complete?
++
++ s_mov_b32 exec_hi, 0xFFFFFFFF //restore back exec_hi before restoring V0!!
++
++ /* VGPR restore on v0 */
++L_RESTORE_V0:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:256
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:256*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:256*3
++ s_wait_idle
++
++ /* restore SGPRs */
++ //will be 2+8+16*6
++ // SGPR SR memory offset : size(VGPR)+size(SVGPR)
++L_RESTORE_SGPR:
++ get_vgpr_size_bytes(s_restore_mem_offset, s_restore_size)
++ get_svgpr_size_bytes(s_restore_tmp)
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, s_restore_tmp
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_sgpr_size_bytes()
++ s_sub_u32 s_restore_mem_offset, s_restore_mem_offset, 20*4 //s108~s127 is not saved
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ s_mov_b32 m0, s_sgpr_save_num
++
++ read_4sgpr_from_mem(s0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_sub_u32 m0, m0, 4 // Restore from S[0] to S[104]
++ s_nop 0 // hazard SALU M0=> S_MOVREL
++
++ s_movreld_b64 s0, s0 //s[0+m0] = s0
++ s_movreld_b64 s2, s2
++
++ read_8sgpr_from_mem(s0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_sub_u32 m0, m0, 8 // Restore from S[0] to S[96]
++ s_nop 0 // hazard SALU M0=> S_MOVREL
++
++ s_movreld_b64 s0, s0 //s[0+m0] = s0
++ s_movreld_b64 s2, s2
++ s_movreld_b64 s4, s4
++ s_movreld_b64 s6, s6
++
++ L_RESTORE_SGPR_LOOP:
++ read_16sgpr_from_mem(s0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_sub_u32 m0, m0, 16 // Restore from S[n] to S[0]
++ s_nop 0 // hazard SALU M0=> S_MOVREL
++
++ s_movreld_b64 s0, s0 //s[0+m0] = s0
++ s_movreld_b64 s2, s2
++ s_movreld_b64 s4, s4
++ s_movreld_b64 s6, s6
++ s_movreld_b64 s8, s8
++ s_movreld_b64 s10, s10
++ s_movreld_b64 s12, s12
++ s_movreld_b64 s14, s14
++
++	s_cmp_eq_u32	m0, 0		//scc = (m0 == 0) ? 1 : 0
++ s_cbranch_scc0 L_RESTORE_SGPR_LOOP
++
++ // s_barrier with STATE_PRIV.TRAP_AFTER_INST=1, STATUS.PRIV=1 incorrectly asserts debug exception.
++ // Clear DEBUG_EN before and restore MODE after the barrier.
++ s_setreg_imm32_b32 hwreg(HW_REG_WAVE_MODE), 0
++
++ /* restore HW registers */
++L_RESTORE_HWREG:
++ // HWREG SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)
++ get_vgpr_size_bytes(s_restore_mem_offset, s_restore_size)
++ get_svgpr_size_bytes(s_restore_tmp)
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, s_restore_tmp
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_sgpr_size_bytes()
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // Restore s_restore_spi_init_hi before the saved value gets clobbered.
++ s_mov_b32 s_restore_spi_init_hi, s_restore_spi_init_hi_save
++
++ read_hwreg_from_mem(s_restore_m0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_pc_lo, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_pc_hi, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_exec_lo, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_exec_hi, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_state_priv, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_excp_flag_priv, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_xnack_mask, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_mode, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_flat_scratch, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_SCRATCH_BASE_LO), s_restore_flat_scratch
++
++ read_hwreg_from_mem(s_restore_flat_scratch, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_SCRATCH_BASE_HI), s_restore_flat_scratch
++
++ read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_USER), s_restore_tmp
++
++ read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++ s_setreg_b32 hwreg(HW_REG_WAVE_TRAP_CTRL), s_restore_tmp
++
++ // Only the first wave needs to restore the workgroup barrier.
++ s_and_b32 s_restore_tmp, s_restore_spi_init_hi, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK
++ s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
++
++ // Skip over WAVE_STATUS, since there is no state to restore from it
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 4
++
++ read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_bitcmp1_b32 s_restore_tmp, BARRIER_STATE_VALID_OFFSET
++ s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
++
++ // extract the saved signal count from s_restore_tmp
++ s_lshr_b32 s_restore_tmp, s_restore_tmp, BARRIER_STATE_SIGNAL_OFFSET
++
++ // We need to call s_barrier_signal repeatedly to restore the signal
++ // count of the work group barrier. The member count is already
++ // initialized with the number of waves in the work group.
++L_BARRIER_RESTORE_LOOP:
++ s_and_b32 s_restore_tmp, s_restore_tmp, s_restore_tmp
++ s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
++ s_barrier_signal -1
++ s_add_i32 s_restore_tmp, s_restore_tmp, -1
++ s_branch L_BARRIER_RESTORE_LOOP
++
++L_SKIP_BARRIER_RESTORE:
++
++ s_mov_b32 m0, s_restore_m0
++ s_mov_b32 exec_lo, s_restore_exec_lo
++ s_mov_b32 exec_hi, s_restore_exec_hi
++
++ // EXCP_FLAG_PRIV.SAVE_CONTEXT and HOST_TRAP may have changed.
++ // Only restore the other fields to avoid clobbering them.
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, 0, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_1_SIZE), s_restore_excp_flag_priv
++ s_lshr_b32 s_restore_excp_flag_priv, s_restore_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SIZE), s_restore_excp_flag_priv
++ s_lshr_b32 s_restore_excp_flag_priv, s_restore_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT - SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SIZE), s_restore_excp_flag_priv
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_MODE), s_restore_mode
++
++ // Restore trap temporaries 4-11, 13 initialized by SPI debug dispatch logic
++ // ttmp SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)+0x40
++ get_vgpr_size_bytes(s_restore_ttmps_lo, s_restore_size)
++ get_svgpr_size_bytes(s_restore_ttmps_hi)
++ s_add_u32 s_restore_ttmps_lo, s_restore_ttmps_lo, s_restore_ttmps_hi
++ s_add_u32 s_restore_ttmps_lo, s_restore_ttmps_lo, get_sgpr_size_bytes()
++ s_add_u32 s_restore_ttmps_lo, s_restore_ttmps_lo, s_restore_buf_rsrc0
++ s_addc_u32 s_restore_ttmps_hi, s_restore_buf_rsrc1, 0x0
++ s_and_b32 s_restore_ttmps_hi, s_restore_ttmps_hi, 0xFFFF
++ s_load_dwordx4 [ttmp4, ttmp5, ttmp6, ttmp7], [s_restore_ttmps_lo, s_restore_ttmps_hi], 0x50 scope:SCOPE_SYS
++ s_load_dwordx4 [ttmp8, ttmp9, ttmp10, ttmp11], [s_restore_ttmps_lo, s_restore_ttmps_hi], 0x60 scope:SCOPE_SYS
++ s_load_dword ttmp13, [s_restore_ttmps_lo, s_restore_ttmps_hi], 0x74 scope:SCOPE_SYS
++ s_wait_idle
++
++ s_and_b32 s_restore_pc_hi, s_restore_pc_hi, 0x0000ffff //pc[47:32] //Do it here in order not to affect STATUS
++ s_and_b64 exec, exec, exec // Restore STATUS.EXECZ, not writable by s_setreg_b32
++ s_and_b64 vcc, vcc, vcc // Restore STATUS.VCCZ, not writable by s_setreg_b32
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_STATE_PRIV), s_restore_state_priv // SCC is included, which is changed by previous salu
++
++ // Make barrier and LDS state visible to all waves in the group.
++ // STATE_PRIV.BARRIER_COMPLETE may change after this point.
++ s_barrier_signal -2
++ s_barrier_wait -2
++
++ s_rfe_b64 s_restore_pc_lo //Return to the main shader program and resume execution
++
++L_END_PGM:
++ // Make sure that no wave of the workgroup can exit the trap handler
++ // before the workgroup barrier state is saved.
++ s_barrier_signal -2
++ s_barrier_wait -2
++ s_endpgm_saved
++end
++
++function write_hwreg_to_v2(s)
++ // Copy into VGPR for later TCP store.
++ v_writelane_b32 v2, s, m0
++ s_add_u32 m0, m0, 0x1
++end
++
++
++function write_16sgpr_to_v2(s)
++ // Copy into VGPR for later TCP store.
++ for var sgpr_idx = 0; sgpr_idx < 16; sgpr_idx ++
++ v_writelane_b32 v2, s[sgpr_idx], ttmp13
++ s_add_u32 ttmp13, ttmp13, 0x1
++ end
++end
++
++function write_12sgpr_to_v2(s)
++ // Copy into VGPR for later TCP store.
++ for var sgpr_idx = 0; sgpr_idx < 12; sgpr_idx ++
++ v_writelane_b32 v2, s[sgpr_idx], ttmp13
++ s_add_u32 ttmp13, ttmp13, 0x1
++ end
++end
++
++function read_hwreg_from_mem(s, s_rsrc, s_mem_offset)
++ s_buffer_load_dword s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++ s_add_u32 s_mem_offset, s_mem_offset, 4
++end
++
++function read_16sgpr_from_mem(s, s_rsrc, s_mem_offset)
++ s_sub_u32 s_mem_offset, s_mem_offset, 4*16
++ s_buffer_load_dwordx16 s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++end
++
++function read_8sgpr_from_mem(s, s_rsrc, s_mem_offset)
++ s_sub_u32 s_mem_offset, s_mem_offset, 4*8
++ s_buffer_load_dwordx8 s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++end
++
++function read_4sgpr_from_mem(s, s_rsrc, s_mem_offset)
++ s_sub_u32 s_mem_offset, s_mem_offset, 4*4
++ s_buffer_load_dwordx4 s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++end
++
++function get_vgpr_size_bytes(s_vgpr_size_byte, s_size)
++ s_getreg_b32 s_vgpr_size_byte, hwreg(HW_REG_WAVE_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE)
++ s_add_u32 s_vgpr_size_byte, s_vgpr_size_byte, 1
++ s_bitcmp1_b32 s_size, S_WAVE_SIZE
++ s_cbranch_scc1 L_ENABLE_SHIFT_W64
++ s_lshl_b32 s_vgpr_size_byte, s_vgpr_size_byte, (2+7) //Number of VGPRs = (vgpr_size + 1) * 4 * 32 * 4 (non-zero value)
++ s_branch L_SHIFT_DONE
++L_ENABLE_SHIFT_W64:
++ s_lshl_b32 s_vgpr_size_byte, s_vgpr_size_byte, (2+8) //Number of VGPRs = (vgpr_size + 1) * 4 * 64 * 4 (non-zero value)
++L_SHIFT_DONE:
++end
++
++function get_svgpr_size_bytes(s_svgpr_size_byte)
++ s_getreg_b32 s_svgpr_size_byte, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE)
++ s_lshl_b32 s_svgpr_size_byte, s_svgpr_size_byte, (3+7)
++end
++
++function get_sgpr_size_bytes
++ return 512
++end
++
++function get_hwreg_size_bytes
++ return 128
++end
++
++function get_wave_size2(s_reg)
++ s_getreg_b32 s_reg, hwreg(HW_REG_WAVE_STATUS,SQ_WAVE_STATUS_WAVE64_SHIFT,SQ_WAVE_STATUS_WAVE64_SIZE)
++ s_lshl_b32 s_reg, s_reg, S_WAVE_SIZE
++end
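The restore path above ends with small helper functions; the subtlest piece earlier in the hunk is the workgroup-barrier restore, where the saved signal count has to be replayed one s_barrier_signal at a time because the member count is already initialized by hardware with the number of waves. A purely illustrative C model of the control flow in L_BARRIER_RESTORE_LOOP:

	/* Models L_BARRIER_RESTORE_LOOP: replay the saved signal count one
	 * signal at a time; barrier_signal() stands in for s_barrier_signal -1. */
	static void barrier_signal(void) { /* hardware barrier signal */ }

	static void restore_barrier_signals(unsigned int saved_signal_count)
	{
		while (saved_signal_count != 0) {	/* s_and_b32 + s_cbranch_scc0 */
			barrier_signal();		/* s_barrier_signal -1 */
			saved_signal_count--;		/* s_add_i32 tmp, tmp, -1 */
		}
	}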
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+index ab1132bc896a32..d9955c5d2e5ed5 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+@@ -174,7 +174,7 @@ AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN32)
+ ###############################################################################
+ # DCN35
+ ###############################################################################
+-CLK_MGR_DCN35 = dcn35_smu.o dcn35_clk_mgr.o
++CLK_MGR_DCN35 = dcn35_smu.o dcn351_clk_mgr.o dcn35_clk_mgr.o
+
+ AMD_DAL_CLK_MGR_DCN35 = $(addprefix $(AMDDALPATH)/dc/clk_mgr/dcn35/,$(CLK_MGR_DCN35))
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+index 0e243f4344d050..4c3e58c730b11c 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+@@ -355,8 +355,11 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
++ if (ctx->dce_version == DCN_VERSION_3_51)
++ dcn351_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
++ else
++ dcn35_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+
+- dcn35_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+ return &clk_mgr->base.base;
+ }
+ break;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+index e93df3d6222e68..bc123f1884da32 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+@@ -50,12 +50,13 @@
+ #include "link.h"
+
+ #include "logger_types.h"
++
++
++#include "yellow_carp_offset.h"
+ #undef DC_LOGGER
+ #define DC_LOGGER \
+ clk_mgr->base.base.ctx->logger
+
+-#include "yellow_carp_offset.h"
+-
+ #define regCLK1_CLK_PLL_REQ 0x0237
+ #define regCLK1_CLK_PLL_REQ_BASE_IDX 0
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+index 29eff386505ab5..91d872d6d392b1 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+@@ -53,9 +53,6 @@
+
+
+ #include "logger_types.h"
+-#undef DC_LOGGER
+-#define DC_LOGGER \
+- clk_mgr->base.base.ctx->logger
+
+
+ #define MAX_INSTANCE 7
+@@ -77,6 +74,9 @@ static const struct IP_BASE CLK_BASE = { { { { 0x00016C00, 0x02401800, 0, 0, 0,
+ { { 0x0001B200, 0x0242DC00, 0, 0, 0, 0, 0, 0 } },
+ { { 0x0001B400, 0x0242E000, 0, 0, 0, 0, 0, 0 } } } };
+
++#undef DC_LOGGER
++#define DC_LOGGER \
++ clk_mgr->base.base.ctx->logger
+ #define regCLK1_CLK_PLL_REQ 0x0237
+ #define regCLK1_CLK_PLL_REQ_BASE_IDX 0
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
+new file mode 100644
+index 00000000000000..6a6ae618650b6d
+--- /dev/null
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
+@@ -0,0 +1,140 @@
++/*
++ * Copyright 2024 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ *
++ * Authors: AMD
++ *
++ */
++
++#include "core_types.h"
++#include "dcn35_clk_mgr.h"
++
++#define DCN_BASE__INST0_SEG1 0x000000C0
++#define mmCLK1_CLK_PLL_REQ 0x16E37
++
++#define mmCLK1_CLK0_DFS_CNTL 0x16E69
++#define mmCLK1_CLK1_DFS_CNTL 0x16E6C
++#define mmCLK1_CLK2_DFS_CNTL 0x16E6F
++#define mmCLK1_CLK3_DFS_CNTL 0x16E72
++#define mmCLK1_CLK4_DFS_CNTL 0x16E75
++#define mmCLK1_CLK5_DFS_CNTL 0x16E78
++
++#define mmCLK1_CLK0_CURRENT_CNT 0x16EFC
++#define mmCLK1_CLK1_CURRENT_CNT 0x16EFD
++#define mmCLK1_CLK2_CURRENT_CNT 0x16EFE
++#define mmCLK1_CLK3_CURRENT_CNT 0x16EFF
++#define mmCLK1_CLK4_CURRENT_CNT 0x16F00
++#define mmCLK1_CLK5_CURRENT_CNT 0x16F01
++
++#define mmCLK1_CLK0_BYPASS_CNTL 0x16E8A
++#define mmCLK1_CLK1_BYPASS_CNTL 0x16E93
++#define mmCLK1_CLK2_BYPASS_CNTL 0x16E9C
++#define mmCLK1_CLK3_BYPASS_CNTL 0x16EA5
++#define mmCLK1_CLK4_BYPASS_CNTL 0x16EAE
++#define mmCLK1_CLK5_BYPASS_CNTL 0x16EB7
++
++#define mmCLK1_CLK0_DS_CNTL 0x16E83
++#define mmCLK1_CLK1_DS_CNTL 0x16E8C
++#define mmCLK1_CLK2_DS_CNTL 0x16E95
++#define mmCLK1_CLK3_DS_CNTL 0x16E9E
++#define mmCLK1_CLK4_DS_CNTL 0x16EA7
++#define mmCLK1_CLK5_DS_CNTL 0x16EB0
++
++#define mmCLK1_CLK0_ALLOW_DS 0x16E84
++#define mmCLK1_CLK1_ALLOW_DS 0x16E8D
++#define mmCLK1_CLK2_ALLOW_DS 0x16E96
++#define mmCLK1_CLK3_ALLOW_DS 0x16E9F
++#define mmCLK1_CLK4_ALLOW_DS 0x16EA8
++#define mmCLK1_CLK5_ALLOW_DS 0x16EB1
++
++#define mmCLK5_spll_field_8 0x1B04B
++#define mmDENTIST_DISPCLK_CNTL 0x0124
++#define regDENTIST_DISPCLK_CNTL 0x0064
++#define regDENTIST_DISPCLK_CNTL_BASE_IDX 1
++
++#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT 0x0
++#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT 0xc
++#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT 0x10
++#define CLK1_CLK_PLL_REQ__FbMult_int_MASK 0x000001FFL
++#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L
++#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L
++
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L
++
++// DENTIST_DISPCLK_CNTL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER__SHIFT 0x0
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER__SHIFT 0x8
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE__SHIFT 0x13
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE__SHIFT 0x14
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER__SHIFT 0x18
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER_MASK 0x0000007FL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER_MASK 0x00007F00L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE_MASK 0x00080000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE_MASK 0x00100000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER_MASK 0x7F000000L
++
++#define CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
++
++#define REG(reg) \
++ (clk_mgr->regs->reg)
++
++#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
++
++#define BASE(seg) BASE_INNER(seg)
++
++#define SR(reg_name)\
++ .reg_name = BASE(reg ## reg_name ## _BASE_IDX) + \
++ reg ## reg_name
++
++#define CLK_SR_DCN35(reg_name)\
++ .reg_name = mm ## reg_name
++
++static const struct clk_mgr_registers clk_mgr_regs_dcn351 = {
++ CLK_REG_LIST_DCN35()
++};
++
++static const struct clk_mgr_shift clk_mgr_shift_dcn351 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
++};
++
++static const struct clk_mgr_mask clk_mgr_mask_dcn351 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(_MASK)
++};
++
++#define TO_CLK_MGR_DCN35(clk_mgr)\
++ container_of(clk_mgr, struct clk_mgr_dcn35, base)
++
++
++void dcn351_clk_mgr_construct(
++ struct dc_context *ctx,
++ struct clk_mgr_dcn35 *clk_mgr,
++ struct pp_smu_funcs *pp_smu,
++ struct dccg *dccg)
++{
++	/* register offsets changed relative to DCN 3.5 */
++ clk_mgr->base.regs = &clk_mgr_regs_dcn351;
++ clk_mgr->base.clk_mgr_shift = &clk_mgr_shift_dcn351;
++ clk_mgr->base.clk_mgr_mask = &clk_mgr_mask_dcn351;
++
++ dcn35_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
++
++}
++
++
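dcn351_clk_mgr_construct() above is deliberately thin: it overrides the register/shift/mask tables (the DCN 3.5.1 offsets differ from DCN 3.5) and then delegates everything else to dcn35_clk_mgr_construct(). A self-contained sketch of that override-then-delegate pattern; the names below are illustrative, not the driver's:

	struct variant_regs { unsigned int pll_req; };

	struct base_mgr { const struct variant_regs *regs; };

	static const struct variant_regs variant_table = {
		.pll_req = 0x16E37,	/* variant-specific offset */
	};

	static void base_construct(struct base_mgr *mgr)
	{
		/* common initialization; consumes mgr->regs set by the caller */
	}

	static void variant_construct(struct base_mgr *mgr)
	{
		mgr->regs = &variant_table;	/* override first ... */
		base_construct(mgr);		/* ... then delegate */
	}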
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 3bd0d46c170109..7d0d8852ce8d27 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -36,15 +36,11 @@
+ #include "dcn20/dcn20_clk_mgr.h"
+
+
+-
+-
+ #include "reg_helper.h"
+ #include "core_types.h"
+ #include "dcn35_smu.h"
+ #include "dm_helpers.h"
+
+-/* TODO: remove this include once we ported over remaining clk mgr functions*/
+-#include "dcn30/dcn30_clk_mgr.h"
+ #include "dcn31/dcn31_clk_mgr.h"
+
+ #include "dc_dmub_srv.h"
+@@ -55,34 +51,102 @@
+ #define DC_LOGGER \
+ clk_mgr->base.base.ctx->logger
+
+-#define regCLK1_CLK_PLL_REQ 0x0237
+-#define regCLK1_CLK_PLL_REQ_BASE_IDX 0
++#define DCN_BASE__INST0_SEG1 0x000000C0
++#define mmCLK1_CLK_PLL_REQ 0x16E37
++
++#define mmCLK1_CLK0_DFS_CNTL 0x16E69
++#define mmCLK1_CLK1_DFS_CNTL 0x16E6C
++#define mmCLK1_CLK2_DFS_CNTL 0x16E6F
++#define mmCLK1_CLK3_DFS_CNTL 0x16E72
++#define mmCLK1_CLK4_DFS_CNTL 0x16E75
++#define mmCLK1_CLK5_DFS_CNTL 0x16E78
++
++#define mmCLK1_CLK0_CURRENT_CNT 0x16EFB
++#define mmCLK1_CLK1_CURRENT_CNT 0x16EFC
++#define mmCLK1_CLK2_CURRENT_CNT 0x16EFD
++#define mmCLK1_CLK3_CURRENT_CNT 0x16EFE
++#define mmCLK1_CLK4_CURRENT_CNT 0x16EFF
++#define mmCLK1_CLK5_CURRENT_CNT 0x16F00
++
++#define mmCLK1_CLK0_BYPASS_CNTL 0x16E8A
++#define mmCLK1_CLK1_BYPASS_CNTL 0x16E93
++#define mmCLK1_CLK2_BYPASS_CNTL 0x16E9C
++#define mmCLK1_CLK3_BYPASS_CNTL 0x16EA5
++#define mmCLK1_CLK4_BYPASS_CNTL 0x16EAE
++#define mmCLK1_CLK5_BYPASS_CNTL 0x16EB7
++
++#define mmCLK1_CLK0_DS_CNTL 0x16E83
++#define mmCLK1_CLK1_DS_CNTL 0x16E8C
++#define mmCLK1_CLK2_DS_CNTL 0x16E95
++#define mmCLK1_CLK3_DS_CNTL 0x16E9E
++#define mmCLK1_CLK4_DS_CNTL 0x16EA7
++#define mmCLK1_CLK5_DS_CNTL 0x16EB0
++
++#define mmCLK1_CLK0_ALLOW_DS 0x16E84
++#define mmCLK1_CLK1_ALLOW_DS 0x16E8D
++#define mmCLK1_CLK2_ALLOW_DS 0x16E96
++#define mmCLK1_CLK3_ALLOW_DS 0x16E9F
++#define mmCLK1_CLK4_ALLOW_DS 0x16EA8
++#define mmCLK1_CLK5_ALLOW_DS 0x16EB1
++
++#define mmCLK5_spll_field_8 0x1B24B
++#define mmDENTIST_DISPCLK_CNTL 0x0124
++#define regDENTIST_DISPCLK_CNTL 0x0064
++#define regDENTIST_DISPCLK_CNTL_BASE_IDX 1
++
++#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT 0x0
++#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT 0xc
++#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT 0x10
++#define CLK1_CLK_PLL_REQ__FbMult_int_MASK 0x000001FFL
++#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L
++#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L
++
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV_MASK 0x000F0000L
++// DENTIST_DISPCLK_CNTL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER__SHIFT 0x0
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER__SHIFT 0x8
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE__SHIFT 0x13
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE__SHIFT 0x14
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER__SHIFT 0x18
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER_MASK 0x0000007FL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER_MASK 0x00007F00L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE_MASK 0x00080000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE_MASK 0x00100000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER_MASK 0x7F000000L
++
++#define CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
+
+-#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT 0x0
+-#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT 0xc
+-#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT 0x10
+-#define CLK1_CLK_PLL_REQ__FbMult_int_MASK 0x000001FFL
+-#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L
+-#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L
++#define SMU_VER_THRESHOLD 0x5D4A00 //93.74.0
++#undef FN
++#define FN(reg_name, field_name) \
++ clk_mgr->clk_mgr_shift->field_name, clk_mgr->clk_mgr_mask->field_name
+
+-#define regCLK1_CLK2_BYPASS_CNTL 0x029c
+-#define regCLK1_CLK2_BYPASS_CNTL_BASE_IDX 0
++#define REG(reg) \
++ (clk_mgr->regs->reg)
+
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL__SHIFT 0x0
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV__SHIFT 0x10
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV_MASK 0x000F0000L
++#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+-#define regCLK5_0_CLK5_spll_field_8 0x464b
+-#define regCLK5_0_CLK5_spll_field_8_BASE_IDX 0
++#define BASE(seg) BASE_INNER(seg)
+
+-#define CLK5_0_CLK5_spll_field_8__spll_ssc_en__SHIFT 0xd
+-#define CLK5_0_CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
++#define SR(reg_name)\
++ .reg_name = BASE(reg ## reg_name ## _BASE_IDX) + \
++ reg ## reg_name
+
+-#define SMU_VER_THRESHOLD 0x5D4A00 //93.74.0
++#define CLK_SR_DCN35(reg_name)\
++ .reg_name = mm ## reg_name
+
+-#define REG(reg_name) \
+- (ctx->clk_reg_offsets[reg ## reg_name ## _BASE_IDX] + reg ## reg_name)
++static const struct clk_mgr_registers clk_mgr_regs_dcn35 = {
++ CLK_REG_LIST_DCN35()
++};
++
++static const struct clk_mgr_shift clk_mgr_shift_dcn35 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
++};
++
++static const struct clk_mgr_mask clk_mgr_mask_dcn35 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(_MASK)
++};
+
+ #define TO_CLK_MGR_DCN35(clk_mgr)\
+ container_of(clk_mgr, struct clk_mgr_dcn35, base)
+@@ -443,7 +507,6 @@ static int get_vco_frequency_from_reg(struct clk_mgr_internal *clk_mgr)
+ struct fixed31_32 pll_req;
+ unsigned int fbmult_frac_val = 0;
+ unsigned int fbmult_int_val = 0;
+- struct dc_context *ctx = clk_mgr->base.ctx;
+
+ /*
+ * Register value of fbmult is in 8.16 format, we are converting to 314.32
+@@ -503,12 +566,12 @@ static void dcn35_dump_clk_registers(struct clk_state_registers_and_bypass *regs
+ static bool dcn35_is_spll_ssc_enabled(struct clk_mgr *clk_mgr_base)
+ {
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+- struct dc_context *ctx = clk_mgr->base.ctx;
++
+ uint32_t ssc_enable;
+
+- REG_GET(CLK5_0_CLK5_spll_field_8, spll_ssc_en, &ssc_enable);
++ ssc_enable = REG_READ(CLK5_spll_field_8) & CLK5_spll_field_8__spll_ssc_en_MASK;
+
+- return ssc_enable == 1;
++ return ssc_enable != 0;
+ }
+
+ static void init_clk_states(struct clk_mgr *clk_mgr)
+@@ -633,10 +696,10 @@ static struct dcn35_ss_info_table ss_info_table = {
+
+ static void dcn35_read_ss_info_from_lut(struct clk_mgr_internal *clk_mgr)
+ {
+- struct dc_context *ctx = clk_mgr->base.ctx;
+- uint32_t clock_source;
++ uint32_t clock_source = 0;
++
++ clock_source = REG_READ(CLK1_CLK2_BYPASS_CNTL) & CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK;
+
+- REG_GET(CLK1_CLK2_BYPASS_CNTL, CLK2_BYPASS_SEL, &clock_source);
+ // If it's DFS mode, clock_source is 0.
+ if (dcn35_is_spll_ssc_enabled(&clk_mgr->base) && (clock_source < ARRAY_SIZE(ss_info_table.ss_percentage))) {
+ clk_mgr->dprefclk_ss_percentage = ss_info_table.ss_percentage[clock_source];
+@@ -1106,6 +1169,12 @@ void dcn35_clk_mgr_construct(
+ clk_mgr->base.dprefclk_ss_divider = 1000;
+ clk_mgr->base.ss_on_dprefclk = false;
+ clk_mgr->base.dfs_ref_freq_khz = 48000;
++ if (ctx->dce_version == DCN_VERSION_3_5) {
++ clk_mgr->base.regs = &clk_mgr_regs_dcn35;
++ clk_mgr->base.clk_mgr_shift = &clk_mgr_shift_dcn35;
++ clk_mgr->base.clk_mgr_mask = &clk_mgr_mask_dcn35;
++ }
++
+
+ clk_mgr->smu_wm_set.wm_set = (struct dcn35_watermarks *)dm_helpers_allocate_gpu_mem(
+ clk_mgr->base.base.ctx,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h
+index 1203dc605b12c4..a12a9bf90806ed 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h
+@@ -60,4 +60,8 @@ void dcn35_clk_mgr_construct(struct dc_context *ctx,
+
+ void dcn35_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr_int);
+
++void dcn351_clk_mgr_construct(struct dc_context *ctx,
++ struct clk_mgr_dcn35 *clk_mgr,
++ struct pp_smu_funcs *pp_smu,
++ struct dccg *dccg);
+ #endif //__DCN35_CLK_MGR_H__
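A recurring change in the dcn35_clk_mgr.c hunks above is replacing REG_GET() (which needed a dc_context plus generated per-field shift/mask pairs) with REG_READ() against the new absolute-offset register table, testing fields by explicit mask. A small compilable sketch of the new style, using stand-in names for the register and mask:

	#include <stdint.h>

	#define SPLL_FIELD_8		0x1B24B		/* absolute offset, per the table above */
	#define SPLL_SSC_EN_MASK	0x00002000u

	/* stand-in for the driver's REG_READ() MMIO accessor */
	static uint32_t reg_read(uint32_t offset) { (void)offset; return 0; }

	/* Read the whole register and test the field by mask; any non-zero
	 * masked value means enabled, hence the patch's "== 1" -> "!= 0". */
	static int spll_ssc_enabled(void)
	{
		return (reg_read(SPLL_FIELD_8) & SPLL_SSC_EN_MASK) != 0;
	}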
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+index c2dd061892f4d9..7a1ca1e98059b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+@@ -166,6 +166,41 @@ enum dentist_divider_range {
+ CLK_SR_DCN32(CLK1_CLK4_CURRENT_CNT), \
+ CLK_SR_DCN32(CLK4_CLK0_CURRENT_CNT)
+
++#define CLK_REG_LIST_DCN35() \
++ CLK_SR_DCN35(CLK1_CLK_PLL_REQ), \
++ CLK_SR_DCN35(CLK1_CLK0_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK1_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK2_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK3_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK4_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK5_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK0_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK1_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK2_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK3_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK4_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK5_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK0_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK1_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK2_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK3_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK4_BYPASS_CNTL),\
++ CLK_SR_DCN35(CLK1_CLK5_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK0_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK1_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK2_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK3_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK4_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK5_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK0_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK1_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK2_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK3_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK4_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK5_ALLOW_DS), \
++ CLK_SR_DCN35(CLK5_spll_field_8), \
++ SR(DENTIST_DISPCLK_CNTL), \
++
+ #define CLK_COMMON_MASK_SH_LIST_DCN32(mask_sh) \
+ CLK_COMMON_MASK_SH_LIST_DCN20_BASE(mask_sh),\
+ CLK_SF(CLK1_CLK_PLL_REQ, FbMult_int, mask_sh),\
+@@ -236,6 +271,7 @@ struct clk_mgr_registers {
+ uint32_t CLK1_CLK2_DFS_CNTL;
+ uint32_t CLK1_CLK3_DFS_CNTL;
+ uint32_t CLK1_CLK4_DFS_CNTL;
++ uint32_t CLK1_CLK5_DFS_CNTL;
+ uint32_t CLK2_CLK2_DFS_CNTL;
+
+ uint32_t CLK1_CLK0_CURRENT_CNT;
+@@ -243,11 +279,34 @@ struct clk_mgr_registers {
+ uint32_t CLK1_CLK2_CURRENT_CNT;
+ uint32_t CLK1_CLK3_CURRENT_CNT;
+ uint32_t CLK1_CLK4_CURRENT_CNT;
++ uint32_t CLK1_CLK5_CURRENT_CNT;
+
+ uint32_t CLK0_CLK0_DFS_CNTL;
+ uint32_t CLK0_CLK1_DFS_CNTL;
+ uint32_t CLK0_CLK3_DFS_CNTL;
+ uint32_t CLK0_CLK4_DFS_CNTL;
++ uint32_t CLK1_CLK0_BYPASS_CNTL;
++ uint32_t CLK1_CLK1_BYPASS_CNTL;
++ uint32_t CLK1_CLK2_BYPASS_CNTL;
++ uint32_t CLK1_CLK3_BYPASS_CNTL;
++ uint32_t CLK1_CLK4_BYPASS_CNTL;
++ uint32_t CLK1_CLK5_BYPASS_CNTL;
++
++ uint32_t CLK1_CLK0_DS_CNTL;
++ uint32_t CLK1_CLK1_DS_CNTL;
++ uint32_t CLK1_CLK2_DS_CNTL;
++ uint32_t CLK1_CLK3_DS_CNTL;
++ uint32_t CLK1_CLK4_DS_CNTL;
++ uint32_t CLK1_CLK5_DS_CNTL;
++
++ uint32_t CLK1_CLK0_ALLOW_DS;
++ uint32_t CLK1_CLK1_ALLOW_DS;
++ uint32_t CLK1_CLK2_ALLOW_DS;
++ uint32_t CLK1_CLK3_ALLOW_DS;
++ uint32_t CLK1_CLK4_ALLOW_DS;
++ uint32_t CLK1_CLK5_ALLOW_DS;
++ uint32_t CLK5_spll_field_8;
++
+ };
+
+ struct clk_mgr_shift {
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index d78c8ec4de79e7..885e749cdc6e96 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -51,9 +51,10 @@
+ #include "dc_dmub_srv.h"
+ #include "gpio_service_interface.h"
+
++#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
++
+ #define DC_LOGGER \
+ link->ctx->logger
+-#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
+
+ #ifndef MAX
+ #define MAX(X, Y) ((X) > (Y) ? (X) : (Y))
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index b1c294236cc878..2f1d9ce87ceb01 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3363,7 +3363,7 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
+ intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(dev_priv, port),
+ XELPDP_PORT_WIDTH_MASK | XELPDP_PORT_REVERSAL, port_buf);
+
+- buf_ctl |= DDI_PORT_WIDTH(lane_count);
++ buf_ctl |= DDI_PORT_WIDTH(crtc_state->lane_count);
+
+ if (DISPLAY_VER(dev_priv) >= 20)
+ buf_ctl |= XE2LPD_DDI_BUF_D2D_LINK_ENABLE;
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index b4ef4d59da1ace..2c6d0da8a16f8c 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -6369,12 +6369,30 @@ static int intel_async_flip_check_hw(struct intel_atomic_state *state, struct in
+ static int intel_joiner_add_affected_crtcs(struct intel_atomic_state *state)
+ {
+ struct drm_i915_private *i915 = to_i915(state->base.dev);
++ const struct intel_plane_state *plane_state;
+ struct intel_crtc_state *crtc_state;
++ struct intel_plane *plane;
+ struct intel_crtc *crtc;
+ u8 affected_pipes = 0;
+ u8 modeset_pipes = 0;
+ int i;
+
++ /*
++ * Any plane which is in use by the joiner needs its crtc.
++ * Pull those in first as this will not have happened yet
++ * if the plane remains disabled according to uapi.
++ */
++ for_each_new_intel_plane_in_state(state, plane, plane_state, i) {
++ crtc = to_intel_crtc(plane_state->hw.crtc);
++ if (!crtc)
++ continue;
++
++ crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++ }
++
++ /* Now pull in all joined crtcs */
+ for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+ affected_pipes |= crtc_state->joiner_pipes;
+ if (intel_crtc_needs_modeset(crtc_state))
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+index 40bedc31d6bf2f..5d8f93d4cdc6a6 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+@@ -1561,7 +1561,7 @@ intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
+
+ if (wait_for(intel_dp_128b132b_intra_hop(intel_dp, crtc_state) == 0, 500)) {
+ lt_err(intel_dp, DP_PHY_DPRX, "128b/132b intra-hop not clear\n");
+- return false;
++ goto out;
+ }
+
+ if (intel_dp_128b132b_lane_eq(intel_dp, crtc_state) &&
+@@ -1573,6 +1573,19 @@ intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
+ passed ? "passed" : "failed",
+ crtc_state->port_clock, crtc_state->lane_count);
+
++out:
++ /*
++ * Ensure that the training pattern does get set to TPS2 even in case
++ * of a failure, as is the case at the end of a passing link training
++ * and what is expected by the transcoder. Leaving TPS1 set (and
++ * disabling the link train mode in DP_TP_CTL later from TPS1 directly)
++ * would result in a stuck transcoder HW state and flip-done timeouts
++ * later in the modeset sequence.
++ */
++ if (!passed)
++ intel_dp_program_link_training_pattern(intel_dp, crtc_state,
++ DP_PHY_DPRX, DP_TRAINING_PATTERN_2);
++
+ return passed;
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index b0e94c95940f67..8aaadbb702df6d 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -3425,10 +3425,10 @@ static inline int guc_lrc_desc_unpin(struct intel_context *ce)
+ */
+ ret = deregister_context(ce, ce->guc_id.id);
+ if (ret) {
+- spin_lock(&ce->guc_state.lock);
++ spin_lock_irqsave(&ce->guc_state.lock, flags);
+ set_context_registered(ce);
+ clr_context_destroyed(ce);
+- spin_unlock(&ce->guc_state.lock);
++ spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+ /*
+ * As gt-pm is awake at function entry, intel_wakeref_put_async merely decrements
+ * the wakeref immediately but per function spec usage call this after unlock.
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 41f4350a7c6c58..b7f521a9b337d3 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -3869,7 +3869,7 @@ enum skl_power_gate {
+ #define DDI_BUF_IS_IDLE (1 << 7)
+ #define DDI_BUF_CTL_TC_PHY_OWNERSHIP REG_BIT(6)
+ #define DDI_A_4_LANES (1 << 4)
+-#define DDI_PORT_WIDTH(width) (((width) - 1) << 1)
++#define DDI_PORT_WIDTH(width) (((width) == 3 ? 4 : ((width) - 1)) << 1)
+ #define DDI_PORT_WIDTH_MASK (7 << 1)
+ #define DDI_PORT_WIDTH_SHIFT 1
+ #define DDI_INIT_DISPLAY_DETECTED (1 << 0)
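The DDI_PORT_WIDTH() fix above stops encoding a 3-lane port as width-1 (0b010); on the platforms this targets the hardware apparently expects 0b100 for the 3-lane case. Paired with the intel_ddi.c change that passes crtc_state->lane_count, the resulting encoding looks like this tiny illustrative helper (valid for lanes in 1..4):

	/* Mirrors the patched macro: width-1, except 3 lanes encode as 4.
	 * The << 1 matches DDI_PORT_WIDTH_SHIFT. */
	static unsigned int ddi_port_width_bits(unsigned int lanes)
	{
		return (lanes == 3 ? 4u : lanes - 1u) << 1;
	}
	/* lanes=1 -> 0<<1, 2 -> 1<<1, 3 -> 4<<1, 4 -> 3<<1 */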
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 421afacb724803..36cc9dbc00b5c1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -297,7 +297,7 @@ static const struct dpu_wb_cfg sm8150_wb[] = {
+ {
+ .name = "wb_2", .id = WB_2,
+ .base = 0x65000, .len = 0x2c8,
+- .features = WB_SDM845_MASK,
++ .features = WB_SM8250_MASK,
+ .format_list = wb2_formats_rgb,
+ .num_formats = ARRAY_SIZE(wb2_formats_rgb),
+ .clk_ctrl = DPU_CLK_CTRL_WB2,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index 641023b102bf59..e8eacdb47967a2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -304,7 +304,7 @@ static const struct dpu_wb_cfg sc8180x_wb[] = {
+ {
+ .name = "wb_2", .id = WB_2,
+ .base = 0x65000, .len = 0x2c8,
+- .features = WB_SDM845_MASK,
++ .features = WB_SM8250_MASK,
+ .format_list = wb2_formats_rgb,
+ .num_formats = ARRAY_SIZE(wb2_formats_rgb),
+ .clk_ctrl = DPU_CLK_CTRL_WB2,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
+index d039b96beb97cf..76f60a2df7a890 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
+@@ -144,7 +144,7 @@ static const struct dpu_wb_cfg sm6125_wb[] = {
+ {
+ .name = "wb_2", .id = WB_2,
+ .base = 0x65000, .len = 0x2c8,
+- .features = WB_SDM845_MASK,
++ .features = WB_SM8250_MASK,
+ .format_list = wb2_formats_rgb,
+ .num_formats = ARRAY_SIZE(wb2_formats_rgb),
+ .clk_ctrl = DPU_CLK_CTRL_WB2,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index bd3698bf0cf740..2cf8150adf81ff 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2125,6 +2125,9 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)
+ }
+ }
+
++ if (phys_enc->hw_pp && phys_enc->hw_pp->ops.setup_dither)
++ phys_enc->hw_pp->ops.setup_dither(phys_enc->hw_pp, NULL);
++
+ /* reset the merge 3D HW block */
+ if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d) {
+ phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+index 5e9aad1b2aa283..d1e0fb2139765c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+@@ -52,6 +52,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
+ u32 slice_last_group_size;
+ u32 det_thresh_flatness;
+ bool is_cmd_mode = !(mode & DSC_MODE_VIDEO);
++ bool input_10_bits = dsc->bits_per_component == 10;
+
+ DPU_REG_WRITE(c, DSC_COMMON_MODE, mode);
+
+@@ -68,7 +69,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
+ data |= (dsc->line_buf_depth << 3);
+ data |= (dsc->simple_422 << 2);
+ data |= (dsc->convert_rgb << 1);
+- data |= dsc->bits_per_component;
++ data |= input_10_bits;
+
+ DPU_REG_WRITE(c, DSC_ENC, data);
+
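The dpu_hw_dsc change above fixes a field-packing bug: bit 0 of DSC_ENC is a flag meaning "input is 10 bits per component", not a slot for the raw bits_per_component value, so ORing in 8/10/12 corrupted the neighbouring bits. The corrected packing as a standalone sketch:

	#include <stdint.h>

	/* Corrected DSC_ENC packing per the hunk above: bit 0 is a 10-bpc
	 * flag rather than the raw bits_per_component value. */
	static uint32_t dsc_enc_word(uint32_t line_buf_depth, uint32_t simple_422,
				     uint32_t convert_rgb, uint32_t bits_per_component)
	{
		uint32_t data = 0;

		data |= line_buf_depth << 3;
		data |= simple_422 << 2;
		data |= convert_rgb << 1;
		data |= (bits_per_component == 10);
		return data;
	}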
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+index 0f40eea7f5e247..2040bee8d512f6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+@@ -272,7 +272,7 @@ static void _setup_mdp_ops(struct dpu_hw_mdp_ops *ops,
+
+ if (cap & BIT(DPU_MDP_VSYNC_SEL))
+ ops->setup_vsync_source = dpu_hw_setup_vsync_sel;
+- else
++ else if (!(cap & BIT(DPU_MDP_PERIPH_0_REMOVED)))
+ ops->setup_vsync_source = dpu_hw_setup_wd_timer;
+
+ ops->get_safe_status = dpu_hw_get_safe_status;
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 031446c87daec0..798168180c1ab6 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -83,6 +83,9 @@ struct dsi_pll_7nm {
+ /* protects REG_DSI_7nm_PHY_CMN_CLK_CFG0 register */
+ spinlock_t postdiv_lock;
+
++ /* protects REG_DSI_7nm_PHY_CMN_CLK_CFG1 register */
++ spinlock_t pclk_mux_lock;
++
+ struct pll_7nm_cached_state cached_state;
+
+ struct dsi_pll_7nm *slave;
+@@ -372,22 +375,41 @@ static void dsi_pll_enable_pll_bias(struct dsi_pll_7nm *pll)
+ ndelay(250);
+ }
+
+-static void dsi_pll_disable_global_clk(struct dsi_pll_7nm *pll)
++static void dsi_pll_cmn_clk_cfg0_write(struct dsi_pll_7nm *pll, u32 val)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&pll->postdiv_lock, flags);
++ writel(val, pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG0);
++ spin_unlock_irqrestore(&pll->postdiv_lock, flags);
++}
++
++static void dsi_pll_cmn_clk_cfg1_update(struct dsi_pll_7nm *pll, u32 mask,
++ u32 val)
++{
++ unsigned long flags;
+ u32 data;
+
++ spin_lock_irqsave(&pll->pclk_mux_lock, flags);
+ data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
+- writel(data & ~BIT(5), pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ data &= ~mask;
++ data |= val & mask;
++
++ writel(data, pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ spin_unlock_irqrestore(&pll->pclk_mux_lock, flags);
++}
++
++static void dsi_pll_disable_global_clk(struct dsi_pll_7nm *pll)
++{
++ dsi_pll_cmn_clk_cfg1_update(pll, DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN, 0);
+ }
+
+ static void dsi_pll_enable_global_clk(struct dsi_pll_7nm *pll)
+ {
+- u32 data;
++ u32 cfg_1 = DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN | DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN_SEL;
+
+ writel(0x04, pll->phy->base + REG_DSI_7nm_PHY_CMN_CTRL_3);
+-
+- data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
+- writel(data | BIT(5) | BIT(4), pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ dsi_pll_cmn_clk_cfg1_update(pll, cfg_1, cfg_1);
+ }
+
+ static void dsi_pll_phy_dig_reset(struct dsi_pll_7nm *pll)
+@@ -565,7 +587,6 @@ static int dsi_7nm_pll_restore_state(struct msm_dsi_phy *phy)
+ {
+ struct dsi_pll_7nm *pll_7nm = to_pll_7nm(phy->vco_hw);
+ struct pll_7nm_cached_state *cached = &pll_7nm->cached_state;
+- void __iomem *phy_base = pll_7nm->phy->base;
+ u32 val;
+ int ret;
+
+@@ -574,13 +595,10 @@ static int dsi_7nm_pll_restore_state(struct msm_dsi_phy *phy)
+ val |= cached->pll_out_div;
+ writel(val, pll_7nm->phy->pll_base + REG_DSI_7nm_PHY_PLL_PLL_OUTDIV_RATE);
+
+- writel(cached->bit_clk_div | (cached->pix_clk_div << 4),
+- phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG0);
+-
+- val = readl(phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
+- val &= ~0x3;
+- val |= cached->pll_mux;
+- writel(val, phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ dsi_pll_cmn_clk_cfg0_write(pll_7nm,
++ DSI_7nm_PHY_CMN_CLK_CFG0_DIV_CTRL_3_0(cached->bit_clk_div) |
++ DSI_7nm_PHY_CMN_CLK_CFG0_DIV_CTRL_7_4(cached->pix_clk_div));
++ dsi_pll_cmn_clk_cfg1_update(pll_7nm, 0x3, cached->pll_mux);
+
+ ret = dsi_pll_7nm_vco_set_rate(phy->vco_hw,
+ pll_7nm->vco_current_rate,
+@@ -599,7 +617,6 @@ static int dsi_7nm_pll_restore_state(struct msm_dsi_phy *phy)
+ static int dsi_7nm_set_usecase(struct msm_dsi_phy *phy)
+ {
+ struct dsi_pll_7nm *pll_7nm = to_pll_7nm(phy->vco_hw);
+- void __iomem *base = phy->base;
+ u32 data = 0x0; /* internal PLL */
+
+ DBG("DSI PLL%d", pll_7nm->phy->id);
+@@ -618,7 +635,8 @@ static int dsi_7nm_set_usecase(struct msm_dsi_phy *phy)
+ }
+
+ /* set PLL src */
+- writel(data << 2, base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ dsi_pll_cmn_clk_cfg1_update(pll_7nm, DSI_7nm_PHY_CMN_CLK_CFG1_BITCLK_SEL__MASK,
++ DSI_7nm_PHY_CMN_CLK_CFG1_BITCLK_SEL(data));
+
+ return 0;
+ }
+@@ -733,7 +751,7 @@ static int pll_7nm_register(struct dsi_pll_7nm *pll_7nm, struct clk_hw **provide
+ pll_by_2_bit,
+ }), 2, 0, pll_7nm->phy->base +
+ REG_DSI_7nm_PHY_CMN_CLK_CFG1,
+- 0, 1, 0, NULL);
++ 0, 1, 0, &pll_7nm->pclk_mux_lock);
+ if (IS_ERR(hw)) {
+ ret = PTR_ERR(hw);
+ goto fail;
+@@ -778,6 +796,7 @@ static int dsi_pll_7nm_init(struct msm_dsi_phy *phy)
+ pll_7nm_list[phy->id] = pll_7nm;
+
+ spin_lock_init(&pll_7nm->postdiv_lock);
++ spin_lock_init(&pll_7nm->pclk_mux_lock);
+
+ pll_7nm->phy = phy;
+
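The two helpers introduced above exist because REG_DSI_7nm_PHY_CMN_CLK_CFG1 is now shared between the PLL code and the registered pclk mux (note the new &pll_7nm->pclk_mux_lock handed to the mux at registration), so every read-modify-write of that register must happen under the same lock. The generic RMW-under-lock shape, sketched with a userspace mutex standing in for the spinlock:

	#include <pthread.h>
	#include <stdint.h>

	static pthread_mutex_t cfg1_lock = PTHREAD_MUTEX_INITIALIZER;
	static volatile uint32_t cfg1_reg;	/* stand-in for the MMIO register */

	/* Update only the bits selected by mask, atomically with respect to
	 * every other locked user; mirrors dsi_pll_cmn_clk_cfg1_update(). */
	static void cfg1_update(uint32_t mask, uint32_t val)
	{
		uint32_t data;

		pthread_mutex_lock(&cfg1_lock);
		data = cfg1_reg;
		data &= ~mask;
		data |= val & mask;
		cfg1_reg = data;
		pthread_mutex_unlock(&cfg1_lock);
	}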
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index 2e28a13446366c..9526b22038ab82 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -543,15 +543,12 @@ static inline int align_pitch(int width, int bpp)
+ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
+ {
+ ktime_t now = ktime_get();
+- s64 remaining_jiffies;
+
+- if (ktime_compare(*timeout, now) < 0) {
+- remaining_jiffies = 0;
+- } else {
+- ktime_t rem = ktime_sub(*timeout, now);
+- remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
+- }
++ if (ktime_compare(*timeout, now) <= 0)
++ return 0;
+
++ ktime_t rem = ktime_sub(*timeout, now);
++ s64 remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
+ return clamp(remaining_jiffies, 1LL, (s64)INT_MAX);
+ }
+
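The timeout_to_jiffies() rewrite above keeps the old semantics in a flatter shape: an already-expired timeout (now compared with <= rather than <, so "exactly now" also counts) returns 0, and any future timeout is clamped to at least one jiffy so a sub-jiffy remainder cannot round down to "don't wait at all". A plain-C rendering with the ktime helpers replaced by raw nanosecond arithmetic:

	#include <stdint.h>

	#define HZ_STUB		250		/* illustrative tick rate */
	#define NSEC_PER_SEC	1000000000LL

	static int64_t timeout_to_jiffies_ns(int64_t timeout_ns, int64_t now_ns)
	{
		int64_t rem, jiffies;

		if (timeout_ns <= now_ns)
			return 0;			/* already expired */

		rem = timeout_ns - now_ns;
		jiffies = rem / (NSEC_PER_SEC / HZ_STUB);

		/* clamp(j, 1, INT_MAX): never 0 for a future deadline */
		if (jiffies < 1)
			jiffies = 1;
		if (jiffies > INT32_MAX)
			jiffies = INT32_MAX;
		return jiffies;
	}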
+diff --git a/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml b/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
+index d54b72f924493b..35f7f40e405b7d 100644
+--- a/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
++++ b/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
+@@ -9,8 +9,15 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ <reg32 offset="0x00004" name="REVISION_ID1"/>
+ <reg32 offset="0x00008" name="REVISION_ID2"/>
+ <reg32 offset="0x0000c" name="REVISION_ID3"/>
+- <reg32 offset="0x00010" name="CLK_CFG0"/>
+- <reg32 offset="0x00014" name="CLK_CFG1"/>
++ <reg32 offset="0x00010" name="CLK_CFG0">
++ <bitfield name="DIV_CTRL_3_0" low="0" high="3" type="uint"/>
++ <bitfield name="DIV_CTRL_7_4" low="4" high="7" type="uint"/>
++ </reg32>
++ <reg32 offset="0x00014" name="CLK_CFG1">
++ <bitfield name="CLK_EN" pos="5" type="boolean"/>
++ <bitfield name="CLK_EN_SEL" pos="4" type="boolean"/>
++ <bitfield name="BITCLK_SEL" low="2" high="3" type="uint"/>
++ </reg32>
+ <reg32 offset="0x00018" name="GLBL_CTRL"/>
+ <reg32 offset="0x0001c" name="RBUF_CTRL"/>
+ <reg32 offset="0x00020" name="VREG_CTRL_0"/>
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index b4da82ddbb6b2f..8ea98f06d39afc 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -590,6 +590,7 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
+ unsigned long timeout =
+ jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
+ struct mm_struct *mm = svmm->notifier.mm;
++ struct folio *folio;
+ struct page *page;
+ unsigned long start = args->p.addr;
+ unsigned long notifier_seq;
+@@ -616,12 +617,16 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
+ ret = -EINVAL;
+ goto out;
+ }
++ folio = page_folio(page);
+
+ mutex_lock(&svmm->mutex);
+ 		if (!mmu_interval_read_retry(&notifier->notifier,
+ notifier_seq))
+ break;
+ mutex_unlock(&svmm->mutex);
++
++ folio_unlock(folio);
++ folio_put(folio);
+ }
+
+ /* Map the page on the GPU. */
+@@ -637,8 +642,8 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
+ ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL);
+ mutex_unlock(&svmm->mutex);
+
+- unlock_page(page);
+- put_page(page);
++ folio_unlock(folio);
++ folio_put(folio);
+
+ out:
+ 	mmu_interval_notifier_remove(&notifier->notifier);
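The nouveau change above fixes a leak in the retry path: the faulted page comes back locked and referenced, and before this patch a notifier-sequence retry looped around without dropping either; the fix also moves the bookkeeping to the folio API. The corrected loop shape as a compilable toy, with placeholders for the fault/notifier/folio primitives:

	#include <stdbool.h>
	#include <stddef.h>

	struct folio_stub { int locked, refs; };

	static struct folio_stub *fault_in_locked_folio(void)
	{
		static struct folio_stub f;
		f.locked = 1; f.refs = 1;	/* fault returns locked + referenced */
		return &f;
	}
	static bool notifier_seq_stale(void) { return false; }
	static void stub_unlock(struct folio_stub *f) { f->locked = 0; }
	static void stub_put(struct folio_stub *f)    { f->refs--; }

	/* On a stale notifier sequence, release the lock and reference taken
	 * by the fault before retrying; the pre-fix code leaked both. */
	static struct folio_stub *fault_with_retry(void)
	{
		struct folio_stub *folio;

		for (;;) {
			folio = fault_in_locked_folio();
			if (!folio)
				return NULL;
			if (!notifier_seq_stale())
				return folio;	/* caller unlocks/puts when done */

			stub_unlock(folio);	/* previously missing on retry */
			stub_put(folio);
		}
	}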
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index a6f410ba60bc94..d393bc540f8628 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -75,7 +75,7 @@ gp10b_pmu_acr = {
+ .bootstrap_multiple_falcons = gp10b_pmu_acr_bootstrap_multiple_falcons,
+ };
+
+-#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
++#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC)
+ MODULE_FIRMWARE("nvidia/gp10b/pmu/desc.bin");
+ MODULE_FIRMWARE("nvidia/gp10b/pmu/image.bin");
+ MODULE_FIRMWARE("nvidia/gp10b/pmu/sig.bin");
+diff --git a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+index 45d09e6fa667fd..7d68a8acfe2ea4 100644
+--- a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
++++ b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+@@ -109,13 +109,13 @@ static int jadard_prepare(struct drm_panel *panel)
+ if (jadard->desc->lp11_to_reset_delay_ms)
+ msleep(jadard->desc->lp11_to_reset_delay_ms);
+
+- gpiod_set_value(jadard->reset, 1);
++ gpiod_set_value(jadard->reset, 0);
+ msleep(5);
+
+- gpiod_set_value(jadard->reset, 0);
++ gpiod_set_value(jadard->reset, 1);
+ msleep(10);
+
+- gpiod_set_value(jadard->reset, 1);
++ gpiod_set_value(jadard->reset, 0);
+ msleep(130);
+
+ ret = jadard->desc->init(jadard);
+@@ -1130,7 +1130,7 @@ static int jadard_dsi_probe(struct mipi_dsi_device *dsi)
+ dsi->format = desc->format;
+ dsi->lanes = desc->lanes;
+
+- jadard->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
++ jadard->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(jadard->reset)) {
+ DRM_DEV_ERROR(&dsi->dev, "failed to get our reset GPIO\n");
+ return PTR_ERR(jadard->reset);
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 6fc00d63b2857f..e6744422dee492 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -36,11 +36,17 @@
+ #include "xe_pm.h"
+ #include "xe_sched_job.h"
+ #include "xe_sriov.h"
++#include "xe_sync.h"
+
+ #define DEFAULT_POLL_FREQUENCY_HZ 200
+ #define DEFAULT_POLL_PERIOD_NS (NSEC_PER_SEC / DEFAULT_POLL_FREQUENCY_HZ)
+ #define XE_OA_UNIT_INVALID U32_MAX
+
++enum xe_oa_submit_deps {
++ XE_OA_SUBMIT_NO_DEPS,
++ XE_OA_SUBMIT_ADD_DEPS,
++};
++
+ struct xe_oa_reg {
+ struct xe_reg addr;
+ u32 value;
+@@ -63,13 +69,8 @@ struct xe_oa_config {
+ struct rcu_head rcu;
+ };
+
+-struct flex {
+- struct xe_reg reg;
+- u32 offset;
+- u32 value;
+-};
+-
+ struct xe_oa_open_param {
++ struct xe_file *xef;
+ u32 oa_unit_id;
+ bool sample;
+ u32 metric_set;
+@@ -81,6 +82,9 @@ struct xe_oa_open_param {
+ struct xe_exec_queue *exec_q;
+ struct xe_hw_engine *hwe;
+ bool no_preempt;
++ struct drm_xe_sync __user *syncs_user;
++ int num_syncs;
++ struct xe_sync_entry *syncs;
+ };
+
+ struct xe_oa_config_bo {
+@@ -567,32 +571,60 @@ static __poll_t xe_oa_poll(struct file *file, poll_table *wait)
+ return ret;
+ }
+
+-static int xe_oa_submit_bb(struct xe_oa_stream *stream, struct xe_bb *bb)
++static void xe_oa_lock_vma(struct xe_exec_queue *q)
++{
++ if (q->vm) {
++ down_read(&q->vm->lock);
++ xe_vm_lock(q->vm, false);
++ }
++}
++
++static void xe_oa_unlock_vma(struct xe_exec_queue *q)
+ {
++ if (q->vm) {
++ xe_vm_unlock(q->vm);
++ up_read(&q->vm->lock);
++ }
++}
++
++static struct dma_fence *xe_oa_submit_bb(struct xe_oa_stream *stream, enum xe_oa_submit_deps deps,
++ struct xe_bb *bb)
++{
++ struct xe_exec_queue *q = stream->exec_q ?: stream->k_exec_q;
+ struct xe_sched_job *job;
+ struct dma_fence *fence;
+- long timeout;
+ int err = 0;
+
+- /* Kernel configuration is issued on stream->k_exec_q, not stream->exec_q */
+- job = xe_bb_create_job(stream->k_exec_q, bb);
++ xe_oa_lock_vma(q);
++
++ job = xe_bb_create_job(q, bb);
+ if (IS_ERR(job)) {
+ err = PTR_ERR(job);
+ goto exit;
+ }
++ job->ggtt = true;
++
++ if (deps == XE_OA_SUBMIT_ADD_DEPS) {
++ for (int i = 0; i < stream->num_syncs && !err; i++)
++ err = xe_sync_entry_add_deps(&stream->syncs[i], job);
++ if (err) {
++ drm_dbg(&stream->oa->xe->drm, "xe_sync_entry_add_deps err %d\n", err);
++ goto err_put_job;
++ }
++ }
+
+ xe_sched_job_arm(job);
+ fence = dma_fence_get(&job->drm.s_fence->finished);
+ xe_sched_job_push(job);
+
+- timeout = dma_fence_wait_timeout(fence, false, HZ);
+- dma_fence_put(fence);
+- if (timeout < 0)
+- err = timeout;
+- else if (!timeout)
+- err = -ETIME;
++ xe_oa_unlock_vma(q);
++
++ return fence;
++err_put_job:
++ xe_sched_job_put(job);
+ exit:
+- return err;
++ xe_oa_unlock_vma(q);
++ return ERR_PTR(err);
+ }
+
+ static void write_cs_mi_lri(struct xe_bb *bb, const struct xe_oa_reg *reg_data, u32 n_regs)
+@@ -639,54 +671,30 @@ static void xe_oa_free_configs(struct xe_oa_stream *stream)
+ free_oa_config_bo(oa_bo);
+ }
+
+-static void xe_oa_store_flex(struct xe_oa_stream *stream, struct xe_lrc *lrc,
+- struct xe_bb *bb, const struct flex *flex, u32 count)
+-{
+- u32 offset = xe_bo_ggtt_addr(lrc->bo);
+-
+- do {
+- bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1);
+- bb->cs[bb->len++] = offset + flex->offset * sizeof(u32);
+- bb->cs[bb->len++] = 0;
+- bb->cs[bb->len++] = flex->value;
+-
+- } while (flex++, --count);
+-}
+-
+-static int xe_oa_modify_ctx_image(struct xe_oa_stream *stream, struct xe_lrc *lrc,
+- const struct flex *flex, u32 count)
++static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count)
+ {
++ struct dma_fence *fence;
+ struct xe_bb *bb;
+ int err;
+
+- bb = xe_bb_new(stream->gt, 4 * count, false);
++ bb = xe_bb_new(stream->gt, 2 * count + 1, false);
+ if (IS_ERR(bb)) {
+ err = PTR_ERR(bb);
+ goto exit;
+ }
+
+- xe_oa_store_flex(stream, lrc, bb, flex, count);
+-
+- err = xe_oa_submit_bb(stream, bb);
+- xe_bb_free(bb, NULL);
+-exit:
+- return err;
+-}
+-
+-static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri)
+-{
+- struct xe_bb *bb;
+- int err;
++ write_cs_mi_lri(bb, reg_lri, count);
+
+- bb = xe_bb_new(stream->gt, 3, false);
+- if (IS_ERR(bb)) {
+- err = PTR_ERR(bb);
+- goto exit;
++ fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
++ if (IS_ERR(fence)) {
++ err = PTR_ERR(fence);
++ goto free_bb;
+ }
++ xe_bb_free(bb, fence);
++ dma_fence_put(fence);
+
+- write_cs_mi_lri(bb, reg_lri, 1);
+-
+- err = xe_oa_submit_bb(stream, bb);
++ return 0;
++free_bb:
+ xe_bb_free(bb, NULL);
+ exit:
+ return err;
+@@ -695,70 +703,54 @@ static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *re
+ static int xe_oa_configure_oar_context(struct xe_oa_stream *stream, bool enable)
+ {
+ const struct xe_oa_format *format = stream->oa_buffer.format;
+- struct xe_lrc *lrc = stream->exec_q->lrc[0];
+- u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
+ u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
+ (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
+
+- struct flex regs_context[] = {
++ struct xe_oa_reg reg_lri[] = {
+ {
+ OACTXCONTROL(stream->hwe->mmio_base),
+- stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
+ enable ? OA_COUNTER_RESUME : 0,
+ },
++ {
++ OAR_OACONTROL,
++ oacontrol,
++ },
+ {
+ RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
+- regs_offset + CTX_CONTEXT_CONTROL,
+- _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE),
++ _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
++ enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0)
+ },
+ };
+- struct xe_oa_reg reg_lri = { OAR_OACONTROL, oacontrol };
+- int err;
+
+- /* Modify stream hwe context image with regs_context */
+- err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
+- regs_context, ARRAY_SIZE(regs_context));
+- if (err)
+- return err;
+-
+- /* Apply reg_lri using LRI */
+-	return xe_oa_load_with_lri(stream, &reg_lri);
++ return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
+ }
+
+ static int xe_oa_configure_oac_context(struct xe_oa_stream *stream, bool enable)
+ {
+ const struct xe_oa_format *format = stream->oa_buffer.format;
+- struct xe_lrc *lrc = stream->exec_q->lrc[0];
+- u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
+ u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
+ (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
+- struct flex regs_context[] = {
++ struct xe_oa_reg reg_lri[] = {
+ {
+ OACTXCONTROL(stream->hwe->mmio_base),
+- stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
+ enable ? OA_COUNTER_RESUME : 0,
+ },
++ {
++ OAC_OACONTROL,
++ oacontrol
++ },
+ {
+ RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
+- regs_offset + CTX_CONTEXT_CONTROL,
+- _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE) |
++ _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
++ enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) |
+ _MASKED_FIELD(CTX_CTRL_RUN_ALONE, enable ? CTX_CTRL_RUN_ALONE : 0),
+ },
+ };
+- struct xe_oa_reg reg_lri = { OAC_OACONTROL, oacontrol };
+- int err;
+
+ /* Set ccs select to enable programming of OAC_OACONTROL */
+ xe_mmio_write32(stream->gt, __oa_regs(stream)->oa_ctrl, __oa_ccs_select(stream));
+
+- /* Modify stream hwe context image with regs_context */
+- err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
+- regs_context, ARRAY_SIZE(regs_context));
+- if (err)
+- return err;
+-
+- /* Apply reg_lri using LRI */
+-	return xe_oa_load_with_lri(stream, &reg_lri);
++ return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
+ }
+
+ static int xe_oa_configure_oa_context(struct xe_oa_stream *stream, bool enable)
+@@ -914,15 +906,32 @@ static int xe_oa_emit_oa_config(struct xe_oa_stream *stream, struct xe_oa_config
+ {
+ #define NOA_PROGRAM_ADDITIONAL_DELAY_US 500
+ struct xe_oa_config_bo *oa_bo;
+- int err, us = NOA_PROGRAM_ADDITIONAL_DELAY_US;
++ int err = 0, us = NOA_PROGRAM_ADDITIONAL_DELAY_US;
++ struct dma_fence *fence;
++ long timeout;
+
++ /* Emit OA configuration batch */
+ oa_bo = xe_oa_alloc_config_buffer(stream, config);
+ if (IS_ERR(oa_bo)) {
+ err = PTR_ERR(oa_bo);
+ goto exit;
+ }
+
+- err = xe_oa_submit_bb(stream, oa_bo->bb);
++ fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_ADD_DEPS, oa_bo->bb);
++ if (IS_ERR(fence)) {
++ err = PTR_ERR(fence);
++ goto exit;
++ }
++
++ /* Wait till all previous batches have executed */
++ timeout = dma_fence_wait_timeout(fence, false, 5 * HZ);
++ dma_fence_put(fence);
++ if (timeout < 0)
++ err = timeout;
++ else if (!timeout)
++ err = -ETIME;
++ if (err)
++ drm_dbg(&stream->oa->xe->drm, "dma_fence_wait_timeout err %d\n", err);
+
+ /* Additional empirical delay needed for NOA programming after registers are written */
+ usleep_range(us, 2 * us);
+@@ -1362,6 +1371,9 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
+ stream->period_exponent = param->period_exponent;
+ stream->no_preempt = param->no_preempt;
+
++ stream->num_syncs = param->num_syncs;
++ stream->syncs = param->syncs;
++
+ /*
+ * For Xe2+, when overrun mode is enabled, there are no partial reports at the end
+ * of buffer, making the OA buffer effectively a non-power-of-2 size circular
+@@ -1712,6 +1724,20 @@ static int xe_oa_set_no_preempt(struct xe_oa *oa, u64 value,
+ return 0;
+ }
+
++static int xe_oa_set_prop_num_syncs(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->num_syncs = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_syncs_user(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->syncs_user = u64_to_user_ptr(value);
++ return 0;
++}
++
+ typedef int (*xe_oa_set_property_fn)(struct xe_oa *oa, u64 value,
+ struct xe_oa_open_param *param);
+ static const xe_oa_set_property_fn xe_oa_set_property_funcs[] = {
+@@ -1724,6 +1750,8 @@ static const xe_oa_set_property_fn xe_oa_set_property_funcs[] = {
+ [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_exec_queue_id,
+ [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_engine_instance,
+ [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_no_preempt,
++ [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
++ [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
+ };
+
+ static int xe_oa_user_ext_set_property(struct xe_oa *oa, u64 extension,
+@@ -1783,6 +1811,49 @@ static int xe_oa_user_extensions(struct xe_oa *oa, u64 extension, int ext_number
+ return 0;
+ }
+
++static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param)
++{
++ int ret, num_syncs, num_ufence = 0;
++
++ if (param->num_syncs && !param->syncs_user) {
++ drm_dbg(&oa->xe->drm, "num_syncs specified without sync array\n");
++ ret = -EINVAL;
++ goto exit;
++ }
++
++ if (param->num_syncs) {
++ param->syncs = kcalloc(param->num_syncs, sizeof(*param->syncs), GFP_KERNEL);
++ if (!param->syncs) {
++ ret = -ENOMEM;
++ goto exit;
++ }
++ }
++
++ for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) {
++ ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs],
++ &param->syncs_user[num_syncs], 0);
++ if (ret)
++ goto err_syncs;
++
++ if (xe_sync_is_ufence(&param->syncs[num_syncs]))
++ num_ufence++;
++ }
++
++ if (XE_IOCTL_DBG(oa->xe, num_ufence > 1)) {
++ ret = -EINVAL;
++ goto err_syncs;
++ }
++
++ return 0;
++
++err_syncs:
++ while (num_syncs--)
++ xe_sync_entry_cleanup(&param->syncs[num_syncs]);
++ kfree(param->syncs);
++exit:
++ return ret;
++}
++
+ /**
+ * xe_oa_stream_open_ioctl - Opens an OA stream
+ * @dev: @drm_device
+@@ -1808,6 +1879,7 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ return -ENODEV;
+ }
+
++ param.xef = xef;
+ ret = xe_oa_user_extensions(oa, data, 0, &param);
+ if (ret)
+ return ret;
+@@ -1817,8 +1889,8 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ if (XE_IOCTL_DBG(oa->xe, !param.exec_q))
+ return -ENOENT;
+
+- if (param.exec_q->width > 1)
+- drm_dbg(&oa->xe->drm, "exec_q->width > 1, programming only exec_q->lrc[0]\n");
++ if (XE_IOCTL_DBG(oa->xe, param.exec_q->width > 1))
++ return -EOPNOTSUPP;
+ }
+
+ /*
+@@ -1876,11 +1948,24 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ drm_dbg(&oa->xe->drm, "Using periodic sampling freq %lld Hz\n", oa_freq_hz);
+ }
+
++ ret = xe_oa_parse_syncs(oa, &param);
++ if (ret)
++ goto err_exec_q;
++
+ mutex_lock(&param.hwe->gt->oa.gt_lock);
+ ret = xe_oa_stream_open_ioctl_locked(oa, &param);
+ mutex_unlock(&param.hwe->gt->oa.gt_lock);
++ if (ret < 0)
++ goto err_sync_cleanup;
++
++ return ret;
++
++err_sync_cleanup:
++ while (param.num_syncs--)
++ xe_sync_entry_cleanup(&param.syncs[param.num_syncs]);
++ kfree(param.syncs);
+ err_exec_q:
+- if (ret < 0 && param.exec_q)
++ if (param.exec_q)
+ xe_exec_queue_put(param.exec_q);
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
+index 8862eca73fbe32..99f4b2d4bdcf6a 100644
+--- a/drivers/gpu/drm/xe/xe_oa_types.h
++++ b/drivers/gpu/drm/xe/xe_oa_types.h
+@@ -238,5 +238,11 @@ struct xe_oa_stream {
+
+ /** @no_preempt: Whether preemption and timeslicing is disabled for stream exec_q */
+ u32 no_preempt;
++
++ /** @num_syncs: size of @syncs array */
++ u32 num_syncs;
++
++ /** @syncs: syncs to wait on and to signal */
++ struct xe_sync_entry *syncs;
+ };
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 1c96375bd7df75..6fec5d1a1eb44b 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -679,7 +679,7 @@ static int query_oa_units(struct xe_device *xe,
+ du->oa_unit_id = u->oa_unit_id;
+ du->oa_unit_type = u->type;
+ du->oa_timestamp_freq = xe_oa_timestamp_frequency(gt);
+- du->capabilities = DRM_XE_OA_CAPS_BASE;
++ du->capabilities = DRM_XE_OA_CAPS_BASE | DRM_XE_OA_CAPS_SYNCS;
+
+ j = 0;
+ for_each_hw_engine(hwe, gt, hwe_id) {
+diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
+index 0be4f489d3e126..9f327f27c0726e 100644
+--- a/drivers/gpu/drm/xe/xe_ring_ops.c
++++ b/drivers/gpu/drm/xe/xe_ring_ops.c
+@@ -221,7 +221,10 @@ static int emit_pipe_imm_ggtt(u32 addr, u32 value, bool stall_only, u32 *dw,
+
+ static u32 get_ppgtt_flag(struct xe_sched_job *job)
+ {
+- return job->q->vm ? BIT(8) : 0;
++ if (job->q->vm && !job->ggtt)
++ return BIT(8);
++
++ return 0;
+ }
+
+ static int emit_copy_timestamp(struct xe_lrc *lrc, u32 *dw, int i)
+diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
+index 0d3f76fb05cea2..c207361bf43e1c 100644
+--- a/drivers/gpu/drm/xe/xe_sched_job_types.h
++++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
+@@ -57,6 +57,8 @@ struct xe_sched_job {
+ u32 migrate_flush_flags;
+ /** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */
+ bool ring_ops_flush_tlb;
++ /** @ggtt: mapped in ggtt. */
++ bool ggtt;
+ /** @ptrs: per instance pointers. */
+ struct xe_job_ptrs ptrs[];
+ };
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 380aa1614442f4..3d1459b551bb2e 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -667,23 +667,50 @@ static void synaptics_pt_stop(struct serio *serio)
+ serio_continue_rx(parent->ps2dev.serio);
+ }
+
++static int synaptics_pt_open(struct serio *serio)
++{
++ struct psmouse *parent = psmouse_from_serio(serio->parent);
++ struct synaptics_data *priv = parent->private;
++
++ guard(serio_pause_rx)(parent->ps2dev.serio);
++ priv->pt_port_open = true;
++
++ return 0;
++}
++
++static void synaptics_pt_close(struct serio *serio)
++{
++ struct psmouse *parent = psmouse_from_serio(serio->parent);
++ struct synaptics_data *priv = parent->private;
++
++ guard(serio_pause_rx)(parent->ps2dev.serio);
++ priv->pt_port_open = false;
++}
++
+ static int synaptics_is_pt_packet(u8 *buf)
+ {
+ return (buf[0] & 0xFC) == 0x84 && (buf[3] & 0xCC) == 0xC4;
+ }
+
+-static void synaptics_pass_pt_packet(struct serio *ptport, u8 *packet)
++static void synaptics_pass_pt_packet(struct synaptics_data *priv, u8 *packet)
+ {
+- struct psmouse *child = psmouse_from_serio(ptport);
++ struct serio *ptport;
+
+- if (child && child->state == PSMOUSE_ACTIVATED) {
+- serio_interrupt(ptport, packet[1], 0);
+- serio_interrupt(ptport, packet[4], 0);
+- serio_interrupt(ptport, packet[5], 0);
+- if (child->pktsize == 4)
+- serio_interrupt(ptport, packet[2], 0);
+- } else {
+- serio_interrupt(ptport, packet[1], 0);
++ ptport = priv->pt_port;
++ if (!ptport)
++ return;
++
++ serio_interrupt(ptport, packet[1], 0);
++
++ if (priv->pt_port_open) {
++ struct psmouse *child = psmouse_from_serio(ptport);
++
++ if (child->state == PSMOUSE_ACTIVATED) {
++ serio_interrupt(ptport, packet[4], 0);
++ serio_interrupt(ptport, packet[5], 0);
++ if (child->pktsize == 4)
++ serio_interrupt(ptport, packet[2], 0);
++ }
+ }
+ }
+
+@@ -722,6 +749,8 @@ static void synaptics_pt_create(struct psmouse *psmouse)
+ serio->write = synaptics_pt_write;
+ serio->start = synaptics_pt_start;
+ serio->stop = synaptics_pt_stop;
++ serio->open = synaptics_pt_open;
++ serio->close = synaptics_pt_close;
+ serio->parent = psmouse->ps2dev.serio;
+
+ psmouse->pt_activate = synaptics_pt_activate;
+@@ -1218,11 +1247,10 @@ static psmouse_ret_t synaptics_process_byte(struct psmouse *psmouse)
+
+ if (SYN_CAP_PASS_THROUGH(priv->info.capabilities) &&
+ synaptics_is_pt_packet(psmouse->packet)) {
+- if (priv->pt_port)
+- synaptics_pass_pt_packet(priv->pt_port,
+- psmouse->packet);
+- } else
++ synaptics_pass_pt_packet(priv, psmouse->packet);
++ } else {
+ synaptics_process_packet(psmouse);
++ }
+
+ return PSMOUSE_FULL_PACKET;
+ }
+diff --git a/drivers/input/mouse/synaptics.h b/drivers/input/mouse/synaptics.h
+index 08533d1b1b16fc..4b34f13b9f7616 100644
+--- a/drivers/input/mouse/synaptics.h
++++ b/drivers/input/mouse/synaptics.h
+@@ -188,6 +188,7 @@ struct synaptics_data {
+ bool disable_gesture; /* disable gestures */
+
+ struct serio *pt_port; /* Pass-through serio port */
++ bool pt_port_open;
+
+ /*
+ * Last received Advanced Gesture Mode (AGM) packet. An AGM packet
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 8fdee511bc0f2c..cf469a67249723 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -44,6 +44,7 @@ static u8 dist_prio_nmi __ro_after_init = GICV3_PRIO_NMI;
+ #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0)
+ #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1)
+ #define FLAGS_WORKAROUND_ASR_ERRATUM_8601001 (1ULL << 2)
++#define FLAGS_WORKAROUND_INSECURE (1ULL << 3)
+
+ #define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1)
+
+@@ -83,6 +84,8 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
+ #define GIC_LINE_NR min(GICD_TYPER_SPIS(gic_data.rdists.gicd_typer), 1020U)
+ #define GIC_ESPI_NR GICD_TYPER_ESPIS(gic_data.rdists.gicd_typer)
+
++static bool nmi_support_forbidden;
++
+ /*
+ * There are 16 SGIs, though we only actually use 8 in Linux. The other 8 SGIs
+ * are potentially stolen by the secure side. Some code, especially code dealing
+@@ -163,21 +166,27 @@ static void __init gic_prio_init(void)
+ {
+ bool ds;
+
+- ds = gic_dist_security_disabled();
+- if (!ds) {
+- u32 val;
+-
+- val = readl_relaxed(gic_data.dist_base + GICD_CTLR);
+- val |= GICD_CTLR_DS;
+- writel_relaxed(val, gic_data.dist_base + GICD_CTLR);
++ cpus_have_group0 = gic_has_group0();
+
+- ds = gic_dist_security_disabled();
+- if (ds)
+- pr_warn("Broken GIC integration, security disabled");
++ ds = gic_dist_security_disabled();
++ if ((gic_data.flags & FLAGS_WORKAROUND_INSECURE) && !ds) {
++ if (cpus_have_group0) {
++ u32 val;
++
++ val = readl_relaxed(gic_data.dist_base + GICD_CTLR);
++ val |= GICD_CTLR_DS;
++ writel_relaxed(val, gic_data.dist_base + GICD_CTLR);
++
++ ds = gic_dist_security_disabled();
++ if (ds)
++ pr_warn("Broken GIC integration, security disabled\n");
++ } else {
++ pr_warn("Broken GIC integration, pNMI forbidden\n");
++ nmi_support_forbidden = true;
++ }
+ }
+
+ cpus_have_security_disabled = ds;
+- cpus_have_group0 = gic_has_group0();
+
+ /*
+ * How priority values are used by the GIC depends on two things:
+@@ -209,7 +218,7 @@ static void __init gic_prio_init(void)
+ * be in the non-secure range, we program the non-secure values into
+ * the distributor to match the PMR values we want.
+ */
+- if (cpus_have_group0 & !cpus_have_security_disabled) {
++ if (cpus_have_group0 && !cpus_have_security_disabled) {
+ dist_prio_irq = __gicv3_prio_to_ns(dist_prio_irq);
+ dist_prio_nmi = __gicv3_prio_to_ns(dist_prio_nmi);
+ }
+@@ -1922,6 +1931,18 @@ static bool gic_enable_quirk_arm64_2941627(void *data)
+ return true;
+ }
+
++static bool gic_enable_quirk_rk3399(void *data)
++{
++ struct gic_chip_data *d = data;
++
++ if (of_machine_is_compatible("rockchip,rk3399")) {
++ d->flags |= FLAGS_WORKAROUND_INSECURE;
++ return true;
++ }
++
++ return false;
++}
++
+ static bool rd_set_non_coherent(void *data)
+ {
+ struct gic_chip_data *d = data;
+@@ -1996,6 +2017,12 @@ static const struct gic_quirk gic_quirks[] = {
+ .property = "dma-noncoherent",
+ .init = rd_set_non_coherent,
+ },
++ {
++ .desc = "GICv3: Insecure RK3399 integration",
++ .iidr = 0x0000043b,
++ .mask = 0xff000fff,
++ .init = gic_enable_quirk_rk3399,
++ },
+ {
+ }
+ };
+@@ -2004,7 +2031,7 @@ static void gic_enable_nmi_support(void)
+ {
+ int i;
+
+- if (!gic_prio_masking_enabled())
++ if (!gic_prio_masking_enabled() || nmi_support_forbidden)
+ return;
+
+ rdist_nmi_refs = kcalloc(gic_data.ppi_nr + SGI_NR,
+diff --git a/drivers/irqchip/irq-jcore-aic.c b/drivers/irqchip/irq-jcore-aic.c
+index b9dcc8e78c7501..1f613eb7b7f034 100644
+--- a/drivers/irqchip/irq-jcore-aic.c
++++ b/drivers/irqchip/irq-jcore-aic.c
+@@ -38,7 +38,7 @@ static struct irq_chip jcore_aic;
+ static void handle_jcore_irq(struct irq_desc *desc)
+ {
+ if (irqd_is_per_cpu(irq_desc_get_irq_data(desc)))
+- handle_percpu_irq(desc);
++ handle_percpu_devid_irq(desc);
+ else
+ handle_simple_irq(desc);
+ }
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 32d58752477847..31bea72bcb01ad 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -385,10 +385,8 @@ static int raid0_set_limits(struct mddev *mddev)
+ lim.io_min = mddev->chunk_sectors << 9;
+ lim.io_opt = lim.io_min * mddev->raid_disks;
+ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+- if (err) {
+- queue_limits_cancel_update(mddev->gendisk->queue);
++ if (err)
+ return err;
+- }
+ return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index d83fe3b3abc009..8a994a1975ca7b 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -3171,10 +3171,8 @@ static int raid1_set_limits(struct mddev *mddev)
+ md_init_stacking_limits(&lim);
+ lim.max_write_zeroes_sectors = 0;
+ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+- if (err) {
+- queue_limits_cancel_update(mddev->gendisk->queue);
++ if (err)
+ return err;
+- }
+ return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index daf42acc4fb6f3..a214fed4f16226 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3963,10 +3963,8 @@ static int raid10_set_queue_limits(struct mddev *mddev)
+ lim.io_min = mddev->chunk_sectors << 9;
+ lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+- if (err) {
+- queue_limits_cancel_update(mddev->gendisk->queue);
++ if (err)
+ return err;
+- }
+ return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+
+diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
+index 3bc89b3569632d..fca54e21a164f3 100644
+--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
++++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
+@@ -471,6 +471,8 @@ struct cdns_nand_ctrl {
+ struct {
+ void __iomem *virt;
+ dma_addr_t dma;
++ dma_addr_t iova_dma;
++ u32 size;
+ } io;
+
+ int irq;
+@@ -1835,11 +1837,11 @@ static int cadence_nand_slave_dma_transfer(struct cdns_nand_ctrl *cdns_ctrl,
+ }
+
+ if (dir == DMA_FROM_DEVICE) {
+- src_dma = cdns_ctrl->io.dma;
++ src_dma = cdns_ctrl->io.iova_dma;
+ dst_dma = buf_dma;
+ } else {
+ src_dma = buf_dma;
+- dst_dma = cdns_ctrl->io.dma;
++ dst_dma = cdns_ctrl->io.iova_dma;
+ }
+
+ tx = dmaengine_prep_dma_memcpy(cdns_ctrl->dmac, dst_dma, src_dma, len,
+@@ -1861,12 +1863,12 @@ static int cadence_nand_slave_dma_transfer(struct cdns_nand_ctrl *cdns_ctrl,
+ dma_async_issue_pending(cdns_ctrl->dmac);
+ wait_for_completion(&finished);
+
+- dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir);
++ dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
+
+ return 0;
+
+ err_unmap:
+- dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir);
++ dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
+
+ err:
+ dev_dbg(cdns_ctrl->dev, "Fall back to CPU I/O\n");
+@@ -2869,6 +2871,7 @@ cadence_nand_irq_cleanup(int irqnum, struct cdns_nand_ctrl *cdns_ctrl)
+ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ {
+ dma_cap_mask_t mask;
++ struct dma_device *dma_dev = cdns_ctrl->dmac->device;
+ int ret;
+
+ cdns_ctrl->cdma_desc = dma_alloc_coherent(cdns_ctrl->dev,
+@@ -2904,15 +2907,24 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ dma_cap_set(DMA_MEMCPY, mask);
+
+ if (cdns_ctrl->caps1->has_dma) {
+- cdns_ctrl->dmac = dma_request_channel(mask, NULL, NULL);
+- if (!cdns_ctrl->dmac) {
+- dev_err(cdns_ctrl->dev,
+- "Unable to get a DMA channel\n");
+- ret = -EBUSY;
++ cdns_ctrl->dmac = dma_request_chan_by_mask(&mask);
++ if (IS_ERR(cdns_ctrl->dmac)) {
++ ret = dev_err_probe(cdns_ctrl->dev, PTR_ERR(cdns_ctrl->dmac),
++ "%d: Failed to get a DMA channel\n", ret);
+ goto disable_irq;
+ }
+ }
+
++ cdns_ctrl->io.iova_dma = dma_map_resource(dma_dev->dev, cdns_ctrl->io.dma,
++ cdns_ctrl->io.size,
++ DMA_BIDIRECTIONAL, 0);
++
++ ret = dma_mapping_error(dma_dev->dev, cdns_ctrl->io.iova_dma);
++ if (ret) {
++ dev_err(cdns_ctrl->dev, "Failed to map I/O resource to DMA\n");
++ goto dma_release_chnl;
++ }
++
+ nand_controller_init(&cdns_ctrl->controller);
+ INIT_LIST_HEAD(&cdns_ctrl->chips);
+
+@@ -2923,18 +2935,22 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ if (ret) {
+ dev_err(cdns_ctrl->dev, "Failed to register MTD: %d\n",
+ ret);
+- goto dma_release_chnl;
++ goto unmap_dma_resource;
+ }
+
+ kfree(cdns_ctrl->buf);
+ cdns_ctrl->buf = kzalloc(cdns_ctrl->buf_size, GFP_KERNEL);
+ if (!cdns_ctrl->buf) {
+ ret = -ENOMEM;
+- goto dma_release_chnl;
++ goto unmap_dma_resource;
+ }
+
+ return 0;
+
++unmap_dma_resource:
++ dma_unmap_resource(dma_dev->dev, cdns_ctrl->io.iova_dma,
++ cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0);
++
+ dma_release_chnl:
+ if (cdns_ctrl->dmac)
+ dma_release_channel(cdns_ctrl->dmac);
+@@ -2956,6 +2972,8 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ static void cadence_nand_remove(struct cdns_nand_ctrl *cdns_ctrl)
+ {
+ cadence_nand_chips_cleanup(cdns_ctrl);
++ dma_unmap_resource(cdns_ctrl->dmac->device->dev, cdns_ctrl->io.iova_dma,
++ cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0);
+ cadence_nand_irq_cleanup(cdns_ctrl->irq, cdns_ctrl);
+ kfree(cdns_ctrl->buf);
+ dma_free_coherent(cdns_ctrl->dev, sizeof(struct cadence_nand_cdma_desc),
+@@ -3020,7 +3038,9 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
+ cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res);
+ if (IS_ERR(cdns_ctrl->io.virt))
+ return PTR_ERR(cdns_ctrl->io.virt);
++
+ cdns_ctrl->io.dma = res->start;
++ cdns_ctrl->io.size = resource_size(res);
+
+ dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
+ if (IS_ERR(dt->clk))
+diff --git a/drivers/mtd/spi-nor/sst.c b/drivers/mtd/spi-nor/sst.c
+index b5ad7118c49a2b..175211fe6a5ed2 100644
+--- a/drivers/mtd/spi-nor/sst.c
++++ b/drivers/mtd/spi-nor/sst.c
+@@ -174,7 +174,7 @@ static int sst_nor_write_data(struct spi_nor *nor, loff_t to, size_t len,
+ int ret;
+
+ nor->program_opcode = op;
+- ret = spi_nor_write_data(nor, to, 1, buf);
++ ret = spi_nor_write_data(nor, to, len, buf);
+ if (ret < 0)
+ return ret;
+ WARN(ret != len, "While writing %zu byte written %i bytes\n", len, ret);
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 95471cfcff420a..8ddd366d9fde54 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -1110,6 +1110,16 @@ static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
+ return gve_xdp_tx_queue_id(priv, 0);
+ }
+
++static inline bool gve_supports_xdp_xmit(struct gve_priv *priv)
++{
++ switch (priv->queue_format) {
++ case GVE_GQI_QPL_FORMAT:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ /* gqi napi handler defined in gve_main.c */
+ int gve_napi_poll(struct napi_struct *napi, int budget);
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index f985a3cf2b11fa..862c4575701fec 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1895,6 +1895,8 @@ static void gve_turndown(struct gve_priv *priv)
+ /* Stop tx queues */
+ netif_tx_disable(priv->dev);
+
++ xdp_features_clear_redirect_target(priv->dev);
++
+ gve_clear_napi_enabled(priv);
+ gve_clear_report_stats(priv);
+
+@@ -1955,6 +1957,9 @@ static void gve_turnup(struct gve_priv *priv)
+ napi_schedule(&block->napi);
+ }
+
++ if (priv->num_xdp_queues && gve_supports_xdp_xmit(priv))
++ xdp_features_set_redirect_target(priv->dev, false);
++
+ gve_set_napi_enabled(priv);
+ }
+
+@@ -2229,7 +2234,6 @@ static void gve_set_netdev_xdp_features(struct gve_priv *priv)
+ if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
+ xdp_features = NETDEV_XDP_ACT_BASIC;
+ xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+- xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
+ xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ } else {
+ xdp_features = 0;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 97425c06e1ed7f..61db00b2b33e43 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2310,7 +2310,7 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+ tx_buff = &tx_pool->tx_buff[index];
+ adapter->netdev->stats.tx_packets--;
+ adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
+- adapter->tx_stats_buffers[queue_num].packets--;
++ adapter->tx_stats_buffers[queue_num].batched_packets--;
+ adapter->tx_stats_buffers[queue_num].bytes -=
+ tx_buff->skb->len;
+ dev_kfree_skb_any(tx_buff->skb);
+@@ -2402,11 +2402,13 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ unsigned int tx_map_failed = 0;
+ union sub_crq indir_arr[16];
+ unsigned int tx_dropped = 0;
+- unsigned int tx_packets = 0;
++ unsigned int tx_dpackets = 0;
++ unsigned int tx_bpackets = 0;
+ unsigned int tx_bytes = 0;
+ dma_addr_t data_dma_addr;
+ struct netdev_queue *txq;
+ unsigned long lpar_rc;
++ unsigned int skblen;
+ union sub_crq tx_crq;
+ unsigned int offset;
+ bool use_scrq_send_direct = false;
+@@ -2521,6 +2523,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ tx_buff->skb = skb;
+ tx_buff->index = bufidx;
+ tx_buff->pool_index = queue_num;
++ skblen = skb->len;
+
+ memset(&tx_crq, 0, sizeof(tx_crq));
+ tx_crq.v1.first = IBMVNIC_CRQ_CMD;
+@@ -2575,6 +2578,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ if (lpar_rc != H_SUCCESS)
+ goto tx_err;
+
++ tx_dpackets++;
+ goto early_exit;
+ }
+
+@@ -2603,6 +2607,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ goto tx_err;
+ }
+
++ tx_bpackets++;
++
+ early_exit:
+ if (atomic_add_return(num_entries, &tx_scrq->used)
+ >= adapter->req_tx_entries_per_subcrq) {
+@@ -2610,8 +2616,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ netif_stop_subqueue(netdev, queue_num);
+ }
+
+- tx_packets++;
+- tx_bytes += skb->len;
++ tx_bytes += skblen;
+ txq_trans_cond_update(txq);
+ ret = NETDEV_TX_OK;
+ goto out;
+@@ -2640,10 +2645,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ rcu_read_unlock();
+ netdev->stats.tx_dropped += tx_dropped;
+ netdev->stats.tx_bytes += tx_bytes;
+- netdev->stats.tx_packets += tx_packets;
++ netdev->stats.tx_packets += tx_bpackets + tx_dpackets;
+ adapter->tx_send_failed += tx_send_failed;
+ adapter->tx_map_failed += tx_map_failed;
+- adapter->tx_stats_buffers[queue_num].packets += tx_packets;
++ adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
++ adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
+ adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
+ adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
+
+@@ -3808,7 +3814,10 @@ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+ memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
+
+ for (i = 0; i < adapter->req_tx_queues; i++) {
+- snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_batched_packets", i);
++ data += ETH_GSTRING_LEN;
++
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_direct_packets", i);
+ data += ETH_GSTRING_LEN;
+
+ snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
+@@ -3873,7 +3882,9 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
+ (adapter, ibmvnic_stats[i].offset));
+
+ for (j = 0; j < adapter->req_tx_queues; j++) {
+- data[i] = adapter->tx_stats_buffers[j].packets;
++ data[i] = adapter->tx_stats_buffers[j].batched_packets;
++ i++;
++ data[i] = adapter->tx_stats_buffers[j].direct_packets;
+ i++;
+ data[i] = adapter->tx_stats_buffers[j].bytes;
+ i++;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index 94ac36b1408be9..a189038d88df03 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -213,7 +213,8 @@ struct ibmvnic_statistics {
+
+ #define NUM_TX_STATS 3
+ struct ibmvnic_tx_queue_stats {
+- u64 packets;
++ u64 batched_packets;
++ u64 direct_packets;
+ u64 bytes;
+ u64 dropped_packets;
+ };
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+index 2ec62c8d86e1c1..59486fe2ad18c2 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+@@ -20,6 +20,8 @@ nfp_bpf_cmsg_alloc(struct nfp_app_bpf *bpf, unsigned int size)
+ struct sk_buff *skb;
+
+ skb = nfp_app_ctrl_msg_alloc(bpf->app, size, GFP_KERNEL);
++ if (!skb)
++ return NULL;
+ skb_put(skb, size);
+
+ return skb;
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index de10a2d08c428e..fe3438abcd253d 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2888,6 +2888,7 @@ static int axienet_probe(struct platform_device *pdev)
+
+ lp->phylink_config.dev = &ndev->dev;
+ lp->phylink_config.type = PHYLINK_NETDEV;
++ lp->phylink_config.mac_managed_pm = true;
+ lp->phylink_config.mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
+ MAC_10FD | MAC_100FD | MAC_1000FD;
+
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index ba15a0a4ce629e..963fb9261f017c 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -1902,21 +1902,9 @@ static void geneve_destroy_tunnels(struct net *net, struct list_head *head)
+ {
+ struct geneve_net *gn = net_generic(net, geneve_net_id);
+ struct geneve_dev *geneve, *next;
+- struct net_device *dev, *aux;
+
+- /* gather any geneve devices that were moved into this ns */
+- for_each_netdev_safe(net, dev, aux)
+- if (dev->rtnl_link_ops == &geneve_link_ops)
+- unregister_netdevice_queue(dev, head);
+-
+- /* now gather any other geneve devices that were created in this ns */
+- list_for_each_entry_safe(geneve, next, &gn->geneve_list, next) {
+- /* If geneve->dev is in the same netns, it was already added
+- * to the list by the previous loop.
+- */
+- if (!net_eq(dev_net(geneve->dev), net))
+- unregister_netdevice_queue(geneve->dev, head);
+- }
++ list_for_each_entry_safe(geneve, next, &gn->geneve_list, next)
++ geneve_dellink(geneve->dev, head);
+ }
+
+ static void __net_exit geneve_exit_batch_rtnl(struct list_head *net_list,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 47406ce9901612..33b78b4007fe7a 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -2487,11 +2487,6 @@ static void __net_exit gtp_net_exit_batch_rtnl(struct list_head *net_list,
+ list_for_each_entry(net, net_list, exit_list) {
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+ struct gtp_dev *gtp, *gtp_next;
+- struct net_device *dev;
+-
+- for_each_netdev(net, dev)
+- if (dev->rtnl_link_ops == &gtp_link_ops)
+- gtp_dellink(dev, dev_to_kill);
+
+ list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list)
+ gtp_dellink(gtp->dev, dev_to_kill);
+diff --git a/drivers/net/pse-pd/pd692x0.c b/drivers/net/pse-pd/pd692x0.c
+index 0af7db80b2f883..7cfc36cadb5761 100644
+--- a/drivers/net/pse-pd/pd692x0.c
++++ b/drivers/net/pse-pd/pd692x0.c
+@@ -999,13 +999,12 @@ static int pd692x0_pi_get_voltage(struct pse_controller_dev *pcdev, int id)
+ return (buf.sub[0] << 8 | buf.sub[1]) * 100000;
+ }
+
+-static int pd692x0_pi_get_current_limit(struct pse_controller_dev *pcdev,
+- int id)
++static int pd692x0_pi_get_pw_limit(struct pse_controller_dev *pcdev,
++ int id)
+ {
+ struct pd692x0_priv *priv = to_pd692x0_priv(pcdev);
+ struct pd692x0_msg msg, buf = {0};
+- int mW, uV, uA, ret;
+- s64 tmp_64;
++ int ret;
+
+ msg = pd692x0_msg_template_list[PD692X0_MSG_GET_PORT_PARAM];
+ msg.sub[2] = id;
+@@ -1013,48 +1012,24 @@ static int pd692x0_pi_get_current_limit(struct pse_controller_dev *pcdev,
+ if (ret < 0)
+ return ret;
+
+- ret = pd692x0_pi_get_pw_from_table(buf.data[2], buf.data[3]);
+- if (ret < 0)
+- return ret;
+- mW = ret;
+-
+- ret = pd692x0_pi_get_voltage(pcdev, id);
+- if (ret < 0)
+- return ret;
+- uV = ret;
+-
+- tmp_64 = mW;
+- tmp_64 *= 1000000000ull;
+- /* uA = mW * 1000000000 / uV */
+- uA = DIV_ROUND_CLOSEST_ULL(tmp_64, uV);
+- return uA;
++ return pd692x0_pi_get_pw_from_table(buf.data[0], buf.data[1]);
+ }
+
+-static int pd692x0_pi_set_current_limit(struct pse_controller_dev *pcdev,
+- int id, int max_uA)
++static int pd692x0_pi_set_pw_limit(struct pse_controller_dev *pcdev,
++ int id, int max_mW)
+ {
+ struct pd692x0_priv *priv = to_pd692x0_priv(pcdev);
+ struct device *dev = &priv->client->dev;
+ struct pd692x0_msg msg, buf = {0};
+- int uV, ret, mW;
+- s64 tmp_64;
++ int ret;
+
+ ret = pd692x0_fw_unavailable(priv);
+ if (ret)
+ return ret;
+
+- ret = pd692x0_pi_get_voltage(pcdev, id);
+- if (ret < 0)
+- return ret;
+- uV = ret;
+-
+ msg = pd692x0_msg_template_list[PD692X0_MSG_SET_PORT_PARAM];
+ msg.sub[2] = id;
+- tmp_64 = uV;
+- tmp_64 *= max_uA;
+- /* mW = uV * uA / 1000000000 */
+- mW = DIV_ROUND_CLOSEST_ULL(tmp_64, 1000000000);
+- ret = pd692x0_pi_set_pw_from_table(dev, &msg, mW);
++ ret = pd692x0_pi_set_pw_from_table(dev, &msg, max_mW);
+ if (ret)
+ return ret;
+
+@@ -1068,8 +1043,8 @@ static const struct pse_controller_ops pd692x0_ops = {
+ .pi_disable = pd692x0_pi_disable,
+ .pi_is_enabled = pd692x0_pi_is_enabled,
+ .pi_get_voltage = pd692x0_pi_get_voltage,
+- .pi_get_current_limit = pd692x0_pi_get_current_limit,
+- .pi_set_current_limit = pd692x0_pi_set_current_limit,
++ .pi_get_pw_limit = pd692x0_pi_get_pw_limit,
++ .pi_set_pw_limit = pd692x0_pi_set_pw_limit,
+ };
+
+ #define PD692X0_FW_LINE_MAX_SZ 0xff
+diff --git a/drivers/net/pse-pd/pse_core.c b/drivers/net/pse-pd/pse_core.c
+index 2906ce173f66cd..bb509d973e914e 100644
+--- a/drivers/net/pse-pd/pse_core.c
++++ b/drivers/net/pse-pd/pse_core.c
+@@ -291,32 +291,24 @@ static int pse_pi_get_voltage(struct regulator_dev *rdev)
+ return ret;
+ }
+
+-static int _pse_ethtool_get_status(struct pse_controller_dev *pcdev,
+- int id,
+- struct netlink_ext_ack *extack,
+- struct pse_control_status *status);
+-
+ static int pse_pi_get_current_limit(struct regulator_dev *rdev)
+ {
+ struct pse_controller_dev *pcdev = rdev_get_drvdata(rdev);
+ const struct pse_controller_ops *ops;
+- struct netlink_ext_ack extack = {};
+- struct pse_control_status st = {};
+- int id, uV, ret;
++ int id, uV, mW, ret;
+ s64 tmp_64;
+
+ ops = pcdev->ops;
+ id = rdev_get_id(rdev);
++ if (!ops->pi_get_pw_limit || !ops->pi_get_voltage)
++ return -EOPNOTSUPP;
++
+ mutex_lock(&pcdev->lock);
+- if (ops->pi_get_current_limit) {
+- ret = ops->pi_get_current_limit(pcdev, id);
++ ret = ops->pi_get_pw_limit(pcdev, id);
++ if (ret < 0)
+ goto out;
+- }
++ mW = ret;
+
+- /* If pi_get_current_limit() callback not populated get voltage
+- * from pi_get_voltage() and power limit from ethtool_get_status()
+- * to calculate current limit.
+- */
+ ret = _pse_pi_get_voltage(rdev);
+ if (!ret) {
+ dev_err(pcdev->dev, "Voltage null\n");
+@@ -327,16 +319,7 @@ static int pse_pi_get_current_limit(struct regulator_dev *rdev)
+ goto out;
+ uV = ret;
+
+- ret = _pse_ethtool_get_status(pcdev, id, &extack, &st);
+- if (ret)
+- goto out;
+-
+- if (!st.c33_avail_pw_limit) {
+- ret = -ENODATA;
+- goto out;
+- }
+-
+- tmp_64 = st.c33_avail_pw_limit;
++ tmp_64 = mW;
+ tmp_64 *= 1000000000ull;
+ /* uA = mW * 1000000000 / uV */
+ ret = DIV_ROUND_CLOSEST_ULL(tmp_64, uV);
+@@ -351,15 +334,33 @@ static int pse_pi_set_current_limit(struct regulator_dev *rdev, int min_uA,
+ {
+ struct pse_controller_dev *pcdev = rdev_get_drvdata(rdev);
+ const struct pse_controller_ops *ops;
+- int id, ret;
++ int id, mW, ret;
++ s64 tmp_64;
+
+ ops = pcdev->ops;
+- if (!ops->pi_set_current_limit)
++ if (!ops->pi_set_pw_limit || !ops->pi_get_voltage)
+ return -EOPNOTSUPP;
+
++ if (max_uA > MAX_PI_CURRENT)
++ return -ERANGE;
++
+ id = rdev_get_id(rdev);
+ mutex_lock(&pcdev->lock);
+- ret = ops->pi_set_current_limit(pcdev, id, max_uA);
++ ret = _pse_pi_get_voltage(rdev);
++ if (!ret) {
++ dev_err(pcdev->dev, "Voltage null\n");
++ ret = -ERANGE;
++ goto out;
++ }
++ if (ret < 0)
++ goto out;
++
++ tmp_64 = ret;
++ tmp_64 *= max_uA;
++ /* mW = uA * uV / 1000000000 */
++ mW = DIV_ROUND_CLOSEST_ULL(tmp_64, 1000000000);
++ ret = ops->pi_set_pw_limit(pcdev, id, mW);
++out:
+ mutex_unlock(&pcdev->lock);
+
+ return ret;
+@@ -403,11 +404,9 @@ devm_pse_pi_regulator_register(struct pse_controller_dev *pcdev,
+
+ rinit_data->constraints.valid_ops_mask = REGULATOR_CHANGE_STATUS;
+
+- if (pcdev->ops->pi_set_current_limit) {
++ if (pcdev->ops->pi_set_pw_limit)
+ rinit_data->constraints.valid_ops_mask |=
+ REGULATOR_CHANGE_CURRENT;
+- rinit_data->constraints.max_uA = MAX_PI_CURRENT;
+- }
+
+ rinit_data->supply_regulator = "vpwr";
+
+@@ -736,23 +735,6 @@ struct pse_control *of_pse_control_get(struct device_node *node)
+ }
+ EXPORT_SYMBOL_GPL(of_pse_control_get);
+
+-static int _pse_ethtool_get_status(struct pse_controller_dev *pcdev,
+- int id,
+- struct netlink_ext_ack *extack,
+- struct pse_control_status *status)
+-{
+- const struct pse_controller_ops *ops;
+-
+- ops = pcdev->ops;
+- if (!ops->ethtool_get_status) {
+- NL_SET_ERR_MSG(extack,
+- "PSE driver does not support status report");
+- return -EOPNOTSUPP;
+- }
+-
+- return ops->ethtool_get_status(pcdev, id, extack, status);
+-}
+-
+ /**
+ * pse_ethtool_get_status - get status of PSE control
+ * @psec: PSE control pointer
+@@ -765,11 +747,21 @@ int pse_ethtool_get_status(struct pse_control *psec,
+ struct netlink_ext_ack *extack,
+ struct pse_control_status *status)
+ {
++ const struct pse_controller_ops *ops;
++ struct pse_controller_dev *pcdev;
+ int err;
+
+- mutex_lock(&psec->pcdev->lock);
+- err = _pse_ethtool_get_status(psec->pcdev, psec->id, extack, status);
+- mutex_unlock(&psec->pcdev->lock);
++ pcdev = psec->pcdev;
++ ops = pcdev->ops;
++ if (!ops->ethtool_get_status) {
++ NL_SET_ERR_MSG(extack,
++ "PSE driver does not support status report");
++ return -EOPNOTSUPP;
++ }
++
++ mutex_lock(&pcdev->lock);
++ err = ops->ethtool_get_status(pcdev, psec->id, extack, status);
++ mutex_unlock(&pcdev->lock);
+
+ return err;
+ }
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index a96976b22fa796..61af1583356c27 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -276,8 +276,7 @@ static bool nvme_validate_passthru_nsid(struct nvme_ctrl *ctrl,
+ {
+ if (ns && nsid != ns->head->ns_id) {
+ dev_err(ctrl->device,
+- "%s: nsid (%u) in cmd does not match nsid (%u)"
+- "of namespace\n",
++ "%s: nsid (%u) in cmd does not match nsid (%u) of namespace\n",
+ current->comm, nsid, ns->head->ns_id);
+ return false;
+ }
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 8305d3c1280748..840ae475074d09 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1449,11 +1449,14 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ msg.msg_control = cbuf;
+ msg.msg_controllen = sizeof(cbuf);
+ }
++ msg.msg_flags = MSG_WAITALL;
+ ret = kernel_recvmsg(queue->sock, &msg, &iov, 1,
+ iov.iov_len, msg.msg_flags);
+- if (ret < 0) {
++ if (ret < sizeof(*icresp)) {
+ pr_warn("queue %d: failed to receive icresp, error %d\n",
+ nvme_tcp_queue_id(queue), ret);
++ if (ret >= 0)
++ ret = -ECONNRESET;
+ goto free_icresp;
+ }
+ ret = -ENOTCONN;
+@@ -1565,7 +1568,7 @@ static bool nvme_tcp_poll_queue(struct nvme_tcp_queue *queue)
+ ctrl->io_queues[HCTX_TYPE_POLL];
+ }
+
+-/**
++/*
+ * Track the number of queues assigned to each cpu using a global per-cpu
+ * counter and select the least used cpu from the mq_map. Our goal is to spread
+ * different controllers I/O threads across different cpu cores.
+diff --git a/drivers/pci/devres.c b/drivers/pci/devres.c
+index b133967faef840..643f85849ef64b 100644
+--- a/drivers/pci/devres.c
++++ b/drivers/pci/devres.c
+@@ -411,46 +411,20 @@ static inline bool mask_contains_bar(int mask, int bar)
+ return mask & BIT(bar);
+ }
+
+-/*
+- * This is a copy of pci_intx() used to bypass the problem of recursive
+- * function calls due to the hybrid nature of pci_intx().
+- */
+-static void __pcim_intx(struct pci_dev *pdev, int enable)
+-{
+- u16 pci_command, new;
+-
+- pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
+-
+- if (enable)
+- new = pci_command & ~PCI_COMMAND_INTX_DISABLE;
+- else
+- new = pci_command | PCI_COMMAND_INTX_DISABLE;
+-
+- if (new != pci_command)
+- pci_write_config_word(pdev, PCI_COMMAND, new);
+-}
+-
+ static void pcim_intx_restore(struct device *dev, void *data)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct pcim_intx_devres *res = data;
+
+- __pcim_intx(pdev, res->orig_intx);
++ pci_intx(pdev, res->orig_intx);
+ }
+
+-static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev)
++static void save_orig_intx(struct pci_dev *pdev, struct pcim_intx_devres *res)
+ {
+- struct pcim_intx_devres *res;
+-
+- res = devres_find(dev, pcim_intx_restore, NULL, NULL);
+- if (res)
+- return res;
++ u16 pci_command;
+
+- res = devres_alloc(pcim_intx_restore, sizeof(*res), GFP_KERNEL);
+- if (res)
+- devres_add(dev, res);
+-
+- return res;
++ pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
++ res->orig_intx = !(pci_command & PCI_COMMAND_INTX_DISABLE);
+ }
+
+ /**
+@@ -466,16 +440,28 @@ static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev)
+ int pcim_intx(struct pci_dev *pdev, int enable)
+ {
+ struct pcim_intx_devres *res;
++ struct device *dev = &pdev->dev;
+
+- res = get_or_create_intx_devres(&pdev->dev);
+- if (!res)
+- return -ENOMEM;
++ /*
++ * pcim_intx() must only restore the INTx value that existed before the
++ * driver was loaded, i.e., before it called pcim_intx() for the
++ * first time.
++ */
++ res = devres_find(dev, pcim_intx_restore, NULL, NULL);
++ if (!res) {
++ res = devres_alloc(pcim_intx_restore, sizeof(*res), GFP_KERNEL);
++ if (!res)
++ return -ENOMEM;
++
++ save_orig_intx(pdev, res);
++ devres_add(dev, res);
++ }
+
+- res->orig_intx = !enable;
+- __pcim_intx(pdev, enable);
++ pci_intx(pdev, enable);
+
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(pcim_intx);
+
+ static void pcim_disable_device(void *pdev_raw)
+ {
+@@ -939,7 +925,7 @@ static void pcim_release_all_regions(struct pci_dev *pdev)
+ * desired, release individual regions with pcim_release_region() or all of
+ * them at once with pcim_release_all_regions().
+ */
+-static int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
++int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
+ {
+ int ret;
+ int bar;
+@@ -957,6 +943,7 @@ static int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
+
+ return ret;
+ }
++EXPORT_SYMBOL(pcim_request_all_regions);
+
+ /**
+ * pcim_iomap_regions_request_all - Request all BARs and iomap specified ones
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index dd3c6dcb47ae4a..1aa5d6f98ebda2 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4486,11 +4486,6 @@ void pci_disable_parity(struct pci_dev *dev)
+ * @enable: boolean: whether to enable or disable PCI INTx
+ *
+ * Enables/disables PCI INTx for device @pdev
+- *
+- * NOTE:
+- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+- * when pcim_enable_device() has been called in advance. This hybrid feature is
+- * DEPRECATED! If you want managed cleanup, use pcim_intx() instead.
+ */
+ void pci_intx(struct pci_dev *pdev, int enable)
+ {
+@@ -4503,15 +4498,10 @@ void pci_intx(struct pci_dev *pdev, int enable)
+ else
+ new = pci_command | PCI_COMMAND_INTX_DISABLE;
+
+- if (new != pci_command) {
+- /* Preserve the "hybrid" behavior for backwards compatibility */
+- if (pci_is_managed(pdev)) {
+- WARN_ON_ONCE(pcim_intx(pdev, enable) != 0);
+- return;
+- }
++ if (new == pci_command)
++ return;
+
+- pci_write_config_word(pdev, PCI_COMMAND, new);
+- }
++ pci_write_config_word(pdev, PCI_COMMAND, new);
+ }
+ EXPORT_SYMBOL_GPL(pci_intx);
+
+diff --git a/drivers/platform/cznic/Kconfig b/drivers/platform/cznic/Kconfig
+index 49c383eb678541..13e37b49d9d01e 100644
+--- a/drivers/platform/cznic/Kconfig
++++ b/drivers/platform/cznic/Kconfig
+@@ -6,6 +6,7 @@
+
+ menuconfig CZNIC_PLATFORMS
+ bool "Platform support for CZ.NIC's Turris hardware"
++ depends on ARCH_MVEBU || COMPILE_TEST
+ help
+ Say Y here to be able to choose driver support for CZ.NIC's Turris
+ devices. This option alone does not add any kernel code.
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index f71cc90fea1273..57eba1ddb17ba5 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -466,10 +466,9 @@ static int axp717_battery_get_prop(struct power_supply *psy,
+
+ /*
+ * If a fault is detected it must also be cleared; if the
+- * condition persists it should reappear (This is an
+- * assumption, it's actually not documented). A restart was
+- * not sufficient to clear the bit in testing despite the
+- * register listed as POR.
++ * condition persists it should reappear. A restart was not
++ * sufficient to clear the bit in testing despite the register
++ * listed as POR.
+ */
+ case POWER_SUPPLY_PROP_HEALTH:
+ ret = regmap_read(axp20x_batt->regmap, AXP717_PMU_FAULT,
+@@ -480,26 +479,26 @@ static int axp717_battery_get_prop(struct power_supply *psy,
+ switch (reg & AXP717_BATT_PMU_FAULT_MASK) {
+ case AXP717_BATT_UVLO_2_5V:
+ val->intval = POWER_SUPPLY_HEALTH_DEAD;
+- regmap_update_bits(axp20x_batt->regmap,
+- AXP717_PMU_FAULT,
+- AXP717_BATT_UVLO_2_5V,
+- AXP717_BATT_UVLO_2_5V);
++ regmap_write_bits(axp20x_batt->regmap,
++ AXP717_PMU_FAULT,
++ AXP717_BATT_UVLO_2_5V,
++ AXP717_BATT_UVLO_2_5V);
+ return 0;
+
+ case AXP717_BATT_OVER_TEMP:
+ val->intval = POWER_SUPPLY_HEALTH_HOT;
+- regmap_update_bits(axp20x_batt->regmap,
+- AXP717_PMU_FAULT,
+- AXP717_BATT_OVER_TEMP,
+- AXP717_BATT_OVER_TEMP);
++ regmap_write_bits(axp20x_batt->regmap,
++ AXP717_PMU_FAULT,
++ AXP717_BATT_OVER_TEMP,
++ AXP717_BATT_OVER_TEMP);
+ return 0;
+
+ case AXP717_BATT_UNDER_TEMP:
+ val->intval = POWER_SUPPLY_HEALTH_COLD;
+- regmap_update_bits(axp20x_batt->regmap,
+- AXP717_PMU_FAULT,
+- AXP717_BATT_UNDER_TEMP,
+- AXP717_BATT_UNDER_TEMP);
++ regmap_write_bits(axp20x_batt->regmap,
++ AXP717_PMU_FAULT,
++ AXP717_BATT_UNDER_TEMP,
++ AXP717_BATT_UNDER_TEMP);
+ return 0;
+
+ default:
+diff --git a/drivers/power/supply/da9150-fg.c b/drivers/power/supply/da9150-fg.c
+index 652c1f213af1c2..4f28ef1bba1a3c 100644
+--- a/drivers/power/supply/da9150-fg.c
++++ b/drivers/power/supply/da9150-fg.c
+@@ -247,9 +247,9 @@ static int da9150_fg_current_avg(struct da9150_fg *fg,
+ DA9150_QIF_SD_GAIN_SIZE);
+ da9150_fg_read_sync_end(fg);
+
+- div = (u64) (sd_gain * shunt_val * 65536ULL);
++ div = 65536ULL * sd_gain * shunt_val;
+ do_div(div, 1000000);
+- res = (u64) (iavg * 1000000ULL);
++ res = 1000000ULL * iavg;
+ do_div(res, div);
+
+ val->intval = (int) res;
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index e36e3ea165d3b2..2f34761e64135c 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -588,6 +588,15 @@ static int ism_dev_init(struct ism_dev *ism)
+ return ret;
+ }
+
++static void ism_dev_release(struct device *dev)
++{
++ struct ism_dev *ism;
++
++ ism = container_of(dev, struct ism_dev, dev);
++
++ kfree(ism);
++}
++
+ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct ism_dev *ism;
+@@ -601,6 +610,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ dev_set_drvdata(&pdev->dev, ism);
+ ism->pdev = pdev;
+ ism->dev.parent = &pdev->dev;
++ ism->dev.release = ism_dev_release;
+ device_initialize(&ism->dev);
+ dev_set_name(&ism->dev, dev_name(&pdev->dev));
+ ret = device_add(&ism->dev);
+@@ -637,7 +647,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ device_del(&ism->dev);
+ err_dev:
+ dev_set_drvdata(&pdev->dev, NULL);
+- kfree(ism);
++ put_device(&ism->dev);
+
+ return ret;
+ }
+@@ -682,7 +692,7 @@ static void ism_remove(struct pci_dev *pdev)
+ pci_disable_device(pdev);
+ device_del(&ism->dev);
+ dev_set_drvdata(&pdev->dev, NULL);
+- kfree(ism);
++ put_device(&ism->dev);
+ }
+
+ static struct pci_driver ism_driver = {
+diff --git a/drivers/soc/loongson/loongson2_guts.c b/drivers/soc/loongson/loongson2_guts.c
+index ef352a0f502208..1fcf7ca8083e10 100644
+--- a/drivers/soc/loongson/loongson2_guts.c
++++ b/drivers/soc/loongson/loongson2_guts.c
+@@ -114,8 +114,11 @@ static int loongson2_guts_probe(struct platform_device *pdev)
+ if (of_property_read_string(root, "model", &machine))
+ of_property_read_string_index(root, "compatible", 0, &machine);
+ of_node_put(root);
+- if (machine)
++ if (machine) {
+ soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL);
++ if (!soc_dev_attr.machine)
++ return -ENOMEM;
++ }
+
+ svr = loongson2_guts_get_svr();
+ soc_die = loongson2_soc_die_match(svr, loongson2_soc_die);
+diff --git a/drivers/tee/optee/supp.c b/drivers/tee/optee/supp.c
+index 322a543b8c278a..d0f397c9024201 100644
+--- a/drivers/tee/optee/supp.c
++++ b/drivers/tee/optee/supp.c
+@@ -80,7 +80,6 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
+ struct optee *optee = tee_get_drvdata(ctx->teedev);
+ struct optee_supp *supp = &optee->supp;
+ struct optee_supp_req *req;
+- bool interruptable;
+ u32 ret;
+
+ /*
+@@ -111,36 +110,18 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
+ /*
+ * Wait for supplicant to process and return result, once we've
+ * returned from wait_for_completion(&req->c) successfully we have
+- * exclusive access again.
++ * exclusive access again. Allow the wait to be killable such that
++ * the wait doesn't turn into an indefinite state if the supplicant
++ * gets hung for some reason.
+ */
+- while (wait_for_completion_interruptible(&req->c)) {
++ if (wait_for_completion_killable(&req->c)) {
+ mutex_lock(&supp->mutex);
+- interruptable = !supp->ctx;
+- if (interruptable) {
+- /*
+- * There's no supplicant available and since the
+- * supp->mutex currently is held none can
+- * become available until the mutex released
+- * again.
+- *
+- * Interrupting an RPC to supplicant is only
+- * allowed as a way of slightly improving the user
+- * experience in case the supplicant hasn't been
+- * started yet. During normal operation the supplicant
+- * will serve all requests in a timely manner and
+- * interrupting then wouldn't make sense.
+- */
+- if (req->in_queue) {
+- list_del(&req->link);
+- req->in_queue = false;
+- }
++ if (req->in_queue) {
++ list_del(&req->link);
++ req->in_queue = false;
+ }
+ mutex_unlock(&supp->mutex);
+-
+- if (interruptable) {
+- req->ret = TEEC_ERROR_COMMUNICATION;
+- break;
+- }
++ req->ret = TEEC_ERROR_COMMUNICATION;
+ }
+
+ ret = req->ret;
+diff --git a/drivers/usb/gadget/function/f_midi.c b/drivers/usb/gadget/function/f_midi.c
+index 4153643c67dcec..1f18f15dba2778 100644
+--- a/drivers/usb/gadget/function/f_midi.c
++++ b/drivers/usb/gadget/function/f_midi.c
+@@ -283,7 +283,7 @@ f_midi_complete(struct usb_ep *ep, struct usb_request *req)
+ /* Our transmit completed. See if there's more to go.
+ * f_midi_transmit eats req, don't queue it again. */
+ req->length = 0;
+- f_midi_transmit(midi);
++ queue_work(system_highpri_wq, &midi->work);
+ return;
+ }
+ break;
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 90aef2627ca27b..40332ab62f1018 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -545,8 +545,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
+ * subpage::readers and to unlock the page.
+ */
+ if (fs_info->sectorsize < PAGE_SIZE)
+- btrfs_subpage_start_reader(fs_info, folio, cur,
+- add_size);
++ btrfs_folio_set_lock(fs_info, folio, cur, add_size);
+ folio_put(folio);
+ cur += add_size;
+ }
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index fe08c983d5bb4b..660a5b9c08e9e4 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -190,7 +190,7 @@ static void process_one_folio(struct btrfs_fs_info *fs_info,
+ btrfs_folio_clamp_clear_writeback(fs_info, folio, start, len);
+
+ if (folio != locked_folio && (page_ops & PAGE_UNLOCK))
+- btrfs_folio_end_writer_lock(fs_info, folio, start, len);
++ btrfs_folio_end_lock(fs_info, folio, start, len);
+ }
+
+ static void __process_folios_contig(struct address_space *mapping,
+@@ -276,7 +276,7 @@ static noinline int lock_delalloc_folios(struct inode *inode,
+ range_start = max_t(u64, folio_pos(folio), start);
+ range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
+ end + 1) - range_start;
+- btrfs_folio_set_writer_lock(fs_info, folio, range_start, range_len);
++ btrfs_folio_set_lock(fs_info, folio, range_start, range_len);
+
+ processed_end = range_start + range_len - 1;
+ }
+@@ -438,7 +438,7 @@ static void end_folio_read(struct folio *folio, bool uptodate, u64 start, u32 le
+ if (!btrfs_is_subpage(fs_info, folio->mapping))
+ folio_unlock(folio);
+ else
+- btrfs_subpage_end_reader(fs_info, folio, start, len);
++ btrfs_folio_end_lock(fs_info, folio, start, len);
+ }
+
+ /*
+@@ -495,7 +495,7 @@ static void begin_folio_read(struct btrfs_fs_info *fs_info, struct folio *folio)
+ return;
+
+ ASSERT(folio_test_private(folio));
+- btrfs_subpage_start_reader(fs_info, folio, folio_pos(folio), PAGE_SIZE);
++ btrfs_folio_set_lock(fs_info, folio, folio_pos(folio), PAGE_SIZE);
+ }
+
+ /*
+@@ -1105,15 +1105,59 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
+ return ret;
+ }
+
++static void set_delalloc_bitmap(struct folio *folio, unsigned long *delalloc_bitmap,
++ u64 start, u32 len)
++{
++ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
++ const u64 folio_start = folio_pos(folio);
++ unsigned int start_bit;
++ unsigned int nbits;
++
++ ASSERT(start >= folio_start && start + len <= folio_start + PAGE_SIZE);
++ start_bit = (start - folio_start) >> fs_info->sectorsize_bits;
++ nbits = len >> fs_info->sectorsize_bits;
++ ASSERT(bitmap_test_range_all_zero(delalloc_bitmap, start_bit, nbits));
++ bitmap_set(delalloc_bitmap, start_bit, nbits);
++}
++
++static bool find_next_delalloc_bitmap(struct folio *folio,
++ unsigned long *delalloc_bitmap, u64 start,
++ u64 *found_start, u32 *found_len)
++{
++ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
++ const u64 folio_start = folio_pos(folio);
++ const unsigned int bitmap_size = fs_info->sectors_per_page;
++ unsigned int start_bit;
++ unsigned int first_zero;
++ unsigned int first_set;
++
++ ASSERT(start >= folio_start && start < folio_start + PAGE_SIZE);
++
++ start_bit = (start - folio_start) >> fs_info->sectorsize_bits;
++ first_set = find_next_bit(delalloc_bitmap, bitmap_size, start_bit);
++ if (first_set >= bitmap_size)
++ return false;
++
++ *found_start = folio_start + (first_set << fs_info->sectorsize_bits);
++ first_zero = find_next_zero_bit(delalloc_bitmap, bitmap_size, first_set);
++ *found_len = (first_zero - first_set) << fs_info->sectorsize_bits;
++ return true;
++}
++
+ /*
+- * helper for extent_writepage(), doing all of the delayed allocation setup.
++ * Do all of the delayed allocation setup.
++ *
++ * Return >0 if all the dirty blocks are submitted async (compression) or inlined.
++ * The @folio should no longer be touched (treat it as already unlocked).
+ *
+- * This returns 1 if btrfs_run_delalloc_range function did all the work required
+- * to write the page (copy into inline extent). In this case the IO has
+- * been started and the page is already unlocked.
++ * Return 0 if there is still dirty block that needs to be submitted through
++ * extent_writepage_io().
++ * bio_ctrl->submit_bitmap will indicate which blocks of the folio should be
++ * submitted, and @folio is still kept locked.
+ *
+- * This returns 0 if all went well (page still locked)
+- * This returns < 0 if there were errors (page still locked)
++ * Return <0 if there is any error hit.
++ * Any allocated ordered extent range covering this folio will be marked
++ * finished (IOERR), and @folio is still kept locked.
+ */
+ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ struct folio *folio,
+@@ -1124,16 +1168,28 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ const bool is_subpage = btrfs_is_subpage(fs_info, folio->mapping);
+ const u64 page_start = folio_pos(folio);
+ const u64 page_end = page_start + folio_size(folio) - 1;
++ unsigned long delalloc_bitmap = 0;
+ /*
+ * Save the last found delalloc end. As the delalloc end can go beyond
+ * page boundary, thus we cannot rely on subpage bitmap to locate the
+ * last delalloc end.
+ */
+ u64 last_delalloc_end = 0;
++ /*
++ * The range end (exclusive) of the last successfully finished delalloc
++ * range.
++ * Any range covered by ordered extent must either be manually marked
++ * finished (error handling), or has IO submitted (and finish the
++ * ordered extent normally).
++ *
++ * This records the end of ordered extent cleanup if we hit an error.
++ */
++ u64 last_finished_delalloc_end = page_start;
+ u64 delalloc_start = page_start;
+ u64 delalloc_end = page_end;
+ u64 delalloc_to_write = 0;
+ int ret = 0;
++ int bit;
+
+ /* Save the dirty bitmap as our submission bitmap will be a subset of it. */
+ if (btrfs_is_subpage(fs_info, inode->vfs_inode.i_mapping)) {
+@@ -1143,6 +1199,12 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ bio_ctrl->submit_bitmap = 1;
+ }
+
++ for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->sectors_per_page) {
++ u64 start = page_start + (bit << fs_info->sectorsize_bits);
++
++ btrfs_folio_set_lock(fs_info, folio, start, fs_info->sectorsize);
++ }
++
+ /* Lock all (subpage) delalloc ranges inside the folio first. */
+ while (delalloc_start < page_end) {
+ delalloc_end = page_end;
+@@ -1151,9 +1213,8 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ delalloc_start = delalloc_end + 1;
+ continue;
+ }
+- btrfs_folio_set_writer_lock(fs_info, folio, delalloc_start,
+- min(delalloc_end, page_end) + 1 -
+- delalloc_start);
++ set_delalloc_bitmap(folio, &delalloc_bitmap, delalloc_start,
++ min(delalloc_end, page_end) + 1 - delalloc_start);
+ last_delalloc_end = delalloc_end;
+ delalloc_start = delalloc_end + 1;
+ }
+@@ -1178,7 +1239,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ found_len = last_delalloc_end + 1 - found_start;
+ found = true;
+ } else {
+- found = btrfs_subpage_find_writer_locked(fs_info, folio,
++ found = find_next_delalloc_bitmap(folio, &delalloc_bitmap,
+ delalloc_start, &found_start, &found_len);
+ }
+ if (!found)
+@@ -1192,11 +1253,19 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ found_len = last_delalloc_end + 1 - found_start;
+
+ if (ret >= 0) {
++ /*
++ * Some delalloc range may be created by previous folios.
++ * Thus we still need to clean up this range during error
++ * handling.
++ */
++ last_finished_delalloc_end = found_start;
+ /* No errors hit so far, run the current delalloc range. */
+ ret = btrfs_run_delalloc_range(inode, folio,
+ found_start,
+ found_start + found_len - 1,
+ wbc);
++ if (ret >= 0)
++ last_finished_delalloc_end = found_start + found_len;
+ } else {
+ /*
+ * We've hit an error during previous delalloc range,
+@@ -1231,8 +1300,22 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+
+ delalloc_start = found_start + found_len;
+ }
+- if (ret < 0)
++ /*
++ * It's possible we had some ordered extents created before we hit
++ * an error, cleanup non-async successfully created delalloc ranges.
++ */
++ if (unlikely(ret < 0)) {
++ unsigned int bitmap_size = min(
++ (last_finished_delalloc_end - page_start) >>
++ fs_info->sectorsize_bits,
++ fs_info->sectors_per_page);
++
++ for_each_set_bit(bit, &bio_ctrl->submit_bitmap, bitmap_size)
++ btrfs_mark_ordered_io_finished(inode, folio,
++ page_start + (bit << fs_info->sectorsize_bits),
++ fs_info->sectorsize, false);
+ return ret;
++ }
+ out:
+ if (last_delalloc_end)
+ delalloc_end = last_delalloc_end;
+@@ -1348,6 +1431,7 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ unsigned long range_bitmap = 0;
+ bool submitted_io = false;
++ bool error = false;
+ const u64 folio_start = folio_pos(folio);
+ u64 cur;
+ int bit;
+@@ -1390,13 +1474,26 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ break;
+ }
+ ret = submit_one_sector(inode, folio, cur, bio_ctrl, i_size);
+- if (ret < 0)
+- goto out;
++ if (unlikely(ret < 0)) {
++ /*
++ * bio_ctrl may contain a bio crossing several folios.
++ * Submit it immediately so that the bio has a chance
++ * to finish normally, rather than being marked as failed.
++ */
++ submit_one_bio(bio_ctrl);
++ /*
++ * Failing to grab the extent map here should be very rare.
++ * Since there is no bio submitted to finish the ordered
++ * extent, we have to manually finish this sector.
++ */
++ btrfs_mark_ordered_io_finished(inode, folio, cur,
++ fs_info->sectorsize, false);
++ error = true;
++ continue;
++ }
+ submitted_io = true;
+ }
+
+- btrfs_folio_assert_not_dirty(fs_info, folio, start, len);
+-out:
+ /*
+ * If we didn't submit any sector (>= i_size), the folio dirty flag gets
+ * cleared but PAGECACHE_TAG_DIRTY is not cleared (only cleared
+@@ -1404,8 +1501,11 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ *
+ * Here we set writeback and clear for the range. If the full folio
+ * is no longer dirty then we clear the PAGECACHE_TAG_DIRTY tag.
++ *
++ * If we hit any error, the corresponding sector will still be dirty
++ * thus no need to clear PAGECACHE_TAG_DIRTY.
+ */
+- if (!submitted_io) {
++ if (!submitted_io && !error) {
+ btrfs_folio_set_writeback(fs_info, folio, start, len);
+ btrfs_folio_clear_writeback(fs_info, folio, start, len);
+ }
+@@ -1423,15 +1523,14 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ */
+ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl)
+ {
+- struct inode *inode = folio->mapping->host;
+- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+- const u64 page_start = folio_pos(folio);
++ struct btrfs_inode *inode = BTRFS_I(folio->mapping->host);
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ int ret;
+ size_t pg_offset;
+- loff_t i_size = i_size_read(inode);
++ loff_t i_size = i_size_read(&inode->vfs_inode);
+ unsigned long end_index = i_size >> PAGE_SHIFT;
+
+- trace_extent_writepage(folio, inode, bio_ctrl->wbc);
++ trace_extent_writepage(folio, &inode->vfs_inode, bio_ctrl->wbc);
+
+ WARN_ON(!folio_test_locked(folio));
+
+@@ -1455,13 +1554,13 @@ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl
+ if (ret < 0)
+ goto done;
+
+- ret = writepage_delalloc(BTRFS_I(inode), folio, bio_ctrl);
++ ret = writepage_delalloc(inode, folio, bio_ctrl);
+ if (ret == 1)
+ return 0;
+ if (ret)
+ goto done;
+
+- ret = extent_writepage_io(BTRFS_I(inode), folio, folio_pos(folio),
++ ret = extent_writepage_io(inode, folio, folio_pos(folio),
+ PAGE_SIZE, bio_ctrl, i_size);
+ if (ret == 1)
+ return 0;
+@@ -1469,17 +1568,13 @@ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl
+ bio_ctrl->wbc->nr_to_write--;
+
+ done:
+- if (ret) {
+- btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
+- page_start, PAGE_SIZE, !ret);
++ if (ret < 0)
+ mapping_set_error(folio->mapping, ret);
+- }
+-
+ /*
+ * Only unlock ranges that are submitted. As there can be some async
+ * submitted ranges inside the folio.
+ */
+- btrfs_folio_end_writer_lock_bitmap(fs_info, folio, bio_ctrl->submit_bitmap);
++ btrfs_folio_end_lock_bitmap(fs_info, folio, bio_ctrl->submit_bitmap);
+ ASSERT(ret <= 0);
+ return ret;
+ }
+@@ -2231,12 +2326,9 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
+ if (ret == 1)
+ goto next_page;
+
+- if (ret) {
+- btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
+- cur, cur_len, !ret);
++ if (ret)
+ mapping_set_error(mapping, ret);
+- }
+- btrfs_folio_end_writer_lock(fs_info, folio, cur, cur_len);
++ btrfs_folio_end_lock(fs_info, folio, cur, cur_len);
+ if (ret < 0)
+ found_error = true;
+ next_page:
+@@ -2463,12 +2555,6 @@ static bool folio_range_has_eb(struct btrfs_fs_info *fs_info, struct folio *foli
+ subpage = folio_get_private(folio);
+ if (atomic_read(&subpage->eb_refs))
+ return true;
+- /*
+- * Even there is no eb refs here, we may still have
+- * end_folio_read() call relying on page::private.
+- */
+- if (atomic_read(&subpage->readers))
+- return true;
+ }
+ return false;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index f7e7d864f41440..5b842276573e82 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2419,8 +2419,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_fol
+
+ out:
+ if (ret < 0)
+- btrfs_cleanup_ordered_extents(inode, locked_folio, start,
+- end - start + 1);
++ btrfs_cleanup_ordered_extents(inode, NULL, start, end - start + 1);
+ return ret;
+ }
+
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index ec7328a6bfd755..88a01d51ab11f1 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -140,12 +140,10 @@ struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info,
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock_init(&ret->lock);
+- if (type == BTRFS_SUBPAGE_METADATA) {
++ if (type == BTRFS_SUBPAGE_METADATA)
+ atomic_set(&ret->eb_refs, 0);
+- } else {
+- atomic_set(&ret->readers, 0);
+- atomic_set(&ret->writers, 0);
+- }
++ else
++ atomic_set(&ret->nr_locked, 0);
+ return ret;
+ }
+
+@@ -221,62 +219,6 @@ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
+ __start_bit; \
+ })
+
+-void btrfs_subpage_start_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+- const int nbits = len >> fs_info->sectorsize_bits;
+- unsigned long flags;
+-
+-
+- btrfs_subpage_assert(fs_info, folio, start, len);
+-
+- spin_lock_irqsave(&subpage->lock, flags);
+- /*
+- * Even though it's just for reading the page, no one should have
+- * locked the subpage range.
+- */
+- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+- bitmap_set(subpage->bitmaps, start_bit, nbits);
+- atomic_add(nbits, &subpage->readers);
+- spin_unlock_irqrestore(&subpage->lock, flags);
+-}
+-
+-void btrfs_subpage_end_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+- const int nbits = len >> fs_info->sectorsize_bits;
+- unsigned long flags;
+- bool is_data;
+- bool last;
+-
+- btrfs_subpage_assert(fs_info, folio, start, len);
+- is_data = is_data_inode(BTRFS_I(folio->mapping->host));
+-
+- spin_lock_irqsave(&subpage->lock, flags);
+-
+- /* The range should have already been locked. */
+- ASSERT(bitmap_test_range_all_set(subpage->bitmaps, start_bit, nbits));
+- ASSERT(atomic_read(&subpage->readers) >= nbits);
+-
+- bitmap_clear(subpage->bitmaps, start_bit, nbits);
+- last = atomic_sub_and_test(nbits, &subpage->readers);
+-
+- /*
+- * For data we need to unlock the page if the last read has finished.
+- *
+- * And please don't replace @last with atomic_sub_and_test() call
+- * inside if () condition.
+- * As we want the atomic_sub_and_test() to be always executed.
+- */
+- if (is_data && last)
+- folio_unlock(folio);
+- spin_unlock_irqrestore(&subpage->lock, flags);
+-}
+-
+ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
+ {
+ u64 orig_start = *start;
+@@ -295,28 +237,8 @@ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
+ orig_start + orig_len) - *start;
+ }
+
+-static void btrfs_subpage_start_writer(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+- const int nbits = (len >> fs_info->sectorsize_bits);
+- unsigned long flags;
+- int ret;
+-
+- btrfs_subpage_assert(fs_info, folio, start, len);
+-
+- spin_lock_irqsave(&subpage->lock, flags);
+- ASSERT(atomic_read(&subpage->readers) == 0);
+- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+- bitmap_set(subpage->bitmaps, start_bit, nbits);
+- ret = atomic_add_return(nbits, &subpage->writers);
+- ASSERT(ret == nbits);
+- spin_unlock_irqrestore(&subpage->lock, flags);
+-}
+-
+-static bool btrfs_subpage_end_and_test_writer(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
++static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len)
+ {
+ struct btrfs_subpage *subpage = folio_get_private(folio);
+ const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+@@ -334,9 +256,9 @@ static bool btrfs_subpage_end_and_test_writer(const struct btrfs_fs_info *fs_inf
+ * extent_clear_unlock_delalloc() for compression path.
+ *
+ * This @locked_page is locked by plain lock_page(), thus its
+- * subpage::writers is 0. Handle them in a special way.
++ * subpage::nr_locked is 0. Handle them in a special way.
+ */
+- if (atomic_read(&subpage->writers) == 0) {
++ if (atomic_read(&subpage->nr_locked) == 0) {
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ return true;
+ }
+@@ -345,39 +267,12 @@ static bool btrfs_subpage_end_and_test_writer(const struct btrfs_fs_info *fs_inf
+ clear_bit(bit, subpage->bitmaps);
+ cleared++;
+ }
+- ASSERT(atomic_read(&subpage->writers) >= cleared);
+- last = atomic_sub_and_test(cleared, &subpage->writers);
++ ASSERT(atomic_read(&subpage->nr_locked) >= cleared);
++ last = atomic_sub_and_test(cleared, &subpage->nr_locked);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ return last;
+ }
+
+-/*
+- * Lock a folio for delalloc page writeback.
+- *
+- * Return -EAGAIN if the page is not properly initialized.
+- * Return 0 with the page locked, and writer counter updated.
+- *
+- * Even with 0 returned, the page still need extra check to make sure
+- * it's really the correct page, as the caller is using
+- * filemap_get_folios_contig(), which can race with page invalidating.
+- */
+-int btrfs_folio_start_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- if (unlikely(!fs_info) || !btrfs_is_subpage(fs_info, folio->mapping)) {
+- folio_lock(folio);
+- return 0;
+- }
+- folio_lock(folio);
+- if (!folio_test_private(folio) || !folio_get_private(folio)) {
+- folio_unlock(folio);
+- return -EAGAIN;
+- }
+- btrfs_subpage_clamp_range(folio, &start, &len);
+- btrfs_subpage_start_writer(fs_info, folio, start, len);
+- return 0;
+-}
+-
+ /*
+ * Handle different locked folios:
+ *
+@@ -394,8 +289,8 @@ int btrfs_folio_start_writer_lock(const struct btrfs_fs_info *fs_info,
+ * bitmap, reduce the writer lock number, and unlock the page if that's
+ * the last locked range.
+ */
+-void btrfs_folio_end_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
++void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len)
+ {
+ struct btrfs_subpage *subpage = folio_get_private(folio);
+
+@@ -408,24 +303,24 @@ void btrfs_folio_end_writer_lock(const struct btrfs_fs_info *fs_info,
+
+ /*
+ * For subpage case, there are two types of locked page. With or
+- * without writers number.
++ * without a subpage lock count.
+ *
+- * Since we own the page lock, no one else could touch subpage::writers
++ * Since we own the page lock, no one else could touch subpage::locked
+ * and we are safe to do several atomic operations without spinlock.
+ */
+- if (atomic_read(&subpage->writers) == 0) {
+- /* No writers, locked by plain lock_page(). */
++ if (atomic_read(&subpage->nr_locked) == 0) {
++ /* No subpage lock, locked by plain lock_page(). */
+ folio_unlock(folio);
+ return;
+ }
+
+ btrfs_subpage_clamp_range(folio, &start, &len);
+- if (btrfs_subpage_end_and_test_writer(fs_info, folio, start, len))
++ if (btrfs_subpage_end_and_test_lock(fs_info, folio, start, len))
+ folio_unlock(folio);
+ }
+
+-void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, unsigned long bitmap)
++void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, unsigned long bitmap)
+ {
+ struct btrfs_subpage *subpage = folio_get_private(folio);
+ const int start_bit = fs_info->sectors_per_page * btrfs_bitmap_nr_locked;
+@@ -439,8 +334,8 @@ void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ return;
+ }
+
+- if (atomic_read(&subpage->writers) == 0) {
+- /* No writers, locked by plain lock_page(). */
++ if (atomic_read(&subpage->nr_locked) == 0) {
++ /* No subpage lock, locked by plain lock_page(). */
+ folio_unlock(folio);
+ return;
+ }
+@@ -450,8 +345,8 @@ void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ if (test_and_clear_bit(bit + start_bit, subpage->bitmaps))
+ cleared++;
+ }
+- ASSERT(atomic_read(&subpage->writers) >= cleared);
+- last = atomic_sub_and_test(cleared, &subpage->writers);
++ ASSERT(atomic_read(&subpage->nr_locked) >= cleared);
++ last = atomic_sub_and_test(cleared, &subpage->nr_locked);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ if (last)
+ folio_unlock(folio);
+@@ -776,8 +671,8 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info,
+ * This populates the involved subpage ranges so that subpage helpers can
+ * properly unlock them.
+ */
+-void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
++void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len)
+ {
+ struct btrfs_subpage *subpage;
+ unsigned long flags;
+@@ -796,58 +691,11 @@ void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
+ /* Target range should not yet be locked. */
+ ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+ bitmap_set(subpage->bitmaps, start_bit, nbits);
+- ret = atomic_add_return(nbits, &subpage->writers);
++ ret = atomic_add_return(nbits, &subpage->nr_locked);
+ ASSERT(ret <= fs_info->sectors_per_page);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ }
+
+-/*
+- * Find any subpage writer locked range inside @folio, starting at file offset
+- * @search_start. The caller should ensure the folio is locked.
+- *
+- * Return true and update @found_start_ret and @found_len_ret to the first
+- * writer locked range.
+- * Return false if there is no writer locked range.
+- */
+-bool btrfs_subpage_find_writer_locked(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 search_start,
+- u64 *found_start_ret, u32 *found_len_ret)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const u32 sectors_per_page = fs_info->sectors_per_page;
+- const unsigned int len = PAGE_SIZE - offset_in_page(search_start);
+- const unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+- locked, search_start, len);
+- const unsigned int locked_bitmap_start = sectors_per_page * btrfs_bitmap_nr_locked;
+- const unsigned int locked_bitmap_end = locked_bitmap_start + sectors_per_page;
+- unsigned long flags;
+- int first_zero;
+- int first_set;
+- bool found = false;
+-
+- ASSERT(folio_test_locked(folio));
+- spin_lock_irqsave(&subpage->lock, flags);
+- first_set = find_next_bit(subpage->bitmaps, locked_bitmap_end, start_bit);
+- if (first_set >= locked_bitmap_end)
+- goto out;
+-
+- found = true;
+-
+- *found_start_ret = folio_pos(folio) +
+- ((first_set - locked_bitmap_start) << fs_info->sectorsize_bits);
+- /*
+- * Since @first_set is ensured to be smaller than locked_bitmap_end
+- * here, @found_start_ret should be inside the folio.
+- */
+- ASSERT(*found_start_ret < folio_pos(folio) + PAGE_SIZE);
+-
+- first_zero = find_next_zero_bit(subpage->bitmaps, locked_bitmap_end, first_set);
+- *found_len_ret = (first_zero - first_set) << fs_info->sectorsize_bits;
+-out:
+- spin_unlock_irqrestore(&subpage->lock, flags);
+- return found;
+-}
+-
+ #define GET_SUBPAGE_BITMAP(subpage, fs_info, name, dst) \
+ { \
+ const int sectors_per_page = fs_info->sectors_per_page; \
+diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h
+index cdb554e0d215e2..44fff1f4eac482 100644
+--- a/fs/btrfs/subpage.h
++++ b/fs/btrfs/subpage.h
+@@ -45,14 +45,6 @@ enum {
+ struct btrfs_subpage {
+ /* Common members for both data and metadata pages */
+ spinlock_t lock;
+- /*
+- * Both data and metadata needs to track how many readers are for the
+- * page.
+- * Data relies on @readers to unlock the page when last reader finished.
+- * While metadata doesn't need page unlock, it needs to prevent
+- * page::private get cleared before the last end_page_read().
+- */
+- atomic_t readers;
+ union {
+ /*
+ * Structures only used by metadata
+@@ -62,8 +54,12 @@ struct btrfs_subpage {
+ */
+ atomic_t eb_refs;
+
+- /* Structures only used by data */
+- atomic_t writers;
++ /*
++ * Structures only used by data.
++ *
++ * How many sectors inside the page are locked.
++ */
++ atomic_t nr_locked;
+ };
+ unsigned long bitmaps[];
+ };
+@@ -95,23 +91,12 @@ void btrfs_free_subpage(struct btrfs_subpage *subpage);
+ void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio);
+ void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio);
+
+-void btrfs_subpage_start_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_subpage_end_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-
+-int btrfs_folio_start_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_folio_end_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, unsigned long bitmap);
+-bool btrfs_subpage_find_writer_locked(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 search_start,
+- u64 *found_start_ret, u32 *found_len_ret);
+-
++void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len);
++void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len);
++void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, unsigned long bitmap);
+ /*
+ * Template for subpage related operations.
+ *
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index fafc07e38663ca..e11e67af760f44 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1381,7 +1381,7 @@ int cifs_get_inode_info(struct inode **inode,
+ struct cifs_fattr fattr = {};
+ int rc;
+
+- if (is_inode_cache_good(*inode)) {
++ if (!data && is_inode_cache_good(*inode)) {
+ cifs_dbg(FYI, "No need to revalidate cached inode sizes\n");
+ return 0;
+ }
+@@ -1480,7 +1480,7 @@ int smb311_posix_get_inode_info(struct inode **inode,
+ struct cifs_fattr fattr = {};
+ int rc;
+
+- if (is_inode_cache_good(*inode)) {
++ if (!data && is_inode_cache_good(*inode)) {
+ cifs_dbg(FYI, "No need to revalidate cached inode sizes\n");
+ return 0;
+ }
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 44952727fef9ef..e8da63d29a28f1 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4991,6 +4991,10 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
+ next_buffer = (char *)cifs_buf_get();
+ else
+ next_buffer = (char *)cifs_small_buf_get();
++ if (!next_buffer) {
++ cifs_server_dbg(VFS, "No memory for (large) SMB response\n");
++ return -1;
++ }
+ memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd);
+ }
+
+diff --git a/fs/xfs/scrub/common.h b/fs/xfs/scrub/common.h
+index 47148cc4a833e5..eb00d48590f200 100644
+--- a/fs/xfs/scrub/common.h
++++ b/fs/xfs/scrub/common.h
+@@ -179,7 +179,6 @@ static inline bool xchk_skip_xref(struct xfs_scrub_metadata *sm)
+ bool xchk_dir_looks_zapped(struct xfs_inode *dp);
+ bool xchk_pptr_looks_zapped(struct xfs_inode *ip);
+
+-#ifdef CONFIG_XFS_ONLINE_REPAIR
+ /* Decide if a repair is required. */
+ static inline bool xchk_needs_repair(const struct xfs_scrub_metadata *sm)
+ {
+@@ -199,10 +198,6 @@ static inline bool xchk_could_repair(const struct xfs_scrub *sc)
+ return (sc->sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR) &&
+ !(sc->flags & XREP_ALREADY_FIXED);
+ }
+-#else
+-# define xchk_needs_repair(sc) (false)
+-# define xchk_could_repair(sc) (false)
+-#endif /* CONFIG_XFS_ONLINE_REPAIR */
+
+ int xchk_metadata_inode_forks(struct xfs_scrub *sc);
+
+diff --git a/fs/xfs/scrub/repair.h b/fs/xfs/scrub/repair.h
+index 0e0dc2bf985c21..96180176c582f3 100644
+--- a/fs/xfs/scrub/repair.h
++++ b/fs/xfs/scrub/repair.h
+@@ -163,7 +163,16 @@ bool xrep_buf_verify_struct(struct xfs_buf *bp, const struct xfs_buf_ops *ops);
+ #else
+
+ #define xrep_ino_dqattach(sc) (0)
+-#define xrep_will_attempt(sc) (false)
++
++/*
++ * When online repair is not built into the kernel, we still want to attempt
++ * the repair so that the stub xrep_attempt below will return EOPNOTSUPP.
++ */
++static inline bool xrep_will_attempt(const struct xfs_scrub *sc)
++{
++ return (sc->sm->sm_flags & XFS_SCRUB_IFLAG_FORCE_REBUILD) ||
++ xchk_needs_repair(sc->sm);
++}
+
+ static inline int
+ xrep_attempt(
+diff --git a/fs/xfs/scrub/scrub.c b/fs/xfs/scrub/scrub.c
+index 4cbcf7a86dbec5..5c266d2842dbe9 100644
+--- a/fs/xfs/scrub/scrub.c
++++ b/fs/xfs/scrub/scrub.c
+@@ -149,6 +149,18 @@ xchk_probe(
+ if (xchk_should_terminate(sc, &error))
+ return error;
+
++ /*
++ * If the caller is probing to see if repair works but repair isn't
++ * built into the kernel, return EOPNOTSUPP because that's the signal
++ * that userspace expects. If online repair is built in, set the
++ * CORRUPT flag (without any of the usual tracing/logging) to force us
++ * into xrep_probe.
++ */
++ if (xchk_could_repair(sc)) {
++ if (!IS_ENABLED(CONFIG_XFS_ONLINE_REPAIR))
++ return -EOPNOTSUPP;
++ sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
++ }
+ return 0;
+ }
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 4f17b786828af7..35b886385f3298 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3064,6 +3064,8 @@ static inline struct net_device *first_net_device_rcu(struct net *net)
+ }
+
+ int netdev_boot_setup_check(struct net_device *dev);
++struct net_device *dev_getbyhwaddr(struct net *net, unsigned short type,
++ const char *hwaddr);
+ struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
+ const char *hwaddr);
+ struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type);
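A minimal sketch of the new non-RCU lookup; holding RTNL here is an
assumption based on the locking contract of the existing _rcu variant:

	/* Sketch only: find an Ethernet device by MAC address. */
	static struct net_device *find_by_mac(struct net *net, const char *mac)
	{
		struct net_device *dev;

		rtnl_lock();
		dev = dev_getbyhwaddr(net, ARPHRD_ETHER, mac);
		if (dev)
			dev_hold(dev);	/* pin before dropping RTNL */
		rtnl_unlock();
		return dev;
	}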
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 4e77c4230c0a19..74114acbb07fbb 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2293,6 +2293,8 @@ static inline void pci_fixup_device(enum pci_fixup_pass pass,
+ struct pci_dev *dev) { }
+ #endif
+
++int pcim_intx(struct pci_dev *pdev, int enabled);
++int pcim_request_all_regions(struct pci_dev *pdev, const char *name);
+ void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
+ void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,
+ const char *name);
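A minimal probe() sketch for the two newly exported managed helpers; the
driver name and the decision to claim every BAR are illustrative only:

	static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		int ret;

		ret = pcim_enable_device(pdev);
		if (ret)
			return ret;
		/* Claim all BARs under devres; released automatically on unbind. */
		ret = pcim_request_all_regions(pdev, "demo");
		if (ret)
			return ret;
		/* Enable INTx; the managed variant restores the state on detach. */
		return pcim_intx(pdev, 1);
	}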
+diff --git a/include/linux/pse-pd/pse.h b/include/linux/pse-pd/pse.h
+index 591a53e082e650..df1592022d938e 100644
+--- a/include/linux/pse-pd/pse.h
++++ b/include/linux/pse-pd/pse.h
+@@ -75,12 +75,8 @@ struct pse_control_status {
+ * @pi_disable: Configure the PSE PI as disabled.
+ * @pi_get_voltage: Return voltage similarly to get_voltage regulator
+ * callback.
+- * @pi_get_current_limit: Get the configured current limit similarly to
+- * get_current_limit regulator callback.
+- * @pi_set_current_limit: Configure the current limit similarly to
+- * set_current_limit regulator callback.
+- * Should not return an error in case of MAX_PI_CURRENT
+- * current value set.
++ * @pi_get_pw_limit: Get the configured power limit of the PSE PI.
++ * @pi_set_pw_limit: Configure the power limit of the PSE PI.
+ */
+ struct pse_controller_ops {
+ int (*ethtool_get_status)(struct pse_controller_dev *pcdev,
+@@ -91,10 +87,10 @@ struct pse_controller_ops {
+ int (*pi_enable)(struct pse_controller_dev *pcdev, int id);
+ int (*pi_disable)(struct pse_controller_dev *pcdev, int id);
+ int (*pi_get_voltage)(struct pse_controller_dev *pcdev, int id);
+- int (*pi_get_current_limit)(struct pse_controller_dev *pcdev,
+- int id);
+- int (*pi_set_current_limit)(struct pse_controller_dev *pcdev,
+- int id, int max_uA);
++ int (*pi_get_pw_limit)(struct pse_controller_dev *pcdev,
++ int id);
++ int (*pi_set_pw_limit)(struct pse_controller_dev *pcdev,
++ int id, int max_mW);
+ };
+
+ struct module;
+diff --git a/include/linux/serio.h b/include/linux/serio.h
+index bf2191f2535093..69a47674af653c 100644
+--- a/include/linux/serio.h
++++ b/include/linux/serio.h
+@@ -6,6 +6,7 @@
+ #define _SERIO_H
+
+
++#include <linux/cleanup.h>
+ #include <linux/types.h>
+ #include <linux/interrupt.h>
+ #include <linux/list.h>
+@@ -161,4 +162,6 @@ static inline void serio_continue_rx(struct serio *serio)
+ spin_unlock_irq(&serio->lock);
+ }
+
++DEFINE_GUARD(serio_pause_rx, struct serio *, serio_pause_rx(_T), serio_continue_rx(_T))
++
+ #endif
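A minimal sketch of what the new guard definition enables (the helper shown
is hypothetical):

	/* RX is paused for the scope and resumed automatically on every exit path. */
	static void example_reset_state(struct serio *serio)
	{
		guard(serio_pause_rx)(serio);
		/* ... touch state that the interrupt handler also uses ... */
	}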
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 2cbe0c22a32f3c..0b9095a281b898 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -91,6 +91,8 @@ struct sk_psock {
+ struct sk_psock_progs progs;
+ #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
+ struct strparser strp;
++ u32 copied_seq;
++ u32 ingress_bytes;
+ #endif
+ struct sk_buff_head ingress_skb;
+ struct list_head ingress_msg;
+diff --git a/include/net/gro.h b/include/net/gro.h
+index b9b58c1f8d190b..7b548f91754bf3 100644
+--- a/include/net/gro.h
++++ b/include/net/gro.h
+@@ -11,6 +11,9 @@
+ #include <net/udp.h>
+ #include <net/hotdata.h>
+
++/* This should be increased if a protocol with a bigger head is added. */
++#define GRO_MAX_HEAD (MAX_HEADER + 128)
++
+ struct napi_gro_cb {
+ union {
+ struct {
+diff --git a/include/net/strparser.h b/include/net/strparser.h
+index 41e2ce9e9e10ff..0a83010b3a64a9 100644
+--- a/include/net/strparser.h
++++ b/include/net/strparser.h
+@@ -43,6 +43,8 @@ struct strparser;
+ struct strp_callbacks {
+ int (*parse_msg)(struct strparser *strp, struct sk_buff *skb);
+ void (*rcv_msg)(struct strparser *strp, struct sk_buff *skb);
++ int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor);
+ int (*read_sock_done)(struct strparser *strp, int err);
+ void (*abort_parser)(struct strparser *strp, int err);
+ void (*lock)(struct strparser *strp);
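A minimal sketch of the new callback's shape, assuming a consumer that simply
forwards to tcp_read_sock():

	/* Sketch only: a strparser user supplying its own read_sock. */
	static int demo_read_sock(struct strparser *strp, read_descriptor_t *desc,
				  sk_read_actor_t recv_actor)
	{
		return tcp_read_sock(strp->sk, desc, recv_actor);
	}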
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index d1948d357dade0..3255a199ef60d5 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -41,6 +41,7 @@
+ #include <net/inet_ecn.h>
+ #include <net/dst.h>
+ #include <net/mptcp.h>
++#include <net/xfrm.h>
+
+ #include <linux/seq_file.h>
+ #include <linux/memcontrol.h>
+@@ -683,6 +684,19 @@ void tcp_fin(struct sock *sk);
+ void tcp_check_space(struct sock *sk);
+ void tcp_sack_compress_send_ack(struct sock *sk);
+
++static inline void tcp_cleanup_skb(struct sk_buff *skb)
++{
++ skb_dst_drop(skb);
++ secpath_reset(skb);
++}
++
++static inline void tcp_add_receive_queue(struct sock *sk, struct sk_buff *skb)
++{
++ DEBUG_NET_WARN_ON_ONCE(skb_dst(skb));
++ DEBUG_NET_WARN_ON_ONCE(secpath_exists(skb));
++ __skb_queue_tail(&sk->sk_receive_queue, skb);
++}
++
+ /* tcp_timer.c */
+ void tcp_init_xmit_timers(struct sock *);
+ static inline void tcp_clear_xmit_timers(struct sock *sk)
+@@ -729,6 +743,9 @@ void tcp_get_info(struct sock *, struct tcp_info *);
+ /* Read 'sendfile()'-style from a TCP socket */
+ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ sk_read_actor_t recv_actor);
++int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor, bool noack,
++ u32 *copied_seq);
+ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
+ struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off);
+ void tcp_read_done(struct sock *sk, size_t len);
+@@ -2595,6 +2612,11 @@ struct sk_psock;
+ #ifdef CONFIG_BPF_SYSCALL
+ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
+ void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
++#ifdef CONFIG_BPF_STREAM_PARSER
++struct strparser;
++int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor);
++#endif /* CONFIG_BPF_STREAM_PARSER */
+ #endif /* CONFIG_BPF_SYSCALL */
+
+ #ifdef CONFIG_INET
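A minimal sketch of how the two new receive-queue helpers pair up; the caller
shown is illustrative, not the actual call site:

	static void demo_queue_rcv(struct sock *sk, struct sk_buff *skb)
	{
		/* Drop dst and secpath state before handing the skb to the receiver. */
		tcp_cleanup_skb(skb);
		tcp_add_receive_queue(sk, skb);
	}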
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index c4182e95a61955..4a8a4a63e99ca8 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -1485,6 +1485,7 @@ struct drm_xe_oa_unit {
+ /** @capabilities: OA capabilities bit-mask */
+ __u64 capabilities;
+ #define DRM_XE_OA_CAPS_BASE (1 << 0)
++#define DRM_XE_OA_CAPS_SYNCS (1 << 1)
+
+ /** @oa_timestamp_freq: OA timestamp freq */
+ __u64 oa_timestamp_freq;
+@@ -1634,6 +1635,22 @@ enum drm_xe_oa_property_id {
+ * to be disabled for the stream exec queue.
+ */
+ DRM_XE_OA_PROPERTY_NO_PREEMPT,
++
++ /**
++ * @DRM_XE_OA_PROPERTY_NUM_SYNCS: Number of syncs in the sync array
++ * specified in @DRM_XE_OA_PROPERTY_SYNCS
++ */
++ DRM_XE_OA_PROPERTY_NUM_SYNCS,
++
++ /**
++ * @DRM_XE_OA_PROPERTY_SYNCS: Pointer to struct @drm_xe_sync array
++ * with array size specified via @DRM_XE_OA_PROPERTY_NUM_SYNCS. OA
++ * configuration will wait till input fences signal. Output fences
++ * will signal after the new OA configuration takes effect. For
++ * @DRM_XE_SYNC_TYPE_USER_FENCE, @addr is a user pointer, similar
++ * to the VM bind case.
++ */
++ DRM_XE_OA_PROPERTY_SYNCS,
+ };
+
+ /**
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 21f1bcba2f52b5..cf28d29fffbf0e 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2053,6 +2053,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ req->opcode = 0;
+ return io_init_fail_req(req, -EINVAL);
+ }
++ opcode = array_index_nospec(opcode, IORING_OP_LAST);
++
+ def = &io_issue_defs[opcode];
+ if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
+ /* enforce forwards compatibility on users */
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 39ad25d16ed404..6abc495602a4e9 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -862,7 +862,15 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
+ if (unlikely(ret))
+ return ret;
+
+- ret = io_iter_do_read(rw, &io->iter);
++ if (unlikely(req->opcode == IORING_OP_READ_MULTISHOT)) {
++ void *cb_copy = rw->kiocb.ki_complete;
++
++ rw->kiocb.ki_complete = NULL;
++ ret = io_iter_do_read(rw, &io->iter);
++ rw->kiocb.ki_complete = cb_copy;
++ } else {
++ ret = io_iter_do_read(rw, &io->iter);
++ }
+
+ /*
+ * Some file systems like to return -EOPNOTSUPP for an IOCB_NOWAIT
+@@ -887,7 +895,8 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
+ } else if (ret == -EIOCBQUEUED) {
+ return IOU_ISSUE_SKIP_COMPLETE;
+ } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
+- (req->flags & REQ_F_NOWAIT) || !need_complete_io(req)) {
++ (req->flags & REQ_F_NOWAIT) || !need_complete_io(req) ||
++ (issue_flags & IO_URING_F_MULTISHOT)) {
+ /* read all, failed, already did sync or don't want to retry */
+ goto done;
+ }
+diff --git a/kernel/acct.c b/kernel/acct.c
+index 179848ad33e978..d9d55fa4d01a71 100644
+--- a/kernel/acct.c
++++ b/kernel/acct.c
+@@ -103,48 +103,50 @@ struct bsd_acct_struct {
+ atomic_long_t count;
+ struct rcu_head rcu;
+ struct mutex lock;
+- int active;
++ bool active;
++ bool check_space;
+ unsigned long needcheck;
+ struct file *file;
+ struct pid_namespace *ns;
+ struct work_struct work;
+ struct completion done;
++ acct_t ac;
+ };
+
+-static void do_acct_process(struct bsd_acct_struct *acct);
++static void fill_ac(struct bsd_acct_struct *acct);
++static void acct_write_process(struct bsd_acct_struct *acct);
+
+ /*
+ * Check the amount of free space and suspend/resume accordingly.
+ */
+-static int check_free_space(struct bsd_acct_struct *acct)
++static bool check_free_space(struct bsd_acct_struct *acct)
+ {
+ struct kstatfs sbuf;
+
+- if (time_is_after_jiffies(acct->needcheck))
+- goto out;
++ if (!acct->check_space)
++ return acct->active;
+
+ /* May block */
+ if (vfs_statfs(&acct->file->f_path, &sbuf))
+- goto out;
++ return acct->active;
+
+ if (acct->active) {
+ u64 suspend = sbuf.f_blocks * SUSPEND;
+ do_div(suspend, 100);
+ if (sbuf.f_bavail <= suspend) {
+- acct->active = 0;
++ acct->active = false;
+ pr_info("Process accounting paused\n");
+ }
+ } else {
+ u64 resume = sbuf.f_blocks * RESUME;
+ do_div(resume, 100);
+ if (sbuf.f_bavail >= resume) {
+- acct->active = 1;
++ acct->active = true;
+ pr_info("Process accounting resumed\n");
+ }
+ }
+
+ acct->needcheck = jiffies + ACCT_TIMEOUT*HZ;
+-out:
+ return acct->active;
+ }
+
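As a worked example, assuming the upstream defaults of SUSPEND == 2 and
RESUME == 4 (percent): on a filesystem with f_blocks == 1,000,000, accounting
pauses once f_bavail drops to 20,000 blocks or fewer and resumes once it
climbs back to 40,000 or more.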
+@@ -189,7 +191,11 @@ static void acct_pin_kill(struct fs_pin *pin)
+ {
+ struct bsd_acct_struct *acct = to_acct(pin);
+ mutex_lock(&acct->lock);
+- do_acct_process(acct);
++ /*
++ * Fill the accounting struct with the exiting task's info
++ * before punting to the workqueue.
++ */
++ fill_ac(acct);
+ schedule_work(&acct->work);
+ wait_for_completion(&acct->done);
+ cmpxchg(&acct->ns->bacct, pin, NULL);
+@@ -202,6 +208,9 @@ static void close_work(struct work_struct *work)
+ {
+ struct bsd_acct_struct *acct = container_of(work, struct bsd_acct_struct, work);
+ struct file *file = acct->file;
++
++ /* We were fired by acct_pin_kill() which holds acct->lock. */
++ acct_write_process(acct);
+ if (file->f_op->flush)
+ file->f_op->flush(file, NULL);
+ __fput_sync(file);
+@@ -234,6 +243,20 @@ static int acct_on(struct filename *pathname)
+ return -EACCES;
+ }
+
++ /* Exclude kernel internal filesystems. */
++ if (file_inode(file)->i_sb->s_flags & (SB_NOUSER | SB_KERNMOUNT)) {
++ kfree(acct);
++ filp_close(file, NULL);
++ return -EINVAL;
++ }
++
++ /* Exclude procfs and sysfs. */
++ if (file_inode(file)->i_sb->s_iflags & SB_I_USERNS_VISIBLE) {
++ kfree(acct);
++ filp_close(file, NULL);
++ return -EINVAL;
++ }
++
+ if (!(file->f_mode & FMODE_CAN_WRITE)) {
+ kfree(acct);
+ filp_close(file, NULL);
+@@ -430,13 +453,27 @@ static u32 encode_float(u64 value)
+ * do_exit() or when switching to a different output file.
+ */
+
+-static void fill_ac(acct_t *ac)
++static void fill_ac(struct bsd_acct_struct *acct)
+ {
+ struct pacct_struct *pacct = &current->signal->pacct;
++ struct file *file = acct->file;
++ acct_t *ac = &acct->ac;
+ u64 elapsed, run_time;
+ time64_t btime;
+ struct tty_struct *tty;
+
++ lockdep_assert_held(&acct->lock);
++
++ if (time_is_after_jiffies(acct->needcheck)) {
++ acct->check_space = false;
++
++ /* Don't fill in @ac if nothing will be written. */
++ if (!acct->active)
++ return;
++ } else {
++ acct->check_space = true;
++ }
++
+ /*
+ * Fill the accounting struct with the needed info as recorded
+ * by the different kernel functions.
+@@ -484,64 +521,61 @@ static void fill_ac(acct_t *ac)
+ ac->ac_majflt = encode_comp_t(pacct->ac_majflt);
+ ac->ac_exitcode = pacct->ac_exitcode;
+ spin_unlock_irq(&current->sighand->siglock);
+-}
+-/*
+- * do_acct_process does all actual work. Caller holds the reference to file.
+- */
+-static void do_acct_process(struct bsd_acct_struct *acct)
+-{
+- acct_t ac;
+- unsigned long flim;
+- const struct cred *orig_cred;
+- struct file *file = acct->file;
+
+- /*
+- * Accounting records are not subject to resource limits.
+- */
+- flim = rlimit(RLIMIT_FSIZE);
+- current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+- /* Perform file operations on behalf of whoever enabled accounting */
+- orig_cred = override_creds(file->f_cred);
+-
+- /*
+- * First check to see if there is enough free_space to continue
+- * the process accounting system.
+- */
+- if (!check_free_space(acct))
+- goto out;
+-
+- fill_ac(&ac);
+ /* we really need to bite the bullet and change layout */
+- ac.ac_uid = from_kuid_munged(file->f_cred->user_ns, orig_cred->uid);
+- ac.ac_gid = from_kgid_munged(file->f_cred->user_ns, orig_cred->gid);
++ ac->ac_uid = from_kuid_munged(file->f_cred->user_ns, current_uid());
++ ac->ac_gid = from_kgid_munged(file->f_cred->user_ns, current_gid());
+ #if ACCT_VERSION == 1 || ACCT_VERSION == 2
+ /* backward-compatible 16 bit fields */
+- ac.ac_uid16 = ac.ac_uid;
+- ac.ac_gid16 = ac.ac_gid;
++ ac->ac_uid16 = ac->ac_uid;
++ ac->ac_gid16 = ac->ac_gid;
+ #elif ACCT_VERSION == 3
+ {
+ struct pid_namespace *ns = acct->ns;
+
+- ac.ac_pid = task_tgid_nr_ns(current, ns);
++ ac->ac_pid = task_tgid_nr_ns(current, ns);
+ rcu_read_lock();
+- ac.ac_ppid = task_tgid_nr_ns(rcu_dereference(current->real_parent),
+- ns);
++ ac->ac_ppid = task_tgid_nr_ns(rcu_dereference(current->real_parent), ns);
+ rcu_read_unlock();
+ }
+ #endif
++}
++
++static void acct_write_process(struct bsd_acct_struct *acct)
++{
++ struct file *file = acct->file;
++ const struct cred *cred;
++ acct_t *ac = &acct->ac;
++
++ /* Perform file operations on behalf of whoever enabled accounting */
++ cred = override_creds(file->f_cred);
++
+ /*
+- * Get freeze protection. If the fs is frozen, just skip the write
+- * as we could deadlock the system otherwise.
++ * First check to see if there is enough free space to continue
++ * the process accounting system. Then get freeze protection. If
++ * the fs is frozen, just skip the write as we could deadlock
++ * the system otherwise.
+ */
+- if (file_start_write_trylock(file)) {
++ if (check_free_space(acct) && file_start_write_trylock(file)) {
+ /* it's been opened O_APPEND, so position is irrelevant */
+ loff_t pos = 0;
+- __kernel_write(file, &ac, sizeof(acct_t), &pos);
++ __kernel_write(file, ac, sizeof(acct_t), &pos);
+ file_end_write(file);
+ }
+-out:
++
++ revert_creds(cred);
++}
++
++static void do_acct_process(struct bsd_acct_struct *acct)
++{
++ unsigned long flim;
++
++ /* Accounting records are not subject to resource limits. */
++ flim = rlimit(RLIMIT_FSIZE);
++ current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
++ fill_ac(acct);
++ acct_write_process(acct);
+ current->signal->rlim[RLIMIT_FSIZE].rlim_cur = flim;
+- revert_creds(orig_cred);
+ }
+
+ /**
+diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
+index 93e48c7cad4eff..8c775a1401d3ed 100644
+--- a/kernel/bpf/arena.c
++++ b/kernel/bpf/arena.c
+@@ -37,7 +37,7 @@
+ */
+
+ /* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */
+-#define GUARD_SZ (1ull << sizeof_field(struct bpf_insn, off) * 8)
++#define GUARD_SZ round_up(1ull << sizeof_field(struct bpf_insn, off) * 8, PAGE_SIZE << 1)
+ #define KERN_VM_SZ (SZ_4G + GUARD_SZ)
+
+ struct bpf_arena {
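For concreteness: the off field is 16 bits wide, so 1ull << 16 is 64 KiB.
With 4 KiB pages, round_up(65536, 8192) still yields 64 KiB; the rounding
only changes the result on larger page sizes, e.g. 64 KiB pages give
round_up(65536, 131072) == 128 KiB.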
+diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
+index 28efd0a3f2200c..6547fb7ac0dcb2 100644
+--- a/kernel/bpf/bpf_cgrp_storage.c
++++ b/kernel/bpf/bpf_cgrp_storage.c
+@@ -154,7 +154,7 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
+
+ static void cgroup_storage_map_free(struct bpf_map *map)
+ {
+- bpf_local_storage_map_free(map, &cgroup_cache, NULL);
++ bpf_local_storage_map_free(map, &cgroup_cache, &bpf_cgrp_storage_busy);
+ }
+
+ /* *gfp_flags* is a hidden argument provided by the verifier */
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index a44f4be592be79..2c54c148a94f30 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6483,6 +6483,8 @@ static const struct bpf_raw_tp_null_args raw_tp_null_args[] = {
+ /* rxrpc */
+ { "rxrpc_recvdata", 0x1 },
+ { "rxrpc_resend", 0x10 },
++ /* skb */
++ {"kfree_skb", 0x1000},
+ /* sunrpc */
+ { "xs_stream_read_data", 0x1 },
+ /* ... from xprt_cong_event event class */
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index e1cfe890e0be64..1499d8caa9a351 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -268,8 +268,6 @@ static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma
+ /* allow writable mapping for the consumer_pos only */
+ if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
+ return -EPERM;
+- } else {
+- vm_flags_clear(vma, VM_MAYWRITE);
+ }
+ /* remap_vmalloc_range() checks size and offset constraints */
+ return remap_vmalloc_range(vma, rb_map->rb,
+@@ -289,8 +287,6 @@ static int ringbuf_map_mmap_user(struct bpf_map *map, struct vm_area_struct *vma
+ * position, and the ring buffer data itself.
+ */
+ return -EPERM;
+- } else {
+- vm_flags_clear(vma, VM_MAYWRITE);
+ }
+ /* remap_vmalloc_range() checks size and offset constraints */
+ return remap_vmalloc_range(vma, rb_map->rb, vma->vm_pgoff + RINGBUF_PGOFF);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 368ae8d231d417..696e5a2cbea2e8 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -936,7 +936,7 @@ static const struct vm_operations_struct bpf_map_default_vmops = {
+ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ struct bpf_map *map = filp->private_data;
+- int err;
++ int err = 0;
+
+ if (!map->ops->map_mmap || !IS_ERR_OR_NULL(map->record))
+ return -ENOTSUPP;
+@@ -960,24 +960,33 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ err = -EACCES;
+ goto out;
+ }
++ bpf_map_write_active_inc(map);
+ }
++out:
++ mutex_unlock(&map->freeze_mutex);
++ if (err)
++ return err;
+
+ /* set default open/close callbacks */
+ vma->vm_ops = &bpf_map_default_vmops;
+ vma->vm_private_data = map;
+ vm_flags_clear(vma, VM_MAYEXEC);
++ /* If mapping is read-only, then disallow potentially re-mapping with
++ * PROT_WRITE by dropping VM_MAYWRITE flag. This VM_MAYWRITE clearing
++ * means that as far as BPF map's memory-mapped VMAs are concerned,
++ * VM_WRITE and VM_MAYWRITE are equivalent: if one of them is set,
++ * both should be set, so we can forget about VM_MAYWRITE and always
++ * check just VM_WRITE
++ */
+ if (!(vma->vm_flags & VM_WRITE))
+- /* disallow re-mapping with PROT_WRITE */
+ vm_flags_clear(vma, VM_MAYWRITE);
+
+ err = map->ops->map_mmap(map, vma);
+- if (err)
+- goto out;
++ if (err) {
++ if (vma->vm_flags & VM_WRITE)
++ bpf_map_write_active_dec(map);
++ }
+
+- if (vma->vm_flags & VM_MAYWRITE)
+- bpf_map_write_active_inc(map);
+-out:
+- mutex_unlock(&map->freeze_mutex);
+ return err;
+ }
+
+@@ -1863,8 +1872,6 @@ int generic_map_update_batch(struct bpf_map *map, struct file *map_file,
+ return err;
+ }
+
+-#define MAP_LOOKUP_RETRIES 3
+-
+ int generic_map_lookup_batch(struct bpf_map *map,
+ const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+@@ -1874,8 +1881,8 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ void __user *values = u64_to_user_ptr(attr->batch.values);
+ void __user *keys = u64_to_user_ptr(attr->batch.keys);
+ void *buf, *buf_prevkey, *prev_key, *key, *value;
+- int err, retry = MAP_LOOKUP_RETRIES;
+ u32 value_size, cp, max_count;
++ int err;
+
+ if (attr->batch.elem_flags & ~BPF_F_LOCK)
+ return -EINVAL;
+@@ -1921,14 +1928,8 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ err = bpf_map_copy_value(map, key, value,
+ attr->batch.elem_flags);
+
+- if (err == -ENOENT) {
+- if (retry) {
+- retry--;
+- continue;
+- }
+- err = -EINTR;
+- break;
+- }
++ if (err == -ENOENT)
++ goto next_key;
+
+ if (err)
+ goto free_buf;
+@@ -1943,12 +1944,12 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ goto free_buf;
+ }
+
++ cp++;
++next_key:
+ if (!prev_key)
+ prev_key = buf_prevkey;
+
+ swap(prev_key, key);
+- retry = MAP_LOOKUP_RETRIES;
+- cp++;
+ cond_resched();
+ }
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 689f7e8f69f54d..aa57ae3eb1ff5e 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2300,12 +2300,35 @@ static void move_remote_task_to_local_dsq(struct task_struct *p, u64 enq_flags,
+ *
+ * - The BPF scheduler is bypassed while the rq is offline and we can always say
+ * no to the BPF scheduler initiated migrations while offline.
++ *
++ * The caller must ensure that @p and @rq are on different CPUs.
+ */
+ static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
+ bool trigger_error)
+ {
+ int cpu = cpu_of(rq);
+
++ SCHED_WARN_ON(task_cpu(p) == cpu);
++
++ /*
++ * If @p has migration disabled, @p->cpus_ptr is updated to contain only
++ * the pinned CPU in migrate_disable_switch() while @p is being switched
++ * out. However, put_prev_task_scx() is called before @p->cpus_ptr is
++ * updated and thus another CPU may see @p on a DSQ in between, leading to
++ * @p passing the below task_allowed_on_cpu() check while migration is
++ * disabled.
++ *
++ * Test the migration disabled state first as the race window is narrow
++ * and the BPF scheduler failing to check migration disabled state can
++ * easily be masked if task_allowed_on_cpu() is done first.
++ */
++ if (unlikely(is_migration_disabled(p))) {
++ if (trigger_error)
++ scx_ops_error("SCX_DSQ_LOCAL[_ON] cannot move migration disabled %s[%d] from CPU %d to %d",
++ p->comm, p->pid, task_cpu(p), cpu);
++ return false;
++ }
++
+ /*
+ * We don't require the BPF scheduler to avoid dispatching to offline
+ * CPUs mostly for convenience but also because CPUs can go offline
+@@ -2314,14 +2337,11 @@ static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
+ */
+ if (!task_allowed_on_cpu(p, cpu)) {
+ if (trigger_error)
+- scx_ops_error("SCX_DSQ_LOCAL[_ON] verdict target cpu %d not allowed for %s[%d]",
+- cpu_of(rq), p->comm, p->pid);
++ scx_ops_error("SCX_DSQ_LOCAL[_ON] target CPU %d not allowed for %s[%d]",
++ cpu, p->comm, p->pid);
+ return false;
+ }
+
+- if (unlikely(is_migration_disabled(p)))
+- return false;
+-
+ if (!scx_rq_online(rq))
+ return false;
+
+@@ -2397,6 +2417,74 @@ static inline bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *r
+ static inline bool consume_remote_task(struct rq *this_rq, struct task_struct *p, struct scx_dispatch_q *dsq, struct rq *task_rq) { return false; }
+ #endif /* CONFIG_SMP */
+
++/**
++ * move_task_between_dsqs() - Move a task from one DSQ to another
++ * @p: target task
++ * @enq_flags: %SCX_ENQ_*
++ * @src_dsq: DSQ @p is currently on, must not be a local DSQ
++ * @dst_dsq: DSQ @p is being moved to, can be any DSQ
++ *
++ * Must be called with @p's task_rq and @src_dsq locked. If @dst_dsq is a local
++ * DSQ and @p is on a different CPU, @p will be migrated and thus its task_rq
++ * will change. As @p's task_rq is locked, this function doesn't need to use the
++ * holding_cpu mechanism.
++ *
++ * On return, @src_dsq is unlocked and only @p's new task_rq, which is the
++ * return value, is locked.
++ */
++static struct rq *move_task_between_dsqs(struct task_struct *p, u64 enq_flags,
++ struct scx_dispatch_q *src_dsq,
++ struct scx_dispatch_q *dst_dsq)
++{
++ struct rq *src_rq = task_rq(p), *dst_rq;
++
++ BUG_ON(src_dsq->id == SCX_DSQ_LOCAL);
++ lockdep_assert_held(&src_dsq->lock);
++ lockdep_assert_rq_held(src_rq);
++
++ if (dst_dsq->id == SCX_DSQ_LOCAL) {
++ dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
++ if (src_rq != dst_rq &&
++ unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {
++ dst_dsq = find_global_dsq(p);
++ dst_rq = src_rq;
++ }
++ } else {
++ /* no need to migrate if destination is a non-local DSQ */
++ dst_rq = src_rq;
++ }
++
++ /*
++ * Move @p into $dst_dsq. If $dst_dsq is the local DSQ of a different
++ * CPU, @p will be migrated.
++ */
++ if (dst_dsq->id == SCX_DSQ_LOCAL) {
++ /* @p is going from a non-local DSQ to a local DSQ */
++ if (src_rq == dst_rq) {
++ task_unlink_from_dsq(p, src_dsq);
++ move_local_task_to_local_dsq(p, enq_flags,
++ src_dsq, dst_rq);
++ raw_spin_unlock(&src_dsq->lock);
++ } else {
++ raw_spin_unlock(&src_dsq->lock);
++ move_remote_task_to_local_dsq(p, enq_flags,
++ src_rq, dst_rq);
++ }
++ } else {
++ /*
++ * @p is going from a non-local DSQ to a non-local DSQ. As
++ * $src_dsq is already locked, do an abbreviated dequeue.
++ */
++ task_unlink_from_dsq(p, src_dsq);
++ p->scx.dsq = NULL;
++ raw_spin_unlock(&src_dsq->lock);
++
++ dispatch_enqueue(dst_dsq, p, enq_flags);
++ }
++
++ return dst_rq;
++}
++
+ static bool consume_dispatch_q(struct rq *rq, struct scx_dispatch_q *dsq)
+ {
+ struct task_struct *p;
+@@ -2474,7 +2562,8 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ }
+
+ #ifdef CONFIG_SMP
+- if (unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {
++ if (src_rq != dst_rq &&
++ unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {
+ dispatch_enqueue(find_global_dsq(p), p,
+ enq_flags | SCX_ENQ_CLEAR_OPSS);
+ return;
+@@ -6134,7 +6223,7 @@ static bool scx_dispatch_from_dsq(struct bpf_iter_scx_dsq_kern *kit,
+ u64 enq_flags)
+ {
+ struct scx_dispatch_q *src_dsq = kit->dsq, *dst_dsq;
+- struct rq *this_rq, *src_rq, *dst_rq, *locked_rq;
++ struct rq *this_rq, *src_rq, *locked_rq;
+ bool dispatched = false;
+ bool in_balance;
+ unsigned long flags;
+@@ -6180,51 +6269,18 @@ static bool scx_dispatch_from_dsq(struct bpf_iter_scx_dsq_kern *kit,
+ /* @p is still on $src_dsq and stable, determine the destination */
+ dst_dsq = find_dsq_for_dispatch(this_rq, dsq_id, p);
+
+- if (dst_dsq->id == SCX_DSQ_LOCAL) {
+- dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
+- if (!task_can_run_on_remote_rq(p, dst_rq, true)) {
+- dst_dsq = find_global_dsq(p);
+- dst_rq = src_rq;
+- }
+- } else {
+- /* no need to migrate if destination is a non-local DSQ */
+- dst_rq = src_rq;
+- }
+-
+ /*
+- * Move @p into $dst_dsq. If $dst_dsq is the local DSQ of a different
+- * CPU, @p will be migrated.
++ * Apply vtime and slice updates before moving so that the new time is
++ * visible before inserting into $dst_dsq. @p is still on $src_dsq but
++ * this is safe as we're locking it.
+ */
+- if (dst_dsq->id == SCX_DSQ_LOCAL) {
+- /* @p is going from a non-local DSQ to a local DSQ */
+- if (src_rq == dst_rq) {
+- task_unlink_from_dsq(p, src_dsq);
+- move_local_task_to_local_dsq(p, enq_flags,
+- src_dsq, dst_rq);
+- raw_spin_unlock(&src_dsq->lock);
+- } else {
+- raw_spin_unlock(&src_dsq->lock);
+- move_remote_task_to_local_dsq(p, enq_flags,
+- src_rq, dst_rq);
+- locked_rq = dst_rq;
+- }
+- } else {
+- /*
+- * @p is going from a non-local DSQ to a non-local DSQ. As
+- * $src_dsq is already locked, do an abbreviated dequeue.
+- */
+- task_unlink_from_dsq(p, src_dsq);
+- p->scx.dsq = NULL;
+- raw_spin_unlock(&src_dsq->lock);
+-
+- if (kit->cursor.flags & __SCX_DSQ_ITER_HAS_VTIME)
+- p->scx.dsq_vtime = kit->vtime;
+- dispatch_enqueue(dst_dsq, p, enq_flags);
+- }
+-
++ if (kit->cursor.flags & __SCX_DSQ_ITER_HAS_VTIME)
++ p->scx.dsq_vtime = kit->vtime;
+ if (kit->cursor.flags & __SCX_DSQ_ITER_HAS_SLICE)
+ p->scx.slice = kit->slice;
+
++ /* execute move */
++ locked_rq = move_task_between_dsqs(p, enq_flags, src_dsq, dst_dsq);
+ dispatched = true;
+ out:
+ if (in_balance) {
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index cd9dbfb3038330..71cc1bbfe9aa3e 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3219,15 +3219,22 @@ static struct ftrace_hash *copy_hash(struct ftrace_hash *src)
+ * The filter_hash updates uses just the append_hash() function
+ * and the notrace_hash does not.
+ */
+-static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash)
++static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash,
++ int size_bits)
+ {
+ struct ftrace_func_entry *entry;
+ int size;
+ int i;
+
+- /* An empty hash does everything */
+- if (ftrace_hash_empty(*hash))
+- return 0;
++ if (*hash) {
++ /* An empty hash does everything */
++ if (ftrace_hash_empty(*hash))
++ return 0;
++ } else {
++ *hash = alloc_ftrace_hash(size_bits);
++ if (!*hash)
++ return -ENOMEM;
++ }
+
+ /* If new_hash has everything make hash have everything */
+ if (ftrace_hash_empty(new_hash)) {
+@@ -3291,16 +3298,18 @@ static int intersect_hash(struct ftrace_hash **hash, struct ftrace_hash *new_has
+ /* Return a new hash that has a union of all @ops->filter_hash entries */
+ static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
+ {
+- struct ftrace_hash *new_hash;
++ struct ftrace_hash *new_hash = NULL;
+ struct ftrace_ops *subops;
++ int size_bits;
+ int ret;
+
+- new_hash = alloc_ftrace_hash(ops->func_hash->filter_hash->size_bits);
+- if (!new_hash)
+- return NULL;
++ if (ops->func_hash->filter_hash)
++ size_bits = ops->func_hash->filter_hash->size_bits;
++ else
++ size_bits = FTRACE_HASH_DEFAULT_BITS;
+
+ list_for_each_entry(subops, &ops->subop_list, list) {
+- ret = append_hash(&new_hash, subops->func_hash->filter_hash);
++ ret = append_hash(&new_hash, subops->func_hash->filter_hash, size_bits);
+ if (ret < 0) {
+ free_ftrace_hash(new_hash);
+ return NULL;
+@@ -3309,7 +3318,8 @@ static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
+ if (ftrace_hash_empty(new_hash))
+ break;
+ }
+- return new_hash;
++ /* Can't return NULL as that means this failed */
++ return new_hash ? : EMPTY_HASH;
+ }
+
+ /* Make @ops trace everything except what all its subops do not trace */
+@@ -3504,7 +3514,8 @@ int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int
+ filter_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->filter_hash);
+ if (!filter_hash)
+ return -ENOMEM;
+- ret = append_hash(&filter_hash, subops->func_hash->filter_hash);
++ ret = append_hash(&filter_hash, subops->func_hash->filter_hash,
++ size_bits);
+ if (ret < 0) {
+ free_ftrace_hash(filter_hash);
+ return ret;
+@@ -5747,6 +5758,9 @@ __ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
+ return -ENOENT;
+ free_hash_entry(hash, entry);
+ return 0;
++ } else if (__ftrace_lookup_ip(hash, ip) != NULL) {
++ /* Already exists */
++ return 0;
+ }
+
+ entry = add_hash_entry(hash, ip);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index bfc4ac265c2c33..ffe1422ab03f88 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -26,6 +26,7 @@
+ #include <linux/hardirq.h>
+ #include <linux/linkage.h>
+ #include <linux/uaccess.h>
++#include <linux/cleanup.h>
+ #include <linux/vmalloc.h>
+ #include <linux/ftrace.h>
+ #include <linux/module.h>
+@@ -535,19 +536,16 @@ LIST_HEAD(ftrace_trace_arrays);
+ int trace_array_get(struct trace_array *this_tr)
+ {
+ struct trace_array *tr;
+- int ret = -ENODEV;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ if (tr == this_tr) {
+ tr->ref++;
+- ret = 0;
+- break;
++ return 0;
+ }
+ }
+- mutex_unlock(&trace_types_lock);
+
+- return ret;
++ return -ENODEV;
+ }
+
+ static void __trace_array_put(struct trace_array *this_tr)
+@@ -1456,22 +1454,20 @@ EXPORT_SYMBOL_GPL(tracing_snapshot_alloc);
+ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
+ cond_update_fn_t update)
+ {
+- struct cond_snapshot *cond_snapshot;
+- int ret = 0;
++ struct cond_snapshot *cond_snapshot __free(kfree) =
++ kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
++ int ret;
+
+- cond_snapshot = kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
+ if (!cond_snapshot)
+ return -ENOMEM;
+
+ cond_snapshot->cond_data = cond_data;
+ cond_snapshot->update = update;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+- if (tr->current_trace->use_max_tr) {
+- ret = -EBUSY;
+- goto fail_unlock;
+- }
++ if (tr->current_trace->use_max_tr)
++ return -EBUSY;
+
+ /*
+ * The cond_snapshot can only change to NULL without the
+@@ -1481,29 +1477,20 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
+ * do safely with only holding the trace_types_lock and not
+ * having to take the max_lock.
+ */
+- if (tr->cond_snapshot) {
+- ret = -EBUSY;
+- goto fail_unlock;
+- }
++ if (tr->cond_snapshot)
++ return -EBUSY;
+
+ ret = tracing_arm_snapshot_locked(tr);
+ if (ret)
+- goto fail_unlock;
++ return ret;
+
+ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+- tr->cond_snapshot = cond_snapshot;
++ tr->cond_snapshot = no_free_ptr(cond_snapshot);
+ arch_spin_unlock(&tr->max_lock);
+ local_irq_enable();
+
+- mutex_unlock(&trace_types_lock);
+-
+- return ret;
+-
+- fail_unlock:
+- mutex_unlock(&trace_types_lock);
+- kfree(cond_snapshot);
+- return ret;
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(tracing_snapshot_cond_enable);
+
+@@ -2216,10 +2203,10 @@ static __init int init_trace_selftests(void)
+
+ selftests_can_run = true;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ if (list_empty(&postponed_selftests))
+- goto out;
++ return 0;
+
+ pr_info("Running postponed tracer tests:\n");
+
+@@ -2248,9 +2235,6 @@ static __init int init_trace_selftests(void)
+ }
+ tracing_selftest_running = false;
+
+- out:
+- mutex_unlock(&trace_types_lock);
+-
+ return 0;
+ }
+ core_initcall(init_trace_selftests);
+@@ -2818,7 +2802,7 @@ int tracepoint_printk_sysctl(const struct ctl_table *table, int write,
+ int save_tracepoint_printk;
+ int ret;
+
+- mutex_lock(&tracepoint_printk_mutex);
++ guard(mutex)(&tracepoint_printk_mutex);
+ save_tracepoint_printk = tracepoint_printk;
+
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+@@ -2831,16 +2815,13 @@ int tracepoint_printk_sysctl(const struct ctl_table *table, int write,
+ tracepoint_printk = 0;
+
+ if (save_tracepoint_printk == tracepoint_printk)
+- goto out;
++ return ret;
+
+ if (tracepoint_printk)
+ static_key_enable(&tracepoint_printk_key.key);
+ else
+ static_key_disable(&tracepoint_printk_key.key);
+
+- out:
+- mutex_unlock(&tracepoint_printk_mutex);
+-
+ return ret;
+ }
+
+@@ -5150,7 +5131,8 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
+ u32 tracer_flags;
+ int i;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
++
+ tracer_flags = tr->current_trace->flags->val;
+ trace_opts = tr->current_trace->flags->opts;
+
+@@ -5167,7 +5149,6 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
+ else
+ seq_printf(m, "no%s\n", trace_opts[i].name);
+ }
+- mutex_unlock(&trace_types_lock);
+
+ return 0;
+ }
+@@ -5832,7 +5813,7 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
+ return;
+ }
+
+- mutex_lock(&trace_eval_mutex);
++ guard(mutex)(&trace_eval_mutex);
+
+ if (!trace_eval_maps)
+ trace_eval_maps = map_array;
+@@ -5856,8 +5837,6 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
+ map_array++;
+ }
+ memset(map_array, 0, sizeof(*map_array));
+-
+- mutex_unlock(&trace_eval_mutex);
+ }
+
+ static void trace_create_eval_file(struct dentry *d_tracer)
+@@ -6019,26 +5998,15 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
+ ssize_t tracing_resize_ring_buffer(struct trace_array *tr,
+ unsigned long size, int cpu_id)
+ {
+- int ret;
+-
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ if (cpu_id != RING_BUFFER_ALL_CPUS) {
+ /* make sure, this cpu is enabled in the mask */
+- if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask))
++ return -EINVAL;
+ }
+
+- ret = __tracing_resize_ring_buffer(tr, size, cpu_id);
+- if (ret < 0)
+- ret = -ENOMEM;
+-
+-out:
+- mutex_unlock(&trace_types_lock);
+-
+- return ret;
++ return __tracing_resize_ring_buffer(tr, size, cpu_id);
+ }
+
+ static void update_last_data(struct trace_array *tr)
+@@ -6129,9 +6097,9 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ #ifdef CONFIG_TRACER_MAX_TRACE
+ bool had_max_tr;
+ #endif
+- int ret = 0;
++ int ret;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ update_last_data(tr);
+
+@@ -6139,7 +6107,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ ret = __tracing_resize_ring_buffer(tr, trace_buf_size,
+ RING_BUFFER_ALL_CPUS);
+ if (ret < 0)
+- goto out;
++ return ret;
+ ret = 0;
+ }
+
+@@ -6147,43 +6115,37 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (strcmp(t->name, buf) == 0)
+ break;
+ }
+- if (!t) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (!t)
++ return -EINVAL;
++
+ if (t == tr->current_trace)
+- goto out;
++ return 0;
+
+ #ifdef CONFIG_TRACER_SNAPSHOT
+ if (t->use_max_tr) {
+ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+- if (tr->cond_snapshot)
+- ret = -EBUSY;
++ ret = tr->cond_snapshot ? -EBUSY : 0;
+ arch_spin_unlock(&tr->max_lock);
+ local_irq_enable();
+ if (ret)
+- goto out;
++ return ret;
+ }
+ #endif
+ /* Some tracers won't work on kernel command line */
+ if (system_state < SYSTEM_RUNNING && t->noboot) {
+ pr_warn("Tracer '%s' is not allowed on command line, ignored\n",
+ t->name);
+- goto out;
++ return 0;
+ }
+
+ /* Some tracers are only allowed for the top level buffer */
+- if (!trace_ok_for_array(t, tr)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (!trace_ok_for_array(t, tr))
++ return -EINVAL;
+
+ /* If trace pipe files are being read, we can't change the tracer */
+- if (tr->trace_ref) {
+- ret = -EBUSY;
+- goto out;
+- }
++ if (tr->trace_ref)
++ return -EBUSY;
+
+ trace_branch_disable();
+
+@@ -6214,7 +6176,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (!had_max_tr && t->use_max_tr) {
+ ret = tracing_arm_snapshot_locked(tr);
+ if (ret)
+- goto out;
++ return ret;
+ }
+ #else
+ tr->current_trace = &nop_trace;
+@@ -6227,17 +6189,15 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (t->use_max_tr)
+ tracing_disarm_snapshot(tr);
+ #endif
+- goto out;
++ return ret;
+ }
+ }
+
+ tr->current_trace = t;
+ tr->current_trace->enabled++;
+ trace_branch_enable(tr);
+- out:
+- mutex_unlock(&trace_types_lock);
+
+- return ret;
++ return 0;
+ }
+
+ static ssize_t
+@@ -6315,22 +6275,18 @@ tracing_thresh_write(struct file *filp, const char __user *ubuf,
+ struct trace_array *tr = filp->private_data;
+ int ret;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+ ret = tracing_nsecs_write(&tracing_thresh, ubuf, cnt, ppos);
+ if (ret < 0)
+- goto out;
++ return ret;
+
+ if (tr->current_trace->update_thresh) {
+ ret = tr->current_trace->update_thresh(tr);
+ if (ret < 0)
+- goto out;
++ return ret;
+ }
+
+- ret = cnt;
+-out:
+- mutex_unlock(&trace_types_lock);
+-
+- return ret;
++ return cnt;
+ }
+
+ #ifdef CONFIG_TRACER_MAX_TRACE
+@@ -6549,31 +6505,29 @@ tracing_read_pipe(struct file *filp, char __user *ubuf,
+ * This is just a matter of traces coherency, the ring buffer itself
+ * is protected.
+ */
+- mutex_lock(&iter->mutex);
++ guard(mutex)(&iter->mutex);
+
+ /* return any leftover data */
+ sret = trace_seq_to_user(&iter->seq, ubuf, cnt);
+ if (sret != -EBUSY)
+- goto out;
++ return sret;
+
+ trace_seq_init(&iter->seq);
+
+ if (iter->trace->read) {
+ sret = iter->trace->read(iter, filp, ubuf, cnt, ppos);
+ if (sret)
+- goto out;
++ return sret;
+ }
+
+ waitagain:
+ sret = tracing_wait_pipe(filp);
+ if (sret <= 0)
+- goto out;
++ return sret;
+
+ /* stop when tracing is finished */
+- if (trace_empty(iter)) {
+- sret = 0;
+- goto out;
+- }
++ if (trace_empty(iter))
++ return 0;
+
+ if (cnt >= TRACE_SEQ_BUFFER_SIZE)
+ cnt = TRACE_SEQ_BUFFER_SIZE - 1;
+@@ -6637,9 +6591,6 @@ tracing_read_pipe(struct file *filp, char __user *ubuf,
+ if (sret == -EBUSY)
+ goto waitagain;
+
+-out:
+- mutex_unlock(&iter->mutex);
+-
+ return sret;
+ }
+
+@@ -7231,25 +7182,19 @@ u64 tracing_event_time_stamp(struct trace_buffer *buffer, struct ring_buffer_eve
+ */
+ int tracing_set_filter_buffering(struct trace_array *tr, bool set)
+ {
+- int ret = 0;
+-
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ if (set && tr->no_filter_buffering_ref++)
+- goto out;
++ return 0;
+
+ if (!set) {
+- if (WARN_ON_ONCE(!tr->no_filter_buffering_ref)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (WARN_ON_ONCE(!tr->no_filter_buffering_ref))
++ return -EINVAL;
+
+ --tr->no_filter_buffering_ref;
+ }
+- out:
+- mutex_unlock(&trace_types_lock);
+
+- return ret;
++ return 0;
+ }
+
+ struct ftrace_buffer_info {
+@@ -7325,12 +7270,10 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ if (ret)
+ return ret;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+- if (tr->current_trace->use_max_tr) {
+- ret = -EBUSY;
+- goto out;
+- }
++ if (tr->current_trace->use_max_tr)
++ return -EBUSY;
+
+ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+@@ -7339,24 +7282,20 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ arch_spin_unlock(&tr->max_lock);
+ local_irq_enable();
+ if (ret)
+- goto out;
++ return ret;
+
+ switch (val) {
+ case 0:
+- if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
+- ret = -EINVAL;
+- break;
+- }
++ if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
++ return -EINVAL;
+ if (tr->allocated_snapshot)
+ free_snapshot(tr);
+ break;
+ case 1:
+ /* Only allow per-cpu swap if the ring buffer supports it */
+ #ifndef CONFIG_RING_BUFFER_ALLOW_SWAP
+- if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
+- ret = -EINVAL;
+- break;
+- }
++ if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
++ return -EINVAL;
+ #endif
+ if (tr->allocated_snapshot)
+ ret = resize_buffer_duplicate_size(&tr->max_buffer,
+@@ -7364,7 +7303,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+
+ ret = tracing_arm_snapshot_locked(tr);
+ if (ret)
+- break;
++ return ret;
+
+ /* Now, we're going to swap */
+ if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+@@ -7391,8 +7330,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ *ppos += cnt;
+ ret = cnt;
+ }
+-out:
+- mutex_unlock(&trace_types_lock);
++
+ return ret;
+ }
+
+@@ -7778,12 +7716,11 @@ void tracing_log_err(struct trace_array *tr,
+
+ len += sizeof(CMD_PREFIX) + 2 * sizeof("\n") + strlen(cmd) + 1;
+
+- mutex_lock(&tracing_err_log_lock);
++ guard(mutex)(&tracing_err_log_lock);
++
+ err = get_tracing_log_err(tr, len);
+- if (PTR_ERR(err) == -ENOMEM) {
+- mutex_unlock(&tracing_err_log_lock);
++ if (PTR_ERR(err) == -ENOMEM)
+ return;
+- }
+
+ snprintf(err->loc, TRACING_LOG_LOC_MAX, "%s: error: ", loc);
+ snprintf(err->cmd, len, "\n" CMD_PREFIX "%s\n", cmd);
+@@ -7794,7 +7731,6 @@ void tracing_log_err(struct trace_array *tr,
+ err->info.ts = local_clock();
+
+ list_add_tail(&err->list, &tr->err_log);
+- mutex_unlock(&tracing_err_log_lock);
+ }
+
+ static void clear_tracing_err_log(struct trace_array *tr)
+@@ -9535,20 +9471,17 @@ static int instance_mkdir(const char *name)
+ struct trace_array *tr;
+ int ret;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+ ret = -EEXIST;
+ if (trace_array_find(name))
+- goto out_unlock;
++ return -EEXIST;
+
+ tr = trace_array_create(name);
+
+ ret = PTR_ERR_OR_ZERO(tr);
+
+-out_unlock:
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+ return ret;
+ }
+
+@@ -9598,24 +9531,23 @@ struct trace_array *trace_array_get_by_name(const char *name, const char *system
+ {
+ struct trace_array *tr;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+- if (tr->name && strcmp(tr->name, name) == 0)
+- goto out_unlock;
++ if (tr->name && strcmp(tr->name, name) == 0) {
++ tr->ref++;
++ return tr;
++ }
+ }
+
+ tr = trace_array_create_systems(name, systems, 0, 0);
+
+ if (IS_ERR(tr))
+ tr = NULL;
+-out_unlock:
+- if (tr)
++ else
+ tr->ref++;
+
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+ return tr;
+ }
+ EXPORT_SYMBOL_GPL(trace_array_get_by_name);
+@@ -9666,48 +9598,36 @@ static int __remove_instance(struct trace_array *tr)
+ int trace_array_destroy(struct trace_array *this_tr)
+ {
+ struct trace_array *tr;
+- int ret;
+
+ if (!this_tr)
+ return -EINVAL;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+- ret = -ENODEV;
+
+ /* Making sure trace array exists before destroying it. */
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+- if (tr == this_tr) {
+- ret = __remove_instance(tr);
+- break;
+- }
++ if (tr == this_tr)
++ return __remove_instance(tr);
+ }
+
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+-
+- return ret;
++ return -ENODEV;
+ }
+ EXPORT_SYMBOL_GPL(trace_array_destroy);
+
+ static int instance_rmdir(const char *name)
+ {
+ struct trace_array *tr;
+- int ret;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+- ret = -ENODEV;
+ tr = trace_array_find(name);
+- if (tr)
+- ret = __remove_instance(tr);
+-
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
++ if (!tr)
++ return -ENODEV;
+
+- return ret;
++ return __remove_instance(tr);
+ }
+
+ static __init void create_trace_instances(struct dentry *d_tracer)
+@@ -9720,19 +9640,16 @@ static __init void create_trace_instances(struct dentry *d_tracer)
+ if (MEM_FAIL(!trace_instance_dir, "Failed to create instances directory\n"))
+ return;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ if (!tr->name)
+ continue;
+ if (MEM_FAIL(trace_array_create_dir(tr) < 0,
+ "Failed to create instance directory\n"))
+- break;
++ return;
+ }
+-
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+ }
+
+ static void
+@@ -9946,7 +9863,7 @@ static void trace_module_remove_evals(struct module *mod)
+ if (!mod->num_trace_evals)
+ return;
+
+- mutex_lock(&trace_eval_mutex);
++ guard(mutex)(&trace_eval_mutex);
+
+ map = trace_eval_maps;
+
+@@ -9958,12 +9875,10 @@ static void trace_module_remove_evals(struct module *mod)
+ map = map->tail.next;
+ }
+ if (!map)
+- goto out;
++ return;
+
+ *last = trace_eval_jmp_to_tail(map)->tail.next;
+ kfree(map);
+- out:
+- mutex_unlock(&trace_eval_mutex);
+ }
+ #else
+ static inline void trace_module_remove_evals(struct module *mod) { }
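
Aside: most of the trace.c hunks above convert goto-based unlock/free exit paths to the scope-based helpers from <linux/cleanup.h>. A minimal sketch of the two idioms as used in the patch — guard() for locks and __free()/no_free_ptr() for allocations — assuming ordinary kernel code; the names here are illustrative, not from the patch:

	#include <linux/cleanup.h>
	#include <linux/mutex.h>
	#include <linux/slab.h>

	static DEFINE_MUTEX(demo_lock);

	static int demo_store(int val, int **out)
	{
		/* kfree() runs automatically unless ownership is transferred */
		int *p __free(kfree) = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return -ENOMEM;

		/* the mutex is dropped on every return path below */
		guard(mutex)(&demo_lock);

		if (val < 0)
			return -EINVAL;	/* p freed, lock released */

		*p = val;
		*out = no_free_ptr(p);	/* disarm cleanup; caller now owns p */
		return 0;
	}

This is why the hunks above can simply `return -EBUSY;` where they previously needed `goto fail_unlock;` labels.
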
+diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
+index 3b0cea37e0297b..fbbc3c719d2f68 100644
+--- a/kernel/trace/trace_functions.c
++++ b/kernel/trace/trace_functions.c
+@@ -193,7 +193,7 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
+ if (bit < 0)
+ return;
+
+- trace_ctx = tracing_gen_ctx();
++ trace_ctx = tracing_gen_ctx_dec();
+
+ cpu = smp_processor_id();
+ data = per_cpu_ptr(tr->array_buffer.data, cpu);
+@@ -298,7 +298,6 @@ function_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
+ struct trace_array *tr = op->private;
+ struct trace_array_cpu *data;
+ unsigned int trace_ctx;
+- unsigned long flags;
+ int bit;
+ int cpu;
+
+@@ -325,8 +324,7 @@ function_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
+ if (is_repeat_check(tr, last_info, ip, parent_ip))
+ goto out;
+
+- local_save_flags(flags);
+- trace_ctx = tracing_gen_ctx_flags(flags);
++ trace_ctx = tracing_gen_ctx_dec();
+ process_repeats(tr, ip, parent_ip, last_info, trace_ctx);
+
+ trace_function(tr, ip, parent_ip, trace_ctx);
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 908e75a28d90bd..bdb37d572e97ca 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1428,6 +1428,8 @@ static ssize_t __import_iovec_ubuf(int type, const struct iovec __user *uvec,
+ struct iovec *iov = *iovp;
+ ssize_t ret;
+
++ *iovp = NULL;
++
+ if (compat)
+ ret = copy_compat_iovec_from_user(iov, uvec, 1);
+ else
+@@ -1438,7 +1440,6 @@ static ssize_t __import_iovec_ubuf(int type, const struct iovec __user *uvec,
+ ret = import_ubuf(type, iov->iov_base, iov->iov_len, i);
+ if (unlikely(ret))
+ return ret;
+- *iovp = NULL;
+ return i->count;
+ }
+
+diff --git a/mm/madvise.c b/mm/madvise.c
+index ff139e57cca292..c211e8fa4e49bb 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -920,7 +920,16 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
+ */
+ end = vma->vm_end;
+ }
+- VM_WARN_ON(start >= end);
++ /*
++ * If the memory region between start and end was
++ * originally backed by 4kB pages and then remapped to
++ * be backed by hugepages while mmap_lock was dropped,
++ * the adjustment for hugetlb vma above may have rounded
++ * end down to the start address.
++ */
++ if (start == end)
++ return 0;
++ VM_WARN_ON(start > end);
+ }
+
+ if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED)
+diff --git a/mm/migrate_device.c b/mm/migrate_device.c
+index 9cf26592ac934d..5bd888223cc8b8 100644
+--- a/mm/migrate_device.c
++++ b/mm/migrate_device.c
+@@ -840,20 +840,15 @@ void migrate_device_finalize(unsigned long *src_pfns,
+ dst = src;
+ }
+
++ if (!folio_is_zone_device(dst))
++ folio_add_lru(dst);
+ remove_migration_ptes(src, dst, 0);
+ folio_unlock(src);
+-
+- if (folio_is_zone_device(src))
+- folio_put(src);
+- else
+- folio_putback_lru(src);
++ folio_put(src);
+
+ if (dst != src) {
+ folio_unlock(dst);
+- if (folio_is_zone_device(dst))
+- folio_put(dst);
+- else
+- folio_putback_lru(dst);
++ folio_put(dst);
+ }
+ }
+ }
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 501ec4249fedc3..8612023bec60dc 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -660,12 +660,9 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
+ void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
+ void *data;
+
+- if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom)
++ if (user_size < ETH_HLEN || user_size > PAGE_SIZE - headroom - tailroom)
+ return ERR_PTR(-EINVAL);
+
+- if (user_size > size)
+- return ERR_PTR(-EMSGSIZE);
+-
+ size = SKB_DATA_ALIGN(size);
+ data = kzalloc(size + headroom + tailroom, GFP_USER);
+ if (!data)
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2e0fe38d0e877d..c761f862bc5a2d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1012,6 +1012,12 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
+ return ret;
+ }
+
++static bool dev_addr_cmp(struct net_device *dev, unsigned short type,
++ const char *ha)
++{
++ return dev->type == type && !memcmp(dev->dev_addr, ha, dev->addr_len);
++}
++
+ /**
+ * dev_getbyhwaddr_rcu - find a device by its hardware address
+ * @net: the applicable net namespace
+@@ -1020,7 +1026,7 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
+ *
+ * Search for an interface by MAC address. Returns NULL if the device
+ * is not found or a pointer to the device.
+- * The caller must hold RCU or RTNL.
++ * The caller must hold RCU.
+ * The returned device has not had its ref count increased
+ * and the caller must therefore be careful about locking
+ *
+@@ -1032,14 +1038,39 @@ struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
+ struct net_device *dev;
+
+ for_each_netdev_rcu(net, dev)
+- if (dev->type == type &&
+- !memcmp(dev->dev_addr, ha, dev->addr_len))
++ if (dev_addr_cmp(dev, type, ha))
+ return dev;
+
+ return NULL;
+ }
+ EXPORT_SYMBOL(dev_getbyhwaddr_rcu);
+
++/**
++ * dev_getbyhwaddr() - find a device by its hardware address
++ * @net: the applicable net namespace
++ * @type: media type of device
++ * @ha: hardware address
++ *
++ * Similar to dev_getbyhwaddr_rcu(), but the owner needs to hold
++ * rtnl_lock.
++ *
++ * Context: rtnl_lock() must be held.
++ * Return: pointer to the net_device, or NULL if not found
++ */
++struct net_device *dev_getbyhwaddr(struct net *net, unsigned short type,
++ const char *ha)
++{
++ struct net_device *dev;
++
++ ASSERT_RTNL();
++ for_each_netdev(net, dev)
++ if (dev_addr_cmp(dev, type, ha))
++ return dev;
++
++ return NULL;
++}
++EXPORT_SYMBOL(dev_getbyhwaddr);
++
+ struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type)
+ {
+ struct net_device *dev, *ret = NULL;
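
Aside: the split above leaves dev_getbyhwaddr_rcu() for RCU readers and adds dev_getbyhwaddr() for rtnl_lock holders (which the arp.c hunk later in this patch switches to). A hedged sketch of the two calling conventions — the demo_* callers are hypothetical, not from the patch:

	#include <linux/if_arp.h>
	#include <linux/netdevice.h>
	#include <linux/rcupdate.h>
	#include <linux/rtnetlink.h>

	/* Fast path: no refcount taken, result only valid inside the section */
	static bool demo_mac_present(struct net *net, const char *ha)
	{
		bool found;

		rcu_read_lock();
		found = dev_getbyhwaddr_rcu(net, ARPHRD_ETHER, ha) != NULL;
		rcu_read_unlock();
		return found;
	}

	/* Slow path: rtnl_lock pins the device list for the duration */
	static bool demo_mac_present_rtnl(struct net *net, const char *ha)
	{
		bool found;

		rtnl_lock();
		found = dev_getbyhwaddr(net, ARPHRD_ETHER, ha) != NULL;
		rtnl_unlock();
		return found;
	}
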
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 6efd4cccc9ddd2..212f0a048cab68 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -1734,30 +1734,30 @@ static int __init init_net_drop_monitor(void)
+ return -ENOSPC;
+ }
+
+- rc = genl_register_family(&net_drop_monitor_family);
+- if (rc) {
+- pr_err("Could not create drop monitor netlink family\n");
+- return rc;
++ for_each_possible_cpu(cpu) {
++ net_dm_cpu_data_init(cpu);
++ net_dm_hw_cpu_data_init(cpu);
+ }
+- WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
+
+ rc = register_netdevice_notifier(&dropmon_net_notifier);
+ if (rc < 0) {
+ pr_crit("Failed to register netdevice notifier\n");
++ return rc;
++ }
++
++ rc = genl_register_family(&net_drop_monitor_family);
++ if (rc) {
++ pr_err("Could not create drop monitor netlink family\n");
+ goto out_unreg;
+ }
++ WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
+
+ rc = 0;
+
+- for_each_possible_cpu(cpu) {
+- net_dm_cpu_data_init(cpu);
+- net_dm_hw_cpu_data_init(cpu);
+- }
+-
+ goto out;
+
+ out_unreg:
+- genl_unregister_family(&net_drop_monitor_family);
++ WARN_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
+ out:
+ return rc;
+ }
+@@ -1766,19 +1766,18 @@ static void exit_net_drop_monitor(void)
+ {
+ int cpu;
+
+- BUG_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
+-
+ /*
+ * Because of the module_get/put we do in the trace state change path
+ * we are guaranteed not to have any current users when we get here
+ */
++ BUG_ON(genl_unregister_family(&net_drop_monitor_family));
++
++ BUG_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
+
+ for_each_possible_cpu(cpu) {
+ net_dm_hw_cpu_data_fini(cpu);
+ net_dm_cpu_data_fini(cpu);
+ }
+-
+- BUG_ON(genl_unregister_family(&net_drop_monitor_family));
+ }
+
+ module_init(init_net_drop_monitor);
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 5db41bf2ed93e0..9cd8de6bebb543 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -853,23 +853,30 @@ __skb_flow_dissect_ports(const struct sk_buff *skb,
+ void *target_container, const void *data,
+ int nhoff, u8 ip_proto, int hlen)
+ {
+- enum flow_dissector_key_id dissector_ports = FLOW_DISSECTOR_KEY_MAX;
+- struct flow_dissector_key_ports *key_ports;
++ struct flow_dissector_key_ports_range *key_ports_range = NULL;
++ struct flow_dissector_key_ports *key_ports = NULL;
++ __be32 ports;
+
+ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS))
+- dissector_ports = FLOW_DISSECTOR_KEY_PORTS;
+- else if (dissector_uses_key(flow_dissector,
+- FLOW_DISSECTOR_KEY_PORTS_RANGE))
+- dissector_ports = FLOW_DISSECTOR_KEY_PORTS_RANGE;
++ key_ports = skb_flow_dissector_target(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS,
++ target_container);
+
+- if (dissector_ports == FLOW_DISSECTOR_KEY_MAX)
++ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS_RANGE))
++ key_ports_range = skb_flow_dissector_target(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS_RANGE,
++ target_container);
++
++ if (!key_ports && !key_ports_range)
+ return;
+
+- key_ports = skb_flow_dissector_target(flow_dissector,
+- dissector_ports,
+- target_container);
+- key_ports->ports = __skb_flow_get_ports(skb, nhoff, ip_proto,
+- data, hlen);
++ ports = __skb_flow_get_ports(skb, nhoff, ip_proto, data, hlen);
++
++ if (key_ports)
++ key_ports->ports = ports;
++
++ if (key_ports_range)
++ key_ports_range->tp.ports = ports;
+ }
+
+ static void
+@@ -924,6 +931,7 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
+ struct flow_dissector *flow_dissector,
+ void *target_container)
+ {
++ struct flow_dissector_key_ports_range *key_ports_range = NULL;
+ struct flow_dissector_key_ports *key_ports = NULL;
+ struct flow_dissector_key_control *key_control;
+ struct flow_dissector_key_basic *key_basic;
+@@ -968,20 +976,21 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
+ key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ }
+
+- if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS))
++ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS)) {
+ key_ports = skb_flow_dissector_target(flow_dissector,
+ FLOW_DISSECTOR_KEY_PORTS,
+ target_container);
+- else if (dissector_uses_key(flow_dissector,
+- FLOW_DISSECTOR_KEY_PORTS_RANGE))
+- key_ports = skb_flow_dissector_target(flow_dissector,
+- FLOW_DISSECTOR_KEY_PORTS_RANGE,
+- target_container);
+-
+- if (key_ports) {
+ key_ports->src = flow_keys->sport;
+ key_ports->dst = flow_keys->dport;
+ }
++ if (dissector_uses_key(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS_RANGE)) {
++ key_ports_range = skb_flow_dissector_target(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS_RANGE,
++ target_container);
++ key_ports_range->tp.src = flow_keys->sport;
++ key_ports_range->tp.dst = flow_keys->dport;
++ }
+
+ if (dissector_uses_key(flow_dissector,
+ FLOW_DISSECTOR_KEY_FLOW_LABEL)) {
+diff --git a/net/core/gro.c b/net/core/gro.c
+index d1f44084e978fb..78b320b6317445 100644
+--- a/net/core/gro.c
++++ b/net/core/gro.c
+@@ -7,9 +7,6 @@
+
+ #define MAX_GRO_SKBS 8
+
+-/* This should be increased if a protocol with a bigger head is added. */
+-#define GRO_MAX_HEAD (MAX_HEADER + 128)
+-
+ static DEFINE_SPINLOCK(offload_lock);
+
+ /**
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 74149dc4ee318d..61a950f13a91c7 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -69,6 +69,7 @@
+ #include <net/dst.h>
+ #include <net/sock.h>
+ #include <net/checksum.h>
++#include <net/gro.h>
+ #include <net/gso.h>
+ #include <net/hotdata.h>
+ #include <net/ip6_checksum.h>
+@@ -95,7 +96,9 @@
+ static struct kmem_cache *skbuff_ext_cache __ro_after_init;
+ #endif
+
+-#define SKB_SMALL_HEAD_SIZE SKB_HEAD_ALIGN(MAX_TCP_HEADER)
++#define GRO_MAX_HEAD_PAD (GRO_MAX_HEAD + NET_SKB_PAD + NET_IP_ALIGN)
++#define SKB_SMALL_HEAD_SIZE SKB_HEAD_ALIGN(max(MAX_TCP_HEADER, \
++ GRO_MAX_HEAD_PAD))
+
+ /* We want SKB_SMALL_HEAD_CACHE_SIZE to not be a power of two.
+ * This should ensure that SKB_SMALL_HEAD_HEADROOM is a unique
+@@ -736,7 +739,7 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
+ /* If requested length is either too small or too big,
+ * we use kmalloc() for skb->head allocation.
+ */
+- if (len <= SKB_WITH_OVERHEAD(1024) ||
++ if (len <= SKB_WITH_OVERHEAD(SKB_SMALL_HEAD_CACHE_SIZE) ||
+ len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
+ (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+ skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
+@@ -816,7 +819,8 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
+ * When the small frag allocator is available, prefer it over kmalloc
+ * for small fragments
+ */
+- if ((!NAPI_HAS_SMALL_PAGE_FRAG && len <= SKB_WITH_OVERHEAD(1024)) ||
++ if ((!NAPI_HAS_SMALL_PAGE_FRAG &&
++ len <= SKB_WITH_OVERHEAD(SKB_SMALL_HEAD_CACHE_SIZE)) ||
+ len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
+ (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+ skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX | SKB_ALLOC_NAPI,
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8ad7e6755fd642..f76cbf49c68c8d 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -548,6 +548,9 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ return num_sge;
+ }
+
++#if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
++ psock->ingress_bytes += len;
++#endif
+ copied = len;
+ msg->sg.start = 0;
+ msg->sg.size = copied;
+@@ -1143,6 +1146,10 @@ int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock)
+ if (!ret)
+ sk_psock_set_state(psock, SK_PSOCK_RX_STRP_ENABLED);
+
++ if (sk_is_tcp(sk)) {
++ psock->strp.cb.read_sock = tcp_bpf_strp_read_sock;
++ psock->copied_seq = tcp_sk(sk)->copied_seq;
++ }
+ return ret;
+ }
+
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index f1b9b3958792cd..82a14f131d00c6 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -303,7 +303,10 @@ static int sock_map_link(struct bpf_map *map, struct sock *sk)
+
+ write_lock_bh(&sk->sk_callback_lock);
+ if (stream_parser && stream_verdict && !psock->saved_data_ready) {
+- ret = sk_psock_init_strp(sk, psock);
++ if (sk_is_tcp(sk))
++ ret = sk_psock_init_strp(sk, psock);
++ else
++ ret = -EOPNOTSUPP;
+ if (ret) {
+ write_unlock_bh(&sk->sk_callback_lock);
+ sk_psock_put(sk, psock);
+@@ -541,6 +544,9 @@ static bool sock_map_sk_state_allowed(const struct sock *sk)
+ return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
+ if (sk_is_stream_unix(sk))
+ return (1 << sk->sk_state) & TCPF_ESTABLISHED;
++ if (sk_is_vsock(sk) &&
++ (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET))
++ return (1 << sk->sk_state) & TCPF_ESTABLISHED;
+ return true;
+ }
+
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 59ffaa89d7b05f..8fb48f42581ce1 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1077,7 +1077,7 @@ static int arp_req_set_public(struct net *net, struct arpreq *r,
+ __be32 mask = ((struct sockaddr_in *)&r->arp_netmask)->sin_addr.s_addr;
+
+ if (!dev && (r->arp_flags & ATF_COM)) {
+- dev = dev_getbyhwaddr_rcu(net, r->arp_ha.sa_family,
++ dev = dev_getbyhwaddr(net, r->arp_ha.sa_family,
+ r->arp_ha.sa_data);
+ if (!dev)
+ return -ENODEV;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 4f77bd862e957f..68cb6a966b18b8 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1564,12 +1564,13 @@ EXPORT_SYMBOL(tcp_recv_skb);
+ * or for 'peeking' the socket using this routine
+ * (although both would be easy to implement).
+ */
+-int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+- sk_read_actor_t recv_actor)
++static int __tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor, bool noack,
++ u32 *copied_seq)
+ {
+ struct sk_buff *skb;
+ struct tcp_sock *tp = tcp_sk(sk);
+- u32 seq = tp->copied_seq;
++ u32 seq = *copied_seq;
+ u32 offset;
+ int copied = 0;
+
+@@ -1623,9 +1624,12 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ tcp_eat_recv_skb(sk, skb);
+ if (!desc->count)
+ break;
+- WRITE_ONCE(tp->copied_seq, seq);
++ WRITE_ONCE(*copied_seq, seq);
+ }
+- WRITE_ONCE(tp->copied_seq, seq);
++ WRITE_ONCE(*copied_seq, seq);
++
++ if (noack)
++ goto out;
+
+ tcp_rcv_space_adjust(sk);
+
+@@ -1634,10 +1638,25 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ tcp_recv_skb(sk, seq, &offset);
+ tcp_cleanup_rbuf(sk, copied);
+ }
++out:
+ return copied;
+ }
++
++int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor)
++{
++ return __tcp_read_sock(sk, desc, recv_actor, false,
++ &tcp_sk(sk)->copied_seq);
++}
+ EXPORT_SYMBOL(tcp_read_sock);
+
++int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor, bool noack,
++ u32 *copied_seq)
++{
++ return __tcp_read_sock(sk, desc, recv_actor, noack, copied_seq);
++}
++
+ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ {
+ struct sk_buff *skb;
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 392678ae80f4ed..22e8a2af5dd8b0 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -646,6 +646,42 @@ static int tcp_bpf_assert_proto_ops(struct proto *ops)
+ ops->sendmsg == tcp_sendmsg ? 0 : -ENOTSUPP;
+ }
+
++#if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
++int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor)
++{
++ struct sock *sk = strp->sk;
++ struct sk_psock *psock;
++ struct tcp_sock *tp;
++ int copied = 0;
++
++ tp = tcp_sk(sk);
++ rcu_read_lock();
++ psock = sk_psock(sk);
++ if (WARN_ON_ONCE(!psock)) {
++ desc->error = -EINVAL;
++ goto out;
++ }
++
++ psock->ingress_bytes = 0;
++ copied = tcp_read_sock_noack(sk, desc, recv_actor, true,
++ &psock->copied_seq);
++ if (copied < 0)
++ goto out;
++ /* recv_actor may redirect skb to another socket (SK_REDIRECT) or
++ * just put skb into ingress queue of current socket (SK_PASS).
++ * For SK_REDIRECT, we need to ack the frame immediately but for
++ * SK_PASS, we want to delay the ack until tcp_bpf_recvmsg_parser().
++ */
++ tp->copied_seq = psock->copied_seq - psock->ingress_bytes;
++ tcp_rcv_space_adjust(sk);
++ __tcp_cleanup_rbuf(sk, copied - psock->ingress_bytes);
++out:
++ rcu_read_unlock();
++ return copied;
++}
++#endif /* CONFIG_BPF_STREAM_PARSER */
++
+ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
+ {
+ int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 0f523cbfe329ef..32b28fc21b63c0 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -178,7 +178,7 @@ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb)
+ if (!skb)
+ return;
+
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+ /* segs_in has been initialized to 1 in tcp_create_openreq_child().
+ * Hence, reset segs_in to 0 before calling tcp_segs_in()
+ * to avoid double counting. Also, tcp_segs_in() expects
+@@ -195,7 +195,7 @@ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb)
+ TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_SYN;
+
+ tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+- __skb_queue_tail(&sk->sk_receive_queue, skb);
++ tcp_add_receive_queue(sk, skb);
+ tp->syn_data_acked = 1;
+
+ /* u64_stats_update_begin(&tp->syncp) not needed here,
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 2d43b29da15e20..d93a5a89c5692d 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -243,9 +243,15 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
+ do_div(val, skb->truesize);
+ tcp_sk(sk)->scaling_ratio = val ? val : 1;
+
+- if (old_ratio != tcp_sk(sk)->scaling_ratio)
+- WRITE_ONCE(tcp_sk(sk)->window_clamp,
+- tcp_win_from_space(sk, sk->sk_rcvbuf));
++ if (old_ratio != tcp_sk(sk)->scaling_ratio) {
++ struct tcp_sock *tp = tcp_sk(sk);
++
++ val = tcp_win_from_space(sk, sk->sk_rcvbuf);
++ tcp_set_window_clamp(sk, val);
++
++ if (tp->window_clamp < tp->rcvq_space.space)
++ tp->rcvq_space.space = tp->window_clamp;
++ }
+ }
+ icsk->icsk_ack.rcv_mss = min_t(unsigned int, len,
+ tcp_sk(sk)->advmss);
+@@ -4964,7 +4970,7 @@ static void tcp_ofo_queue(struct sock *sk)
+ tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
+ fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
+ if (!eaten)
+- __skb_queue_tail(&sk->sk_receive_queue, skb);
++ tcp_add_receive_queue(sk, skb);
+ else
+ kfree_skb_partial(skb, fragstolen);
+
+@@ -5156,7 +5162,7 @@ static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
+ skb, fragstolen)) ? 1 : 0;
+ tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq);
+ if (!eaten) {
+- __skb_queue_tail(&sk->sk_receive_queue, skb);
++ tcp_add_receive_queue(sk, skb);
+ skb_set_owner_r(skb, sk);
+ }
+ return eaten;
+@@ -5239,7 +5245,7 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
+ __kfree_skb(skb);
+ return;
+ }
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+ __skb_pull(skb, tcp_hdr(skb)->doff * 4);
+
+ reason = SKB_DROP_REASON_NOT_SPECIFIED;
+@@ -6208,7 +6214,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPHITS);
+
+ /* Bulk data transfer: receiver */
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+ __skb_pull(skb, tcp_header_len);
+ eaten = tcp_queue_rcv(sk, skb, &fragstolen);
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index bcc2f1e090c7db..824048679e1b8f 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2025,7 +2025,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
+ */
+ skb_condense(skb);
+
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+
+ if (unlikely(tcp_checksum_complete(skb))) {
+ bh_unlock_sock(sk);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index dfa3067084948f..998ea3b5badfce 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -97,7 +97,7 @@ tcf_exts_miss_cookie_base_alloc(struct tcf_exts *exts, struct tcf_proto *tp,
+
+ err = xa_alloc_cyclic(&tcf_exts_miss_cookies_xa, &n->miss_cookie_base,
+ n, xa_limit_32b, &next, GFP_KERNEL);
+- if (err)
++ if (err < 0)
+ goto err_xa_alloc;
+
+ exts->miss_cookie_node = n;
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index 8299ceb3e3739d..95696f42647ec1 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -347,7 +347,10 @@ static int strp_read_sock(struct strparser *strp)
+ struct socket *sock = strp->sk->sk_socket;
+ read_descriptor_t desc;
+
+- if (unlikely(!sock || !sock->ops || !sock->ops->read_sock))
++ if (unlikely(!sock || !sock->ops))
++ return -EBUSY;
++
++ if (unlikely(!strp->cb.read_sock && !sock->ops->read_sock))
+ return -EBUSY;
+
+ desc.arg.data = strp;
+@@ -355,7 +358,10 @@ static int strp_read_sock(struct strparser *strp)
+ desc.count = 1; /* give more than one skb per call */
+
+ /* sk should be locked here, so okay to do read_sock */
+- sock->ops->read_sock(strp->sk, &desc, strp_recv);
++ if (strp->cb.read_sock)
++ strp->cb.read_sock(strp, &desc, strp_recv);
++ else
++ sock->ops->read_sock(strp->sk, &desc, strp_recv);
+
+ desc.error = strp->cb.read_sock_done(strp, desc.error);
+
+@@ -468,6 +474,7 @@ int strp_init(struct strparser *strp, struct sock *sk,
+ strp->cb.unlock = cb->unlock ? : strp_sock_unlock;
+ strp->cb.rcv_msg = cb->rcv_msg;
+ strp->cb.parse_msg = cb->parse_msg;
++ strp->cb.read_sock = cb->read_sock;
+ strp->cb.read_sock_done = cb->read_sock_done ? : default_read_sock_done;
+ strp->cb.abort_parser = cb->abort_parser ? : strp_abort_strp;
+
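
Aside: with the strp_callbacks.read_sock hook wired through strp_init() above, a protocol can substitute its own socket reader, and strparser falls back to sock->ops->read_sock when the hook is NULL. A hedged sketch of registering the hook — the my_* names are hypothetical, not from the patch:

	#include <linux/skbuff.h>
	#include <net/strparser.h>

	static int my_parse_msg(struct strparser *strp, struct sk_buff *skb)
	{
		return skb->len;	/* treat each skb as one whole message */
	}

	static void my_rcv_msg(struct strparser *strp, struct sk_buff *skb)
	{
		kfree_skb(skb);		/* consume the parsed message */
	}

	static int my_read_sock(struct strparser *strp, read_descriptor_t *desc,
				sk_read_actor_t recv_actor)
	{
		/* Delegate to the socket's default reader here; the patch's
		 * tcp_bpf_strp_read_sock() instead reads without ACKing so
		 * SK_PASS data can be acknowledged later. */
		return strp->sk->sk_socket->ops->read_sock(strp->sk, desc,
							   recv_actor);
	}

	static const struct strp_callbacks my_cb = {
		.rcv_msg   = my_rcv_msg,
		.parse_msg = my_parse_msg,
		.read_sock = my_read_sock,	/* NULL keeps old behaviour */
	};
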
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 37299a7ca1876e..eb6ea26b390ee8 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1189,6 +1189,9 @@ static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
+ {
+ struct vsock_sock *vsk = vsock_sk(sk);
+
++ if (WARN_ON_ONCE(!vsk->transport))
++ return -ENODEV;
++
+ return vsk->transport->read_skb(vsk, read_actor);
+ }
+
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index b58c3818f284f1..f0e48e6911fc46 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -670,6 +670,13 @@ static int virtio_vsock_vqs_init(struct virtio_vsock *vsock)
+ };
+ int ret;
+
++ mutex_lock(&vsock->rx_lock);
++ vsock->rx_buf_nr = 0;
++ vsock->rx_buf_max_nr = 0;
++ mutex_unlock(&vsock->rx_lock);
++
++ atomic_set(&vsock->queued_replies, 0);
++
+ ret = virtio_find_vqs(vdev, VSOCK_VQ_MAX, vsock->vqs, vqs_info, NULL);
+ if (ret < 0)
+ return ret;
+@@ -779,9 +786,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
+
+ vsock->vdev = vdev;
+
+- vsock->rx_buf_nr = 0;
+- vsock->rx_buf_max_nr = 0;
+- atomic_set(&vsock->queued_replies, 0);
+
+ mutex_init(&vsock->tx_lock);
+ mutex_init(&vsock->rx_lock);
+diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
+index f201d9eca1df2f..07b96d56f3a577 100644
+--- a/net/vmw_vsock/vsock_bpf.c
++++ b/net/vmw_vsock/vsock_bpf.c
+@@ -87,7 +87,7 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ lock_sock(sk);
+ vsk = vsock_sk(sk);
+
+- if (!vsk->transport) {
++ if (WARN_ON_ONCE(!vsk->transport)) {
+ copied = -ENODEV;
+ goto out;
+ }
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 77b6ac9b5c11bc..9955c4d54e42a7 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -678,12 +678,18 @@ static int snd_seq_deliver_single_event(struct snd_seq_client *client,
+ dest_port->time_real);
+
+ #if IS_ENABLED(CONFIG_SND_SEQ_UMP)
+- if (!(dest->filter & SNDRV_SEQ_FILTER_NO_CONVERT)) {
+- if (snd_seq_ev_is_ump(event)) {
++ if (snd_seq_ev_is_ump(event)) {
++ if (!(dest->filter & SNDRV_SEQ_FILTER_NO_CONVERT)) {
+ result = snd_seq_deliver_from_ump(client, dest, dest_port,
+ event, atomic, hop);
+ goto __skip;
+- } else if (snd_seq_client_is_ump(dest)) {
++ } else if (dest->type == USER_CLIENT &&
++ !snd_seq_client_is_ump(dest)) {
++ result = 0; // drop the event
++ goto __skip;
++ }
++ } else if (snd_seq_client_is_ump(dest)) {
++ if (!(dest->filter & SNDRV_SEQ_FILTER_NO_CONVERT)) {
+ result = snd_seq_deliver_to_ump(client, dest, dest_port,
+ event, atomic, hop);
+ goto __skip;
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 14763c0f31ad9f..46a2204049993d 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2470,7 +2470,9 @@ int snd_hda_create_dig_out_ctls(struct hda_codec *codec,
+ break;
+ id = kctl->id;
+ id.index = spdif_index;
+- snd_ctl_rename_id(codec->card, &kctl->id, &id);
++ err = snd_ctl_rename_id(codec->card, &kctl->id, &id);
++ if (err < 0)
++ return err;
+ }
+ bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI;
+ }
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 538c37a78a56f7..84ab357b840d67 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1080,6 +1080,7 @@ static const struct hda_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
++ SND_PCI_QUIRK(0x103c, 0x8231, "HP ProBook 450 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
+index 759f48038273df..621f947e38174d 100644
+--- a/sound/pci/hda/patch_cs8409-tables.c
++++ b/sound/pci/hda/patch_cs8409-tables.c
+@@ -121,7 +121,7 @@ static const struct cs8409_i2c_param cs42l42_init_reg_seq[] = {
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3f },
+- { CS42L42_HP_CTL, 0x03 },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_MIC_DET_CTL1, 0xB6 },
+ { CS42L42_TIPSENSE_CTL, 0xC2 },
+ { CS42L42_HS_CLAMP_DISABLE, 0x01 },
+@@ -315,7 +315,7 @@ static const struct cs8409_i2c_param dolphin_c0_init_reg_seq[] = {
+ { CS42L42_ASP_TX_SZ_EN, 0x01 },
+ { CS42L42_PWR_CTL1, 0x0A },
+ { CS42L42_PWR_CTL2, 0x84 },
+- { CS42L42_HP_CTL, 0x03 },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3f },
+@@ -371,7 +371,7 @@ static const struct cs8409_i2c_param dolphin_c1_init_reg_seq[] = {
+ { CS42L42_ASP_TX_SZ_EN, 0x00 },
+ { CS42L42_PWR_CTL1, 0x0E },
+ { CS42L42_PWR_CTL2, 0x84 },
+- { CS42L42_HP_CTL, 0x01 },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3f },
+diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
+index 614327218634c0..b760332a4e3577 100644
+--- a/sound/pci/hda/patch_cs8409.c
++++ b/sound/pci/hda/patch_cs8409.c
+@@ -876,7 +876,7 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
+ { CS42L42_DET_INT_STATUS2, 0x00 },
+ { CS42L42_TSRS_PLUG_STATUS, 0x00 },
+ };
+- int fsv_old, fsv_new;
++ unsigned int fsv;
+
+ /* Bring CS42L42 out of Reset */
+ spec->gpio_data = snd_hda_codec_read(codec, CS8409_PIN_AFG, 0, AC_VERB_GET_GPIO_DATA, 0);
+@@ -893,13 +893,15 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
+ /* Clear interrupts, by reading interrupt status registers */
+ cs8409_i2c_bulk_read(cs42l42, irq_regs, ARRAY_SIZE(irq_regs));
+
+- fsv_old = cs8409_i2c_read(cs42l42, CS42L42_HP_CTL);
+- if (cs42l42->full_scale_vol == CS42L42_FULL_SCALE_VOL_0DB)
+- fsv_new = fsv_old & ~CS42L42_FULL_SCALE_VOL_MASK;
+- else
+- fsv_new = fsv_old & CS42L42_FULL_SCALE_VOL_MASK;
+- if (fsv_new != fsv_old)
+- cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv_new);
++ fsv = cs8409_i2c_read(cs42l42, CS42L42_HP_CTL);
++ if (cs42l42->full_scale_vol) {
++ // Set the full scale volume bit
++ fsv |= CS42L42_FULL_SCALE_VOL_MASK;
++ cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv);
++ }
++ // Unmute analog channels A and B
++ fsv = (fsv & ~CS42L42_ANA_MUTE_AB);
++ cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv);
+
+ /* we have to explicitly allow unsol event handling even during the
+ * resume phase so that the jack event is processed properly
+@@ -920,7 +922,7 @@ static void cs42l42_suspend(struct sub_codec *cs42l42)
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+- { CS42L42_HP_CTL, 0x0F },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_ASP_RX_DAI0_EN, 0x00 },
+ { CS42L42_ASP_CLK_CFG, 0x00 },
+ { CS42L42_PWR_CTL1, 0xFE },
+diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
+index 5e48115caf096b..14645d25e70fd2 100644
+--- a/sound/pci/hda/patch_cs8409.h
++++ b/sound/pci/hda/patch_cs8409.h
+@@ -230,9 +230,10 @@ enum cs8409_coefficient_index_registers {
+ #define CS42L42_PDN_TIMEOUT_US (250000)
+ #define CS42L42_PDN_SLEEP_US (2000)
+ #define CS42L42_INIT_TIMEOUT_MS (45)
++#define CS42L42_ANA_MUTE_AB (0x0C)
+ #define CS42L42_FULL_SCALE_VOL_MASK (2)
+-#define CS42L42_FULL_SCALE_VOL_0DB (1)
+-#define CS42L42_FULL_SCALE_VOL_MINUS6DB (0)
++#define CS42L42_FULL_SCALE_VOL_0DB (0)
++#define CS42L42_FULL_SCALE_VOL_MINUS6DB (1)
+
+ /* Dell BULLSEYE / WARLOCK / CYBORG Specific Definitions */
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f3f849b96402d1..9bf99fe6cd34dd 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3790,6 +3790,7 @@ static void alc225_init(struct hda_codec *codec)
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
+ msleep(75);
++ alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* High power */
+ }
+ }
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 67c2d4cb0dea21..7cfe77b57b3c25 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -156,6 +156,8 @@ static int micfil_set_quality(struct fsl_micfil *micfil)
+ case QUALITY_VLOW2:
+ qsel = MICFIL_QSEL_VLOW2_QUALITY;
+ break;
++ default:
++ return -EINVAL;
+ }
+
+ return regmap_update_bits(micfil->regmap, REG_MICFIL_CTRL2,
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 8e7b75cf64db42..ff3671226306bd 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -23,7 +23,6 @@ struct imx_audmix {
+ struct snd_soc_card card;
+ struct platform_device *audmix_pdev;
+ struct platform_device *out_pdev;
+- struct clk *cpu_mclk;
+ int num_dai;
+ struct snd_soc_dai_link *dai;
+ int num_dai_conf;
+@@ -32,34 +31,11 @@ struct imx_audmix {
+ struct snd_soc_dapm_route *dapm_routes;
+ };
+
+-static const u32 imx_audmix_rates[] = {
+- 8000, 12000, 16000, 24000, 32000, 48000, 64000, 96000,
+-};
+-
+-static const struct snd_pcm_hw_constraint_list imx_audmix_rate_constraints = {
+- .count = ARRAY_SIZE(imx_audmix_rates),
+- .list = imx_audmix_rates,
+-};
+-
+ static int imx_audmix_fe_startup(struct snd_pcm_substream *substream)
+ {
+- struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+- struct imx_audmix *priv = snd_soc_card_get_drvdata(rtd->card);
+ struct snd_pcm_runtime *runtime = substream->runtime;
+- struct device *dev = rtd->card->dev;
+- unsigned long clk_rate = clk_get_rate(priv->cpu_mclk);
+ int ret;
+
+- if (clk_rate % 24576000 == 0) {
+- ret = snd_pcm_hw_constraint_list(runtime, 0,
+- SNDRV_PCM_HW_PARAM_RATE,
+- &imx_audmix_rate_constraints);
+- if (ret < 0)
+- return ret;
+- } else {
+- dev_warn(dev, "mclk may be not supported %lu\n", clk_rate);
+- }
+-
+ ret = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 1, 8);
+ if (ret < 0)
+@@ -325,13 +301,6 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ }
+ put_device(&cpu_pdev->dev);
+
+- priv->cpu_mclk = devm_clk_get(&cpu_pdev->dev, "mclk1");
+- if (IS_ERR(priv->cpu_mclk)) {
+- ret = PTR_ERR(priv->cpu_mclk);
+- dev_err(&cpu_pdev->dev, "failed to get DAI mclk1: %d\n", ret);
+- return ret;
+- }
+-
+ priv->audmix_pdev = audmix_pdev;
+ priv->out_pdev = cpu_pdev;
+
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index acd75e48851fcf..7feefeb6b876dc 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -451,11 +451,11 @@ static int rockchip_i2s_tdm_set_fmt(struct snd_soc_dai *cpu_dai,
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ val = I2S_TXCR_TFS_TDM_PCM;
+- tdm_val = TDM_SHIFT_CTRL(0);
++ tdm_val = TDM_SHIFT_CTRL(2);
+ break;
+ case SND_SOC_DAIFMT_DSP_B:
+ val = I2S_TXCR_TFS_TDM_PCM;
+- tdm_val = TDM_SHIFT_CTRL(2);
++ tdm_val = TDM_SHIFT_CTRL(4);
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index 32db2cead8a4ec..4f483bfa584f5b 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -416,8 +416,12 @@ static int rz_ssi_stop(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
+ rz_ssi_reg_mask_setl(ssi, SSICR, SSICR_TEN | SSICR_REN, 0);
+
+ /* Cancel all remaining DMA transactions */
+- if (rz_ssi_is_dma_enabled(ssi))
+- dmaengine_terminate_async(strm->dma_ch);
++ if (rz_ssi_is_dma_enabled(ssi)) {
++ if (ssi->playback.dma_ch)
++ dmaengine_terminate_async(ssi->playback.dma_ch);
++ if (ssi->capture.dma_ch)
++ dmaengine_terminate_async(ssi->capture.dma_ch);
++ }
+
+ rz_ssi_set_idle(ssi);
+
+@@ -524,6 +528,8 @@ static int rz_ssi_pio_send(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
+ sample_space = strm->fifo_sample_size;
+ ssifsr = rz_ssi_reg_readl(ssi, SSIFSR);
+ sample_space -= (ssifsr >> SSIFSR_TDC_SHIFT) & SSIFSR_TDC_MASK;
++ if (sample_space < 0)
++ return -EINVAL;
+
+ /* Only add full frames at a time */
+ while (frames_left && (sample_space >= runtime->channels)) {
+diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c
+index 240fee2166d125..f82db7f2a6b7e7 100644
+--- a/sound/soc/sof/ipc4-topology.c
++++ b/sound/soc/sof/ipc4-topology.c
+@@ -671,10 +671,16 @@ static int sof_ipc4_widget_setup_comp_dai(struct snd_sof_widget *swidget)
+ }
+
+ list_for_each_entry(w, &sdev->widget_list, list) {
+- if (w->widget->sname &&
++ struct snd_sof_dai *alh_dai;
++
++ if (!WIDGET_IS_DAI(w->id) || !w->widget->sname ||
+ strcmp(w->widget->sname, swidget->widget->sname))
+ continue;
+
++ alh_dai = w->private;
++ if (alh_dai->type != SOF_DAI_INTEL_ALH)
++ continue;
++
+ blob->alh_cfg.device_count++;
+ }
+
+@@ -1973,11 +1979,13 @@ sof_ipc4_prepare_copier_module(struct snd_sof_widget *swidget,
+ list_for_each_entry(w, &sdev->widget_list, list) {
+ u32 node_type;
+
+- if (w->widget->sname &&
++ if (!WIDGET_IS_DAI(w->id) || !w->widget->sname ||
+ strcmp(w->widget->sname, swidget->widget->sname))
+ continue;
+
+ dai = w->private;
++ if (dai->type != SOF_DAI_INTEL_ALH)
++ continue;
+ alh_copier = (struct sof_ipc4_copier *)dai->private;
+ alh_data = &alh_copier->data;
+ node_type = SOF_IPC4_GET_NODE_TYPE(alh_data->gtw_cfg.node_id);
+diff --git a/sound/soc/sof/pcm.c b/sound/soc/sof/pcm.c
+index 35a7462d8b6938..c5c6353f18ceef 100644
+--- a/sound/soc/sof/pcm.c
++++ b/sound/soc/sof/pcm.c
+@@ -511,6 +511,8 @@ static int sof_pcm_close(struct snd_soc_component *component,
+ */
+ }
+
++ spcm->stream[substream->stream].substream = NULL;
++
+ return 0;
+ }
+
+diff --git a/sound/soc/sof/stream-ipc.c b/sound/soc/sof/stream-ipc.c
+index 794c7bbccbaf92..8262443ac89ad1 100644
+--- a/sound/soc/sof/stream-ipc.c
++++ b/sound/soc/sof/stream-ipc.c
+@@ -43,7 +43,7 @@ int sof_ipc_msg_data(struct snd_sof_dev *sdev,
+ return -ESTRPIPE;
+
+ posn_offset = stream->posn_offset;
+- } else {
++ } else if (sps->cstream) {
+
+ struct sof_compr_stream *sstream = sps->cstream->runtime->private_data;
+
+@@ -51,6 +51,10 @@ int sof_ipc_msg_data(struct snd_sof_dev *sdev,
+ return -ESTRPIPE;
+
+ posn_offset = sstream->posn_offset;
++
++ } else {
++ dev_err(sdev->dev, "%s: No stream opened\n", __func__);
++ return -EINVAL;
+ }
+
+ snd_sof_dsp_mailbox_read(sdev, posn_offset, p, sz);
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
+index 6c3b4d4f173ac6..aeef86b3da747a 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
+@@ -40,6 +40,14 @@ DECLARE_TRACE(bpf_testmod_test_nullable_bare,
+ TP_ARGS(ctx__nullable)
+ );
+
++struct sk_buff;
++
++DECLARE_TRACE(bpf_testmod_test_raw_tp_null,
++ TP_PROTO(struct sk_buff *skb),
++ TP_ARGS(skb)
++);
++
++
+ #undef BPF_TESTMOD_DECLARE_TRACE
+ #ifdef DECLARE_TRACE_WRITABLE
+ #define BPF_TESTMOD_DECLARE_TRACE(call, proto, args, size) \
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+index 8835761d9a126a..4e6a9e9c036873 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+@@ -380,6 +380,8 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
+
+ (void)bpf_testmod_test_arg_ptr_to_struct(&struct_arg1_2);
+
++ (void)trace_bpf_testmod_test_raw_tp_null(NULL);
++
+ struct_arg3 = kmalloc((sizeof(struct bpf_testmod_struct_arg_3) +
+ sizeof(int)), GFP_KERNEL);
+ if (struct_arg3 != NULL) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/raw_tp_null.c b/tools/testing/selftests/bpf/prog_tests/raw_tp_null.c
+new file mode 100644
+index 00000000000000..6fa19449297e9b
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/raw_tp_null.c
+@@ -0,0 +1,25 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
++
++#include <test_progs.h>
++#include "raw_tp_null.skel.h"
++
++void test_raw_tp_null(void)
++{
++ struct raw_tp_null *skel;
++
++ skel = raw_tp_null__open_and_load();
++ if (!ASSERT_OK_PTR(skel, "raw_tp_null__open_and_load"))
++ return;
++
++ skel->bss->tid = sys_gettid();
++
++ if (!ASSERT_OK(raw_tp_null__attach(skel), "raw_tp_null__attach"))
++ goto end;
++
++ ASSERT_OK(trigger_module_test_read(2), "trigger testmod read");
++ ASSERT_EQ(skel->bss->i, 3, "invocations");
++
++end:
++ raw_tp_null__destroy(skel);
++}
+diff --git a/tools/testing/selftests/bpf/progs/raw_tp_null.c b/tools/testing/selftests/bpf/progs/raw_tp_null.c
+new file mode 100644
+index 00000000000000..457f34c151e32f
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/raw_tp_null.c
+@@ -0,0 +1,32 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
++
++#include <vmlinux.h>
++#include <bpf/bpf_tracing.h>
++
++char _license[] SEC("license") = "GPL";
++
++int tid;
++int i;
++
++SEC("tp_btf/bpf_testmod_test_raw_tp_null")
++int BPF_PROG(test_raw_tp_null, struct sk_buff *skb)
++{
++ struct task_struct *task = bpf_get_current_task_btf();
++
++ if (task->pid != tid)
++ return 0;
++
++ i = i + skb->mark + 1;
++ /* The compiler may move the NULL check before this deref, which causes
++ * the load to fail as deref of scalar. Prevent that by using a barrier.
++ */
++ barrier();
++ /* If dead code elimination kicks in, the increment below will
++ * be removed. For raw_tp programs, we mark input arguments as
++ * PTR_MAYBE_NULL, so branch prediction should never kick in.
++ */
++ if (!skb)
++ i += 2;
++ return 0;
++}
+diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
+index 02e1204971b0a8..c0138cb19705bc 100644
+--- a/tools/testing/selftests/mm/Makefile
++++ b/tools/testing/selftests/mm/Makefile
+@@ -33,9 +33,16 @@ endif
+ # LDLIBS.
+ MAKEFLAGS += --no-builtin-rules
+
+-CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
++CFLAGS = -Wall -O2 -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
+ LDLIBS = -lrt -lpthread -lm
+
++# Some distributions (such as Ubuntu) configure GCC so that _FORTIFY_SOURCE is
++# automatically enabled at -O1 or above. This triggers various unused-result
++# warnings where functions such as read() or write() are called and their
++# return value is not checked. Disable _FORTIFY_SOURCE to silence those
++# warnings.
++CFLAGS += -U_FORTIFY_SOURCE
++
+ TEST_GEN_FILES = cow
+ TEST_GEN_FILES += compaction_test
+ TEST_GEN_FILES += gup_longterm
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-07 18:22 Mike Pagano
0 siblings, 0 replies; 31+ messages in thread
From: Mike Pagano @ 2025-03-07 18:22 UTC (permalink / raw
To: gentoo-commits
commit: d5a9b4d7acad17d938141574f2e0bd1e9d28bbf8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 7 18:22:16 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 7 18:22:16 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d5a9b4d7
Linux patch 6.12.18
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1017_linux-6.12.18.patch | 8470 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8474 insertions(+)
diff --git a/0000_README b/0000_README
index 8efc8938..85e743e9 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-6.12.17.patch
From: https://www.kernel.org
Desc: Linux 6.12.17
+Patch: 1017_linux-6.12.18.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.18
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1017_linux-6.12.18.patch b/1017_linux-6.12.18.patch
new file mode 100644
index 00000000..75258348
--- /dev/null
+++ b/1017_linux-6.12.18.patch
@@ -0,0 +1,8470 @@
+diff --git a/Makefile b/Makefile
+index e8b8c5b3840505..17dfe0a8ca8fa9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index c315bc1a4e9adf..1bf70fa1045dcd 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -1243,7 +1243,7 @@ int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
+ extern unsigned int __ro_after_init kvm_arm_vmid_bits;
+ int __init kvm_arm_vmid_alloc_init(void);
+ void __init kvm_arm_vmid_alloc_free(void);
+-bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
++void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
+ void kvm_arm_vmid_clear_active(void);
+
+ static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 117702f033218d..3cf65daa75a51f 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -580,6 +580,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ mmu = vcpu->arch.hw_mmu;
+ last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
+
++ /*
++ * Ensure a VMID is allocated for the MMU before programming VTTBR_EL2,
++ * which happens eagerly in VHE.
++ *
++ * Also, the VMID allocator only preserves VMIDs that are active at the
++ * time of rollover, so KVM might need to grab a new VMID for the MMU if
++ * this is called from kvm_sched_in().
++ */
++ kvm_arm_vmid_update(&mmu->vmid);
++
+ /*
+ * We guarantee that both TLBs and I-cache are private to each
+ * vcpu. If detecting that a vcpu from the same VM has
+@@ -1155,18 +1165,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ */
+ preempt_disable();
+
+- /*
+- * The VMID allocator only tracks active VMIDs per
+- * physical CPU, and therefore the VMID allocated may not be
+- * preserved on VMID roll-over if the task was preempted,
+- * making a thread's VMID inactive. So we need to call
+- * kvm_arm_vmid_update() in non-premptible context.
+- */
+- if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) &&
+- has_vhe())
+- __load_stage2(vcpu->arch.hw_mmu,
+- vcpu->arch.hw_mmu->arch);
+-
+ kvm_pmu_flush_hwstate(vcpu);
+
+ local_irq_disable();
+diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
+index 806223b7022afd..7fe8ba1a2851c5 100644
+--- a/arch/arm64/kvm/vmid.c
++++ b/arch/arm64/kvm/vmid.c
+@@ -135,11 +135,10 @@ void kvm_arm_vmid_clear_active(void)
+ atomic64_set(this_cpu_ptr(&active_vmids), VMID_ACTIVE_INVALID);
+ }
+
+-bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
++void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
+ {
+ unsigned long flags;
+ u64 vmid, old_active_vmid;
+- bool updated = false;
+
+ vmid = atomic64_read(&kvm_vmid->id);
+
+@@ -157,21 +156,17 @@ bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
+ if (old_active_vmid != 0 && vmid_gen_match(vmid) &&
+ 0 != atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
+ old_active_vmid, vmid))
+- return false;
++ return;
+
+ raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
+
+ /* Check that our VMID belongs to the current generation. */
+ vmid = atomic64_read(&kvm_vmid->id);
+- if (!vmid_gen_match(vmid)) {
++ if (!vmid_gen_match(vmid))
+ vmid = new_vmid(kvm_vmid);
+- updated = true;
+- }
+
+ atomic64_set(this_cpu_ptr(&active_vmids), vmid);
+ raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
+-
+- return updated;
+ }
+
+ /*
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index ea71ef2e343c2c..93ba66de160ce4 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -278,12 +278,7 @@ void __init arm64_memblock_init(void)
+
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ extern u16 memstart_offset_seed;
+-
+- /*
+- * Use the sanitised version of id_aa64mmfr0_el1 so that linear
+- * map randomization can be enabled by shrinking the IPA space.
+- */
+- u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
++ u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+ int parange = cpuid_feature_extract_unsigned_field(
+ mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+ s64 range = linear_region_size -
+diff --git a/arch/riscv/include/asm/futex.h b/arch/riscv/include/asm/futex.h
+index fc8130f995c1ee..6907c456ac8c05 100644
+--- a/arch/riscv/include/asm/futex.h
++++ b/arch/riscv/include/asm/futex.h
+@@ -93,7 +93,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %[r]) \
+ _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %[r]) \
+ : [r] "+r" (ret), [v] "=&r" (val), [u] "+m" (*uaddr), [t] "=&r" (tmp)
+- : [ov] "Jr" (oldval), [nv] "Jr" (newval)
++ : [ov] "Jr" ((long)(int)oldval), [nv] "Jr" (newval)
+ : "memory");
+ __disable_user_access();
+
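+The (long)(int) cast above matters because on RV64 the lr.w instruction
+sign-extends the loaded 32-bit value into the 64-bit register, so a
+zero-extended oldval with bit 31 set could never compare equal. A small
+host-side illustration:
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+	uint32_t oldval = 0x80000000u;		/* bit 31 set */
+	int64_t loaded = (int32_t)oldval;	/* what lr.w leaves in a register */
+
+	/* the zero-extended operand never matches the sign-extended load... */
+	printf("%d\n", (uint64_t)oldval == (uint64_t)loaded);
+	/* ...the (long)(int)-style cast makes the comparison work */
+	printf("%d\n", (uint64_t)(int64_t)(int32_t)oldval == (uint64_t)loaded);
+	return 0;
+}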
+diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
+index 2d40736fc37cec..26b085dbdd073f 100644
+--- a/arch/riscv/kernel/cacheinfo.c
++++ b/arch/riscv/kernel/cacheinfo.c
+@@ -108,11 +108,11 @@ int populate_cache_leaves(unsigned int cpu)
+ if (!np)
+ return -ENOENT;
+
+- if (of_property_read_bool(np, "cache-size"))
++ if (of_property_present(np, "cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level);
+- if (of_property_read_bool(np, "i-cache-size"))
++ if (of_property_present(np, "i-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
+- if (of_property_read_bool(np, "d-cache-size"))
++ if (of_property_present(np, "d-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+
+ prev = np;
+@@ -125,11 +125,11 @@ int populate_cache_leaves(unsigned int cpu)
+ break;
+ if (level <= levels)
+ break;
+- if (of_property_read_bool(np, "cache-size"))
++ if (of_property_present(np, "cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level);
+- if (of_property_read_bool(np, "i-cache-size"))
++ if (of_property_present(np, "i-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
+- if (of_property_read_bool(np, "d-cache-size"))
++ if (of_property_present(np, "d-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+ levels = level;
+ }
+diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
+index 3a8eeaa9310c32..308430af3e8f83 100644
+--- a/arch/riscv/kernel/cpufeature.c
++++ b/arch/riscv/kernel/cpufeature.c
+@@ -454,7 +454,7 @@ static void __init riscv_resolve_isa(unsigned long *source_isa,
+ if (bit < RISCV_ISA_EXT_BASE)
+ *this_hwcap |= isa2hwcap[bit];
+ }
+- } while (loop && memcmp(prev_resolved_isa, resolved_isa, sizeof(prev_resolved_isa)));
++ } while (loop && !bitmap_equal(prev_resolved_isa, resolved_isa, RISCV_ISA_EXT_MAX));
+ }
+
+ static void __init match_isa_ext(const char *name, const char *name_end, unsigned long *bitmap)
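+The memcmp-to-bitmap_equal change above avoids comparing storage bits past
+RISCV_ISA_EXT_MAX, which are not guaranteed to match even when the
+meaningful bits do. A toy version of the difference (single-word bitmap,
+illustrative helper, not the kernel API):
+
+#include <stdio.h>
+#include <string.h>
+
+/* toy single-word bitmap compare over the low nbits only */
+static int bitmap_equal_low(unsigned long a, unsigned long b, unsigned int nbits)
+{
+	unsigned long mask = (1UL << nbits) - 1;
+
+	return (a & mask) == (b & mask);
+}
+
+int main(void)
+{
+	unsigned long a = 0x1f;	/* bits 0..4 set */
+	unsigned long b = 0x3f;	/* same low 5 bits plus a stray bit 5 */
+
+	printf("memcmp says equal:     %d\n", memcmp(&a, &b, sizeof(a)) == 0);
+	printf("bitmap_equal(5) says:  %d\n", bitmap_equal_low(a, b, 5));
+	return 0;
+}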
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 2b3c152d3c91f5..7934613a98c883 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -288,8 +288,8 @@ void __init setup_arch(char **cmdline_p)
+
+ riscv_init_cbo_blocksizes();
+ riscv_fill_hwcap();
+- init_rt_signal_env();
+ apply_boot_alternatives();
++ init_rt_signal_env();
+
+ if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
+ riscv_isa_extension_available(NULL, ZICBOM))
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index dcd28241945613..c3c517b9eee554 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -215,12 +215,6 @@ static size_t get_rt_frame_size(bool cal_all)
+ if (cal_all || riscv_v_vstate_query(task_pt_regs(current)))
+ total_context_size += riscv_v_sc_size;
+ }
+- /*
+- * Preserved a __riscv_ctx_hdr for END signal context header if an
+- * extension uses __riscv_extra_ext_header
+- */
+- if (total_context_size)
+- total_context_size += sizeof(struct __riscv_ctx_hdr);
+
+ frame_size += total_context_size;
+
+diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
+index dce667f4b6ab08..3070bb31745de7 100644
+--- a/arch/riscv/kvm/vcpu_sbi_hsm.c
++++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
+@@ -9,6 +9,7 @@
+ #include <linux/errno.h>
+ #include <linux/err.h>
+ #include <linux/kvm_host.h>
++#include <linux/wordpart.h>
+ #include <asm/sbi.h>
+ #include <asm/kvm_vcpu_sbi.h>
+
+@@ -79,12 +80,12 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
+ target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
+ if (!target_vcpu)
+ return SBI_ERR_INVALID_PARAM;
+- if (!kvm_riscv_vcpu_stopped(target_vcpu))
+- return SBI_HSM_STATE_STARTED;
+- else if (vcpu->stat.generic.blocking)
++ if (kvm_riscv_vcpu_stopped(target_vcpu))
++ return SBI_HSM_STATE_STOPPED;
++ else if (target_vcpu->stat.generic.blocking)
+ return SBI_HSM_STATE_SUSPENDED;
+ else
+- return SBI_HSM_STATE_STOPPED;
++ return SBI_HSM_STATE_STARTED;
+ }
+
+ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+@@ -109,7 +110,7 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ }
+ return 0;
+ case SBI_EXT_HSM_HART_SUSPEND:
+- switch (cp->a0) {
++ switch (lower_32_bits(cp->a0)) {
+ case SBI_HSM_SUSPEND_RET_DEFAULT:
+ kvm_riscv_vcpu_wfi(vcpu);
+ break;
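+The hunk above fixes two things at once: the STARTED/STOPPED returns were
+swapped, and the blocking test looked at the calling vcpu instead of the
+target. A compact model of the corrected mapping:
+
+#include <stdio.h>
+
+enum hsm_state { HSM_STARTED, HSM_STOPPED, HSM_SUSPENDED };
+
+static enum hsm_state hsm_status(int target_stopped, int target_blocking)
+{
+	if (target_stopped)
+		return HSM_STOPPED;
+	if (target_blocking)	/* the target vcpu, not the caller */
+		return HSM_SUSPENDED;
+	return HSM_STARTED;
+}
+
+int main(void)
+{
+	printf("%d %d %d\n", hsm_status(1, 0), hsm_status(0, 1),
+	       hsm_status(0, 0));	/* 1 2 0 */
+	return 0;
+}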
+diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
+index 9c2ab3dfa93aa5..5fbf3f94f1e855 100644
+--- a/arch/riscv/kvm/vcpu_sbi_replace.c
++++ b/arch/riscv/kvm/vcpu_sbi_replace.c
+@@ -21,7 +21,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ u64 next_cycle;
+
+ if (cp->a6 != SBI_EXT_TIME_SET_TIMER) {
+- retdata->err_val = SBI_ERR_INVALID_PARAM;
++ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ return 0;
+ }
+
+@@ -51,9 +51,10 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+ unsigned long hmask = cp->a0;
+ unsigned long hbase = cp->a1;
++ unsigned long hart_bit = 0, sentmask = 0;
+
+ if (cp->a6 != SBI_EXT_IPI_SEND_IPI) {
+- retdata->err_val = SBI_ERR_INVALID_PARAM;
++ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ return 0;
+ }
+
+@@ -62,15 +63,23 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ if (hbase != -1UL) {
+ if (tmp->vcpu_id < hbase)
+ continue;
+- if (!(hmask & (1UL << (tmp->vcpu_id - hbase))))
++ hart_bit = tmp->vcpu_id - hbase;
++ if (hart_bit >= __riscv_xlen)
++ goto done;
++ if (!(hmask & (1UL << hart_bit)))
+ continue;
+ }
+ ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT);
+ if (ret < 0)
+ break;
++ sentmask |= 1UL << hart_bit;
+ kvm_riscv_vcpu_pmu_incr_fw(tmp, SBI_PMU_FW_IPI_RCVD);
+ }
+
++done:
++ if (hbase != -1UL && (hmask ^ sentmask))
++ retdata->err_val = SBI_ERR_INVALID_PARAM;
++
+ return ret;
+ }
+
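+The sentmask bookkeeping added above lets the handler report
+SBI_ERR_INVALID_PARAM when the caller names a hart that does not exist:
+every delivered IPI sets a bit, and any leftover bit in hmask flags a bad
+hart id. A runnable sketch of that accounting (hart_exists() is a
+stand-in, not a kernel helper):
+
+#include <stdio.h>
+
+#define NR_HARTS 3	/* hypothetical: only harts 0..2 exist */
+
+static int hart_exists(unsigned int id)
+{
+	return id < NR_HARTS;
+}
+
+int main(void)
+{
+	unsigned long long hmask = 0xbULL;	/* caller asks for harts 0, 1 and 3 */
+	unsigned long long sentmask = 0;
+
+	for (unsigned int id = 0; id < 64; id++) {
+		if (!(hmask & (1ULL << id)))
+			continue;
+		if (hart_exists(id))
+			sentmask |= 1ULL << id;	/* IPI delivered */
+	}
+	if (hmask ^ sentmask)			/* hart 3 was never reached */
+		printf("SBI_ERR_INVALID_PARAM\n");
+	return 0;
+}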
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 1b0c2397d65753..6f8e9af827e0c9 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1333,6 +1333,7 @@ config X86_REBOOTFIXUPS
+ config MICROCODE
+ def_bool y
+ depends on CPU_SUP_AMD || CPU_SUP_INTEL
++ select CRYPTO_LIB_SHA256 if CPU_SUP_AMD
+
+ config MICROCODE_INITRD32
+ def_bool y
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 65ab6460aed4d7..0d33c85da45355 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -628,7 +628,7 @@ int x86_pmu_hw_config(struct perf_event *event)
+ if (event->attr.type == event->pmu->type)
+ event->hw.config |= x86_pmu_get_event_config(event);
+
+- if (event->attr.sample_period && x86_pmu.limit_period) {
++ if (!event->attr.freq && x86_pmu.limit_period) {
+ s64 left = event->attr.sample_period;
+ x86_pmu.limit_period(event, &left);
+ if (left > event->attr.sample_period)
+diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
+index 9651275aecd1bb..dfec2c61e3547d 100644
+--- a/arch/x86/kernel/cpu/cyrix.c
++++ b/arch/x86/kernel/cpu/cyrix.c
+@@ -153,8 +153,8 @@ static void geode_configure(void)
+ u8 ccr3;
+ local_irq_save(flags);
+
+- /* Suspend on halt power saving and enable #SUSP pin */
+- setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);
++ /* Suspend on halt power saving */
++ setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x08);
+
+ ccr3 = getCx86(CX86_CCR3);
+ setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index fb5d0c67fbab17..f5365b32582a5c 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -23,14 +23,18 @@
+
+ #include <linux/earlycpio.h>
+ #include <linux/firmware.h>
++#include <linux/bsearch.h>
+ #include <linux/uaccess.h>
+ #include <linux/vmalloc.h>
+ #include <linux/initrd.h>
+ #include <linux/kernel.h>
+ #include <linux/pci.h>
+
++#include <crypto/sha2.h>
++
+ #include <asm/microcode.h>
+ #include <asm/processor.h>
++#include <asm/cmdline.h>
+ #include <asm/setup.h>
+ #include <asm/cpu.h>
+ #include <asm/msr.h>
+@@ -145,6 +149,107 @@ ucode_path[] __maybe_unused = "kernel/x86/microcode/AuthenticAMD.bin";
+ */
+ static u32 bsp_cpuid_1_eax __ro_after_init;
+
++static bool sha_check = true;
++
++struct patch_digest {
++ u32 patch_id;
++ u8 sha256[SHA256_DIGEST_SIZE];
++};
++
++#include "amd_shas.c"
++
++static int cmp_id(const void *key, const void *elem)
++{
++ struct patch_digest *pd = (struct patch_digest *)elem;
++ u32 patch_id = *(u32 *)key;
++
++ if (patch_id == pd->patch_id)
++ return 0;
++ else if (patch_id < pd->patch_id)
++ return -1;
++ else
++ return 1;
++}
++
++static bool need_sha_check(u32 cur_rev)
++{
++ switch (cur_rev >> 8) {
++ case 0x80012: return cur_rev <= 0x800126f; break;
++ case 0x83010: return cur_rev <= 0x830107c; break;
++ case 0x86001: return cur_rev <= 0x860010e; break;
++ case 0x86081: return cur_rev <= 0x8608108; break;
++ case 0x87010: return cur_rev <= 0x8701034; break;
++ case 0x8a000: return cur_rev <= 0x8a0000a; break;
++ case 0xa0011: return cur_rev <= 0xa0011da; break;
++ case 0xa0012: return cur_rev <= 0xa001243; break;
++ case 0xa1011: return cur_rev <= 0xa101153; break;
++ case 0xa1012: return cur_rev <= 0xa10124e; break;
++ case 0xa1081: return cur_rev <= 0xa108109; break;
++ case 0xa2010: return cur_rev <= 0xa20102f; break;
++ case 0xa2012: return cur_rev <= 0xa201212; break;
++ case 0xa6012: return cur_rev <= 0xa60120a; break;
++ case 0xa7041: return cur_rev <= 0xa704109; break;
++ case 0xa7052: return cur_rev <= 0xa705208; break;
++ case 0xa7080: return cur_rev <= 0xa708009; break;
++ case 0xa70c0: return cur_rev <= 0xa70C009; break;
++ case 0xaa002: return cur_rev <= 0xaa00218; break;
++ default: break;
++ }
++
++ pr_info("You should not be seeing this. Please send the following couple of lines to x86-<at>-kernel.org\n");
++ pr_info("CPUID(1).EAX: 0x%x, current revision: 0x%x\n", bsp_cpuid_1_eax, cur_rev);
++ return true;
++}
++
++static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsigned int len)
++{
++ struct patch_digest *pd = NULL;
++ u8 digest[SHA256_DIGEST_SIZE];
++ struct sha256_state s;
++ int i;
++
++ if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
++ x86_family(bsp_cpuid_1_eax) > 0x19)
++ return true;
++
++ if (!need_sha_check(cur_rev))
++ return true;
++
++ if (!sha_check)
++ return true;
++
++ pd = bsearch(&patch_id, phashes, ARRAY_SIZE(phashes), sizeof(struct patch_digest), cmp_id);
++ if (!pd) {
++ pr_err("No sha256 digest for patch ID: 0x%x found\n", patch_id);
++ return false;
++ }
++
++ sha256_init(&s);
++ sha256_update(&s, data, len);
++ sha256_final(&s, digest);
++
++ if (memcmp(digest, pd->sha256, sizeof(digest))) {
++ pr_err("Patch 0x%x SHA256 digest mismatch!\n", patch_id);
++
++ for (i = 0; i < SHA256_DIGEST_SIZE; i++)
++ pr_cont("0x%x ", digest[i]);
++ pr_info("\n");
++
++ return false;
++ }
++
++ return true;
++}
++
++static u32 get_patch_level(void)
++{
++ u32 rev, dummy __always_unused;
++
++ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
++
++ return rev;
++}
++
+ static union cpuid_1_eax ucode_rev_to_cpuid(unsigned int val)
+ {
+ union zen_patch_rev p;
+@@ -246,8 +351,7 @@ static bool verify_equivalence_table(const u8 *buf, size_t buf_size)
+ * On success, @sh_psize returns the patch size according to the section header,
+ * to the caller.
+ */
+-static bool
+-__verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize)
++static bool __verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize)
+ {
+ u32 p_type, p_size;
+ const u32 *hdr;
+@@ -484,10 +588,13 @@ static void scan_containers(u8 *ucode, size_t size, struct cont_desc *desc)
+ }
+ }
+
+-static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
++static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev,
++ unsigned int psize)
+ {
+ unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
+- u32 rev, dummy;
++
++ if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize))
++ return false;
+
+ native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
+
+@@ -505,47 +612,13 @@ static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
+ }
+
+ /* verify patch application was successful */
+- native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+-
+- if (rev != mc->hdr.patch_id)
+- return -1;
++ *cur_rev = get_patch_level();
++ if (*cur_rev != mc->hdr.patch_id)
++ return false;
+
+- return 0;
++ return true;
+ }
+
+-/*
+- * Early load occurs before we can vmalloc(). So we look for the microcode
+- * patch container file in initrd, traverse equivalent cpu table, look for a
+- * matching microcode patch, and update, all in initrd memory in place.
+- * When vmalloc() is available for use later -- on 64-bit during first AP load,
+- * and on 32-bit during save_microcode_in_initrd_amd() -- we can call
+- * load_microcode_amd() to save equivalent cpu table and microcode patches in
+- * kernel heap memory.
+- *
+- * Returns true if container found (sets @desc), false otherwise.
+- */
+-static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size)
+-{
+- struct cont_desc desc = { 0 };
+- struct microcode_amd *mc;
+- bool ret = false;
+-
+- scan_containers(ucode, size, &desc);
+-
+- mc = desc.mc;
+- if (!mc)
+- return ret;
+-
+- /*
+- * Allow application of the same revision to pick up SMT-specific
+- * changes even if the revision of the other SMT thread is already
+- * up-to-date.
+- */
+- if (old_rev > mc->hdr.patch_id)
+- return ret;
+-
+- return !__apply_microcode_amd(mc, desc.psize);
+-}
+
+ static bool get_builtin_microcode(struct cpio_data *cp)
+ {
+@@ -569,64 +642,74 @@ static bool get_builtin_microcode(struct cpio_data *cp)
+ return false;
+ }
+
+-static void __init find_blobs_in_containers(struct cpio_data *ret)
++static bool __init find_blobs_in_containers(struct cpio_data *ret)
+ {
+ struct cpio_data cp;
++ bool found;
+
+ if (!get_builtin_microcode(&cp))
+ cp = find_microcode_in_initrd(ucode_path);
+
+- *ret = cp;
++ found = cp.data && cp.size;
++ if (found)
++ *ret = cp;
++
++ return found;
+ }
+
++/*
++ * Early load occurs before we can vmalloc(). So we look for the microcode
++ * patch container file in initrd, traverse equivalent cpu table, look for a
++ * matching microcode patch, and update, all in initrd memory in place.
++ * When vmalloc() is available for use later -- on 64-bit during first AP load,
++ * and on 32-bit during save_microcode_in_initrd() -- we can call
++ * load_microcode_amd() to save equivalent cpu table and microcode patches in
++ * kernel heap memory.
++ */
+ void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax)
+ {
++ struct cont_desc desc = { };
++ struct microcode_amd *mc;
+ struct cpio_data cp = { };
+- u32 dummy;
++ char buf[4];
++ u32 rev;
++
++ if (cmdline_find_option(boot_command_line, "microcode.amd_sha_check", buf, 4)) {
++ if (!strncmp(buf, "off", 3)) {
++ sha_check = false;
++ pr_warn_once("It is a very very bad idea to disable the blobs SHA check!\n");
++ add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
++ }
++ }
+
+ bsp_cpuid_1_eax = cpuid_1_eax;
+
+- native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy);
++ rev = get_patch_level();
++ ed->old_rev = rev;
+
+ /* Needed in load_microcode_amd() */
+ ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
+
+- find_blobs_in_containers(&cp);
+- if (!(cp.data && cp.size))
++ if (!find_blobs_in_containers(&cp))
+ return;
+
+- if (early_apply_microcode(ed->old_rev, cp.data, cp.size))
+- native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy);
+-}
+-
+-static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size);
+-
+-static int __init save_microcode_in_initrd(void)
+-{
+- unsigned int cpuid_1_eax = native_cpuid_eax(1);
+- struct cpuinfo_x86 *c = &boot_cpu_data;
+- struct cont_desc desc = { 0 };
+- enum ucode_state ret;
+- struct cpio_data cp;
+-
+- if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
+- return 0;
+-
+- find_blobs_in_containers(&cp);
+- if (!(cp.data && cp.size))
+- return -EINVAL;
+-
+ scan_containers(cp.data, cp.size, &desc);
+- if (!desc.mc)
+- return -EINVAL;
+
+- ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+- if (ret > UCODE_UPDATED)
+- return -EINVAL;
++ mc = desc.mc;
++ if (!mc)
++ return;
+
+- return 0;
++ /*
++ * Allow application of the same revision to pick up SMT-specific
++ * changes even if the revision of the other SMT thread is already
++ * up-to-date.
++ */
++ if (ed->old_rev > mc->hdr.patch_id)
++ return;
++
++ if (__apply_microcode_amd(mc, &rev, desc.psize))
++ ed->new_rev = rev;
+ }
+-early_initcall(save_microcode_in_initrd);
+
+ static inline bool patch_cpus_equivalent(struct ucode_patch *p,
+ struct ucode_patch *n,
+@@ -727,14 +810,9 @@ static void free_cache(void)
+ static struct ucode_patch *find_patch(unsigned int cpu)
+ {
+ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+- u32 rev, dummy __always_unused;
+ u16 equiv_id = 0;
+
+- /* fetch rev if not populated yet: */
+- if (!uci->cpu_sig.rev) {
+- rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+- uci->cpu_sig.rev = rev;
+- }
++ uci->cpu_sig.rev = get_patch_level();
+
+ if (x86_family(bsp_cpuid_1_eax) < 0x17) {
+ equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig);
+@@ -757,22 +835,20 @@ void reload_ucode_amd(unsigned int cpu)
+
+ mc = p->data;
+
+- rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+-
++ rev = get_patch_level();
+ if (rev < mc->hdr.patch_id) {
+- if (!__apply_microcode_amd(mc, p->size))
+- pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id);
++ if (__apply_microcode_amd(mc, &rev, p->size))
++ pr_info_once("reload revision: 0x%08x\n", rev);
+ }
+ }
+
+ static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
+ {
+- struct cpuinfo_x86 *c = &cpu_data(cpu);
+ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+ struct ucode_patch *p;
+
+ csig->sig = cpuid_eax(0x00000001);
+- csig->rev = c->microcode;
++ csig->rev = get_patch_level();
+
+ /*
+ * a patch could have been loaded early, set uci->mc so that
+@@ -813,7 +889,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ goto out;
+ }
+
+- if (__apply_microcode_amd(mc_amd, p->size)) {
++ if (!__apply_microcode_amd(mc_amd, &rev, p->size)) {
+ pr_err("CPU%d: update failed for patch_level=0x%08x\n",
+ cpu, mc_amd->hdr.patch_id);
+ return UCODE_ERROR;
+@@ -935,8 +1011,7 @@ static int verify_and_add_patch(u8 family, u8 *fw, unsigned int leftover,
+ }
+
+ /* Scan the blob in @data and add microcode patches to the cache. */
+-static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
+- size_t size)
++static enum ucode_state __load_microcode_amd(u8 family, const u8 *data, size_t size)
+ {
+ u8 *fw = (u8 *)data;
+ size_t offset;
+@@ -1011,6 +1086,32 @@ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t siz
+ return ret;
+ }
+
++static int __init save_microcode_in_initrd(void)
++{
++ unsigned int cpuid_1_eax = native_cpuid_eax(1);
++ struct cpuinfo_x86 *c = &boot_cpu_data;
++ struct cont_desc desc = { 0 };
++ enum ucode_state ret;
++ struct cpio_data cp;
++
++ if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
++ return 0;
++
++ if (!find_blobs_in_containers(&cp))
++ return -EINVAL;
++
++ scan_containers(cp.data, cp.size, &desc);
++ if (!desc.mc)
++ return -EINVAL;
++
++ ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
++ if (ret > UCODE_UPDATED)
++ return -EINVAL;
++
++ return 0;
++}
++early_initcall(save_microcode_in_initrd);
++
+ /*
+ * AMD microcode firmware naming convention, up to family 15h they are in
+ * the legacy file:
+diff --git a/arch/x86/kernel/cpu/microcode/amd_shas.c b/arch/x86/kernel/cpu/microcode/amd_shas.c
+new file mode 100644
+index 00000000000000..2a1655b1fdd883
+--- /dev/null
++++ b/arch/x86/kernel/cpu/microcode/amd_shas.c
+@@ -0,0 +1,444 @@
++/* Keep 'em sorted. */
++static const struct patch_digest phashes[] = {
++ { 0x8001227, {
++ 0x99,0xc0,0x9b,0x2b,0xcc,0x9f,0x52,0x1b,
++ 0x1a,0x5f,0x1d,0x83,0xa1,0x6c,0xc4,0x46,
++ 0xe2,0x6c,0xda,0x73,0xfb,0x2d,0x23,0xa8,
++ 0x77,0xdc,0x15,0x31,0x33,0x4a,0x46,0x18,
++ }
++ },
++ { 0x8001250, {
++ 0xc0,0x0b,0x6b,0x19,0xfd,0x5c,0x39,0x60,
++ 0xd5,0xc3,0x57,0x46,0x54,0xe4,0xd1,0xaa,
++ 0xa8,0xf7,0x1f,0xa8,0x6a,0x60,0x3e,0xe3,
++ 0x27,0x39,0x8e,0x53,0x30,0xf8,0x49,0x19,
++ }
++ },
++ { 0x800126e, {
++ 0xf3,0x8b,0x2b,0xb6,0x34,0xe3,0xc8,0x2c,
++ 0xef,0xec,0x63,0x6d,0xc8,0x76,0x77,0xb3,
++ 0x25,0x5a,0xb7,0x52,0x8c,0x83,0x26,0xe6,
++ 0x4c,0xbe,0xbf,0xe9,0x7d,0x22,0x6a,0x43,
++ }
++ },
++ { 0x800126f, {
++ 0x2b,0x5a,0xf2,0x9c,0xdd,0xd2,0x7f,0xec,
++ 0xec,0x96,0x09,0x57,0xb0,0x96,0x29,0x8b,
++ 0x2e,0x26,0x91,0xf0,0x49,0x33,0x42,0x18,
++ 0xdd,0x4b,0x65,0x5a,0xd4,0x15,0x3d,0x33,
++ }
++ },
++ { 0x800820d, {
++ 0x68,0x98,0x83,0xcd,0x22,0x0d,0xdd,0x59,
++ 0x73,0x2c,0x5b,0x37,0x1f,0x84,0x0e,0x67,
++ 0x96,0x43,0x83,0x0c,0x46,0x44,0xab,0x7c,
++ 0x7b,0x65,0x9e,0x57,0xb5,0x90,0x4b,0x0e,
++ }
++ },
++ { 0x8301025, {
++ 0xe4,0x7d,0xdb,0x1e,0x14,0xb4,0x5e,0x36,
++ 0x8f,0x3e,0x48,0x88,0x3c,0x6d,0x76,0xa1,
++ 0x59,0xc6,0xc0,0x72,0x42,0xdf,0x6c,0x30,
++ 0x6f,0x0b,0x28,0x16,0x61,0xfc,0x79,0x77,
++ }
++ },
++ { 0x8301055, {
++ 0x81,0x7b,0x99,0x1b,0xae,0x2d,0x4f,0x9a,
++ 0xef,0x13,0xce,0xb5,0x10,0xaf,0x6a,0xea,
++ 0xe5,0xb0,0x64,0x98,0x10,0x68,0x34,0x3b,
++ 0x9d,0x7a,0xd6,0x22,0x77,0x5f,0xb3,0x5b,
++ }
++ },
++ { 0x8301072, {
++ 0xcf,0x76,0xa7,0x1a,0x49,0xdf,0x2a,0x5e,
++ 0x9e,0x40,0x70,0xe5,0xdd,0x8a,0xa8,0x28,
++ 0x20,0xdc,0x91,0xd8,0x2c,0xa6,0xa0,0xb1,
++ 0x2d,0x22,0x26,0x94,0x4b,0x40,0x85,0x30,
++ }
++ },
++ { 0x830107a, {
++ 0x2a,0x65,0x8c,0x1a,0x5e,0x07,0x21,0x72,
++ 0xdf,0x90,0xa6,0x51,0x37,0xd3,0x4b,0x34,
++ 0xc4,0xda,0x03,0xe1,0x8a,0x6c,0xfb,0x20,
++ 0x04,0xb2,0x81,0x05,0xd4,0x87,0xf4,0x0a,
++ }
++ },
++ { 0x830107b, {
++ 0xb3,0x43,0x13,0x63,0x56,0xc1,0x39,0xad,
++ 0x10,0xa6,0x2b,0xcc,0x02,0xe6,0x76,0x2a,
++ 0x1e,0x39,0x58,0x3e,0x23,0x6e,0xa4,0x04,
++ 0x95,0xea,0xf9,0x6d,0xc2,0x8a,0x13,0x19,
++ }
++ },
++ { 0x830107c, {
++ 0x21,0x64,0xde,0xfb,0x9f,0x68,0x96,0x47,
++ 0x70,0x5c,0xe2,0x8f,0x18,0x52,0x6a,0xac,
++ 0xa4,0xd2,0x2e,0xe0,0xde,0x68,0x66,0xc3,
++ 0xeb,0x1e,0xd3,0x3f,0xbc,0x51,0x1d,0x38,
++ }
++ },
++ { 0x860010d, {
++ 0x86,0xb6,0x15,0x83,0xbc,0x3b,0x9c,0xe0,
++ 0xb3,0xef,0x1d,0x99,0x84,0x35,0x15,0xf7,
++ 0x7c,0x2a,0xc6,0x42,0xdb,0x73,0x07,0x5c,
++ 0x7d,0xc3,0x02,0xb5,0x43,0x06,0x5e,0xf8,
++ }
++ },
++ { 0x8608108, {
++ 0x14,0xfe,0x57,0x86,0x49,0xc8,0x68,0xe2,
++ 0x11,0xa3,0xcb,0x6e,0xff,0x6e,0xd5,0x38,
++ 0xfe,0x89,0x1a,0xe0,0x67,0xbf,0xc4,0xcc,
++ 0x1b,0x9f,0x84,0x77,0x2b,0x9f,0xaa,0xbd,
++ }
++ },
++ { 0x8701034, {
++ 0xc3,0x14,0x09,0xa8,0x9c,0x3f,0x8d,0x83,
++ 0x9b,0x4c,0xa5,0xb7,0x64,0x8b,0x91,0x5d,
++ 0x85,0x6a,0x39,0x26,0x1e,0x14,0x41,0xa8,
++ 0x75,0xea,0xa6,0xf9,0xc9,0xd1,0xea,0x2b,
++ }
++ },
++ { 0x8a00008, {
++ 0xd7,0x2a,0x93,0xdc,0x05,0x2f,0xa5,0x6e,
++ 0x0c,0x61,0x2c,0x07,0x9f,0x38,0xe9,0x8e,
++ 0xef,0x7d,0x2a,0x05,0x4d,0x56,0xaf,0x72,
++ 0xe7,0x56,0x47,0x6e,0x60,0x27,0xd5,0x8c,
++ }
++ },
++ { 0x8a0000a, {
++ 0x73,0x31,0x26,0x22,0xd4,0xf9,0xee,0x3c,
++ 0x07,0x06,0xe7,0xb9,0xad,0xd8,0x72,0x44,
++ 0x33,0x31,0xaa,0x7d,0xc3,0x67,0x0e,0xdb,
++ 0x47,0xb5,0xaa,0xbc,0xf5,0xbb,0xd9,0x20,
++ }
++ },
++ { 0xa00104c, {
++ 0x3c,0x8a,0xfe,0x04,0x62,0xd8,0x6d,0xbe,
++ 0xa7,0x14,0x28,0x64,0x75,0xc0,0xa3,0x76,
++ 0xb7,0x92,0x0b,0x97,0x0a,0x8e,0x9c,0x5b,
++ 0x1b,0xc8,0x9d,0x3a,0x1e,0x81,0x3d,0x3b,
++ }
++ },
++ { 0xa00104e, {
++ 0xc4,0x35,0x82,0x67,0xd2,0x86,0xe5,0xb2,
++ 0xfd,0x69,0x12,0x38,0xc8,0x77,0xba,0xe0,
++ 0x70,0xf9,0x77,0x89,0x10,0xa6,0x74,0x4e,
++ 0x56,0x58,0x13,0xf5,0x84,0x70,0x28,0x0b,
++ }
++ },
++ { 0xa001053, {
++ 0x92,0x0e,0xf4,0x69,0x10,0x3b,0xf9,0x9d,
++ 0x31,0x1b,0xa6,0x99,0x08,0x7d,0xd7,0x25,
++ 0x7e,0x1e,0x89,0xba,0x35,0x8d,0xac,0xcb,
++ 0x3a,0xb4,0xdf,0x58,0x12,0xcf,0xc0,0xc3,
++ }
++ },
++ { 0xa001058, {
++ 0x33,0x7d,0xa9,0xb5,0x4e,0x62,0x13,0x36,
++ 0xef,0x66,0xc9,0xbd,0x0a,0xa6,0x3b,0x19,
++ 0xcb,0xf5,0xc2,0xc3,0x55,0x47,0x20,0xec,
++ 0x1f,0x7b,0xa1,0x44,0x0e,0x8e,0xa4,0xb2,
++ }
++ },
++ { 0xa001075, {
++ 0x39,0x02,0x82,0xd0,0x7c,0x26,0x43,0xe9,
++ 0x26,0xa3,0xd9,0x96,0xf7,0x30,0x13,0x0a,
++ 0x8a,0x0e,0xac,0xe7,0x1d,0xdc,0xe2,0x0f,
++ 0xcb,0x9e,0x8d,0xbc,0xd2,0xa2,0x44,0xe0,
++ }
++ },
++ { 0xa001078, {
++ 0x2d,0x67,0xc7,0x35,0xca,0xef,0x2f,0x25,
++ 0x4c,0x45,0x93,0x3f,0x36,0x01,0x8c,0xce,
++ 0xa8,0x5b,0x07,0xd3,0xc1,0x35,0x3c,0x04,
++ 0x20,0xa2,0xfc,0xdc,0xe6,0xce,0x26,0x3e,
++ }
++ },
++ { 0xa001079, {
++ 0x43,0xe2,0x05,0x9c,0xfd,0xb7,0x5b,0xeb,
++ 0x5b,0xe9,0xeb,0x3b,0x96,0xf4,0xe4,0x93,
++ 0x73,0x45,0x3e,0xac,0x8d,0x3b,0xe4,0xdb,
++ 0x10,0x31,0xc1,0xe4,0xa2,0xd0,0x5a,0x8a,
++ }
++ },
++ { 0xa00107a, {
++ 0x5f,0x92,0xca,0xff,0xc3,0x59,0x22,0x5f,
++ 0x02,0xa0,0x91,0x3b,0x4a,0x45,0x10,0xfd,
++ 0x19,0xe1,0x8a,0x6d,0x9a,0x92,0xc1,0x3f,
++ 0x75,0x78,0xac,0x78,0x03,0x1d,0xdb,0x18,
++ }
++ },
++ { 0xa001143, {
++ 0x56,0xca,0xf7,0x43,0x8a,0x4c,0x46,0x80,
++ 0xec,0xde,0xe5,0x9c,0x50,0x84,0x9a,0x42,
++ 0x27,0xe5,0x51,0x84,0x8f,0x19,0xc0,0x8d,
++ 0x0c,0x25,0xb4,0xb0,0x8f,0x10,0xf3,0xf8,
++ }
++ },
++ { 0xa001144, {
++ 0x42,0xd5,0x9b,0xa7,0xd6,0x15,0x29,0x41,
++ 0x61,0xc4,0x72,0x3f,0xf3,0x06,0x78,0x4b,
++ 0x65,0xf3,0x0e,0xfa,0x9c,0x87,0xde,0x25,
++ 0xbd,0xb3,0x9a,0xf4,0x75,0x13,0x53,0xdc,
++ }
++ },
++ { 0xa00115d, {
++ 0xd4,0xc4,0x49,0x36,0x89,0x0b,0x47,0xdd,
++ 0xfb,0x2f,0x88,0x3b,0x5f,0xf2,0x8e,0x75,
++ 0xc6,0x6c,0x37,0x5a,0x90,0x25,0x94,0x3e,
++ 0x36,0x9c,0xae,0x02,0x38,0x6c,0xf5,0x05,
++ }
++ },
++ { 0xa001173, {
++ 0x28,0xbb,0x9b,0xd1,0xa0,0xa0,0x7e,0x3a,
++ 0x59,0x20,0xc0,0xa9,0xb2,0x5c,0xc3,0x35,
++ 0x53,0x89,0xe1,0x4c,0x93,0x2f,0x1d,0xc3,
++ 0xe5,0xf7,0xf3,0xc8,0x9b,0x61,0xaa,0x9e,
++ }
++ },
++ { 0xa0011a8, {
++ 0x97,0xc6,0x16,0x65,0x99,0xa4,0x85,0x3b,
++ 0xf6,0xce,0xaa,0x49,0x4a,0x3a,0xc5,0xb6,
++ 0x78,0x25,0xbc,0x53,0xaf,0x5d,0xcf,0xf4,
++ 0x23,0x12,0xbb,0xb1,0xbc,0x8a,0x02,0x2e,
++ }
++ },
++ { 0xa0011ce, {
++ 0xcf,0x1c,0x90,0xa3,0x85,0x0a,0xbf,0x71,
++ 0x94,0x0e,0x80,0x86,0x85,0x4f,0xd7,0x86,
++ 0xae,0x38,0x23,0x28,0x2b,0x35,0x9b,0x4e,
++ 0xfe,0xb8,0xcd,0x3d,0x3d,0x39,0xc9,0x6a,
++ }
++ },
++ { 0xa0011d1, {
++ 0xdf,0x0e,0xca,0xde,0xf6,0xce,0x5c,0x1e,
++ 0x4c,0xec,0xd7,0x71,0x83,0xcc,0xa8,0x09,
++ 0xc7,0xc5,0xfe,0xb2,0xf7,0x05,0xd2,0xc5,
++ 0x12,0xdd,0xe4,0xf3,0x92,0x1c,0x3d,0xb8,
++ }
++ },
++ { 0xa0011d3, {
++ 0x91,0xe6,0x10,0xd7,0x57,0xb0,0x95,0x0b,
++ 0x9a,0x24,0xee,0xf7,0xcf,0x56,0xc1,0xa6,
++ 0x4a,0x52,0x7d,0x5f,0x9f,0xdf,0xf6,0x00,
++ 0x65,0xf7,0xea,0xe8,0x2a,0x88,0xe2,0x26,
++ }
++ },
++ { 0xa0011d5, {
++ 0xed,0x69,0x89,0xf4,0xeb,0x64,0xc2,0x13,
++ 0xe0,0x51,0x1f,0x03,0x26,0x52,0x7d,0xb7,
++ 0x93,0x5d,0x65,0xca,0xb8,0x12,0x1d,0x62,
++ 0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21,
++ }
++ },
++ { 0xa001223, {
++ 0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8,
++ 0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4,
++ 0x83,0x75,0x94,0xdd,0xeb,0x7e,0xb7,0x15,
++ 0x8e,0x3b,0x50,0x29,0x8a,0x9c,0xcc,0x45,
++ }
++ },
++ { 0xa001224, {
++ 0x0e,0x0c,0xdf,0xb4,0x89,0xee,0x35,0x25,
++ 0xdd,0x9e,0xdb,0xc0,0x69,0x83,0x0a,0xad,
++ 0x26,0xa9,0xaa,0x9d,0xfc,0x3c,0xea,0xf9,
++ 0x6c,0xdc,0xd5,0x6d,0x8b,0x6e,0x85,0x4a,
++ }
++ },
++ { 0xa001227, {
++ 0xab,0xc6,0x00,0x69,0x4b,0x50,0x87,0xad,
++ 0x5f,0x0e,0x8b,0xea,0x57,0x38,0xce,0x1d,
++ 0x0f,0x75,0x26,0x02,0xf6,0xd6,0x96,0xe9,
++ 0x87,0xb9,0xd6,0x20,0x27,0x7c,0xd2,0xe0,
++ }
++ },
++ { 0xa001229, {
++ 0x7f,0x49,0x49,0x48,0x46,0xa5,0x50,0xa6,
++ 0x28,0x89,0x98,0xe2,0x9e,0xb4,0x7f,0x75,
++ 0x33,0xa7,0x04,0x02,0xe4,0x82,0xbf,0xb4,
++ 0xa5,0x3a,0xba,0x24,0x8d,0x31,0x10,0x1d,
++ }
++ },
++ { 0xa00122e, {
++ 0x56,0x94,0xa9,0x5d,0x06,0x68,0xfe,0xaf,
++ 0xdf,0x7a,0xff,0x2d,0xdf,0x74,0x0f,0x15,
++ 0x66,0xfb,0x00,0xb5,0x51,0x97,0x9b,0xfa,
++ 0xcb,0x79,0x85,0x46,0x25,0xb4,0xd2,0x10,
++ }
++ },
++ { 0xa001231, {
++ 0x0b,0x46,0xa5,0xfc,0x18,0x15,0xa0,0x9e,
++ 0xa6,0xdc,0xb7,0xff,0x17,0xf7,0x30,0x64,
++ 0xd4,0xda,0x9e,0x1b,0xc3,0xfc,0x02,0x3b,
++ 0xe2,0xc6,0x0e,0x41,0x54,0xb5,0x18,0xdd,
++ }
++ },
++ { 0xa001234, {
++ 0x88,0x8d,0xed,0xab,0xb5,0xbd,0x4e,0xf7,
++ 0x7f,0xd4,0x0e,0x95,0x34,0x91,0xff,0xcc,
++ 0xfb,0x2a,0xcd,0xf7,0xd5,0xdb,0x4c,0x9b,
++ 0xd6,0x2e,0x73,0x50,0x8f,0x83,0x79,0x1a,
++ }
++ },
++ { 0xa001236, {
++ 0x3d,0x30,0x00,0xb9,0x71,0xba,0x87,0x78,
++ 0xa8,0x43,0x55,0xc4,0x26,0x59,0xcf,0x9d,
++ 0x93,0xce,0x64,0x0e,0x8b,0x72,0x11,0x8b,
++ 0xa3,0x8f,0x51,0xe9,0xca,0x98,0xaa,0x25,
++ }
++ },
++ { 0xa001238, {
++ 0x72,0xf7,0x4b,0x0c,0x7d,0x58,0x65,0xcc,
++ 0x00,0xcc,0x57,0x16,0x68,0x16,0xf8,0x2a,
++ 0x1b,0xb3,0x8b,0xe1,0xb6,0x83,0x8c,0x7e,
++ 0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59,
++ }
++ },
++ { 0xa00820c, {
++ 0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3,
++ 0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63,
++ 0xf1,0x8c,0x88,0x45,0xd7,0x82,0x80,0xd1,
++ 0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2,
++ }
++ },
++ { 0xa10113e, {
++ 0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10,
++ 0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0,
++ 0x15,0xe3,0x3f,0x4b,0x1d,0x0d,0x0a,0xd5,
++ 0xfa,0x90,0xc4,0xed,0x9d,0x90,0xaf,0x53,
++ }
++ },
++ { 0xa101144, {
++ 0xb3,0x0b,0x26,0x9a,0xf8,0x7c,0x02,0x26,
++ 0x35,0x84,0x53,0xa4,0xd3,0x2c,0x7c,0x09,
++ 0x68,0x7b,0x96,0xb6,0x93,0xef,0xde,0xbc,
++ 0xfd,0x4b,0x15,0xd2,0x81,0xd3,0x51,0x47,
++ }
++ },
++ { 0xa101148, {
++ 0x20,0xd5,0x6f,0x40,0x4a,0xf6,0x48,0x90,
++ 0xc2,0x93,0x9a,0xc2,0xfd,0xac,0xef,0x4f,
++ 0xfa,0xc0,0x3d,0x92,0x3c,0x6d,0x01,0x08,
++ 0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4,
++ }
++ },
++ { 0xa10123e, {
++ 0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18,
++ 0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d,
++ 0x1d,0x13,0x53,0x63,0xfe,0x42,0x6f,0xfc,
++ 0x19,0x0f,0xf1,0xfc,0xa7,0xdd,0x89,0x1b,
++ }
++ },
++ { 0xa101244, {
++ 0x71,0x56,0xb5,0x9f,0x21,0xbf,0xb3,0x3c,
++ 0x8c,0xd7,0x36,0xd0,0x34,0x52,0x1b,0xb1,
++ 0x46,0x2f,0x04,0xf0,0x37,0xd8,0x1e,0x72,
++ 0x24,0xa2,0x80,0x84,0x83,0x65,0x84,0xc0,
++ }
++ },
++ { 0xa101248, {
++ 0xed,0x3b,0x95,0xa6,0x68,0xa7,0x77,0x3e,
++ 0xfc,0x17,0x26,0xe2,0x7b,0xd5,0x56,0x22,
++ 0x2c,0x1d,0xef,0xeb,0x56,0xdd,0xba,0x6e,
++ 0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75,
++ }
++ },
++ { 0xa108108, {
++ 0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9,
++ 0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6,
++ 0xf5,0xd4,0x3f,0x7b,0x14,0xd5,0x60,0x2c,
++ 0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16,
++ }
++ },
++ { 0xa20102d, {
++ 0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11,
++ 0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89,
++ 0x8b,0x89,0x2f,0xb5,0xbb,0x82,0xef,0x23,
++ 0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4,
++ }
++ },
++ { 0xa201210, {
++ 0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe,
++ 0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9,
++ 0x6d,0x3d,0x0e,0x6b,0xa7,0xac,0xe3,0x68,
++ 0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41,
++ }
++ },
++ { 0xa404107, {
++ 0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45,
++ 0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0,
++ 0x8b,0x0d,0x9f,0xf9,0x3a,0xdf,0xc6,0x81,
++ 0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99,
++ }
++ },
++ { 0xa500011, {
++ 0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4,
++ 0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1,
++ 0xd7,0x5b,0x65,0x3a,0x7d,0xab,0xdf,0xa2,
++ 0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74,
++ }
++ },
++ { 0xa601209, {
++ 0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32,
++ 0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30,
++ 0x15,0x86,0xcc,0x5d,0x97,0x0f,0xc0,0x46,
++ 0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d,
++ }
++ },
++ { 0xa704107, {
++ 0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6,
++ 0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93,
++ 0x2a,0xad,0x8e,0x6b,0xea,0x9b,0xb7,0xc2,
++ 0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39,
++ }
++ },
++ { 0xa705206, {
++ 0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4,
++ 0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7,
++ 0x67,0x6f,0x04,0x18,0xae,0x20,0x87,0x4b,
++ 0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc,
++ }
++ },
++ { 0xa708007, {
++ 0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3,
++ 0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2,
++ 0x07,0xaa,0x3a,0xe0,0x57,0x13,0x72,0x80,
++ 0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93,
++ }
++ },
++ { 0xa70c005, {
++ 0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b,
++ 0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f,
++ 0x1f,0x1f,0xf1,0x97,0xeb,0xfe,0x56,0x55,
++ 0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13,
++ }
++ },
++ { 0xaa00116, {
++ 0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63,
++ 0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5,
++ 0xfe,0x1d,0x5e,0x65,0xc7,0xaa,0x92,0x4d,
++ 0x91,0xee,0x76,0xbb,0x4c,0x66,0x78,0xc9,
++ }
++ },
++ { 0xaa00212, {
++ 0xbd,0x57,0x5d,0x0a,0x0a,0x30,0xc1,0x75,
++ 0x95,0x58,0x5e,0x93,0x02,0x28,0x43,0x71,
++ 0xed,0x42,0x29,0xc8,0xec,0x34,0x2b,0xb2,
++ 0x1a,0x65,0x4b,0xfe,0x07,0x0f,0x34,0xa1,
++ }
++ },
++ { 0xaa00213, {
++ 0xed,0x58,0xb7,0x76,0x81,0x7f,0xd9,0x3a,
++ 0x1a,0xff,0x8b,0x34,0xb8,0x4a,0x99,0x0f,
++ 0x28,0x49,0x6c,0x56,0x2b,0xdc,0xb7,0xed,
++ 0x96,0xd5,0x9d,0xc1,0x7a,0xd4,0x51,0x9b,
++ }
++ },
++ { 0xaa00215, {
++ 0x55,0xd3,0x28,0xcb,0x87,0xa9,0x32,0xe9,
++ 0x4e,0x85,0x4b,0x7c,0x6b,0xd5,0x7c,0xd4,
++ 0x1b,0x51,0x71,0x3a,0x0e,0x0b,0xdc,0x9b,
++ 0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef,
++ }
++ },
++};
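+The digest lookup above is plain bsearch(3) over a table kept sorted by
+patch_id, with cmp_id supplying the usual negative/zero/positive ordering
+on the key. A self-contained sketch of the same pattern (fake IDs and
+digests, illustrative only):
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+struct patch_digest {
+	uint32_t patch_id;
+	const char *sha256;	/* stand-in for the 32-byte digest */
+};
+
+static const struct patch_digest phashes[] = {	/* keep 'em sorted */
+	{ 0x8001227, "digest-a" },
+	{ 0x8001250, "digest-b" },
+	{ 0xa0011d5, "digest-c" },
+};
+
+static int cmp_id(const void *key, const void *elem)
+{
+	uint32_t k = *(const uint32_t *)key;
+	uint32_t e = ((const struct patch_digest *)elem)->patch_id;
+
+	return k < e ? -1 : k > e;	/* negative/zero/positive ordering */
+}
+
+int main(void)
+{
+	uint32_t want = 0x8001250;
+	const struct patch_digest *pd;
+
+	pd = bsearch(&want, phashes, sizeof(phashes) / sizeof(phashes[0]),
+		     sizeof(phashes[0]), cmp_id);
+	printf("%s\n", pd ? pd->sha256 : "no digest");
+	return 0;
+}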
+diff --git a/arch/x86/kernel/cpu/microcode/internal.h b/arch/x86/kernel/cpu/microcode/internal.h
+index 21776c529fa97a..5df621752fefac 100644
+--- a/arch/x86/kernel/cpu/microcode/internal.h
++++ b/arch/x86/kernel/cpu/microcode/internal.h
+@@ -100,14 +100,12 @@ extern bool force_minrev;
+ #ifdef CONFIG_CPU_SUP_AMD
+ void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family);
+ void load_ucode_amd_ap(unsigned int family);
+-int save_microcode_in_initrd_amd(unsigned int family);
+ void reload_ucode_amd(unsigned int cpu);
+ struct microcode_ops *init_amd_microcode(void);
+ void exit_amd_microcode(void);
+ #else /* CONFIG_CPU_SUP_AMD */
+ static inline void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family) { }
+ static inline void load_ucode_amd_ap(unsigned int family) { }
+-static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
+ static inline void reload_ucode_amd(unsigned int cpu) { }
+ static inline struct microcode_ops *init_amd_microcode(void) { return NULL; }
+ static inline void exit_amd_microcode(void) { }
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 767bcbce74facb..c11db5be253248 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -427,13 +427,14 @@ static bool disk_insert_zone_wplug(struct gendisk *disk,
+ }
+ }
+ hlist_add_head_rcu(&zwplug->node, &disk->zone_wplugs_hash[idx]);
++ atomic_inc(&disk->nr_zone_wplugs);
+ spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+
+ return true;
+ }
+
+-static struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
+- sector_t sector)
++static struct blk_zone_wplug *disk_get_hashed_zone_wplug(struct gendisk *disk,
++ sector_t sector)
+ {
+ unsigned int zno = disk_zone_no(disk, sector);
+ unsigned int idx = hash_32(zno, disk->zone_wplugs_hash_bits);
+@@ -454,6 +455,15 @@ static struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
+ return NULL;
+ }
+
++static inline struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
++ sector_t sector)
++{
++ if (!atomic_read(&disk->nr_zone_wplugs))
++ return NULL;
++
++ return disk_get_hashed_zone_wplug(disk, sector);
++}
++
+ static void disk_free_zone_wplug_rcu(struct rcu_head *rcu_head)
+ {
+ struct blk_zone_wplug *zwplug =
+@@ -518,6 +528,7 @@ static void disk_remove_zone_wplug(struct gendisk *disk,
+ zwplug->flags |= BLK_ZONE_WPLUG_UNHASHED;
+ spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+ hlist_del_init_rcu(&zwplug->node);
++ atomic_dec(&disk->nr_zone_wplugs);
+ spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+ disk_put_zone_wplug(zwplug);
+ }
+@@ -607,6 +618,11 @@ static void disk_zone_wplug_abort(struct blk_zone_wplug *zwplug)
+ {
+ struct bio *bio;
+
++ if (bio_list_empty(&zwplug->bio_list))
++ return;
++
++ pr_warn_ratelimited("%s: zone %u: Aborting plugged BIOs\n",
++ zwplug->disk->disk_name, zwplug->zone_no);
+ while ((bio = bio_list_pop(&zwplug->bio_list)))
+ blk_zone_wplug_bio_io_error(zwplug, bio);
+ }
+@@ -1055,6 +1071,47 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+ return true;
+ }
+
++static void blk_zone_wplug_handle_native_zone_append(struct bio *bio)
++{
++ struct gendisk *disk = bio->bi_bdev->bd_disk;
++ struct blk_zone_wplug *zwplug;
++ unsigned long flags;
++
++ /*
++ * We have native support for zone append operations, so we are not
++ * going to handle @bio through plugging. However, we may already have a
++ * zone write plug for the target zone if that zone was previously
++ * partially written using regular writes. In such case, we risk leaving
++ * the plug in the disk hash table if the zone is fully written using
++ * zone append operations. Avoid this by removing the zone write plug.
++ */
++ zwplug = disk_get_zone_wplug(disk, bio->bi_iter.bi_sector);
++ if (likely(!zwplug))
++ return;
++
++ spin_lock_irqsave(&zwplug->lock, flags);
++
++ /*
++ * We are about to remove the zone write plug. But if the user
++ * (mistakenly) has issued regular writes together with native zone
++ * append, we must abort the writes as otherwise the plugged BIOs would
++ * not be executed by the plug BIO work as disk_get_zone_wplug() will
++ * return NULL after the plug is removed. Aborting the plugged write
++ * BIOs is consistent with the fact that these writes will most likely
++ * fail anyway as there are no ordering guarantees between zone append
++ * operations and regular write operations.
++ */
++ if (!bio_list_empty(&zwplug->bio_list)) {
++ pr_warn_ratelimited("%s: zone %u: Invalid mix of zone append and regular writes\n",
++ disk->disk_name, zwplug->zone_no);
++ disk_zone_wplug_abort(zwplug);
++ }
++ disk_remove_zone_wplug(disk, zwplug);
++ spin_unlock_irqrestore(&zwplug->lock, flags);
++
++ disk_put_zone_wplug(zwplug);
++}
++
+ /**
+ * blk_zone_plug_bio - Handle a zone write BIO with zone write plugging
+ * @bio: The BIO being submitted
+@@ -1111,8 +1168,10 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ */
+ switch (bio_op(bio)) {
+ case REQ_OP_ZONE_APPEND:
+- if (!bdev_emulates_zone_append(bdev))
++ if (!bdev_emulates_zone_append(bdev)) {
++ blk_zone_wplug_handle_native_zone_append(bio);
+ return false;
++ }
+ fallthrough;
+ case REQ_OP_WRITE:
+ case REQ_OP_WRITE_ZEROES:
+@@ -1299,6 +1358,7 @@ static int disk_alloc_zone_resources(struct gendisk *disk,
+ {
+ unsigned int i;
+
++ atomic_set(&disk->nr_zone_wplugs, 0);
+ disk->zone_wplugs_hash_bits =
+ min(ilog2(pool_size) + 1, BLK_ZONE_WPLUG_MAX_HASH_BITS);
+
+@@ -1353,6 +1413,7 @@ static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
+ }
+ }
+
++ WARN_ON_ONCE(atomic_read(&disk->nr_zone_wplugs));
+ kfree(disk->zone_wplugs_hash);
+ disk->zone_wplugs_hash = NULL;
+ disk->zone_wplugs_hash_bits = 0;
+@@ -1570,11 +1631,12 @@ static int blk_revalidate_seq_zone(struct blk_zone *zone, unsigned int idx,
+ }
+
+ /*
+- * We need to track the write pointer of all zones that are not
+- * empty nor full. So make sure we have a zone write plug for
+- * such zone if the device has a zone write plug hash table.
++ * If the device needs zone append emulation, we need to track the
++ * write pointer of all zones that are not empty nor full. So make sure
++ * we have a zone write plug for such zone if the device has a zone
++ * write plug hash table.
+ */
+- if (!disk->zone_wplugs_hash)
++ if (!queue_emulates_zone_append(disk->queue) || !disk->zone_wplugs_hash)
+ return 0;
+
+ disk_zone_wplug_sync_wp_offset(disk, zone);
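+The nr_zone_wplugs counter introduced above is a fast-path guard: when no
+zone write plug is hashed at all, which is the steady state for devices
+using only native zone append, lookups can return NULL without touching the
+hash table. A minimal C11 model of the guard (stand-in names, not the block
+layer API):
+
+#include <stdatomic.h>
+#include <stdio.h>
+
+static atomic_int nr_entries;	/* ++ on hash insert, -- on removal */
+
+static const char *hash_lookup(unsigned int key)
+{
+	return key == 42 ? "plug" : NULL;	/* stand-in for the real walk */
+}
+
+static const char *lookup(unsigned int key)
+{
+	if (atomic_load(&nr_entries) == 0)	/* common case: nothing hashed */
+		return NULL;
+	return hash_lookup(key);
+}
+
+int main(void)
+{
+	printf("%s\n", lookup(42) ? "hit" : "miss");	/* miss, counter is 0 */
+	atomic_fetch_add(&nr_entries, 1);
+	printf("%s\n", lookup(42) ? "hit" : "miss");	/* hit */
+	return 0;
+}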
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 419220fa42fd7e..bd1ea99c3b4751 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -1609,8 +1609,8 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ goto out_fw;
+ }
+
+- ret = regmap_raw_write_async(regmap, reg, buf->buf,
+- le32_to_cpu(region->len));
++ ret = regmap_raw_write(regmap, reg, buf->buf,
++ le32_to_cpu(region->len));
+ if (ret != 0) {
+ cs_dsp_err(dsp,
+ "%s.%d: Failed to write %d bytes at %d in %s: %d\n",
+@@ -1625,12 +1625,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ regions++;
+ }
+
+- ret = regmap_async_complete(regmap);
+- if (ret != 0) {
+- cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret);
+- goto out_fw;
+- }
+-
+ if (pos > firmware->size)
+ cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n",
+ file, regions, pos - firmware->size);
+@@ -1638,7 +1632,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ cs_dsp_debugfs_save_wmfwname(dsp, file);
+
+ out_fw:
+- regmap_async_complete(regmap);
+ cs_dsp_buf_free(&buf_list);
+
+ if (ret == -EOVERFLOW)
+@@ -2326,8 +2319,8 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ cs_dsp_dbg(dsp, "%s.%d: Writing %d bytes at %x\n",
+ file, blocks, le32_to_cpu(blk->len),
+ reg);
+- ret = regmap_raw_write_async(regmap, reg, buf->buf,
+- le32_to_cpu(blk->len));
++ ret = regmap_raw_write(regmap, reg, buf->buf,
++ le32_to_cpu(blk->len));
+ if (ret != 0) {
+ cs_dsp_err(dsp,
+ "%s.%d: Failed to write to %x in %s: %d\n",
+@@ -2339,10 +2332,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ blocks++;
+ }
+
+- ret = regmap_async_complete(regmap);
+- if (ret != 0)
+- cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret);
+-
+ if (pos > firmware->size)
+ cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n",
+ file, blocks, pos - firmware->size);
+@@ -2350,7 +2339,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ cs_dsp_debugfs_save_binname(dsp, file);
+
+ out_fw:
+- regmap_async_complete(regmap);
+ cs_dsp_buf_free(&buf_list);
+
+ if (ret == -EOVERFLOW)
+@@ -2561,8 +2549,8 @@ static int cs_dsp_adsp2_enable_core(struct cs_dsp *dsp)
+ {
+ int ret;
+
+- ret = regmap_update_bits_async(dsp->regmap, dsp->base + ADSP2_CONTROL,
+- ADSP2_SYS_ENA, ADSP2_SYS_ENA);
++ ret = regmap_update_bits(dsp->regmap, dsp->base + ADSP2_CONTROL,
++ ADSP2_SYS_ENA, ADSP2_SYS_ENA);
+ if (ret != 0)
+ return ret;
+
+diff --git a/drivers/firmware/efi/mokvar-table.c b/drivers/firmware/efi/mokvar-table.c
+index 5ed0602c2f75f0..4eb0dff4dfaf8b 100644
+--- a/drivers/firmware/efi/mokvar-table.c
++++ b/drivers/firmware/efi/mokvar-table.c
+@@ -103,9 +103,7 @@ void __init efi_mokvar_table_init(void)
+ void *va = NULL;
+ unsigned long cur_offset = 0;
+ unsigned long offset_limit;
+- unsigned long map_size = 0;
+ unsigned long map_size_needed = 0;
+- unsigned long size;
+ struct efi_mokvar_table_entry *mokvar_entry;
+ int err;
+
+@@ -134,48 +132,34 @@ void __init efi_mokvar_table_init(void)
+ */
+ err = -EINVAL;
+ while (cur_offset + sizeof(*mokvar_entry) <= offset_limit) {
+- mokvar_entry = va + cur_offset;
+- map_size_needed = cur_offset + sizeof(*mokvar_entry);
+- if (map_size_needed > map_size) {
+- if (va)
+- early_memunmap(va, map_size);
+- /*
+- * Map a little more than the fixed size entry
+- * header, anticipating some data. It's safe to
+- * do so as long as we stay within current memory
+- * descriptor.
+- */
+- map_size = min(map_size_needed + 2*EFI_PAGE_SIZE,
+- offset_limit);
+- va = early_memremap(efi.mokvar_table, map_size);
+- if (!va) {
+- pr_err("Failed to map EFI MOKvar config table pa=0x%lx, size=%lu.\n",
+- efi.mokvar_table, map_size);
+- return;
+- }
+- mokvar_entry = va + cur_offset;
++ if (va)
++ early_memunmap(va, sizeof(*mokvar_entry));
++ va = early_memremap(efi.mokvar_table + cur_offset, sizeof(*mokvar_entry));
++ if (!va) {
++ pr_err("Failed to map EFI MOKvar config table pa=0x%lx, size=%zu.\n",
++ efi.mokvar_table + cur_offset, sizeof(*mokvar_entry));
++ return;
+ }
++ mokvar_entry = va;
+
+ /* Check for last sentinel entry */
+ if (mokvar_entry->name[0] == '\0') {
+ if (mokvar_entry->data_size != 0)
+ break;
+ err = 0;
++ map_size_needed = cur_offset + sizeof(*mokvar_entry);
+ break;
+ }
+
+- /* Sanity check that the name is null terminated */
+- size = strnlen(mokvar_entry->name,
+- sizeof(mokvar_entry->name));
+- if (size >= sizeof(mokvar_entry->name))
+- break;
++ /* Enforce that the name is NUL terminated */
++ mokvar_entry->name[sizeof(mokvar_entry->name) - 1] = '\0';
+
+ /* Advance to the next entry */
+- cur_offset = map_size_needed + mokvar_entry->data_size;
++ cur_offset += sizeof(*mokvar_entry) + mokvar_entry->data_size;
+ }
+
+ if (va)
+- early_memunmap(va, map_size);
++ early_memunmap(va, sizeof(*mokvar_entry));
+ if (err) {
+ pr_err("EFI MOKvar config table is not valid\n");
+ return;
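+The rework above maps one fixed-size entry header at a time instead of
+growing a single mapping, then advances by header-plus-data_size until the
+empty-name sentinel. A user-space walk over the same record layout (toy
+struct, not the EFI definition):
+
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+
+struct rec {			/* toy fixed header, not the EFI layout */
+	char name[8];
+	uint32_t data_size;	/* payload bytes that follow the header */
+};
+
+static size_t walk(const uint8_t *buf, size_t limit)
+{
+	size_t off = 0;
+
+	while (off + sizeof(struct rec) <= limit) {
+		struct rec r;
+
+		memcpy(&r, buf + off, sizeof(r));	/* "map" one header */
+		if (r.name[0] == '\0')			/* sentinel entry */
+			return off + sizeof(r);		/* total table size */
+		off += sizeof(r) + r.data_size;		/* skip the payload */
+	}
+	return 0;	/* ran past the limit without a sentinel: invalid */
+}
+
+int main(void)
+{
+	uint8_t buf[64] = { 'k' };	/* record "k", data_size 0, then sentinel */
+
+	printf("table size: %zu\n", walk(buf, sizeof(buf)));
+	return 0;
+}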
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 45e28726e148e9..96845541b2d255 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1542,6 +1542,13 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
+ if (amdgpu_sriov_vf(adev))
+ return 0;
+
++ /* resizing on Dell G5 SE platforms causes problems with runtime pm */
++ if ((amdgpu_runtime_pm != 0) &&
++ adev->pdev->vendor == PCI_VENDOR_ID_ATI &&
++ adev->pdev->device == 0x731f &&
++ adev->pdev->subsystem_vendor == PCI_VENDOR_ID_DELL)
++ return 0;
++
+ /* PCI_EXT_CAP_ID_VNDR extended capability is located at 0x100 */
+ if (!pci_find_ext_capability(adev->pdev, PCI_EXT_CAP_ID_VNDR))
+ DRM_WARN("System can't access extended configuration space, please check!!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 425073d994912f..1c8ac4cf08c5ac 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -2280,7 +2280,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
+ struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+ struct amdgpu_res_cursor cursor;
+ u64 addr;
+- int r;
++ int r = 0;
+
+ if (!adev->mman.buffer_funcs_enabled)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+index 2eff37aaf8273b..1695dd78ede8e6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+@@ -107,6 +107,8 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x53 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -167,10 +169,10 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |=
+ ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+- m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
++
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+ m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+index 68dbc0399c87aa..3c0ae28c5923b5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+@@ -154,6 +154,8 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x55 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -221,10 +223,9 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |=
+ ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+- m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+ m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
+index 2b72d5b4949b6c..565858b9044d46 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
+@@ -121,6 +121,8 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x55 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -184,10 +186,9 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |=
+ ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+- m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+ m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 84e8ea3a8a0c94..217af36dc0976f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -182,6 +182,9 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x53 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
++
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -244,7 +247,7 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |= order_base_2(q->queue_size / 4) - 1;
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 85e58e0f6059a6..5df26f8937cc81 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1593,75 +1593,130 @@ static bool dm_should_disable_stutter(struct pci_dev *pdev)
+ return false;
+ }
+
+-static const struct dmi_system_id hpd_disconnect_quirk_table[] = {
++struct amdgpu_dm_quirks {
++ bool aux_hpd_discon;
++ bool support_edp0_on_dp1;
++};
++
++static struct amdgpu_dm_quirks quirk_entries = {
++ .aux_hpd_discon = false,
++ .support_edp0_on_dp1 = false
++};
++
++static int edp0_on_dp1_callback(const struct dmi_system_id *id)
++{
++ quirk_entries.support_edp0_on_dp1 = true;
++ return 0;
++}
++
++static int aux_hpd_discon_callback(const struct dmi_system_id *id)
++{
++ quirk_entries.aux_hpd_discon = true;
++ return 0;
++}
++
++static const struct dmi_system_id dmi_quirk_table[] = {
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3660"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3260"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3460"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower Plus 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF Plus 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro Plus 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro 7010"),
+ },
+ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
++ },
++ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
++ },
++ },
+ {}
+ /* TODO: refactor this from a fixed table to a dynamic option */
+ };
+
+-static void retrieve_dmi_info(struct amdgpu_display_manager *dm)
++static void retrieve_dmi_info(struct amdgpu_display_manager *dm, struct dc_init_data *init_data)
+ {
+- const struct dmi_system_id *dmi_id;
++ int dmi_id;
++ struct drm_device *dev = dm->ddev;
+
+ dm->aux_hpd_discon_quirk = false;
++ init_data->flags.support_edp0_on_dp1 = false;
++
++ dmi_id = dmi_check_system(dmi_quirk_table);
+
+- dmi_id = dmi_first_match(hpd_disconnect_quirk_table);
+- if (dmi_id) {
++ if (!dmi_id)
++ return;
++
++ if (quirk_entries.aux_hpd_discon) {
+ dm->aux_hpd_discon_quirk = true;
+- DRM_INFO("aux_hpd_discon_quirk attached\n");
++ drm_info(dev, "aux_hpd_discon_quirk attached\n");
++ }
++ if (quirk_entries.support_edp0_on_dp1) {
++ init_data->flags.support_edp0_on_dp1 = true;
++		drm_info(dev, "support_edp0_on_dp1 attached\n");
+ }
+ }
+
+@@ -1969,7 +2024,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0))
+ init_data.num_virtual_links = 1;
+
+- retrieve_dmi_info(&adev->dm);
++ retrieve_dmi_info(&adev->dm, &init_data);
+
+ if (adev->dm.bb_from_dmub)
+ init_data.bb_from_dmub = adev->dm.bb_from_dmub;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 3390f0d8420a05..c4a7fd453e5fc0 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -894,6 +894,7 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+ struct drm_device *dev = adev_to_drm(adev);
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
++ int i;
+
+ drm_connector_list_iter_begin(dev, &iter);
+ drm_for_each_connector_iter(connector, &iter) {
+@@ -920,6 +921,12 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+ }
+ }
+ drm_connector_list_iter_end(&iter);
++
++ /* Update reference counts for HPDs */
++ for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
++ if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
++			drm_err(dev, "DM_IRQ: Failed get HPD for source=%d!\n", i);
++ }
+ }
+
+ /**
+@@ -935,6 +942,7 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ struct drm_device *dev = adev_to_drm(adev);
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
++ int i;
+
+ drm_connector_list_iter_begin(dev, &iter);
+ drm_for_each_connector_iter(connector, &iter) {
+@@ -960,4 +968,10 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ }
+ }
+ drm_connector_list_iter_end(&iter);
++
++ /* Update reference counts for HPDs */
++ for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
++ if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
++ drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i);
++ }
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index 45858bf1523d8f..e140b7a04d7246 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -54,7 +54,8 @@ static bool link_supports_psrsu(struct dc_link *link)
+ if (amdgpu_dc_debug_mask & DC_DISABLE_PSR_SU)
+ return false;
+
+- return dc_dmub_check_min_version(dc->ctx->dmub_srv->dmub);
++ /* Temporarily disable PSR-SU to avoid glitches */
++ return false;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+index e8b6989a40f35a..6b34a33d788f29 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+@@ -3043,6 +3043,7 @@ static int kv_dpm_hw_init(void *handle)
+ if (!amdgpu_dpm)
+ return 0;
+
++ mutex_lock(&adev->pm.mutex);
+ kv_dpm_setup_asic(adev);
+ ret = kv_dpm_enable(adev);
+ if (ret)
+@@ -3050,6 +3051,8 @@ static int kv_dpm_hw_init(void *handle)
+ else
+ adev->pm.dpm_enabled = true;
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ mutex_unlock(&adev->pm.mutex);
++
+ return ret;
+ }
+
+@@ -3067,32 +3070,42 @@ static int kv_dpm_suspend(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
++ cancel_work_sync(&adev->pm.dpm.thermal.work);
++
+ if (adev->pm.dpm_enabled) {
++ mutex_lock(&adev->pm.mutex);
++ adev->pm.dpm_enabled = false;
+ /* disable dpm */
+ kv_dpm_disable(adev);
+ /* reset the power state */
+ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
++ mutex_unlock(&adev->pm.mutex);
+ }
+ return 0;
+ }
+
+ static int kv_dpm_resume(void *handle)
+ {
+- int ret;
++ int ret = 0;
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+- if (adev->pm.dpm_enabled) {
++ if (!amdgpu_dpm)
++ return 0;
++
++ if (!adev->pm.dpm_enabled) {
++ mutex_lock(&adev->pm.mutex);
+ /* asic init will reset to the boot state */
+ kv_dpm_setup_asic(adev);
+ ret = kv_dpm_enable(adev);
+- if (ret)
++ if (ret) {
+ adev->pm.dpm_enabled = false;
+- else
++ } else {
+ adev->pm.dpm_enabled = true;
+- if (adev->pm.dpm_enabled)
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ }
++ mutex_unlock(&adev->pm.mutex);
+ }
+- return 0;
++ return ret;
+ }
+
+ static bool kv_dpm_is_idle(void *handle)
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+index e861355ebd75b9..c7518b13e78795 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+@@ -1009,9 +1009,12 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
+ enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
+ int temp, size = sizeof(temp);
+
+- if (!adev->pm.dpm_enabled)
+- return;
++ mutex_lock(&adev->pm.mutex);
+
++ if (!adev->pm.dpm_enabled) {
++ mutex_unlock(&adev->pm.mutex);
++ return;
++ }
+ if (!pp_funcs->read_sensor(adev->powerplay.pp_handle,
+ AMDGPU_PP_SENSOR_GPU_TEMP,
+ (void *)&temp,
+@@ -1033,4 +1036,5 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
+ adev->pm.dpm.state = dpm_state;
+
+ amdgpu_legacy_dpm_compute_clocks(adev->powerplay.pp_handle);
++ mutex_unlock(&adev->pm.mutex);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+index a1baa13ab2c263..a5ad1b60597e61 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+@@ -7783,6 +7783,7 @@ static int si_dpm_hw_init(void *handle)
+ if (!amdgpu_dpm)
+ return 0;
+
++ mutex_lock(&adev->pm.mutex);
+ si_dpm_setup_asic(adev);
+ ret = si_dpm_enable(adev);
+ if (ret)
+@@ -7790,6 +7791,7 @@ static int si_dpm_hw_init(void *handle)
+ else
+ adev->pm.dpm_enabled = true;
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ mutex_unlock(&adev->pm.mutex);
+ return ret;
+ }
+
+@@ -7807,32 +7809,44 @@ static int si_dpm_suspend(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
++ cancel_work_sync(&adev->pm.dpm.thermal.work);
++
+ if (adev->pm.dpm_enabled) {
++ mutex_lock(&adev->pm.mutex);
++ adev->pm.dpm_enabled = false;
+ /* disable dpm */
+ si_dpm_disable(adev);
+ /* reset the power state */
+ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
++ mutex_unlock(&adev->pm.mutex);
+ }
++
+ return 0;
+ }
+
+ static int si_dpm_resume(void *handle)
+ {
+- int ret;
++ int ret = 0;
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+- if (adev->pm.dpm_enabled) {
++ if (!amdgpu_dpm)
++ return 0;
++
++ if (!adev->pm.dpm_enabled) {
+ /* asic init will reset to the boot state */
++ mutex_lock(&adev->pm.mutex);
+ si_dpm_setup_asic(adev);
+ ret = si_dpm_enable(adev);
+- if (ret)
++ if (ret) {
+ adev->pm.dpm_enabled = false;
+- else
++ } else {
+ adev->pm.dpm_enabled = true;
+- if (adev->pm.dpm_enabled)
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ }
++ mutex_unlock(&adev->pm.mutex);
+ }
+- return 0;
++
++ return ret;
+ }
+
+ static bool si_dpm_is_idle(void *handle)
+diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+index 7c78496e6213cc..192e571348f6b3 100644
+--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+@@ -53,7 +53,6 @@
+
+ #define RING_CTL(base) XE_REG((base) + 0x3c)
+ #define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
+-#define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
+
+ #define RING_START_UDW(base) XE_REG((base) + 0x48)
+
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index e6744422dee492..448766033690c7 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -47,6 +47,11 @@ enum xe_oa_submit_deps {
+ XE_OA_SUBMIT_ADD_DEPS,
+ };
+
++enum xe_oa_user_extn_from {
++ XE_OA_USER_EXTN_FROM_OPEN,
++ XE_OA_USER_EXTN_FROM_CONFIG,
++};
++
+ struct xe_oa_reg {
+ struct xe_reg addr;
+ u32 value;
+@@ -94,6 +99,17 @@ struct xe_oa_config_bo {
+ struct xe_bb *bb;
+ };
+
++struct xe_oa_fence {
++ /* @base: dma fence base */
++ struct dma_fence base;
++ /* @lock: lock for the fence */
++ spinlock_t lock;
++ /* @work: work to signal @base */
++ struct delayed_work work;
++ /* @cb: callback to schedule @work */
++ struct dma_fence_cb cb;
++};
++
+ #define DRM_FMT(x) DRM_XE_OA_FMT_TYPE_##x
+
+ static const struct xe_oa_format oa_formats[] = {
+@@ -166,10 +182,10 @@ static struct xe_oa_config *xe_oa_get_oa_config(struct xe_oa *oa, int metrics_se
+ return oa_config;
+ }
+
+-static void free_oa_config_bo(struct xe_oa_config_bo *oa_bo)
++static void free_oa_config_bo(struct xe_oa_config_bo *oa_bo, struct dma_fence *last_fence)
+ {
+ xe_oa_config_put(oa_bo->oa_config);
+- xe_bb_free(oa_bo->bb, NULL);
++ xe_bb_free(oa_bo->bb, last_fence);
+ kfree(oa_bo);
+ }
+
+@@ -668,7 +684,8 @@ static void xe_oa_free_configs(struct xe_oa_stream *stream)
+
+ xe_oa_config_put(stream->oa_config);
+ llist_for_each_entry_safe(oa_bo, tmp, stream->oa_config_bos.first, node)
+- free_oa_config_bo(oa_bo);
++ free_oa_config_bo(oa_bo, stream->last_fence);
++ dma_fence_put(stream->last_fence);
+ }
+
+ static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count)
+@@ -832,6 +849,7 @@ static void xe_oa_stream_destroy(struct xe_oa_stream *stream)
+ 	xe_gt_WARN_ON(gt, xe_guc_pc_unset_gucrc_mode(&gt->uc.guc.pc));
+
+ xe_oa_free_configs(stream);
++ xe_file_put(stream->xef);
+ }
+
+ static int xe_oa_alloc_oa_buffer(struct xe_oa_stream *stream)
+@@ -902,40 +920,113 @@ xe_oa_alloc_config_buffer(struct xe_oa_stream *stream, struct xe_oa_config *oa_c
+ return oa_bo;
+ }
+
++static void xe_oa_update_last_fence(struct xe_oa_stream *stream, struct dma_fence *fence)
++{
++ dma_fence_put(stream->last_fence);
++ stream->last_fence = dma_fence_get(fence);
++}
++
++static void xe_oa_fence_work_fn(struct work_struct *w)
++{
++ struct xe_oa_fence *ofence = container_of(w, typeof(*ofence), work.work);
++
++ /* Signal fence to indicate new OA configuration is active */
++ dma_fence_signal(&ofence->base);
++ dma_fence_put(&ofence->base);
++}
++
++static void xe_oa_config_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
++{
++ /* Additional empirical delay needed for NOA programming after registers are written */
++#define NOA_PROGRAM_ADDITIONAL_DELAY_US 500
++
++ struct xe_oa_fence *ofence = container_of(cb, typeof(*ofence), cb);
++
++ INIT_DELAYED_WORK(&ofence->work, xe_oa_fence_work_fn);
++ queue_delayed_work(system_unbound_wq, &ofence->work,
++ usecs_to_jiffies(NOA_PROGRAM_ADDITIONAL_DELAY_US));
++ dma_fence_put(fence);
++}
++
++static const char *xe_oa_get_driver_name(struct dma_fence *fence)
++{
++ return "xe_oa";
++}
++
++static const char *xe_oa_get_timeline_name(struct dma_fence *fence)
++{
++ return "unbound";
++}
++
++static const struct dma_fence_ops xe_oa_fence_ops = {
++ .get_driver_name = xe_oa_get_driver_name,
++ .get_timeline_name = xe_oa_get_timeline_name,
++};
++
+ static int xe_oa_emit_oa_config(struct xe_oa_stream *stream, struct xe_oa_config *config)
+ {
+ #define NOA_PROGRAM_ADDITIONAL_DELAY_US 500
+ struct xe_oa_config_bo *oa_bo;
+- int err = 0, us = NOA_PROGRAM_ADDITIONAL_DELAY_US;
++ struct xe_oa_fence *ofence;
++ int i, err, num_signal = 0;
+ struct dma_fence *fence;
+- long timeout;
+
+- /* Emit OA configuration batch */
++ ofence = kzalloc(sizeof(*ofence), GFP_KERNEL);
++ if (!ofence) {
++ err = -ENOMEM;
++ goto exit;
++ }
++
+ oa_bo = xe_oa_alloc_config_buffer(stream, config);
+ if (IS_ERR(oa_bo)) {
+ err = PTR_ERR(oa_bo);
+ goto exit;
+ }
+
++ /* Emit OA configuration batch */
+ fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_ADD_DEPS, oa_bo->bb);
+ if (IS_ERR(fence)) {
+ err = PTR_ERR(fence);
+ goto exit;
+ }
+
+- /* Wait till all previous batches have executed */
+- timeout = dma_fence_wait_timeout(fence, false, 5 * HZ);
+- dma_fence_put(fence);
+- if (timeout < 0)
+- err = timeout;
+- else if (!timeout)
+- err = -ETIME;
+- if (err)
+- drm_dbg(&stream->oa->xe->drm, "dma_fence_wait_timeout err %d\n", err);
++ /* Point of no return: initialize and set fence to signal */
++ spin_lock_init(&ofence->lock);
++ dma_fence_init(&ofence->base, &xe_oa_fence_ops, &ofence->lock, 0, 0);
+
+- /* Additional empirical delay needed for NOA programming after registers are written */
+- usleep_range(us, 2 * us);
++ for (i = 0; i < stream->num_syncs; i++) {
++ if (stream->syncs[i].flags & DRM_XE_SYNC_FLAG_SIGNAL)
++ num_signal++;
++ xe_sync_entry_signal(&stream->syncs[i], &ofence->base);
++ }
++
++ /* Additional dma_fence_get in case we dma_fence_wait */
++ if (!num_signal)
++ dma_fence_get(&ofence->base);
++
++ /* Update last fence too before adding callback */
++ xe_oa_update_last_fence(stream, fence);
++
++ /* Add job fence callback to schedule work to signal ofence->base */
++ err = dma_fence_add_callback(fence, &ofence->cb, xe_oa_config_cb);
++ xe_gt_assert(stream->gt, !err || err == -ENOENT);
++ if (err == -ENOENT)
++ xe_oa_config_cb(fence, &ofence->cb);
++
++ /* If nothing needs to be signaled we wait synchronously */
++ if (!num_signal) {
++ dma_fence_wait(&ofence->base, false);
++ dma_fence_put(&ofence->base);
++ }
++
++ /* Done with syncs */
++ for (i = 0; i < stream->num_syncs; i++)
++ xe_sync_entry_cleanup(&stream->syncs[i]);
++ kfree(stream->syncs);
++
++ return 0;
+ exit:
++ kfree(ofence);
+ return err;
+ }
+
+@@ -1006,6 +1097,262 @@ static int xe_oa_enable_metric_set(struct xe_oa_stream *stream)
+ return xe_oa_emit_oa_config(stream, stream->oa_config);
+ }
+
++static int decode_oa_format(struct xe_oa *oa, u64 fmt, enum xe_oa_format_name *name)
++{
++ u32 counter_size = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE, fmt);
++ u32 counter_sel = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SEL, fmt);
++ u32 bc_report = FIELD_GET(DRM_XE_OA_FORMAT_MASK_BC_REPORT, fmt);
++ u32 type = FIELD_GET(DRM_XE_OA_FORMAT_MASK_FMT_TYPE, fmt);
++ int idx;
++
++ for_each_set_bit(idx, oa->format_mask, __XE_OA_FORMAT_MAX) {
++ const struct xe_oa_format *f = &oa->oa_formats[idx];
++
++ if (counter_size == f->counter_size && bc_report == f->bc_report &&
++ type == f->type && counter_sel == f->counter_select) {
++ *name = idx;
++ return 0;
++ }
++ }
++
++ return -EINVAL;
++}
++
++static int xe_oa_set_prop_oa_unit_id(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ if (value >= oa->oa_unit_ids) {
++ drm_dbg(&oa->xe->drm, "OA unit ID out of range %lld\n", value);
++ return -EINVAL;
++ }
++ param->oa_unit_id = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_sample_oa(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->sample = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_metric_set(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->metric_set = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_oa_format(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++	int ret = decode_oa_format(oa, value, &param->oa_format);
++
++ if (ret) {
++ drm_dbg(&oa->xe->drm, "Unsupported OA report format %#llx\n", value);
++ return ret;
++ }
++ return 0;
++}
++
++static int xe_oa_set_prop_oa_exponent(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++#define OA_EXPONENT_MAX 31
++
++ if (value > OA_EXPONENT_MAX) {
++ drm_dbg(&oa->xe->drm, "OA timer exponent too high (> %u)\n", OA_EXPONENT_MAX);
++ return -EINVAL;
++ }
++ param->period_exponent = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_disabled(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->disabled = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_exec_queue_id(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->exec_queue_id = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_engine_instance(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->engine_instance = value;
++ return 0;
++}
++
++static int xe_oa_set_no_preempt(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->no_preempt = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_num_syncs(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->num_syncs = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_syncs_user(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->syncs_user = u64_to_user_ptr(value);
++ return 0;
++}
++
++static int xe_oa_set_prop_ret_inval(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ return -EINVAL;
++}
++
++typedef int (*xe_oa_set_property_fn)(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param);
++static const xe_oa_set_property_fn xe_oa_set_property_funcs_open[] = {
++ [DRM_XE_OA_PROPERTY_OA_UNIT_ID] = xe_oa_set_prop_oa_unit_id,
++ [DRM_XE_OA_PROPERTY_SAMPLE_OA] = xe_oa_set_prop_sample_oa,
++ [DRM_XE_OA_PROPERTY_OA_METRIC_SET] = xe_oa_set_prop_metric_set,
++ [DRM_XE_OA_PROPERTY_OA_FORMAT] = xe_oa_set_prop_oa_format,
++ [DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT] = xe_oa_set_prop_oa_exponent,
++ [DRM_XE_OA_PROPERTY_OA_DISABLED] = xe_oa_set_prop_disabled,
++ [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_exec_queue_id,
++ [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_engine_instance,
++ [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_no_preempt,
++ [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
++ [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
++};
++
++static const xe_oa_set_property_fn xe_oa_set_property_funcs_config[] = {
++ [DRM_XE_OA_PROPERTY_OA_UNIT_ID] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_SAMPLE_OA] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_METRIC_SET] = xe_oa_set_prop_metric_set,
++ [DRM_XE_OA_PROPERTY_OA_FORMAT] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_DISABLED] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
++ [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
++};
++
++static int xe_oa_user_ext_set_property(struct xe_oa *oa, enum xe_oa_user_extn_from from,
++ u64 extension, struct xe_oa_open_param *param)
++{
++ u64 __user *address = u64_to_user_ptr(extension);
++ struct drm_xe_ext_set_property ext;
++ int err;
++ u32 idx;
++
++ err = __copy_from_user(&ext, address, sizeof(ext));
++ if (XE_IOCTL_DBG(oa->xe, err))
++ return -EFAULT;
++
++ BUILD_BUG_ON(ARRAY_SIZE(xe_oa_set_property_funcs_open) !=
++ ARRAY_SIZE(xe_oa_set_property_funcs_config));
++
++ if (XE_IOCTL_DBG(oa->xe, ext.property >= ARRAY_SIZE(xe_oa_set_property_funcs_open)) ||
++ XE_IOCTL_DBG(oa->xe, ext.pad))
++ return -EINVAL;
++
++ idx = array_index_nospec(ext.property, ARRAY_SIZE(xe_oa_set_property_funcs_open));
++
++ if (from == XE_OA_USER_EXTN_FROM_CONFIG)
++ return xe_oa_set_property_funcs_config[idx](oa, ext.value, param);
++ else
++ return xe_oa_set_property_funcs_open[idx](oa, ext.value, param);
++}
++
++typedef int (*xe_oa_user_extension_fn)(struct xe_oa *oa, enum xe_oa_user_extn_from from,
++ u64 extension, struct xe_oa_open_param *param);
++static const xe_oa_user_extension_fn xe_oa_user_extension_funcs[] = {
++ [DRM_XE_OA_EXTENSION_SET_PROPERTY] = xe_oa_user_ext_set_property,
++};
++
++#define MAX_USER_EXTENSIONS 16
++static int xe_oa_user_extensions(struct xe_oa *oa, enum xe_oa_user_extn_from from, u64 extension,
++ int ext_number, struct xe_oa_open_param *param)
++{
++ u64 __user *address = u64_to_user_ptr(extension);
++ struct drm_xe_user_extension ext;
++ int err;
++ u32 idx;
++
++ if (XE_IOCTL_DBG(oa->xe, ext_number >= MAX_USER_EXTENSIONS))
++ return -E2BIG;
++
++ err = __copy_from_user(&ext, address, sizeof(ext));
++ if (XE_IOCTL_DBG(oa->xe, err))
++ return -EFAULT;
++
++ if (XE_IOCTL_DBG(oa->xe, ext.pad) ||
++ XE_IOCTL_DBG(oa->xe, ext.name >= ARRAY_SIZE(xe_oa_user_extension_funcs)))
++ return -EINVAL;
++
++ idx = array_index_nospec(ext.name, ARRAY_SIZE(xe_oa_user_extension_funcs));
++ err = xe_oa_user_extension_funcs[idx](oa, from, extension, param);
++ if (XE_IOCTL_DBG(oa->xe, err))
++ return err;
++
++ if (ext.next_extension)
++ return xe_oa_user_extensions(oa, from, ext.next_extension, ++ext_number, param);
++
++ return 0;
++}
++
++static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param)
++{
++ int ret, num_syncs, num_ufence = 0;
++
++ if (param->num_syncs && !param->syncs_user) {
++ drm_dbg(&oa->xe->drm, "num_syncs specified without sync array\n");
++ ret = -EINVAL;
++ goto exit;
++ }
++
++ if (param->num_syncs) {
++ param->syncs = kcalloc(param->num_syncs, sizeof(*param->syncs), GFP_KERNEL);
++ if (!param->syncs) {
++ ret = -ENOMEM;
++ goto exit;
++ }
++ }
++
++ for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) {
++		ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs],
++					  &param->syncs_user[num_syncs], 0);
++ if (ret)
++ goto err_syncs;
++
++		if (xe_sync_is_ufence(&param->syncs[num_syncs]))
++ num_ufence++;
++ }
++
++ if (XE_IOCTL_DBG(oa->xe, num_ufence > 1)) {
++ ret = -EINVAL;
++ goto err_syncs;
++ }
++
++ return 0;
++
++err_syncs:
++ while (num_syncs--)
++		xe_sync_entry_cleanup(&param->syncs[num_syncs]);
++ kfree(param->syncs);
++exit:
++ return ret;
++}
++
+ static void xe_oa_stream_enable(struct xe_oa_stream *stream)
+ {
+ stream->pollin = false;
+@@ -1099,36 +1446,38 @@ static int xe_oa_disable_locked(struct xe_oa_stream *stream)
+
+ static long xe_oa_config_locked(struct xe_oa_stream *stream, u64 arg)
+ {
+- struct drm_xe_ext_set_property ext;
++ struct xe_oa_open_param param = {};
+ long ret = stream->oa_config->id;
+ struct xe_oa_config *config;
+ int err;
+
+- err = __copy_from_user(&ext, u64_to_user_ptr(arg), sizeof(ext));
+- if (XE_IOCTL_DBG(stream->oa->xe, err))
+- return -EFAULT;
+-
+- if (XE_IOCTL_DBG(stream->oa->xe, ext.pad) ||
+- XE_IOCTL_DBG(stream->oa->xe, ext.base.name != DRM_XE_OA_EXTENSION_SET_PROPERTY) ||
+- XE_IOCTL_DBG(stream->oa->xe, ext.base.next_extension) ||
+- XE_IOCTL_DBG(stream->oa->xe, ext.property != DRM_XE_OA_PROPERTY_OA_METRIC_SET))
+- return -EINVAL;
++	err = xe_oa_user_extensions(stream->oa, XE_OA_USER_EXTN_FROM_CONFIG, arg, 0, &param);
++ if (err)
++ return err;
+
+- config = xe_oa_get_oa_config(stream->oa, ext.value);
++ config = xe_oa_get_oa_config(stream->oa, param.metric_set);
+ if (!config)
+ return -ENODEV;
+
+- if (config != stream->oa_config) {
+- err = xe_oa_emit_oa_config(stream, config);
+- if (!err)
+- config = xchg(&stream->oa_config, config);
+- else
+- ret = err;
++ param.xef = stream->xef;
++	err = xe_oa_parse_syncs(stream->oa, &param);
++ if (err)
++ goto err_config_put;
++
++ stream->num_syncs = param.num_syncs;
++ stream->syncs = param.syncs;
++
++ err = xe_oa_emit_oa_config(stream, config);
++ if (!err) {
++ config = xchg(&stream->oa_config, config);
++ drm_dbg(&stream->oa->xe->drm, "changed to oa config uuid=%s\n",
++ stream->oa_config->uuid);
+ }
+
++err_config_put:
+ xe_oa_config_put(config);
+
+- return ret;
++ return err ?: ret;
+ }
+
+ static long xe_oa_status_locked(struct xe_oa_stream *stream, unsigned long arg)
+@@ -1367,10 +1716,11 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
+ stream->oa_buffer.format = &stream->oa->oa_formats[param->oa_format];
+
+ stream->sample = param->sample;
+- stream->periodic = param->period_exponent > 0;
++ stream->periodic = param->period_exponent >= 0;
+ stream->period_exponent = param->period_exponent;
+ stream->no_preempt = param->no_preempt;
+
++ stream->xef = xe_file_get(param->xef);
+ stream->num_syncs = param->num_syncs;
+ stream->syncs = param->syncs;
+
+@@ -1470,6 +1820,7 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
+ err_free_configs:
+ xe_oa_free_configs(stream);
+ exit:
++ xe_file_put(stream->xef);
+ return ret;
+ }
+
+@@ -1579,27 +1930,6 @@ static bool engine_supports_oa_format(const struct xe_hw_engine *hwe, int type)
+ }
+ }
+
+-static int decode_oa_format(struct xe_oa *oa, u64 fmt, enum xe_oa_format_name *name)
+-{
+- u32 counter_size = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE, fmt);
+- u32 counter_sel = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SEL, fmt);
+- u32 bc_report = FIELD_GET(DRM_XE_OA_FORMAT_MASK_BC_REPORT, fmt);
+- u32 type = FIELD_GET(DRM_XE_OA_FORMAT_MASK_FMT_TYPE, fmt);
+- int idx;
+-
+- for_each_set_bit(idx, oa->format_mask, __XE_OA_FORMAT_MAX) {
+- const struct xe_oa_format *f = &oa->oa_formats[idx];
+-
+- if (counter_size == f->counter_size && bc_report == f->bc_report &&
+- type == f->type && counter_sel == f->counter_select) {
+- *name = idx;
+- return 0;
+- }
+- }
+-
+- return -EINVAL;
+-}
+-
+ /**
+ * xe_oa_unit_id - Return OA unit ID for a hardware engine
+ * @hwe: @xe_hw_engine
+@@ -1646,214 +1976,6 @@ static int xe_oa_assign_hwe(struct xe_oa *oa, struct xe_oa_open_param *param)
+ return ret;
+ }
+
+-static int xe_oa_set_prop_oa_unit_id(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- if (value >= oa->oa_unit_ids) {
+- drm_dbg(&oa->xe->drm, "OA unit ID out of range %lld\n", value);
+- return -EINVAL;
+- }
+- param->oa_unit_id = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_sample_oa(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->sample = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_metric_set(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->metric_set = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_oa_format(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+-	int ret = decode_oa_format(oa, value, &param->oa_format);
+-
+- if (ret) {
+- drm_dbg(&oa->xe->drm, "Unsupported OA report format %#llx\n", value);
+- return ret;
+- }
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_oa_exponent(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+-#define OA_EXPONENT_MAX 31
+-
+- if (value > OA_EXPONENT_MAX) {
+- drm_dbg(&oa->xe->drm, "OA timer exponent too high (> %u)\n", OA_EXPONENT_MAX);
+- return -EINVAL;
+- }
+- param->period_exponent = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_disabled(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->disabled = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_exec_queue_id(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->exec_queue_id = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_engine_instance(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->engine_instance = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_no_preempt(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->no_preempt = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_num_syncs(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->num_syncs = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_syncs_user(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->syncs_user = u64_to_user_ptr(value);
+- return 0;
+-}
+-
+-typedef int (*xe_oa_set_property_fn)(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param);
+-static const xe_oa_set_property_fn xe_oa_set_property_funcs[] = {
+- [DRM_XE_OA_PROPERTY_OA_UNIT_ID] = xe_oa_set_prop_oa_unit_id,
+- [DRM_XE_OA_PROPERTY_SAMPLE_OA] = xe_oa_set_prop_sample_oa,
+- [DRM_XE_OA_PROPERTY_OA_METRIC_SET] = xe_oa_set_prop_metric_set,
+- [DRM_XE_OA_PROPERTY_OA_FORMAT] = xe_oa_set_prop_oa_format,
+- [DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT] = xe_oa_set_prop_oa_exponent,
+- [DRM_XE_OA_PROPERTY_OA_DISABLED] = xe_oa_set_prop_disabled,
+- [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_exec_queue_id,
+- [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_engine_instance,
+- [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_no_preempt,
+- [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
+- [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
+-};
+-
+-static int xe_oa_user_ext_set_property(struct xe_oa *oa, u64 extension,
+- struct xe_oa_open_param *param)
+-{
+- u64 __user *address = u64_to_user_ptr(extension);
+- struct drm_xe_ext_set_property ext;
+- int err;
+- u32 idx;
+-
+- err = __copy_from_user(&ext, address, sizeof(ext));
+- if (XE_IOCTL_DBG(oa->xe, err))
+- return -EFAULT;
+-
+- if (XE_IOCTL_DBG(oa->xe, ext.property >= ARRAY_SIZE(xe_oa_set_property_funcs)) ||
+- XE_IOCTL_DBG(oa->xe, ext.pad))
+- return -EINVAL;
+-
+- idx = array_index_nospec(ext.property, ARRAY_SIZE(xe_oa_set_property_funcs));
+- return xe_oa_set_property_funcs[idx](oa, ext.value, param);
+-}
+-
+-typedef int (*xe_oa_user_extension_fn)(struct xe_oa *oa, u64 extension,
+- struct xe_oa_open_param *param);
+-static const xe_oa_user_extension_fn xe_oa_user_extension_funcs[] = {
+- [DRM_XE_OA_EXTENSION_SET_PROPERTY] = xe_oa_user_ext_set_property,
+-};
+-
+-#define MAX_USER_EXTENSIONS 16
+-static int xe_oa_user_extensions(struct xe_oa *oa, u64 extension, int ext_number,
+- struct xe_oa_open_param *param)
+-{
+- u64 __user *address = u64_to_user_ptr(extension);
+- struct drm_xe_user_extension ext;
+- int err;
+- u32 idx;
+-
+- if (XE_IOCTL_DBG(oa->xe, ext_number >= MAX_USER_EXTENSIONS))
+- return -E2BIG;
+-
+- err = __copy_from_user(&ext, address, sizeof(ext));
+- if (XE_IOCTL_DBG(oa->xe, err))
+- return -EFAULT;
+-
+- if (XE_IOCTL_DBG(oa->xe, ext.pad) ||
+- XE_IOCTL_DBG(oa->xe, ext.name >= ARRAY_SIZE(xe_oa_user_extension_funcs)))
+- return -EINVAL;
+-
+- idx = array_index_nospec(ext.name, ARRAY_SIZE(xe_oa_user_extension_funcs));
+- err = xe_oa_user_extension_funcs[idx](oa, extension, param);
+- if (XE_IOCTL_DBG(oa->xe, err))
+- return err;
+-
+- if (ext.next_extension)
+- return xe_oa_user_extensions(oa, ext.next_extension, ++ext_number, param);
+-
+- return 0;
+-}
+-
+-static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param)
+-{
+- int ret, num_syncs, num_ufence = 0;
+-
+- if (param->num_syncs && !param->syncs_user) {
+- drm_dbg(&oa->xe->drm, "num_syncs specified without sync array\n");
+- ret = -EINVAL;
+- goto exit;
+- }
+-
+- if (param->num_syncs) {
+- param->syncs = kcalloc(param->num_syncs, sizeof(*param->syncs), GFP_KERNEL);
+- if (!param->syncs) {
+- ret = -ENOMEM;
+- goto exit;
+- }
+- }
+-
+- for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) {
+-		ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs],
+-					  &param->syncs_user[num_syncs], 0);
+- if (ret)
+- goto err_syncs;
+-
+-		if (xe_sync_is_ufence(&param->syncs[num_syncs]))
+- num_ufence++;
+- }
+-
+- if (XE_IOCTL_DBG(oa->xe, num_ufence > 1)) {
+- ret = -EINVAL;
+- goto err_syncs;
+- }
+-
+- return 0;
+-
+-err_syncs:
+- while (num_syncs--)
+-		xe_sync_entry_cleanup(&param->syncs[num_syncs]);
+- kfree(param->syncs);
+-exit:
+- return ret;
+-}
+-
+ /**
+ * xe_oa_stream_open_ioctl - Opens an OA stream
+ * @dev: @drm_device
+@@ -1880,7 +2002,8 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ }
+
+ param.xef = xef;
+-	ret = xe_oa_user_extensions(oa, data, 0, &param);
++	param.period_exponent = -1;
++	ret = xe_oa_user_extensions(oa, XE_OA_USER_EXTN_FROM_OPEN, data, 0, &param);
+ if (ret)
+ return ret;
+
+@@ -1934,7 +2057,7 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ goto err_exec_q;
+ }
+
+- if (param.period_exponent > 0) {
++ if (param.period_exponent >= 0) {
+ u64 oa_period, oa_freq_hz;
+
+ /* Requesting samples from OAG buffer is a privileged operation */
+diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
+index 99f4b2d4bdcf6a..fea9d981e414fa 100644
+--- a/drivers/gpu/drm/xe/xe_oa_types.h
++++ b/drivers/gpu/drm/xe/xe_oa_types.h
+@@ -239,6 +239,12 @@ struct xe_oa_stream {
+ /** @no_preempt: Whether preemption and timeslicing is disabled for stream exec_q */
+ u32 no_preempt;
+
++ /** @xef: xe_file with which the stream was opened */
++ struct xe_file *xef;
++
++ /** @last_fence: fence to use in stream destroy when needed */
++ struct dma_fence *last_fence;
++
+ /** @num_syncs: size of @syncs array */
+ u32 num_syncs;
+
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index c99380271de62f..5693b337f5dffe 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -667,20 +667,33 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
+
+ /* Collect invalidated userptrs */
+ spin_lock(&vm->userptr.invalidated_lock);
++ xe_assert(vm->xe, list_empty(&vm->userptr.repin_list));
+ list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
+ userptr.invalidate_link) {
+ list_del_init(&uvma->userptr.invalidate_link);
+- list_move_tail(&uvma->userptr.repin_link,
+- &vm->userptr.repin_list);
++ list_add_tail(&uvma->userptr.repin_link,
++ &vm->userptr.repin_list);
+ }
+ spin_unlock(&vm->userptr.invalidated_lock);
+
+- /* Pin and move to temporary list */
++ /* Pin and move to bind list */
+ list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
+ userptr.repin_link) {
+ err = xe_vma_userptr_pin_pages(uvma);
+ if (err == -EFAULT) {
+ list_del_init(&uvma->userptr.repin_link);
++ /*
++			 * We might have already done the pin once, but then
++			 * had to retry before the re-bind happened, due to
++			 * some other condition in the caller, but in the
++ * meantime the userptr got dinged by the notifier such
++ * that we need to revalidate here, but this time we hit
++ * the EFAULT. In such a case make sure we remove
++ * ourselves from the rebind list to avoid going down in
++ * flames.
++ */
++ if (!list_empty(&uvma->vma.combined_links.rebind))
++ list_del_init(&uvma->vma.combined_links.rebind);
+
+ /* Wait for pending binds */
+ xe_vm_lock(vm, false);
+@@ -691,10 +704,10 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
+ err = xe_vm_invalidate_vma(&uvma->vma);
+ xe_vm_unlock(vm);
+ if (err)
+- return err;
++ break;
+ } else {
+- if (err < 0)
+- return err;
++ if (err)
++ break;
+
+ list_del_init(&uvma->userptr.repin_link);
+ list_move_tail(&uvma->vma.combined_links.rebind,
+@@ -702,7 +715,19 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
+ }
+ }
+
+- return 0;
++ if (err) {
++ down_write(&vm->userptr.notifier_lock);
++ spin_lock(&vm->userptr.invalidated_lock);
++ list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
++ userptr.repin_link) {
++ list_del_init(&uvma->userptr.repin_link);
++ list_move_tail(&uvma->userptr.invalidate_link,
++ &vm->userptr.invalidated);
++ }
++ spin_unlock(&vm->userptr.invalidated_lock);
++ up_write(&vm->userptr.notifier_lock);
++ }
++ return err;
+ }
+
+ /**
+@@ -1066,6 +1091,7 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
+ xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
+
+ spin_lock(&vm->userptr.invalidated_lock);
++ xe_assert(vm->xe, list_empty(&to_userptr_vma(vma)->userptr.repin_link));
+ list_del(&to_userptr_vma(vma)->userptr.invalidate_link);
+ spin_unlock(&vm->userptr.invalidated_lock);
+ } else if (!xe_vma_is_null(vma)) {
+diff --git a/drivers/i2c/busses/i2c-ls2x.c b/drivers/i2c/busses/i2c-ls2x.c
+index 8821cac3897b69..b475dd27b7af94 100644
+--- a/drivers/i2c/busses/i2c-ls2x.c
++++ b/drivers/i2c/busses/i2c-ls2x.c
+@@ -10,6 +10,7 @@
+ * Rewritten for mainline by Binbin Zhou <zhoubinbin@loongson.cn>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/bits.h>
+ #include <linux/completion.h>
+ #include <linux/device.h>
+@@ -26,7 +27,8 @@
+ #include <linux/units.h>
+
+ /* I2C Registers */
+-#define I2C_LS2X_PRER 0x0 /* Freq Division Register(16 bits) */
++#define I2C_LS2X_PRER_LO 0x0 /* Freq Division Low Byte Register */
++#define I2C_LS2X_PRER_HI 0x1 /* Freq Division High Byte Register */
+ #define I2C_LS2X_CTR 0x2 /* Control Register */
+ #define I2C_LS2X_TXR 0x3 /* Transport Data Register */
+ #define I2C_LS2X_RXR 0x3 /* Receive Data Register */
+@@ -93,6 +95,7 @@ static irqreturn_t ls2x_i2c_isr(int this_irq, void *dev_id)
+ */
+ static void ls2x_i2c_adjust_bus_speed(struct ls2x_i2c_priv *priv)
+ {
++ u16 val;
+ struct i2c_timings *t = &priv->i2c_t;
+ struct device *dev = priv->adapter.dev.parent;
+ u32 acpi_speed = i2c_acpi_find_bus_speed(dev);
+@@ -104,9 +107,14 @@ static void ls2x_i2c_adjust_bus_speed(struct ls2x_i2c_priv *priv)
+ else
+ t->bus_freq_hz = LS2X_I2C_FREQ_STD;
+
+- /* Calculate and set i2c frequency. */
+- writew(LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1,
+- priv->base + I2C_LS2X_PRER);
++ /*
++ * According to the chip manual, we can only access the registers as bytes,
++ * otherwise the high bits will be truncated.
++ * So set the I2C frequency with a sequential writeb() instead of writew().
++ */
++ val = LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1;
++ writeb(FIELD_GET(GENMASK(7, 0), val), priv->base + I2C_LS2X_PRER_LO);
++ writeb(FIELD_GET(GENMASK(15, 8), val), priv->base + I2C_LS2X_PRER_HI);
+ }
+
+ static void ls2x_i2c_init(struct ls2x_i2c_priv *priv)
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index bbcb4d6668ce63..a693ebb64edf41 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2319,6 +2319,13 @@ static int npcm_i2c_probe_bus(struct platform_device *pdev)
+ if (irq < 0)
+ return irq;
+
++ /*
++ * Disable the interrupt to avoid the interrupt handler being triggered
++ * incorrectly by the asynchronous interrupt status since the machine
++ * might do a warm reset during the last smbus/i2c transfer session.
++ */
++ npcm_i2c_int_enable(bus, false);
++
+ ret = devm_request_irq(bus->dev, irq, npcm_i2c_bus_irq, 0,
+ dev_name(bus->dev), bus);
+ if (ret)
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 67aebfe0fed665..524ed143f875d3 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -55,6 +55,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/mwait.h>
+ #include <asm/spec-ctrl.h>
++#include <asm/tsc.h>
+ #include <asm/fpu/api.h>
+
+ #define INTEL_IDLE_VERSION "0.5.1"
+@@ -1749,6 +1750,9 @@ static void __init intel_idle_init_cstates_acpi(struct cpuidle_driver *drv)
+ if (intel_idle_state_needs_timer_stop(state))
+ state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+
++ if (cx->type > ACPI_STATE_C1 && !boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
++ mark_tsc_unstable("TSC halts in idle");
++
+ state->enter = intel_idle;
+ state->enter_s2idle = intel_idle_s2idle;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+index e94518b12f86ee..a316afc0139c86 100644
+--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
++++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+@@ -154,6 +154,14 @@ struct bnxt_re_pacing {
+
+ #define BNXT_RE_GRC_FIFO_REG_BASE 0x2000
+
++#define BNXT_RE_MIN_MSIX 2
++#define BNXT_RE_MAX_MSIX BNXT_MAX_ROCE_MSIX
++struct bnxt_re_nq_record {
++ struct bnxt_msix_entry msix_entries[BNXT_RE_MAX_MSIX];
++ struct bnxt_qplib_nq nq[BNXT_RE_MAX_MSIX];
++ int num_msix;
++};
++
+ #define MAX_CQ_HASH_BITS (16)
+ #define MAX_SRQ_HASH_BITS (16)
+ struct bnxt_re_dev {
+@@ -174,24 +182,20 @@ struct bnxt_re_dev {
+ unsigned int version, major, minor;
+ struct bnxt_qplib_chip_ctx *chip_ctx;
+ struct bnxt_en_dev *en_dev;
+- int num_msix;
+
+ int id;
+
+ struct delayed_work worker;
+ u8 cur_prio_map;
+
+- /* FP Notification Queue (CQ & SRQ) */
+- struct tasklet_struct nq_task;
+-
+ /* RCFW Channel */
+ struct bnxt_qplib_rcfw rcfw;
+
+- /* NQ */
+- struct bnxt_qplib_nq nq[BNXT_MAX_ROCE_MSIX];
++ /* NQ record */
++ struct bnxt_re_nq_record *nqr;
+
+ /* Device Resources */
+- struct bnxt_qplib_dev_attr dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr;
+ struct bnxt_qplib_ctx qplib_ctx;
+ struct bnxt_qplib_res qplib_res;
+ struct bnxt_qplib_dpi dpi_privileged;
+diff --git a/drivers/infiniband/hw/bnxt_re/hw_counters.c b/drivers/infiniband/hw/bnxt_re/hw_counters.c
+index 1e63f809174837..f51adb0a97e667 100644
+--- a/drivers/infiniband/hw/bnxt_re/hw_counters.c
++++ b/drivers/infiniband/hw/bnxt_re/hw_counters.c
+@@ -357,8 +357,8 @@ int bnxt_re_ib_get_hw_stats(struct ib_device *ibdev,
+ goto done;
+ }
+ bnxt_re_copy_err_stats(rdev, stats, err_s);
+- if (_is_ext_stats_supported(rdev->dev_attr.dev_cap_flags) &&
+- !rdev->is_virtfn) {
++ if (bnxt_ext_stats_supported(rdev->chip_ctx, rdev->dev_attr->dev_cap_flags,
++ rdev->is_virtfn)) {
+ rc = bnxt_re_get_ext_stat(rdev, stats);
+ if (rc) {
+ clear_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS,
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index a7067c3c067972..0b21d8b5d96296 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -118,7 +118,7 @@ static enum ib_access_flags __to_ib_access_flags(int qflags)
+ static void bnxt_re_check_and_set_relaxed_ordering(struct bnxt_re_dev *rdev,
+ struct bnxt_qplib_mrw *qplib_mr)
+ {
+- if (_is_relaxed_ordering_supported(rdev->dev_attr.dev_cap_flags2) &&
++ if (_is_relaxed_ordering_supported(rdev->dev_attr->dev_cap_flags2) &&
+ pcie_relaxed_ordering_enabled(rdev->en_dev->pdev))
+ qplib_mr->flags |= CMDQ_REGISTER_MR_FLAGS_ENABLE_RO;
+ }
+@@ -143,7 +143,7 @@ int bnxt_re_query_device(struct ib_device *ibdev,
+ struct ib_udata *udata)
+ {
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+
+ memset(ib_attr, 0, sizeof(*ib_attr));
+ memcpy(&ib_attr->fw_ver, dev_attr->fw_ver,
+@@ -216,7 +216,7 @@ int bnxt_re_query_port(struct ib_device *ibdev, u32 port_num,
+ struct ib_port_attr *port_attr)
+ {
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ int rc;
+
+ memset(port_attr, 0, sizeof(*port_attr));
+@@ -274,8 +274,8 @@ void bnxt_re_query_fw_str(struct ib_device *ibdev, char *str)
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+
+ snprintf(str, IB_FW_VERSION_NAME_MAX, "%d.%d.%d.%d",
+- rdev->dev_attr.fw_ver[0], rdev->dev_attr.fw_ver[1],
+- rdev->dev_attr.fw_ver[2], rdev->dev_attr.fw_ver[3]);
++ rdev->dev_attr->fw_ver[0], rdev->dev_attr->fw_ver[1],
++ rdev->dev_attr->fw_ver[2], rdev->dev_attr->fw_ver[3]);
+ }
+
+ int bnxt_re_query_pkey(struct ib_device *ibdev, u32 port_num,
+@@ -526,7 +526,7 @@ static int bnxt_re_create_fence_mr(struct bnxt_re_pd *pd)
+ mr->qplib_mr.pd = &pd->qplib_pd;
+ mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR;
+ mr->qplib_mr.access_flags = __from_ib_access_flags(mr_access_flags);
+- if (!_is_alloc_mr_unified(rdev->dev_attr.dev_cap_flags)) {
++ if (!_is_alloc_mr_unified(rdev->dev_attr->dev_cap_flags)) {
+ rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Failed to alloc fence-HW-MR\n");
+@@ -1001,7 +1001,7 @@ static int bnxt_re_setup_swqe_size(struct bnxt_re_qp *qp,
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+ sq = &qplqp->sq;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ align = sizeof(struct sq_send_hdr);
+ ilsize = ALIGN(init_attr->cap.max_inline_data, align);
+@@ -1221,7 +1221,7 @@ static int bnxt_re_init_rq_attr(struct bnxt_re_qp *qp,
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+ rq = &qplqp->rq;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ if (init_attr->srq) {
+ struct bnxt_re_srq *srq;
+@@ -1258,7 +1258,7 @@ static void bnxt_re_adjust_gsi_rq_attr(struct bnxt_re_qp *qp)
+
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx)) {
+ qplqp->rq.max_sge = dev_attr->max_qp_sges;
+@@ -1284,7 +1284,7 @@ static int bnxt_re_init_sq_attr(struct bnxt_re_qp *qp,
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+ sq = &qplqp->sq;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ sq->max_sge = init_attr->cap.max_send_sge;
+ entries = init_attr->cap.max_send_wr;
+@@ -1337,7 +1337,7 @@ static void bnxt_re_adjust_gsi_sq_attr(struct bnxt_re_qp *qp,
+
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx)) {
+ entries = bnxt_re_init_depth(init_attr->cap.max_send_wr + 1, uctx);
+@@ -1386,7 +1386,7 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
+
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ /* Setup misc params */
+ ether_addr_copy(qplqp->smac, rdev->netdev->dev_addr);
+@@ -1556,7 +1556,7 @@ int bnxt_re_create_qp(struct ib_qp *ib_qp, struct ib_qp_init_attr *qp_init_attr,
+ ib_pd = ib_qp->pd;
+ pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+ rdev = pd->rdev;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+ qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
+
+ uctx = rdma_udata_to_drv_context(udata, struct bnxt_re_ucontext, ib_uctx);
+@@ -1783,7 +1783,7 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ ib_pd = ib_srq->pd;
+ pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+ rdev = pd->rdev;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+ srq = container_of(ib_srq, struct bnxt_re_srq, ib_srq);
+
+ if (srq_init_attr->attr.max_wr >= dev_attr->max_srq_wqes) {
+@@ -1814,8 +1814,10 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ srq->qplib_srq.wqe_size = bnxt_re_get_rwqe_size(dev_attr->max_srq_sges);
+ srq->qplib_srq.threshold = srq_init_attr->attr.srq_limit;
+ srq->srq_limit = srq_init_attr->attr.srq_limit;
+- srq->qplib_srq.eventq_hw_ring_id = rdev->nq[0].ring_id;
+- nq = &rdev->nq[0];
++ srq->qplib_srq.eventq_hw_ring_id = rdev->nqr->nq[0].ring_id;
++ srq->qplib_srq.sg_info.pgsize = PAGE_SIZE;
++ srq->qplib_srq.sg_info.pgshft = PAGE_SHIFT;
++ nq = &rdev->nqr->nq[0];
+
+ if (udata) {
+ rc = bnxt_re_init_user_srq(rdev, pd, srq, udata);
+@@ -1987,7 +1989,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ {
+ struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
+ struct bnxt_re_dev *rdev = qp->rdev;
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ enum ib_qp_state curr_qp_state, new_qp_state;
+ int rc, entries;
+ unsigned int flags;
+@@ -3011,7 +3013,7 @@ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata = &attrs->driver_udata;
+ struct bnxt_re_ucontext *uctx =
+ rdma_udata_to_drv_context(udata, struct bnxt_re_ucontext, ib_uctx);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ struct bnxt_qplib_chip_ctx *cctx;
+ struct bnxt_qplib_nq *nq = NULL;
+ unsigned int nq_alloc_cnt;
+@@ -3070,7 +3072,7 @@ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ * used for getting the NQ index.
+ */
+ nq_alloc_cnt = atomic_inc_return(&rdev->nq_alloc_cnt);
+- nq = &rdev->nq[nq_alloc_cnt % (rdev->num_msix - 1)];
++ nq = &rdev->nqr->nq[nq_alloc_cnt % (rdev->nqr->num_msix - 1)];
+ cq->qplib_cq.max_wqe = entries;
+ cq->qplib_cq.cnq_hw_ring_id = nq->ring_id;
+ cq->qplib_cq.nq = nq;
+@@ -3154,7 +3156,7 @@ int bnxt_re_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
+
+ cq = container_of(ibcq, struct bnxt_re_cq, ib_cq);
+ rdev = cq->rdev;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+ if (!ibcq->uobject) {
+ ibdev_err(&rdev->ibdev, "Kernel CQ Resize not supported");
+ return -EOPNOTSUPP;
+@@ -4127,7 +4129,7 @@ static struct ib_mr *__bnxt_re_user_reg_mr(struct ib_pd *ib_pd, u64 length, u64
+ mr->qplib_mr.access_flags = __from_ib_access_flags(mr_access_flags);
+ mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_MR;
+
+- if (!_is_alloc_mr_unified(rdev->dev_attr.dev_cap_flags)) {
++ if (!_is_alloc_mr_unified(rdev->dev_attr->dev_cap_flags)) {
+ rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Failed to allocate MR rc = %d", rc);
+@@ -4219,7 +4221,7 @@ int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
+ struct bnxt_re_ucontext *uctx =
+ container_of(ctx, struct bnxt_re_ucontext, ib_uctx);
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ struct bnxt_re_user_mmap_entry *entry;
+ struct bnxt_re_uctx_resp resp = {};
+ struct bnxt_re_uctx_req ureq = {};
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 8abd1b723f8ff5..9bd837a5b8a1ad 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -152,6 +152,10 @@ static void bnxt_re_destroy_chip_ctx(struct bnxt_re_dev *rdev)
+
+ if (!rdev->chip_ctx)
+ return;
++
++ kfree(rdev->dev_attr);
++ rdev->dev_attr = NULL;
++
+ chip_ctx = rdev->chip_ctx;
+ rdev->chip_ctx = NULL;
+ rdev->rcfw.res = NULL;
+@@ -165,7 +169,7 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev)
+ {
+ struct bnxt_qplib_chip_ctx *chip_ctx;
+ struct bnxt_en_dev *en_dev;
+- int rc;
++ int rc = -ENOMEM;
+
+ en_dev = rdev->en_dev;
+
+@@ -181,23 +185,30 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev)
+
+ rdev->qplib_res.cctx = rdev->chip_ctx;
+ rdev->rcfw.res = &rdev->qplib_res;
+- rdev->qplib_res.dattr = &rdev->dev_attr;
++ rdev->dev_attr = kzalloc(sizeof(*rdev->dev_attr), GFP_KERNEL);
++ if (!rdev->dev_attr)
++ goto free_chip_ctx;
++ rdev->qplib_res.dattr = rdev->dev_attr;
+ rdev->qplib_res.is_vf = BNXT_EN_VF(en_dev);
+
+ bnxt_re_set_drv_mode(rdev);
+
+ bnxt_re_set_db_offset(rdev);
+ rc = bnxt_qplib_map_db_bar(&rdev->qplib_res);
+- if (rc) {
+- kfree(rdev->chip_ctx);
+- rdev->chip_ctx = NULL;
+- return rc;
+- }
++ if (rc)
++ goto free_dev_attr;
+
+ if (bnxt_qplib_determine_atomics(en_dev->pdev))
+ ibdev_info(&rdev->ibdev,
+ "platform doesn't support global atomics.");
+ return 0;
++free_dev_attr:
++ kfree(rdev->dev_attr);
++ rdev->dev_attr = NULL;
++free_chip_ctx:
++ kfree(rdev->chip_ctx);
++ rdev->chip_ctx = NULL;
++ return rc;
+ }
+
+ /* SR-IOV helper functions */
+@@ -219,7 +230,7 @@ static void bnxt_re_limit_pf_res(struct bnxt_re_dev *rdev)
+ struct bnxt_qplib_ctx *ctx;
+ int i;
+
+- attr = &rdev->dev_attr;
++ attr = rdev->dev_attr;
+ ctx = &rdev->qplib_ctx;
+
+ ctx->qpc_count = min_t(u32, BNXT_RE_MAX_QPC_COUNT,
+@@ -233,7 +244,7 @@ static void bnxt_re_limit_pf_res(struct bnxt_re_dev *rdev)
+ if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx))
+ for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
+ rdev->qplib_ctx.tqm_ctx.qcount[i] =
+- rdev->dev_attr.tqm_alloc_reqs[i];
++ rdev->dev_attr->tqm_alloc_reqs[i];
+ }
+
+ static void bnxt_re_limit_vf_res(struct bnxt_qplib_ctx *qplib_ctx, u32 num_vf)
+@@ -314,10 +325,12 @@ static void bnxt_re_stop_irq(void *handle)
+ int indx;
+
+ rdev = en_info->rdev;
++ if (!rdev)
++ return;
+ rcfw = &rdev->rcfw;
+
+- for (indx = BNXT_RE_NQ_IDX; indx < rdev->num_msix; indx++) {
+- nq = &rdev->nq[indx - 1];
++ for (indx = BNXT_RE_NQ_IDX; indx < rdev->nqr->num_msix; indx++) {
++ nq = &rdev->nqr->nq[indx - 1];
+ bnxt_qplib_nq_stop_irq(nq, false);
+ }
+
+@@ -334,7 +347,9 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ int indx, rc;
+
+ rdev = en_info->rdev;
+- msix_ent = rdev->en_dev->msix_entries;
++ if (!rdev)
++ return;
++ msix_ent = rdev->nqr->msix_entries;
+ rcfw = &rdev->rcfw;
+ if (!ent) {
+ /* Not setting the f/w timeout bit in rcfw.
+@@ -349,8 +364,8 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ /* Vectors may change after restart, so update with new vectors
+ * in device sctructure.
+ */
+- for (indx = 0; indx < rdev->num_msix; indx++)
+- rdev->en_dev->msix_entries[indx].vector = ent[indx].vector;
++ for (indx = 0; indx < rdev->nqr->num_msix; indx++)
++ rdev->nqr->msix_entries[indx].vector = ent[indx].vector;
+
+ rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
+ false);
+@@ -358,8 +373,8 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ ibdev_warn(&rdev->ibdev, "Failed to reinit CREQ\n");
+ return;
+ }
+- for (indx = BNXT_RE_NQ_IDX ; indx < rdev->num_msix; indx++) {
+- nq = &rdev->nq[indx - 1];
++ for (indx = BNXT_RE_NQ_IDX ; indx < rdev->nqr->num_msix; indx++) {
++ nq = &rdev->nqr->nq[indx - 1];
+ rc = bnxt_qplib_nq_start_irq(nq, indx - 1,
+ msix_ent[indx].vector, false);
+ if (rc) {
+@@ -943,7 +958,7 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
+
+ addrconf_addr_eui48((u8 *)&ibdev->node_guid, rdev->netdev->dev_addr);
+
+- ibdev->num_comp_vectors = rdev->num_msix - 1;
++ ibdev->num_comp_vectors = rdev->nqr->num_msix - 1;
+ ibdev->dev.parent = &rdev->en_dev->pdev->dev;
+ ibdev->local_dma_lkey = BNXT_QPLIB_RSVD_LKEY;
+
+@@ -1276,8 +1291,8 @@ static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
+ {
+ int i;
+
+- for (i = 1; i < rdev->num_msix; i++)
+- bnxt_qplib_disable_nq(&rdev->nq[i - 1]);
++ for (i = 1; i < rdev->nqr->num_msix; i++)
++ bnxt_qplib_disable_nq(&rdev->nqr->nq[i - 1]);
+
+ if (rdev->qplib_res.rcfw)
+ bnxt_qplib_cleanup_res(&rdev->qplib_res);
+@@ -1291,10 +1306,10 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
+
+ bnxt_qplib_init_res(&rdev->qplib_res);
+
+- for (i = 1; i < rdev->num_msix ; i++) {
+- db_offt = rdev->en_dev->msix_entries[i].db_offset;
+- rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nq[i - 1],
+- i - 1, rdev->en_dev->msix_entries[i].vector,
++ for (i = 1; i < rdev->nqr->num_msix ; i++) {
++ db_offt = rdev->nqr->msix_entries[i].db_offset;
++ rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nqr->nq[i - 1],
++ i - 1, rdev->nqr->msix_entries[i].vector,
+ db_offt, &bnxt_re_cqn_handler,
+ &bnxt_re_srqn_handler);
+ if (rc) {
+@@ -1307,20 +1322,22 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
+ return 0;
+ fail:
+ for (i = num_vec_enabled; i >= 0; i--)
+- bnxt_qplib_disable_nq(&rdev->nq[i]);
++ bnxt_qplib_disable_nq(&rdev->nqr->nq[i]);
+ return rc;
+ }
+
+ static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev)
+ {
++ struct bnxt_qplib_nq *nq;
+ u8 type;
+ int i;
+
+- for (i = 0; i < rdev->num_msix - 1; i++) {
++ for (i = 0; i < rdev->nqr->num_msix - 1; i++) {
+ type = bnxt_qplib_get_ring_type(rdev->chip_ctx);
+- bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, type);
+- bnxt_qplib_free_nq(&rdev->nq[i]);
+- rdev->nq[i].res = NULL;
++ nq = &rdev->nqr->nq[i];
++ bnxt_re_net_ring_free(rdev, nq->ring_id, type);
++ bnxt_qplib_free_nq(nq);
++ nq->res = NULL;
+ }
+ }
+
+@@ -1347,12 +1364,11 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+
+ /* Configure and allocate resources for qplib */
+ rdev->qplib_res.rcfw = &rdev->rcfw;
+- rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
++ rc = bnxt_qplib_get_dev_attr(&rdev->rcfw);
+ if (rc)
+ goto fail;
+
+- rc = bnxt_qplib_alloc_res(&rdev->qplib_res, rdev->en_dev->pdev,
+- rdev->netdev, &rdev->dev_attr);
++ rc = bnxt_qplib_alloc_res(&rdev->qplib_res, rdev->netdev);
+ if (rc)
+ goto fail;
+
+@@ -1362,12 +1378,12 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ if (rc)
+ goto dealloc_res;
+
+- for (i = 0; i < rdev->num_msix - 1; i++) {
++ for (i = 0; i < rdev->nqr->num_msix - 1; i++) {
+ struct bnxt_qplib_nq *nq;
+
+- nq = &rdev->nq[i];
++ nq = &rdev->nqr->nq[i];
+ nq->hwq.max_elements = BNXT_QPLIB_NQE_MAX_CNT;
+- rc = bnxt_qplib_alloc_nq(&rdev->qplib_res, &rdev->nq[i]);
++ rc = bnxt_qplib_alloc_nq(&rdev->qplib_res, nq);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Alloc Failed NQ%d rc:%#x",
+ i, rc);
+@@ -1375,17 +1391,17 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ }
+ type = bnxt_qplib_get_ring_type(rdev->chip_ctx);
+ rattr.dma_arr = nq->hwq.pbl[PBL_LVL_0].pg_map_arr;
+- rattr.pages = nq->hwq.pbl[rdev->nq[i].hwq.level].pg_count;
++ rattr.pages = nq->hwq.pbl[rdev->nqr->nq[i].hwq.level].pg_count;
+ rattr.type = type;
+ rattr.mode = RING_ALLOC_REQ_INT_MODE_MSIX;
+ rattr.depth = BNXT_QPLIB_NQE_MAX_CNT - 1;
+- rattr.lrid = rdev->en_dev->msix_entries[i + 1].ring_idx;
++ rattr.lrid = rdev->nqr->msix_entries[i + 1].ring_idx;
+ rc = bnxt_re_net_ring_alloc(rdev, &rattr, &nq->ring_id);
+ if (rc) {
+ ibdev_err(&rdev->ibdev,
+ "Failed to allocate NQ fw id with rc = 0x%x",
+ rc);
+- bnxt_qplib_free_nq(&rdev->nq[i]);
++ bnxt_qplib_free_nq(nq);
+ goto free_nq;
+ }
+ num_vec_created++;
+@@ -1394,8 +1410,8 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ free_nq:
+ for (i = num_vec_created - 1; i >= 0; i--) {
+ type = bnxt_qplib_get_ring_type(rdev->chip_ctx);
+- bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, type);
+- bnxt_qplib_free_nq(&rdev->nq[i]);
++ bnxt_re_net_ring_free(rdev, rdev->nqr->nq[i].ring_id, type);
++ bnxt_qplib_free_nq(&rdev->nqr->nq[i]);
+ }
+ bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+ &rdev->dpi_privileged);
+@@ -1584,6 +1600,21 @@ static int bnxt_re_ib_init(struct bnxt_re_dev *rdev)
+ return rc;
+ }
+
++static int bnxt_re_alloc_nqr_mem(struct bnxt_re_dev *rdev)
++{
++ rdev->nqr = kzalloc(sizeof(*rdev->nqr), GFP_KERNEL);
++ if (!rdev->nqr)
++ return -ENOMEM;
++
++ return 0;
++}
++
++static void bnxt_re_free_nqr_mem(struct bnxt_re_dev *rdev)
++{
++ kfree(rdev->nqr);
++ rdev->nqr = NULL;
++}
++
+ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
+ {
+ u8 type;
+@@ -1611,11 +1642,12 @@ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
+ bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ }
+
+- rdev->num_msix = 0;
++ rdev->nqr->num_msix = 0;
+
+ if (rdev->pacing.dbr_pacing)
+ bnxt_re_deinitialize_dbr_pacing(rdev);
+
++ bnxt_re_free_nqr_mem(rdev);
+ bnxt_re_destroy_chip_ctx(rdev);
+ if (op_type == BNXT_RE_COMPLETE_REMOVE) {
+ if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags))
+@@ -1653,6 +1685,17 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ }
+ set_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
+
++ if (rdev->en_dev->ulp_tbl->msix_requested < BNXT_RE_MIN_MSIX) {
++ ibdev_err(&rdev->ibdev,
++ "RoCE requires minimum 2 MSI-X vectors, but only %d reserved\n",
++ rdev->en_dev->ulp_tbl->msix_requested);
++ bnxt_unregister_dev(rdev->en_dev);
++ clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
++ return -EINVAL;
++ }
++ ibdev_dbg(&rdev->ibdev, "Got %d MSI-X vectors\n",
++ rdev->en_dev->ulp_tbl->msix_requested);
++
+ rc = bnxt_re_setup_chip_ctx(rdev);
+ if (rc) {
+ bnxt_unregister_dev(rdev->en_dev);
+@@ -1661,19 +1704,20 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ return -EINVAL;
+ }
+
++ rc = bnxt_re_alloc_nqr_mem(rdev);
++ if (rc) {
++ bnxt_re_destroy_chip_ctx(rdev);
++ bnxt_unregister_dev(rdev->en_dev);
++ clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
++ return rc;
++ }
++ rdev->nqr->num_msix = rdev->en_dev->ulp_tbl->msix_requested;
++ memcpy(rdev->nqr->msix_entries, rdev->en_dev->msix_entries,
++ sizeof(struct bnxt_msix_entry) * rdev->nqr->num_msix);
++
+ /* Check whether VF or PF */
+ bnxt_re_get_sriov_func_type(rdev);
+
+- if (!rdev->en_dev->ulp_tbl->msix_requested) {
+- ibdev_err(&rdev->ibdev,
+- "Failed to get MSI-X vectors: %#x\n", rc);
+- rc = -EINVAL;
+- goto fail;
+- }
+- ibdev_dbg(&rdev->ibdev, "Got %d MSI-X vectors\n",
+- rdev->en_dev->ulp_tbl->msix_requested);
+- rdev->num_msix = rdev->en_dev->ulp_tbl->msix_requested;
+-
+ bnxt_re_query_hwrm_intf_version(rdev);
+
+ /* Establish RCFW Communication Channel to initialize the context
+@@ -1695,14 +1739,14 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ rattr.type = type;
+ rattr.mode = RING_ALLOC_REQ_INT_MODE_MSIX;
+ rattr.depth = BNXT_QPLIB_CREQE_MAX_CNT - 1;
+- rattr.lrid = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].ring_idx;
++ rattr.lrid = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].ring_idx;
+ rc = bnxt_re_net_ring_alloc(rdev, &rattr, &creq->ring_id);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Failed to allocate CREQ: %#x\n", rc);
+ goto free_rcfw;
+ }
+- db_offt = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].db_offset;
+- vid = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].vector;
++ db_offt = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].db_offset;
++ vid = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].vector;
+ rc = bnxt_qplib_enable_rcfw_channel(&rdev->rcfw,
+ vid, db_offt,
+ &bnxt_re_aeq_handler);
+@@ -1722,7 +1766,7 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ rdev->pacing.dbr_pacing = false;
+ }
+ }
+- rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
++ rc = bnxt_qplib_get_dev_attr(&rdev->rcfw);
+ if (rc)
+ goto disable_rcfw;
+
+@@ -2047,6 +2091,7 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state)
+ ibdev_info(&rdev->ibdev, "%s: L2 driver notified to stop en_state 0x%lx",
+ __func__, en_dev->en_state);
+ bnxt_re_remove_device(rdev, BNXT_RE_PRE_RECOVERY_REMOVE, adev);
++ bnxt_re_update_en_info_rdev(NULL, en_info, adev);
+ mutex_unlock(&bnxt_re_mutex);
+
+ return 0;
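The bnxt_re hunks above move the MSI-X bookkeeping (vector count and entry table) out of scattered rdev/en_dev fields and into a single kzalloc'ed bnxt_re_nqr structure that is filled once at init time, so a later vector change in the L2 driver cannot be observed half-updated by the RoCE side. A minimal sketch of that snapshot-at-init pattern, with hypothetical stand-in types (the field names vector/ring_idx/db_offset mirror the ones used in the hunks above):

        #include <linux/slab.h>
        #include <linux/string.h>

        struct my_msix_entry {          /* stand-in for bnxt_msix_entry */
                u32 vector;
                u32 ring_idx;
                u32 db_offset;
        };

        struct nq_records {             /* stand-in for bnxt_re_nqr */
                int num_msix;
                struct my_msix_entry msix_entries[64];  /* assumed bound */
        };

        static struct nq_records *nqr_snapshot(const struct my_msix_entry *src,
                                               int n)
        {
                struct nq_records *nqr = kzalloc(sizeof(*nqr), GFP_KERNEL);

                if (!nqr)
                        return NULL;
                nqr->num_msix = n;
                /* point-in-time copy; later L2 changes don't alias this */
                memcpy(nqr->msix_entries, src, sizeof(*src) * n);
                return nqr;
        }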
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 96ceec1e8199a6..02922a0987ad7a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -876,14 +876,13 @@ void bnxt_qplib_free_res(struct bnxt_qplib_res *res)
+ bnxt_qplib_free_dpi_tbl(res, &res->dpi_tbl);
+ }
+
+-int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
+- struct net_device *netdev,
+- struct bnxt_qplib_dev_attr *dev_attr)
++int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct net_device *netdev)
+ {
++ struct bnxt_qplib_dev_attr *dev_attr;
+ int rc;
+
+- res->pdev = pdev;
+ res->netdev = netdev;
++ dev_attr = res->dattr;
+
+ rc = bnxt_qplib_alloc_sgid_tbl(res, &res->sgid_tbl, dev_attr->max_sgid);
+ if (rc)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+index c2f710364e0ffe..b40cff8252bc4d 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+@@ -421,9 +421,7 @@ int bnxt_qplib_dealloc_dpi(struct bnxt_qplib_res *res,
+ void bnxt_qplib_cleanup_res(struct bnxt_qplib_res *res);
+ int bnxt_qplib_init_res(struct bnxt_qplib_res *res);
+ void bnxt_qplib_free_res(struct bnxt_qplib_res *res);
+-int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
+- struct net_device *netdev,
+- struct bnxt_qplib_dev_attr *dev_attr);
++int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct net_device *netdev);
+ void bnxt_qplib_free_ctx(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_ctx *ctx);
+ int bnxt_qplib_alloc_ctx(struct bnxt_qplib_res *res,
+@@ -546,6 +544,14 @@ static inline bool _is_ext_stats_supported(u16 dev_cap_flags)
+ CREQ_QUERY_FUNC_RESP_SB_EXT_STATS;
+ }
+
++static inline int bnxt_ext_stats_supported(struct bnxt_qplib_chip_ctx *ctx,
++ u16 flags, bool virtfn)
++{
++ /* ext stats are supported if the cap flag is set AND it is a PF or a Thor2 VF */
++ return (_is_ext_stats_supported(flags) &&
++ ((virtfn && bnxt_qplib_is_chip_gen_p7(ctx)) || (!virtfn)));
++}
++
+ static inline bool _is_hw_retx_supported(u16 dev_cap_flags)
+ {
+ return dev_cap_flags &
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 3cca7b1395f6a7..807439b1acb51f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -88,9 +88,9 @@ static void bnxt_qplib_query_version(struct bnxt_qplib_rcfw *rcfw,
+ fw_ver[3] = resp.fw_rsvd;
+ }
+
+-int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+- struct bnxt_qplib_dev_attr *attr)
++int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw)
+ {
++ struct bnxt_qplib_dev_attr *attr = rcfw->res->dattr;
+ struct creq_query_func_resp resp = {};
+ struct bnxt_qplib_cmdqmsg msg = {};
+ struct creq_query_func_resp_sb *sb;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+index ecf3f45fea74fe..de959b3c28e01f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+@@ -325,8 +325,7 @@ int bnxt_qplib_add_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+ int bnxt_qplib_update_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+ struct bnxt_qplib_gid *gid, u16 gid_idx,
+ const u8 *smac);
+-int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+- struct bnxt_qplib_dev_attr *attr);
++int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw);
+ int bnxt_qplib_set_func_resources(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_rcfw *rcfw,
+ struct bnxt_qplib_ctx *ctx);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 0144e7210d05a1..f5c3e560df58d7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1286,10 +1286,8 @@ static u32 hns_roce_cmdq_tx_timeout(u16 opcode, u32 tx_timeout)
+ return tx_timeout;
+ }
+
+-static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u16 opcode)
++static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u32 tx_timeout)
+ {
+- struct hns_roce_v2_priv *priv = hr_dev->priv;
+- u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout);
+ u32 timeout = 0;
+
+ do {
+@@ -1299,8 +1297,9 @@ static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u16 opcode)
+ } while (++timeout < tx_timeout);
+ }
+
+-static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+- struct hns_roce_cmq_desc *desc, int num)
++static int __hns_roce_cmq_send_one(struct hns_roce_dev *hr_dev,
++ struct hns_roce_cmq_desc *desc,
++ int num, u32 tx_timeout)
+ {
+ struct hns_roce_v2_priv *priv = hr_dev->priv;
+ struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq;
+@@ -1309,8 +1308,6 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ int ret;
+ int i;
+
+- spin_lock_bh(&csq->lock);
+-
+ tail = csq->head;
+
+ for (i = 0; i < num; i++) {
+@@ -1324,22 +1321,17 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_CNT]);
+
+- hns_roce_wait_csq_done(hr_dev, le16_to_cpu(desc->opcode));
++ hns_roce_wait_csq_done(hr_dev, tx_timeout);
+ if (hns_roce_cmq_csq_done(hr_dev)) {
+ ret = 0;
+ for (i = 0; i < num; i++) {
+ /* check the result of hardware write back */
+- desc[i] = csq->desc[tail++];
++ desc_ret = le16_to_cpu(csq->desc[tail++].retval);
+ if (tail == csq->desc_num)
+ tail = 0;
+-
+- desc_ret = le16_to_cpu(desc[i].retval);
+ if (likely(desc_ret == CMD_EXEC_SUCCESS))
+ continue;
+
+- dev_err_ratelimited(hr_dev->dev,
+- "Cmdq IO error, opcode = 0x%x, return = 0x%x.\n",
+- desc->opcode, desc_ret);
+ ret = hns_roce_cmd_err_convert_errno(desc_ret);
+ }
+ } else {
+@@ -1354,14 +1346,54 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ ret = -EAGAIN;
+ }
+
+- spin_unlock_bh(&csq->lock);
+-
+ if (ret)
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_ERR_CNT]);
+
+ return ret;
+ }
+
++static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
++ struct hns_roce_cmq_desc *desc, int num)
++{
++ struct hns_roce_v2_priv *priv = hr_dev->priv;
++ struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq;
++ u16 opcode = le16_to_cpu(desc->opcode);
++ u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout);
++ u8 try_cnt = HNS_ROCE_OPC_POST_MB_TRY_CNT;
++ u32 rsv_tail;
++ int ret;
++ int i;
++
++ while (try_cnt) {
++ try_cnt--;
++
++ spin_lock_bh(&csq->lock);
++ rsv_tail = csq->head;
++ ret = __hns_roce_cmq_send_one(hr_dev, desc, num, tx_timeout);
++ if (opcode == HNS_ROCE_OPC_POST_MB && ret == -ETIME &&
++ try_cnt) {
++ spin_unlock_bh(&csq->lock);
++ mdelay(HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC);
++ continue;
++ }
++
++ for (i = 0; i < num; i++) {
++ desc[i] = csq->desc[rsv_tail++];
++ if (rsv_tail == csq->desc_num)
++ rsv_tail = 0;
++ }
++ spin_unlock_bh(&csq->lock);
++ break;
++ }
++
++ if (ret)
++ dev_err_ratelimited(hr_dev->dev,
++ "Cmdq IO error, opcode = 0x%x, return = %d.\n",
++ opcode, ret);
++
++ return ret;
++}
++
+ static int hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ struct hns_roce_cmq_desc *desc, int num)
+ {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index cbdbc9edbce6ec..91a5665465ffba 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -230,6 +230,8 @@ enum hns_roce_opcode_type {
+ };
+
+ #define HNS_ROCE_OPC_POST_MB_TIMEOUT 35000
++#define HNS_ROCE_OPC_POST_MB_TRY_CNT 8
++#define HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC 5
+ struct hns_roce_cmdq_tx_timeout_map {
+ u16 opcode;
+ u32 tx_timeout;
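The reworked hns_roce command path above splits the single-shot __hns_roce_cmq_send() into a locked inner send plus an outer bounded-retry loop that applies only to HNS_ROCE_OPC_POST_MB and only on -ETIME, with the limits set by the two new defines (8 tries, 5 ms apart). The retry policy in isolation, as a hedged sketch around a hypothetical issue_cmd():

        #include <linux/delay.h>
        #include <linux/errno.h>

        int issue_cmd(void *ctx);       /* hypothetical single attempt */

        static int send_with_retry(void *ctx)
        {
                int try_cnt = 8;        /* HNS_ROCE_OPC_POST_MB_TRY_CNT */
                int ret;

                while (try_cnt--) {
                        ret = issue_cmd(ctx);
                        if (ret != -ETIME || !try_cnt)
                                break;
                        mdelay(5);      /* HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC */
                }
                return ret;
        }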
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 67c2d43135a8af..457cea6d990958 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -174,7 +174,7 @@ static int mana_gd_allocate_doorbell_page(struct gdma_context *gc,
+
+ req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE;
+ req.num_resources = 1;
+- req.alignment = 1;
++ req.alignment = PAGE_SIZE / MANA_PAGE_SIZE;
+
+ /* Have GDMA start searching from 0 */
+ req.allocated_resources = 0;
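The mana change above stops hardcoding the doorbell-page alignment and instead expresses it in GDMA's native page units. Assuming MANA_PAGE_SIZE is 4 KiB (an assumption based on the name):

        /* 4 KiB kernel pages:  alignment = 4096  / 4096 = 1  (unchanged)
         * 64 KiB kernel pages: alignment = 65536 / 4096 = 16
         * so the allocated doorbell region always starts on a kernel-page
         * boundary, whatever PAGE_SIZE the kernel was built with.
         */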
+diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c
+index 505bc47fd575d5..99036afb3aef0b 100644
+--- a/drivers/infiniband/hw/mlx5/ah.c
++++ b/drivers/infiniband/hw/mlx5/ah.c
+@@ -67,7 +67,8 @@ static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
+ ah->av.tclass = grh->traffic_class;
+ }
+
+- ah->av.stat_rate_sl = (rdma_ah_get_static_rate(ah_attr) << 4);
++ ah->av.stat_rate_sl =
++ (mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4);
+
+ if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
+ if (init_attr->xmit_slave)
+diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c
+index 4f6c1968a2ee3c..81cfa74147a183 100644
+--- a/drivers/infiniband/hw/mlx5/counters.c
++++ b/drivers/infiniband/hw/mlx5/counters.c
+@@ -546,6 +546,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+ struct ib_qp *qp)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(qp->device);
++ bool new = false;
+ int err;
+
+ if (!counter->id) {
+@@ -560,6 +561,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+ return err;
+ counter->id =
+ MLX5_GET(alloc_q_counter_out, out, counter_set_id);
++ new = true;
+ }
+
+ err = mlx5_ib_qp_set_counter(qp, counter);
+@@ -569,8 +571,10 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+ return 0;
+
+ fail_set_counter:
+- mlx5_ib_counter_dealloc(counter);
+- counter->id = 0;
++ if (new) {
++ mlx5_ib_counter_dealloc(counter);
++ counter->id = 0;
++ }
+
+ return err;
+ }
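The new `new` flag above implements the usual "only roll back what you created" error-handling rule: if the counter id already existed when bind was called, a failure in the attach step must not deallocate it out from under its owner. The pattern in miniature (all names hypothetical):

        struct ctx { u32 id; };

        int alloc_id(struct ctx *c);    /* hypothetical: allocates c->id */
        int attach_id(struct ctx *c);   /* hypothetical: binds c->id */
        void free_id(struct ctx *c);    /* hypothetical: releases c->id */

        static int bind_resource(struct ctx *c)
        {
                bool created = false;
                int err;

                if (!c->id) {
                        err = alloc_id(c);
                        if (err)
                                return err;
                        created = true;
                }

                err = attach_id(c);
                if (err && created) {   /* undo only our own allocation */
                        free_id(c);
                        c->id = 0;
                }
                return err;
        }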
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index bb02b6adbf2c21..753faa9ad06a88 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1550,7 +1550,7 @@ static void mlx5_ib_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
+
+ dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
+
+- if (!umem_dmabuf->sgt)
++ if (!umem_dmabuf->sgt || !mr)
+ return;
+
+ mlx5r_umr_update_mr_pas(mr, MLX5_IB_UPD_XLT_ZAP);
+@@ -1935,7 +1935,8 @@ mlx5_alloc_priv_descs(struct ib_device *device,
+ static void
+ mlx5_free_priv_descs(struct mlx5_ib_mr *mr)
+ {
+- if (!mr->umem && !mr->data_direct && mr->descs) {
++ if (!mr->umem && !mr->data_direct &&
++ mr->ibmr.type != IB_MR_TYPE_DM && mr->descs) {
+ struct ib_device *device = mr->ibmr.device;
+ int size = mr->max_descs * mr->desc_size;
+ struct mlx5_ib_dev *dev = to_mdev(device);
+@@ -2022,11 +2023,16 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+ struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
+ bool is_odp = is_odp_mr(mr);
++ bool is_odp_dma_buf = is_dmabuf_mr(mr) &&
++ !to_ib_umem_dmabuf(mr->umem)->pinned;
+ int ret = 0;
+
+ if (is_odp)
+ mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+
++ if (is_odp_dma_buf)
++ dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, NULL);
++
+ if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
+ ent = mr->mmkey.cache_ent;
+ /* upon storing to a clean temp entry - schedule its cleanup */
+@@ -2054,6 +2060,12 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+ }
+
++ if (is_odp_dma_buf) {
++ if (!ret)
++ to_ib_umem_dmabuf(mr->umem)->private = NULL;
++ dma_resv_unlock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
++ }
++
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 1d3bf56157702d..b4e2a6f9cb9c3d 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -242,6 +242,7 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
+ if (__xa_cmpxchg(&imr->implicit_children, idx, mr, NULL, GFP_KERNEL) !=
+ mr) {
+ xa_unlock(&imr->implicit_children);
++ mlx5r_deref_odp_mkey(&imr->mmkey);
+ return;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 10ce3b44f645f4..ded139b4e87aa4 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3420,11 +3420,11 @@ static int ib_to_mlx5_rate_map(u8 rate)
+ return 0;
+ }
+
+-static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
++int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate)
+ {
+ u32 stat_rate_support;
+
+- if (rate == IB_RATE_PORT_CURRENT)
++ if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS)
+ return 0;
+
+ if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS)
+@@ -3569,7 +3569,7 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ sizeof(grh->dgid.raw));
+ }
+
+- err = ib_rate_to_mlx5(dev, rdma_ah_get_static_rate(ah));
++ err = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah));
+ if (err < 0)
+ return err;
+ MLX5_SET(ads, path, stat_rate, err);
+@@ -4547,6 +4547,8 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+
+ set_id = mlx5_ib_get_counters_id(dev, attr->port_num - 1);
+ MLX5_SET(dctc, dctc, counter_set_id, set_id);
++
++ qp->port = attr->port_num;
+ } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) {
+ struct mlx5_ib_modify_qp_resp resp = {};
+ u32 out[MLX5_ST_SZ_DW(create_dct_out)] = {};
+@@ -5033,7 +5035,7 @@ static int mlx5_ib_dct_query_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *mqp,
+ }
+
+ if (qp_attr_mask & IB_QP_PORT)
+- qp_attr->port_num = MLX5_GET(dctc, dctc, port);
++ qp_attr->port_num = mqp->port;
+ if (qp_attr_mask & IB_QP_MIN_RNR_TIMER)
+ qp_attr->min_rnr_timer = MLX5_GET(dctc, dctc, min_rnr_nak);
+ if (qp_attr_mask & IB_QP_AV) {
+diff --git a/drivers/infiniband/hw/mlx5/qp.h b/drivers/infiniband/hw/mlx5/qp.h
+index b6ee7c3ee1ca1b..2530e7730635f3 100644
+--- a/drivers/infiniband/hw/mlx5/qp.h
++++ b/drivers/infiniband/hw/mlx5/qp.h
+@@ -56,4 +56,5 @@ int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn);
+ int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter);
+ int mlx5_ib_qp_event_init(void);
+ void mlx5_ib_qp_event_cleanup(void);
++int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate);
+ #endif /* _MLX5_IB_QP_H */
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index 887fd6fa3ba930..793f3c5c4d0126 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -231,30 +231,6 @@ void mlx5r_umr_cleanup(struct mlx5_ib_dev *dev)
+ ib_dealloc_pd(dev->umrc.pd);
+ }
+
+-static int mlx5r_umr_recover(struct mlx5_ib_dev *dev)
+-{
+- struct umr_common *umrc = &dev->umrc;
+- struct ib_qp_attr attr;
+- int err;
+-
+- attr.qp_state = IB_QPS_RESET;
+- err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE);
+- if (err) {
+- mlx5_ib_dbg(dev, "Couldn't modify UMR QP\n");
+- goto err;
+- }
+-
+- err = mlx5r_umr_qp_rst2rts(dev, umrc->qp);
+- if (err)
+- goto err;
+-
+- umrc->state = MLX5_UMR_STATE_ACTIVE;
+- return 0;
+-
+-err:
+- umrc->state = MLX5_UMR_STATE_ERR;
+- return err;
+-}
+
+ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe,
+ struct mlx5r_umr_wqe *wqe, bool with_data)
+@@ -302,6 +278,61 @@ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe,
+ return err;
+ }
+
++static int mlx5r_umr_recover(struct mlx5_ib_dev *dev, u32 mkey,
++ struct mlx5r_umr_context *umr_context,
++ struct mlx5r_umr_wqe *wqe, bool with_data)
++{
++ struct umr_common *umrc = &dev->umrc;
++ struct ib_qp_attr attr;
++ int err;
++
++ mutex_lock(&umrc->lock);
++ /* Preventing any further WRs to be sent now */
++ if (umrc->state != MLX5_UMR_STATE_RECOVER) {
++ mlx5_ib_warn(dev, "UMR recovery encountered an unexpected state=%d\n",
++ umrc->state);
++ umrc->state = MLX5_UMR_STATE_RECOVER;
++ }
++ mutex_unlock(&umrc->lock);
++
++ /* Send a final/barrier WR (the failed one) and wait for its completion.
++ * This ensures that all the previous WRs got a completion before
++ * we set the QP state to RESET.
++ */
++ err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context->cqe, wqe,
++ with_data);
++ if (err) {
++ mlx5_ib_warn(dev, "UMR recovery post send failed, err %d\n", err);
++ goto err;
++ }
++
++ /* Since the QP is in an error state, it will only receive
++ * IB_WC_WR_FLUSH_ERR. However, as it serves only as a barrier,
++ * we don't care about its status.
++ */
++ wait_for_completion(&umr_context->done);
++
++ attr.qp_state = IB_QPS_RESET;
++ err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE);
++ if (err) {
++ mlx5_ib_warn(dev, "Couldn't modify UMR QP to RESET, err=%d\n", err);
++ goto err;
++ }
++
++ err = mlx5r_umr_qp_rst2rts(dev, umrc->qp);
++ if (err) {
++ mlx5_ib_warn(dev, "Couldn't modify UMR QP to RTS, err=%d\n", err);
++ goto err;
++ }
++
++ umrc->state = MLX5_UMR_STATE_ACTIVE;
++ return 0;
++
++err:
++ umrc->state = MLX5_UMR_STATE_ERR;
++ return err;
++}
++
+ static void mlx5r_umr_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ struct mlx5_ib_umr_context *context =
+@@ -366,9 +397,7 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey,
+ mlx5_ib_warn(dev,
+ "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs, mkey = %u\n",
+ umr_context.status, mkey);
+- mutex_lock(&umrc->lock);
+- err = mlx5r_umr_recover(dev);
+- mutex_unlock(&umrc->lock);
++ err = mlx5r_umr_recover(dev, mkey, &umr_context, wqe, with_data);
+ if (err)
+ mlx5_ib_warn(dev, "couldn't recover UMR, err %d\n",
+ err);
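The reworked recovery above exploits the fact that a work request posted to an error-state QP completes with IB_WC_WR_FLUSH_ERR, and that completions are delivered in order: re-posting the failed WR and waiting for it therefore acts as a barrier proving every earlier WR has completed, after which the QP can safely be cycled RESET -> RTS. The control flow in outline (all helpers hypothetical):

        struct qp;
        struct wr;

        void set_state_recovering(struct qp *qp);       /* hypothetical */
        void set_state_active(struct qp *qp);           /* hypothetical */
        void post(struct qp *qp, struct wr *wr);        /* hypothetical */
        void wait_done(struct wr *wr);                  /* hypothetical */
        int modify_to_reset(struct qp *qp);             /* hypothetical */
        int modify_to_rts(struct qp *qp);               /* hypothetical */

        static int recover_qp(struct qp *qp, struct wr *failed_wr)
        {
                set_state_recovering(qp);       /* stop new submitters */
                post(qp, failed_wr);            /* flushed WR = barrier */
                wait_done(failed_wr);           /* prior WRs done too */
                if (modify_to_reset(qp) || modify_to_rts(qp))
                        return -EIO;            /* stay in error state */
                set_state_active(qp);
                return 0;
        }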
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index eaf862e8dea1a9..7f553f7aa3cb3b 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -2056,6 +2056,7 @@ int enable_drhd_fault_handling(unsigned int cpu)
+ /*
+ * Enable fault control interrupt.
+ */
++ guard(rwsem_read)(&dmar_global_lock);
+ for_each_iommu(iommu, drhd) {
+ u32 fault_status;
+ int ret;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index cc23cfcdeb2d59..9c46a4cd384842 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3307,7 +3307,14 @@ int __init intel_iommu_init(void)
+ iommu_device_sysfs_add(&iommu->iommu, NULL,
+ intel_iommu_groups,
+ "%s", iommu->name);
++ /*
++ * The iommu device probe is protected by the iommu_probe_device_lock.
++ * Release the dmar_global_lock before entering the device probe path
++ * to avoid an unnecessary lock-order splat.
++ */
++ up_read(&dmar_global_lock);
+ iommu_device_register(&iommu->iommu, &intel_iommu_ops, NULL);
++ down_read(&dmar_global_lock);
+
+ iommu_pmu_register(iommu);
+ }
+@@ -4547,9 +4554,6 @@ static int context_setup_pass_through_cb(struct pci_dev *pdev, u16 alias, void *
+ {
+ struct device *dev = data;
+
+- if (dev != &pdev->dev)
+- return 0;
+-
+ return context_setup_pass_through(dev, PCI_BUS_NUM(alias), alias & 0xff);
+ }
+
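The dmar.c hunk above uses the kernel's scope-based cleanup helper: guard(rwsem_read)(&dmar_global_lock) from <linux/cleanup.h> takes the semaphore for read and releases it automatically when the enclosing scope exits, on every return path — including the early error returns inside the loop, exactly where a manual unlock is easiest to miss. A rough manual equivalent (details elided):

        int enable_drhd_fault_handling(unsigned int cpu)
        {
                down_read(&dmar_global_lock);
                /* ... walk the iommus as in the hunk above ...
                 * every early "return ret" here would need its own
                 * up_read(); guard() emits all of them implicitly.
                 */
                up_read(&dmar_global_lock);
                return 0;
        }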
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index ee9f7cecd78e0e..555dc06b942287 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -3790,10 +3790,6 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
+ break;
+
+ case STATUSTYPE_TABLE: {
+- __u64 watermark_percentage = (__u64)(ic->journal_entries - ic->free_sectors_threshold) * 100;
+-
+- watermark_percentage += ic->journal_entries / 2;
+- do_div(watermark_percentage, ic->journal_entries);
+ arg_count = 3;
+ arg_count += !!ic->meta_dev;
+ arg_count += ic->sectors_per_block != 1;
+@@ -3826,6 +3822,10 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
+ DMEMIT(" interleave_sectors:%u", 1U << ic->sb->log2_interleave_sectors);
+ DMEMIT(" buffer_sectors:%u", 1U << ic->log2_buffer_sectors);
+ if (ic->mode == 'J') {
++ __u64 watermark_percentage = (__u64)(ic->journal_entries - ic->free_sectors_threshold) * 100;
++
++ watermark_percentage += ic->journal_entries / 2;
++ do_div(watermark_percentage, ic->journal_entries);
+ DMEMIT(" journal_watermark:%u", (unsigned int)watermark_percentage);
+ DMEMIT(" commit_time:%u", ic->autocommit_msec);
+ }
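The relocated computation above is the standard round-to-nearest integer-division idiom, pct = (part * 100 + whole / 2) / whole, now evaluated only in the ic->mode == 'J' branch where a journal (and hence a non-zero journal_entries) is guaranteed to exist. Worked through with assumed values journal_entries = 1000 and free_sectors_threshold = 125:

        /* (1000 - 125) * 100 = 87500; + 1000 / 2 = 88000; / 1000 = 88 */
        /* i.e. 87.5% is reported as a rounded journal_watermark of 88 */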
+diff --git a/drivers/md/dm-vdo/dedupe.c b/drivers/md/dm-vdo/dedupe.c
+index 80628ae93fbacc..5a74b3a85ec435 100644
+--- a/drivers/md/dm-vdo/dedupe.c
++++ b/drivers/md/dm-vdo/dedupe.c
+@@ -2178,6 +2178,7 @@ static int initialize_index(struct vdo *vdo, struct hash_zones *zones)
+
+ vdo_set_dedupe_index_timeout_interval(vdo_dedupe_index_timeout_interval);
+ vdo_set_dedupe_index_min_timer_interval(vdo_dedupe_index_min_timer_interval);
++ spin_lock_init(&zones->lock);
+
+ /*
+ * Since we will save up the timeouts that would have been reported but were ratelimited,
+diff --git a/drivers/net/dsa/realtek/Kconfig b/drivers/net/dsa/realtek/Kconfig
+index 6989972eebc306..10687722d14c08 100644
+--- a/drivers/net/dsa/realtek/Kconfig
++++ b/drivers/net/dsa/realtek/Kconfig
+@@ -43,4 +43,10 @@ config NET_DSA_REALTEK_RTL8366RB
+ help
+ Select to enable support for Realtek RTL8366RB.
+
++config NET_DSA_REALTEK_RTL8366RB_LEDS
++ bool "Support RTL8366RB LED control"
++ depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB)
++ depends on NET_DSA_REALTEK_RTL8366RB
++ default NET_DSA_REALTEK_RTL8366RB
++
+ endif
+diff --git a/drivers/net/dsa/realtek/Makefile b/drivers/net/dsa/realtek/Makefile
+index 35491dc20d6d6e..17367bcba496c1 100644
+--- a/drivers/net/dsa/realtek/Makefile
++++ b/drivers/net/dsa/realtek/Makefile
+@@ -12,4 +12,7 @@ endif
+
+ obj-$(CONFIG_NET_DSA_REALTEK_RTL8366RB) += rtl8366.o
+ rtl8366-objs := rtl8366-core.o rtl8366rb.o
++ifdef CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS
++rtl8366-objs += rtl8366rb-leds.o
++endif
+ obj-$(CONFIG_NET_DSA_REALTEK_RTL8365MB) += rtl8365mb.o
+diff --git a/drivers/net/dsa/realtek/rtl8366rb-leds.c b/drivers/net/dsa/realtek/rtl8366rb-leds.c
+new file mode 100644
+index 00000000000000..99c890681ae607
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366rb-leds.c
+@@ -0,0 +1,177 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/bitops.h>
++#include <linux/regmap.h>
++#include <net/dsa.h>
++#include "rtl83xx.h"
++#include "rtl8366rb.h"
++
++static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port)
++{
++ switch (led_group) {
++ case 0:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ case 1:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ case 2:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ case 3:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ default:
++ return 0;
++ }
++}
++
++static int rb8366rb_get_port_led(struct rtl8366rb_led *led)
++{
++ struct realtek_priv *priv = led->priv;
++ u8 led_group = led->led_group;
++ u8 port_num = led->port_num;
++ int ret;
++ u32 val;
++
++ ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group),
++ &val);
++ if (ret) {
++ dev_err(priv->dev, "error reading LED on port %d group %d\n",
++ led_group, port_num);
++ return ret;
++ }
++
++ return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num));
++}
++
++static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable)
++{
++ struct realtek_priv *priv = led->priv;
++ u8 led_group = led->led_group;
++ u8 port_num = led->port_num;
++ int ret;
++
++ ret = regmap_update_bits(priv->map,
++ RTL8366RB_LED_X_X_CTRL_REG(led_group),
++ rtl8366rb_led_group_port_mask(led_group,
++ port_num),
++ enable ? 0xffff : 0);
++ if (ret) {
++ dev_err(priv->dev, "error updating LED on port %d group %d\n",
++ led_group, port_num);
++ return ret;
++ }
++
++ /* Change the LED group to manual controlled LEDs if required */
++ ret = rb8366rb_set_ledgroup_mode(priv, led_group,
++ RTL8366RB_LEDGROUP_FORCE);
++
++ if (ret) {
++ dev_err(priv->dev, "error updating LED GROUP group %d\n",
++ led_group);
++ return ret;
++ }
++
++ return 0;
++}
++
++static int
++rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev,
++ enum led_brightness brightness)
++{
++ struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led,
++ cdev);
++
++ return rb8366rb_set_port_led(led, brightness == LED_ON);
++}
++
++static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp,
++ struct fwnode_handle *led_fwnode)
++{
++ struct rtl8366rb *rb = priv->chip_data;
++ struct led_init_data init_data = { };
++ enum led_default_state state;
++ struct rtl8366rb_led *led;
++ u32 led_group;
++ int ret;
++
++ ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group);
++ if (ret)
++ return ret;
++
++ if (led_group >= RTL8366RB_NUM_LEDGROUPS) {
++ dev_warn(priv->dev, "Invalid LED reg %d defined for port %d",
++ led_group, dp->index);
++ return -EINVAL;
++ }
++
++ led = &rb->leds[dp->index][led_group];
++ led->port_num = dp->index;
++ led->led_group = led_group;
++ led->priv = priv;
++
++ state = led_init_default_state_get(led_fwnode);
++ switch (state) {
++ case LEDS_DEFSTATE_ON:
++ led->cdev.brightness = 1;
++ rb8366rb_set_port_led(led, 1);
++ break;
++ case LEDS_DEFSTATE_KEEP:
++ led->cdev.brightness =
++ rb8366rb_get_port_led(led);
++ break;
++ case LEDS_DEFSTATE_OFF:
++ default:
++ led->cdev.brightness = 0;
++ rb8366rb_set_port_led(led, 0);
++ }
++
++ led->cdev.max_brightness = 1;
++ led->cdev.brightness_set_blocking =
++ rtl8366rb_cled_brightness_set_blocking;
++ init_data.fwnode = led_fwnode;
++ init_data.devname_mandatory = true;
++
++ init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d",
++ dp->ds->index, dp->index, led_group);
++ if (!init_data.devicename)
++ return -ENOMEM;
++
++ ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data);
++ if (ret) {
++ dev_warn(priv->dev, "Failed to init LED %d for port %d",
++ led_group, dp->index);
++ return ret;
++ }
++
++ return 0;
++}
++
++int rtl8366rb_setup_leds(struct realtek_priv *priv)
++{
++ struct dsa_switch *ds = &priv->ds;
++ struct device_node *leds_np;
++ struct dsa_port *dp;
++ int ret = 0;
++
++ dsa_switch_for_each_port(dp, ds) {
++ if (!dp->dn)
++ continue;
++
++ leds_np = of_get_child_by_name(dp->dn, "leds");
++ if (!leds_np) {
++ dev_dbg(priv->dev, "No leds defined for port %d",
++ dp->index);
++ continue;
++ }
++
++ for_each_child_of_node_scoped(leds_np, led_np) {
++ ret = rtl8366rb_setup_led(priv, dp,
++ of_fwnode_handle(led_np));
++ if (ret)
++ break;
++ }
++
++ of_node_put(leds_np);
++ if (ret)
++ return ret;
++ }
++ return 0;
++}
+diff --git a/drivers/net/dsa/realtek/rtl8366rb.c b/drivers/net/dsa/realtek/rtl8366rb.c
+index c7a8cd06058781..ae3d49fc22b809 100644
+--- a/drivers/net/dsa/realtek/rtl8366rb.c
++++ b/drivers/net/dsa/realtek/rtl8366rb.c
+@@ -26,11 +26,7 @@
+ #include "realtek-smi.h"
+ #include "realtek-mdio.h"
+ #include "rtl83xx.h"
+-
+-#define RTL8366RB_PORT_NUM_CPU 5
+-#define RTL8366RB_NUM_PORTS 6
+-#define RTL8366RB_PHY_NO_MAX 4
+-#define RTL8366RB_PHY_ADDR_MAX 31
++#include "rtl8366rb.h"
+
+ /* Switch Global Configuration register */
+ #define RTL8366RB_SGCR 0x0000
+@@ -175,39 +171,6 @@
+ */
+ #define RTL8366RB_VLAN_INGRESS_CTRL2_REG 0x037f
+
+-/* LED control registers */
+-/* The LED blink rate is global; it is used by all triggers in all groups. */
+-#define RTL8366RB_LED_BLINKRATE_REG 0x0430
+-#define RTL8366RB_LED_BLINKRATE_MASK 0x0007
+-#define RTL8366RB_LED_BLINKRATE_28MS 0x0000
+-#define RTL8366RB_LED_BLINKRATE_56MS 0x0001
+-#define RTL8366RB_LED_BLINKRATE_84MS 0x0002
+-#define RTL8366RB_LED_BLINKRATE_111MS 0x0003
+-#define RTL8366RB_LED_BLINKRATE_222MS 0x0004
+-#define RTL8366RB_LED_BLINKRATE_446MS 0x0005
+-
+-/* LED trigger event for each group */
+-#define RTL8366RB_LED_CTRL_REG 0x0431
+-#define RTL8366RB_LED_CTRL_OFFSET(led_group) \
+- (4 * (led_group))
+-#define RTL8366RB_LED_CTRL_MASK(led_group) \
+- (0xf << RTL8366RB_LED_CTRL_OFFSET(led_group))
+-
+-/* The RTL8366RB_LED_X_X registers are used to manually set the LED state only
+- * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is
+- * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored.
+- */
+-#define RTL8366RB_LED_0_1_CTRL_REG 0x0432
+-#define RTL8366RB_LED_2_3_CTRL_REG 0x0433
+-#define RTL8366RB_LED_X_X_CTRL_REG(led_group) \
+- ((led_group) <= 1 ? \
+- RTL8366RB_LED_0_1_CTRL_REG : \
+- RTL8366RB_LED_2_3_CTRL_REG)
+-#define RTL8366RB_LED_0_X_CTRL_MASK GENMASK(5, 0)
+-#define RTL8366RB_LED_X_1_CTRL_MASK GENMASK(11, 6)
+-#define RTL8366RB_LED_2_X_CTRL_MASK GENMASK(5, 0)
+-#define RTL8366RB_LED_X_3_CTRL_MASK GENMASK(11, 6)
+-
+ #define RTL8366RB_MIB_COUNT 33
+ #define RTL8366RB_GLOBAL_MIB_COUNT 1
+ #define RTL8366RB_MIB_COUNTER_PORT_OFFSET 0x0050
+@@ -243,7 +206,6 @@
+ #define RTL8366RB_PORT_STATUS_AN_MASK 0x0080
+
+ #define RTL8366RB_NUM_VLANS 16
+-#define RTL8366RB_NUM_LEDGROUPS 4
+ #define RTL8366RB_NUM_VIDS 4096
+ #define RTL8366RB_PRIORITYMAX 7
+ #define RTL8366RB_NUM_FIDS 8
+@@ -350,46 +312,6 @@
+ #define RTL8366RB_GREEN_FEATURE_TX BIT(0)
+ #define RTL8366RB_GREEN_FEATURE_RX BIT(2)
+
+-enum rtl8366_ledgroup_mode {
+- RTL8366RB_LEDGROUP_OFF = 0x0,
+- RTL8366RB_LEDGROUP_DUP_COL = 0x1,
+- RTL8366RB_LEDGROUP_LINK_ACT = 0x2,
+- RTL8366RB_LEDGROUP_SPD1000 = 0x3,
+- RTL8366RB_LEDGROUP_SPD100 = 0x4,
+- RTL8366RB_LEDGROUP_SPD10 = 0x5,
+- RTL8366RB_LEDGROUP_SPD1000_ACT = 0x6,
+- RTL8366RB_LEDGROUP_SPD100_ACT = 0x7,
+- RTL8366RB_LEDGROUP_SPD10_ACT = 0x8,
+- RTL8366RB_LEDGROUP_SPD100_10_ACT = 0x9,
+- RTL8366RB_LEDGROUP_FIBER = 0xa,
+- RTL8366RB_LEDGROUP_AN_FAULT = 0xb,
+- RTL8366RB_LEDGROUP_LINK_RX = 0xc,
+- RTL8366RB_LEDGROUP_LINK_TX = 0xd,
+- RTL8366RB_LEDGROUP_MASTER = 0xe,
+- RTL8366RB_LEDGROUP_FORCE = 0xf,
+-
+- __RTL8366RB_LEDGROUP_MODE_MAX
+-};
+-
+-struct rtl8366rb_led {
+- u8 port_num;
+- u8 led_group;
+- struct realtek_priv *priv;
+- struct led_classdev cdev;
+-};
+-
+-/**
+- * struct rtl8366rb - RTL8366RB-specific data
+- * @max_mtu: per-port max MTU setting
+- * @pvid_enabled: if PVID is set for respective port
+- * @leds: per-port and per-ledgroup led info
+- */
+-struct rtl8366rb {
+- unsigned int max_mtu[RTL8366RB_NUM_PORTS];
+- bool pvid_enabled[RTL8366RB_NUM_PORTS];
+- struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS];
+-};
+-
+ static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
+ { 0, 0, 4, "IfInOctets" },
+ { 0, 4, 4, "EtherStatsOctets" },
+@@ -830,9 +752,10 @@ static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
+ return 0;
+ }
+
+-static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
+- u8 led_group,
+- enum rtl8366_ledgroup_mode mode)
++/* This code is also used with LEDs disabled */
++int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
++ u8 led_group,
++ enum rtl8366_ledgroup_mode mode)
+ {
+ int ret;
+ u32 val;
+@@ -849,144 +772,7 @@ static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
+ return 0;
+ }
+
+-static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port)
+-{
+- switch (led_group) {
+- case 0:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- case 1:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- case 2:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- case 3:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- default:
+- return 0;
+- }
+-}
+-
+-static int rb8366rb_get_port_led(struct rtl8366rb_led *led)
+-{
+- struct realtek_priv *priv = led->priv;
+- u8 led_group = led->led_group;
+- u8 port_num = led->port_num;
+- int ret;
+- u32 val;
+-
+- ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group),
+- &val);
+- if (ret) {
+- dev_err(priv->dev, "error reading LED on port %d group %d\n",
+- led_group, port_num);
+- return ret;
+- }
+-
+- return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num));
+-}
+-
+-static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable)
+-{
+- struct realtek_priv *priv = led->priv;
+- u8 led_group = led->led_group;
+- u8 port_num = led->port_num;
+- int ret;
+-
+- ret = regmap_update_bits(priv->map,
+- RTL8366RB_LED_X_X_CTRL_REG(led_group),
+- rtl8366rb_led_group_port_mask(led_group,
+- port_num),
+- enable ? 0xffff : 0);
+- if (ret) {
+- dev_err(priv->dev, "error updating LED on port %d group %d\n",
+- led_group, port_num);
+- return ret;
+- }
+-
+- /* Change the LED group to manual controlled LEDs if required */
+- ret = rb8366rb_set_ledgroup_mode(priv, led_group,
+- RTL8366RB_LEDGROUP_FORCE);
+-
+- if (ret) {
+- dev_err(priv->dev, "error updating LED GROUP group %d\n",
+- led_group);
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-static int
+-rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev,
+- enum led_brightness brightness)
+-{
+- struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led,
+- cdev);
+-
+- return rb8366rb_set_port_led(led, brightness == LED_ON);
+-}
+-
+-static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp,
+- struct fwnode_handle *led_fwnode)
+-{
+- struct rtl8366rb *rb = priv->chip_data;
+- struct led_init_data init_data = { };
+- enum led_default_state state;
+- struct rtl8366rb_led *led;
+- u32 led_group;
+- int ret;
+-
+- ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group);
+- if (ret)
+- return ret;
+-
+- if (led_group >= RTL8366RB_NUM_LEDGROUPS) {
+- dev_warn(priv->dev, "Invalid LED reg %d defined for port %d",
+- led_group, dp->index);
+- return -EINVAL;
+- }
+-
+- led = &rb->leds[dp->index][led_group];
+- led->port_num = dp->index;
+- led->led_group = led_group;
+- led->priv = priv;
+-
+- state = led_init_default_state_get(led_fwnode);
+- switch (state) {
+- case LEDS_DEFSTATE_ON:
+- led->cdev.brightness = 1;
+- rb8366rb_set_port_led(led, 1);
+- break;
+- case LEDS_DEFSTATE_KEEP:
+- led->cdev.brightness =
+- rb8366rb_get_port_led(led);
+- break;
+- case LEDS_DEFSTATE_OFF:
+- default:
+- led->cdev.brightness = 0;
+- rb8366rb_set_port_led(led, 0);
+- }
+-
+- led->cdev.max_brightness = 1;
+- led->cdev.brightness_set_blocking =
+- rtl8366rb_cled_brightness_set_blocking;
+- init_data.fwnode = led_fwnode;
+- init_data.devname_mandatory = true;
+-
+- init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d",
+- dp->ds->index, dp->index, led_group);
+- if (!init_data.devicename)
+- return -ENOMEM;
+-
+- ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data);
+- if (ret) {
+- dev_warn(priv->dev, "Failed to init LED %d for port %d",
+- led_group, dp->index);
+- return ret;
+- }
+-
+- return 0;
+-}
+-
++/* This code is also used with LEDs disabled */
+ static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv)
+ {
+ int ret = 0;
+@@ -1007,38 +793,6 @@ static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv)
+ return ret;
+ }
+
+-static int rtl8366rb_setup_leds(struct realtek_priv *priv)
+-{
+- struct dsa_switch *ds = &priv->ds;
+- struct device_node *leds_np;
+- struct dsa_port *dp;
+- int ret = 0;
+-
+- dsa_switch_for_each_port(dp, ds) {
+- if (!dp->dn)
+- continue;
+-
+- leds_np = of_get_child_by_name(dp->dn, "leds");
+- if (!leds_np) {
+- dev_dbg(priv->dev, "No leds defined for port %d",
+- dp->index);
+- continue;
+- }
+-
+- for_each_child_of_node_scoped(leds_np, led_np) {
+- ret = rtl8366rb_setup_led(priv, dp,
+- of_fwnode_handle(led_np));
+- if (ret)
+- break;
+- }
+-
+- of_node_put(leds_np);
+- if (ret)
+- return ret;
+- }
+- return 0;
+-}
+-
+ static int rtl8366rb_setup(struct dsa_switch *ds)
+ {
+ struct realtek_priv *priv = ds->priv;
+diff --git a/drivers/net/dsa/realtek/rtl8366rb.h b/drivers/net/dsa/realtek/rtl8366rb.h
+new file mode 100644
+index 00000000000000..685ff3275faa17
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366rb.h
+@@ -0,0 +1,107 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++
++#ifndef _RTL8366RB_H
++#define _RTL8366RB_H
++
++#include "realtek.h"
++
++#define RTL8366RB_PORT_NUM_CPU 5
++#define RTL8366RB_NUM_PORTS 6
++#define RTL8366RB_PHY_NO_MAX 4
++#define RTL8366RB_NUM_LEDGROUPS 4
++#define RTL8366RB_PHY_ADDR_MAX 31
++
++/* LED control registers */
++/* The LED blink rate is global; it is used by all triggers in all groups. */
++#define RTL8366RB_LED_BLINKRATE_REG 0x0430
++#define RTL8366RB_LED_BLINKRATE_MASK 0x0007
++#define RTL8366RB_LED_BLINKRATE_28MS 0x0000
++#define RTL8366RB_LED_BLINKRATE_56MS 0x0001
++#define RTL8366RB_LED_BLINKRATE_84MS 0x0002
++#define RTL8366RB_LED_BLINKRATE_111MS 0x0003
++#define RTL8366RB_LED_BLINKRATE_222MS 0x0004
++#define RTL8366RB_LED_BLINKRATE_446MS 0x0005
++
++/* LED trigger event for each group */
++#define RTL8366RB_LED_CTRL_REG 0x0431
++#define RTL8366RB_LED_CTRL_OFFSET(led_group) \
++ (4 * (led_group))
++#define RTL8366RB_LED_CTRL_MASK(led_group) \
++ (0xf << RTL8366RB_LED_CTRL_OFFSET(led_group))
++
++/* The RTL8366RB_LED_X_X registers are used to manually set the LED state only
++ * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is
++ * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored.
++ */
++#define RTL8366RB_LED_0_1_CTRL_REG 0x0432
++#define RTL8366RB_LED_2_3_CTRL_REG 0x0433
++#define RTL8366RB_LED_X_X_CTRL_REG(led_group) \
++ ((led_group) <= 1 ? \
++ RTL8366RB_LED_0_1_CTRL_REG : \
++ RTL8366RB_LED_2_3_CTRL_REG)
++#define RTL8366RB_LED_0_X_CTRL_MASK GENMASK(5, 0)
++#define RTL8366RB_LED_X_1_CTRL_MASK GENMASK(11, 6)
++#define RTL8366RB_LED_2_X_CTRL_MASK GENMASK(5, 0)
++#define RTL8366RB_LED_X_3_CTRL_MASK GENMASK(11, 6)
++
++enum rtl8366_ledgroup_mode {
++ RTL8366RB_LEDGROUP_OFF = 0x0,
++ RTL8366RB_LEDGROUP_DUP_COL = 0x1,
++ RTL8366RB_LEDGROUP_LINK_ACT = 0x2,
++ RTL8366RB_LEDGROUP_SPD1000 = 0x3,
++ RTL8366RB_LEDGROUP_SPD100 = 0x4,
++ RTL8366RB_LEDGROUP_SPD10 = 0x5,
++ RTL8366RB_LEDGROUP_SPD1000_ACT = 0x6,
++ RTL8366RB_LEDGROUP_SPD100_ACT = 0x7,
++ RTL8366RB_LEDGROUP_SPD10_ACT = 0x8,
++ RTL8366RB_LEDGROUP_SPD100_10_ACT = 0x9,
++ RTL8366RB_LEDGROUP_FIBER = 0xa,
++ RTL8366RB_LEDGROUP_AN_FAULT = 0xb,
++ RTL8366RB_LEDGROUP_LINK_RX = 0xc,
++ RTL8366RB_LEDGROUP_LINK_TX = 0xd,
++ RTL8366RB_LEDGROUP_MASTER = 0xe,
++ RTL8366RB_LEDGROUP_FORCE = 0xf,
++
++ __RTL8366RB_LEDGROUP_MODE_MAX
++};
++
++#if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS)
++
++struct rtl8366rb_led {
++ u8 port_num;
++ u8 led_group;
++ struct realtek_priv *priv;
++ struct led_classdev cdev;
++};
++
++int rtl8366rb_setup_leds(struct realtek_priv *priv);
++
++#else
++
++static inline int rtl8366rb_setup_leds(struct realtek_priv *priv)
++{
++ return 0;
++}
++
++#endif /* IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS) */
++
++/**
++ * struct rtl8366rb - RTL8366RB-specific data
++ * @max_mtu: per-port max MTU setting
++ * @pvid_enabled: if PVID is set for respective port
++ * @leds: per-port and per-ledgroup led info
++ */
++struct rtl8366rb {
++ unsigned int max_mtu[RTL8366RB_NUM_PORTS];
++ bool pvid_enabled[RTL8366RB_NUM_PORTS];
++#if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS)
++ struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS];
++#endif
++};
++
++/* This code is also used with LEDs disabled */
++int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
++ u8 led_group,
++ enum rtl8366_ledgroup_mode mode);
++
++#endif /* _RTL8366RB_H */
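The header above follows the standard kernel pattern for optional features: when CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS is off, callers still compile against a static inline no-op, so rtl8366rb.c needs no #ifdef at its call sites; and the Kconfig clause "depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB)" keeps built-in switch code from ever calling into a modular LED core. The stub idiom reduced to its skeleton (hypothetical feature name):

        struct foo;

        #if IS_ENABLED(CONFIG_FOO_LEDS)
        int foo_setup_leds(struct foo *priv);   /* real code in foo-leds.c */
        #else
        static inline int foo_setup_leds(struct foo *priv)
        {
                return 0;       /* feature compiled out: succeed, do nothing */
        }
        #endif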
+diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
+index 5740c98d8c9f03..2847278d9cd48e 100644
+--- a/drivers/net/ethernet/cadence/macb.h
++++ b/drivers/net/ethernet/cadence/macb.h
+@@ -1279,6 +1279,8 @@ struct macb {
+ struct clk *rx_clk;
+ struct clk *tsu_clk;
+ struct net_device *dev;
++ /* Protects hw_stats and ethtool_stats */
++ spinlock_t stats_lock;
+ union {
+ struct macb_stats macb;
+ struct gem_stats gem;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 56901280ba0472..60847cdb516eef 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1992,10 +1992,12 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
+
+ if (status & MACB_BIT(ISR_ROVR)) {
+ /* We missed at least one packet */
++ spin_lock(&bp->stats_lock);
+ if (macb_is_gem(bp))
+ bp->hw_stats.gem.rx_overruns++;
+ else
+ bp->hw_stats.macb.rx_overruns++;
++ spin_unlock(&bp->stats_lock);
+
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
+@@ -3116,6 +3118,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ if (!netif_running(bp->dev))
+ return nstat;
+
++ spin_lock_irq(&bp->stats_lock);
+ gem_update_stats(bp);
+
+ nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +
+@@ -3145,6 +3148,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ nstat->tx_aborted_errors = hwstat->tx_excessive_collisions;
+ nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors;
+ nstat->tx_fifo_errors = hwstat->tx_underrun;
++ spin_unlock_irq(&bp->stats_lock);
+
+ return nstat;
+ }
+@@ -3152,12 +3156,13 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ static void gem_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+ {
+- struct macb *bp;
++ struct macb *bp = netdev_priv(dev);
+
+- bp = netdev_priv(dev);
++ spin_lock_irq(&bp->stats_lock);
+ gem_update_stats(bp);
+ memcpy(data, &bp->ethtool_stats, sizeof(u64)
+ * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES));
++ spin_unlock_irq(&bp->stats_lock);
+ }
+
+ static int gem_get_sset_count(struct net_device *dev, int sset)
+@@ -3207,6 +3212,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
+ return gem_get_stats(bp);
+
+ /* read stats from hardware */
++ spin_lock_irq(&bp->stats_lock);
+ macb_update_stats(bp);
+
+ /* Convert HW stats into netdevice stats */
+@@ -3240,6 +3246,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
+ nstat->tx_carrier_errors = hwstat->tx_carrier_errors;
+ nstat->tx_fifo_errors = hwstat->tx_underruns;
+ /* Don't know about heartbeat or window errors... */
++ spin_unlock_irq(&bp->stats_lock);
+
+ return nstat;
+ }
+@@ -5110,6 +5117,7 @@ static int macb_probe(struct platform_device *pdev)
+ }
+ }
+ spin_lock_init(&bp->lock);
++ spin_lock_init(&bp->stats_lock);
+
+ /* setup capabilities */
+ macb_configure_caps(bp, macb_config);
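Two lock flavors are used above on purpose: macb_interrupt() takes plain spin_lock() because it already runs in hardirq context, while the process-context readers (gem_get_stats() and friends) use spin_lock_irq() so the interrupt handler cannot fire on the same CPU mid-read and deadlock on the held lock. The shape of the split, with a hypothetical stand-in struct:

        struct macb_like {              /* stand-in for struct macb */
                spinlock_t stats_lock;  /* protects overruns */
                u64 overruns;
        };

        static irqreturn_t my_isr(int irq, void *data)
        {
                struct macb_like *bp = data;

                spin_lock(&bp->stats_lock);     /* already in hardirq */
                bp->overruns++;
                spin_unlock(&bp->stats_lock);
                return IRQ_HANDLED;
        }

        static u64 my_read_overruns(struct macb_like *bp)
        {
                u64 v;

                spin_lock_irq(&bp->stats_lock); /* process ctx: mask irqs */
                v = bp->overruns;
                spin_unlock_irq(&bp->stats_lock);
                return v;
        }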
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 16a7908c79f703..f662a5d54986cf 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -145,6 +145,24 @@ static int enetc_ptp_parse(struct sk_buff *skb, u8 *udp,
+ return 0;
+ }
+
++/**
++ * enetc_unwind_tx_frame() - Unwind the DMA mappings of a multi-buffer Tx frame
++ * @tx_ring: Pointer to the Tx ring on which the buffer descriptors are located
++ * @count: Number of Tx buffer descriptors which need to be unmapped
++ * @i: Index of the last successfully mapped Tx buffer descriptor
++ */
++static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i)
++{
++ while (count--) {
++ struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i];
++
++ enetc_free_tx_frame(tx_ring, tx_swbd);
++ if (i == 0)
++ i = tx_ring->bd_count;
++ i--;
++ }
++}
++
+ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ {
+ bool do_vlan, do_onestep_tstamp = false, do_twostep_tstamp = false;
+@@ -235,9 +253,11 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ }
+
+ if (do_onestep_tstamp) {
+- u32 lo, hi, val;
+- u64 sec, nsec;
++ __be32 new_sec_l, new_nsec;
++ u32 lo, hi, nsec, val;
++ __be16 new_sec_h;
+ u8 *data;
++ u64 sec;
+
+ lo = enetc_rd_hot(hw, ENETC_SICTR0);
+ hi = enetc_rd_hot(hw, ENETC_SICTR1);
+@@ -251,13 +271,38 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ /* Update originTimestamp field of Sync packet
+ * - 48 bits seconds field
+ * - 32 bits nanoseconds field
++ *
++ * In addition, the UDP checksum needs to be updated
++ * by software after updating the originTimestamp field,
++ * otherwise the hardware will calculate the wrong
++ * checksum when updating the correction field and
++ * writing it into the packet.
+ */
+ data = skb_mac_header(skb);
+- *(__be16 *)(data + offset2) =
+- htons((sec >> 32) & 0xffff);
+- *(__be32 *)(data + offset2 + 2) =
+- htonl(sec & 0xffffffff);
+- *(__be32 *)(data + offset2 + 6) = htonl(nsec);
++ new_sec_h = htons((sec >> 32) & 0xffff);
++ new_sec_l = htonl(sec & 0xffffffff);
++ new_nsec = htonl(nsec);
++ if (udp) {
++ struct udphdr *uh = udp_hdr(skb);
++ __be32 old_sec_l, old_nsec;
++ __be16 old_sec_h;
++
++ old_sec_h = *(__be16 *)(data + offset2);
++ inet_proto_csum_replace2(&uh->check, skb, old_sec_h,
++ new_sec_h, false);
++
++ old_sec_l = *(__be32 *)(data + offset2 + 2);
++ inet_proto_csum_replace4(&uh->check, skb, old_sec_l,
++ new_sec_l, false);
++
++ old_nsec = *(__be32 *)(data + offset2 + 6);
++ inet_proto_csum_replace4(&uh->check, skb, old_nsec,
++ new_nsec, false);
++ }
++
++ *(__be16 *)(data + offset2) = new_sec_h;
++ *(__be32 *)(data + offset2 + 2) = new_sec_l;
++ *(__be32 *)(data + offset2 + 6) = new_nsec;
+
+ /* Configure single-step register */
+ val = ENETC_PM0_SINGLE_STEP_EN;
+@@ -328,25 +373,20 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ dma_err:
+ dev_err(tx_ring->dev, "DMA map error");
+
+- do {
+- tx_swbd = &tx_ring->tx_swbd[i];
+- enetc_free_tx_frame(tx_ring, tx_swbd);
+- if (i == 0)
+- i = tx_ring->bd_count;
+- i--;
+- } while (count--);
++ enetc_unwind_tx_frame(tx_ring, count, i);
+
+ return 0;
+ }
+
+-static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+- struct enetc_tx_swbd *tx_swbd,
+- union enetc_tx_bd *txbd, int *i, int hdr_len,
+- int data_len)
++static int enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
++ struct enetc_tx_swbd *tx_swbd,
++ union enetc_tx_bd *txbd, int *i, int hdr_len,
++ int data_len)
+ {
+ union enetc_tx_bd txbd_tmp;
+ u8 flags = 0, e_flags = 0;
+ dma_addr_t addr;
++ int count = 1;
+
+ enetc_clear_tx_bd(&txbd_tmp);
+ addr = tx_ring->tso_headers_dma + *i * TSO_HEADER_SIZE;
+@@ -389,7 +429,10 @@ static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+ /* Write the BD */
+ txbd_tmp.ext.e_flags = e_flags;
+ *txbd = txbd_tmp;
++ count++;
+ }
++
++ return count;
+ }
+
+ static int enetc_map_tx_tso_data(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+@@ -521,9 +564,9 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
+
+ /* compute the csum over the L4 header */
+ csum = enetc_tso_hdr_csum(&tso, skb, hdr, hdr_len, &pos);
+- enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, &i, hdr_len, data_len);
++ count += enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd,
++ &i, hdr_len, data_len);
+ bd_data_num = 0;
+- count++;
+
+ while (data_len > 0) {
+ int size;
+@@ -547,8 +590,13 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
+ err = enetc_map_tx_tso_data(tx_ring, skb, tx_swbd, txbd,
+ tso.data, size,
+ size == data_len);
+- if (err)
++ if (err) {
++ if (i == 0)
++ i = tx_ring->bd_count;
++ i--;
++
+ goto err_map_data;
++ }
+
+ data_len -= size;
+ count++;
+@@ -577,13 +625,7 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
+ dev_err(tx_ring->dev, "DMA map error");
+
+ err_chained_bd:
+- do {
+- tx_swbd = &tx_ring->tx_swbd[i];
+- enetc_free_tx_frame(tx_ring, tx_swbd);
+- if (i == 0)
+- i = tx_ring->bd_count;
+- i--;
+- } while (count--);
++ enetc_unwind_tx_frame(tx_ring, count, i);
+
+ return 0;
+ }
+@@ -1623,7 +1665,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ enetc_xdp_drop(rx_ring, orig_i, i);
+ tx_ring->stats.xdp_tx_drops++;
+ } else {
+- tx_ring->stats.xdp_tx += xdp_tx_bd_cnt;
++ tx_ring->stats.xdp_tx++;
+ rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt;
+ xdp_tx_frm_cnt++;
+ /* The XDP_TX enqueue was successful, so we
+@@ -2929,6 +2971,9 @@ static int enetc_hwtstamp_set(struct net_device *ndev, struct ifreq *ifr)
+ new_offloads |= ENETC_F_TX_TSTAMP;
+ break;
+ case HWTSTAMP_TX_ONESTEP_SYNC:
++ if (!enetc_si_is_pf(priv->si))
++ return -EOPNOTSUPP;
++
+ new_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
+ new_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP;
+ break;
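The one-step timestamp fix above rewrites three fields of the PTP Sync payload and folds each change into the existing UDP checksum with inet_proto_csum_replace2()/replace4() instead of recomputing the sum over the whole packet. The underlying incremental update for a single 16-bit field is RFC 1624's HC' = ~(~HC + ~m + m'), which in isolation looks like:

        /* Incremental Internet checksum update, RFC 1624 eqn. 3 */
        static u16 csum_update16(u16 old_csum, u16 old_field, u16 new_field)
        {
                u32 sum = (u16)~old_csum;

                sum += (u16)~old_field;
                sum += new_field;
                /* fold the carries back into the low 16 bits */
                sum = (sum & 0xffff) + (sum >> 16);
                sum = (sum & 0xffff) + (sum >> 16);
                return (u16)~sum;
        }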
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index 2563eb8ac7b63a..6a24324703bf49 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -843,6 +843,7 @@ static int enetc_set_coalesce(struct net_device *ndev,
+ static int enetc_get_ts_info(struct net_device *ndev,
+ struct kernel_ethtool_ts_info *info)
+ {
++ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ int *phc_idx;
+
+ phc_idx = symbol_get(enetc_phc_index);
+@@ -863,8 +864,10 @@ static int enetc_get_ts_info(struct net_device *ndev,
+ SOF_TIMESTAMPING_TX_SOFTWARE;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+- (1 << HWTSTAMP_TX_ON) |
+- (1 << HWTSTAMP_TX_ONESTEP_SYNC);
++ (1 << HWTSTAMP_TX_ON);
++
++ if (enetc_si_is_pf(priv->si))
++ info->tx_types |= (1 << HWTSTAMP_TX_ONESTEP_SYNC);
+
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_ALL);
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 558cda577191d6..2960709f6b62ca 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -207,6 +207,7 @@ enum ice_feature {
+ ICE_F_GNSS,
+ ICE_F_ROCE_LAG,
+ ICE_F_SRIOV_LAG,
++ ICE_F_MBX_LIMIT,
+ ICE_F_MAX
+ };
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+index fb527434b58b15..d649c197cf673f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+@@ -38,8 +38,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
+ if (ice_vsi_add_vlan_zero(uplink_vsi))
+ goto err_vlan_zero;
+
+- if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
+- ICE_FLTR_RX))
++ if (ice_set_dflt_vsi(uplink_vsi))
+ goto err_def_rx;
+
+ if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
+diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+index 91cbae1eec89a0..8d31bfe28cc884 100644
+--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
++++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+@@ -539,5 +539,8 @@
+ #define E830_PRTMAC_CL01_QNT_THR_CL0_M GENMASK(15, 0)
+ #define VFINT_DYN_CTLN(_i) (0x00003800 + ((_i) * 4))
+ #define VFINT_DYN_CTLN_CLEARPBA_M BIT(1)
++#define E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH 0x00234000
++#define E830_MBX_VF_DEC_TRIG(_VF) (0x00233800 + (_VF) * 4)
++#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(_VF) (0x00233000 + (_VF) * 4)
+
+ #endif /* _ICE_HW_AUTOGEN_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 06e712cdc3d9ed..d4e74f96a8ad5d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3880,6 +3880,9 @@ void ice_init_feature_support(struct ice_pf *pf)
+ default:
+ break;
+ }
++
++ if (pf->hw.mac_type == ICE_MAC_E830)
++ ice_set_feature_support(pf, ICE_F_MBX_LIMIT);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 45eefe22fb5b73..ca707dfcb286ef 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1546,12 +1546,20 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ ice_vf_lan_overflow_event(pf, &event);
+ break;
+ case ice_mbx_opc_send_msg_to_pf:
+- data.num_msg_proc = i;
+- data.num_pending_arq = pending;
+- data.max_num_msgs_mbx = hw->mailboxq.num_rq_entries;
+- data.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT)) {
++ ice_vc_process_vf_msg(pf, &event, NULL);
++ ice_mbx_vf_dec_trig_e830(hw, &event);
++ } else {
++ u16 val = hw->mailboxq.num_rq_entries;
++
++ data.max_num_msgs_mbx = val;
++ val = ICE_MBX_OVERFLOW_WATERMARK;
++ data.async_watermark_val = val;
++ data.num_msg_proc = i;
++ data.num_pending_arq = pending;
+
+- ice_vc_process_vf_msg(pf, &event, &data);
++ ice_vc_process_vf_msg(pf, &event, &data);
++ }
+ break;
+ case ice_aqc_opc_fw_logs_event:
+ ice_get_fwlog_data(pf, &event);
+@@ -4082,7 +4090,11 @@ static int ice_init_pf(struct ice_pf *pf)
+
+ mutex_init(&pf->vfs.table_lock);
+ hash_init(pf->vfs.table);
+- ice_mbx_init_snapshot(&pf->hw);
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ wr32(&pf->hw, E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH,
++ ICE_MBX_OVERFLOW_WATERMARK);
++ else
++ ice_mbx_init_snapshot(&pf->hw);
+
+ xa_init(&pf->dyn_ports);
+ xa_init(&pf->sf_nums);
+diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
+index 91cb393f616f2b..8aabf7749aa5e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
++++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
+@@ -36,6 +36,7 @@ static void ice_free_vf_entries(struct ice_pf *pf)
+
+ hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) {
+ hash_del_rcu(&vf->entry);
++ ice_deinitialize_vf_entry(vf);
+ ice_put_vf(vf);
+ }
+ }
+@@ -193,9 +194,6 @@ void ice_free_vfs(struct ice_pf *pf)
+ wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
+ }
+
+- /* clear malicious info since the VF is getting released */
+- list_del(&vf->mbx_info.list_entry);
+-
+ mutex_unlock(&vf->cfg_lock);
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 8c434689e3f78e..815ad0bfe8326b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -716,6 +716,23 @@ ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 promisc_m)
+ return 0;
+ }
+
++/**
++ * ice_reset_vf_mbx_cnt - reset VF mailbox message count
++ * @vf: pointer to the VF structure
++ *
++ * This function clears the VF mailbox message count, and should be called on
++ * VF reset.
++ */
++static void ice_reset_vf_mbx_cnt(struct ice_vf *vf)
++{
++ struct ice_pf *pf = vf->pf;
++
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);
++ else
++ ice_mbx_clear_malvf(&vf->mbx_info);
++}
++
+ /**
+ * ice_reset_all_vfs - reset all allocated VFs in one go
+ * @pf: pointer to the PF structure
+@@ -742,7 +759,7 @@ void ice_reset_all_vfs(struct ice_pf *pf)
+
+ /* clear all malicious info if the VFs are getting reset */
+ ice_for_each_vf(pf, bkt, vf)
+- ice_mbx_clear_malvf(&vf->mbx_info);
++ ice_reset_vf_mbx_cnt(vf);
+
+ /* If VFs have been disabled, there is no need to reset */
+ if (test_and_set_bit(ICE_VF_DIS, pf->state)) {
+@@ -958,7 +975,7 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+ ice_eswitch_update_repr(&vf->repr_id, vsi);
+
+ /* if the VF has been reset allow it to come up again */
+- ice_mbx_clear_malvf(&vf->mbx_info);
++ ice_reset_vf_mbx_cnt(vf);
+
+ out_unlock:
+ if (lag && lag->bonded && lag->primary &&
+@@ -1011,11 +1028,22 @@ void ice_initialize_vf_entry(struct ice_vf *vf)
+ ice_vf_fdir_init(vf);
+
+ /* Initialize mailbox info for this VF */
+- ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);
++ else
++ ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
+
+ mutex_init(&vf->cfg_lock);
+ }
+
++void ice_deinitialize_vf_entry(struct ice_vf *vf)
++{
++ struct ice_pf *pf = vf->pf;
++
++ if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ list_del(&vf->mbx_info.list_entry);
++}
++
+ /**
+ * ice_dis_vf_qs - Disable the VF queues
+ * @vf: pointer to the VF structure
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+index 0c7e77c0a09fa6..5392b040498621 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+@@ -24,6 +24,7 @@
+ #endif
+
+ void ice_initialize_vf_entry(struct ice_vf *vf);
++void ice_deinitialize_vf_entry(struct ice_vf *vf);
+ void ice_dis_vf_qs(struct ice_vf *vf);
+ int ice_check_vf_init(struct ice_vf *vf);
+ enum virtchnl_status_code ice_err_to_virt_err(int err);
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
+index 40cb4ba0789ced..75c8113e58ee92 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
+@@ -210,6 +210,38 @@ ice_mbx_detect_malvf(struct ice_hw *hw, struct ice_mbx_vf_info *vf_info,
+ return 0;
+ }
+
++/**
++ * ice_mbx_vf_dec_trig_e830 - Decrements the VF mailbox queue counter
++ * @hw: pointer to the HW struct
++ * @event: pointer to the control queue receive event
++ *
++ * This function triggers to decrement the counter
++ * MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT when the driver replenishes
++ * the buffers at the PF mailbox queue.
++ */
++void ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
++ const struct ice_rq_event_info *event)
++{
++ u16 vfid = le16_to_cpu(event->desc.retval);
++
++ wr32(hw, E830_MBX_VF_DEC_TRIG(vfid), 1);
++}
++
++/**
++ * ice_mbx_vf_clear_cnt_e830 - Clear the VF mailbox queue count
++ * @hw: pointer to the HW struct
++ * @vf_id: VF ID in the PF space
++ *
++ * This function clears the counter MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT, and should
++ * be called when a VF is created and on VF reset.
++ */
++void ice_mbx_vf_clear_cnt_e830(const struct ice_hw *hw, u16 vf_id)
++{
++ u32 reg = rd32(hw, E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(vf_id));
++
++ wr32(hw, E830_MBX_VF_DEC_TRIG(vf_id), reg);
++}
++
+ /**
+ * ice_mbx_vf_state_handler - Handle states of the overflow algorithm
+ * @hw: pointer to the HW struct
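
The two E830 helpers above are thin wrappers around per-VF registers laid out at a fixed stride. A stand-alone sketch of that addressing pattern; the offsets, the rd32/wr32 stubs and the tiny register file are invented for illustration (the real E830 map uses the much larger addresses defined in ice_hw_autogen.h), and the stub register file does not emulate the hardware's decrement-on-trigger behaviour:

#include <stdint.h>

/* hypothetical per-VF register layout: base + vf_id * 4, vf_id < 64 */
#define VF_CNT_REG(vf)	(0x100u + (uint32_t)(vf) * 4u)
#define VF_DEC_REG(vf)	(0x200u + (uint32_t)(vf) * 4u)

static uint32_t regs[0x400 / 4];	/* stand-in for MMIO space */

static uint32_t rd32(uint32_t off) { return regs[off / 4]; }
static void wr32(uint32_t off, uint32_t val) { regs[off / 4] = val; }

/* Clear a VF's in-flight counter the way ice_mbx_vf_clear_cnt_e830()
 * does: read the current count and feed it to the decrement trigger. */
static void clear_vf_cnt(uint16_t vf_id)
{
	wr32(VF_DEC_REG(vf_id), rd32(VF_CNT_REG(vf_id)));
}
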
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.h b/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
+index 44bc030d17e07a..684de89e5c5ed7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
+@@ -19,6 +19,9 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
+ u8 *msg, u16 msglen, struct ice_sq_cd *cd);
+
+ u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
++void ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
++ const struct ice_rq_event_info *event);
++void ice_mbx_vf_clear_cnt_e830(const struct ice_hw *hw, u16 vf_id);
+ int
+ ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
+ struct ice_mbx_vf_info *vf_info, bool *report_malvf);
+@@ -47,5 +50,11 @@ static inline void ice_mbx_init_snapshot(struct ice_hw *hw)
+ {
+ }
+
++static inline void
++ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
++ const struct ice_rq_event_info *event)
++{
++}
++
+ #endif /* CONFIG_PCI_IOV */
+ #endif /* _ICE_VF_MBX_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index b6ec01f6fa73e0..c8c1d48ff793d7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -4008,8 +4008,10 @@ ice_is_malicious_vf(struct ice_vf *vf, struct ice_mbx_data *mbxdata)
+ * @event: pointer to the AQ event
+ * @mbxdata: information used to detect VF attempting mailbox overflow
+ *
+- * called from the common asq/arq handler to
+- * process request from VF
++ * Called from the common asq/arq handler to process request from VF. When this
++ * flow is used for devices with hardware VF to PF message queue overflow
++ * support (ICE_F_MBX_LIMIT) mbxdata is set to NULL and ice_is_malicious_vf
++ * check is skipped.
+ */
+ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ struct ice_mbx_data *mbxdata)
+@@ -4035,7 +4037,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ mutex_lock(&vf->cfg_lock);
+
+ /* Check if the VF is trying to overflow the mailbox */
+- if (ice_is_malicious_vf(vf, mbxdata))
++ if (mbxdata && ice_is_malicious_vf(vf, mbxdata))
+ goto finish;
+
+ /* Check if VF is disabled. */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 1e0d1f9b07fbcf..afc902ae4763e0 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -3013,7 +3013,6 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ skb_shinfo(skb)->gso_size = rsc_seg_len;
+
+ skb_reset_network_header(skb);
+- len = skb->len - skb_transport_offset(skb);
+
+ if (ipv4) {
+ struct iphdr *ipv4h = ip_hdr(skb);
+@@ -3022,6 +3021,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+
+ /* Reset and set transport header offset in skb */
+ skb_set_transport_header(skb, sizeof(struct iphdr));
++ len = skb->len - skb_transport_offset(skb);
+
+ /* Compute the TCP pseudo header checksum*/
+ tcp_hdr(skb)->check =
+@@ -3031,6 +3031,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+
+ skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+ skb_set_transport_header(skb, sizeof(struct ipv6hdr));
++ len = skb->len - skb_transport_offset(skb);
+ tcp_hdr(skb)->check =
+ ~tcp_v6_check(len, &ipv6h->saddr, &ipv6h->daddr, 0);
+ }
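
The idpf fix above is purely about ordering: len was derived from the transport offset before skb_set_transport_header() had set it. The bug class in isolation, with struct pkt and both helpers as illustrative stand-ins rather than the driver's types:

#include <assert.h>
#include <stddef.h>

struct pkt {
	size_t len;		/* total bytes in the packet */
	size_t transport_off;	/* offset of the L4 header */
};

static void set_transport_header(struct pkt *p, size_t off)
{
	p->transport_off = off;
}

/* Only meaningful after set_transport_header() has run. */
static size_t l4_len(const struct pkt *p)
{
	return p->len - p->transport_off;
}

int main(void)
{
	struct pkt p = { .len = 60, .transport_off = 0 };

	set_transport_header(&p, 20);	/* 20-byte IPv4 header */
	assert(l4_len(&p) == 40);	/* reading it earlier gives 60 */
	return 0;
}
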
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+index 1641791a2d5b4e..8ed83fb9886243 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+@@ -324,7 +324,7 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
+ MVPP2_PRS_RI_VLAN_MASK),
+ /* Non IP flow, with vlan tag */
+ MVPP2_DEF_FLOW(MVPP22_FLOW_ETHERNET, MVPP2_FL_NON_IP_TAG,
+- MVPP22_CLS_HEK_OPT_VLAN,
++ MVPP22_CLS_HEK_TAGGED,
+ 0, 0),
+ };
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+index 7db9cab9bedf69..d9362eabc6a1ca 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+@@ -572,7 +572,7 @@ irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name,
+ pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ;
+ pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ;
+ mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d",
+- name, size, start);
++ name ? name : "mlx5_pcif_pool", size, start);
+ return pool;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index bfe6e2d631bdf5..f5acfb7d4ff655 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -516,6 +516,19 @@ static int loongson_dwmac_acpi_config(struct pci_dev *pdev,
+ return 0;
+ }
+
++/* Loongson's DWMAC device may take nearly two seconds to complete DMA reset */
++static int loongson_dwmac_fix_reset(void *priv, void __iomem *ioaddr)
++{
++ u32 value = readl(ioaddr + DMA_BUS_MODE);
++
++ value |= DMA_BUS_MODE_SFT_RESET;
++ writel(value, ioaddr + DMA_BUS_MODE);
++
++ return readl_poll_timeout(ioaddr + DMA_BUS_MODE, value,
++ !(value & DMA_BUS_MODE_SFT_RESET),
++ 10000, 2000000);
++}
++
+ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct plat_stmmacenet_data *plat;
+@@ -566,6 +579,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+
+ plat->bsp_priv = ld;
+ plat->setup = loongson_dwmac_setup;
++ plat->fix_soc_reset = loongson_dwmac_fix_reset;
+ ld->dev = &pdev->dev;
+ ld->loongson_id = readl(res.addr + GMAC_VERSION) & 0xff;
+
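
loongson_dwmac_fix_reset() exists because, as its comment notes, these devices can take nearly two seconds to complete a DMA reset, presumably longer than the generic reset path waits. The poll-until-clear shape of readl_poll_timeout() in user-space form; the fake register that clears itself after a few reads stands in for the device:

#include <stdint.h>
#include <unistd.h>

#define SFT_RESET (1u << 0)

static uint32_t bus_mode = SFT_RESET;	/* reset already requested */
static int reads_until_done = 5;

static uint32_t read_bus_mode(void)
{
	if (reads_until_done && --reads_until_done == 0)
		bus_mode &= ~SFT_RESET;	/* "hardware" finishes the reset */
	return bus_mode;
}

/* Poll every 10 ms for up to 2 s, the same budget the patch passes to
 * readl_poll_timeout(); returns 0 on success, -1 on timeout. */
static int wait_for_reset(void)
{
	long remaining_us = 2000000;

	while (read_bus_mode() & SFT_RESET) {
		if (remaining_us <= 0)
			return -1;
		usleep(10000);
		remaining_us -= 10000;
	}
	return 0;
}
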
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index 0d5a862cd78a6c..3a13d60a947a81 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -99,6 +99,7 @@ config TI_K3_AM65_CPSW_NUSS
+ select NET_DEVLINK
+ select TI_DAVINCI_MDIO
+ select PHYLINK
++ select PAGE_POOL
+ select TI_K3_CPPI_DESC_POOL
+ imply PHY_TI_GMII_SEL
+ depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index 768578c0d9587d..d59c1744840af2 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -474,26 +474,7 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
+ static int icss_iep_perout_enable(struct icss_iep *iep,
+ struct ptp_perout_request *req, int on)
+ {
+- int ret = 0;
+-
+- mutex_lock(&iep->ptp_clk_mutex);
+-
+- if (iep->pps_enabled) {
+- ret = -EBUSY;
+- goto exit;
+- }
+-
+- if (iep->perout_enabled == !!on)
+- goto exit;
+-
+- ret = icss_iep_perout_enable_hw(iep, req, on);
+- if (!ret)
+- iep->perout_enabled = !!on;
+-
+-exit:
+- mutex_unlock(&iep->ptp_clk_mutex);
+-
+- return ret;
++ return -EOPNOTSUPP;
+ }
+
+ static void icss_iep_cap_cmp_work(struct work_struct *work)
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index b1afcb8740de12..ca62188a317ad4 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -3,6 +3,7 @@
+ */
+
+ #include <net/inet_dscp.h>
++#include <net/ip.h>
+
+ #include "ipvlan.h"
+
+@@ -415,20 +416,25 @@ struct ipvl_addr *ipvlan_addr_lookup(struct ipvl_port *port, void *lyr3h,
+
+ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ {
+- const struct iphdr *ip4h = ip_hdr(skb);
+ struct net_device *dev = skb->dev;
+ struct net *net = dev_net(dev);
+- struct rtable *rt;
+ int err, ret = NET_XMIT_DROP;
++ const struct iphdr *ip4h;
++ struct rtable *rt;
+ struct flowi4 fl4 = {
+ .flowi4_oif = dev->ifindex,
+- .flowi4_tos = ip4h->tos & INET_DSCP_MASK,
+ .flowi4_flags = FLOWI_FLAG_ANYSRC,
+ .flowi4_mark = skb->mark,
+- .daddr = ip4h->daddr,
+- .saddr = ip4h->saddr,
+ };
+
++ if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
++ goto err;
++
++ ip4h = ip_hdr(skb);
++ fl4.daddr = ip4h->daddr;
++ fl4.saddr = ip4h->saddr;
++ fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h));
++
+ rt = ip_route_output_flow(net, &fl4, NULL);
+ if (IS_ERR(rt))
+ goto err;
+@@ -487,6 +493,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ struct net_device *dev = skb->dev;
+ int err, ret = NET_XMIT_DROP;
+
++ if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) {
++ DEV_STATS_INC(dev, tx_errors);
++ kfree_skb(skb);
++ return ret;
++ }
++
+ err = ipvlan_route_v6_outbound(dev, skb);
+ if (unlikely(err)) {
+ DEV_STATS_INC(dev, tx_errors);
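
Both ipvlan hunks add the same guard: never dereference a network header until pskb_network_may_pull() has confirmed the linear buffer actually holds it. The check in isolation, with a simplified 20-byte header as a stand-in for struct iphdr:

#include <stdint.h>
#include <stddef.h>

struct v4hdr {			/* 20-byte minimal IPv4 header */
	uint8_t  ver_ihl;
	uint8_t  tos;
	uint16_t tot_len;
	uint8_t  rest[16];
};

/* Return the header only when the buffer provably contains all of it;
 * a NULL result is the point at which the patch drops the packet. */
static const struct v4hdr *hdr_checked(const uint8_t *buf, size_t len)
{
	if (len < sizeof(struct v4hdr))
		return NULL;
	return (const struct v4hdr *)buf;
}
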
+diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
+index 1993b90b1a5f90..491e56b3263fd5 100644
+--- a/drivers/net/loopback.c
++++ b/drivers/net/loopback.c
+@@ -244,8 +244,22 @@ static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb,
+ return NETDEV_TX_OK;
+ }
+
++static int blackhole_neigh_output(struct neighbour *n, struct sk_buff *skb)
++{
++ kfree_skb(skb);
++ return 0;
++}
++
++static int blackhole_neigh_construct(struct net_device *dev,
++ struct neighbour *n)
++{
++ n->output = blackhole_neigh_output;
++ return 0;
++}
++
+ static const struct net_device_ops blackhole_netdev_ops = {
+ .ndo_start_xmit = blackhole_netdev_xmit,
++ .ndo_neigh_construct = blackhole_neigh_construct,
+ };
+
+ /* This is a dst-dummy device used specifically for invalidated
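
The loopback change gives the blackhole device a neighbour constructor so every neighbour created on it drops traffic instead of queueing it. The function-pointer override pattern on its own, with toy types standing in for struct neighbour and struct sk_buff:

struct buf;				/* stand-in for struct sk_buff */

struct neigh {
	int (*output)(struct neigh *n, struct buf *b);
};

static int drop_output(struct neigh *n, struct buf *b)
{
	(void)n;
	(void)b;			/* the real code frees the skb here */
	return 0;
}

/* Constructor hook: runs once per neighbour created on the device,
 * pointing its output method at the dropper. */
static int blackhole_construct(struct neigh *n)
{
	n->output = drop_output;
	return 0;
}
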
+diff --git a/drivers/net/phy/qcom/qca807x.c b/drivers/net/phy/qcom/qca807x.c
+index bd8a51ec0ecd6a..ec336c3e338d6c 100644
+--- a/drivers/net/phy/qcom/qca807x.c
++++ b/drivers/net/phy/qcom/qca807x.c
+@@ -774,7 +774,7 @@ static int qca807x_config_init(struct phy_device *phydev)
+ control_dac &= ~QCA807X_CONTROL_DAC_MASK;
+ if (!priv->dac_full_amplitude)
+ control_dac |= QCA807X_CONTROL_DAC_DSP_AMPLITUDE;
+- if (!priv->dac_full_amplitude)
++ if (!priv->dac_full_bias_current)
+ control_dac |= QCA807X_CONTROL_DAC_DSP_BIAS_CURRENT;
+ if (!priv->dac_disable_bias_current_tweak)
+ control_dac |= QCA807X_CONTROL_DAC_BIAS_CURRENT_TWEAK;
+diff --git a/drivers/net/usb/gl620a.c b/drivers/net/usb/gl620a.c
+index 46af78caf457a6..0bfa37c1405918 100644
+--- a/drivers/net/usb/gl620a.c
++++ b/drivers/net/usb/gl620a.c
+@@ -179,9 +179,7 @@ static int genelink_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ dev->hard_mtu = GL_RCV_BUF_SIZE;
+ dev->net->hard_header_len += 4;
+- dev->in = usb_rcvbulkpipe(dev->udev, dev->driver_info->in);
+- dev->out = usb_sndbulkpipe(dev->udev, dev->driver_info->out);
+- return 0;
++ return usbnet_get_endpoints(dev, intf);
+ }
+
+ static const struct driver_info genelink_info = {
+diff --git a/drivers/phy/rockchip/Kconfig b/drivers/phy/rockchip/Kconfig
+index 2f7a05f21dc595..dcb8e1628632e6 100644
+--- a/drivers/phy/rockchip/Kconfig
++++ b/drivers/phy/rockchip/Kconfig
+@@ -125,6 +125,7 @@ config PHY_ROCKCHIP_USBDP
+ depends on ARCH_ROCKCHIP && OF
+ depends on TYPEC
+ select GENERIC_PHY
++ select USB_COMMON
+ help
+ Enable this to support the Rockchip USB3.0/DP combo PHY with
+ Samsung IP block. This is required for USB3 support on RK3588.
+diff --git a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+index 2eb3329ca23f67..1ef6d9630f7e09 100644
+--- a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
++++ b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+@@ -309,7 +309,10 @@ static int rockchip_combphy_parse_dt(struct device *dev, struct rockchip_combphy
+
+ priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
+
+- priv->phy_rst = devm_reset_control_get(dev, "phy");
++ priv->phy_rst = devm_reset_control_get_exclusive(dev, "phy");
++ /* fallback to old behaviour */
++ if (PTR_ERR(priv->phy_rst) == -ENOENT)
++ priv->phy_rst = devm_reset_control_array_get_exclusive(dev);
+ if (IS_ERR(priv->phy_rst))
+ return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
+
+diff --git a/drivers/phy/samsung/phy-exynos5-usbdrd.c b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+index c421b495eb0fe4..46b8f6987c62c3 100644
+--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
++++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+@@ -488,9 +488,9 @@ exynos5_usbdrd_pipe3_set_refclk(struct phy_usb_instance *inst)
+ reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;
+
+ /* FSEL settings corresponding to reference clock */
+- reg &= ~PHYCLKRST_FSEL_PIPE_MASK |
+- PHYCLKRST_MPLL_MULTIPLIER_MASK |
+- PHYCLKRST_SSC_REFCLKSEL_MASK;
++ reg &= ~(PHYCLKRST_FSEL_PIPE_MASK |
++ PHYCLKRST_MPLL_MULTIPLIER_MASK |
++ PHYCLKRST_SSC_REFCLKSEL_MASK);
+ switch (phy_drd->extrefclk) {
+ case EXYNOS5_FSEL_50MHZ:
+ reg |= (PHYCLKRST_MPLL_MULTIPLIER_50M_REF |
+@@ -532,9 +532,9 @@ exynos5_usbdrd_utmi_set_refclk(struct phy_usb_instance *inst)
+ reg &= ~PHYCLKRST_REFCLKSEL_MASK;
+ reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;
+
+- reg &= ~PHYCLKRST_FSEL_UTMI_MASK |
+- PHYCLKRST_MPLL_MULTIPLIER_MASK |
+- PHYCLKRST_SSC_REFCLKSEL_MASK;
++ reg &= ~(PHYCLKRST_FSEL_UTMI_MASK |
++ PHYCLKRST_MPLL_MULTIPLIER_MASK |
++ PHYCLKRST_SSC_REFCLKSEL_MASK);
+ reg |= PHYCLKRST_FSEL(phy_drd->extrefclk);
+
+ return reg;
+@@ -1296,14 +1296,17 @@ static int exynos5_usbdrd_gs101_phy_exit(struct phy *phy)
+ struct exynos5_usbdrd_phy *phy_drd = to_usbdrd_phy(inst);
+ int ret;
+
++ if (inst->phy_cfg->id == EXYNOS5_DRDPHY_UTMI) {
++ ret = exynos850_usbdrd_phy_exit(phy);
++ if (ret)
++ return ret;
++ }
++
++ exynos5_usbdrd_phy_isol(inst, true);
++
+ if (inst->phy_cfg->id != EXYNOS5_DRDPHY_UTMI)
+ return 0;
+
+- ret = exynos850_usbdrd_phy_exit(phy);
+- if (ret)
+- return ret;
+-
+- exynos5_usbdrd_phy_isol(inst, true);
+ return regulator_bulk_disable(phy_drd->drv_data->n_regulators,
+ phy_drd->regulators);
+ }
+diff --git a/drivers/phy/tegra/xusb-tegra186.c b/drivers/phy/tegra/xusb-tegra186.c
+index 0f60d5d1c1678d..fae6242aa730e0 100644
+--- a/drivers/phy/tegra/xusb-tegra186.c
++++ b/drivers/phy/tegra/xusb-tegra186.c
+@@ -928,6 +928,7 @@ static int tegra186_utmi_phy_init(struct phy *phy)
+ unsigned int index = lane->index;
+ struct device *dev = padctl->dev;
+ int err;
++ u32 reg;
+
+ port = tegra_xusb_find_usb2_port(padctl, index);
+ if (!port) {
+@@ -935,6 +936,16 @@ static int tegra186_utmi_phy_init(struct phy *phy)
+ return -ENODEV;
+ }
+
++ if (port->mode == USB_DR_MODE_OTG ||
++ port->mode == USB_DR_MODE_PERIPHERAL) {
++ /* reset VBUS&ID OVERRIDE */
++ reg = padctl_readl(padctl, USB2_VBUS_ID);
++ reg &= ~VBUS_OVERRIDE;
++ reg &= ~ID_OVERRIDE(~0);
++ reg |= ID_OVERRIDE_FLOATING;
++ padctl_writel(padctl, reg, USB2_VBUS_ID);
++ }
++
+ if (port->supply && port->mode == USB_DR_MODE_HOST) {
+ err = regulator_enable(port->supply);
+ if (err) {
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index c9dde1ac9523e8..3023b07dc483b5 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1653,13 +1653,6 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
+ if (in_flight)
+ __set_bit(SCMD_STATE_INFLIGHT, &cmd->state);
+
+- /*
+- * Only clear the driver-private command data if the LLD does not supply
+- * a function to initialize that data.
+- */
+- if (!shost->hostt->init_cmd_priv)
+- memset(cmd + 1, 0, shost->hostt->cmd_size);
+-
+ cmd->prot_op = SCSI_PROT_NORMAL;
+ if (blk_rq_bytes(req))
+ cmd->sc_data_direction = rq_dma_dir(req);
+@@ -1826,6 +1819,13 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
+ if (!scsi_host_queue_ready(q, shost, sdev, cmd))
+ goto out_dec_target_busy;
+
++ /*
++ * Only clear the driver-private command data if the LLD does not supply
++ * a function to initialize that data.
++ */
++ if (shost->hostt->cmd_size && !shost->hostt->init_cmd_priv)
++ memset(cmd + 1, 0, shost->hostt->cmd_size);
++
+ if (!(req->rq_flags & RQF_DONTPREP)) {
+ ret = scsi_prepare_cmd(req);
+ if (ret != BLK_STS_OK)
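
The scsi_lib change moves the clearing of the LLD-private area from prepare time to queue time and also skips it when cmd_size is zero. The cmd + 1 addressing it relies on, private data stored immediately after the command struct, shown in isolation with an illustrative struct cmd:

#include <stdlib.h>
#include <string.h>

struct cmd { int tag; };

/* Allocate a command with priv_size driver-private bytes behind it and
 * clear only that trailing region; c + 1 is the first byte past the
 * struct, exactly how memset(cmd + 1, 0, cmd_size) addresses it. */
static struct cmd *cmd_alloc(size_t priv_size)
{
	struct cmd *c = malloc(sizeof(*c) + priv_size);

	if (c)
		memset(c + 1, 0, priv_size);
	return c;
}
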
+diff --git a/drivers/thermal/gov_bang_bang.c b/drivers/thermal/gov_bang_bang.c
+index 863e7a4272e66f..b887e48e8c7e67 100644
+--- a/drivers/thermal/gov_bang_bang.c
++++ b/drivers/thermal/gov_bang_bang.c
+@@ -67,6 +67,7 @@ static void bang_bang_control(struct thermal_zone_device *tz,
+ const struct thermal_trip *trip,
+ bool crossed_up)
+ {
++ const struct thermal_trip_desc *td = trip_to_trip_desc(trip);
+ struct thermal_instance *instance;
+
+ lockdep_assert_held(&tz->lock);
+@@ -75,10 +76,8 @@ static void bang_bang_control(struct thermal_zone_device *tz,
+ thermal_zone_trip_id(tz, trip), trip->temperature,
+ tz->temperature, trip->hysteresis);
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (instance->trip == trip)
+- bang_bang_set_instance_target(instance, crossed_up);
+- }
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ bang_bang_set_instance_target(instance, crossed_up);
+ }
+
+ static void bang_bang_manage(struct thermal_zone_device *tz)
+@@ -104,8 +103,8 @@ static void bang_bang_manage(struct thermal_zone_device *tz)
+ * to the thermal zone temperature and the trip point threshold.
+ */
+ turn_on = tz->temperature >= td->threshold;
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (!instance->initialized && instance->trip == trip)
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
++ if (!instance->initialized)
+ bang_bang_set_instance_target(instance, turn_on);
+ }
+ }
+diff --git a/drivers/thermal/gov_fair_share.c b/drivers/thermal/gov_fair_share.c
+index ce0ea571ed67ab..d37d57d48c389a 100644
+--- a/drivers/thermal/gov_fair_share.c
++++ b/drivers/thermal/gov_fair_share.c
+@@ -44,7 +44,7 @@ static int get_trip_level(struct thermal_zone_device *tz)
+ /**
+ * fair_share_throttle - throttles devices associated with the given zone
+ * @tz: thermal_zone_device
+- * @trip: trip point
++ * @td: trip point descriptor
+ * @trip_level: number of trips crossed by the zone temperature
+ *
+ * Throttling Logic: This uses three parameters to calculate the new
+@@ -61,29 +61,23 @@ static int get_trip_level(struct thermal_zone_device *tz)
+ * new_state of cooling device = P3 * P2 * P1
+ */
+ static void fair_share_throttle(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ const struct thermal_trip_desc *td,
+ int trip_level)
+ {
+ struct thermal_instance *instance;
+ int total_weight = 0;
+ int nr_instances = 0;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (instance->trip != trip)
+- continue;
+-
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ total_weight += instance->weight;
+ nr_instances++;
+ }
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ struct thermal_cooling_device *cdev = instance->cdev;
+ u64 dividend;
+ u32 divisor;
+
+- if (instance->trip != trip)
+- continue;
+-
+ dividend = trip_level;
+ dividend *= cdev->max_state;
+ divisor = tz->num_trips;
+@@ -116,7 +110,7 @@ static void fair_share_manage(struct thermal_zone_device *tz)
+ trip->type == THERMAL_TRIP_HOT)
+ continue;
+
+- fair_share_throttle(tz, trip, trip_level);
++ fair_share_throttle(tz, td, trip_level);
+ }
+ }
+
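
The common thread through these thermal governor patches is a data-structure move: instances now hang off a list in each trip descriptor, so the old "if (instance->trip != trip) continue;" filter disappears. The effect in miniature, using a plain singly linked list rather than the kernel's list_head:

struct instance {
	struct instance *next;		/* link in its trip's list */
	int weight;
};

struct trip_desc {
	struct instance *instances;	/* head of the per-trip list */
};

/* Nothing to filter: every node already belongs to this trip. */
static int total_weight(const struct trip_desc *td)
{
	int sum = 0;
	const struct instance *i;

	for (i = td->instances; i; i = i->next)
		sum += i->weight;
	return sum;
}
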
+diff --git a/drivers/thermal/gov_power_allocator.c b/drivers/thermal/gov_power_allocator.c
+index 1b2345a697c5a0..90b4bfd9237bce 100644
+--- a/drivers/thermal/gov_power_allocator.c
++++ b/drivers/thermal/gov_power_allocator.c
+@@ -97,11 +97,9 @@ struct power_allocator_params {
+ struct power_actor *power;
+ };
+
+-static bool power_actor_is_valid(struct power_allocator_params *params,
+- struct thermal_instance *instance)
++static bool power_actor_is_valid(struct thermal_instance *instance)
+ {
+- return (instance->trip == params->trip_max &&
+- cdev_is_power_actor(instance->cdev));
++ return cdev_is_power_actor(instance->cdev);
+ }
+
+ /**
+@@ -118,13 +116,14 @@ static bool power_actor_is_valid(struct power_allocator_params *params,
+ static u32 estimate_sustainable_power(struct thermal_zone_device *tz)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ struct thermal_cooling_device *cdev;
+ struct thermal_instance *instance;
+ u32 sustainable_power = 0;
+ u32 min_power;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (!power_actor_is_valid(params, instance))
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ cdev = instance->cdev;
+@@ -364,7 +363,7 @@ static void divvy_up_power(struct power_actor *power, int num_actors,
+
+ for (i = 0; i < num_actors; i++) {
+ struct power_actor *pa = &power[i];
+- u64 req_range = (u64)pa->req_power * power_range;
++ u64 req_range = (u64)pa->weighted_req_power * power_range;
+
+ pa->granted_power = DIV_ROUND_CLOSEST_ULL(req_range,
+ total_req_power);
+@@ -400,6 +399,7 @@ static void divvy_up_power(struct power_actor *power, int num_actors,
+ static void allocate_power(struct thermal_zone_device *tz, int control_temp)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ unsigned int num_actors = params->num_actors;
+ struct power_actor *power = params->power;
+ struct thermal_cooling_device *cdev;
+@@ -417,10 +417,10 @@ static void allocate_power(struct thermal_zone_device *tz, int control_temp)
+ /* Clean all buffers for new power estimations */
+ memset(power, 0, params->buffer_size);
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ struct power_actor *pa = &power[i];
+
+- if (!power_actor_is_valid(params, instance))
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ cdev = instance->cdev;
+@@ -454,10 +454,10 @@ static void allocate_power(struct thermal_zone_device *tz, int control_temp)
+ power_range);
+
+ i = 0;
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ struct power_actor *pa = &power[i];
+
+- if (!power_actor_is_valid(params, instance))
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ power_actor_set_power(instance->cdev, instance,
+@@ -538,12 +538,13 @@ static void reset_pid_controller(struct power_allocator_params *params)
+ static void allow_maximum_power(struct thermal_zone_device *tz)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ struct thermal_cooling_device *cdev;
+ struct thermal_instance *instance;
+ u32 req_power;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (!power_actor_is_valid(params, instance))
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ cdev = instance->cdev;
+@@ -581,13 +582,16 @@ static void allow_maximum_power(struct thermal_zone_device *tz)
+ static int check_power_actors(struct thermal_zone_device *tz,
+ struct power_allocator_params *params)
+ {
++ const struct thermal_trip_desc *td;
+ struct thermal_instance *instance;
+ int ret = 0;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (instance->trip != params->trip_max)
+- continue;
++ if (!params->trip_max)
++ return 0;
++
++ td = trip_to_trip_desc(params->trip_max);
+
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ if (!cdev_is_power_actor(instance->cdev)) {
+ dev_warn(&tz->device, "power_allocator: %s is not a power actor\n",
+ instance->cdev->type);
+@@ -631,30 +635,43 @@ static int allocate_actors_buffer(struct power_allocator_params *params,
+ return ret;
+ }
+
++static void power_allocator_update_weight(struct power_allocator_params *params)
++{
++ const struct thermal_trip_desc *td;
++ struct thermal_instance *instance;
++
++ if (!params->trip_max)
++ return;
++
++ td = trip_to_trip_desc(params->trip_max);
++
++ params->total_weight = 0;
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ if (power_actor_is_valid(instance))
++ params->total_weight += instance->weight;
++}
++
+ static void power_allocator_update_tz(struct thermal_zone_device *tz,
+ enum thermal_notify_event reason)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ struct thermal_instance *instance;
+ int num_actors = 0;
+
+ switch (reason) {
+ case THERMAL_TZ_BIND_CDEV:
+ case THERMAL_TZ_UNBIND_CDEV:
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+- if (power_actor_is_valid(params, instance))
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ if (power_actor_is_valid(instance))
+ num_actors++;
+
+- if (num_actors == params->num_actors)
+- return;
++ if (num_actors != params->num_actors)
++ allocate_actors_buffer(params, num_actors);
+
+- allocate_actors_buffer(params, num_actors);
+- break;
++ fallthrough;
+ case THERMAL_INSTANCE_WEIGHT_CHANGED:
+- params->total_weight = 0;
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+- if (power_actor_is_valid(params, instance))
+- params->total_weight += instance->weight;
++ power_allocator_update_weight(params);
+ break;
+ default:
+ break;
+@@ -720,6 +737,8 @@ static int power_allocator_bind(struct thermal_zone_device *tz)
+
+ tz->governor_data = params;
+
++ power_allocator_update_weight(params);
++
+ return 0;
+
+ free_params:
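
Besides the list conversion, divvy_up_power() gets a real fix: the share must be computed from the weighted request, matching the total it is divided by. The arithmetic on its own, with DIV_ROUND_CLOSEST_ULL spelled out; function and parameter names here are illustrative:

#include <stdint.h>

/* Grant each actor range * weighted_req / total_weighted, rounded to
 * nearest; mixing the unweighted request into this ratio (the old
 * code) skews every grant whenever weights differ. */
static uint32_t granted_power(uint32_t weighted_req, uint32_t range,
			      uint64_t total_weighted)
{
	uint64_t share = (uint64_t)weighted_req * range;

	if (!total_weighted)
		return 0;
	return (uint32_t)((share + total_weighted / 2) / total_weighted);
}
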
+diff --git a/drivers/thermal/gov_step_wise.c b/drivers/thermal/gov_step_wise.c
+index fd5527188cf91a..ea4bf88d37f337 100644
+--- a/drivers/thermal/gov_step_wise.c
++++ b/drivers/thermal/gov_step_wise.c
+@@ -66,9 +66,10 @@ static unsigned long get_target_state(struct thermal_instance *instance,
+ }
+
+ static void thermal_zone_trip_update(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ const struct thermal_trip_desc *td,
+ int trip_threshold)
+ {
++ const struct thermal_trip *trip = &td->trip;
+ enum thermal_trend trend = get_tz_trend(tz, trip);
+ int trip_id = thermal_zone_trip_id(tz, trip);
+ struct thermal_instance *instance;
+@@ -82,12 +83,9 @@ static void thermal_zone_trip_update(struct thermal_zone_device *tz,
+ dev_dbg(&tz->device, "Trip%d[type=%d,temp=%d]:trend=%d,throttle=%d\n",
+ trip_id, trip->type, trip_threshold, trend, throttle);
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ int old_target;
+
+- if (instance->trip != trip)
+- continue;
+-
+ old_target = instance->target;
+ instance->target = get_target_state(instance, trend, throttle);
+
+@@ -127,11 +125,13 @@ static void step_wise_manage(struct thermal_zone_device *tz)
+ trip->type == THERMAL_TRIP_HOT)
+ continue;
+
+- thermal_zone_trip_update(tz, trip, td->threshold);
++ thermal_zone_trip_update(tz, td, td->threshold);
+ }
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+- thermal_cdev_update(instance->cdev);
++ for_each_trip_desc(tz, td) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ thermal_cdev_update(instance->cdev);
++ }
+ }
+
+ static struct thermal_governor thermal_gov_step_wise = {
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 1d2f2b307bac50..c2fa236e10cda7 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -490,7 +490,7 @@ static void thermal_zone_device_check(struct work_struct *work)
+
+ static void thermal_zone_device_init(struct thermal_zone_device *tz)
+ {
+- struct thermal_instance *pos;
++ struct thermal_trip_desc *td;
+
+ INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_check);
+
+@@ -498,8 +498,12 @@ static void thermal_zone_device_init(struct thermal_zone_device *tz)
+ tz->passive = 0;
+ tz->prev_low_trip = -INT_MAX;
+ tz->prev_high_trip = INT_MAX;
+- list_for_each_entry(pos, &tz->thermal_instances, tz_node)
+- pos->initialized = false;
++ for_each_trip_desc(tz, td) {
++ struct thermal_instance *instance;
++
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ instance->initialized = false;
++ }
+ }
+
+ static void thermal_governor_trip_crossed(struct thermal_governor *governor,
+@@ -764,12 +768,12 @@ struct thermal_zone_device *thermal_zone_get_by_id(int id)
+ * Return: 0 on success, the proper error value otherwise.
+ */
+ static int thermal_bind_cdev_to_trip(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ struct thermal_trip *trip,
+ struct thermal_cooling_device *cdev,
+ struct cooling_spec *cool_spec)
+ {
+- struct thermal_instance *dev;
+- struct thermal_instance *pos;
++ struct thermal_trip_desc *td = trip_to_trip_desc(trip);
++ struct thermal_instance *dev, *instance;
+ bool upper_no_limit;
+ int result;
+
+@@ -832,13 +836,13 @@ static int thermal_bind_cdev_to_trip(struct thermal_zone_device *tz,
+ goto remove_trip_file;
+
+ mutex_lock(&cdev->lock);
+- list_for_each_entry(pos, &tz->thermal_instances, tz_node)
+- if (pos->trip == trip && pos->cdev == cdev) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ if (instance->cdev == cdev) {
+ result = -EEXIST;
+ break;
+ }
+ if (!result) {
+- list_add_tail(&dev->tz_node, &tz->thermal_instances);
++ list_add_tail(&dev->trip_node, &td->thermal_instances);
+ list_add_tail(&dev->cdev_node, &cdev->thermal_instances);
+ atomic_set(&tz->need_update, 1);
+
+@@ -872,15 +876,16 @@ static int thermal_bind_cdev_to_trip(struct thermal_zone_device *tz,
+ * This function is usually called in the thermal zone device .unbind callback.
+ */
+ static void thermal_unbind_cdev_from_trip(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ struct thermal_trip *trip,
+ struct thermal_cooling_device *cdev)
+ {
++ struct thermal_trip_desc *td = trip_to_trip_desc(trip);
+ struct thermal_instance *pos, *next;
+
+ mutex_lock(&cdev->lock);
+- list_for_each_entry_safe(pos, next, &tz->thermal_instances, tz_node) {
+- if (pos->trip == trip && pos->cdev == cdev) {
+- list_del(&pos->tz_node);
++ list_for_each_entry_safe(pos, next, &td->thermal_instances, trip_node) {
++ if (pos->cdev == cdev) {
++ list_del(&pos->trip_node);
+ list_del(&pos->cdev_node);
+
+ thermal_governor_update_tz(tz, THERMAL_TZ_UNBIND_CDEV);
+@@ -1435,7 +1440,6 @@ thermal_zone_device_register_with_trips(const char *type,
+ }
+ }
+
+- INIT_LIST_HEAD(&tz->thermal_instances);
+ INIT_LIST_HEAD(&tz->node);
+ ida_init(&tz->ida);
+ mutex_init(&tz->lock);
+@@ -1459,6 +1463,7 @@ thermal_zone_device_register_with_trips(const char *type,
+ tz->num_trips = num_trips;
+ for_each_trip_desc(tz, td) {
+ td->trip = *trip++;
++ INIT_LIST_HEAD(&td->thermal_instances);
+ /*
+ * Mark all thresholds as invalid to start with even though
+ * this only matters for the trips that start as invalid and
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 421522a2bb9d4c..163871699a602c 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -30,6 +30,7 @@ struct thermal_trip_desc {
+ struct thermal_trip trip;
+ struct thermal_trip_attrs trip_attrs;
+ struct list_head notify_list_node;
++ struct list_head thermal_instances;
+ int notify_temp;
+ int threshold;
+ };
+@@ -99,7 +100,6 @@ struct thermal_governor {
+ * @tzp: thermal zone parameters
+ * @governor: pointer to the governor for this thermal zone
+ * @governor_data: private pointer for governor data
+- * @thermal_instances: list of &struct thermal_instance of this thermal zone
+ * @ida: &struct ida to generate unique id for this zone's cooling
+ * devices
+ * @lock: lock to protect thermal_instances list
+@@ -133,7 +133,6 @@ struct thermal_zone_device {
+ struct thermal_zone_params *tzp;
+ struct thermal_governor *governor;
+ void *governor_data;
+- struct list_head thermal_instances;
+ struct ida ida;
+ struct mutex lock;
+ struct list_head node;
+@@ -230,7 +229,7 @@ struct thermal_instance {
+ struct device_attribute attr;
+ char weight_attr_name[THERMAL_NAME_LENGTH];
+ struct device_attribute weight_attr;
+- struct list_head tz_node; /* node in tz->thermal_instances */
++ struct list_head trip_node; /* node in trip->thermal_instances */
+ struct list_head cdev_node; /* node in cdev->thermal_instances */
+ unsigned int weight; /* The weight of the cooling device */
+ bool upper_no_limit;
+diff --git a/drivers/thermal/thermal_helpers.c b/drivers/thermal/thermal_helpers.c
+index dc374a7a1a659f..403d62d3ce77ee 100644
+--- a/drivers/thermal/thermal_helpers.c
++++ b/drivers/thermal/thermal_helpers.c
+@@ -43,10 +43,11 @@ static bool thermal_instance_present(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev,
+ const struct thermal_trip *trip)
+ {
++ const struct thermal_trip_desc *td = trip_to_trip_desc(trip);
+ struct thermal_instance *ti;
+
+- list_for_each_entry(ti, &tz->thermal_instances, tz_node) {
+- if (ti->trip == trip && ti->cdev == cdev)
++ list_for_each_entry(ti, &td->thermal_instances, trip_node) {
++ if (ti->cdev == cdev)
+ return true;
+ }
+
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 5d3d8ce672cd51..e0aa9d9d5604b7 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -293,12 +293,40 @@ static bool thermal_of_get_cooling_spec(struct device_node *map_np, int index,
+ return true;
+ }
+
++static bool thermal_of_cm_lookup(struct device_node *cm_np,
++ const struct thermal_trip *trip,
++ struct thermal_cooling_device *cdev,
++ struct cooling_spec *c)
++{
++ for_each_child_of_node_scoped(cm_np, child) {
++ struct device_node *tr_np;
++ int count, i;
++
++ tr_np = of_parse_phandle(child, "trip", 0);
++ if (tr_np != trip->priv)
++ continue;
++
++ /* The trip has been found, look up the cdev. */
++ count = of_count_phandle_with_args(child, "cooling-device",
++ "#cooling-cells");
++ if (count <= 0)
++ pr_err("Add a cooling_device property with at least one device\n");
++
++ for (i = 0; i < count; i++) {
++ if (thermal_of_get_cooling_spec(child, i, cdev, c))
++ return true;
++ }
++ }
++
++ return false;
++}
++
+ static bool thermal_of_should_bind(struct thermal_zone_device *tz,
+ const struct thermal_trip *trip,
+ struct thermal_cooling_device *cdev,
+ struct cooling_spec *c)
+ {
+- struct device_node *tz_np, *cm_np, *child;
++ struct device_node *tz_np, *cm_np;
+ bool result = false;
+
+ tz_np = thermal_of_zone_get_by_name(tz);
+@@ -312,28 +340,7 @@ static bool thermal_of_should_bind(struct thermal_zone_device *tz,
+ goto out;
+
+ /* Look up the trip and the cdev in the cooling maps. */
+- for_each_child_of_node(cm_np, child) {
+- struct device_node *tr_np;
+- int count, i;
+-
+- tr_np = of_parse_phandle(child, "trip", 0);
+- if (tr_np != trip->priv)
+- continue;
+-
+- /* The trip has been found, look up the cdev. */
+- count = of_count_phandle_with_args(child, "cooling-device", "#cooling-cells");
+- if (count <= 0)
+- pr_err("Add a cooling_device property with at least one device\n");
+-
+- for (i = 0; i < count; i++) {
+- result = thermal_of_get_cooling_spec(child, i, cdev, c);
+- if (result)
+- break;
+- }
+-
+- of_node_put(child);
+- break;
+- }
++ result = thermal_of_cm_lookup(cm_np, trip, cdev, c);
+
+ of_node_put(cm_np);
+ out:
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 8d4ad0a3f2cf02..252186124669a8 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -194,10 +194,12 @@ static int ufs_bsg_request(struct bsg_job *job)
+ ufshcd_rpm_put_sync(hba);
+ kfree(buff);
+ bsg_reply->result = ret;
+- job->reply_len = !rpmb ? sizeof(struct ufs_bsg_reply) : sizeof(struct ufs_rpmb_reply);
+ /* complete the job here only if no error */
+- if (ret == 0)
++ if (ret == 0) {
++ job->reply_len = rpmb ? sizeof(struct ufs_rpmb_reply) :
++ sizeof(struct ufs_bsg_reply);
+ bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 67410c4cebee6d..a3e95ef5eda82e 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -266,7 +266,7 @@ static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
+
+ static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
+ {
+- return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
++ return scsi_host_busy(hba->host) || ufshcd_has_pending_tasks(hba);
+ }
+
+ static const struct ufs_dev_quirk ufs_fixups[] = {
+@@ -639,8 +639,8 @@ static void ufshcd_print_host_state(struct ufs_hba *hba)
+ const struct scsi_device *sdev_ufs = hba->ufs_device_wlun;
+
+ dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state);
+- dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n",
+- hba->outstanding_reqs, hba->outstanding_tasks);
++ dev_err(hba->dev, "%d outstanding reqs, tasks=0x%lx\n",
++ scsi_host_busy(hba->host), hba->outstanding_tasks);
+ dev_err(hba->dev, "saved_err=0x%x, saved_uic_err=0x%x\n",
+ hba->saved_err, hba->saved_uic_err);
+ dev_err(hba->dev, "Device power mode=%d, UIC link state=%d\n",
+@@ -8975,7 +8975,7 @@ static enum scsi_timeout_action ufshcd_eh_timed_out(struct scsi_cmnd *scmd)
+ dev_info(hba->dev, "%s() finished; outstanding_tasks = %#lx.\n",
+ __func__, hba->outstanding_tasks);
+
+- return hba->outstanding_reqs ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE;
++ return scsi_host_busy(hba->host) ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE;
+ }
+
+ static const struct attribute_group *ufshcd_driver_groups[] = {
+@@ -10457,6 +10457,21 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ */
+ spin_lock_init(&hba->clk_gating.lock);
+
++ /*
++ * Set the default power management level for runtime and system PM.
++ * Host controller drivers can override them in their
++ * 'ufs_hba_variant_ops::init' callback.
++ *
++ * Default power saving mode is to keep UFS link in Hibern8 state
++ * and UFS device in sleep state.
++ */
++ hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ UFS_SLEEP_PWR_MODE,
++ UIC_LINK_HIBERN8_STATE);
++ hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ UFS_SLEEP_PWR_MODE,
++ UIC_LINK_HIBERN8_STATE);
++
+ err = ufshcd_hba_init(hba);
+ if (err)
+ goto out_error;
+@@ -10606,21 +10621,6 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ goto free_tmf_queue;
+ }
+
+- /*
+- * Set the default power management level for runtime and system PM if
+- * not set by the host controller drivers.
+- * Default power saving mode is to keep UFS link in Hibern8 state
+- * and UFS device in sleep state.
+- */
+- if (!hba->rpm_lvl)
+- hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+- UFS_SLEEP_PWR_MODE,
+- UIC_LINK_HIBERN8_STATE);
+- if (!hba->spm_lvl)
+- hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+- UFS_SLEEP_PWR_MODE,
+- UIC_LINK_HIBERN8_STATE);
+-
+ INIT_DELAYED_WORK(&hba->rpm_dev_flush_recheck_work, ufshcd_rpm_dev_flush_recheck_work);
+ INIT_DELAYED_WORK(&hba->ufs_rtc_update_work, ufshcd_rtc_work);
+
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index 038f9d0ae3af8e..4504e16b458cc1 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -163,6 +163,8 @@ static struct afs_server *afs_install_server(struct afs_cell *cell,
+ rb_insert_color(&server->uuid_rb, &net->fs_servers);
+ hlist_add_head_rcu(&server->proc_link, &net->fs_proc);
+
++ afs_get_cell(cell, afs_cell_trace_get_server);
++
+ added_dup:
+ write_seqlock(&net->fs_addr_lock);
+ estate = rcu_dereference_protected(server->endpoint_state,
+@@ -442,6 +444,7 @@ static void afs_server_rcu(struct rcu_head *rcu)
+ atomic_read(&server->active), afs_server_trace_free);
+ afs_put_endpoint_state(rcu_access_pointer(server->endpoint_state),
+ afs_estate_trace_put_server);
++ afs_put_cell(server->cell, afs_cell_trace_put_server);
+ kfree(server);
+ }
+
+diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c
+index 7e7e567a7f8a20..d20cd902ef949a 100644
+--- a/fs/afs/server_list.c
++++ b/fs/afs/server_list.c
+@@ -97,8 +97,8 @@ struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume,
+ break;
+ if (j < slist->nr_servers) {
+ if (slist->servers[j].server == server) {
+- afs_put_server(volume->cell->net, server,
+- afs_server_trace_put_slist_isort);
++ afs_unuse_server(volume->cell->net, server,
++ afs_server_trace_put_slist_isort);
+ continue;
+ }
+
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 035ba52742a504..4db912f5623055 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -780,6 +780,43 @@ int nfs4_inode_return_delegation(struct inode *inode)
+ return 0;
+ }
+
++/**
++ * nfs4_inode_set_return_delegation_on_close - asynchronously return a delegation
++ * @inode: inode to process
++ *
++ * This routine is called to request that the delegation be returned as soon
++ * as the file is closed. If the file is already closed, the delegation is
++ * immediately returned.
++ */
++void nfs4_inode_set_return_delegation_on_close(struct inode *inode)
++{
++ struct nfs_delegation *delegation;
++ struct nfs_delegation *ret = NULL;
++
++ if (!inode)
++ return;
++ rcu_read_lock();
++ delegation = nfs4_get_valid_delegation(inode);
++ if (!delegation)
++ goto out;
++ spin_lock(&delegation->lock);
++ if (!delegation->inode)
++ goto out_unlock;
++ if (list_empty(&NFS_I(inode)->open_files) &&
++ !test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
++ /* Refcount matched in nfs_end_delegation_return() */
++ ret = nfs_get_delegation(delegation);
++ } else
++ set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
++out_unlock:
++ spin_unlock(&delegation->lock);
++ if (ret)
++ nfs_clear_verifier_delegated(inode);
++out:
++ rcu_read_unlock();
++ nfs_end_delegation_return(inode, ret, 0);
++}
++
+ /**
+ * nfs4_inode_return_delegation_on_close - asynchronously return a delegation
+ * @inode: inode to process
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index 71524d34ed207c..8ff5ab9c5c2565 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -49,6 +49,7 @@ void nfs_inode_reclaim_delegation(struct inode *inode, const struct cred *cred,
+ unsigned long pagemod_limit, u32 deleg_type);
+ int nfs4_inode_return_delegation(struct inode *inode);
+ void nfs4_inode_return_delegation_on_close(struct inode *inode);
++void nfs4_inode_set_return_delegation_on_close(struct inode *inode);
+ int nfs_async_inode_return_delegation(struct inode *inode, const nfs4_stateid *stateid);
+ void nfs_inode_evict_delegation(struct inode *inode);
+
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 90079ca134dd3c..c1f1b826888c98 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -56,6 +56,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/atomic.h>
+
++#include "delegation.h"
+ #include "internal.h"
+ #include "iostat.h"
+ #include "pnfs.h"
+@@ -130,6 +131,20 @@ static void nfs_direct_truncate_request(struct nfs_direct_req *dreq,
+ dreq->count = req_start;
+ }
+
++static void nfs_direct_file_adjust_size_locked(struct inode *inode,
++ loff_t offset, size_t count)
++{
++ loff_t newsize = offset + (loff_t)count;
++ loff_t oldsize = i_size_read(inode);
++
++ if (newsize > oldsize) {
++ i_size_write(inode, newsize);
++ NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_SIZE;
++ trace_nfs_size_grow(inode, newsize);
++ nfs_inc_stats(inode, NFSIOS_EXTENDWRITE);
++ }
++}
++
+ /**
+ * nfs_swap_rw - NFS address space operation for swap I/O
+ * @iocb: target I/O control block
+@@ -272,6 +287,8 @@ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
+ nfs_direct_count_bytes(dreq, hdr);
+ spin_unlock(&dreq->lock);
+
++ nfs_update_delegated_atime(dreq->inode);
++
+ while (!list_empty(&hdr->pages)) {
+ struct nfs_page *req = nfs_list_entry(hdr->pages.next);
+ struct page *page = req->wb_page;
+@@ -732,6 +749,7 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ struct nfs_direct_req *dreq = hdr->dreq;
+ struct nfs_commit_info cinfo;
+ struct nfs_page *req = nfs_list_entry(hdr->pages.next);
++ struct inode *inode = dreq->inode;
+ int flags = NFS_ODIRECT_DONE;
+
+ trace_nfs_direct_write_completion(dreq);
+@@ -753,6 +771,11 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ }
+ spin_unlock(&dreq->lock);
+
++ spin_lock(&inode->i_lock);
++ nfs_direct_file_adjust_size_locked(inode, dreq->io_start, dreq->count);
++ nfs_update_delegated_mtime_locked(dreq->inode);
++ spin_unlock(&inode->i_lock);
++
+ while (!list_empty(&hdr->pages)) {
+
+ req = nfs_list_entry(hdr->pages.next);
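
nfs_direct_file_adjust_size_locked() encodes a grow-only rule: a completed direct write may extend the cached file size but never shrink it, since a concurrent completion may already have grown it further. Reduced to its core (the stats, tracepoint and validity-flag updates are dropped):

#include <stdint.h>

/* Called with the inode lock held in the real code; the comparison is
 * what keeps concurrent completions from moving the size backwards. */
static void adjust_size_locked(int64_t *isize, int64_t offset,
			       int64_t count)
{
	int64_t newsize = offset + count;

	if (newsize > *isize)
		*isize = newsize;
}
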
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 405f17e6e0b45b..e7bc99c69743cf 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3898,8 +3898,11 @@ nfs4_atomic_open(struct inode *dir, struct nfs_open_context *ctx,
+
+ static void nfs4_close_context(struct nfs_open_context *ctx, int is_sync)
+ {
++ struct dentry *dentry = ctx->dentry;
+ if (ctx->state == NULL)
+ return;
++ if (dentry->d_flags & DCACHE_NFSFS_RENAMED)
++ nfs4_inode_set_return_delegation_on_close(d_inode(dentry));
+ if (is_sync)
+ nfs4_close_sync(ctx->state, _nfs4_ctx_to_openmode(ctx));
+ else
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index b2c78621da44a4..4388004a319d0c 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -619,7 +619,6 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ err = PTR_ERR(upper);
+ if (!IS_ERR(upper)) {
+ err = ovl_do_link(ofs, ovl_dentry_upper(c->dentry), udir, upper);
+- dput(upper);
+
+ if (!err) {
+ /* Restore timestamps on parent (best effort) */
+@@ -627,6 +626,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ ovl_dentry_set_upper_alias(c->dentry);
+ ovl_dentry_update_reval(c->dentry, upper);
+ }
++ dput(upper);
+ }
+ inode_unlock(udir);
+ if (err)
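
The overlayfs fix is one line of motion: dput(upper) ran before ovl_dentry_update_reval(c->dentry, upper) was done with the dentry. The underlying rule, with a toy refcount standing in for dentry accounting:

#include <stdlib.h>

struct obj {
	int refs;
	int data;
};

static void put(struct obj *o)
{
	if (--o->refs == 0)
		free(o);
}

static int use(const struct obj *o)
{
	return o->data;			/* any access counts as a use */
}

/* Drop the reference only after the last use; swapping these two
 * statements reintroduces the use-after-free the patch removes. */
static int correct_order(struct obj *o)
{
	int v = use(o);

	put(o);
	return v;
}
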
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index fa284b64b2de20..23b358a1271cd9 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -450,7 +450,7 @@
+ . = ALIGN((align)); \
+ .rodata : AT(ADDR(.rodata) - LOAD_OFFSET) { \
+ __start_rodata = .; \
+- *(.rodata) *(.rodata.*) \
++ *(.rodata) *(.rodata.*) *(.data.rel.ro*) \
+ SCHED_DATA \
+ RO_AFTER_INIT_DATA /* Read only after init */ \
+ . = ALIGN(8); \
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index b7f327ce797e5b..8f37c5dd52b215 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -196,10 +196,11 @@ struct gendisk {
+ unsigned int zone_capacity;
+ unsigned int last_zone_capacity;
+ unsigned long __rcu *conv_zones_bitmap;
+- unsigned int zone_wplugs_hash_bits;
+- spinlock_t zone_wplugs_lock;
++ unsigned int zone_wplugs_hash_bits;
++ atomic_t nr_zone_wplugs;
++ spinlock_t zone_wplugs_lock;
+ struct mempool_s *zone_wplugs_pool;
+- struct hlist_head *zone_wplugs_hash;
++ struct hlist_head *zone_wplugs_hash;
+ struct workqueue_struct *zone_wplugs_wq;
+ #endif /* CONFIG_BLK_DEV_ZONED */
+
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index cd6f9aae311fca..070b3b680209cd 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -52,18 +52,6 @@
+ */
+ #define barrier_before_unreachable() asm volatile("")
+
+-/*
+- * Mark a position in code as unreachable. This can be used to
+- * suppress control flow warnings after asm blocks that transfer
+- * control elsewhere.
+- */
+-#define unreachable() \
+- do { \
+- annotate_unreachable(); \
+- barrier_before_unreachable(); \
+- __builtin_unreachable(); \
+- } while (0)
+-
+ #if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP)
+ #define __HAVE_BUILTIN_BSWAP32__
+ #define __HAVE_BUILTIN_BSWAP64__
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 2d962dade9faee..b15911e201bf95 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -109,44 +109,21 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+
+ /* Unreachable code */
+ #ifdef CONFIG_OBJTOOL
+-/*
+- * These macros help objtool understand GCC code flow for unreachable code.
+- * The __COUNTER__ based labels are a hack to make each instance of the macros
+- * unique, to convince GCC not to merge duplicate inline asm statements.
+- */
+-#define __stringify_label(n) #n
+-
+-#define __annotate_reachable(c) ({ \
+- asm volatile(__stringify_label(c) ":\n\t" \
+- ".pushsection .discard.reachable\n\t" \
+- ".long " __stringify_label(c) "b - .\n\t" \
+- ".popsection\n\t"); \
+-})
+-#define annotate_reachable() __annotate_reachable(__COUNTER__)
+-
+-#define __annotate_unreachable(c) ({ \
+- asm volatile(__stringify_label(c) ":\n\t" \
+- ".pushsection .discard.unreachable\n\t" \
+- ".long " __stringify_label(c) "b - .\n\t" \
+- ".popsection\n\t" : : "i" (c)); \
+-})
+-#define annotate_unreachable() __annotate_unreachable(__COUNTER__)
+-
+ /* Annotate a C jump table to allow objtool to follow the code flow */
+-#define __annotate_jump_table __section(".rodata..c_jump_table,\"a\",@progbits #")
+-
++#define __annotate_jump_table __section(".data.rel.ro.c_jump_table")
+ #else /* !CONFIG_OBJTOOL */
+-#define annotate_reachable()
+-#define annotate_unreachable()
+ #define __annotate_jump_table
+ #endif /* CONFIG_OBJTOOL */
+
+-#ifndef unreachable
+-# define unreachable() do { \
+- annotate_unreachable(); \
++/*
++ * Mark a position in code as unreachable. This can be used to
++ * suppress control flow warnings after asm blocks that transfer
++ * control elsewhere.
++ */
++#define unreachable() do { \
++ barrier_before_unreachable(); \
+ __builtin_unreachable(); \
+ } while (0)
+-#endif
+
+ /*
+ * KENTRY - kernel entry point
+diff --git a/include/linux/rcuref.h b/include/linux/rcuref.h
+index 2c8bfd0f1b6b3a..6322d8c1c6b429 100644
+--- a/include/linux/rcuref.h
++++ b/include/linux/rcuref.h
+@@ -71,27 +71,30 @@ static inline __must_check bool rcuref_get(rcuref_t *ref)
+ return rcuref_get_slowpath(ref);
+ }
+
+-extern __must_check bool rcuref_put_slowpath(rcuref_t *ref);
++extern __must_check bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt);
+
+ /*
+ * Internal helper. Do not invoke directly.
+ */
+ static __always_inline __must_check bool __rcuref_put(rcuref_t *ref)
+ {
++ int cnt;
++
+ RCU_LOCKDEP_WARN(!rcu_read_lock_held() && preemptible(),
+ "suspicious rcuref_put_rcusafe() usage");
+ /*
+ * Unconditionally decrease the reference count. The saturation and
+ * dead zones provide enough tolerance for this.
+ */
+- if (likely(!atomic_add_negative_release(-1, &ref->refcnt)))
++ cnt = atomic_sub_return_release(1, &ref->refcnt);
++ if (likely(cnt >= 0))
+ return false;
+
+ /*
+ * Handle the last reference drop and cases inside the saturation
+ * and dead zones.
+ */
+- return rcuref_put_slowpath(ref);
++ return rcuref_put_slowpath(ref, cnt);
+ }
+
+ /**
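
For readers following the rcuref change above: the fastpath switches from atomic_add_negative_release() to atomic_sub_return_release() so the exact post-decrement count can be handed to rcuref_put_slowpath() instead of being re-read there, where a concurrent get()/put() pair could have changed it. A minimal userspace sketch of the same shape, using C11 atomics rather than the kernel's atomic_t (all names here are illustrative, not the kernel API):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for rcuref_put_slowpath(): it now trusts the count the
     * fastpath actually observed instead of re-reading the atomic. */
    static bool put_slowpath(atomic_int *ref, int cnt)
    {
        (void)ref;
        return cnt == -1;          /* this put dropped the last reference */
    }

    static bool ref_put(atomic_int *ref)
    {
        /* fetch_sub returns the old value; subtracting one yields the new
         * value, mirroring atomic_sub_return_release(). */
        int cnt = atomic_fetch_sub_explicit(ref, 1, memory_order_release) - 1;

        if (cnt >= 0)
            return false;          /* other references remain */
        return put_slowpath(ref, cnt);
    }

    int main(void)
    {
        atomic_int ref = 1;
        printf("last ref dropped: %d\n", ref_put(&ref));   /* prints 1 */
        return 0;
    }
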
+diff --git a/include/linux/socket.h b/include/linux/socket.h
+index d18cc47e89bd01..c3322eb3d6865d 100644
+--- a/include/linux/socket.h
++++ b/include/linux/socket.h
+@@ -392,6 +392,8 @@ struct ucred {
+
+ extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr);
+ extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data);
++extern int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len,
++ void *data);
+
+ struct timespec64;
+ struct __kernel_timespec;
+diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
+index fec1e8a1570c36..eac57914dcf320 100644
+--- a/include/linux/sunrpc/sched.h
++++ b/include/linux/sunrpc/sched.h
+@@ -158,7 +158,6 @@ enum {
+ RPC_TASK_NEED_XMIT,
+ RPC_TASK_NEED_RECV,
+ RPC_TASK_MSG_PIN_WAIT,
+- RPC_TASK_SIGNALLED,
+ };
+
+ #define rpc_test_and_set_running(t) \
+@@ -171,7 +170,7 @@ enum {
+
+ #define RPC_IS_ACTIVATED(t) test_bit(RPC_TASK_ACTIVE, &(t)->tk_runstate)
+
+-#define RPC_SIGNALLED(t) test_bit(RPC_TASK_SIGNALLED, &(t)->tk_runstate)
++#define RPC_SIGNALLED(t) (READ_ONCE(task->tk_rpc_status) == -ERESTARTSYS)
+
+ /*
+ * Task priorities.
+diff --git a/include/net/ip.h b/include/net/ip.h
+index fe4f8543811433..bd201278c55a58 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -424,6 +424,11 @@ int ip_decrease_ttl(struct iphdr *iph)
+ return --iph->ttl;
+ }
+
++static inline dscp_t ip4h_dscp(const struct iphdr *ip4h)
++{
++ return inet_dsfield_to_dscp(ip4h->tos);
++}
++
+ static inline int ip_mtu_locked(const struct dst_entry *dst)
+ {
+ const struct rtable *rt = dst_rtable(dst);
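
The new ip4h_dscp() helper above wraps inet_dsfield_to_dscp(), giving callers a type-safe dscp_t instead of a raw tos byte; the following hunks (ip_route_input() and its callers) switch their parameters accordingly. Assuming the usual IPv4 dsfield layout (upper six bits DSCP, lower two bits ECN), the conversion amounts to masking off the ECN bits, sketched here outside the kernel's type system:

    #include <stdint.h>
    #include <stdio.h>

    /* dscp_t in the kernel is a distinct bitwise type; a plain uint8_t
     * stands in for it in this illustration. */
    typedef uint8_t dscp_t;

    static dscp_t dsfield_to_dscp(uint8_t tos)
    {
        return (dscp_t)(tos & 0xfc);   /* keep DSCP, drop the ECN bits */
    }

    int main(void)
    {
        uint8_t tos = 0xb9;   /* DSCP 46 (EF) with an ECN bit set */
        printf("dscp field: %#x\n", dsfield_to_dscp(tos));   /* 0xb8 */
        return 0;
    }
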
+diff --git a/include/net/route.h b/include/net/route.h
+index da34b6fa9862dc..8a11d19f897bb2 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -208,12 +208,13 @@ int ip_route_use_hint(struct sk_buff *skb, __be32 dst, __be32 src,
+ const struct sk_buff *hint);
+
+ static inline int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src,
+- u8 tos, struct net_device *devin)
++ dscp_t dscp, struct net_device *devin)
+ {
+ int err;
+
+ rcu_read_lock();
+- err = ip_route_input_noref(skb, dst, src, tos, devin);
++ err = ip_route_input_noref(skb, dst, src, inet_dscp_to_dsfield(dscp),
++ devin);
+ if (!err) {
+ skb_dst_force(skb);
+ if (!skb_dst(skb))
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index 3dc7a1551ac350..5d653a3491d073 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -12,6 +12,7 @@
+ #include <linux/firmware/cirrus/cs_dsp.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/regmap.h>
++#include <linux/spi/spi.h>
+ #include <sound/cs-amp-lib.h>
+
+ #define CS35L56_DEVID 0x0000000
+@@ -61,6 +62,7 @@
+ #define CS35L56_IRQ1_MASK_8 0x000E0AC
+ #define CS35L56_IRQ1_MASK_18 0x000E0D4
+ #define CS35L56_IRQ1_MASK_20 0x000E0DC
++#define CS35L56_DSP_MBOX_1_RAW 0x0011000
+ #define CS35L56_DSP_VIRTUAL1_MBOX_1 0x0011020
+ #define CS35L56_DSP_VIRTUAL1_MBOX_2 0x0011024
+ #define CS35L56_DSP_VIRTUAL1_MBOX_3 0x0011028
+@@ -224,6 +226,7 @@
+ #define CS35L56_HALO_STATE_SHUTDOWN 1
+ #define CS35L56_HALO_STATE_BOOT_DONE 2
+
++#define CS35L56_MBOX_CMD_PING 0x0A000000
+ #define CS35L56_MBOX_CMD_AUDIO_PLAY 0x0B000001
+ #define CS35L56_MBOX_CMD_AUDIO_PAUSE 0x0B000002
+ #define CS35L56_MBOX_CMD_AUDIO_REINIT 0x0B000003
+@@ -254,6 +257,16 @@
+ #define CS35L56_NUM_BULK_SUPPLIES 3
+ #define CS35L56_NUM_DSP_REGIONS 5
+
++/* Additional margin for SYSTEM_RESET to control port ready on SPI */
++#define CS35L56_SPI_RESET_TO_PORT_READY_US (CS35L56_CONTROL_PORT_READY_US + 2500)
++
++struct cs35l56_spi_payload {
++ __be32 addr;
++ __be16 pad;
++ __be32 value;
++} __packed;
++static_assert(sizeof(struct cs35l56_spi_payload) == 10);
++
+ struct cs35l56_base {
+ struct device *dev;
+ struct regmap *regmap;
+@@ -269,6 +282,7 @@ struct cs35l56_base {
+ s8 cal_index;
+ struct cirrus_amp_cal_data cal_data;
+ struct gpio_desc *reset_gpio;
++ struct cs35l56_spi_payload *spi_payload_buf;
+ };
+
+ static inline bool cs35l56_is_otp_register(unsigned int reg)
+@@ -276,6 +290,23 @@ static inline bool cs35l56_is_otp_register(unsigned int reg)
+ return (reg >> 16) == 3;
+ }
+
++static inline int cs35l56_init_config_for_spi(struct cs35l56_base *cs35l56,
++ struct spi_device *spi)
++{
++ cs35l56->spi_payload_buf = devm_kzalloc(&spi->dev,
++ sizeof(*cs35l56->spi_payload_buf),
++ GFP_KERNEL | GFP_DMA);
++ if (!cs35l56->spi_payload_buf)
++ return -ENOMEM;
++
++ return 0;
++}
++
++static inline bool cs35l56_is_spi(struct cs35l56_base *cs35l56)
++{
++ return IS_ENABLED(CONFIG_SPI_MASTER) && !!cs35l56->spi_payload_buf;
++}
++
+ extern const struct regmap_config cs35l56_regmap_i2c;
+ extern const struct regmap_config cs35l56_regmap_spi;
+ extern const struct regmap_config cs35l56_regmap_sdw;
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index 9a75590227f262..3dddfc6abf0ee3 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -173,6 +173,7 @@ enum yfs_cm_operation {
+ EM(afs_cell_trace_get_queue_dns, "GET q-dns ") \
+ EM(afs_cell_trace_get_queue_manage, "GET q-mng ") \
+ EM(afs_cell_trace_get_queue_new, "GET q-new ") \
++ EM(afs_cell_trace_get_server, "GET server") \
+ EM(afs_cell_trace_get_vol, "GET vol ") \
+ EM(afs_cell_trace_insert, "INSERT ") \
+ EM(afs_cell_trace_manage, "MANAGE ") \
+@@ -180,6 +181,7 @@ enum yfs_cm_operation {
+ EM(afs_cell_trace_put_destroy, "PUT destry") \
+ EM(afs_cell_trace_put_queue_work, "PUT q-work") \
+ EM(afs_cell_trace_put_queue_fail, "PUT q-fail") \
++ EM(afs_cell_trace_put_server, "PUT server") \
+ EM(afs_cell_trace_put_vol, "PUT vol ") \
+ EM(afs_cell_trace_see_source, "SEE source") \
+ EM(afs_cell_trace_see_ws, "SEE ws ") \
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 5e849521668954..5fe852bd31abc9 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -360,8 +360,7 @@ TRACE_EVENT(rpc_request,
+ { (1UL << RPC_TASK_ACTIVE), "ACTIVE" }, \
+ { (1UL << RPC_TASK_NEED_XMIT), "NEED_XMIT" }, \
+ { (1UL << RPC_TASK_NEED_RECV), "NEED_RECV" }, \
+- { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" }, \
+- { (1UL << RPC_TASK_SIGNALLED), "SIGNALLED" })
++ { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" })
+
+ DECLARE_EVENT_CLASS(rpc_task_running,
+
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 3974c417fe2644..f32311f6411338 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -334,7 +334,9 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+ if (unlikely(ret))
+ return ret;
+
+- return __get_compat_msghdr(&iomsg->msg, &cmsg, NULL);
++ ret = __get_compat_msghdr(&iomsg->msg, &cmsg, NULL);
++ sr->msg_control = iomsg->msg.msg_control_user;
++ return ret;
+ }
+ #endif
+
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 501d8c2fedff40..a0e1d2124727e1 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -4957,7 +4957,7 @@ static struct perf_event_pmu_context *
+ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
+ struct perf_event *event)
+ {
+- struct perf_event_pmu_context *new = NULL, *epc;
++ struct perf_event_pmu_context *new = NULL, *pos = NULL, *epc;
+ void *task_ctx_data = NULL;
+
+ if (!ctx->task) {
+@@ -5014,12 +5014,19 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
+ atomic_inc(&epc->refcount);
+ goto found_epc;
+ }
++ /* Make sure the pmu_ctx_list is sorted by PMU type: */
++ if (!pos && epc->pmu->type > pmu->type)
++ pos = epc;
+ }
+
+ epc = new;
+ new = NULL;
+
+- list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list);
++ if (!pos)
++ list_add_tail(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list);
++ else
++ list_add(&epc->pmu_ctx_entry, pos->pmu_ctx_entry.prev);
++
+ epc->ctx = ctx;
+
+ found_epc:
+@@ -5969,14 +5976,15 @@ static int _perf_event_period(struct perf_event *event, u64 value)
+ if (!value)
+ return -EINVAL;
+
+- if (event->attr.freq && value > sysctl_perf_event_sample_rate)
+- return -EINVAL;
+-
+- if (perf_event_check_period(event, value))
+- return -EINVAL;
+-
+- if (!event->attr.freq && (value & (1ULL << 63)))
+- return -EINVAL;
++ if (event->attr.freq) {
++ if (value > sysctl_perf_event_sample_rate)
++ return -EINVAL;
++ } else {
++ if (perf_event_check_period(event, value))
++ return -EINVAL;
++ if (value & (1ULL << 63))
++ return -EINVAL;
++ }
+
+ event_function_call(event, __perf_event_period, &value);
+
+@@ -8233,7 +8241,8 @@ void perf_event_exec(void)
+
+ perf_event_enable_on_exec(ctx);
+ perf_event_remove_on_exec(ctx);
+- perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);
++ scoped_guard(rcu)
++ perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);
+
+ perf_unpin_context(ctx);
+ put_ctx(ctx);
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 4b52cb2ae6d620..a0e0676f5d8bbe 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -489,6 +489,11 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
+ if (ret <= 0)
+ goto put_old;
+
++ if (is_zero_page(old_page)) {
++ ret = -EINVAL;
++ goto put_old;
++ }
++
+ if (WARN(!is_register && PageCompound(old_page),
+ "uprobe unregister should never work on compound page\n")) {
+ ret = -EINVAL;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c72356836eb628..9803f10a082a7b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -7229,7 +7229,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
+ #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+ int __sched __cond_resched(void)
+ {
+- if (should_resched(0)) {
++ if (should_resched(0) && !irqs_disabled()) {
+ preempt_schedule_common();
+ return 1;
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index aa57ae3eb1ff5e..325fd5b9d47152 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3047,7 +3047,6 @@ static struct task_struct *pick_task_scx(struct rq *rq)
+ {
+ struct task_struct *prev = rq->curr;
+ struct task_struct *p;
+- bool prev_on_scx = prev->sched_class == &ext_sched_class;
+ bool keep_prev = rq->scx.flags & SCX_RQ_BAL_KEEP;
+ bool kick_idle = false;
+
+@@ -3067,14 +3066,18 @@ static struct task_struct *pick_task_scx(struct rq *rq)
+ * if pick_task_scx() is called without preceding balance_scx().
+ */
+ if (unlikely(rq->scx.flags & SCX_RQ_BAL_PENDING)) {
+- if (prev_on_scx) {
++ if (prev->scx.flags & SCX_TASK_QUEUED) {
+ keep_prev = true;
+ } else {
+ keep_prev = false;
+ kick_idle = true;
+ }
+- } else if (unlikely(keep_prev && !prev_on_scx)) {
+- /* only allowed during transitions */
++ } else if (unlikely(keep_prev &&
++ prev->sched_class != &ext_sched_class)) {
++ /*
++ * Can happen while enabling as SCX_RQ_BAL_PENDING assertion is
++ * conditional on scx_enabled() and may have been skipped.
++ */
+ WARN_ON_ONCE(scx_ops_enable_state() == SCX_OPS_ENABLED);
+ keep_prev = false;
+ }
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 71cc1bbfe9aa3e..dbd375f28ee098 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -541,6 +541,7 @@ static int function_stat_show(struct seq_file *m, void *v)
+ static struct trace_seq s;
+ unsigned long long avg;
+ unsigned long long stddev;
++ unsigned long long stddev_denom;
+ #endif
+ mutex_lock(&ftrace_profile_lock);
+
+@@ -562,23 +563,19 @@ static int function_stat_show(struct seq_file *m, void *v)
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ seq_puts(m, " ");
+
+- /* Sample standard deviation (s^2) */
+- if (rec->counter <= 1)
+- stddev = 0;
+- else {
+- /*
+- * Apply Welford's method:
+- * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
+- */
++ /*
++ * Variance formula:
++ * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
++ * Maybe Welford's method is better here?
++ * Divide only by 1000 for ns^2 -> us^2 conversion.
++ * trace_print_graph_duration will divide by 1000 again.
++ */
++ stddev = 0;
++ stddev_denom = rec->counter * (rec->counter - 1) * 1000;
++ if (stddev_denom) {
+ stddev = rec->counter * rec->time_squared -
+ rec->time * rec->time;
+-
+- /*
+- * Divide only 1000 for ns^2 -> us^2 conversion.
+- * trace_print_graph_duration will divide 1000 again.
+- */
+- stddev = div64_ul(stddev,
+- rec->counter * (rec->counter - 1) * 1000);
++ stddev = div64_ul(stddev, stddev_denom);
+ }
+
+ trace_seq_init(&s);
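
The reworked function_stat_show() hunk above folds the n <= 1 guard into the denominator check of the running-sums variance formula, s^2 = (n * sum(x_i^2) - (sum x_i)^2) / (n * (n - 1)), computed from the totals ftrace already accumulates. A worked example in plain C (the extra * 1000 in the kernel code is only the ns^2 -> us^2 unit conversion and is omitted here):

    #include <stdio.h>

    static unsigned long long variance(unsigned long long n,
                                       unsigned long long sum,
                                       unsigned long long sum_sq)
    {
        unsigned long long denom = n * (n - 1);   /* 0 whenever n <= 1 */

        if (!denom)
            return 0;
        return (n * sum_sq - sum * sum) / denom;
    }

    int main(void)
    {
        /* samples {2, 4, 6}: sum = 12, sum of squares = 56 */
        printf("s^2 = %llu\n", variance(3, 12, 56));   /* prints 4 */
        return 0;
    }
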
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 5f9119eb7c67f6..31f5ad322fab0a 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -6652,27 +6652,27 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
+ if (existing_hist_update_only(glob, trigger_data, file))
+ goto out_free;
+
+- ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
+- if (ret < 0)
+- goto out_free;
++ if (!get_named_trigger_data(trigger_data)) {
+
+- if (get_named_trigger_data(trigger_data))
+- goto enable;
++ ret = create_actions(hist_data);
++ if (ret)
++ goto out_free;
+
+- ret = create_actions(hist_data);
+- if (ret)
+- goto out_unreg;
++ if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
++ ret = save_hist_vars(hist_data);
++ if (ret)
++ goto out_free;
++ }
+
+- if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
+- ret = save_hist_vars(hist_data);
++ ret = tracing_map_init(hist_data->map);
+ if (ret)
+- goto out_unreg;
++ goto out_free;
+ }
+
+- ret = tracing_map_init(hist_data->map);
+- if (ret)
+- goto out_unreg;
+-enable:
++ ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
++ if (ret < 0)
++ goto out_free;
++
+ ret = hist_trigger_enable(trigger_data, file);
+ if (ret)
+ goto out_unreg;
+diff --git a/lib/rcuref.c b/lib/rcuref.c
+index 97f300eca927ce..5bd726b71e3936 100644
+--- a/lib/rcuref.c
++++ b/lib/rcuref.c
+@@ -220,6 +220,7 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
+ /**
+ * rcuref_put_slowpath - Slowpath of __rcuref_put()
+ * @ref: Pointer to the reference count
++ * @cnt: The resulting value of the fastpath decrement
+ *
+ * Invoked when the reference count is outside of the valid zone.
+ *
+@@ -233,10 +234,8 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
+ * with a concurrent get()/put() pair. Caller is not allowed to
+ * deconstruct the protected object.
+ */
+-bool rcuref_put_slowpath(rcuref_t *ref)
++bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt)
+ {
+- unsigned int cnt = atomic_read(&ref->refcnt);
+-
+ /* Did this drop the last reference? */
+ if (likely(cnt == RCUREF_NOREF)) {
+ /*
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 27b4c4a2ba1fdd..728a5ce9b50587 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -636,7 +636,8 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
+ test_bit(FLAG_HOLD_HCI_CONN, &chan->flags))
+ hci_conn_hold(conn->hcon);
+
+- list_add(&chan->list, &conn->chan_l);
++ /* Append to the list since the order matters for ECRED */
++ list_add_tail(&chan->list, &conn->chan_l);
+ }
+
+ void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
+@@ -3776,7 +3777,11 @@ static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
+ struct l2cap_ecred_conn_rsp *rsp_flex =
+ container_of(&rsp->pdu.rsp, struct l2cap_ecred_conn_rsp, hdr);
+
+- if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
++ /* Check if channel for outgoing connection or if it wasn't deferred
++ * since in those cases it must be skipped.
++ */
++ if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags) ||
++ !test_and_clear_bit(FLAG_DEFER_SETUP, &chan->flags))
+ return;
+
+ /* Reset ident so only one response is sent */
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 1d458e9da660c9..17a5f5923d615d 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -370,9 +370,9 @@ br_nf_ipv4_daddr_was_changed(const struct sk_buff *skb,
+ */
+ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+- struct net_device *dev = skb->dev, *br_indev;
+- struct iphdr *iph = ip_hdr(skb);
+ struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
++ struct net_device *dev = skb->dev, *br_indev;
++ const struct iphdr *iph = ip_hdr(skb);
+ struct rtable *rt;
+ int err;
+
+@@ -390,7 +390,9 @@ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_
+ }
+ nf_bridge->in_prerouting = 0;
+ if (br_nf_ipv4_daddr_was_changed(skb, nf_bridge)) {
+- if ((err = ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, dev))) {
++ err = ip_route_input(skb, iph->daddr, iph->saddr,
++ ip4h_dscp(iph), dev);
++ if (err) {
+ struct in_device *in_dev = __in_dev_get_rcu(dev);
+
+ /* If err equals -EHOSTUNREACH the error is due to a
+diff --git a/net/core/gro.c b/net/core/gro.c
+index 78b320b6317445..0ad549b07e0399 100644
+--- a/net/core/gro.c
++++ b/net/core/gro.c
+@@ -653,6 +653,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+ skb->pkt_type = PACKET_HOST;
+
+ skb->encapsulation = 0;
++ skb->ip_summed = CHECKSUM_NONE;
+ skb_shinfo(skb)->gso_type = 0;
+ skb_shinfo(skb)->gso_size = 0;
+ if (unlikely(skb->slow_gro)) {
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 4f6a14babe5ae3..733c0cbd393d24 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -282,6 +282,16 @@ int put_cmsg(struct msghdr * msg, int level, int type, int len, void *data)
+ }
+ EXPORT_SYMBOL(put_cmsg);
+
++int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len,
++ void *data)
++{
++ /* Don't produce truncated CMSGs */
++ if (!msg->msg_control || msg->msg_controllen < CMSG_LEN(len))
++ return -ETOOSMALL;
++
++ return put_cmsg(msg, level, type, len, data);
++}
++
+ void put_cmsg_scm_timestamping64(struct msghdr *msg, struct scm_timestamping_internal *tss_internal)
+ {
+ struct scm_timestamping64 tss;
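
put_cmsg_notrunc() above exists so that callers (the TCP devmem receive paths changed later in this patch) get a hard -ETOOSMALL instead of a silently truncated control message. A userspace illustration of the same guard, returning -EMSGSIZE since ETOOSMALL is a kernel-internal errno:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Refuse to emit a cmsg unless the full CMSG_LEN(len) fits, so the
     * receiver never has to detect truncation via MSG_CTRUNC. */
    static int emit_cmsg_notrunc(struct msghdr *msg, size_t len)
    {
        if (!msg->msg_control || msg->msg_controllen < CMSG_LEN(len))
            return -EMSGSIZE;   /* the kernel helper returns -ETOOSMALL */
        return 0;               /* real code would copy the cmsg in here */
    }

    int main(void)
    {
        char buf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { 0 };

        msg.msg_control = buf;
        msg.msg_controllen = sizeof(buf);
        printf("fits:      %d\n", emit_cmsg_notrunc(&msg, sizeof(int)));
        msg.msg_controllen = 1;
        printf("too small: %d\n", emit_cmsg_notrunc(&msg, sizeof(int)));
        return 0;
    }
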
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 61a950f13a91c7..f220306731dac8 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6127,11 +6127,11 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
+ skb->offload_fwd_mark = 0;
+ skb->offload_l3_fwd_mark = 0;
+ #endif
++ ipvs_reset(skb);
+
+ if (!xnet)
+ return;
+
+- ipvs_reset(skb);
+ skb->mark = 0;
+ skb_clear_tstamp(skb);
+ }
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 5dd54a81339806..47e2743ffe2289 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -34,6 +34,7 @@ static int min_sndbuf = SOCK_MIN_SNDBUF;
+ static int min_rcvbuf = SOCK_MIN_RCVBUF;
+ static int max_skb_frags = MAX_SKB_FRAGS;
+ static int min_mem_pcpu_rsv = SK_MEMORY_PCPU_RESERVE;
++static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ;
+
+ static int net_msg_warn; /* Unused, but still a sysctl */
+
+@@ -580,7 +581,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+- .extra1 = SYSCTL_ZERO,
++ .extra1 = &netdev_budget_usecs_min,
+ },
+ {
+ .procname = "fb_tunnels_only_for_init_net",
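
The new floor above, 2 * USEC_PER_SEC / HZ, is two scheduler ticks expressed in microseconds, so netdev_budget_usecs can no longer be set below the kernel's own timing granularity. The resulting minimum for common HZ choices:

    #include <stdio.h>

    int main(void)
    {
        const unsigned int usec_per_sec = 1000000;
        const unsigned int hz[] = { 100, 250, 300, 1000 };

        for (unsigned int i = 0; i < sizeof(hz) / sizeof(hz[0]); i++)
            printf("HZ=%-4u -> netdev_budget_usecs >= %u\n",
                   hz[i], 2 * usec_per_sec / hz[i]);
        return 0;   /* 20000, 8000, 6666 and 2000 respectively */
    }
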
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index f45bc187a92a7e..b8111ec651b545 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -477,13 +477,11 @@ static struct net_device *icmp_get_route_lookup_dev(struct sk_buff *skb)
+ return route_lookup_dev;
+ }
+
+-static struct rtable *icmp_route_lookup(struct net *net,
+- struct flowi4 *fl4,
++static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4,
+ struct sk_buff *skb_in,
+- const struct iphdr *iph,
+- __be32 saddr, u8 tos, u32 mark,
+- int type, int code,
+- struct icmp_bxm *param)
++ const struct iphdr *iph, __be32 saddr,
++ dscp_t dscp, u32 mark, int type,
++ int code, struct icmp_bxm *param)
+ {
+ struct net_device *route_lookup_dev;
+ struct dst_entry *dst, *dst2;
+@@ -497,7 +495,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ fl4->saddr = saddr;
+ fl4->flowi4_mark = mark;
+ fl4->flowi4_uid = sock_net_uid(net, NULL);
+- fl4->flowi4_tos = tos & INET_DSCP_MASK;
++ fl4->flowi4_tos = inet_dscp_to_dsfield(dscp);
+ fl4->flowi4_proto = IPPROTO_ICMP;
+ fl4->fl4_icmp_type = type;
+ fl4->fl4_icmp_code = code;
+@@ -549,7 +547,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ orefdst = skb_in->_skb_refdst; /* save old refdst */
+ skb_dst_set(skb_in, NULL);
+ err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr,
+- tos, rt2->dst.dev);
++ dscp, rt2->dst.dev);
+
+ dst_release(&rt2->dst);
+ rt2 = skb_rtable(skb_in);
+@@ -745,8 +743,9 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ ipc.opt = &icmp_param.replyopts.opt;
+ ipc.sockc.mark = mark;
+
+- rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, tos, mark,
+- type, code, &icmp_param);
++ rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr,
++ inet_dsfield_to_dscp(tos), mark, type, code,
++ &icmp_param);
+ if (IS_ERR(rt))
+ goto out_unlock;
+
+diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
+index 68aedb8877b9f4..81e86e5defee6b 100644
+--- a/net/ipv4/ip_options.c
++++ b/net/ipv4/ip_options.c
+@@ -617,7 +617,8 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
+
+ orefdst = skb->_skb_refdst;
+ skb_dst_set(skb, NULL);
+- err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, dev);
++ err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph),
++ dev);
+ rt2 = skb_rtable(skb);
+ if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
+ skb_dst_drop(skb);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 68cb6a966b18b8..b731a4a8f2b0d5 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2456,14 +2456,12 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
+ */
+ memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg));
+ dmabuf_cmsg.frag_size = copy;
+- err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
+- sizeof(dmabuf_cmsg), &dmabuf_cmsg);
+- if (err || msg->msg_flags & MSG_CTRUNC) {
+- msg->msg_flags &= ~MSG_CTRUNC;
+- if (!err)
+- err = -ETOOSMALL;
++ err = put_cmsg_notrunc(msg, SOL_SOCKET,
++ SO_DEVMEM_LINEAR,
++ sizeof(dmabuf_cmsg),
++ &dmabuf_cmsg);
++ if (err)
+ goto out;
+- }
+
+ sent += copy;
+
+@@ -2517,16 +2515,12 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
+ offset += copy;
+ remaining_len -= copy;
+
+- err = put_cmsg(msg, SOL_SOCKET,
+- SO_DEVMEM_DMABUF,
+- sizeof(dmabuf_cmsg),
+- &dmabuf_cmsg);
+- if (err || msg->msg_flags & MSG_CTRUNC) {
+- msg->msg_flags &= ~MSG_CTRUNC;
+- if (!err)
+- err = -ETOOSMALL;
++ err = put_cmsg_notrunc(msg, SOL_SOCKET,
++ SO_DEVMEM_DMABUF,
++ sizeof(dmabuf_cmsg),
++ &dmabuf_cmsg);
++ if (err)
+ goto out;
+- }
+
+ atomic_long_inc(&niov->pp_ref_count);
+ tcp_xa_pool.netmems[tcp_xa_pool.idx++] = skb_frag_netmem(frag);
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index bb1fe1ba867ac3..f3e4fc9572196c 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -806,12 +806,6 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+
+ /* In sequence, PAWS is OK. */
+
+- /* TODO: We probably should defer ts_recent change once
+- * we take ownership of @req.
+- */
+- if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
+- WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);
+-
+ if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
+ /* Truncate SYN, it is out of window starting
+ at tcp_rsk(req)->rcv_isn + 1. */
+@@ -860,6 +854,10 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ if (!child)
+ goto listen_overflow;
+
++ if (own_req && tmp_opt.saw_tstamp &&
++ !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
++ tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval;
++
+ if (own_req && rsk_drop_req(req)) {
+ reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
+ inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index b60e13c42bcacd..48fd53b9897265 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -630,8 +630,8 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ }
+ skb_dst_set(skb2, &rt->dst);
+ } else {
+- if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos,
+- skb2->dev) ||
++ if (ip_route_input(skb2, eiph->daddr, eiph->saddr,
++ ip4h_dscp(eiph), skb2->dev) ||
+ skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6)
+ goto out;
+ }
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 0ac4283acdf20c..7c05ac846646f3 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -262,10 +262,18 @@ static int rpl_input(struct sk_buff *skb)
+ {
+ struct dst_entry *orig_dst = skb_dst(skb);
+ struct dst_entry *dst = NULL;
++ struct lwtunnel_state *lwtst;
+ struct rpl_lwt *rlwt;
+ int err;
+
+- rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
++ /* We cannot dereference "orig_dst" once ip6_route_input() or
++ * skb_dst_drop() is called. However, in order to detect a dst loop, we
++ * need the address of its lwtstate. So, save the address of lwtstate
++ * now and use it later as a comparison.
++ */
++ lwtst = orig_dst->lwtstate;
++
++ rlwt = rpl_lwt_lwtunnel(lwtst);
+
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
+@@ -280,7 +288,9 @@ static int rpl_input(struct sk_buff *skb)
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+- if (!dst->error) {
++
++ /* cache only if we don't create a dst reference loop */
++ if (!dst->error && lwtst != dst->lwtstate) {
+ local_bh_disable();
+ dst_cache_set_ip6(&rlwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 33833b2064c072..51583461ae29ba 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -472,10 +472,18 @@ static int seg6_input_core(struct net *net, struct sock *sk,
+ {
+ struct dst_entry *orig_dst = skb_dst(skb);
+ struct dst_entry *dst = NULL;
++ struct lwtunnel_state *lwtst;
+ struct seg6_lwt *slwt;
+ int err;
+
+- slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
++ /* We cannot dereference "orig_dst" once ip6_route_input() or
++ * skb_dst_drop() is called. However, in order to detect a dst loop, we
++ * need the address of its lwtstate. So, save the address of lwtstate
++ * now and use it later as a comparison.
++ */
++ lwtst = orig_dst->lwtstate;
++
++ slwt = seg6_lwt_lwtunnel(lwtst);
+
+ local_bh_disable();
+ dst = dst_cache_get(&slwt->cache);
+@@ -490,7 +498,9 @@ static int seg6_input_core(struct net *net, struct sock *sk,
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+- if (!dst->error) {
++
++ /* cache only if we don't create a dst reference loop */
++ if (!dst->error && lwtst != dst->lwtstate) {
+ local_bh_disable();
+ dst_cache_set_ip6(&slwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
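
This seg6 hunk and the rpl_iptunnel.c hunk above it apply the same fix: once ip6_route_input() installs a new dst, the original dst may be gone, so its lwtstate address is captured beforehand and later used purely as a comparison key, never dereferenced; caching is skipped when the new route points back at the same tunnel state. The shape of the check, reduced to opaque structs (not the kernel types):

    #include <stdbool.h>
    #include <stdio.h>

    struct lwtstate { int id; };
    struct dst { struct lwtstate *lwtstate; int error; };

    /* Cache the freshly looked-up route only if it cannot re-enter the
     * same lightweight tunnel, which would build a dst reference loop. */
    static bool should_cache(const struct lwtstate *saved,
                             const struct dst *new_dst)
    {
        return !new_dst->error && saved != new_dst->lwtstate;
    }

    int main(void)
    {
        struct lwtstate lw = { 1 };
        struct dst looped = { &lw, 0 }, fresh = { 0, 0 };

        printf("loop:  cache=%d\n", should_cache(&lw, &looped));  /* 0 */
        printf("fresh: cache=%d\n", should_cache(&lw, &fresh));   /* 1 */
        return 0;
    }
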
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 8c4f934d198cc6..b4ba2d9f041765 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -1513,11 +1513,6 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ if (mptcp_pm_is_userspace(msk))
+ goto next;
+
+- if (list_empty(&msk->conn_list)) {
+- mptcp_pm_remove_anno_addr(msk, addr, false);
+- goto next;
+- }
+-
+ lock_sock(sk);
+ remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
+ mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 860903e0642255..b56bbee7312c48 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1140,7 +1140,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ if (data_len == 0) {
+ pr_debug("infinite mapping received\n");
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
+- subflow->map_data_len = 0;
+ return MAPPING_INVALID;
+ }
+
+@@ -1284,18 +1283,6 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ss
+ mptcp_schedule_work(sk);
+ }
+
+-static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
+-{
+- struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+-
+- if (subflow->mp_join)
+- return false;
+- else if (READ_ONCE(msk->csum_enabled))
+- return !subflow->valid_csum_seen;
+- else
+- return READ_ONCE(msk->allow_infinite_fallback);
+-}
+-
+ static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+@@ -1391,7 +1378,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ return true;
+ }
+
+- if (!subflow_can_fallback(subflow) && subflow->map_data_len) {
++ if (!READ_ONCE(msk->allow_infinite_fallback)) {
+ /* fatal protocol error, close the socket.
+ * subflow_error_report() will introduce the appropriate barriers
+ */
+diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c
+index 085e7892d31040..b1536da2246b82 100644
+--- a/net/rxrpc/rxperf.c
++++ b/net/rxrpc/rxperf.c
+@@ -478,6 +478,18 @@ static int rxperf_deliver_request(struct rxperf_call *call)
+ call->unmarshal++;
+ fallthrough;
+ case 2:
++ ret = rxperf_extract_data(call, true);
++ if (ret < 0)
++ return ret;
++
++ /* Deal with the terminal magic cookie. */
++ call->iov_len = 4;
++ call->kvec[0].iov_len = call->iov_len;
++ call->kvec[0].iov_base = call->tmp;
++ iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
++ call->unmarshal++;
++ fallthrough;
++ case 3:
+ ret = rxperf_extract_data(call, false);
+ if (ret < 0)
+ return ret;
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 059f6ef1ad1898..7fcb0574fc79e7 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -1669,12 +1669,14 @@ static void remove_cache_proc_entries(struct cache_detail *cd)
+ }
+ }
+
+-#ifdef CONFIG_PROC_FS
+ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
+ {
+ struct proc_dir_entry *p;
+ struct sunrpc_net *sn;
+
++ if (!IS_ENABLED(CONFIG_PROC_FS))
++ return 0;
++
+ sn = net_generic(net, sunrpc_net_id);
+ cd->procfs = proc_mkdir(cd->name, sn->proc_net_rpc);
+ if (cd->procfs == NULL)
+@@ -1702,12 +1704,6 @@ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
+ remove_cache_proc_entries(cd);
+ return -ENOMEM;
+ }
+-#else /* CONFIG_PROC_FS */
+-static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
+-{
+- return 0;
+-}
+-#endif
+
+ void __init cache_initialize(void)
+ {
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index cef623ea150609..9b45fbdc90cabe 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -864,8 +864,6 @@ void rpc_signal_task(struct rpc_task *task)
+ if (!rpc_task_set_rpc_status(task, -ERESTARTSYS))
+ return;
+ trace_rpc_task_signalled(task, task->tk_action);
+- set_bit(RPC_TASK_SIGNALLED, &task->tk_runstate);
+- smp_mb__after_atomic();
+ queue = READ_ONCE(task->tk_waitqueue);
+ if (queue)
+ rpc_wake_up_queued_task(queue, task);
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index b69e6290acfabe..171ad4e2523f13 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2580,7 +2580,15 @@ static void xs_tls_handshake_done(void *data, int status, key_serial_t peerid)
+ struct sock_xprt *lower_transport =
+ container_of(lower_xprt, struct sock_xprt, xprt);
+
+- lower_transport->xprt_err = status ? -EACCES : 0;
++ switch (status) {
++ case 0:
++ case -EACCES:
++ case -ETIMEDOUT:
++ lower_transport->xprt_err = status;
++ break;
++ default:
++ lower_transport->xprt_err = -EACCES;
++ }
+ complete(&lower_transport->handshake_done);
+ xprt_put(lower_xprt);
+ }
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 3c323ca213d42c..abfdb4905ca2ac 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -149,6 +149,9 @@ struct ima_kexec_hdr {
+ #define IMA_CHECK_BLACKLIST 0x40000000
+ #define IMA_VERITY_REQUIRED 0x80000000
+
++/* Exclude non-action flags which are not rule-specific. */
++#define IMA_NONACTION_RULE_FLAGS (IMA_NONACTION_FLAGS & ~IMA_NEW_FILE)
++
+ #define IMA_DO_MASK (IMA_MEASURE | IMA_APPRAISE | IMA_AUDIT | \
+ IMA_HASH | IMA_APPRAISE_SUBMASK)
+ #define IMA_DONE_MASK (IMA_MEASURED | IMA_APPRAISED | IMA_AUDITED | \
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 06132cf47016da..4b213de8dcb40c 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -269,10 +269,13 @@ static int process_measurement(struct file *file, const struct cred *cred,
+ mutex_lock(&iint->mutex);
+
+ if (test_and_clear_bit(IMA_CHANGE_ATTR, &iint->atomic_flags))
+- /* reset appraisal flags if ima_inode_post_setattr was called */
++ /*
++ * Reset appraisal flags (action and non-action rule-specific)
++ * if ima_inode_post_setattr was called.
++ */
+ iint->flags &= ~(IMA_APPRAISE | IMA_APPRAISED |
+ IMA_APPRAISE_SUBMASK | IMA_APPRAISED_SUBMASK |
+- IMA_NONACTION_FLAGS);
++ IMA_NONACTION_RULE_FLAGS);
+
+ /*
+ * Re-evaulate the file if either the xattr has changed or the
+diff --git a/security/landlock/net.c b/security/landlock/net.c
+index d5dcc4407a197b..104b6c01fe503b 100644
+--- a/security/landlock/net.c
++++ b/security/landlock/net.c
+@@ -63,8 +63,7 @@ static int current_check_access_socket(struct socket *const sock,
+ if (WARN_ON_ONCE(dom->num_layers < 1))
+ return -EACCES;
+
+- /* Checks if it's a (potential) TCP socket. */
+- if (sock->type != SOCK_STREAM)
++ if (!sk_is_tcp(sock->sk))
+ return 0;
+
+ /* Checks for minimal header length to safely read sa_family. */
+diff --git a/sound/pci/hda/cs35l56_hda_spi.c b/sound/pci/hda/cs35l56_hda_spi.c
+index 7f02155fe61e3c..7c94110b6272a6 100644
+--- a/sound/pci/hda/cs35l56_hda_spi.c
++++ b/sound/pci/hda/cs35l56_hda_spi.c
+@@ -22,6 +22,9 @@ static int cs35l56_hda_spi_probe(struct spi_device *spi)
+ return -ENOMEM;
+
+ cs35l56->base.dev = &spi->dev;
++ ret = cs35l56_init_config_for_spi(&cs35l56->base, spi);
++ if (ret)
++ return ret;
+
+ #ifdef CS35L56_WAKE_HOLD_TIME_US
+ cs35l56->base.can_hibernate = true;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9bf99fe6cd34dd..4a3b4c6d4114b9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10564,6 +10564,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1460, "Asus VivoBook 15", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1463, "Asus GA402X/GA402N", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604VI/VC/VE/VG/VJ/VQ/VU/VV/VY/VZ", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603VQ/VU/VV/VJ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+@@ -10597,7 +10598,6 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+- SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1a63, "ASUS UX3405MA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index e45e9ae01bc668..195841a567c3d4 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -10,6 +10,7 @@
+ #include <linux/gpio/consumer.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/spi/spi.h>
+ #include <linux/types.h>
+ #include <sound/cs-amp-lib.h>
+
+@@ -303,6 +304,79 @@ void cs35l56_wait_min_reset_pulse(void)
+ }
+ EXPORT_SYMBOL_NS_GPL(cs35l56_wait_min_reset_pulse, SND_SOC_CS35L56_SHARED);
+
++static const struct {
++ u32 addr;
++ u32 value;
++} cs35l56_spi_system_reset_stages[] = {
++ { .addr = CS35L56_DSP_VIRTUAL1_MBOX_1, .value = CS35L56_MBOX_CMD_SYSTEM_RESET },
++ /* The next write is necessary to delimit the soft reset */
++ { .addr = CS35L56_DSP_MBOX_1_RAW, .value = CS35L56_MBOX_CMD_PING },
++};
++
++static void cs35l56_spi_issue_bus_locked_reset(struct cs35l56_base *cs35l56_base,
++ struct spi_device *spi)
++{
++ struct cs35l56_spi_payload *buf = cs35l56_base->spi_payload_buf;
++ struct spi_transfer t = {
++ .tx_buf = buf,
++ .len = sizeof(*buf),
++ };
++ struct spi_message m;
++ int i, ret;
++
++ for (i = 0; i < ARRAY_SIZE(cs35l56_spi_system_reset_stages); i++) {
++ buf->addr = cpu_to_be32(cs35l56_spi_system_reset_stages[i].addr);
++ buf->value = cpu_to_be32(cs35l56_spi_system_reset_stages[i].value);
++ spi_message_init_with_transfers(&m, &t, 1);
++ ret = spi_sync_locked(spi, &m);
++ if (ret)
++ dev_warn(cs35l56_base->dev, "spi_sync failed: %d\n", ret);
++
++ usleep_range(CS35L56_SPI_RESET_TO_PORT_READY_US,
++ 2 * CS35L56_SPI_RESET_TO_PORT_READY_US);
++ }
++}
++
++static void cs35l56_spi_system_reset(struct cs35l56_base *cs35l56_base)
++{
++ struct spi_device *spi = to_spi_device(cs35l56_base->dev);
++ unsigned int val;
++ int read_ret, ret;
++
++ /*
++ * There must not be any other SPI bus activity while the amp is
++ * soft-resetting.
++ */
++ ret = spi_bus_lock(spi->controller);
++ if (ret) {
++ dev_warn(cs35l56_base->dev, "spi_bus_lock failed: %d\n", ret);
++ return;
++ }
++
++ cs35l56_spi_issue_bus_locked_reset(cs35l56_base, spi);
++ spi_bus_unlock(spi->controller);
++
++ /*
++ * Check firmware boot by testing for a response in MBOX_2.
++ * HALO_STATE cannot be trusted yet because the reset sequence
++ * can leave it with stale state. But MBOX is reset.
++ * The regmap must remain in cache-only until the chip has
++ * booted, so use a bypassed read.
++ */
++ ret = read_poll_timeout(regmap_read_bypassed, read_ret,
++ (val > 0) && (val < 0xffffffff),
++ CS35L56_HALO_STATE_POLL_US,
++ CS35L56_HALO_STATE_TIMEOUT_US,
++ false,
++ cs35l56_base->regmap,
++ CS35L56_DSP_VIRTUAL1_MBOX_2,
++ &val);
++ if (ret) {
++ dev_err(cs35l56_base->dev, "SPI reboot timed out(%d): MBOX2=%#x\n",
++ read_ret, val);
++ }
++}
++
+ static const struct reg_sequence cs35l56_system_reset_seq[] = {
+ REG_SEQ0(CS35L56_DSP1_HALO_STATE, 0),
+ REG_SEQ0(CS35L56_DSP_VIRTUAL1_MBOX_1, CS35L56_MBOX_CMD_SYSTEM_RESET),
+@@ -315,6 +389,12 @@ void cs35l56_system_reset(struct cs35l56_base *cs35l56_base, bool is_soundwire)
+ * accesses other than the controlled system reset sequence below.
+ */
+ regcache_cache_only(cs35l56_base->regmap, true);
++
++ if (cs35l56_is_spi(cs35l56_base)) {
++ cs35l56_spi_system_reset(cs35l56_base);
++ return;
++ }
++
+ regmap_multi_reg_write_bypassed(cs35l56_base->regmap,
+ cs35l56_system_reset_seq,
+ ARRAY_SIZE(cs35l56_system_reset_seq));
+diff --git a/sound/soc/codecs/cs35l56-spi.c b/sound/soc/codecs/cs35l56-spi.c
+index b07b798b0b45d6..568f554a8638bf 100644
+--- a/sound/soc/codecs/cs35l56-spi.c
++++ b/sound/soc/codecs/cs35l56-spi.c
+@@ -33,6 +33,9 @@ static int cs35l56_spi_probe(struct spi_device *spi)
+
+ cs35l56->base.dev = &spi->dev;
+ cs35l56->base.can_hibernate = true;
++ ret = cs35l56_init_config_for_spi(&cs35l56->base, spi);
++ if (ret)
++ return ret;
+
+ ret = cs35l56_common_probe(cs35l56);
+ if (ret != 0)
+diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c
+index f3c97da798dc8e..76159c45e6b52e 100644
+--- a/sound/soc/codecs/es8328.c
++++ b/sound/soc/codecs/es8328.c
+@@ -233,7 +233,6 @@ static const struct snd_kcontrol_new es8328_right_line_controls =
+
+ /* Left Mixer */
+ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = {
+- SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL17, 7, 1, 0),
+ SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL17, 6, 1, 0),
+ SOC_DAPM_SINGLE("Right Playback Switch", ES8328_DACCONTROL18, 7, 1, 0),
+ SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL18, 6, 1, 0),
+@@ -243,7 +242,6 @@ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = {
+ static const struct snd_kcontrol_new es8328_right_mixer_controls[] = {
+ SOC_DAPM_SINGLE("Left Playback Switch", ES8328_DACCONTROL19, 7, 1, 0),
+ SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL19, 6, 1, 0),
+- SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL20, 7, 1, 0),
+ SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL20, 6, 1, 0),
+ };
+
+@@ -336,10 +334,10 @@ static const struct snd_soc_dapm_widget es8328_dapm_widgets[] = {
+ SND_SOC_DAPM_DAC("Left DAC", "Left Playback", ES8328_DACPOWER,
+ ES8328_DACPOWER_LDAC_OFF, 1),
+
+- SND_SOC_DAPM_MIXER("Left Mixer", SND_SOC_NOPM, 0, 0,
++ SND_SOC_DAPM_MIXER("Left Mixer", ES8328_DACCONTROL17, 7, 0,
+ &es8328_left_mixer_controls[0],
+ ARRAY_SIZE(es8328_left_mixer_controls)),
+- SND_SOC_DAPM_MIXER("Right Mixer", SND_SOC_NOPM, 0, 0,
++ SND_SOC_DAPM_MIXER("Right Mixer", ES8328_DACCONTROL20, 7, 0,
+ &es8328_right_mixer_controls[0],
+ ARRAY_SIZE(es8328_right_mixer_controls)),
+
+@@ -418,19 +416,14 @@ static const struct snd_soc_dapm_route es8328_dapm_routes[] = {
+ { "Right Line Mux", "PGA", "Right PGA Mux" },
+ { "Right Line Mux", "Differential", "Differential Mux" },
+
+- { "Left Out 1", NULL, "Left DAC" },
+- { "Right Out 1", NULL, "Right DAC" },
+- { "Left Out 2", NULL, "Left DAC" },
+- { "Right Out 2", NULL, "Right DAC" },
+-
+- { "Left Mixer", "Playback Switch", "Left DAC" },
++ { "Left Mixer", NULL, "Left DAC" },
+ { "Left Mixer", "Left Bypass Switch", "Left Line Mux" },
+ { "Left Mixer", "Right Playback Switch", "Right DAC" },
+ { "Left Mixer", "Right Bypass Switch", "Right Line Mux" },
+
+ { "Right Mixer", "Left Playback Switch", "Left DAC" },
+ { "Right Mixer", "Left Bypass Switch", "Left Line Mux" },
+- { "Right Mixer", "Playback Switch", "Right DAC" },
++ { "Right Mixer", NULL, "Right DAC" },
+ { "Right Mixer", "Right Bypass Switch", "Right Line Mux" },
+
+ { "DAC DIG", NULL, "DAC STM" },
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 634168d2bb6e54..c5efbceb06d1fc 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -994,10 +994,10 @@ static struct snd_soc_dai_driver fsl_sai_dai_template[] = {
+ {
+ .name = "sai-tx",
+ .playback = {
+- .stream_name = "CPU-Playback",
++ .stream_name = "SAI-Playback",
+ .channels_min = 1,
+ .channels_max = 32,
+- .rate_min = 8000,
++ .rate_min = 8000,
+ .rate_max = 2822400,
+ .rates = SNDRV_PCM_RATE_KNOT,
+ .formats = FSL_SAI_FORMATS,
+@@ -1007,7 +1007,7 @@ static struct snd_soc_dai_driver fsl_sai_dai_template[] = {
+ {
+ .name = "sai-rx",
+ .capture = {
+- .stream_name = "CPU-Capture",
++ .stream_name = "SAI-Capture",
+ .channels_min = 1,
+ .channels_max = 32,
+ .rate_min = 8000,
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index ff3671226306bd..ca33ecad075218 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -119,8 +119,8 @@ static const struct snd_soc_ops imx_audmix_be_ops = {
+ static const char *name[][3] = {
+ {"HiFi-AUDMIX-FE-0", "HiFi-AUDMIX-FE-1", "HiFi-AUDMIX-FE-2"},
+ {"sai-tx", "sai-tx", "sai-rx"},
+- {"AUDMIX-Playback-0", "AUDMIX-Playback-1", "CPU-Capture"},
+- {"CPU-Playback", "CPU-Playback", "AUDMIX-Capture-0"},
++ {"AUDMIX-Playback-0", "AUDMIX-Playback-1", "SAI-Capture"},
++ {"SAI-Playback", "SAI-Playback", "AUDMIX-Capture-0"},
+ };
+
+ static int imx_audmix_probe(struct platform_device *pdev)
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 737dd00e97b142..779d97d31f170e 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1145,7 +1145,7 @@ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream)
+ {
+ struct usbmidi_out_port *port = substream->runtime->private_data;
+
+- cancel_work_sync(&port->ep->work);
++ flush_work(&port->ep->work);
+ return substream_open(substream, 0, 0);
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index a97efb7b131ea2..09210fb4ac60c1 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1868,6 +1868,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
+ subs->stream_offset_adj = 2;
+ break;
++ case USB_ID(0x2b73, 0x000a): /* Pioneer DJM-900NXS2 */
+ case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */
+ pioneer_djm_set_format_quirk(subs, 0x0082);
+ break;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 8e02db7e83323b..1691aa6e6ce32d 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -639,47 +639,8 @@ static int add_dead_ends(struct objtool_file *file)
+ uint64_t offset;
+
+ /*
+- * Check for manually annotated dead ends.
+- */
+- rsec = find_section_by_name(file->elf, ".rela.discard.unreachable");
+- if (!rsec)
+- goto reachable;
+-
+- for_each_reloc(rsec, reloc) {
+- if (reloc->sym->type == STT_SECTION) {
+- offset = reloc_addend(reloc);
+- } else if (reloc->sym->local_label) {
+- offset = reloc->sym->offset;
+- } else {
+- WARN("unexpected relocation symbol type in %s", rsec->name);
+- return -1;
+- }
+-
+- insn = find_insn(file, reloc->sym->sec, offset);
+- if (insn)
+- insn = prev_insn_same_sec(file, insn);
+- else if (offset == reloc->sym->sec->sh.sh_size) {
+- insn = find_last_insn(file, reloc->sym->sec);
+- if (!insn) {
+- WARN("can't find unreachable insn at %s+0x%" PRIx64,
+- reloc->sym->sec->name, offset);
+- return -1;
+- }
+- } else {
+- WARN("can't find unreachable insn at %s+0x%" PRIx64,
+- reloc->sym->sec->name, offset);
+- return -1;
+- }
+-
+- insn->dead_end = true;
+- }
+-
+-reachable:
+- /*
+- * These manually annotated reachable checks are needed for GCC 4.4,
+- * where the Linux unreachable() macro isn't supported. In that case
+- * GCC doesn't know the "ud2" is fatal, so it generates code as if it's
+- * not a dead end.
++ * UD2 defaults to being a dead-end, allow them to be annotated for
++ * non-fatal, eg WARN.
+ */
+ rsec = find_section_by_name(file->elf, ".rela.discard.reachable");
+ if (!rsec)
+@@ -2628,13 +2589,14 @@ static void mark_rodata(struct objtool_file *file)
+ *
+ * - .rodata: can contain GCC switch tables
+ * - .rodata.<func>: same, if -fdata-sections is being used
+- * - .rodata..c_jump_table: contains C annotated jump tables
++ * - .data.rel.ro.c_jump_table: contains C annotated jump tables
+ *
+ * .rodata.str1.* sections are ignored; they don't contain jump tables.
+ */
+ for_each_sec(file, sec) {
+- if (!strncmp(sec->name, ".rodata", 7) &&
+- !strstr(sec->name, ".str1.")) {
++ if ((!strncmp(sec->name, ".rodata", 7) &&
++ !strstr(sec->name, ".str1.")) ||
++ !strncmp(sec->name, ".data.rel.ro", 12)) {
+ sec->rodata = true;
+ found = true;
+ }
+diff --git a/tools/objtool/include/objtool/special.h b/tools/objtool/include/objtool/special.h
+index 86d4af9c5aa9dc..89ee12b1a13849 100644
+--- a/tools/objtool/include/objtool/special.h
++++ b/tools/objtool/include/objtool/special.h
+@@ -10,7 +10,7 @@
+ #include <objtool/check.h>
+ #include <objtool/elf.h>
+
+-#define C_JUMP_TABLE_SECTION ".rodata..c_jump_table"
++#define C_JUMP_TABLE_SECTION ".data.rel.ro.c_jump_table"
+
+ struct special_alt {
+ struct list_head list;
+diff --git a/tools/testing/selftests/drivers/net/queues.py b/tools/testing/selftests/drivers/net/queues.py
+index 30f29096e27c22..4868b514ae78d8 100755
+--- a/tools/testing/selftests/drivers/net/queues.py
++++ b/tools/testing/selftests/drivers/net/queues.py
+@@ -40,10 +40,9 @@ def addremove_queues(cfg, nl) -> None:
+
+ netnl = EthtoolFamily()
+ channels = netnl.channels_get({'header': {'dev-index': cfg.ifindex}})
+- if channels['combined-count'] == 0:
+- rx_type = 'rx'
+- else:
+- rx_type = 'combined'
++ rx_type = 'rx'
++ if channels.get('combined-count', 0) > 0:
++ rx_type = 'combined'
+
+ expected = curr_queues - 1
+ cmd(f"ethtool -L {cfg.dev['ifname']} {rx_type} {expected}", timeout=10)
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 61056fa074bb2f..40a2def50b837e 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -234,6 +234,7 @@ enforce_ruleset(struct __test_metadata *const _metadata, const int ruleset_fd)
+ struct protocol_variant {
+ int domain;
+ int type;
++ int protocol;
+ };
+
+ struct service_fixture {
+diff --git a/tools/testing/selftests/landlock/config b/tools/testing/selftests/landlock/config
+index 29af19c4e9f981..a8982da4acbdc3 100644
+--- a/tools/testing/selftests/landlock/config
++++ b/tools/testing/selftests/landlock/config
+@@ -3,6 +3,8 @@ CONFIG_CGROUP_SCHED=y
+ CONFIG_INET=y
+ CONFIG_IPV6=y
+ CONFIG_KEYS=y
++CONFIG_MPTCP=y
++CONFIG_MPTCP_IPV6=y
+ CONFIG_NET=y
+ CONFIG_NET_NS=y
+ CONFIG_OVERLAY_FS=y
+diff --git a/tools/testing/selftests/landlock/net_test.c b/tools/testing/selftests/landlock/net_test.c
+index 4e0aeb53b225a5..376079d70d3fc0 100644
+--- a/tools/testing/selftests/landlock/net_test.c
++++ b/tools/testing/selftests/landlock/net_test.c
+@@ -85,18 +85,18 @@ static void setup_loopback(struct __test_metadata *const _metadata)
+ clear_ambient_cap(_metadata, CAP_NET_ADMIN);
+ }
+
++static bool prot_is_tcp(const struct protocol_variant *const prot)
++{
++ return (prot->domain == AF_INET || prot->domain == AF_INET6) &&
++ prot->type == SOCK_STREAM &&
++ (prot->protocol == IPPROTO_TCP || prot->protocol == IPPROTO_IP);
++}
++
+ static bool is_restricted(const struct protocol_variant *const prot,
+ const enum sandbox_type sandbox)
+ {
+- switch (prot->domain) {
+- case AF_INET:
+- case AF_INET6:
+- switch (prot->type) {
+- case SOCK_STREAM:
+- return sandbox == TCP_SANDBOX;
+- }
+- break;
+- }
++ if (sandbox == TCP_SANDBOX)
++ return prot_is_tcp(prot);
+ return false;
+ }
+
+@@ -105,7 +105,7 @@ static int socket_variant(const struct service_fixture *const srv)
+ int ret;
+
+ ret = socket(srv->protocol.domain, srv->protocol.type | SOCK_CLOEXEC,
+- 0);
++ srv->protocol.protocol);
+ if (ret < 0)
+ return -errno;
+ return ret;
+@@ -290,22 +290,59 @@ FIXTURE_TEARDOWN(protocol)
+ }
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp) {
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp1) {
+ /* clang-format on */
+ .sandbox = NO_SANDBOX,
+ .prot = {
+ .domain = AF_INET,
+ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
+ },
+ };
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp) {
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp2) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp1) {
+ /* clang-format on */
+ .sandbox = NO_SANDBOX,
+ .prot = {
+ .domain = AF_INET6,
+ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp2) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_mptcp) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
+ },
+ };
+
+@@ -329,6 +366,17 @@ FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_udp) {
+ },
+ };
+
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_mptcp) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
++ },
++};
++
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_unix_stream) {
+ /* clang-format on */
+@@ -350,22 +398,48 @@ FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_unix_datagram) {
+ };
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp) {
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp1) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp2) {
+ /* clang-format on */
+ .sandbox = TCP_SANDBOX,
+ .prot = {
+ .domain = AF_INET,
+ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp1) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
+ },
+ };
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp) {
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp2) {
+ /* clang-format on */
+ .sandbox = TCP_SANDBOX,
+ .prot = {
+ .domain = AF_INET6,
+ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
+ },
+ };
+
+@@ -389,6 +463,17 @@ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_udp) {
+ },
+ };
+
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_mptcp) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
++ },
++};
++
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_stream) {
+ /* clang-format on */
+@@ -399,6 +484,17 @@ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_stream) {
+ },
+ };
+
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_mptcp) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
++ },
++};
++
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_datagram) {
+ /* clang-format on */
+diff --git a/tools/testing/selftests/rseq/rseq-riscv-bits.h b/tools/testing/selftests/rseq/rseq-riscv-bits.h
+index de31a0143139b7..f02f411d550d18 100644
+--- a/tools/testing/selftests/rseq/rseq-riscv-bits.h
++++ b/tools/testing/selftests/rseq/rseq-riscv-bits.h
+@@ -243,7 +243,7 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset_deref_addv)(intptr_t *ptr, off_t off, i
+ #ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ #endif
+- RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, 3)
++ RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, 3)
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+@@ -251,8 +251,8 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset_deref_addv)(intptr_t *ptr, off_t off, i
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [ptr] "r" (ptr),
+- [off] "er" (off),
+- [inc] "er" (inc)
++ [off] "r" (off),
++ [inc] "r" (inc)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+diff --git a/tools/testing/selftests/rseq/rseq-riscv.h b/tools/testing/selftests/rseq/rseq-riscv.h
+index 37e598d0a365e2..67d544aaa9a3b0 100644
+--- a/tools/testing/selftests/rseq/rseq-riscv.h
++++ b/tools/testing/selftests/rseq/rseq-riscv.h
+@@ -158,7 +158,7 @@ do { \
+ "bnez " RSEQ_ASM_TMP_REG_1 ", 222b\n" \
+ "333:\n"
+
+-#define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, post_commit_label) \
++#define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, post_commit_label) \
+ "mv " RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(ptr) "]\n" \
+ RSEQ_ASM_OP_R_ADD(off) \
+ REG_L RSEQ_ASM_TMP_REG_1 ", 0(" RSEQ_ASM_TMP_REG_1 ")\n" \